1607.06476_arXiv.txt
Quasar-driven outflows naturally account for the missing component of the extragalactic $\gamma$-ray background through neutral pion production in interactions between protons accelerated by the forward outflow shock and interstellar protons. We study the simultaneous neutrino emission by the same protons. We adopt outflow parameters that best fit the extragalactic $\gamma$-ray background data and derive a cumulative neutrino background of $\sim10^{-7}\,\rm GeV\,cm^{-2}\,s^{-1}\,sr^{-1}$ at neutrino energies $E_\nu \gtrsim 10$ TeV, which naturally explains the most recent IceCube data without tuning any free parameters. The link between the $\gamma$-ray and neutrino emission from quasar outflows can be used to constrain the high-energy physics of strong shocks at cosmological distances.
1607.01060_arXiv.txt
Observations of jets in X-ray binaries show a correlation between radio power and black hole spin. This correlation, if confirmed, points towards the idea that relativistic jets may be powered by the rotational energy of black holes. In order to examine this further, we perform general-relativistic radiative transport calculations on magnetically arrested accretion flows, which are known to produce powerful jets via the Blandford-Znajek (BZ) mechanism. We find that the X-ray and $\gamma$-ray emission strongly depend on spin and inclination angle. Surprisingly, the high-energy power does not show the same dependence on spin as the BZ jet power, but instead can be understood as a redshift effect. In particular, photons observed perpendicular to the spin axis suffer little net redshift until originating from close to the horizon. Such observers see deeper into the hot, dense, highly-magnetized inner disk region. This effect is largest for rapidly rotating black holes due to a combination of frame dragging and decreasing horizon radius. While the X-ray emission is dominated by the near horizon region, the near-infrared radiation originates at larger radii. Therefore, the ratio of X-ray to near-infrared power is an observational signature of black hole spin.
\label{sec:intro} It is widely believed that relativistic jets are powered by the rotational energy of black holes. \citet{BZ77} showed that magnetic field lines, anchored in an external accretion disk, are twisted by frame dragging in the vicinity of a rotating black hole. These field lines expand under their own pressure, transporting energy outwards and accelerating any ``frozen-in'' plasma into jets aligned with the spin axis. Recent general-relativistic magnetohydrodynamic (GRMHD) simulations of ``magnetically arrested disks'' \citep[MADs;][]{N+03} showed that this process can operate with efficiencies of $> 100 \%$ \citep{TNM11,MTB12}. That is, more energy flows out of the black hole than flows in, which can only be achieved by extracting rotational energy from the black hole. Using 5 GHz radio emission from X-ray binaries (XRBs) as a proxy for jet power, \citet{NM12} found a correlation between jet power and black hole spin \citep[but see][]{Fender+10,Russell+13}. Their results were consistent with $P_\text{jet} \sim a^2$, which is the scaling derived by \citet{BZ77} for slowly rotating black holes. They also found good agreement with the more accurate scaling $P_\text{jet} \sim \Omega_H^2$ \citep{TNM10,TMN12}, which works up to $a\approx 0.95$. Here, $a$ is the dimensionless spin of the black hole, $\Omega_H = a/(2\,r_H)$ is the angular velocity of the horizon, and $r_H = 1 + \sqrt{1 - a^2}$ is the horizon radius (we work in units where $GM=c=1$; however, we occasionally reintroduce factors of $c$ for clarity). If confirmed, this correlation provides observational evidence that jets are probably powered by the rotational energy of black holes. Although it is well established that jets produce radio emission at large radii \citep[e.g.,][]{Fender10}, the high-energy (X-ray and $\gamma$-ray) radiation could originate much closer to the black hole, and so the contribution of jets to this radiation is less certain.
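As an illustrative aside (not code from this paper), the horizon quantities and the BZ scaling quoted above can be sketched in a few lines of Python; the function names are ours:

```python
import math

def horizon_radius(a):
    # r_H = 1 + sqrt(1 - a^2), in units with GM = c = 1
    return 1.0 + math.sqrt(1.0 - a * a)

def horizon_angular_velocity(a):
    # Omega_H = a / (2 r_H)
    return a / (2.0 * horizon_radius(a))

def bz_power(a, k=1.0):
    # Blandford-Znajek scaling P_jet ~ Omega_H^2; k is an arbitrary normalization
    return k * horizon_angular_velocity(a) ** 2
```

For slowly rotating holes $r_H \approx 2$, so $\Omega_H \approx a/4$ and $P_\text{jet} \propto a^2$, recovering the original BZ scaling.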
It has long been argued that inverse Compton emission from a corona of hot electrons surrounding the inner accretion disk can produce the observed X-ray spectrum in XRBs \citep[e.g.,][]{Titarchuk94,MZ95,Gierlinski+97,Esin+97,Esin+01,Poutanen98, CB+06,Yuan+07,NM08,Niedzwiecki+14,Niedzwiecki+15,QL15}. However, it is also possible that a significant fraction of the X-ray emission originates in jets \citep[e.g.,][]{MiRo94,Markoff+01,Markoff+03,Markoff+05,FKM04,BRP06,Kaiser06, GBD06,Kylafis+08,Maitra+09,PC09,PM12,Markoff+15,O'Riordan+16}. Near the black hole where the jet originates, it is not even straightforward to distinguish what one means by a disk versus a jet, due to the generically low plasma $\beta$ parameter and inflow-outflow regions in both the disk and jet \citep{MG04,McKinney06}. Clearly, there is much uncertainty about the potentially complicated relationship between the high-energy emission, the inner regions of the disk/jet, and the central black hole. In particular, even if jets are powered by the rotational energy of black holes, due to the uncertainties in the source of the high-energy radiation, it is not clear \emph{a priori} how this radiation should depend on spin. To investigate this issue, we take fully three-dimensional GRMHD simulations with different black hole spins. We perform radiative transfer calculations with Comptonization to obtain the spectrum of radiation, focusing on the high-energy radiation that originates in the region near the black hole. We restrict our attention to the low/hard state in XRBs, since it is widely accepted that jets exist during this state \citep[with transient jets launched during state transitions;][]{FBG04}. Interestingly, although we find a strong spin dependence for the high-energy power, this does not follow the Blandford-Znajek (BZ) scaling. Furthermore, the effects of spin are maximal for observers located perpendicular to the spin axis of the black hole.
We show that the high-energy emission originates from very close to the horizon, and the strong spin and viewing angle dependence can be understood as a redshift effect. While the X-ray power strongly depends on spin and observer inclination, the near-infrared (NIR) emission originates at larger radii and so is less sensitive to redshift effects. Therefore, for systems whose inclination angles are known, the ratio of X-ray to NIR power in the low/hard state can potentially be used to estimate spin. Since the black hole spin does not vary between the low/hard and high/soft states, this ratio would complement measurements of spin in the high/soft state \citep[see e.g.,][for a review]{McClintock+11}. In Section~\ref{sec:model} we briefly describe our GRMHD simulations and radiative transport post-processing method. In Section~\ref{sec:results} we show the dependence of radiated power on spin and calculate the effects of redshift. In Section~\ref{sec:discussion} we summarize and discuss our findings.
\label{sec:discussion} In this work, we calculated the effects of spin on high-energy emission from the low/hard state in XRBs. We modelled the low/hard state as a MAD accretion flow, and investigated both prograde and retrograde spins. We found that the X-ray power strongly depends on spin and observer inclination. In particular, the spin dependence is strikingly different from the BZ dependence expected for jet emission. In our models, the X-rays are dominated by the inner disk, and the strong dependence on spin and viewing angle can be understood as a redshift effect. For high spins and inclination angles, observers receive photons from smaller radii and therefore regions of larger synchrotron emissivity. Since the high-energy emission originates close to the horizon, it is more sensitive to spin than the low-energy emission that originates from larger radii. We identified the ratio of the X-ray power to NIR power as an observational signature of spin. This quantity could potentially be used to estimate spin, and would complement measurements of spin based on observations in the high/soft state. While we expect this ratio to be particularly useful in systems with large inclinations, in general, its dependence on quantities such as the viewing angle introduces significant degeneracy. Therefore, by itself, this ratio cannot uniquely determine the black hole spin. However, since the high-energy spectrum in the low/hard state is clearly sensitive to both spin and viewing angle, it may be possible to use more features of the spectrum to constrain these quantities. In particular, following the approach of the continuum-fitting (CF) and Fe line methods \citep[e.g.,][]{McClintock+11}, one could build up models of high-energy spectra for different spins and inclinations and, for a given observational spectrum, find the best $\chi^2$ fit. This new approach could potentially cross-validate existing methods based on fitting observations in the high/soft state.
A disadvantage of this method is that it cannot easily distinguish between prograde and retrograde spins. Both the CF and Fe line methods use the ISCO radius, which is monotonic in spin. The method described here relies on the horizon radius and the effects of redshift, and so is more nearly symmetric in spin. Therefore, more information would be needed to break the degeneracy between retrograde and prograde spins. The dependence of the high-energy power on spin is due to the combination of two main components: the redshift and synchrotron emissivity profiles. Interestingly, the behaviour of the redshift is in fact a very general feature of rotating black holes, and is largely independent of the details of accretion. On the other hand, the emissivity itself is a model-dependent quantity. Our results rely on the fact that the comoving synchrotron power in our MAD models is strongly dominated by the near horizon region. The observed high-energy radiation should therefore be highly variable on timescales of the order of a few light crossing times. Furthermore, we expect the variability timescale for the lower frequency emission to be longer, since this emission originates at larger radii. The spectra shown in Figure~\ref{fig:spectra_spin} are consistent with the basic observed X-ray hardness/flux relations for XRBs in the low/hard state \citep{FBG04}. The time-averaged X-ray hardness ratio \citep[defined to be the ratio of the flux at $6.3$--$10.5$ keV to the flux at $3.8$--$7.3$ keV;][]{FBG04,Belloni+05} varies between 0.7 and 0.9, with higher spins slightly softer than lower spins. The luminosities in the low spin cases are likely somewhat lower than expected for the low/hard state, and are probably more consistent with the so-called quiescent state \citep[e.g.,][]{RM06}. However, this is not a serious issue.
As we show in Appendix~\ref{sec:accretion_rate}, small changes in the accretion rate can significantly increase the total luminosity without greatly affecting the frequency of emission. Therefore, increasing the luminosity would not change our conclusions regarding the scaling in Figure~\ref{fig:rad_power_vs_spin_angles} or the ratio $P_X/P_\text{NIR}$ in Figure~\ref{fig:PX_PNIR}. \citet{Moscibrodzka+09} considered the effects of spin and viewing angle on radiation from non-MAD \citep[called SANE in][]{Narayan+12} accretion flows in the context of Sgr A*. Interestingly, while they found that the X-ray flux increases dramatically with both spin and observer inclination, they attribute this dependence to a different effect than the one described here. In their models, the X-ray emission is produced by scattering from hot electrons at $r=r_\text{ISCO}$, and so the dependence on spin manifests itself in a very similar manner to thin disks \citep[see e.g.,][]{McClintock+11}. In our models, by contrast, most of the observed high-energy radiation originates from just outside the horizon, with the ISCO playing no special role. This can likely be attributed to the fact that the disks considered here are geometrically thicker, and so the density does not drop off significantly inside the ISCO. Therefore, our results are probably more relevant for low-luminosity, radiatively inefficient systems, in which the disk is expected to be geometrically thick. Furthermore, our work improves upon this study in two major areas. Firstly, our simulations are fully 3D, which is required to avoid decaying turbulence and reach a well defined steady state \citep{Cowling33,Sadowski+15}. Axisymmetric simulations cannot reliably capture the effects of spin, since the resulting radiation will be influenced by the extent to which the spin has affected the flow by the time the turbulence decays.
Secondly, in MAD models, the final amount of magnetic flux at the horizon is independent of the initial flux content of the torus, which in SANE models can artificially introduce a spin dependence \citep{TM12, TMN12}. Therefore, MAD models are more reliable for studying the effects of spin on the high-energy radiation. While our calculations apply to MAD accretion flows in the low/hard state, the redshift effects described here might also be important when considering thin MADs in the high/soft state. \citet{avara+15} demonstrated an $80\%$ deviation from the standard Novikov-Thorne radiative efficiency, with most of the radiation coming from at or below the ISCO. As shown here, for rapidly spinning black holes, radiation from small radii is very strongly affected by variations in spin and viewing angle. Therefore, if the radiation from thin MADs originates at smaller radii than expected for standard thin disks, this could have important implications for measurements of spin in the high/soft state. Our analysis was carried out for a black hole mass of $M=10M_\odot$. However, since the relevant length and time scales are set by $M$, we can scale our results to arbitrary masses as follows. Assuming that the accretion rate is a fixed fraction of the Eddington rate $\dot{M} \sim \dot{M}_\text{Edd} \sim M$, from Appendix~\ref{sec:accretion_rate} we find that $n \sim M^{-1}$, $B \sim M^{-1/2}$, and $\Theta \sim M^0$. These relationships can be used to scale the spectral features in Figure~\ref{fig:spectra_spin} to supermassive black holes. Importantly, however, this scaling is only appropriate for systems which are well described by RIAFs. Therefore, our results are potentially relevant for accreting supermassive black hole systems such as Sgr A* and low-luminosity subclasses of AGN such as LINERs and BL Lac objects \citep[see e.g.,][]{YN14}.
Although BL Lacs (and blazars in general) have jets roughly aligned with the observer, at small radii there could be a misalignment between the jet and spin axes (see Section~\ref{subsec:tilt}). Such a misalignment could significantly enhance the high-energy emission from close to the black hole, leading to the intriguing possibility that near-horizon emission is responsible for the short-timescale variability observed in these systems \citep[e.g.,][]{Aharonian+07,Albert+07,Aleksic+11,Ackermann+16}. The current work is somewhat limited by the assumption of a thermal distribution of electrons. The highly-magnetized inner disk region could contain a significant number of non-thermal particles due to acceleration by magnetic reconnection \citep[e.g.,][]{SS14}. However, thermal electrons might dominate the emission from near the horizon, as has been sufficient to explain the low/hard-like states of Sgr A* and M87 \citep{Dexter+12,Broderick+14,BT15}. Furthermore, different prescriptions for treating the electron temperature might reduce the dominance of emission from the inner disk and instead ``light up'' the funnel wall region \citep[e.g.,][]{MF13,Moscibrodzka+14}. These prescriptions usually separate the jet and disk based on $b^2/\rho$ or the plasma $\beta$. In our models, the inner disk is highly magnetized, and so differentiating between the jet and disk based on the magnetization alone would in fact treat the inner disk region in a similar manner to the jet. The treatment of the electron physics in accretion disks and jets remains an active area of research, and we will revisit our results as new models of electron physics become available.
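As a rough consistency check of the hardness ratio quoted in the discussion above (the ratio of the $6.3$--$10.5$ keV flux to the $3.8$--$7.3$ keV flux), one can evaluate it for a toy power-law photon spectrum; the photon index used below is an illustrative assumption, not a fit from our models:

```python
def band_flux(gamma, e_lo, e_hi):
    # Energy flux of a power-law photon spectrum dN/dE ~ E^{-gamma},
    # i.e. the integral of E^{1-gamma} dE over [e_lo, e_hi] keV (valid for gamma != 2).
    p = 2.0 - gamma
    return (e_hi ** p - e_lo ** p) / p

def hardness_ratio(gamma):
    # Flux ratio of the 6.3-10.5 keV band to the 3.8-7.3 keV band.
    return band_flux(gamma, 6.3, 10.5) / band_flux(gamma, 3.8, 7.3)
```

A representative hard-state photon index $\Gamma \approx 1.7$ gives a ratio of about $0.89$, near the upper end of the range quoted above, while softer spectra give lower values.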
1607.08231_arXiv.txt
We investigate the interplay between moduli dynamics and inflation, focusing on the KKLT scenario and cosmological $\alpha$-attractors. General couplings between these sectors can induce a significant backreaction and potentially destroy the inflationary regime; however, we demonstrate that this generically does not happen for $\alpha$-attractors. Depending on the details of the superpotential, the volume modulus can either be stable during the entire inflationary trajectory, or become tachyonic at some point and act as a waterfall field, resulting in a sudden end of inflation. In the latter case there is a universal supersymmetric minimum where the scalars end up, preventing the decompactification scenario. The gravitino mass is independent of the inflationary scale, with no fine-tuning of the parameters. The observational predictions conform to the universal value of attractors, fully compatible with the Planck data, possibly with a capped number of e-folds due to the interplay with the moduli.
Compactifications of string theory generically come with many \emph{moduli}: classically massless scalar fields that parametrize properties of the internal manifold and that give rise to unobserved long-range interactions. While one expects quantum effects to generate masses for these scalar fields, it is difficult to realize this while retaining computational control \cite{DineSeiberg}. In the case of Calabi-Yau compactifications, the moduli parametrize deformations of the manifold's \Kahler form, its complex structure and the string coupling. One may generate a mass for the latter two by turning on fluxes in the internal manifold \cite{GKP}. However, the \Kahler moduli cannot be stabilized in this manner. Instead, Kachru, Kallosh, Linde and Trivedi (KKLT) \cite{KKLT} argued that one can stabilize the \Kahler moduli using non-perturbative corrections while maintaining computational control. The central issue we intend to address in this paper is how the presence of the moduli sector can affect an inflationary regime. Coupling inflation to other moduli generically leads to mutual \emph{backreaction}. On the one hand, the inflationary energy can destabilize the moduli. This was anticipated in \cite{Kallosh2004}, where it was shown that stabilizing the \Kahler modulus in the simplest model of inflation leads to a bound $H < m_{\nicefrac{3}{2}}$ on the inflationary Hubble scale $H$, related to the gravitino mass $m_{\nicefrac{3}{2}}$ in the vacuum after inflation\footnote{In the same paper \cite{Kallosh2004}, it was pointed out that using a specific combination of two exponentials in the superpotential generically improves the decoupling of the two physical scales. This so-called KL model and its coupling to inflation was further explored in \cite{Davis:2008fv,Kallosh:2011qk}.}. Conversely, the dynamics of the volume modulus may induce a backreaction which renders the inflaton scalar potential too steep to support inflation.
The issue of moduli stabilization during inflation was subsequently investigated in an explicit string theory setup in \cite{Kachru:2003}. In this paper, a scalar field $r_1$ parametrizing the separation between an anti-brane and a brane serves as the inflaton. A warped geometry sourced by five-form fluxes generates a naturally flat potential for $r_1$. As described above, fluxes serve to stabilize all moduli except the volume modulus, which is stabilized via a KKLT-like structure. The interplay of $r_1$ and the \Kahler modulus generically yields a large shift in the second slow-roll parameter $\eta$, thus spoiling inflation. More generally, the interplay between moduli stabilization and supersymmetry breaking has been extensively studied in the literature (see e.g. \cite{Dudas:2012wi,Buchmuller:2014vda,Vercnocke}). For quadratic inflation, this topic has been investigated in detail at the supergravity level in \cite{Challenges}, where the super- and \Kahler potentials were sum separable between the moduli and inflaton sectors. In every setup considered in \cite{Challenges}, the naive stability bound $H < m_{\nicefrac{3}{2}}$ was verified, and the \Kahler modulus was destabilized along the inflationary trajectory at large field values of the inflaton. Generating enough e-folds of inflation imposed stringent constraints on the parameter space. The aim of this paper is to study the interplay of KKLT-like moduli stabilization and supergravity $\alpha$-attractor models of inflation. The $\alpha$-attractor models provide an elegant description of the inflationary dynamics with robust predictions \cite{SuperconformalAttractors, Carrasco:2015uma, Carrasco:2015rva} that are in excellent agreement with the latest data on the cosmic microwave background \cite{Planck2015,Ade:2015lrj,Ade:2015tva,Array:2015xqh}.
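For orientation, the universal attractor predictions referred to above take the well-known form (quoted here from the general $\alpha$-attractor literature, e.g.~\cite{SuperconformalAttractors}, as a reference point)
\begin{equation}
n_s \simeq 1 - \frac{2}{N}\,, \qquad r \simeq \frac{12\alpha}{N^2}\,,
\end{equation}
where $N$ is the number of e-folds before the end of inflation; for $N \simeq 60$ this gives $n_s \approx 0.967$, in excellent agreement with the Planck data.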
Moreover, they have been coupled to various other sectors \cite{Kallosh:2015lwa,deSitterLandscape,Carrasco:2015pla,Kallosh:2016ndd,Kallosh:2016gqp,Kallosh:2016sej} (see \cite{Ueno:2016dim,Eshaghi:2016kne} for reheating constraints on this class of models). In this paper, we will investigate their resilience under moduli backreaction. Specifically, we will show that combining these two sectors yields surprising consequences. The \Kahler modulus will turn out to be always stable during inflation; the major effect of the backreaction can instead be beneficial. It generically induces or enhances the attractor inflationary regime, and it produces a supersymmetric vacuum where the scalars can sit at the end of their evolution. Remarkably, this makes it possible to decouple the inflationary and SUSY breaking scales without any fine-tuning. We first review the supergravity descriptions of both $\alpha$-attractors and moduli stabilization in Sec.~\ref{SECtwosectors}, and outline the strategy of our analysis and the main physical features arising from coupling these two moduli sectors. We then proceed to discuss the vacuum structure and inflationary features of different coupling cases. In Sec.~\ref{Sec.Product}, we analyse the product coupling case, while in Sec.~\ref{Sec.General} we show how the latter can be generalized, with surprising physical outcomes. The resulting inflationary dynamics, with concrete examples and predictions, are the topics of Sec.~\ref{Sec.Dynamics}. In Sec.~\ref{Sec.GeneralNilpotent} we show how the corresponding construction simplifies in the presence of a nilpotent superfield. We conclude in Sec.~\ref{SECconclusions} with a summary of our results and future perspectives. \vspace{-0.3cm}
In this work, we have provided strong evidence for a relative immunity of inflationary $\alpha$-attractors to the backreaction of \Kahler moduli, within the KKLT stabilization scenario. Specifically, we have shown that the effect of a \Kahler modulus $T$ is negligible during the expansion period, which is driven by the real component of the superfield $\Phi$. This phenomenon has been observed in all three coupling cases analysed in this paper (i.e.~product coupling, general coupling, and coupling with a nilpotent sGoldstino). This stability is intimately connected to the fact that inflation takes place at the boundary of moduli space. In this limit, the coupling of the two sectors indeed produces a number of interesting features. The original KKLT minimum is raised to positive values thanks to the supersymmetry breaking in the $\Phi$-direction. On the other hand, its stability and supersymmetry ($D_{T}W=0$) features remain unaffected. This so-called {$\alpha$-KKLT minimum} becomes a perfect starting point for inflation. Once we switch to the canonical variable for the inflaton field, this boundary point is indeed stretched into a long dS plateau. Moving away from the boundary, the inflaton always follows the characteristic exponential fall-off, yielding universal cosmological predictions given by Eq.~\eqref{usual-predictions}. This can always be induced by inserting a profile function $f(\Phi)$ (a generic Taylor expansion) into $W$, analogously to what happens in the original $\alpha$-attractor models \cite{SuperconformalAttractors,DoubleAttractors,Kallosh:2015lwa,AlphaScale,deSitterLandscape,Carrasco:2015pla}. More interestingly, we have shown that, in the case of general couplings (analysed in Sec.~\ref{Sec.General}), the exponential deviation from a positive plateau simply becomes a genuine and natural consequence of the mutual backreaction between $T$ and $\Phi$. In the latter case, the observational predictions are universal, given by Eq.
\eqref{PredictionsGeneralCoupling}. Approaching the end of inflation, the interplay between the modulus $T$ and $\Phi$ does become important. It produces a waterfall effect which ends inflation and leaves all the scalars in a phenomenologically suitable vacuum. This vacuum is supersymmetric, in the absence of any uplifting mechanism to de Sitter. Remarkably, this means that there is generically no connection between the gravitino mass in the vacuum and the Hubble scale of inflation. Although some proposals have pointed out how to decouple these physical scales \cite{Kallosh2004}, our results suggest a new approach to solving this problem. The above findings represent a novelty in the landscape of previous studies on the interaction between moduli and inflation. Especially in the case of large-field scenarios, the conclusions have often been negative: the backreaction of the \Kahler modulus destabilized the original inflationary dynamics. In the most optimistic scenario, a certain amount of fine-tuning was required in order to generate the minimum number of e-folds of exponential expansion, albeit with some modification of the original inflationary predictions. In \cite{Challenges} this effect was dubbed ``flattening'', as it generically lowered the value of the tensor-to-scalar ratio with respect to the original $\phi^2$ predictions. The present study appears to be free of such problems: the backreaction of the moduli does not destabilize the inflationary trajectory. Instead, the additional sector induces an inflationary profile in the case of general couplings, and moreover it offers a universal supersymmetric minimum to the scalars after inflation. Moreover, strict bounds relating the gravitino mass to the inflationary scale have always represented a threat to model-builders. A number of aspects deserve further study. On the phenomenological side, these include a detailed investigation of the choice of the inflationary profile.
We have provided a proof of principle that one can either introduce a tailor-made Minkowski minimum along the asymptotic-KKLT branch, or end up in the universal Minkowski minimum along the other branch. It remains to be seen which is generic, and how stable various choices are. On the string theory side, in contrast, it remains a challenge to embed $\alpha$-attractors in a concrete scenario. Despite some approximate realizations in specific contexts (see e.g. \cite{Cicoli:2008gp,Broy:2015zba} for fibred Calabi-Yau geometries), one would like to identify the natural mechanism underlying the attractor nature of these models, once the appropriate inflaton modulus sector has been recognized. In this respect, the present study provides a useful guideline for determining the generic structure which always preserves the asymptotic inflationary plateau at the boundary of moduli space. Whereas the hyperbolic \Kahler geometry of the inflaton again plays a crucial role, the coupling patterns discussed here leave a certain freedom for the superpotential. The general coupling set-up \eqref{eq:GeneralCouplingSuper} seems most promising, as it essentially consists of two copies of no-scale KKLT. It requires no additional inflationary profile and has universal observational predictions \eqref{PredictionsGeneralCoupling}.
1607.01917_arXiv.txt
{The generalized solution for the warp factor of the Randall-Sundrum metric is presented, which is symmetric with respect to both branes and explicitly periodic in the extra variable. Given that the curvature of the 5-dimensional space-time is small, the expected rate of neutrino-induced inclined events at the Surface Detector of the Pierre Auger Observatory is calculated. Both ``downward-going'' (DG) and ``Earth-skimming'' (ES) neutrinos are considered. By comparing the expected event rate with the recent Auger data from the search for neutrino candidates, a lower bound on the fundamental gravity scale $M_5$ is obtained. The ratio of the number of ES air showers to the number of DG showers is estimated as a function of $M_5$.}
\label{intro} Ultra high energy (UHE) cosmic neutrinos play a key role in the determination of the composition of the ultra high energy cosmic rays (UHECRs) and their origin. UHE neutrinos are expected to be produced in astrophysical sources in the decays of charged pions created in the interactions of UHECRs with matter or radiation. They can also be produced via interaction of the UHECRs with the cosmic microwave background during propagation to the Earth (cosmogenic neutrinos). UHE cosmic neutrinos are not deflected by magnetic fields and could point back to their sources. Recently, three neutrinos of energy 1-2 PeV, as well as tens of neutrinos above 10 TeV, were detected with the IceCube experiment \cite{IceCube:2014}. Cosmic neutrinos with energies near 1 EeV are detectable with the Surface Detector (SD) of the Pierre Auger Observatory (PAO) \cite{PAO}. In order to isolate neutrino-induced events at the SD of the PAO, it is necessary to look for deeply penetrating quasi-horizontal (inclined) air showers \cite{Berezinsky:1969}-\cite{Zas:2005}. The PAO can efficiently search for two types of neutrino-induced inclined air showers (see fig.~\ref{nu_events}): \begin{enumerate} \item Downward-going (DG) neutrino-induced showers. They are initiated by neutrinos moving with large zenith angle $\theta$ which interact in the atmosphere close to the SD. At the PAO the search is restricted to showers with $\theta > 60^\circ$ \cite{Auger:2015}. Note that the background from hadronic showers above $10^{17}$ eV is $\mathrm{O}(1)$ in 20 years, and it is negligible above $10^{19}$ eV \cite{Anchordoqui:2010}. \item Earth-skimming (ES) showers induced by upward tau neutrinos. These neutrinos can interact in the Earth's crust, producing tau leptons. The latter are efficiently produced at zenith angles $90^\circ < \theta < 95^\circ$ \cite{Auger:2015}. The tau leptons escape the Earth and decay in the atmosphere close to the SD \cite{Bertou:2002}-\cite{Feng:2002}.
\end{enumerate} \begin{figure}[h] \centering \includegraphics[width=10cm,clip]{nu_events} \caption{Different types of showers induced by DG and ES neutrinos (fig.~1 in \cite{Auger:2011}).} \label{nu_events} \end{figure} Recently, the Auger Collaboration reported on searches for DG neutrinos in the zenith angle bins $60^\circ - 75^\circ$ and $75^\circ - 90^\circ$, as well as for ES neutrinos. The data were collected by the SD of the PAO from 1 January 2004 until 20 June 2013 \cite{Auger:2015}.% \footnote{This search period is equivalent to 6.4 years of a complete Auger SD working continuously \cite{Auger:2015}.} No neutrino candidates were found. Assuming the diffuse flux of UHE neutrinos to be $dN/dE_\nu = k E_\nu^{-2}$ in the energy range $1.0\times10^{17}$ eV - $2.5\times10^{19}$ eV, a stringent limit was obtained: \begin{equation}\label{Auger_bound} k < 6.4 \times 10^{-9} \mathrm{\ GeV \ cm^{-2} \ s^{-1} \ sr^{-1}} \;. \end{equation} This Auger limit is a factor 3.64 below the Waxman-Bahcall bound on neutrino production in optically thin astrophysical sources \cite{WB:2001}: \begin{equation}\label{WM_bound} E_\nu^2 \frac{dN}{dE_\nu} = 2.33 \times 10^{-8} \mathrm{\ GeV \ cm^{-2} \ s^{-1} \ sr^{-1}} \;. \end{equation} In the Standard Model (SM), neutrino-nucleon cross sections are expected to be very small in comparison with hadronic cross sections even at UHEs \cite{Sarkar:2008}. That is why UHE cosmic neutrinos can be regarded as unique probes of new interactions. In the present paper a theory with an extra dimension (ED) is considered as a ``new physics'' theory. We will see that effects coming from the ED can be significant or even dominant in $\nu N$-scattering at UHEs.
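The quoted factor relating the Auger limit to the Waxman-Bahcall bound is simple arithmetic on the two flux normalizations above; a one-line check in Python:

```python
auger_limit = 6.4e-9   # GeV cm^-2 s^-1 sr^-1, Auger diffuse-flux limit
wb_bound = 2.33e-8     # GeV cm^-2 s^-1 sr^-1, Waxman-Bahcall bound

factor = wb_bound / auger_limit  # ~3.64
```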
In the present paper we have studied the neutrino-induced inclined (quasi-horizontal) events at the Surface Detector of the Pierre Auger Observatory in the Randall-Sundrum scenario with an extra dimension and warped metric. We have presented the general solution for the metric \eqref{sigma}, which is symmetric with respect to both branes and explicitly periodic in the extra variable. In the framework of the RS-like model with a small curvature of the 5-dimensional space-time, the exposures of the SD of the PAO for the downward-going and Earth-skimming neutrinos are estimated (fig.~\ref{Auger_exposures}). The lower bound on the fundamental gravity scale $\bar{M}_5$ is obtained \eqref{M5_bound}. The ratio of the number of the ES air showers to the number of the DG showers, $N_{\mathrm{ES}}/N_{\mathrm{DG}}$, is calculated as a function of $\bar{M}_5$ (fig.~\ref{ES_vs_DG}).
1607.01917
1607.05533_arXiv.txt
The prediction of the arrival time of fast coronal mass ejections (CMEs) and their associated shocks is highly desirable in space weather studies. In this paper, we use two shock propagation models, i.e., the Data Guided Shock Time Of Arrival model (DGSTOA) and the Data Guided Shock Propagation Model (DGSPM), to predict the kinematical evolution of interplanetary shocks associated with fast CMEs. DGSTOA is based on the similarity theory of shock waves in the solar wind reference frame, and DGSPM on the non-similarity theory in the stationary reference frame. The inputs are the kinematics of the CME front at the moment of maximum speed, obtained from the geometric triangulation method applied to STEREO imaging observations together with the Harmonic Mean approximation. The outputs provide the subsequent propagation of the associated shock. We apply these models to the CMEs of 2012 January 19, January 23, and March 7. We find that the shock models predict the shock's propagation reasonably well after the impulsive acceleration. The shock's arrival time and local propagation speed at Earth predicted by these models are consistent with in situ measurements of WIND. We also employ the Drag-Based Model (DBM) as a comparison, and find that it predicts a steeper deceleration than the shock models after the rapid deceleration phase. The predictions of DBM at 1 AU agree with the following ICME or sheath structure, not the preceding shock. These results demonstrate the applicability of the shock models used here for future arrival time prediction of interplanetary shocks associated with fast CMEs.
Coronal mass ejections (CMEs) are large-scale eruptions of plasma and magnetic field from the Sun into interplanetary (IP) space. Soon after their discovery in the 1970s, CMEs were regarded as major sources of severe space weather events \citep{Sheeleyetal1985,Gosling1993,Dryer1994}. For example, a geoeffective CME poses a threat to astronauts performing space activity, spacecraft, navigation \& communication systems, airplane passengers at high altitudes, and ground power grids \& pipelines (e.g., \cite{Boteleretal1998,Lanzerotti2005,NRC2008}). Fast CMEs often drive strong IP shocks ahead of them when propagating in the heliosphere, and the shocks have additional space weather effects, such as producing solar energetic particle (SEP) events \citep{Gopalswamy2003,Cliver2009}, compressing the magnetosphere when colliding with the Earth \citep{Greenandbaker2015}, and even causing a ``tsunami'' throughout the whole heliosphere \citep{Intriligatoretal2015}. Therefore, predicting the arrival times of these fast CMEs/shocks at the Earth has become a significant ingredient of space weather forecasting. Various kinds of models for this purpose have been developed during the past decades, such as empirical models, physics-based models, and MHD models. \cite{zhaoanddryer2014} gave an overall review of these models as well as their current applications. Arrival time prediction models usually adopt the observables of CMEs/shocks obtained near the Sun as inputs to predict whether/when they will arrive at the Earth. The in situ observations at L1 spacecraft are then used to verify the prediction results. In other words, the predictions are carried out only at the two ends, i.e., inputs at the Sun and outputs near the Earth. A prediction of the CME/shock's propagation could not be checked by observations beyond 30 solar radii ($R_s$) in the heliosphere during the SOHO era because the field of view (FOV) of SOHO/LASCO is within this distance.
The launch of STEREO in 2006 heralded a new epoch for studies in this area. In contrast to SOHO, the FOV of the imaging telescopes (HI1/HI2) onboard STEREO allows CMEs/shocks to be tracked over much longer distances, even beyond the Earth's orbit. Techniques have been developed to track solar disturbances based on the wide-angle imaging observations of STEREO; e.g., the triangulation technique developed by \cite{Liuetal2010a} has no free parameters and can determine the CME/shock kinematics as a function of distance. This triangulation technique initially assumes a relatively compact CME structure simultaneously seen by the two STEREO spacecraft. \cite{Lugazetal2010} and \cite{Liuetal2010b} later incorporated into the triangulation concept a spherical front attached to the Sun as the geometry of wide CMEs (see detailed discussions on the CME geometry assumptions in the triangulation technique by \cite{Liuetal2010b,Liuetal2016}). The triangulation technique with these two CME geometries has been successfully applied to investigate the propagation of CMEs through, and their interactions with, the inner heliosphere between the Sun and Earth (e.g., \cite{Liuetal2010a,Liuetal2010b,Liuetal2013,Liuetal2016,Lugazetal2010,Daviesetal2013, Mishra2013}). In this paper, we present our improved shock propagation models that utilize the early kinematics of the CME front as input and predict the shock's subsequent propagation in IP space and its arrival time at a given distance from the Sun. In particular, these predictions will be verified not only by the in situ measurements at 1 AU but also by direct imaging of STEREO in IP space. This study attempts to make qualitative comparisons between model predictions and wide-angle imaging observations over long distances in the heliosphere. The investigation will improve our understanding of the kinematics of CMEs/shocks during their outward propagation in IP space.
In this study we employed two models to predict the propagation of shocks associated with fast CMEs in the heliosphere, i.e., DGSTOA and DGSPM. The former is based on the similarity theory of blast waves in the ambient flow (solar wind) reference frame, and the latter on the non-similarity theory of blast waves in the stationary reference frame. The inputs of these models are the kinematical characteristics of the associated CME front when the CME reaches its maximum speed close to the Sun, and these parameters are obtained from continuous tracking of the white-light feature of a CME viewed in coordinated imaging of STEREO by the HM triangulation \citep{Liuetal2013}. We applied these models to three cases, the 2012 January 19 (Case 1), January 23 (Case 2), and March 7 (Case 3) CMEs/shocks, and found that the models can give a reasonable prediction of the propagation of shocks associated with fast CMEs after the initial impulsive acceleration phase. The shock's arrival time and local speed at Earth predicted by the models generally match the in situ observations of WIND well. We also used DBM to predict the arrival time and local speed at Earth, and found that its predictions agree better with the following ICME (Case 1 and Case 3) or sheath structure (Case 2) than with the preceding shock. Table 2 summarizes the mean of the absolute prediction errors for the CME/shock arrival times and their local propagation speeds at Earth predicted by the HM triangulation, DGSPM, DGSTOA, and DBM. The mean prediction error for arrival times ranges from 6.0 to 7.6 hr, and the mean error for propagation speeds ranges from 110 to 180 km/s. The prediction accuracies of the different models are similar. These predictions are reasonably good as far as we know. For example, \cite{zhaoanddryer2014} presented a review of the current status of prediction models for the CME/shock arrival time, and found that their prediction errors are about 10-12 hr at present.
The arrival-time prediction accuracies derived in this study are thus improved by about 4-5 hr. Prediction results of DGSPM and DGSTOA are available earlier than those of DBM because the former have no free input parameters. The HM triangulation technique can provide a description of the whole propagation process of the CME front, including the impulsive acceleration, rapid deceleration, and slow deceleration phases \citep{Liuetal2013}. However, we need to point out that the shock models (DGSPM, DGSTOA) provided by this study may not apply to slow CMEs with initial speeds lower than the ambient solar wind speed. CMEs of this kind usually undergo a gradual acceleration in the beginning, and then propagate outward at a nearly constant speed \citep{Liuetal2016}. One source of the prediction errors of the shock models may be the uncertainties in the input parameters. For example, the maximum speed $V_M$ of the CME front is one of the most important input parameters. This parameter has uncertainties shown as blue error bars in Figure 1, Figure 4, and Figure 6. Specifically, the relative uncertainties of $V_M$ are 22.4\% (305/1362) for Case 1, 11.8\% (182/1542) for Case 2, and 14.0\% (331/2369) for Case 3, respectively. For comparison, the relative errors of the shock arrival time predicted by DGSPM are 13.1\% (8.31/63.63), 30.1\% (10.55/35.0), and 8.9\% (3.06/34.26), and those of the local shock speed at 1 AU are 40.1\% (187/466), 11.1\% (80/719), and 19.8\% (215/1088). The solar wind speed, adopted from 1 AU observations at the CME launch time, also does not represent the real flow speed just upstream of the shock. Given these input uncertainties, the prediction errors are acceptable. The shock speed computed from the upstream and downstream solar wind parameters is more consistent with the values predicted by the shock models than the average speed in the sheath.
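The relative uncertainties quoted above are simple ratios of the stated errors to the stated values; a minimal sketch reproducing them, with all numbers taken directly from the text:

```python
# Sketch reproducing the relative uncertainties quoted above: each is simply
# 100 * |error| / |value|, with all numbers taken from the text.

quoted = {
    "V_M Case 1":         (305.0, 1362.0, 22.4),
    "V_M Case 2":         (182.0, 1542.0, 11.8),
    "V_M Case 3":         (331.0, 2369.0, 14.0),
    "t_arr DGSPM Case 1": (8.31, 63.63, 13.1),
    "v_1AU DGSPM Case 1": (187.0, 466.0, 40.1),
}
for name, (err, val, pct) in quoted.items():
    rel = 100.0 * err / val
    print(f"{name}: {rel:.1f}% (quoted {pct}%)")
```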
The propagation processes of all three cases consist of three phases from the Sun to Earth: an initial impulsive acceleration, a rapid deceleration, and finally a gradual deceleration \citep{Liuetal2013}. The impulsive acceleration is caused by the Lorentz force and ends at a distance of 10-20 $R_s$. Predictions of DGSPM and DGSTOA start when the CME front reaches its maximum speed, but the DBM prediction starts further out, at 50 $R_s$ (set empirically), because the drag coefficient ($\gamma$) needs to be determined by fitting. The rapid deceleration occurs over a very short distance, from 10-20 $R_s$ to about 50 $R_s$. The shock models adopted here cannot reproduce this rapid deceleration (for Case 1 and Case 3; Case 2 is better). An overestimation of the solar wind speed (taken from 1 AU), and therefore an overly large convection of the ambient flow adopted in these models, is one potential reason. Besides this, the energy lost by the shock in accelerating local energetic particles can be another reason \citep{Liuetal2013}, as this energy loss is not taken into account in these single-fluid shock models. For example, \cite{Mewaldtetal2008} estimated that the total energy content of shock-accelerated energetic particles can amount to 10\% of the associated CME's kinetic energy, or even more. In contrast, DBM successfully reproduces this rapid deceleration because we fit the DBM predictions to the HM triangulation results in these regions to determine its free drag coefficient. The rapid deceleration evolves into a gradual deceleration at a distance around 50 $R_s$. Predictions of DGSPM and DGSTOA agree with the kinematics of the CME front tracked by the HM triangulation in this phase. The predicted shock arrival time and local propagation speed at Earth are consistent with the in situ measurements of WIND.
However, DBM predicts a stronger deceleration than the HM triangulation during this gradual deceleration stage, which indicates that a different $\gamma$ value may be needed there. This result implies that different physical mechanisms are responsible for the different deceleration processes, which cannot be described by a single drag-based deceleration. The DBM prediction lags the shock propagation more and more (see Figure 1, Figure 4, and Figure 6). At 1 AU, its prediction agrees with the leading boundary of the ICME (Case 1 and Case 3) or the back boundary of the sheath (Case 2) behind the shock, for both the arrival time and the local propagation speed. This may indicate that the standoff distance between the ICME and its preceding shock increases during their outward propagation. This leads to another point to be clarified: what is the nature of the CME front in the wide-angle imaging observations? As pointed out by \cite{Liuetal2011,Liuetal2013}, the density within the CME is higher than that of the preceding shock and sheath close to the Sun, so the white-light feature in imaging observations in these regions represents the front boundary of the CME main body; far away from the Sun, the density compression of the shock eventually dominates over that of the CME main body owing to the fast expansion of the CME, and the white-light feature gradually shifts to the preceding shock structure as a result. More direct evidence on this point will be investigated in future studies.
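For readers who wish to experiment with the drag-based comparison discussed above, a minimal numerical sketch of the DBM equation of motion, $dv/dt=-\gamma(v-w)|v-w|$, is given below. The 50 $R_s$ starting distance follows the text, but the speeds and drag coefficient are illustrative assumptions, not the fitted values for any of the three events:

```python
# Minimal sketch of the Drag-Based Model (DBM): dv/dt = -gamma*(v-w)*|v-w|.
# The 50 R_s starting distance follows the text; v0, w and gamma below are
# illustrative assumptions, not the fitted values for any of the three events.

R_S = 6.957e5   # solar radius [km]
AU = 1.496e8    # astronomical unit [km]

def dbm_arrival(r0, v0, w, gamma, dt=60.0):
    """Integrate from r0 [km] to 1 AU; returns (transit time [hr], v at 1 AU [km/s])."""
    r, v, t = r0, v0, 0.0
    while r < AU:
        v += -gamma * (v - w) * abs(v - w) * dt   # drag deceleration toward w
        r += v * dt
        t += dt
    return t / 3600.0, v

t_hr, v_1au = dbm_arrival(r0=50 * R_S, v0=1500.0, w=400.0, gamma=2.0e-8)
print(f"transit 50 R_s -> 1 AU: {t_hr:.1f} hr, v(1 AU) = {v_1au:.0f} km/s")
```

With these illustrative values the transit from 50 $R_s$ to 1 AU takes roughly a day and a half, and the shock arrives with a speed intermediate between its initial speed and the ambient wind speed, qualitatively reproducing the gradual-deceleration behaviour described above.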
1607.05533
1607.00585_arXiv.txt
Using high resolution $N$-body simulations, we recently reported that a dynamically cool inner disk embedded in a hotter outer disk can naturally generate a steady double-barred (S2B) structure. Here we study the kinematics of these S2B simulations, and compare them to integral-field observations from \atlas and \sauron. We show that S2B galaxies exhibit several distinct kinematic features, namely: (1) significantly distorted isovelocity contours at the transition region between the two bars, (2) peaks in $\sigma_\mathrm{LOS}$ along the minor axis of inner bars, which we term ``$\sigma$-humps'', often accompanied by ring/spiral-like features of increased $\sigma_\mathrm{LOS}$, (3) $h_3-\bar{v}$ anti-correlations in the region of the inner bar for certain orientations, and (4) rings of positive $h_4$ when viewed at low inclinations. The most impressive of these features are the $\sigma$-humps; these evolve with the inner bar, oscillating in strength just as the inner bar does as it rotates relative to the outer bar. We show that, in cylindrical coordinates, the inner bar has similar streaming motions and velocity dispersion properties to normal large-scale bars, except for $\sigma_z$, which exhibits peaks, i.e., humps, on the minor axis. These $\sigma_z$ humps are responsible for producing the $\sigma$-humps. For three well-resolved early-type S2Bs (\object{NGC 2859}, \object{NGC 2950}, and \object{NGC 3941}) and a potential S2B candidate (\object{NGC 3384}), the S2B model qualitatively matches the integral-field data well, including the ``$\sigma$-hollows'' previously identified. We also discuss the kinematic effect of a nuclear disk in S2Bs.
Optical and infrared observations have shown that $\sim1/3$ of early-type barred galaxies host a misaligned inner bar (also termed ``secondary bar'') \citep{erw_spa_02,lai_etal_02, erw_04}. S2B structures are also seen in later Hubble types, although we still lack systematic statistics because of the stronger dust extinction in their central regions \citep{erw_05}. Numerical simulations are powerful tools for studying the formation and evolution of such multi-bar structures. Previous $N$-body+hydrodynamical simulations suggested that gas dissipation plays a vital role in inducing and maintaining an inner bar \citep[e.g.][]{fri_mar_93, shl_hel_02, eng_shl_04}. However, the observation of galaxies without a large amount of gas \citep{pet_wil_04} indicated that gas might not be the key ingredient in maintaining, or even forming, S2Bs. Increasingly, $N$-body simulations have successfully formed S2Bs without the requirement of gas \citep{rau_sal_99, rau_etal_02, deb_she_07, hel_etal_07, sah_mac_13, du_etal_15}. Nevertheless, the essential initial conditions from which S2Bs form are still unclear. In \citet{du_etal_15}, we explored a large parameter space of the mass, dynamical temperature (Toomre-$Q$), and thickness of the stellar disk in isolated pure-disk 3-D $N$-body simulations. Our simulations suggested that a dynamically cool inner disk can naturally trigger small-scale bar instabilities leading to S2Bs, without the need for gas. This result is also consistent with that of \citet{woz_15}, who successfully formed long-lived S2Bs with $N$-body+hydrodynamical simulations in which a nuclear disk, formed from accumulated gas followed by star formation, plays an important role in generating the inner bar. This scenario is also consistent with the recent observation of \object{NGC 6946}, in which the size of the starburst nuclear molecular disk matches well the size of the inner bar \citep{rom_fat_15}.
Observations \citep{but_cro_93,fri_mar_93, cor_etal_03} suggest that the two bars in an S2B rotate independently, which is also found in numerical simulations \citep[e.g.][]{deb_she_07, she_deb_09, sah_mac_13, woz_15, du_etal_15}. Rather than rotating as rigid bodies, the bars oscillate in amplitude and pattern speed as they rotate through each other \citep{deb_she_07, du_etal_15}, which is consistent with the loop-orbit predictions of \citet{mac_ath_07} \citep[see also][]{mac_spa_97, mac_spa_00, mac_ath_08, mac_sma_10}. Such dynamically decoupled inner bars in S2Bs have been hypothesized to be a mechanism for driving gas past the inner Lindblad resonance (ILR) of outer bars to feed supermassive black holes that power active galactic nuclei \citep{shl_etal_89,shl_etal_90}. Two-dimensional integral-field unit (IFU) spectroscopy provides a very powerful method for studying bars from a kinematic point of view. Several kinematic signatures of bars have been predicted and observed. Many theoretical analyses \citep[e.g.][]{mil_smi_79, vau_dej_97, bur_ath_05} have shown that bars twist the mean velocity ($\bar{v}$) fields because of significant radial streaming motions, thus making the kinematic major axis misaligned with the photometric major axis of the whole disk. For both stars and gas, the kinematic major axis generally turns in the direction opposite to the major axis of the bar. IFU observations of early-type galaxies have shown that barred galaxies are more likely to generate large kinematic misalignments than unbarred galaxies \citep{cap_etal_07, kra_etal_11}. The central elliptical velocity dispersion ($\sigma$) peak should be aligned with the orientation of the large-scale bar \citep{mil_smi_79, vau_dej_97}. Over the extent of the bar, the third-order Gauss-Hermite moment ($h_3$) is correlated with $\bar{v}$ in edge-on views \citep{bur_ath_05}.
In face-on views, a minimum in $h_4$ is present when a boxy/peanut (B/P) bulge exists \citep{deb_etal_05, mendez_etal_14}. We know little about the kinematic properties of S2Bs. The misalignment between the kinematic major axis and the photometric major axis has also been expected to appear in $\bar{v}$ fields of S2Bs \citep{che_fur_78, moi_mus_00}. However, \citet{moi_etal_04} found that the twists of the stellar velocity field due to the inner bar are quite small compared with the twists in the gaseous kinematics, which led them to question the existence of decoupled inner bars. \citet{she_deb_09} showed that twists due to inner bars are smaller than previously expected, so the kinematics of S2Bs can still be consistent with the observations of \citet{moi_etal_04}. \citet{de_etal_08} studied 2-D stellar velocity and velocity dispersion maps of four S2Bs (\object{NGC 2859}, \object{NGC 3941}, \object{NGC 4725}, and \object{NGC 5850}) with the \sauron \ IFU. Surprisingly, the velocity dispersion maps revealed two local minima, which they termed ``$\sigma$-hollows'', located near the ends of the inner bar in each galaxy \citep[see also][]{de_etal_12}. They proposed that the $\sigma$-hollows occur as a result of the contrast between the velocity dispersion of a hotter bulge and that of the inner bar, which is dominated by ordered motions and thus has a low $\sigma$. The S2B model of \citet{she_deb_09} also exhibited a misalignment between the inner bar and the velocity dispersion. Self-consistent numerical models are very powerful tools for understanding the dynamics and kinematics of S2Bs. In \citet{du_etal_15}, we were able to form S2Bs from pure disks; we summarize these results in \refsec{subsection:models}. In this paper, we analyse the kinematics of the S2B model. We introduce the Voronoi binning method used in extracting the kinematics in \refsec{subsection:extkinem}.
In \refsec{section:Atlas3D}, we show that the S2B model qualitatively matches the kinematics of S2Bs in the \atlas \ \citep{cap_etal_11} and \sauron \ \citep{ems_etal_04} surveys well, especially the $\sigma$-humps/hollows. The detailed kinematic analyses of the S2B model are presented in \refsec{section:kinem}. In \refsec{section:discu}, we discuss the kinematic effects of a nuclear disk in S2Bs. Finally, our conclusions are summarized in \refsec{section:conclusion}.
\label{section:conclusion} This study sheds new light on the kinematic properties of double-barred galaxies. Using well-resolved, self-consistent simulations, we have studied the kinematic properties of double-barred galaxies in comparison to single-barred galaxies. By quantifying the LOSVDs with Gauss-Hermite moments, we find that many significant kinematic features are closely associated with the inner bar. The most notable feature is the $\sigma$-humps that appear on the minor axis of inner bars, matching well the integral-field observations of the stellar kinematics from the \atlas and \sauron \ surveys. Accompanied by $\sigma$-ring/spiral-like features, $\sigma$-humps may help to explain the ubiquitous $\sigma$-hollows in S2Bs seen in previous observations. Generally, $\sigma$-humps evolve and oscillate together with the inner bar. Based on the analysis of the intrinsic motions of bars, we show that the inner bar is essentially a scaled-down version of normal large-scale bars from the kinematic point of view. The only difference is the $\sigma_z$-humps appearing on the minor axis of the inner bar. Combined with the $\sigma_\parallel$ enhancements and $\sigma_\perp$ humps produced in normal bars, the $\sigma_z$-humps are the key to generating the observed $\sigma$-humps in S2Bs. The isovelocity contours are significantly distorted. However, in the central regions, the kinematic major axis is only slightly distorted in the direction opposite to the inner bar. The most significant asymmetric twists are present at intermediate radii, in the transition region between the two bars, instead of at the photometric end of the inner bar. Because of the elongated streaming motions in bars, some non-Gaussian features appear. The outer bar exhibits an $h_3-\bar{v}$ correlation, as expected. However, in the central regions, $h_3$ becomes anti-correlated with $\bar{v}$ as a result of the increasing dominance of the inner bar.
The inner bar exhibits significant positive $h_4$ rings in nearly face-on cases, suggesting that the inner bar has a sharply peaked $v_z$ distribution.
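The Gauss-Hermite parametrization of the LOSVD used throughout this analysis can be illustrated with a short sketch based on the standard van der Marel \& Franx (1993) expansion; the parameter values below are arbitrary illustrations, not fitted values from the simulations:

```python
import numpy as np

# Sketch of the van der Marel & Franx (1993) Gauss-Hermite LOSVD that
# defines h3 and h4; the parameter values are arbitrary illustrations.

def losvd(v, V=0.0, sigma=100.0, h3=0.0, h4=0.0):
    """Unnormalised LOSVD: Gaussian times (1 + h3*H3(y) + h4*H4(y))."""
    y = (v - V) / sigma
    H3 = (2.0 * np.sqrt(2.0) * y**3 - 3.0 * np.sqrt(2.0) * y) / np.sqrt(6.0)
    H4 = (4.0 * y**4 - 12.0 * y**2 + 3.0) / np.sqrt(24.0)
    return np.exp(-0.5 * y**2) * (1.0 + h3 * H3 + h4 * H4)

v = np.linspace(-500.0, 500.0, 1001)   # km/s grid
gauss = losvd(v)                 # pure Gaussian (h3 = h4 = 0)
peaked = losvd(v, h4=0.1)        # positive h4: sharper peak, heavier tails
skewed = losvd(v, h3=-0.1)       # negative h3: tail toward low velocities
```

A positive $h_4$ raises the profile at its peak relative to a Gaussian (the sharply peaked $v_z$ distributions noted above), while a nonzero $h_3$ skews the profile, which is the basis of the $h_3-\bar{v}$ (anti-)correlations discussed in the text.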
1607.00585
1607.05643_arXiv.txt
The Sandage-Loeb (SL) test is a promising method for probing dark energy because it measures the redshift drift in the Lyman-$\alpha$ forest spectra of distant quasars, covering the ``redshift desert'' of $2\lesssim z\lesssim5$, which is not covered by existing cosmological observations. Therefore, it could provide an important supplement to current cosmological observations. In this paper, we explore the impact of the SL test on the precision of cosmological constraints for two typical holographic dark energy models, i.e., the original holographic dark energy (HDE) model and the Ricci holographic dark energy (RDE) model. To avoid data inconsistency, we use the best-fit models based on current combined observational data as the fiducial models to simulate 30 mock SL test data points. The results show that the SL test can effectively break the strong degeneracy between the present-day matter density $\Omega_{m0}$ and the Hubble constant $H_0$ that exists in other cosmological observations. For the two typical dark energy models considered, a 30-year observation of the SL test not only improves the constraint precision of $\Omega_{m0}$ and $h$ dramatically, but also enhances the constraint precision of the model parameters $c$ and $\alpha$ significantly.
\label{sec:intro} At the end of the last century, type Ia supernova observations revealed that our universe is undergoing an accelerating expansion \cite{Riess:1998cb,Perlmutter:1998np}. In order to explain this apparently counterintuitive behavior of the universe, a mysterious energy component, dubbed ``dark energy''~(DE), is usually assumed to exist and dominate the evolution of the current universe. However, other than the fact that it is almost uniformly distributed, gravitationally repulsive, and contributes about 70\% of the total energy in the universe, little is actually known about the nature of DE. Nevertheless, cosmologists have proposed numerous DE models in attempts to uncover its mystery. On the other hand, if one wishes to place more comprehensive cosmological constraints on an underlying cosmological model and precisely determine the geometry and matter content of the universe, it is necessary to measure the expansion rate of the universe at different redshifts. Among all the known datasets, cosmic microwave background (CMB) anisotropy measurements probe the expansion rate at redshift $z\sim1100$, while at much lower redshift~($z<2$) the expansion history measurements rely on weak lensing, baryon acoustic oscillations~(BAO), type Ia supernovae~(SN), and so forth. However, up to now, the redshift range between $z\sim2$ and 1100 has remained a blank area for which the existing dark energy probes are unable to provide useful information about the expansion history of our universe. Therefore, the redshift drift data in the ``redshift desert'' of $2\lesssim z\lesssim5$ will provide an important supplement to the current observational data and play a significant role in future parameter constraints.
Redshift drift observation is a purely geometric measurement of the expansion of the universe. It was originally proposed by Sandage in 1962 to directly measure the temporal variation of the redshift of extra-galactic sources \cite{sandage}, and then improved by Loeb in 1998, who suggested the possibility of measuring the redshift drift through decades-long observation of the redshift variation of the Lyman-$\alpha$ absorption lines of distant quasars (QSOs) \cite{loeb}. Thus, the method of redshift drift measurement is also referred to as the ``Sandage-Loeb'' test. The Sandage-Loeb (SL) test is a unique method to directly measure the cosmic expansion rate in the ``redshift desert'' of $2\lesssim z\lesssim5$, which is not covered by any other existing dark energy probe. Combining the SL test data from this high-redshift range with other data from the low-redshift region, such as SN, BAO, and the like, will have a significant impact on dark energy constraints. The scheduled 39-meter European Extremely Large Telescope (E-ELT), equipped with a high-resolution spectrograph called the Cosmic Dynamics Experiment (CODEX), is designed to collect such SL test signals. A great deal of work has been done on the effect of the SL test on cosmological parameter estimation \cite{sl1,sl2,sl3,sl4,sl5,sl6,sl7}, some of which improperly assumed 240 or 150 observed QSOs in the simulations. However, according to an extensive Monte Carlo simulation, using a telescope with a spectrograph like CODEX, only about 30 QSOs will be bright enough and/or lie at a high enough redshift for actual observation. Furthermore, in most existing papers on the SL test, the best-fit $\Lambda$ cold dark matter ($\Lambda$CDM) model is chosen as the fiducial model in simulating the mock SL test data.
In this way, when these simulated SL test data are combined with other data to constrain dynamical dark energy models (or modified gravity models), tension may exist among the combined data, leading to an inappropriate joint constraint. In our previous works \cite{msl1,msl2,msl3,msl4}, we quantified the impact of future redshift-drift measurements on parameter estimation for different dark energy models. In order to correctly quantify the impact of the future SL test data on dark energy constraints, producing simulated SL test data consistent with other actual observations is essential. Here, we have to point out that the SL test data alone cannot tightly constrain dark energy models owing to the lack of low-redshift data. For this reason, the combination of simulated SL test data with currently available actual data covering the low-redshift region is necessary for the constraints on dark energy. On the other hand, when we combine the SL test data with other current observational data, the existing parameter degeneracies in current observations will be broken effectively, and the precision of parameter estimation in the widely studied dark energy models will be improved greatly at the same time \cite{msl1,msl2,msl3,msl4}. In order to eliminate the potential inconsistencies between the current data and the simulated future SL data, we choose the best-fit dark energy models as the fiducial models to produce the simulated future data. Among the existing dark energy models, the holographic dark energy model, a dynamical DE model based on the holographic principle of quantum gravity, is a very competitive candidate. Based on effective quantum field theory, Cohen et al. \cite{Cohen} pointed out that, if gravity is considered, the total energy of a system of size $L$ should not exceed the mass of a black hole of the same size, i.e., $L^{3}\rho_{de}\lesssim LM_{pl}^{2}$.
This energy bound leads to the density of holographic dark energy, \begin{equation}% \rho_{de}=3c^{2} M_{pl}^{2}L^{-2}, \end{equation} where $c$ is a dimensionless parameter characterizing some uncertainties in the effective quantum field theory, $M_{pl}$ is the reduced Planck mass defined by $M_{pl}^2=(8\pi G)^{-1}$, and $L$ is the infrared~(IR) cutoff in the theory. Li \cite{Li} suggested that the IR cutoff $L$ should be given by the future event horizon of the universe. This yields the original holographic dark energy model (see \cite{Wang:2013zca,Cui:2015oda,Guo:2015gpa,Xu:2016grp,Feng:2016djj,Zhang:2015uhk,Wang:2016tsz} for recent constraints). Furthermore, Gao et al. \cite{CJGao} proposed taking the average radius of the Ricci scalar curvature as the IR cutoff, and this model is called the holographic Ricci dark energy model (see also \cite{Zhang:2009un}). For convenience, hereafter we will call them HDE and RDE, respectively. Recently, Geng et al. \cite{msl1,msl2,msl3,msl4} employed the simulated Sandage-Loeb test data to explore many different kinds of dark energy models. In these analyses, two popular and competitive models, namely, the HDE model and the RDE model, were absent. Thus, as a further step along this line, in this paper we provide such an analysis to make the assessment of the constraining power of the future SL test on dark energy more general and complete. The organization of this paper is as follows. In Sec. \ref{sec:model}, we briefly review the holographic dark energy models. In Sec. \ref{sec:cosmol}, we present the observational data used in this work, as well as a basic introduction to the SL test. In Sec. \ref{sec:resul}, we show the results of the cosmological constraints and quantify the improvement in the parameter constraints from the SL test. Conclusions are given in Sec. \ref{sec:conclu}.
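The magnitude of the SL signal targeted by such observations is easy to estimate. A minimal sketch for a fiducial flat $\Lambda$CDM background (illustrative parameter values, not the best-fit HDE/RDE models used in this paper), using $\Delta v = cH_0\Delta t\,[1-E(z)/(1+z)]$ with $E(z)=H(z)/H_0$:

```python
import numpy as np

# Sketch of the Sandage-Loeb signal for a fiducial flat LambdaCDM model
# (illustrative H0 and Omega_m, not the best-fit HDE/RDE models of this paper):
#   dv = c * H0 * dt * [1 - E(z)/(1+z)],  E(z) = H(z)/H0.

C = 2.99792458e8    # speed of light [m/s]
MPC_KM = 3.0857e19  # km per Mpc
YR_S = 3.156e7      # seconds per year

def drift_cm_s(z, dt_yr, H0=70.0, Om=0.3):
    """Velocity drift in cm/s accumulated over dt_yr years at redshift z."""
    E = np.sqrt(Om * (1.0 + z)**3 + 1.0 - Om)
    dv = C * (H0 / MPC_KM) * (dt_yr * YR_S) * (1.0 - E / (1.0 + z))  # m/s
    return 100.0 * dv

for z in (2, 3, 4, 5):
    print(f"z = {z}: dv = {drift_cm_s(z, 30):+.1f} cm/s over 30 yr")
```

The signal is only of order 10 cm/s even after 30 years, and changes sign near $z\sim2$ in this fiducial model, which is why a decades-long campaign with an ultra-stable spectrograph such as CODEX is required.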
\begin{figure} \includegraphics[width=9cm]{hde.pdf} \caption{\label{fig1}Constraints (68.3\% and 95.4\% CL) in the $\Omega_{m0}$-$c$ plane and in the $\Omega_{m0}$-$h$ plane for HDE model with current only and current+SL 30-year data. } \end{figure} \begin{figure} \begin{center} \includegraphics[width=9cm]{rde.pdf} \end{center} \caption{\label{fig2}Constraints (68.3\% and 95.4\% CL) in the $\Omega_{m0}$-$\alpha$ plane and in the $\Omega_{m0}$-$h$ plane for RDE model with current only and current+SL 30-year data. } \end{figure}
\label{sec:resul} We constrain the HDE and RDE models by using the current data (current only) and the combination of current data and the 30-year SL test data (current+SL 30-year). The detailed fit results are given in Table~\ref{tab1}, with the $\pm1\sigma$ errors quoted. Hereafter, ``current'' denotes the current SN+BAO+CMB+$\rm H_0$ data combination for convenience. As can be seen from this table, when the SL 30-year data are combined, almost all the constraint results are improved significantly.
\begin{table*}
\renewcommand{\arraystretch}{1.2}
\begin{tabular*}{7cm}{@{\extracolsep{\fill}}cccccc}
\hline\hline &\multicolumn{2}{c}{current only}&&\multicolumn{2}{c}{current+SL 30-year}\\ \cline{2-3}\cline{5-6} Error & HDE & RDE&& HDE &RDE \\ \hline $\sigma(c)$&$0.0415$&--&&$0.0338$&--\\ $\sigma(\alpha)$&--&0.0153&&--&$0.0101$\\ $\sigma(\Omega_m)$&0.0099&0.0107&&0.0018&0.0030\\ $\sigma(h)$&0.0122&0.0100&&0.0041&0.0068\\ \hline\hline
\end{tabular*}
\caption{\label{tab2}Errors of parameters in the HDE and RDE models for the fits to current only and current+SL 30-year data.}
\end{table*}
\begin{table*}
\renewcommand{\arraystretch}{1.2}
\begin{tabular*}{7cm}{@{\extracolsep{\fill}}cccccc}
\hline\hline &\multicolumn{2}{c}{current only}&&\multicolumn{2}{c}{current+SL 30-year}\\ \cline{2-3}\cline{5-6} Precision & HDE & RDE&& HDE &RDE \\ \hline $\varepsilon(c)$&$6.89\%$&--&&$5.62\%$&--\\ $\varepsilon(\alpha)$&--&$3.94\%$&&--&$2.60\%$\\ $\varepsilon(\Omega_m)$&$3.55\%$&$3.64\%$&&$0.65\%$&$1.02\%$\\ $\varepsilon(h)$&$1.73\%$&$1.28\%$&&$0.58\%$&$0.87\%$\\ \hline\hline
\end{tabular*}
\caption{\label{tab3}Constraint precisions of parameters in the HDE and RDE models for the fits to current only and current+SL 30-year data.}
\end{table*}
To visualize the improvements of the parameter constraints brought by the simulated SL test data, we show the joint constraint results in Figures~\ref{fig1} and~\ref{fig2}.
In Figure~\ref{fig1}, we show the 68.3\% and 95.4\% CL posterior distribution contours in the $\Omega_m$-$c$ plane and the $\Omega_m$-$h$ plane for the holographic dark energy model. The current only and the current+SL 30-year results are presented in red and blue, respectively. In Figure~\ref{fig2}, we present the joint constraints on the Ricci dark energy model (68.3\% and 95.4\% CL) in the $\Omega_m$-$\alpha$ and $\Omega_m$-$h$ planes, with the current only constraint shown in red and the current+SL 30-year constraint in blue. From these figures, we clearly find that the degeneracy directions are evidently changed by adding the SL 30-year data. In order to quantify the improvements, we list in Table~\ref{tab2} the errors of parameters in the HDE and RDE models for the fits to the current data and the current+SL 30-year data. Because the fit results are not exactly normally distributed, we define the error as $\sigma = \sqrt{\frac{\sigma_+^2+\sigma_-^2}{2}}$, where $\sigma_+$ and $\sigma_-$ denote the $1\sigma$ deviations of the upper and lower limits, respectively. From the best-fit value and the error of a parameter, we can calculate its constraint precision. For a parameter $\xi$, we define the constraint precision as $\varepsilon(\xi) = \sigma(\xi)/\xi_{bf}$, in which $\xi_{bf}$ is the best-fit value of $\xi$. We list the constraint precisions of parameters in the HDE and RDE models for the fits to current only and current+SL 30-year data in Table~\ref{tab3}. From Tables~\ref{tab2} and \ref{tab3}, we can clearly see that the precision of the parameters is enhanced evidently when the SL 30-year data are combined. Concretely speaking, for the HDE model, the precisions of $\Omega_m$, $h$, and $c$ are improved from 3.55\%, 1.73\%, and 6.89\% to 0.65\%, 0.58\%, and 5.62\%, respectively.
In the RDE model, the constraint precisions of $\Omega_m$, $h$, and $\alpha$ are improved from 3.64\%, 1.28\%, and 3.94\% to 1.02\%, 0.87\%, and 2.60\%, respectively. The improvements are also fairly remarkable. Therefore, we can conclude that the joint geometric constraints on dark energy models would be improved enormously when a 30-year observation of the SL test is included. From Figures~\ref{fig1} and~\ref{fig2}, for the two holographic dark energy models considered in this work, we find that the SL test can effectively break the existing parameter degeneracies and obviously improve the precision of parameter estimation. The results are consistent with those of previous studies on dark energy models~\cite{msl1,msl2,msl3,msl4}. Hence, we can further confirm that the improvement of the parameter estimation by the SL test data is independent of the background cosmological model, which shows that the involvement of the SL test in future cosmological constraints is expected to be significant and necessary. In this paper, we have used the specific best-fit dark energy model as the fiducial model, instead of the $\Lambda$CDM model, to produce the simulated SL test data. The results have shown that this method is very useful for a clean comparison of the data sets, i.e., for seeing how the SL test breaks the degeneracy (see Figures \ref{fig1} and~\ref{fig2}). For the issue of quantifying the impact of the SL test data on dark energy constraints in future geometric measurements, such as the space-based project WFIRST, we refer the interested reader to Refs. \cite{msl2,msl3}.
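The error and precision definitions used above translate into a few lines of code; this is a small illustrative sketch (not part of the original analysis pipeline), using the HDE $\Omega_m$ numbers quoted in the text:

```python
import math

def symmetrized_error(sigma_plus, sigma_minus):
    """sigma = sqrt((sigma_+^2 + sigma_-^2)/2) for asymmetric 1-sigma limits."""
    return math.sqrt(0.5 * (sigma_plus ** 2 + sigma_minus ** 2))

def precision(sigma, best_fit):
    """epsilon(xi) = sigma(xi) / xi_bf, the relative constraint precision."""
    return sigma / best_fit

def error_reduction(eps_before, eps_after):
    """Fractional reduction of the constraint precision when SL data are added."""
    return 1.0 - eps_after / eps_before

# HDE Omega_m: precision improves from 3.55% (current only)
# to 0.65% (current+SL 30-year), i.e. roughly an 80% tightening.
reduction = error_reduction(0.0355, 0.0065)
```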
\noindent Typical black hole binaries in outburst show spectral states and transitions, characterized by a clear connection between the inflow onto the black hole and outflows from its vicinity. The transient stellar mass black hole binary V404 Cyg apparently does not fit in this picture. Its outbursts are characterized by intense flares and intermittent plateau and low-luminosity states, with a dynamic range in intensity of several orders of magnitude on time-scales of hours. During the 2015 June-July X-ray outburst a joint \swift\ and \inte\ observing campaign captured V404 Cyg in one of these plateau states. The simultaneous \swift/XRT and \inte/JEM-X/ISGRI spectrum is reminiscent of that of obscured/absorbed AGN. It can be modelled as a Comptonization spectrum, heavily absorbed by partial-covering, high-column density material ($N_\textrm{H} \approx 1-3 \times10^{24}\,\textrm{cm}^{-2}$), plus a dominant reprocessed component, including a narrow Iron-K$\alpha$ line. Such a spectral distribution can be produced by a geometrically thick accretion flow able to launch a clumpy outflow, likely responsible for both the high intrinsic absorption and the intense reprocessed emission observed. Similarly to what happens in certain obscured AGN, the low-flux states might not be (solely) related to a decrease in the intrinsic luminosity, but could instead be caused by an almost complete obscuration of the inner accretion flow.
Black hole (BH) X-ray binaries (BHBs) are typically transient systems that alternate between long periods of (X-ray) quiescence and relatively short outbursts. During the outbursts their luminosity increases by several orders of magnitude (from $\sim$10$^{32-34}$ erg/s in quiescence to $\sim$10$^{38-39}$ erg/s or more in outburst), due to an increase in the mass transfer rate to the BH. When active, most BHBs show a ``hysteresis'' behaviour that becomes apparent as cyclic loops in a so-called Hardness-Intensity diagram (HID; see e.g., \citealt{Homan2001}). These cyclic patterns have a clear and repeatable association with mechanical feedback in the form of different kinds of outflows (relativistic jets and winds, see \citealt{Fender2009} and \citealt{Ponti2012}). In a typical BHB different spectral-timing states can be identified with different areas of the q-shaped track visible in the HID. In the \textit{hard state} the X-ray energy spectrum is dominated by strong hard emission, peaking between $\sim$50-150 keV (e.g., \citealt{Sunyaev1979}, \citealt{Joinet2008}, \citealt{Motta2009}). The likely radiative mechanism involved is Compton up-scattering of soft seed photons, either produced in a cool, geometrically thin accretion disk truncated at large radii, or by synchrotron-self-Compton emission from hot electrons located close to the central black hole (e.g., \citealt{Poutanen2014}). In the \textit{soft state}, instead, the spectrum is dominated by thermal emission from a geometrically thin accretion disk that is thought to extend down to or close to the innermost stable circular orbit (\citealt{Bardeen1972}). It is in this state that the peak X-ray luminosity is normally reached. In between these two states are the so-called \textit{intermediate} states, where the energy spectra typically show both the hard Comptonized component and the soft thermal emission from the accretion disk.
In these states the most dramatic changes in the emission - reflecting changes in the accretion flow - can be revealed through the study of the fast-time variability (e.g., \citealt{Belloni2016}). While most BHBs that emit below the Eddington limit fit into this picture, systems accreting at the most extreme rates do not. A typical example is the BHB GRS 1915+105, which has been accreting close to the Eddington limit during most of an ongoing 23-year-long outburst. Another example is the enigmatic high-mass X-ray binary V4641 Sgr (\citealt{Revnivtsev2002}), which in 1999 showed a giant outburst, associated with a super-Eddington accretion phase, followed by a lower accretion rate phase during which its X-ray spectrum closely resembled the spectrum of the well-known BHB Cyg X-1 in the hard state. While GRS 1915+105 displays relatively soft spectra when reaching extreme luminosities, V4641 Sgr did not, showing instead significant reflection and heavy, variable absorption, due to an extended optically thick envelope/outflow ejected by the source itself (\citealt{Revnivtsev2002}, \citealt{Morningstar2014}). When the accretion rate approaches or exceeds the Eddington accretion rate, the time scale needed to radiate all the dissipated energy locally (a key requirement for thin disks) becomes longer than the accretion time scale. Therefore, radiation is trapped and advected inward with the accretion flow, and consequently both the radiative efficiency and the observed luminosity decrease. This configuration is known as a \textit{slim disk} (\citealt{Begelman1979}, \citealt{Abramowicz1988}). The slim disk model has been successfully applied to stellar mass black holes, such as the obscured BHB candidate SS 433 (\citealt{Fabrika2004}), to ultraluminous X-ray sources (\citealt{Watarai2001}), and to supermassive BHs (narrow-line Seyfert galaxies, e.g. \citealt{Mineshige2000}).
Slim disks induced by high accretion rates have recently been associated with heavy obscuration (strong absorption) in a sample of weak emission-line AGN (\citealt{Luo2015}). In those sources, which are likely seen close to edge-on, a geometrically thick accretion flow close to the central supermassive BH is thought to screen the emission from the central part of the system, dramatically reducing the X-ray luminosity. Flared disks are also the most commonly used explanation for obscuration in X-ray binaries seen at high inclinations (see \citealt{White1982a} and, in particular, \citealt{Revnivtsev2002} for the case of V4641 Sgr, \citealt{Fabrika2004} for SS 433 and \citealt{Corral-Santana2013} for Swift J1357.2--0933). In both the AGN and BH X-ray binary populations, a large fraction of faint (obscured), high-inclination sources seems to be missed by current X-ray surveys (e.g., \citealt{Ballantyne2006}, \citealt{Severgnini2011} and \citealt{Corral-Santana2013}). Even considering the entire population of accreting sources as a whole - encompassing stellar mass objects (compact and not), ultraluminous X-ray sources (ULXs; \citealt{Feng2011}) and active galactic nuclei (AGN) - only a small fraction of the known systems seems to be accreting close to Eddington rates, one of them being V404 Cyg (\citealt{Zycki1999}). V404 Cyg is an intermediate- to high-inclination (\citealt{Sanwal1996}), intrinsically luminous, likely often super-Eddington during outbursts (\citealt{Zycki1999}), confirmed BHB (\citealt{Casares1992}): studying this system offers the opportunity to probe a regime where high accretion rates, heavy and non-homogeneous absorption, and reflection are interlaced and all play a key role in the emission from the source. Hence, understanding the physics of V404 Cyg's emission could shed light on the accretion-related processes occurring not only in stellar mass BHs, but also in ULX sources and, most importantly, in AGN.
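The Eddington luminosities referred to throughout can be checked with a one-line formula. The sketch below is purely illustrative (not part of the original analysis); the $\sim$9 $M_\odot$ black hole mass is an assumed value, of the order of the mass reported for V404 Cyg in the literature:

```python
import math

# CGS constants
G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
C = 2.998e10          # speed of light [cm s^-1]
M_P = 1.673e-24       # proton mass [g]
SIGMA_T = 6.652e-25   # Thomson cross-section [cm^2]
M_SUN = 1.989e33      # solar mass [g]

def eddington_luminosity(mass_msun):
    """L_Edd = 4*pi*G*M*m_p*c / sigma_T for ionized hydrogen, in erg/s."""
    return 4.0 * math.pi * G * (mass_msun * M_SUN) * M_P * C / SIGMA_T

# An illustrative ~9 M_sun black hole gives L_Edd ~ 1e39 erg/s, i.e. at the
# top of the 10^{38-39} erg/s outburst range quoted in the text.
l_edd = eddington_luminosity(9.0)
```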
We have analysed unique simultaneous \inte\ and \swift\ observations of the black hole candidate V404 Cyg (GS 2023+338) during the 2015 summer outburst. We observed the source in a rare, long, plateau, reflection-dominated state, where the energy spectrum was stable enough to allow time-averaged spectral analysis. Fits to the source X-ray spectrum in the 0.6--200 keV energy range revealed heavily absorbed, Comptonized emission as well as significant reprocessed emission, dominating at high energies (above $\sim$10\, keV). The measured average high column density ($N_\textrm{H} \approx 1-3 \times 10^{24}\,\textrm{cm}^{-2}$) is likely due to absorption by matter expelled from the central part of the system. The overall X-ray spectrum is consistent with the X-ray emission produced by a thick accretion flow, or \textit{slim disk}, similar to that expected in obscured AGN accreting at high accretion rates (i.e. close to the Eddington rate), where the emission from the very centre of the system is shielded by a geometrically thick accretion flow. We therefore suggest that in some of the low-flux/plateau states detected between large X-ray flares during the 2015 outburst, the spectrum of V404 Cyg is similar to the spectrum of an obscured AGN. Given the analogy and the extreme absorption measured, we argue that occasionally the observed X-ray flux might be very different from the system intrinsic flux, which is almost completely reprocessed before reaching the observer. This may be particularly important when comparing the X-ray and radio fluxes, since the latter is likely always emitted sufficiently far away from the disk mid-plane and therefore never obscured. 
Since accretion should work on the same principles in BHBs and AGN once a suitable scaling in mass is applied, detailed studies of V404 Cyg and of stellar mass black holes with similar characteristics could help shed light on some of the inflow/outflow dynamics at play in certain, still poorly understood, classes of obscured AGN. \bigskip SEM acknowledges the anonymous referee whose useful comments contributed largely to improving this work. SEM acknowledges the University of Oxford and the Violette and Samuel Glasstone Research Fellowship program and ESA for hospitality. SEM also acknowledges Rob Fender, Andy Beardmore and Robert Antonucci for useful discussion. JJEK acknowledges support from the Academy of Finland grant 268740 and the ESA research fellowship programme. SEM and JJEK acknowledge support from the Faculty of the European Space Astronomy Centre (ESAC). EK acknowledges the University of Oxford for hospitality. MG acknowledges SRON, which is supported financially by NWO, the Netherlands Organisation for Scientific Research. This work is based on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Switzerland, Spain), and with the participation of Russia and the USA.
We investigate the influence of matter along the line of sight and in the strong lens vicinity on the properties of quad image configurations and on measurements of the Hubble constant ($H_0$). We use simulations of light propagation in a nonuniform universe model, with the distribution of matter in space based on the data from the Millennium Simulation. For a given strong lens and the haloes in its environment we model the matter distribution along the line of sight many times, using different combinations of precomputed deflection maps representing subsequent layers of matter on the path of the rays. We fit the simulated quad image configurations with time delays using nonsingular isothermal ellipsoids (NSIE) with external shear as lens models, treating the Hubble constant as a free parameter. We obtain a large artificial catalog of lenses with derived values of the Hubble constant, $H^\mathrm{fit}$. The average and median of $H^\mathrm{fit}$ differ from the true value used in the simulations by $\le 0.5~\mathrm{km/s/Mpc}$, which includes the influence of matter along the line of sight and in the lens vicinity, and the uncertainty in the lens parameters, except the slope of the matter distribution, which is fixed. The characteristic uncertainty of $H^\mathrm{fit}$ is $\sim 3~\mathrm{km/s/Mpc}$. Substituting the lens shear parameters with values estimated from the simulations reduces the uncertainty to $\sim 2~\mathrm{km/s/Mpc}$.
The measurement of the Hubble constant based on the cosmic distance ladder has a long history (see e.g. \citealp{freedmad10}, \citealp{riess11} for reviews). The methods based on CMB anisotropy (\citealt{WMAP9}, \citealt{planckXVI}) give a high formal accuracy of the $H_0$ derivation but are not in full agreement with each other. On the other hand, measurements based on gravitational lenses with time delays (\citealp{refsdal64}) have their own attraction, at least as a consistency check, since they are independent of the other methods. In the idealized case of an isolated lens with a known mass distribution profile placed in an (otherwise) uniform universe (\citealp{refsdal64}), the accuracy of the Hubble constant derivation depends only on the accuracy of the time delay measurement and is straightforward. In a more realistic approach one has to take into account other mass concentrations, since strong gravitational lenses are typically observed in complex environments (see e.g. \citealp{Will06}). \citet{oguri07} gives a value of $H_0$ ($68\pm6\pm8~\mathrm{km/s/Mpc}$) based on 16 lensed QSOs. The values for individual systems have a large spread, but the claimed statistical and systematic errors for the sample are of the order of 10\%. Similarly, \citet{paraficz10} find $H_0$ based on 18 systems to be $68^{+6}_{-4}~\mathrm{km/s/Mpc}$, but $76^{+3}_{-3}~\mathrm{km/s/Mpc}$ when they use a sub-sample of 5 elliptical lenses with an extra constraint on their mass profiles, which illustrates the importance of the systematic errors involved. \citet{rathna15} obtain $68\pm6~\mathrm{km/s/Mpc}$ based on 10 systems. The systematic errors are not estimated. \citet{Suy10} obtain the Hubble constant ($H_0=70.6\pm3.1~\mathrm{km/s/Mpc}$) with a single, observationally well-constrained strong lens, B1608+656, using a cosmological model with fixed density parameters. The matter distribution in the lens environment and along the line of sight is modeled in detail based on abundant observations.
\citet{Suy13} use two lenses and WMAP data to constrain cosmological parameters. The Hubble constant is found with 4\% -- 6\% accuracy, depending on the assumptions on the cosmological model. Similarly, \citet{fadely10} employ the first gravitational lens, Q0957+561, to obtain $H_0=79.3^{+6.7}_{-8.5}~\mathrm{km/s/Mpc}$. The influence of the mass distribution in the lens vicinity and along the line of sight on lens models has been investigated by many authors (\citealp{KA88}, \citealp{KKS97}, \citealp{barkana96}, \citealp{ChKK03}, \citealp{WBO04}, \citealp{WBO05}, \citealp{Mom06}, \citealp{Aug07} and \citealp{AN11} to cite a few). \citet{KZ04} investigate the influence of the lens environment on derived model parameters (including $H_0$) using a synthetic group of galaxies. The problem of the environment influence is further investigated by \citet{Suy10}, \citet{Wong11}, \citet{Suy13}, \citet{Collett13}, \citet{McCully14}, and \citet{McCully16}, among others. A mixture of observational, numerical, and statistical methods is used to improve the accuracy of the external shear and convergence estimates. The description of light travel in a nonuniform universe model, which addresses part of the problems arising in connection with the $H_0$ derivation, is still being improved. \citet{McCully14} and \citet{McCully16}, using a multiplane approach, assume that in most layers the deflection can be modeled as shear, but they allow for more than a single layer where the full treatment can be applied. This approach saves computation time by using the shear approximation in the majority of layers, but its algebra is rather complicated. \citet{dAloisio13} develop a formalism based on the \citet{barkana96} approach, effectively improving the {\it single lens plus external shear model} at the cost of introducing some additional parameters.
\citet{Sch14a} and \citet{Sch14b} discuss the mass sheet degeneracy in the multiplane context and investigate the accuracy of cosmological parameters derived from its modeling. In this paper we continue our investigation of the environmental and line of sight effects which influence the action of strong gravitational lenses. Our calculations are based on the results of the Millennium Simulation \citep{Spr05} provided by its online database \citep{ls06}. The main purpose of this study is the evaluation of the influence of such effects on the accuracy of the measurement of the Hubble constant. The matter density distribution obtained from the Millennium Simulation (or any other simulation investigating gravitational instability on scales of a few hundred Mpc) is not sufficient as a basis for studies of strong lensing by galaxies because of its too low resolution \citep{hilbert07}. \citet{hilbert09} use the matter distribution from the Millennium Simulation to investigate weak lensing effects, but their methods are not directly applicable to our purposes. We follow an approach similar in many aspects to the work of \citet{Collett13}. We use the information on the spatial distribution of gravitationally bound haloes provided by \citet{b11} and \citet{b4} and based on the Millennium Simulation. The haloes are characterized by their virial masses, radii, and velocities only. The mass distribution inside the haloes, their ellipticity, and their orientation in space have to be specified (see Sec.~\ref{raydefl}). We use different density profiles for the haloes as compared to \citet{Collett13}. We follow the approach of \citet{JK12} (hereafter Paper I) and \citet{JK14} (hereafter Paper II), changing our simulation methodology. More emphasis is put on the modeling of the matter distribution along the line of sight (hereafter LOS), which is key to assessing the systematic uncertainties in the Hubble constant measurement.
We use the fact that the matter distribution in space is uncorrelated on distances of hundreds of Mpc and model the LOS as a random combination of many uncorrelated weak lenses between the source and the observer. Using a large number of such combinations we get a large number of LOS models, so their influence can be investigated statistically. For the galaxies that are neighbours of strong lenses (called the environment, hereafter ENV) the problem is more difficult, since they are correlated with each other. We consider only one ENV model for each strong lens (based on the distribution of galaxies in its vicinity in the Millennium Simulation). We do not follow \citet{KZ04}, who get different environments by switching the roles of the main lens and its neighbours. Thus the number of different ENV models is equal to the number of strong lenses considered (1920 in the reported investigation). Some measure of the ENV effects can also be obtained by comparing the results of simulations including both LOS and ENV with the results based on the inclusion of the LOS only. We also use the model with the main lens in a uniform universe model (hereafter UNI) for comparison. Our investigation concentrates on the influence of the matter along the line of sight and of the strong lens environment on the fitted values of the Hubble constant. The mass profile of the lens (another major source of errors in $H_0$ modeling) is not investigated here. The lenses are chosen at random from a set of sufficiently massive Millennium haloes. On average their environments are poor as compared, e.g., to the set of six quad lenses investigated by \citet{Wong11}. This suggests that the results of our approach may be applicable to the samples of less extreme lenses which may be found by ongoing large sky surveys. In Sec.~\ref{sec:model} we describe our approaches to light propagation. Sec.~\ref{sec:quad} presents the tools used to compare different models and the results of such a comparison.
Sec.~\ref{sec:fits} is devoted to the main problem of measuring Hubble constant based on several lenses with measured time delays. Discussion and conclusions follow in Sec.~\ref{sec:concl}. \section[]{Model of the light propagation} \label{sec:model} \subsection{Ray deflections and time delays} \label{raydefl} We follow the methods of Papers I and II, using the multiplane approach to gravitational lensing (e.g. \citealp{SW88}; \citealp{SS92}) employing the results of the Millennium Simulation \citep{Spr05} and the non-singular isothermal ellipsoids (NSIE) as models for individual haloes (\citealp{KSB94}; \citealp{K06}). In our approach the matter distribution is described as a {\it background} component represented by matter density given on a low resolution $256^3$ grid plus gravitationally bound haloes given by \citet{b11} and \citet{b4}. The Millennium Simulation uses periodic boundary conditions, so calculation of the gravitational acceleration based on known matter density distribution and Fourier transform is straightforward in 3D. The component of the acceleration perpendicular to the rays (with GR correction factor) can be used to calculate the deflections and time delays due to the background. We use nonsingular isothermal ellipsoids (NSIE) to model matter distribution in all haloes. The NSIE model as described by \citet{KSB94} gives the deflection and lens potential in analytical form, but corresponds to infinite mass. In 2D real notation one has (\citealp{K06}): \begin{eqnarray} \alpha_x(x,y,\alpha_0,q,r_0)&=& \frac{\alpha_0}{q^\prime}~\mathrm{arctan} \left(\frac{q^\prime~x}{\omega+r_0}\right)\\ \alpha_y(x,y,\alpha_0,q,r_0)&=& \frac{\alpha_0}{q^\prime}~\mathrm{artanh} \left(\frac{q^\prime~y}{\omega+q^2r_0}\right)~\mathrm{, where}\\ \omega(x,y,q,r_0)&=&\sqrt{q^2(x^2+r_0^2)+y^2}~~~~q^\prime=\sqrt{1-q^2} \end{eqnarray} The ray crosses the lens plane at $(x,y)$, the lens center is placed at the origin of the coordinate system, the major axis along $x$. 
The axis ratio is given by $q$, $r_0$ is the core radius, and $\alpha_0$ is the approximate asymptotic value of the deflection angle. Each projected halo is represented as a difference between two NSIE distributions with the same characteristic deflection angles $\alpha_0$, axis ratios $q$, and position angles, but different values of the core radii $r_1 \ll r_2$, which makes its mass finite:
\begin{eqnarray}
\bm{\alpha}&=& \bm{\alpha}(x,y,\alpha_0,q,r_1)-\bm{\alpha}(x,y,\alpha_0,q,r_2)\\ \lim_{r\rightarrow\infty}\bm{\alpha}&=& \alpha_0(r_2-r_1)\frac{\bm{r}}{r^2}~~~~~\Leftrightarrow\\ M&=&\frac{c^2}{4G}\alpha_0(r_2-r_1) \label{virialmass}
\end{eqnarray}
(compare Paper I). The above formula gives the value of the characteristic deflection $\alpha_0$ for a halo of given mass and virial radius $r_\mathrm{vir}\approx r_2$. (We use $r_1 =0.001 r_2$, which guarantees the smoothness of the formulae at $r=0$ and has little impact on the whole lens.) The axis ratios $q$ and position angles are not given by cosmological simulations. For $q$ we assume a probability distribution within the range $0.5 \le q \le 1$ with a maximum at $q=0.7$, loosely resembling the results of \citet{KY07}. The position angles in the sky are random. Since the {\it background} contains the whole mass, including the mass of the haloes, the latter must be ``void corrected'' by some negative density distribution. We use discs with the negative density approaching zero at the outer radius:
\begin{equation}
\Sigma(r)= -\frac{3M}{\pi r_\mathrm{d}^2}\left(1-\frac{r}{r_\mathrm{d}}\right)
\end{equation}
where $\Sigma$ is the surface mass density, $M$ is the mass of the halo, and $r_\mathrm{d}$ is the radius of the negative density disc, defined by $M=\frac{4}{3}\pi\rho r_\mathrm{d}^3$, where $\rho$ is the mean density in the Universe at the epoch of interest. (In Papers I and II we were using constant surface density discs, but the present approach avoids discontinuities at the edge).
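The NSIE formulae above translate directly into code. The sketch below is a purely illustrative transcription of the quoted expressions (not the authors' implementation), evaluating the deflection of a single NSIE and of a finite-mass halo built as the difference of two NSIEs with core radii $r_1 \ll r_2$:

```python
import math

def nsie_deflection(x, y, alpha0, q, r0):
    """Deflection of a non-singular isothermal ellipsoid (NSIE); lens centre
    at the origin, major axis along x; valid for axis ratios 0 < q < 1."""
    qp = math.sqrt(1.0 - q * q)                         # q' = sqrt(1 - q^2)
    omega = math.sqrt(q * q * (x * x + r0 * r0) + y * y)
    ax = (alpha0 / qp) * math.atan(qp * x / (omega + r0))
    ay = (alpha0 / qp) * math.atanh(qp * y / (omega + q * q * r0))
    return ax, ay

def halo_deflection(x, y, alpha0, q, r1, r2):
    """Finite-mass halo: the difference of two NSIEs with the same alpha0, q
    and position angle but core radii r1 << r2.  Asymptotically
    |alpha| -> alpha0*(r2 - r1)/r, matching M = c^2/(4G)*alpha0*(r2 - r1)."""
    ax1, ay1 = nsie_deflection(x, y, alpha0, q, r1)
    ax2, ay2 = nsie_deflection(x, y, alpha0, q, r2)
    return ax1 - ax2, ay1 - ay2
```

A halo with a different position angle is handled, as usual, by rotating $(x, y)$ into the lens frame before the call and rotating the deflection back afterwards.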
A void corrected halo does not deflect rays outside its $r_\mathrm{d}$ radius, so only a finite number of haloes has to be included in the calculations. The {\it snapshots} of the Millennium Simulation, giving the matter distribution in space, correspond to a discrete set of redshifts $\{z_i\}$. We follow this arrangement, placing our deflection planes at the same redshifts. Each deflection plane represents the influence of matter in a layer perpendicular to the rays, with the thickness given by the distance traveled by photons between consecutive planes. For each layer we construct a {\it deflection map} giving the two components of the deflection angle on a grid covering the region of interest. Similarly, we construct a {\it time delay map} representing the influence of the gravitational potential of the layer. The light travels a few hundred $\mathrm{Mpc}$ through each layer, so we do not expect any correlations between the distributions of matter belonging to different layers along a ray. Moreover, since we randomly shift and rotate the Millennium cubes belonging to different epochs before cutting out the layers (\citealp{carbone08}), to avoid the consequences of the periodic boundary conditions, such correlations are excluded in our approach anyway. Thus choosing a random path of rays through space means choosing one deflection map and one time delay map for every layer. Since the choice of the locations of the ray paths in different layers is independent, we can apply the {\it prismatic transformation} \citep{gfs88} in every layer without losing generality. (The deflection in one layer influences the positions of a ray in all subsequent layers. As long as the matter distributions in different layers are not correlated, this fact has no consequences.)
We transform our maps in such a way that the deflection angle at the middle point of any map vanishes:
\begin{equation}
\bm{\alpha}_i(\bm{\beta}_i) = \bm{\alpha}_i^\prime(\bm{\beta}_i) -\bm{\alpha}_{i0}^\prime ~~~~~\Delta t(\bm{\beta}_i)= \Delta t^\prime(\bm{\beta}_i)-d_i\bm{\alpha}_{i0}^\prime\cdot\bm{\beta}_i
\end{equation}
where the variables before the transformation are denoted with primes, and $\bm{\alpha}_{i0}^\prime$ is the original deflection at the central point. The subscript $i$ enumerates the layers, $\bm{\beta}_i$ gives the position in the $i$-th layer, and $d_i$ is the comoving distance to the layer. After the transformation the central ray (going through the middle points of all the maps) is not deflected at all and may be used as an axis in a rectangular coordinate system. The propagation of light beams corresponding to different sets of deflection maps can be compared when using such a coordinate system. \subsection{Deflection maps} We use light beams of $6^{\prime\prime}\times 6^{\prime\prime}$ cross-section at the observer's position. The deflection maps cover a slightly larger solid angle of $10^{\prime\prime}\times 10^{\prime\prime}$ to allow for the beam deformations. The beams are wide enough to accommodate a typical image configuration resulting from a galaxy scale strong lens. We use two kinds of maps, for weak and strong lensing separately. Weak lensing maps represent deflections and time delays for a ray bundle traveling in a random direction and starting at a random location in a given layer. By chance, the deflection on a map constructed in such a way may not be weak, reflecting the possibility of finding another strong lens along the line of sight. This has an impact on the results, but we do not reject such maps a priori. Strong lensing maps represent the deflections and time delays by a strong lens and its neighbours.
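The prismatic transformation amounts to subtracting a constant deflection from each map and tilting the corresponding time-delay map. A minimal sketch, written for this text with illustrative array shapes (not the authors' map machinery):

```python
import numpy as np

def prismatic_transform(alpha_map, delay_map, beta_x, beta_y, d_i):
    """Subtract the deflection at the map centre from a deflection map and
    correct the time-delay map accordingly, so that the central ray of the
    bundle is undeflected.

    alpha_map : (2, N, N) array, x and y deflection components
    delay_map : (N, N) array of time delays (in units where c*dt is a length)
    beta_x, beta_y : (N, N) grids of angular positions in the layer
    d_i : comoving distance to the layer
    """
    n = alpha_map.shape[-1]
    a0 = alpha_map[:, n // 2, n // 2].copy()      # deflection at map centre
    new_alpha = alpha_map - a0.reshape(2, 1, 1)
    new_delay = delay_map - d_i * (a0[0] * beta_x + a0[1] * beta_y)
    return new_alpha, new_delay
```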
The lenses with measured time delays investigated by \cite{oguri07}, \cite{paraficz10}, and \cite{rathna15} span the redshift range $0.26 \le z_\mathrm{L} \le 0.89$. The corresponding sources belong to the redshift range $0.65 \le z_\mathrm{S}\le 3.60$. We place our lenses on ten adjacent Millennium layers with a similar redshift range ($0.32 - 0.83$). Each strong lens is found by looking for an appropriate halo close to a randomly chosen point inside the simulation cube, which, treated as a singular isothermal sphere, would give an Einstein radius between $0.5^{\prime\prime}$ and $1.5^{\prime\prime}$ for a source at $z_\mathrm{S} \approx 2$. Finally we use a randomly directed beam of rays passing through it and map the deflections and time delays caused by the halo and its environment. On a separate map we store the deflections and time delays due to the halo alone. We construct 16 strong lensing maps for each of the ten Millennium layers at $0.32\le z_i \le 0.83$, which is the assumed range of the lens redshifts. Similarly we calculate a sample of weak lensing maps covering the redshifts $0 \le z_i \le 2.62$, which corresponds to the possible range between the observer and the source. There are 64 weak lensing maps in every layer. All maps use grids of the size $512\times 512$. The choice of the size and number of maps results from memory capacity considerations; we are able to store such an atlas of strong and weak lensing maps in the RAM. In order to increase the number of simulated cases, we repeat the whole process 12 times, every time creating a new atlas of independently calculated maps. Thus there are $16\times 10\times 12=1920$ strong lensing maps, each representing a different halo with its surroundings. In principle there is no problem in using maps belonging to different atlases, but it would be technically less efficient.
\subsection{Simulations of light propagation} The multiplane approach describes the path of a ray as (e.g.\ \citealt{SW88}): \begin{equation} \bm{\beta}_N = \bm{\beta}_1 - \sum_{i=1}^{N-1}~\frac{d_{iN}}{d_{N}}~\bm{\alpha}_i(\bm{\beta}_i) \label{multiplane} \end{equation} where $\bm{\beta}_N$ is the position of the ray in the $N$-th layer and $d_{ij}$ is the angular diameter distance as measured by an observer at epoch $i$ to a source at epoch $j$. We also use the subscripts $O$, $L$, and $S$ for the observer, lens, and source planes respectively, and $d_i\equiv d_{Oi}$. $\bm{\alpha}_i(\bm{\beta}_i)$ is the deflection angle in the $i$-th layer at the position $\bm{\beta}_i$. Since we consider sources at different redshifts we do not use {\it scaled} deflection angles in the above and further equations, as is customary in the multiplane approach \citep{SEF92}. In a flat cosmological model the angular diameter distances in the lens equation can be replaced by comoving distances, which we shall denote $D_{ij}$ with the same subscript convention. In the calculations we apply the more efficient recurrence formula of \citet{SS92}, equivalent to the above equation. Knowing the light path, one can calculate the {\it geometric} part of the relative time delay $\Delta t_\mathrm{geom}$ (as compared with the propagation time along a null geodesic in a uniform universe model) using the formula \citep{SEF92}: \begin{equation} c\Delta t_N^\mathrm{geom}(\bm{\beta}_1) = \frac{1}{2}~\sum_{i=1}^{N-1}(1+z_i)~\frac{d_{i+1}}{d_id_{i,i+1}} \left(d_i(\bm{\beta}_{i+1}-\bm{\beta}_i)\right)^2 \label{geomdelay} \end{equation} where we consider a ray coming to the observer from the direction $\bm{\beta}_1$ (which defines its earlier path, so all $\bm{\beta}_i$ are known) and the factors $1+z_i$ represent time dilation.
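A direct (if less efficient than the \citet{SS92} recurrence) implementation of Eq.~\ref{multiplane} with comoving distances can be sketched as follows; the function and variable names are hypothetical:

```python
import numpy as np

def shoot_ray(beta1, deflectors, D):
    """Backward ray shooting through the deflection planes.

    beta1      : (2,) apparent direction of the ray on the observer's sky
    deflectors : deflectors[i] is a callable returning the deflection
                 alpha_i at a given position in plane i
    D          : D[i] = comoving distance to plane i; the last entry is the
                 source plane.  In a flat model the distance ratio d_{iN}/d_N
                 reduces to (D_N - D_i)/D_N.
    Returns the list of ray positions beta_i, ending in the source plane.
    """
    betas = [np.asarray(beta1, dtype=float)]
    alphas = []
    for j in range(1, len(D)):
        # deflection in the previous plane, evaluated where the ray crossed it
        alphas.append(np.asarray(deflectors[j - 1](betas[j - 1]), dtype=float))
        beta_j = betas[0] - sum(
            ((D[j] - D[i]) / D[j]) * alphas[i] for i in range(j))
        betas.append(beta_j)
    return betas
```

For a single lens plane this reduces to the familiar $\bm{\beta}_S = \bm{\beta}_1 - (D_{LS}/D_S)\,\bm{\alpha}(\bm{\beta}_1)$.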
The deflection in each layer can be calculated as the gradient of the deflection potential, which is also a measure of the {\it gravitational} time delay $\Delta t_\mathrm{grav}$ \citep{SEF92}: \begin{equation} \bm{\alpha}_i = -\frac{1}{d_i}\frac{\partial\Psi_i}{\partial\bm{\beta}_i} ~~~~~~c\Delta t_\mathrm{grav}(\bm{\beta}_i) = \Psi_i(\bm{\beta}_i) + C \end{equation} The potential is defined up to a constant $C$. The cumulative {\it gravitational} time delay after crossing all the layers is: \begin{equation} c\Delta t_N^\mathrm{grav}(\bm{\beta}_1)= \sum_{i=1}^{N-1}~(1+z_i)\Psi_i(\bm{\beta}_i) \end{equation} where again $\bm{\beta}_1$ defines the path and we account for time dilation. Finally: \begin{equation} \Delta t_N(\bm{\beta}_1) = \Delta t_N^\mathrm{geom}(\bm{\beta}_1) +\Delta t_N^\mathrm{grav}(\bm{\beta}_1) \end{equation} The above expression contains an unknown additive constant. Only the difference in calculated time delays between two rays has a clear physical meaning. After choosing a set of the deflection maps (see below) we model the backward propagation of a bundle of rays using Eq.~\ref{multiplane} for each ray. The result of the ray shooting is a vector array: \begin{equation} \bm{\beta}_N^{kl}=\bm{\beta}_N(\bm{\beta}_1^{kl}) \end{equation} where $\bm{\beta}_N^{kl}$ gives the positions in the source plane of rays apparently coming from the directions $\bm{\beta}_1^{kl}$ on the observer's sky. The superscripts $k$, $l$ enumerate the rays. Similarly the time delays are stored in an array $\Delta t_N^{kl}$. Our simulations proceed as follows. We use all our strong lenses from the ten redshift layers within the $0.32\le z_i\le 0.83$ range, a total of $16 \times 10 \times 12=1920$ maps representing main lenses with their surroundings, which cause the ENV effects. For each lens we draw a source redshift belonging to $1.23 \le z_\mathrm{S} \le 2.62$, which covers most of the source redshift range mentioned above.
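The two delay contributions can be accumulated along a shot ray as sketched below (illustrative only; the distance conventions follow the equations above, with $c\Delta t$ returned in the same length units as the distances):

```python
import numpy as np

def geometric_delay(betas, z, d, d_pair):
    """Geometric part of the relative time delay.

    betas  : ray positions beta_i in consecutive planes (from ray shooting)
    z      : z[i] = redshift of plane i
    d      : d[i] = angular diameter distance observer -> plane i
    d_pair : d_pair[i] = angular diameter distance plane i -> plane i+1
    """
    c_dt = 0.0
    for i in range(len(betas) - 1):
        db = np.asarray(betas[i + 1], float) - np.asarray(betas[i], float)
        disp = d[i] * db                      # transverse displacement
        c_dt += 0.5 * (1.0 + z[i]) * d[i + 1] / (d[i] * d_pair[i]) \
                * np.dot(disp, disp)
    return c_dt

def gravitational_delay(betas, z, psi):
    """Cumulative gravitational delay: c*dt = sum_i (1+z_i) Psi_i(beta_i).
    psi[i] is the deflection potential of layer i, defined up to a constant,
    so only differences between two rays are physically meaningful."""
    return sum((1.0 + z[i]) * psi[i](betas[i]) for i in range(len(psi)))
```

The total delay is simply the sum of the two terms, and the unknown additive constant cancels when two images of the same source are compared.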
Next we choose a random combination of weak lensing maps in the layers between the observer and the main lens and between the main lens and the source, which represent the LOS effects. After that we backward shoot a beam of $512\times 512$ rays covering the $6^{\prime\prime}\times 6^{\prime\prime}$ solid angle. We check that all rays remain within the maps for all layers. We store the positions of the rays in the source plane. This is the type of simulation, dubbed LOS+ENV, which is the most realistic. We repeat every such calculation 64 times, using different combinations of weak lensing maps, but keeping the strong lens, its environment, and the source redshift unchanged. For comparison we also perform very similar simulations using the same choice of weak lensing maps, but replacing the main lens with its surroundings by the same lens in isolation. Such an approach is dubbed LOS. For each lens we use the same 64 weak lens combinations as in the LOS+ENV approach. In the ENV simulations we use main lenses with their surroundings, but remove the weak lensing maps, thus neglecting the LOS effect. The lenses are physically correlated with their neighbours, so there is no room for making random combinations with other surroundings, and the number of cases considered is much smaller compared with the approaches including LOS. Finally, using an isolated lens in a uniform universe model (UNI), we get another case for comparison. The methods described above investigate the results of strong lensing with perturbations from LOS and/or ENV. For the interpretation of the results an independent estimate of the perturbing effects is needed. To find the external shear and convergence acting on ray bundles we use the weak lensing approximation.
For each of our weak lensing maps we calculate the derivative of the deflection angle $\bm{\alpha}$ with respect to the ray position $\bm{\beta}$ using finite differences on a scale of $\sim 1~\mathrm{arcsec}$, similar to a typical separation between strong lens images: \begin{equation} \mathsf{\Gamma}_i^\prime\equiv \left|\left|\frac{\Delta\bm{\alpha}_i}{\Delta\bm{\beta}_i}\right|\right| \equiv \left|\left|\begin{array}{cc} \kappa^\prime+\gamma_1^\prime & \gamma_2^\prime \\ \gamma_2^\prime & \kappa^\prime-\gamma_1^\prime \end{array} \right|\right| \label{weak_one} \end{equation} where $\kappa^\prime$, $\gamma_1^\prime$, and $\gamma_2^\prime$ give the convergence and shear components, defined up to a scaling factor depending on the observer--lens--source distances. Using the same method we also calculate the weak lensing effect of the strong lens environment, applying the above formula to the deflections caused by the neighbours, but neglecting the main lens itself. \begin{figure} \includegraphics[width=84mm]{fig1.eps} \caption{Probability distribution of the external convergence caused by LOS and/or ENV calculated in the weak lensing approximation (Eq.~\ref{weak_all}) for all cases resulting in successful fits of image configurations (solid lines). For comparison with \citet{Suy13} Fig.~6 the distributions weighted by the likelihood of $\gamma^\mathrm{ext}=0.089\pm0.006$ are shown with dotted lines. The average values and standard deviations are given above the respective plots. } \label{kappa} \end{figure} In the case of quad lenses, the source is close to the optical axis.
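Eq.~\ref{weak_one} amounts to a finite-difference Jacobian of the deflection map; a minimal sketch, assuming the two deflection components are stored on a regular grid (names and layout are illustrative):

```python
import numpy as np

def weak_lensing_terms(alpha_x, alpha_y, step):
    """Convergence and shear from a layer's deflection map.

    alpha_x, alpha_y : (N, N) deflection components on a regular grid
    step             : grid spacing in beta (here ~1 arcsec, a typical
                       image separation of a strong lens)
    """
    # finite-difference Jacobian d(alpha)/d(beta); x varies along axis 1
    a11 = np.gradient(alpha_x, step, axis=1)
    a12 = np.gradient(alpha_x, step, axis=0)
    a21 = np.gradient(alpha_y, step, axis=1)
    a22 = np.gradient(alpha_y, step, axis=0)
    kappa = 0.5 * (a11 + a22)
    gamma1 = 0.5 * (a11 - a22)
    gamma2 = 0.5 * (a12 + a21)   # symmetrised off-diagonal term
    return kappa, gamma1, gamma2
```

For a purely linear deflection field the recovered $\kappa^\prime$, $\gamma_1^\prime$, $\gamma_2^\prime$ are exact, including at the map edges.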
Using comoving distances in a flat Universe one can approximate a ray belonging to a quad image as: \begin{eqnarray} \bm{\beta}_i&=& \bm{\beta}_1~~~~~~~~~~~~~~~ \left|\left|\frac{\partial\bm{\beta}_i}{\partial\bm{\beta}_1}\right|\right| =\mathsf{I} ~~~~~~~~~~~~i\le n_{L}\\ \bm{\beta}_i&=& \frac{D_L D_{iS}}{D_i D_{LS}}~\bm{\beta}_1~~~~ \left|\left|\frac{\partial\bm{\beta}_i}{\partial\bm{\beta}_1}\right|\right|= \frac{D_L D_{iS}}{D_i D_{LS}}~\mathsf{I} ~~~~n_\mathrm{L}<i<n_\mathrm{S} \label{approx0} \end{eqnarray} where $\bm{\beta}_i$ is the ray position in the $i$-th plane ($\bm{\beta}_S=0$) and $\mathsf{I}$ is the identity matrix. Differentiating our Eq.~\ref{multiplane} and substituting $||\partial\bm{\beta}_i/\partial\bm{\beta}_1||$ from Eq.~\ref{approx0}, we get (compare \citealt{McCully16}, who use a different notation): \begin{eqnarray} \mathsf{\Gamma}&\equiv & \mathsf{I} - \left|\left|\frac{\partial\bm{\beta}_S}{\partial\bm{\beta}_1}\right|\right| = \left|\left|\begin{array}{cc} \kappa^\mathrm{ext}+\gamma_1^\mathrm{ext} & \gamma_2^\mathrm{ext} \\ \gamma_2^\mathrm{ext} & \kappa^\mathrm{ext}-\gamma_1^\mathrm{ext} \end{array} \right|\right|\\ &=&\sum_{i=1}^{n_L}\frac{D_{iS}}{D_{S}}\mathsf{\Gamma}_i^\prime +\sum_{i=n_L+1}^{n_S-1} \frac{D_L}{D_i}\frac{D_{iS}}{D_{LS}}\frac{D_{iS}}{D_{OS}} \mathsf{\Gamma}_i^\prime \label{weak_all} \end{eqnarray} The weak lensing matrix is calculated in the linear approximation, with the strong lens influence on the ray paths taken in the zeroth approximation. In the LOS+ENV case all layers $0<i<n_\mathrm{S}$ are included in Eq.~\ref{weak_all}. In the LOS approach the strong lens plane is omitted ($\mathsf{\Gamma}_i^\prime=0$ for $i=n_\mathrm{L}$) and in the ENV case all other planes are omitted ($\mathsf{\Gamma}_i^\prime=0$ for $i\ne n_\mathrm{L}$). The results for the convergence are shown in Fig.~\ref{kappa}. The distributions of the shear are shown in Fig.~\ref{gamma}, where they are compared to the results of model fitting.
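The weighted combination in Eq.~\ref{weak_all} can be written as a short loop over layers (a sketch with an assumed data layout; the LOS, ENV, and LOS+ENV variants differ only in which per-layer matrices are set to zero before the call):

```python
import numpy as np

def combined_gamma(gamma_layers, D, n_L):
    """Combine per-layer weak-lensing matrices into the effective matrix.

    gamma_layers : gamma_layers[i-1] is the 2x2 matrix Gamma'_i of layer i
                   (i = 1 .. n_S - 1)
    D            : D[i] = comoving distance to plane i, with D[0] = 0 at the
                   observer and D[-1] = D_S at the source
    n_L          : index of the strong-lens plane
    """
    n_S = len(D) - 1
    D_S, D_L = D[n_S], D[n_L]
    D_LS = D_S - D_L                  # flat model: D_LS = D_S - D_L
    total = np.zeros((2, 2))
    for i in range(1, n_S):
        D_iS = D_S - D[i]
        if i <= n_L:                  # observer-to-lens layers (lens plane incl.)
            w = D_iS / D_S
        else:                         # lens-to-source layers
            w = (D_L / D[i]) * (D_iS / D_LS) * (D_iS / D_S)
        total += w * np.asarray(gamma_layers[i - 1], float)
    # kappa_ext = (G00+G11)/2, gamma1 = (G00-G11)/2, gamma2 = G01
    return total
```

Note the down-weighting of layers behind the lens, which is why a strong lens along the beam reduces the importance of more distant perturbers.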
The external convergence distribution in Fig.~\ref{kappa}, plotted with a solid line for the LOS case, can be compared with Fig.~3 of \citet{Collett13}. Our results give $\kappa^\mathrm{ext}$ closer to zero. While they use a fixed source redshift $z_\mathrm{S}=1.4$, we consider a range of redshifts ($1.23\le z_\mathrm{S} \le 2.62$), which should make our convergences higher. On the other hand they ignore the influence of strong lensing, while all our beams contain a strong lens at $0.32\le z_\mathrm{L} \le 0.83$, which does not directly contribute to the convergence, but makes the contribution of layers farther away less important (Eq.~\ref{weak_all}). For comparison with Fig.~6 of \citet{Suy13} we also show the conditional probability distribution of the external convergence weighted by the likelihood of $\gamma^\mathrm{ext}=0.089\pm0.006$. The fraction of cases with such a large shear value (corresponding to overdense lines of sight) is small among our simulated models, so the distributions have to be plotted with broader bins. A visual comparison of our plot in the LOS+ENV case with their Fig.~6 suggests a rough similarity of the conditional probability distributions of $\kappa^\mathrm{ext}$. Our $\kappa^\mathrm{ext}$ plots are also qualitatively similar to the results of \citet{smith14} shown in their Fig.~5.
\label{sec:concl} In this paper we have followed the methods of Papers I and II in simulating multiple image configurations of strong gravitational lenses based on the results of the Millennium Simulation (\citealt{Spr05}, \citealt{ls06}). As compared to our previous papers we have changed the method of main lens selection. We now look for strong haloes in randomly chosen subregions of the Millennium Simulation volume instead of looking for strong lenses inside a simulated wide, low resolution beam of rays. After finding an appropriate candidate, we map the matter distribution in its vicinity, obtaining a model of the strong lens with its environment. When looking for lenses we do not place any extra requirements on their possible environments. (Thus we may be missing some important property of the observed lenses -- compare \citealp{Will06}). We use the mass density profile of the lens from the NSIE model and treat it as fixed in our calculations. The parameters of the lens may vary but it remains an isothermal ellipsoid. This guarantees analytical formulae for the deflection angle and gravitational time delay in our models, but also removes an important source of uncertainty, the lens mass profile slope degeneracy (compare \citealp{Suy12}). Thus our estimates of the accuracy of the Hubble constant measurements do not include the uncertainty of the mass profile slope. We do not include external convergence as a parameter in our models. We have improved our description of matter along the line of sight by investigating a large number of cases. In the multiplane approach we use different combinations of weak deflection maps between the observer and the strong lens and between the lens and the source for a fixed strong lensing map and source position. Thus the influence of different lines of sight on a given source--lens--observer configuration can be examined statistically based on ray tracing.
We do not attempt to obtain or employ the relations between the shear, convergence and galaxy over-density near the lens as \citet{Suy13}, \citet{Suy10} do for individual lenses. A comparison of the typical shear and convergence values used in our simulations (Fig.~\ref{kappa}, Fig.~\ref{gamma}) with the values employed in the modeling of real quad lenses (\citealp{Wong11}, \citealp{Suy10}, \citealp{Suy13}) suggests that our results may have no direct application to the extreme cases analyzed there. According to our study the shear and convergence along random lines of sight are rather small. On the other hand, a higher than average density of matter along any line of sight makes strong lensing more likely to be observed and may serve as a selection effect not included in our study. According to our simulations the effects of matter along the line of sight and in the strong lens vicinity are on average weak and do not produce a substantial bias in the expected value of the Hubble constant, which remains within $\sim 0.5~\mathrm{km/s/Mpc}$ of the true value. The uncertainty of a measurement based on a single lens is $\sigma\approx 3~\mathrm{km/s/Mpc}$, with LOS and ENV contributing roughly $2~\mathrm{km/s/Mpc}$ each. The distribution is not Gaussian and the probability of obtaining a value with a $>3~\sigma$ error from a single lens is $\sim 3$ per cent. Using a sample of a dozen lenses gives an average with a $\sim 0.7~\mathrm{km/s/Mpc}$ uncertainty. The external shear can be found by fitting a model to the image configuration or by estimating it from the simulation of light propagation (Eq.~\ref{weak_all}). Since both methods produce similar results (Fig.~\ref{gamma}), we perform a numerical experiment substituting the estimated values as model parameters. We also supplement our model with the estimated value of the convergence.
As a result we get new distributions of the fitted Hubble constant, which are much more symmetric and narrower than in the approach without restrictions on the model parameters. The average values and medians of $H^\mathrm{fit}$ are now within $\le 0.2~\mathrm{km/s/Mpc}$ of the true value, with an uncertainty of $\sim 2~\mathrm{km/s/Mpc}$ -- compare Table~1. This suggests that on average the uncertainty of the Hubble constant measurement resulting from the unknown shear and convergence values is $\sim 2~\mathrm{km/s/Mpc}$. It also shows that estimates of the shear and convergence values independent of lens modeling may substantially improve the accuracy of $H_0$ measurements, a fact well known to the strong lensing community. The results of Sec.~\ref{sec:quad} give a quantitative estimate of the LOS and ENV effects on the time delay distance measurements. We find that using a pair of images and assuming the lens model to be fixed we get a relative 1-$\sigma$ error of $\sim 0.07$ in the results (Fig.~2). Distances based on fits to the image configurations and the derived Hubble constant values (Fig.~\ref{aahub}) have an accuracy of $\sim 4$ per cent. Thus our time delay distance estimates remain tentative. We have simulated a measurement of the Hubble constant based on a small sample of observed time delay lenses, drawing the objects from our large set of fitted lens models. Each sample includes image configurations belonging to different lenses. Drawing the samples a large number of times and using a method of outlier elimination, we get distributions of sample-averaged Hubble constant measurements (Figs.~\ref{hhhub}, \ref{sssig}). Our experiment shows that in 68\% of cases the result lies within $\sim 0.7~\mathrm{km/s/Mpc}$, or $\sim 1$\%, of the true value (used in the light propagation simulations), which should be treated as the contribution of the line of sight and lens environment alone to the error in $H_0$.
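The sample-averaging experiment with outlier elimination can be illustrated by a simple iterative sigma clipping; the exact rejection criterion used in the paper is not specified here, so this is only an illustrative stand-in:

```python
import numpy as np

def clipped_mean(h0_values, nsigma=3.0, max_iter=10):
    """Sample average of single-lens H0 estimates with iterative
    outlier rejection (hypothetical stand-in for the paper's scheme).

    Returns the clipped mean and the standard error of that mean.
    """
    vals = np.asarray(h0_values, dtype=float)
    for _ in range(max_iter):
        m, s = vals.mean(), vals.std()
        keep = np.abs(vals - m) <= nsigma * s
        if keep.all():
            break
        vals = vals[keep]
    return vals.mean(), vals.std() / np.sqrt(len(vals))
```

With a dozen lenses and a per-lens scatter of $\sigma\approx 3$~km/s/Mpc, the $\sigma/\sqrt{N}$ scaling plus the removal of rare $>3\,\sigma$ outliers is broadly consistent with the quoted $\sim 0.7$~km/s/Mpc sample uncertainty.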
Arp194 is a system of recently collided galaxies, in which the southern galaxy (S) passed through the gaseous disc of the northern galaxy (N), which in turn consists of two close components. This system is of special interest due to the presence of regions of active star formation in the bridge between the galaxies, the brightest of which (region A) has a size of at least 4 kpc. We obtained three spectral slices of the system for different slit positions at the 6-m telescope of SAO RAS. We estimated the radial distribution of the line-of-sight velocity and velocity dispersion as well as the intensities of emission lines and the oxygen abundance $12+\log(\mathrm{O/H})$. The gas in the bridge is only partially mixed chemically and spatially: we observe an O/H gradient with galactocentric distance both from the S and N galaxies and a high dispersion of O/H in the outskirts of the N-galaxy. The velocity dispersion of the emission-line gas is lowest in the star-forming sites of the bridge and exceeds 50-70 km/s in the disturbed region of the N-galaxy. Based on the SDSS photometric data and our kinematic profiles we measured the masses of the stellar population and the dynamical masses of individual objects. We confirm that region A is a gravitationally bound tidal dwarf with an age of $10^7-10^8$ yr, which is falling onto the parent S-galaxy. There is no evidence of a significant amount of dark matter in this dwarf galaxy.
Arp 194 = VV126 is a tightly interacting system containing two main galaxies with active star formation -- a southern (S) and a northern (N) one, separated by $\sim 30$ kpc (in projection). The N-galaxy, in turn, consists of two apparently merging galaxies (the Na component, possessing distinct spiral arms, and the less noticeable and redder component Nb, projected onto the eastern spiral arm of the Na-galaxy). The peripheral structure of the galaxies, especially of the northern ones, is perturbed by the recent close approach or direct collision of the N and S galaxies. The best high-resolution images of this system may be found in the HST archive data.\footnote{The color image may be found at the HST web-page: http://hubblesite.org/gallery/album/galaxy/interacting/pr2009018a} Arp194 is of special interest due to the presence of a chain of star-formation regions between the galaxies, which seldom occurs in interacting systems. Indeed, if galaxies retained their integrity after the close passage, the extended regions containing young stars are usually observed either in long tidal tails or inside the galactic discs, but not in the bridge between galaxies. Among the Arp galaxies, extended islands of star formation between closely spaced systems are noticeable only in Arp269, Arp270 and, probably, in Arp59, where the interaction involves several galaxies. The location of individual objects and their designations adopted in the current paper, as well as the positions of the slit, are shown in Fig. \ref{fig1} superimposed on the HST composite colour image. \begin{figure*} \includegraphics[width=11.5cm]{arp194_reg2.eps} \caption{The HST composite colour image in BVI bands with overplotted positions of the slits and regions of star-formation.} \label{fig1} \end{figure*} The mean systemic velocity of the components of Arp194 is close to 10500 $\textrm{km~s$^{-1}$}$; the difference between the central velocities of the S and N galaxies does not exceed 30--40 $\textrm{km~s$^{-1}$}$.
According to \cite{BushouseStanford1992} their stellar magnitudes in the K band are 12.7 (S) and 12.1 (N), which correspond to luminosities of $3.6\cdot10^{10}$ (S) and $6.3\cdot10^{10}$ (N) in solar units, adopting the distance 144 Mpc ($H_0=73\ \mathrm{ km\ s^{-1}\ Mpc^{-1}}$). Hence their NIR luminosities, and consequently their stellar masses, are comparable to those of the Galaxy. The first spectral observations of the system were conducted at the 6-m telescope BTA at the dawn of its operation, at the end of the 1970s, by Karachentsev and Zasov, using the UAGS spectrograph and an image intensifier with photographic registration of the spectra. The spectrograms were reduced by V. Metlov and published in \cite{Metlov1980}. The distribution of line-of-sight velocities of the emission gas along the three spectral slices obtained in the cited paper unveiled the complexity of the gas motions and the presence of giant H{\sc ii} regions with absolute stellar magnitudes of $-15$ to $-17$. The total dynamical mass of the system according to \cite{Metlov1980} exceeds $10^{11}\,M_\odot$. A comparison of the velocity estimates with our measurements (this work) revealed significant differences in some local values (up to several tens of $\textrm{km~s$^{-1}$}$), mostly in the regions of strong emission. However, the overall pattern of the gas velocity distribution along the slits from \cite{Metlov1980} agrees with the later estimates. A more elaborate study of the system Arp194 was carried out by \cite{Marziani2003} using the results of spectral and photometric observations at the 2.1 m telescope.
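The quoted luminosities follow directly from the apparent K magnitudes and the adopted distance; a worked check, assuming a solar absolute K magnitude of $M_{K,\odot}\approx 3.28$ (a value not quoted in the text):

```python
import math

def k_band_luminosity(m_K, distance_Mpc, M_K_sun=3.28):
    """Solar K-band luminosity from an apparent magnitude and a distance.

    M_K_sun = 3.28 is an assumed absolute K magnitude of the Sun.
    """
    mu = 5.0 * math.log10(distance_Mpc * 1e6 / 10.0)  # distance modulus
    M_K = m_K - mu                                    # absolute magnitude
    return 10.0 ** (-0.4 * (M_K - M_K_sun))

# m_K = 12.7 (S) and 12.1 (N) at 144 Mpc give ~3.5e10 and ~6.2e10 L_sun,
# consistent with the quoted 3.6e10 and 6.3e10 (the small offset reflects
# the assumed M_K_sun).
```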
These authors obtained the spectral slices for two slit orientations and concluded that the unusual morphology of the system is related to the passage of the more compact southern galaxy through the disc of the northern one several hundred Myr ago, which caused the formation of the sequence of bright emission islands between the galaxies and the apparently expanding ring structure around the northern galaxy (a collisional ring), which contains gas and star-forming regions. However, the detailed image of the system obtained later with HST demonstrates a more complex picture: the arc of star-forming regions surrounding the system from the north side almost coincides with the spiral arm of the Na-galaxy, which is deformed and highly elongated to the west, while at the opposite side there are no large-scale structures that could be attributed to the ring. \cite{Marziani2003} gave convincing arguments that the emission regions between the galaxies represent the `blobs of stripped gas due to the interpenetrating encounter' of the southern and northern components of the system. In other words, they resulted from a starburst in the cold gas lost by the galaxies during the collision. The absence of noticeable radiation in the J and K bands speaks in favour of a young age of these bright emission regions. \cite{Marziani2003} estimated the age of the brightest stellar island A, which is close to the S-galaxy (see Fig.\ref{fig1}): $T=7\cdot10^7$ yr, and concluded that it falls into the central part of the galaxy, which is in good agreement with the conclusions of the current paper. The goal of the current paper is to study the individual star-forming regions on the periphery of the galaxies and between them, which could be considered as candidates for tidal dwarfs, in parallel with the analysis of the dynamics and chemical composition of gas in the system.
The paper is organized as follows: in Sec.~\ref{Obs} we present the results of our long-slit spectral observations of Arp194 conducted at BTA, including a description of the data reduction and the kinematic and chemical abundance radial profiles. Sections \ref{Discussion} and \ref{conclusion} are devoted to the analysis of the obtained data and present the general results. In these sections we also make use of the photometric data from SDSS.
We performed long-slit spectral observations of the interacting system Arp194 using three positions of the slit. The main results of the data processing and analysis are given below. \begin{itemize} \item We obtained the distribution of the line-of-sight velocity and velocity dispersion as well as the intensities of emission lines and the oxygen abundance $12 + \log(\mathrm{O/H})$ along the slits. \item Special attention was paid to the bright condensations between the galaxies: the extended star-forming island (region A), the multi-component site of star-forming knots (region B) and the compact object C (see Fig.1). We conclude that region A is a gravitationally bound, short-lived tidal dwarf galaxy which is falling into the massive southern galaxy. The comparison of our dynamical mass estimate with that derived from the photometry indicates that it is devoid of a considerable amount of dark matter. Region B is apparently not in gravitational equilibrium. Object C appeared to be a background spiral galaxy. \item The gas in the system is only partially chemically mixed: the regions with low intensity of emission lines, crossed by the slits, do not reveal a significant systematic variation of O/H. At the same time, we observe a tendency of O/H to decrease with galactocentric distance both from the S and N galaxies. Local velocity dispersions exceeding 50 $\textrm{km~s$^{-1}$}$ mostly occur in the diffuse gas devoid of extended regions of star formation, evidencing strong turbulent motions of the gas. The velocity and abundance distributions allow us to conclude that the gas was stripped from both galaxies and avoided strong mixing. Most likely it will return to the parent galaxies rather than follow the cross-fueling scenario advanced by \cite{Marziani2003}.
\item We compared the colour indices of several discrete regions of star formation (including the central parts of the main galaxies of Arp194) with evolutionary tracks using both the PEGASE2 and Starburst99 models. We found that the colours of the northern galaxies correspond to old stellar systems; the colour of the southern galaxy gives evidence that it experienced a strong burst of star formation; regions A and B have stellar population ages of $10^7-10^8$ yr. \item The system Arp 194 is not typical, because it hosts extended sites of intense star formation between the galaxies. We propose that the rare occurrence of regions of star formation in the bridges between closely interacting galaxies is the result of the short duration of the stage between the fading of the strong turbulent gas motions caused by the interaction and the accretion of gas from the bridge back onto the parent galaxies. \end{itemize}
{ The Central Molecular Zone (CMZ) at the center of our Galaxy is the best template to study star formation processes under extreme conditions, similar to those in high-redshift galaxies. We observed on-the-fly maps of para-H$_{2}$CO transitions at 218 GHz and 291 GHz towards seven Galactic Center clouds. From the temperature-sensitive integrated intensity line ratios of H$_{2}$CO(3$_{2,1}-$2$_{2,0}$)/H$_{2}$CO(3$_{0,3}-$2$_{0,2}$) and H$_{2}$CO(4$_{2,2}-$3$_{2,1}$)/H$_{2}$CO(4$_{0,4}-$3$_{0,3}$) in combination with radiative transfer models, we produce gas temperature maps of our targets. These transitions are sensitive to gas with densities of $\sim$10$^{5}$ cm$^{-3}$ and temperatures <150 K. The measured gas temperatures in our sources are all higher (>40 K) than their dust temperatures ($\sim$25 K). Our targets have a complex velocity structure that requires a careful disentanglement of the different components. We produce temperature maps for each of the velocity components and show that the temperatures of the components differ, revealing temperature gradients in the clouds. Combining the temperature measurements with the integrated intensity line ratio of H$_{2}$CO(4$_{0,4}-$3$_{0,3}$)/H$_{2}$CO(3$_{0,3}-$2$_{0,2}$), we constrain the density of this warm gas to 10$^{4}-$10$^{6}$ cm$^{-3}$. We find a positive correlation of the line width of the main H$_{2}$CO lines with the temperature of the gas, direct evidence for gas heating via turbulence. Our data are consistent with a turbulence heating model with a density of n = 10$^5$ cm$^{-3}$.}
\label{Intro} The central region of the Milky Way, the so-called Central Molecular Zone (CMZ), is an exceptional star-forming environment. This region contains $\sim$10\% of the Galaxy's total molecular gas and produces 5$-$10\% of the Galaxy's infrared and Lyman continuum luminosity \citep{Morris1996}. The conditions (pressure, magnetic field strength, turbulence, gas temperature, etc.) in this region are much more extreme than in Galactic plane clouds \citep{Morris1996}. The star formation rate in the CMZ is a factor of 10$-$100 lower than expected for the huge amount of dense gas and dust contained in this region \citep[e.g.][]{Yusef-Zadeh2009, Immer2012a, Longmore2013, Kruijssen2013}. The high gas temperatures are one of the key properties of the CMZ clouds, influencing the chemistry of the gas as well as the star formation efficiency of the clouds. The gas temperature determines the thermal Jeans mass as well as the sound speed, which in turn sets the Mach number. Understanding the gas temperature structure of Galactic Center clouds is thus crucial for understanding the fragmentation and star-forming mechanisms within them. The discrepancy between observed dust and gas temperatures is a long-known feature of CMZ clouds. While multi-wavelength observations of the dust emission in the CMZ yield dust temperatures of $\sim$20 K \citep{Lis1999,Molinari2011}, comparable to the dust temperatures of Galactic plane clouds, the gas temperatures are much higher \citep[$>$50 K,][]{Guesten1981,Huettemeister1993,Ao2013,Mills2013,Ott2014,Ginsburg2016}. Many previous gas temperature measurements in the CMZ are based on observations of the NH$_{3}$ molecule, which traces low-density gas \citep[n $\sim$ 10$^{3}$ cm$^{-3}$, e.g.][]{Guesten1981,Huettemeister1993, Ott2014}. \citet{Ao2013} mapped the inner $\sim$75 pc of the CMZ in the para-H$_{2}$CO transitions at 218 GHz, sensitive to warmer (T $>$ 20 K) and denser (n $\sim$ 10$^{4}-$10$^{5}$ cm$^{-3}$) gas.
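The role of the gas temperature can be made concrete with the standard expressions for the sound speed, thermal Jeans mass, and Mach number. The mean molecular weight and the Jeans-mass prefactor below are common textbook choices, not values taken from the text:

```python
import math

# physical constants in cgs units
k_B = 1.380649e-16      # Boltzmann constant [erg/K]
m_H = 1.6735575e-24     # hydrogen mass [g]
G = 6.674e-8            # gravitational constant [cm^3 g^-1 s^-2]
M_sun = 1.989e33        # solar mass [g]

def sound_speed(T, mu=2.33):
    """Isothermal sound speed [cm/s]; mu = 2.33 assumes molecular gas."""
    return math.sqrt(k_B * T / (mu * m_H))

def jeans_mass(T, n, mu=2.33):
    """Thermal Jeans mass [M_sun] for temperature T [K] and density n [cm^-3]
    (one common form, M_J ~ c_s^3 / sqrt(G^3 rho); prefactors vary)."""
    rho = mu * m_H * n
    c_s = sound_speed(T, mu)
    return (math.pi ** 2.5 / 6.0) * c_s ** 3 / math.sqrt(G ** 3 * rho) / M_sun

def mach_number(sigma_v_kms, T, mu=2.33):
    """Turbulent Mach number from a 1D velocity dispersion in km/s."""
    return sigma_v_kms * 1e5 / sound_speed(T, mu)
```

Since $M_\mathrm{J}\propto T^{3/2}$ at fixed density, raising the gas temperature from the $\sim$25 K dust value to the $>$100 K gas values measured here increases the Jeans mass by nearly an order of magnitude.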
Their results show high gas temperatures towards many CMZ clouds, comparable to those measured in prior studies. This survey was extended to the whole CMZ ($-$0.4$^{\circ}$ $<$ l $<$ 1.6$^{\circ}$) by \citet{Ginsburg2016}. The inferred gas temperatures range from $\sim$ 60 K to $>$ 100 K, where the highest values are measured towards Sgr B2, the 20 and 50 km/s clouds, and G0.253+0.016 (``The Brick''). Comparing their results with dust temperature measurements of the whole CMZ, they show that the gas is uniformly hotter than the dust. The high gas temperatures are consistent with heating through turbulence, while uniform cosmic ray heating is excluded as a dominant heating mechanism. H$_{2}$CO is a slightly asymmetric rotor molecule. It has two different species (i.e. ortho and para) for which the K$_{\rm a}$\footnote{K$_{\rm a}$ is the projection of J along the symmetry axis for the limiting case of a prolate (oblate) top.} quantum number is odd or even. These two species are not connected by radiative transitions. The differences in the population of levels separated by $\Delta$K$_{\rm a}$~=~2$\cdot$n are due to collisions. \citet{Mangum1993} presented a detailed study of the usability of different H$_{2}$CO transitions for the determination of the kinetic temperature and density of the gas in molecular clouds. For a range of total angular momentum quantum numbers $J$ (here $J$ = 2 and 3), modeling of the relative intensities of (in our case) para-H$_2$CO lines (K$_{\rm a}$ quantum number of 0 or 2) delivers estimates of the density and the temperature. The K$_{\rm a}$ ladders are close in frequency and can thus be observed with the same telescope, very often even in the same spectrum, which makes the method calibration-independent. The J = 3$-$2 and 4$-$3 H$_{2}$CO K$_{\rm a}$ ladders at 218 and 291 GHz, respectively, can be observed with the Atacama Pathfinder Experiment (APEX) telescope, each group within one band.
\citet{Mangum1993} showed that measuring several H$_{2}$CO intensity ratios of transitions with different J values yields better constraints on the kinetic temperature than a single H$_{2}$CO intensity ratio, since a larger range of level energies is covered. In this paper, we report a detailed gas temperature study of seven molecular clouds in the CMZ, using the H$_{2}$CO thermometer at 218 and 291 GHz. The names of the observed sources and their coordinates are listed in Table \ref{SourceCoord}. We chose our targets to be high-density clouds with previous warm gas temperature measurements. As shown in Fig. \ref{Targets-CMZ}, they span the whole CMZ. None of these clouds are photon-dominated or X-ray-dominated regions. There is evidence of widespread shocks in the form of SiO emission in the CMZ, including these clouds. Our sample contains potential star-forming clouds (Sgr C, 20 km/s cloud, 50 km/s cloud, G0.480$-$0.006), quiescent clouds (G0.411+0.050) and shock-heated clouds (G0.253+0.016). There is an ongoing debate whether Sgr D is part of the CMZ or not \citep{Mehringer1998, Blum1999, Sawada2009}. However, this uncertainty does not influence our results or conclusions. In Section \ref{Obs}, we describe the observations and the calibration of the data. In Section \ref{Analysis}, we present how the H$_{2}$CO spectra, ratio and uncertainty maps were produced. In Section \ref{Modeling}, the radiative transfer modeling is described. We discuss the different results in Section \ref{TempMeasurements} and give conclusions in Section \ref{Summary}.
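The ratio-to-temperature inversion at the heart of the H$_{2}$CO thermometer can be sketched in a few lines. The grid below is a hypothetical, purely illustrative monotonic relation (NOT the actual model grid of the paper); in practice it would come from LVG/radiative-transfer models computed at an assumed density.

```python
import numpy as np

# Hypothetical model grid: a para-H2CO line ratio (e.g. R_321) versus kinetic
# temperature at fixed density. Placeholder values, NOT the paper's grid.
T_grid = np.array([30.0, 50.0, 80.0, 120.0, 160.0, 200.0])   # K
R_grid = np.array([0.10, 0.18, 0.28, 0.38, 0.45, 0.50])      # monotonic in T

def tkin_from_ratio(r):
    """Invert the monotonic ratio-temperature relation by linear interpolation."""
    return float(np.interp(r, R_grid, T_grid))

print(tkin_from_ratio(0.28))  # -> 80.0 K on this toy grid
```

Combining two such inversions (one per observed ratio, as with R$_{321}$ and R$_{404}$) over a grid of densities is what breaks the temperature-density degeneracy.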
\label{Summary} In this paper, we present H$_{2}$CO observations of five and seven CMZ clouds at 218 and 291 GHz, respectively. Combining integrated intensity H$_{2}$CO line ratios with radiative transfer models, we obtain gas temperature maps for our clouds. The two different sets of H$_{2}$CO lines (H$_{2}$CO(3$-$2) at 218 GHz and H$_{2}$CO(4$-$3) at 291 GHz) yield two independent estimates of the gas temperature. Our observations at 218 GHz are a factor of $\sim$1.5 deeper than previous H$_{2}$CO CMZ observations by \citet[][compare our median rms values of $\sim$45 mK per pixel to their noise level of 70 mK per pixel]{Ginsburg2016}. While Ginsburg et al. focus on the global temperature differences in the CMZ, we disentangle the different velocity components of the gas in our sources and investigate their temperature structures. This is a significant step since the CMZ clouds exhibit the broadest velocity components observed in our Galaxy. From a comparison of the H$_{2}$CO main lines at 218 and 291 GHz, we found that the H$_{2}$CO(3$_{0,3}-$2$_{0,2}$) line is optically thick in some parts of the clouds. Combining the observed line ratios R$_{321}$ and R$_{404}$, we constrain the density of the warm cloud gas to 10$^{4}-$10$^{6}$ cm$^{-3}$. Our temperature maps at 218 and 291 GHz show clear temperature gradients in our clouds. This indicates that heating mechanisms that act on the bulk of the molecular gas cannot be the main heating sources. Cosmic ray heating is only possible if the heating is non-uniform on very small scales. In a following paper, we will compare our results with complementary observations of shock and star formation tracers as well as supernova remnants in the clouds to understand whether these gradients are caused by local heating through cloud collisions, feedback from newborn stars or the explosions of stars.
Comparing the line widths of the main H$_{2}$CO lines at 218 and 291 GHz with the measured temperatures at selected positions in our clouds, we found a clear positive correlation between these two parameters. This indicates that turbulence plays an important role in the heating of the gas. Our data are consistent with a turbulence model with a density of n = 10$^5$ cm$^{-3}$, which assumes that each cloud has a line-of-sight length of 1 pc.
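The introduction's point that the temperature sets the thermal Jeans mass and the sound speed (and hence the Mach number for a given line width) can be made concrete. A sketch with the standard formulas, assuming an illustrative mean molecular weight $\mu = 2.33$ (this choice, and the example temperatures and line width, are ours, not values from the paper):

```python
import math

# Physical constants (SI)
k_B, G, m_H = 1.380649e-23, 6.674e-11, 1.6726e-27
M_SUN = 1.989e30
MU = 2.33  # assumed mean molecular weight (illustrative choice)

def jeans_mass(T, n_cm3):
    """Thermal Jeans mass [M_sun] for temperature T [K], number density n [cm^-3]:
    M_J = (5 k T / (G mu m_H))^{3/2} * (3 / (4 pi rho))^{1/2}."""
    rho = n_cm3 * 1e6 * MU * m_H
    return ((5 * k_B * T / (G * MU * m_H))**1.5
            * (3 / (4 * math.pi * rho))**0.5 / M_SUN)

def mach(T, fwhm_kms):
    """Mach number from an observed FWHM line width [km/s]."""
    c_s = math.sqrt(k_B * T / (MU * m_H)) / 1e3   # sound speed, km/s
    sigma = fwhm_kms / 2.355                      # 1D velocity dispersion
    return sigma / c_s

# At n = 1e4 cm^-3, raising T from 20 K to 100 K raises M_J by 5^{3/2} ~ 11x
print(jeans_mass(20, 1e4), jeans_mass(100, 1e4))  # ~15 vs ~170 M_sun
print(mach(100, 20.0))  # a 20 km/s line at 100 K is highly supersonic (~14)
```

This is why hot CMZ gas fragments into much larger Jeans masses than cool Galactic plane gas of the same density, and why the broad observed lines imply strongly supersonic turbulence.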
16
7
1607.03535
1607
1607.00536_arXiv.txt
We perform an analysis of the Einstein-Skyrme cosmological model in a Bianchi-IX background. We describe the asymptotic regimes analytically and the generic regimes semi-analytically. It appears that, depending on the product of the Newtonian constant $\kappa$ and the Skyrme coupling $K$, in the absence of the cosmological term there are three possible regimes -- recollapse for $\kK < 2$ and two power-law regimes -- $\propto t^{1/2}$ for $\kK=2$ and $\propto t$ for $\kK > 2$. In the presence of a positive cosmological term, the power-law regimes turn into exponential (de Sitter) ones, while the recollapse regime turns exponential if the value of the $\Lambda$-term is sufficiently large and otherwise remains a recollapse. A negative cosmological term leads to recollapse regardless of $\kK$. All nonsingular regimes have the squashing coefficient $a(t) \to 1$ at late times, which is associated with symmetry-restoring dynamics. Also, all nonsingular regimes appear to be linearly stable -- the exponential solutions unconditionally and the power-law solutions for an open region of the initial conditions.
One of the most important nonlinear field theories is the sigma model, with applications covering many aspects of quantum physics (see e.g. \cite{Manton1} for a review), but within this model it is impossible to build static soliton solutions in 3+1 dimensions. To overcome this, Skyrme introduced a term~\cite{Skyrme} which allows static soliton solutions with finite energy, called {\it Skyrmions} (see also \cite{Manton1, ped2} for reviews), to exist. It appears that excitations around Skyrme solutions may represent fermionic degrees of freedom, suitable to describe baryons (see~\cite{ferm1} for detailed calculations and~\cite{ferm2, ferm3, ferm4, ferm5} for examples). The winding number of Skyrmions is identified with the baryon number in particle physics~\cite{wind}. Apart from particle and nuclear physics, Skyrme theory is relevant to astrophysics~\cite{astro}, Bose-Einstein condensates~\cite{EBc}, nematic liquids~\cite{nematic}, magnetic structures~\cite{magnetic} and condensed matter physics~\cite{cond}. Skyrme theory also naturally appears in the AdS/CFT context~\cite{adscft}. Due to the highly nonlinear character of the sigma and Skyrme models, it is very difficult to build exact solutions in either of them. So, to make the field equations more tractable, one usually adopts a certain {\it ansatz}. For the Skyrme model, one of the best known and most widely used is the hedgehog {\it ansatz} for spherically symmetric systems, which reduces the field equations to a single scalar equation. It is worth mentioning that this {\it ansatz} was recently generalized~\cite{CH} to non-spherically-symmetric cases. Use of the hedgehog {\it ansatz} allows the study of self-gravitating Skyrme models. In particular, the potential presence of Skyrme hair for spherically symmetric black-hole configurations was demonstrated~\cite{hair}.
This is the first genuine counterexample to the ``no-hair'' conjecture, and it appears to be stable~\cite{h1}; its particle-like~\cite{h2} counterparts and dynamical configurations~\cite{h3} have been studied numerically. Subsequently, more realistic spherically and axially symmetric black-hole and regular configurations were studied~\cite{recent}. Apart from spherically symmetric configurations, cosmological-type solutions are of particular interest. The generalized hedgehog {\it ansatz} makes it possible to write down simplified field equations for non-spherically-symmetric configurations, which we used to perform an analysis of Bianchi-I and Kantowski-Sachs models in Einstein-Skyrme cosmology with a $\Lambda$-term~\cite{we14} (a particular subcase was studied in~\cite{another}). The paper~\cite{fabr15} was a logical continuation of these works, as a particular solution of the Bianchi-IX cosmological model was described there. The analysis suggests that, based on the static counterpart of this model, the construction of exact multi-Skyrmion configurations composed of elementary spherically symmetric Skyrmions with non-trivial winding number in four dimensions is possible~\cite{rest} (see also~\cite{sun} for a possible generalization to higher $SU(N)$ models). In this paper we consider the full Bianchi-IX cosmological model in the Einstein-Skyrme system. Our study is motivated from both the field-theory and the cosmological points of view. Indeed, this is one of the few (if not the only) systems where one can study analytically the dynamical and cosmological consequences of a conserved topological charge, which in this particular case is associated with the baryon number. From the cosmological point of view, the Bianchi-IX model is well known and well studied in cosmology -- for instance, in the proof of the inevitability of the physical singularity through the oscillatory approach to it~\cite{belinski}.
Thus, if we consider the Bianchi-IX model, the results can be translated to and compared with their counterparts in our physical Universe. The structure of the manuscript is as follows: first we review the Einstein-Skyrme system and derive the basic equations, then we study the asymptotic cases both with and without a $\Lambda$-term. After that we study the general case, address the linear stability of the obtained solutions and finally discuss and summarize the results.
In the current paper we considered the Bianchi-IX cosmological model in the Einstein-Skyrme system (\ref{full}). The original system was simplified and then considered with growing complexity, which allowed us to build a semi-analytical solution. Purely analytical solutions are obtained for the simplest case with $a(t)\equiv 1$ and $\Lambda=0$; in that case there are three possible solutions -- one with recollapse for $\kK<2$ and two power laws -- $\propto t^{1/2}$ for $\kK=2$ and $\propto t$ for $\kK > 2$. All three are presented in Fig. \ref{pre01a}, and one cannot miss their similarity to the three different Friedmann solutions of classical cosmology -- those with spatial curvature $k=\pm 1$ and $0$. The time scalings are different but the qualitative behavior is the same -- in some sense $(2-\kK)$ plays a role similar to that of the spatial curvature. Further complications of the system act as modifications of the obtained exact solution. Making $a(t)$ dynamical (but still with $\Lambda=0$) leads to oscillatory behavior like that presented in Fig.~\ref{pre01}(a)--(c). Let us recall that oscillatory behavior is a feature of the early Bianchi-IX universe, as discovered by Belinskij, Khalatnikov and Lifshits~\cite{belinski}. If one keeps $a(t)\equiv 1$ but makes $\Lambda > 0$, then the power-law regimes turn exponential, while the recollapse regime turns exponential if (\ref{L1}) is satisfied; if not, it remains a recollapse. Finally, if one combines both -- dynamical $a(t)$ with $\Lambda > 0$ -- the resulting trajectories have oscillations and an exponential (de Sitter) late-time asymptote for $\kK \geqslant 2$; for $\kK < 2$ one has oscillations and de Sitter behavior if $\Lambda > \Lambda_{cr}$ and recollapse if $\Lambda < \Lambda_{cr}$; the separation between these two cases is presented in Fig. \ref{pre01}(d). Recollapse behavior is also encountered in the anti-de Sitter case -- when $\Lambda < 0$ -- and in this case the result is independent of $\kK$.
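The Friedmann-like analogy, with $(2-\kK)$ acting like a spatial curvature, can be illustrated numerically. The toy equation $\dot a^2 = C/a^2 + (\kK-2)/2$ below is an assumed stand-in (NOT the actual Bianchi-IX field equations), chosen only because it reproduces the three quoted asymptotics: recollapse for $\kK<2$, $a \propto t^{1/2}$ for $\kK=2$ and $a \propto t$ for $\kK>2$.

```python
import math

def evolve(kK, C=1.0, a0=1.0, dt=1e-3, t_max=100.0):
    """Integrate the toy model adot^2 = C/a^2 + (kK - 2)/2,
    i.e. addot = -C/a^3, with a leapfrog (kick-drift-kick) scheme."""
    a = a0
    adot = math.sqrt(C / a**2 + (kK - 2.0) / 2.0)
    t = 0.0
    while t < t_max and a > 0.05:
        adot += 0.5 * dt * (-C / a**3)  # half kick
        a += dt * adot                  # drift
        if a <= 0.05:                   # hit the (re)collapse floor
            break
        adot += 0.5 * dt * (-C / a**3)  # half kick
        t += dt
    return t, a

# kK < 2: recollapse at finite t; kK = 2: a ~ t^(1/2); kK > 2: a ~ t
for kK in (1.5, 2.0, 2.5):
    print(kK, evolve(kK))
```

For $\kK=2$ the toy equation integrates to $a=\sqrt{a_0^2+2\sqrt{C}\,t}$, and for $\kK=1.5$, $C=1$ the turnaround radius is $a_{\max}=\sqrt{2C/(2-\kK)}=2$, after which $a$ returns to zero in finite time -- mirroring the closed-Friedmann-like recollapse described above.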
The value of $\Lambda_{cr}$ cannot exceed $\Lambda_0$ from (\ref{L1}) but can be much smaller (by orders of magnitude), as our numerical investigation suggests. In Fig.~\ref{pre02} we provide the distribution of $\Lambda_{cr}$ over the space of initial conditions for three different $a_0$ in panels (a)--(c) and a linear cut over $a_0$ in panel (d). One can see that all nonsingular regimes have $a(t) \to 1$ at late times. From the point of view of the metric (\ref{BIX}), the $a(t) = 1$ solution is the most symmetric one (it has more Killing fields than the $a(t) \ne 1$ solutions), so all nonsingular regimes exhibit symmetry-restoring dynamics, and all these solutions are stable. Singular regimes, which do not possess this feature, are either the $\kK < 2$ cases with $\Lambda < \Lambda_{cr}$ or the $\Lambda < 0$ AdS cases; for the latter the value of $\kK$ is irrelevant. For a more physical analysis we use realistic values of the Skyrme coupling constants~\cite{real}. Then one can immediately see that $\kK \lll 1$, so that $\kK < 2$ is the relevant case. For $\kK < 2$, one can derive the ``lifetime'' from (\ref{sol_noL}) -- with realistic values of the couplings substituted, this time appears to be of the order of the Planck time, which means that without a $\Lambda$-term or some other matter source of sufficient density, a Bianchi-IX universe with Skyrme matter would collapse immediately. On the other hand, on these time scales the space-time cannot be described by classical means and an additional investigation involving quantum physics is required. Finally, if we substitute the coupling constants into (\ref{L1}), the resulting value of the cosmological constant appears to be in agreement with other estimates from quantum field theory that treat it as vacuum energy, and is around 120 orders of magnitude higher than the observed value (the so-called ``cosmological constant problem'', see e.g.~\cite{weinberg}).
In a sense the results of the current paper complement those of~\cite{we14}, where we studied Bianchi-I and Kantowski-Sachs universes in the Einstein-Skyrme system. In both papers the cosmological constant (or possibly some other matter field) is necessary for viable cosmological behavior. But unlike~\cite{we14}, where we demonstrated the need for an upper bound on the value of the $\Lambda$-term, in the current paper we found a lower bound. It is interesting that different topologies in the presence of a Skyrme source require either not-too-large or not-too-small values of the cosmological constant. This finalizes our study of the Bianchi-IX Einstein-Skyrme system. We described its dynamics and derived the conditions for the different regimes to take place. Generally, Einstein-Skyrme systems are very interesting and have not been studied much, probably due to their complexity, so each new result improves our understanding of cosmological hadron dynamics. In particular, these systems offer the interesting possibility of studying the cosmological consequences of having a conserved topological charge. Thus the present analysis is quite relevant, as the energy-momentum tensor is that of Skyrmions of unit topological charge.
16
7
1607.00536
1607
1607.05852_arXiv.txt
We present a study of the parsec-scale multi-frequency properties of the quasar S4~1030+61 during a prolonged period of radio and $\gamma$-ray activity. Observations were performed within the \textit{Fermi} $\gamma$-ray telescope, OVRO 40-m telescope and MOJAVE VLBA monitoring programs, covering five years from 2009. The data are supplemented by four-epoch VLBA observations at 5, 8, 15, 24, and 43~GHz, which were triggered by the bright $\gamma$-ray flare registered in the quasar in 2010. The S4~1030+61 jet exhibits an apparent superluminal velocity of (6.4$\pm$0.4)c and does not show ejections of new components in the observed period, while decomposition of the radio light curve reveals nine prominent flares. The measured variability parameters of the source show values typical of \textit{Fermi}-detected quasars. Combined analysis of the radio and $\gamma$-ray emission implies a spatial separation between the emitting regions in these bands of about 12~pc and locates the $\gamma$-ray emission within a parsec of the central engine. We detected changes in the value and direction of the linear polarization and the Faraday rotation measure. The value of the intrinsic brightness temperature of the core is above the equipartition state, while its behavior as a function of distance from the core is well approximated by a power law. Altogether these results show that the radio flaring activity of the quasar is accompanied by injection of relativistic particles and energy losses at the jet base, while S4~1030+61 has a stable, straight jet well described by standard conical jet theories.
Multi-wavelength observations are a powerful tool to study the emission mechanisms at subparsec and parsec scales in active galactic nuclei (AGN). Relativistic jets of quasars are oriented close to the line of sight, which in combination with Doppler boosting effects makes them some of the most powerful objects in the Universe and causes them to show strong variability at wavelengths from radio to $\gamma$ rays. Whereas the very long baseline radio interferometry (VLBI) technique enables us to study the subparsec-to-parsec structure of quasars, $\gamma$-ray telescopes have much poorer resolution and the location of the $\gamma$-ray emission is not well established. \begin{figure*} \centering \includegraphics[angle=-90,scale=.26]{1030+611_C2_Istack1.eps}\quad \includegraphics[angle=-90,scale=.26]{1030+611_X1_Istack1.eps}\quad \includegraphics[angle=-90,scale=.26]{1030+611_U1_Istack1.eps}\quad \includegraphics[angle=-90,scale=.26]{1030+611_K1_Istack1.eps}\quad \includegraphics[angle=-90,scale=.26]{1030+611_Q1_Istack1.eps}\\ \caption{Stacked naturally weighted contour images of total intensity over four multi-wavelength epochs at 5.0, 8.4, 15.4, 23.8 and 43.2~GHz (from left to right). Contours of equal intensity are plotted starting from 3~rms level at $\times$2 steps. The 1~rms value equals 0.07, 0.07, 0.14, 0.10 and 0.23~mJy at 5.0, 8.1, 15.4, 23.8 and 43.2~GHz respectively. The full width at half maximum (FWHM) of the synthesized beam is shown in the bottom left corner of every image. Color circles represent model-fit components. The color figures are available in the electronic version of the article.} \label{fig_cxust} \end{figure*} \begin{figure*} \centering \includegraphics[angle=-90,scale=.7]{1030+611_U1_Imultiepoch.eps} \caption{S4~1030+61 naturally weighted contour images of total intensity at 15.4~GHz from 2009 (left) to 2013 (right) for all the MOJAVE epochs as listed in Sections~\ref{s:multifreqVLBA} and \ref{s:MOJAVE}.
Contours of equal intensity are plotted starting from 4 r.m.s.\ level at $\times$5 steps. FWHM of the synthesized beam is shown by crosses. Symbols represent model-fit components, while solid lines connect the positions of components at the first and last epochs. The designation of the components is given in Section~\ref{s:struct}.} \label{fig_uall} \end{figure*} Many authors have discussed the connection between the radio and $\gamma$-ray emission. \citet{vt_96,jorstad_etal01,lv_03,kovalev_etal09,pushkarev_etal10,leon-tavares_etal11,fuhrmann_etal14,ramakrishnan_etal15} and \citet{karamanavis_etal16} reported significant correlations between these bands. In localizing the $\gamma$-ray emitting region, authors agree that it originates in the relativistic jet of the AGN, but dispute its more precise location within the jet. Two scenarios are favoured: the sub-parsec region around the central massive black hole \citep[e.g.,][]{bl_95,tavecchio_etal10,dotson_etal2012} and regions located a few parsecs downstream in the jet, where the jet becomes transparent at radio wavelengths \citep[e.g.,][]{lv_03}. The parsec-scale core was identified by \cite{kovalev_etal09,pushkarev_etal10} as a likely location for both the $\gamma$-ray and radio flares, which appear within typical timescales of up to a few months of each other. Many $\gamma$-ray flares are connected with the ejection of a newly born component \citep[e.g.,][]{jorstad_etal01,agudo_etal11}, but sometimes no observed components can be associated with the $\gamma$-ray flares \citep[e.g.,][]{pk_98}, though many sources show dual properties \citep[e.g.,][]{marscher_etal08,ramakrishnan_etal14,morozova_etal14}. \begin{table} \caption{VLBA central frequencies.
The full table is available online.\label{tb_frq}} \begin{tabular}{lcr} \hline IEEE band & IF & Frequency\\ & &(MHz)\\ \hline C & 1 & 4604.5\\ & 2 & 4612.5\\ & 3 & 4999.9\\ & 4 & 5007.5\\ X & 1 & 8104.5\\ & 2 & 8112.5\\ \hline \end{tabular} \end{table} \begin{table} \caption{Amplitude corrections for the S2087E VLBA experiment. The full table is available online.\label{tb_acorr}} \begin{tabular}{@{}cccccc@{}} \hline Antenna & Band & Epoch & IF & Polarization & Correction\\ \hline BR & C & All & 1,2 & RCP & 1.10\\ KP & C & All & 1,2 & RCP & 1.10\\ KP & C & All & 1,2 & LCP & 1.13\\ MK & C & All & 1 & LCP & 1.10\\ PT & C & All & 4 & RCP & 0.90\\ FD & X & All & 1,2 & RCP & 1.08\\ \hline \end{tabular} \end{table} In this paper we present a multi-epoch multi-frequency VLBI analysis of the flat spectrum radio quasar S4~1030+61 (TXS 1030+611, 1FGL J1033.8+6048), which shows strong variability at radio and $\gamma$-ray wavelengths. This is a high optical polarization quasar at a redshift of 1.4009 \citep{schneider_etal10,2012AJ....143..119I}. It has been monitored by the Large Area Telescope (LAT) on board the \textit{Fermi Gamma-ray Space Telescope} (\Fermi) and by the 40-m radio telescope of the Owens Valley Radio Observatory (OVRO) since 2008, providing good time coverage. After a bright $\gamma$-ray flare occurred in 2010~\citep[][]{ciprini_atel10,smith_atel10,carrasco_atel10}, a four-epoch VLBA campaign at 5, 8, 15, 24, and 43~GHz was started. These VLBI observations are supplemented by observations within the MOJAVE\footnote{Stands for Monitoring Of Jets in Active galactic nuclei with VLBA experiments~\citep[][]{lister_etal09,lister_etal13}} program made in 2009---2013. Here we report a detailed kinematic, polarization and radio-to-$\gamma$-ray study of the blazar. \begin{figure} \centering \includegraphics[scale=.31]{kinematics_sep.eps} \caption{Angular separation of the modelled components from the core versus time at 15.4~GHz.
The solid lines are a weighted linear fit to the estimated core distances. The accuracy in position is estimated with equation~(\ref{eq:posacc}). The error bars of the C2 and C3 component positions are smaller than the symbol size.} \label{fig_kins} \end{figure} \begin{figure} \centering \includegraphics[scale=.33]{kinematics_flux.eps} \caption{Radio light curves of the modelled components at 15.4~GHz for 14 observational VLBA epochs.} \label{fig_kin} \end{figure} This paper is structured as follows: in Section 2 we describe the VLBA, OVRO, and \Fermi\ observations and the data reduction. In Section 3 we present results of the jet structure and kinematics study, core shift analysis, study of the polarization properties, decomposition of the radio light curve into flares, and a joint radio-$\gamma$-ray analysis. In Section 4 we discuss these results and touch upon their astrophysical interpretation. Section 5 summarizes our findings. Throughout the paper a flat $\Lambda$CDM cosmology was used with $H_0$ = 68~km s$^{-1}$ Mpc$^{-1}$, $\Omega_M$ = 0.3, and $\Omega_{\Lambda}$ = 0.7 \citep[][]{planck_15}. In this cosmology an angular size of 1~mas corresponds to a linear scale of 8.68~pc at the redshift 1.4009. The luminosity distance is 10320.2~Mpc. We refer to the ``radio core'' at each frequency as the brightest component at the apparent, stationary base of the jet and associate it with the region of optical depth $\tau \thickapprox 1$ \citep{marscher08}. The position angles (PA) are measured from North through East. The spectral index $\alpha$ is defined as $S \propto \nu^{\alpha}$, where $S$ is the flux density observed at frequency $\nu$. \begin{table*} \caption{Results of Gaussian model fitting and component parameters at 4.6--43.2~GHz.
Column designation: (1) observation date; (2) the name of the component ("U" stands for unclassified); (3) the integrated flux density in the component and its error; (4) the radial distance of the component center from the center of the image and its error; (5) the position angle of the center of the component relative to the image center; (6) the FWHM major axis of the fitted Gaussian; (7) axial ratio of major and minor axes of fitted Gaussian; (8) major axis position angle of fitted Gaussian. The full table is available online.\label{tb_mdf}} \begin{tabular}{@{}lcr@{$\pm$}lr@{$\pm$}lcccc@{}} \hline Date & Name & \multicolumn{2}{c}{Flux density} & \multicolumn{2}{c}{Distance} & P.A. & Major & Ratio & Major P.A.\\ & & \multicolumn{2}{c}{(Jy)} & \multicolumn{2}{c}{(mas)} &($^\circ$)& (mas) & & ($^\circ$) \\ (1)& (2) & \multicolumn{2}{c}{(3)} & \multicolumn{2}{c}{(4)} &(5)& (6) &(7) & (8) \\ \hline \multicolumn{10}{c}{4.6~GHz}\\ \hline 2010--05--24 & Core & 0.147&0.007&0.00&0.04&0.0&0.202&1.0&\dots\\ & C2 & 0.055&0.003&1.02&0.07&173.8&0.480&1.0&\dots\\ & C1 & 0.0263&0.0013&3.02&0.14&166.0&1.094&1.0&\dots\\ & U1 & 0.0210&0.0011&4.43&0.18&162.9&1.918&1.0&\dots\\ & U0 & 0.0126&0.0006&7.5&0.4&$-$177.6&4.607&1.0&\dots\\ 2010--07--09 & Core & 0.147&0.007&0.00&0.04&0.0&0.146&1.0&\dots\\ \hline \end{tabular} \end{table*} \begin{table*} \begin{center} \caption{Measured integrated flux density (I), linear polarization (p) and degree of polarization (m), electric vector position angle and rms noises at the central region (corresponds to the peak flux density of the map) of S4~1030+61 maps at seven frequencies.\label{tb_fp}} \begin{tabular}{ccccccccc} \hline Parameter & Epoch$^a$ & \multicolumn{7}{c}{Frequency (GHz)}\\ \cline{3-9} & & 4.6 & 5.0 & 8.1 & 8.4 & 15.4 & 23.8 & 43.2\\ \hline I (mJy) & 1 & 183.4 & 183.4 & 219.4 & 228.3 & 279.1 & 329.5 & 401.5\\ & 2 & 185.4 & 184.5 & 216.4 & 223.9 & 282.8 & 365.8 & 470.3\\ & 3 & 210.1 & 209.2 & 238.0 & 243.7 & 320.1 & 452.0 & 628.7\\ & 
4 & 220.2 & 216.1 & 271.4 & 278.5 & 404.4 & 585.6 & 757.2\\ p (mJy) & 1 & 2.5 & 2.9 & 6.6 & 6.7 & 3.3 & 1.7 & 7.8\\ & 2 & 4.9 & 5.2 & 8.6 & 8.9 & 8.6 & 6.2 & 5.1\\ & 3 & 5.0 & 4.9 & 9.4 & 10.0& 7.7 & 4.8 & 6.9\\ & 4 & 4.2 & 3.7 & 8.8 & 9.5 & 5.1 & 3.6 & 7.4\\ m (\%) & 1 & 1.4 & 1.6 & 3.0 & 2.94& 1.2 & 0.5 & 1.9\\ & 2 & 2.6 & 2.8 & 4.0 & 4.0 & 3.0 & 1.7 & 1.1\\ & 3 & 2.4 & 2.3 & 4.0 & 4.1 & 2.4 & 1.1 & 1.1\\ & 4 & 1.9 & 1.7 & 3.2 & 3.4 & 1.3 & 0.6 & 1.0\\ EVPA (deg) & 1 & 86.5 & 81.7 & 54.6 & 50.3 & 49.1 & 74.0 & 165.4\\ & 2 & 76.9 & 66.5 & 55.9 & 53.3 & 68.3 & 61.3 & 34.5\\ & 3 & 74.3 & 66.5 & 60.3 & 57.5 & 69.4 & 80.5 & 64.1\\ & 4 & 80.2 & 76.0 & 73.0 & 70.5 & 96.7 & 108.7 & 57.7\\ $\sigma_I$ (mJy) & 1 & 0.15 & 0.14 & 0.15 & 0.4 & 0.18 & 0.17 & 0.4\\ & 2 & 0.18 & 0.12 & 0.14 & 0.12 & 0.17 & 0.2 & 0.6\\ & 3 & 0.16 & 0.14 & 0.14 & 0.14 & 0.17 & 0.2 & 0.5\\ & 4 & 0.2 & 0.13 & 0.12 & 0.12 & 0.16 & 0.19 & 0.3\\ $\sigma_p$ (mJy) & 1 & 0.2 & 0.15 & 0.16 & 0.17 & 0.17 & 0.21 & 0.5\\ & 2 & 0.18 & 0.15 & 0.16 & 0.15 & 0.18 & 0.3 & 0.6\\ & 3 & 0.19 & 0.16 & 0.16 & 0.15 & 0.18 & 0.3 & 0.5\\ & 4 & 0.2 & 0.16 & 0.16 & 0.14 & 0.17 & 0.2 & 0.4\\ \hline \multicolumn{9}{l}{$^a$ Epochs are labeled as follows: 1 for 2010--05--24, 2 for 2010--07--09, 3 for 2010--08--28, 4 for 2010--10--18.}\\ \end{tabular} \end{center} \end{table*}
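The adopted flat $\Lambda$CDM cosmology ($H_0=68$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_M=0.3$) and the quoted luminosity distance and linear scale at $z=1.4009$ can be cross-checked with a short numerical integration of the standard distance formulas (no external packages assumed):

```python
import math

H0, Om, OL, z = 68.0, 0.3, 0.7, 1.4009   # values quoted in the text
c = 299792.458                           # speed of light, km/s

def E(zz):
    """Dimensionless Hubble rate for flat LCDM."""
    return math.sqrt(Om * (1.0 + zz)**3 + OL)

# comoving distance via Simpson's rule on int_0^z dz'/E(z')
n = 1000                                 # even number of intervals
h = z / n
s = 1.0 / E(0.0) + 1.0 / E(z)
for i in range(1, n):
    s += (4 if i % 2 else 2) / E(i * h)
Dc = (c / H0) * (h / 3.0) * s            # comoving distance, Mpc
Dl = Dc * (1.0 + z)                      # luminosity distance, Mpc
Da = Dc / (1.0 + z)                      # angular-diameter distance, Mpc
scale_pc = Da * 1e6 * math.radians(1.0 / 3.6e6)  # pc per mas (1 mas in rad)

print(round(Dl, 1), round(scale_pc, 2))  # ~10320 Mpc and ~8.68 pc/mas
```

Both quoted values (10320.2 Mpc and 8.68 pc/mas) are recovered to within the rounding of the input parameters.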
S4~1030+61 shows flaring activity at radio wavelengths throughout the five years of observations without reaching a quiescent state. Meanwhile, at $\gamma$ rays it has two prominent activity periods. The OVRO light curve (Fig.~\ref{fig_ovro}) shows that the source experienced its largest flare in early 2014, which may be connected with the enhanced activity in the $\gamma$-ray band. Despite the strong activity of the source, our kinematic analysis at 15~GHz does not provide evidence for the birth of another component. The C1 component seems to be stationary, which can be explained by (i) changes of the angle between the line of the jet propagation and the line of sight (bending of the jet), (ii) interaction of the component with the boundary between the jet outflow and the interstellar medium \citep[e.g.,][]{homan_etal03} or (iii) a standing shock in the jet. The decrease of the brightness temperature with distance at 15~GHz follows a power law well up to the location of the C1 component (see Fig.~\ref{fig_tbs}), which would place the region where the jet bends further downstream. This makes option (i) unlikely. The spectral index maps show optically thin emission at the location of C1 (see Fig.~\ref{fig_spi}), which also rules out option (ii). The observed value of the brightness temperature ($T_\mathrm{obs}$) is connected to the intrinsic temperature ($T_\mathrm{int}$) via the Doppler factor: $T_\mathrm{obs} = {\delta}T_\mathrm{int}$. Taking into account the estimate of ${\delta}=15$, the average core brightness temperature is close to the equipartition value of $10^{11}$~K during the first six VLBA epochs (2009--06--25 to 2010--10--18), where the source shows only moderate flux variations. Meanwhile, the average brightness temperature after the separation of the C3 component is $3.3\times10^{11}$~K.
\cite{homan_etal06} show that AGN jet cores are close to equipartition in their median-low state, while jets in their maximum brightness state go out of equipartition. The upper limit of $2\times10^{11}$~K on the intrinsic brightness temperature reported by the authors is close to our estimate. They suggest that jets become particle dominated during flaring activity, resulting from particle injection or acceleration at the jet base. Hardening of the S4~1030$+$61 spectrum during the outburst evolution at the end of 2010 (see Fig.~\ref{fig_spi}) supports this suggestion. The brightness temperature of the source's jet decreases with the distance $r$ from the jet base and the size of the components $d$ as $T_\mathrm{b,jet}\propto r^{-f}$ and $T_\mathrm{b,jet}\propto d^{-\xi}$, respectively. The estimated power-law indices at 4.6 and 15.4~GHz are $f=2.75\pm0.04$ and $\xi=2.8$. Considering these results, the power-law index $l$ of the jet width dependence on distance from the core can be estimated as $f/\xi$ \cite[see][]{pushkarev_kovalev_12}, which results in a value of $l$ close to 1. Applying this to equations~(\ref{eq:tb_d}) and (\ref{eq:kr}), together with $\alpha=-0.82\pm0.2$ at the position of the C2 component, leads to the power-law indices $n=1.7$, $b=1.1$. These results imply that equipartition between the magnetic and particle energies holds in the S4~1030+61 jet, while the transverse component dominates the magnetic field energy density. The deprojected distance of the apparent radio core from the central black hole can be estimated as \begin{equation} r_\mathrm{core} = {\frac{\Omega_{r\nu}}{\nu\mathrm{sin}\theta}} \approx {\frac{\Omega_{r\nu}\sqrt{1+\beta^2_\mathrm{app}}}{\nu}}, \label{eq:csh_abs} \end{equation} where $\nu$ is the observed frequency in GHz, and $\Omega_{r\nu}$ is the core position offset (pc$\cdot$GHz), given by equation~(\ref{eq:cshoffset})~\citep{L98}. The resulting value is (32$\pm$8)~pc$\cdot$GHz.
Taking into consideration the maximum observed apparent speed, the absolute distance of the 15.4~GHz core from the AGN central engine then equals (14$\pm$3)~pc, which is close to the median value of 13.2~pc measured by~\cite{pushkarev_etal12} for quasars. Thus, the 15~GHz core lies outside the broad line region. An estimate of the magnetic field strength at 1~pc can be made through the proportionality \citep{hirotani_05,og_09mf,zdziarski_etal15} \begin{equation} B_1 \simeq 0.025 \Big( \frac{\Omega^3_{r\nu}(1+z)^3}{{\delta}^2\phi \mathrm{sin}^2\theta} \Big)^{1/4} G, \end{equation} where $\phi$ is the half-opening angle of the jet. Considering $2\phi \simeq 0.26\Gamma^{-1}$ \citep{2009AA...507L..33P}, the magnetic field strength estimate at 1~pc is 2.2~G. The magnetic field strength in the core at the observed frequency can be found as \begin{equation} B_\mathrm{core}(\nu) = B_1 r^{-1}_\mathrm{core} \end{equation} and for the 15.4~GHz core results in 0.16~G. If we assume that the 2010--04--15 $\gamma$-ray flare and the 2011--03--07 radio flare are connected, then the time lag of 0.8~years between these flares (see Section~\ref{s:c_g_r}) yields an estimate of the deprojected distance of the $\gamma$-ray emitting region from the 15~GHz core of about 12~pc. The core shift measure (see above) places the 15~GHz core at an absolute distance of (14$\pm$2)~pc from the central engine. Combining these results, we infer that the $\gamma$-ray emission originates about 2~pc from the central engine. The cross-correlation of the radio and $\gamma$-ray light curves is not significant. This could be due to the much faster variability time scales in the $\gamma$-ray band. Inspecting Fig.~\ref{fig_ovro} by eye, it is evident that both bands experience enhanced activity around the same time.
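As a consistency check of the core distance and field estimates above, a short sketch using the quoted central values (treated here as exact, ignoring the stated uncertainties):

```python
import math

omega = 32.0      # core position offset Omega_{r nu}, pc*GHz (quoted value)
beta_app = 6.4    # apparent jet speed, in units of c
nu = 15.4         # observing frequency, GHz
B1 = 2.2          # magnetic field strength at 1 pc, G

# r_core = Omega * sqrt(1 + beta_app^2) / nu,
# using the approximation sin(theta) ~ 1/sqrt(1 + beta_app^2)
r_core = omega * math.sqrt(1.0 + beta_app**2) / nu   # pc
B_core = B1 / r_core                                 # B_core = B1 / r_core, G

print(round(r_core, 1), round(B_core, 2))  # ~13.5 pc (i.e. 14 +/- 3) and ~0.16 G
```

Both quoted numbers (14 pc for the 15.4 GHz core distance and 0.16 G for its field) follow directly from the measured core shift and apparent speed.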
The lack of a detected VLBI component in this period could be due to dissipation of the disturbance that caused the $\gamma$-ray emission on its way from the $\gamma$-ray production region to the radio core. The ongoing $\gamma$-ray activity after the strongest flare may be due to scattering of seed photons on the jet medium. Based on observations of PKS~1510--089, \cite{marscher_etal10a,marscher_etal10b} suggest that this medium might be a relatively slow sheath of the jet or a faster spine. Meanwhile, the $\gamma$ rays from the main flare are caused by inverse Compton scattering. The short variability interval of the $\gamma$-ray flux suggests a compact emitting region. Following \cite{jorstad_etal05}, the size of the emitting region is limited by $R < \Delta t \delta c$, where $\Delta t$ is the minimum variability timescale. The minimum variability timescale seen in the $\gamma$-ray light curve is about 2 weeks (limited by the \Fermi\ sensitivity), which implies $R<0.18$~pc. The size of the 43.2~GHz radio core is $\leq0.12$~pc (considering the size of the modelled core to be 0.014~mas). The deprojected distance of the 43.2~GHz core from the central engine, following equation~(\ref{eq:cshoffset}) and equation~(\ref{eq:csh_abs}), is estimated to be 1.3~pc. This value is close to the location of the $\gamma$-ray emission region estimated earlier, about 2~pc from the central engine. It implies that the $\gamma$-ray emission may be localized in the 7~mm core region. Variations of the parsec-scale jet orientation with time and distance from the jet base have been observed in many blazars \citep[e.g.,][]{stirling_etal03,agudo_09,lister_etal13}. \citet{rani_etal14} relate these variations to the $\gamma$-ray flux variability and find a significant correlation between them. Structural changes in the jet orientation of S4~1030+61 are seen in Fig.~\ref{fig_cxust}. 
Its PA changes from about 166\degr\ (at 5~mas from the core) to 170\degr\ (at the position of C2) and to about $-170$\degr\ (at the position of C3). Such variations might explain the absence of a correlation between the radio and $\gamma$-ray bands in S4~1030+61: changes of the jet orientation on sub-parsec scales after the strongest $\gamma$-ray flare may render the $\gamma$-ray flux too faint for detection by \Fermi. The core region of the source is strongly affected by Faraday effects, which result in the complex behavior of the linear polarization degree with $\lambda$. The dependence differs from the behavior expected for the optically thick region of a jet \citep{PS67}. Possible explanations for this behavior are the following: (i) anomalous~\citep{sokoloff_etal98} or inverse~\citep{homan_12} depolarization, which appears in regular twisted or tangled magnetic fields; (ii) spectral depolarization~\citep{conway_etal74} due to smearing of multiple components within the observed region. Unfortunately, distinguishing between these alternatives is difficult. The EVPA vs. $\lambda^2$ behavior is consistent with both hypotheses. If option (i) holds, then the relatively stable behavior of the degree of polarization in time implies a constant field pitch angle during the radio flare. The change in amplitude may then imply a change in the strength of the magnetic field during the flare, which is supported by the observed changes in the Faraday rotation. In turn, the RM depends on both the magnetic field along the line of sight and the electron density. However, \cite{BM10} pointed out that the RM in the core region should be treated cautiously: quantities there change on scales much smaller than the interferometric beam, so all characteristics are smeared. A new component undergoes compression while passing through the radio core, resulting in changes in both the magnetic field strength and the electron density. 
\citet[and references therein]{dammando_etal13} show that any activity state of the source can be described by changes in the electron distribution alone. Option (ii) is also plausible, since the core shows an inverted spectrum and the C3 component is too close to the core to be studied separately. \citet{taylor_00} and \citet{zt_01} also observed temporal variations in polarized intensity and RM value in the cores of 3C~273 and 3C~279. The authors relate these changes to the creation and ejection of new components there. Indeed, \cite{lister_etal13} report the emergence of new components in these sources at times close to the observing epochs of \cite{taylor_00} and \cite{zt_01}. \citet{lico_etal14} and \citet{giroletti_etal15} show significant temporal variations of the polarized flux density, RM and direction of the intrinsic EVPA in the core of Markarian~421 during its $\gamma$-ray activity, which links the magnetic field to the $\gamma$-ray emission. Assuming RM changes of 400~\rmu\ (twice the value measured in the 8.1--15.4~GHz range) in the core of S4~1030$+$61 over the 14 observational epochs, the 15.4~GHz EVPA would rotate by only about 9\degr, which has no significant influence on $\phi_0$ relative to the observed EVPA shown in Fig.~\ref{fig_uall}. This suggests that the EVPA changes at 15.4~GHz trace the orientation of the magnetic field. No clear connection between the magnetic field direction and the activity state can be established.
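The 9\degr\ figure follows directly from the $\lambda^2$ law of Faraday rotation; a short numerical check (Python):

```python
import math

# EVPA rotation induced by a rotation-measure change: dchi = dRM * lambda^2.
C = 2.99792458e8            # speed of light, m/s
nu = 15.4e9                 # observing frequency, Hz
lam2 = (C / nu) ** 2        # wavelength squared, m^2
dRM = 400.0                 # rad/m^2, twice the 8.1-15.4 GHz measured value
dchi = math.degrees(dRM * lam2)
print(f"EVPA rotation at 15.4 GHz: {dchi:.1f} deg")  # ~9 deg
```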
{ We evaluate the prompt atmospheric neutrino flux at high energies using three different frameworks for calculating the heavy quark production cross section in QCD: NLO perturbative QCD, $k_T$ factorization including low-$x$ resummation, and the dipole model including parton saturation. We use QCD parameters, the value for the charm quark mass and the range for the factorization and renormalization scales that provide the best description of the total charm cross section measured in fixed-target experiments, at RHIC and at the LHC. Using these parameters we calculate differential cross sections for charm and bottom production and compare with the latest data on forward charm meson production from LHCb at $7$ TeV and at $13$ TeV, finding good agreement with the data. In addition, we investigate the role of nuclear shadowing by including nuclear parton distribution functions (PDF) for the target air nucleus using two different nuclear PDF schemes. Depending on the scheme used, we find that the reduction of the flux due to nuclear effects varies from $10\%$ to $50\%$ at the highest energies. Finally, we compare our results with the IceCube limit on the prompt neutrino flux, which is already providing valuable information about some of the QCD models.}
Measurements of high-energy extraterrestrial neutrinos by the IceCube Collaboration \cite{Aartsen:2013bka,IceCube} have heightened interest in other sources of high-energy neutrinos. A background to neutrinos from astrophysical sources is the flux of neutrinos produced in high-energy cosmic ray interactions with nuclei in the Earth's atmosphere. While pion and kaon production and decay dominate the low-energy ``conventional'' neutrino flux \cite{Honda:2006qj,Barr:2004br,Gaisser:2014pda}, short-lived charmed hadron decays to neutrinos dominate the ``prompt'' neutrino flux \cite{Gondolo:1995fq, Pasquali:1998ji,Pasquali:1998xf,Martin:2003us,ERS,Bhattacharya:2015jpa,Garzelli:2015psa,Gauld:2015yia,Gauld:2015kvh} at high energies. The precise cross-over energy where the prompt flux overtakes the conventional flux depends on the zenith angle and is somewhat obscured by the large uncertainties in the prompt flux. The astrophysical flux appears to dominate the atmospheric flux at an energy of $E_\nu\sim 1$ PeV. Atmospheric neutrinos come from hadronic interactions that occur at much higher energies than the neutrinos themselves. With the prompt neutrino carrying about a third of the parent charm energy $E_c$, which in turn carries about 10\% of the incident cosmic ray nucleon energy $E_{CR}$, the relevant center-of-mass energy for the $pN$ collision that produces $E_\nu\sim 1$ PeV is $\sqrt{s}\sim 7.5$ TeV, making a connection to LHC experiments, e.g., \cite{Aaij:2013mga,Aaij:2015bpa}. There are multiple approaches to evaluating the prompt neutrino flux. The standard approach is to use NLO perturbative QCD (pQCD) in the collinear approximation with the integrated parton distribution functions (PDFs) and to evaluate the heavy quark pair production, which is dominated by the elementary gluon fusion process \cite{Nason:1987xz,Nason:1989zy,Mangano:1991jk}. Such calculations were performed in \cite{Pasquali:1998ji,Pasquali:1998xf,Martin:2003us} (see also \cite{Gondolo:1995fq}). 
Recent work to update these predictions using modern PDFs and models of the incident cosmic ray energy spectrum and composition appears in \cite{Bhattacharya:2015jpa}, and, including accelerator physics Monte Carlo interfaces, in \cite{Garzelli:2015psa,Gauld:2015yia,Gauld:2015kvh}. Using $x_c=E_c/E_{CR}\sim 0.1$ for charm production, one can show that high energies require gluon PDFs with longitudinal momentum fractions $x_1\sim x_c$ and $x_2\sim 4m_c^2/(x_c s ) \ll x_1$. For a factorization scale $M_F\sim 0.5$--$4\, m_c$, this leads to large uncertainties. In addition, due to the small $x$ of the gluon PDFs in the target, one may need to address the resummation of large logarithms at low $x$. In particular, comparisons with LHCb data at $7$ TeV \cite{Aaij:2013mga} were used in ref.~\cite{Gauld:2015kvh} to reduce uncertainties in the pQCD calculation (see also ref.~\cite{Cacciari:2015fta}). Using FONLL \cite{Cacciari:1993mq,Cacciari:1998it,Cacciari:2001td,Nason:2004rx} predictions for the $p_T$ distribution of charm mesons obtained with different PDFs, they have shown that LHCb data for $D$ mesons and $B$ mesons can reduce the theoretical uncertainty due to the choice of scales to $10\%$, and the uncertainty due to the PDFs by as much as a factor of 2 at high energies in the region of large rapidity and small $p_T$. Still, the uncertainty due to the low-$x$ gluon PDF remains relatively large. Given that the gluon PDF is probed at very small values of $x$, it is important to investigate approaches that resum large logarithms $\ln (1/x)$ and that can incorporate other novel effects in this regime, such as parton saturation. 
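The kinematic chain quoted above ($E_\nu \to E_c \to E_{CR} \to \sqrt{s}$) and the resulting gluon momentum fractions can be sketched numerically; the charm mass value $m_c=1.27$ GeV below is assumed here for illustration only (the paper scans a range of masses and scales):

```python
import math

# E_nu ~ E_c/3 and E_c ~ 0.1*E_CR, so E_nu = 1 PeV maps to E_CR = 3e7 GeV.
m_p = 0.938                  # GeV, proton mass
E_nu = 1.0e6                 # GeV (1 PeV)
E_CR = 3.0 * E_nu / 0.1      # incident cosmic-ray nucleon energy
s = 2.0 * m_p * E_CR         # fixed-target c.m. energy squared, GeV^2
print(f"sqrt(s) ~ {math.sqrt(s)/1e3:.1f} TeV")       # ~7.5 TeV

# Gluon momentum fractions probed: x1 ~ x_c, x2 ~ 4 m_c^2/(x_c s) << x1.
m_c = 1.27                   # GeV, assumed charm mass (illustrative)
x_c = 0.1
x1 = x_c
x2 = 4.0 * m_c**2 / (x_c * s)
print(f"x1 ~ {x1}, x2 ~ {x2:.1e}")                   # x2 ~ 1e-6
```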
Such effects are naturally incorporated in the so-called dipole model approaches \cite{Nikolaev:1990ja, mueller, Nikolaev:1995ty, gbw, forshaw, sgbk, bgbk, Kopeliovich:2002yv, Raufeisen:2002ka, kowalski, iim, Goncalves:2006ch,soyez,albacete,aamqs,gkmn, Albacete:2009fh, Albacete:2010sy, Ewerz:2011ph, Jeong:2014mla, Block:2014kza} and within the $k_T$ (or high energy) factorization framework \cite{Catani:1990eg,Collins:1991ty,Levin:1991ya,Ryskin:1995sj}. There is another major source of uncertainty in the low-$x$ region. The target air nuclei have an average nucleon number of $\langle A\rangle=14.5$. Traditionally in the perturbative approach, nuclear effects are entirely neglected and a linear scaling with $A$ is used for the cross section. Nuclear shadowing effects, however, may not be negligible at very low $x$ and low factorization scales. In the present paper, we extend our previous work (BERSS) \cite{Bhattacharya:2015jpa} to include nuclear effects in the target and analyze the impact of low-$x$ resummation and saturation effects on the prompt neutrino flux. We incorporate nuclear effects in the target PDFs by using in our perturbative calculation two different sets of nuclear parton distribution functions: nCTEQ15 \cite{Kovarik:2015cma} and EPS09 \cite{Eskola:2009uj}. As there are no nuclear data in the relevant kinematic regime, these nuclear PDFs are largely unconstrained in the low-$x$ region ($x<0.01$), and there is a substantial uncertainty associated with nuclear effects. Nevertheless, for charm production, the net effect is a suppression of the cross section and the corresponding neutrino flux. At $E_\nu=10^6$ GeV, the central values of the nCTEQ PDFs yield a flux as low as $\sim 73\%$ of the flux evaluated with free nucleons in the target, while the corresponding reduction from the EPS09 PDFs is at the level of $10\%$. 
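Schematically, shadowing replaces the linear $A$-scaling of the cross section by $A\,R_g(x)$ with $R_g<1$ at low $x$. The toy ratio below is purely illustrative, it is not the nCTEQ15 or EPS09 parameterization, only a stand-in with the qualitative behavior described in the text:

```python
def R_g(x, r_sat=0.75, x_onset=0.01):
    """Toy gluon shadowing ratio: -> r_sat for x -> 0, -> 1 for x >> x_onset."""
    return r_sat + (1.0 - r_sat) / (1.0 + x_onset / x)

A = 14.5   # average nucleon number of air
for x in (1e-6, 1e-4, 1e-2):
    # shadowed cross section per target nucleus vs. linear A-scaling
    print(f"x = {x:.0e}: sigma_pA / (A*sigma_pN) = {R_g(x):.2f}")
```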
We also show our results using the dipole approach, with significant theoretical improvements with respect to our previous work (ERS) \cite{ERS}. These include models of the dipole cross section that are updated to include more precise experimental data. Furthermore, we calculate the prompt neutrino flux in the $k_T$ factorization approach, using unintegrated gluon distribution functions with low-$x$ resummation and also with saturation effects. We compare these calculations to the dipole and NLO pQCD results. Overall, we find that all calculations give a consistent description of the total charm cross section at high energies, for $pp$ and $pN$ production of $c\bar{c}$. We also evaluate the $b\bar{b}$ cross section and the contribution of beauty hadrons to the atmospheric lepton flux. For each approach we find that our choice of theoretical parameters is in agreement with the latest LHCb data \cite{Aaij:2013mga,Aaij:2015bpa} on charm transverse momentum and rapidity distributions in the forward region, and with the total cross sections at $7$ TeV and at $13$ TeV. In addition to including nuclear and low-$x$ effects, we also consider four different cosmic ray fluxes \cite{Gaisser:2012zz,Gaisser:2013bla,Stanev:2014mla} and show that the prompt neutrino flux depends strongly on the choice of the primary cosmic ray flux. The present paper is organized as follows. In the next section we present calculations of the total and differential charm cross section. We present comparisons of all three approaches, pQCD, dipole model and $k_T$ factorization, and we show the impact of nuclear effects on the total charm cross sections. We show comparisons of our theoretical results with the rapidity distributions measured at LHCb energies. In sec.~3 we compute the fluxes of muon and tau neutrinos and compare them with the IceCube limit. Finally, in sec.~4 we state our conclusions. 
Detailed formulas concerning the fragmentation functions and meson decays are collected in the Appendix.
\subsection{LHC and IceCube} As figs.\ \ref{fig:dsdylhcbnlo}, \ref{fig:dsdylhcbdm}, \ref{fig:dsdylhcbkt} show, the rapidity distributions measured at $7$ and $13$ TeV \cite{Aaij:2015bpa,Aaij:2013mga} seem to be somewhat in tension with all three approaches if one considers a fixed prescription for the scales, independent of energy. The theoretical error bands, however, do accommodate the data, as noted in ref.~\cite{Gauld:2015kvh}. Figs.\ \ref{fig:dsdylhcbnlo}, \ref{fig:dsdylhcbdm}, \ref{fig:dsdylhcbkt} compare the distributions of charm quarks with the measured $D^0$ distributions. In the case of the $k_T$ factorization approach, the $7$ TeV data seem to be more consistent with the calculation with the nonlinear gluon density, or with the lower band of the calculation with the linear gluon density, whereas the data at $13$ TeV are more in line with the evaluation with linear evolution. This is rather counterintuitive and could perhaps suggest that the calculation with nonlinear evolution is disfavored by the data. However, given the spread of the uncertainty of the calculation, it is not possible to make a decisive statement at this time, and more studies are necessary. The dipole model evaluation favors the Soyez form for $\sqrt{s}=7$ TeV and the AAMQS or Block form for $\sqrt{s}=13$ TeV for our fixed value of $\alpha_s$. The central pQCD predictions seem to indicate that the distributions do not rise quickly enough with increasing $\sqrt{s}$. In ref.~\cite{Gauld:2015yia}, the NLO pQCD prediction of the ratio of $d\sigma/dy$ in the forward region for LHCb at $\sqrt{s}=13$ TeV to that at $\sqrt{s}=7$ TeV was on the order of 1.3--1.5, which we also see in fig.~\ref{fig:dsdylhcbnlo}. The data show the ratio to be closer to a factor of 2. Nevertheless, for all three approaches, the LHCb data fall within the theoretical uncertainty bands. We have calculated the rapidity and $p_T$ distributions using our theoretical QCD parameters, i.e. 
the range of factorization scales for a given charm mass that was determined from the energy dependence of the total charm cross section. We have found our results to be consistent with the LHCb data. The range of $m_T$-dependent factorization scales in the pQCD evaluation adequately covers the range of the LHCb data, while the range of $m_c$-dependent factorization scales overestimates the uncertainty. In the case of the dipole models, in which there is no explicit $p_T$ dependence, we have only compared with the LHCb rapidity distributions. \begin{figure} [t] \centering \includegraphics[width=0.5\textwidth]{Zpd-comparison-ifl2.pdf} \caption{$Z_{pD^0}(x_{max})/Z_{pD^0}(x_{max}=1)$ for the H3p flux and $E=10^6$ and $10^7$ GeV.} \label{fig:xmax} \end{figure} The IceCube Neutrino Observatory and other high-energy neutrino detectors may be useful in getting a handle on forward charm production. Indeed, the high-energy prompt lepton flux depends on charm production at even higher rapidity than measured by LHCb, as can be seen by the following argument. In both the high- and low-energy forms of the prompt lepton fluxes, the $Z$-moments for cosmic ray production of charm, e.g., $Z_{pD^0}(E)$, depend on the lepton energy $E$. To evaluate the $Z$-moment for charm production, the energy integral over $E'$ in eq.~(\ref{eq:zmomdef}) can be cast in the form of an integral over $x_E=E/E'$ that runs from $0\to 1$, accounting for incident cosmic rays ($p$) with energy $E'$ producing, in this case, a $D^0$ with energy $E$. Fig.~\ref{fig:xmax} shows the fraction of the $Z$-moment integral in eq.~(\ref{eq:zmomdef}) for $x_E=0\to x_{max}$ for two different energies, using NLO pQCD with the central scale choice and the H3p cosmic ray flux. For $E=10^6$ GeV, about $10\%$ of the $Z$-moment comes from $x_E<x_c=3.6\times 10^{-2}$, while for $E=10^7$ GeV, this same percentage comes from $x_E<x_c=1.5\times 10^{-2}$. 
We can use the value of $x_E>x_c$ that gives 90\% of the $Z$-moment as a guide to the useful kinematic ranges in high-energy $pp$ collider experiments. We approximate \begin{equation} x_E\simeq x_F\simeq \frac{m_T}{\sqrt{s}}e^{y_{cm}}\simeq \frac{m_D}{\sqrt{s}}e^{y_{cm}}\, , \end{equation} in terms of the hadronic center of mass rapidity, which leads to \begin{equation} y_{cm}> \frac{1}{2}\ln \Biggl(\frac{x_c\, 2 m_p E}{m_D^2}\Biggr)\equiv y_{cm}^c \end{equation} for 90\% of the $Z$-moment evaluation. For $E=10^6$ GeV, this indicates that the $Z$-moment is dominated by $y_{cm}>4.9$ with $\sqrt{s}=1.4-7.3$ TeV. For $E=10^7$ GeV, $y_{cm}>5.7$ and $\sqrt{s}=4.4-35$ TeV. These approximate results show that the LHCb measurements directly constrain only a small portion of the contribution of charm production to the prompt neutrino flux. Finally, let us note that there could potentially be another source of charm in the proton. It has been suggested \cite{Brodsky:1980pb,Hobbs:2013bia} that heavy quark pairs could exist in the Fock state decomposition of bound hadrons. This would be an additional non-perturbative contribution and is usually referred to as intrinsic charm, to distinguish it from the perturbative and radiatively generated component considered in this work. Intrinsic charm parameterized in PDFs has been explored in, e.g., refs.\ \cite{Dulat:2013hea,Ball:2016neh}. There are studies that explore how to probe intrinsic charm in direct and indirect ways \cite{Boettcher:2015sqn,Halzen:2013bqa}. Intrinsic charm, if it exists, would be concentrated mostly at high values of proton $x$ and may therefore be another contribution to the very forward production relevant to the prompt lepton flux. Its unique features were recently discussed in \cite{Halzen:2016pwl,Halzen:2016thi,Laha:2016dri}. 
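The rapidity thresholds quoted above follow from these approximations; a short numerical check (Python, with $m_D \simeq 1.87$ GeV):

```python
import math

m_p, m_D = 0.938, 1.87   # GeV, proton and D-meson masses
# (E, x_c) pairs from the Z-moment discussion above:
for E, x_c in [(1.0e6, 3.6e-2), (1.0e7, 1.5e-2)]:
    y_c = 0.5 * math.log(x_c * 2.0 * m_p * E / m_D**2)   # y_cm^c threshold
    s_lo = 2.0 * m_p * E          # x_E = 1
    s_hi = s_lo / x_c             # x_E = x_c
    print(f"E = {E:.0e} GeV: y_cm > {y_c:.1f}, "
          f"sqrt(s) ~ {math.sqrt(s_lo)/1e3:.1f}-{math.sqrt(s_hi)/1e3:.1f} TeV")
```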
We note, however, that the current IceCube limit on the prompt flux is already quite constraining and leaves a rather narrow window for a sizeable intrinsic charm component. Eventual IceCube observations (rather than limits) of the prompt atmospheric lepton flux may be unique in their ability to measure or constrain the physics at high rapidities. \subsection{Summary} In this work we have presented results for the prompt neutrino flux using several QCD approaches: an NLO perturbative QCD calculation including nuclear effects, three different dipole models, and the $k_T$ factorization approach with the unintegrated gluon density from the unified BFKL-DGLAP framework with and without saturation effects. Numerical results are listed in tables \ref{table:flux-ncteq15}, \ref{table:flux-ct14eps}, \ref{table:flux-dmkt} and \ref{table:flux-ncteq15-nutau}. The energy dependence of the total charm cross section, measured from low energies ($100$ GeV) to LHC energies ($13$ TeV), is best described by the NLO pQCD approach. On the other hand, the dipole and $k_T$ factorization approaches are theoretically suited to describing heavy quark production at high energies, but they need additional corrections at lower energies. We have included theoretical uncertainties due to the choice of PDFs, the choice of scales and nuclear effects, constrained by the total charm cross section measurements for energies between $100$ GeV and $13$ TeV. We have shown that the differential cross sections for charmed mesons obtained with these QCD parameters are in good agreement with the LHCb data. We have found that the prompt neutrino flux is higher in the case of the dipole model and the $k_T$ factorization model (without saturation) than in the NLO pQCD case. The former seem to be numerically consistent with the previous ERS \cite{ERS} results, while the NLO pQCD flux is smaller than BERSS \cite{Bhattacharya:2015jpa}. For the nCTEQ15-14 evaluation, this is mostly due to the nuclear effects. 
In particular, we have found that the nuclear effect on the prompt neutrino flux is large in the case of the pQCD approach with the nCTEQ15-14 PDFs, as large as $30\%$ at high energies, while this effect is smaller ($\sim 20\%$) for the dipole model approach. The EPS09 nuclear corrections suppress the pQCD flux calculated with free nucleons by only $\sim 10\%$. The nuclear corrections are also significant in the $k_T$ factorization approach, as large as $50\%$ at high energies, thus lowering the flux to a level comparable with that obtained using NLO pQCD with nuclear PDFs. Contributions from $b\bar{b}$ are on the order of 5--10\% of the prompt flux of $\nu_\mu+\bar{\nu}_\mu$ in the energy range of interest to IceCube. For completeness, we have also evaluated the flux of $\nu_\tau + \bar{\nu}_\tau$ from $D_s$ and $B$ decays. We have also shown results for different cosmic ray primary fluxes and shown how the shape of the particular choice affects the neutrino flux. As before, the updated fluxes for the primary CR give much lower results than the simple broken power law used in many previous estimates. Finally, we have compared our predictions with the IceCube limit \cite{IceCube}. We have found that the current IceCube limit seems to exclude some dipole models and the upper limit of the $k_T$ factorization model (without any nuclear shadowing), while our results obtained with the NLO pQCD approach with nCTEQ15-14 and the calculations based on $k_T$ factorization with nuclear corrections included are substantially lower and thus evade this limit. 
Since it is very important to determine the energy at which prompt neutrinos become dominant over the conventional neutrino flux, we expect that the calculation of the conventional flux might be improved by using data from the two LHC experiments that have detectors in the forward region, the Large Hadron Collider forward (LHCf) experiment \cite{LHCf} and the Total, Elastic and Diffractive Cross-section Measurement experiment (TOTEM) \cite{TOTEM}. The LHCf experiment measures neutral particles emitted in the very forward region ($ 8.8 < y < 10.7$), where particles carry a large fraction of the collision energy, which is relevant to a better understanding of the development of the showers of particles produced in the atmosphere by high-energy cosmic rays. The TOTEM experiment takes precise measurements of protons as they emerge from collisions in the LHC at small angles to the beampipe, thus in the forward region. In addition to measuring the total and elastic cross sections, TOTEM has measured the pseudorapidity distribution of charged particles at $\sqrt s = 8$ TeV in the forward region ($ 5.3 < |\eta| < 6.4$). These measurements could be used to constrain models of particle production in cosmic ray interactions with the atmosphere and could thereby affect the conventional neutrino flux, which comes from pion and kaon decays. Future IceCube measurements have a good chance of providing valuable information about the elusive physics at very small $x$, in a kinematic range beyond the reach of present colliders. Keeping in mind the caveats involved in the current IceCube treatment of the atmospheric cascade and the incoming cosmic ray fluxes, the observation or non-observation of the prompt flux may give important insight into the QCD mechanism for heavy quark production. First, the nuclear gluon distribution in the region $x \le 0.01$ is currently not constrained by collider or fixed-target experiments. 
On the other hand, we expect the upcoming 6-year IceCube data to be sensitive to our pQCD flux results, especially those obtained with the EPS framework that includes nuclear effects. Second, from our study we find that the IceCube limit shown in fig.~\ref{fig:flux-icecube} already severely constrains the dipole model approach, even with the lowest cosmic ray flux (H3p). While it is possible that a modified dipole approach, such as a next-to-leading-order calculation, would yield a lower charm cross section that is not in tension with IceCube, the dipole model calculation used here is not flexible enough to be modified so that it evades the limit, i.e., this tension cannot be resolved by adjusting the dipole parameters, because they are constrained by the LHCb data. \subsection*{Note added} After submitting our paper, the IceCube Collaboration released an upper limit on the prompt atmospheric muon neutrino flux of 1.06 times the ERS flux, based on 6 years of data \cite{Aartsen:2016xlq}. We find that the results presented in this work are below this new IceCube limit.
{} {This paper reports accurate laboratory frequencies of the rotational ground state transitions of two astronomically relevant molecular ions, NH$_3$D$^+$ and CF$^+$.} {Spectra in the millimeter-wave band were recorded by the method of rotational state-selective attachment of He-atoms to the molecular ions stored and cooled in a cryogenic ion trap held at 4~K. The lowest rotational transition in the {\it A} state (ortho state) of NH$_3$D$^+$ ($J_K=1_0-0_0$), and the two hyperfine components of the ground state transition of CF$^+$ ($J=1-0$) were measured with a relative precision better than 10$^{-7}$.} {For both target ions the experimental transition frequencies agree with recent observations of the same lines in different astronomical environments. In the case of NH$_3$D$^+$ the high-accuracy laboratory measurements lend support to its tentative identification in the interstellar medium. For CF$^+$ the experimentally determined hyperfine splitting confirms previous quantum-chemical calculations and the intrinsic spectroscopic nature of a double-peaked line profile observed in the $J=1-0$ transition towards the Horsehead PDR. } {}
Molecular cations are important constituents of the interstellar medium (ISM). Exothermic and barrierless ion-molecule reactions are the key drivers of the chemistry in many interstellar environments, such as the diffuse medium, photon-dominated regions (PDRs) and dense, cold molecular clouds. In many cases molecular cations have also proven to be excellent probes of the physical conditions in a variety of astronomical sources. Recent examples are the detection of ArH$^+$ in the diffuse galactic and extragalactic interstellar medium as a tracer of almost purely atomic hydrogen gas \citep{BSO2013,SNM2014,MMS2015}, and the observation and use of the {\it ortho/para}-ratio of H$_2$D$^+$ as a chemical clock to determine the age of a molecular cloud \citep{BSC2014}. One essential prerequisite for the detection and use of new molecular probes is the knowledge of highly accurate ($\Delta \nu / \nu< 10^{-6}$) rotational transition frequencies from laboratory experiments. In the case of reactive molecular ions, standard absorption spectroscopy through electrical discharges is often hampered by the low production yields of the target ions and by spectral contamination from the multitude of simultaneously generated molecular species. In particular, rotational ground-state transitions, often the most readily observed lines in the cold interstellar medium, are difficult to measure in the high-excitation conditions of a laboratory plasma. These limitations can be overcome by employing sensitive action spectroscopic techniques on mass-selected ions in cryogenic ion traps. In our group we have in the past years developed several such techniques based on laser induced reactions \citep[LIR;][]{SKL1999,SLR2002} and demonstrated their potential to determine highly accurate rotational transition frequencies \citep{ARM2008,GKK2010,GKK2013,JAW2014}. 
Here we present high-resolution measurements of the rotational ground-state transitions of NH$_3$D$^+$ and CF$^+$, whose astrophysical importance as well as previous laboratory studies are summarized in the remainder of this section. We then, in Section \ref{sec_lab}, introduce the experimental setup and the method of rotational state-selective attachment of helium atoms to mass-selected ions stored in a cryogenic (4~K) 22-pole ion trap instrument \citep{ABK2014}, previously applied in our group for rotational spectroscopy of C$_3$H$^+$ \citep{BKS2014}, and present experimental results and spectroscopic analyses for both ions. In Section \ref{discussion} we discuss our results in the context of recent astrophysical observations. \subsection{The ammonium ion, NH$_4^+$} The ammonium ion, NH$_4^+$, is related to nitrogen chemistry in the interstellar medium, which has received renewed interest in recent years due to astronomical observations of light nitrogen hydrides (NH, NH$_2$, NH$_3$) with {\it Herschel}/HIFI in cold protostellar \citep{HMB2010,BCH2010} and diffuse gas \citep{PBC2010,PDM2012}. In the cold interstellar medium NH$_4^+$, formed by successive hydrogenation reactions starting from N$^+$ with H$_2$, is assumed to be the gas-phase precursor of the ubiquitous ammonia molecule, NH$_3$, the dominant product of its dissociative recombination with electrons \citep[e.g.,][and references therein]{LeBourlot1991,MWM2012}. Ammonia, once formed either by the above gas-phase route or by sublimation from ices in warmer regions of the ISM, can be transformed back to NH$_4^+$ by exothermic proton transfer reactions, dominantly with H$_3^+$, so that the chemistry of these species is closely linked. 
The high deuterium fractionation of ammonia seen in many dark clouds \citep{RTC2000,SOO2000,LRG2002,TSM2002,LGR2006} is caused primarily by similar transfer reactions with the deuterated forms of H$_3^+$, producing deuterated variants of NH$_4^+$, followed by dissociative recombination with electrons \citep{RC2001,FPW2006,SHC2015}. Whereas NH$_4^+$, a spherical top molecule, is not observable by radio astronomy, its mono- (and doubly-) deuterated forms have comparatively large permanent electric dipole moments \citep[0.26~D in the case of NH$_3$D$^+$,][]{NA1986} and possess rotational transitions in the mm- and submm-wavelength range. The monodeuterated variant NH$_3$D$^+$ was recently tentatively detected toward the massive star-forming region Orion-IRc2 and the cold prestellar core B1-bS \citep{CTF2013}. In both sources a spectral feature at 262816.7~MHz was observed and identified as the $J_K=1_0-0_0$ rotational ground-state transition of NH$_3$D$^+$, based on accompanying high-resolution infrared data of the $\nu _4$ vibrational band \citep{DCH2013} that significantly reduced the uncertainty of the predicted transition frequency compared to an earlier study \citep{NA1986}. Nevertheless, the reported frequency uncertainty of $\pm 6$~MHz is comparable to the typical spacing between spectral features in a line-rich source like Orion, leaving ambiguity in the line assignment. Here we present a direct laboratory measurement of the observed rotational transition, with two orders of magnitude higher accuracy, lending additional support to the identification of NH$_3$D$^+$ in space. 
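To put the accuracy gain in numbers (Python; the $\pm 6$~MHz prediction uncertainty and the $10^{-7}$ relative precision are the values quoted in this paper):

```python
# NH3D+ 1_0-0_0 line: prediction vs. trap-measurement accuracy.
nu = 262816.7e6            # Hz, observed line frequency
unc_pred = 6.0e6           # Hz, uncertainty of the IR-based prediction
unc_lab = 1.0e-7 * nu      # Hz, bound from the quoted relative precision
print(f"prediction: +/-{unc_pred/1e6:.0f} MHz; lab: < {unc_lab/1e3:.0f} kHz; "
      f"gain: ~{unc_pred/unc_lab:.0f}x")   # roughly two orders of magnitude
```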
\subsection{The fluoromethylidynium cation, CF$^+$} The fluoromethylidynium cation, CF$^+$, has been detected in a variety of interstellar environments, namely in two photodissociation regions (PDRs) \citep{NSM2006,GPG2012}, in several diffuse and translucent galactic gas clouds \citep{LPG2014,LGP2015}, in the envelope of a high-mass protostar \citep{FBS2015}, and recently even in an extragalactic source \citep{MKB2016}. The main interest in CF$^+$ observations lies in its formation pathway: CF$^+$ is formed by an exchange of the fluorine atom of hydrogen fluoride (HF) in a bimolecular reaction with ionized atomic carbon (C$^+$), and thus can serve as a molecular tracer for HF and C$^+$ \citep{NWS2005,NSM2006,GPG2012,LGP2015}. Both of these species are keystones in astrochemistry: HF is the main reservoir of fluorine in the ISM and a useful probe of molecular H$_2$ column density, whereas C$^+$ is the main gas-phase reservoir of carbon in the diffuse interstellar medium, providing the dominant cooling process via its ${}^2P_{3/2}-{}^2P_{1/2}$ fine structure transition at 158~$\mu$m (1.9~THz). Neither the C$^+$ fine structure line nor the rotational ground state line of HF (at around 1.2~THz) is observable from the ground, whereas CF$^+$ (with a rotational constant of around 51.3~GHz) has many accessible transitions at mm-wavelengths. Its rotational spectrum is well studied up to 1.6~THz by absorption spectroscopy through magnetically enhanced negative glow discharges of CF$_4$ and H$_2$ \citep{PAH1986,CCP2010,CCB2012}. The CF$^+$ ground state transition, however, had not been observed in a laboratory plasma. In fact, the highest resolution measurement of this line so far comes from its astronomical observation in the Horsehead PDR \citep{GPG2012}.
Interestingly, the authors found a double-peaked feature at the expected CF$^+$ ($J=1-0$) transition frequency in this narrow-line source, which they originally attributed to kinematic effects based on its similarity to the observed C$^+$ line profile. Shortly thereafter, however, quantum chemical calculations revealed that the line profile can be explained spectroscopically by hyperfine splitting (hfs) caused by the non-zero nuclear spin ($I=1/2$) of the fluorine nucleus \citep{GRG2012}. The low temperatures achievable in our cryogenic ion trap experiments enabled us to resolve the hyperfine splitting of the CF$^+$ ground state transition experimentally for the first time, and to compare the derived fluorine spin-rotation constant to the quantum chemical calculations.
\label{discussion} In this work we present a highly accurate ($\Delta \nu / \nu =6\cdot 10^{-8}$) experimental frequency for the $J_K=1_0-0_0$ rotational ground state transition of ortho-NH$_3$D$^+$ that can directly be compared to available astronomical observations. The narrow spectral emission feature at 262816.73(10)~MHz detected by \cite{CTF2013} toward the cold prestellar core B1-bS agrees within twice its experimental uncertainty with our experimental value of $262\,816.904(15)$~MHz. The small discrepancy might be attributed to uncertainty in the LSR velocity of the source. Using the laboratory rest frequency we obtain a v$_{\mbox{LSR}}$ of 6.7~km/s from the astronomical line. This value is well within the range of v$_{\mbox{LSR}}$ found in single dish observations of Barnard B1-b \citep[e.g.,][]{LGR2006,DGR2013,CBA2014}, but falls somewhat between the systemic velocities of 6.3~km/s and 7.2~km/s associated with the two embedded cores B1-bS and B1-bN, derived from high-spatial-resolution observations of, e.g., N$_2$H$^+$ by \cite{HH2013}. Nevertheless, from a purely spectroscopic view our laboratory measurements support the assignment of the astronomical lines observed in B1-bS and Orion IRc2 to NH$_3$D$^+$. Ancillary support for this identification can be obtained by the observation of additional rotational transitions, for which accurate rest frequencies can now be predicted from our spectroscopic analysis (Tables \ref{nh3d+} and \ref{nh3dpar}). The para ground state transition is of particular interest here, since it will give information on the ortho/para ratio of NH$_3$D$^+$, which is intimately linked to the spin state ratios of ammonia and H$_3^+$ and their deuterated variants \citep{SHC2015}. Unfortunately, both the $J_K=2_1-1_1$ para ground state line and the next higher ortho transition, $J_K=2_0-1_0$, both at around 525.6~GHz, are extremely difficult to observe from the ground.
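The LSR-velocity determination described above is simple radio-convention Doppler arithmetic; a minimal sketch (frequencies in MHz, taken from the text) quantifies the shift between the tentative rest frequency of \cite{CTF2013} and the laboratory value in velocity units:

```python
# Radio-convention Doppler shift: v = c * (nu_rest - nu_obs) / nu_rest
C_KMS = 299792.458  # speed of light, km/s

def velocity_shift_kms(nu_mhz, nu_rest_mhz):
    """Velocity (km/s) implied by a frequency nu_mhz relative to the rest frequency."""
    return C_KMS * (nu_rest_mhz - nu_mhz) / nu_rest_mhz

# Shift between the tentative rest frequency (262816.73 MHz) and the
# laboratory value (262816.904 MHz): about 0.2 km/s.
dv = velocity_shift_kms(262816.73, 262816.904)
```

This only quantifies the frequency discrepancy in velocity units; the absolute v$_{\mbox{LSR}}$ of 6.7~km/s quoted above additionally depends on the observed sky frequency of the line.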
\\ Both transitions were covered in the HEXOS line survey towards Orion KL \citep{CBN2014,CBN2015} using the HIFI instrument \citep{GHP2010} on the Herschel Space Observatory \citep{PRP2010}. Spectra extracted from the publicly available archival data show no spectral features with significant intensity above the rms value of $\sigma=20$~mK (in $T_A^*$ units) at the two predicted transition frequencies. This is consistent with the estimated peak intensity of around 25~mK for this line, based on the intensity of the $J_K=1_0-0_0$ line ($\sim200$~mK) observed by \cite{CTF2013} with the IRAM 30m telescope towards the IRc2 position (offset by only $3\arcsec$ from the HEXOS survey pointing position). Our estimate assumes that NH$_3$D$^+$ is present in the compact ridge region (with a source size of a few arcseconds), based on the observed v$_{\mbox{LSR}}=9$~km/s and the narrow linewidth usually attributed to this component. In this case the $J=2-1$ emission lines seen within the $40\arcsec$ Herschel beam are much more strongly diluted than the $J=1-0$ line observed with the IRAM 30m telescope ($10\arcsec$ beam). We furthermore assumed that the rotational levels are thermalised at an excitation temperature of $>80$~K (except for the ortho:para ratio, which was set to 1), justified by the rather small Einstein coefficients for spontaneous emission, $A_{ij}$ of the order of $4\cdot 10^{-5}$~s$^{-1}$, together with an H$_2$ density of $10^{6}$~cm$^{-3}$ in the compact ridge \citep{CBN2014}. The cold prestellar cloud B1-bS, exhibiting far fewer and narrower lines at sub-millimeter wavelengths than the complex Orion region, is likely a better target to search for confirming transitions of NH$_3$D$^+$ (the two 525.6~GHz lines should be similar in intensity to the detected ortho ground state line, assuming a rotational excitation temperature of 12~K). Their detection, however, must await the next generation of receivers onboard the airborne SOFIA observatory.
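The beam-dilution argument above can be made concrete with the standard Gaussian filling factor $\theta_s^2/(\theta_s^2+\theta_b^2)$; the $3\arcsec$ source size in the sketch below is an illustrative assumption for the compact ridge, not a fitted value:

```python
def beam_filling(theta_source, theta_beam):
    """Gaussian-beam filling factor theta_s^2 / (theta_s^2 + theta_b^2)
    for a source of angular size theta_source observed with beam theta_beam."""
    return theta_source**2 / (theta_source**2 + theta_beam**2)

# Compact (~3") source in the IRAM 30m (~10") vs. Herschel/HIFI (~40") beams:
f_iram = beam_filling(3.0, 10.0)
f_hifi = beam_filling(3.0, 40.0)
dilution_ratio = f_iram / f_hifi  # ~15: the Herschel lines are far more diluted
```

This order-of-magnitude dilution is what brings the expected Herschel line intensities down to the tens-of-mK level discussed above.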
An alternative is to search for higher deuterated variants of the ammonium ion, i.e. NH$_2$D$_2^+$ and ND$_3$H$^+$, as suggested by \cite{CTF2013}. The feasibility of spectroscopic studies on these species, the prerequisite for an astronomical search, will be discussed below. \\ Owing to the low ion temperatures and thereby narrow Doppler widths achievable in our trap experiments, we were able to resolve the two hyperfine components of the CF$^+$ $J=1-0$ rotational ground state transition, and to extract their transition frequencies to within $3-4$~kHz, i.e. to a relative accuracy better than $10^{-7}$. From this we determined the spin-rotation interaction constant accurately to be $C_I=227(3)$~kHz. This value is in excellent agreement with the high-level theoretical value of 229.2~kHz that was used by \cite{GRG2012} to account for the observed double-peaked line structure in the CF$^+$ $J=1-0$ transition towards the Horsehead PDR \citep{GPG2012}. Our spectroscopic experiments thus confirm the intrinsic nature of the astronomically observed line structure, ruling out kinematic effects as its origin.\\ State-selective attachment of He atoms to mass-selected, cold molecular ions, used in this work for the spectroscopy of NH$_3$D$^+$ and CF$^+$, is a very powerful and general method for rotational spectroscopy. Due to the low internal ion temperature it is particularly well suited to probing rotational ground state transitions, which are difficult to observe with standard absorption experiments through high-temperature plasmas. In contrast to the more established action spectroscopic method of laser induced reactions (LIR) \citep{SLR2002}, the method directly probes purely rotational transitions, which for LIR is only possible in specific cases \citep[as for H$_2$D$^+$,][]{ARM2008} or through IR-mm-wavelength double resonance techniques \citep{GKK2013,JAW2014}.
Moreover, for LIR a suitable endothermic reaction scheme is needed, whereas the attachment of He to the cold ions, and, even more importantly, the rotational state dependency of the attachment process required for the spectroscopic scheme, have been observed in our laboratory for a considerable number of cations. These are, apart from NH$_3$D$^+$ and CF$^+$ presented here, C$_3$H$^+$ \citep{BKS2014}, CO$^+$, HCO$^+$, CD$^+$ \citep{PhDKluge}, CH$_2$D$^+$, and CD$_2$H$^+$ (discussed in forthcoming publications). The observable depletion signal for a given rotational transition depends on various experimental parameters and intrinsic properties of the studied molecular ion, as we have investigated thoroughly using known rotational transitions of CD$^+$ and HCO$^+$ \citep{PhDKluge}. Provided that the participating rotational states exhibit a difference in the ternary attachment rate, the depletion signal is stronger the more the population ratio of the two levels is shifted, upon resonant excitation, away from the initial thermal Boltzmann distribution (i.e. that without radiative excitation). In the best case, when radiative processes dominate over collisional ones, the population ratio of the two states reaches ``radiative'' equilibrium, given by the ratio of their statistical weights. The method therefore profits from the low achievable ion temperature, and is best suited for ground state transitions with large rotational energy spacings. \\ Although the presented method is very sensitive, in the sense that only a few thousand mass-selected molecular ions are needed to record rotational spectra, searching for unknown transitions over ranges larger than a few tens of MHz is very time-consuming, owing to the long experimental cycle times (around 2~s per frequency point) and the multiple iterations needed to achieve a sufficient signal-to-noise ratio for detecting depletion signals of typically only several percent.
A very promising approach towards recording rotational transitions of new species, like the higher deuterated variants of NH$_4^+$, is to significantly reduce the experimental search range by obtaining accurate predictions for pure rotational transitions from preceding high-accuracy measurements of their ro-vibrational spectra via LIR (if applicable) or LIICG \citep[Laser Induced Inhibition of Complex Growth, as described for ro-vibrational spectroscopy in][]{ABK2014,AYB2015,JKS2016}. Frequency accuracies below 1~MHz have been demonstrated in our group in this way for CH$_2$D$^+$ \citep{GKK2010,GKK2013} and deuterated variants of H$_3^+$ \citep{JKS2016}, and by determining the ground-state combination differences of CH$_5^+$ \citep{AYB2015}, using a narrow-linewidth OPO as the infrared radiation source, referenced in the latter two studies to a frequency comb \citep{AKS2012}, and taking advantage of the low ion temperatures in the trap instruments. Once accurate predictions of the rotational transitions are available, they can be recorded directly with the method of state-selective attachment of He atoms presented here, or by LIR IR-mm-wavelength double resonance methods if applicable, thus providing the high accuracy and resolution needed for a comparison to radio-astronomical observations, as demonstrated here in the cases of NH$_3$D$^+$ and CF$^+$. \\
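As an aside on the CF$^+$ hyperfine result discussed above, the two components follow from the standard spin-rotation Hamiltonian $H = C_I\,\mathbf{I}\cdot\mathbf{J}$. The following sketch (textbook first-order formula, not the fitting code used in this work) shows that the derived $C_I$ implies a splitting of about 340~kHz in the $J=1-0$ line:

```python
def spin_rotation_shift(c_i, f, i, j):
    """First-order energy shift c_i * <I.J>, with
    <I.J> = [F(F+1) - I(I+1) - J(J+1)] / 2 (same units as c_i)."""
    return 0.5 * c_i * (f * (f + 1) - i * (i + 1) - j * (j + 1))

C_I = 227e3  # Hz, spin-rotation constant of CF+ derived in this work

# J=1 splits into F=1/2 and F=3/2 for the fluorine spin I=1/2; J=0 is unsplit.
e_f12 = spin_rotation_shift(C_I, 0.5, 0.5, 1)  # -C_I
e_f32 = spin_rotation_shift(C_I, 1.5, 0.5, 1)  # +C_I/2
splitting_hz = e_f32 - e_f12                   # 1.5 * C_I, ~340 kHz
```

With the few-kHz line widths achievable in the cryogenic trap, a splitting of this size is comfortably resolved, consistent with the double-peaked astronomical profile.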
% arXiv:1607.02369 (2016-07)

% source file: 1607/1607.05255_arXiv.txt
We present an analysis of surveys of the inner Solar System for objects that may pose a threat to the Earth. Most of the analysis is based on understanding the capability provided by Sentinel, a concept for an infrared space-based telescope placed in a heliocentric orbit near the distance of Venus. From this analysis, we show that 1) the size range being targeted can affect the survey design, 2) the orbit distribution of the target sample can affect the survey design, 3) the minimum observational arc length during the survey is an important metric of survey performance, and 4) surveys must consider objects as small as $D=15-30$~m to meet the goal of identifying objects that have the potential to cause damage on Earth in the next 100 years. Sentinel will be able to find 50\% of all impactors larger than 40 meters in a 6.5 year survey. The Sentinel mission concept is shown to be as effective as any survey in finding objects larger than $D=140$~m, but is more effective when applied to finding smaller objects on Earth-impacting orbits. Sentinel is also more effective at finding objects of interest for human exploration that benefit from lower propulsion requirements. To explore the interaction between space and ground search programs, we also study a case where Sentinel is combined with the Large Synoptic Survey Telescope and show the benefit of placing a space-based observatory in an orbit that reduces the overlap in search regions with a ground-based telescope. In this case, Sentinel$+$LSST can find more than 70\% of the impactors larger than 40 meters, assuming a 6.5 year lifetime for Sentinel and 10 years for LSST.
Near Earth Asteroids (NEAs) are a population of asteroids that spend at least part of the time in the inner solar system; they are both potential targets for future exploration missions and possible Earth-impact threats. Surveys for NEAs, notably NEAT \citep{pra99}, LINEAR \citep{sto00}, Pan-STARRS \citep{jed03,jed06}, and the Catalina Sky Survey \citep{lar07}, have found over 90\% of NEAs larger than about 1 km in diameter and a total of over 13,000 of all sizes \citep{jed15}. \citet{ted00} and \citet{cel04} presented the idea of space-based infrared survey instruments, and the Earth-orbiting WISE telescope has now demonstrated NEA detection in infrared wavelengths from space, adding about 200 additional NEA discoveries \citep{mai15b}. Because of the power-law distribution of NEA sizes, and following a recent re-examination of the population of small NEAs \citep{bos15}, current estimates show that far more small NEAs remain undiscovered than have been found, perhaps as many as 4 million larger than 30 meters that could cause substantial damage upon impact. \citet{che04} and \citet{ver09} considered the probability of impact of these small objects and demonstrated that impactors are likely to come from a limited range of orbital parameters, providing a sub-population of NEAs of most interest to planetary defense. This interest motivates the drive for better NEA searches for planetary defense. The National Research Council in 2010 recommended that ``surveys should attempt to detect as many 30- to 50-meter objects as possible'' \citep{nrc10}. Such searches will also provide numerous targets that are within reach of available exploration mission launch and rendezvous capabilities. Current surveys, while making clever and effective use of their facilities, are adding only about 2000 new discoveries per year.
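The population figures above reflect the steep cumulative size distribution of NEAs, often approximated as a power law $N(>D)\propto D^{-b}$. The anchor count and slope in the following sketch are illustrative assumptions (not values taken from the cited works), chosen only to show how a modest population of km-sized objects extrapolates to millions at 30~m:

```python
def cumulative_count(d_m, n_ref, d_ref_m, slope):
    """Cumulative number of objects larger than d_m for
    N(>D) = n_ref * (D / d_ref_m)**(-slope)."""
    return n_ref * (d_m / d_ref_m) ** (-slope)

# Assumed anchor: ~1000 NEAs larger than 1 km; illustrative slope b = 2.35.
n_30m = cumulative_count(30.0, 1000.0, 1000.0, 2.35)  # a few million objects
```

A change of a few tenths in the slope moves this extrapolation by large factors, which is why re-examinations of the small-NEA population matter so much for survey design.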
The rate of detection will rise by two orders of magnitude with the 2022 completion of the Large Synoptic Survey Telescope \citep[LSST,][]{ive07,ive14}, which claims the potential to reach 82\% completion on PHAs larger than 140 meters in 10 years. The LSST survey completion rate for smaller objects will be lower, 34\% completion in 10 years down to Tunguska-like 40-meter objects. The Sentinel mission \citep{lu13,rei15} is proposed to search for NEAs from a heliocentric orbit near Venus' distance from the Sun (0.7 Astronomical Units, AU). As will be shown in this work, Sentinel surveys a unique volume of space and produces a complementary set of objects with a discovery rate even higher than that of LSST. The goal of the present work was to understand how an infrared space-based observatory can best be used to extend present-day surveys to a smaller size range for discovery of future impact threats. Working in the thermal infrared provides advantages for detecting NEAs since the target flux depends mostly on its heliocentric distance and much less on its albedo than does the visible flux. This property of an infrared survey provides a complementary aspect when run in parallel with optical surveys. Because of this weak dependence on albedo, infrared flux can more readily be used to deduce the size of the objects. Also, the phase angle dependence of brightness is less for thermal infrared observations than for visible reflectance. Additionally, the background source density is lower in the infrared, reducing difficulties due to confusing objects with background sources. Presented here is an analysis of a series of mission options that include the baseline concept for the Sentinel Mission \citep{lu13,rei15}. There is no attempt here to demonstrate an optimal mission architecture: there are likely to be other designs that can work as well. 
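The albedo dependence of optically derived sizes, which the infrared approach largely avoids, can be illustrated with the standard conversion between absolute magnitude $H$, geometric albedo $p_V$, and diameter, $D = (1329~\mathrm{km}/\sqrt{p_V})\,10^{-H/5}$:

```python
def diameter_km(h_mag, albedo):
    """Asteroid diameter from absolute magnitude and geometric albedo:
    D = (1329 km / sqrt(p_V)) * 10**(-H/5)."""
    return 1329.0 / albedo ** 0.5 * 10.0 ** (-h_mag / 5.0)

# The same optical brightness (H = 22) maps to very different sizes
# depending on the (usually unknown) albedo:
d_bright = diameter_km(22.0, 0.25)  # ~0.11 km for a high-albedo surface
d_dark = diameter_km(22.0, 0.05)    # ~0.24 km for a low-albedo surface
```

An infrared flux measurement, being nearly albedo-independent, collapses most of this factor-of-two size ambiguity.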
The purpose of this work was to investigate Sentinel, assess whether it is sufficient for the task, and probe potentially beneficial variations in the mission design. Our statistical modeling method allows very fast execution and permits testing many different scenarios without requiring true exposure-by-exposure fidelity of a simulated detection dataset. Using this tool, the Sentinel design will be shown to be a very good approach for finding hazardous objects.
We have looked at many variations for Sentinel and find that, for small impactors, the nominal orbit provides a very effective location for surveying; we do not find any options for substantial improvement that do not markedly increase the mission cost -- the aperture of the telescope being the most important factor. In particular, Sentinel can reach 50\% completeness for impactors larger than 40 meters. Observatory location is not critical for any sub-population except for impactors and ARRM targets. In particular, observing from an Earth-neighborhood location is inferior to the interior orbit for objects of interest to planetary defense or human exploration targets. One clear lesson from this work is the need to model performance down to a $D=15-20$~m size range for these two goals. For planetary defense, this conclusion is supported by the fact that most near-term threats come from the smallest objects, and one must reach this small size range before an impact event becomes likely within the next 100 years. Depending on the current completeness level of any population, optimization of a survey could lead to changes in the survey design if only the undiscovered population were considered. If a survey were merely an incremental improvement, this could be a very important consideration. Our survey modeling leads to guidance on an optimized strategy assuming that no objects have yet been found. The real situation for Sentinel lies somewhere between these two cases, but closer to the ``new'' survey end of the spectrum, owing to working at much smaller sizes and in the infrared, where the biases against finding low albedo objects are removed. Modeling the undiscovered sub-populations will be left to future work, if warranted. Working at such small sizes leads to a fundamentally higher discovery rate than is currently taking place.
Higher discovery rates in turn require significant effort for followup, and new surveys must work to provide their own followup as much as possible. In our survey design we also make a strong distinction between detection and cataloging. When modeling survey performance it is also important to track the observational arc that would be obtained on any newly discovered object. There is a minimum arc required to ensure linking against other later observations, a point that is often overlooked. For our survey performance metrics we insist on a minimum arc of 28 days, but this value is more of a guideline than a hard requirement. The minimum required arc needed to claim an object is cataloged might be worth additional study. Object detection is indeed easier in some locations in the sky: the so-called ``sweet spots'' where the probability of detection is at its peak \citep[cf.,][]{che04,ver09}. As shown in Figures \ref{fig-neo-v}--\ref{fig-vimp-e}, the discovery sweet spot depends on the observatory location, as well as on object size and aperture. It is clear from our analysis that the best detections of an object will most likely come from these areas of the sky, but the minimum arc length goal pushes the followup observations into other regions of the sky. For this reason, the field of regard (FOR) of a survey that does its own followup must necessarily cover more sky than just the sweet spots. NEA surveying is a difficult enough task, particularly at the small size range, that cost-effective survey plans need to consider combining efforts from other facilities. This combined survey strategy can be accomplished in many ways. One straightforward example is to fly more than one space observatory, placed at different locations in space. For instance, a set of three Sentinel telescopes in a Venus-like orbit spaced by 120\mydeg\ of longitude would be three times as fast at finding the objects down to $D=15$~m.
This group would also be much more effective at followup and give much longer observational arcs for the resulting object orbit catalog. Such strategies are very powerful but also quite expensive. In the near term, any such space-based facility should work to be complementary to ground-based surveys, represented in our study by LSST, so that both surveys can add the largest number of objects to our NEA catalogs. In the case of Sentinel plus LSST, we find the combined survey will find more than 70\% of asteroid impactors larger than 40 meters. The choice of orbit for a survey telescope is also an important consideration. Our modeling indicates that the combined Sentinel plus LSST survey will produce a catalog of perhaps thousands of possible impactors that will require continued, potentially long-term followup observations to sift the real impactors from close misses. Additionally, this catalog will permit routine prediction of future close passes for continued study, significantly reducing the number of objects that sneak up on the Earth without warning before their first detection. We are poised on the threshold of a huge step forward in NEA surveys, a step that will begin to reveal a more complete picture of the objects in space around us. This step will come from a space-based approach combined with ground-based observations, especially LSST\null. The task of mitigating impacts does not end with this step. Continued surveys will be required for a while to obtain good observational arcs on all discovered objects, as well as to strive for ever better completeness for small objects. This initial cataloging phase will likely require at least two such survey efforts separated in time. After that, followup becomes increasingly targeted and no longer requires all-sky surveys. Nonetheless, protecting the Earth against impactors is a long-term and unending task, and doing so is clearly within our grasp.
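As a rough consistency check on the combined figure, treating the two surveys as statistically independent gives $1-(1-c_1)(1-c_2)$; that the quoted combined completeness ($>70\%$) exceeds this naive estimate is consistent with the benefit, noted above, of choosing an orbit that reduces the overlap in search regions. A minimal sketch:

```python
def combined_completeness(*completenesses):
    """Fraction of objects found by at least one survey,
    assuming statistically independent detections."""
    missed = 1.0
    for c in completenesses:
        missed *= 1.0 - c
    return 1.0 - missed

# Sentinel (~50% of >40 m impactors) combined with LSST (~34%):
naive = combined_completeness(0.50, 0.34)  # ~0.67 under independence
```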
% arXiv:1607.05255 (2016-07)

% source file: 1607/1607.07299_arXiv.txt
This paper describes the Bayesian Technique for Multi-image Analysis (\batman), a novel image-segmentation technique based on Bayesian statistics that characterizes any astronomical dataset containing spatial information and performs a tessellation based on the measurements and errors provided as input. The algorithm iteratively merges spatial elements as long as they are statistically consistent with carrying the same information (i.e. identical signal within the errors). We illustrate its operation and performance with a set of test cases including both synthetic and real Integral-Field Spectroscopic data. The output segmentations adapt to the underlying spatial structure, regardless of its morphology and/or the statistical properties of the noise. The quality of the recovered signal represents an improvement with respect to the input, especially in regions with low signal-to-noise ratio. However, the algorithm may be sensitive to small-scale random fluctuations, and its performance in the presence of spatial gradients is limited. Due to these effects, errors may be underestimated by as much as a factor of two. Our analysis reveals that the algorithm prioritizes conservation of all the statistically-significant information over noise reduction, and that the precise choice of the input data has a crucial impact on the results. Hence, the philosophy of \batman\ is not to be used as a `black box' to improve the signal-to-noise ratio, but as a new approach to characterize spatially-resolved data prior to its analysis. The source code is publicly available at \url{http://astro.ft.uam.es/SELGIFS/BaTMAn}.
\label{sec_introduction} One of the basic problems in astronomical data analysis is the characterization of spatially-resolved information and the measurement of physical properties (and their variation) across extended sources. Such tasks soon became increasingly demanding as new observations provided ever larger amounts of data, prompting the need for a certain level of automation. Nowadays, human supervision is often limited to problematic cases, and many current and forthcoming datasets are so vast that a significant part of the analysis is left entirely to computer programs. One of the first instances of such automation is the identification of (potentially extended and/or blended) sources in photometric images. Since the advent of large extragalactic surveys in the late 1970s, a wide variety of techniques have been developed in order to automatically create source catalogues from astronomical images \citep[e.g.][among many others]{Stetson87, Bertin&Arnouts96, Makovoz&Marleau05, Savage&Oliver07, Molinari+11, Hancock+12, Menshchikov+12}. The widespread use of Integral-Field Spectroscopy (IFS) has literally added a new dimension to the problem, in the sense that spatial and spectral information are combined in order to locate the sources \citep[see e.g.][and references therein, in the context of radio observations of the 21-cm line]{Koribalski12}. Recently, new algorithms working on different types of spatially-resolved data have been developed in order to tackle specific scientific problems beyond source detection. In many cases, the aim is to compute and characterize maps that trace the spatial distribution of a given physical quantity, such as e.g.
the temperature and composition of the hot intracluster medium \citep{Sanders&Fabian01, Diehl&Statler06}, the properties of the stellar population in early-type galaxies \citep{Cappellari&Copin03}, the moments of the velocity distribution along the line of sight \citep{Krajnovic+06}, or the emission of warm ionized gas and {\sc Hii} regions in galaxies \citep{Papaderos+02, HIIexplorer, Pipe3D}. What most of these goals have in common is the challenge of performing a coherent spatial segmentation of an image (or an IFS datacube) as a first step. On the one hand, it is necessary to average the signal over a large region in order to increase the signal-to-noise ratio (hereafter S/N) and carry out meaningful measurements. On the other hand, averaging over too large an area not only leads to a loss in spatial resolution, but may also (in some cases, strongly) bias the results and their physical interpretation if the signal within the chosen region is not homogeneous. As mentioned above, this problem has already been tackled by different authors. They have proposed several schemes, specifically optimized for a wide variety of problems, to divide an image into connected regions in a fully automatic way. Some of them \citep[e.g.][]{Sanders&Fabian01, Cappellari&Copin03, Diehl&Statler06} are based on a Voronoi tessellation, whereas others rely upon the identification of suitable intensity thresholds \citep[e.g.][]{Stetson87, Bertin&Arnouts96, HIIexplorer} or isocontours \citep[e.g.][]{Papaderos+02, Sanders06} at specific wavelengths in order to define physically-motivated regions. More recently, \citet{Pipe3D} proposed an alternative method, also aiming to obtain a tessellation with a target S/N, that imposes `continuity' in the surface brightness (i.e. a maximum contrast within any region) in order to better adapt to the morphology of the data.
This work takes another step in this direction, presenting an alternative approach that does not aim to reach a specific S/N but rather to identify spatial regions where the signal is statistically consistent with being constant within the provided errors: if two regions carry the same information, it will always be advantageous to merge them in order to (further) increase the signal-to-noise ratio; if they are `different' (do not carry the same information), they should be kept separate in order not to introduce artificial biases. Such a prescription preserves the information contained in the input dataset, and it imposes no condition on the shape of the tessellation. It is extremely general, and it can be applied to any kind of spatially-resolved data. We describe the mathematical basis of the method in Section~\ref{sec_idea}, while the details of the algorithm and its implementation are discussed in Section~\ref{sec_code}. Section~\ref{sec_data} proposes a set of benchmark problems, and the analysis of the results is discussed in Section~\ref{sec_results}. Our main conclusions are summarized in Section~\ref{sec_conclusions}.
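The merging prescription sketched above can be caricatured in one dimension: neighbouring elements are absorbed into a region while they remain consistent with its inverse-variance-weighted mean, and each final region is assigned that mean and its formal error. This is a deliberately simplified greedy sketch with a fixed $n$-sigma threshold, not the Bayesian evidence criterion actually used by \batman:

```python
def merge_consistent(values, errors, threshold=2.0):
    """Greedy 1-D segmentation: absorb the next element into the current region
    while it agrees with the region's weighted mean within `threshold` sigma.
    Returns a list of (weighted_mean, weighted_error) tuples, one per region."""
    regions = []
    w_sum = 1.0 / errors[0] ** 2          # sum of inverse variances
    m_sum = values[0] / errors[0] ** 2    # weighted sum of measurements
    for v, e in zip(values[1:], errors[1:]):
        mean = m_sum / w_sum
        sigma = (1.0 / w_sum + e * e) ** 0.5  # combined uncertainty of the comparison
        if abs(v - mean) < threshold * sigma:
            w_sum += 1.0 / (e * e)
            m_sum += v / (e * e)
        else:                              # significant difference: close the region
            regions.append((mean, w_sum ** -0.5))
            w_sum = 1.0 / (e * e)
            m_sum = v / (e * e)
    regions.append((m_sum / w_sum, w_sum ** -0.5))
    return regions

# Two flat plateaus with a sharp step: the step is kept, the plateaus are merged.
segments = merge_consistent([1.0, 1.2, 0.9, 5.0, 5.1, 4.9], [0.3] * 6)
```

Even this toy version shows the intended behaviour: compatible elements are pooled to reduce noise, while a statistically significant discontinuity is preserved rather than smoothed away.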
\label{sec_conclusions} This article describes the Bayesian Technique for Multi-image Analysis (\batman), a new segmentation algorithm designed to simultaneously characterize and coherently tessellate many layers of a given multi-image, which we define as a dataset containing two regularly-sampled spatial dimensions and an arbitrary number $n_\lambda$ of `spectral' layers, such as e.g. Integral-Field Spectroscopic (IFS) data. \batman's tessellation attempts to identify spatially-connected regions that are statistically consistent with the underlying signal being constant, given the information (measurements and corresponding errors) provided in the input dataset. If the difference between any two regions is found to be significant, they are kept separate in order to avoid unnecessary loss of information. It is important to note that these considerations are independent of the signal-to-noise ratio: if two regions carry the same information (have compatible signal within the errors), they should always be merged together; if they do not, it may be completely unphysical to average over them, and \batman\ will keep them separate. In order to test the performance of the algorithm and provide some guidance to future users, we have created a set of test cases that comprises both synthetic and real data, analysed in different ways. According to the results of these tests, we conclude that: \begin{enumerate} \item The output tessellation depends on the precise choice of the input data set, and therefore it is of paramount importance that the user devote some time to investigating the information that should be considered relevant as a preliminary step of any scientific analysis. \item \batman\ adapts to the spatial structures present in the data for a wide variety of morphologies, regardless of the statistical properties of the noise. By construction, gradients pose a significant challenge to the algorithm.
When they are present, the output tessellation tends to trace the isocontour lines. \item The exact number and size of the regions are mainly set by the local signal-to-noise ratio. The higher the S/N of the data, the easier it is to distinguish whether two spaxels/regions are different. When the S/N is low, many spaxels may be consistent with having a similar signal, and only a small number of large, independent regions can be identified. \item The precise value adopted for the combined prior parameter $K$, the only free parameter of the algorithm, also affects the number and size of the regions in the output tessellation by setting the end of the iterative procedure. Lower values of $K$ result in more iterations and therefore a smaller number of regions. This may have a substantial impact on the number of (potentially spurious) structures identified on the smallest scales, particularly when $n_\lambda=1$ (`monochromatic' mode). \item In the proposed formalism, the expected values \mm\ of the posterior probability distribution of the signal within each region are given by the inverse variance-weighted average~\eqref{eq_mean}. Our synthetic tests show that these values provide a good representation of the true signal. \item The formal errors \sg, given by expression~\eqref{eq_variance}, are indicative of the true uncertainties, but they may underestimate them by as much as a factor of the order of two (e.g. in the presence of gradients and/or spurious regions). \item Our analysis of real astronomical data shows that the segmentation of IFS datacubes is a complex task, involving many issues that are specific to the scientific problem under study. Our tests based on \ngc, focusing on the measurement of several Balmer lines, suggest that \batman\ may be most helpful in the low S/N regime.
The algorithm is capable of recovering the underlying structure of the object even in the most difficult case (\hd), especially when it is applied directly to the CALIFA data (and not so much when the \shifu\ measurements are taken as input). The reduction of the noise with respect to the spaxel-by-spaxel (no binning) approach is clearly visible in the galaxy outskirts. \end{enumerate} As a final remark, let us stress once again that, in contrast to many other segmentation algorithms, \batman\ aims to preserve all the spatial information contained in the original data, as long as it is considered statistically significant. Such a philosophy represents a new (and, in the opinion of the authors, much needed) approach to the analysis of astronomical images with the advent of the vast amount and spatial resolution of IFS data to come.
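As a minimal numerical illustration of the statistics quoted above, the inverse variance-weighted mean of Eq.~\eqref{eq_mean} and its formal error, Eq.~\eqref{eq_variance}, can be sketched in Python. The function names are hypothetical, and the naive $n\sigma$ compatibility check below is only a simplified stand-in for \batman's actual Bayesian merge criterion:

```python
import math

def combine(values, errors):
    """Inverse variance-weighted mean and its formal error for
    measurements with independent Gaussian uncertainties."""
    weights = [1.0 / e**2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    sigma = math.sqrt(1.0 / wsum)  # formal error of the combined value
    return mean, sigma

def compatible(v1, e1, v2, e2, nsigma=3.0):
    """Crude stand-in for a merge criterion: are two region signals
    consistent within their combined uncertainty?"""
    return abs(v1 - v2) < nsigma * math.sqrt(e1**2 + e2**2)

# two spaxels with equal errors: the combined value is the plain average
print(combine([1.0, 3.0], [1.0, 1.0]))
```

Note that, as in the text, the decision to merge depends on the measured values and errors, not on the S/N alone: two low-S/N spaxels with compatible signals merge, while two high-S/N spaxels with discrepant signals do not.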
% arXiv:1607.07299 (July 2016)
% 1607.02938_arXiv.txt
We have developed a novel two-layer anti-reflection (AR) coating method for large-diameter infrared (IR) filters made of alumina, for use at cryogenic temperatures in millimeter wave measurements. Thermally-sprayed mullite and polyimide foam (Skybond Foam) are used as the AR materials. An advantage of the Skybond Foam is that its index of refraction can be chosen between 1.1 and 1.7 by changing the filling factor. The combination with mullite is suitable for wide-band millimeter wave measurements with sufficient IR cutoff capability. We present the material properties, the fabrication of a large-diameter IR filter made of alumina with this AR coating method, and characterizations at cryogenic temperatures. This technology can be applied to a low-temperature receiver system with a large-diameter focal plane for next-generation cosmic microwave background (CMB) polarization measurements, such as POLARBEAR-2 (PB-2).
\label{sec:Introduction} As astronomical and cosmological observations always demand higher sensitivities, the importance of large focal planes with detectors at cryogenic temperatures ever increases. The observation of the cosmic microwave background (CMB) polarization, which is one of the best probes for studying the early universe~\cite{Kamionkowski, Seljak}, is an outstanding example where such cryogenic large detector arrays are needed. Recently, a few CMB polarization experiments have reported the detection of the faint sub-degree-scale B-mode signal, the odd-parity mode of the CMB polarization pattern~\cite{Seljak}. The use of a large array with of order 10,000 polarization detectors, such as the focal planes of POLARBEAR-2 (PB-2)~\cite{POLARBEAR2}, SPT-3G~\cite{SPT3G}, and BICEP-3~\cite{BICEP3}, is essential for characterizing the B-mode power spectrum. The need for a large focal plane leads to a challenge in the thermal design, as a large thermal load is expected from a large optical window~\cite{Hanany:2012vj}. In order to keep the detector sensitivity high enough, efficient infrared (IR) filters should be considered. One promising solution to this problem is to introduce an alumina plate as an IR filter with high thermal conductivity~\cite{Inoue1,Inoue2,POLARBEAR2,SPT3G,BICEP3}. The alumina filter absorbs the incident IR emission efficiently. The absorbed power is conducted from the central region of the filter to the edge, which is thermally connected to the cryogenic stage. The thermal conductance of alumina in the temperature range between 50 and 100~K is very high; three orders of magnitude larger than those of conventional plastic filters, such as PTFE, nylon, and black polyethylene. The temperature gradient and optical loading of the alumina plate are thus very small even with a large diameter, which makes alumina ideal as an IR filter material.
A remaining technological challenge in the use of alumina IR filters is to establish AR coatings with a large frequency coverage. Alumina is known to be highly reflective at millimeter wavelengths. We thus need an AR coating on each surface of the alumina disc. An AR coating on such a large surface is easily peeled off owing to thermal shock at cold temperatures. The aforementioned next-generation CMB projects demand multi-layer AR coatings on large IR filters with a typical diameter of 500~mm. In previous studies, the POLARBEAR group attempted to develop a two-layer AR coating method with epoxy on an alumina surface with a diameter of 50~mm~\cite{Inoue1,Inoue2,Rosen:2013zza}. However, the epoxy and alumina cracked during thermal cycling. To avoid the cracking, stress-relief grooves were adopted to reduce the stress between the alumina and the epoxy layer~\cite{Inoue1}. This technology was also used by the BICEP-3 group~\cite{BICEP3}, which succeeded in making a single-layer AR coating with epoxy for a diameter of about $500~\mathrm{mm}$. However, this method is expensive. Also, it is not easy to maintain the thickness uniformity of the AR layers on the thin alumina plate. Alternatively, the ACTpol group~\cite{ACTpolAR} succeeded in making two- and three-layer AR coatings on silicon lenses, whose diameters were approximately 300 mm, using the technology of subwavelength gratings (SWG). It is theoretically possible to apply this technology to an alumina surface; however, it is difficult because the dicing blade is subject to wear, so that the groove pitch and depth change. It is also difficult to make a large ingot of silicon, and machining the material is very expensive. Yet another method, studied for the SPIDER project, uses a polyimide sheet as an AR coating on sapphire with a diameter of about 250~mm~\cite{Sean}.
The SPIDER method is very easy to perform and less expensive than the methods with epoxy or SWG. Finally, the first application of thermally sprayed mullite as an AR coating was made on a 50 mm diameter alumina disc~\cite{Toki:2013}. It is also possible to tune the dielectric constant of a thermally sprayed layer by spraying microspheres and alumina powder with different ratios. A three-layer thermally sprayed broadband AR coating that achieves over 100\% fractional bandwidth has been demonstrated on a 50 mm diameter alumina disc~\cite{Oliver}. In this study, we newly employ a polyimide foam coating on top of the mullite coating described above to establish a two-layer AR coating method. We apply this technique to a thin, large alumina disc used as an IR filter: we fabricated a 2 mm thick, 460 mm diameter alumina filter and demonstrated the performance of the new AR coating. Our development focuses on the use in future CMB measurements, but the method can also be used for other applications. In Section~\ref{sec:FilterDesign} we describe the design of a large-diameter alumina IR filter with our new AR coating method. Section~\ref{sec:MaterialProperties} explains our measurements of the material properties. The fabrication process is detailed in Section~\ref{sec:Fabrication}. Section~\ref{sec:Characterization} shows our characterization of the AR coatings on a large-diameter alumina IR filter. We discuss our results in Section~\ref{Disc} and give conclusions in Section~\ref{sec:Conclusion}.
\label{sec:Conclusion} We have newly introduced polyimide foam (Skybond Foam) as an AR coating material. We show that the IOR of Skybond Foam can be controlled between 1.1 and 1.7 by changing the filling factor. The tunable IOR allows us to design a multi-layer AR coating much more easily. We have made a two-layer AR coating, with thermally-sprayed mullite as the first layer and Skybond Foam as the second layer, on a 2 mm thick alumina plate 460 mm in diameter. We have confirmed that the sample uniformity in IOR and thickness is satisfactory for 150 GHz measurements. We have also confirmed that the AR coating is robust against thermal cycles. The measured transmittances of this filter at 81 K are 95.8~\% at 95 GHz and 95.9~\% at 150 GHz. The measured 3~dB cut-off frequency is 650 GHz at 19 K. These results satisfy the requirements on IR filters for next-generation CMB polarization measurements, where a large focal plane and a large window are needed.
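The effect of a two-layer quarter-wave AR coating can be sketched with the standard characteristic-matrix method for a lossless dielectric stack at normal incidence. The numbers below are assumptions for illustration only: representative indices (alumina $\approx 3.1$, mullite $\approx 2.5$, foam $\approx 1.4$, within the quoted 1.1--1.7 tunable range) and an assumed 120 GHz quarter-wave design frequency, not the measured design values of this paper:

```python
import cmath, math

C = 299792458.0  # speed of light, m/s

def layer_matrix(n, d, f):
    """Characteristic matrix of a lossless dielectric layer at normal incidence."""
    delta = 2 * math.pi * n * d * f / C
    return [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
            [1j * n * cmath.sin(delta), cmath.cos(delta)]]

def reflectance(layers, n_sub, f, n0=1.0):
    """Power reflectance of one coated surface; layers = [(n, d), ...]
    ordered from the vacuum side towards the substrate."""
    M = [[1, 0], [0, 1]]
    for n, d in layers:
        L = layer_matrix(n, d, f)
        M = [[M[0][0]*L[0][0] + M[0][1]*L[1][0], M[0][0]*L[0][1] + M[0][1]*L[1][1]],
             [M[1][0]*L[0][0] + M[1][1]*L[1][0], M[1][0]*L[0][1] + M[1][1]*L[1][1]]]
    B = M[0][0] + M[0][1] * n_sub
    Cc = M[1][0] + M[1][1] * n_sub
    r = (n0 * B - Cc) / (n0 * B + Cc)
    return abs(r)**2

f0 = 120e9  # assumed quarter-wave design frequency
stack = [(1.4, C / (4 * 1.4 * f0)),   # Skybond Foam, outer layer (assumed n)
         (2.5, C / (4 * 2.5 * f0))]   # thermally sprayed mullite (assumed n)
print(reflectance(stack, 3.1, f0))    # near zero at the design frequency
print(reflectance([], 3.1, f0))       # bare alumina: ~26 per cent per surface
```

With these indices the classic two-layer quarter-wave condition $n_1^2\,n_{\rm s} \approx n_2^2$ is nearly satisfied, so the single-surface reflectance drops from $\sim$26 per cent (bare alumina) to well below 1 per cent at the design frequency.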
% arXiv:1607.02938 (July 2016)
% 1607.07206_arXiv.txt
Three approaches are considered to solve the equation which describes the time-dependent diffusive shock acceleration of test particles at non-relativistic shocks. First, the solution of Drury (1983) for the particle distribution function at the shock is generalized to any relation between the acceleration time-scales upstream and downstream and to a time-dependent injection efficiency. Three alternative solutions for the spatial dependence of the distribution function are derived. Then, two other approaches to solve the time-dependent equation are presented, one of which does not require the Laplace transform. Finally, our more general solution is discussed, with particular attention to time-dependent injection in supernova remnants. It is shown that, compared to the case where the upstream acceleration time-scale dominates, the maximum momentum of accelerated particles shifts toward smaller momenta as the downstream acceleration time-scale increases. The time-dependent injection affects the shape of the particle spectrum. In particular, i) the power-law index is not solely determined by the shock compression, in contrast to the stationary solution; ii) the larger the injection efficiency during the first decades after the supernova explosion, the harder the particle spectrum around the high-energy cutoff at later times. This is important, in particular, for the interpretation of radio and gamma-ray observations of supernova remnants, as demonstrated with a number of examples.
How does a stellar object become a supernova remnant (SNR) after the supernova event? Some hints come from observations of supernovae in other galaxies \cite[e.g.][]{Weiler-et-al-1986} but -- since they are far away -- they are not very informative for understanding how young SNRs acquire their appearance and how the properties of the explosion and the ambient medium affect their evolution. Different time and length scales need to be treated in order to model the transformation of an SN into an SNR, which creates difficulties for numerical simulations. In recent years, however, a number of studies have been performed in order to understand the processes involved. They adopt either one-dimensional simulations \citep[e.g.][]{Badenes-et-al-2008,Patnaude-et-al-2015} or -- quite recently -- three-dimensional models \citep{Orlando-et-al-2015,Orlando-et-al-2016a}. In order to relate an SNR model to observations, one has to simulate its emission. Radiation from highly energetic particles is an important component of such a model. The particle spectrum has to be known in order to simulate their emission. The {\em non-stationary} solution of the diffusion-convection equation has to be used in order to describe the distribution function $f(t,x,p)$ of these particles in young SNRs, because the acceleration is not yet in the steady-state regime. There is evidence from numerical simulations that the particle spectrum may not be stationary even in rather old SNRs \citep{Brose-et-al-2016}. There is a well-known approach \citep{Drury-1983,Forman-Drury-1983} to derive the time-dependent solution and an expression for the acceleration time.
The original formulation has been developed i) for spatially constant flow velocities $u$ and diffusion coefficients $D$ before and after the shock, ii) for a momentum dependence of the diffusion coefficient of the form $D\propto p^{\alpha}$ with constant index $\alpha$, iii) for impulsive or constant particle injection, iv) for monoenergetic injection of particles at the shock front, and v) for the case when the acceleration time upstream $t_1$ is much larger than that downstream $t_2$. \citet{Toptygin-1980} was the first to consider the time-dependent acceleration and gave a solution for $t_1=t_2$ and a diffusion coefficient independent of the particle momentum $p$. \citet{Drury-1991} presented a way to generalize his own solution to include also the spatial dependence of the flow velocity $u(x)$ and the diffusion coefficient $D(x)$. \citet{Ostr-Schlick-1996} found a generalization of the \citet{Toptygin-1980} solution ($t_1=t_2$ and momentum-independent $D$) which allows one to consider different $t_1$ and $t_2$. They also obtained the expression for the acceleration time if there are free-escape boundaries upstream and downstream of the shock. \citet{Tang-Chevalier-2015} generalized the \citet{Toptygin-1980} solution to the time evolution of pre-existing seed cosmic rays, i.e. the authors generalized the treatment to the impulsive (at time $t=0$) injection of particles residing in the half-space before the shock and distributed with some spectrum $Q\rs{p}(p)$. An approach to treating the time-dependent non-linear acceleration was developed by \citet{Blasi-et-al-2007}, who did not obtain the solution but made important progress in the derivation of the acceleration time for the case when the particle back-reaction on the flow is important. In the present paper, Drury's test-particle approach is extended to more general situations.
Namely, a few different representations for $f(t,x,p)$ are obtained; a way to avoid the $t_1\gg t_2$ limitation in deriving the distribution function at the shock $f\rs{o}(t,p)$ is presented; a solution is written in a way that allows for any time variation of the injection efficiency; and the possibility for the diffusion coefficient to have other than a power-law dependence on momentum is considered. The structure of the paper is as follows. The task and main assumptions are stated in Sect.~\ref{kineq2:kineqbase}. The three different approaches to solve the non-stationary equation are presented in Sections~\ref{kineq2:kineqI}, \ref{kineq2:kineqII} and \ref{kineq2:kineqIII}, respectively. Then, in Sect.~\ref{kineq2:discussion}, we demonstrate when and to what extent our generalized solution differs from the original Drury formulation (Sect.~\ref{kineq2:pmax00}) and discuss the implications of a time-dependent injection efficiency for the particle spectrum (Sects.~\ref{kineq2:injteffect}, \ref{kineq2:pinter}). Sect.~\ref{kineq2:conclusions} concludes. Some mathematical identities used in the present paper are listed in Appendix~\ref{kineq2:app1}.
\label{kineq2:conclusions} In the present paper, we generalized the solution of \citet{Drury-1983,Forman-Drury-1983} which describes the time-dependent diffusive shock acceleration of test particles. Three representations of the spatial variation of the particle distribution function $f(t,x,p)$ are presented. Namely, Eq.~(\ref{kineq2:gensolapII}) gives $f(t,x,p)$ through $f(t,x=0,p)$, Eq.~(\ref{kineq2:solfx2}) yields $f(t,x,p)$ versus $f(t=\infty,x=0,p)$, and Eq.~(\ref{kineq2:solfx2b}) relates $f(t,x,p)$ to $f(t=\infty,x,p)$. Our generalized solution (\ref{kineq2:gensol}) for the distribution function at the shock $f\rs{o}(t,p)\equiv f(t,x=0,p)$ is valid for any ratio between the acceleration time-scales upstream and downstream of the shock and allows one to consider a time variation of the injection efficiency. It is shown that, if the ratio $t_1/t_2$ decreases (i.e. the significance of the downstream acceleration time grows), then the particle maximum momentum is smaller compared to $p\rs{max}$ calculated under the assumption $t_1 \gg t_2$. The reason is visible from Eq.~(\ref{kineq2:highplimit4a}). Namely, $p\rs{max}$ is determined by the ratio between the length-scale of the shock motion and the length-scale of the diffusion: the larger the diffusion length-scale, the smaller the maximum momentum. However, if the ratio $t_1/t_2$ is larger than a few, then the simpler expression (\ref{kineq2:solfTPQ}) may be used for the particle distribution function $f\rs{o}(t,p)$, with $\varphi\rs{o}$ given by (\ref{kineq2:t1phi}). If, in addition, the injection is continuous and constant ($Q\rs{t}=1$), then our generalized solution becomes the same as in \citet{Drury-1983,Forman-Drury-1983}. The time dependence of the injection efficiency is an important factor in the formation of the shape of the particle spectrum at all momenta. The high-energy end of the accelerated particle spectrum is formed by particles injected at the very beginning.
Therefore, the temporal evolution of the injection, especially during the first decades after the supernova explosion, does affect the non-thermal spectra of young SNRs and has to be considered in the interpretation of X-ray and gamma-ray data. The stationary solution of the shock particle acceleration predicts that the power-law index of the cosmic ray distribution $s\rs{f}$ is determined by the shock compression only. In contrast, in young SNRs where the acceleration is presumably not steady-state, this index (let us call it $s\rs{t}$ to distinguish it from the stationary index $s\rs{f}$) depends also on the indices $\alpha$ and $\beta$ in the approximate expressions for the diffusion coefficient $D\propto p^{\alpha}$ and for the temporal evolution of the injection efficiency $Q\rs{t}\propto t^{\beta}$. Namely, it is $s\rs{t}\approx s\rs{f}+\alpha\beta$. This property of the time-dependent solution could be responsible for the deviation of the observed radio index from the classical value $0.5$ in some young SNRs. Since the acceleration times for electrons emitting at radio frequencies are very small, the observed slopes of the radio spectra could reflect the current evolution of the injection in SNRs.
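The index relation $s\rs{t}\approx s\rs{f}+\alpha\beta$ quoted above can be illustrated numerically, combining it with the standard test-particle result $s\rs{f}=3r/(r-1)$ for shock compression ratio $r$ and the synchrotron radio index $(s-3)/2$ implied by $f(p)\propto p^{-s}$. The values $\alpha=1$ (Bohm-like diffusion) and $\beta=0.2$ below are purely illustrative, not values derived in the paper:

```python
def s_stationary(r):
    """Test-particle DSA index of f(p) ~ p^-s for shock compression r."""
    return 3.0 * r / (r - 1.0)

def s_timedep(r, alpha, beta):
    """Approximate index with time-dependent injection Q_t ~ t^beta and
    D ~ p^alpha, using the relation s_t ~ s_f + alpha*beta from the text."""
    return s_stationary(r) + alpha * beta

def radio_index(s):
    """Synchrotron spectral index implied by f(p) ~ p^-s."""
    return (s - 3.0) / 2.0

# strong shock, r = 4: the classical f ~ p^-4 and radio index 0.5
print(s_stationary(4.0), radio_index(s_stationary(4.0)))   # 4.0 0.5
# illustrative alpha = 1, beta = 0.2: steeper spectrum, radio index 0.6
print(s_timedep(4.0, 1.0, 0.2), radio_index(s_timedep(4.0, 1.0, 0.2)))
```

This makes explicit the point of the paragraph: a modest time dependence of the injection efficiency shifts the radio index away from the classical $0.5$ even at the canonical strong-shock compression $r=4$.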
% arXiv:1607.07206 (July 2016)
% 1607.03049_arXiv.txt
{} {We investigate the photometric modulation induced by magnetic activity cycles and study the relationship between rotation period and activity cycle(s) in late-type (FGKM) stars.} {We analysed light curves, spanning up to nine years, of 125 nearby stars provided by the All Sky Automated Survey (ASAS). The sample is mainly composed of low-activity, main-sequence late-A to mid-M-type stars. We performed a search for short-term (days) and long-term (years) periodic variations in the photometry. We modelled the light curves with combinations of sinusoids to measure the properties of these periodic signals. To provide a better statistical interpretation of our results, we complement our new results with results from previous similar works.} {We have been able to measure long-term photometric cycles of 47 stars, out of which 39 have been derived with false alarm probabilities (FAP) of less than 0.1 per cent. Rotational modulation was also detected and rotational periods were measured in 36 stars. For 28 stars we have simultaneous measurements of activity cycles and rotational periods, 17 of which are M-type stars. We measured both photometric amplitudes and periods from sinusoidal fits. The measured cycle periods range from 2 to 14 yr, with photometric amplitudes in the range of 5-20 mmag. We found that the distribution of cycle lengths for the different spectral types is similar, with a mean cycle of 9.5 years for F-type stars, 6.7 years for G-type stars, 8.5 years for K-type stars, 6.0 years for early M-type stars, and 7.1 years for mid-M-type stars. On the other hand, the distribution of rotation periods is completely different, trending towards longer periods for later-type stars, from a mean rotation of 8.6 days for F-type stars to 85.4 days for mid-M-type stars. The amplitudes induced by magnetic cycles and rotation show a clear correlation. A trend of photometric amplitudes with rotation period is also outlined in the data.
The amplitudes of the photometric variability induced by activity cycles of main-sequence GK stars are lower than those of early- and mid-M dwarfs for a given activity index. Using spectroscopic data, we also provide an update of the empirical relationship between the level of chromospheric activity, as given by $\log_{10}R'_{HK}$, and the rotation periods.} {}
It is widely recognised that starspots in late-type dwarf stars lead to periodic light variations associated with the rotation of the star \citep{Kron1947}. Starspots trace magnetic flux tube emergence and provide valuable information on the forces acting on flux tubes and on photospheric motions, both important agents in dynamo theory (Parker 1955, Steenbeck et al. 1966, Bonanno et al. 2002). Rotation plays a crucial role in the generation of stellar activity \citep{Skumanich1972}. This becomes evident from the strong correlation of magnetic activity indicators with rotation periods \citep{Pallavicini1981, WalterBowyer1981, Vaughan1981, Middelkoop1981, Mekkaden1985, Vilhu1984, SimonFekel1987, Drake1989, Montes2004, Dumusque2011, Dumusque2012, Masca2015}. Stellar rotation coupled with convective motions generates strong magnetic fields in the stellar interior and produces different magnetic phenomena, including starspots in the photosphere. Large spotted areas consist of groups of small spots whose lifetime is not always easy to estimate, but the main structure can survive for many rotations, causing the coherent brightness variations that we can measure \citep{HallHenry1994}. In solar-like main-sequence stars the light modulations associated with rotation are of the order of a few per cent (Dorren and Guinan 1982, Radick et al. 1983), while in young fast rotating stars these modulations can be significantly larger (e.g. Frasca et al. 2011). Starspot-induced light modulation was also proposed for M dwarfs decades ago \citep{Chugainov1971} and more recently investigated by, for example, \citet{Irwin2011}, \citet{Kiraga2012} and \citet{West2015}. Long-term magnetic activity similar to that of the Sun is also observed in stars with external convection envelopes \citep{Baliunas1985, Radick1990, Baliunas1996, Strassmeier1997, Lovis2011, Savanov2012, Robertson2013}.
Photometric and spectroscopic time series observations over decades have revealed stellar cycles similar to the 11 yr sunspot cycle. In some active stars, multiple cycles are observed \citep{BerdyuginaTuominen1998,BerdyuginaJarvinen2005}. It is possible to distinguish between cycles that are responsible for the overall oscillation of the global activity level (similar to the 11 yr solar cycle) and cycles that are responsible for the spatial rearrangement of the active regions (flip-flop cycles) at a given activity level, such as the 3.7 yr cycle in sunspots \citep{BerdyuginaUsoskin2003, Moss2004}. The correct understanding of the different types of stellar variability, their relationships, and their link to stellar parameters is key to properly understanding the behaviour of the stellar dynamo and its dependence on stellar mass. While extensive work has been conducted on FGK stars over many decades, M dwarfs have not received as much attention, with only a few tens of long-term activity cycles reported in the literature \citep{Robertson2013, GomesdaSilva2012}. Understanding the full frame of stellar variability in stars with a low level of activity is also crucial for exoplanet surveys. Modern spectrographs can now reach sub-m\,s$^{-1}$ precision in radial velocity measurements \citep{Pepe2011}, and next-generation instrumentation is expected to reach a precision of a few cm\,s$^{-1}$ \citep{Pepe2013}. At such a precision level, activity-induced signals in the radial velocity curves become a very important limiting factor in the search for Earth-like planets. Activity-induced signals on timescales of days, associated with rotation, and on timescales of years, associated with magnetic cycles, are two of the most prominent sources of induced radial velocity signals \citep{Dumusque2011}.
Measuring and understanding this short-term and long-term variability in different types of stars and the associated effects on radial velocity curves, and being able to predict the behaviour of a star, are required to disentangle stellar-induced signals from Keplerian signals. Ground-based automatic photometric telescopes have been running for decades, providing the photometric precision and time coverage to explore rotation periods and activity cycles for sufficiently bright stars with a low activity level. In this work, using the long-term series of photometric data available from the All Sky Automated Survey (ASAS), we attempt the determination of rotational periods and long-term activity cycles of a sample of G to mid-M stars, with emphasis on the less studied low-activity M dwarfs. We investigate the relationships between the level of magnetic activity derived from Ca II H\&K, and the rotation and activity cycle periods derived from the photometric series. Several studies have taken advantage of the excellent quality of the Kepler space telescope light curves to determine activity cycles and rotation periods, for example \citet{Vida2014}; however, the limited time span of the observations restricts these studies to cycles of only a few years. Here we investigate the presence of significantly longer activity cycles in our sample of stars.
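The basic step of the analysis described above -- measuring the amplitude of a periodic modulation at a trial period from a light curve -- can be sketched as a least-squares sinusoid fit. The sketch below uses a discrete Fourier projection, which coincides with the least-squares fit for evenly sampled data spanning whole cycles; it is a simplified stand-in for the sinusoid-fitting/periodogram machinery actually used in the paper, and the data are synthetic:

```python
import math

def sine_amplitude(t, y, period):
    """Amplitude of a sinusoid at a trial period via Fourier projection
    (equals the least-squares fit for even sampling over whole cycles)."""
    n = len(t)
    w = 2 * math.pi / period
    a = 2.0 / n * sum(yi * math.sin(w * ti) for ti, yi in zip(t, y))
    b = 2.0 / n * sum(yi * math.cos(w * ti) for ti, yi in zip(t, y))
    return math.hypot(a, b)

# synthetic light curve: 10 mmag rotational modulation with a 10-day period
t = [0.1 * k for k in range(1000)]                       # 100 days, even sampling
y = [0.010 * math.sin(2 * math.pi * ti / 10.0) for ti in t]
print(sine_amplitude(t, y, 10.0))   # recovers ~0.010 mag at the true period
print(sine_amplitude(t, y, 3.0))    # far from the true period: ~0
```

Scanning the trial period over a grid of values and picking the strongest response is, in essence, a periodogram; real ASAS light curves additionally require handling of uneven sampling and noise-based false alarm probabilities.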
Our analysis of variability in the photometric time series provided a collection of magnetic cycles, rotation periods, and chromospheric activity levels for 55 stars, out of which 34 are low-activity M-type stars, for which only a few tens of magnetic cycles are reported in the literature. We measured magnetic cycles for 5 G-type stars, for which we found a mean cycle length of $9.0$ yr with a dispersion of $4.9$ yr. We found a mean cycle length of $6.7$ yr with a dispersion of 2 yr for 12 K-type stars. We measured a mean cycle length of $7.4$ yr with a dispersion of $3.0$ yr for 22 early M-type stars, and we found a mean cycle length of $7.56$ yr with a dispersion of $2.6$ yr for 9 mid-M-type stars. We note that we might be mixing some flip-flop cycles with the global cycles, but in most cases we do not find multiple cycles, making it difficult to distinguish between the two types. Subsequently, we measured the rotation periods of 9 G-type stars, with a mean rotation period of $27.9$ days and a dispersion of $3.9$ days. For 13 K-type stars we measured a typical rotation of $20.3$ days with a dispersion of $16.4$ days. For 20 early M-type stars, we measured an average rotation period of $35.7$ days with a dispersion of $23.2$ days, and an average rotation period of $78.1$ days with a dispersion of $54.8$ days for 9 mid-M-type stars. In order to put these results in a broader context, we include in the discussion and plots other FGKM stars with known cycles and rotation periods selected from \citet{Noyes1984}, \citet{Baliunas1995}, \citet{Lovis2011}, \citet{Robertson2013}, and \citet{Masca2015}. In total we deal with more than 150 stars, with similar numbers of G-, K-, and M-type stars and far fewer F-type stars, to study the distribution of cycle lengths and rotation periods for the different spectral types and the activity-rotation relationships.
\subsection{Cycle length distribution} Figure~\ref{cic_histo} shows the distribution histogram of cycles by length and spectral type. We find that, like G-type stars, early M-type stars peak at the 2-4 year bins, whereas K-type stars peak at the 6 year bin. The double peak seen in the distribution might reveal information about a peak in global cycles (10 yr) and one in flip-flop cycles (6 yr). There are not enough detections to see any particular behaviour for F-type and mid-M-type stars. Table~\ref{cycle_stats} shows the main statistics of the typical cycle for each spectral type.
\begin{table}
\begin{center}
\caption{Statistics of the length of known cycles\label{tab:cycle_stats}}
\begin{tabular}{lllll}
\hline
Sp. Type & N & Mean length & Median length & $\sigma$ \\
 & & (yr) & (yr) & (yr) \\
\hline
F & 10 & 9.5 & 7.4 & 5.3 \\
G & 55 & 6.7 & 6.0 & 3.6 \\
K & 51 & 8.5 & 7.6 & 3.6 \\
Early M & 47 & 6.0 & 5.2 & 2.9 \\
Mid M & 10 & 7.1 & 6.8 & 2.7 \\
\hline
\label{cycle_stats}
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\includegraphics[width=9.0cm]{cyc_histo}
\caption{Distribution of cycle lengths. Grey filled columns show the global distribution, while coloured lines show the individual distributions.}
\label{cic_histo}
\end{figure}
\subsection{Rotation period distribution} Figure~\ref{rot_histo} shows the distribution of rotation periods and Table~\ref{rot_stats} lists the typical periods and measured scatter. When looking at the rotation periods we find an upper limit of the distribution growing steadily towards longer periods for later spectral types, and saturating at $\sim$ 50 days for almost all spectral types (see Fig.~\ref{rot_histo}). M-type stars, especially mid-Ms, show a larger scatter, going over that saturation limit and reaching periods longer than 150 days. Many of the M-type stars are extremely slow rotators, and this is reflected in their extremely low chromospheric activity levels.
Figure~\ref{bv_RHK} shows the distribution of $\log_{10}R'_{HK}$ against the colour $B-V$. For solar-type stars the lower envelope of the distribution lies around $\sim -5.2$, but M dwarfs reach levels of almost $\sim -6$.
\begin{table}
\begin{center}
\caption{Statistics of rotation periods\label{tab:rot_stats}}
\begin{tabular}{lllll}
\hline
Sp. Type & N & Mean Period & Median Period & $\sigma$ Period \\
 & & (d) & (d) & (d) \\
\hline
F & 25 & 8.6 & 7.0 & 6.2 \\
G & 44 & 19.6 & 18.4 & 11.1 \\
K & 53 & 27.4 & 29.3 & 15.7 \\
Early M & 43 & 36.2 & 33.4 & 29.9 \\
Mid M & 11 & 85.4 & 86.2 & 53.4 \\
\hline
\label{rot_stats}
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\includegraphics[width=9.0cm]{bv_rot}
\caption{Rotation periods vs. $B-V$ colour of stars analysed in this work (filled symbols) and stars from the literature (open symbols).}
\label{rot_histo}
\end{figure}
\begin{figure}
\includegraphics[width=9.0cm]{bv_RHK}
\caption{Chromospheric activity level $\log_{10}R'_{HK}$ against the colour $B-V$ of the stars. Filled symbols show the stars analysed in this work.}
\label{bv_RHK}
\end{figure}
\subsection{Activity-rotation relation} \citet{Masca2015} proposed a rotation-activity relation for late-F- to mid-M-type main-sequence stars with $\log_{10}R'_{HK}$ from $\sim -4.50$ to $\sim -5.85$. The new measurements presented in Tables~\ref{data_sample} and~\ref{Periods}, and the data from \citet{Noyes1984}, \citet{Baliunas1995}, and \citet{Lovis2011}, serve to extend and better define the relationship. Figure~\ref{rhk_period} shows the new measurements along with those from the literature. These measurements are compatible for almost all spectral types and levels of chromospheric activity, F-type stars being the only clear outliers. This speaks of a relation that is more complex than originally proposed.
Combining all data we can extend the relationship to faster rotators with levels of chromospheric activity up to $\log_{10}R'_{HK}$ $\sim -4$, and we are able to give an independent relationship for each spectral type. Figure~\ref{rhk_period_sp} shows the measurements for each individual spectral type. Assuming a relationship such as
\begin{equation}
\log_{10}(P_{rot})= A + B \cdot \log_{10}R'_{HK}
\label{eq_rhk_period}
,\end{equation}
\begin{table*}
\begin{center}
\caption{Parameters for Eq.~\ref{eq_rhk_period}\label{tab:rhk_period_tab}}
\begin{tabular}{lllll}
\hline
Dataset & N & A & B & $\sigma$ Period \\
 & & & & (\%) \\
\hline
G-K-M ($\log_{10}R'_{HK} > -4.5$) & 37 & --10.118 $\pm$ 0.027 & --4.500 $\pm$ 0.006 & 23 \\
G-K-M ($\log_{10}R'_{HK} \leq -4.5$) & 94 & --2.425 $\pm$ 0.001 & --0.791 $\pm$ 0.001 & 19 \\
\\
F & 25 & --3.609 $\pm$ 0.015 & --0.946 $\pm$ 0.003 & 39 \\
G ($\log_{10}R'_{HK} > -4.6$) & 17 & --11.738 $\pm$ 0.052 & --2.841 $\pm$ 0.011 & 17 \\
G ($\log_{10}R'_{HK} \leq -4.6$) & 27 & 0.138 $\pm$ 0.006 & --0.261 $\pm$ 0.002 & 20 \\
K ($\log_{10}R'_{HK} > -4.6$) & 18 & --7.081 $\pm$ 0.030 & --1.838 $\pm$ 0.007 & 8 \\
K ($\log_{10}R'_{HK} \leq -4.6$) & 32 & --1.962 $\pm$ 0.005 & --0.722 $\pm$ 0.002 & 16 \\
M & 38 & --2.490 $\pm$ 0.002 & --0.804 $\pm$ 0.001 & 18 \\
\hline
\label{rhk_period_tab}
\end{tabular}
\end{center}
\end{table*}
where $P_{rot}$ is in days, the typical residuals of the fit are smaller than 23 per cent of the measured periods for a given level of activity for stars from G to mid-M, with residuals smaller than 39 per cent in the case of F-type stars. Table~\ref{rhk_period_tab} shows the coefficients of the best fit for each individual dataset. This relationship provides an estimate of the rotation period of stars with low levels of chromospheric activity.
\begin{figure*}
\includegraphics[width=20.0cm]{rhk_period}
\caption{Rotation period vs. chromospheric activity level $\log_{10}R'_{HK}$.
Filled symbols show the stars analysed in this work. The dashed line shows the best fit to the data, leaving out the F-type stars.} \label{rhk_period} \end{figure*} \begin{figure} \includegraphics[width=9.0cm]{rhk_period_F} \includegraphics[width=9.0cm]{rhk_period_G} \includegraphics[width=9.0cm]{rhk_period_K} \includegraphics[width=9.0cm]{rhk_period_M} \caption{Rotation period vs. chromospheric activity level $\log_{10}R'_{HK}$ for each spectral type. Filled symbols show the stars analysed in this work. The dashed line shows the best fit to the data for each individual dataset.} \label{rhk_period_sp} \end{figure} The original rotation-activity relationship, which was proposed by \citet{Noyes1984} and updated by \citet{Mamajek2008}, used as its observable the Rossby number -- $Ro = P_{rot}/\tau_{c}$, i.e. the rotation period divided by the convective turnover time -- instead of the rotation period alone. The use of the convective turnover time here raises some problems. It can be determined from theoretical models \citep[e.g.][]{Gilman1980,Gilliland1985,Rucinski1986,KimDemarque1996} or empirically \citep{Noyes1984,Stepien1994,Pizzolato2003}. While the values of $\tau_{c}$ are consistent for G-K stars, they diverge badly for M dwarfs. Theoretical models indicate a steep increase of $\tau_{c}$ with decreasing mass, while purely empirical models indicate that $\tau_{c}$ increases with decreasing mass down to 0.8 M$_{\odot}$, but then levels off \citep{Kiraga2007}. The behaviour of $\tau_{c}$ below 0.6 M$_{\odot}$ is uncertain. For the calculation of $\tau_{c}$ we adopt the definition of \citet{Rucinski1986}, because it produces the tightest correlation, i.e.
\begin{equation} \begin{split} \log_{10} (\tau_{c}) = 1.178 - 1.163 x + 0.279 x^2 - 6.14 x^3 \quad (x > 0) \end{split} \label{eq_tau} \end{equation} \begin{equation} \begin{split} \log_{10} (\tau_{c}) = 1.178 - 0.941 x + 0.460 x^2 \quad (x < 0), \end{split} \label{eq_tau2} \end{equation} where $x = 0.65 - (B-V)$. Figure~\ref{rhk_Rossby} shows the distribution of the calculated Rossby numbers against $\log_{10}R'_{HK}$. When presenting our results this way, we find that for solar-type stars the distribution is very similar within the limits studied by \citet{Mamajek2008}, but the scatter increases towards lower levels of activity. On the other hand, F-type and M-type stars do not follow exactly the same relationship, hinting that the mechanism might be more complex than originally assumed or that the estimation of the convective turnover time becomes less reliable the further we get from the Sun, as already noted by \citet{Noyes1984} for stars with $B-V > 1$. Unfortunately there is no reliable calibration for the convective turnover time in the case of M-dwarf stars. Even with the larger scatter, the global behaviour of the data remains similar, with a change of slope when reaching the very active regime ($\log_{10}R'_{HK} \sim -4.4$). Analogously to Eq.~\ref{eq_rhk_period}, we find that the distribution can be described as \begin{equation} \begin{split} Ro = A + B \cdot \log_{10}R'_{HK} \end{split} \label{eq_rhk_Rossby} ,\end{equation} where the typical scatter of the residuals is $\sim$ 28 per cent of the measured $Ro$ for the very active region and $\sim$ 19 per cent for the moderately active region. Table~\ref{rhk_Rossby_tab} shows the parameters of Eq.~\ref{eq_rhk_Rossby}.
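As a worked illustration, the period estimate of Eq.~\ref{eq_rhk_period} and the turnover-time calibration of Eq.~\ref{eq_tau} can be combined to obtain a Rossby number from the colour and the mean activity level alone. The sketch below is ours, not part of the analysis pipeline; the coefficients are those of Table~\ref{rhk_period_tab} for the combined G-K-M moderately active sample, and the function names are illustrative:

```python
# Coefficients of the activity-rotation relation,
# log10(P_rot) = A + B * log10(R'_HK), for the combined G-K-M
# sample in the moderately active regime (log10 R'_HK <= -4.5).
A_GKM, B_GKM = -2.425, -0.791

def prot_from_activity(log_rhk, A=A_GKM, B=B_GKM):
    """Estimated rotation period (days) from the mean activity level."""
    return 10.0 ** (A + B * log_rhk)

def tau_c(b_minus_v):
    """Convective turnover time (days), Rucinski (1986) calibration
    as quoted in the text, with x = 0.65 - (B - V)."""
    x = 0.65 - b_minus_v
    if x > 0:
        log_tau = 1.178 - 1.163 * x + 0.279 * x**2 - 6.14 * x**3
    else:
        log_tau = 1.178 - 0.941 * x + 0.460 * x**2
    return 10.0 ** log_tau

# Example: a star of solar colour and activity level.
p = prot_from_activity(-4.91)   # roughly 29 days
ro = p / tau_c(0.65)            # Rossby number, roughly 1.9
```

For a solar analogue ($B-V = 0.65$, $\log_{10}R'_{HK} = -4.91$) this returns a period near the solar rotation period, a useful sanity check of the fitted coefficients.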
\begin {table*} \begin{center} \caption {Parameters for Eq.~\ref{eq_rhk_Rossby} \label{tab:rhk_Rossby_tab}} \begin{tabular}{ l l l l l } \hline Dataset & N & A & B & $\sigma$ Ro \\ & & & &(\%) \\\hline $\log_{10}R'_{HK} > -4.4$ & 7 & --3.533 $\pm$ 0.796 & --0.912 $\pm$ 0.195 & 28 \\ $\log_{10}R'_{HK} \leq -4.4$ & 150 & --10.431 $\pm$ 0.078 & --2.518 $\pm$ 0.016 & 19 \\ \hline \label{rhk_Rossby_tab} \end{tabular} \end{center} \end {table*} \begin{figure} \includegraphics[width=9.0cm]{rhk_rosby} \caption{Rossby number vs. chromospheric activity level $\log_{10}R'_{HK}$ for each spectral type. Filled symbols show the stars analysed in this work. The dashed line shows the best fit to the data.} \label{rhk_Rossby} \end{figure} \subsection{Activity versus photometric cycle amplitude} A relation between the cycle amplitude of the $Ca II_{HK}$ flux variations and the mean activity level of the stars was proposed by \citet{SaarBrandenburg2002}. They found that stars with a higher mean activity level also show larger cycle amplitudes. We investigate the behaviour of the photometric amplitude of the cycle with the mean activity level as well as with the Rossby number. Comparing the cycle amplitude with $\log_{10}R'_{HK}$, we see a weak tendency. Even though the scatter is large, we find a trend of increasing photometric cycle amplitude towards more active stars (see Figure~\ref{rhk_amp_c}). This agrees with the findings of \citet{SaarBrandenburg2002}. \begin{figure} \includegraphics[width=9.0cm]{rhk_amp_c} \caption{Measured cycle photometric amplitude vs. chromospheric activity level $\log_{10}R'_{HK}$.} \label{rhk_amp_c} \end{figure} When comparing the cycle amplitude with the rotation period, we find a clearer correlation. While the scatter is large, it is clear that the amplitude decreases towards longer periods (see Figure~\ref{Rossby}).
This is different from what \citet{SaarBrandenburg2002} found when studying the $Ca II$ variations, where the cycle amplitude may saturate. \begin{figure} \includegraphics[width=9.0cm]{rosby} \caption{Measured cycle photometric amplitude versus the rotation period.} \label{Rossby} \end{figure} \subsection{Rotation-cycle relation} The existence of a relationship between the length of the magnetic cycle and the rotation period has been studied for a long time. \citet{Baliunas1996} suggested $P_{cyc}/P_{rot}$ as an observable to study how the two quantities relate to each other. It was suggested that the length of the cycle scales as $\sim D^l$, where $l$ is the slope of the relation and $D$ is the dynamo number. Slopes different from $\sim 1$ would imply a correlation between the length of the cycle and the rotation period. Figure~\ref{cycles} shows, on a log-log scale, $P_{cyc}/P_{rot}$ versus $1/P_{rot}$ for all the stars for which we were able to determine both the rotation period and the length of the cycle. The slope of our results is $1.01 \pm 0.06$, implying no correlation between the two quantities. However, our sample covers early-G to late-M stars, including main-sequence and pre-main-sequence stars, with rotation periods ranging from $\sim 0.2$ to more than $\sim 160$ days. Previous works that found correlations were based on more homogeneous, and thus more suitable, samples for studying this relationship. For stars with multiple cycles we choose the global cycle (the longer one), but the possibility that some of the short-period cycles are flip-flop cycles cannot be ruled out. If we restrict our analysis to main-sequence FGK stars, we obtain a slope of $0.89\pm 0.05$, meaning that there is a weak correlation between the two quantities. This measure is compatible with the result of $0.81 \pm 0.05$ from \citet{Olah2009} and implies a weaker correlation than that of \citet{Baliunas1996}, who found a slope of $0.74$ from a sample that overlaps substantially with ours.
This supports the idea that there is a common dynamo behaviour for this group of stars, which does not apply to the M-dwarf stars, for which we, like \citet{Savanov2012}, do not find a correlation. \begin{figure} \includegraphics[width=9.0cm]{cycle} \caption{$P_{cyc}/P_{rot}$ versus $1/P_{rot}$ in log-log scale. Filled dots represent main-sequence stars while empty circles stand for pre-main-sequence stars. The dashed line shows the fit to the full dataset. The solid line shows the fit to the main-sequence stars with radiative core.} \label{cycles} \end{figure} In a direct comparison of the length of the cycle with the rotation period, we see an absence of long cycles for extremely slow rotators. Figure~\ref{rot_cycle} shows the distribution of cycle lengths against rotation periods. While the cycle lengths of stars with rotation periods below the saturation level of $\sim 50$ days are distributed approximately uniformly from $\sim 2$ to $\sim 20$ years, stars rotating slower than $\sim 50$ days show only shorter cycles. At this point, it is unclear whether this behaviour is real or related to a selection or observational bias. The number of stars with both rotation and cycle measured in this region is small, and the time span of the observations is shorter than 10 years. Further investigation is needed to clarify the actual distribution. \begin{figure} \includegraphics[width=9.0cm]{rot_cycle} \caption{Cycle length versus rotation periods. Filled symbols show the stars analysed in this work. } \label{rot_cycle} \end{figure} Finally, we have also compared the photometric amplitudes induced by the activity cycles with those induced by the rotational modulation, which show a clear correlation. Both amplitudes increase together in an almost 1:1 relationship, albeit with large scatter, and the behaviour seems similar for every spectral type (see Fig.~\ref{amp_amp}).
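The slope determination used above amounts to a linear least-squares fit in log-log space. A minimal sketch follows (with synthetic periods, not our sample; since the synthetic cycle lengths are roughly constant, the fitted slope comes out near 1, i.e. no correlation):

```python
import numpy as np

# Fit log10(P_cyc/P_rot) against log10(1/P_rot); the slope l is the
# exponent in P_cyc/P_rot ~ D^l. Synthetic, roughly constant cycle
# lengths are used, so the slope should be close to 1.
p_rot = np.array([5.0, 10.0, 20.0, 40.0, 80.0])            # days
p_cyc = np.array([9.0, 11.0, 10.0, 12.0, 9.5]) * 365.25    # days

x = np.log10(1.0 / p_rot)
y = np.log10(p_cyc / p_rot)
slope, intercept = np.polyfit(x, y, 1)
# A slope significantly different from 1 would indicate a
# correlation between cycle length and rotation period.
```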
\begin{figure} \includegraphics[width=9.0cm]{amp_amp} \caption{Measured photometric amplitude of the cycles vs. the photometric amplitude of the rotational modulation as measured in this work. } \label{amp_amp} \end{figure}
We report on the scrambling performance and focal-ratio-degradation (FRD) of various octagonal and rectangular fibers considered for MAROON-X. Our measurements demonstrate the detrimental effect of thin claddings on the FRD of octagonal and rectangular fibers and that stress induced at the connectors can further increase the FRD. We find that fibers with a thick, round cladding show low FRD. We further demonstrate that the scrambling behavior of non-circular fibers is often complex and introduce a new metric to fully capture non-linear scrambling performance, leading to much lower scrambling gain values than are typically reported in the literature ($\leq$1000 compared to 10,000 or more). We find that scrambling gain measurements for small-core, non-circular fibers are often speckle dominated if the fiber is not agitated.
\label{sec:intro} MAROON-X is a new fiber-fed, red-optical, high-precision radial-velocity spectrograph for one of the twin 6.5m Magellan Telescopes in Chile. The instrument is currently under construction at the University of Chicago\cite{seifahrt1}. MAROON-X will be fed by a \SI{100}{\micron} octagonal fiber at $f$/3.3 from the telescope, delivering a FOV of 0.95'' on sky. To achieve a resolving power of $\sim$80,000, a micro-lens based pupil slicer and double scrambler\cite{seifahrt2} reformats the light into three \SI{50 x 150}{\micron} rectangular fibers at $f$/5.0, which are then close-stacked to form a pseudo-slit at the spectrograph input. We have tested the scrambling performance and focal-ratio-degradation (FRD) of a number of octagonal and rectangular fibers to select the best fibers for this application.
\label{sec:discussion} We have obtained near-field and far-field images as well as photometric FRD measurements of available octagonal and rectangular fibers of different sizes and from different manufacturers to select the best fibers for MAROON-X. We find that far-field images of fibers with input pupils containing a central obscuration quickly reveal FRD and cladding modes, thanks to the sharp transition between the central dark area and the bright ring in an FRD-free far-field image. For both octagonal and rectangular fibers we find that thin claddings increase losses and make fibers more vulnerable to mechanical stresses. The latter also appears to be true for fibers with thin buffers. Rectangular claddings produce high FRD due to asymmetric stresses in standard connector ferrules, even with low-shrinkage (0.4\%) adhesive. Removing this stress (i.e., working with bare fibers) greatly improves the FRD behavior but poses practical limits on the usage of these fibers. The scrambling behavior is often complex, and our $SG_{min}$ metric captures the full non-linear scrambling behavior, leading to much lower scrambling gain values than are typically reported in the literature. Scrambling gain measurements for small-core, non-circular fibers are often speckle dominated if the fiber is not agitated.
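For reference, the conventional scrambling gain compares the normalised input-spot displacement with the normalised shift of the output near-field centroid; taking the worst case over a scan of input positions is one plausible reading of the $SG_{min}$ idea discussed above. The sketch below is illustrative (our own function names and numbers, not the instrument pipeline):

```python
import numpy as np

def scrambling_gain(d_in, shift_out, D_in, D_out):
    """Conventional scrambling gain: normalised input-spot displacement
    divided by the normalised output near-field centroid shift."""
    return (d_in / D_in) / (shift_out / D_out)

# Scan of input positions (microns). With a non-linear response the
# output shift is not proportional to the input displacement, so the
# gain varies strongly from position to position.
d_in = np.array([10.0, 20.0, 30.0])    # input spot offsets
shift = np.array([0.05, 0.01, 0.12])   # output centroid shifts
gains = scrambling_gain(d_in, shift, D_in=100.0, D_out=100.0)

# Worst case over the scan: the figure of merit that a single
# two-position measurement would tend to overestimate.
sg_min = gains.min()
```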
We study the neutrino flavor evolution in the neutrino-driven wind from a binary neutron star merger remnant consisting of a massive neutron star surrounded by an accretion disk. With the neutrino emission characteristics and the hydrodynamical profile of the remnant consistently extracted from a three-dimensional simulation, we compute the flavor evolution by taking into account neutrino coherent forward scattering off ordinary matter and neutrinos themselves. We employ a ``single-trajectory'' approach to investigate the dependence of the flavor evolution on the neutrino emission location and angle. We also show that the flavor conversion in the merger remnant can affect the (anti-)neutrino absorption rates on free nucleons and may thus impact the $r$-process nucleosynthesis in the wind. We discuss the sensitivity of these results to changes in the neutrino emission characteristics, also considering different neutron star merger simulations.
In this work we explore dynamical perturbations induced by the massive asteroids Ceres and Vesta on main-belt asteroids through secular resonances. First we determine the location of the linear secular resonances with Ceres and Vesta in the main belt, using a purely numerical technique. Then we use a set of numerical simulations of fictitious asteroids to investigate the importance of these secular resonances in the orbital evolution of main-belt asteroids. We find, by evaluating the magnitude of the perturbations in the proper elements of the test particles, that in some cases the strength of these secular resonances is comparable to that of known non-linear secular resonances with the giant planets. Finally, we explore the asteroid families that are crossed by the secular resonances we studied, and identify several cases where the latter seem to play an important role in their post-impact evolution.
The dynamical structure of the \textrm{Main Asteroid Belt}, a large concentration of asteroids with semi-major axes between those of Mars and Jupiter, has been studied for more than a century. Daniel Kirkwood proposed and then discovered the famous gaps in the distribution of main-belt asteroids, which bear his name. The \textrm{``Kirkwood gaps''} are almost vacant ranges in the distribution of semi-major axes, corresponding to the locations of the strongest mean motion resonances with Jupiter, which occur when the ratio of the orbital motions of an asteroid and a planet (in this case Jupiter) can be expressed as the ratio of two small integers. We now know that mean motion resonances with all the planets of the solar system exist throughout the main belt, rendering it a dynamically complex region. The gravitational interactions in the solar system also cause secular perturbations, which affect the orbits on long timescales. If a frequency, or a combination of frequencies, of the variations of the orbital elements of a small body becomes nearly commensurate with those of the planetary system, a secular resonance occurs, amplifying the effect of the perturbations. The importance of secular resonances was pointed out as early as the 19th century by \citet{Leverrier,Tisserand} and \citet{Charlier1,Charlier2}, who noticed a match between the $\nu_{\rm 6}$ secular resonance and the inner end of the main belt. A century later, thanks to the works of \citet{Froeschle1989,Morbidelli1991,Knezevic1991,Milani1992} and \citet{Michel1997}, amongst others, we have a map of the locations of the most important secular resonances throughout the solar system. We thus have a clear picture of how the dynamical environment of the solar system, and consequently of the main belt, is shaped by the major planets.
Recently, in \citet{Novakovic2015a}, we reported on the role of the linear secular resonance with (1)~Ceres in the post-impact orbital evolution of asteroids belonging to the (1726) Hoffmeister family. Contrary to the previous belief that massive asteroids could influence the orbits of smaller bodies only through mutual close encounters \citep{Nesvorny2002} and perhaps low-order mean-motion resonances \citep{Christou2012}, this provided a concrete example that they can strongly affect the secular evolution of the orbits of the latter through secular resonances. Also, \citet{Li2016} found that a secular resonance between two members of the Himalia Jovian satellite group can affect their orbital evolution, and \citet{Carruba2016} showed that secular resonances with Ceres tend to drive away asteroids in the orbital neighborhood of Ceres, giving more evidence that secular resonances in general can be important even if the perturbing body is relatively small. These results raise a number of questions regarding the potential role of such resonances in the dynamical evolution of the main asteroid belt in general. The scope of this work is to improve our general picture of the dynamical structure of the main belt by studying the importance of the secular perturbations caused by the two most massive asteroids, (1)~Ceres and (4)~Vesta.
We have found the locations of the four linear secular resonances with (1)~Ceres and (4)~Vesta using a numerical approach that identifies asteroids which, according to their proper frequencies, appear to be in resonance. The secular resonances with Ceres mostly cover the middle part of the main belt, with some extension to the high-inclination part of the outer belt, whereas those with Vesta cover the inner belt and a moderate- to high-inclination part of the middle and outer belt. Our numerical simulations have shown that the effects of these resonances on the orbits of main-belt asteroids are considerable, especially when the latter have semi-major axes close to that of the respective perturbing massive asteroid. \citet{Milani1992,Milani1994} have studied the effect of non-linear secular resonances with the giant planets on the proper elements of main-belt asteroids. They found that the proper elements of resonant asteroids undergo secular oscillations with amplitudes comparable to what we measured for the secular resonances with Ceres and Vesta. In the outer belt, which is considered to be far enough from both massive asteroids, we cannot clearly distinguish the impact of the secular resonances among the other dynamical mechanisms that act in the region. Although, as we have shown, the effect of the latter diminishes with increasing distance from the relevant massive asteroid in each case, it is crucial to note that in specific regions of the main belt, secular resonances with massive asteroids are equally, if not more, important than the non-linear ones with the giant planets. Finally, we have identified which asteroid families are crossed by each resonance. There are cases where the size of the families in proper-element space is comparable to the amplitude of the oscillations induced by the secular resonance that crosses them (e.g. 1726 Hoffmeister). In these cases the secular resonances studied here should have the most evident effect on the post-impact evolution of asteroid family members.
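The identification step described above can be sketched as a simple frequency match: a body is flagged as (near-)resonant when its proper perihelion frequency $g$ or nodal frequency $s$ lies within a small tolerance of that of the perturber. The catalogue and tolerance below are illustrative, and the quoted proper frequencies of Ceres are approximate:

```python
# Toy identification of linear secular resonances with a massive
# asteroid: flag bodies whose proper frequencies g (perihelion) or
# s (node) lie close to those of the perturber.
# Frequencies in arcsec/yr; the catalogue is synthetic.
g_ceres, s_ceres = 54.25, -59.25   # approximate proper frequencies of (1) Ceres
catalogue = {
    "ast_a": {"g": 54.30, "s": -40.1},
    "ast_b": {"g": 30.00, "s": -59.20},
    "ast_c": {"g": 70.00, "s": -10.0},
}

def resonant(cat, g0, s0, tol=0.3):
    """Return bodies with g - g0 ~ 0 or s - s0 ~ 0 within tol."""
    return [name for name, f in cat.items()
            if abs(f["g"] - g0) < tol or abs(f["s"] - s0) < tol]
```

In practice the tolerance would be chosen from the measured dispersion of the proper frequencies, and non-linear combinations of frequencies could be screened in the same way.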
{Microquasars are potential $\gamma$-ray emitters. Indications of transient episodes of $\gamma$-ray emission were recently reported in at least two systems: Cyg X-1 and Cyg X-3. The identification of additional \gr -emitting microquasars is required to better understand how $\gamma$-ray emission can be produced in these systems.}% {Theoretical models have predicted very high-energy (VHE) \gr\ emission from microquasars during periods of transient outburst. The observations reported herein were undertaken with the objective of observing a broadband flaring event in the \gr\ and X-ray bands.}% {Contemporaneous observations of three microquasars, GRS 1915+105, Circinus X-1, and V4641 Sgr, were obtained using the High Energy Stereoscopic System (H.E.S.S.) telescope array and the Rossi X-ray Timing Explorer (\textit{RXTE}) satellite. X-ray analyses for each microquasar were performed and VHE \gr\ upper limits from contemporaneous \hess\ observations were derived.}% {No significant \gr\ signal has been detected in any of the three systems. The integral \gr\ photon flux at the observational epochs is constrained to be $I(> 560 {\rm\ GeV}) < 7.3\times10^{-13}$ cm$^{-2}$ s$^{-1}$, $I(> 560 {\rm\ GeV}) < 1.2\times10^{-12}$ cm$^{-2}$ s$^{-1}$, and $I(> 240 {\rm\ GeV}) < 4.5\times10^{-12}$ cm$^{-2}$ s$^{-1}$ for GRS 1915+105, Circinus X-1, and V4641 Sgr, respectively.}% {The \gr\ upper limits obtained using \hess\ are examined in the context of previous Cherenkov telescope observations of microquasars. The effect of intrinsic absorption is modelled for each target and found to have negligible impact on the flux of escaping $\gamma$-rays. When combined with the X-ray behaviour observed using \textit{RXTE}, the derived results indicate that if detectable VHE \gr\ emission from microquasars is commonplace, then it is likely to be highly transient.}
\label{sec:intro} Microquasars are X-ray binaries that exhibit spatially resolved, extended radio emission. The nomenclature is motivated by a structural similarity with the quasar family of active galactic nuclei (AGN). Both object classes are believed to comprise a compact central object embedded in a flow of accreting material, and both exhibit relativistic, collimated jets. In the current paradigm, both microquasars and AGN derive their power from the gravitational potential energy that is liberated as ambient matter falls onto the compact object. Notwithstanding their morphological resemblance, microquasars and radio-loud AGN represent complementary examples of astrophysical jet production on dramatically disparate spatial and temporal scales. Indeed, the conditions of accretion and mass supply that pertain to the supermassive ($10^{6}M_{\odot}\lesssim M_{\rm BH} \lesssim 10^{9}M_{\odot}$) black holes powering AGN and to the stellar-mass compact primaries of microquasars are markedly different. In the latter, a companion star (or donor) provides the reservoir of matter for accretion onto a compact stellar remnant (or primary), which can be either a neutron star or a black hole. Partial dissipation of the resultant power output occurs in a disk of material surrounding the primary, producing the thermal and non-thermal X-ray emission that is characteristic of all X-ray binary systems. Microquasars are segregated on the basis of associated non-thermal radio emission, indicative of synchrotron radiation in a collimated outflow, which carries away a sizeable fraction of the accretion luminosity \citep{2004MNRAS.355.1105F}. In AGN, superficially similar jet structures are known to be regions of particle acceleration and non-thermal photon emission. The resulting radiation spectrum can extend from radio wavelengths into the very high-energy (VHE; $E_{\gamma}>100$ GeV) \gr\ regime.
Very high-energy \gr\ emission has been observed from many AGN in the blazar sub-class\footnote{ \url{http://tevcat.uchicago.edu/}}, where the jet axis is aligned close to the observer line-of-sight, as well as from a few radio galaxies (e.g. M87, \citealp{2003A&A...403L...1A}; Cen A, \citealp{2009ApJ...695L..40A}; NGC 1275, \citealp{Aleksic2012}) and starburst galaxies (e.g. M82, \citealp{Acciari2009}; NGC 253, \citealp{Abramowski2012}). If similar jet production and efficient particle acceleration mechanisms operate in microquasars and AGN, this might imply that the former object class comprises plausible sources of detectable VHE \gr\ emission as well, assuming that appropriate environmental conditions prevail. The most relevant environmental conditions include the density of nearby hadronic material, which provides scattering targets for inelastic proton scattering interactions; these interactions produce pions, which generate $\gamma$-rays when they subsequently decay. The ambient magnetic field strength is also important and influences the rate at which electrons lose energy via synchrotron radiation. Synchrotron photons contribute to the reservoir of soft photons that are available for inverse Compton (IC) up-scattering into the VHE $\gamma$-ray regime. The argument for phenomenological parity between AGN and microquasars, possibly related to their structural resemblance, has been strengthened in recent years as the spectral properties of both radio and X-ray emission are remarkably similar for both stellar-mass and supermassive black holes. These similarities led to the postulation of a so-called fundamental plane, which describes a three-dimensional, phenomenological correlation between the radio (5 GHz) and X-ray (2--10 keV) luminosities and the black hole mass \citep{uniplane, Falcke2004}. However, the fundamental plane does not appear to extend into the TeV band.
To date, only one well-established microquasar has been observed to emit in the VHE \gr\ regime. This is the Galactic black hole Cygnus X-1, which was marginally detected (at the $\sim4\sigma$ level) by the MAGIC telescope immediately prior to a $2-50\,$keV X-ray flare observed by the INTEGRAL satellite, the \textit{Swift} Burst Alert Telescope (BAT), and the \textit{RXTE} All-Sky Monitor (ASM) \citep{2007ApJ...665L..51A,2008A&A...492..527M}. \cite{Laurent2011} recently identified linearly polarized soft $\gamma$-ray emission from Cygnus X-1 (see also \citealp{Jourdain2012}), thereby locating the emitter within the jets and identifying their capacity to accelerate particles to high energies (see however \citealp{Romero2014}). Further motivation for observing microquasars in the VHE band arises from the recent identification of the high-mass microquasar Cygnus X-3 as a transient high-energy (HE; 100 MeV $<E_{\gamma}<$ 100 GeV) \gr\ source by the \textit{Fermi} \citep{2009Sci...326.1512F} and AGILE \citep{2009Natur.462..620T} satellites. The identified periodic modulation of the HE signal is consistent with the orbital frequency of Cygnus X-3 and provides compelling evidence for effective acceleration of charged particles to GeV energies within the binary system \citep{2009Sci...326.1512F}. Based on evidence from subsequent reobservations, the HE \gr\ flux from Cygnus X-3 appears to be correlated with transitions observed in X-rays in and out of the so-called ultra-soft state, which exhibits bright soft X-ray emission and low fluxes in hard X-rays and is typically associated with contemporaneous radio flaring activity \citep[e.g.][]{2012MNRAS.421.2947C}. Unfortunately, repeated observations of Cygnus X-3 using the MAGIC telescope did not yield a significant detection \citep{2010ApJ...721..843A}, despite the inclusion of data that were obtained simultaneously with the periods of enhanced HE emission detected using \textit{Fermi}.
However, the intense optical and ultraviolet radiation fields produced by the Wolf-Rayet companion star in Cygnus X-3 imply a large optical depth for VHE \gr s due to absorption via $e^{+}e^{-}$ pair production \citep[e.g.][]{2010MNRAS.406..689B, 2012MNRAS.421.2956Z}. Accordingly, particle acceleration mechanisms akin to those operating in Cygnus X-3 may yield detectable VHE \gr\ fluxes in systems with fainter or cooler donors. Mechanisms for $\gamma$-ray production in microquasars have been widely investigated, resulting in numerous hadronic \citep[see e.g.][]{2003A&A...410L...1R} and leptonic \citep[see e.g.][]{1999MNRAS.302..253A, 2002A&A...388L..25G, 2006A&A...447..263B, 2006ApJ...643.1081D, 2010MNRAS.404L..55D} models, describing the expected fluxes and spectra of microquasars in the GeV-TeV band. In both scenarios, a highly energetic population of the relevant particles is required and, consequently, emission scenarios generally localize the radiating region within the jet structures of the microquasar. Leptonic models rely upon IC scattering of photons from the primary star in the binary system or photons produced through synchrotron emission along the jet to produce VHE \gr\ emission. In this latter scenario, they closely resemble models of extragalactic jets \citep{1981ApJ...243..700K,1989ApJ...340..181G}, but typically invoke internal magnetic fields that are stronger by factors $\sim1000$. Consideration of hadronic models is motivated by the detection of Doppler-shifted emission lines associated with the jets of the microquasar SS 433 \citep[e.g.][]{1984ARA&A..22..507M}, indicating that at least some microquasar jets comprise a significant hadronic component. Models of VHE \gr\ production by hadronic particles generally invoke electromagnetic cascades initiated by both neutral and charged pion decays \citep{2003A&A...410L...1R,1996A&A...309..917A,2005ApJ...632.1093R}. 
Electron-positron pair production, $\gamma\gamma\rightarrow e^{+}e^{-}$, can absorb VHE $\gamma$-rays. In the case of 1 TeV $\gamma$-rays, the cross section for this process is maximised for ultraviolet target photons ($E_{\rm ph}\sim10$ eV), where its value may be approximated in terms of the Thomson cross section as $\sigma_{\gamma\gamma}\approx\sigma_{T}/5$ \citep[e.g.][]{1967PhRv..155.1404G}. In high-mass systems, the companion star is expected to produce a dense field of these target photons to interact with the $\gamma$-rays \citep[e.g.][]{1987ApJ...322..838P,1995SSRv...72..593M,2005ApJ...634L..81B,2006A&A...451....9D}. This process can be very significant and probably contributes to the observed orbital modulation in the VHE \gr\ flux from LS 5039 \citep{2006A&A...460..743A}. In contrast, the ultraviolet spectrum of low-mass microquasars is likely dominated by the reprocessing of X-ray emission in the cool outer accretion flow \citep{1994A&A...290..133V,2009MNRAS.392.1106G}, although jet emission might also be significant \citep{2006MNRAS.371.1334R}. Regardless of their origin, the observed optical and ultraviolet luminosities of low-mass X-ray binaries (LMXBs) are generally orders of magnitude lower than those of high-mass systems \citep{2006MNRAS.371.1334R}, and the likelihood of strong \gr\ absorption is correspondingly reduced. However, microquasars may only become visible in the TeV band during powerful flaring events. These transient outbursts, characterised by the ejection of discrete superluminal plasma clouds, are usually observed at the transition between low- and high-luminosity X-ray states \citep{2004MNRAS.355.1105F}. The monitoring of black-hole X-ray binaries with radio telescopes and X-ray satellites over the last decade has enabled a classification scheme for such events to be established \citep{Homan2005}.
Hardness-intensity diagrams (HIDs) plot the source X-ray intensity against X-ray colour (or hardness) and have been extensively used to study the spectral evolution of black-hole outbursts. At the transition from the so-called low-hard state to the high-soft state through the hard-to-soft intermediate states, the steady jet associated with the low-hard state is disrupted. These transient ejections, produced once the accretion disk collapses inwards, are more relativistic than the steady low-hard jets \citep{2004MNRAS.355.1105F}. Internal shocks can develop in the outflow, possibly accelerating particles that subsequently give rise to the optically thin radio flares observed from black-hole systems; this phenomenological description is also extensible to neutron stars, although in that case the jet radio power is lower by a factor of 5--30 \citep{2006MNRAS.366...79M}. Outburst episodes have also been observed in cases in which the source remained in the hard state without a transition to the soft state \citep{Homan2005}. The detection (at the $\sim4\sigma$ level) of the high-mass, black-hole binary Cygnus X-1 by the MAGIC telescope took place during a low-hard state with enhanced $2-50\,$keV flux, as observed with the INTEGRAL satellite, the \textit{Swift} BAT, and the \textit{RXTE} ASM \citep{2008A&A...492..527M}. However, although the source X-ray spectrum remained unchanged throughout the TeV flare, such a bright hard state was unusually long when compared with previous observations of the source. Here we report on contemporaneous observations with H.E.S.S. and \textit{RXTE} of the three microquasars V4641~Sgr, GRS~1915+105, and Circinus X-1. Information on the targets, the H.E.S.S. and \textit{RXTE} observations, and the corresponding trigger conditions is detailed in Sect.~\ref{sec:targets}. Analysis results are reported in Sect.~\ref{sec:mqs:results} and discussed in Sect.~\ref{sec:mqs:context}.
The appendix reports detailed information on the X-ray analysis, in particular including HIDs corresponding to the times of observation for the three studied sources.
\label{sec:conclusion} Contemporaneous VHE \gr\ and X-ray observations of GRS 1915+105, Cir X-1, and V4641 Sgr were obtained using \hess\ and \textit{RXTE}. Analysis of the resultant \hess\ data did not yield a significant detection for any of the target microquasars. However, X-ray binaries are dynamic systems and as such are likely to exhibit evolution of their radiative properties, both as a function of orbital phase and in response to stochastic processes. It follows that the non-detections presented in this work do not indicate that the target binary systems do not emit detectable VHE \gr\ emission at phases other than those corresponding to the \hess\ observations. GRS 1915+105 appears to have been observed during an extended plateau state, the archival multiwavelength data suggesting the presence of continuous, mildly relativistic radio jets at the time of observation. The \textit{RXTE} observations of Cir X-1 yield data that are consistent with strongly varying obscuration of the X-ray source shortly after periastron passage, but these data are not indicative of bright flaring during the \hess\ observation epochs. Conversely, V4641 Sgr appears to have been observed during an episode of mild, transient flaring, although rapid source variability, combined with the limited duration of the strictly simultaneous \hess\ and \textit{RXTE} exposure, complicates interpretation. Microquasars continue to be classified as targets of opportunity for IACTs, requiring a rapid response to any external trigger to maximise the likelihood of obtaining a significant detection. These conditions are realised with the commissioning of the H.E.S.S. 28 m telescope, which aims to lower the energy threshold from 100 GeV to about 30 GeV \citep{Parsons2015,Holler2015,HollerCrab2015} while simultaneously enabling very rapid follow-up observations \citep{Hofverberg2013a}.
To exploit these new opportunities and an increasing understanding of the behaviour of microquasars, the triggering strategies for TeV follow-up observations have evolved significantly in recent years. In the future, alternative observational strategies, including continuous monitoring of candidate microquasars in the VHE \gr\ band, may become possible using dedicated sub-arrays of the forthcoming Cherenkov Telescope Array \citep{2010arXiv1008.3703C}. Irrespective of the non-detections presented herein, the tantalising observations of Cygnus X-3 at GeV energies and of Cygnus X-1 by the MAGIC telescope ensure that the motivations for observing microquasars using IACTs remain compelling. Indeed, by further constraining the \gr\ emission properties of microquasars, subsequent observations will inevitably yield an enhanced understanding of astrophysical jet production on all physical scales. More optimistically, the detection of additional \gr -bright microquasars would greatly facilitate a comprehensive characterisation of the particle acceleration and radiative emission mechanisms that operate in such systems.
% arXiv:1607.04613 (above)
% arXiv:1607.08784 (below)
Spatial correlations of the observed sizes and luminosities of galaxies can be used to estimate the magnification that arises through weak gravitational lensing. However, the intrinsic properties of galaxies can be similarly correlated through local physical effects, and these present a possible contamination to the weak lensing estimation. In an earlier paper \citep{Ciarlariello2015} we modelled the intrinsic size correlations using the halo model, assuming the galaxy sizes reflect the mass in the associated halo. Here we extend this work to consider galaxy magnitudes and show that these may be even more affected by intrinsic correlations than galaxy sizes, making this a bigger systematic for measurements of the weak lensing signal. We also quantify how these intrinsic correlations are affected by sample selection criteria based on sizes and magnitudes.
Weak gravitational lensing can be observed through the statistical analysis of coherent distortions in the shape, size and brightness of the images of distant galaxies. Measurements of galaxy shape correlations induced by weak lensing, also called cosmic shear, have been demonstrated to be a powerful probe and can potentially constrain the cosmological model with high precision. Cosmic shear correlations were detected for the first time in 2000 \citep{Bacon2000, Kaiser2000a, vanWaerbeke2000, Wittman2000} and have recently been measured more accurately by surveys such as CFHTLenS \citep{Heymans2013} and KiDS \citep{Kuijken2015a}. Future surveys such as LSST\footnote{\tt www.lsst.org} and Euclid\footnote{\tt sci.esa.int/euclid} are expected to significantly improve shear measurements. Although cosmic shear has traditionally been the primary goal of weak lensing studies, more attention has recently been given to size and brightness magnification as complementary probes. Magnification can push small or faint objects above the size and magnitude thresholds of a survey; this leads to a signal that can be detected by cross-correlating a foreground population of galaxies with a distant background sample. This is also known as magnification bias and was first detected by \cite{Scranton2005} using background quasars. More recently, other background sources such as Lyman-break galaxies have been used to study dark matter halo profiles \citep{Hildebrandt2009, vanWaerbeke2010a, Hildebrandt2011, Hildebrandt2013, Ford2012, Bauer2014}. In \cite{Vallinotto2011} size and magnitude magnification were used to calibrate cosmic shear measurement errors, but the first detection of cosmic magnification directly using galaxy sizes and magnitudes was performed in \cite{Schmidt2012}, where a weighted magnification estimator was applied to an X-ray-selected sample of galaxy groups; they found measurements of the projected surface density that are consistent with shear measurements.
\cite{Huff2014} used the Fundamental Plane relation for early-type galaxies to detect cosmic magnification by means of size measurements; however, \cite{Joachimi2015a} recently detected a possible contamination from spatial correlations of Fundamental Plane residuals that should be taken into account. Recently, \cite{Duncan2016} presented the first estimates for individual clusters using weak lensing size and flux magnification. There are several good reasons for using size and magnitude information along with cosmic shear. Size and magnitude information is already available from cosmic shear surveys, and ideally one should exploit all of the data's statistical power to constrain the cosmological model. For example, \cite{Casaponsa2013} have shown that the size information that comes from shape estimation methods can readily be used for cosmic magnification measurements, provided that there is sufficient signal-to-noise and the sizes are larger than the point spread function. Using different weak lensing probes can be important to mitigate the impact of shape distortion systematics. For cosmic shear measurements, in addition to systematics arising from instrumental effects and atmospheric conditions, there are also systematics of astrophysical origin, such as intrinsic alignments. The mechanisms that generate the intrinsic alignments are not fully understood and seem to depend on the galaxy type. The large-scale gravitational field seems to have a central role in generating alignments; essentially, the gravitational tidal field changes the shape of the halo in which an elliptical galaxy is embedded or, for spiral galaxies, it can induce angular momentum correlations that align their disc spins. These intrinsic alignments produce a signal that can mimic the effect of weak lensing and bias cosmological parameter constraints.
A review of the various intrinsic alignment models and the methods to assess the contamination can be found in \cite{Troxel2015}. \cite{Heavens2013} have shown that adding cosmic magnification via size distortions can help to increase the constraining power compared to cosmic shear measurements on their own; they also demonstrated that size measurements can be made largely uncorrelated with shape measurements if the square root of the area of the galaxy image is used as the size estimator. This analysis was extended in \cite{Alsing2015}, which provided an estimate of the convergence dispersion expected from size measurements and, analogously to intrinsic alignments for cosmic shear, studied the possible impact of marginalising over intrinsic size correlations on constraints of cosmological parameters such as the dark energy equation of state parameters. In \cite{Ciarlariello2015} (hereafter CCP15), we investigated a theoretical model in which intrinsic size correlations arise in a simple halo model, assuming that larger and more massive galaxies reside in more massive haloes and linking observed galaxy sizes to halo and subhalo masses through the relation found by \cite{Kravtsov2013}. Haloes are populated with subhaloes by means of a subhalo mass function which accounts for the fact that the size of the largest subhaloes is limited by the total halo mass. The main result from \citetalias{Ciarlariello2015} is that it may not be possible to ignore intrinsic correlations when weak lensing is measured from galaxy sizes. In this paper we extend the analysis given in \citetalias{Ciarlariello2015} to account for intrinsic correlations of magnitudes. The halo model developed for intrinsic size correlations is applied to magnitudes, and galaxy luminosities are correlated with the mass of the haloes and subhaloes following the relation given in \cite{Vale2008}.
In order to calculate the potential impact of these correlations on more realistic surveys, we also include size and magnitude thresholds in our model. The paper is organised as follows. In Section 2 we discuss how the convergence is estimated from size and magnitude measurements, in both the ideal case and for more realistic surveys. Section 3 explains our halo model-based approach, how we relate the sub-halo masses to observed quantities, and our model of the size-magnitude distribution. Section 4 works out the relevant three-dimensional two-point power spectra for the convergence and the intrinsic size and magnitude fields. In Section 5 we translate these statistics into the two-dimensional size and magnitude estimators, both for a fully projected sample and for a tomographic binning approach; we end with brief conclusions in Section 6.
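As a rough illustration of the kind of convergence estimator discussed in Section 2, the following Python sketch combines the standard first-order lensing responses ($\delta\ln r = \kappa$ for log-sizes, and $\delta m \simeq -(5/\ln 10)\,\kappa$ for magnitudes, from $\mu \simeq 1+2\kappa$) in an inverse-variance weighting. This is a hedged sketch under those assumptions, not the estimator defined in this paper; the function name and scatter values are illustrative.

```python
import numpy as np

# Illustrative first-order responses to convergence kappa:
#   log-size:  delta ln r = kappa
#   magnitude: delta m    = -2.5 log10(mu) ~ -(5/ln 10) kappa   (mu ~ 1 + 2 kappa)
MAG_RESPONSE = -5.0 / np.log(10.0)

def kappa_estimate(log_sizes, mags, mean_log_size, mean_mag,
                   sigma_lnr=0.3, sigma_m=0.6):
    """Inverse-variance-weighted convergence estimate from the shifts of
    the sample means relative to unlensed reference values, ignoring
    selection effects and intrinsic correlations."""
    k_size = np.mean(log_sizes) - mean_log_size          # size-based estimate
    k_mag = (np.mean(mags) - mean_mag) / MAG_RESPONSE    # magnitude-based estimate
    w_size = 1.0 / sigma_lnr**2                          # weight from log-size scatter
    w_mag = MAG_RESPONSE**2 / sigma_m**2                 # weight from magnitude scatter
    return (w_size * k_size + w_mag * k_mag) / (w_size + w_mag)
```

In practice the responsivities are suppressed by the survey selection function, which is one of the central points of the paper.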
In our previous paper \citep{Ciarlariello2015}, we examined the issue of intrinsic size correlations in a halo model, where the sizes of galaxies were assumed to be a simple function of the sub-halo mass. Here, we have extended this analysis by examining the correlations in galaxy brightness, and by introducing intrinsic scatter in the mass-size and mass-brightness relations. We have also included realistic selection effects into our predictions to account for the reduced responsivity of the mean properties of galaxies to convergence. Overall, we find these improvements in the modelling have not affected the main conclusion of \citet{Ciarlariello2015}, that intrinsic correlations in the galaxy properties used to trace magnification are an important systematic for measurements of the convergence power spectrum; if ignored, they can significantly bias the cosmological interpretation of the convergence measurements. The principal determining factor of the importance of the intrinsic correlations is their estimator-weighted bias \citep{Ciarlariello2015}, e.g. Eq. \ref{barbm}. These depend significantly on the form of the mass-size and mass-luminosity relations. Because of the steeper relationship between the sub-halo mass and the luminosity reflected in the \citet{Vale2008} relation, we expect a higher bias for the magnitude correlations compared to that expected for sizes, as can be seen in Figure \ref{fig:BIAS-tot_Bj}. Because of this, the intrinsic contamination to magnitude correlations can actually be comparable to the convergence signal itself (Fig. \ref{fig:MAG_Bj}). Our results have also been evaluated using a specific halo model, in particular using the \citet{Sheth1999} mass function and its associated halo bias model. Using the \citet{Press1974} mass function and halo bias model instead results in an increase of around 20\% in both the size and magnitude bias.
Thus, while there is some sensitivity to the implementation of the halo model, our main conclusions in terms of the impact of intrinsic size and magnitude correlations are not significantly affected. The addition of scatter in the mass-size and mass-luminosity relations does not directly affect expectations of the two-point correlations. However, it does impact the probability that a galaxy of a given mass will be selected, and therefore the bias weighting of the sample. Our modelling of the distribution indicates a significant sensitivity to the size and magnitude cuts, consistent with observations, and indicates a responsivity of the means to magnification that is significantly suppressed compared to the ideal values. In the absence of intrinsic systematics, it is beneficial to combine sizes and magnitudes together into a single noise-weighted estimator \citep{Alsing2015}. However, given the difference in the expected intrinsic correlations, combining sizes and magnitudes may make the systematic contamination worse than for sizes alone, though this may be mitigated depending on how the intrinsic correlations are marginalised over. The tomographic analysis shows that, like intrinsic shape correlations, intrinsic size and brightness correlations are a serious problem within narrow bins, and ameliorating them requires exploiting cross-correlations between bins where the II contributions are negligible. However, at low redshifts, and in neighbouring bins, the GI terms can also be a comparable systematic. Our theoretical results emphasise the need to better quantify these intrinsic correlations, particularly on small scales where the halo model is approximate and potentially is missing important physics. Hydrodynamic simulations have more realistic small-scale physics, but may not have the full dynamic range essential for weak lensing analyses. Semi-analytic models, based on simulated merger trees and constrained to match related galaxy observables, may improve the situation.
Equally essential is measuring these effects in large-scale surveys, focusing on low redshifts and large scales where the intrinsic signal is expected to dominate over shot noise and the convergence signal. We are presently investigating whether these correlations can be observed in the SDSS. Measurements of such correlations are observationally challenging and they are subject to many of the same systematics as shape measurements. However, unlike shape estimators, the magnification estimators have the additional complication of requiring accurate measurements of the mean sizes and magnitudes and their responsivities to lensing under the selection function.
% arXiv:1607.08784 (above)
% arXiv:1607.03175 (below)
In beyond-generalized Proca theories including the extension to theories higher than second order, we study the role of a spatial component $v$ of a massive vector field on the anisotropic cosmological background. We show that, as in the case of the isotropic cosmological background, there are no additional ghostly degrees of freedom associated with the Ostrogradski instability. In second-order generalized Proca theories we find the existence of anisotropic solutions on which the ratio between the anisotropic expansion rate $\Sigma$ and the isotropic expansion rate $H$ remains nearly constant in the radiation-dominated epoch. In the regime where $\Sigma/H$ is constant, the spatial vector component $v$ works as a dark radiation with the equation of state close to $1/3$. During the matter era, the ratio $\Sigma/H$ decreases with the decrease of $v$. As long as the conditions $|\Sigma| \ll H$ and $v^2 \ll \phi^2$ are satisfied around the onset of late-time cosmic acceleration, where $\phi$ is the temporal vector component, we find that the solutions approach the isotropic de Sitter fixed point ($\Sigma=0=v$) in accordance with the cosmic no-hair conjecture. In the presence of $v$ and $\Sigma$ the early evolution of the dark energy equation of state $w_{\rm DE}$ in the radiation era is different from that in the isotropic case, but the approach to the isotropic value $w_{\rm DE}^{{\rm (iso)}}$ typically occurs at redshifts $z$ much larger than 1. Thus, apart from the existence of dark radiation, the anisotropic cosmological dynamics at low redshifts is similar to that in isotropic generalized Proca theories. In beyond-generalized Proca theories the only consistent solution to avoid the divergence of a determinant of the dynamical system corresponds to $v=0$, so $\Sigma$ always decreases in time.
Cosmology is facing a challenge of revealing the origin of unknown dark components dominating the present Universe. The standard cosmological model introduces a pure cosmological constant $\Lambda$ to the field equations of General Relativity (GR), and additionally a non-baryonic dark matter component in the form of non-relativistic particles, known as cold dark matter. This rather simple model is overall consistent with the tests at cosmological scales by the Cosmic Microwave Background (CMB) anisotropies \cite{CMB}, by the observed matter distribution in large-scale structures \cite{LSS}, and by the type Ia supernovae \cite{SNIa}. In the prevailing view the cosmological constant should arise from the vacuum energy density, which can be estimated by using techniques of the standard quantum field theory. Lamentably, the theoretically estimated value of vacuum energy is vastly larger than the observed dark energy scale. This is known as the cosmological constant problem \cite{Weinberg}. In this context, infrared modifications of gravity have been widely studied in the hope to screen the cosmological constant via a high-pass filter, naturally arising in higher dimensional set-ups, in massive gravity \cite{mgravity} or non-local theories \cite{nonlocal}. On the same footing as tackling the cosmological constant problem, infrared modifications of gravity may yield accelerated expansion of the Universe due to the presence of new physical degrees of freedom, providing an alternative framework for dark energy \cite{review}. The simplest realization is in form of an additional scalar degree of freedom, that couples minimally to gravity \cite{quin}. Richer phenomenology can be achieved by allowing for self-interactions of the scalar field or non-minimal couplings to gravity \cite{stensor}. 
Extensively studied classes are the Galileon \cite{Galileon1,Galileon2} and Horndeski \cite{Horndeski1} theories, whose constructions rely on the condition of keeping the equations of motion up to second order \cite{Horn2}. Easing this restriction allows for interactions with higher-order equations of motion, but it is still possible to construct theories that propagate only one scalar degree of freedom (DOF)--Gleyzes-Langlois-Piazza-Vernizzi (GLPV) theories \cite{GLPV} (see also Refs.~\cite{Hami} for the discussion of the propagating DOF). These beyond-Horndeski interactions generalize the previous ones and offer richer phenomenology \cite{GLPVcosmo,GLPVsph}. Even if the extension in the form of an additional scalar field is the simplest and most explored one, the inclusion of additional vector fields has been taken into consideration as well \cite{Barrow,Jimenez13,TKK,Fleury,Hull}. The Maxwell field with the standard kinetic term is a gauge-invariant vector field with two propagating transverse modes. The attempt to construct derivative self-interactions for a massless, Lorentz-invariant vector field yielded a no-go result \cite{Mukoh}, but this can be overcome by breaking the gauge invariance. The simplest way of breaking gauge invariance is to include a mass term for the vector field, which is known as the Proca field. The inclusion of allowed derivative self-interactions and non-minimal couplings to gravity results in the generalized Proca theories with second-order equations of motion \cite{Heisenberg,Tasinato,Allys,Jimenez2016}. As in the original Proca theory, there are still the three physical DOFs, one of them being the longitudinal mode and the other two corresponding to the standard transverse modes (besides two tensor polarizations). The phenomenology of (sub classes of) generalized Proca theories has been extensively studied in Refs.~\cite{scvector,Chagoya,cosmo,Geff}. 
As in the GLPV extension of scalar Horndeski theories, relaxing the condition of second-order equations of motion in generalized Proca theories allows us to construct new higher-order vector-tensor interactions \cite{HKT} without the Ostrogradski instability \cite{Ostro}. In Ref.~\cite{HKT} it was shown that, even in the presence of interactions beyond the domain of generalized Proca theories, there are no additional DOFs associated with the Ostrogradski ghost on the isotropic Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) background. In fact, the Hamiltonian ${\cal H}$ vanishes identically due to the existence of a constraint that removes the appearance of a ghostly DOF. In Refs.~\cite{cosmo,Geff} the cosmology of generalized Proca theories was studied by introducing the temporal component $\phi(t)$ of a vector field $A^{\mu}$ at the background level (where $t$ is the cosmic time). There the spatial vector component was treated as a perturbation on the FLRW background. In concrete dark energy models it was found that $\phi(t)$ grows toward a de Sitter attractor, whereas the vector perturbation decays after entering the vector sound horizon. Thus the analysis of Refs.~\cite{cosmo,Geff} is self-consistent, but it remains to be seen whether or not a spatial vector component $v(t)$ as large as $\phi(t)$ can modify the cosmological dynamics. If $v(t)$ is non-negligible relative to $\phi(t)$, we need to take into account the spatial shear $\sigma(t)$ in the metric as well. In the context of anisotropic inflation, for example, it is known that there are cases in which the ratio between the anisotropic and isotropic expansion rates stays nearly constant \cite{aniinf}. In generalized Proca theories, we would like to clarify the behavior of $v(t)$ and the anisotropic expansion rate $\Sigma(t)=\dot{\sigma}(t)$ during the cosmic expansion history from the radiation era to the late-time accelerated epoch.
In beyond-generalized Proca theories, it is important to study whether the Ostrogradski ghost appears or not on the anisotropic cosmological background. In this paper, we show the absence of additional ghostly DOF by explicitly computing the Hamiltonian in the presence of $v$ and $\Sigma$. An interesting property of beyond-generalized Proca theories with quartic and quintic Lagrangians is that the only consistent solution free from a determinant singularity of the dynamical system corresponds to $v=0$. In this case, the cosmological dynamics can be well described by the isotropic one studied in Ref.~\cite{HKT}. We organize our paper as follows. In Sec.~\ref{eqsec} we derive the Hamiltonian and the full equations of motion in beyond-generalized Proca theories on the anisotropic cosmological background. In Sec.~\ref{anasec} we analytically estimate the evolution of $v$ and $\Sigma$ in the radiation/matter eras and in the de Sitter epoch. In Sec.~\ref{numesec} we perform the numerical study to clarify the cosmological dynamics for both generalized and beyond-generalized Proca theories in detail, paying particular attention to the evolution of the dark energy equation of state $w_{\rm DE}$ as well as $v$ and $\Sigma$. Sec.~\ref{consec} is devoted to conclusions.
\label{consec} In beyond-generalized Proca theories, we have studied the anisotropic cosmological dynamics in the presence of a spatial vector component $v$. On the isotropic FLRW background it was found in Ref.~\cite{HKT} that, even with the Lagrangian density ${\cal L}^{\rm N}$ outside the domain of second-order generalized Proca theories, there is no additional DOF associated with the Ostrogradski ghost. In this paper we showed that the same result also holds on the anisotropic background. There exists the constraint equation (\ref{Hamicon1}) related to the Hamiltonian ${\cal H}$ as Eq.~(\ref{LHN}), so that ${\cal H}=0$. Hence the beyond-generalized Proca theories are free from the Ostrogradski instability associated with a Hamiltonian unbounded from below. In Sec.~\ref{anasecA} we analytically estimated the evolution of the anisotropic expansion rate $\Sigma$ and the spatial component $v$ in the early cosmological epoch. If the conditions (\ref{Sigcon}) hold in the radiation and matter eras, $\Sigma$ decreases in proportion to $a^{-3}$. Under the conditions (\ref{alcon}), the evolution of $v$ is given by $v={\rm constant}$ during the radiation era and $v \propto a^{-1/2}$ during the matter era. If $v$ is not very much smaller than $M_{\rm pl}$ during the radiation domination, there are cases in which the second condition of Eq.~(\ref{Sigcon}) is violated. In concrete dark energy models studied in Sec.~\ref{numesec}, we showed the existence of solutions on which the ratio $r_{\Sigma}=\Sigma/H$ remains nearly constant during the radiation era. In Sec.~\ref{deSittersec} we discussed the property of de Sitter fixed points relevant to the late-time cosmic acceleration. Besides the isotropic point (\ref{isofixed}), we found the existence of anisotropic fixed points (\ref{vc}) under the two conditions (\ref{dscon1}) and (\ref{dscon2}). For the theories in which the parameter ${\cal A}_V$ defined by Eq.~(\ref{AV}) vanishes, we only have the isotropic fixed point.
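For reference, the early-time scaling behaviours quoted above, valid when the conditions (\ref{Sigcon}) and (\ref{alcon}) hold, can be collected as:

```latex
\begin{align}
\text{radiation era:} \quad & \Sigma \propto a^{-3}\,, \qquad v \simeq {\rm constant}\,, \nonumber \\
\text{matter era:} \quad & \Sigma \propto a^{-3}\,, \qquad v \propto a^{-1/2}\,. \nonumber
\end{align}
```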
For ${\cal A}_V \neq 0$ the anisotropic fixed points exist, but they are not stable. In both cases, the analytic estimation implies that the solutions approach the stable isotropic point in accordance with the cosmic no-hair conjecture. In Sec.~\ref{numesec} we studied the evolution of anisotropic cosmological solutions in a class of dark energy models given by the functions (\ref{G2345}). In the early cosmological epoch, the contributions of $v$ and $\Sigma$ to the energy density $\rho_{\rm DE}$ and the pressure $P_{\rm DE}$ can be larger than the isotropic contributions associated with the temporal vector component $\phi$. If $v$ is large such that the conditions (\ref{vin}) are satisfied, the dark energy equation of state is given by Eq.~(\ref{wdees1}) in the radiation era, which is close to $w_{\rm DE}=1/3$ in concrete models studied in Secs.~\ref{secA} and \ref{secB}. If the contribution of $\Sigma$ dominates over that of $v$ such that the conditions (\ref{vin2}) are satisfied, we have $w_{\rm DE} \simeq 1$ during the radiation era. In both cases, the dark energy equation of state is different from the isotropic value $w_{\rm DE}^{(\rm iso)}$ given by Eq.~(\ref{wdeiso}). However, the transition of $w_{\rm DE}$ to the value $w_{\rm DE}^{(\rm iso)}$ typically occurs at high redshifts (see Figs.~\ref{fig2} and \ref{fig3}), so the dark energy dynamics at low redshifts is similar to that in the isotropic case. In generalized Proca theories with $v$ not very much smaller than $M_{\rm pl}$, the spatial anisotropy in the radiation era can be sustained by $v$ with the nearly constant ratio $r_{\Sigma}\simeq q_Vv^2/(3M_{\rm pl}^2)$. In this regime, for the models (A) and (B) studied in Sec.~\ref{numesec}, the vector field behaves as a dark radiation characterized by $w_{\rm DE} \simeq 1/3$. 
As seen in the right panels of Figs.~\ref{fig1} and \ref{fig4}, the constant behavior of $r_{\Sigma}$ in the radiation era can occur for the models with large powers $p$ (like $p=5$) due to the possible choice of large initial values of $v$. On the other hand, for the models with small $p$ (like $p=1$), we have $q_V v^2/(3M_{\rm pl}^2) \ll |{\cal B}/(a^3H)|$ in Eq.~(\ref{rsigso}) and hence $r_{\Sigma}$ decreases as $\propto a^{-1}$ during the radiation era (see the left panels of Figs.~\ref{fig1} and \ref{fig4}). After the matter dominance, both $v$ and $\Sigma$ decrease toward the isotropic fixed point ($v=0=\Sigma$). In beyond-generalized Proca theories, we showed that the physical branch of solutions without a determinant singularity of the dynamical system corresponds to $v=0$. In this case the anisotropic expansion rate simply decreases as $\Sigma \propto a^{-3}$ from the onset of the radiation-dominated epoch, so the cosmological evolution is practically indistinguishable from the isotropic case. Interestingly, the beyond-generalized Proca theories do not allow the existence of anisotropic solutions with constant $r_{\Sigma}$. We have thus shown that, apart from the radiation era in the presence of a non-negligible spatial vector component $v$, the anisotropy does not survive for a class of dark energy models in the framework of (beyond-)generalized Proca theories. Thus, the analysis of Refs.~\cite{cosmo,Geff}, where the spatial component was treated as a perturbation on the isotropic FLRW background, can be justified except for the early cosmological epoch in which the vector field behaves as a dark radiation. It will be of interest to place detailed observational constraints on both isotropic and anisotropic dark energy models from observations of the CMB, type Ia supernovae, and large-scale structures.
% arXiv:1607.03175 (above)
% arXiv:1607.08860 (below)
{NGC~6231 is a massive young star cluster, near the center of the Sco~OB1 association. While its OB members are well studied, its low-mass population has received little attention. We present high-spatial resolution Chandra ACIS-I X-ray data, where we detect 1613 point X-ray sources.} {Our main aim is to clarify global properties of NGC~6231 down to low masses through a detailed membership assessment, and to study the cluster stars' spatial distribution, the origin of their X-ray emission, the cluster age and formation history, and initial mass function.} {We use X-ray data, complemented by optical/IR data, to establish cluster membership. The spatial distribution of different stellar subgroups also provides highly significant constraints on cluster membership, as does the distribution of X-ray hardness. We perform spectral modeling of group-stacked X-ray source spectra.} {We find a large cluster population down to $\sim 0.3 M_{\odot}$ (complete to $\sim 1 M_{\odot}$), with minimal non-member contamination, and with a definite age spread (1--8~Myr) for the low-mass PMS stars. We argue that low-mass cluster stars also constitute the majority of the few hundred unidentified X-ray sources. We find mass segregation for the most massive stars. The fraction of circumstellar-disk bearing members is found to be $\sim 5$\%. Photoevaporation of disks under the action of massive stars is suggested by the spatial distribution of the IR-excess stars. We also find strong \ha\ emission in 9\% of cluster PMS stars. The dependence of X-ray properties on mass, stellar structure, and age agrees with extrapolations based on other young clusters. The cluster initial mass function, computed over $\sim 2$ dex in mass, has a slope $\Gamma \sim -1.14$. The total mass of cluster members above $1 M_{\odot}$ is $2.28 \times 10^3 M_{\odot}$, and the inferred total mass is $4.38 \times 10^3 M_{\odot}$. We also study the peculiar, hard X-ray spectrum of the Wolf-Rayet star WR~79.} {}
\label{intro} Massive stars in OB associations are easily identified as bright blue stars standing out against fainter and redder background field stars and, being bright and observable up to large distances, have been intensively studied over decades. The low-mass stars born in the same associations, however, have received much less attention in the past, because they are much fainter and difficult to discern from faint field stars. This is especially true if an OB association lies far away, low on the Galactic plane. The advent of X-ray observations, however, has provided an invaluable tool to find young low-mass stars, which are orders of magnitude brighter in X-rays than older field stars of the same mass and spectral types. Moreover, the sensitivity and spatial resolution of the X-ray data available today from the {\em Chandra} X-ray observatory have made it possible to extend these studies to the many regions farther than 1~kpc that were previously inaccessible (due to insufficient sensitivity and strong source confusion) to earlier-generation X-ray observatories such as ROSAT, thus enormously enlarging the sample of regions accessible for such detailed studies across almost the entire mass spectrum. We have studied in X-rays with Chandra the young cluster \object{NGC~6231}, part of the Sco~OB1 association, containing more than 100 OB stars (and one Wolf-Rayet star), with the purpose of studying its population, complete down to approximately one solar mass. These data show the existence of a very large low-mass star population in this cluster. We are therefore able to study in detail its spatial distribution, the cluster locus on the color-magnitude diagram and the cluster age, the percentage of stars with T~Tauri-like properties, and the cluster initial mass function (IMF).
NGC~6231 was already observed in X-rays with XMM-Newton (Sana et al.\ 2006a), with the detection of 610 X-ray sources; the X-ray emitting OB stars were studied by Sana et al.\ (2006b), and the lower-mass X-ray sources by Sana et al.\ (2007). Although these XMM-Newton data are potentially deeper than ours (total exposure 180~ks, with a larger instrument effective area), and cover a wider field, they are severely background- and confusion-limited, and their source catalog contains fewer than half as many sources as the new X-ray source catalog we present in this paper. An extensive survey of the published literature on NGC~6231 was made by Sana et al.\ (2006a), covering mainly optical photometric and spectroscopic studies of cluster membership, distance, radial velocities, variability, and binarity, with the main emphasis on massive stars. Until recently, optical photometric studies extending to low-mass stars were few, with limited spatial coverage and/or relatively shallow limiting magnitudes (Sung, Bessell and Lee 1998, hereafter SBL; Baume, V\'azquez and Feinstein 1999). More recently, several of these limitations were overcome by the work of Sung \e (2013a; SSB), who obtained deep (about $V<21$), wide-field UBVI and \ha\ photometry in the NGC~6231 region. Moreover, the cluster is covered by the VPHAS$+$ DR2 catalog (Drew \e 2014), which includes {\em ugri} and \ha\ magnitudes down to a depth comparable to SSB. Noteworthy is the large fraction of binaries among massive stars, as found by Raboud (1996), Garc{\'{\i}}a and Mermilliod (2001), and Sana \e (2008). SSB derive a distance modulus for NGC~6231 of 11.0 (1585~pc), a reddening $E(B-V) = 0.47$, and a nearly normal reddening law with $R=3.2$. We adopt these values for the present work.
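For concreteness, the adopted distance and extinction follow from the standard photometric relations $m-M=5\log_{10}(d/\mathrm{pc})-5$ and $A_V = R\,E(B-V)$; the short sketch below (ours, not part of the original analysis) reproduces the quoted 1585~pc:

```python
import math

def distance_from_modulus(dm):
    """Distance in parsecs from the distance modulus m - M = dm."""
    return 10.0 ** (1.0 + dm / 5.0)

dm = 11.0    # distance modulus adopted for NGC 6231 (SSB)
ebv = 0.47   # reddening E(B-V) (SSB)
r = 3.2      # total-to-selective extinction ratio (SSB)

d_pc = distance_from_modulus(dm)  # 10**(1 + 11/5) = 10**3.2
a_v = r * ebv                     # V-band extinction

print(f"d   = {d_pc:.0f} pc")   # -> d   = 1585 pc
print(f"A_V = {a_v:.2f} mag")   # -> A_V = 1.50 mag
```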
This paper is structured as follows: Section~\ref{xobs} presents our {\em Chandra} data on NGC~6231 and the source detection procedure; the X-ray source cluster morphology is studied in Section~\ref{morph}; Section~\ref{ident} describes the identification of the detected sources with the available optical and near-IR catalogues; in Section~\ref{cmd} we examine the properties of the X-ray selected cluster population using color-magnitude and color-color diagrams; the spatial distributions of subgroups of candidate members are studied in Section~\ref{spatial}; low-resolution ACIS X-ray spectra are studied in Section~\ref{xspec}, and X-rays from massive OB stars in Section~\ref{himass}; the dependence of X-ray emission on stellar properties is then studied in Section~\ref{xlumfn}, and compared to other young clusters within an evolutionary framework; a detailed cluster IMF is computed in Section~\ref{imf}; finally, we summarize our findings in Section~\ref{concl}. An appendix is devoted to the spatial distribution of reddening.
\label{concl} We have observed with Chandra/ACIS-I the young cluster NGC~6231 in Sco~OB1, detecting 1613 point X-ray sources down to $\log L_X \sim 29.3$ (a complete sample down to $\log L_X \sim 29.8$). Most of the detected sources are without doubt low-mass cluster members, and they permit an unprecedented study of the late-type population of this very rich cluster. The cluster morphology is found to be nearly spherical, with a small elongation orthogonal to the Galactic plane. Comparison of the X-ray source list with existing optical and 2MASS catalogues has allowed the identification of 85\% of the X-ray sources. The optical color-magnitude diagram of the X-ray detected sources shows, in addition to many tens of massive stars, a richly populated PMS cluster band with a definite spread in ages ($\sim$1-8~Myr). The PMS stars appear slightly older than the massive (main-sequence or evolved) OB stars. Upon comparison with existing model isochrones, we find the Baraffe \e (2015) models in better agreement with our data than the older Siess \e (2000) models. The shape of the $V$-magnitude spread across the PMS band is suggestive of an (equal-mass) binary fraction of $\sim 20$\%. The IR color distribution suggests a rapid increase in extinction, by several magnitudes in $V$, at some distance behind the cluster. The likelihood of membership for different star groups, defined from their position on the optical CMD, was studied by means of their spatial distribution, X-ray hardness, and luminosity. We argue that the vast majority of OB and A-F stars are cluster members, regardless of their X-ray detection. The X-ray detection fraction of late-B and A stars is consistent with the hypothesis that the actual X-ray emitter is a later-type companion, while the more massive star is X-ray dark. We confirm the spatial segregation of the most massive stars, as suggested by previous studies. We find 203 \ha-excess stars in the X-ray FOV, with \ha\ intensities typical of CTTS.
About one-half of them lie in the cluster PMS band, and are added to our member list; of the latter, $\sim 40$\% were not detected in X-rays. The derived fraction of \ha-excess PMS stars is $\sim 9$\%, in good agreement with the trend shown by other clusters of the same age. The \ha-excess stars have X-ray properties that do not differ significantly from those of the rest of the PMS stars. We also find 48 bona-fide IR-excess sources, probably related to circumstellar disks, whose frequency among NGC~6231 PMS stars is therefore $\sim 5$\%. The spatial distribution of the IR-excess stars, both X-ray detected and undetected, is significantly wider than that of the PMS stars: we argue that this is evidence of enhanced photoevaporation of disks in the vicinity of the cluster OB stars. A similar trend is also shown by the X-ray undetected CTTS, strengthening this conclusion. Approximately 70\% of the optically unidentified X-ray sources show (spatial and X-ray) properties well consistent with those of the cluster PMS stars at their low-mass end, and we suggest that they are low-mass members as well. The remaining 30\% of unidentified sources show instead much harder, multi-component X-ray spectra, very unlike cluster stars, and do not appear otherwise connected with the cluster. With more than 1300 stars with individually determined membership, this dataset enables us to study NGC~6231 in unprecedented detail, across a wide range of masses, down to $\sim 0.3 M_{\odot}$ (complete to $\sim 1 M_{\odot}$). The X-ray properties of NGC~6231 stars are in agreement with those of coeval clusters. The mass above which $L_X/L_{bol}$ drops is an age indicator, which agrees with the isochronal age, and is also consistent with the age-dependent discontinuity in the $(H\alpha,V)$ diagram. We find a complex, doubly-peaked X-ray spectrum for the Wolf-Rayet star WR~79, with more extreme characteristics than those exhibited by stars of similar type; WR~79 is one of the few WC-type stars detected in X-rays.
Using all available membership information, we compute the cluster IMF under different hypotheses, and over nearly 2 orders of magnitude in mass. We find a best-fit slope $\Gamma = -1.14$, consistent with that found by SSB but slightly closer to the Salpeter value. The total mass of individually selected cluster members above $1 M_{\odot}$ is found to be $2.28 \times 10^3 M_{\odot}$, while the total mass extrapolated to the bottom of the mass spectrum according to the Weidner \e (2010) analytical IMF is $4.38 \times 10^3 M_{\odot}$, in agreement with previous estimates. This confirms that NGC~6231 ranks among the most massive known young clusters in the Milky Way. \begin{appendix}
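As a rough numerical cross-check (ours, not the paper's calculation, which uses the Weidner \e (2010) analytical IMF), extending the single best-fit power law $dN/d\log m \propto m^{\Gamma}$ with $\Gamma=-1.14$ down to an assumed lower limit of $0.08\,M_{\odot}$ already lands close to the quoted total mass; the mass limits below are illustrative assumptions:

```python
def mass_integral(m_lo, m_hi, gamma):
    """Relative stellar mass in [m_lo, m_hi] for dN/dlog(m) ∝ m**gamma,
    i.e. dN/dm ∝ m**(gamma - 1), so the mass integrand is m**gamma."""
    p = gamma + 1.0
    return (m_hi ** p - m_lo ** p) / p

gamma = -1.14        # best-fit IMF slope
m_above = 2.28e3     # observed member mass above 1 M_sun [M_sun]

# Assumed (illustrative) mass limits: 0.08 and 100 M_sun.
ratio = mass_integral(0.08, 1.0, gamma) / mass_integral(1.0, 100.0, gamma)
m_total = m_above * (1.0 + ratio)
print(f"extrapolated total mass ~ {m_total:.2e} M_sun")  # ~4.3e3 M_sun
```

This is close to the quoted $4.38 \times 10^3 M_{\odot}$; the residual difference reflects the flattening of the Weidner \e form at the lowest masses.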
We test a class of holographic models for the very early universe against cosmological observations and find that they are competitive with the standard $\Lambda$CDM model of cosmology. These models are based on three-dimensional perturbative super-renormalizable Quantum Field Theory (QFT), and while they predict a different power spectrum from the standard power law used in $\Lambda$CDM, they still provide an excellent fit to the data (within their regime of validity). By comparing the Bayesian evidence for the models, we find that $\Lambda$CDM does a better job globally, while the holographic models provide a (marginally) better fit to the data without the very low multipoles (i.e. $l\lesssim 30$), where the dual QFT becomes non-perturbative. Observations can be used to exclude some QFT models, while we also find models satisfying all phenomenological constraints: the data rules out the dual theory being Yang-Mills theory coupled to fermions only, but allows for Yang-Mills theory coupled to non-minimal scalars with quartic interactions. Lattice simulations of 3d QFTs can provide non-perturbative predictions for large-angle statistics of the cosmic microwave background, and potentially explain its apparent anomalies.
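The model comparison mentioned above rests on Bayes factors, i.e. ratios of Bayesian evidences of the competing models; a minimal sketch with hypothetical log-evidence values (illustrative only, not the values obtained in the paper):

```python
import math

def bayes_factor(ln_z_a, ln_z_b):
    """Evidence ratio Z_a / Z_b from the log-evidences of two models."""
    return math.exp(ln_z_a - ln_z_b)

# Hypothetical log-evidences for illustration only:
ln_z_lcdm = -5630.0  # LCDM
ln_z_holo = -5632.5  # a holographic model

b = bayes_factor(ln_z_lcdm, ln_z_holo)
print(f"Bayes factor in favour of LCDM = {b:.1f}")  # e**2.5 ~ 12.2
```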
Using the Sloan Digital Sky Survey, {we adopt the sSFR-$\sigone$ diagram as a diagnostic tool to understand quenching in different environments.} sSFR is the specific star formation rate, and $\sigone$ is the \jwb{stellar} surface density in the inner kpc. Although both the host halo mass and group-centric distance affect the satellite population, we find that these can be characterised by a single number, the quenched fraction, such that key features of the sSFR-$\sigone$ diagram vary smoothly with this proxy for the ``environment''. In particular, the sSFR of star-forming galaxies decreases smoothly with this quenched fraction, \refc{the sSFR of satellites being 0.1 dex lower than in the field}. Furthermore, $\sigone$ of the transition galaxies (\ie, the ``green valley'' or GV) decreases smoothly with the environment \refc{by as much as 0.2 dex for $\Ms = 10^{9.75-10}\Msun$}, from the field to satellites in larger halos and at smaller radial distances within same-mass halos. We interpret this shift as indicating the relative importance of today's field quenching track vs. the cluster quenching track. These \jwb{environmental effects} in the sSFR-$\sigone$ diagram are most significant in our lowest mass range ($9.75 < \log \Ms/\Msun < 10$). One feature that is shared between all environments is that at a given $\Ms$ quenched galaxies have \refc{about 0.2-0.3 dex} higher $\sigone$ than the star-forming population. \blue{These results indicate that either $\sigone$ increases (subsequent to satellite quenching), or $\sigone$ for individual galaxies remains unchanged, but the original $\Ms$ or the time of quenching is significantly different from those now in the GV}.
\label{introduction} Identifying the physical mechanisms that cause and sustain the cessation of star formation in galaxies has been a persistent problem in galaxy evolution research. Studies of large surveys over the last decade have made significant progress toward understanding this problem. In particular, it is now firmly established that the galaxy population obeys three broad correlations \jwb{between morphology, environment and quenching: 1) quenched galaxies tend to have early-type morphology or dense structure, 2) quenched galaxies tend to live in dense or massive halo environments, and 3) galaxies with early-type morphologies are found in these dense environments. } \subsection{Quenching and Morphology/Structure} Ever since Hubble classified the galaxies \citep{hubble26}, it has been known that the galaxy population can be roughly divided into star-forming disks and quenched spheroids. Early-type morphologies or high stellar density strongly correlate with being quenched in the local universe (e.g., \citealp{str01,kau03,bla03,bell08,vandokkum11,fang13,omand14,schawinski14,bluck14}) and at high-$z$ (\citealp{wuyts11,cheung12,bell12,szomoru12,wuyts12,barro13,lang14,tacchella15b}). One manifestation of this correlation between morphology and quenching is that the mass profiles of quenched and star-forming galaxies differ within the inner 1 kpc, such that quenched galaxies are denser in this inner region \citep{fang13}. (This is illustrated in \fig{cartoon}, a schematic diagram of sSFR vs. the stellar density within 1 kpc, $\sigone$. The quenched population, marked ``Q,'' lies at high $\sigone$ relative to the star-forming population, marked ``SF''.) At \jwb{any time}, this central stellar density grows along with stellar mass while on the galaxy main sequence (\citealp{hopkins09,feldmann10,vandokkum15}).
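Operationally, $\sigone$ is simply the stellar mass within a projected radius of 1 kpc divided by the enclosed area. A minimal sketch with a toy exponential-disk profile (illustrative only; the mass and scale length are our assumptions, and this is not the measurement pipeline applied to the imaging):

```python
import math

def enclosed_mass_exp_disk(m_tot, r_d, r):
    """Projected mass within radius r for an exponential disk of total
    mass m_tot and scale length r_d (analytic enclosed-mass formula)."""
    x = r / r_d
    return m_tot * (1.0 - (1.0 + x) * math.exp(-x))

def sigma_1(m_tot, r_d):
    """Sigma_1 [M_sun / kpc^2]: mass inside 1 kpc over the enclosed area."""
    return enclosed_mass_exp_disk(m_tot, r_d, 1.0) / (math.pi * 1.0 ** 2)

m_tot = 10 ** 10.5   # hypothetical stellar mass [M_sun]
r_d = 2.0            # hypothetical disk scale length [kpc]
print(f"log10 Sigma_1 = {math.log10(sigma_1(m_tot, r_d)):.2f}")  # ~8.96
```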
The high inner densities seen in quenched galaxies today were already in place by $z\sim 2$ \citep{patel13,barro15,tacchella15b}, which may suggest that if there is a connection between quenching and high inner density, it likely occurs at high-$z$. Several ideas have been proposed to explain this observed link between morphology and quenching. Major dissipative mergers, for instance, have been demonstrated \refc{in simulations} to build a central spheroid component through violent relaxation of pre-merger stars \citep{toomre72,mihos94,hopkins09}. The dissipative inflow of gas triggers a central starburst that, along with contributing to the bulge, also quenches a galaxy through fast consumption and/or winds. \refc{In simulations, the same inflow can accompany disk instabilities \citep{friedli95,immeli04,bournaud11_gasrichmergers}, which are observed in local galaxies (\citealp{courteau96,macarthur03,carollo98,carollo01,carollo07}; see also the review by \citealp{kormendy04}). Violent disk instabilities are predicted in simulations to be especially relevant at high-$z$ \citep{noguchi99,bournaud07,dekel09,mandelker14,dekel14}. Instabilities can also be triggered by mergers \citep{barnes91,mihos96,hopkins06,zolotov15} and counter-rotating streams \citep{danovich15}}. {A candidate merger sequence has been identified in $z\sim 0$ \refc{observations}, extending all the way from disturbed starbursts followed by fading, AGN turn-on, and eventual quiescence (\citealp{yesuf14}; see also \citealp{ellison11}).} While there is a structural correlation with quenching, its causality arrow is not clear. It may be an effect of quenching that is unrelated to morphology \citep{lillycarollo16}.
It has also been argued and shown that a spheroid can and sometimes does regrow a star-forming disk (e.g., \citealp{brennan15,graham15}) unless accretion is prevented, for example in a hot massive halo \citep{gabor15,woo15,zolotov15} and/or by AGN (\citealp{dekbir06,kormendy13} and references therein). In addition, a significant number (25-65\%) of quenched galaxies appear to have (early-type) disks, especially at high-$z$ (\citealp{stockton04,mcgrath08,vanD08,vandenbergh09,bundy10,oesch10,vanderwel11,salim12, bruce12,bruce14}), and also a number of bulges appear to be blue (e.g., \citealp{carollo07}). \subsection{Quenching and the Halo Environment} \label{intro_quench_env} The quenched fraction for central galaxies (those that are in the centres of their halo potential wells) correlates with various measures of the halo environment, including the halo mass $\Mh$ (\citealp{woo13,gabor15,woo15,zu16}). This is often attributed to virial shock heating of infalling gas in haloes more massive than $\Mcrit \sim 10^{12} \Msun$ \citep{croton06,dekbir06}. Below this critical mass, accretion is cold and conducive to star formation, while above it, infalling gas reaches the sound speed and a stable shock forms, heating the gas \citep{ree77,bir03,ker05,dekbir06}. For a population of haloes, this is manifested as a mass range extending over one or two decades around $\Mcrit$ where there is a decrease in the cold accretion as a function of halo mass ($\Mh$) \citep{ocv08,keres09,vandevoort11}. Since the effects of this so-called ``halo quenching'' are expected to increase with the mass of the halo rather than galaxy stellar mass, they are seen most dramatically in the satellite population since their host haloes can be much more massive than those of field galaxies at the same stellar mass. 
These effects are seen as ``environment'' quenching, in which the galaxy population is more likely to be red or have low sSFR in dense environments \citep{hog03,kau04,bal04,blanton05b,baldry06,bundy06,coo08,bamford09,patel09, skibba09a,wilman10,quadri12,haas12,wetzel12,hartley13,knobel15,carollo16} and in massive haloes \citep{woo13,woo15,bluck16}. {\citet{peng10} argued that this environmental quenching is separable from the quenching that happens in the field (they call this ``mass quenching''), but others} have argued that halo quenching can affect both centrals and satellites \citep{gabor15,woo15,zu16}. Indeed, the question has been raised as to whether the apparent separability between the quenching of centrals and satellites is actually a single phenomenon that masquerades as two separable processes due to the lack of dependence on halo mass of the mass function of the satellite population (\citealp{knobel15,carollo16}), or whether quenching is merely a function of halo age (\citealp{hearin13}). It remains an open question whether satellites experience quenching mechanisms in addition to those of field galaxies, or merely more of the same physics that also quenches field galaxies. \subsection{Morphology/Structure and the Halo Environment} The morphology of galaxies is also correlated with dense environments (\citealp{holmberg40,dre80}). There is evidence that this so-called ``morphology-density'' relation is not due to environmentally-driven morphological change, but only to the increased quenched fraction in dense environments. For example, the early-type fraction of satellites, {determined by visual classification}, does not rise as steeply with decreasing group-centric distance as the quenched fraction of satellites (\citealp{bamford09}), and the early-type fraction of {\it quenched} satellites {(determined by a combination of parametric and non-parametric fitting)} does not vary at all (\citealp{carollo16}) with distance.
These authors use these observations to argue that the morphology-density relation is driven by the increase of the red fraction in dense environments rather than a real increase of early-type morphology in dense environments. In fact, if there are two separate quenching channels for satellites as in \citet{peng10}, \cite{carollo16} argue that both channels must have the same effect (either no or some effect) on the morphologies of satellites. {If they are the same effect, then it is possible that a single mechanism governs the quenching of galaxies, the same mechanism that operates in the field (which is correlated with morphology/structure), and is simply more advanced in clusters. This idea has not yet been studied in detail. } \subsection{Goals of this Paper} \label{goals} \begin{figure} \includegraphics[width=0.45\textwidth]{figures/cartoon.eps} \caption{Schematic diagram illustrating quenching paths in sSFR-$\sigone$ space at constant stellar mass. The region marked ``SF'' refers to the ``star-forming'' population, and the region marked ``Q'' refers to the ``quenched'' population. The possible satellite quenching paths are the subject of this study. The numbered arrows are discussed in \sec{goals}. } \label{cartoon} \end{figure} In this paper we aim to study the interplay of the above three correlations for satellite galaxies in order to clarify the role of the halo environment versus the role of morphology/structure-dependent quenching in governing their evolution. The diagnostic we will use is the diagram of the specific star-formation rate (sSFR) versus the stellar surface mass density within 1 kpc ($\sigone$). The distribution of galaxies in this parameter space has been used for understanding structural change during quenching for centrals and their quenching track (labelled ``Field'' in \fig{cartoon} - see \citealp{fang13,barro13,tacchella15b}). This diagram has not yet been compared between environments. 
Do satellites in different environments populate different regions of this diagram? Such a comparison will aid in understanding simultaneously how star-formation activity and galaxy structure evolve for satellites and how this evolution differs from that of isolated galaxies. The various satellite quenching mechanisms proposed in the literature - ram pressure stripping, starvation, harassment - are expected to act on satellites regardless of their $\sigone$. If such mechanisms are important in satellite evolution, their quenching track in sSFR-$\sigone$ space should resemble something like the arrow \refc{labelled ``2'' in \fig{cartoon}, \ie, in a different location than that of the field. In particular, if galaxies with low $\sigone$ tend to have more loosely bound gas, ram pressure stripping in particular may preferentially affect galaxies with low $\sigone$, resulting in a quenching track more like arrow ``1'' in \fig{cartoon}. Some of these mechanisms may cause an increase in $\sigone$ via some triggered star formation during or after quenching (arrows ``3''), or nothing more may happen to the satellites (arrow ``4''). This paper attempts to detect any of these effects. } Even if satellites begin to quench independently of their $\sigone$, their subsequent evolution in sSFR-$\sigone$ and even $\Ms$ may not be \jwb{simple to predict}. Do satellite quenching mechanisms eventually change $\sigone$ or $\Ms$? The position of the quenched satellite population in this diagram relative to the position of transitioning satellites will shed light on these issues. Note that the above correlations in the literature between morphology, environment and quenching were discovered using simply the quenched fraction. However, higher-order information can be gained about the transition process as sSFR declines by studying the entire distribution of sSFR rather than just the quenched fraction (as emphasised by \citealp{woo15}). 
This kind of study is required for answering the questions posed here. The literature is replete with many different measures of ``structure'' and ``morphology''. However, since they all strongly correlate with each other, the above correlations are likely to be true in broad strokes, regardless of which measure is used. In this paper, we adopt $\sigone$ (introduced by \citealp{cheung12} and \citealp{fang13}) as our fiducial measure of structure for a number of reasons: \begin{enumerate} \item Despite some uncertainties in its measurement (such as the effect of the PSF), it does not depend on galaxy radius, which is usually light-based in current catalogues (unlike, for example, the density within the effective light radius, $\Sigma_{e}$). \item It is measured directly from the data and is not a fit to a model profile, which can be plagued by degeneracies and result in large systematic residuals. \item It refers to the innermost part of a galaxy, which is arguably less affected by {stripping}. \item \jwb{The galaxy population seems to obey the same $\sigone$-$\Ms$ relation at low- and high-$z$ (\citealp{tacchella15b,tacchella16a} - or with only slight variation, \citealp{barro15}), suggesting that $\sigone$ is a stable quantity that on the whole either stays constant with time or increases with time (as stellar mass increases). For individual galaxies, it is difficult to imagine how it could decrease.} \end{enumerate} {Clearly,} 1 kpc means very different things for galaxies of very different sizes (and thus masses). For this reason, \jwb{we also confirmed that the same analysis using the bulge mass \citep{mendel14} or the \citet{sersic68} index $n$ \citep{simard11} instead of $\sigone$ produced similar results.} This is, after all, not unexpected given the strong correlations between different measures of ``morphology'' and ``structure.'' \sec{data} describes our selection and quantities used from the Sloan Digital Sky Survey (SDSS).
\sec{results} presents our findings regarding the distribution of satellites in the sSFR-structure plane compared to the field and as a function of the halo environment. \sec{discussion} discusses possible implications of our results on the nature of satellite quenching. This analysis assumes concordance cosmology: $H_o = 70~{\rm km~s^{-1} Mpc^{-1}}, \Omega_M = 0.3, \Omega_\Lambda=0.7$. Our halo mass estimates also assume $\sigma_8=0.9, \Omega_b =0.04$.
\label{discussion} \label{effects} We have shown that the sSFR-$\sigone$ diagram is a promising diagnostic of the differences in the quenching processes among different populations of galaxies. We have studied this diagram as a function of three population variables: stellar mass, halo mass (for satellites), and radial location within haloes (also for satellites). Different features are seen, which may ultimately be linked to the different quenching mechanisms that are thought to affect galaxies in different environments. The basic result is that the sSFR-$\sigone$ diagram varies with environment: the diagram for field galaxies differs from that of all satellites, and the diagrams for satellites differ depending on group-centric distance and halo mass. In particular, we find that: \begin{enumerate} \item \jwb{{\bf ``Environment'' can be approximated by the quenched fraction.} The environment of galaxies is in principle a function of both group-centric distance and the mass of the host halo. We confirm what has been seen in many studies that the quenched fraction increases with $\Mh$, and in massive haloes, with decreasing $\dist$. However, we find that these two quantities conspire such that a single quantity, the quenched fraction of the population (including in the field) accurately predicts the rest of the distribution of galaxies in the sSFR-$\sigone$ plane. In other words, using the quenched fraction as a proxy for the environment, the following features are seen to vary smoothly with that measure: } \item {\bf The median value of $\sigone$ in the GV decreases smoothly with the quenched fraction of the environment.} In other words, the position of the GV bridge between the SF and Q populations in the sSFR-$\sigone$ diagram shifts \refc{toward lower $\sigone$ by as much as 0.2 dex (at $\Ms=10^{9.75-10}$)} for satellites compared to the field, and for satellites in more massive haloes and lower cluster-centric distance. 
\item {\bf The sSFR ridge-line of SF galaxies falls slightly with increasing quenched fraction of the environment.} This is a small effect ($\sim 0.1$ dex) but it is systematic over a large range of $\Ms$. \item {\bf The median $\sigone$ of quenched galaxies in a given mass bin is always higher than the star-forming population by about 0.2-0.3 dex.} The exact value also varies slightly with environment (as measured by the quenched fraction), but the variation is smaller than the variation in the median $\sigone$ of the GV or of the sSFR of the SF population. \end{enumerate} The magnitude of the above variations with environment depends on $\Ms$. The quenched fraction, median $\sigone$ of the GV and sSFR of the SF population vary significantly with environment for low-mass galaxies ($\Ms \ltsima 10^{10} \Msun$) but much less for the most massive galaxies ($\Ms \gtsima 10^{10.5}\Msun$). This may be a reflection of the uniformly high quenched fractions for intermediate- and high-mass field galaxies (\fig{trendswithqf}). Since at a given mass the number of field galaxies is about twice the number of satellites, most quenched galaxies of mass $\Ms \gtsima 10^{10.5}\Msun$ likely quenched as field galaxies, some of which later became quenched satellites. Galaxies of lower mass on the other hand have a quenched fraction that is more than double the quenched fraction in the field, indicating that a significant portion of these satellites quenched as satellites. These results are consistent with the expectation of an environmentally induced quenching track that occurs in clusters \jwb{today} and that is in addition to the quenching that occurs in the field (\fig{cartoon}). In the field, it appears that galaxies must have high $\sigone$ relative to the rest of the SF main sequence \refc{(by about 0.2-0.3 dex)} in order to start quenching (\ie, to bring them to the GV; this is consistent with \citealp{cheung12,fang13,barro15}). 
Since external influences are presumably lacking in the field, quenching must occur due to the properties of the galaxy itself or of its own halo, \eg, the mass of its halo, AGN state, stability of the disc, etc. Referring to these collectively as ``self-quenching'', the sSFR-$\sigone$ diagram in the field indicates that high $\sigone$ is associated with self-quenching. In contrast, the distribution of $\sigone$ for satellites in the GV is more similar to the distribution of $\sigone$ on the SF main sequence (blue histograms in \fig{histograms}, comparing $a$ to $b$ and $g$ to $h$), indicating that quenching in clusters is not associated with high $\sigone$. This is consistent with the influence of external factors that quench (at least to the GV) regardless of $\sigone$, or without altering $\sigone$. Furthermore, the small suppression of the sSFR of the SF population, also independent of $\sigone$ (\fig{histograms}B and D), may be an indication that the start of quenching occurs immediately upon becoming a satellite. Our finding that the features of the sSFR-$\sigone$ diagram vary smoothly with the environment (via $\fq$) may reflect the decreasing relative importance of self-quenching vs. external processes in more extreme environments. \subsection{Why is $\sigone$ of Quenched Galaxies Always High?} While our findings indicate that satellite quenching {\it begins} at a wide range of $\sigone$, satellites that have more or less {\it finished} their quenching all have the highest $\sigone$ \jwb{(compared to SF and GV galaxies) in a given mass bin}. \jwb{ As detailed in \sec{introduction}, it has been demonstrated that quenching is strongly associated with early-type morphology for the galaxy population as a whole, which is dominated by field galaxies. However, we have shown here that even in the densest environments, there are almost no quenched satellites with low $\sigone$. This is a surprising result in light of the various proposed satellite quenching mechanisms.
This result rules out the simple vertically downward quenching track on the sSFR-$\sigone$ diagram, such as might be expected, for example, from the cut-off of accretion \citep{dekbir06} and the stripping of cold gas via ram pressure. Both of these scenarios are expected to quench satellites quickly regardless of $\sigone$ (although see \citealp{mccarthy08}). } \jwb{ To explain the high $\sigone$ of quenched satellites, we discuss here some of the remaining scenarios, which can be divided into two classes: those that involve the creation of new stars in the central kpc, and those in which the $\sigone$ value of {\it individual} galaxies remains intact. } Mechanisms for satellite quenching that involve the creation of new stars include galaxy-galaxy interactions (sometimes called ``harassment''), which build up the density in the inner kpc with each successive interaction (\citealp{lake98}). Ram pressure, usually thought to strip gas from a galaxy, can also potentially compress gas in the inner regions. Dekel et al. (in preparation) estimate that starting at 0.5$\Rvir$, ram pressure can increase $\sigone$ by up to a factor of 2 (what is needed to explain the sSFR-$\Ms$ diagram) if the gas fraction is $\sim 0.5$. \jwb{Such gas fractions are seen in galaxies of lower mass than those studied here, or at higher $z$, and thus this process may be relevant in those regimes. However, the galaxies in the GV observed here likely have much lower gas fractions, and so ram-pressure compression may not be responsible for the high $\sigone$ of quenched satellites relative to GV satellites observed here.} Tidal compression may prove to be more promising. Tidal forces in haloes with shallow profiles are compressive and can increase $\sigone$ by a factor of 2 even with gas fractions as low as 0.1 (\citealp{dekel03tidalcompression}, Dekel et al., in preparation). However, such tidal compression is only efficient within 0.1$\Rvir$.
In any case, our results imply that any quenching scenario that increases $\sigone$ must begin quenching (bringing the galaxies from the SF to the GV) {\it before} the increase of $\sigone$. This is indicated by the fact that the satellite quenching tracks increase in $\sigone$ only after galaxies have passed through the GV and are en route to the quenched region. It is also possible to explain the high apparent $\sigone$ of quenched galaxies without increasing $\sigone$ in individual galaxies. This is possible if galaxies lose mass during quenching and thus move from one mass bin to another. For example, tidal stripping of a satellite in a higher mass bin can strip both gas and stars, potentially lowering $\Ms$ significantly from the infall value (as much as 30-40\% or more per pericentre passage - \citealp{zentner03,taylor04}), but will leave $\sigone$ intact. Indeed, comparing the sSFR-$\sigone$ diagrams in \fig{ssfrvssigone}, $\sigone$ of the GV for satellites is always comparable to the $\sigone$ of the Q population in the immediately lower mass bin, supporting the plausibility of the stripping scenario. The position of the GV in the sSFR-$\sigone$ diagrams would imply that the start of quenching occurs before the stripping, which is reasonable considering that tidal stripping is most efficient at the orbital pericentre. This would be in line with the findings of \citet{pasquali10} who find that the mass-metallicity relation of satellites is offset toward lower mass. { One additional effect that can explain the high $\sigone$ of quenched galaxies without increasing $\sigone$ is one that has received little attention so far - namely, ``progenitor bias''. 
The idea is that at any given $\Ms$, Q galaxies that quench at earlier times are expected to be denser on average than those that quench at later times, simply because progenitor star-forming galaxies, their dark matter haloes and the whole Universe are denser at earlier epochs (e.g., \citealp{valentinuzzi10,carollo13,poggianti13,lillycarollo16}). Thus, the sSFR-$\sigone$ diagrams for the field population can be explained by the gradual displacement of the GV bridge \refc{toward lower $\sigone$} over time, filling the \refc{lower} end of the Q cloud. The evolution of the $\sigone$ vs. $\Ms$ relation of the whole Q galaxy population, averaged over all environments, shows a decrease of $\sim 0.3$ dex in $\sigone$ since a redshift of $z \sim 3$ (\citealp{barro15}), in good agreement with the width of the Q cloud. } As for the satellites, the morphology of the sSFR-$\sigone$ diagram could arise from the cumulative effects of $(1)$ a predominant role of field-like self-quenching at early epochs; and $(2)$ the switching-on of satellite-quenching at later epochs, which quenches at lower $\sigone$. (See also \citealp{gallart15}, who suggest that dwarf satellite morphologies are determined at birth.) {It is possible that the $\sigone$-increasing mechanisms (such as those described above) and the effect of progenitor bias work in concert. How important these effects are can, at least in part, be tested through direct measurement of sSFR-$\sigone$ diagrams at earlier epochs, and of their stellar populations (work in progress). } {\bf In summary}, we have used the sSFR-$\sigone$ diagram as a useful diagnostic of quenching among different populations of galaxies. The power of $\sigone$ to predict quenching that has been seen in bulk populations of galaxies is confirmed when galaxies are divided into field vs. satellites, and satellites in different haloes and at different group-centric distances.
The location of the transitioning galaxies (\ie, the GV) in the sSFR-$\sigone$ diagram varies smoothly with environment, $\sigone$ being lower for satellites in larger haloes and at smaller radial distances within same-mass haloes. We interpret this shift as indicating the relative importance in different environments of the field quenching track vs. the cluster quenching track. In all environments, the Q population always has high $\sigone$ relative to its mass. We have proposed two classes of scenario to explain the high $\sigone$ of Q satellites relative to the low $\sigone$ of GV (and SF) satellites. One class involves the creation of new stars in the inner kpc \jwb{(harassment, ram pressure compression and tidal compression)}, while the other does not involve such star formation \jwb{(tidal stripping causing significant mass loss, and progenitor bias)}. It is the goal of future work to distinguish between these possibilities.
arXiv:1607.06091 (2016-07)

1607/1607.03999_arXiv.txt
During its 5-year mission, the \textit{Kepler} spacecraft has uncovered a diverse population of planetary systems with orbital configurations ranging from single-transiting planets to systems of multiple planets co-transiting the parent star. By comparing the relative occurrences of multiple to single-transiting systems, recent analyses have revealed a significant over-abundance of singles. Dubbed the ``\textit{Kepler} Dichotomy,'' this feature has been interpreted as evidence for two separate populations of planetary systems: one where all orbits are confined to a single plane, and a second where the constituent planetary orbits possess significant mutual inclinations, allowing only a single member to be observed in transit at a given epoch. In this work, we demonstrate that stellar obliquity, excited within the disk-hosting stage, can explain this dichotomy. Young stars rotate rapidly, generating a significant quadrupole moment which torques the planetary orbits, with inner planets influenced more strongly. Given nominal parameters, this torque is sufficiently strong to excite significant mutual inclinations between planets, enhancing the number of single-transiting planets, sometimes through a dynamical instability. Furthermore, as hot stars appear to possess systematically higher obliquities, we predict that single-transiting systems should be relatively more prevalent around more massive stars. We analyze the \textit{Kepler} data and confirm this signal to be present.
In our Solar System, the orbits of all 8 confirmed planets are confined to the same plane with an RMS inclination of $\sim$1-2$^\circ$, inspiring the notion that planets arise from protoplanetary disks \citep{Kant1755,Laplace1796}. By inference, one would expect extrasolar planetary systems to form with a similarly coplanar architecture. However, it is unknown whether such low mutual inclinations typically persist over billion-year timescales. Planetary systems are subject to many mechanisms capable of perturbing coplanar orbits out of alignment, including secular chaos \citep{Laskar1996,Lithwick2012}, planet-planet scattering \citep{Ford2008,Beauge2012} and Kozai interactions \citep{Naoz2011}. Despite numerous attempts, mutual inclinations between planets are notoriously difficult to measure directly \citep{Winn2015}. In light of this, investigations have turned to indirect methods. For example, by comparing the transit durations of co-transiting planets, \citet{Fabrycky2014} inferred generally low mutual inclinations ($\sim1.0-2.2^\circ$) within closely-packed \textit{Kepler} systems. Additionally, within a subset of systems (e.g., 47 Uma and 55 Cnc) stability arguments have been used to limit mutual inclinations to $\lesssim40^\circ$ \citep{Laughlin2002,Veras2004,Nelson2014}. On the other hand, \citet{Dawson2014a} have presented indirect evidence of unseen, inclined companions based upon peculiar apsidal alignments within known planetary orbits. Obtaining a better handle on the distribution of planetary orbital inclinations would lend vital clues to planet formation and evolution. Recent work has attempted to place better constraints upon planet-planet inclinations at a population level, by comparing the number of single to multi-transiting systems within the \textit{Kepler} dataset \citep{Johansen2012,Ballard2016}. 
Owing to the nature of the transit technique, an intrinsically multiple planet system might be observed as a single if the planetary orbits are mutually inclined. An emerging picture is that although a distribution of small $\sim 5^\circ$ mutual inclinations can explain the relative numbers of double and triple-transiting systems, a striking feature of the planetary census is a significant over-abundance of single-transiting systems. Furthermore, the singles generally possess larger radii (a greater fraction with $R_{\textrm{p}}\gtrsim 4$ Earth radii), drawing further contrast. The problem outlined above has been dubbed the ``\textit{Kepler} Dichotomy,'' and is interpreted as representing at least two separate populations: one with low mutual inclinations, and another with large mutual inclinations whose members are observed as singles. The physical origin of this dichotomy remains unresolved \citep{Morton2014,Becker2016}. To this end, \citet{Johansen2012} proposed the explanation that planetary systems with higher masses undergo dynamical instability, leaving a separate population of larger, mutually inclined planets, detected as single transits. While qualitatively attractive, this model has two primary shortcomings. First, it cannot explain the excess of smaller single-transiting planets. Second, unreasonably high-mass planets are needed to induce instability within the required $\sim$Gyr timescales. Accordingly, the dichotomy's full explanation requires a mechanism applicable to a more general planetary mass range. In this paper we propose such a mechanism: the torque arising from the quadrupole moment of a young, inclined star. The past decade has seen a flurry of measurements of the obliquities, or spin-orbit misalignments, of planet-hosting stars \citep{Winn2010,Albrecht2012,Huber2013,Morton2014,Mazeh2015,Li2016}.
A trend has emerged whereby hot stars ($T_{\textrm{eff}}\gtrsim 6200$\,K) hosting hot Jupiters possess obliquities ranging from 0$^\circ$ to 180$^\circ$, as opposed to their more modestly inclined, cooler (lower-mass) counterparts. Further investigation has revealed a similar trend among stars hosting lower-mass and multiple-transiting planets \citep{Huber2013,Mazeh2015}. Most relevant to the \textit{Kepler} Dichotomy, \citet{Morton2014} concluded at 95\% confidence that single-transiting systems possess enhanced spin-orbit misalignment compared to multi-transiting systems. Precisely when these spin-orbit misalignments arose in each system's evolution is still debated \citep{Albrecht2012,Lai2012,Storch2014,Spalding2015}. However, the presence of stellar obliquities within currently coplanar, multi-planet systems hints at an origin during the disk-hosting stage \citep{Huber2013,Mazeh2015}. Indeed, many studies have demonstrated viable mechanisms for the production of disk-star misalignments, including turbulence within the protostellar core \citep{Bate2010,Spalding2014b,Fielding2015} and torques arising from stellar companions \citep{Batygin2012,Batygin2013,Spalding2014a,Lai2014,Spalding2015}. Furthermore, \citet{Spalding2015} proposed that differences in magnetospheric topology between high and low-mass T Tauri stars \citep{Gregory2012} may naturally account for the dependence of obliquities upon stellar (main sequence) $T_{\textrm{eff}}$. Crucially, if the star is inclined relative to its planetary system whilst young, fast-rotating and expanded \citep{Shu1987,Bouvier2013}, its quadrupole moment can be large enough to perturb a coplanar system of planets into a mutually-inclined configuration after disk dissipation. In what follows, we analyze this process quantitatively. 
First, we calculate the mutual inclination induced between two planets as a function of stellar oblateness ($J_2$), demonstrating a proof-of-concept that stellar obliquity suffices as a mechanism for over-producing single transiting systems. Following this, we use N-body simulations to subject the famed, 6-transiting system \textit{Kepler}-11 to the quadrupole moment of a tilted, oblate star. We show that not only are the planetary orbits mutually inclined, but for nominal parameters the system itself can undergo a dynamical instability, losing 3-5 of its planets, with larger mass planets preferentially retained. In this way, we naturally account for the slightly larger observed size of singles \citep{Johansen2012}.
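The geometric effect underlying the dichotomy, namely that mutually inclined multi-planet systems are often observed as singles, can be illustrated with a simple Monte Carlo sketch. This is a toy model, not the paper's calculation: the Rayleigh distribution of mutual inclinations, the three assumed $a/R_\star$ values, and the sample sizes are illustrative assumptions. A planet transits when $a\,|\cos\theta| \le R_\star$, where $\theta$ is the angle between its orbit normal and the line of sight.

```python
import numpy as np

rng = np.random.default_rng(42)

def transit_counts(n_sys=200_000, sigma_deg=5.0, a_over_rstar=(20, 30, 45)):
    """Number of transiting planets per system, for Rayleigh-distributed
    mutual inclinations about a randomly (isotropically) oriented system plane."""
    # beta = angle between the system's reference normal and the line of sight
    cos_beta = rng.uniform(-1, 1, n_sys)
    sin_beta = np.sqrt(1 - cos_beta**2)
    counts = np.zeros(n_sys, dtype=int)
    for a_r in a_over_rstar:
        i_m = np.deg2rad(rng.rayleigh(sigma_deg, n_sys))  # mutual inclination
        phi = rng.uniform(0, 2 * np.pi, n_sys)            # random node
        # angle between this planet's orbit normal and the line of sight
        cos_theta = cos_beta * np.cos(i_m) + sin_beta * np.sin(i_m) * np.cos(phi)
        counts += np.abs(cos_theta) < 1.0 / a_r           # transit condition
    return counts

for sig in (1.0, 5.0, 20.0):
    c = transit_counts(sigma_deg=sig)
    singles, multis = np.sum(c == 1), np.sum(c >= 2)
    print(f"sigma = {sig:4.1f} deg   singles/multis = {singles / multis:.2f}")
```

The singles-to-multis ratio grows strongly with the inclination scale, mimicking the observed over-abundance of singles without any change in intrinsic multiplicity.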
This paper investigates the origin of the ``\textit{Kepler} Dichotomy,'' within the context of primordially-generated spin-orbit misalignments. We have shown that the quadrupole moment of such misaligned, young, fast-rotating stars is typically capable of exciting significant mutual inclinations between the hosted planetary orbits. In turn, the number of planets available for observation through transit around such a star is reduced, either through dynamical instability or directly as a result of the mutual inclinations, leaving behind an abundance of single-transiting systems \citep{Johansen2012}. The outcome is an apparent reduction in multiplicity of tilted, hot stars, with their observed singles being slightly larger, as a consequence of many having undergone dynamical instabilities, in accordance with observations. Through the conclusions of this work, the origins of hot Jupiters, of compact \textit{Kepler} systems, of the \textit{Kepler} Dichotomy, and of spin-orbit misalignments are all placed within a common context. \begin{table*}[t] \centering \begin{tabular}{ |p{3cm}||p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}| } \hline \multicolumn{7}{|c|}{Kepler-11} \\ \hline Property & b & c & d & e & f & g\\ \hline Mass (Earth masses) & 1.9 & 2.9 & 7.3 & 8.0 & 2.0 & 8.0\\ Radius (Earth radii) & 1.80 & 2.87 & 3.12 & 4.19 & 2.49 & 3.33\\ $a$ (AU) & 0.091 & 0.107 & 0.155 & 0.195 & 0.250 & 0.466\\ Period (days) & 10.3 & 13.0 & 22.7 & 32.0 & 46.7 & 118.4 \\ \hline \end{tabular} \caption{The parameters of the \textit{Kepler}-11 system. Only upper limits have been placed on the mass of \textit{Kepler}-11g; we follow \citet{Lissauer2013} and adopt a best-fit mass of 8\,Earth masses.} \label{Kepler11} \end{table*}
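As a quick sanity check on the tabulated orbital elements, Kepler's third law can be applied to each row: in solar units, $M_\star = a^3/P^2$ with $a$ in AU and $P$ in years. The stellar mass is not quoted in the table, so the value recovered below is a derived illustration rather than a number from the paper.

```python
# (planet, a [AU], P [days]) taken from the Kepler-11 table above
planets = [
    ("b", 0.091, 10.3), ("c", 0.107, 13.0), ("d", 0.155, 22.7),
    ("e", 0.195, 32.0), ("f", 0.250, 46.7), ("g", 0.466, 118.4),
]

def implied_stellar_mass(a_au, p_days):
    """Kepler's third law in solar units: M* [Msun] = a^3 / P[yr]^2."""
    return a_au**3 / (p_days / 365.25)**2

masses = {name: implied_stellar_mass(a, p) for name, a, p in planets}
for name, m in masses.items():
    print(f"Kepler-11{name}: implied M* = {m:.3f} Msun")
```

All six rows imply a mutually consistent stellar mass close to a solar mass, confirming the internal consistency of the tabulated $a$ and $P$ values.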
arXiv:1607.03999 (2016-07)

1607/1607.06869_arXiv.txt
We investigate the conditions required for planet formation via gravitational instability (GI) and protoplanetary disk (PPD) fragmentation around M-dwarfs. Using a suite of 64 SPH simulations with $10^6$ particles, the parameter space of disk mass, temperature, and radius is explored, bracketing reasonable values based on theory and observation. Our model consists of an equilibrium, gaseous, and locally isothermal disk orbiting a central star of mass $\MStar=\MSol/3$. Disks with a minimum Toomre $Q$ of $Q_{min} \lesssim 0.9$ will fragment and form gravitationally bound clumps. \update{ Some previous literature has found $Q_{min} < 1.3-1.5$ to be sufficient for fragmentation. Increasing disk height tends to stabilize disks; when this effect is incorporated into an effective parameter $\Qeff\propto Q(H/R)^\alpha$ with $\alpha=0.18$, $\Qeff$ is sufficient to predict fragmentation. } Some discrepancies in the literature regarding $Q_{crit}$ may be due to different methods of generating initial conditions (ICs). A series of 15 simulations demonstrates that perturbing ICs slightly out of equilibrium can cause disks to fragment at higher $Q$. Our method for generating ICs is presented in detail. We argue that GI likely plays a role in PPDs around M-dwarfs and that disk fragmentation at large radii is a plausible outcome for these disks.
\label{sec:Introduction} The importance of gravitational instabilities (GI) in the evolution of protoplanetary disks (PPDs) and in planet formation remains hotly debated \citep{cameron1978,boss1997,durisen2007,boley2010,paardekooper2012}. In recent years, the core accretion (CA) plus gas capture model of giant planet formation has received much attention \citep{pollack1996,ppvi-helled}, but GI is still seen as a candidate for the direct formation of giant planets, especially at large orbital radii \citep{boley2009}. While CA gives a more natural explanation of terrestrial planet formation and small bodies, GI may be important for the formation of these objects via solid enhancement within spiral arms or fragments \citep{haghighipour2003}. \update{ GI may also play an important role during the embedded phase of star formation \citep{vorobyov2011}. } Understanding the role of GI in planet formation will require continued observation of PPDs \citep{andrews2005,isella2009,mann2015} and further theoretical work. Of primary importance are (i) disk cooling times \citep{gammie2001,rafikov2007,meru2011,meru2012}, which must be sufficiently short to allow density perturbations to grow against pressure support, and (ii) the Toomre $Q$ parameter \citep{toomre1964}: \begin{equation} \label{eq:ToomreQ} Q \equiv \frac{c_s \kappa}{\pi G \Sigma} \end{equation} where $c_s=\sqrt{\gamma k_B T/m}$ is the gas sound speed, $\kappa$ is the epicyclic frequency ($\kappa=\Omega$ for a massless disk), and $\Sigma$ is the disk surface density. As $Q$ decreases toward unity, PPDs become increasingly unstable, and if $Q$ becomes sufficiently small, disks will undergo fragmentation. The parameters required for fragmentation, such as disk mass ($\MDisk$), disk radius ($\RDisk$), and disk temperature ($T$), are constrained by the critical $Q$ required for fragmentation. 
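The definition of $Q$ above can be evaluated directly. The sketch below uses illustrative M-dwarf disk parameters (the 20 K temperature, 50 AU radius, 40 g cm$^{-2}$ surface density, and $\mu = 2.3$ mean molecular weight are assumptions for the example, not values fitted in this work) and also reports the $(H/R)^{0.18}$ factor entering the effective parameter $\Qeff$; note that the text gives $\Qeff$ only up to proportionality.

```python
import math

# CGS constants
G    = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
K_B  = 1.381e-16      # Boltzmann constant [erg K^-1]
M_H  = 1.673e-24      # hydrogen mass [g]
MSUN = 1.989e33       # solar mass [g]
AU   = 1.496e13       # astronomical unit [cm]

def toomre_q(m_star, r, temperature, sigma, gamma=1.0, mu=2.3):
    """Q = c_s * kappa / (pi * G * Sigma), with kappa ~ Omega for a
    low-mass disk and an isothermal (gamma = 1) sound speed."""
    c_s = math.sqrt(gamma * K_B * temperature / (mu * M_H))
    omega = math.sqrt(G * m_star / r**3)       # Keplerian: kappa ~ Omega
    return c_s * omega / (math.pi * G * sigma)

# illustrative (not fitted) M-dwarf disk parameters
m_star, r, temp, sigma = MSUN / 3.0, 50 * AU, 20.0, 40.0  # g, cm, K, g/cm^2
q = toomre_q(m_star, r, temp, sigma)
c_s = math.sqrt(K_B * temp / (2.3 * M_H))
h_over_r = c_s / (math.sqrt(G * m_star / r**3) * r)       # H/R = c_s/(Omega R)
print(f"Q = {q:.2f}, H/R = {h_over_r:.2f}, (H/R)^0.18 = {h_over_r**0.18:.2f}")
```

With these numbers the disk sits near $Q \sim 1$, i.e. close to the fragmentation boundary discussed below; doubling $\Sigma$ halves $Q$.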
Some previous studies have found values of $Q_{crit} = 1.3-1.5$ \citep{boss1998,boss2002,mqws}, although it has been noted that $Q$ can drop below unity and the disk may still tend to a self-regulating state \citep{boley2009}. \update{ Determining the parameters required for fragmentation is complicated by issues of resolution. The constant ($\beta$) cooling simulations of \cite{meru2011} demonstrated non-convergence of SPH simulations. Further work \citep{meru2012,rice2012} suggested artificial viscosity is to blame. Work is underway to investigate this problem; however, resolution-dependent effects are still poorly understood in SPH simulations of PPDs \citep{rice2014}. } Previous work has tended to focus on PPDs around solar mass stars. Motivated by the large population of low mass stars, we study GI around M-dwarfs with mass $\MStar=\MSol/3$. Around 10\% of known exoplanets orbit M-dwarfs \citep{exoplanets.org}. Due to selection effects of current surveys such as Kepler \citep{borucki2010}, this is expected to be a large underestimate of the actual population. Recent discoveries show that disks around M dwarfs and brown dwarfs are different from those around solar analogs: the mass distribution falls off more slowly with radius, and is denser at the midplane. These differences change disk chemistry and the condensation sequence. M-dwarf disks are also less massive and survive longer \citep{apai2009,apai2010}. Core accretion timescales, which scale as the orbital period, are long around M-dwarfs. Because the stars are much lower in luminosity, their disks are substantially cooler. Additionally, planets orbiting nearby M-dwarfs are likely to be the first smaller planets spectroscopically characterized \citep{seager2015}. \update{ The simulations of \cite{boss2006a} indicate that GI is able to form gas giants around M-dwarfs.
\cite{boss2006b} and \cite{boss2008} even argue that super earths around M-dwarfs can be explained as gas giants, formed via GI, and stripped of their gaseous envelopes by photoevaporation. } In this paper we explore the conditions required for disk fragmentation under GI around M-dwarfs. Previous studies have found a range of values of the $Q_{crit}$ required for disk fragmentation \citep{boss1998,boss2002,mayer2008,boley2009}. Discrepancies may be due to different equations of state (EOS), cooling algorithms, numerical issues such as artificial viscosity prescriptions, Eulerian vs. Lagrangian codes, and initial conditions (ICs). ICs close to equilibrium are non-trivial to produce, and so we explore the dependence of simulations of gravitationally unstable disks on their ICs. \update{ We focus on probing disk fragmentation around M-dwarfs, which remains poorly studied, and on the importance of ICs. These warrant a simple, well-understood isothermal EOS. We therefore probe the Toomre $Q$ required for fragmentation while leaving the question of the cooling required for fragmentation for future work. } We begin in \S\ref{sec:ICgen} by presenting our method for generating equilibrium initial conditions for smoothed-particle hydrodynamic (SPH) simulations of PPDs, with particular care taken in calculating density and velocity profiles. \S\ref{sec:SetOfRuns} describes the suite of simulations presented here and discusses the theoretical and observational motivations behind our disk profiles. \S\ref{sec:FragmentationAnalysis} presents our analysis of disk fragmentation around M-dwarfs and discusses the importance of ICs in simulations of unstable disks. \S\ref{sec:Clumps} presents our method for finding and tracking gravitationally bound clumps and discusses clump formation in our simulations. We present our discussion in \S\ref{sec:Discussion}.
We consider the effects of thermodynamics and ICs on fragmentation in PPD simulations and argue that GI should play an important role in PPDs around M-dwarfs and that we expect disk fragmentation at large radii to occur around many M-dwarfs.
\label{sec:Discussion} \subsection{Thermodynamics} \label{sec:ThermodynamicsDiscussion} For these simulations we used a locally isothermal approximation for several reasons. We wished to perform a large scan of parameter space without compromising resolution too strongly. A computationally fast isothermal EOS is straightforward to implement. We also desired to build on previous work and to extend it to poorly studied M-dwarf systems. Our work here is directed at exploring the dependence of the fragmentation boundary on stellar/disk mass, disk height, and ICs. We leave the dependence on EOS for future work. \update{ A non-isothermal EOS introduces non-trivial numerical issues, especially in the context of SPH simulations of protoplanetary disks. Unwarranted, poorly understood heating terms, especially from artificial viscosity (AV), are introduced into the energy equation. Previous results \citep{meru2011,meru2012,lodato2011,rice2012,rice2014} and our own initial tests indicate that in the context of PPDs, AV heating can dominate disk thermodynamics. Non-isothermal PPD simulations may not converge \citep{meru2011}, whereas in Appendix~\ref{appendix:Convergence} we demonstrate that our approach does converge. It is also unclear how rapidly disks will radiatively cool, an important parameter for the possibility of disk fragmentation \citep{gammie2001,rafikov2005,rafikov2007}. We hope to investigate these effects in future work. } The isothermal approximation used for these simulations limits the scope of our results. An isothermal EOS would well approximate a disk where stellar radiation, viscous accretion heating, and radiative losses to infinity are nearly balanced and control the temperature of the disk. Furthermore, temperature is independent of density, which is only appropriate for an optically thin disk.
The isothermal approximation applies only to scenarios where dynamical timescales are much longer than the timescales for heating/cooling back to thermal equilibrium with the background. \update{ These conditions may not hold in the disks under consideration. This experiment therefore does not realistically follow the thermal evolution of the disk. We are limited to a preliminary investigation into the large-scale dynamics of the disk before the putative equilibrium temperature profile would be expected to be strongly altered. During the initial stages of disk evolution, the relevant dynamical timescales and length-scales are of order the orbital period and the disk radius, respectively. During this stage, the isothermal EOS can still provide insight into the global dynamics of a GI disk. However, once clumps form, the isothermal approximation no longer provides much insight. Clumps should get hot as they collapse. The dynamical timescales of dense clumps will be short as they accrete matter, decouple from the disk, and scatter with other clumps. Pressure support of clumps, which is poorly captured by an isothermal EOS, } should tend to increase their size and their coupling to the disk, meaning that the violent fragmentation of disks after initial clump formation which we see in our simulations may not be the final state of a typical fragmenting disk. Clumps which are sufficiently dense may decouple from the disk enough to experience strong shocks and tidal interactions which will cause heating. These processes will strongly influence clump growth, evolution, and survival, all of which are still under investigation \citep{nayakshin2010,galvagni2012}. We therefore limit ourselves to discussing the early stages of clump formation. Our results indicate that the critical value of $Q_{min} \lesssim 0.9$ required for fragmentation is significantly lower than some previous results, which found closer to $Q_{min}\lesssim 1.5$.
Although this makes requirements for fragmentation more stringent, it does not rule out GI and disk fragmentation as important mechanisms during planet formation in PPDs around M-dwarfs. \subsection{Previous results} \label{sec:ICDiscussion} We find that for most disks, $Q \lesssim 0.9$ is required for disk fragmentation. Re-parameterizing $Q$ as $Q_{eff}$ to include the stabilizing effect of disk height provides a more precise way to predict fragmentation. The ratio $Q_{eff}/Q$ can vary by 30\% for reasonable disk parameters. This ratio will vary even more when considering solar-type stars in addition to M-dwarfs. However, $Q$ still provides a reasonable metric for disk fragmentation. \update{Other isothermal studies have found different boundaries. In contrast to our results, \cite{nelson1998} found the threshold to be $Q\leq1.5$.} \cite{boss1998} found $Q$ as high as 1.3 would fragment. \cite{boss2002} found, using an isothermal EOS or diffusive radiative transfer, that $Q=1.3-1.5$ would fragment. \cite{mqws} found $Q=1.4$ isothermal disks would fragment. \cite{pickett2003} found that cooling a disk from $Q=1.8$ to $Q=0.9$ caused it to fragment. Their clumps did not survive, although, as they note, that may be due to numerical issues. \cite{boley2009} found that disks could be pushed below $Q=1$ by mass loading and still not fragment, by transporting matter away from the star and thereby decreasing $\Sigma$ and increasing $Q$. \update{It should be noted that some of these simulations were run at much lower resolution than ours. Differences may also be due in part to simulation methods: using cylindrical grids, spherical grids, or SPH methods; applying perturbations; or even 2D \citep{nelson1998} vs. 3D simulations.} Since not all details of previous work are available, the source of the discrepancy in critical $Q$ values is uncertain. One source of discrepancy is simply how $Q$ is calculated.
As mentioned in \S\ref{sec:FragmentationAnalysis}, approximating $Q$ by ignoring disk self-gravity and pressure forces can overestimate $Q$ at the 10\% level for heavy disks. The discrepancy may also be due to the different methods of constructing equilibrium disks. As demonstrated in \S\ref{sec:SensitivityToICs}, overestimating velocities by less than a percent can force a disk to fragment. Disks near the fragmentation boundary are very sensitive to ICs. \secondupdate{Initial conditions are not in general available for previous work; however, we note that some studies appear to display a rapid evolution of $Q$ at the beginning of the simulation. For example, some runs of \cite{pickett2003} evolve from $Q_{min}=1.5$ to $Q_{min}=1$ in fewer than 3~ORPs (see their Figure~14). Figure~1 of \cite{mqws} shows two isothermal simulations which evolve from a $Q_{min}$ of 1.38 and 1.65 to $Q_{min}=1$ in fewer than 2~ORPs. This is indicative of ICs which are out of equilibrium. In contrast, even our very unstable disks display a remarkably smooth and gradual initial evolution. Figure~\ref{fig:QeffvsTime} shows the behavior of the minimum $\Qeff$ (normalized by its initial value) for our fragmenting runs as a function of time until fragmentation. For all the runs, $\Qeff$ decreases gradually for most of the simulation until dropping rapidly shortly before the disk fragments. For our runs near the fragmentation boundary, this decrease is much more gradual and much less pronounced than for the runs of \cite{pickett2003} or \cite{mqws} mentioned above. In our case, a much smaller change in $Q$ takes around 10~ORPs. $\Qeff$ evolves even more slowly for non-fragmenting runs.
$Q_{min}$ follows a similar behavior, although with more scatter (in large part because $Q$ does not determine the timescale until fragmentation as well as $\Qeff$ does).} \begin{figure} \includegraphics[width=\columnwidth]{\figfolder/figure11.pdf} \caption{\secondupdate{ Minimum \mbox{$\Qeff$} normalized by the initial minimum \mbox{$\Qeff$} vs fraction of time until fragmentation for all the fragmenting runs. The average of these runs is plotted in red. Disk fragmentation occurs at \mbox{$t/t_{fragment}=1$}. All runs follow similar trajectories in this plot, even though a significant range of initial \mbox{$\Qeff$} and \mbox{$t_{fragment}$} values are represented here (all the fragmenting runs in Fig.~\ref{fig:Runtime} are presented here). The simulations undergo an initially gradual decrease in \mbox{$\Qeff$} which steepens sharply shortly before fragmentation.} \label{fig:QeffvsTime}} \end{figure} However, it is not certain how close to equilibrium ICs should be to capture the relevant physics of PPDs. Actual disks are constantly evolving from the early stages of star formation until the end of the disk lifetime. We chose to use disks as close to equilibrium as possible, seeded only with SPH Poisson noise, to avoid introducing numerical artifacts. Some authors introduce controllable density perturbations. If sufficiently large, these may serve to ameliorate the problems mentioned above by explicitly having fragmentation be driven by physically reasonable spiral modes (e.g.~\cite{boss2002}) or intentionally large random perturbations (e.g.~\cite{boley2009}). Others have considered mass loading as a means to grow to low $Q$ \citep{boley2009}. \subsection{GI in PPDs} Observed disks around M-dwarfs do not appear to have low enough inferred $Q_{min}$ values to be sufficiently unstable for fragmentation under GI; however, this is what would be expected given the short timescales for the fragmentation of an unstable disk.
Reported disk ages are of order $10^6$~years \citep{haisch2011}, orders of magnitude longer than fragmentation timescales, making the observation of a highly gravitationally unstable disk unlikely. This is a strong selection effect on observed disk parameters. Although fragmentation timescales are very rapid, disks may persist much longer in moderately unstable configurations where GI drives large-scale structure but which have not grown sufficiently unstable as to be prone to fragmentation. Early work is being done on trying to observe GI-driven structures, but with current instrumentation such structures will be difficult to resolve \citep{douglas2013,evans2015}. As shown in Figure~\ref{fig:FragBound}, the fiducial values for disk parameters adopted here place observed disks near the boundary for fragmentation. The fact that observations indicate normal disk parameters which are close to the boundary, rather than orders of magnitude away, suggests it is plausible that a significant portion of PPDs around M-dwarfs will undergo fragmentation. This would predict a sharp transition in the distribution of inferred $\Qeff$ values around $\Qeff=1$. Older disks tend to expand radially \citep{isella2009}, thereby decreasing $\Sigma$, increasing $Q$, and pushing them away from the $\Qeff=1$ boundary. We note that while $Q$ is a reasonably strong predictor of fragmentation, disk height is an additional parameter worth measuring to predict fragmentation. Given the results of these isothermal simulations, we expect GI to play a large role in the early stages of planet formation \update{ around M-dwarfs. The exact role of stellar mass/type in fragmentation remains unclear. The parameter space we scanned is sufficiently large that adding an extra dimension was prohibitive; we therefore studied only one stellar mass. Future work should be able to determine what stars are the most suitable for fragmentation.
} Once large-scale density perturbations are formed via GI, the fate of the disk remains unclear. Future work should include more sophisticated thermodynamics to follow the evolution of the gaseous component of the disk, to better determine under what conditions clumps will form and what is required for them to survive. Additionally, decreasing the resolution in our isothermal SPH simulations appears to drive fragmentation (see~\S\ref{appendix:Convergence}). The importance of resolution in simulations is subject to much investigation and should be further pursued (e.g.~\cite{meru2011,meru2012,cartwright2009}). Planet formation will of course require the concentration of solids as well, so including dust in simulations of young PPDs will be required. In a fully 3D, highly non-axisymmetric environment, we may investigate how GI affects solids. Dust enhancement through pressure gradients, dust evolution through collisions and coagulation, and dust coupling to disk opacity and cooling will all strongly affect prospects for planet formation. Methods for dust dynamics in SPH simulations of PPDs have been proposed recently \citep{price2012a,price2015}, and we hope to explore solid/gas interactions in the future.
16
7
1607.06869
1607
1607.03482_arXiv.txt
\noindent We simplify the nonlinear equations of motion of charged particles in an external electromagnetic field that is the sum of a plane travelling wave \ $F_t^{\mu\nu}(ct\!-\!z)$ \ and a static part \ $F_s^{\mu\nu}(x,y,z)$: \ by adopting the light-like coordinate $\xi=ct\!-\!z$ instead of time $t$ as an independent variable in the Action, Lagrangian and Hamiltonian, and deriving the new Euler-Lagrange and Hamilton equations accordingly, we make the unknown $z(t)$ disappear from the argument of $F_t^{\mu\nu}$. We first study and solve the single-particle equations in a few significant cases of extreme acceleration. In particular, we obtain a rigorous formulation of a {\it Lawson-Woodward}-type (no-final-acceleration) theorem and a compact derivation of {\it cyclotron autoresonance}, besides new solutions in the presence of uniform $F_s^{\mu\nu}$. We then extend our method to plasmas in hydrodynamic conditions and apply it to plane problems: the system of (Lorentz-Maxwell+continuity) partial differential equations may be partially solved, or sometimes even completely reduced to a family of decoupled systems of ordinary ones; this occurs e.g. with the impact of the travelling wave on a vacuum-plasma interface (which may produce the {\it slingshot effect}). Our method can be seen as an application of the light-front approach. Since Fourier analysis plays no role in our general framework, the method can be applied to all kinds of travelling waves, ranging from almost monochromatic to so-called ``impulses'', which contain a few, one, or even no complete cycles.
In its general form, the equation of motion of a charged particle in an external electromagnetic field $F^{\mu\nu}\!=\!\partial^\mu A^\nu\!-\!\partial^\nu A^\mu$ is non-autonomous and highly nonlinear in the unknowns $\bx(t),\Bp(t)$: \bea \ba{l} \displaystyle\dot\Bp(t)=q\bE[ct,\bx(t)] + \frac{\Bp(t) }{\sqrt{m^2c^2\!+\!\Bp^2(t)}} \wedge q\bB[ct,\bx(t)] ,\\[6pt] \displaystyle \dot \bx(t) =\frac{c\Bp(t) }{\sqrt{m^2c^2\!+\!\Bp^2(t)}} . \ea \label{EOM} \eea Here \ $m,q,\bx,\Bp$ \ are the rest mass, electric charge, position and relativistic momentum of the particle, \ $\bE=-\partial_t\bA/c-\nabla A^0$ and $\bB=\nabla\!\wedge\!\bA$ are the electric and magnetic fields, and $(A^\mu)=(A^0,-\bA)$ is the electromagnetic (EM) potential 4-vector ($E^i=F^{i0}$, $B^1=F^{32}$, etc.; we use Gauss CGS units). Usually, the analytical study of (\ref{EOM}) is somewhat simplified under one or more of the following physically relevant conditions: $F^{\mu\nu}$ are constant (i.e. a static and uniform EM field) or vary ``slowly'' in space or time; $F^{\mu\nu}$ are ``small'' (so that nonlinear effects in the amplitudes are negligible); $F^{\mu\nu}$ are monochromatic waves or slow modulations of the latter; the motion remains non-relativistic.\footnote{In particular, standard textbooks of classical electrodynamics like \cite{Jackson,PanofskyPhillips,Chen74} discuss the solutions only under a constant or a slowly varying (in space or time) $F^{\mu\nu}$; in \cite{LanLif62} also under an arbitrary purely transverse wave (see section \ref{LW}), or a Coulomb electrostatic potential. } The amazing developments of laser technologies (especially {\it chirped pulse amplification} \cite{StriMou85, PerMou94,MouTajBul06}) have made available compact sources of extremely intense (up to $10^{23}\,$W/cm$^2$) coherent EM waves; the latter can also be concentrated in very short laser pulses (tens of femtoseconds), or superposed on very strong static EM fields. 
Even more intense and short laser pulses will be produced in the near future through new technologies (thin-film compression, relativistic mirror compression, coherent amplification networks \cite{MouMirKhaSer14,TajNakMou17}). One of the main motivations behind these developments is the enhancement of the Laser Wake Field Acceleration (LWFA) mechanism\footnote{In the LWFA, laser pulses in a plasma produce {\it plasma waves} (i.e. waves of huge charge density variations) via the {\it ponderomotive force} (see section \ref{LW}); these waves may accelerate electrons to ultrarelativistic regimes through extremely high acceleration gradients (such as 1 GV/cm, or even larger).} \cite{Tajima-Dawson1979,Gorbunov-Kirsanov1987,Sprangle1988}, with a host of important applications (ranging from cancer therapy to X-ray free-electron lasers, radioisotope production, high-energy physics, etc.; see e.g. \cite{EsaSchLee09,TajNakMou17} for reviews). Extreme conditions occur also in a number of violent astrophysical processes (see e.g. \cite{TajNakMou17} and references therein). The interaction of isolated electric charges or continuous matter with such coherent waves (and, possibly, static EM fields) is characterized by effects so fast, huge, highly nonlinear and ultra-relativistic that the mentioned simplifying conditions are hardly fulfilled, and the standard approximation schemes are seriously challenged. Alternative approaches are therefore desirable. Here we develop an approach that is especially fruitful when the wave part of the EM field can be idealized as an external plane travelling wave $F_t^{\mu\nu}(ct\!-\!z)$ (where $\bx\!=\!x\bi\!+\!y\bj\!+\!z\bk$, with suitable Cartesian coordinates) in the spacetime region $\Omega$ of interest (i.e., where we are interested in following the worldlines of the charged particles). This requires that the initial wave be of this form, and that radiative corrections, curvature of the wavefront, and diffraction effects be negligible in $\Omega$. 
Normally these conditions can be fulfilled in vacuum; sometimes also in low-density matter (even in the form of a plasma, see section \ref{Plasmas}) for short times after the beginning of the interaction with the wave.\footnote{Causality helps in the fulfillment of these requirements: We can assign the initial conditions for the system of dynamic equations on the $t=t_0$ Cauchy hyperplane ${\sf S}_{t_0}$, where $t_0$ is the time of the beginning of wave-matter interaction. In a sufficiently small region $\D_{\bx}\subset {\sf S}_{t_0}$ around any point $\bx$ of the wave front the EM wave is practically indistinguishable from a plane one $F_t$. Therefore the solutions induced by the real wave and by its plane idealization $F_t$ will be practically indistinguishable within the future Cauchy development $D^+(\D_{\bx})$ of $\D_{\bx}$. } The starting point is the (rather obvious) observation that, since no particle can reach the speed of light, the function $\tilde \xi(t)=ct-z(t)$ is strictly increasing and therefore we can adopt $\xi=ct-z$ as a parameter on the worldline of the particle. Integrating over $\xi$ in the particle action functional, applying Hamilton's principle and the Legendre transform, we thus find Lagrange and Hamilton equations with $\xi$ as the independent variable. Since the unknown $\hat\bx(\xi)=\bx(t)$ no longer appears in the argument of the wave part $F_t$ of the EM field $$ \hat F^{\mu\nu}(\xi,\hat\bx)=F_t^{\mu\nu}(\xi)+F_s^{\mu\nu}(\hat\bx), $$ $F_t(\xi)$ acts as a known forcing term, and these new equations are simpler than the usual ones, where the unknown combination $ct\!-\!z(t)$ appears as the argument in $F_t^{\mu\nu}[ct\!-\!z(t)]$. 
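As a concrete check of this setup in the simplest case $F_s^{\mu\nu}=0$, one can integrate the standard equations of motion numerically for a linearly polarized plane wave $E_x=B_y=f(ct\!-\!z)$ and monitor $s=\gamma-p_z$ (units $q=m=c=1$), which in this pure-wave case is an exact constant of motion. The sketch below starts a particle at rest and verifies the conservation of $s$; the Gaussian-enveloped pulse profile and all numerical parameters are our own illustrative choices, not taken from the paper:

```python
import math

def f(xi, a0=1.0, sigma=5.0):
    """Transverse field profile E_x(xi) = B_y(xi) of the travelling wave
    (hypothetical Gaussian-enveloped pulse, our own choice)."""
    return a0 * math.exp(-xi**2 / (2.0 * sigma**2)) * math.cos(xi)

def deriv(state, t):
    """Lorentz equations (units q = m = c = 1), wave moving along +z."""
    x, z, px, pz = state
    gamma = math.sqrt(1.0 + px**2 + pz**2)
    vx, vz = px / gamma, pz / gamma
    fx = f(t - z)             # the wave is sampled at xi = ct - z
    return (vx, vz, fx * (1.0 - vz), fx * vx)

def rk4_step(state, t, dt):
    k1 = deriv(state, t)
    k2 = deriv(tuple(u + 0.5*dt*k for u, k in zip(state, k1)), t + 0.5*dt)
    k3 = deriv(tuple(u + 0.5*dt*k for u, k in zip(state, k2)), t + 0.5*dt)
    k4 = deriv(tuple(u + dt*k for u, k in zip(state, k3)), t + dt)
    return tuple(u + dt/6.0*(a + 2*b + 2*c + d)
                 for u, a, b, c, d in zip(state, k1, k2, k3, k4))

# Particle initially at rest at the origin; the pulse sweeps over it.
state, t, dt = (0.0, 0.0, 0.0, 0.0), -40.0, 0.005
while t < 40.0:
    state = rk4_step(state, t, dt)
    t += dt

x, z, px, pz = state
gamma = math.sqrt(1.0 + px**2 + pz**2)
s = gamma - pz      # light-like factor; exactly conserved when F_s = 0
assert abs(s - 1.0) < 1e-6
```

Since $d(\gamma-p_z)/dt = v_x f - v_x f = 0$ along exact trajectories, any drift of $s$ here measures only the integrator's discretization error, so the invariant doubles as an accuracy check.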
The {\it light-like relativistic factor} $s=d\xi/d(c\tau)$ (the light-like component of the momentum, in normalized units) plays the role of the Lorentz relativistic factor $\gamma=dt/d\tau$ in the usual formulation and has remarkable properties: all 4-momentum components are rational functions of it and of the transverse momentum; if the static electric and magnetic fields have only longitudinal components, then $s$ is practically insensitive to the fast oscillations of $F_t$. $s$ was introduced somewhat {\it ad hoc} in \cite{JPA,FioDeN16} (see also \cite{FioDeN16b,Fio16b}); here we clarify its meaning and role. We shall see that the dependence of the dynamical variables on $\xi$ allows a more direct determination of a number of useful quantities (such as the momentum, the energy gain, etc.) of the particle, either in closed form or by numerical resolution of the simplified differential equations; their dependence on $t$ can of course be recovered after determining $\hat z(\xi)$. The use of a light-like coordinate instead of $t$ as a possible `time' variable was first suggested by Dirac in \cite{Dir49} and lies at the basis of what is often denoted as the light-front formalism. The latter is today widely used in quantum field theory, and in particular in quantum electrodynamics in the presence of laser pulses; in the latter context it was first introduced in \cite{NevRoh71}. Its systematic use in classical electrodynamics is less common, though it is often used in studies of radiation reaction (see e.g. \cite{DiP08,KraNobJar13}), but almost exclusively with EM fields $F^{\mu\nu}$ consisting just of a travelling plane wave $F^{\mu\nu}_t(\xi)$; the motion of a classical charged particle in a generic external field of this type was determined in \cite{LanLif62} by solving the Hamilton-Jacobi equation (see section \ref{LW}). A recent exception is Ref. 
\cite{HeiIld17}, where some interesting superintegrable motions based on symmetric EM fields $F^{\mu\nu}$ not reducing to $F^{\mu\nu}_t(\xi)$ are determined. In other works $\xi$ has been adopted {\it ad hoc} to simplify the equation of motion of the particle in a particular EM field, e.g. in \cite{KolLeb63,Dav63} a monochromatic plane wave and a longitudinal magnetic field (which leads to the phenomenon of cyclotron autoresonance). The main purpose of this paper is therefore a systematic description and development of the light-front formalism in classical electrodynamics, both in vacuum and in plasmas; a number of significant applications are presented as illustrations of its advantages. Among the latter are also a few new general solutions in closed form in the presence of uniform static EM fields. The plan of the paper is as follows. In section \ref{GenForm} we first formulate the method for a single charged particle under a general EM field; the Hamiltonian and the Hamilton equations turn out to be {\it rational} in the unknowns $\hat \bx,\hat\Bpp,\hat s$. Then we apply it to the case in which the EM field is the sum $F=F_t\!+\!F_s$ of a static part and a travelling-wave part (section \ref{travelling waves on static fields}), and to the case in which the EM potential is independent of the transverse coordinates (section \ref{xiz}). In either case we prove several general properties of the solutions; in particular, we show that in the case of section \ref{xiz} integrating the equations of motion reduces to solving a Hamiltonian system with {\it one} degree of freedom; this can be done in closed form or numerically (depending on the case) in the wave-particle interaction region, and by quadrature outside it (as there the energy is conserved). 
In section \ref{Exact} we illustrate the method and these properties while determining the explicit solutions under a general EM wave superposed on various combinations of uniform static fields; these examples are exactly integrable and instructive for the issue of extreme acceleration. More precisely: we (re)derive in a few lines the solutions \cite{LanLif62,EbeSle68,EbeSle69} when the static electric and magnetic fields $\bE_s,\bB_s$ are zero (section \ref{LW}), or have only uniform longitudinal components (one or both: sections \ref{Ezconst}, \ref{xix}, \ref{longiEB}), or besides the latter have uniform transverse components fulfilling $\bBp_s\!=\!\bk\wedge\bEp_s$ (section \ref{Extension}); here $\perp$ denotes the component orthogonal to the direction $\bk$ of propagation of the pulse. Section \ref{LW} includes a rigorous statement (Corollary \ref{corollary1}) and proof of a generalized version \cite{TrohaEtAl99} of the so-called Lawson-Woodward no-go theorem \cite{Law84,Pal88,BocMooScu92,Pal95,EsaSprKra95}; the latter states that the final energy variation of a charged particle induced by an EM pulse is zero under some rather general conditions (motion in vacuum, zero static fields, etc.), in spite of the large energy variations during the interaction. To obtain large final energy variations one thus has to violate one of these general conditions. The case treated in section \ref{xix} yields the known and already mentioned phenomenon of cyclotron autoresonance, which we recall in appendix \ref{Cyclotron}; we solve in a few lines the equation of motion without the $\beta\simeq 1$ and monochromaticity assumptions of \cite{KolLeb63,Dav63}, i.e. in a generic plane travelling wave. By contrast, we have not found in the literature our general solutions for the cases treated in sections \ref{Ezconst}, \ref{longiEB}, \ref{Extension}. In section \ref{Plasmas} we show how to extend our approach to multi-particle systems and plasmas in hydrodynamic conditions. 
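The Lawson-Woodward statement can be illustrated numerically in the pure-wave case: there $dp_x/d\xi = f(\xi)$ in normalized units (because $dp_x/dt = f\,(1-v_z)$ and $d\xi/dt = 1-v_z$), so the net transverse momentum imparted by the pulse is simply the integral of the field profile over $\xi$. For a many-cycle pulse this integral is exponentially small, while a unipolar ``impulse'' with no complete cycle violates the hypotheses and yields a large net kick. A minimal sketch (the profiles and parameters are our own illustrative choices):

```python
import math

def net_kick(profile, lo=-60.0, hi=60.0, n=200000):
    """Net transverse momentum imparted by a pure plane wave, i.e. the
    integral of f(xi) d(xi) (trapezoidal quadrature; units q = m = c = 1)."""
    h = (hi - lo) / n
    total = 0.5 * (profile(lo) + profile(hi))
    for i in range(1, n):
        total += profile(lo + i * h)
    return total * h

sigma = 5.0
pulse   = lambda xi: math.exp(-xi**2 / (2*sigma**2)) * math.cos(xi)  # many cycles
impulse = lambda xi: math.exp(-xi**2 / (2*sigma**2))                 # no full cycle

kick_pulse = net_kick(pulse)      # analytic: sigma*sqrt(2*pi)*exp(-sigma**2/2)
kick_impulse = net_kick(impulse)  # analytic: sigma*sqrt(2*pi)

assert abs(kick_pulse) < 1e-4     # essentially no net acceleration
assert abs(kick_impulse - sigma * math.sqrt(2.0 * math.pi)) < 1e-6
```

The contrast ties the no-go theorem to the remark at the end of this introduction: Lawson-Woodward constrains multi-cycle pulses, whereas ``impulses'' with a non-vanishing integral of the field profile can transfer a large net momentum.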
In section \ref{sling} we specialize it to plane plasma problems; two components of the Maxwell equations can be solved in terms of the other unknowns, and if the plasma is initially in equilibrium we are even able to reduce the system of partial differential equations (PDEs), for short times after the beginning of the interaction with the EM wave, to a family (parametrized - in the Lagrangian description - by the initial position $\bX$ of the generic electron fluid element) of {\it decoupled} systems of Hamiltonian ODEs with {\it one} degree of freedom of the type considered in section \ref{xiz}; the latter can be solved numerically. The solutions of section \ref{sling} can be used to describe the initial motion of the electrons at the interface between the vacuum and a cold low-density plasma while a short laser pulse (in the form of a travelling wave) impacts normally onto the plasma. In particular one can derive the so-called {\it slingshot effect} \cite{FioFedDeA14,FioDeN16,FioDeN16b}, i.e. the backward acceleration and expulsion of high-energy electrons just after the laser pulse has hit the surface of the plasma; we illustrate these solutions in the simple case of a step-shaped initial plasma density. Finally, in the appendix we also show (section \ref{canontransf}) that the change of `time' $t\mapsto\xi$ induces a {\it generalized canonical} (i.e. {\it contact}) transformation, and we determine (section \ref{oscill}) rigorous asymptotic expansions in $1/k$ of definite integrals of the form \ $\int^\xi_{-\infty}dy\,f(y)e^{iky}$; \ the leading term is usually used to approximate slow modulations of monochromatic waves. However, we stress that, since Fourier analysis and related notions play no role in the general framework, our method can be applied to all kinds of travelling waves, ranging from (almost) monochromatic to so-called ``impulses'', which contain a few, one, or even no complete cycles. \tableofcontents
16
7
1607.03482
1607
1607.08874_arXiv.txt
We report on an exploratory project aimed at performing immersive 3D visualization of astronomical data, starting with spectral-line radio data cubes from galaxies. This work is done as a collaboration between the Department of Physics and Astronomy and the Department of Computer Science at the University of Manitoba. We are building our prototype using the 3D engine Unity, because of its ease of use for integration with advanced displays such as a CAVE environment, a zSpace tabletop, or virtual reality headsets. We address general issues regarding 3D visualization, such as loading and converting astronomy data, performing volume rendering on the GPU, and producing physically meaningful visualizations using principles of visual literacy. We discuss some challenges to be met when designing a user interface that allows us to take advantage of this new way of exploring data. We hope to lay the foundations for an innovative framework useful for all astronomers who use spectral-line data cubes, and we encourage interested parties to join our efforts. This pilot project addresses the challenges presented by frontier astronomy experiments, such as the Square Kilometre Array and its precursors.
One of the major challenges faced by astronomers is to digest the large amount of diverse data generated by modern instruments or simulations. To truly exploit the data, it is necessary to develop visualization tools that allow exploration of all their complexity and dimensions, an aspect that is unfortunately often overlooked. Although astronomical data is commonly obtained in projection, producing 2-dimensional images, the addition of spectral information can make the data 3-dimensional. Examples include observations from Integral Field Units in the optical regime as well as from microwave and radio telescope receivers; we use the latter for illustration below. Manipulating the resulting 3D data cubes in a meaningful way is a non-trivial task that requires knowledge of both the physics at play and the visualization techniques involved. We have started an interdisciplinary project at the University of Manitoba, a collaboration of astrophysicists and computer scientists, to investigate the use of Virtual Reality (VR) environments. \paragraph{Radio data cubes of galaxies} An important astrophysical process is the emission by neutral hydrogen (HI) of a line with a rest-frame wavelength of 21 cm, which is detected with radio telescopes. Observations of this specific emission line are made at different wavelengths, in different receiver channels, which correspond to the same line shifted by the Doppler effect because of the motion of the emitting material \textendash{} hence the channels can be labelled as a velocity dimension. Another similar example is the microwave line emission of carbon monoxide gas (CO). If the source is spatially resolved, the resulting product is a 3D data cube that has 2~spatial dimensions and 1~velocity dimension. For our experiments we have chosen the galaxy NGC~3198, a type SB(rs)c spiral located at a distance of 9.5~Mpc. The object has been observed in the optical, infrared, and ultraviolet, as well as in the radio. 
We are using data from the THINGS survey\footnote{\url{http://www.mpia.de/THINGS/Overview.html}} obtained at the NRAO Very Large Array (VLA). The data cube is 1024$\times$1024 pixels by 72 velocity channels, of the order of 75~million points.\footnote{Astronomical data can of course be much bigger than this; we defer the handling of larger-than-memory data to future work. On this topic see \citet{Hassan2011b}.} A~typical desktop of a radio astronomer is shown in figure~\ref{fig:Karma}. The Karma software suite\footnote{\url{http://www.atnf.csiro.au/computing/software/karma/}} is still widely used despite being 20 years old and no longer actively maintained. The plots shown are a projection and a slice through the cube, thus reducing the dimensionality of the data. In this paper, we intend to visualize the entire data set. The Karma software does 3D volume rendering, and lets the user define the colour transfer functions in a precise way, but it offers very limited interactivity given its age. The next-generation tool is the Viewer application from CASA, the Common Astronomy Software Applications package; however, it does not currently support 3D rendering, which was not identified as a priority.\footnote{See a progress report at \url{https://science.nrao.edu/facilities/alma/alma-development-2015/VisualizationPortal.pdf}; the current focus is on porting the software to the cloud, see \citet{Rosolowsky2015b}.} Other specialized software, like GAIA or even ds9, offers some 3D modes.\footnote{Another approach is to use generic 3D visualization software (see a review of some options for radio astronomers in \citealt{Punzo2015a}), or to write custom programs using visualization libraries (e.g. S2PLOT by \citealt{Barnes2006a}).} However, all of this software is designed for the desktop, and none of it supports advanced displays. 
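The projections and slices produced by such desktop tools reduce the cube's dimensionality; in radio astronomy the standard reductions are moment maps (moment~0: velocity-integrated intensity; moment~1: intensity-weighted mean velocity). A minimal sketch of these reductions on a small synthetic cube (NumPy; the cube contents and sizes are illustrative toys, not the THINGS data):

```python
import numpy as np

# Synthetic spectral-line cube ordered (velocity channel, y, x),
# a scaled-down stand-in for the 1024 x 1024 x 72 THINGS cube.
nchan, ny, nx = 72, 64, 64
v = np.linspace(400.0, 900.0, nchan)          # channel velocities [km/s]
cube = np.zeros((nchan, ny, nx))

# Toy "rotating disk": the line centroid varies linearly across the image.
y, x = np.mgrid[0:ny, 0:nx]
v_cen = 650.0 + 2.0 * (x - nx / 2)            # input velocity field [km/s]
for k in range(nchan):
    cube[k] = np.exp(-0.5 * ((v[k] - v_cen) / 30.0) ** 2)

dv = v[1] - v[0]
mom0 = cube.sum(axis=0) * dv                             # integrated intensity
mom1 = (cube * v[:, None, None]).sum(axis=0) * dv / mom0 # velocity field

assert mom0.shape == (ny, nx) and mom1.shape == (ny, nx)
# The recovered moment-1 map should track the input velocity field.
assert np.allclose(mom1, v_cen, atol=0.5)
```

A full 3D rendering, by contrast, must handle all $\sim$75 million points at once, which is one reason the volume rendering in our prototype is performed on the GPU.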
\paragraph{The role and challenges of 3D in scientific discovery } In this paper we are interested in displaying the 3D data in actual 3D space, to get a holistic view, in the expectation that this will generate a more correct perception of the data and help build an intuition for it. We think this is important for quick interpretation, and also necessary to discover structures that were not anticipated \textendash{} we emphasize that our data are from observations, so they are poorly structured and their actual content is not known in advance. The next-generation radio facilities being developed, such as the SKA, will produce amounts of data that will require much progress not only in terms of hardware but also in terms of software and of the visualization pipeline. While it is anticipated that automated analysis systems will be put in place, the direct inspection of the data will remain critical to ensure proper operations and to foster discovery \citep{Hassan2011a}. The human brain is wired to analyze 3D environments, and we do have 3D displays, so it seems natural to use them to visualize our 3D data. In this regard, the interface between the machine and the human brain is the bottleneck in the interpretation of complex astronomical data: it was already pointed out by \citet{Norris1994a} that visualization tools have to be more user-friendly. These days there is (again) a lot of hope and momentum in the field of Virtual Reality (as well as in the field of Augmented Reality, or combinations thereof). It already has many professional applications, in the fields of engineering, architecture, marketing, training, health, and scientific visualization. 
But developing interfaces that allow for Natural User Interaction (NUI) in 3D is still an active field of research \textendash{} we note that in their review of solutions for astronomers, \citet{Punzo2015a} deliberately did not consider the new generation of cheaper 3D hardware (such as the Leap Motion or the Oculus Rift), because of its still uncertain fate and because of the lack of expertise with these new interfaces. We think that astronomers should embrace this new technology and develop the interfaces they need to take advantage of it. Producing tools and techniques that support and enhance our research is important, so that astronomy remains at the forefront of the field of visualization.
16
7
1607.08874
1607
1607.02215.txt
The traditional cultures of Aboriginal Australians include a significant astronomical component, perpetuated through oral tradition, ceremony, and art. This astronomical knowledge includes a deep understanding of the motion of objects in the sky, which was used for practical purposes such as constructing calendars and for navigation. There is also evidence that traditional Aboriginal Australians made careful records and measurements of cyclical phenomena, recorded unexpected phenomena such as eclipses and meteorite impacts, and could determine the cardinal points to an accuracy of a few degrees. Putative explanations of celestial phenomena appear throughout the oral record, suggesting traditional Aboriginal Australians sought to understand the natural world around them, in the same way as modern scientists, but within their own cultural context. There is also a growing body of evidence for sophisticated navigational skills, including the use of astronomically based songlines. Songlines are effectively oral maps of the landscape, and are an efficient way of transmitting oral navigational skills in cultures that do not have a written language. The study of Aboriginal astronomy has had an impact extending beyond mere academic curiosity, facilitating cross-cultural understanding, demonstrating the intimate links between science and culture, and helping students to engage with science.
\label{intro} \subsection{The European History of Aboriginal Astronomy} Lt. William Dawes arrived in Sydney Cove with the First Fleet on 26 January 1788. He took a greater interest in the local Darug people than any other officer of the First Fleet, and became an authority on their language and culture. His notebooks \citep{dawes} are the earliest detailed description of Aboriginal culture. He became close friends, and perhaps partners, with a Darug girl (Patyegarang) and learnt some of her language. Unfortunately he clashed with Governor Phillip, and was therefore unable to stay in the colony as he wished at the end of his three-year term. Dawes was also an astronomer, and naturally took an interest in Darug astronomy. His notebooks contain many Darug words for stars and planets but, curiously, contain no mention of Darug knowledge of the sky. It seems inconceivable that Dawes, a man curious about both astronomy and Darug culture, did not ask questions about the Darug understanding of the motion of celestial bodies, and yet he left us no record of the answers. His notebooks were lost for many years; they were recently found and are now available on-line \citep{dawes}, but are thought to be incomplete. It is still possible that one day his missing notebooks will be found, and will contain an account of Darug astronomy. His fellow officer Lieutenant-Colonel David Collins wrote of the Aboriginal inhabitants of Sydney: `Their acquaintance with astronomy is limited to the names of the sun and moon, some few stars, the Magellanic clouds, and the Milky Way. Of the circular form of the earth they have not the smallest idea, but imagine that the sun returns over their heads during the night to the quarter whence he begins his course in the morning.' \citep{collins98}. 
Even this fragment contains useful information, telling us that the Eora people believed that the Sun returned to the East {\it over} their heads rather than under the ground, as believed by most other groups. Another early account of Aboriginal astronomy came from the escaped convict William Buckley, who lived with the Wathaurung people from 1803 to 1835 \citep{morgan52}. The first published substantive accounts of Australian Aboriginal astronomy were by \cite{stanbridge57, stanbridge61}, who wrote about the sky knowledge of the Boorong people \citep[see also][]{hamacherfrew10}. This was followed by accounts \citep[e.g.][]{bates44, maegraith32, mathews94, parker05} of the sky knowledge of other groups, and also by derivative and interpretational works \citep{griffin23, kotz11, macpherson81}. Apart from incidental mentions of sky knowledge by other writers \citep[e.g.][]{bates44, berndt48, mathews94, tindale25}, there was then no substantial discussion of Aboriginal astronomy until two monumental works by Mountford. One of these volumes \citep{mountford56} was based on the 1948 Arnhem Land expedition, which he led, and established Aboriginal Astronomy as a serious subject. Unfortunately, the second volume \citep{mountford76} included accounts of sacred material that should not have been published. Following a court case, this volume was partially withdrawn from sale, and the resulting distrust of researchers by Aboriginal elders has significantly impeded subsequent work in this field. The studies by the eccentric Irishwoman Daisy Bates are potentially very important, because she wrote extensively on Aboriginal sky-beliefs, based on her immersion in an Aboriginal community \citep{bates25, bates44}, but her work is not often cited because errors of fact and judgement \citep{devries10} mar her writings, marking her as an unreliable witness. Nevertheless, there have been valuable attempts to deconvolve her errors from her writings \citep[e.g.][]{fredrick08, leaman}. 
Following Mountford's work, although there were several popular accounts \citep[e.g.][]{isaacs80, wells64}, there were no significant research studies until \cite{cairns88} suggested that Sydney rock art contained references to Aboriginal astronomy, \cite{clarke90} examined the Aboriginal astronomy of the people around Adelaide, and \cite{haynes90, haynes92} wrote extensively on the subject, suggesting that Australian Aboriginal people may be `the world's first astronomers'. Subsequently, \citet{johnson98} wrote an authoritative academic monograph on the astronomy of several language groups. However, most of these works were based on the interpretation of earlier academic texts, and few presented any new data obtained from Aboriginal sources or from fieldwork. Although evidence of astronomical knowledge has been found in many Aboriginal cultures, the best-documented example is undoubtedly that of the Wardaman people, largely because of Wardaman senior elder Bill Yidumduma Harney's enthusiasm to share his traditional knowledge with the wider world. In particular, a significant step forward was the publication of the book Dark Sparklers \citep{DS}, which contains an extremely detailed study of the sky knowledge and astronomical lore of the Wardaman people, at a level of detail that is unprecedented in the literature. Unfortunately only a small fraction of this knowledge can be presented in this review. By 2005, these detailed published accounts demonstrated that there was a significant body of Aboriginal astronomical knowledge, and that the art (e.g. Fig. \ref{art}), ceremonies, and oral traditions of many traditional Aboriginal cultures contain references to celestial bodies such as the Sun, Moon, and stars. However, very few peer-reviewed articles had been written. As a result, most archaeologists and anthropologists still regarded this as a `fringe' area, and it received little attention in mainstream archaeology or anthropology. 
\begin{figure}[h] \begin{center} \includegraphics[width=8cm]{DhuwarrinyYunupingu.jpg} \caption{A bark painting by Yolngu artist Dhuwarriny Yunupingu. All elements of it refer to stories involving astronomical constellations. The object in the centre may depict a comet. The crocodile at the bottom is the constellation Scorpius, and some other elements in a very similar painting are discussed by \citet[][Plate 188]{grogerwurm73}. This particular painting is modern ($\sim$2000 AD) but follows a traditional design. } \label{art} \end{center} \end{figure} From 2005 onwards, there was a rapid growth of articles on Aboriginal astronomy (see Fig. \ref{papers}), reflecting a rapid increase not only in interest in the subject but also in rigorous scholarly research. The International Year of Astronomy in 2009 saw an explosion of published literature on Aboriginal Astronomy \cite[e.g.][]{ED, n230}, including new information from fieldwork as well as the mining of existing texts. This increase in scholarly work was accompanied by a rapid growth of Aboriginal Astronomy outreach activities. These included an ABC TV `Message Stick' program devoted to Aboriginal Astronomy, public displays \citep[e.g.][]{goldsmith11}, the `First Astronomers' stage show which started at the Darwin Festival and then toured several arts festivals \citep{harney09b}, and, during 2009 alone, over 100 public talks and media interviews on Aboriginal Astronomy \citep{n254}. A highlight was a one-day symposium on Aboriginal Astronomy at the Australian Institute of Aboriginal and Torres Strait Islander Studies (AIATSIS) on 27 November 2009. This well-attended symposium at Australia's peak body on Aboriginal studies marked the beginning of the acceptance of Aboriginal Astronomy as a serious research area by mainstream academia. 
It also marked the launch of the `Ilgajiri' art exhibition of Indigenous paintings of sky-knowledge, which later blossomed into the `Shared Skies' exhibition, funded by the Square Kilometre Array organisation, and which has toured Australia, Europe, and South Africa, featuring Australian and South African paintings of traditional sky knowledge \citep{goldsmith14a}. Three PhD degrees have been awarded in this field \citep{kotz11, hamacher11a, goldsmith14b}, and three Masters degrees \citep{morieson96, fredrick08, fuller14d}, and there are now several graduate students enrolled at the Universities of New South Wales (UNSW) and Western Sydney (WSU). Nevertheless, the subject field is currently small enough that this review can aim to summarise {\bf every} peer-reviewed paper on Aboriginal Astronomy, and also all non-peer-reviewed publications containing significant original research. \begin{figure}[h] \begin{center} \includegraphics[width=8cm]{papers.png} \caption{The growth of Aboriginal astronomy literature. The plot shows the number of papers per decade cited in this review. The number for the current decade has been multiplied by 1.4 to correct for the incomplete decade.} \label{papers} \end{center} \end{figure} Most early publications on Aboriginal Astronomy were primarily concerned with ceremonial aspects, although it was well-established that the stars were used for calendars and timing of harvesting food sources. \cite{haynes92} first suggested that the Australian Aboriginal people may be the world's first astronomers. But `astronomer' implies more than just telling stories about the sky - it implies a quest to understand the phenomena in the sky, develop a self-consistent model of how the world works, and apply this knowledge to practical applications such as calendars and navigation. 
From 2005 onwards, there was a growing trend to test this hypothesis \citep[e.g.][]{n214, n230}, and an increasing focus on ethno-science - the idea that traditional Australians were trying to understand their world, in terms of their traditional culture, much as science does nowadays. As a result of this explosion in the field, it is now clear that there is far more depth to the subject than was realised by early researchers such as Mountford or Elkin \citep{n324}. Pre-contact Aboriginal Australians not only knew songs and stories about the sky, but they had a deep understanding of the positions and motions of celestial bodies, and used that knowledge to construct calendars, and to navigate. Evidence of this knowledge may be found in their songs and stories \citep{DS}, their art \citep{n255}, and their navigational skills \citep{n315}. Perhaps just as importantly, that knowledge was integrated into a world-view which incorporated models of how the world works \citep{n324}. The breadth of knowledge has also increased. We now have detailed studies of the astronomical components of several cultures, most notably the Wardaman \citep[e.g.][]{DS}, but also the Yolngu \citep{ED}, Kamilaroi \& Euahlayi\footnote{The distinction between Kamilaroi and Euahlayi traditions is often blurred. To avoid incorrect attribution, I use `Kamilaroi' throughout this review to mean `either Kamilaroi or Euahlayi', with apologies to Euahlayi readers.} \citep{n311}, and South Australian groups \citep{clarke97a}. The field is rapidly broadening, with current studies looking at the astronomy of the Wiradjuri \citep{leaman16a, leaman16b}, and of the Torres Strait Islanders \citep{hamacher13c}. However, these still represent only a tiny fraction of the 300-odd Aboriginal language groups, and it is likely we have only scratched the surface. At the same time, the senior elders who possess much of this knowledge are ageing, and passing on, with relatively few new elders rising up to replace them.
So recording this astronomical knowledge is something of a race against time. On the other hand, groups such as the Kamilaroi are rebuilding their culture, with younger people learning language and ceremony, and the study of Aboriginal astronomy can play a significant role in that rebuilding process \citep{n311}. It is hopefully not far-fetched to imagine a future Australia in which Aboriginal languages are taught as second languages in Australian high schools (in the same way that Welsh is taught in schools in Wales) and in which Aboriginal ethnoscience is part of the Australian science curriculum. Aboriginal Astronomy is starting to be a significant and effective component of education and outreach activities \citep{bhathal08, bhathal11a, hollow08, pring02, pring06a, pring06b, wyatt14}. \citet{n324} has argued that Indigenous Astronomy can be used to teach students the basic motivation of science. This revolution in knowledge of Aboriginal astronomy can also be viewed in the context of a rapidly changing view of other aspects of Indigenous culture. Contrary to the widespread opinions of earlier researchers \citep{n324}, we now know that pre-contact Indigenous people practised agriculture \citep{gammage11, pascoe14}, and possessed sophisticated navigational skills \citep{kerwin10}. In the last ten years the nature of the study of Aboriginal astronomy has also changed. Ten years ago researchers would approach elders and communities, clipboards in hand metaphorically if not literally, and ask for information about stories. The work of Fuller et al. \citep{n311, n322, n318} is completely different, and is better characterised as a collaboration between the research group (Fuller et al.) and the Kamilaroi and Euahlayi communities (Michael Anderson et al.).
The project had clear benefits both for scholarly achievement and for rebuilding the Kamilaroi and Euahlayi cultures, marked by a `giving back ceremony' at which teaching materials were given back to the community to help educate high-school students about Kamilaroi and Euahlayi culture. It also triggered a feature film about Euahlayi sky knowledge \citep{ellie}. It is hoped that future projects in this field will adopt this collaborative approach. Another important approach is that of \cite{nakata14}, who propose to develop software, archives, and web interfaces to allow Indigenous communities to share their astronomical knowledge with the world on their terms and in a culturally sensitive manner. \subsection{The Aboriginal People} The ancestors of Aboriginal Australians left central Africa around 100,000 BC, and passed through the Middle East about 70,000 BC. DNA sequencing \citep{rasmussen} shows that they were closely related to the proto-Europeans who emerged from Africa at around the same time. The proto-Australians followed the coast of India and China, crossed through Papua New Guinea, spread rapidly across the Pleistocene continent of Sahul (Australia, New Guinea, and Tasmania), and arrived in Northern Australia, probably in a single wave \citep{hudjashov}, at least 40,000 years ago \citep{oconnell04}, probably before their counterparts reached Europe. Radiocarbon dating of Mungo Man in the Willandra Lakes region of New South Wales (NSW)\footnote{Throughout this review I use the following abbreviations for the states and territories of Australia: NSW: New South Wales; Qld: Queensland; Vic: Victoria; SA: South Australia; WA: Western Australia; NT: Northern Territory; ACT: Australian Capital Territory; Tas: Tasmania} showed that they had reached NSW by 40,000 BC \citep{bowler03}.
From 40,000 BC onwards they enjoyed a continuous, unbroken culture, with very little contact with outsiders, other than annual visits over the last few hundred years from Macassan trepang-collectors \citep{orchiston16}. This isolation continued, with no cultural discontinuities, until the arrival of the British in 1788, making Aboriginal Australians among the oldest continuous cultures in the world \citep{mcniven05}. At the time of British invasion, Australia's population was about 300,000 \citep{jupp01}, divided into about 250 distinct Aboriginal language groups with nearly 750 dialects \citep[e.g.][]{walsh91}. Each had its own culture and language, although many shared common threads, such as the belief that the world was created in the `Dreaming' \index{Dreaming} by an\-cestral spirits. Some languages were closely related (as close as Italian and Spanish), while others were as different from each other as Italian is from Chinese. Many Aboriginal groups divide their world into two moieties: every person and every object has the characteristics of one or other of these moieties. For example, every Yolngu person is either {\it dua} or {\it yirritja}, and this is such a fundamental difference that it affects their language (with different verb endings for dua and yirritja), whom they associate with, and whom they can marry. Their sky is also divided into the two moieties.
%with the evening Earth-shadow marking the boundary between them. Cant use this without a ref
\cite{mountford56} notes that Groote Eylandt people divide the stars in the Milky Way into two moieties, while the central desert people \citep{mountford76} associate the summer constellations with one moiety, and the winter constellations with the other. Even within one community, the two moieties typically also have different stories, with further variation between different clans or groups.
As a result, there are often several different versions of any particular story in a community, with further (secret) versions reserved for the various levels of initiated men and women. Most Aboriginal Australians were nomadic hunter-gatherers, moving in an annual cycle of seasonal camps and hunting-places within the land that they owned, taking advantage of seasonal food sources. They practised careful land management to increase the food yield of their land \citep{gammage11}, including `firestick--farming' in which the land was burnt in a patchwork pattern to increase food production and reduce the severity of bush-fires. Some groups built stone traps for fish farming, planted crops such as yams, or built stone dwellings \citep{clark94, gammage11}. Many Aboriginal cultures were severely damaged by the arrival of the British in 1788, and by the consequent disease, reduced access to food, and, in some cases, genocide. The total Aboriginal population decreased from over 300,000 in 1788 to about 93,000 in 1900. In south-east Australia, entire cultures were destroyed. However, the language and culture of some groups in the north and centre of Australia are still essentially intact, and initiation ceremonies are still conducted at which knowledge is passed from one generation to the next. It is from these groups, particularly the Yolngu and Wardaman people, that we have obtained our most detailed information. The people who came to Australia over 40,000 years ago were biologically identical to modern humans, and may well have had extensive knowledge of the sky. However, there is no dated record of any Aboriginal astronomy until the invasion of Australia by the British in 1788, at which time astronomical knowledge appears to have been deeply embedded in many Aboriginal cultures, and was presumably already ancient. Certainly it was an important part of several Aboriginal cultures in 1788, and was `considered one of their principal branches of education. ...
it is taught by men selected for their intelligence and information' \citep{clarke09a, dawson81}.
%Dawson [1881: 98-99].
\cite{mountford76}
%Mountford (1976: 449)
reported that some Aboriginal people knew every star as faint as fourth magnitude, and knew myths associated with most of those stars. Similarly, elders such as Harney can name most stars in the sky visible to the naked eye, and understand intimately how the whole pattern rotates over their heads from east to west during the night, and how it shifts over the course of a year \citep{DS}. To name most of the $\sim$3000 stars visible to the naked eye from Northern Australia is a memory feat that rivals winners of the World Memory Championships \citep[e.g.][]{foer11}, and one which must have taken years of learning.
% DS page 61
\cite{maegraith32} says that `The most interesting fact about Aboriginal astronomy is that all the adult males of the tribe are fully conversant with all that is known, while no young man of the tribe knows much about the stars until after his initiation is complete ... The old men also instruct the initiated boys in the movements, colour and brightness of the stars.' Stars are a central part of Aboriginal sky-lore, and are often associated with creator spirits. Aboriginal men are also familiar with the changing position of the stars throughout the night and throughout the year, as described below in \S\ref{navigation}. Curiously, the stars regarded as the most important were often not the brightest; instead importance seemed to depend on factors such as colour and the relationship to other stars \citep[e.g.][]{haynes90, maegraith32}. The first definite evidence of astronomy in the world is probably either Stonehenge \citep{pearson13} or the older but less well-known Warren Field \citep{gaffney13}, which was built around 8000 BC.
Given the continuity of Indigenous Australian culture for at least 40,000 years with little contact with the outside world, it is plausible that Aboriginal astronomical knowledge predates these British sites. This is the basis of the statement that `the Australian Aborigines were arguably the world's first astronomers' \citep{haynes96}, but we currently have no firm evidence to support or refute that hypothesis. \subsection{Coverage and Limitations of this Review} To the best of the author's knowledge, this review paper summarises and cites {\bf all} published peer-reviewed literature on Australian Aboriginal astronomy, together with significant non-peer-reviewed material that presents original research and cites sources. This paper therefore marks a watershed, in that it is unlikely that a review paper written in the future could reasonably expect to include all peer-reviewed literature, because of the rapid expansion of research in this field shown in Fig. \ref{papers}. All the information in this review is either already in the public domain, or appropriate permission has been obtained from the relevant traditional owners. The knowledge in this review is merely the tip of the iceberg, in that there is much research still to be done, and also because there is a wealth of sacred, and therefore secret, information that cannot be discussed here. It is also possible that the knowledge in this review is biased towards `male knowledge', since the author, like most authors in this field, is male, and is more likely to be told `male stories'. It is possible that a female researcher would write a different and complementary review of the subject. This review also includes a significant amount of previously undocumented information obtained directly from Indigenous elders and others with traditional knowledge, some of whom are now deceased. For reasons of cultural sensitivity, many of these people cannot be named in this review.
This paper therefore differs from conventional astrophysical papers by including a significant number of `private communications', for many of which the name of the contributor cannot be given. In this review, I handle this unusual situation by citing each private communication in a similar way to journal references, referring to them as AC1, AC2, etc., where AC stands for `Aboriginal Contributor', and listing some details (language group, date, etc.) in the bibliography, while usually withholding the name. Further details of these contributors (name, date, place, and, in many cases, a recording) can be made available in confidence to bona fide researchers. The primary goal of this review, and of most of the literature cited, is to describe the astronomical knowledge and culture of Aboriginal Australians before the arrival of Europeans in 1788. However, this is not intended to imply that Aboriginal knowledge or culture is frozen or static; of course Aboriginal cultures continue to develop and evolve, and to be infused by other cultures with which they come into contact. To avoid confusion, I refer to the Aboriginal Australians before 1788 as `traditional Australians', and to their culture and knowledge as `Aboriginal culture' and `Aboriginal knowledge' respectively. I also confine this review to the astronomy of `Aboriginal Australians' (i.e. those living in mainland Australia and Tasmania), rather than the broader `Indigenous' grouping, which includes Torres Strait Islanders.
%Torres Strait Islander culture is rather different from mainland Aboriginal culture, as described by \cite{hamacher13c}.
The astronomical knowledge of Torres Strait Islanders, a Melanesian people with links to both Aboriginal and Papuan cultures, is the focus of research by \cite{hamacher13c}. To keep the review to a reasonable length, I exclude non-astronomical phenomena such as aurorae, lightning, rainbows, and weather.
For brevity, I generally focus on original research papers, rather than citing later papers that merely report the earlier work, unless they add to it by interpretation or consolidation with other work, or if the earlier work is difficult to access. \begin{figure}[h] \begin{center} \includegraphics[width=10cm]{map.jpg} \caption{Map showing approximate locations of some of the places and language groups discussed in the text.} \label{map} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=8cm]{skyboss.jpg} \caption{Wardaman rock painting of the Sky Boss and the Rainbow Serpent. The serpent at the bottom represents the Milky Way, and the head of the Sky Boss is associated with the Coalsack nebula, although a researcher could not deduce this astronomical connection without access to the cultural insight of Wardaman elder Bill Yidumduma Harney. (Photo courtesy of Bill Yidumduma Harney)} \label{skyboss} \end{center} \end{figure} \subsection{Aboriginal Number Systems and \\ Writing} \cite{blake81} stated that `no Australian Aboriginal language has a word for a number higher than four,' despite the existence of well-documented Aboriginal number systems extending to far higher numbers \citep{altman, harris87, mcroberts90, tindale25, tully97}. \cite{harris87} comments: `Statements such as these, which do not even admit five, are not simply misleading; they are false'.
There are many counter-examples to Blake's statement: \begin{itemize} \item the many well-documented Aboriginal languages with number systems extending to well beyond four \citep{altman, harris87},
\item the observation that traditional Aboriginal people could count in multiples of five \citep{harris87} or twenty \citep{tindale25},
%I also have a reference to base 5 to Tindale 1978, but I can't find a Tindale 1978 and I dont know where this came from
\item the Gurindji `no-base' counting up to 50 with no compound words \citep{altman},
\item the report \citep{dawson81}
%page 71
that the Tjpwurong people had a cardinal system extending to 28 - the number of days in a lunar month - each of which was identified as a place on the body, and as a verbal name which also described that part of the body,
\item a series of 28 lines at Moon Dreaming reported as being `moon counting' \citep{DS},
\item Harney's report \citep{DS} that as a stockman he could count a herd faster than whitefellas, explaining:
%page 7
`We go five and five all the way and then bunch it up. Go five and five is ten. Then count the number of tens. We call that Yigaga',
\item the number system of the Gumatj clan of the Yolngu people (who count in base 5), which extends to over one thousand, although the compound words for large numbers tend to be unwieldy for everyday usage \citep{altman, harris87},
\item the account \citep{krefft65} of two Aboriginal men counting about 50 bags. Their spoken language, using increasingly complex compound words for numbers greater than 5, proved difficult for this task, so they used notches in a stick instead,
\item the report \citep{gilmore} that `The Aboriginal power to count or compute in his native state was as great as our own. ...
I have seen partially trained native stockmen give the exact number of cattle in a group up to four or five hundred almost without a moment's hesitation, yet authorities on the blacks continue to tell us that the Aboriginal only counted to ten or thereabouts'.
\item everyday observations of Tiwi children counting beyond 50 in their language \citep{ED},
\item the observation that the Pleiades are called `the Seven Sisters' in many Aboriginal languages, or, alternatively, that there are specifically five, six, or seven sisters \citep{johnson11},
%{\bf Tindale and Lindsay 1963 have a picture of the Seven Sisters in the wall painting in the southern cave at Owalinja, in South Australia, but you can't really see how many there are }
\item the reports that specific numbers of participants are required in some ceremonies \citep[e.g.][]{berndt43}.
\end{itemize} On the other hand, it has been argued that, while many Aboriginal groups construct `compound numbers' (like the English `twenty-one'), none have words like `one hundred' or `one thousand', although this too is disputed by \cite{harris87}.
%who gives examples of Indigenous words for hundred and thousand.
Recently, \cite{zhou} have argued that there is considerable variation in Aboriginal number systems, and that most do not contain higher numbers, but can gain or lose higher digits over time. On the other hand, Zhou et al. do not discuss the counter-examples, listed above, to their statement that the upper limit of Aboriginal cardinal numbers is twenty.
%Certainly there is abundant evidence that traditional Aboriginal people could count to large numbers.
A further complication is that sign language is an integral part of many Aboriginal languages, or even a self-contained complete language \citep{kendon88, wright80}, and complex ideas can be conveyed silently using fingers.
So it is perfectly possible that, in a particular language, Aboriginal people may have been familiar with the idea of `twenty', been able to count to twenty, and been able to communicate it by sign language, but perhaps not have had a spoken word for the number. For example, Mitchell in 1928 was able to barter food and a tomahawk for 10 days work with Aboriginal guides, using finger-counting \citep{baker98}. Similarly, \cite{harney59}
%bottom of page 47
reports an old man saying `after that many days - he held up five fingers to give the number as was the custom of the people'. \cite{morrill64} and William Buckley \citep{morgan52} gave similar accounts that the Mt. Elliott people counted verbally up to 5, then used their fingers, and then the ten fingers of another person, and so on, until they reached a `moon' (presumably 28). The word for `five' was {\it Murgai}, which is quite different from the word for hand ({\it Cabankabun}). In summary, traditional people were demonstrably able to count to much higher numbers than four, in some cases using compound words (such as `hand of hands' or `twenty-one') or by using finger-language. Thus Blake's statement that `no Australian Aboriginal language has a word for a number higher than four' may refer to a linguistic nuance that does not include compound words such as `twenty-one' or non-verbal languages, but even then, this statement still seems inconsistent with the evidence cited above. Even worse, Blake's statement is often misinterpreted to mean `Aboriginal people can't count beyond four' or `Aboriginal people don't have a concept of numbers greater than four', both of which are obviously incorrect. It has also been said that Aboriginal people `made no measurements of space and time, nor did they engage in even the most elementary of mathematical calculations' \citep{haynes00}.
%(Haynes, 2000:54).
However, as will be shown in this review, there is plenty of evidence that Aboriginal people made careful measurements of space and time, resulting in elaborate calendars and navigational systems. It is also often stated that Aboriginal people had no written language. While this appears to be largely true, it is worth noting three possible exceptions. First, \cite{mathews97c}
%Mathews (1897:293)
explained how the pictograms on a message stick (Fig. \ref{fig:message_stick}) gave detailed information about the place and time of a future corroboree. Second, \cite{hahn64}
%Hahn (1964:130)
discussed how Aboriginal people in South Australia made notches in their digging sticks to measure their age in lunar months. Third, \citet{harney09c} identifies the `scratches' in painting and rock-carvings as a variety of symbols, which he says would be understood by other Wardaman people. No doubt further research will uncover more exceptions. \begin{figure}[h] \begin{center} \includegraphics[width=6cm]{message_stick.pdf} \caption{A message stick, taken from \cite{n268, mathews97c},
%Mathews (1897:292),
depicting information including time, denoted by the phase of the moon. The message stick states that `Nanee (a) sent the message from the Bokhara river (b), by the hand of Imball (c), via the Birie (d), the Culgoa (e), and Cudnappa (f) rivers, to Belay (g); that the stick was dispatched at new moon (h), and Belay and his tribe are expected to be at Cudnappa river (f) at full moon (i); (j) represents a corroboree ground, and Belay understands from it that Nanee and his tribe are corroboreeing at the Bokhara river, which is their taorai, and, further, that on the meeting of the two tribes at full moon on the Cudnappa river a big corroboree will be held.'
The new moon, which in this context represents a crescent, is depicted in the lower-left of Frame 1, labelled as (h), while the full moon is the full circle depicted in the upper-left of Frame 2, labelled as (i).} \label{fig:message_stick} \end{center} \end{figure}
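The `five and five' tally that Harney describes above is, in effect, a simple grouping algorithm. The sketch below is purely illustrative (the function name and the returned grouping are my own, not a reconstruction of any Aboriginal counting language): items are bunched into fives, the fives are paired into tens, and the count is reported as tens plus any left-over five and units.

```python
def count_in_fives(n):
    """Group a tally the 'five and five is ten' way: bunch items into
    fives, pair the fives into tens, then report the number of tens,
    any left-over five, and the remaining units."""
    tens, rest = divmod(n, 10)      # each ten is 'five and five'
    fives, units = divmod(rest, 5)  # at most one spare five remains
    return {"tens": tens, "fives": fives, "units": units}

# A herd of 137 head: thirteen tens, one spare five, and two left over.
print(count_in_fives(137))  # {'tens': 13, 'fives': 1, 'units': 2}
```

The point of the sketch is that counting the number of tens, rather than the individual head, is what makes the tally fast for large herds.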
Ten years ago, most published knowledge of Aboriginal astronomy referred to songs and stories, but not to practical applications, navigation, or understanding the motion of celestial objects. Very few peer-reviewed papers had been written, so that Aboriginal astronomy received little attention in mainstream anthropology. The published literature now shows unequivocally that traditional Aboriginal Australians were careful observers of the night sky, that they had a deep knowledge of it, and that their celestial knowledge played a major role in their culture, social structure, and oral traditions. Their knowledge went far beyond simply telling stories of the constellations. Instead, they appear to have been engaged in building self-consistent, but culturally appropriate, models that explained the phenomena observed in the sky. This search for understanding resembles modern-day science, and so is sometimes labelled `ethnoscience'. This knowledge also had practical applications, such as marking time and date over scales ranging from hours to years, so that it could be used to communicate the dates of ceremonies, or to mark when it was time to move camp to take advantage of seasonal food sources. Some Aboriginal groups also used it for navigation. \subsection{Stone Arrangements} There is unambiguous evidence that astronomical knowledge played a role in the construction of some stone arrangements. Many stone arrangements were aligned by their builders to the cardinal directions with a precision that implies that astronomical observations were used to determine their orientation. At least one stone arrangement is carefully aligned to the astronomically significant directions on the horizon at which the solstitial and equinoctial Sun sets. Monte Carlo tests have shown that the probability of these distributions occurring by chance is extremely small, confirming that these alignments were deliberately chosen by the builders of the stone arrangements.
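The logic of such a Monte Carlo test can be sketched as follows. This is a schematic of the general approach only, not the published analyses; the tolerance, the number of rows, and the example counts below are made-up illustrative values.

```python
import random

def alignment_p_value(observed_hits, n_rows, tol_deg=5.0, trials=100_000):
    """Estimate the chance probability that at least `observed_hits` of
    `n_rows` randomly oriented rows fall within `tol_deg` degrees of a
    cardinal direction (N/E/S/W), assuming azimuths are uniform."""
    cardinals = (0.0, 90.0, 180.0, 270.0)
    successes = 0
    for _ in range(trials):
        hits = 0
        for _ in range(n_rows):
            az = random.uniform(0.0, 360.0)
            # angular separation from the nearest cardinal point
            sep = min(min(abs(az - c), 360.0 - abs(az - c)) for c in cardinals)
            if sep <= tol_deg:
                hits += 1
        if hits >= observed_hits:
            successes += 1
    return successes / trials
```

With these made-up numbers, if 4 of 5 surveyed rows pointed within $5^\circ$ of a cardinal direction, `alignment_p_value(4, 5)` estimates a chance probability well below one per cent; it is in this sense that the published tests conclude the real alignments were deliberate.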
We may deduce from this that traditional Aboriginal people had a significant knowledge of astronomy, including a good understanding of the motion of the Sun, and made careful measurements to determine cardinal directions. This does not imply that the stone arrangements are ``astronomical observatories'', since the primary function of the stone arrangements is unknown. Stone arrangements tend to be under-represented in the literature, and evidence suggests that there are many stone arrangements that have not yet been recorded or classified. It is important that they be examined and classified, and if appropriate catalogued and protected, so that we may use them to understand more of the pre-contact Aboriginal cultures. \subsection{Relationship between Sky and Earth} Many groups regard the sky and the Earth as `parallel planes', with counterparts in the sky to places, animals, and people on the Earth. There is also a widespread belief that they used to be more intimately connected, and some groups believe that they used to be the same thing, or that they became inverted, so that what used to be the Earth is now the sky. It is widely believed that spirits, and some people such as `clever men', are still able to move between the sky and the Earth. \subsection{Navigation} Aboriginal people used a variety of techniques to navigate the length and breadth of Australia. These included songlines, which are effectively oral maps of the landscape, enabling the transmission of navigational skills in cultures that do not have a written language. They extend for large distances across Australia, were used as trading routes, and some modern Australian highways follow their paths. Songlines on the earth are sometimes mirrored by songlines in the sky, so that the sky can be used as a mnemonic to remember a route, and the sky is also used directly as a navigational tool.
Other navigational techniques include using the sky as a compass, noting particular stars or planets as signposts, or following the path defined by the Milky Way or the ecliptic. \subsection{Ethnoscience} Ethnoscience is the attempt to describe and explain natural phenomena within an appropriate cultural context, resulting in predictive power and practical applications such as navigation, timekeeping, and tide prediction. Ethnoscience is similar in its goals to modern-day science, but is based on the limited information available to traditional Aboriginal people, and is framed within the host culture. For example, it is remarkable that the Yolngu model of tides correctly predicted how the height and timing of the tide varies with the phase of the Moon; it may be contrasted with Galileo's incorrect explanation, which predicted only one tide each day, and was silent on how the tides varied with the phase of the Moon. \subsection{Other Outcomes} In addition to the growth in knowledge about Aboriginal astronomy, there have been three other outcomes from these studies. First, Aboriginal astronomy is more accessible to the general public than some other aspects of Aboriginal culture, so that Aboriginal astronomy has become a cultural bridge, giving non-Aboriginal people a glimpse of the depth and complexity of Aboriginal cultures, and promoting better understanding between cultures. Second, an unexpected and unplanned outcome, which has been dubbed `science by stealth', is that Aboriginal astronomy activities show how science is linked to culture in traditional Aboriginal societies, and thus illuminate how science is linked closely to our own European culture. Even reviews in the Arts media have applauded this approach, suggesting that combining science with culture may be an effective way of bridging the ``Two Cultures''.
Third, there has been a growth in the use of Indigenous Astronomy in the classroom, both to teach Aboriginal culture, and to encourage engagement with science, particularly amongst Indigenous students. \subsection{The Future} Although Aboriginal astronomy was first reported over 150 years ago \citep{stanbridge57}, only in the last few years has there been a concerted scholarly attempt, which I have tried to summarise here, to study the breadth and richness of Aboriginal astronomy. Quite apart from its scholarly value, this research can be a powerful tool in overcoming some of the lingering prejudice that continues to permeate Australian society. The research presented here is proving valuable in building greater understanding of the depth and complexity of Aboriginal cultures. My colleagues and I continue to find new (to us!) information and traditional knowledge, supporting the view that research in Aboriginal astronomy is in its infancy. Many tantalising lines of evidence suggest that far more awaits us, and there are several lines of enquiry that demand attention. For example, \begin{itemize} \item Only one culture (the Wardaman) has had its knowledge recorded to the depth of \cite{DS}. Presumably dozens of similar books could in principle be written on other Aboriginal cultures that are still strong, although this is probably contingent on building unusually productive partnerships like that between Bill Yidumduma Harney and Hugh Cairns. \item Some oral traditions discussed in this review, such as the Yolngu Morning Star ceremony, are barely touched. I am aware that a far greater depth of public knowledge exists, but it would require significant ethnography to document it. Many similar examples exist, and are ripe for research. \item I am frequently contacted by Aboriginal people wishing to tell their story. I would love to accommodate them all, but time, and the number of researchers in this area, are limited. Far more could be done with more researchers.
Perhaps an online wikipedia-type tool, with appropriate safeguards against abuse, might help. \item Aboriginal astronomy papers tend to be written by a relatively small band of researchers with specific interests in this area. We are sometimes criticised for focussing on this area rather than discussing it in the wider context of Aboriginal ethnography. However, most of us are not qualified to do so. I would encourage anthropologists and ethnographers to enter this field. \item Similarly, I look forward to collaborative research that embraces and synthesises the different academic cultures. For example, a collaborative effort is required to resolve the discord between the observational evidence for complex Aboriginal number systems, cited here, and the ideas promoted in the linguistic literature, which place less emphasis on evidence. \item Aboriginal stone arrangements are under-represented in the archaeological and anthropological literature. It seems likely that other astronomically-aligned arrangements like Wurdi Youang exist, but significant fieldwork will be required to find them. \item Astronomers enjoy a resource that significantly contributes to the rapid pace of developments in their field: the Astrophysics Data System (http://adsabs.harvard.edu/), which contains links to web-accessible copies of virtually all astronomical academic papers. Aboriginal astronomy enjoys no such resource, and my astronomy colleagues will be shocked to hear that the research for this review involved searching dusty books on obscure shelves of actual libraries! While my colleagues and I have amassed an online collection of scanned papers, it falls far short of ADS. Construction of such a resource would significantly accelerate progress in this field. \end{itemize} Against this optimistic view, all Aboriginal cultures are under significant threat.
The elders who possess ancient knowledge grow old and pass away, often taking their knowledge with them, and not enough bright young Aboriginal people are willing to adopt the traditional lifestyle, and forego other opportunities, to continue the tradition. Even where the tradition is strong, better education and exposure to media mean that traditional knowledge is not static, but evolves under the influence of modern education and new-age ideas. It is unclear what the future holds, but I am inspired by the Euahlayi and Kamilaroi elders with whom we have collaborated, who are rebuilding their culture and language. I share their hope that the Euahlayi and Kamilaroi language will one day be taught as a second language in NSW schools, perhaps leading to a similar revitalisation as that demonstrated by the teaching of traditional language and culture in other countries, such as Wales. I look forward to future research conducted as a collaboration between researchers and Aboriginal communities, with the twin goals of promoting better understanding of the culture while protecting the sanctity of sacred knowledge, and of strengthening and protecting traditional knowledge and culture, so that they may thrive for countless generations to come.
1607.02215
1607.03577_arXiv.txt
We use the Advanced Camera for Surveys on the \emph{Hubble Space Telescope} to study the rich population of young massive star clusters in the main body of NGC 3256, a merging pair of galaxies with a high star formation rate (SFR) and SFR per unit area ($\Sigma_{\rm{SFR}}$). These clusters have luminosity and mass functions that follow power laws, $dN/dL \propto L^{\alpha}$ with $\alpha = -2.23 \pm 0.07$, and $dN/dM \propto M^{\beta}$ with $\beta = -1.86 \pm 0.34$ for $\tau < 10$ Myr clusters, similar to those found in more quiescent galaxies. The age distribution can be described by $dN/d\tau \propto \tau ^ \gamma$, with $\gamma \approx -0.67 \pm 0.08$ for clusters younger than about a few hundred million years, with no obvious dependence on cluster mass. This is consistent with a picture where $\sim 80 \%$ of the clusters are disrupted each decade in time. We investigate the claim that galaxies with high $\Sigma_{\rm{SFR}}$ form clusters more efficiently than quiescent systems by determining the fraction of stars in bound clusters ($\Gamma$) and the CMF/SFR statistic (CMF is the cluster mass function) for NGC 3256 and comparing the results with those for other galaxies. We find that the CMF/SFR statistic for NGC 3256 agrees well with that found for galaxies with $\Sigma_{\rm{SFR}}$ and SFRs that are lower by $1-3$ orders of magnitude, but that estimates for $\Gamma$ are only robust when the same sets of assumptions are applied. Currently, $\Gamma$ values available in the literature have used different sets of assumptions, making it more difficult to compare the results between galaxies.
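The reading of the age-distribution index $\gamma$ as a disruption rate follows from one line of arithmetic: for $dN/d\tau \propto \tau^{\gamma}$ under a roughly constant cluster formation rate, the surviving fraction of clusters falls by a factor $10^{\gamma}$ per decade (dex) of age. A quick numerical check of the quoted $\sim 80\%$ figure (pure Python; the only input is the published index):

```python
gamma = -0.67  # measured age-distribution index: dN/dtau ~ tau**gamma

# Assuming a roughly constant cluster formation rate, the number of
# surviving clusters drops by a factor 10**gamma per dex of age.
surviving = 10.0 ** gamma
destroyed = 1.0 - surviving
print(f"surviving per dex: {surviving:.2f}; disrupted: {destroyed:.0%}")
```

The disrupted fraction comes out near 79\%, i.e., the "$\sim 80\%$ of the clusters are disrupted each decade in time" quoted above.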
NGC 3256 is a merging pair of galaxies $\approx 36$ Mpc away. Dynamical simulations suggest that the system began interacting $\approx 450$ Myr ago, and it has since undergone a period of major star and star cluster formation. It exhibits two tidal tails that are rich with young massive stellar clusters (Knierman et al. 2003; Maybhate et al. 2007; Trancho et al. 2007a; Mulia, Chandar, \& Whitmore 2015). The main body of NGC 3256 contains a dense population of clusters, many of which are embedded in the galaxy's dusty interstellar medium. The galaxy's intense star formation makes it an extreme environment in which to study cluster formation and destruction. The cluster population of NGC 3256 has been studied in a number of previous works. Zepf et al. (1999) used $B$ and $I$ band images taken with the Wide Field Planetary Camera 2 on the \emph{Hubble Space Telescope} (\emph{HST}) to examine the colors and luminosities of main body clusters. Using the fraction of blue light that they found in clusters, they estimated that the efficiency of cluster formation in NGC 3256 is $\sim 20\%$. Trancho et al. (2007b) performed optical spectroscopy of 23 clusters in the main body of NGC 3256, finding ages of a few to $\sim 150$ Myr and masses from $(2-40) \times 10^5$ $M_{\odot}$. Goddard, Bastian, \& Kennicutt (2010; hereafter G10) estimated ages and masses of NGC 3256 clusters from $UBVI$ photometry using \emph{HST}'s Advanced Camera for Surveys (ACS) and estimated the fraction of stars forming in bound clusters, hereafter referred to as $\Gamma$, to be $\Gamma = 22.9\% ^{+7.3}_{-9.8}$. Their method involved estimating the cluster formation rate (CFR), taken from the total mass of clusters younger than 10 Myr, and dividing by the galaxy's star formation rate (SFR). Some works, including G10, have suggested that there is a correlation between $\Gamma$ and SFR density, $\Sigma_{\rm{SFR}}$ (e.g., Silva-Villa \& Larsen 2011; Cook et al. 
2012), and between $T_{L}(U)$ (the fraction of $U$ band light contained in clusters) and $\Sigma_{\rm{SFR}}$ (Larsen \& Richtler 2000; Adamo, \"{O}stlin, \& Zackrisson 2011). These relationships imply that the conditions for cluster formation are not universal, but are dependent on conditions in the host galaxy. Chandar, Fall, \& Whitmore (2015), on the other hand, measured the cluster mass function (CMF) in seven galaxies and found that when normalized by the SFR, these CMFs fall nearly on top of one another. The CMF/SFR statistic implies that cluster formation is similar in many galaxies, regardless of SFR. We estimate both $\Gamma$ and, for the first time, the CMF/SFR statistic, in NGC 3256. This paper is arranged as follows. Section \ref{obs} describes the observations and cluster selection. Section \ref{method} describes the method for obtaining ages and masses of clusters. Section \ref{results} presents the luminosity functions (LFs), age distributions, and mass functions for NGC 3256. In Section \ref{dist}, we quantify the effects that distance has on the measured LF and age distribution of the clusters. We determine the CMF/SFR statistic and $\Gamma$ from our new NGC 3256 cluster sample in Section \ref{discussion}. We summarize our results and state conclusions in Section \ref{conc}.
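The $\Gamma$ statistic of G10 reduces to simple bookkeeping: sum the mass in clusters younger than 10 Myr, divide by 10 Myr to obtain a cluster formation rate, and divide that by the SFR. A minimal sketch of the calculation (the cluster masses, ages, and SFR below are invented for illustration, not measurements of NGC 3256):

```python
def gamma_bound_fraction(masses_msun, ages_myr, sfr_msun_yr, age_cut_myr=10.0):
    """Gamma = CFR / SFR, with CFR = (mass in clusters younger
    than age_cut_myr) / age_cut_myr."""
    young_mass = sum(m for m, t in zip(masses_msun, ages_myr)
                     if t < age_cut_myr)
    cfr = young_mass / (age_cut_myr * 1.0e6)  # Msun per yr
    return cfr / sfr_msun_yr

# Invented example: 3e7 Msun in <10 Myr clusters, SFR = 10 Msun/yr.
print(gamma_bound_fraction([1.0e7, 2.0e7, 5.0e6], [3.0, 8.0, 50.0], 10.0))  # -> 0.3
```

The comparison of $\Gamma$ between galaxies hinges on exactly these choices (the age cut, the completeness of the young-cluster mass, and the SFR tracer) being made consistently.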
\label{conc} We have used ACS/WFC imaging from \emph{HST} to measure the properties of star clusters in the main body of NGC 3256. We draw the following conclusions. \begin{enumerate} \item The LF follows a power law with index $\alpha = -2.23 \pm 0.07$ for the sample combining the inner (with $m_V < 21.5$) and outer (with $m_V < 23.0$) regions of NGC 3256. Measuring $\alpha$ for the inner and outer regions separately yielded $\alpha \approx -2.1$. These values agree with previous work. \item We found that the age distribution can be described by a power law with index $\gamma \approx -0.67 \pm 0.08$ for independent mass ranges and for catalogs from the inner and outer areas of the galaxy separately. These values can be interpreted as a destruction rate of $\sim 80\%$ each decade in time and are consistent with typical values of $\gamma = -0.8 \pm 0.2$ found in other systems. \item The mass functions in various cluster age ranges are well described by a power law with index $\beta$. Young ($\log(\tau/\rm{yr}) < 7$) clusters follow a robust $\beta = -1.86 \pm 0.34$. We found that $7 < \log(\tau/\rm{yr}) < 8$ clusters are better described by $\beta = -1.31 \pm 0.36$, while $8 < \log(\tau/\rm{yr}) < 8.6$ clusters follow $\beta = -2.08 \pm 0.45$. We investigated a number of sources of uncertainty in $\beta$ and found that they agree with the formal uncertainties given in Figure \ref{dndm}. \item In order to test for the effect that image resolution can have on cluster properties, we artificially degraded an image of the Antennae and created independent catalogs from the degraded and original images. While less than half of the image-smoothed sources were detected in the original image, the LFs, color distributions, and age distributions produced from both catalogs were very similar. We conclude that reliable measurement of the ages and luminosities of star clusters is not significantly hampered by the distance to NGC 3256. 
\item We considered two different methods that measure the efficiency with which clusters form in a galaxy. The CMF/SFR statistic was measured for eight galaxies, including NGC 3256, and had a dispersion of $\sigma$(log $A) = 0.24$. We measured $\Gamma$ and found a value of $33\%$ from clusters younger than 10 Myr, and we discussed the different parameters and assumptions that affect this method. \end{enumerate} We thank the referee for helpful comments. R.C. is grateful for support from NSF through CAREER award 0847467. This work is based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). This work was supported in part by NASA through grant GO-9735-01 from the Space Telescope Science Institute, which is operated by AURA, INC, under NASA contract NAS5-26555. This research has made use of the NASA/IPAC Extragalactic Database, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA.
1607.03577
1607.06487_arXiv.txt
{Stellar signals are the main limitation for precise radial-velocity (RV) measurements. These signals arise from the photosphere of the stars. The \ms\,perturbation created by these signals prevents the detection and mass characterization of small-mass planetary candidates such as Earth-twins. Several methods have been proposed to mitigate stellar signals in RV measurements. However, without precisely knowing the stellar and planetary signals in real observations, it is extremely difficult to test the efficiency of these methods.} {The goal of the RV fitting challenge is to generate simulated RV data including stellar and planetary signals and to perform a blind test within the community to test the efficiency of the different methods proposed to recover planetary signals despite stellar signals.} {In this first paper, we describe the simulation used to model the measurements of the RV fitting challenge. Each simulated planetary system includes the signals from instrumental noise, stellar oscillations, granulation, supergranulation, stellar activity, and observed and simulated planetary systems. In addition to RV variations, this simulation also models the effects of instrumental noise and stellar signals on activity observables obtained by HARPS-type high-resolution spectrographs, that is, the calcium activity index \logrhk\,and the bisector span and full width at half maximum of the cross-correlation function.} {We publish the 15 systems used for the RV fitting challenge including the details about the planetary systems that were injected into each of them (data available at CDS and here: \url{https://rv-challenge.wikispaces.com}).} {}
\label{sect:1} The radial-velocity (RV) technique is an indirect method that measures the stellar wobble induced by a planet orbiting its host star with Doppler spectroscopy. It is fundamental for transit surveys, where it is used to confirm the exoplanet candidates and to understand their composition through measuring the planet mass. Based on this, the planet density is derived with the help of the radius measured using the transit light curve. Important results have been achieved in the \emph{Kepler} era, such as the confirmation of Kepler-78 with the HIRES and the HARPS-N spectrographs \citep{Sanchis-Ojeda-2013,Howard-2013b,Pepe-2013}, the characterization of the exoplanet Kepler-10c, which has 17 Earth masses \citep[][]{Dumusque-2014}, and the indication that all planets below $\sim 6\,M_{\oplus}$ have a similar rocky composition \citep[][]{Dressing-2015}. However, the RV technique is strongly limited by the faintness of \emph{Kepler} targets. This will not be the case for upcoming photometric space missions such as TESS \citep[][]{Ricker-2014} and PLATO \citep[][]{Rauer-2014}, which will deliver hundreds of good candidates for RV follow-up. Obtaining the planet density is also important to estimate the scale height of the atmosphere that might exist on a planet, and thus form an idea of the detectability of chemical species in this atmosphere. This will be crucial in the \emph{James Webb Space Telescope} era, when this scale height needs to be known before dozens of hours of telescope time are spent trying to detect chemical species in the atmosphere of rocky planets orbiting M dwarfs. In addition to complementing transit detections that are restricted to a very limited orbital configuration in terms of inclinations, precise RVs may also be the only technique for detecting low-mass planets orbiting nearby bright stars, for which the atmospheres will be characterized with direct-imaging future space missions. 
The RV technique is sensitive not only to possible companions, but also to signals arising from the photosphere of the host star, called stellar signals here. Now that the \ms\,precision level has been reached by the best spectrographs, it is obvious that stars introduce signals at a similar level, which strongly complicates the detection and measurement of small-mass planets. There are several examples in the literature that show this, and we present here only a few of them. Gl581 is assumed to have between three and six planets \citep[][]{Hatzes-2016,Anglada-Escude-2015, Robertson-2014, Baluev-2013, Vogt-2012, Gregory-2011, Vogt-2010b, Mayor-2009b}, HD40307 between four and six planets \citep[][]{Diaz-2016,Tuomi-2013a}, and GJ667C between three and seven planets \citep[][]{Feroz-2014,Anglada-Escude-2012a,Gregory-2012}. In all these systems, the number of detected planets strongly depends on the model used to fit the data, and this is a sign that stellar signals are not properly modeled. Different models exist to mitigate the effect of stellar signals, but without precisely knowing the contribution of stellar and planetary signals in real observations, it is extremely difficult to determine how efficient the different methods are in correcting for stellar signals. We note that the HARPS-N solar telescope \citep[][]{Dumusque-2015b} will deliver the perfect real data set on which the RV effect of stellar signals can be better determined. Likewise, these future data sets will help in determining the efficiency with which different techniques can recover tiny planetary signals such as those from Venus despite stellar signals. Other facilities are also observing the Sun-as-a-star using high-resolution spectroscopy: the SONG spectrograph \citep[][]{Palle-2013}, the PEPSI spectrograph \citep[][]{Strassmeier-2015}, and the G\"ottingen FTS spectrograph \citep[][]{Lemke-2016}. 
The data provided by these facilities, often allowing higher spectral resolution than HARPS-N, are complementary to the data obtained by the HARPS-N solar telescope and therefore will be extremely useful for characterizing the RV effect of the stellar signal in more detail through analyzing spectral line variations (spectral line bisector and full width at half maximum, FWHM). To be able to test the efficiency of the different techniques in recovering tiny planetary signals despite stellar signals, we started the RV fitting challenge initiative. The idea of the RV fitting challenge is \begin{itemize} \item to generate simulated RV time series that include realistic planetary and stellar signals, and \item to share these times series with the community so that different teams using different techniques to account for stellar signal can search for the injected planetary signal. \end{itemize} When the planetary signal injected in the data is known exactly, it is \emph{\textup{a posteriori}} possible to test the efficiency of the different methods used to search for planetary signals despite stellar signals. This RV fitting challenge exercise was performed between October 2014 and June 2015, and an analysis of the results of the different teams will be presented in a forthcoming paper \citep[][]{Dumusque-2016b}. We note that a similar challenge was previously posed in the exoplanet field to test the efficiency of different algorithms to recover planetary transits in photometric light curves \citep[][]{Moutou-2005b}. The goal of this first paper is to describe the simulation we used to generate the different RV time series of the RV fitting challenge and to present the planets that were artificially injected into each of them. Although complex models can be used to estimate the effect of the different stellar signals (see Sect. 
\ref{sect:10}), we adopt here a simple approach based on real observations that therefore already includes the RV effect of stellar oscillations and granulation. We then model the RV variation induced by activity using a simplistic simulation of active region appearance on the solar surface, based on empirical properties derived from solar observations. This approach allows us to easily and rapidly generate RV times series that are similar to real RV measurements. We can therefore use these time series to further test the efficiency of the different methods used to search for planetary signals despite stellar signals. A \emph{wiki} was created for the purpose of the RV fitting challenge to invite participants to collaborate in designing the optimal data set, and to share it. This can be accessed at \url{https://rv-challenge.wikispaces.com} to follow the preliminary discussions before the data set of the RV fitting challenge was created, to download the data set (data also available at CDS), and to see preliminary results. Section \ref{sect:10} introduces the different stellar signals that affect RV measurements. In Sect. \ref{sect:2} we present the simulation that we used to model RV stellar signals and instrumental noise, followed in Sect. \ref{sect:3} by a description of the model used to generate RV variations induced by stellar activity. In Sect. \ref{sect:4} we perform a comparison between real and simulated data to check that our simulation of stellar signals gives realistic results, and finally, we conclude in Sect. \ref{sect:5}.
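The actual time series are built from observed solar and stellar components, as described in Sects. \ref{sect:2} and \ref{sect:3}. Purely to illustrate the additive structure of such a series (a Keplerian term on top of noise), a toy sketch with invented amplitudes and period might look like the following; it is not the simulation used for the challenge:

```python
import math
import random

def toy_rv(times_days, k_mps=2.0, period_days=11.0, phase=0.3,
           sigma_inst_mps=0.6, seed=42):
    """Toy RV series: circular-orbit Keplerian signal plus white
    'instrumental' noise.  The real challenge data also include
    oscillation, granulation, supergranulation, and activity terms."""
    rng = random.Random(seed)  # fixed seed -> reproducible series
    return [k_mps * math.sin(2.0 * math.pi * (t / period_days + phase))
            + rng.gauss(0.0, sigma_inst_mps)
            for t in times_days]

rv = toy_rv(range(100))  # 100 nightly epochs
print(len(rv))
```

In the real data set, each component is generated (or taken from observations) separately, which is what makes the detailed, per-component version of the data set possible.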
\label{sect:5} We presented here the simulation of instrumental and stellar signals that led to the data set of the RV fitting challenge. This simulation, based on solar and stellar observations, allows generating realistic HARPS instrumental signals and stellar signals induced by oscillations, granulation, supergranulation, and magnetic activity on rotation-period and magnetic-cycle timescales. The simulation is able to reproduce the variations seen in RV, BIS SPAN, FWHM, and \logrhk. The simulation of instrumental and stellar signals was used to generate two-thirds of the 15 systems included in the data set of the RV fitting challenge. The remaining third of the data set corresponds to real measurements obtained with HARPS. These real measurements are important for checking the realism of simulated systems, which is critical if we wish to draw valid conclusions from the results of the RV fitting challenge \citep[][]{Dumusque-2016b}. A first comparison between simulated and real data shows that the simulation of stellar signals presented in this paper manages to reproduce the correlation between the different observables given in the data set of the RV fitting challenge fairly well, that is, RV, BIS SPAN, FWHM, and \logrhk. A more detailed comparison is left for a forthcoming analysis \citep[][]{Dumusque-2016b}. Two versions of the data set of the RV fitting challenge exist and can be downloaded at the CDS or on the RV fitting challenge \emph{wiki}. The first version is the blind-test data set given to the different teams to recover planetary signals embedded in stellar signals; it can be downloaded at \url{https://rv-challenge.wikispaces.com/Dataset+for+RV+challenge}. In this version, only the RV, BIS SPAN, FWHM, and \logrhk\,variations are given without extra information. 
The second version includes the variation of the same observables, but separately for each signal component present in the data, in addition to stellar and planet properties; it can be downloaded at \url{https://rv-challenge.wikispaces.com/Details+about+the+dataset}. It is therefore possible to use this more detailed version to check \emph{\textup{a posteriori}} that planetary signals recovered in the data correspond to true signals, that models to fit activity are indeed adjusting the correct activity component, and so on. This more detailed version of the data set of the RV fitting challenge was given to the different teams after they completed their analysis of the blind-test data set. We hope that in addition to the different teams that participated in the analysis of the data set of the RV fitting challenge, other teams will use these data to search for new methods for recovering planetary signals embedded in stellar signals. Determining the best methods is critical for the immediate future, when TESS \citep[][]{Ricker-2014} and PLATO \citep[][]{Rauer-2014} will deliver hundreds of good candidates for RV follow-up with ultra-precise spectrographs such as ESPRESSO \citep[][]{Pepe-2014}, G-CLEF \citep[][]{Fzresz-2014}, and EXPRES (PI: D. A. Fischer and C. Jurgenson).
1607.06487
1607.06164_arXiv.txt
The determination of the size of the convective core of main-sequence stars usually depends on the construction of stellar models. Here we introduce a method to estimate the radius of the convective core of main-sequence stars with masses between about 1.1 and 1.5 $M_{\odot}$ from observed frequencies of low-degree p-modes. A formula is proposed to achieve this estimation. The values of the radius of the convective core of four known stars are successfully estimated by the formula. The radius of the convective core of KIC 9812850 estimated by the formula is $0.140\pm0.028$ $R_{\odot}$. In order to confirm this prediction, a grid of evolutionary models was computed. The value of the convective-core radius of the best-fit model of KIC 9812850 is $0.149$ $R_{\odot}$, which is in good agreement with that estimated by the formula from observed frequencies. The formula aids in understanding the interior structure of stars directly from observed frequencies, without depending on the construction of models.
By matching the luminosity, atmospheric parameters, and oscillation frequencies of models with the observed ones, asteroseismology is used to determine fundamental parameters of stars. Asteroseismology is also used to probe physical processes in stars and diagnose internal structures of stars \citep{roxb94, roxb99, roxb01, roxb03,roxb04, roxb07, cunh07, cunh11, bran10, bran14, chri10, dehe10, yang10a, yang12, yang15, silv11, silv13, chap14, ge14, guen14, liu14, yang16}. Asteroseismology is a powerful tool for studying the structure and evolution of stars. Stars with a mass larger than $1.1$ \dsm{} are considered to have a convective core during their main sequence (MS) stage. Because overshooting of the convective core can bring more hydrogen-rich material into the core, the evolution of a star can be significantly affected by the overshooting. Thus determining the size of the convective core, including the overshooting region, is important for understanding the structure and evolution of stars. However, the size of the convective core has never been determined directly from observed data of stars. Generally, the understanding of the size of the convective core derives from the computation of evolutionary models of stars. When seeking to probe the internal structures of stars with low-$l$ p-modes, the small separations, $d_{10}$, $d_{01}$, $d_{02}$, and $d_{13}$, and the ratios of the small separations to the large separations, \drb{}, \dra{}, $r_{02}$, and $r_{13}$ \citep[and references therein]{roxb03, yang07b}, are considered to be very useful diagnostic tools. The small separations $d_{10}$ and $d_{01}$ are defined as \citep{roxb03} \begin{equation} d_{10}(n)=-\frac{1}{2}(-\nu_{n,0}+2\nu_{n,1}-\nu_{n+1,0}) \label{d10} \end{equation} and \begin{equation} d_{01}(n)=\frac{1}{2}(-\nu_{n,1}+2\nu_{n,0}-\nu_{n-1,1}). \label{d01} \end{equation} In practical calculations, however, the smoother five-point separations are adopted. 
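Given aligned lists of observed $l=0$ and $l=1$ frequencies, equations (\ref{d10}) and (\ref{d01}) are simple three-point differences. A sketch follows (treating the Python list index as a stand-in for the radial order $n$ is an assumption about how the mode lists are aligned):

```python
def d10(nu0, nu1, n):
    """Equation (1): d10(n) = -(1/2)(-nu_{n,0} + 2 nu_{n,1} - nu_{n+1,0})."""
    return -0.5 * (-nu0[n] + 2.0 * nu1[n] - nu0[n + 1])

def d01(nu0, nu1, n):
    """Equation (2): d01(n) = (1/2)(-nu_{n,1} + 2 nu_{n,0} - nu_{n-1,1})."""
    return 0.5 * (-nu1[n] + 2.0 * nu0[n] - nu1[n - 1])

# Sanity check: for a perfectly regular spectrum nu_{n,l} = (n + l/2)*Dnu
# (asymptotic relation with zero small separation), both vanish.
Dnu = 100.0  # large separation in muHz, arbitrary
nu0 = [n * Dnu for n in range(10, 20)]
nu1 = [(n + 0.5) * Dnu for n in range(10, 20)]
print(d10(nu0, nu1, 3), d01(nu0, nu1, 3))
```

Departures of these differences from zero carry the diagnostic information about the stellar interior discussed in the text.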
Stars with masses between about 1.1 and 1.5 \dsm{} have a convective core during their MS. The discontinuity in density at the edge of the convective core increases with the evolution of the stars. The rapid variation of density with depth in a stellar core can distort acoustic wave propagation in stellar interiors, producing a reflected wave \citep{roxb07}. The reflectivity can come from the rapid density change at the edge of the convective core \citep{roxb07}. Modes with frequencies larger than a critical frequency can penetrate into the convective core. Partial wave reflection at the core boundary could lead to acoustic resonances in the convective core \citep{roxb04}. As a consequence, at high frequencies, we would see a periodic variation in the small separations with frequency \citep{roxb04}. If this periodic component is determined from observations, it can be used for constraining the size of the convective core \citep{roxb99, roxb01, roxb04}. \cite{roxb94, roxb00a, roxb00b, roxb01} developed the theory of semiclassical analysis that can more accurately describe the low-degree p-modes and the small separations. However, \cite{roxb04} pointed out that their expressions for the perturbations in the phase shifts are not transparent enough to serve as a basis for simple estimates. The effects of the convective core on $d_{10}$, $d_{01}$, \drb{}, and \dra{} have also been studied by other authors \citep{cunh07, cunh11, bran10, bran14, dehe10, silv11, silv13, liu14, yang15}. The conclusion is that the ratios \dra{} and \drb{} can be affected by the presence of the convective core. In order to isolate the frequency perturbation produced by the edge of the convective core, \cite{cunh07} and \cite{cunh11} defined a tool $r_{0213}=r_{02}-r_{13}$. 
They have shown that the tool can potentially be used to infer information about the amplitude of the discontinuity in the sound speed at the edge of the convective core, but it is unable to fully isolate the frequency perturbation. \cite{yang15} show that the ratios \dra{} and \drb{} of a star with a convective core can be described by the equation \begin{equation} B(\nu_{n,1})=\frac{2A\nu_{n,1}}{2\pi^{2}(\nu^{2}_{0}-\nu^{2}_{n,1})} \sin(2\pi\frac{\nu_{n,1}}{\nu_{0}})+B_{0}, \label{eqb} \end{equation} where $A$ and $B_{0}$ are free parameters and $\nu_{0}$ is the frequency of the $l=1$ mode whose inner turning point is located on the boundary between the radiative region and the overshooting region of the convective core. In this work, we propose a method to estimate the radius of the convective core of MS stars with masses between about 1.1 and 1.5 \dsm{} from observed frequencies of low-degree p-modes. The estimated radius is comparable with that obtained from evolutionary models. Individual frequencies of p-modes of KIC 9812850 have been extracted by \cite{appo12}. The mass of KIC 9812850 estimated by \cite{metc14} is $1.39\pm0.05$ \dsm{}. Thus KIC 9812850 could have a convective core. We determined the radius of the convective core of KIC 9812850 in two ways. One is estimated from the observed frequencies; the other is determined from the best model for KIC 9812850. In Section 2, a formula that can be used to determine the radius of the convective core from oscillation frequencies is proposed and is applied to different stars. In Section 3, based on finding the maximum likelihood of models of a grid of evolutionary tracks, the best-fit model of KIC 9812850 is identified; we then compare the radius of the convective core of the best model with that determined from oscillation frequencies. Finally, we discuss the domain of validity of the method and summarize in Section 4.
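Equation (\ref{eqb}) is cheap to evaluate numerically, which is what a chi-square fit of $A$, $\nu_{0}$, and $B_{0}$ against the observed ratios requires. A direct transcription (the parameter values below are arbitrary illustrations, not fitted values):

```python
import math

def ratio_model(nu, A, nu0, B0):
    """Equation (3): B(nu) = 2*A*nu*sin(2*pi*nu/nu0)
    / (2*pi**2 * (nu0**2 - nu**2)) + B0."""
    return (2.0 * A * nu * math.sin(2.0 * math.pi * nu / nu0)
            / (2.0 * math.pi ** 2 * (nu0 ** 2 - nu ** 2)) + B0)

# At nu = nu0/2 the sine term vanishes, so the model returns ~B0:
print(ratio_model(1000.0, A=5.0, nu0=2000.0, B0=0.05))  # -> 0.05
```

Note that the expression has a removable $0/0$ form at $\nu = \nu_{0}$ itself, so a fit should avoid evaluating exactly at that point.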
\subsection{Discussion} When angular frequencies of modes are larger than a critical frequency $\omega_{0}$, the modes can penetrate into the convective core of stars. Assuming that the effects of the convective core on oscillations are related to $-A\cos(\omega_{0}t)$, where $A$ is a free parameter, \cite{yang15b} obtained equation (\ref{eqb}) as the result of a Fourier transform of $-A\cos(\omega_{0}t)$. Thus equation (\ref{eqb}) is invalid for stars whose core is radiative. Figure \ref{fig2} shows the distributions of H mass fraction, adiabatic sound speed, and \drb{} of core-radiative models in different evolutionary stages. The ratio \drb{} decreases with increasing frequency, and the distributions cannot be reproduced by equation (\ref{eqb}). The core of model S2 in Figure \ref{fig3} is also radiative, and the distribution of \drb{} of that model cannot be reproduced by equation (\ref{eqb}) either. \cite{roxb04, roxb07} pointed out that the discontinuity in density at the boundary of a convective core can distort acoustic wave propagation in the stellar interior, producing a reflected wave. The effects of the convective core on oscillations are related to the fact that the discontinuity reflects acoustic waves. Therefore, equation (\ref{eqb}) is also invalid for stars with a convective core but without a discontinuity in density or sound speed at the edge of the convective core. Model S3 in Figure \ref{fig3} has a small convective core but no obvious discontinuity in density or sound speed at its edge (see Figure \ref{fig3}). The distribution of \drb{} of this model cannot be reproduced by equation (\ref{eqb}). Model S4, in contrast, has an obvious sound-speed discontinuity at the edge of its convective core, and the distribution of \drb{} of model S4 is almost reproduced by equation (\ref{eqb}).
The cores of the S1 models in Figures \ref{fig4} and \ref{fig5} are also convective, but these models show no obvious discontinuity in density or sound speed. The distributions of \drb{} of these models cannot be reproduced by equation (\ref{eqb}) either, while the distributions of \drb{} of models with a convective core and an obvious sound-speed discontinuity at the edge of the convective core are almost reproduced by equation (\ref{eqb}) (see Figures \ref{fig4} and \ref{fig5}). Modes with $l=0$ are considered to be able to reach the center of a star. According to equation (\ref{tp}), it is more difficult for modes with $l\geq2$ to reach the convective core than for modes with $l=1$. Thus the frequency $\nu_{0}=\omega_{0}/2\pi$ could be the frequency of the $l=1$ mode whose inner turning point is located on the boundary between the radiative region and the overshooting region of the convective core. \cite{roxb94, roxb99, roxb04, roxb07} show that there are periodic variations in the small separations with a period determined approximately by the acoustic diameter of the convective core, i.e., the period \begin{equation} T_{c} \approx 2\int_{0}^{r_{c}}\frac{dr}{c_{s}}. \label{tc} \end{equation} The larger the value of $r_{c}$, the longer $T_{c}$, i.e., the smaller the frequency $\nu_{c}=1/T_{c}$. According to equation (\ref{tp}), the larger the value of $r_{c}$, the smaller the frequency $\nu_{0}$. Thus the frequency $\nu_{c}$ of Roxburgh could be related to the frequency $\nu_{0}$. The sound speed decreases from $5.60\times10^{7}$ cm s$^{-1}$ to $4.85\times10^{7}$ cm s$^{-1}$ in the convective core of model S3 of the star with $M=1.16$ \dsm{} (see Figure \ref{fig4}). If $c_{s}$ in equation (\ref{tc}) is replaced by $c_{s}(r_{c})$, the value of $\nu_{c}$ can be estimated to be about $0.5 c_{s}(r_{c})/r_{c}$. From equation (\ref{rc1}), one can obtain $\nu_{0} = 0.45 c_{s}(r_{c})/r_{c}$, which is very close to $\nu_{c}$. 
In the stellar interior, the sound speed decreases with increasing radius. The value of $c_{s}(r_{c})$ varies with the mass and age of stars and is affected by overshooting. For most MS stars with masses between about 1.1 and 1.5 \dsm{}, $c_{s}(r_{c})$ lies mainly in the range $(4-6)\times10^{7}$ cm s$^{-1}$ (see Figures \ref{fig4} and \ref{fig5}). For example, for a star with $M=1.16$ \dsm{}, $X_{i}=0.7$, $Z_{i}=0.02$, and $\delta_{\rm ov}=0.2$, when it evolves from central hydrogen abundance $X_{c}=0.59$ to $X_{c}=0.16$, the value of $c_{s}(r_{c})$ decreases from about $5.2\times10^{7}$ cm s$^{-1}$ to $4.5\times10^{7}$ cm s$^{-1}$; for a star with $M=1.4$ \dsm{}, $X_{i}=0.7$, $Z_{i}=0.02$, and $\delta_{\rm ov}=0$, when it evolves from $X_{c}=0.5$ to $X_{c}=0.1$, the value of $c_{s}(r_{c})$ decreases from about $5.5\times10^{7}$ cm s$^{-1}$ to $4.5\times10^{7}$ cm s$^{-1}$. Therefore, during most of the MS stage of stars with masses between about 1.1 and 1.5 \dsm{}, taking a model in the middle of the MS as a reference, one can assume a change of $10\%$ in $c_{s}(r_{c})$, i.e., $c_{s}(r_{c})\sim(5.0\pm0.5)\times10^{7}$ cm s$^{-1}$. For our sample, the relative uncertainty of $\nu_{0}$ determined by chi-square fitting from the observed data is between about $2.4\%$ and $6.2\%$. However, our sample is small, and the relative uncertainty of $\nu_{0}$ for other stars might be larger than $6.2\%$. In order to estimate the uncertainty of the derived $r_{c}$ when this method is applied to other stars, we assume that the relative uncertainty of $\nu_{0}$ is of the order of $10\%$ and apply this value to all cases. As a consequence, the relative uncertainty of the estimated $r_{c}$ is about $20\%$.
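Equation (\ref{rc1}) implies $r_{c} = 0.45\,c_{s}(r_{c})/\nu_{0}$, so the $20\%$ quoted above follows from adding the two $10\%$ fractional uncertainties linearly. A minimal propagation sketch (the value of $\nu_{0}$ here is illustrative only, not a measurement):

```python
# Error budget for r_c = 0.45 * c_s(r_c) / nu_0.
# nu_0 is a hypothetical value; the two 10% uncertainties are those adopted above.
c_s, dc_s = 5.0e7, 0.5e7        # cm/s, i.e. +/-10%
nu0 = 2.3e-3                    # Hz, illustrative only
dnu0 = 0.10 * nu0               # assumed 10% fractional uncertainty

r_c = 0.45 * c_s / nu0
dr_c = r_c * (dc_s / c_s + dnu0 / nu0)   # linear (worst-case) propagation

print(f"r_c = {r_c:.2e} cm, relative uncertainty = {dr_c / r_c:.0%}")
```

Adding the two terms in quadrature instead would give about $14\%$; the $20\%$ adopted above is the conservative linear sum.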
Table \ref{tab3} shows that the values of the radius of the convective core determined by equations (\ref{eqb}) and (\ref{rc2}) from the observed frequencies of different stars are in good agreement with those obtained from the best models of the stars. \subsection{Summary} Combining equation (\ref{eqb}) with formula (\ref{rc2}), we propose here for the first time a method to estimate the radius of the convective core, including the overshooting region, of MS stars with masses between about 1.1 and 1.5 \dsm{} from observed frequencies and ratios. The estimated values of the radius of the convective core of four stars are consistent with those of the best models of these stars. Using the observed frequencies and ratios of KIC 9812850, equations (\ref{eqb}) and (\ref{rc2}) predict that the radius of the convective core of KIC 9812850 is $0.140\pm0.028$ \dsr{}. In order to confirm this prediction, we constructed a grid of evolutionary tracks. Based on finding the maximum-likelihood model, we obtained the best-fit model of KIC 9812850 with $M=1.48$ \dsm{}, $R=1.867$ \dsr{}, \teff{}$=6408$ K, $L=5.28$ \dsl{}, $t=2.606$ Gyr, $r_{c}=0.149$ \dsr{}, and \dov{}$=0.2$. This model reproduces the asteroseismic and non-asteroseismic characteristics of KIC 9812850, and the radius of its convective core is in good agreement with that predicted by formula (\ref{rc2}). Equations (\ref{eqb}) and (\ref{rc2}) thus aid in understanding the structure of stars directly from observed frequencies.
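The model-selection step can be sketched generically as a $\chi^{2}$-based maximum-likelihood search over a grid of models; the observables, errors, and grid values below are invented placeholders, not the actual grid constructed for KIC 9812850:

```python
import numpy as np

# Generic maximum-likelihood selection: L ~ exp(-chi^2 / 2).
# Observables, errors, and the model grid are hypothetical placeholders.
obs = np.array([6408.0, 5.28])                 # e.g. Teff (K), L (Lsun)
err = np.array([100.0, 0.5])
grid = np.array([[6300.0, 4.90],
                 [6410.0, 5.30],
                 [6550.0, 5.80]])

chi2 = (((grid - obs) / err) ** 2).sum(axis=1)
likelihood = np.exp(-0.5 * chi2)
best = int(np.argmax(likelihood))
print(f"best model index: {best}")  # the grid point closest to the observables
```

In practice the grid spans masses, compositions, and overshooting parameters, and the likelihood combines asteroseismic and non-asteroseismic observables, but the selection logic is the same.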
1607.06164
1607.08386_arXiv.txt
{PG\,1159 stars are hot, hydrogen-deficient (pre-) white dwarfs with atmospheres mainly composed of helium, carbon, and oxygen. The unusual surface chemistry is the result of a late helium-shell flash. Observed element abundances enable us to test stellar evolution models quantitatively with respect to their nucleosynthesis products formed near the helium-burning shell of the progenitor asymptotic giant branch stars. Because of the high effective temperatures ($T\mathrm{\hspace*{-0.4ex}_{eff}}$), abundance determinations require ultraviolet spectroscopy and non-local thermodynamic equilibrium model atmosphere analyses. Up to now, we have presented results for the prototype of this spectral class and two cooler members (\Teff\ in the range 85\,000--140\,000~K). Here we report on the results for two even hotter stars (\pgfuenf\ and \pgev, both with \Teff = 150\,000~K) which are the only two objects in this temperature--gravity region for which useful far-ultraviolet spectra are available, and revisit the prototype star. Previous results on the abundances of some species are confirmed, while results on others (Si, P, S) are revised. In particular, a solar abundance of sulphur is measured in contrast to earlier claims of a strong S deficiency that contradicted stellar evolution models. For the first time, we assess the abundances of Na, Al, and Cl with newly constructed non-LTE model atoms. Besides the main constituents (He, C, O), we determine the abundances (or upper limits) of N, F, Ne, Na, Al, Si, P, S, Cl, Ar, and Fe. Generally, good agreement with stellar models is found.}
\label{intro} PG\,1159 stars are hydrogen-deficient and occupy the hot end of the white dwarf cooling sequence \citep[effective temperatures ranging between \Teff = 75\,000~K and 250\,000~K,][]{2006PASP..118..183W,2014A&A...564A..53W,2015A&A...584A..19W}. Their surfaces are dominated by helium-rich intershell matter dredged up by a late helium-shell flash (thermal pulse, TP). The main atmospheric constituents are helium and carbon (He = 0.30--0.92, C = 0.08--0.60, mass fractions), often accompanied by large amounts of oxygen (up to 0.20). The high abundances of C and O are witness to efficient overshoot of intershell convection in asymptotic giant branch (AGB) stars, drawing these elements from the C--O stellar core \citep{1999A&A...349L...5H,2000A&A...360..952H,2016A&A...588A..25M}. Measured overabundances of neon confirm this picture. Our investigation of these stars now concentrates on the trace elements up to the iron group. Quantitative spectroscopic abundance determinations can be compared to predictions from nucleosynthesis calculations in stellar evolution models for AGB stars. The first comprehensive analysis of trace metal abundances was performed for the prototype of the PG\,1159 spectroscopic class, \elf\ \citep[\Teff = 140\,000~K,][]{2007A&A...462..281J}. Then, two cooler objects from this group were analysed \citep[\pgvier\ and \pgsieben, \Teff = 110\,000 and 85\,000~K,][]{2015A&A...582A..94W}. Prerequisites for these types of work are ultraviolet (UV) spectroscopic observations and model atmospheres accounting for departures from local thermodynamic equilibrium (LTE), because of the high effective temperatures. The far-UV spectral range covered by the Far Ultraviolet Spectroscopic Explorer (FUSE, 912--1188~\AA) is a most rewarding source. In the present paper, we assess the element abundances of two representatives of this group that are hotter than the prototype, namely 150\,000~K: \pgfuenf\ and \pgev. 
They are the hottest PG\,1159 stars for which useful FUSE data are available. For their analysis we developed new non-LTE model atoms, concentrating on high ionisation stages, particularly for elements not considered previously (Na, Mg, Al, Cl). We strive to identify spectral lines that have not been observed before in any stellar spectrum. The effective temperature of both stars is only rivaled by a few low-gravity PG\,1159-type central stars with \Teff\ up to 170\,000\,K \citep[K1--16, RX\,J2117.1+3412, NGC\,246, Longmore~4;][]{2010ApJ...719L..32W} for which good FUSE spectra are available. But for a comprehensive quantitative analysis, they require expanding-atmosphere models and the design of model atoms for even higher ionisation stages. In addition to the two program stars, we revisit the prototype itself. It is necessary and useful to analyse a larger number of these stars because, according to stellar models, the abundances of individual trace elements strongly depend on the stellar mass.
\label{sect:discussion} We have analysed FUV spectra of two hot PG\,1159 stars (\pgev\ and \pgfuenf, both \Teff = 150\,000~K) with the focus on element abundance determinations. Previously obtained values for \Teff\ and \logg\ were confirmed. With our improved model atmospheres, we additionally derived new findings on the prototype itself (\elf; 140\,000~K). The results for these three objects are summarized in Table~\ref{tab:stars}. In Fig.\,\ref{fig:abu}, the element abundances are displayed, together with the results for two cooler PG\,1159 stars \citep[\pgsieben\ and \pgvier\ with 85\,000 and 110\,000~K;][]{2014A&A...569A..99W}. Taken together, these five objects represent the only PG\,1159 stars with comprehensive element abundance determinations. \begin{table} \begin{center} \caption{Asteroseismic, spectroscopic, and initial masses of the five considered PG\,1159 stars. } \label{tab:masses} \begin{tabular}{llll} \hline \hline \noalign{\smallskip} Star & M$_{\rm puls}$/M$_\odot$ & M$_{\rm spec}$/M$_\odot$& M$_{\rm init}$/M$_\odot$\tablefootmark{(e)} \\ \hline \noalign{\smallskip} \pgvier & -- & $0.51^{+0.07}_{-0.01}$\tablefootmark{(c)} & $\approx$1--2 \\ \noalign{\smallskip} \pgsieben & $0.542^{+0.014}_{-0.012}$\tablefootmark{(a)} & $0.53^{+0.17}_{-0.03}$\tablefootmark{(c)} & $\approx$1--3 \\ \noalign{\smallskip} \elf & $0.565^{+0.025}_{-0.009}$\tablefootmark{(b)} & $0.54^{+0.05}_{-0.01}$\tablefootmark{(c)} & $\approx$1--2 \\ \noalign{\smallskip} \pgev & -- & $0.56^{+0.07}_{-0.03}$\tablefootmark{(d)} & $\approx$1--2.5 \\ \noalign{\smallskip} \pgfuenf & -- & $0.62^{+0.15}_{-0.08}$\tablefootmark{(d)} & $\approx$1--4 \\ \noalign{\smallskip} \hline \end{tabular} \tablefoot{ \tablefoottext{a}{\citet{2008A&A...478..869C};} \tablefoottext{b}{\citet{2009A&A...499..257C};} \tablefoottext{c}{\citet{2014A&A...569A..99W};} \tablefoottext{d}{this work;} \tablefoottext{e}{estimated from initial-to-final mass relation of \citet{2000A&A...363..647W} using M$_{\rm spec}$.} } 
\end{center} \end{table} For the comparison of the results with evolutionary models, stellar masses are relevant. In Table~\ref{tab:masses} we list the spectroscopic masses derived by comparing the observed values of \Teff\ and \logg\ with evolutionary tracks by \citet{2009ApJ...704.1605A}. For the two pulsators in the sample, asteroseismic masses are also available, and they are in good agreement with the spectroscopic values. Also listed are the masses of the main-sequence progenitors, as estimated from the initial-to-final mass relation of \citet{2000A&A...363..647W}, taking into account the uncertainties of the spectroscopic masses. The previously determined abundances of the main atmospheric constituents (He, C, O, Ne) in our two program stars were confirmed. The same holds for N, F, Ar, and Fe. The investigation of Si, P, and S gave improved results compared to earlier, preliminary work. For the first time, we have assessed the abundances of Na, Al, and Cl, and derived upper limits. We used our model atmosphere grid to reassess the abundances of trace elements in the prototype \elf. The main result is an improved S abundance, which is solar, in contrast to the strong S depletion (0.02 solar) claimed by \citet{2007A&A...462..281J}, which was caused by an error in the model atom. We also derived upper limits for Na, Al, and Cl for this star. The results can be compared to evolutionary models by \citet{2013MNRAS.431.2861S}, who presented intershell abundances for stars with initial masses of 1.8--6~$M_\odot$. For \elf, the authors found good agreement for the species investigated (He, C, O, F, Ne, Si, P, S, Fe) with the exception of sulphur. As pointed out above, this discrepancy is now resolved. For the two cool PG\,1159 stars mentioned above, good agreement of observed and predicted element abundances was found \citep{2014A&A...569A..99W}.
As to our two program stars, all element abundances are similar to those found for the other three PG\,1159 stars (Fig.\,\ref{fig:abu}) and, hence, confirm the evolutionary models. For a thorough discussion, we can therefore refer to \citet{2014A&A...569A..99W}; however, N, Na, Al, and Cl deserve particular attention here. The Na and Cl abundances in PG\,1159 stars were examined for the first time. \pgev\ is remarkable because of its relatively large N abundance. This phenomenon is shared by some other PG\,1159 stars (like the cooler object \pgsieben\ mentioned above) and is attributed to the fact that these objects suffered their last thermal pulse when they already were white dwarfs (a so-called very late thermal pulse, VLTP). Only for \pgev\ could a useful upper limit for the sodium abundance be derived, which is 341 times oversolar. Karakas \& Shingles (priv. comm.) report that the Na intershell abundance in the 1.8 and 3~$M_\odot$ models presented in \citet{2013MNRAS.431.2861S} has increased considerably, reaching 23.1 and 18.1 times oversolar after the last thermal pulse. This is still well below our detection limit. For aluminium, upper limits of 18 and 6 times solar were derived for \pgev\ and \pgfuenf. The Al intershell abundance at the last thermal pulse in the mentioned 1.8 and 3~$M_\odot$ models is 1.5 and 2 times the initial solar value. Hence, the rather modest production of Al is confirmed by our result. For the two program stars, as well as for \elf, only a high upper limit of Cl $= 10^{-3}$ was derived, which is about two dex oversolar. The Cl intershell abundance in the 1.8~$M_\odot$ and 3~$M_\odot$ models has increased to 2.3 and 2.1 times oversolar after the last thermal pulse, i.e., well below our detection limit. To summarize, the abundances of elements up to iron in PG\,1159 stars are in good agreement with intershell abundances predicted by stellar-evolution models.
For a number of interesting species (Na, Al, Cl), however, only upper limits can be derived by spectral analyses. It turned out that for the relevant ionisation stages ({\sc iv--vii}), precise atomic data is lacking, impeding line identifications. This obstacle can only be overcome by laboratory measurements. The detailed investigation into the trace element abundances as a function of stellar mass is additionally hindered by the relatively large uncertainty in the spectroscopic mass determination, which itself is due to the uncertainty of the surface gravity measurement ($\pm 0.5$~dex). Considerable improvement is expected from accurate parallaxes that will be provided by GAIA.
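As a numerical cross-check of the limits discussed above, the quoted upper limits and model predictions can be placed on a common dex scale. The factors over solar are those quoted in the text, with two pairing choices made here for illustration: the Cl limit of about two dex is written as a factor of 100, and the $3\,M_\odot$ model values are used throughout:

```python
import math

# Observed upper limits vs. predicted intershell enhancements (factors over solar).
# Values taken from the text; the 3 Msun model values are used for all three elements.
limits = {"Na": 341.0, "Al": 18.0, "Cl": 100.0}   # observed upper limits
models = {"Na": 18.1, "Al": 2.0, "Cl": 2.1}       # 3 Msun model predictions

for el in limits:
    margin = math.log10(limits[el]) - math.log10(models[el])
    print(f"{el}: prediction lies {margin:.1f} dex below the upper limit")
```

Each predicted enhancement sits about 1--1.7 dex below the corresponding upper limit, which is why the non-detections remain consistent with the models.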
1607.08386
1607.06813_arXiv.txt
We argue that gravitational wave (GW) signals due to collisions of ultra-relativistic bubble walls may be common in string theory. This occurs due to a process of post-inflationary vacuum decay via quantum tunnelling within (Randall-Sundrum-like) warped throats. Though a specific example is studied in the context of type IIB string theory, we argue that our conclusions are likely more general. Many such transitions could have occurred in the post-inflationary Universe, as a large number of throats with exponentially different IR scales can be present in the string landscape, potentially leading to several signals of widely different frequencies -- a \emph{soundscape} connected to the landscape of vacua. Detectors such as eLISA and AEGIS, and observations with BBO, SKA and EPTA (pulsar timing) have the sensitivity to detect such signals, while at higher frequency aLIGO is not yet at the required sensitivity. A distribution of primordial black holes is also a likely consequence, though reliable estimates of masses and $\Omega_{\rm pBH} h^2$ require dedicated numerical simulations, as do the fine details of the GW spectrum due to the unusual nature of both the bubble walls and transition.
\section{I. Introduction} The recent direct detection of gravitational waves (GW) by LIGO \cite{Abbott:2016blz} opens a new mode of physical exploration. Although the potential of GW detectors to study astrophysical objects has been deeply investigated \cite{1511.05994}, their potential for exploring Beyond-the-Standard-Model (BSM) physics is still in its relative infancy. Prime examples studied include the physics of inflation~\cite{Grishchuk:1974ny,Starobinsky:1979ty,Rubakov:1982df,Fabbri:1983us,Abbott:1984fp}, the presence of strongly first-order thermal phase transitions in the early Universe, e.g.~non-SM electroweak (see~\cite{Witten:1984rs,Turner:1992tz,Kosowsky:1992rz,Kosowsky:1991ua,astro-ph/9211004,astro-ph/9310044} for early work and~\cite{Caprini:2015zlo} for a recent overview), and the possibility of probing the existence of axions~\cite{Arvanitaki:2016qwi}. We argue that GW detectors may provide a powerful tool to interrogate the nature of short-distance physics, particularly string theory, in a way unrelated to the process of inflation: specifically, GW signals from post-inflationary vacuum decay are a natural feature of the type IIB (and likely more general) string landscape. Our conclusions rely on the observation that flux compactifications in type IIB string theory can contain a large number of highly warped regions~\cite{Giddings:2001yu,Douglas:2003um,hep-th/0307049,manythroats1,manythroats2}, often referred to as throats, with physics related to that of Randall-Sundrum models~\cite{hep-ph/9905221,hep-th/9906064} (see Fig.~\ref{fig:cartoon}). Under rather general assumptions, later made more precise, a throat can present a metastable vacuum in which supersymmetry (SUSY) is locally broken, along with a true locally-SUSY-preserving vacuum \cite{Kachru:2002gs}, to which it eventually decays. Specifically, we explore vacuum decay within a given throat via zero-temperature quantum tunnelling.
We study the effect of the nucleation of bubbles of the true vacuum in the early Universe, and argue that resulting ultra-relativistic bubble wall collisions may lead to observable GW signals. The frequency of the GW produced will be different for different throats, since it depends sensitively on its details, most of all on the gravitational warp factor (i.e. red-shift), $w_{IR} \ll 1$, setting the relation between the IR energy scale of the tip of the throat and the string scale~$M_{s}$. Since a large number of throats with exponentially different warp factors can be present in the string landscape (see e.g.~\cite{Hebecker:2006bn}), GW signals with very different frequencies can be produced. Space-based experiments, such as BBO \cite{BBO1,BBO2}, eLISA \cite{eLISA}, or AEGIS \cite{AEGIS3}, and pulsar timing arrays like SKA \cite{SKA} or EPTA \cite{EPTA}, are well suited for detection of these \emph{soundscape} signals, both in terms of frequency range and sensitivity. Larger compactification volumes and smaller $w_{IR}$ both shift the frequency peak of the signal towards smaller values, making pulsar timing array detectors optimal for probing very large volume and/or very strongly warped scenarios, while in the higher frequency range where aLIGO \cite{aLIGO} operates, even the strongest GW signal is unlikely to be detectable by current technology. Another likely consequence of the ultra-relativistic bubble wall collisions is the production of primordial black holes (pBHs)~\cite{PBH1,PBH2,PBH3,PBH5,PBH6}. This pBH production process, and the fine details of the high-frequency portion of the GW spectrum itself, is sensitive to the peculiarities of the bubble wall and vacuum decay dynamics that apply in our case (the dynamics are different than those of both thermal phase transitions, and previous studies of inflation-terminating quantum tunnelling vacuum decay). 
A reliable calculation of the pBH mass distribution and of $\Omega_{\rm pBH} h^2$ requires dedicated numerical simulations, as does the GW spectrum in the frequency domain beyond the peak position. If the production of pBHs is both highly efficient, and has a mass distribution that extends above $\simeq 10^9 \ {\rm g}$, then the maximum amplitude of GWs observable today can be constrained. On the other hand, the possible production of pBHs in the mass regions where pBHs may still comprise a significant fraction of the dark matter (DM) density provides another motivation for detailed studies. We emphasise that the study we present here is just a first step towards understanding the rich physics of the string soundscape. \begin{figure} \includegraphics[scale=0.35]{fig1.pdf} \caption{\label{fig:cartoon} (a) Cartoon of a type IIB flux compactification featuring a large number of warped regions (throats) some of which will be of Klebanov-Strassler (KS) type \cite{Klebanov:2000hb}. (b) Close-up of a KS throat (topologically $\cong S^3 \times S^2 \times \mathbb{R}$) with 3-form RR and NSNS flux quanta on the $A$-cycle and on the $B$-cycle respectively. The fluxes lead to a tip warp factor $w_{IR}$. In the locally SUSY-breaking false vacuum anti-$D3$ branes are localized at the tip~\cite{Kachru:2002gs}.} \end{figure}
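The warp-factor dependence of the signal frequency described above can be illustrated with the standard radiation-era redshift estimate $f_{0}\approx1.65\times10^{-5}\,{\rm Hz}\,(f_{*}/H_{*})(T_{*}/100\,{\rm GeV})(g_{*}/100)^{1/6}$, taking the transition scale to be $T_{*}\sim w_{IR}M_{s}$. The prefactor is the textbook one for thermal transitions, and $M_{s}\sim10^{18}$~GeV with $f_{*}/H_{*}\sim1$ are placeholder assumptions, not the detailed spectra computed for this scenario:

```python
# Redshifted peak frequency of a GW signal emitted at temperature T_* (in GeV).
# Standard radiation-era scaling; all inputs here are illustrative placeholders.
def f_peak_today(T_star_GeV, f_over_H=1.0, g_star=100.0):
    return 1.65e-5 * f_over_H * (T_star_GeV / 100.0) * (g_star / 100.0) ** (1.0 / 6.0)

M_s = 1e18  # GeV, placeholder string scale
for w_IR in (1e-4, 1e-8, 1e-12):
    print(f"w_IR = {w_IR:.0e}: f_0 ~ {f_peak_today(w_IR * M_s):.1e} Hz")
```

Even this crude scaling reproduces the trend stated above: smaller $w_{IR}$ pushes the peak from high frequencies down through the aLIGO band towards the eLISA and pulsar-timing regimes.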
1607.06813
1607.00432_arXiv.txt
IRAS~19312+1950 is a peculiar object that has eluded firm characterization since its discovery, with combined maser properties similar to an evolved star and a young stellar object (YSO). To help determine its true nature, we obtained infrared spectra of IRAS~19312+1950 in the range 5-550~$\mu$m using the {\it Herschel} and {\it Spitzer} space observatories. The {\it Herschel} PACS maps exhibit a compact, slightly asymmetric continuum source at 170~$\mu$m, indicative of a large, dusty circumstellar envelope. The far-IR CO emission line spectrum reveals two gas temperature components: $\approx0.22M_{\odot}$ of material at $280\pm18$~K, and $\approx1.6M_{\odot}$ of material at $157\pm3$~K. The O\,{\sc i} 63~$\mu$m line is detected on-source but no significant emission from atomic ions was found. The HIFI observations display shocked, high-velocity gas with outflow speeds up to 90~\kms\ along the line of sight. From {\it Spitzer} spectroscopy, we identify ice absorption bands due to H$_2$O at 5.8~$\mu$m and CO$_2$ at 15~$\mu$m. The spectral energy distribution is consistent with a massive, luminous ($\sim2\times10^4L_{\odot}$) central source surrounded by a dense, warm circumstellar disk and envelope of total mass $\sim500$-$700M_{\odot}$, with large bipolar outflow cavities. The combination of distinctive far-IR spectral features suggest that IRAS~19312+1950 should be classified as an accreting high-mass YSO rather than an evolved star. In light of this reclassification, IRAS~19312+1950 becomes only the 5th high-mass protostar known to exhibit SiO maser activity, and demonstrates that 18 cm OH maser line ratios may not be reliable observational discriminators between evolved stars and YSOs.
\baboon\ is an infrared-bright object located in the Galactic plane at a distance of about $3.8$~kpc \citep{ima11}. Observations by 2MASS show an extended, horn-like IR nebulosity surrounding the central (point-like) source \citep{nak00}. A more recent near-IR (JHK) image from the UKIRT Infrared Deep Sky Survey is shown in Figure \ref{fig:ukidss}, in which a complex filamentary structure is evident, spanning a scale of $\sim30''$. Based on SiO and H$_2$O maser detections, \citet{nak00} deduced that the central source of \baboon\ is most likely an oxygen-rich evolved star similar to OH\,231.8+4.2. Subsequent molecular line observations by \citet{deg04} identified abundant carbon, nitrogen and oxygen-bearing molecules in the circumstellar envelope, many of which show a complex kinematical structure. The presence of a strong, narrow, bipolar emission component (about 1-2~km\,s$^{-1}$ wide), superimposed on a broader component with FWHM $\sim50$~km\,s$^{-1}$ led \citet{deg04} and \citet{nak05} to interpret the observed emission as arising in the outflows from an AGB stellar atmosphere. However, several observational characteristics of \baboon\ are unusual for an AGB star, which justifies a closer look at the nature of this object: (1) the linear distribution of H$_2$O maser spots and complex OH maser line profile \citep{nak11}; (2) the detection of common interstellar molecules, including CH$_3$OH, N$_2$H$^+$ and HC$_3$N, which are not normally seen in O-rich AGB star envelopes; and (3) the association of \baboon\ with a massive, dense molecular cloud core G055.372+00.185 \citep{dun11}. \citet{nak11} identified two most likely scenarios to explain the observations of \baboon. First, they considered the central object to be a massive ($\sim10\,M_{\odot}$) O-rich AGB star with a fast bipolar outflow embedded in a chemically-rich molecular cloud. 
\citet{deg04} noted that the association with a compact molecular cloud is highly unusual, but could be the result of a chance encounter \citep[\eg][]{kas94}. On the other hand, the evidence could indicate that \baboon\ is an unusual, massive young stellar object, which would be consistent with some of the observed outflow characteristics as well as the high CH$_3$OH and HC$_3$N abundances. A Class~I CH$_3$OH maser was recently detected in \baboon\ by \citet{nak15}. This class of maser has not previously been seen in evolved stars, but is common in regions of high mass star formation as a tracer of molecular outflows \citep[\emph{e.g.}][]{cyg09}. Strongly variable red and blue-shifted 43~GHz SiO maser peaks were observed towards \baboon\ by \citet{sco02} and \citet{nak11}, with a velocity separation of about 30-36~\kms. \citet{sco02} speculated that these masers could arise in the approaching and receding sides of a rotating disk about a young stellar object, similar to Orion KL. SiO masers, however, are extremely rare in young stellar objects, having been seen in only 4 massive star forming regions to date: Orion KL, Sgr B2, W51 \citep{zap09} and recently in Galactic Center Cloud C, G0.38+0.04 \citep{gin15}. By contrast, SiO masers have been observed in thousands of AGB stars. Furthermore, the strong 1612~MHz OH maser line of \baboon\ (relative to the 1665 and 1667~MHz lines) is a typical characteristic of evolved stars \citep{cas98,nak11}, so if confirmed as a YSO, the combined maser properties of \baboon\ would be remarkable. The spectral energy distribution (SED) of \baboon\ was modeled by \citet{mur07} assuming the central object to be a high mass-loss AGB star. At the time of that study, the SED was not well constrained in the far-IR. A severe mismatch between model and observations at the longest wavelengths ($>20~\mu$m) was dismissed as primarily the result of background flux contamination.
However, the large far-infrared fluxes measured by IRAS indicate that the circumstellar envelope mass (as well as the intrinsic source luminosity and temperature) may have been underestimated. If confirmed, an SED that rises to a peak in the far-IR would be more reminiscent of a young, massive YSO embedded in a dense, accreting envelope. To help determine the true nature of IRAS~19312+1950, we have obtained mid- and far-IR spectroscopic observations to probe the properties of the environment surrounding this enigmatic, dust-enshrouded object. The {\it Herschel} Space Observatory \citep{pil10} HIFI (Heterodyne Instrument for the Far-Infrared; \citealt{deg10}) and PACS (Photodetector Array Camera \& Spectrometer; \citealt{pog10}) instruments were used to observe emission from dust, carbon monoxide, water and other species in the wavelength range 51-550~$\mu$m. {\it Spitzer} Space Telescope spectra were also obtained, covering the range 5-35~$\mu$m. The combined {\it Herschel} and {\it Spitzer} observations provide crucial new information on the SED and ice properties, complementing the previous IR data from ISO, 2MASS, Akari, WISE and other telescopes. In addition, our high signal-to-noise, spectrally-resolved HIFI CO and H$_2$O line profiles provide new information on the nature of the outflow. The evidence provided by these observations strongly favors the identification of IRAS~19312+1950 as an embedded, high-mass YSO.
The observational evidence presented in this article leads to a self-consistent view of \baboon\ as a massive YSO embedded in a collapsing molecular envelope. Far infrared SED modeling and comparison of the IR spectral features with other sources provides strong evidence to support this scenario. Indeed, the spectrum of PACS and HIFI emission lines is very similar to that observed previously in other massive protostars. The fact that SiO and OH maser observations of \baboon\ have previously been considered to be more characteristic of an evolved star highlights the unusual nature of this object, making it an ideal candidate for followup observations (using ALMA, SOFIA, and JWST, for example), to confirm its identity and search for any other peculiarities that may help inform our understanding of the process of high-mass star-formation / stellar evolution. Future studies to confirm the nature of \baboon\ will require high-resolution imaging to elucidate the spatial and kinematic structure of the outflow, envelope and putative disk. Observations of optically thin, dense molecular gas tracers such as C$^{18}$O, CS, HCN and H$_2$CO using sub-mm interferometry (at sub-arcsecond resolutions) should be particularly revealing. Detection and mapping of outflow tracers such as CO, SiO and HCO$^+$, and `hot core' chemical tracers such as CH$_3$CN, CH$_3$OCHO and C$_2$H$_5$CN would also help confirm the YSO identification. The presence of a compact H\,{\sc ii} region may be revealed by searching for emission from hot, ionized gas, either through deep radio continuum observations or far-IR line searches for C\,{\sc ii}, N\,{\sc ii} and other ions. Although our data strongly indicate the presence of a massive YSO, a chance coincidence with a massive evolved star along the line of sight still cannot be ruled out. 
The presence of an AGB star could be established for example, by measuring the profile of the 3~$\mu$m H$_2$O absorption band to determine the presence of crystalline H$_2$O. More detailed mapping of far-IR emission from dust, O\,{\sc i} and H$_2$O would also be worthwhile to elucidate the energetic environment close to the star.
1607.00432
1607.07448_arXiv.txt
We report the detection of a $78.1\pm0.5$ day period in the X-ray lightcurve of the extreme ultraluminous X-ray source \ngc\ ($L_{\rm{X,peak}}\sim5\times10^{40}$\,\ergps), discovered during an extensive monitoring program with \swift. These periodic variations are strong, with the observed flux changing by a factor of $\sim$3--4 between the peaks and the troughs of the cycle; our simulations suggest that the observed periodicity is detected comfortably in excess of 3$\sigma$ significance. We discuss possible origins for this X-ray period, but conclude that at the current time we cannot robustly distinguish between orbital and super-orbital variations.
\ngc\ is a remarkable member of the ultraluminous X-ray source (ULX) population. At a distance of $\sim$13.4\,Mpc, it exhibits an extreme peak X-ray luminosity of $\sim$5$\times10^{40}$\,\ergps\ (\citealt{WaltonULXcat, Sutton13}). Its hard X-ray spectrum below 10\,\kev\ had previously led to speculation that it might host an intermediate-mass black hole accreting in the low/hard state, similar to what is seen in Galactic black hole binaries at low luminosities (\citealt{Sutton12}; see \citealt{Remillard06rev} for a review of accretion states in Galactic binaries). Our recent coordinated observations with the \nustar\ and \xmm\ observatories have subsequently revealed a broadband X-ray spectrum inconsistent with this identification (\citealt{Walton15}), similar to the other ULX systems observed by \nustar\ to date (\citealt{Bachetti13, Walton14hoIX, Walton15hoII, Rana15, Mukherjee15}). Instead, the broadband spectrum implies that \ngc\ is likely a system accreting at high- or even super-Eddington rates, as suggested by \cite{Sutton13}. The most remarkable aspect of these \nustar\ and \xmm\ observations, however, is the fact that we witnessed a rise in flux of $\sim$2 orders of magnitude or more in the mere 4 days between our two observing epochs. \ngc\ was essentially undetected in our first observation, with an implied luminosity of $L_{\rm{X}}\lesssim2\times10^{38}$\,\ergps, before the source returned to a more typical brightness of $L_{\rm{X}}\sim10^{40}$\,\ergps\ in our second (\citealt{Walton15}). This event prompted us to begin a monitoring campaign with the \swift\ observatory (\citealt{SWIFT}) in order to investigate whether this behaviour was a common occurrence. Although these observations have not revealed such extreme variations again, here we report on the detection of a $\sim$\period\ day periodicity in the \swift\ lightcurve.
\begin{figure*} \hspace*{-0.6cm} \epsscale{1.13} \plotone{./figs/lc_4d_nb10_apr16.pdf} \caption{ The \swift\ XRT lightcurve of \ngc\ obtained with our monitoring campaign, shown with 4d bins (\textit{top panel}). A strong $78.1\pm0.5$d period is visibly present. In addition to the XRT data, we overlay the average cycle profile in blue, and label the individual cycles covered by the duration of our program. We also show the agreement between the data and the average cycle profile, evaluated as the $n\sigma$ deviation between the data ($D$) and the cycle prediction ($P$; \textit{bottom panel}). The agreement is generally very good. } \vspace{0.2cm} \label{fig_lc} \end{figure*} \pagebreak
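A periodicity like the one visible in the lightcurve above can be recovered from unevenly sampled monitoring data with a periodogram-style period scan. The sketch below is purely illustrative (synthetic data, not the authors' actual pipeline), using a least-squares sinusoid fit that is equivalent in spirit to a Lomb-Scargle periodogram:

```python
import numpy as np

def sine_power(t, y, periods):
    """Fraction of variance captured by the best-fitting sinusoid at each
    trial period (a simple stand-in for a Lomb-Scargle periodogram)."""
    y = y - y.mean()
    power = np.empty(len(periods))
    for i, p in enumerate(periods):
        w = 2.0 * np.pi / p
        # Design matrix [sin wt, cos wt]; solve for amplitudes by least squares
        A = np.column_stack([np.sin(w * t), np.cos(w * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        power[i] = np.sum((A @ coef) ** 2) / np.sum(y ** 2)
    return power

# Synthetic, unevenly sampled "monitoring" lightcurve with a 78.1 d signal
rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 700.0, 180))
y = 1.0 + 0.6 * np.sin(2 * np.pi * t / 78.1) + rng.normal(0.0, 0.2, t.size)

periods = np.linspace(20.0, 200.0, 2000)
power = sine_power(t, y, periods)
best = periods[np.argmax(power)]
print(f"best-fitting period: {best:.1f} d")
```

In practice, the significance of such a peak is assessed by repeating the scan on many randomized (shuffled or bootstrapped) lightcurves, which is presumably the kind of simulation behind the quoted $>3\sigma$ figure.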
\label{sec_dis} We have reported the detection of a $78.1\pm0.5$ day X-ray periodicity in the extreme ULX \ngc\ ($L_{\rm{X,peak}}\sim5\times10^{40}$\,\ergps) with \swift. The variation on this timescale is very strong, with the observed XRT count rate varying by a factor of $\sim$3--4 (peak to trough; Figure \ref{fig_lc}). Our simulations find that this periodicity is significant, comfortably in excess of the 3$\sigma$ level. When bright ($L_{\rm{X}}\gtrsim10^{40}$\,\ergps), \ngc\ dominates the X-ray emission from NGC\,5907, so the risk of source confusion is negligible. Long-timescale periodicities have been observed from several other well-studied ULXs. The $\sim$62 day period observed from the M82 field is well established, generally assumed to arise from M82\,X-1 (\citealt{Kaaret07, Pasham13m82}, although this origin has recently been questioned, \citealt{Qiu15}), and a $\sim$115 day period has been claimed from NGC\,5408 X-1 (\citealt{Pasham13_5408}; although see \citealt{Grise13}). Perhaps most famously, the most luminous ULX known to date, ESO\,243-49 HLX-1, is seen to outburst every $\sim$380 days (\citealt{Godet14}, although recently this behaviour has appeared more erratic; \citealt{Yan15}). The timescale of the periodicity reported here is comparable to several of these other cases. The key question regarding the nature of the observed periodicity is whether it could be related to the orbital period of the system, as suggested by \cite{Godet14} for ESO\,243-49 HLX-1, or perhaps some super-orbital period, as suggested by \cite{Pasham13m82} for M82 following the likely detection of a sudden phase shift in the cycle. Orbital periods can be imprinted on the observed lightcurves from X-ray binaries through (at least partial) eclipses by the companion star, or through some Be/X-ray binary-like phenomenon, in which the binary orbit is eccentric and the accretion rate is enhanced around periastron.
We note, however, that the FRED-like cycle profile observed here does not bear much similarity to the majority of the orbital profiles compiled by \cite{Falanga15} for eclipsing X-ray binaries. The lightcurve observed here also does not show a series of quiescence--outburst--quiescence cycles, as traditionally seen from Be/X-ray binaries (\citealt{Reig11}), and also from ESO\,243-49 HLX-1, so any analogy here is limited. However, it may be possible for an elliptical orbit to result in more moderate accretion rate variations via changes in the degree of Roche-lobe overflow as the distance between the compact object and its stellar companion varies (\eg\ \citealt{Church09}). Alternatively, if \ngc\ is a wind-fed X-ray binary, the accretor could enter a higher-density region of the stellar wind, resulting in an enhanced accretion rate, similar to the case of GX\,301$-$2 (\citealt{Fuerst11}). Sustaining the extreme luminosities observed from \ngc\ would, however, be a major challenge for a wind-fed scenario. Super-orbital X-ray periods are seen in many well-monitored Galactic X-ray binaries, \eg\ Cygnus X-1, Hercules X-1, SS433, \etc\ (\citealt{Rico08, Staubert13, Cherepashchuk13}, and references therein), and are typically assumed to be related to precession of the accretion flow analogous to that seen in SS433, for which this interpretation is well established (\citealt{Fabrika04}). However, super-orbital periods have also now been seen in wind-fed high-mass X-ray binaries, for which such a scenario is unlikely to be viable (\citealt{Corbet13}), and other, more exotic mechanisms such as triple systems have been proposed in some cases (see \citealt{Kotze12} for a recent review of super-orbital variability in X-ray binaries). It is difficult to distinguish between these scenarios based on the observed timescale.
Several authors have suggested that even if ULXs host standard stellar remnant black holes (\mbh\ $\sim$ 10\,\msun), some may have very long orbital periods (up to $\sim$100 days or more) if they accrete from evolved stellar companions via Roche-lobe overflow (\eg\ \citealt{Madhusudhan08}). Currently we have no independent observational constraints on the nature of the stellar companion in \ngc\ owing to both its distance and the obscuring column towards this source ($N_{\rm{H}}\sim10^{22}$\,cm$^{-2}$; the host galaxy NGC\,5907 is seen edge-on). However, \cite{Heida14, Heida15} have recently reported a number of ULXs with candidate red supergiant companions, demonstrating that some of the ULX population likely do have evolved counterparts. Indeed, if we assume Roche-lobe overflow and that the period is orbital, we can estimate a density for the stellar counterpart of $\rho\sim3\times10^{-5}$\,\gpcm3\ (\citealt{Faulkner72}), implying the counterpart may be either an M giant or an F supergiant (\citealt{Drilling00}). Furthermore, the ULX \p13\ has an orbital period of $\sim$64\,d (\citealt{Motch14nat}), so a $\sim$\period\ day orbital period may be a plausible scenario for \ngc. Similarly, super-orbital periods have been observed across a very wide range of timescales in Galactic systems, at least from 3--300 days (\citealt{Kotze12}), fully consistent with the period observed here. Should this be the correct interpretation, this would obviously imply a significantly shorter orbital period for this system. In addition to the flux variations observed, we also investigated briefly whether there is any evolution in the hardness ratio between the 0.3--2 and 2--10\,\kev\ energy bands with phase that might indicate spectral changes across the observed cycle. 
We did not find any strong evidence for such variations, indicating that either the spectrum is not systematically varying across the cycle, or that the spectral changes are subtle enough that they are not well probed by a simple hardness ratio. A detailed multi-epoch spectral analysis of the high S/N data available for \ngc\ will be presented in a follow-up paper (Fuerst et al. in preparation). Ultimately, we conclude that despite some of the orbital scenarios seeming unlikely, the question regarding the nature of this periodicity currently remains open. Finally, should \ngc\ be exhibiting dipping behaviour in addition to its periodic variability, this would be of particular interest. X-ray dips have only been reported from a handful of other ULXs to date, notably NGC\,55 ULX (\citealt{Stobbart04}), NGC\,5408 X-1 (\citealt{Pasham13_5408, Grise13}), a source in M72 (\citealt{Lin13}) and an ultrasoft source in M51 (\citealt{Urquhart16}). Analogy with the dipping phenomenon seen in Galactic X-ray binaries would imply we are viewing \ngc\ at a high inclination (\eg\ \citealt{DiazTrigo06}). This would naively appear to be at odds with the expectation from the inclination-based framework proposed to explain ULXs with soft and hard spectra (as observed below 10\,keV) within a super-Eddington framework, discussed in \cite{Sutton13uls} and \cite{Middleton15}. This assumes the accretion flow has a large scale height, as expected for super-Eddington accretion (\eg\ \citealt{Poutanen07}), resulting in an inclination dependence for the observed X-ray spectrum. ULXs with soft spectra (as seen from NGC\,55 ULX, NGC\,5408 X-1 and the M51 source) are viewed at high inclination, such that the lower temperature regions of the outer accretion flow dominate the observed emission and the hotter regions of the inner flow are obscured. ULXs with hard spectra are viewed more face on, with the hotter regions being visible. 
\ngc\ has a hard spectrum (classified as a `hard ultraluminous state' by \citealt{Sutton13uls}), and so would be expected to be viewed at a low inclination. However, it may still be possible to reconcile dipping and a hard spectrum within this framework if our viewing angle lies close to the opening angle of the accretion flow, such that we are viewing the innermost regions through the uppermost atmosphere of the outer regions, which super-Eddington simulations predict to be dominated by a clumpy outflow (\citealt{Takeuchi13}). If \ngc\ is a standard $\sim$10\,\msun\ stellar remnant, its extreme luminosity would suggest the opening funnel for the accretion flow would likely be quite narrow, and so the wind could well be close to our line of sight. Continued monitoring of this remarkable source will test the stability of this period over a longer baseline, helping to distinguish between orbital and super-orbital scenarios, and may identify additional potential dips for further investigation.
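As a back-of-the-envelope check on the orbital interpretation discussed above, the quoted donor density follows directly from the period via the period--mean-density relation for a Roche-lobe-filling star (Faulkner et al. 1972). The coefficient used below is a commonly quoted approximation and varies somewhat with the assumed mass ratio:

```python
# Period--mean-density relation for a Roche-lobe-filling donor
# (Faulkner, Flannery & Paczynski 1972): rho ~ 110 / P_hr^2 in g cm^-3.
# The numerical coefficient is approximate (mildly mass-ratio dependent).
P_days = 78.1
P_hr = P_days * 24.0
rho = 110.0 / P_hr ** 2        # g cm^-3
print(f"mean donor density ~ {rho:.1e} g cm^-3")
```

This reproduces the $\rho\sim3\times10^{-5}$\,\gpcm3\ quoted in the text, the value that points to an M giant or F supergiant counterpart.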
Discs of dusty debris around main-sequence stars indicate fragmentation of orbiting planetesimals, and for a few A-type stars, a gas component is also seen that may come from collisionally-released volatiles. Here we find the sixth example of a CO-hosting disc, around the $\sim$30 Myr-old A0 star HD 32297. Two more of these CO-hosting stars, HD 21997 and 49 Cet, have been imaged in dust with SCUBA-2 within the SONS project. A census of 27 A-type debris hosts within 125 pc now shows 7/16 detections of carbon-bearing gas within the 5-50 Myr epoch, with no detections in 11 older systems. Such a prolonged period of high fragmentation rates corresponds quite well to the epoch when most of the Earth was assembled from planetesimal collisions. Recent models propose that collisional products can be spatially asymmetric if they originate at one location in the disc, with CO particularly exhibiting this behaviour as it can photodissociate in less than an orbital period. Of the six CO-hosting systems, only $\beta$ Pic is in clear support of this hypothesis. However, radiative transfer modelling with the ProDiMo code shows that the CO is also hard to explain in a proto-planetary disc context.
Dusty debris around main-sequence stars results from collisions between rocky bodies. Timescales for the particles to fall into the star or grind down to sizes small enough to be blown out by radiation pressure are short compared to stellar lifetimes, so larger progenitor planetesimals must be present. In most cases, these are found to be in belts located at tens of AU, where rock/ice comet-like compositions are probable, akin to solar system cometary material. Although collisions should cause the ices to sublimate into gases, this component in the belts is difficult to detect, as molecules are quickly photo-dissociated. For nearby debris-hosting stars, e.g. Fomalhaut, improved limits from deep observations are important for comparing the chemistry to solar system comets (Matr\`{a} et al. 2015). There are five A-type debris hosts where the molecule carbon monoxide (CO) has been detected, via millimetre rotational transitions. This species photo-dissociates on timescales of hundreds of years even under interstellar radiation (e.g. Zuckerman \& Song 2012). When illuminated by A stars, this timescale can be shortened to less than orbital periods at tens of AU (e.g. Jackson et al. 2014). If CO is preferentially produced at one location in the disc, then the CO distribution would be `one-sided' around the star, and this effect has been imaged recently in an ALMA study of $\beta$ Pic (Dent et al. 2014; Matr\`{a} et al., in prep.). Other gas phases can also be present, with CII and OI lines in the far-infrared (Dent et al. 2012) potentially tracing photo-dissociated CO (Roberge et al. 2013). Three of the CO-hosting systems also appear to host `falling evaporating bodies', with transient red-shifted Ca-absorption features seen towards $\beta$ Pic and 49 Cet (Montgomery \& Welsh 2012), and Na absorption identified towards HD 32297 (Redfield 2007).
These features are consistent with volatiles originating from ongoing cometary breakups, an idea which is now being explored by models (e.g. Kral et al. 2016). Here we consider debris-hosting A-stars within 125 pc (parallax $\geq$8 mas) that have been searched for CO. This distance limit helps to exclude stars outside the Local Bubble where interstellar CO may be a strong contaminant. We report the sixth detection of CO, around the approximately 30 Myr-old A0 star HD 32297. In this case, we successfully used the presence of weak CII emission (Donaldson et al. 2013) as a predictor for the presence of CO. We have also followed up the CO detections of 49 Ceti and HD 21997 (Zuckerman et al. 1995; Mo\'{o}r et al. 2011) with dust continuum imaging at 450 and 850 micron wavelengths. These continuum data are part of the JCMT Legacy Project SONS (SCUBA-2 Survey of Nearby Stars), described by Pani\'{c} et al. (2013). The gas-plus-dust systems are also tested here against model predictions that the short-lived CO component could be more spatially asymmetric than the dust. \begin{figure} \label{fig1} \includegraphics[width=0.5\textwidth,angle=0,trim=0 3cm 0 3cm,clip]{49ceti_test.eps} \caption{SCUBA-2 results for 49 Cet, on signal-to-noise ratio scales. The peak flux at 450 microns is 74 $\pm$ 14 mJy/beam, and the integrated flux within a 40 arcsec diameter aperture is 125 $\pm$ 10 mJy. The dashed contours show the 850~$\umu$m SNR (peak = 8.3), overlaid on the 450~$\umu$m colour-scale. The integrated flux at 850~$\umu$m is 12.1 $\pm$ 2.0 mJy. The secondary peak adjacent to the north (top) side of the disc may be due to a background dusty galaxy. The stellar position coincides with the 450~$\umu$m flux-peak within typical pointing drifts ($\la2''$).
} \end{figure} \begin{figure} \label{fig2} \includegraphics[width=0.5\textwidth,angle=0,trim=0 3cm 0 3cm,clip]{hd21997-850-new.eps} \caption{SCUBA-2 850 micron image for HD 21997, with contours at 3,4,5,6,7 sigma levels and a peak of 7.9 $\pm$ 1.1 mJy/beam. The integrated flux is 10.7 $\pm$ 1.5 mJy. } \end{figure}
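The claim that CO can photodissociate in less than an orbital period around A-type stars is easy to illustrate with Kepler's third law. The numbers below are assumptions for illustration only (a 2.5 M$_{\odot}$ primary and an order-of-magnitude $\sim$120 yr interstellar-field dissociation time), not values from this paper:

```python
import math

def kepler_period(a_au, m_sun):
    """Orbital period in years from Kepler's third law, P^2 = a^3 / M."""
    return math.sqrt(a_au ** 3 / m_sun)

M_STAR = 2.5      # Msun, assumed for a typical A-type primary
T_PD = 120.0      # yr, rough interstellar-field CO photodissociation time

for a in (10, 30, 50, 100):
    P = kepler_period(a, M_STAR)
    print(f"a = {a:3d} AU: P = {P:6.0f} yr, t_pd/P = {T_PD / P:.2f}")
```

Beyond a few tens of AU the assumed dissociation time is already a fraction of an orbit, and direct irradiation by the A star shortens it further, so freshly released CO would not survive a full orbit.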
With the discovery of CO molecules around HD 32297, there are now six A-type main-sequence stars where dusty debris is known to be accompanied by carbon monoxide gas. In the age bracket spanning 5-50 Myr, nearly half of the A-stars with debris in fact exhibit carbon-bearing gas. This suggests a prolonged active epoch where giant collisions release a burst of gas from frozen volatiles. The time period is similar to that taken to assemble the Earth from colliding planetesimals (e.g. Jacobson \& Walsh 2015), so further study may yield clues to how volatiles are folded into rocky planets and their atmospheres. As the photo-dissociation time for CO around A-type stars is short, the gas profiles will tend to be asymmetric if debris originates from one spatial location, with molecules mainly on one side of the star. This is supported for 1-2 systems where the gas asymmetry appears to exceed that of the dust. Models can explain a high incidence of CO detection when the molecules are short-lived if, for example, colliding debris repeatedly passes through the original impact point. However, at least two of the discs are optically thick in CO emission, suggesting the gas may not be collisional, but instead represent a very prolonged proto-planetary disc phase.
We present a new exploration of the cosmic star-formation history and dust obscuration in massive galaxies at redshifts $0.5<z<6$. We utilize the deepest 450 and 850\micron\ imaging from SCUBA-2 CLS, covering 230~arcmin$^2$ in the AEGIS, COSMOS and UDS fields, together with 100--250\micron\ imaging from \Herschel. We demonstrate the capability of the \tphot\ deconfusion code to reach below the confusion limit, using multi-wavelength prior catalogues from CANDELS/3D-HST. By combining IR and UV data, we measure the relationship between total star-formation rate (SFR) and stellar mass up to $z\sim5$, indicating that UV-derived dust corrections underestimate the SFR in massive galaxies. We investigate the relationship between obscuration and the UV slope (the IRX--$\beta$ relation) in our sample, which is similar to that of low-redshift starburst galaxies, although it deviates at high stellar masses. Our data provide new measurements of the total SFR density (SFRD) in $\Mstar>10^{10}\Msun$ galaxies at $0.5<z<6$. This is dominated by obscured star formation by a factor of $>10$. One third of this is accounted for by 450\micron-detected sources, while one fifth is attributed to UV-luminous sources (brighter than $\Luv^\ast$), although even these are largely obscured. By extrapolating our results to include all stellar masses, we estimate a total SFRD that is in good agreement with previous results from IR and UV data at $z\lesssim3$, and from UV-only data at $z\sim5$. The cosmic star-formation history undergoes a transition at $z\sim3-4$, as predominantly unobscured growth in the early Universe is overtaken by obscured star formation, driven by the build-up of the most massive galaxies during the peak of cosmic assembly.
\label{sec:intro} A key element in understanding the evolution of galaxies and the build-up of the present-day population is the cosmic star-formation history, i.e. the overall comoving volume-density of the star-formation rate (SFR) within galaxies throughout the Universe, measured as a function of look-back time. This has been observationally determined from ultraviolet (UV) emission from star-forming galaxies up to $z\sim9$, within the first few hundred Myr of the Universe \citep{Bouwens2007,Reddy2008,Cucciati2012,McLure2013,Duncan2014,Bouwens2015,Mashian2015,McLeod2015,McLeod2016,Parsa2015}. {It is well known, however, that over most of cosmic history} the great majority of the UV radiation from young stars is absorbed by dust within galaxies and is thermally re-radiated in the far-infrared (FIR; \citealp{Desert1990}). {As a result, rest-frame UV observations must either be corrected for these extinction losses, or supplemented with observations in the rest-frame FIR, to recover the total SFR.} {The cosmic star-formation history can also be explored through measurements of the average specific SFR (SSFR~=~SFR/stellar mass) of galaxy samples. This is known to increase with look-back time, but may plateau or rise more slowly at $z>3$ \citep{Madau2014}. Meanwhile, stellar mass density is constrained out to high redshifts with smaller systematic errors due to the reduced dust extinction effects in the rest-frame near-IR \citep{Ilbert2013, Muzzin2013,Grazian2015}. 
This must approximately trace the total SFR integrated over all masses and times up to a given look-back time, so we can expect to see the rate of growth of stellar mass trace the cosmic SFR evolution \citep{Wilkins2008, Madau2014}.} {While the large body of observational work appears to have converged on a consistent picture of the cosmic star-formation history \citep{Behroozi2013b,Madau2014}, there remain significant discrepancies with the most up-to-date hydrodynamical simulations and semi-analytic models. Both flavours of physical models are unable to consistently explain the evolution of the cosmic SFR density (SFRD), the history of stellar-mass assembly, and the average SFRs of individual galaxies as a function of their stellar mass; suggesting that either there must be systematic errors in the observational results, or the models do not adequately describe properties such as the dust attenuation curve \citep{Somerville2012,Kobayashi2013,Genel2014,Furlong2015, Lacey2016}. } {The obscuration of star formation by dust has been studied extensively using \textit{Herschel} and \textit{Spitzer} FIR data \citep{Buat2012, Reddy2012a, Burgarella2013a, Heinis2014, Price2014}. It has been shown to correlate with stellar mass and SFR in galaxy samples spanning a wide range of redshifts \citep{Reddy2010,Garn2010,Hilton2012a,Heinis2014,Price2014,Whitaker2014,Pannella2015}. However, these scaling relations contain significant scatter and we must seek more physically meaningful observations to allow us to predict the dust obscuration in the absence of direct FIR measurements. The empirical relationship which is usually employed for this task is the so-called ``IRX--$\beta$'' relation, which connects the obscuration fraction (FIR/UV luminosity ratio; IRX) and the observed slope ($\beta$) of the UV spectral energy distribution (SED). 
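For concreteness, the canonical low-redshift starburst calibration of this relation (Meurer et al. 1999) maps the UV slope to an attenuation, $A_{1600}=4.43+1.99\beta$; under a simple energy-balance argument (absorbed UV fully re-radiated in the FIR, bolometric-correction factors neglected) this implies an IRX value. A minimal sketch, not code from this work:

```python
def irx_meurer(beta):
    """Meurer et al. (1999) starburst calibration: A_1600 = 4.43 + 1.99*beta,
    converted to IRX = L_IR/L_UV via simple energy balance,
    IRX ~ 10**(0.4*A_1600) - 1 (bolometric corrections neglected)."""
    a_1600 = 4.43 + 1.99 * beta
    return 10.0 ** (0.4 * a_1600) - 1.0

for beta in (-2.0, -1.0, 0.0):
    print(f"beta = {beta:+.1f}: IRX ~ {irx_meurer(beta):.1f}")
```

The steep dependence on $\beta$ is why modest changes in the attenuation curve or intrinsic slope translate into large systematic errors in dust-corrected SFRs.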
The connection is motivated by the principle that the same dust is responsible for the reddening of the intrinsic SED and for the reprocessing of the extincted UV flux into the FIR. The exact relationship is calibrated empirically on low-redshift samples \citep{Meurer1999, Calzetti2000, Kong2004, Boissier2007, Munoz-Mateos2009, Overzier2011, Hao2011, Takeuchi2012}, but is by no means fundamental. It depends on the UV attenuation curve and the intrinsic UV spectral slope (hence may be sensitive to metallicity, star-formation history, and the initial mass function), and it relies on the basic assumption of a simple dust screen obscuring all of the UV emission isotropically. Low-redshift calibrations of the IRX--$\beta$ relation \citep[such as the starburst calibration of][]{Meurer1999} are important for the study of star formation at high redshifts, where rest-frame UV data are frequently the only available tracer in large samples; yet it remains unclear whether these assumptions are valid across the full star-forming galaxy population at high redshifts \citep{Gonzalez-Perez2013, Castellano2014, Heinis2014, Price2014,Yajima2014, Capak2015, Coppin2015, Pannella2015, Zeimann2015, Talia2015, Watson2015}. } The chief problem {to overcome} is the difficulty of measuring accurate rest-frame FIR emission in representative samples of star-forming galaxies at $z>3$, as a result of the high confusion noise in sub-millimetre (submm) surveys with moderate-sized, single-dish telescopes such as \Herschel\ \citep{Reddy2012a,Burgarella2013a,Gruppioni2013,Heinis2014}. Interferometric imaging surveys of deep fields (e.g. with ALMA) offer a high-resolution alternative to single-dish surveys \citep{Bouwens2016, Kohno2016, Hatsukade2016, Dunlop2016}, but these only probe relatively small volumes which have limited statistical power. 
Apart from biased samples of bright Lyman-alpha emitters and submm-selected starbursts (which are rare and therefore unrepresentative of the overall star-forming-galaxy population), the only significant source of information on the cosmic SFR at early times comes from samples of photometrically selected Lyman-break galaxies (LBGs). These samples can be used to measure the rest-frame UV luminosity function (LF), from which the total {SFRD} can be extrapolated by integrating the LF model and applying (potentially large) corrections for dust obscuration. The obscuration of star formation in $z\gtrsim3$ LBGs has been studied in the submm via stacking of SCUBA-2 and \Herschel\ data by \citet{Coppin2015}; {AzTEC and \Herschel\ data by \citet{Alvarez-Marquez2016};} and individually using ALMA and the Plateau de Bure Interferometer \citep[e.g.][and see also \citealp{Chapman2000, Peacock2000, Stanway2010, Davies2012, Davies2013}]{Schaerer2015,Capak2015,Bouwens2016}, but the question of whether low-redshift calibrations hold true {for all star-forming galaxies at high redshifts remains open \citep[see for example the recent discussion by][]{Alvarez-Marquez2016}.} It is clear {from our knowledge of the UV and FIR luminosity densities \citep[e.g.][]{Burgarella2013a,Madau2014}} that the vast majority of the SFR in the Universe is obscured by dust, and this fraction appears to increase from $z=0$ to $z=1$. However, the behaviour at higher redshifts is uncertain \citep{Burgarella2013a}. There are systematic uncertainties in the true behaviour of the dust-obscured (hence total) SFRD at high redshifts due to uncertainties about the nature of star-forming galaxies at high redshifts. 
Typical star-forming galaxies at high redshifts have higher SSFRs than their counterparts at low redshifts, so it is unclear whether they resemble their low-redshift counterparts (which might be implied by the existence of a common mass-SFR relation known as the ``main sequence''; \citealt{Noeske2007}) or whether they are more similar to high-SSFR galaxies at low redshifts (because they are similarly rich in dense gas and have a clumpy mass distribution; \citealt{Price2014}). In summary we need to improve our knowledge of the obscured SFRD from FIR observations at $z>3$. Currently we are limited by issues including sample bias {(towards unrepresentative bright objects selected in the FIR, or more unobscured objects selected in the UV), and uncertainties due to scatter within the population when stacking fainter objects (such as those selected by stellar mass).} In the FIR, the major obstacle is the low resolution of single-dish FIR/submm surveys (typically 15--35 arcsec), which limits our ability to detect and identify individual sources above the confusion limit, and also hampers stacking below the confusion limit due to the difficulty of separating the emission from heavily blended positions. In this paper we attempt to improve this situation with a combination of three key ingredients: (i) deep, high resolution submm imaging of blank fields with JCMT/SCUBA-2 \citep{Holland2013}; (ii) rich multi-wavelength catalogues containing positions, redshift information and UV-to-mid-IR SEDs to support the submm data; (iii) the latest deconfusion techniques developed within the ASTRODEEP consortium, which allow us to maximize the useful information available from these combined data sets. In Section~\ref{sec:data} we briefly describe the submm imaging data and the sample used in this work. 
In Section~\ref{sec:methods} we explain how \tphot\ is applied to measure deconfused submm photometry for the sample, and in Section~\ref{sec:analysis} we describe and validate the stacking technique. Section~\ref{sec:results} presents the results and discussion in the context of previous literature. Throughout this paper we assume a flat $\Lambda$CDM cosmology with $\Omega_M=0.3$, $h=H_0/100$~km\,s$^{-1}$\,Mpc$^{-1}=0.7$. All magnitudes are in the AB system \citep{Oke1974,Oke1983} and we assume the \citet{Kroupa2003} initial mass function (IMF) throughout, unless otherwise stated.
In this paper we have demonstrated how statistical information about faint, high-redshift source populations can be extracted from confused, single-dish, submm surveys (from S2CLS) with a combination of deep, value-added positional prior catalogues (from CANDELS/3D-HST) and the computational deconfusion technique offered by \tphot\ ({described in} Sections~\ref{sec:methods}--\ref{sec:analysis}). {We applied these techniques to 230~arcmin$^2$ of the deepest 450-\micron\ imaging over the AEGIS, COSMOS and UDS CANDELS fields, in order to measure the obscured SFRs of stellar-mass-selected galaxies at $0.5<z<6$. We used additional data at 100--850\micron\ to constrain the FIR SED, which we modelled with a single average template at all redshifts. We cannot exclude the possibility of evolution in the SED shape, but due to our use of submm data close to the SED peak at all redshifts, the resulting systematic uncertainties are small and are fully accounted for in our errors (see Sections~\ref{sec:sedfit}, \ref{sec:sfrcalib}). We obtained the following main results: } \begin{enumerate} \item We detect 165 galaxies at 450\micron\ with $S/N>3$ at $S_{450}\gtrsim 3$~mJy, similar to published 450-\micron\ samples from SCUBA-2. The detected sources have a broad redshift distribution at $0<z<4$ (median $z=1.68$), although we also detect four sources at $4<z<6$. They span a wide range of stellar masses (typically $9.5<\log(\Mstar/\Msun)<11.5$) and UV luminosities (typically $-21<\MUV<-16$). This sample generally traces the highest SFRs at $z<4$, but {exhibits} a wide range of obscuration fractions, with $1<\log(\Lir/\Luv)<4$ (see Section~\ref{sec:sfr}). \item In the stacked results, total SFR (from IR+UV {data}) is strongly correlated with stellar mass at all redshifts, while the raw UV luminosity is a relatively weak indicator of total SFR, especially at $z\gtrsim2$ (Section~\ref{sec:sfr}). 
\item Instead, UV luminosity primarily indicates the level of SFR obscuration {at a given stellar mass}. {The mean obscuration is strongly correlated with both stellar mass and UV luminosity, but does not appear to evolve significantly with redshift at a given \Mstar\ and \Luv, at least in our stellar-mass-selected sample (Sections~\ref{sec:sfr}, \ref{sec:IRXMstarMuv}).} {When restricting the analysis to $UVJ$-selected star-forming galaxies, we} find that the obscuration can be determined from the stellar mass and UV luminosity as \mbox{$\log(\Lir/\Luv) = a+b(\Luv/10^9\Lsun)+c(\Mstar/10^9\Msun)$}, where \mbox{$a=5.9\pm1.8$}, \mbox{$b=-0.56\pm0.07$}, \mbox{$c=0.70\pm0.07$}. \item The average UV+IR SSFRs of $UVJ$-selected star-forming galaxies rise with redshift, and the evolution is steeper for more massive galaxies, indicating that, on average, they stop forming stars earlier. Massive galaxies ($\Mstar>10^{10}\Msun$) have average SSFRs $\sim1-2$~Gyr$^{-1}$ at $z\sim2-4$, and $\sim3-4$~Gyr$^{-1}$ at $z\sim5$, in agreement with the most recent studies from both SCUBA-2 and ALMA. We fit the binned data with a bivariate model: $\log(\text{SSFR}/\text{yr}^{-1})=a(z)[\log(\Mstar/\Msun)-10.5]+b(z)$. The evolution of the slope of the SSFR($\Mstar$) {relation} is given by $a(z)=(-0.64\pm0.19) + (0.76\pm0.45)\log(1+z)$, while the evolution of the normalization (at $\log(\Mstar/\Msun)=10.5$) is given by $b(z)=(-9.57\pm0.11) + (1.59\pm0.24)\log(1+z)$ (Section~\ref{sec:ssfr}). \item Dust-corrected SFRs from the UV luminosity and spectral slope ($\beta$) can underestimate the total SFR and overall predict a weaker evolution in the average SSFR as a function of redshift (Section~\ref{sec:uvcorr}). \item By stacking $UVJ$-selected star-forming galaxies, we find that massive galaxies ($\Mstar>10^{11}\Msun$) tend to have higher obscuration for a given $\beta$ in comparison with lower-mass galaxies.
It is also possible that the IRX--$\beta$ relation evolves with redshift, although this result could be influenced by contamination of our $z<1.5$ bins with passive galaxies (Section~\ref{sec:irxbeta}). \item Our results provide homogeneous measurements of the obscured SFR density (SFRD) in a highly-complete sample of massive galaxies ($\Mstar>10^{10}\Msun$) over the redshift range $0.5<z<6$, extending beyond what has been possible in previous studies using \Herschel\ data. We show that obscured star formation dominates the total SFRD in massive galaxies at all redshifts, and exceeds unobscured star formation by a factor $>10$ (Section~\ref{sec:sfrd}). \item The FIR-detected sample, which is effectively flux-limited at $S_{450}>3$~mJy and samples the highest star-formation rates at all redshifts (SFR~$\gtrsim100\sfr$), accounts for approximately one third of the total SFRD over the redshift range $0.5<z<6$. \item The most UV-luminous massive galaxies, defined as those with $\MUV<\MUV^{\ast}$ and $\Mstar>10^{10}\Msun$, account for around one fifth of the total SFRD over the same redshift range, but even in these the majority of the SFRD is obscured at $z\lesssim3$. \item After correcting for the contributions from lower-mass galaxies, the full SFRD from UV+IR data is in good agreement with previous literature estimates both from UV+IR at $z\lesssim3$ and from UV-only data at $z\sim5$. This indicates that UV-selected samples with $\beta$ dust corrections are successful for calibrating total SFRD at the highest redshifts ($z>3$), in spite of variations in the IRX--$\beta$ relation and the lower SSFRs estimated from the dust-corrected UV alone. This is due to the increasing dominance of unobscured star formation at $z\gtrsim3$ (Section~\ref{sec:lmc}). 
\item When accounting for all stellar masses, the SFRD at $z\lesssim3$ remains dominated by obscured star formation, but at higher redshifts the total obscured and unobscured SFRD are equal, while the most UV-luminous galaxies are predominantly unobscured at $z\gtrsim3$. The SFRD contribution from the most UV-luminous and the FIR-detected galaxies are approximately equal at $z\sim$~2--3 when including all stellar masses. \item We conclude that the reason for this transition from an early Universe ($z>3$) dominated by unobscured star formation to a Universe dominated by obscured star formation at cosmic noon ($z\approx2$) is explained by the increasing contribution of massive galaxies as the high-mass end of the stellar mass function is built up at around $z\sim$~2--3. This is consistent with our observation that the obscuration of star formation at a fixed stellar mass is independent of redshift. \end{enumerate}
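The bivariate SSFR model in the summary above can be evaluated numerically. The sketch below is a minimal illustration, not the authors' code; note that the quoted normalization $b(z)$ only reproduces the stated $\sim$1--2 Gyr$^{-1}$ at $z\sim2$ if the fit is expressed in yr$^{-1}$, so the conversion to Gyr$^{-1}$ applied here is our assumption.

```python
import math

def ssfr_per_gyr(log_mstar, z):
    """Binned-fit bivariate SSFR model quoted in the conclusions:
    log(SSFR) = a(z)*(log10(Mstar/Msun) - 10.5) + b(z), with
    a(z) = -0.64 + 0.76*log10(1+z), b(z) = -9.57 + 1.59*log10(1+z).
    ASSUMPTION: the quoted b(z) values only match the stated
    ~1-2 Gyr^-1 at z~2-4 if the normalization is in yr^-1, so we
    convert yr^-1 -> Gyr^-1 (factor 1e9) here.
    """
    a = -0.64 + 0.76 * math.log10(1.0 + z)
    b = -9.57 + 1.59 * math.log10(1.0 + z)
    log_ssfr_per_yr = a * (log_mstar - 10.5) + b
    return 10.0 ** log_ssfr_per_yr * 1e9  # Gyr^-1

# A 10^10.5 Msun galaxy at z = 2 (the pivot mass, so only b(z) matters):
print(round(ssfr_per_gyr(10.5, 2.0), 2))  # prints 1.54
```

Under this reading, the model returns $\sim$1.5 Gyr$^{-1}$ at $z=2$ for $\log(\Mstar/\Msun)=10.5$, consistent with the range quoted in the text.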
16
7
1607.04283
1607
1607.06309_arXiv.txt
Twenty-six years after the major event of 1990, in early 2016 the puzzling symbiotic binary MWC 560 went into a new and even brighter outburst. We present our tight $B$$V$$R_{\rm C}$$I_{\rm C}$ photometric monitoring of MWC 560 (451 independent runs distributed over 357 different nights), covering the 2005-2016 interval, and the current outburst in particular. A striking feature of the 2016 outburst has been the suppression of the short-term chaotic variability during the rise toward maximum brightness, and its dominance afterward with an amplitude in excess of 0.5 mag. Similar to the 1990 event, when the object remained around maximum brightness for $\sim$6 months, MWC 560 was still around maximum three months after reaching it, when Solar conjunction prevented further observations of the current outburst. We place our observations into a long-term context by combining them with literature data to provide a complete 1928-2016 lightcurve. Several strong periodicities are found to modulate the optical photometry of MWC 560. A period of 1860 days regulates the occurrence of bright phases in the $B$$V$$R_{\rm C}$ bands (with exactly 5.0 cycles separating the 1990 and 2016 outbursts), while the peak brightness attained during bright phases seems to vary with a $\sim$9570 day cycle. A clean 331 day periodicity modulates the $I_{\rm C}$ lightcurve, where the emission from the M giant dominates, with a lightcurve strongly reminiscent of an ellipsoidal distortion plus irradiation from the hot companion. Pros and cons of 1860 and 331 days as the system orbital period are reviewed, pending a spectroscopic radial velocity orbit of the M giant to settle the question (provided the orbit is not oriented face-on).
\label{} MWC 560 was first noted in the Mt. Wilson Catalog of emission line objects by Merrill \& Burwell (1943) as a Be-type star with bright Balmer emission lines flanked, on the violet side, by wide and deep absorption lines. The presence of a cool giant, betrayed by strong TiO absorption bands visible in the red, was reported by Sanduleak \& Stephenson (1973), who classified the giant as M4 and confirmed the presence, next to the emission lines, of deep, blue-shifted absorptions. A short abstract by Bond et al. (1984) reported that in the early 1980s they had measured terminal velocities up to $-$3000 km~sec$^{-1}$ in the Balmer absorption components, that the absorption profiles were very complex and variable on timescales of one day, and that flickering with an amplitude of 0.2 mag on a timescale of a few minutes dominated high-speed photometry. Interestingly, Bond et al. mentioned that near H$\alpha$ the spectrum was dominated by the M giant, and no absorption component was seen. Compared to post-1990 spectra, in which the H$\alpha$ absorption is outstanding and the visibility of the M giant spectrum is confined to $\lambda$$\geq$6500/7000 \AA, this indicates that, at the time of the observations by Bond et al. (1984), the luminosity of the hot component was significantly lower than has been typical for the last 25 years. In keeping with the very slow pace at which MWC 560 was attracting interest, even the promising report by Bond et al. (1984) did little to lift MWC 560 out of anonymity, until all of a sudden, in early 1990, MWC 560 took center stage for a few months, with a flurry of IAU Circulars and near-daily reports conveying increasing excitement. It all started when Tomov (1990) reported on the huge complexity he had observed in the Balmer absorption profiles in his January 1990 high-resolution spectra, suggesting ``discrete jet-like ejections with a relatively high degree of collimation and with the direction of the ejection near to the line of sight''.
Feast \& Marang (1990) soon announced that optical photometry clearly indicated that the object was in outburst, immediately followed by Buckley et al. (1990), who reported terminal velocities up to $-$5000 km~sec$^{-1}$ for the Balmer absorptions, revised upward to $-$6500 km~sec$^{-1}$ by Szkody \& Mateo (1990) a few days later. By the time Maran \& Michalitsianos (1990) re-observed MWC 560 with the IUE satellite at the end of April 1990, the paroxysmal phase was ending. \begin{table*}[!Ht] \caption{Our 2005-2016 $B$$V$$R_{\rm C}$$I_{\rm C}$ photometric observations of MWC 560. The full table is available electronically via CDS; a small portion is shown here for guidance on its form and content.} \centering \includegraphics[width=135mm]{Table_1_short.ps} \label{tab1} \end{table*} \begin{figure*}[!Ht] \centering \includegraphics[angle=270,width=16.2cm]{Figure_1.ps} \caption{Typical optical spectrum for MWC 560. The dotted-dashed line marks the transmission profile of the Landolt $I_{\rm C}$ photometric band.} \label{fig1} \end{figure*} The nature of MWC 560 as outlined by the 1990 outburst was reviewed by Tomov et al. (1990), while the preceding photometric history, tracing back to 1928, was reconstructed from archival photographic plates by Luthardt (1991). A few observations reported by Doroshenko, Goranskij, \& Efimov (1993) extend the time coverage back to $\sim$1900. Tomov et al. (1992) and Michalitsianos et al. (1993) modelled MWC 560 with a non-variable M4 giant and an accreting (and probably magnetic) white dwarf (WD), surrounded by an (outer) accretion disk, and subject to a steady optically thick wind outflow and a complex pattern of mass ejection into discrete blobs. Stute \& Sahai (2009), however, deduced a non-magnetic WD from their X-ray observations.
A fit with a variable collimated outflow that originates at the surface of the accretion disk and that is accelerated with far greater efficiency than in normal stellar atmospheres was considered by Shore, Aufdenberg, \& Michalitsianos (1994). The collimated jet outflow was also investigated by Schmid et al. (2001). Strong flickering activity has been present at all epochs in the photometry of MWC 560, with an amplitude inversely correlated with the system brightness in the $U$ band (Tomov et al. 1996). A search for a spectroscopic counterpart of the photometric flickering was carried out by Tomov et al. (1995) on high resolution and high S/N spectra taken during 1993-1994, when the object was in a quiescent state. In spite of the very large amplitude of the photometric flickering recorded in simultaneous $B$$V$ observations ($\sim$0.35 mag in $B$, $\sim$0.20 mag in $V$), no significant change in intensity or profile (at the level of a few \%) was observed either for the emission lines or for their deep and wide absorption components. With MWC 560 at quiescence and little happening in its photometric and spectroscopic behavior, interest in the object progressively declined after the 1990 outburst. The situation may now reverse following our recent discovery (Munari et al. 2016) that MWC 560 is going through a new outburst phase, {\it brighter} than that of 1990. This has immediately prompted deep X-ray observations by Lucy et al. (2016), which found a dramatic enhancement in the soft ($<$2 keV) X-rays compared to the observations by Stute \& Sahai (2009) obtained in 2007, when MWC 560 was in quiescence. The report on the optical outburst also prompted VLA observations that detected radio emission from MWC 560 for the first time (Lucy, Weston, \& Sokoloski 2016), enhanced by at least an order of magnitude over a VLA non-detection on 2014 October 2, during the quiescence preceding the current outburst.
\begin{figure*}[!Ht] \centering \includegraphics[width=17.0cm]{Figure_2.ps} \caption{Light and color evolution of MWC 560 from our 2005-2016 $B$$V$$R_{\rm C}$$I_{\rm C}$ observations. The varied symbols identify different telescopes.} \label{fig2} \end{figure*} In this paper we present the results of our 2005-2016 $B$$V$$R_{\rm C}$$I_{\rm C}$ photometric monitoring of MWC 560, with an emphasis on the current outburst phase. This is placed into a historical context by combining our observations with existing data, providing an optical lightcurve of MWC 560 covering almost a century. Finally, a search for periodicities is carried out, taking particular advantage of our unique set of $I_{\rm C}$ data, which is dominated by the emission from the M giant.
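The periodicity search described above rests on phase-folding the photometry on trial periods. A minimal folding helper is sketched below; the reference epoch used in the example is a hypothetical JD near the 1990 outburst, not a value from this paper.

```python
def phase_fold(jd, period_days, epoch_jd):
    """Photometric phase in [0, 1) for a given Julian Date."""
    return ((jd - epoch_jd) / period_days) % 1.0

# Illustrative check: with P = 1860 d, two events separated by exactly
# five cycles (5 * 1860 = 9300 d) fall at the same phase, as claimed
# for the 1990 and 2016 outbursts.
epoch = 2447900.0           # hypothetical JD near the 1990 outburst
later = epoch + 5 * 1860.0  # 9300 days later
print(phase_fold(later, 1860.0, epoch))  # prints 0.0
```

The same helper folds the $I_{\rm C}$ data on the 331 day trial period; only the `period_days` argument changes.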
16
7
1607.06309
1607
1607.03230_arXiv.txt
The determination of atmospheric structure and molecular abundances of planetary atmospheres via spectroscopy involves direct comparisons between models and data. While varying in sophistication, most model-spectra comparisons fundamentally assume ``1D'' model physics. However, knowledge from general circulation models and of solar system planets suggests that planetary atmospheres are inherently ``3D'' in their structure and composition. We explore the potential biases resulting from standard ``1D'' assumptions within a Bayesian atmospheric retrieval framework. Specifically, we show how the assumption of a single 1-dimensional thermal profile can bias our interpretation of the thermal emission spectrum of a hot Jupiter atmosphere that is composed of two thermal profiles. We perform retrievals on spectra of unresolved model planets as observed with a combination of \textit{HST} WFC3+\textit{Spitzer} IRAC as well as \textit{JWST} under varying differences in the two thermal profiles. For WFC3+IRAC, there is a significantly biased estimate of the \methane\ abundance using a 1D model when the contrast is 80\%. For \textit{JWST}, two thermal profiles are required to adequately interpret the data and estimate the abundances when the contrast is greater than 40\%. \ccolor{We also apply this preliminary concept to the recent WFC3+IRAC phase curve data of the hot Jupiter WASP-43b. We see similar behavior as present in our simulated data: while the \water\ abundance determination is robust, \methane\, is artificially well-constrained to incorrect values under the 1D assumption.} Our work demonstrates the need to evaluate model assumptions in order to extract meaningful constraints from atmospheric spectra and motivates exploration of optimal observational setups.
\label{intro} Even a cursory view of images of solar system planets shows us that these planets have complex atmospheres. It is readily appreciated that not all latitudes and longitudes look alike. A view of Jupiter at 5 $\mu$m shows bright bands and spots, where, due to locally optically thin clouds, thermal emission can be seen from deeper, hotter atmospheric layers. Looking at Mars in visible light, we can often see locations obscured by thin cirrus clouds in the atmosphere, and at other locations we can see down to the surface. These different locations not only appear different to our eyes; the spectra of light that they reflect and emit also differ. When it is possible to resolve the disk of the planets under study, quite detailed levels of information can be determined: for instance, changing cloud properties with latitude, different atmospheric temperature-pressure (TP) profiles with solar zenith angle, and compositional differences in updrafts vs.~downdrafts. However, if a planet is tens of parsecs distant, there is no path to spatially resolving the visible hemisphere (with current technology). Observers probe the spectra reflected or emitted by the visible hemisphere, but there is generally little hope of assessing how diverse or uniform the visible hemisphere is. Typically, when comparing observations to the spectra from either self-consistent radiative-convective forward models \cp[e.g.][]{Burrows07c,Fortney08a,Marley12,Barman11} or data-driven retrievals \cp[e.g.][]{Madhu10,Line14}, the spectrum, or set of spectra, is generated to represent hemispheric average conditions. However, while the calculation of such a spectrum, and its comparison to data, is relatively straightforward, it has been unclear how strongly our inferences of TP profile structure, cloud optical depth, and chemical abundances depend on this important initial assumption.
Recent work on matching the spectra of some brown dwarfs and directly imaged planets points to problems with the homogeneous atmosphere assumption, with best-fit radiative-convective forward models coming from spectra generated from linear combinations of ``cloudy'' and ``clear'' atmospheres, or atmospheres with weighted areas of ``thick'' and ``thin'' clouds \cp{Skemer14,Buenzli14}. The variable nature of brown dwarf thermal emission, now well documented over several years via photometry \cp[e.g.,][]{Enoch03,Artigau09,Radigan14} and spectroscopy \cp{Buenzli14,Buenzli15}, also indicates inhomogeneity in the visible hemisphere, with emission that changes due to rotation and/or atmospheric dynamics \cp{Robinson14,Zhang14,Morley14b,zhou2016}. In the realm of retrievals, could a search through phase space for a best fit to a measured spectrum lead to well-constrained yet biased or incorrect constraints on atmospheric properties when we assume planet-wide average conditions? This seems like a real possibility, and one well worth investigating in a systematic way. With the advent of higher signal-to-noise spectroscopy from the ground \cp{Konopacky13} and the coming launch of the \textit{James Webb Space Telescope} (\textit{JWST}), which will deliver excellent spectra for many planets over a wide wavelength range, we aim to test the 1D planet-wide average assumption systematically. We furthermore want to determine whether, when the data quality is high enough, we can justify a more complex inhomogeneous model. Recently, \ct{Line15} investigated, for transmission spectra, how the signal of high atmospheric metallicity inferred under planet-wide average conditions can be mimicked by a uniform lower metallicity together with a high cloud over part of the planet's terminator. Our work here is on thermal emission and is entirely complementary. We take the first step in characterizing how a diverse visible hemisphere may impact atmospheric retrievals.
Our paper is organized as follows: Section \ref{method} describes the setup, retrieval approach, and methodology. In Section \ref{results}, we describe our findings. In Section \ref{w43b}, we present the application to WASP-43b. We discuss our results in Section \ref{discussion} and conclude with future expansions.
\label{discussion} The interpretation of exoplanet spectra is complex; the conclusions we draw about the composition, thermal structures, and other properties of exoplanet atmospheres strongly depend on our model assumptions. In this pilot study, we explored the biases in thermal structure and molecular abundances that result from the commonly used ``1D'' assumption in the interpretation of transiting exoplanet emission spectra. We generated spectra from a simple ``2D'' setup of a planetary hemisphere composed of two thermal profiles, representative of either a ``checkerboard'' hemisphere, which may physically correspond to a planet peppered with various convective cells, or a ``half-and-half'' planet, similar to simultaneously observing a hot day side and cooler night side. We then applied commonly used atmospheric retrieval tools under the assumption of a single 1D homogeneous hemisphere to one that is inherently ``2D''. Within this setup, we explored how the biases in the abundances and 1D thermal profile are influenced by varying degrees of ``contrast'' between the two TP profiles for two different observational situations. We found that, for current observational setups (\textit{HST} WFC3+\textit{Spitzer} IRAC), while the inclusion of a 2nd thermal profile is largely unjustified within a nested Bayesian hypothesis testing framework (e.g., the fits do not improve enough to justify the additional parameter), significant biases in the abundance may exist at large contrasts. In particular, we found that an artificially precise constraint on the methane abundance can be obtained when assuming a hemisphere composed of a single 1D thermal profile. For a representative \textit{JWST} observational scenario (1-11 microns, requiring the NIRISS, NIRCam, and MIRI instruments), we found strong evidence of a 2nd profile in all contrast cases.
While few molecular abundance biases appeared for the lowest contrast (0.2), significant biases exist in the water, carbon monoxide, and ammonia abundances for high contrast (0.8). We also found that the retrieval was able to accurately recover both TP profiles when they were included in the model. \ccolor{Conceptually, we can understand why the 1TP retrieval performs poorly in the case of large contrast by considering just the blackbody spectra of the day and the night sides. Because the night side flux is much lower, the averaged flux we observe is essentially half of the day side flux. This averaged spectrum is then not of a blackbody form. The 1TP approach can be thought of as an attempt to fit one blackbody to the averaged spectrum -- it cannot simultaneously fit both the peak location and the amplitude. An alternative way to fit the lowered flux, while still fitting the peak, is to change the emitting area; in our case that area is fixed, so this option is not available. The 1TP retrieval has to rely on the flexibility provided by tweaking the thermal profile and abundances. With a 2TP approach, we are able to halve one of the blackbodies in the same way the data are generated, and we can better characterize this simple day-night atmosphere.} As a practical real-world example, we tested the 1 TP vs. 2 TP profile retrievals on the first quarter phase, third quarter phase, and day side emission spectra of the hot Jupiter WASP-43b as observed with \textit{HST} WFC3 \citep{stevenson14} and \textit{Spitzer} IRAC (Stevenson et al., in prep). For the day side, the results are analogous to the low contrast synthetic cases. For the first quarter, we found, much like in our high contrast synthetic model scenarios, that a strong methane bias appears when assuming only a single 1D profile, but that the retrieved water abundance remains robust under the different assumptions.
The artificially strong methane constraint is driven by the requirement to fit the IRAC 3.6$\mu$m point given only a single TP profile to work with, whereas the water abundance constraint is driven primarily by the WFC3 data, which are less impacted by the assumption of one or two TP profiles. The inclusion of a 2nd TP profile in this particular scenario is justified at the moderate to strong 3.3 $\sigma$ level. It is prudent for us to note, however, that for WASP-43b vertical mixing could potentially reproduce our single-TP-scenario retrieved methane abundance ($\sim 10^{-5}$). The abundance of methane near the base of the single TP profile at typical CH$_4$-CO quench pressures of $\sim$10 bars \citep[e.g.,][1600 K]{Moses11,line2011} is a few $\times 10^{-5}$. So, in a sense, if we assume a single TP profile, we would arrive at the conclusion that the measured methane abundance is indicative of disequilibrium chemistry to a high degree of constraint (i.e., solar composition thermochemical equilibrium would have been ruled out at several sigma in this scenario). Instead, if we assume two TP profiles, the methane upper limit would be consistent with both pure thermochemical equilibrium at solar composition and solar composition with quenched methane. We are inclined to believe the latter scenario (two profiles) given our synthetic test cases and the fairly strong detection threshold for the 2nd TP profile. The broad methane upper limit permits both chemical situations. Furthermore, \citet{Kreidberg14} found only an upper limit on the methane abundance from the day side emission and transmission spectra of WASP-43b. Had disequilibrium methane been as present as it appeared here under the single TP assumption, we would have expected a similar, if not higher, degree of constraint on the methane abundance due to the slightly higher signal-to-noise of the feature during occultation.
\ccolor{This WASP-43b example clearly points out a degeneracy in the interpretation of the spectrum, non-equilibrium chemistry or not, which can only be lifted with a robust determination of additional TP profiles that comes from higher S/N spectra over a wider wavelength range.} For the third quarter, the posterior for methane remains constrained regardless of the retrieval set-up. Instead of a statement on the chemical processes present at this phase, we take this result to highlight future work that should be done to examine the effects of utilizing broadband photometric points and the consistency of retrievals for a full phase curve. \ccolor{
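The blackbody argument above (one Planck curve cannot fit the average of a hot day side and a cool night side) can be checked numerically. The sketch below uses illustrative temperatures of 1800 K and 1000 K (not values from the paper): the brightness temperature of the equal-area averaged spectrum differs by hundreds of K between 1 and 10 $\mu$m, so no single blackbody reproduces it.

```python
import math

H, C, K = 6.62607e-34, 2.99792e8, 1.38065e-23  # SI constants

def planck(lam, T):
    """Planck spectral radiance B_lambda (SI units)."""
    x = H * C / (lam * K * T)
    return 2.0 * H * C**2 / lam**5 / (math.exp(x) - 1.0)

def brightness_temp(lam, B):
    """Invert the Planck function: temperature of a blackbody
    with radiance B at wavelength lam."""
    return H * C / (lam * K) / math.log(1.0 + 2.0 * H * C**2 / (lam**5 * B))

t_day, t_night = 1800.0, 1000.0   # illustrative day/night temperatures (K)
for lam in (1e-6, 10e-6):         # 1 and 10 micron
    avg = 0.5 * (planck(lam, t_day) + planck(lam, t_night))  # equal-area average
    print(f"{lam*1e6:.0f} um: T_b = {brightness_temp(lam, avg):.0f} K")
```

With these inputs the averaged spectrum looks like a $\sim$1660 K blackbody at 1 $\mu$m but only a $\sim$1410 K one at 10 $\mu$m, which is the wavelength-dependent tension a single-TP retrieval must absorb by distorting the profile and abundances.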
16
7
1607.03230
1607
1607.03006_arXiv.txt
The \agile\ satellite discovered the transient source \agl\ in 2010, which triggered the study of the associated field, allowing for the discovery of the first Be/black hole binary system: \mwc. This binary was suggested to be the counterpart of \agl, but this is still not a firm association. In this work we explore the archival \agile\ and \fermi\ data in order to find more transient events originating in the field of \agl\ and address the possibility of linking them to the accretion/ejection processes of \mwc. We found a total of 9 other transient events with \agile\ compatible with the position of \agl, besides the one from 2010. We folded these events with the period of the binary system and did not find significant results that would allow us to associate the gamma-ray activity with any particular orbital phase. By stacking the 10 transient events we obtained a spectrum that extends between 100~MeV and 1~GeV, and we fitted it with a power law with photon index $\Gamma =2.3\pm0.2$. We searched the \fermi\ data in order to complement the gamma-ray information provided by \agile, but no significant results arose. In order to investigate this apparent contradiction between the two gamma-ray telescopes, we studied the exposure of the field of \agl\ in both instruments, finding significant differences. In particular, \agile\ exposed the field of \agl\ for a longer time and at a smaller off-axis angle. This fact, together with the energy-dependent sensitivity of both instruments and the soft spectrum found in the stacking analysis, might explain why \agile\ observed the transient events not seen by \fermi.
In July 2010 the \agile\ satellite detected a gamma-ray transient source, {\object AGL J2241+4454}, below the galactic plane \citep{Lucarelli10}, thus far from the strong diffuse Galactic emission. Within the \agile\ position error box there are only 4 prominent X-ray sources: {\object RX J2243.1+4441}, a radio quasar \citep{Brinkmann97}; \object{HD 215325}, an eclipsing binary of beta Lyr type \citep{kiraga2013}; \object{TYC 3226-1310-1}, a rotationally variable star \citep{kiraga2013}; and {\object MWC 656}, a massive Be star \citep{2007A&A...474..653V}. Neither the HD 215325 nor the TYC 3226-1310-1 system hosts the conditions for gamma-ray emission processes, such as an accreting companion or the collision of two strong stellar winds, which could accelerate particles up to relativistic energies and thus produce gamma rays. As we will show throughout the paper, we consider \mwc\ the most likely candidate to be the counterpart of the gamma-ray transient emission. \mwc\ was recently discovered to be the first binary system containing a black hole (BH) and a Be companion star \citep{Casares14}. This system was discovered thanks to its putative association with the transient gamma-ray source \agl\ \citep{Lucarelli10}, which triggered multiwavelength observations. This binary system is located at galactic coordinates $(l, b)= ($100\grp18, -12\grp40$)$ and shows a photometric modulation of 60.37 $\pm$ 0.04 days \citep{Williams10, Paredes-Fortuny12}, which suggested its binary nature, confirmed by \cite{Casares_optical_2012}. Phase zero, $\phi_{0}$, is defined at the optical maximum (53242.8 MJD) and the periastron occurs at $\phi=0.01 \pm 0.1 $ \citep{Casares14}. In the latter paper, it was found that the compact object is a BH with a mass between 3.8--6.9 $M_{\odot}$, and the Be star was reclassified as a B1.5-B2 III type star with a mass between 10--16 $M_{\odot}$.
X-ray follow-up observations revealed a faint X-ray source \citep{munaradrover14} compatible with the position of the Be star, allowing for the classification of this system as a high-mass X-ray binary (HMXB). The X-ray spectrum was fitted with a black-body plus power-law model. The total X-ray luminosity in the 0.3--5.5~keV band is $L_{\rm X}=(3.7\pm1.7)\times 10^{31}$ erg s$^{-1}$. The measured X-ray spectrum is dominated by the non-thermal component, with a non-thermal luminosity $L_{\rm pow}=(1.6^{+1.0}_{-0.9})\times 10^{31}$ erg s$^{-1} \equiv (3.1\pm2.3)\times 10^{-8} L_{\rm Edd}$ for the estimated BH mass. This very low X-ray luminosity is compatible with the binary system being in quiescence during the X-ray observations and is at the level of the faintest low-mass X-ray binaries (LMXBs) ever detected, such as A0620$-$00 \citep{Gallo2006} and XTE~J1118+480 \citep{Gallo2014}. The non-thermal component is interpreted in \cite{munaradrover14} as the contribution arising from the vicinity of the BH and is studied in the context of the accretion/ejection coupling seen in LMXBs \citep{Fender10}, and also in the context of the radio/X-ray luminosity correlation \citep{Corbel13, Gallo2014}. Recent VLA radio observations by \cite{Dzib15} showed that the binary system is also a weak radio source with variable emission. The peak of the radio flux density is $14.2\pm2.9~\mu$Jy for a single $\sim$2h observation taken on 2015 February 22, at orbital phase $\sim$0.49, while six subsequent radio observations carried out during 2015 at the same sensitivity did not show any radio signal. However, stacking these other six observations yielded a detection with a flux density of $3.7\pm1.4~\mu$Jy compatible with the position of \mwc. At TeV energies, \mwc\ was observed in 2012 and in 2013 with the MAGIC Telescopes. The 2013 observations were contemporaneous with the {\it XMM-Newton} observation by \cite{munaradrover14}.
According to X-ray and optical data, these observations were thus performed during an X-ray quiescent state of the binary system and yielded only upper limits to its possible very-high energy (VHE) emission at the level of $2.0\times10^{-12}$ cm$^{-2}$ s$^{-1}$ above 300~GeV \citep{aleksic2015}. X-ray binary (XRB) systems containing a Be star have been studied for a long time through radio and X-ray surveys. These systems are characterized by their variable X-ray emission and their strong radio flares. Out of 184 known BeXRB systems, X-ray pulsations are observed in 119 of them, confirming that the compact component must be a neutron star \citep{Ziolkowski14}. In the remaining BeXRBs, the nature of the compact object is still unknown, with no confirmed BH known with a Be companion. This is known as the missing Be/BH binary problem \citep{Belczynski07}, and the predicted number of existing systems of this kind at any time is a few tens in our Galaxy \citep{Grudzinska15}. In our case, the newly discovered \mwc\ system opens a new window in the study of this class of binaries. The missing Be/BH population could then possibly reveal itself through gamma-ray surveys more easily than with X-ray surveys and could have been subject to selection effects up to now. In this work we present new \agile\ and \fermi\ data analysis: we searched for persistent, transient and periodic emission at the position of \mwc. We also discuss possible counterparts of the gamma-ray events and the possible association of the transient gamma-ray emission found with \mwc\ in the context of the accretion/ejection scenario.
In July 2010 \agile\ detected the transient source \agl, which triggered the studies that revealed the first Be/BH binary system. Further searches in the \agile\ data revealed that 9 other flares from the same location occurred between 2008 and 2013 at energies above 100~MeV. By stacking all the detected events, a spectral characterization of the source has been possible, yielding a power-law fit with photon index $\Gamma=2.3\pm 0.2$ between 100~MeV and 3~GeV. The field around \agl\ does not contain many possible counterpart sources. In fact we consider only two: RX~J2243.1+4441, a radio galaxy with a possible FR-II type classification and thus less likely to be a gamma-ray emitter; and \mwc, the above-mentioned HMXB containing a Be star and a BH. We consider the latter the most suitable source to produce the observed gamma-ray emission, given its X-ray and radio characteristics. In order to carry out a more complete study, we confronted the \agile\ data with \fermi\ data, performing the same study with both telescopes. However, the \fermi\ data do not show evidence of the observed gamma-ray emission. Nevertheless, we think that this might be due to differences in the way the two telescopes observe, as well as to their different energy-dependent sensitivities. Simultaneous radio-to-gamma-ray data with wide temporal coverage are needed to unveil the identity of \agl.
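The soft $\Gamma=2.3$ power law found in the stacking analysis concentrates most photons at the lowest energies, which supports the exposure and sensitivity argument above. The sketch below integrates $dN/dE \propto E^{-\Gamma}$ analytically; the 100~MeV lower bound is from the text, while extending the spectrum to high energy with no cutoff is our simplifying assumption.

```python
def powerlaw_photon_fraction(gamma, e_lo, e_mid, e_hi):
    """Fraction of photons in [e_lo, e_mid] out of [e_lo, e_hi]
    for a photon spectrum dN/dE ~ E^-gamma (gamma != 1)."""
    def integral(a, b):
        p = 1.0 - gamma
        return (b**p - a**p) / p
    return integral(e_lo, e_mid) / integral(e_lo, e_hi)

# With Gamma = 2.3, the bulk of photons above 100 MeV fall below 1 GeV
# (energies in GeV; upper bound of 10 TeV stands in for "no cutoff"):
frac = powerlaw_photon_fraction(2.3, 0.1, 1.0, 1e4)
print(f"{frac:.2f}")  # prints 0.95
```

About 95\% of the photons lie below 1~GeV for this index, i.e. exactly where the stacked \agile\ spectrum is detected, so an instrument with deeper exposure at these energies is favored.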
16
7
1607.03006
1607
1607.00005_arXiv.txt
In diffuse molecular clouds, possible precursors of star-forming clouds, the effect of the magnetic field is unclear. In this work we compare the orientations of filamentary structures in the Polaris Flare, as seen through dust emission by \textit{Herschel}, to the plane-of-the-sky magnetic field orientation ($\rm B_{pos}$) as revealed by stellar optical polarimetry with RoboPol. Dust structures in this translucent cloud show a strong preference for alignment with $\rm B_{pos}$: 70\% of field orientations are consistent with those of the filaments (within 30$^\circ$). We explore the spatial variation of the relative orientations and find it to be uncorrelated with the dust emission intensity and correlated with the dispersion of polarization angles. Concentrating on the area around the highest column density filament, and on the region with the most uniform field, we infer the $\rm B_{pos}$ strength to be 24 $-$ 120 $\mu$G. Assuming that the magnetic field can be decomposed into a turbulent and an ordered component, we find a turbulent-to-ordered ratio of 0.2 $-$ 0.8, implying that the magnetic field is dynamically important, at least in these two areas. We discuss implications for the 3D field properties, as well as for the distance estimate of the cloud.
The structure of interstellar clouds is highly complex, characterized by the existence of elongations, or filaments, of various scales \citep[e.g.][]{myers2009}. The Gould Belt Survey conducted by the \textit{Herschel} space observatory captured the morphologies of the nearby molecular clouds with unprecedented sensitivity and detail, allowing the study of filamentary structures to advance \citep[e.g.][]{andre2010}. A better understanding of filament properties and their relation to their environments could provide clues as to how clouds proceed to form stars. To this end, important questions to answer are whether the magnetic field interacts with filaments and how this interaction takes place. Its role in the various stages and environments of star formation is hotly debated. In simulations of super-Alfv\'{e}nic turbulence, magnetic fields are tangled due to the gas flow \citep[e.g.][]{ostriker, falceta}. In such models, filaments are formed by shock interactions \citep[][]{heitsch2001, padoan2001} and the magnetic field within them is found to lie along their spines \citep{heitsch2001, ostriker, falceta}. In studies of sub/trans-Alfv\'{e}nic turbulence, where the magnetic field is dynamically important, filament orientations with respect to the large scale ordered field, depend on whether gravity is important. In simulations where gravity is not taken into account \citep[e.g.][]{falceta} or structures are gravitationally unbound \citep[][]{soler2013}, filaments are parallel to the magnetic field. On the other hand, self-gravitating elongated structures are perpendicular to the magnetic field \citep{mouschovias1976, nakamura2008, soler2013}. Both configurations are the result of the magnetic force facilitating flows along field lines. 
Finally, if the magnetic field surrounding a filament has a helical configuration \citep{fiege} the relative orientation of the two as projected on the plane of the sky can have any value, depending on projection, curvature and whether the field is mostly poloidal or toroidal \citep{matthews2001}. The relation between cloud structure and the magnetic field was highlighted early on by observational studies \citep[e.g. in the Pleiades,][]{hall1955}. On cloud scales, \cite{li2013} found that the distribution of relative orientations of elongated clouds and the magnetic field (both projected on the plane of the sky) is bimodal: clouds tend to be either parallel or perpendicular (in projection) to the magnetic field. In a series of papers the \textit{Planck} collaboration compared the magnetic field to ISM structure across a range of hydrogen column densities ($\rm N_H$). \cite{planck32} considered the orientation of structures in the diffuse ISM in the range of $\rm N_H\sim 10^{20} - 10^{22}\,cm^{-2}$ and found significant alignment with the plane-of-the-sky magnetic field ($\rm B_{pos}$). \cite{planck35} found that in their sample of 10 nearby clouds, substructure at high column density tends to be perpendicular to the magnetic field, whereas at low column density there is a tendency for alignment. Studies of optical and NIR polarization, tracing $\rm B_{pos}$ in cloud envelopes, show that dense filaments within star-forming molecular clouds tend to be perpendicular to the magnetic field \citep[][]{pereyra2004, alves2008, chapman2011, sugitani2011}. On the other hand, there are diffuse linear structures, termed \textit{striations}, that share a common smoothly varying orientation and are situated either in the outskirts of clouds \citep{goldsmith2008, deoliveira} or near dense filaments \citep{palmeirim}. These structures show alignment with $\rm B_{pos}$ \citep[][]{vandenbergh, chapman2011, palmeirim, deoliveira, malinen2015}.
The extremely well-sampled data of \cite{franco2015} in Lupus I show that $\rm B_{pos}$ is perpendicular to the cloud's main axis but parallel to neighbouring diffuse gas. There are, however, exceptions to this trend \citep[e.g. L1506 in Taurus,][]{goodman1990}. While the large-scale magnetic field has been mapped in many dense molecular clouds, little is known about the field in translucent molecular clouds. In this paper we investigate the relation of the magnetic field to the gas in the Polaris Flare, a high-latitude diffuse cloud. This molecular cloud extends above the galactic plane and is at an estimated distance between 140 and 240 pc, although this is debated \citep[e.g.][]{zagury1999, schlafly2014}. It is a translucent region \citep[$\rm A_V \lesssim 1$ mag,][]{cambresy} devoid of star formation activity \citep{andre2010, menshchikov, WT2010}, except for the existence of possibly prestellar core(s) in the densest part of the cloud MCLD 123.5+24.9 (MCLD123) \citep[][]{WT2010, wagle}. Signatures of intense velocity shear have been identified in this region and have been linked to the dissipation of supersonic (but trans-Alfv\'enic) turbulence \citep[][and references therein]{hily-blant2009}. The dust emission in $\sim$16 deg$^2$ of the Polaris Flare has been mapped by \textit{Herschel} \citep{pilbratt2010} as part of the \textit{Herschel} Gould Belt Survey \citep{andre2010,miville,menshchikov,WT2010}. The \textit{Planck} space observatory has provided the first map of the plane-of-the-sky magnetic field in the area at tens of arcminute resolution \citep{planck19, planck20}. In \cite{panopoulou2015} (Paper I) we presented a map of the plane-of-the-sky magnetic field in the same area, measured by stellar optical polarimetry with RoboPol. The resolution of optical polarimetry (pencil beams) and coverage of our data allow for a detailed comparison between magnetic field and cloud structure. 
The goal of this work is to compare the magnetic field orientation to that of the linear structures in the Polaris Flare and to estimate the plane-of-the-sky component of the field in various regions of the cloud. In section \ref{ssec:wholemap} we present the distribution of relative orientations of filaments and $\rm B_{pos}$ throughout the mapped region. We compare properties such as the relative orientations and polarization angle dispersion across the map in section \ref{ssec:maps} and present the distribution of filament widths in section \ref{ssec:width}. We analyse two regions of interest separately in section \ref{ssec:regions} and estimate the $\rm B_{pos}$ strength in these regions in section \ref{ssec:Bstrength}. Finally, we discuss implications of our findings in section \ref{sec:discussion} and summarize our results in section \ref{sec:summary}.
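At each position, the comparison outlined above reduces to the acute angle between two orientations that are defined only modulo 180$^\circ$ (filament axes and polarization angles are axial, not vectorial, quantities). The following is a minimal sketch of this folding; the function and variable names are ours, purely for illustration:

```python
import numpy as np

def relative_orientation(theta_filament_deg, theta_b_deg):
    """Acute angle between two orientations (axial data, defined mod 180 deg)."""
    d = np.abs(np.asarray(theta_filament_deg) - np.asarray(theta_b_deg)) % 180.0
    return np.minimum(d, 180.0 - d)

# Example: a filament at 170 deg and a field at 10 deg differ by only 20 deg,
# while orientations 45 deg and 135 deg are exactly perpendicular.
angles = relative_orientation([170.0, 45.0], [10.0, 135.0])
```

The fraction of positions aligned to within 30$^\circ$ (70\% in this work) then follows from, e.g., `np.mean(angles <= 30)`.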
\label{sec:discussion} \subsection{Properties of the 3D magnetic field} The projected magnetic field of the cloud presents a very inhomogeneous structure. There are regions where it is uniform and others where the measured orientations appear random, or significant measurements are entirely absent. These characteristics may provide hints on the nature of the 3D field. Let us consider region A, which has the largest density of significant polarization measurements. As discussed in Paper I, this is not due to variation in stellar density, observing conditions or other systematics. Therefore, region A must be characterized by higher polarization efficiency (in the terminology of \cite{andersson2015}: intrinsic polarization per unit column density). We can further investigate this observation by considering that $p_d$ is related to the following factors \citep{leedraine}: \begin{equation} p_d = p_0 R F \cos^2\gamma , \label{eqn:leedraine} \end{equation} where $\gamma$ is the inclination angle (the angle between the magnetic field and the plane-of-the-sky), $p_0$ reflects the polarizing capability of the dust grains due to their geometric and chemical characteristics, and $R$ is the Rayleigh reduction factor \citep{greenberg} which quantifies the degree of alignment of the grains with the magnetic field. $F$ accounts for the variation of the field orientation along the line-of-sight and is equal to $F = \frac{3}{2}(\left\langle \cos^2\chi\right\rangle-\frac{1}{3})$, where $\chi$ is the angle between the direction of the field at any point along the line-of-sight and the mean field direction. The angular brackets denote an average along the line-of-sight. The increased $p_d$ of region A could therefore be the result of any of these factors (or some combination of them), in other words region A could have: \begin{itemize} \item[i)] increased alignment efficiency (e.g. more background radiation, a larger amount of asymmetric dust grains, larger grain sizes), i.e.
higher factors $p_0$ and/or $R$, \item[ii)] more uniform magnetic field along the line-of-sight, i.e. higher $F$, \item[iii)] increased $\rm B_{pos}$ (inclination of B is larger with respect to the line-of-sight), i.e. higher $\cos\gamma$. \end{itemize} Several pieces of evidence challenge the validity of case (i). First, if the radiation illuminating region A were much different in intensity or direction, then the dust temperature of the area would have to be qualitatively different (higher) than in other regions of the cloud. As mentioned in section \ref{ssec:Bstrength}, the results from \cite{menshchikov} imply that temperature and density variations are subtle across the field. Indeed, the temperature PDF presented by \cite{schneider2013} is narrow. Also, the most likely candidate for providing illumination to the cloud, due to its likely proximity, is Polaris (the star). \cite{zagury1999} concluded, by studying optical and $\rm 100\,\mu m$ light from MCLD123, that the star cannot be the primary source of dust heating. Additionally, larger amounts of (aligned) grains would imply larger column densities (or $\rm A_V$) than the rest of the cloud, which is not a characteristic of region A. To the best of our knowledge, evidence for significant variation of grain size within the same cloud between regions of such similar $\rm A_V$ does not exist. We now investigate whether the observed difference in $p_d$ between region A and other parts of the cloud could arise from differences in the properties of the magnetic field along the line-of-sight.
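Equation \ref{eqn:leedraine} is straightforward to evaluate numerically. The sketch below uses purely illustrative parameter values (none taken from the measurements of this work) and shows how both line-of-sight tangling and inclination depress $p_d$:

```python
import numpy as np

def polarization_fraction(p0, R, chi_los_deg, gamma_deg):
    """p_d = p0 * R * F * cos^2(gamma), with
    F = (3/2) * (<cos^2 chi> - 1/3) accounting for line-of-sight tangling."""
    chi = np.radians(np.asarray(chi_los_deg))
    F = 1.5 * (np.mean(np.cos(chi) ** 2) - 1.0 / 3.0)
    return p0 * R * F * np.cos(np.radians(gamma_deg)) ** 2

# A perfectly uniform field along the line of sight (chi = 0 everywhere)
# lying in the plane of the sky (gamma = 0) gives the maximum p_d = p0 * R.
p_max = polarization_fraction(p0=0.1, R=0.5, chi_los_deg=[0, 0, 0], gamma_deg=0)
```

Any tangling along the line of sight (nonzero $\chi$ values) or tilt toward the line of sight (larger $\gamma$) lowers the returned value below `p_max`.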
We can obtain an upper limit on the influence of the factor $F$ on $p_d$ by keeping all other factors in equation \ref{eqn:leedraine} constant and taking the ratio of two regions, for example A and B: \begin{equation} \frac{p_A}{p_B} \approx \frac{\left\langle \cos^2 \chi_A \right\rangle - \frac{1}{3}}{\left\langle \cos^2 \chi_B \right\rangle -\frac{1}{3}} \label{eqn:pratioF} \end{equation} where the average $p_d$ in regions A and B are equal to $p_A = 1.63\%, \, p_B = 1.3\%$ and $\left\langle \cos^2{\chi_i} \right\rangle$ ($i = A, B$) is an average over all lines of sight in region $i$. For region A, the angle dispersion is small ($\delta\theta \sim 10^\circ$, section \ref{ssec:regions}), so we can make the approximation $\left\langle \cos^2{\chi_A} \right\rangle \approx \cos^2{10^\circ}$ (to better than 10\%). We check what values of $\left\langle \cos^2{\chi_B} \right\rangle$ produce the observed ratio of $p_A/p_B$ (within 30\%) by drawing $\mathcal{N} = 1-9$ angles (the number of turbulent cells in region B) from a normal distribution with $\sigma = 10^\circ - 50^\circ$. After repeating the process 100 times, we find that the most likely values of $\sigma$ that can reproduce the observed $p_A/p_B$ are in the range $10^\circ-25^\circ$, similar to the observed angle dispersion on the plane-of-the-sky (section \ref{ssec:Bstrength}). Therefore, it is possible that region A has a more ordered field along the line-of-sight compared to other regions. Finally, if the 3D orientation of the magnetic field is mostly in the plane-of-the-sky in region A and less so in other parts of the cloud, this could also explain the increased measurement density in this region. We can estimate the change in angle that is needed to obtain the difference in $p_d$ between regions A and B, from equation \ref{eqn:leedraine}. Taking all factors equal between the two regions except the inclination angle, the ratio of $p_d$ is: $p_d^A/p_d^B = 1.63/1.3 = \cos^2\gamma_A/\cos^2\gamma_B$.
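The sampling experiment described above can be sketched as follows; the cell number, trial count and acceptance tolerance below are our own illustrative choices, not the exact procedure used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)
p_ratio = 1.63 / 1.30                                # observed p_A / p_B
F_A = np.cos(np.radians(10.0)) ** 2 - 1.0 / 3.0      # well-ordered region A

def f_factor(sigma_deg, n_cells, n_trials=200):
    """Monte-Carlo estimate of <cos^2 chi> - 1/3 for n_cells line-of-sight
    turbulent cells with Gaussian angle dispersion sigma_deg (median of trials)."""
    chi = rng.normal(0.0, sigma_deg, size=(n_trials, n_cells))
    return np.median(np.mean(np.cos(np.radians(chi)) ** 2, axis=1) - 1.0 / 3.0)

# Keep the dispersions whose predicted p_A/p_B matches the observed ratio
# to within 30%, mimicking the acceptance criterion in the text.
consistent = [s for s in range(10, 55, 5)
              if abs(F_A / f_factor(s, n_cells=5) - p_ratio) < 0.3 * p_ratio]
```

Small dispersions ($\sigma \sim 10^\circ-25^\circ$) survive the cut, while strongly tangled lines of sight ($\sigma \gtrsim 40^\circ$) overpredict the depolarization of region B.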
This ratio could arise either from a pair of large $\gamma_A,\, \gamma_B$ with a small difference or from small angles having a large difference. This ambiguity can be lifted by considering the expected polarization angle dispersions for different inclinations of the magnetic field, studied with MHD simulations by \cite{falceta}. These authors found that in the range $0^\circ-60^\circ$ the predicted polarization angle dispersions are below $35^\circ$ for their model with a strong magnetic field (Alfv\'en Mach number 0.7). Since the angle dispersions in the two regions are significantly lower than this value, it is safe to assume that $0^\circ < \gamma_A,\gamma_B < 60^\circ$. For this range, we find that the observed ratio of $p_d$ can be explained by inclination differences most likely in the range $\gamma_B-\gamma_A = 6^\circ - 30^\circ$. In order to explain the difference in $p_d$ between region A and the lowest mean-$p_d$ region in the map (Fig.~4, panel f, approximately at the center of the map), which has $p_d \sim 0.7\%$, the difference in inclination angle needs to be $20^\circ-50^\circ$. With the existing set of measurements, we are not able to conclude whether the variations in polarization fraction are mostly due to a change in inclination or due to differences in the uniformity of the field along the line-of-sight. We have, however, provided bounds on the areas of the parameter space in which these differences are likely to occur.
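The inclination-angle pairs compatible with the observed $p_d$ ratio can be found with a simple grid search over the allowed $0^\circ-60^\circ$ range; the grid step and matching tolerance below are our own illustrative choices:

```python
import numpy as np

ratio = 1.63 / 1.30        # observed p_d ratio between regions A and B

# Grid of inclination pairs within the range allowed by the angle dispersions.
gA, gB = np.meshgrid(np.arange(0.0, 60.5, 0.5), np.arange(0.0, 60.5, 0.5))
match = np.isclose(np.cos(np.radians(gA)) ** 2 / np.cos(np.radians(gB)) ** 2,
                   ratio, rtol=0.02)
diffs = (gB - gA)[match]
lo, hi = diffs.min(), diffs.max()   # range of compatible gamma_B - gamma_A
```

With a 2\% matching tolerance this yields compatible differences of roughly $4^\circ-28^\circ$; allowing for the measurement uncertainties on $p_d$ widens the interval toward the $6^\circ-30^\circ$ range quoted above.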
\subsection{Cloud distance} \begin{table} \centering \caption{Distance estimates to the cloud from different references.} \begin{tabular}{|c|c|c|} \hline Reference & $d$ (pc) & method \\ \hline \hline \cite{heithausen1990} & $\leq 240$ & Stellar extinction,\\ && nearby clouds\\ \cite{zagury1999} & 125 $\pm$ 25 & Association to stars \\ \cite{brunt2003} & 205 $\pm$ 62 & Size-linewidth relation \\ \cite{schlafly2014} & 390 $\pm$ 34 & Stellar extinction \\ \hline \end{tabular} \label{tab:distances} \end{table} In the literature there is no definitive consensus on the cloud's distance. The existing estimates are shown in table \ref{tab:distances} along with the method that was used to obtain each one. \cite{heithausen1990} base their distance estimate on reddening measurements of stars in the field from \cite{keenan}, who found reddened stars at distances as close as 100 pc and that all stars farther than 300 pc were reddened. Knowing that Polaris (the star) showed dust-induced polarization, and that the then existing distance estimates to the star were 109 $-$ 240 pc, they placed the cloud at a distance $d \leq 240$ pc. This distance fits with the smooth merging of the cloud at lower latitudes with the Cepheus Flare, at 250 pc. \begin{figure} \centering \includegraphics[scale=1]{Fig12.eps} \caption{Top: Polarization angle of the star Polaris \citep{heiles2000}, white segment, compared to our data, red segments. The $p_d$ of Polaris is 0.1\%, but its segment length has been enhanced 10 times. Its position is marked with a star. Bottom: Distribution of $\theta$ in the area shown in the top panel. The dark gray line shows the $\theta$ of Polaris while the gray band is the 1$\sigma$ error. Bin size is 10$^\circ$.} \label{fig:northstar} \end{figure} \cite{zagury1999} compared IRAS 100 $\rm\mu m$ emission with optical images of MCLD123 and found that the brightness ratios are consistent with Polaris (the star) being the illuminating source of the cloud in the optical.
They placed the cloud at a distance 6 $-$ 25 pc in front of the star ($\rm 105 \, pc < d < 125\, pc$ from the sun) so that its contribution to dust heating would be minimal compared to the interstellar radiation field. \cite{brunt2003} used Principal Component Analysis (PCA) of spectral imaging data to infer distances based on the universality of the size-linewidth relation for molecular clouds. Their estimate is consistent with both the above estimates. \cite{schlafly2014} used accurate photometry measurements of the Pan-STARRS1 survey and calculated distances to most known molecular clouds. Their estimate for the distance of the Polaris Flare (390 pc) was obtained for lines of sight in the outskirts of the cloud (outside our observed field). In Fig. \ref{fig:northstar} (top) we zoom in on the region surrounding the North Star and overplot the polarization data (red) and the measurement of the North Star (white) from the \cite{heiles2000} catalogue on the \textit{Herschel} image. The lengths of the segments are proportional to their $p_d$, and the length of the Heiles measurement has been enhanced 10 times. The position of the North Star is marked with a blue star. The star happens to be projected on an area of very little dust emission, hence the low $p_d$. Magnetic field orientations in the area show a strong peak around $110^\circ$, with a few measurements (that happen to fall towards the right and bottom of the area) clustering around $40^\circ$. This can be seen in the distribution of angles in the area shown in Fig. \ref{fig:northstar} (bottom). The polarization angle of the North Star \citep{heiles2000} is shown with the dark gray vertical line and the light gray band shows the 1$\sigma$ error. The stars that are nearest to Polaris, in projection, belong to the peak at $110^\circ$. The fact that the orientation of the North Star's polarization is consistent with this peak is intriguing.
It could add to the evidence supporting the scenario that Polaris lies behind the Flare, constraining its distance to the \cite{zagury1999} estimate. However, since the orientation of stellar polarization shifts by a substantial amount (60$^\circ$) in $\sim 20'$, denser sampling of the area is needed in order to confirm this indication. We combined RoboPol optical polarization measurements and \textit{Herschel} dust emission data to infer the magnetic field properties of the Polaris Flare. We found that linear dust structures (filaments and striations) are preferentially aligned with the projected magnetic field. This alignment is more prominent in regions where the fractional linear polarization is highest (and the number of significant polarization measurements is largest). This correlation supports the idea that variations in the alignment are partly caused by the projection of the 3-dimensional magnetic field. We investigated the possibility of important spatial variations in the filament widths and found only a slight indication of such an effect. Using the \cite{davis1951, chandrasekhar} and \cite{hildebrand} methods, we estimated the strength of the plane-of-the-sky field and the ratio of turbulent-to-ordered field components in two regions of the cloud: one containing diffuse striations, and the other harbouring the highest column density filament. Our results indicate that the magnetic field is dynamically important in both regions. Combining our results, we find that differences of $6^\circ-30^\circ$ in the magnetic field inclination between two cloud regions can explain the observed polarization fraction differences. This difference can also be explained by a difference of $10^\circ-25^\circ$ in the line-of-sight dispersion of the field. Finally, we find that the polarization angles of the North Star \citep{heiles2000} and of the RoboPol data in the surrounding area favour the scenario of the cloud lying in front of the star.
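The Davis-Chandrasekhar-Fermi estimate used above relates $B_{\rm pos}$ to the gas density $\rho$, the line-of-sight velocity dispersion $\sigma_v$ and the polarization-angle dispersion $\sigma_\theta$ through $B_{\rm pos} = Q\sqrt{4\pi\rho}\,\sigma_v/\sigma_\theta$ (CGS units; $Q\simeq0.5$ is a commonly adopted correction factor). A minimal sketch with illustrative inputs, not the measured values of this work:

```python
import numpy as np

def dcf_bpos_microgauss(n_H2_cm3, sigma_v_kms, sigma_theta_deg, Q=0.5):
    """Davis-Chandrasekhar-Fermi plane-of-sky field estimate in microgauss.
    B_pos = Q * sqrt(4*pi*rho) * sigma_v / sigma_theta (CGS), with rho from
    the H2 number density assuming a mean molecular weight of 2.8 m_H."""
    m_H = 1.6726e-24                      # g
    rho = 2.8 * m_H * n_H2_cm3            # g cm^-3
    sigma_v = sigma_v_kms * 1.0e5         # cm s^-1
    sigma_theta = np.radians(sigma_theta_deg)
    return Q * np.sqrt(4.0 * np.pi * rho) * sigma_v / sigma_theta * 1.0e6

# Illustrative numbers only: n(H2) = 100 cm^-3, sigma_v = 0.5 km/s,
# angle dispersion 10 deg give a field of order ten microgauss.
B = dcf_bpos_microgauss(n_H2_cm3=100.0, sigma_v_kms=0.5, sigma_theta_deg=10.0)
```

Note the scalings: a larger angle dispersion (a more turbulent projected field) lowers the estimate, while denser gas or broader lines raise it.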
16
7
1607.00005
1607
1607.00834_arXiv.txt
The X-ray integral field unit for the Athena mission consists of a microcalorimeter transition edge sensor pixel array. Incoming photons generate pulses which are analyzed in terms of energy, in order to assemble the X-ray spectrum. Usually this is done by means of optimal filtering in either the time or the frequency domain. In this paper we investigate an alternative method based on principal component analysis. This method attempts to find the main components of an orthogonal set of functions to describe the data. We show, based on simulations, how various instrumental effects influence this type of analysis. We compare analyses both in the time and in the frequency domain. Finally we apply these analyses to real data, obtained via frequency domain multiplexing readout.
The proposed X-ray integral field unit (X-IFU)\cite{xifu2013} for the Athena\cite{Athena2013} mission uses a microcalorimeter array as a 2-dimensional detection device, capable of retrieving high resolution spectra for each imaging element. Each pixel contains a transition edge sensor (TES) which records the energy pulses of incoming X-ray photons. The X-IFU detector will have of order 4000 pixels in the array. At SRON we develop a TES-pixel readout scheme by means of frequency domain multiplexing \cite{Hartog2011}. \begin{SCfigure} \includegraphics[width=0.40\textwidth,clip]{pca_ideal_pulse.ps} {\caption{\small Shape of ideal model pulses, without any systematic instrumental effects, but with some independent noise added. The energy of the pulses is drawn from the expected \mka distribution.} \label{fig:ideal_pulse}} \end{SCfigure} The recorded pulses will have to be analyzed in terms of incoming photon energy. Usually this is done by means of optimal filtering\cite{szymkowiak1993}, fitting a pulse shape in the time or frequency domain using appropriate weights based on the noise spectrum. In this paper we use an alternative way of processing by means of principal component analysis (PCA). PCA attempts to find the main components in an orthogonal set of shapes (eigenvectors) which describe the measured pulses. The main eigenvalues, or projections of the pulse onto the main eigenvectors, will have a relation with the incoming photon energy. Ideally only one main component should represent energy and the other components should represent other effects present in the data, but in practice several parameters are mixed in the different components. Projecting a data vector onto an eigenvector (the inner product) can also be seen as multiplying the data vector with a weights vector.
As such the PCA method can be seen as a modification of the optimal filtering method, in the sense that PCA derives a best set of weights instead of taking them directly from the noise spectrum, for a (set of) component(s) which measure the energy. Previously Busch et al.\cite{busch2015} have reported on the application of this method to real microcalorimeter photon pulses, in time-domain DC readout mode. They found this method to be quite promising for obtaining the best energy resolution. Here we report on the application of this method to simulated data, in order to investigate the relation of different effects present in the data to the components found by PCA. In addition we apply PCA to microcalorimeter data obtained at SRON from a multi-pixel array by means of frequency domain multiplexing (FDM). We compare analyses both in the time and in the frequency domain. We aim to minimize the number of relevant components, ideally to obtain the smallest set of components which relate to the energy of the photons, in other words, to maximize the isolation of a component representing the energy.
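The essence of the method can be sketched on toy data: simulate noisy pulses whose amplitude scales with photon energy, diagonalize the record covariance via an SVD, and check that the projection onto the leading eigenvector tracks the energy (up to an overall sign, which the SVD leaves undetermined). All pulse parameters below are illustrative choices, not X-IFU values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pulses, n_samples = 500, 1024
t = np.arange(n_samples)

# Toy pulses: amplitude proportional to photon energy, a fixed rise/decay
# shape, and white noise (no instrumental systematics).
energies = rng.uniform(5.8, 6.0, n_pulses)            # "keV", Mn K-alpha-like
shape = np.exp(-t / 200.0) - np.exp(-t / 20.0)
pulses = (energies[:, None] * shape[None, :]
          + rng.normal(0.0, 0.01, (n_pulses, n_samples)))

# PCA: eigenvectors of the record covariance via SVD of the centered data.
centered = pulses - pulses.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
c1 = centered @ vt[0]            # projection onto the first component

# For pure amplitude scaling, the first eigenvalue tracks energy (up to sign).
corr = np.corrcoef(c1, energies)[0, 1]
```

With a single pulse shape and independent noise, a single component carries essentially all of the energy information; instrumental effects such as gain drift or arrival-phase jitter would spread into additional components, which is exactly the diagnostic exploited in the text.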
We performed PCA on simulated data, both in the time and in the frequency domain, to study instrumental effects, and applied this analysis to real data from a multi-pixel array obtained via frequency domain multiplexing. We conclude the following: \begin{itemize} \item{Different instrumental effects can be recognized in the different PCA components obtained} \item{The relative importance of the different components in the PCA analysis offers diagnostics on the possibility of separating different instrumental effects from the photon energy} \item{PCA clearly shows that analysis in the frequency domain directly eliminates the problems with sampling phase and makes it possible to obtain a better energy resolution in a more efficient way} \end{itemize}
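The third conclusion can be illustrated with a toy pulse: a sub-sample shift in arrival time changes the time-domain samples appreciably, while the magnitude spectrum is left almost unchanged, so frequency-domain PCA needs no extra component to absorb sampling phase. The pulse shape and shift below are again illustrative:

```python
import numpy as np

n = 1024
t = np.arange(n, dtype=float)

def pulse(t0):
    """Toy double-exponential pulse starting at (possibly fractional) sample t0."""
    tt = np.clip(t - t0, 0.0, None)
    return np.exp(-tt / 200.0) - np.exp(-tt / 20.0)

a, b = pulse(0.0), pulse(0.4)     # same energy, arrival shifted by 0.4 samples

# Relative change of the raw samples vs. of the magnitude spectrum.
time_diff = np.linalg.norm(a - b) / np.linalg.norm(a)
fa, fb = np.abs(np.fft.rfft(a)), np.abs(np.fft.rfft(b))
freq_diff = np.linalg.norm(fa - fb) / np.linalg.norm(fa)
```

A continuous-time delay is a pure phase factor in the Fourier domain, so `freq_diff` is limited only by sampling (aliasing) effects and stays well below `time_diff`.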
16
7
1607.00834
1607
1607.00233_arXiv.txt
We present upper limits in the hard X-ray and gamma-ray bands at the time of the LIGO gravitational-wave event GW 151226 derived from the CALorimetric Electron Telescope ({\it CALET}) observation. The main instrument of {\it CALET}, the CALorimeter (CAL), observes gamma-rays from $\sim$1 GeV up to 10 TeV with a field of view of $\sim$2 sr. The {\it CALET} gamma-ray burst monitor (CGBM) views $\sim$3 sr and $\sim$2$\pi$ sr of the sky in the 7 keV - 1 MeV and the 40 keV - 20 MeV bands, respectively, by using two different scintillator-based instruments. The CGBM covered 32.5\% and 49.1\% of the GW 151226 sky localization probability in the 7 keV - 1 MeV and 40 keV - 20 MeV bands, respectively. We place a 90\% upper limit of $2 \times 10^{-7}$ erg cm$^{-2}$ s$^{-1}$ in the 1 - 100 GeV band, where CAL reaches 15\% of the integrated LIGO probability ($\sim$1.1 sr). The CGBM 7 $\sigma$ upper limits are $1.0 \times 10^{-6}$ erg cm$^{-2}$ s$^{-1}$ (7-500 keV) and $1.8 \times 10^{-6}$ erg cm$^{-2}$ s$^{-1}$ (50-1000 keV) for a one-second exposure. These upper limits correspond to a luminosity of $3$-$5$ $\times 10^{49}$ erg s$^{-1}$, which is significantly lower than that of typical short GRBs.
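The luminosity figures quoted above follow from the isotropic-equivalent conversion $L = 4\pi d_L^2 F$ at the central GW 151226 luminosity distance. The sketch below (function names are ours; redshift and band corrections at $z\simeq0.09$ are neglected, so it recovers the quoted values only to within a few tens of percent):

```python
import math

MPC_CM = 3.0857e24                 # cm per megaparsec

def iso_luminosity(flux_cgs, d_mpc):
    """Isotropic-equivalent luminosity L = 4*pi*d^2*F for a flux in
    erg cm^-2 s^-1 and a distance in Mpc (no redshift corrections)."""
    d = d_mpc * MPC_CM
    return 4.0 * math.pi * d * d * flux_cgs

# CGBM flux upper limits at the central GW 151226 distance of 440 Mpc:
L_hxm = iso_luminosity(1.0e-6, 440.0)   # 7-500 keV limit
L_sgm = iso_luminosity(1.8e-6, 440.0)   # 50-1000 keV limit
```

Both values land at a few $\times10^{49}$ erg s$^{-1}$, the scale against which typical short-GRB luminosities ($\gtrsim10^{50}$ erg s$^{-1}$) are compared.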
\label{sec:intro} The first gravitational-wave detection by the Laser Interferometer Gravitational-Wave Observatory (LIGO), GW 150914, confirmed the existence not only of gravitational waves from astronomical objects but also of a binary black hole system with several tens of solar masses \citep{ligo_gw150914}. Based solely on the gravitational-wave signals recorded by the two LIGO detectors, the current hypothesis is that GW 150914 was the result of a merger of two black holes with initial masses of $36_{-4}^{+5} M_{\sun}$ and $29_{-4}^{+4} M_{\sun}$ at a luminosity distance of $410_{-180}^{+160}$ Mpc. The {\it Fermi} Gamma-ray Burst Monitor ({\it Fermi}-GBM) reported a possible weak gamma-ray transient source above 50 keV at 0.4 s after the GW 150914 trigger \citep{fermigbm_gw150914}. However, the upper limit provided by the {\it INTEGRAL} ACS instrument in a gamma-ray energy band similar to that of the {\it Fermi}-GBM is not consistent with the possible gamma-ray counterpart of GW 150914 suggested by the {\it Fermi}-GBM \citep{integral_gw150914}. No electromagnetic counterpart of GW 150914 was found in the radio, optical, near-infrared, X-ray and high-energy gamma-ray bands \citep{em_gw150914}. GW 151226 (LIGO-Virgo trigger ID: G211117) is the second gravitational-wave candidate identified by both the LIGO Hanford Observatory and the LIGO Livingston Observatory with high significance (a false-alarm rate of less than one per 1000 years in the on-line search) at 3:38:53.647 UT on December 26, 2015 \citep{ligo_gw151226}. According to a Bayesian parameter estimation analysis, the event is very likely a binary black hole merger with initial black hole masses of $14.2_{-3.7}^{+8.3} M_{\sun}$ and $7.5_{-2.3}^{+2.3} M_{\sun}$, and a final black hole mass of $20.8_{-1.7}^{+6.1} M_{\sun}$ \citep{ligo_O1run}. The luminosity distance of the source is estimated as $440_{-190}^{+180}$ Mpc, which corresponds to a redshift of $0.09_{-0.04}^{+0.03}$.
As far as the electromagnetic counterpart search of GW 151226 in the gamma-ray regime is concerned, {\it Fermi}-GBM \citep{fgbm_gw151226}, the {\it Fermi} Large Area Telescope (LAT) \citep{flat_gw151226}, the High-Altitude Water Cherenkov Observatory (HAWC) \citep{hawc_gw151226}, and {\it Astrosat}-CZTI \citep{astrosat_czti_gw151226} reported no detections around the GW trigger time. According to \citet{racusin2016}, the flux upper limit of {\it Fermi}-GBM is from $4.5$ $\times$ 10$^{-7}$ erg cm$^{-2}$ s$^{-1}$ to $9$ $\times$ 10$^{-7}$ erg cm$^{-2}$ s$^{-1}$ in the 10-1000 keV band. The {\it Fermi}-LAT flux upper limit using the first orbit data after the LIGO trigger is from $2.6 \times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$ to $7.8 \times 10^{-9}$ erg cm$^{-2}$ s$^{-1}$ in the 0.1-1 GeV band. The CALorimetric Electron Telescope ({\it CALET}; \citet{torii_2015, asaoka_2015}) mission, which was successfully launched and emplaced on the Japanese Experiment Module - Exposed Facility (JEM-EF) of the International Space Station (ISS) in August 2015, was fully operational at the time of GW 151226. {\it CALET} consists of two scientific instruments. The Calorimeter (CAL) is the main instrument, capable of observing high energy electrons from $\sim$1 GeV to $\sim$20 TeV, protons, helium and heavy nuclei from $\sim$10 GeV to 1000 TeV, and gamma-rays from $\sim$1 GeV to $\sim$10 TeV. The field of view (FOV) of CAL is $\sim$$45^{\circ}$ from the zenith direction. The other instrument, the {\it CALET} Gamma-ray Burst Monitor (CGBM; \citet{yamaoka2013}), is a gamma-ray burst (GRB) monitor using two different kinds of scintillators (LaBr$_{3}$(Ce) and BGO) to achieve a broad energy coverage. The Hard X-ray Monitor (HXM), using LaBr$_{3}$(Ce), covers the energy range from 7 keV up to 1 MeV, and two identical modules are mounted facing the same direction on {\it CALET}. The Soft Gamma-ray Monitor (SGM), based on BGO, covers the energy range from 40 keV to 20 MeV.
The FOVs of the HXM and SGM are $\sim$$60^{\circ}$ and $\sim$$110^{\circ}$ from the boresight, respectively. The CGBM has been detecting GRBs at an average rate of 3-4 events per month. Around the trigger time of GW 151226, {\it CALET} was performing regular scientific data collection. Between 3:30 and 3:43 UT, the CAL was operating in the low-energy gamma-ray mode, which is an operation mode with a lower energy threshold of 1 GeV. The high voltages of the CGBM were set at the nominal values around 3:20 UT and turned off around 3:40 UT to avoid a region of high background radiation. There was no CGBM on-board trigger at the trigger time of GW 151226.
16
7
1607.00233
1607
1607.02146_arXiv.txt
{We present observations with the Karl G. Jansky Very Large Array (VLA) at 3~GHz (10~cm) toward a sub-field of the XXL-North 25~deg$^{2}$ field targeting the first supercluster discovered in the XXL Survey. The structure has been found at a spectroscopic redshift of 0.43 and extending over $0^\circ\llap{.}35 \times 0^\circ\llap{.}1$ % on the sky. The aim of this paper is twofold. First, we present the 3~GHz VLA radio continuum observations, the final radio mosaic and radio source catalogue, and, second, we perform a detailed analysis of the supercluster in the optical and radio regimes using photometric redshifts from the CFHTLS survey and our new VLA-XXL data. Our final 3~GHz radio mosaic has a resolution of $3\farcs2\times1\farcs9$, and encompasses an area of $41\arcmin\times41\arcmin$ with rms noise level lower than $\sim20~\mu$Jy~beam$^{-1}$. The noise in the central $15\arcmin\times15\arcmin$ region is $\approx11\mu$Jy~beam$^{-1}$. From the mosaic we extract a catalogue of 155 radio sources with signal-to-noise ratio (S/N)$\geq6$, eight of which are large, multicomponent sources, and 123 ($79\%$) of which can be associated with optical sources in the CFHTLS W1 catalogue. Applying Voronoi tessellation analysis (VTA) in the area around the X-ray identified supercluster using photometric redshifts from the CFHTLS survey we identify a total of \nTott overdensities at $z_{\mathrm{phot}}=0.35-0.50$, \nXMM of which are associated with clusters detected in the \textit{XMM-Newton} XXL data. We find a mean photometric redshift of 0.43 for our overdensities, consistent with the spectroscopic redshifts of the brightest cluster galaxies of \nXMMt X-ray detected clusters. The full VTA-identified structure extends over $\sim0^\circ\llap{.}6 \times 0^\circ\llap{.}2$ on the sky, which corresponds to a physical size of $\sim12\times4$~Mpc$^{2}$ at $z=0.43$. 
No large radio galaxies are present within the overdensities, and we associate \nRadioInCluster (S/N$>7$) radio sources with potential group/cluster member galaxies. The spatial distribution of the red and blue VTA-identified potential group member galaxies, selected by their observed $g-r$ colours, suggests that the clusters are not virialised yet, but are dynamically young, as expected for hierarchical structure growth in a $\Lambda$CDM universe. Further spectroscopic data are required to analyse the dynamical state of the groups.}
Over the last decade, deeper insights into various physical properties of galaxies, their formation and their evolution through cosmic time have been gained by sensitive, multiwavelength surveys such as GOODS \citep{dickinson03}, COSMOS \citep{scoville07}, GAMA (\citealt{driver09,driver11}) and CANDELS (\citealt{koekemoer11}; \citealt{grogin11}). The XXL\footnote{http://irfu.cea.fr/xxl } is a panchromatic survey (X-ray to radio) of two regions on the sky, each 25~deg$^{2}$ in size. Constraining the time evolution of the Dark Energy equation of state using galaxy clusters is the main goal of the XXL project. In a deep (6~Ms) \textit{XMM-Newton}% \footnote{http://xmm.esac.esa.int/ % } survey reaching a depth of $\sim5\times10^{-15}\, {\rm erg\, s^{-1}\, cm^{-2}}$ in the {[}0.5-2{]}~keV band, several hundred galaxy cluster detections ($0<z<1.5$) were made over the $\sim$50~deg$^{2}$ \citep[hereafter][]{pierre15}. The XXL South region (XXL-S) is located in the southern hemisphere in the Blanco Cosmology Survey (BCS) field% \footnote{http://www.usm.uni-muenchen.de/BCS/% } and the XXL North region (XXL-N), near the celestial equator, encompasses the smaller XMM Large Scale Structure Survey (XMM-LSS)% \footnote{http://wela.astro.ulg.ac.be/themes/spatial/xmm/LSS/% } field. Multiwavelength photometric data ($<25$ in AB mag) from the UV/optical to the IR, drawn from \textit{GALEX}, the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS), BCS, the Sloan Digital Sky Survey (SDSS)% \footnote{http://www.sdss.org/% }, the Two Micron All Sky Survey (2MASS)% \footnote{http://www.ipac.caltech.edu/2mass/% }, the \textit{Spitzer} Space Telescope% \footnote{http://ssc.spitzer.caltech.edu/% }, and the Wide-field Infrared Survey Explorer (WISE)% \footnote{http://wise.ssl.berkeley.edu/% }, already exist over almost the full area \citep{pierre15}.
Even deeper observations are available over smaller areas \citep{pierre15}, and will be conducted over the full area with DECam\footnote{http://www.darkenergysurvey.org/DECam/camera.shtml} to magnitudes of $25-26$ in the \textit{griz} bands, and to $26-27$ in the $grizY$ bands with Hyper Suprime-Cam\footnote{http://www.naoj.org/Projects/HSC/}. Photometric redshifts are expected to reach accuracies better than $\sim10$\%, sufficient for large-scale structure studies out to intermediate redshift ($z\sim0.5$), and for evolutionary studies of galaxies and AGN to $z\sim3$. More than 15,000 optical spectra are already available (see \citealt{pierre15}). To complement the multiwavelength coverage of the field, we present new radio observations with the Karl G. Jansky Very Large Array (VLA) at 3~GHz over a $\sim0^\circ\llap{.}7 \times 0^\circ\llap{.}7$ subarea of the XXL-N field. These observations target the first supercluster discovered in the XXL survey \citep{pompei15}. One cluster within this large structure was previously identified at $z_{\mathrm{phot}}=0.48$ using photometric redshifts in the CFHTLS wide field by \citet{durret11}. The \textit{XMM-Newton} XXL observations revealed six X-ray clusters within $\sim0^\circ\llap{.}35 \times 0^\circ\llap{.}1$, as described in \citet{pierre15} (see also \citealt{pacaud15}). Based on further spectroscopic observations of the brightest cluster galaxies by \citet{koulouridis15}, \citet{pompei15} report a redshift for the structure of $z=0.43$. The structure thus has a physical extent of $\sim10\times2.9$~Mpc$^{2}$ ($21\arcmin\times6\arcmin$). Here we present a detailed analysis of this structure based on the CFHTLS optical data and our new VLA radio data.
Throughout this paper we use cosmological parameters in accordance with the \textit{Wilkinson} Microwave Anisotropy Probe satellite final data release (WMAP9) combined with a set of baryon acoustic oscillation measurements and constraints on $H_{0}$ from Cepheids and type Ia supernovae \citep{hinshaw13}. These parameters are $\Omega_{M}=0.282$, $\Omega_{\Lambda}=0.718$ and $H_{0}=69.7~\rm km~s^{-1}~Mpc^{-1}$. All sizes given in this paper are physical. All magnitudes are in the AB magnitude system, and all coordinates in J2000 epoch. In Sect.~\ref{sec:data} we present the optical and radio data used in this paper. In Sect.~\ref{sec:supercluster} we describe optical and radio properties of the supercluster. Discussion of the results is presented in Sect.~\ref{sec:discussion} and the summary is given in Sect.~\ref{sec:summary}.
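For reference, the angular-to-physical conversions used throughout (e.g. an extent of $0^\circ\llap{.}6$ at $z=0.43$ corresponding to $\sim12$~Mpc) follow from the angular diameter distance in this cosmology. A minimal flat-$\Lambda$CDM sketch in pure NumPy (the function names are ours, not from any package):

```python
import numpy as np

# WMAP9-like flat LambdaCDM parameters quoted in the text.
H0 = 69.7            # Hubble constant, km/s/Mpc
OMEGA_M = 0.282
OMEGA_L = 0.718
C_KMS = 299792.458   # speed of light, km/s

def angular_diameter_distance(z, n=10000):
    """Angular diameter distance in Mpc for a flat LambdaCDM cosmology."""
    zgrid = np.linspace(0.0, z, n)
    f = 1.0 / np.sqrt(OMEGA_M * (1.0 + zgrid) ** 3 + OMEGA_L)
    h = zgrid[1] - zgrid[0]
    d_c = (C_KMS / H0) * h * (f.sum() - 0.5 * (f[0] + f[-1]))  # comoving, trapezoid rule
    return d_c / (1.0 + z)

def physical_size_mpc(theta_arcmin, z):
    """Physical size in Mpc subtended by an angle theta_arcmin at redshift z."""
    theta_rad = np.radians(theta_arcmin / 60.0)
    return angular_diameter_distance(z) * theta_rad
```

With these parameters, $0^\circ\llap{.}6$ (36$\arcmin$) at $z=0.43$ corresponds to $\approx12$~Mpc, matching the supercluster extent quoted above.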
\section{Discussion} \label{sec:discussion} \subsection{Constraining total masses of the overdensities} In \citet{pompei15}, the six westernmost X-ray-detected clusters in the XLSSC-e supercluster were spectroscopically confirmed to lie at $z=0.42-0.43$ within an extent of $\sim 0^\circ\llap{.}35 \times 0^\circ\llap{.}1$ on the sky (a physical extent of $\sim10\times2.9$~Mpc$^{2}$ at $z=0.43$). Using the XXL X-ray data, they also computed the masses within the radius enclosing a mean density of 500 times the critical density, finding values in the range M$_{\mathrm{500}}=(1-3)\times10^{14}$~\msol. This yields a total mass of all six of the westernmost X-ray-detected clusters of $M_{\mathrm{500}}\sim10^{15}$~\msol\ (see Table 1 in \citealt{pompei15}; see also Table 3 in \citealt{pacaud15}). Additionally, ten VTA-identified overdense structures have not been identified as X-ray clusters, presumably because their X-ray emission falls below the X-ray cluster detection threshold. Nevertheless, for these overdense structures we can place upper limits on their X-ray properties as follows. The upper limits were calculated from upper limits on the count rates in a 300~kpc aperture centred on the middle of the optical structures listed in Table~\ref{tab:clusterstable}, following the Bayesian approach to X-ray aperture photometry described in detail in J.~Willis et al. (in prep.). This method calculates the background-marginalised posterior probability distribution function (PDF) of the source intensity $S$, assuming Poisson likelihoods for the numbers of counts detected in the source and background apertures in the given exposure time.
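As an illustration of this Bayesian aperture-photometry scheme, the background-marginalised posterior and its upper bound can be sketched as follows. This is a simplified stand-in, not the J.~Willis et al. implementation: the flat priors and grid sizes are our assumptions, and the bound is extracted as a plain posterior quantile rather than the mode-centred summation described below.

```python
import numpy as np
from math import lgamma

def _pois_pmf(k, mu):
    """Poisson pmf for an integer count k and an array of means mu (> 0)."""
    return np.exp(k * np.log(mu) - mu - lgamma(k + 1))

def bayesian_upper_limit(n_src, n_bkg, t_src, t_bkg, area_ratio,
                         conf=0.997, ngrid=1500):
    """
    Background-marginalised posterior for the source count rate S, given
    Poisson counts in a source and a background aperture (flat priors on
    both rates).  `area_ratio` rescales the background rate to the source
    aperture.  Returns the `conf` upper bound of the posterior, i.e. an
    upper limit when the posterior peaks at S = 0.
    """
    s = np.linspace(0.0, 5.0 * max(n_src, 3) / t_src, ngrid)   # source-rate grid
    b = np.linspace(1e-9, 5.0 * max(n_bkg, 3) / t_bkg, ngrid)  # background-rate grid
    S, B = np.meshgrid(s, b, indexing="ij")
    like = (_pois_pmf(n_src, (S + B * area_ratio) * t_src)
            * _pois_pmf(n_bkg, B * t_bkg))
    post = like.sum(axis=1)               # marginalise over the background rate
    cdf = np.cumsum(post) / post.sum()    # normalised cumulative posterior
    return s[min(np.searchsorted(cdf, conf), ngrid - 1)]
```

For example, 5 source-aperture counts over an expected background of 5 yields a small, finite upper limit on the source rate, which grows as the observed counts increase.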
The mode of this PDF is determined, and the lower and upper bounds of the confidence region are found by summing values of the PDF alternately above and below the mode until the desired confidence level is reached. When the mode is at $S=0$, or when the calculation of the lower bound reaches $S=0$, only the upper confidence bound is evaluated and is taken as an upper limit. With this upper limit on the count rate in a 300~kpc aperture, we used the mass-temperature relation of \citet{lieu15} and the luminosity-temperature relation of \citet{giles15} to calculate upper limits on the masses of the VTA overdense structures. Starting with an initial estimate of the temperature of the object, we use the $M-T$ relation to evaluate the mass and the corresponding radius $r_{500}$, within which the mean density is 500 times the critical density. We then convert the given count rate into an X-ray luminosity integrated out to $r_{500}$, and re-estimate the temperature predicted by the corresponding $L-T$ relation. We iterate until the temperature converges. The resulting upper limits are given in Table~\ref{tab:clusterstable}. For VTA10 we masked two AGN point sources, leaving only 48\% of the area of the 300~kpc aperture for the calculation of the upper limit on the flux. As expected, the upper limits on the total masses estimated for our VTA-identified overdensities without X-ray detections are consistent with the structures being poor clusters or groups (e.g. \citealt{vajgel14}). Spectroscopic velocity dispersions would allow a classification of these structures (e.g. \citealt{dehghan14}); however, no such spectroscopic data are currently available. \subsection{Morphology and composition of the supercluster} As is evident from Fig.
\ref{fig:voronoi_colour}, the X-ray emission of the clusters XLSSC~082, 083, and 084 overlaps into a morphologically elongated ensemble in the NW-SE direction. Our VTA analysis reveals multiple overdense structures of optical galaxies in the same field, three of which lie at the same redshift but are not detected in the X-rays (VTA01-VTA03). Overdense structure VTA01 is located toward the SE of the ensemble revealed in the X-rays, and appears to be a distinct structure. The overdense structures VTA02 and VTA03 are located toward the south of the ensemble. The non-detection of these three VTA-identified overdense structures in the X-ray data places an upper limit on their combined total virial mass of 2.3$\times10^{14}$~\msol\ (see Table~\ref{tab:clusterstable}). The spatial distribution of red and blue galaxies in the X-ray ensemble (XLSSC~082, XLSSC~083, XLSSC~084) and in the VTA01-03 groups appears non-structured, i.e.\ there is no clear red-blue galaxy separation with increasing distance from the group centre, which suggests an unvirialised state of the groups. If this is the case, the calculated virial masses may overestimate the real masses of these cluster candidates. We find a 3~GHz radio source (VLA-XXL ID = 072) associated with the third brightest galaxy within the XLSSC~083 cluster, close to the peak of the X-ray emission. Its monochromatic 3~GHz radio luminosity density is $(4.05~\pm~0.55)\times10^{22}$~\wh, and it is associated with a red-sequence galaxy ($g-r=1.6$). This suggests that the radio synchrotron emission likely arises from a weak AGN within the galaxy, of the kind often found at the bottom of the gravitational potential wells of galaxy clusters (e.g. \citealt{smolcic08, smolcic10}).
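The count-rate-to-mass conversion used for the X-ray upper limits above, iterating between the $M-T$ and $L-T$ relations until the temperature converges, can be sketched as a simple fixed-point loop. All coefficients below are hypothetical placeholders, not the fitted relations of \citet{lieu15} and \citet{giles15}, and the aperture correction is a crude stand-in for a proper surface-brightness-profile integration:

```python
import numpy as np

# Hypothetical power-law scalings, pivoted at T = 3 keV (placeholder values):
#   M500 = M0 (T/3 keV)^a,   L_X = L0 (T/3 keV)^b
M0, A_MT = 2.0e14, 1.67        # Msun
L0, B_LT = 1.0e43, 3.0         # erg/s

def m500_from_t(t_kev):
    return M0 * (t_kev / 3.0) ** A_MT

def r500_from_m(m500):
    return 1.0 * (m500 / 2.0e14) ** (1.0 / 3.0)   # Mpc, placeholder scaling

def t_from_lx(l_x):
    return 3.0 * (l_x / L0) ** (1.0 / B_LT)

def iterate_mass(count_rate, lum_per_rate=2.0e43, tol=1e-4, max_iter=50):
    """
    Fixed-point loop: T -> M500 and r500 (M-T) -> luminosity within r500
    from the aperture count rate -> new T (L-T), until T converges.
    `lum_per_rate` is a hypothetical counts-to-luminosity conversion.
    """
    t = 2.0                                          # initial guess, keV
    for _ in range(max_iter):
        r500 = r500_from_m(m500_from_t(t))
        aper_corr = 1.0 / (1.0 - np.exp(-r500 / 0.3))  # extrapolate aperture flux to r500
        t_new = t_from_lx(count_rate * lum_per_rate * aper_corr)
        if abs(t_new - t) < tol * t:
            return m500_from_t(t_new), t_new
        t = t_new
    return m500_from_t(t), t
```

The loop converges in a few iterations because the effective exponents are well below unity; a higher count-rate upper limit maps monotonically onto a higher mass limit.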
The combined X-ray, radio, and optical data thus suggest that the bottom of the gravitational potential well lies within the X-ray peak, and that the six clusters and overdense structures identified in this region (XLSSC~082, 083, 084 and VTA01, 02, 03) are likely in the process of merging and forming a larger structure. Based on the $M_{\mathrm{500}}$ values calculated in \citet{pompei15}, together with the upper mass limits given in Table~\ref{tab:clusterstable}, we estimate an upper limit to the total mass of the western structure of $M_{\mathrm{500}}\lesssim1.3\times10^{15}$~\msol. The clusters XLSSC~085 and 086 are at projected distances of $8\arcmin$ and $4\farcm8$, respectively, from the XLSSC~083 cluster, corresponding to distances of $\sim2.7$ and $\sim\rm 1.6~Mpc$ at $z=0.43$. Our VTA identifies overdense structures associated with both clusters. The red-blue galaxy distribution in XLSSC~086 appears non-structured, and both structures show subclumping and elongated overdense features. It is interesting that the brightest galaxy (in the $r$ band) in XLSSC~086 is a red galaxy toward the south of the X-ray emission, while the next three brightest galaxies (one of them hosting the radio source VLA-XXL ID = 093) are associated with the X-ray centroid. The E-W elongation of the X-ray emission of this cluster, supported by a similar elongation of the distribution of galaxies associated with VTA-identified overdense structures, suggests the existence of two substructures that are in the process of merging. The X-ray-detected cluster XLSSC~081 lies $\sim0.35$~deg away from XLSSC~083 (see Fig.~\ref{fig:voronoi_colour}), while the cluster XLSSC~099 is located further to the east, $\sim0.43$~deg from XLSSC~083. Furthermore, the VTA associates an elongated overdense structure of galaxies with this cluster.
In addition to this cluster, our VTA has revealed seven more overdense structures in this region (VTA04-10), most of them toward the east of the cluster and one potential overdensity (VTA07) toward the west. All overdense structures identified in this region seem to have a non-segregated distribution of red and blue galaxies, again suggesting a dynamically young state. The X-ray emission coincident with VTA10 originates from two background X-ray sources, spectroscopically found by SDSS to lie at $z_{spec}\sim0.6$ and $z_{spec}\sim2.1$, as reported in \citet{koulouridis15}. In summary, the VTA resulted in the identification of \nTott overdense structures at $z_{\mathrm{phot}}=0.43$, \nXMMt of which are associated with X-ray detected clusters, while the remaining 10 structures are likely groups. The full structure extends over $0^\circ\llap{.}6 \times 0^\circ\llap{.}2$ on the sky, which corresponds to a size of $\sim12\times4$~Mpc$^{2}$ at $z=0.43$. Two main galaxy cluster agglomerations are located toward the east (XLSSC~081, XLSSC~099, VTA04-10) and the west (XLSSC~082-086, VTA01-03) of this structure. The identified overdense structures seem unvirialised and dynamically young, suggesting that they are perhaps in the process of forming a larger structure. To shed further light on their dynamical state, in the next section we investigate the radio properties of the western cluster/group agglomeration covered by our 3~GHz observations. \subsection{Radio properties of the western cluster/group agglomeration} Previous studies have used radio source counts and luminosity functions to investigate the enhancement or suppression of radio emission. Several authors have found that in the pre-merging stages the suppression of AGN activity has not yet reached the cluster core \citep{venturi97, venturi01}, where radio AGN preferentially reside \citep{dressler80}.
A different situation is found in clusters that have likely already undergone the first core-to-core encounter and are now accreting smaller groups. In such cases, an excess in the number of blue galaxies is found at the expected positions of the shock fronts \citep{miller05, johnston08}, as the merging process has had enough time to influence the galaxies' optical and radio properties. This implies that cluster mergers in later stages lower the probability of a galaxy developing an AGN \citep{venturi00, venturi01}, but mergers have also been shown to enhance the number of low-power radio galaxies driven by star formation \citep{miller05, johnston08}. In this sense, as can be seen in Fig.~\ref{fig:mfmapsc}, the clusters XLSSC~082-084 are close to each other, and we find that the core of XLSSC~083 seems to be preserved. Thus it appears that the agglomeration of XLSSC~082-084 has not yet undergone the first core-to-core encounter. The cluster XLSSC~086 also contains a radio source within a red host galaxy at its centre, and we note the increase in the number of blue galaxies between XLSSC~086 and the large cluster structure south-west of it, possibly indicating a shock front in this area. This picture seems consistent with the merger process discussed by \citet{venturi01}: the most massive structures are the first to undergo a merger (in this case XLSSC~082-084), and they later attract satellite structures of lower mass (in this case XLSSC~086, and possibly VTA02-03). To investigate this further, in Fig.~\ref{fig:counts} we show the Euclidean-normalised radio source counts for the supercluster area and for the field with good rms excluding the supercluster area, as well as the general field source counts taken from \citet{condon12}. Given the low number of radio sources in our field, the supercluster counts are consistent with those in the field.
Our results suggest that the radio counts within the supercluster may be slightly enhanced (suppressed) at low (high) fluxes relative to the field. This resembles the suppression of radio AGN and the enhancement of low-power (likely star-forming) radio galaxies found in the Shapley Supercluster \citep{venturi00, venturi02, miller05} and in the Horologium-Reticulum Supercluster \citep{johnston08} at $z\sim$0.07. However, given the large error bars this is not statistically significant, which prevents us from drawing further conclusions. \begin{figure}[!hp] \centering \includegraphics[width=1\linewidth]{countsVLAXXLN_SC_3_Gehrels2} \caption{1.4~GHz radio source counts for the supercluster area (red triangles) and for the field with good rms excluding the supercluster area (black). The red points are slightly offset from the bin centres for better visualisation. Errors are Poissonian, calculated using the approximate algebraic expressions from \citet{gehrels86}. Radio source counts from \citet{condon12} are shown by the blue line. The flux densities at 1.4~GHz were calculated from the 3~GHz fluxes assuming a spectral index of $-0.7$, for consistency with \citet{condon12}.} \label{fig:counts} \end{figure} Our radio data probe $L_{3~\mathrm{GHz}}\gtrsim4\times10^{21}~\mathrm{W/Hz}$. This limit is very close to the division between star-forming and AGN galaxies, and thus in the range where the volume densities of star-forming and AGN galaxies are comparable. Converting this limit to a star formation rate (SFR) using the \citet{bell03} relation yields SFR$\gtrsim30$~\msolyr. Thus, our data are not deep enough to probe a potential enhancement of star-forming galaxies within the supercluster at $\lesssim30$~\msolyr. However, they are sensitive enough to detect powerful radio AGN, usually hosted by red galaxies (e.g. \citealt{best06}, \citealt{smolcic08}), if present within the supercluster.
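The quantities entering the source-count comparison above -- the 3-to-1.4~GHz flux scaling, the Euclidean normalisation, and the approximate \citet{gehrels86} Poisson bounds -- can be sketched as follows (function names are ours; the Gehrels expressions are the standard 1$\sigma$-equivalent approximations):

```python
import numpy as np

def to_1p4_ghz(s_3ghz, alpha=-0.7):
    """Scale 3 GHz flux densities to 1.4 GHz assuming S_nu ~ nu^alpha."""
    return np.asarray(s_3ghz) * (1.4 / 3.0) ** alpha

def gehrels_1sigma(n):
    """Approximate 84.13%/15.87% Poisson bounds (Gehrels 1986 style)."""
    n = np.asarray(n, dtype=float)
    upper = n + np.sqrt(n + 0.75) + 1.0
    nn = np.maximum(n, 1.0)                       # guard against n = 0
    lower = np.where(n > 0,
                     n * (1.0 - 1.0 / (9.0 * nn)
                          - 1.0 / (3.0 * np.sqrt(nn))) ** 3,
                     0.0)
    return lower, upper

def euclidean_counts(fluxes_jy, bins_jy, area_sr):
    """Euclidean-normalised differential counts S^2.5 dN/dS per flux bin."""
    bins_jy = np.asarray(bins_jy, dtype=float)
    n, _ = np.histogram(fluxes_jy, bins=bins_jy)
    s_mid = np.sqrt(bins_jy[:-1] * bins_jy[1:])   # geometric bin centres
    norm = s_mid ** 2.5 / (np.diff(bins_jy) * area_sr)
    lo, up = gehrels_1sigma(n)
    return n * norm, lo * norm, up * norm
```

For $n=100$ the approximations give bounds of roughly $\pm10$ counts, recovering the Gaussian $\sqrt{n}$ limit.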
In Fig.~\ref{fig:StellarMass} we show the fraction of red galaxies detected in the radio (radio AGN, as mentioned above) with $L_{1.4~\mathrm{GHz}}>10^{23}~\mathrm{W/Hz}$ as a function of stellar mass for the region shown in the upper panel of Fig.~\ref{fig:voronoi_colour}. The fraction was calculated as the ratio of the number of red ($g-r\geq1.17$) galaxies from the supercluster area detected in the radio (see Tab.~\ref{tab:radio}) with $L_{1.4~\mathrm{GHz}}>10^{23}~\mathrm{W/Hz}$ to the number of optical red galaxies in the same field. The radio luminosity threshold was chosen to allow a comparison with the red galaxy fraction of NVSS-detected sources hosted by red galaxies from \citet{best05}, also shown in Fig.~\ref{fig:StellarMass}. The fractions derived here and those derived for the general field by \citet{best05} agree well for masses above $\log(\rm{M_*}/$\Msol$)\sim11$; however, for $\log(\rm{M_*}/$\Msol$)\lesssim10.8$ the fraction of radio AGN is slightly higher than expected, although still within the errors. If the excess in the fraction of radio-detected galaxies at lower stellar mass ($\log(\rm{M_*}/$\Msol$)\sim10.8$) were real, it could be explained by physical mechanisms in the cluster that favour AGN activity in red galaxies. \citet{best05} find that richer environments preferentially host radio AGN. For galaxies with stellar masses of $\sim10^{11}$\msol\ they estimate a cooling rate of the order of $\sim1$~\msolyr, which is consistent with a low-power radio AGN. This indicates that the cooling of gas in the hot atmosphere of the host galaxy is a plausible fuel source for the radio AGN in the cluster structure studied here. This raises the possibility that this cluster structure is rich in cooling gas, and/or that the cooling gas is cospatial with the lower-mass galaxies.
In summary, by analysing the properties of radio sources within the structure, we conclude that the clusters and group candidates in the western agglomeration are likely in a pre-merger state, before the first core-to-core encounter. An increase in the surface density of blue galaxies between XLSSC~086 and the XLSSC~082-084 agglomeration could indicate the formation of a shock front. Since the XLSSC~082-084 agglomeration is the most massive one in the area, it is likely that in the future the three most massive clusters will be the first to go radio silent, followed by the smaller satellite structures XLSSC~086 and VTA01-03. \begin{figure} \centering \includegraphics[bb = 0 100 595 742, angle = 90, width=1\linewidth]{StellarMassHist_Best3.png} \caption{The fraction of radio-detected red galaxies (radio AGN) as a function of stellar mass for the region shown in the upper panel of Fig.~\ref{fig:voronoi_colour}. Black stars show the fractions estimated in this paper for galaxies with $L_{1.4~\mathrm{GHz}}>10^{23}~\mathrm{W/Hz}$. The errors on the black points are calculated from the equations given in \citet{gehrels86}. Blue diamonds show the fractions obtained for the NVSS sample by \citet{best05}.} \label{fig:StellarMass} \end{figure} \section{Summary} \label{sec:summary} We have presented observations of a subfield of the XXL-N 25~deg$^{2}$ field obtained with the VLA at 3~GHz (10~cm). The final radio mosaic has a resolution of $3\farcs2~\times~1\farcs9$ and encompasses an area of $41\arcmin\times41\arcmin$ with an rms noise of $\lesssim20$~$\mu$Jy~beam$^{-1}$. The rms in the central $15\arcmin\times15\arcmin$ area is $10.8~\mu$Jy~beam$^{-1}$. From the mosaic we extracted 155 sources with S/N$>6$, 8 of which are large, multicomponent sources. Of the 155 sources, we find optical counterparts for 123 within the CFHTLS W1 catalogue. We also analysed the first supercluster discovered in the XXL project, partially located within the radio mosaic.
Applying a Voronoi tessellation analysis using photometric redshifts from the CFHTLS survey, we have identified \nTott overdense structures at $z_{\mathrm{phot}}=0.35-0.50$, \nXMMt of which are detected in the \textit{XMM-Newton} XXL data. The structure extends over $0^\circ\llap{.}6 \times 0^\circ\llap{.}2$ on the sky, which corresponds to a size of $\sim12\times4$~Mpc$^{2}$ at $z=0.43$. No large radio galaxies are present within the overdense structures, and we associate eight (S/N$>$7) radio sources with potential supercluster members. We find that the spatial distribution of the red and blue potential group member galaxies, selected by their observed $g-r$ colours, suggests that the groups are not virialised but dynamically young, consistent with the hierarchical structure growth expected in a $\Lambda$CDM universe. Further spectroscopic follow-up is required for a more detailed analysis of the dynamical state of the structure.
{We present a general parameter study in which the abundance of interstellar argonium (ArH$^+$) is predicted using a model for the physics and chemistry of diffuse interstellar gas clouds. Results have been obtained as a function of UV radiation field, cosmic-ray ionization rate, and cloud extinction. No single set of cloud parameters provides an acceptable fit to the typical ArH$^+$, OH$^+$ and $\rm H_2O^+$ abundances observed in diffuse clouds within the Galactic disk. Instead, the observed abundances suggest that ArH$^+$ resides primarily in a separate population of small clouds of total visual extinction of at most $0.02$~mag per cloud, within which the column-averaged molecular fraction is in the range $10^{-5} - 10^{-2}$, while OH$^+$ and $\rm H_2O^+$ reside primarily in somewhat larger clouds with a column-averaged molecular fraction $\sim 0.2$. This analysis confirms our previous suggestion (Schilke et al.\ 2014, hereafter S14) that the argonium molecular ion is a unique tracer of almost purely {\it atomic} gas.}
One of the more unexpected results obtained from the Heterodyne Instrument for the Infrared (HIFI) on board the {\it Herschel Space Observatory} has been the discovery of interstellar argonium ions (ArH$^+$), the first known astrophysical molecule to contain a noble gas atom. Detected early in the {\it Herschel} mission as a ubiquitous interstellar absorption feature of unknown origin at 617.5~GHz (M\"uller et al.\ 2013), the $J=1-0$ transition of ArH$^+$ was finally identified by Barlow et al.\ (2013) following its detection, in emission, from the Crab nebula, along with a second transition ($J=2-1$) at twice the frequency. While the {\it Herschel} detections of argonium were all obtained within the Galaxy, an extragalactic detection has recently been reported (M\"uller et al.\ 2015) within the $z=0.89$ galaxy along the sight-line to PKS 1830-211. As discussed by Schilke et al.\ (2014; hereafter S14), several ``accidents'' of chemistry conspire to permit the presence of interstellar argonium at observable abundances: ArH$^+$ has an unusually slow rate of dissociative recombination with electrons (Mitchell et al.\ 2005) and an unusually small photodissociation cross-section for radiation longward of the Lyman limit (Alekseyev et al.\ 2007). Moreover, while other abundant noble gas cations (He$^+$ and Ne$^+$) undergo an exothermic dissociative ionization reaction when they react with H$_2$, e.g. $$\rm Ne^+ + H_2 \rightarrow Ne + H + H^+, \eqno(R1)$$ the analogous reaction is endothermic in the case of Ar$^+$, and thus the alternative argonium-producing abstraction reaction is favored: $$\rm Ar^+ + H_2 \rightarrow ArH^+ + H.
\eqno(R2)$$ These unusual features of the relevant chemistry also suggested that argonium ions can serve as a unique tracer of gas in which the H$_2$ fraction is extremely small; indeed, the theoretical predictions of our diffuse cloud model (S14) indicated that the argonium abundance peaks very close to cloud surfaces, where the molecular hydrogen fraction, $f({\rm H}_2)=2n({\rm H}_2)/n_{\rm H}$, is only $10^{-4}$ to $10^{-3}$. In this paper, we present a general parameter study of argonium in diffuse clouds that extends the results we presented previously in S14. In the present study, we computed the argonium column densities expected as a function of several relevant parameters: the gas density, the UV radiation field, the cosmic-ray ionization rate (CRIR), and the visual extinction across the cloud. Our model is described in \S 2, and the results are presented in \S 3, together with analogous results for OH$^+$ and H$_2$O$^+$. In \S 4, we discuss the results and describe how observations of all three molecular ions can be used to constrain the molecular fraction and cosmic-ray ionization rate in the diffuse interstellar medium.
The results shown in Figures 4 and 5 indicate that a single population of clouds cannot account for the typical $N({\rm ArH}^+)/N({\rm H})$, $N({\rm OH}^+)/N({\rm H})$ and $N({\rm H_2O}^+)/N({\rm H})$ abundance ratios observed in the diffuse ISM. In particular, models with cloud parameters that can account for the OH$^+$ and H$_2$O$^+$ abundances underpredict the ArH$^+$ abundance by more than an order of magnitude. For example, when $\chi_{\rm UV}/n_{50}=1$, the typical $N({\rm OH}^+)/N({\rm H})$ and $N({\rm H_2O}^+)/N({\rm H})$ ratios suggest a population of diffuse clouds of visual extinction $A_{\rm V}({\rm tot})\ \sim 0.15$~mag across each cloud, with a CRIR $\zeta_p({\rm H})/n_{50} \sim 3 \times 10^{-16}\rm s^{-1},$ within which the column-averaged molecular fraction $f^{N}({\rm H}_2) \sim 0.15$. For the same total visual extinction, the typically observed ArH$^+$ abundance would require a CRIR an order of magnitude larger, $\zeta_p({\rm H})/n_{50} \sim 3 \times 10^{-15}\rm s^{-1},$ while for the same CRIR required by the OH$^+$ and H$_2$O$^+$ abundances, the visual extinction would need to be $\simlt 0.005$~mag per cloud. These considerations suggest two possible explanations of the OH$^+$, H$_2$O$^+$ and ArH$^+$ abundances observed in the diffuse ISM. First, the CRIR could be a strongly decreasing function of the visual extinction (i.e., the shielding column density), such that the regions very close to cloud surfaces, where ArH$^+$ is most abundant, are subject to a much larger cosmic-ray flux than the somewhat deeper regions where OH$^+$ and H$_2$O$^+$ are most abundant. In this scenario, which is not addressed by our standard diffuse cloud models (in which the cosmic-ray flux is assumed to be constant throughout the cloud), enough ArH$^+$ might be produced in the surface layers of the same clouds responsible for the OH$^+$ and H$_2$O$^+$ absorption.
Second, the population of absorbing clouds within the Galactic disk might possess a distribution of sizes, with the ArH$^+$ absorption occurring primarily in the smaller-extinction clouds and the OH$^+$ and H$_2$O$^+$ absorption arising primarily in clouds of larger extinction. This second possibility may be favored by the observational fact (e.g. S14, their Figure 6; Neufeld et al.\ 2015, their Figure 11) that ArH$^+$ has a distribution (in line-of-sight velocity) that is different from that of any other molecular ion, whereas OH$^+$ and H$_2$O$^+$ show very similar absorption spectra. We have investigated the second scenario described above with a highly-idealized toy model in which the OH$^+$, H$_2$O$^+$ and ArH$^+$ absorption arises in a collection of clouds of two types: smaller clouds of visual extinction $A_{\rm V}{\rm (tot)_S}$, and larger clouds of visual extinction $A_{\rm V}{\rm (tot)_L}$. These cloud types are assumed to account for fractions $f_{\rm S}$ and $f_{\rm L} = 1 -f_{\rm S}$, respectively, of the HI mass, and both cloud types are assumed to be irradiated by the same cosmic-ray flux and UV radiation field. Even if $\chi_{\rm UV}/n_{50}$ is specified, this model has four independent adjustable parameters -- $A_{\rm V}{\rm (tot)_S}$, $A_{\rm V}{\rm (tot)_L}$, $f_{\rm S}$, and $\zeta_p({\rm H})/n_{50}$ -- and is subject to three observational constraints: $N({\rm ArH}^+)/N({\rm H})$, $N({\rm OH}^+)/N({\rm H})$, and $N({\rm H_2O}^+)/N({\rm H})$. Accordingly, there is no unique solution for a given $\chi_{\rm UV}/n_{50}$, but instead a set of solutions. In Figure 4, ten pairs of vertically-separated circles indicate ten possible solutions; one member of each pair represents a larger cloud type with log$_{10}\,A_{\rm V}{\rm (tot)} \ge -0.7$, and one member represents a smaller cloud type with log$_{10}\,A_{\rm V}{\rm (tot)} \le -1.7$. The radius of each circle is proportional to the required mass fraction within each cloud type.
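The essence of this toy model is a mass-weighted linear mixture of two cloud populations, inverted against the observed abundance ratios. A schematic sketch with purely illustrative per-cloud-type abundances (placeholder numbers, not the model grids of this paper):

```python
import numpy as np

# Hypothetical per-cloud-type abundances X = N(ion)/N(H) for the "small"
# and "large" populations (illustrative values only).
X_SMALL = {"ArH+": 3.0e-9,  "OH+": 2.0e-9,  "H2O+": 3.0e-10}
X_LARGE = {"ArH+": 1.0e-10, "OH+": 1.2e-8,  "H2O+": 4.0e-9}

def mixed_abundances(f_small):
    """HI-mass-weighted two-component mixture of sight-line abundances."""
    return {ion: f_small * X_SMALL[ion] + (1.0 - f_small) * X_LARGE[ion]
            for ion in X_SMALL}

def fit_mass_fraction(observed):
    """Grid least-squares estimate of f_S from the observed log abundances."""
    grid = np.linspace(0.0, 1.0, 1001)

    def chi2(f):
        model = mixed_abundances(f)
        return sum((np.log10(model[ion]) - np.log10(observed[ion])) ** 2
                   for ion in observed)

    return min(grid, key=chi2)
```

Because ArH$^+$ is concentrated in the small clouds while OH$^+$ and H$_2$O$^+$ trace the large ones, the three ratios together pin down the mass fraction $f_{\rm S}$.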
By assumption, both cloud types are exposed to the same CRIR, and thus each circle within a given pair has the same horizontal position. An alternative presentation of the same information appears in Figure 6, where complete results are shown for multiple values of $\chi_{\rm UV}/n_{50}.$ Here, the following quantities are shown as a function of the assumed visual extinction across an individual smaller cloud: the required mass fraction of HI in the smaller clouds (top left), the column-averaged molecular fraction in the smaller (diamonds) and larger (asterisks) clouds (top right), the visual extinction required across an individual larger cloud (bottom left), and the required CRIR (bottom right). Different colors represent different adopted values of $\chi_{\rm UV}/n_{50}.$ If we now focus on canonical values for the UV ISRF and gas density in the cold neutral medium, $\chi_{\rm UV} = n_{50}=1$ (black curve), we find that the typical abundances observed for OH$^+$, H$_2$O$^+$ and ArH$^+$ can be explained by a combination of two cloud types: (1) smaller diffuse clouds, accounting for $40 - 75\%$ of the gas mass and having a visual extinction $\le 0.02$~mag and a column-averaged molecular fraction in the range $3 \times 10^{-5}$ to $10^{-2}$; and (2) larger diffuse clouds, accounting for $25 - 60\%$ of the gas mass and having a visual extinction $\ge 0.2$~mag and a column-averaged molecular fraction $\sim 0.2$. In the case of the smaller diffuse clouds, acceptable fits can be obtained for the smallest values of $A_{\rm V}{\rm (tot)}$ that we considered (and thus the smallest values of $f^{N}({\rm H}_2)$ predicted in any of our models). The observed ArH$^+$ abundances are therefore entirely consistent with UV observations of H and H$_2$ in diffuse clouds; such observations indicate that most clouds have $f^{N}({\rm H}_2)$ either greater than $10^{-2}$ or smaller than $10^{-4}$, and that relatively few clouds have $f^{N}({\rm H}_2)$ in the intermediate range ($10^{-2} - 10^{-4}$), where H$_2$ self-shielding results in a very strong dependence of $f^{N}({\rm H}_2)$ upon $A_{\rm V}{\rm (tot)}$ (e.g. Liszt 2015, and references therein). The highly-idealized two-component model presented here requires a primary CRIR in the range $\zeta_p({\rm H}) = 4 - 8 \times 10^{-16} \, \rm s^{-1}$ for $\chi_{\rm UV} = n_{50}=1$. (For the enhanced-metallicity models, the $A_{\rm V}{\rm (tot)}$ values required for the larger clouds are somewhat smaller, but the required ionization rates are very similar.) This range of primary CRIR lies slightly above the value $\sim 3 \times 10^{-16} \, \rm s^{-1}$ needed to account for $N({\rm OH}^+)/N({\rm H})$ and $N({\rm H_2O}^+)/N({\rm H})$ with a single-component model, because in the two-component model only $25 - 60\%$ of the HI is in the larger clouds that contribute most of the OH$^+$ and $\rm H_2O^+$. The CRIR required for the two-component model is also a factor of $2.5-5$ larger than the average value inferred by Indriolo \& McCall (2012) from observations of H$_3^+$ in clouds of somewhat larger size than those considered here. This discrepancy may reflect the idealized nature of our two-component model or uncertainties in the chemistry; alternatively, it may suggest that shielding does modulate the cosmic-ray flux in the regions where H$_3^+$ is present. The question of the CRIR and its depth-dependence will be investigated further in a future paper.
Stars much heavier than the Sun ($M_{\rm initial}>8\,M_\odot$) are extremely luminous and drive strong stellar winds, blowing a large part of their matter into the galactic environment before they finally explode as supernovae. Through this strong feedback, massive stars ionize, enrich, and heat the interstellar medium, regulate star formation, and affect the further development of the clusters in which they were born. Empirical diagnostics of massive star spectra provide quantitative information about stellar and wind parameters, such as the terminal wind velocity, $v_\infty$, and the mass-loss rate, $\dot{M}$. With the development of space-based telescopes, the classical analyses of optical and radio radiation were extended to the ultraviolet (UV) and the X-ray range. The optical range is the easiest to access; however, in the vast majority of OB stars the optical spectra are dominated by photospheric lines. Only the most luminous stars with the strongest stellar winds, such as Wolf-Rayet (WR) stars, show many wind lines in emission in the optical. OB stars commonly display wind signatures in their UV spectra, but UV observations are scarce. For main-sequence B stars the wind signatures are marginal and difficult to disentangle from the photospheric spectrum even in the UV. In X-rays, on the other hand, we can observe emission lines from the winds of nearly all types of massive stars, including B-type dwarfs and other stars with weak winds. Thus, X-rays provide an excellent, sometimes unique, wind diagnostic. The history of massive star X-ray astronomy begins with UV observations. In the 1970s the {\em Copernicus} observatory made the important discovery of strong lines of highly ionized species such as O\,{\sc vi}, N\,{\sc v}, and C\,{\sc iv} in the UV spectra of massive stars as cool as spectral type B1. It became immediately clear that the stellar effective temperatures are not sufficiently high to power such high degrees of ionization.
For example, \citet{wrh1981} showed that the lines of O\,{\sc vi} and N\,{\sc v} observed in the UV spectrum of the B-type star $\tau$\,Sco cannot be reproduced by standard wind models. While different theories were put forward to explain the presence of high ions in stellar spectra, it was the work of \citet{co1979} that suggested Auger ionization by X-rays as the explanation. The first X-ray observations by the {\em Einstein} observatory indeed detected X-rays from OB stars, and hence proved the importance of the Auger process for stellar winds \citep{Seward1979,Harnden1979}. One of the instruments on board the {\em Einstein} observatory was the Solid State Spectrometer (SSS), which was used to observe O-type stars. Already in these early days of X-ray astronomy, \citet{StewartFabian1981} employed {\em Einstein} spectra to study the transfer of X-rays through a uniform stellar wind as a means to determine stellar mass-loss rates. They applied a photoionization code to calculate the wind opacity. Using \mdot\ as a model parameter, they found from matching the model to the observed X-ray spectrum of \zpup\ (O4I) that the X-ray-based mass-loss rate is lower by a factor of a few than that obtained from fitting the H$\alpha$ emission line and the radio and IR excess. As the most plausible explanation for this discrepancy they suggested that the mass-loss rate derived from H$\alpha$ is overestimated because of wind clumping. In retrospect, this was a deep insight, confirmed by follow-up studies only in the 21st century. These first low-resolution X-ray spectra of OB stars also showed emission signatures of such high ions as S\,{\sc xv} and Si\,{\sc xiv}, and thus revealed the presence of plasma with temperatures in excess of a few MK. From this moment on, a quest to explain X-rays from stellar winds began. Among the first proposed explanations was a picture where the stellar wind has a very hot base zone in which the plasma is confined by a magnetic field.
X-rays are produced in this base corona and ionize the overlying cool stellar wind. Models predict that X-rays originating from this inner part of the wind should become strongly absorbed in the overlying outer wind. Therefore, when observations showed only little absorption of X-rays, the base corona model was seriously questioned \citep{Long1980,Cas1981,cas1983}. A way to resolve this ``too little absorption'' problem was found only much later (see discussion in Section\,\ref{sec:xprof}). Besides the magnetically confined hot corona scenario, other models were put forward to explain X-ray emission from single massive stars. Detailed (albeit 1-D) radiative hydrodynamic models predict the development of strong shocks \citep{Owocki1988, felda1997, Run2002} as a result of the line driving instability (LDI) intrinsic to radiatively driven stellar winds \citep{LucyW1980, Lucy1982}. A fraction of the otherwise cool ($T_{\rm cool}\sim 10$\,kK) wind material is heated in these shocks to a few MK, and cools radiatively via X-ray emission. In the hydrodynamic model of \citet{feld1997}, the X-rays are generated when a fast parcel of gas rams into a slower-moving dense shell, both structures resulting from the LDI. This model was successful in quantitatively explaining the observed low-resolution {\em Rosat} spectrum of an O supergiant, and is often invoked as the standard scenario for the origin of X-rays in stellar winds. Another family of models suggests that radiatively driven blobs of matter plough through an ambient gas that is less radiatively accelerated \citep{Lucy2012,Guo2010}. These blobs might be the result of an instability \citep{LucyW1980}, or be seeded at or below the stellar photosphere \citep{Waldron2009,Cant2011}. When these blobs propagate, forward shocks are formed, where gas is heated, giving rise to X-ray emission.
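The few-MK temperatures produced by such wind shocks follow directly from the strong-shock (Rankine-Hugoniot) jump condition, $kT_{\rm post}=\frac{3}{16}\mu m_{\rm H} v_{\rm sh}^2$. A short numerical sketch; the mean molecular weight and the sampled shock velocities below are illustrative assumptions, not values from any of the cited models:

```python
# Post-shock temperature for a strong adiabatic shock:
#   k_B * T = (3/16) * mu * m_H * v_sh^2
M_H = 1.673e-24   # g, hydrogen mass
K_B = 1.381e-16   # erg/K, Boltzmann constant
MU = 0.62         # assumed mean molecular weight of the ionized wind material

def t_postshock(v_sh_kms):
    """Post-shock temperature (K) for a shock velocity given in km/s."""
    v = v_sh_kms * 1e5  # km/s -> cm/s
    return (3.0 / 16.0) * MU * M_H * v**2 / K_B

for v in (300, 500, 1000):  # km/s, assumed-typical LDI shock velocities
    print(f"v_sh = {v:4d} km/s  ->  T = {t_postshock(v)/1e6:.1f} MK")
```

Velocity jumps of a few hundred km/s, a modest fraction of $v_\infty$, already heat the gas to a few MK, which is why the LDI scenario naturally reproduces the observed plasma temperatures.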
\citet{Cas2008} and \citet{Ignace2012} calculated the temperature and density in such bow shocks to interpret some major X-ray properties: the power-law distribution of the observed emission measure derived from many hot star X-ray spectra, and the wide range of ionization stages that appear to be present throughout the winds. One can also envisage a ``hybrid'' scenario to explain the X-ray emission from massive stars by a combination of magnetic mechanisms on the surface with shocks in the stellar wind \citep[e.g.][]{cas1983,Waldron2009}. These and alternative models for the X-ray production in stellar winds are now rigorously tested by modern observations. The 21st century's X-ray telescopes \cxo\ and \xmm\ made high-resolution X-ray spectroscopy possible \citep{Brin2000,rgs2001,Can2005}. On board \cxo\ is the High Energy Transmission Grating Spectrometer (HETGS/MEG) with a spectral resolution of $0.024$\,\AA\ in first order. The Reflection Grating Spectrometers (RGSs) of \xmm\ have a more modest spectral resolution of $0.05$\,\AA, but higher sensitivity. Figure\,\ref{fig:zxmm} shows examples of high-resolution X-ray spectra. Both telescopes also allow low-resolution X-ray spectroscopy, imaging, and timing analysis. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{EpsCMaZetaOriXMM.ps} \caption{A sample of high-resolution X-ray spectra obtained with the RGS spectrograph on board \xmm. The vertical axis is the X-ray count rate in arbitrary units. Strong lines are identified at the top. \label{fig:zxmm}} \end{figure} In this review I briefly address the diagnostic potential of X-rays for our understanding of winds from single stars. General X-ray properties of massive stars are introduced in Section\,\ref{sec:lxlbol}, and the location of X-ray plasma in their winds is discussed in Section\,\ref{sec:fir}.
The classical UV diagnostics of stellar winds and the influence of X-rays on these diagnostics are considered in Section\,\ref{sec:drive}. The role of X-rays in resolving the so-called ``weak wind problem'' is discussed in Section\,\ref{sec:bw}. The X-ray properties of B-stars, including pulsating $\beta$ Cep-type variables and magnetic stars, are briefly considered in Section\,\ref{sec:b}. Section\,\ref{sec:xvar} deals with using X-ray variability to study the structure of stellar winds. Section\,\ref{sec:obi} introduces modeling approaches for the description of X-ray line spectra. The diagnostic power of X-ray emission lines is discussed in Section\,\ref{sec:xprof}. Modern approaches combining X-ray spectroscopy with a multiwavelength analysis are presented in Section \ref{sec:nlte}. X-ray diagnostics of massive stars in the latest stages of their evolution, RSG, WR, and LBV stars, are considered in Section\,\ref{sec:wr}. A summary concludes this review (Section\,\ref{sec:sum}).
\label{sec:sum} Observations in the X-ray band of the electromagnetic spectrum provide an important diagnostic tool for stellar winds. In early B-type and late O-type dwarfs, the bulk of the stellar wind may be in a hot phase, making observations at X-ray wavelengths the primary tool to study these most numerous massive stars. In OB giants and supergiants, the wind is mainly in the cool phase and quite opaque to X-rays. Since the stellar winds are clumped, X-rays emitted from embedded sources can escape and reach the observer. The high-resolution X-ray spectra of OB supergiants allow detailed studies of X-ray transfer in their winds. A brief parameter study shows that fitting X-ray line profiles by means of a simple model does not provide a reliable tool to measure stellar mass-loss rates. A multiwavelength spectroscopic analysis, in which the X-ray data are analyzed consistently with optical and UV spectra, is required to derive realistic stellar and wind parameters. X-ray variability emerges as a new powerful tool to understand stellar wind structures. Pronounced stochastic X-ray variability has not been detected from stellar winds so far. However, all stars that were observed with sufficiently long exposures show X-ray variability on time scales of days, similar to the variability in H$\alpha$ and UV lines. A likely mechanism for this variability is corotating interaction regions. Massive stars at the final stages of their evolution have diverse X-ray properties. The RSG stars appear to be X-ray dark. Single LBV stars and late sub-type WC stars are not observable as X-ray sources, in agreement with theoretical expectations. Single WN stars emit X-rays, albeit the responsible mechanism is not yet fully understood. The hottest among all massive stars, the early sub-type WC and WO stars, are found to be X-ray sources, likely indicating that in these extreme objects the attenuation of X-rays is sufficiently low.
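The statement that cool OB supergiant winds are quite opaque to X-rays can be sketched with the radial optical depth of a smooth, constant-velocity wind, $\tau(r)=\kappa\dot{M}/(4\pi v_\infty r)$. A minimal Python sketch with illustrative round numbers for a $\zeta$\,Pup-like supergiant; the opacity, mass-loss rate, terminal velocity, and stellar radius below are assumptions for this example, not fitted values:

```python
# Radial optical depth of a smooth, constant-velocity wind:
#   tau(r) = kappa * Mdot / (4 * pi * v_inf * r)
# Illustrative parameters only (roughly zeta Pup-like); not a fit.
import math

MSUN = 1.989e33        # g
RSUN = 6.957e10        # cm
YEAR = 3.156e7         # s

kappa = 30.0                 # cm^2/g, assumed cool-wind opacity near 1 keV
mdot = 3e-6 * MSUN / YEAR    # g/s, assumed smooth-wind mass-loss rate
v_inf = 2250e5               # cm/s, assumed terminal wind velocity
r_star = 19 * RSUN           # cm, assumed stellar radius

def tau(r):
    """Optical depth from radius r outward along the radial ray."""
    return kappa * mdot / (4.0 * math.pi * v_inf * r)

print(f"tau(R*) = {tau(r_star):.2f}")   # order unity for these inputs
```

With these inputs $\tau(R_*)$ comes out of order unity, so X-rays formed deep in the wind are noticeably attenuated and any inferred $\dot{M}$ scales with the adopted opacity, illustrating why line-profile fits alone do not pin down the mass-loss rate.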
\bigskip The author is indebted to both anonymous referees who provided important and detailed comments leading to a significant improvement of the manuscript. Support from DLR grant 50 OR 1302 is acknowledged.
1607.08158_arXiv.txt
{Gaseous and dust debris disks around white dwarfs (WDs) are formed from tidally disrupted planetary bodies. This offers an opportunity to determine the composition of exoplanetary material by measuring element abundances in the accreting WD's atmosphere. A more direct way to do this is through spectral analysis of the disks themselves.} {Currently, the number of chemical elements detected through disk emission-lines is smaller than that of species detected through lines in the WD atmospheres. We assess the far-ultraviolet (FUV) spectrum of one well-studied object (\sdsslong) to search for disk signatures at wavelengths $<1050$\,\AA, where the broad absorption lines of the Lyman series effectively block the WD photospheric flux. In addition, we investigate the \ion{Ca}{ii} infrared triplet (IRT) line profiles to constrain disk geometry and composition.} {We performed FUV observations (950--1240\,\AA) with the \emph{Hubble} Space Telescope/Cosmic Origins Spectrograph and used archival optical spectra. We compared them with \mbox{non-local} thermodynamic equilibrium model spectra.} {No disk emission-lines were detected in the FUV spectrum, indicating that the disk effective temperature is $\Teff\approx 5000$\,K. The long-time variability of the \ion{Ca}{ii} IRT was reproduced with a precessing disk model of bulk Earth-like composition, having a surface mass density of $0.3$\,g\,cm$^{-2}$ and an extension from 55 to 90 WD radii. The disk has a spiral shape that precesses with a period of approximately 37 years, confirming previous results.} {}
White dwarfs (WDs) that cool down to effective temperatures $\Teff\approx 25\,000\,$K should have either pure H or pure He atmospheres, as a result of their high surface gravity. Heavy elements sink out of the atmosphere toward the stellar interior. However, a significant fraction (20--30\%) displays photospheric absorption lines from metals \citep[e.g.,][]{Koester:2005}. These polluted WDs must actively accrete matter at a rate of the order of $10^8\,$g\,s$^{-1}$ to sustain the atmospheric metal content, because diffusion time scales are orders of magnitude shorter than the WD cooling age \citep[e.g.,][]{Koester:2009,Koester:2014}. After the discovery of warm dust disks \citep[e.g.,][and references therein]{Farihi:2012} and gaseous disks \citep{Gaensicke:2006,Wilson:2014} around many polluted WDs, it is now commonly accepted that accretion occurs from debris disks that are located within the WD tidal volume. The disks contain material from tidally disrupted exoplanetary bodies that were scattered towards the central star as a consequence of a dynamical resettling of a planetary system in the post-main sequence phase \citep{Debes:2002,Jura:2003}. Therefore, the metal abundance pattern in polluted WD atmospheres allows conclusions about the composition of the accreted matter. This opened up the exciting possibility to study the composition of extrasolar planetary material. Generally, the abundances are similar to those found in solar system objects \citep{Jura:2014}. Inferring the composition of the accreted material from the WD photospheric abundance pattern is, however, not straightforward. The results rely on the knowledge of metal diffusion rates in WD atmospheres and envelopes \citep{Koester:2009}. Other uncertainties enter as well, for example the depth of the surface-convection zone. It is therefore desirable to seek alternative methods to determine the chemical composition of the accreted material.
Ideally, it is derived by direct observation of the accreting matter. One possibility of doing this is offered by line spectroscopy of gaseous disks. Their hallmark is the double-peaked emission lines of the \ion{Ca}{ii} infrared triplet \citep[IRT; $\lambda\lambda$\,8498, 8542, 8662\,\AA,][]{Gaensicke:2006}. Detailed investigations have shown that the gaseous and dusty disks are roughly spatially coincident in their radial distance from the WD \citep{Brinkworth:2009,Farihi:2010,Melis:2010}; a result that is consistent with the scenario in which the disk material has its origin in disrupted planetary bodies. The \ion{Ca}{ii} IRT line profiles were scrutinized to derive disk geometries and secular evolution. The most detailed investigation in this respect is the Doppler imaging of \sdsslong\ (henceforth \sdss) based on spectra taken over twelve years \citep{Manser:2016}. The spectral analysis of the gas disks with \mbox{non-local} thermodynamic equilibrium (\mbox{non-LTE}) radiation transfer models aims to probe the physical structure and chemical composition of the disks. It has been shown that the disks have effective temperatures of the order of $\Teff=6000$\,K and that they are strongly hydrogen-deficient \citep[H $< 0.01$ mass fraction,][]{Hartmann:2011}. Currently, the prospects of studying the gas disk composition are hampered by the fact that only a few species have actually been identified. In the best-studied case, \sdss, disk emission from four elements (O, Mg, Ca, Fe) was detected, whereas a total of eight elements were detected in the WD spectrum \citep{Manser:2016}. Ultraviolet (UV) spectroscopy was expected to reveal more species, but was unsuccessful \citep{Gaensicke:2012}. The general problem is that the flux of the relatively hot WDs increases towards the UV.
Therefore, we performed observations of \sdss\ in the \mbox{far-UV} (FUV) range (i.e., 950--1230\,\AA), where the WD flux is strongly depressed by broad photospheric hydrogen Lyman lines, striving to discover gas disk signatures in this wavelength region. In the following (Sect.\,\ref{sec:obj}) we summarize the current knowledge of \sdss. Section\,\ref{sec:obs} describes the observations used in this work. In Sect.\,\ref{sec:methods} we introduce the spectral modeling of the WD and the gas disk. In Sect.\,\ref{sec:results} we compare the FUV observations with our models. We also use the disk model spectra to fit the \ion{Ca}{ii} IRT in observations performed in 2003 and 2014. We summarize our results in Sect.\,\ref{sec:summary}.
\label{sec:summary} Our search for disk emission-lines from \sdss\ in the FUV wavelength range was unsuccessful. This suggests that the disk is cooler than previously assumed, namely $\Teff\approx 5000$\,K and not $\approx 6000$\,K. From the analysis of their UV spectra (performed at longer wavelengths than our observation), \citet{Gaensicke:2012} conclude that the material accreted by the WD resembles a bulk Earth-like mixture. This is confirmed by our models; otherwise, for a chondritic mixture, carbon emission lines should be visible in the vicinity of the \ion{Ca}{ii} IRT. The same holds for sulfur: the \mbox{non-detection} of S lines suggests an underabundance compared to the bulk Earth value, so that the abundance of this volatile element more closely resembles that of the Earth's mantle \citep{Allegre:1995}. The element is not detected in the WD photosphere, but this does not significantly constrain the sulfur abundance in the accreted material \citep{Gaensicke:2012}. Applying our models with a spiral shape to explain the time evolution of the \ion{Ca}{ii} IRT between 2003 and 2014 suggests a precession of the spiral pattern with a period of $\approx 37$\,a, consistent with the result of \citet{Manser:2016}. Our models also confirm their suggestion that the observed different red/blue asymmetries of the disk line profiles can be explained by a \mbox{non-axisymmetric} temperature distribution.
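For a Keplerian gas disk, the double-peaked \ion{Ca}{ii} IRT profiles directly encode the orbital velocity $v_{\rm K}=\sqrt{GM_{\rm WD}/r}$ at the emitting radii. The following sketch evaluates this at the derived disk boundaries of 55 and 90 WD radii; the WD mass and radius used are round illustrative numbers, not the values fitted for \sdss:

```python
# Keplerian velocity and corresponding Ca II 8542 A Doppler shift
# at the inner and outer edges of the modeled gas disk (55-90 WD radii).
import math

G = 6.674e-8          # cgs gravitational constant
MSUN = 1.989e33       # g
C_KMS = 2.998e5       # km/s

m_wd = 0.7 * MSUN     # assumed WD mass
r_wd = 9e8            # cm, assumed WD radius (~0.013 R_sun)
lam0 = 8542.0         # A, central Ca II IRT component

for x in (55.0, 90.0):                           # disk edges in WD radii
    v = math.sqrt(G * m_wd / (x * r_wd)) / 1e5   # km/s
    dlam = lam0 * v / C_KMS                      # A, maximum Doppler shift
    print(f"r = {x:4.0f} R_WD: v_K = {v:4.0f} km/s, shift up to {dlam:4.1f} A")
```

Velocities of a few hundred km/s result; the line-of-sight component is further reduced by $\sin i$, so the observed peak separation also constrains the disk inclination.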
1607.06832_arXiv.txt
We discuss a minimal solution to the long-standing $(g-2)_\mu$ anomaly in a simple extension of the Standard Model with an extra $Z'$ vector boson that has only flavor off-diagonal couplings to the second and third generation of leptons, i.e. $\mu, \tau, \nu_\mu, \nu_\tau$ and their antiparticles. A simplified model realization, as well as various collider and low-energy constraints on this model, are discussed. We find that the $(g-2)_\mu$-favored region for a $Z'$ lighter than the tau lepton is totally excluded, while a heavier $Z'$ solution is still allowed. Some testable implications of this scenario in future experiments, such as lepton-flavor universality-violating tau decays at Belle 2, and a new four-lepton signature involving same-sign di-muons and di-taus at HL-LHC and FCC-ee, are pointed out. A characteristic resonant absorption feature in the high-energy neutrino spectrum might also be observed by neutrino telescopes like IceCube and KM3NeT.
\label{sec:intro} The anomalous magnetic moment of the muon $a_\mu \equiv (g-2)_\mu/2$ is among the most precisely known quantities in the Standard Model (SM), and therefore, provides us with a sensitive probe of new physics beyond the SM (BSM)~\cite{Czarnecki:2001pv, Jegerlehner:2009ry}. There is a long-standing $3.6\, \sigma$ discrepancy between the SM prediction~\cite{Hagiwara:2011af, Aoyama:2012wk, Kurz:2016bau} and the measured value of $a_\mu$~\cite{Agashe:2014kda}: \begin{equation} \label{eq:gm2} \Delta a_\mu \ \equiv \ a_\mu^\text{exp} - a_\mu^\text{SM} \ \simeq \ (288 \pm 80) \times 10^{-11}\, . \end{equation} The uncertainties in the experimental measurement, which come from the E821 experiment at BNL~\cite{Bennett:2006fi}, can be reduced by about a factor of four in the upcoming Muon $g-2$ experiment at Fermilab~\cite{Grange:2015fou}. If comparable progress can be made in reducing the uncertainties of the SM prediction~\cite{Blum:2013xva, Blum:2015gfa, Blum:2015you, Chakraborty:2016mwy}, we will have a definite answer to the question whether or not $\Delta a_\mu$ is evidence for BSM physics. Thus from a theoretical point of view, it is worthwhile investigating simple BSM scenarios which can account for the $(g-2)_\mu$ anomaly, should this endure, and at the same time, have complementary tests in other ongoing and near future experiments. With this motivation, we discuss here a simple $Z^\prime$ interpretation of the $(g-2)_\mu$ anomaly. 
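The $3.6\,\sigma$ in Eq.~\eqref{eq:gm2} follows from the quadrature sum of the experimental and theoretical uncertainties. A short sketch; the split of the total $\pm 80\times 10^{-11}$ error adopted below, and the assumption of an unchanged central value for the projection, are illustrative only:

```python
# Significance of the (g-2)_mu discrepancy, Delta a_mu = (288 +/- 80) x 10^-11.
import math

delta = 288.0      # central value, in units of 1e-11
sig_exp = 63.0     # assumed experimental part of the error (1e-11)
sig_th = 49.0      # assumed theory part of the error (1e-11)

sig_tot = math.hypot(sig_exp, sig_th)   # quadrature sum, ~80
print(f"current:   {delta / sig_tot:.1f} sigma")   # ~3.6 sigma

# If the experimental error drops by a factor ~4 (the Fermilab goal),
# with the theory error and central value assumed unchanged:
sig_future = math.hypot(sig_exp / 4.0, sig_th)
print(f"projected: {delta / sig_future:.1f} sigma")
```

The projection shows why progress on the SM prediction matters: once the experimental error shrinks, the theory uncertainty dominates the quadrature sum.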
A sufficiently muonphilic $Z^\prime$ can address the $(g-2)_\mu$ discrepancy~\cite{Foot:1994vd, Gninenko:2001hx, Murakami:2001cs, Baek:2001kca, Ma:2001md, Pospelov:2008zw, Heeck:2011wj, Davoudiasl:2012ig, Carone:2013uh, Harigaya:2013twa, Altmannshofer:2014cfa, Tomar:2014rya, Altmannshofer:2014pba, Lee:2014tba, Allanach:2015gkd, Heeck:2016xkh, Patra:2016shz}; however, in order to avoid stringent bounds from the charged lepton sector, while being consistent with a sizable contribution to $(g-2)_\mu$, the $Z^\prime$ coupling must violate lepton universality.\footnote{There are other experimental hints of lepton flavor violation or the breakdown of lepton flavor universality in processes involving muons and taus, e.g. in $B^+\to K^+\ell^+ \ell^-$ decays at the LHCb~\cite{Aaij:2014ora}, in $B \to D^{(*)} \tau \nu$ decays at BaBar~\cite{Lees:2012xj}, Belle~\cite{Huschle:2015rga, Abdesselam:2016cgx} and LHCb~\cite{Aaij:2015yra}, and in the $h\to \mu\tau$ decay at both CMS~\cite{Khachatryan:2015kon} and ATLAS~\cite{ Aad:2015gha} (which however seems to have disappeared in the early run-II LHC data~\cite{CMS:2016qvi, Aad:2016blu}). See e.g. Refs.~\cite{Boucenna:2016wpr, Buttazzo:2016kid, Altmannshofer:2016oaq, Nandi:2016wlp, Bauer:2015knc, Das:2016vkr, Wang:2016rvz, Tobe:2016qhz} for the most recent attempts to explain some of these anomalies. In this work we concentrate on $(g-2)_\mu$ and only comment on $h\to \mu\tau$.} For instance, a sizable $Z^\prime$ coupling to electrons is strongly constrained over a large range of $Z^\prime$ masses from $e^+e^- \to e^+e^-$ measurements at LEP~\cite{Schael:2013ita}, electroweak precision tests~\cite{Hook:2010tw, Curtin:2014cca}, $e^+e^-\to \gamma \ell^+\ell^-$ (with $\ell=e,\mu$) at BaBar~\cite{Lees:2014xha}, $\pi^0\to \gamma \ell^+\ell^-$ at NA48/2~\cite{Batley:2015lha}, the $g-2$ of the electron~\cite{Pospelov:2008zw}, and neutrino-neutrino scattering in supernova cores~\cite{Manohar:1987ec, Dicus:1988jh}. 
Similarly, a sizable flavor-diagonal $Z'$ coupling to muons is strongly constrained from neutrino trident production $\nu_\mu N\to \nu_\mu N \mu^+\mu^-$~\cite{Altmannshofer:2014pba} using the CCFR data~\cite{Mishra:1991bv}. In addition, charged lepton flavor-violating (LFV) processes, such as $\mu\to e\gamma$, $\mu\to 3e$, $\tau\to \mu\gamma$, $\tau\to 3e$, $\tau\to ee\mu$, $\tau\to e\mu\mu$, constrain all the lepton-flavor-diagonal couplings of the $Z^\prime$, as well as the flavor off-diagonal couplings to electrons and muons~\cite{Farzan:2015hkd, Cheung:2016exp, Yue:2016mqm, Kim:2016bdu}. There also exist stringent LHC constraints from di-lepton resonance searches: $pp\to Z' \to ee,\mu\mu$~\cite{CMS:2015nhc, Aaboud:2016cth}, $\tau\tau$~\cite{CMS:2016zxk} and $e\mu$~\cite{atlas:emu, CMS:2016dfe}. All these constraints require the flavor-diagonal $Z'$ couplings, as well as the flavor off-diagonal couplings involving electrons to be very small, or equivalently, push the $Z^\prime$ mass scale to above multi-TeV range~\cite{Langacker:2008yv}. We propose a simplified leptophilic $Z^\prime$ scenario with {\it only} a flavor off-diagonal coupling to the muon and tau sector [see Eq.~\eqref{lagZp} below], which trivially satisfies all the above-mentioned constraints, and moreover, can be justified from symmetry arguments, as discussed below. In such a scenario, we find that the most relevant constraints come from leptonic $\tau$ decays in low-energy precision experiments, and to some extent, from the leptonic decays of the SM $W$ boson at the LHC. In particular, we show that the $(g-2)_\mu$ anomaly can be accounted for only with $m_{Z^\prime}>m_\tau-m_\mu$ and by allowing a larger $Z^\prime$ coupling to the right-handed charged-leptons than to the left-handed ones, whereas the lighter $Z'$ scenario (with $m_{Z'}<m_\tau-m_\mu$) is ruled out completely from searches for $\tau\to \mu$+invisible decays. 
We emphasize that the entire allowed range can likely be tested in future low-energy precision measurements of lepton flavor universality in $\tau$ decays at Belle 2, as well as in the leptonic decay of the $W$ boson at the LHC. A striking four-lepton collider signature consisting of like-sign di-muons and like-sign di-taus can be probed at the high luminosity phase of the LHC (HL-LHC) as well as at a future electron-positron collider running at the $Z$ pole. We also point out an interesting possibility for the detection of our flavor-violating $Z^\prime$ scenario by the scattering of ultra-high energy neutrinos off lower-energy neutrinos, which leads to characteristic spectral absorption features that might be observable in large volume neutrino telescopes like IceCube and KM3NeT. The rest of the paper is organized as follows: in Section~\ref{sec:model}, we present our phenomenological model Lagrangian, which can be justified in a concrete BSM scenario. In Section~\ref{sec:gm2}, we show how the $(g-2)_\mu$ anomaly can be resolved in our LFV $Z'$ scenario. Section~\ref{sec:lfv} discusses the lepton flavor universality violating tau decays for $Z'$ masses larger than the tau mass. Section~\ref{sec:2body} discusses the two-body tau decays for a light $Z'$. In Section~\ref{sec:lhc}, we derive the LHC constraints on our model from leptonic $W$ decays. Section~\ref{sec:lep} derives the LEP constraints from $Z$-decay measurements. Section~\ref{sec:4lepton} presents a sensitivity study for the new collider signature of this model. Section~\ref{sec:icecube} discusses some observational prospects of the $Z'$ effects in neutrino telescopes. Our conclusions are given in Section~\ref{sec:concl}.
\label{sec:concl} We have discussed a simple new physics interpretation of the long-standing anomaly in the muon anomalous magnetic moment in terms of a purely flavor off-diagonal $Z'$ coupling only to the muon and tau sector of the SM. We have discussed the relevant constraints from lepton flavor universality violating tau decays for $m_{Z'}>m_\tau$ and from $\tau\to \mu$+invisible decays for $m_{Z'}<m_\tau$, as well as the latest LHC constraints from $W\to \mu\nu$ searches. We find that for a $Z'$ lighter than the tau, the low-energy tau decay constraints rule out the entire $(g-2)_\mu$ allowed region by many orders of magnitude. However, a heavier $Z'$ solution to the $(g-2)_\mu$ puzzle is still allowed, provided the $Z'$ coupling to the charged leptons has both left- and right-handed components, and the right-handed component is larger than the left-handed one. The deviations from lepton flavor universality in tau decays predicted in this model can be probed at Belle 2, while a large part of the $(g-2)_\mu$ allowed region can be accessed at future colliders such as the high-luminosity LHC and/or an $e^+e^-$ $Z$-factory such as FCC-ee. The on-shell production of the $Z'$ in high-energy neutrino interactions with either the cosmic neutrino background or other natural neutrino sources such as supernova neutrinos could lead to characteristic absorption features in the neutrino spectrum, which might be measured in neutrino telescopes.
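The location of such an absorption feature is set by the s-channel resonance condition for a beam neutrino scattering on a (quasi-)static target neutrino, $E_{\rm res}=m_{Z'}^2/(2 m_\nu)$. A quick sketch with illustrative masses (a $Z'$ just above the tau mass and a 0.1 eV neutrino; neither is a fitted value):

```python
# Resonant absorption energy for high-energy neutrinos scattering on the
# cosmic neutrino background: E_res = m_Zp^2 / (2 * m_nu).
m_zp = 2.0        # GeV, assumed Z' mass (> m_tau, in the allowed region)
m_nu = 0.1e-9     # GeV, assumed neutrino mass (0.1 eV)

e_res = m_zp**2 / (2.0 * m_nu)
print(f"E_res = {e_res:.1e} GeV")   # 2.0e+10 GeV for these inputs
```

For a $\sim 10$\,MeV supernova-neutrino target the resonance instead sits near $m_{Z'}^2/(2E_t)\sim 200$\,GeV, so different target populations imprint features in very different parts of the spectrum.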
1607.05000_arXiv.txt
{In this paper we examine the scenario that the Doppler factor determines the observational differences of blazars. Significant negative correlations are found between the observational synchrotron peak frequency and the Doppler factor. After correcting for the Doppler boosting, the intrinsic peak frequency furthermore shows a tight linear relation with the Doppler factor. More interestingly, this relation is consistent with the scenario that the black hole mass governs both the bulk Lorentz factor and the synchrotron peak frequency. In addition, the distinction between the kinetic jet powers of BL Lacs and FSRQs disappears after the boosting factor $\delta^2$ is considered. The negative correlation between the peak frequency and the observational isotropic luminosity, known as the blazar sequence, also disappears after the Doppler boosting is corrected. We also find that the correlation between the Compton dominance and the Doppler factor exists for all types of blazars. Therefore, this correlation is unsuitable for testing the dominance of external Compton emission.}
% \label{sect:intro}
Blazars are the most extreme subclass of active galactic nuclei (AGNs). Their radiation is dominated by the non-thermal emission of a relativistic jet with a small viewing angle to our line of sight. The spectral energy distributions (SEDs) of blazars show two peaks (in the $\nu - \nu L_\nu$ diagram), which are believed to be produced by synchrotron and inverse Compton (IC) processes, respectively. However, whether external photons from outside the jet participate in the IC process is still an open question (e.g., \citealt{2011ApJ...735..108C}; \citealt{2012ApJ...752L...4M}). Blazars are classified as flat spectrum radio quasars (FSRQs) and BL Lac objects (BL Lacs) by their optical spectra. They can also be classified as low synchrotron peaked blazars (LSPs), intermediate synchrotron peaked blazars (ISPs), and high synchrotron peaked blazars (HSPs) based on the synchrotron peak frequency~\citep{2010ApJ...716...30A}. An alternative classification based on the ratio of broad line luminosity to Eddington luminosity was proposed by~\citet{2011MNRAS.414.2674G}. This classification accounts for the potential selection effect on the equivalent width (EW) measurement of broad lines due to the Doppler boosting effect, and links the observational classification to accretion regimes. On the other hand, the blazar sequence based on the bolometric luminosity was put forward to unify the observational differences of blazars~\citep{1998MNRAS.299..433F}. The negative correlations of the peak frequency with the luminosity, as well as with the Compton dominance (CD), are explained as increasing cooling from external photons outside the jet with increasing luminosity \citep{1998MNRAS.301..451G}.
However, later studies showed that the blazar sequence was an artefact of Doppler boosting~\citep{2008A&A...488..867N} or of redshift selection effects~\citep{2012MNRAS.420.2899G}, and sources with both high luminosity and high peak frequency, which break the blazar sequence, do exist (e.g., \citealt{2012MNRAS.422L..48P}; \citealt{2015ApJ...810...14A}). To improve on the simple blazar sequence, \citet{2011ApJ...740...98M} proposed a concept named the ``blazar envelope'', which considers the jet power and the orientation effect. According to the blazar envelope, blazars comprise two populations divided by jet power, and an envelope forms because of the different orientations of the various sources. Since its launch in 2008, the Large Area Telescope (LAT) onboard the Fermi gamma-ray space telescope (Fermi) has detected 1444 AGNs in the third LAT catalog (3LAC) clean sample \citep{2015ApJ...810...14A}. The broad energy range and high accuracy of the LAT have promoted a deeper understanding of both the radiation mechanism (e.g. \citealt{2011ApJ...735..108C}; \citealt{2012ApJ...752L...4M}) and the classification of blazars (e.g. \citealt{2009MNRAS.396L.105G}; \citealt{2011MNRAS.414.2674G}). However, it is still uncertain whether the differences among blazars are determined by distinct physical features or by observational effects (such as orientation). Moreover, it also remains to be verified whether one or several factors shape the blazar classifications. Distinctions in Doppler boosting have been found for different subclasses of blazars, such as BL Lacs and FSRQs \citep{2009A&A...494..527H}, or X-ray-selected BL Lacs and radio-selected BL Lacs \citep{1993ApJ...407...65G}. Stronger Doppler boosting was also suggested by many papers to explain the $\gamma$-ray detected blazars (e.g. \citealt{2009ApJ...696L..17K}; \citealt{2009ApJ...696L..22L}; \citealt{2010A&A...512A..24S}; \citealt{2015ApJ...810L...9L}).
Furthermore, the blazar sequence was found to be an artefact of the Doppler boosting \citep{2008A&A...488..867N}. All these results indicate that the observational differences of blazars could be determined by the Doppler factor. Thus, this work aims to test this scenario. Our paper is organized as follows. In Section 2, we examine the connection between the Doppler factor and the synchrotron peak frequency. Section 3 presents the impact of the Doppler factor on the jet power. The Doppler-corrected blazar sequence is discussed in Section 4. In Section 5, we recheck the validity of testing the external Compton (EC) dominance with the correlation between the CD and the Doppler factor. Discussions are presented in Section 6. In this paper, we use a $\Lambda$CDM cosmology model with $h=0.71$, $\Omega_{m}=0.27$, $\Omega_{\Lambda}=0.73$ \citep{2009ApJS..180..330K}.
In this paper, we obtain two groups of Doppler factors estimated through two independent methods, aiming to determine whether the Doppler factor governs the observational differences of blazars. Significant correlations are found between the Doppler factors and the indicator of the SED classification, i.e., the observational synchrotron peak frequency. After correcting for the Doppler boosting, the intrinsic peak frequency shows a uniform linear relation with both groups of Doppler factors. In addition, we find that the distinction in jet power is mainly caused by the different Doppler factors of the different subclasses. The negative correlation between the peak frequency and the observational isotropic luminosity disappears after the Doppler boosting is corrected. All these results confirm that the Doppler factor (physically the bulk Lorentz factor) determines the observational differences of blazars. Furthermore, the black hole mass plays an important role in controlling the bulk Lorentz factor and the SED of blazars. Moreover, we find that the correlation between the Compton dominance and the Doppler factor exists for all types of blazars; thus this correlation is unsuitable for testing EC emission dominance. The correlation between the CD and the Doppler factor can be explained within the SSC process if the radius of the emission region decreases as the bulk Lorentz factor increases.
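The de-boosting used throughout can be summarized in two relations: $\nu_{\rm int}=\nu_{\rm obs}/\delta$ for the peak frequency and $L_{\rm int}=L_{\rm obs}/\delta^{p}$ for an isotropic luminosity. A minimal sketch, assuming $p=4$ (the moving-blob case) for the luminosity; the example source parameters are illustrative, and the jet-power correction quoted above instead uses $\delta^2$:

```python
# De-boost observed blazar quantities for a given Doppler factor delta.
# Assumes nu_int = nu_obs / delta and L_int = L_obs / delta**p,
# with p = 4 (moving-blob case) chosen for illustration.

def deboost(nu_obs_hz, l_obs, delta, p=4):
    """Return (intrinsic peak frequency, intrinsic luminosity)."""
    return nu_obs_hz / delta, l_obs / delta**p

# Example: an LSP-like source with nu_peak = 1e13 Hz, L = 1e47 erg/s, delta = 10
nu_int, l_int = deboost(1e13, 1e47, 10.0)
print(f"nu_int = {nu_int:.1e} Hz, L_int = {l_int:.1e} erg/s")
# -> nu_int = 1.0e+12 Hz, L_int = 1.0e+43 erg/s
```

Because the luminosity de-boosts as $\delta^{4}$ while the frequency de-boosts only as $\delta$, strongly boosted low-peaked sources shift far more in luminosity than in frequency, which illustrates how the apparent $\nu_{\rm peak}$--$L$ anticorrelation can vanish after correction.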
arXiv:1607.05000 (2016-07)
1607/1607.04671.txt
The Matter-Neutrino Resonance (MNR) phenomenon has the potential to significantly alter the flavor content of neutrinos emitted from compact object mergers. We present the first calculations of MNR transitions using neutrino self interaction potentials and matter potentials generated self-consistently from a dynamical model of a three-dimensional neutron star merger. In the context of the single angle approximation, we find that Symmetric and Standard MNR transitions occur in both normal and inverted hierarchy scenarios. We examine the spatial regions above the merger remnant where propagating neutrinos will encounter the matter neutrino resonance and find that a significant fraction of the neutrinos are likely to undergo MNR transitions.
Remnants arising from binary neutron star mergers have been found to produce immense numbers of neutrinos and antineutrinos. The neutrinos and antineutrinos play important roles in the physics of several phenomena associated with these remnants, including jet production in gamma ray bursts \cite{Eichler:1989ve,Ruffert:1998qg,Rosswog:2003rv,Just:2015dba} as well as nucleosynthesis from collisionally heated material \cite{Wanajo:2014wha,Sekiguchi:2015dma,Goriely:2015fqa,Roberts:2016igt} and from winds \cite{Surman:2003qt,Surman:2004sy,Surman:2005kf,Dessart:2008zd,Fernandez:2013tya,Just:2014fka,Martin:2015hxa}. Neutrino physics is a necessary component in simulations that predict gravitational radiation from mergers \cite{Sekiguchi:2011zd,Foucart:2014nda,Palenzuela:2015dqa,Bernuzzi:2015opx,Foucart:2015gaa}, and additionally, if a merger occurs within range of current or future neutrino detectors, the neutrino signal will provide a wealth of information about these objects \cite{Caballero:2009ww,Caballero:2015cpa}. An understanding of the flavor content of the neutrinos is essential for providing an accurate picture of merger phenomena. Neutrino oscillations will occur in neutron star mergers and have the effect of changing the flavor content of the neutrinos. Merger remnants are a dense environment, so flavor transformation is strongly influenced by interactions between neutrinos and the surrounding particles. The potential that a neutrino experiences due to coherent forward scattering on other neutrinos and antineutrinos, sometimes called the neutrino self interaction potential, is a significant driver of the neutrino evolution. The influence of this potential on supernova neutrino flavor transformation has been extensively studied, e.g. \cite{Duan:2006an,Hannestad:2006nj,EstebanPretel:2008ni, Duan:2010bg,Pehlivan:2011hp, Volpe:2013jgr,Balantekin:2006tg,Vlasenko:2014bva,Duan:2010af,Cherry:2012zw,Gava:2009pj} and references therein.
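For a sense of the length scales involved (a back-of-the-envelope estimate, not a result of this paper): the standard two-flavor vacuum oscillation length, $L_{\rm osc}\,[{\rm km}]\simeq 2.48\,E\,[{\rm GeV}]/\delta m^2\,[{\rm eV^2}]$, is already tens of kilometres for an assumed typical $\sim$20 MeV merger neutrino and the atmospheric splitting, i.e. comparable to the remnant scale:

```python
def osc_length_km(E_GeV, dm2_eV2):
    """Two-flavor vacuum oscillation length (phase 2*pi), in km."""
    return 2.48 * E_GeV / dm2_eV2

E_GeV = 0.020      # assumed typical merger-neutrino energy (~20 MeV)
dm2_atm = 2.5e-3   # atmospheric mass-squared splitting [eV^2]
print(f"L_osc ~ {osc_length_km(E_GeV, dm2_atm):.1f} km")  # ~20 km
```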
One of the major consequences of the neutrino self interaction potential in supernovae is the prediction of a pendulum-like oscillation referred to as a nutation/bipolar oscillation. This type of oscillation is also expected to occur for at least some merger neutrinos \cite{Dasgupta:2008cu,Malkus:2015mda}. In merger remnant environments, it has been suggested that neutrinos can undergo not only the same type of flavor transformations as in supernova scenarios, but also a novel type of transition called {\it Matter-Neutrino Resonance} (MNR) transitions, first observed in \cite{Malkus:2012ts}. This phenomenon is distinct from the nutation/bipolar oscillation and typically occurs closer to the neutrino emitting surface. In both hierarchies, MNR transitions can dramatically change the neutrino flavor content. During the transformation the neutrinos stay \lq\lq on-resonance\rq\rq, meaning that the neutrinos evolve in such a way as to ensure that all entries of the Hamiltonian remain relatively small \cite{Malkus:2014iqa}. In addition, the splitting of the instantaneous eigenstates remains small \cite{Vaananen:2015hfa}, and the neutrinos stay approximately on their instantaneous mass eigenstates \cite{Vaananen:2015hfa,Wu:2015fga}. The MNR transformation phenomenon requires a matter potential (from neutrino coherent forward scattering on baryons and charged leptons) and a neutrino self interaction potential of opposite sign and roughly equal magnitude. In merger remnants this condition can be fulfilled either (1) because leptonization requires that antineutrinos outnumber neutrinos initially or (2) because the geometry of the system causes the relative contributions of neutrinos and antineutrinos to the self interaction potential to shift as the neutrinos travel along their trajectories.
This geometric effect comes from the extension of the neutrino emitting surface significantly beyond the antineutrino emitting surface; for a discussion see \cite{Malkus:2015mda}. The matter neutrino resonance is not naively expected to occur in supernovae because both potentials have the same sign. However, if Non-Standard Interactions (NSIs) exist with significant strength, they can trigger MNR transitions in supernovae \cite{Stapleford:2016jgz}. Further, if neutrino-antineutrino oscillations occur near the surface of the protoneutron star, a situation similar to Matter Neutrino Resonance transformation occurs \cite{Vlasenko:2014bva}. Two primary types of Matter Neutrino Resonance transformation have been suggested to occur in merger remnants. One of these, the {\it Standard} MNR transition, is characterized by a full conversion of electron neutrinos to other types, while electron antineutrinos partially transform but then return to their original configuration~\cite{Malkus:2014iqa}. Standard MNR transitions occur in regions where the neutrino self interaction potential begins as the largest potential in the system, but declines in magnitude until it eventually reaches the same magnitude as the matter potential. The other type of MNR transformation fully converts both electron neutrinos and antineutrinos symmetrically over a short range and is called a {\it Symmetric} MNR transition~\cite{Malkus:2015mda}. Symmetric MNR transitions differ from Standard MNR transitions in that they occur when geometric effects cause the system to pass from a region where antineutrinos dominate the neutrino self interaction potential to a region where neutrinos dominate this potential \cite{Malkus:2015mda,Vaananen:2015hfa}. Both types of transformation may well influence nucleosynthesis \cite{Malkus:2012ts,Malkus:2015mda}, as MNR transitions typically occur closer to the emitting surface of neutrinos than other large-scale oscillation phenomena.
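A toy illustration of the resonance condition (all profiles and numbers below are our own assumptions, not taken from the dynamical simulation): the MNR sits where the matter potential $\sqrt{2}\,G_F n_e$ equals the magnitude of an initially larger, antineutrino-dominated (negative) self-interaction potential.

```python
import math

# Illustrative toy profiles above an emitting surface at R0 (all assumed):
R0, h = 30.0, 100.0          # emission radius, density scale height [km]
rho0, Ye = 1.0e10, 0.1       # baryon density [g/cm^3], electron fraction at R0
Vm0 = 7.63e-14 * rho0 * Ye   # sqrt(2) G_F n_e at R0, in eV

def V_matter(r):
    """Matter potential for an exponentially declining density [eV]."""
    return Vm0 * math.exp(-(r - R0) / h)

def V_selfint_mag(r):
    """Magnitude of a toy self-interaction potential, assumed to start
    10x larger than the matter potential and fall roughly as r^-4 [eV]."""
    return 10.0 * Vm0 * (R0 / r) ** 4

# Bisection for the crossing (the MNR location) between R0 and 300 km
a, b = R0, 300.0
for _ in range(60):
    mid = 0.5 * (a + b)
    if V_selfint_mag(mid) > V_matter(mid):
        a = mid
    else:
        b = mid
r_mnr = 0.5 * (a + b)
print(f"toy MNR radius ~ {r_mnr:.1f} km")
```

With these assumed profiles the crossing lands a few tens of kilometres above the emitting surface, inside the 30--300 km range quoted below for the simulation.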
Previous studies of Matter Neutrino Resonance transitions used phenomenological, flat neutrino emission surfaces motivated by the qualitative properties found in complex dynamical merger simulations. Here, we consider a snapshot of a dynamical calculation \cite{Perego:2014fma} and compute flavor transformation using the neutrino self interaction potentials and matter potentials obtained from the same dynamical simulation. The purpose of this work is to explore the character of matter neutrino resonance transformations in the presence of a more complex neutrino emission geometry as well as self-consistent density profiles. This manuscript is structured as follows. In Sec. \ref{secModel} we describe the dynamical calculation and discuss the behavior of the physical quantities relevant for neutrino flavor transformation. In Sec. \ref{secCalculation}, we explain the Hamiltonian and the evolution equations used in our calculations. In Sec. \ref{secResults}, we present the results of our neutrino flavor transformation calculations and locate the spatial regions where propagating neutrinos will encounter MNRs. In Sec. \ref{secDiscuss}, we show that MNR transitions are approximately independent of hierarchy although the efficacy of the resonance is determined in part by neutrino mixing angle and mass squared differences. We also consider three flavor effects by comparing two and three flavor scenarios. We conclude in Sec. \ref{secConclude}.
We conduct the first studies of the flavor evolution of neutrinos above a binary neutron star merger remnant where the matter potential and the neutrino self interaction potential are computed self-consistently from the same dynamical simulation \cite{Perego:2014fma}, in order to give as realistic a picture as is currently possible of matter neutrino resonance (MNR) transformation. In this model, which has both a massive neutron star and an accretion disk, matter neutrino resonances are a common phenomenon. Neutrinos typically pass through a resonance location at the edge of the low density funnel above the massive neutron star, where the neutrino and matter potentials have approximately the same magnitude. Thus, most neutrinos emitted from the massive neutron star, which begin in the funnel, have the opportunity to encounter an MNR. For the disk neutrinos, the matter potential outside of the funnel is often too high for an MNR, so only those neutrinos that travel in the direction of the funnel encounter a resonance. The exact locations of the resonances depend on the angle of travel of the neutrinos, but lie mostly between $30$ km and $300$ km above the core. In the context of the single angle approximation, most of the neutrinos that encounter a resonance as they leave the funnel region exhibit a transition. The type of MNR transition varies between neutrino trajectories, with some neutrinos undergoing only a Standard MNR, some a Symmetric followed by a Standard MNR, and some having more of a hybrid appearance. In general, the Symmetric MNR transitions are caused by the neutrino self interaction potential changing from negative to positive, i.e. from being dominated by antineutrinos to being dominated by neutrinos. Part of this behavior comes from the spatial distribution of the relative number densities of neutrinos and antineutrinos. The massive neutron star emits more antineutrinos than neutrinos, so over the central axis, $\bar{\nu}_e$ outnumber $\nu_e$.
In contrast, the disk emits more neutrinos than antineutrinos, so in some regions over the disk, the $\nu_e$ outnumber $\bar{\nu}_e$. This effect drives the initially negative potential toward the positive as neutrinos exit the region above the massive neutron star. In addition, there is a second effect which comes from the geometric, $1 - \cos \Theta$ factor in the potential which takes into account the angle of scattering on the ambient neutrinos. This factor means that a more distended emission surface creates a larger contribution to the potential than a more compact emission surface. Since the neutrino emitting surface is larger than the antineutrino emitting surface, this favors neutrinos at sufficient distance. Therefore the initially negative potential is pushed toward the positive as the neutrino travels away from the emitting surface. It is the combination of these two effects that creates the change in sign. In most cases, at the end of the MNR transition(s), the electron neutrinos have completely converted whereas the electron antineutrinos have started to convert but then returned to their original configuration. The evolution of the neutrinos during a MNR transition in typical circumstances is fairly insensitive to the hierarchy as well as the mass squared difference and mixing angle. However, a sufficiently small $\delta m^2 \sin 2 \theta$ will suppress the MNR transition. The value of $\delta m^2 \sin 2 \theta$ which suppresses the transition can be reasonably well predicted using a single energy analysis of the growth of the imaginary component of the flavor basis Hamiltonian. Given the close proximity of some resonance locations to the neutrino emission surface, matter neutrino resonance transformation may have a number of consequences. It will alter the subsequent evolution of the neutrinos as well as the flavor composition of the neutrino signal. 
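The geometric effect can be illustrated with a crude axial proxy (our own sketch, not the single-angle prescription of the calculation): for a point at height $h$ on the symmetry axis above a uniform disk of radius $R$, take $1-\cos\Theta_{\max}=1-h/\sqrt{h^2+R^2}$. With an assumed neutrino surface larger than the antineutrino surface (the radii below are illustrative), the relative weight of the larger surface grows with distance:

```python
import math

def geom_factor(h, R):
    """1 - cos(Theta_max) for a point at height h above a disk of radius R;
    a crude stand-in for the (1 - cos Theta) weight in the potential."""
    return 1.0 - h / math.hypot(h, R)

R_nu, R_nubar = 90.0, 50.0   # assumed emitting-surface radii [km]
for h in (50.0, 200.0):
    ratio = geom_factor(h, R_nu) / geom_factor(h, R_nubar)
    print(f"h = {h:5.0f} km: g_nu / g_nubar = {ratio:.2f}")
```

For these numbers the ratio rises from about 1.8 at 50 km to about 3 at 200 km, consistent with the statement that the more distended neutrino surface eventually wins over the antineutrino contribution.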
For example, in the absence of the MNR transition, one would expect a nutation/bipolar transition farther from the emission surfaces. However, since the MNR transition alters the relative states of the neutrinos and antineutrinos, it also alters the prospects for this type of transition \cite{Malkus:2015mda}. The calculations presented here are performed in the context of the single angle approximation. Many of the neutrinos from the remnant encounter the MNR resonance at the same location, which is encouraging from the point of view of possible multi-angle effects. However, a full multi-angle calculation would be required to know definitively what these effects are and how large they are. MNR transformation may play a role in the dynamics of the remnant, in the prospects for jet formation, or in nucleosynthesis. Since there are fewer $\mu$/$\tau$ type neutrinos than electron type, the MNR oscillation effectively removes some of the ability of the neutrinos and antineutrinos to convert neutrons to protons and vice versa. Therefore, it is likely that this oscillation has an effect on any type of nucleosynthesis that is influenced by the neutrinos, for example, wind nucleosynthesis. Future studies of MNR transitions in binary neutron star merger remnants are needed to elucidate these consequences, as well as to further probe the efficacy of the MNR transition itself. \label{secConclude}
arXiv:1607.04671 (2016-07)
1607/1607.05478_arXiv.txt
We develop models of two-component spherical galaxies to establish scaling relations linking the properties of spheroids at $z=0$ (total stellar masses, effective radii $R_e$ and velocity dispersions within $R_e$) to the properties of their dark-matter halos at both $z=0$ and higher redshifts. Our main motivation is the widely accepted idea that the accretion-driven growth of supermassive black holes (SMBHs) in protogalaxies is limited by quasar-mode feedback and gas blow-out. The SMBH masses, $M_{\rm{BH}}$, should then be connected to the dark-matter potential wells at the redshift $z_{\rm{qso}}$ of the blow-out. We specifically consider the example of a power-law dependence on the maximum circular speed in a protogalactic dark-matter halo: $M_{\rm{BH}}\propto V^4_{\rm{d,pk}}$, as could be expected if quasar-mode feedback were momentum-driven. For halos with a given $V_{\rm{d,pk}}$ at a given $z_{\rm{qso}}\ge 0$, our model scaling relations give a typical stellar velocity dispersion $\sigma_{\rm{ap}}(R_e)$ at $z=0$. Thus, they transform a theoretical ``$M_{\rm{BH}}$--$V_{\rm{d,pk}}$ relation'' into a prediction for an observable $M_{\rm{BH}}$--$\sigma_{\rm{ap}}(R_e)$ relation. We find the latter to be distinctly non-linear in log--log space. Its shape depends on the generic redshift-evolution of halos in a {$\Lambda$}CDM cosmology and the systematic variation of stellar-to-dark matter mass fraction at $z=0$, in addition to any assumptions about the physics underlying the $M_{\rm{BH}}$--$V_{\rm{d,pk}}$ relation. Despite some clear limitations of the form we use for $M_{\rm{BH}}$ versus $V_{\rm{d,pk}}$, and even though we do not include any SMBH growth through dry mergers at low redshift, our results for $M_{\rm{BH}}$--$\sigma_{\rm{ap}}(R_e)$ compare well to data for local early types if we take $z_{\rm{qso}}\sim 2$--$4$.
\label{sec:intro} The masses $M_{\rm BH}$ of supermassive black holes (SMBHs) at the centres of normal early-type galaxies and bulges correlate with various global properties of the stellar spheroids---see \citet{kho13} for a comprehensive review. The strongest relationships include one between $M_{\rm BH}$ and the bulge mass $M_{\rm bulge}$ (either stellar or dynamical, depending on the author: e.g., \citealt{magetal98,mh03,hnr04,mcconnellma}); a scaling of $M_{\rm BH}$ with the (aperture) stellar velocity dispersion $\sigma_{\rm ap}$ averaged inside some fraction of the effective radius $R_e$ of the bulge ($M_{\rm BH}\sim \sigma_{\rm ap}^{4{\mbox{--}}5}$ if fitted with a single power law: \citealt{ferrarese00,gebhardt00,fandf,mcconnellma}); and a fundamental-plane dependence of $M_{\rm BH}$ on a combination of either $M_{\rm bulge}$ and $\sigma_{\rm ap}$ or $\sigma_{\rm ap}$ and $R_e$ (\citealt{hop07b,hop07c}). Whether any one correlation is more fundamental than the others is something of an open question, but collectively they are interpreted as evidence for co-evolution between SMBHs and their host galaxies. This co-evolution likely involved self-regulated feedback in general. Most of the SMBH mass in large galaxies is grown in a quasar phase of Eddington-rate accretion \citep{yu02}, driven by a rapid succession of gas-rich mergers at high redshift. Such accretion deposits significant momentum and energy back into the protogalactic gas supply, which can lead to a blow-out that stops further accretion onto the SMBH. In this context, the empirical correlation between $M_{\rm BH}$ and $\sigma_{\rm ap}$ takes on particular importance, as the stellar velocity dispersion should reflect the depth of the potential well from which SMBH feedback had to expel the protogalactic gas. 
Cosmological simulations of galaxy formation now routinely include prescriptions for the quenching of Eddington-rate accretion by ``quasar-mode'' feedback, with free parameters that are tuned to give good fits to the SMBH $M$--$\sigma$ relation at $z=0$. However, it is not clear in detail how the stellar velocity dispersions in normal galaxies at $z=0$ relate to the protogalactic potential wells when any putative blow-out occurred and the main phase of accretion-driven SMBH growth came to an end. For most systems, this was presumably around $z\sim 2$--3, when quasar activity in the Universe was at its peak \citep{rich06,hop07a}. The potential wells in question were dominated by dark matter, and a general method is lacking to connect the stellar $\sigma_{\rm ap}$ in spheroids to the properties of their dark-matter halos, not only at $z=0$ but at higher redshift as well. Moreover, it is not necessarily obvious what specific property (or properties) of dark-matter halos provides the key measure of potential-well depth in the context of a condition for accretion-driven blow-out. Different simulations of galaxy and SMBH co-evolution with different recipes for quasar-mode feedback appear equally able (with appropriate tuning of their free parameters) to reproduce the observed $M$--$\sigma$ relation. Our main goal in this paper is to address the first part of this problem. We develop ``mean-trend'' scaling relations between the average stellar properties (total masses, effective radii and aperture velocity dispersions) and the dark-matter halos (virial masses and radii, density profiles and circular-speed curves) of two-component spherical galaxies. These scalings are constrained by some data for a representative sample of local early-type galaxies, and by the properties of dark-matter halos at $z=0$ in cosmological simulations. 
We then include an analytical approximation to the mass and potential-well growth histories of simulated dark-matter halos, in order to connect the stellar properties at $z=0$ to halo properties at $z>0$. We ultimately use these results to illustrate how one particularly simple analytical expression, which gives a critical SMBH mass for protogalactic blow-out directly in terms of the dark-matter potential well at quasar redshifts, translates to a relation between SMBH mass and stellar velocity dispersion at $z=0$. \subsection{SMBH masses and halo circular speeds} \label{subsec:mbhvhalo} Under the assumption (which we discuss just below) that accretion feedback is momentum-conserving and takes the form of a spherical shell driven outwards by an SMBH wind with momentum flux $dp_{\rm wind}/dt=L_{\rm Edd}/c$, \citet{mcq12} derive a minimum SMBH mass sufficient to expel an initially static and virialised gaseous medium from any protogalaxy consisting of dark matter and gas only. This critical mass is approximately \begin{align} M_{\rm BH} & ~\simeq~ \frac{f_0 \kappa}{\pi G^2}\frac{V^{4}_{\rm{d,pk}}}{4} \notag \\ & ~\simeq~ 1.14\times10^{8}\,M_\odot\, \left(\frac{f_0}{0.2}\right) \left(\frac{V_{\rm{d,pk}}}{200~{\rm km s^{-1}}}\right)^{4} ~ , \label{eq:msig} \end{align} where $\kappa$ is the Thomson-scattering opacity and $f_0$ is the (spatially constant) gas-to-dark matter mass fraction in the protogalaxy. The velocity scale $V_{\rm d,pk}$ refers to the {\it peak} value of the circular speed $V_{\rm d}^2(r)\equiv GM_{\rm d}(r)/r$ in a dark-matter halo with mass profile $M_{\rm d}(r)$. Equation (\ref{eq:msig}) holds for any form of the mass profile, just so long as the associated circular-speed curve has a single, global maximum---as all realistic descriptions of the halos formed in cosmological $N$-body simulations do. 
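As a numerical check of equation (\ref{eq:msig}) (a sketch in CGS units; the quoted normalization is reproduced with the pure-hydrogen electron-scattering opacity $\kappa=\sigma_T/m_p\simeq0.4~{\rm cm^2\,g^{-1}}$):

```python
# CGS constants
sigma_T = 6.6524e-25   # Thomson cross-section [cm^2]
m_p     = 1.6726e-24   # proton mass [g]
G       = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
M_sun   = 1.989e33     # solar mass [g]
PI      = 3.141592653589793

kappa = sigma_T / m_p  # electron-scattering opacity of ionized hydrogen

def M_BH_crit(f0, V_pk_kms):
    """Critical SMBH mass of eq. (1), in solar masses."""
    V = V_pk_kms * 1.0e5                       # km/s -> cm/s
    return f0 * kappa / (PI * G**2) * V**4 / 4.0 / M_sun

print(f"{M_BH_crit(0.2, 200.0):.3e} Msun")     # ~1.14e8 Msun
```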
Defining a characteristic (dark-matter) velocity dispersion as $\sigma_0\equiv V_{\rm d,pk}/\sqrt{2}$ turns equation (\ref{eq:msig}) into a critical $M_{\rm BH}$--$\sigma_0$ relation, which is formally the same as that obtained by \citet{king03,king05}, and similar to the earlier result of \citet{fabian99}, for momentum-driven blow-out from a singular isothermal sphere. \begin{figure} \begin{center} \includegraphics[width=83mm]{Figure1.eps} \end{center} \caption{SMBH mass versus stellar velocity dispersion averaged over an effective radius. Data are from the compilation of \citet{mcconnellma} for 53 E or S0 galaxies (filled circles) and 19 bulges in late Hubble types (open circles). The dashed line is equation (\ref{eq:msig}) with a protogalactic gas-to-dark matter fraction $f_0=0.18$ and $V_{\rm{d,pk}} \equiv \sqrt2\,\sigma_{\rm{ap}}(R_e)$ for all galaxies. Improving upon this poorly-justified association between the characteristic stellar and dark-matter velocities in early-type galaxies is one of the goals of this paper.} \label{fig:simple} \end{figure} This critical mass is based on the simplified description given by \citet{kp03} of a Compton-thick wind resulting from accretion at or above the Eddington rate onto an SMBH. In particular, their analysis provides the assumption that the momentum flux in the SMBH wind is simply $L_{\rm Edd}/c$ (with no pre-factor).\footnotemark \footnotetext{Having $dp_{\rm wind}/dt=L_{\rm Edd}/c$, rather than $\propto\!L_{\rm Edd}/c$ but much less, implies high wind speeds of up to $\sim\!0.1\,c$ \citep{king10}. 
Such ``ultrafast outflows'' are observed in many local active galactic nuclei and low-redshift quasars accreting at or near their Eddington rates (e.g., \citealt{pounds03,reeves03,tombesi10,tombesi11}).} The wind from an SMBH with mass greater than that in equation (\ref{eq:msig}) will then supply an outwards force (i.e., $L_{\rm Edd}/c=4\pi G M_{\rm BH}/\kappa$) on a thin, radiative shell of swept-up ambient gas that exceeds the gravitational attraction of dark matter behind the shell (maximum force $f_0V_{\rm d,pk}^4/G$ if the gas was initially virialised), {\it everywhere} in the halo. It is a condition for the clearing of all gas to beyond the virial radius of any non-isothermal halo. Equation (\ref{eq:msig}) has limitations. Most notably, the protogalactic outflows driven by SMBH winds are in fact expected to become energy-driven (non-radiative) after an initial radiative phase \citep{zub12,mcq13}. This may \citep{silk98,mcq13} or may not \citep{zub14} change the functional dependence of a critical $M_{\rm BH}$ for blow-out on the dark-matter $V_{\rm d,pk}$ or any other characteristic halo velocity scale. Beyond this, the equation also assumes a wind moving into an initially static ambient medium, ignoring the cosmological infall of gas and an additional, confining ram pressure that comes with hierarchical (proto-)galaxy formation \citep{costa14}. It also neglects the presence of any stars in protogalaxies, which could contribute both to the feedback driving gaseous outflows (e.g., \citealt{murray,power}) {\it and} to the gravity containing them. (The assumptions of spherical symmetry and a smooth ambient medium are not fatal flaws; see \citealt{zub14}). However, it is not our intention here to improve equation (\ref{eq:msig}). Rather, we aim primarily to establish a method by which halo properties at $z>0$ in relations {\it such as} equation (\ref{eq:msig}) can be related to the average properties of stellar spheroids at $z=0$. 
By doing this, we hope to understand better how expected relationships between SMBH masses and protogalactic dark-matter halos are reflected in the observed $M$--$\sigma$ relation particularly. Equation (\ref{eq:msig}) is a good test case because it is simple and transparent but still contains enough relevant feedback physics to be interesting, even with the caveats mentioned above. It is also the only such relation we know of, which {\it does not assume that dark-matter halos are singular isothermal spheres}. \subsection{Halo circular speeds and stellar velocity dispersions} \label{subsec:transform} As a point of reference, Figure \ref{fig:simple} shows SMBH mass against the stellar velocity dispersion $\sigma_{\rm ap}(R_e)$ within an aperture equal to the stellar effective radius, for galaxies and bulges in the compilation of \citet{mcconnellma}. The dashed line shows equation (\ref{eq:msig}) evaluated with a gas-to-dark matter mass ratio of $f_0=0.18$ (the cosmic average; \citealt{planck14}) for all protogalaxies at the time of blow-out, and with the naive substitution $V_{\rm d,pk}\equiv \sqrt{2}\,\sigma_{\rm ap}(R_e)$ for all spheroids at $z=0$. The proximity of this line to the data---first emphasised by \citet{king03,king05}, who assumed isothermal halos---encourages taking seriously the basic physical ideas behind equation (\ref{eq:msig}), even though (as discussed above) some details must be incorrect at some level. However, setting $V_{\rm d,pk}=\sqrt{2}\,\sigma_{\rm ap}(R_e)$ is problematic. A $\sqrt{2}$-proportionality between circular speed and velocity dispersion is appropriate only for isothermal spheres, which real dark-matter halos are not. A dark-matter velocity dispersion can be equated to a stellar velocity dispersion only if the dark matter and the stars have the same spatial distribution, which is not true of real galaxies. 
And $V_{\rm d,pk}$ in equation (\ref{eq:msig}) refers to a protogalactic halo, which will have grown significantly since the quasar epoch at $z\sim2$--3. In \S\ref{sec:modingred}, we gather results from the literature that we need in order to address these issues. In \S\ref{sec:modresults}, we combine them to constrain simple models of spherical, two-component galaxies, focussing on scaling relations between the stellar and dark-matter properties at $z=0$. This is done without any reference to black holes, and the scalings should be of use beyond applications to SMBH correlations. In \S\ref{sec:msigma}, we make a new, more rigorous comparison of equation (\ref{eq:msig}) to the SMBH $M$--$\sigma$ data (compare Figure \ref{fig:mbhsigma} below to Figure \ref{fig:simple}). Our work could in principle be used to explore the consequences of SMBH--halo relations like equation (\ref{eq:msig}) for other SMBH--bulge correlations as well, but we do not pursue these here. In \S\ref{sec:summary}, we summarise the paper.
\subsubsection{Dwarf galaxies} \label{subsubsec:dwarf} There are more physical considerations than the validity of a \citeauthor{hern} profile for the stellar distribution, which affect how well our models might be able to describe galaxies with stellar masses less than a few $\times10^9\,M_\odot$. In order to calculate velocity dispersions in \S\ref{subsec:apdisp}, we assumed that stellar ejecta are retained at the bottom of any galaxy's potential well. However, supernova-driven winds will have expelled the ejecta from many dwarf ellipticals to far beyond the stellar distributions. In this case, $F_{\rm ej}=0$ in equations (\ref{eq:jeans}) and (\ref{eq:sigapprox}) is more appropriate than $(1+F_{\rm ej})=1/0.58$. This lowers the expected $\sigma_{\rm ap}(R_e)$ by $\approx\!30\%$ at a given $M_{*,{\rm tot}}$ for a given halo density profile. On the other hand, the same galactic winds may cause changes in the central structures of the dark-matter halos of dwarfs, from initially steep density cusps to shallower profiles perhaps closer to the \citet{bur95} model (e.g., \citealt{bns97,pont12}); while subsequent tidal stripping could have led to further modifications at large radii in the halos. Substantial, systematic alterations to the dark-matter density profiles may impact the values we infer for $V_{\rm d,pk}$, $f_*(R_e)$ and $\sigma_{\rm ap}(R_e)$ from a given $M_{*,{\rm tot}}$, $R_e$ and $M_{\rm d,vir}$. And in any case, the relationship connecting $M_{*,{\rm tot}}$ to $M_{\rm d,vir}$ in equation (\ref{eq:mos}), from \citet{mos10}, may itself be in error if extrapolated to halo masses much below $M_{\rm d,vir}\la 10^{11}~M_\odot$ (see \citealt{behr13}). All in all, while the model curves in Figure \ref{fig:allplots} can be viewed as broadly indicative of the situation for dwarf galaxies, they should also be seen as provisional in that regime. 
More comprehensive modelling is required to be confident of how these kinds of average trends extrapolate to stellar masses much less than several $\times10^{9}\,M_\odot$ (or, roughly, $\sigma_{\rm ap}(R_e)\la 60$--$70~{\rm km~s}^{-1}$). \subsubsection{Intracluster baryons} \label{subsubsec:icm} As already discussed in \S\ref{subsec:virial}, we can safely ignore any small differences that intracluster baryons (whether gas or stars) might make to the virial radii and masses we calculate for halos centred on the most massive galaxies. Equation (\ref{eq:sigapprox}) in \S\ref{subsec:apdisp} now provides a way to assess the effects of intracluster baryons on the stellar velocity dispersions in the central galaxies of groups and clusters. If additional baryonic mass is distributed spatially like the dark matter, then it can be accounted for in the Jeans equation (\ref{eq:jeans}), and hence in equation (\ref{eq:sigapprox}), by decreasing $f_*(r)\equiv M_*(r)/M_{\rm d}(r)$ by a constant factor. This factor will be largest if the global baryon fraction in a halo is equal to the cosmic average value but only a trace amount is actually contained in the central galaxy itself. Thus, an ``effective'' $f_*(r)$ in the Jeans equation might be lower than the \citet{mos10} value by a factor of $\left(1-\Omega_{b,0}/\Omega_{m,0}\right)^{-1}$ at most, which is $\simeq\!1.18$ for a 2013 {\it Planck} cosmology. This could plausibly be the situation in halos with $M_{\rm d}(r_{\rm vir})\sim10^{15}~M_\odot$ (which have $M_{*,{\rm tot}}\sim 10^{12}~M_\odot$ for the central galaxy), but the total baryon fraction decreases systematically with decreasing (sub-)halo mass \citep[e.g.,][]{gonzalez13,zhang11,mcgaugh10}. In galaxy-sized halos, it is generally consistent with the mass of stars, remnants and stellar ejecta in the galaxy proper, which we have already accounted for fully. 
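The numerical factor quoted just below follows directly from the cosmological parameters (assumed here to be the Planck 2013 values $\Omega_b h^2=0.02205$, $\Omega_m=0.315$, $h=0.673$):

```python
# Assumed Planck 2013 parameters
Omega_b_h2 = 0.02205
h          = 0.673
Omega_m    = 0.315

Omega_b = Omega_b_h2 / h**2
factor = 1.0 / (1.0 - Omega_b / Omega_m)
print(f"(1 - Omega_b/Omega_m)^-1 = {factor:.3f}")   # ~1.18
```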
The maximum effect on $\sigma_{\rm ap}(R_e)$ in the central galaxy can be estimated by comparing the value of equation (\ref{eq:sigapprox}) with $(1+F_{\rm ej})=1/0.58$ and $f_*(R_e)=0.5$---the lowest value in any of our models at $M_{*,{\rm tot}}=10^{12}~M_\odot$ or $M_{\rm d,vir}\simeq 10^{15}~M_\odot$ in Figure \ref{fig:allplots}---to the value using $f_*(R_e)=0.5/1.18$ instead. The result is an increase of $<\!5\%$ in the velocity dispersion. This is of the same order as the maximum effect on our values for the halo virial radii. We have chosen to ignore intracluster baryons altogether rather than introduce detailed additional modelling just to make adjustments that are {\it at most} so small. \subsubsection{Comparisons to individual systems} \label{subsubsec:real} In an Appendix, we make some checks on the average scalings represented in Figure \ref{fig:allplots}, by comparing various numbers extracted from them to relevant data in the literature for the Milky Way, M87 and M49 (the central galaxies of Virgo sub-clusters A and B) and NGC\,4889 (the brightest galaxy in the Coma Cluster). The stellar masses and velocity dispersions of these systems span the range covered by the local early-type galaxies used to define empirical black hole $M$--$\sigma$ relations. It is notable in particular that, starting with just the galaxies' total stellar masses, the scalings imply detailed properties of the {\it cluster-sized} dark-matter halos around each of M87, M49 and NGC\,4889, which are in reasonably good agreement with literature values. Incorporating the generally modest {\it systematic} effects of low-redshift mergers in the models shown in Figure \ref{fig:mbhsigma} would primarily move the curves upwards on the plot. [Mergers at all redshifts are already included in how $V_{\rm d,pk}$ in a progenitor halo at $z_{\rm qso}>0$ is connected to $\sigma_{\rm ap}(R_e)$ in the central galaxy at $z=0$; only the value of $M_{\rm BH}$ needs to be adjusted.] 
However, a few factors could lower the starting $M_{\rm BH}$--$V_{\rm d,pk}$ relation predicted by equation (\ref{eq:msigagain}) at any given $z_{\rm qso}$. First, if the baryon-to-dark matter mass fraction in a protogalaxy at $z_{\rm qso}$ were less than $f_0=0.18$---the cosmic average, assumed for all of the curves in Figure \ref{fig:mbhsigma}---then the critical $M_{\rm BH}$ for blow-out would be decreased proportionately. Second, equation (\ref{eq:msigagain}) ignores any prior work done by a growing SMBH to push the protogalactic gas outwards before the point of final blow-out, and thus it overestimates the mass required to clear the halo completely at $z_{\rm qso}$. Related to this, lower SMBH masses may suffice to quench quasar-mode accretion by clearing gas from the inner regions to ``far enough'' away from a central SMBH, without expelling it fully past the virial radius. Cosmological simulations are required to evaluate the balance between these effects pushing the model $M_{\rm BH}$--$\sigma_{\rm ap}(R_e)$ curves downwards in Figure \ref{fig:mbhsigma}, and the competing effects of late, dry mergers pulling upwards. But at this level, the more fundamental simplifications underlying equation (\ref{eq:msigagain})---among others, the idea that quasar-mode feedback is always momentum-driven---need to be improved first. Likewise, low-redshift mergers are just one possible source of intrinsic {\it scatter} in the empirical $M$--$\sigma$ relation at $z=0$. Another is different values in different systems for the precise redshift at which the main phase of accretion-driven SMBH growth was ended by quasar-mode feedback. Even if there were a single $z_{\rm qso}$, there must be real scatter in the data around any trend line such as those in Figure \ref{fig:mbhsigma}, because of the scatter around the constituent scalings from \S\ref{sec:modingred} and \S\ref{sec:modresults} for halos, halo evolution and central galaxies. 
It is important, but beyond the scope of this paper, to understand the physical content of the observed $M$--$\sigma$ scatter in detail. Part of the challenge is to know the ``correct'' trend for $M_{\rm BH}$ versus $\sigma_{\rm ap}(R_e)$ at $z=0$, around which scatter should be calculated. In the context of feedback models, this again requires improving on equation (\ref{eq:msigagain}) for the prediction of $M_{\rm BH}$ values at $z_{\rm qso}>0$. We have examined how a simple relationship between SMBH masses $M_{\rm BH}$ and the circular speeds $V_{\rm d,pk}$ in protogalactic dark-matter halos, established by quasar-mode feedback at redshift $z_{\rm qso}>0$, is reflected in a correlation between $M_{\rm BH}$ and the stellar velocity dispersions $\sigma_{\rm ap}(R_e)$ in early-type galaxies at $z=0$. Straightforward but non-trivial approximations for halo growth and scalings between halos and their central galaxies transform a power-law $M_{\rm BH}$--$V_{\rm d,pk}$ relation at $z_{\rm qso}$ into a decidedly {\it non}--power-law $M_{\rm BH}$--$\sigma_{\rm ap}(R_e)$ relation at $z=0$. This relation nevertheless compares well to current data, for assumed values of $z_{\rm qso}\approx 2$--4. We worked with two-component models for spherical galaxies. Because the stellar properties most relevant to us are those at (or averaged inside) an effective radius, it sufficed to assume \cite{hern} density profiles for the stars inside any galaxy. Because dark-matter halos are key to determining SMBH mass in the feedback scenario we focussed on, we allowed for any of four different halo density profiles: those of \citet{NFW96,NFW97}, \citet{hern}, \citet{dm} and \citet{bur95}. 
The scaling relations we developed are trend lines connecting average stellar properties at $z=0$ [total masses $M_{*,{\rm tot}}$, effective radii $R_e$, aperture velocity dispersions $\sigma_{\rm ap}(R_e)$ and dark-matter mass fractions] to the typical virial masses $M_{\rm d,vir}$ and peak circular speeds $V_{\rm d,pk}$ of dark-matter halos at $z=0$ {\it and} their most massive progenitors up to $z\la 4$--5. These scalings are constrained by theoretical work in the literature on the global structures, baryon contents and redshift-evolution of dark-matter halos (\S\ref{sec:modingred}) and by data in the literature for local elliptical galaxies (\S\ref{sec:modresults}). They are robust for normal early-type systems with stellar masses greater than several $\times10^9\,M_\odot$ at $z=0$, corresponding to velocity dispersions $\sigma_{\rm ap}(R_e)\ga60{\mbox{--}}70~{\rm km~s}^{-1}$, but are largely untested against lower-mass dwarf galaxies (see \S\ref{subsec:discussz0}). We applied the scalings to show in \S\ref{sec:msigma} how a relationship of the form $M_{\rm BH}\propto V_{\rm d,pk}^4$ at a range of redshifts $z_{\rm qso}>0$ (equation [\ref{eq:msig}]; \citealt{mcq12}) appears as a much more complicated $M_{\rm BH}$--$\sigma_{\rm ap}(R_e)$ relation at $z=0$. The specific form for an initial $M_{\rm BH}$--$V_{\rm d,pk}$ relation comes from a simplified theoretical analysis of {\it momentum-conserving} SMBH feedback in {\it isolated and virialised} gaseous protogalaxies with {\it non-isothermal} dark-matter halos. Some of the simplifying assumptions involved thus need to be relaxed and improved in future work. Meanwhile, the highly ``non-linear'' observable $M_{\rm BH}$--$\sigma_{\rm ap}(R_e)$ relation we infer from it does describe the data for local early types if the redshift of quasar-mode blow-out was $z_{\rm qso}\approx 2$--4. This range is reassuringly similar to the epoch of peak quasar density and SMBH accretion rate in the Universe. 
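Many of the halo quantities used above reduce to short numerical computations. As one hedged illustration (not code from this paper), the peak circular speed $V_{\rm d,pk}$ of an NFW halo can be located on a grid; the scaled peak radius $r_{\rm pk}\simeq 2.163\,r_s$ is a standard NFW result:

```python
import numpy as np

# Locate the peak of the circular-speed curve of an NFW halo,
#   V_c^2(r) = (G M_s / r) [ln(1 + x) - x/(1 + x)],  x = r/r_s,
# in scaled units (G = M_s = r_s = 1). The peak sits at the well-known
# x ~ 2.163, so V_d,pk is fixed entirely by the halo's scale radius and
# characteristic mass.
x = np.linspace(0.05, 10.0, 200000)
v2 = (np.log(1.0 + x) - x / (1.0 + x)) / x   # V_c^2 in units of G M_s / r_s
x_peak = x[np.argmax(v2)]
print(f"r_peak / r_s = {x_peak:.3f}")
```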
This lends support to the notion that the empirical $M$--$\sigma$ relation fundamentally reflects {\it some} close connection due to accretion feedback between SMBH masses in galactic nuclei and the {\it dark matter} in their host (proto)galaxies. It also demonstrates that the true, physical relationship between $M_{\rm BH}$ and stellar velocity dispersion at $z=0$ is not necessarily a pure power law. The shape in our analysis has an upwards bend around $\sigma_{\rm ap}(R_e)\approx 140~{\rm km~s}^{-1}$ (Figure \ref{fig:mbhsigma}), corresponding to stellar masses $M_{*,{\rm tot}}\approx3{\mbox{--}}4\times10^{10}\,M_\odot$ and halo masses $M_{\rm d,vir}(0)\approx 10^{12}\,M_\odot$ at $z=0$. This bend comes from a sharp maximum at these masses in the global stellar-to-dark matter fractions, $M_{*,{\rm tot}}/M_{\rm d,vir}(0)$ (e.g., \citealt{mos10}). Consequently, there is a sharp upturn in the dependence of halo circular speeds $V_{\rm d,pk}$ on the stellar $\sigma_{\rm ap}(R_e)$ (see Figures \ref{fig:allplots} and \ref{fig:mvirvpk}). Our models also show a flattening of $M_{\rm BH}$ versus $\sigma_{\rm ap}(R_e)$ at $z=0$ for velocity dispersions above $300~{\rm km~s}^{-1}$ or so, for any blow-out redshift $z_{\rm qso}>0$ but more so for higher $z_{\rm qso}$ (Figure \ref{fig:mbhsigma}). This is due to the way that dark-matter halo masses grow and circular speeds increase through hierarchical merging in a $\Lambda$CDM cosmology after $M_{\rm BH}$ is set by feedback and the halo properties at $z_{\rm qso}$ (see Figure \ref{fig:vpkz}). However, the values we calculate for $M_{\rm BH}$ include only the growth by accretion up to $z=z_{\rm qso}$; further growth through SMBH--SMBH coalescences in gas-poor mergers at lower redshifts is neglected. (The effects of such mergers on halo masses and circular speeds, and stellar velocity dispersions at $z=0$, {\it are} accounted for.) 
As discussed in \S\ref{subsec:msigma}, simulations by \citet{vol13} suggest that low-redshift merging has a significant effect on the SMBH masses in systems with large $\sigma_{\rm ap}(R_e)\ga 300$--$350~{\rm km~s}^{-1}$ at $z=0$. There, dry mergers can scatter $M_{\rm BH}$ values strongly upwards from the values at $z_{\rm qso}$, essentially erasing the flattening that might otherwise be observed at $z=0$ and ``saturating'' the empirical $M$--$\sigma$ relation. In galaxies with lower $\sigma_{\rm ap}(R_e)\la 300~{\rm km~s}^{-1}$, where most current data fall, such scatter up from feedback-limited SMBH masses will be much more modest in general. The expected $M_{\rm BH}$--$\sigma_{\rm ap}(R_e)$ relations at $z=0$ should then have the same basic shape as when late mergers are ignored. Although we have focussed on the observed $M$--$\sigma$ relation, other SMBH--bulge correlations exist that may be just as strong intrinsically. These include the $M_{\rm BH}{\mbox{--}}M_{\rm bulge}$ correlation and multivariate, ``fundamental-plane'' relationships between $M_{\rm BH}$ and non-trivial combinations of $M_{*,{\rm tot}}$, $R_e$ and $\sigma_{\rm ap}(R_e)$. They should also reflect any underlying SMBH--dark matter connection at some $z_{\rm qso}>0$, and the techniques of this paper can be applied to look at them as well. However, this will best be done with close attention also paid to the inevitable scatter around all of the scalings we have adopted for both stellar and dark-matter halo properties. It remains to be understood how the numerous individual sources of scatter combine to produce SMBH correlations with apparently so little net scatter at $z=0$. More sophisticated predictions of critical SMBH masses for quasar-mode blow-out in terms of protogalactic dark-matter halo properties are required. 
The simple relation $M_{\rm BH}\propto V_{\rm d,pk}^4$ that we have used makes very specific assumptions about the mechanism (e.g., momentum-driven) and the setting (spherical protogalaxies with no stars, initially virialised gas, smooth outflows) of the feedback that establishes it. We mentioned in \S\ref{subsec:mbhvhalo} and \S\ref{subsec:msigma} several ways to improve on these assumptions. Our work in this paper is readily adaptable to help test any refinements.
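For orientation, the origin of the fourth-power scaling can be illustrated with the classic momentum-driven blow-out estimate for an *isothermal* sphere (in the style of King 2003); this is only a simplified stand-in for the non-isothermal, $V_{\rm d,pk}$-based analysis used here, and the gas fraction below is an assumed round number:

```python
import math

# Classic momentum-driven blow-out mass for an isothermal protogalaxy:
# the Eddington thrust L_Edd/c = 4*pi*G*M_BH/kappa balances the weight
# 4*f_g*sigma^4/G of the swept-up gas shell, giving
#   M_BH = f_g * kappa * sigma^4 / (pi * G^2).
# This sketch only illustrates where the fourth-power scaling comes from;
# it is not the non-isothermal relation of the text.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
kappa = 0.034          # electron-scattering opacity, m^2 kg^-1
Msun = 1.989e30        # solar mass, kg
f_g = 0.16             # assumed gas fraction, ~ Omega_b/Omega_m

def m_bh_blowout(sigma_kms):
    """Critical black-hole mass (in Msun) for a velocity dispersion in km/s."""
    sigma = sigma_kms * 1.0e3
    return f_g * kappa * sigma**4 / (math.pi * G**2) / Msun

print(f"M_BH(sigma = 200 km/s) ~ {m_bh_blowout(200.0):.2e} Msun")
```

For $\sigma=200~{\rm km~s}^{-1}$ this gives a few $\times10^8\,M_\odot$, in the right neighbourhood of the observed relation, and doubling $\sigma$ raises the critical mass by the expected factor of 16.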
1607.07227_arXiv.txt
The operations model of the Paranal Observatory relies on the work of efficient staff to carry out all the daytime and nighttime tasks. This is highly dependent on adequate training. The Paranal Science Operations department (PSO) has a training group that devises a well-defined and continuously evolving training plan for new staff, in addition to broadening and reinforcing courses for the whole department. This paper presents the training activities for and by PSO, including recent astronomical and quality control training for operators, as well as adaptive optics and interferometry training of all staff. We also present some future plans.
\label{sec:intro} The Very Large Telescope (VLT) at Cerro Paranal in Chile is the European Southern Observatory's (ESO) premier site for observations at visible and infrared wavelengths. Starting routine operations in 1999, it represented at the time the largest single investment in ground-based astronomy ever made by the European community. At Paranal, ESO operates the four 8.2-m Unit Telescopes (UTs), each equipped with three instruments covering a wide range of wavelengths and a variety of technologies, from imagers, low- and high-resolution spectrographs, multiplex and integral field spectrographs, to polarimeters. Some of the instruments use adaptive optics technologies to increase their resolution. In addition, the VLT offers the possibility of combining the light from the four UTs to work as an interferometer, the Very Large Telescope Interferometer (VLTI), with its own suite of instruments, providing imagery and spectroscopy at the milliarcsecond level and, soon, astrometry at 10-microarcsecond precision. In addition to the 8.2-metre telescopes, the VLTI is complemented by four Auxiliary Telescopes (ATs) of 1.8-metre diameter, which improve its imaging capabilities and enable full nighttime use on a year-round basis. Two telescopes for imaging surveys are also in operation at Paranal: the VLT Survey Telescope (VST, 2.6-metre diameter) for the visible, and the Visible and Infrared Survey Telescope for Astronomy (VISTA, 4.1-metre) for the infrared. The comprehensive ensemble of telescopes and instruments available at Paranal is depicted in Fig.~\ref{fig:vlt}. \begin{figure} [htbp] \begin{center} \begin{tabular}{c} % \includegraphics[width=15cm]{vlt} \end{tabular} \end{center} \caption[example] { \label{fig:vlt} The Paranal Observatory telescopes and instruments. Instruments listed in blue are at the Cassegrain foci of the telescopes. Instruments listed in italics are not yet installed. Credit: ESO. 
} \end{figure} Each year, between 1800 and 2000 proposals are made for the use of ESO telescopes. This makes ESO the most productive astronomical observatory in the world: in 2015 alone, over 950 refereed papers based on ESO data were published. Moreover, research articles based on VLT data are cited, on average, twice as often as the typical astronomical paper. The very high efficiency of ESO's ``science machine'' is not due to chance, but is the outcome of a careful operational model, which encompasses the full cycle, from observing proposal preparation to planning and executing the observations, providing data reduction pipelines, checking and guaranteeing data quality, and finally making the data available to the whole astronomical community through a science archive. The operational model of the VLT rests on two possible modes: visitor (or classical) and service. In visitor mode, the astronomer travels to Paranal to execute their observations. In service mode, however, observations are queued and executed taking into account, in real time, their priorities and the atmospheric and astronomical conditions. In both cases, observations are carried out by the Paranal Science Operations (PSO) staff. This is obvious for service mode, but it is also necessary for visitor mode, given the complexity of the instruments, which cannot easily be taught to visiting astronomers. PSO staff support observing operations in both visitor mode (VM) and service mode (SM) at Paranal. The tasks to be performed include the support of visiting astronomers, the short-term (flexible) scheduling of queue observations, the calibration and monitoring of the instruments, and the assessment of the scientific quality of the astronomical data, with the ultimate goal of optimising the scientific output of this world-leading astronomical facility. 
The PSO department consists of about 65 staff\cite{Dumas}, composed of staff astronomers (who do 105 to 135 nights of duties), post-doctoral fellows (80 nights) and telescope and instrument operators (TIOs). This group did not exist on April 1, 1998, i.e. before First Light of the first UT; six years later, it already counted 56 members, a number that grew in parallel with the number of systems in operation. During the early years of operations, the recruitment and turnover of staff were particularly high. For example, during the first 6 years, on average more than 12 PSO staff were recruited each year\cite{Mathys}. While these numbers have now plateaued at lower levels, there is still considerable turnover of staff: in the last 3 years, 9 staff astronomers and 3 TIOs have been hired. Moreover, the post-doctoral fellowships are three-year contracts\footnote{ESO Fellows in Chile receive a $4^{\rm th}$ year which they may choose to spend at ESO Chile, in which case they would have no or reduced duties at the Observatory, but they can also choose to go to a Chilean institution or to a research group in any of the ESO member states.}, so by definition this leads to a large turnover. This implies a continuous need for a considerable training effort. Moreover, not only must the new members of the department be trained, but the training of the whole department must also be deepened. This article presents how this is done.
We have briefly presented the various training activities organised for the staff of the Paranal Science Operations department. These activities, carried out mostly with internal resources, now go well beyond the initial (and necessary) training of newcomers to the department, and aim at increasing the efficiency and motivation of the people who operate the Paranal telescopes and instruments. It is increasingly important to provide training on specialised topics such as adaptive optics and interferometry, as well as on modern programming languages. Such training activities will need to be developed further in the future to address the new challenges that come with increasingly sophisticated instruments. \pagebreak
1607.05708_arXiv.txt
Direct spectroscopic biosignature characterization (hereafter ``biosignature characterization'') will be a major focus for future space observatories equipped with coronagraphs or starshades. Our aim in this article is to provide an introduction to potential detector and cooling technologies for biosignature characterization. We begin by reviewing the needs. These include nearly noiseless photon detection at flux levels as low as $<0.001~\textrm{photons}~s^{-1}~\textrm{pixel}^{-1}$ in the visible and near-IR. We then discuss potential areas for further testing and/or development to meet these needs using non-cryogenic detectors (EMCCD, HgCdTe array, HgCdTe APD array), and cryogenic single photon detectors (MKID arrays and TES microcalorimeter arrays). Non-cryogenic detectors are compatible with the passive cooling that is strongly preferred by coronagraphic missions, but would add non-negligible noise. Cryogenic detectors would require active cooling, but in return deliver nearly quantum limited performance. Based on the flight dynamics of past NASA missions, we discuss reasonable vibration expectations for a large UV-Optical-IR space telescope (LUVOIR) and preliminary cooling concepts that could potentially fit into a vibration budget without being the largest element. We believe that a cooler that meets the stringent vibration needs of a LUVOIR is also likely to meet those of a starshade-based Habitable Exoplanet Imaging Mission.
\label{sec:intro} The search for life on other worlds looms large in NASA's 30-year strategic vision.\cite{2014arXiv1401.3741K} Already, several mission concept studies are either completed or underway that would use a larger than 8 meter aperture UV-Optical-IR space telescope equipped with a coronagraph or starshade to characterize potentially habitable exoEarths (\eg ATLAST, HDST, LUVOIR).\footnote{The acronyms stand for: LUVOIR = Large UV-Optical-IR Surveyor,\cite{2014arXiv1401.3741K} ATLAST = Advanced Technology Large Aperture Space Telescope,\cite{Bolcar:2015jh,Rauscher:2015hba,Rioux:2015hn} and HDST = High Definition Space Telescope\cite{2015arXiv150704779D}.} Alternatively, smaller starshade-based Habitable-Exoplanet Imaging Mission concepts exist.\cite{NASATownhall:2015tm} All would benefit from better visible and near-IR (VISIR; $\lambda=400~\textrm{nm} - 2.5~\mu\textrm{m}$) detectors than exist today. Moreover, because of different overall system design considerations, different solutions may turn out to be optimal depending on whether a mission is coronagraph or starshade based. Our aim in this article is to discuss a short list of technologies that we believe to be potentially capable of biosignature characterization for either coronagraph or starshade missions. Once a rocky exoplanet in the habitable zone has been found, biosignature characterization will be the primary tool for determining whether we think it harbors life. Biosignature characterization uses moderate resolution spectroscopy, $R=\lambda/\Delta\lambda>100$, to study atmospheric spectral features that are thought to be necessary for life, or that can be created by it (\eg \water, \oxygen, \ozone, \methane, \cotwo). We discuss these biosignatures in more detail in Sec.~\ref{sec:biosig} and the spectral resolution requirements for observing them in Sec.~\ref{sec:resolution}. Even using a very large space telescope, biosignature characterization is extremely photon starved. 
Ultra low noise detectors are needed, and true energy resolving single photon detectors would be preferred if they could be had without the vibration that is associated with conventional cryocoolers. Our aim in this article is to provide an introduction to the detector needs for biosignature characterization, and some of the emerging technologies that we believe hold promise for meeting them within the next decade. The technologies fall into two broad categories: (1) low noise detectors (including ``photon counting'') that are compatible with passive cooling and (2) true energy resolving single photon detectors that require active cooling. We draw a clear distinction between photon counting low noise detectors and single photon detectors. A photon counting detector is able to resolve individual photons, although the detection process still adds significant noise. For example, many kinds of photon counting detector have significant dark current and spurious charge generation at the ultra low flux levels that are encountered during biosignature characterization. A single photon detector, on the other hand, provides essentially noiseless detection of light. Noise in the single photon detectors discussed here manifests as an uncertainty in the energy of a detected photon rather than an uncertainty in the number of photons. The low noise detectors include electron multiplying charge coupled devices (EMCCD) for the visible and HgCdTe photodiode and avalanche photodiode (APD) arrays for the near-IR. With targeted investment, we believe that all can be improved beyond today's state of the art. Sec.~\ref{sec:btb} describes a low-risk, evolutionary route to improving these existing non-cryogenic detectors for use with conventional spectrographs. One advantage of this approach is that it completely retires the risks, cost, and complexities associated with a cryocooler. The disadvantages include increased noise and the need for dispersive spectrograph optics. 
The single photon detectors that we discuss are based on thin superconducting films and operate at $\rm T\approx 100~mK$. Cryogenic cooling is required to achieve these temperatures. In return for cryogenic cooling, single photon detectors promise noiseless (in the conventional astronomy sense), nearly quantum limited photon detection with built in energy resolution. The built in energy resolution offers the tantalizing prospect of non-dispersive imaging spectrometry, thereby eliminating most spectrograph optics. In this article, we focus on two single photon detectors that have already been used for astronomy and that offer the potential for multiplexing up to sufficiently large formats. These are microwave kinetic inductance device (MKID) arrays and transition-edge sensor (TES) microcalorimeter arrays. Sec.~\ref{sec:spd} discusses a path forward using single photon detectors that offers the potential for nearly quantum limited detector performance and non-dispersive imaging spectrometry if the cooling challenges can be met. Cryogenic cooling in the context of LUVOIR brings its own challenges. High performance space coronagraphs require tens of picometer wavefront error stability. This extreme stability is incompatible with the vibration from existing cryocoolers. Since ultra-low vibration cooling is a necessary prerequisite to using cryogenic single photon detectors on a LUVOIR, we briefly describe a few preliminary concepts for achieving it in Sec.~\ref{sec:cooling}. Although vibration will undoubtedly present challenges in starshade missions too, we believe that the coronagraphic LUVOIR represents a challenging ``worst case'' for cooler design studies. In the interest of brevity, we have limited discussion to a fairly short list of detector technologies that are either already under development, or that we view as particularly promising. One could easily add other technologies to those that are discussed. 
For example, scientific CMOS arrays have a wide consumer base and potentially provide sub-electron read noise with better radiation tolerance than CCDs because no charge transfer is required. Superconducting nanowire single photon detectors (SNSPD) may provide another route to cryogenic single photon detectors that, while not energy resolving, would still promise essentially noiseless detection. As the field matures, it may be desirable to revisit these and other technologies. Although the need for essentially noiseless detection is clear, no existing technology currently fulfills all of the needs.
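To make the photon starvation concrete, here is a rough back-of-the-envelope comparison at the quoted flux of $<0.001~\textrm{photons}~s^{-1}~\textrm{pixel}^{-1}$; the exposure time and spurious-count rate below are illustrative assumptions, not values from any mission study or specific device:

```python
import math

# Rough SNR comparison at ~0.001 photons s^-1 pixel^-1, for (a) a nearly
# noiseless single photon detector (Poisson-limited) and (b) a low noise
# detector with residual dark/spurious counts. All detector numbers here
# are illustrative assumptions only.
flux = 0.001            # photons s^-1 pixel^-1 (quoted biosignature flux)
t_exp = 10 * 3600.0     # one 10-hour exposure, s (assumed)
dark = 1e-4             # assumed spurious counts s^-1 pixel^-1

signal = flux * t_exp
snr_ideal = signal / math.sqrt(signal)                  # Poisson-limited
snr_noisy = signal / math.sqrt(signal + dark * t_exp)   # extra counts add variance

print(f"{signal:.0f} photons collected; SNR ideal {snr_ideal:.1f}, noisy {snr_noisy:.1f}")
```

Even a ten-hour exposure collects only a few dozen photons per pixel, which is why every spurious count matters.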
\label{sec:summary} We have discussed a broad suite of detector and cooling technologies for biosignature characterization using future space observatories such as LUVOIR and the Habitable-Exoplanet Imaging Mission. For easy reference, Tab.~\ref{tab:summary} summarizes some of these technologies, and the challenges with reference to the state-of-the-art. \begin{table}[t] \caption{Summary of where further work is desirable} \label{tab:summary} \begin{center} \includegraphics[width=.8\textwidth]{t30.pdf} \end{center} \end{table} For EMCCDs, improving radiation tolerance is arguably the greatest need. As is discussed in Sec.~\ref{sec:emccd}, radiation tolerance was not a design consideration for current generation EMCCDs. One should not be surprised to see the radiation-induced performance degradation that is typical for n-channel CCDs in space (\eg charge transfer efficiency degradation), and other artifacts that may be revealed at sub-electron noise levels (CIC is one example, but surprises are also possible). For LUVOIR and/or a Habitable Exoplanet Imaging Mission, we believe it would be wise to apply known CCD radiation hardening design features and fabrication processes to EMCCDs.\cite{Burt:2009ct} For risk mitigation, it may also make sense to explore similar detector architectures that promise greater radiation tolerance. It would also be desirable to improve CIC in EMCCDs, for which the current state-of-the-art is already close to ``good enough'' when new. For CIC, we believe that incremental improvements in operation and design hold good promise for meeting the need on the relevant timescale. There is still some room for improvement in near-IR photodiode arrays similar to the HxRGs that are being used for JWST, Euclid, and WFIRST. Although the current architecture seems unlikely to function as a single photon detector, significant incremental improvement (perhaps factors of 2-3 reduction in read noise) may be possible. 
A reasonable first step would be detailed characterization of existing HxRGs aimed at separating out the different contributors to the noise (photodiode, resistive interconnect, pixel source follower, other amplifiers, \etc). Near-IR APD arrays like those made by Selex may also be promising if the ``dark current'' can be reduced to $<0.001~e^-~s^{-1}$. The $\sim 10~e^-~s^{-1}$ gain corrected ``dark current'' of current devices is almost certainly dominated by ROIC glow, but there may still be significant work required to go from the to-be-determined leakage current of these devices to the $<0.001~e^-~s^{-1}$ that is needed. Superconducting MKID and TES arrays already function as single photon detectors and both have already been used for VISIR astronomy. Use of these technologies by LUVOIR is contingent upon developing ultra-low vibration cooling. If ultra-low vibration cooling is available, then the challenges for both MKID arrays and TES microcalorimeter arrays are similar. Higher energy resolution and better photon coupling efficiency are needed. If ultra-low vibration cooling is not available, then we believe MKID and TES microcalorimeter arrays may still be attractive for a starshade-based Habitable Exoplanet Imaging Mission because they would offer nearly quantum limited performance. With specific regard to MKID arrays, further work should include the development of VISIR MKID arrays with designs targeting the energy resolution and optical efficiency required for biosignature characterization. Several areas of investment will be needed. One expects significant resolution improvements over the state-of-the-art in the near-term from the development of broadband parametric amplifiers with nearly quantum-limited sensitivity, and from switching to MKID materials with greater uniformity in thin-film properties that will eliminate position-dependent broadening of the measured photon energy. 
In addition, reaching Fano-limited energy resolution will likely require designs that reduce VISIR MKID inductor volume by a factor on the order of 30 from current devices designed for the optical background in ground-based instruments, while at the same time managing to improve optical efficiency. Achieving high enough optical efficiency over the broad LUVOIR bandwidth will be challenging given the non-constant, reactive complex resistivity of MKID materials at VISIR frequencies. However, even achieving the Fano limit with currently favored MKID materials (transition temperature $\approx$ 1 K) will not be sufficient to reach biosignature characterization goals. Either VISIR MKIDs (and cooling systems) will need to be developed with lower $T_{c} \approx$ 0.17 K (operating T $\approx$ 20 mK) in order to give a better Fano limit, or else effort will be needed to optimize the TKID (membrane) style of detector in order to circumvent the Fano limit for VISIR MKIDs. Both MKIDs and TESs require ultra-low vibration cooling for use in a LUVOIR. For a Habitable Exoplanet Imaging Mission, the vibration requirements may be less stringent. For these technologies to be viable in all biosignature characterization mission architectures, we recommend the development of prototype technology for ultra-low vibration coolers. As a first step, studies are needed to examine the feasibility of a laminar-flow system, including a detailed computational effort to determine whether flow separation can be avoided. Once feasibility has been established, the most immediate need will be for techniques that can be used to verify the computational models in prototype components at the required nN levels. \appendix
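The $T_c$ trade described above can be illustrated with the standard Fano-limited resolving-power estimate for an MKID, $R \simeq \sqrt{\eta\,E/(F\,\Delta)}/2.355$ with gap energy $\Delta\approx1.76\,k_B T_c$. The conversion efficiency $\eta$ and Fano factor $F$ below are typical literature assumptions, not values from this paper:

```python
import math

# Fano-limited resolving power R = E/dE(FWHM) of an MKID, using the
# standard estimate R = sqrt(eta * E / (F * Delta)) / 2.355 with
# Delta ~ 1.76 k_B T_c. eta and F are assumed, typical literature values.
k_B = 8.617e-5      # Boltzmann constant, eV/K
eta = 0.57          # assumed photon-to-quasiparticle conversion efficiency
F = 0.2             # assumed Fano factor

def resolving_power(E_eV, T_c):
    """Fano-limited R for photon energy E_eV (eV) and transition temp T_c (K)."""
    Delta = 1.76 * k_B * T_c        # superconducting gap energy, eV
    N_qp = eta * E_eV / Delta       # mean number of quasiparticles created
    return math.sqrt(N_qp / F) / 2.355

E = 1240.0 / 400.0  # a 400 nm photon, ~3.1 eV
print(f"R(T_c = 1 K)    ~ {resolving_power(E, 1.0):.0f}")
print(f"R(T_c = 0.17 K) ~ {resolving_power(E, 0.17):.0f}")
```

Lowering $T_c$ from 1 K to 0.17 K improves the Fano-limited $R$ by $\sqrt{1/0.17}\approx2.4$, which is the motivation for the lower-$T_c$ development path mentioned above.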
1607.03632_arXiv.txt
{Many extra-solar planet searches presently devote substantial effort to the detection of Earth-like planets around low-mass stars. M dwarfs are considered ideal targets for Doppler radial velocity searches because their low masses and luminosities make low-mass planets orbiting in their habitable zones more easily detectable than those around higher-mass stars. Nonetheless, the occurrence rate of low-mass planets hosted by low-mass stars remains poorly constrained.} {Our M-dwarf radial velocity monitoring with HARPS-N within the GAPS (Global architectures of Planetary Systems) -- ICE (Institut de Ciències de l'Espai/CSIC-IEEC) -- IAC (Instituto de Astrof{\'{i}}sica de Canarias) project\thanks{http://www.oact.inaf.it/exoit/EXO-IT/Projects/Entries/2011/12/27\_GAPS.html} can contribute substantially to widening the current statistics through the in-depth analysis of accurate radial velocity observations in a narrow range of spectral sub-types (79 stars, from dM0 to dM3). Spectral accuracy will enable us to reach the precision needed to detect small planets of a few Earth masses. Our survey will contribute to the surveys devoted to the search for planets around M dwarfs, focusing on the M-dwarf population of the northern hemisphere, for which we will provide an estimate of the planet occurrence rate. } {We present here a long-duration radial velocity monitoring of the M1 dwarf star GJ\,3998 with HARPS-N, carried out to identify periodic signals in the data. Almost simultaneous photometric observations were carried out within the APACHE and EXORAP programs to characterize the stellar activity and to distinguish, among the periodic signals, those due to activity from those due to planetary companions. We run an MCMC simulation and use Bayesian model selection to determine the number of planets in this system, to estimate their orbital parameters and minimum masses, and to treat the activity noise properly. 
} {The radial velocities have a dispersion in excess of their internal errors due to at least four superimposed signals, with periods of 30.7, 13.7, 42.5 and 2.65 days. Our data are well described by a two-planet Keplerian model (13.7~d and 2.65~d) plus two sinusoidal functions accounting for stellar activity (30.7~d and 42.5~d). The analysis of spectral indices based on the Ca II H \& K and H$\alpha$ lines demonstrates that the periods of 30.7 and 42.5 days are due to chromospheric inhomogeneities modulated by stellar rotation and differential rotation. This result is supported by photometry and is consistent with the results on differential rotation of M stars obtained with $Kepler$. The shorter periods of $13.74\pm0.02$~d and $2.6498\pm0.0008$~d are well explained by the presence of two planets, with minimum masses of $6.26^{+0.79}_{-0.76}$\,M$_\oplus$ and $2.47\pm0.27$\,M$_\oplus$ and distances of 0.089 AU and 0.029 AU from the host, respectively.} {}
In the two decades since the discovery of the first giant planetary mass companion to a main sequence star \citep{may95} the search for and characterization of extrasolar planets has quickly developed to become a major field of modern-day astronomy. Thanks to concerted efforts with a variety of observational techniques (both from the ground and in space), thousands of confirmed and candidate planetary systems are known to date (e.g., http://www.exoplanet.eu - \citealp{sch11}), encompassing orders of magnitude in mass and orbital separation. The frontier today is being pushed ever closer to the identification of potentially habitable small-mass planets with a well-determined rocky composition similar to Earth's. Planetary systems harboring objects with these characteristics are likely to be discovered first around primaries with later spectral types than the Sun's. Stars in the lower main sequence (M dwarfs) constitute the vast majority ($>70-75\%$) of all stars, both in the Solar neighbourhood and in the Milky Way as a whole \citep{henry06,win15}. They are particularly promising targets for exoplanet search programs for a number of reasons. In particular, the favourable mass and radius ratios lead to readily detectable radial-velocity (RV) and transit signals produced by terrestrial-type planets. Furthermore, the low luminosities of M dwarfs imply that the boundaries of their habitable zones (HZ) are located at short separations \citep[typically between 0.02 AU and 0.2 AU, see e.g.][]{man07}, making rocky planets orbiting within them more easily detectable with present-day observing facilities than those around more massive stars \citep[e.g.][]{cha07}. Finally, the favorable planet-star contrast ratios for small stars enable the best opportunities in the near future for detailed characterization studies of small planets and their atmospheres \citep[e.g.][]{sea10}. 
\begin{figure} \centering \includegraphics[width=9.cm]{fig1a.ps} \includegraphics[width=9.cm]{fig1b.ps} \includegraphics[width=9.cm]{fig1c.ps} \caption{Radial velocity (top) and activity indices H$\alpha$ (middle) and $S$ index (bottom) time series for GJ\,3998 measured with the TERRA pipeline.} \label{fig:rv} \end{figure} While the first planets discovered around M dwarfs were Jovian-type companions \citep{del98,mar98,mar01}, it is now rather well observationally established that their frequency is lower than that of giant planets around solar-type hosts \citep[e.g.][and references therein]{soz14}, a result understood within the context of the core-accretion formation model \citep[e.g.][]{lau04}. The most recent evidence gathered by RV surveys and ground-based as well as space-borne transit search programs points instead towards the ubiquitousness of small (in both mass and radius) companions around M dwarfs. The outstanding photometric dataset of transit candidates around early M dwarfs from the Kepler mission has allowed \citet{gai16} and \citet{dre15} to derive cumulative occurrence rates of $2.3\pm0.3$ and $2.5\pm0.2$ planets with $1-4$ R$_\oplus$ for orbital periods $P$ shorter than 180 days and 200 days, respectively. Approximately one in two early M-type stars appears to host either an Earth-sized ($1-1.5$ R$_\oplus$) or a Super Earth ($1.5-2.0$ R$_\oplus$) planet \citep{dre15} with an orbital period $< 50$ days. Based on different recipes for the definition of the HZ boundaries and planetary atmospheric properties, \citet{kop13} and \citet{dre15} obtain frequency estimates of potentially habitable terrestrial planets ($1-2$ R$_\oplus$) around the Kepler early M-dwarf sample of $\eta_\oplus=48^{+12}_{-24}$\% and $\eta_\oplus=43^{+14}_{-9}$\%, respectively. 
The inference from RV surveys of early- and mid-M dwarfs instead indicates $0.88^{+0.55}_{-0.19}$ Super Earth planets per star with $P< 100$ d and $m\sin i=1-10$ M$_\oplus$, and a frequency of such planets orbiting within the host's HZ $\eta_\oplus=41^{+54}_{-13}$\% \citep{bon13}. The $\eta_\oplus$ estimates from Doppler and transit surveys thus seem to be in broad agreement. However, $a)$ the uncertainties associated with these occurrence rate estimates are still rather large, and $b)$ the known compositional degeneracies in the mass-radius parameter space for Super Earths \citep[e.g.][]{rog10} make the mapping between the $\eta_\oplus$ estimates based on (minimum) mass and radius not quite straightforward. Any additional constraints coming from ongoing and upcoming planet detection experiments targeting M dwarfs are then particularly valuable. We present here high-precision, high-resolution spectroscopic measurements of the bright ($V=10.83$ mag) M1 dwarf GJ\,3998, gathered with the HARPS-N spectrograph \citep{cos12} on the Telescopio Nazionale Galileo (TNG) as part of an RV survey for low-mass planets around a sample of northern-hemisphere early-M dwarfs. The HADES (HArps-n red Dwarf Exoplanet Survey) observing programme is the result of a collaborative effort between the Italian Global Architecture of Planetary Systems \citep[GAPS,][]{cov13,des13,por16} Consortium, the Institut de Ciències de l'Espai de Catalunya (ICE), and the Instituto de Astrof\'isica de Canarias (IAC). Analysis of the Doppler time-series, spanning $\sim2.4$ years, reveals the presence of a system of two super-Earths orbiting at 0.029 AU and 0.089 AU from the central star. After a short presentation of the HADES RV programme, in Sect.~\ref{2} we describe the observations and reduction process and in Sect.~\ref{obj} we summarize the atmospheric, physical and kinematic properties of GJ\,3998. In Sect.~\ref{3} we describe the RV analysis and discuss the impact of stellar activity. 
Sect.~\ref{sec:fot} presents the analysis and results of a multi-site photometric monitoring campaign. In Sect. \ref{sec:rvanalysis} we describe the Bayesian analysis, the model selection and derive the system parameters. We summarize our findings and conclude in Sect.~\ref{sec:concl}.
\label{sec:concl} We have presented in this paper the first results from the HADES programme conducted with HARPS-N at TNG. We have found a planetary system with two super-Earths hosted by the M1 dwarf GJ\,3998 by analysing the high-precision, high-resolution RV measurements in conjunction with almost-simultaneous photometric observations. We based our analysis on RV measurements obtained with the TERRA pipeline, which enables us to circumvent the potential spectral-line mismatches that can arise when using a CCF technique for M stars. The homogeneous analysis of the RV observations was carried out both with a frequentist approach and by comparing models with a varying number of Keplerian signals and different models of the stellar activity noise. The analysis of the radial velocity time series unveiled the presence of at least four significant periodic signals, two of which are linked to the activity of the host star and two to orbital periods: \begin{itemize} \item{P = 30.7 d, an estimate of the rotational period of the star;} \item{P = 42.5 d, indicative of a modulation of the stellar variability due to differential rotation;} \item{P = 2.6 d, orbital period of GJ\,3998b;} \item{P = 13.7 d, orbital period of GJ\,3998c.} \end{itemize} The conclusions on the activity-related periods are confirmed by the analyses of the activity indicators and of the photometric light curves from two independent sets of observations. The time series of the $S$ index and H$\alpha$ show periodic variations around the 30.7 d and 42.5 d signals. The photometric results from both programs confirm the presence of variations in the period range 30 d -- 32 d, highlighting the strong connection of the long RV periods to chromospheric activity and to its rotational modulation. 
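The frequentist part of this analysis rests on a periodogram search of the unevenly sampled RV time series. As an illustrative sketch only (the actual pipeline, its normalisation and its false-alarm statistics are not reproduced here), a minimal classical Lomb-Scargle periodogram recovers injected periods from synthetic, unevenly sampled data; the sampling, amplitudes and noise level below are arbitrary illustration values, not the GJ\,3998 measurements:

```python
import numpy as np

def lomb_scargle(t, y, periods):
    """Classical Lomb-Scargle power for unevenly sampled data."""
    y = y - y.mean()
    power = np.empty(len(periods))
    for i, p in enumerate(periods):
        w = 2.0 * np.pi / p
        # Phase offset tau makes the power invariant under time shifts.
        tau = np.arctan2(np.sum(np.sin(2.0 * w * t)),
                         np.sum(np.cos(2.0 * w * t))) / (2.0 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

# Synthetic, unevenly sampled RV curve with two injected sinusoids whose
# periods mimic the 13.74 d and 2.65 d signals (amplitudes, noise and
# sampling are arbitrary illustration values).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 870.0, 160))        # ~2.4 yr baseline, days
rv = (3.0 * np.sin(2.0 * np.pi * t / 13.74)
      + 1.5 * np.sin(2.0 * np.pi * t / 2.65 + 0.7)
      + rng.normal(0.0, 0.5, t.size))            # m/s, white noise

periods = np.linspace(2.0, 50.0, 20000)
power = lomb_scargle(t, rv, periods)
best = periods[np.argmax(power)]
print(f"strongest periodicity: {best:.2f} d")    # close to 13.74 d
```

A full analysis would additionally assess false-alarm probabilities and iterate the search on the residuals after subtracting each detected signal.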
Because of its smaller amplitude and weaker effect than the 30.7~d signal, the 42.5~d periodicity could not be detected in the photometric time series, whose time coverage is also worse than that of the spectroscopic data. For the same reason, the search for any robust correlation, such as $BV$ photometry vs. the H$\alpha$ or $S$ indices, is unfortunately inconclusive.\\ Our results show that the detection of small planets in these data could be hampered by activity-induced signals. As a target of the HADES programme, GJ\,3998 is included in several statistical studies aiming to scrutinize in depth the activity properties and rotation of M dwarfs (Perger et al. 2016, submitted, Maldonado et al. 2016, submitted, Su{\'a}rez Mascare{\~n}o et al., Scandariato et al., in preparation). In particular, the accurate analysis of activity indicators performed by Su{\'a}rez Mascare{\~n}o et al. (in preparation), which also includes 6 HARPS (ESO) observations of GJ\,3998 acquired more than five years before the start of the HADES programme, provides an estimate of $30.8\pm2.5$~d for the rotational period and the hint of an activity cycle 500-600 d long.\\ In general, differential rotation is not yet fully understood, see e.g. the discussion in \citet{kuk11}; empirical data are therefore preferable. \citet{rei03} analysed solar-type stars using the shape of spectral lines in the Fourier domain. They calibrated the ratio between the positions of the first two zeros of the power spectrum of the line profile as a function of the ratio $\alpha$ between the equatorial and polar rotation rates. They obtained an anticorrelation between differential rotation and equatorial velocity: differential rotation is mostly evident in slow rotators. Since GJ\,3998 is a slow rotator, this might indeed be the case. However, Reiners \& Schmitt only considered solar-type stars and did not show whether such trends should also appear in time series of M dwarfs. 
The most relevant data for our purposes are therefore those provided by the analysis of the exquisitely sensitive time series obtained with Kepler. Using Kepler data, \citet{rei15} analysed an extremely wide dataset of stars, ranging from solar-type stars to M dwarfs, and confirmed the relation found by Reiners \& Schmitt between differential rotation and period. We may directly compare our result for GJ\,3998 with Figure 9 of Reinhold \& Gizon, where they plot the minimum rotation period (in our case 30.7 d) against the parameter $\alpha = (P_{\rm max}-P_{\rm min})/P_{\rm max}$. For GJ\,3998, $\alpha = (42.5-30.7)/42.5 = 0.277$. The point for GJ\,3998 falls exactly in the middle of the points for the M stars. We conclude that attributing the two activity-related periods of GJ\,3998 to rotation is fully consistent with what we know about differential rotation. We run an MCMC simulation and use Bayesian model selection to determine the number of planets in the system and estimate their orbital parameters and minimum masses. We test several different models, varying the number of planets, the eccentricity and the treatment of stellar activity noise. We select a model involving two Keplerian signals, with a circular orbit for the inner planet and the eccentricity of the outer planet treated as a free parameter. The short-term stellar activity is modelled with a Gaussian process approach and the long-term activity noise is not included. \begin{figure}% \includegraphics[width=9cm]{fig14.ps} \caption{Minimum mass vs. orbital period diagram for known Neptune-type and Super Earth planets around M dwarfs (http://www.exoplanet.eu - black dots, 2016 June 17); the green stars indicate GJ\,3998b and GJ\,3998c. 
Filled dots indicate planets with known radius.} \label{fig:pianeti} \end{figure} The two planets appear to have minimum masses compatible with those of super-Earths: the inner planet has a minimum mass of $2.47\pm0.27$\,M$_\oplus$ at a distance of 0.029 AU from the host star, while the outer has a minimum mass of $6.26^{+0.79}_{-0.76}$\,M$_\oplus$ and a semi-major axis of 0.089 AU. \\A very rough estimate of the equilibrium temperatures of the planets, from the Stefan--Boltzmann law assuming a uniform equilibrium temperature over the entire planet and zero albedo, gives T$_{eq}$ = 740 K for the inner and T$_{eq}$ = 420 K for the outer planet. \\ These close distances strongly call for a search for potential transits in the coming months, in particular for the inner planet, through a proposal for observing time on the {\it Spitzer Space Telescope}. This planet has an interestingly high geometric transit probability of $\approx$\,8\%. With a minimum mass of $2.47\pm0.27$\,M$_\oplus$, GJ\,3998b's radius should most likely lie somewhere between $\approx$\,1 R$_\oplus$ (metal-rich composition) and $\approx$\,1.65 R$_\oplus$ (ice-rich composition) \citep{sea07}. Given its host star's radius of $\approx$\,0.5 R$_\odot$, this range of planetary sizes corresponds to plausible transit depths between 325 and 935 ppm. We thus proposed to use 20 hr of {\it Spitzer} time to monitor a 2-$\sigma$ transit window of the innermost planet in the GJ\,3998 system (P.I. M. Gillon). A potential transit could provide a constraining point in the mass-radius diagram of known planets, also enabling the determination of the mean density and a better characterization of the system architecture with future follow-up observations. In particular, there is currently no object with an accurately measured mass (uncertainty $\le$\,20\%) in the range 2 -- 3 M$_\oplus$. This mass gap is even larger if we consider only M-dwarf planets, as is evident in Fig. \ref{fig:pianeti}, which shows the minimum mass vs. 
orbital period diagram for known Neptune-type and Super Earth planets around main sequence M stars, along with the location of the GJ\,3998 planets. Should a transit occur, the small distance from the host and the brightness of GJ\,3998 (V = 10.83) would make this star a natural target for follow-up observations with the ESA $CHEOPS$ space telescope, and also one of the most interesting M-dwarf targets for a detailed atmospheric characterization.
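The back-of-the-envelope numbers quoted above (equilibrium temperatures, transit probability and transit depths) can be checked directly. In the sketch below the stellar radius of 0.5 R$_\odot$ comes from the text, while the effective temperature of $\approx$3700 K is an assumed value typical of an M1 dwarf, not a measurement quoted here; small differences from the quoted 325--935 ppm depths reflect the adopted constants:

```python
import math

# R_star comes from the text; T_eff is an assumed, typical M1-dwarf
# value (not quoted in this paper).
T_EFF = 3700.0            # K
R_SUN = 6.957e8           # m
R_EARTH = 6.371e6         # m
AU = 1.496e11             # m
r_star = 0.5 * R_SUN

def t_eq(a_au):
    """Zero-albedo, uniform-temperature equilibrium temperature (K)."""
    return T_EFF * math.sqrt(r_star / (2.0 * a_au * AU))

def transit_depth_ppm(r_planet_earth):
    """Geometric transit depth (Rp/Rs)^2 in parts per million."""
    return 1e6 * (r_planet_earth * R_EARTH / r_star) ** 2

def transit_probability(a_au):
    """Geometric transit probability ~ R_star/a for a circular orbit."""
    return r_star / (a_au * AU)

print(f"T_eq at 0.029 AU: {t_eq(0.029):.0f} K")          # ~740 K
print(f"T_eq at 0.089 AU: {t_eq(0.089):.0f} K")          # ~420 K
print(f"depth, 1.00 R_earth: {transit_depth_ppm(1.0):.0f} ppm")
print(f"depth, 1.65 R_earth: {transit_depth_ppm(1.65):.0f} ppm")
print(f"transit probability at 0.029 AU: {transit_probability(0.029):.1%}")
```

With these constants the depths come out near 335 and 913 ppm and the transit probability near 8\%, consistent with the values quoted in the text.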
16
7
1607.03632
1607
1607.03318_arXiv.txt
\noindent We quantify the impact that a variety of galactic and environmental properties have on the quenching of star formation. We collate a sample of $\sim$ 400,000 central and $\sim$ 100,000 satellite galaxies from the Sloan Digital Sky Survey Data Release 7 (SDSS DR7). Specifically, we consider central velocity dispersion ($\sigma_{c}$), stellar, halo, bulge and disk mass, local density, bulge-to-total ratio, group-centric distance and galaxy-halo mass ratio. We develop and apply a new statistical technique to quantify the impact on the quenched fraction ($f_{\rm Quench}$) of varying one parameter, while keeping the remaining parameters fixed. For centrals, we find that the $f_{\rm Quench} - \sigma_{c}$ relationship is tighter and steeper than for any other variable considered. We compare to the Illustris hydrodynamical simulation and the Munich semi-analytic model (L-Galaxies), finding that our results for centrals are qualitatively consistent with their predictions for quenching via radio-mode AGN feedback, hinting at the viability of this process in explaining our observational trends. However, we also find evidence that quenching in L-Galaxies is too efficient and quenching in Illustris is not efficient enough, compared to observations. For satellites, we find strong evidence that environment affects their quenched fraction at fixed central velocity dispersion, particularly at lower masses. At higher masses, satellites behave identically to centrals in their quenching. Of the environmental parameters considered, local density affects the quenched fraction of satellites the most at fixed central velocity dispersion.
Understanding why galaxies stop forming stars is an important unresolved question in the field of galaxy formation and evolution. Only $\sim$10\% of baryons reside within galaxies (e.g., Fukugita \& Peebles 2004; Shull et al. 2012), yet since galaxies lie at nodes in the cosmic web corresponding to local minima in the gravitational potential, naively one would expect far more baryons to collect in galaxies, ultimately forming more stars. Theoretical models offer a wide range of solutions to this problem, relying on the physics of gas, stars, and black hole accretion disks as so-called `baryonic feedback' (e.g., Cole et al. 2000; Croton et al. 2006, Bower et al. 2006, 2008; Somerville et al. 2008; Guo et al. 2011; Vogelsberger et al. 2014a,b; Henriques et al. 2015; Schaye et al. 2015; Somerville et al. 2015). However, observational studies are required to test these models and provide evidence for their range and applicability. Observationally, the fraction of quenched (passive/non-star forming) galaxies in a given population has been shown to have a strong dependence on galaxy stellar mass (e.g., Baldry et al. 2006; Peng et al. 2010, 2012) and galaxy structure, e.g. bulge-to-total light/mass ratio, $B/T$, or S\'{e}rsic index, $n_{S}$ (e.g., Driver et al. 2006; Cameron et al. 2009; Cameron \& Driver 2009; Wuyts et al. 2011; Mendel et al. 2013; Bluck et al. 2014; Lang et al. 2014; Omand et al. 2014). Additionally, the quenched fraction depends on environment, particularly the surface density of galaxies in a given region of space, the halo mass of the group or cluster, and the distance of a galaxy from the centre of its group (e.g., Balogh et al. 2004; van den Bosch et al. 2007, 2008; Peng et al. 2012; Woo et al. 2013; Bluck et al. 2014). 
It has become evident that understanding quenching processes in galaxies requires separate consideration of central and satellite galaxies, since the mechanisms for quenching star formation in these systems most likely differ (e.g., Peng et al. 2012; Woo et al. 2013; Bluck et al. 2014; Knobel et al. 2015). Central galaxies are most commonly defined as the most massive galaxy in their group or cluster, with satellites being any other group member (e.g., Yang et al. 2007, 2009). The dominant galaxy in any given dark matter halo is taken to be the central, so isolated galaxies are considered to be the central galaxy of their group of one. Observationally, satellites in general depend on both intrinsic and environmental parameters for their quenching, whereas centrals depend primarily on intrinsic properties (e.g., Peng et al. 2012). In many simulations and models, the quenching of central galaxies is governed primarily by AGN feedback and the quenching of satellite galaxies is governed primarily by environmental processes, such as strangulation or stripping (e.g., Guo et al. 2011; Vogelsberger et al. 2014a,b; Henriques et al. 2015; Schaye et al. 2015; Peng et al. 2015; Somerville et al. 2015). More recent work has linked the quenched (or red) fraction of large populations of galaxies to the central density within 1 kpc (Cheung et al. 2012, Fang et al. 2013, Woo et al. 2015), the central velocity dispersion (Wake et al. 2012), and to the mass of the galactic bulge (Bluck et al. 2014, Lang et al. 2014, Omand et al. 2014). An artificial neural network (ANN) analysis performed by Teimoorinia, Bluck \& Ellison (2016) established that for central galaxies the most accurate predictions for whether a galaxy will be star forming or not are given by central velocity dispersion, which outperforms all other variables considered, including bulge mass, stellar mass and halo mass. 
All of these inner-region galaxy properties are expected to correlate strongly with the mass of the central black hole (e.g., Magorrian et al. 1998; Gebhardt et al. 2000; Ferrarese \& Merritt 2000, Haring \& Rix 2004, McConnell et al. 2011; McConnell \& Ma 2013; Saglia et al. 2016) and hence may provide qualitative support for the AGN feedback driven quenching paradigm. However, it is certainly conceivable that other quenching processes could give rise to these trends without AGN feedback. Since the idea that most galaxies contain a supermassive black hole was first suggested (e.g., Kormendy \& Richstone 1995), the energy released from forming these objects has become a popular mechanism for regulating gas flows and star formation in simulations, particularly for massive galaxies (e.g., Croton et al. 2006; Bower et al. 2006, 2008; Somerville et al. 2008; Guo et al. 2011; Vogelsberger et al. 2014a,b; Henriques et al. 2015; Schaye et al. 2015). In fact, substantial feedback from accretion around supermassive black holes is required in cosmological semi-analytic models, semi-empirical models, and hydrodynamical simulations to achieve the steep slope of the high-mass end of the galaxy stellar mass function (e.g., Vogelsberger et al. 2014a,b; Henriques et al. 2015; Schaye et al. 2015). Observationally, direct measurements of AGN driven winds in galaxies and radio jet induced bubbles in galaxy haloes have provided evidence for the mechanisms by which AGN feedback can affect galaxies, but typically only for a very small number of galaxies (e.g., McNamara et al. 2000; Nulsen et al. 2005; McNamara et al. 2007; Dunn et al. 2010; Fabian 2012; Cicone et al. 2013; Liu et al. 2013; Harrison et al. 2014, 2016). Hence, whether or not AGN feedback actually quenches galaxies in statistically significant numbers remains an open question. 
Alternatives to AGN feedback driven quenching of central galaxies do exist in the theoretical literature, and there is some observational support for these as well. Virial shock heating of gas in haloes above some critical dark matter halo mass ($M_{\rm crit} \geq 10^{12} M_{\odot}$) can lead to a stifling of gas supply and hence an eventual shutting off of star formation in galaxies (e.g., Dekel \& Birnboim 2006; Dekel et al. 2009; Dekel et al. 2014). Recent observations suggest that halo mass is more constraining of the quenched fraction of centrals than stellar mass, qualitatively in line with this view (e.g., Woo et al. 2013). However, the stronger dependence of central galaxy quenching on bulge mass and central density (e.g., Bluck et al. 2014, Woo et al. 2015) implies that this cannot be the sole, or dominant, route to quenching centrals. Further to this, elevated gas depletion and supernovae feedback in galaxy mergers, and the growth of the central potential and its stabilizing influence on giant molecular cloud collapse, have both been invoked as potential alternatives to the more commonly utilised AGN feedback (e.g., Martig et al. 2009; Darg et al. 2010; Moreno et al. 2013). To fully distinguish between these various processes, careful comparison of observational data to simulations and models must be made. Satellites are potentially subject to a wider range of physical quenching processes than centrals, resulting from their relative motion through the hot gas halo, the deeper group potential, and galaxy--galaxy tidal interactions. Processes such as ram pressure stripping, harassment, strangulation from removal of the satellites' hot gas halo, and pre-processing in groups prior to the cluster environment can all lead to a removal of gas or gas supply and hence a reduction and eventual cessation of star formation (e.g., Balogh et al. 2004; Cortese et al. 2006; Font et al. 2008; Tasca et al. 2009; Peng et al. 2012; Hirschmann et al. 
2013; Wetzel et al. 2013). Additionally, if a central galaxy enters a group or cluster environment for the first time, transitioning to becoming a satellite, it will no longer reside at a local gravitational minimum in the cosmic web. Thus, cold gas streams will no longer feed the new satellite galaxy and hence this will also contribute to its star formation quenching (e.g., Guo et al. 2011; Henriques et al. 2015). It is important to stress that all of these environmentally dependent quenching processes work in addition to the mass-correlating quenching associated with centrals, and thus that we might expect to see evidence for two distinct regimes in satellite quenching, one where environment dominates and one where internal properties dominate. In Bluck et al. (2014) we conclude that `bulge mass is king' in the sense that bulge mass is a tighter and steeper correlator to the quenched fraction for centrals than stellar mass, halo mass, disk mass, local galaxy density, and galactic structure ($B/T$). For a smaller list of variables (not including bulge or halo mass) Wake et al. (2012) established that central velocity dispersion outperforms stellar mass, morphology and environment in constraining the quenching of a general population of local galaxies. Recently, Teimoorinia, Bluck \& Ellison (2016) found strong evidence from an ANN technique that central velocity dispersion is the best single variable for parameterizing the quenching of centrals, improving upon even bulge mass. Additionally, Cheung et al. (2012), Fang et al. (2013) and Woo et al. (2015) find strong evidence for the central stellar mass density within 1 kpc being a particularly tight correlator to the quenched fraction. This quantity is also demonstrated to scale tightly with both bulge mass and central velocity dispersion. Taken together, it is clear that a high central mass concentration and hence central velocity dispersion is a prerequisite for quenching central galaxies. 
The primary motivation for this paper is to expand on the work of Wake et al. (2012), Bluck et al. (2014) and Teimoorinia et al. (2016) by investigating the impact on the quenched fraction of central and satellite galaxies from varying galaxy and environmental properties at fixed central velocity dispersion. For centrals, this allows us to look for additional dependencies of quenching, whilst controlling for the parameter which matters most statistically. For satellites, fixing the central velocity dispersion allows us to effectively control for the most important intrinsic parameter before studying the impact of environment on these systems. We then compare these results to a cosmological hydrodynamical simulation (Illustris, Vogelsberger et al. 2014a,b) and a semi-analytic model (the Munich model of galaxy formation: L-Galaxies, Henriques et al. 2015), to gain insight into the possible physical processes responsible for our observed results. The paper is structured as follows. We give a review of our data sources and measurements in Section 2, and define our quenched fraction method in Section 3. In Section 4 we give a brief overview of our results. Section 5 presents our results for central galaxies, including a new method for ascertaining the statistical influence on the quenched fraction of varying a given galaxy property at fixed other galaxy properties. We discuss the possible interpretations of our results for centrals in Section 6, and make a detailed comparison to a cosmological simulation and a semi-analytic model. In Section 7 we present our results for satellites and compare them to the centrals. We conclude in Section 8. We also include two appendices, the first giving an example of our area statistics approach (Appendix A) and the second showing the stability of our results to different scaling laws (Appendix B). 
Throughout the paper we assume a $\Lambda$CDM cosmology with H$_{0}$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{m}$ = 0.3, $\Omega_{\Lambda}$ = 0.7, and adopt AB magnitude units.
In this paper, we explore the dependence of central and satellite galaxy quenching on a variety of physical galaxy and environmental properties. We start with a sample of $\sim$ half a million SDSS galaxies (80\% centrals and 20\% satellites) at z $<$ 0.2. We quantify the quenched fraction dependence on galaxy properties at fixed central velocity dispersion, which has previously been found to be the tightest correlator to quenching for central galaxies (Teimoorinia, Bluck \& Ellison 2016). At a fixed central velocity dispersion, we find that satellite galaxies are more frequently quenched than central galaxies, with inner satellites (within 0.1 virial radii of their centrals) being more frequently quenched than the general satellite population (see Fig. 2). This effect is more pronounced at lower central velocity dispersions and disappears entirely for $\sigma_{c} >$ 250 km/s. Furthermore, the $f_{\rm Quench} - \sigma_{c}$ relationship is steep for centrals, varying from $\sim$ 0.05 -- 0.95 across the range we probe, and progressively less steep for satellites and inner satellites. Qualitatively, this result is consistent with central galaxies being quenched by AGN feedback, given the tight observed relationships between central velocity dispersion and black hole mass, and the dependence of AGN-driven quenching on black hole mass in simulations and models. However, the quenching of satellites and inner satellites cannot be driven by AGN feedback at low central velocity dispersion, suggesting that other (most probably environmental) processes must be important in their quenching when they depart from centrals at low masses. For central galaxies, we confirm the prior result that central velocity dispersion is more predictive of quenching than any of the following properties: stellar mass, halo mass, bulge mass, disk mass, $B/T$ structure and local density ($\delta_{5}$) (e.g. Wake et al. 2012; Teimoorinia et al. 2016). 
Moreover, we find that varying stellar, halo or bulge mass or local density (even by three orders of magnitude) has little if any effect on the quenched fraction at fixed central velocity dispersion for centrals. This indicates that these parameters cannot be causally connected to central galaxy quenching, which provides powerful new constraints on the mechanism(s) which may be responsible for causing quenching in these galaxies. In Section 5.2 we develop a new technique for ascertaining and quantifying the impact on quenching of varying one parameter at a fixed other parameter. In particular, we define two statistics: the area enclosed between the quenched fraction curves for the upper and lower 50\% ranges of a secondary parameter at fixed primary parameter, and the average difference between the quenched fractions of those upper and lower 50\% ranges. The former indicates the tightness of the quenched fraction dependence on the primary variable, and the latter additionally indicates the directionality of the trend. For centrals, we find a strong positive effect on the quenched fraction from varying central velocity dispersion at fixed values of all of the other variables considered in this work. Most of the other variables have little to no effect on quenching at fixed central velocity dispersion. However, $B/T$ and disk mass do have a statistically significant effect, although smaller in magnitude than that of central velocity dispersion. This is most probably due to these parameters correlating with gas mass and hence being related to the amount of work which needs to be done to quench the galaxy. Given the lack of impact on the quenched fraction of halo mass and stellar mass, it is highly improbable that halo mass quenching, stellar feedback or supernova feedback can be responsible for central galaxy quenching. 
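The logic of these statistics can be sketched schematically. The toy implementation below (not the paper's code) bins a synthetic catalogue in a primary variable, splits each bin at the median of a secondary variable, and averages the offset between the two quenched fractions; if quenching depends only on $\sigma_{c}$, a merely correlated secondary variable yields a near-zero offset at fixed $\sigma_{c}$, while $\sigma_{c}$ itself produces a large offset at fixed secondary:

```python
import numpy as np

def mean_offset(primary, secondary, quenched, nbins=20):
    """Average |f_Q(upper half of secondary) - f_Q(lower half)| over
    quantile bins of the primary variable (schematic version of the
    offset/area statistics described above)."""
    edges = np.quantile(primary, np.linspace(0.0, 1.0, nbins + 1))
    diffs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (primary >= lo) & (primary < hi)
        if m.sum() < 50:
            continue
        med = np.median(secondary[m])
        upper = m & (secondary >= med)
        lower = m & (secondary < med)
        diffs.append(quenched[upper].mean() - quenched[lower].mean())
    return float(np.mean(np.abs(diffs)))

# Toy catalogue in which quenching depends only on sigma_c (all numbers
# are arbitrary illustration values).
rng = np.random.default_rng(0)
n = 200_000
sigma = rng.normal(2.0, 0.2, n)                        # "log sigma_c"
p_quench = 1.0 / (1.0 + np.exp(-12.0 * (sigma - 2.0)))
quenched = rng.random(n) < p_quench
proxy = sigma + rng.normal(0.0, 0.2, n)                # correlated, non-causal

off_proxy_at_fixed_sigma = mean_offset(sigma, proxy, quenched)
off_sigma_at_fixed_proxy = mean_offset(proxy, sigma, quenched)
print(off_proxy_at_fixed_sigma, off_sigma_at_fixed_proxy)
```

This asymmetry, a small offset one way and a large one the other, is the signature used in the text to argue that the other parameters are not causally connected to central galaxy quenching.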
However, the strong observed correlations between central velocity dispersion and supermassive black hole mass do present an interesting opportunity for explanation of our results via AGN feedback. In Section 6, we compare the quenching of centrals in Illustris and L-Galaxies to our observational (SDSS) results. In both models, the quenched fraction - black hole mass relationship is significantly tighter than the stellar mass or halo mass relation, qualitatively in agreement with observations. However, we find a quenching threshold (defined as the black hole mass at which 50\% of galaxies are quenched) of $10^{6} M_{\odot}$ in the Munich model and $2 \times 10^{8} M_{\odot}$ in Illustris, compared to $2 \times 10^{7} M_{\odot}$ in the SDSS (assuming the scaling law of Saglia et al. 2016). This suggests that quenching via AGN feedback may be too efficient (as a function of black hole mass growth) in L-Galaxies and too inefficient in Illustris, compared to local galaxies. We also consider whether evolutionary systematics (e.g., via size evolution) can give rise to the observed tightness of the $f_{\rm Quench} - \sigma_{c}$ relationship, without any causal connection. We perform a test using the fraction of green valley ({\it quenching}) galaxies. We find that central velocity dispersion remains a significantly tighter correlator to the quenched fraction than stellar or halo mass, even for galaxies currently undergoing transformation in their star forming state. This implies that evolutionary systematics, which can affect the quenched fraction, are not ultimately responsible for the dependence of central galaxy quenching on central velocity dispersion, since this dependence already exists in galaxies which are contemporaneously quenching. 
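The conversion between these black hole mass thresholds and central velocity dispersion goes through an $M_{BH}-\sigma$ scaling. The sketch below uses a placeholder power law ($M_{BH} \propto \sigma^{5}$, normalised to $2\times10^{8}$ M$_\odot$ at 200 km/s) purely for illustration; these are ballpark values, not the Saglia et al. (2016) coefficients adopted in the paper:

```python
# Placeholder M_BH - sigma power law: M_BH = M0 * (sigma / 200 km/s)^BETA.
# M0 and BETA are ballpark illustration values, NOT the Saglia et al.
# (2016) coefficients used in the paper.
M0, BETA = 2.0e8, 5.0

def sigma_from_mbh(m_bh):
    """Invert the assumed scaling: sigma_c in km/s from M_BH in Msun."""
    return 200.0 * (m_bh / M0) ** (1.0 / BETA)

# 50%-quenched black hole mass thresholds quoted above.
for label, m_bh in [("L-Galaxies", 1.0e6), ("SDSS", 2.0e7),
                    ("Illustris", 2.0e8)]:
    print(f"{label}: M_BH = {m_bh:.0e} Msun -> "
          f"sigma_c ~ {sigma_from_mbh(m_bh):.0f} km/s")
```

Under any such monotonic scaling the ordering is preserved: the L-Galaxies threshold maps to a lower $\sigma_{c}$ than the SDSS one, and the Illustris threshold to a higher $\sigma_{c}$, which is the sense of "too efficient" and "not efficient enough" used in the text.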
For satellites, we find that the environmental metrics we consider (i.e., local density, halo mass, satellite-halo mass ratio, and group centric distance) all have a significant effect on the quenched fraction at fixed central velocity dispersion (see Section 7), unlike for centrals which experience very little dependence on halo mass or local density at fixed central velocity dispersion. Using the area statistics approach we developed for centrals, we find that local density engenders the most significant perturbation on the $f_{\rm Quench} - \sigma_{c}$ relationship, followed jointly by halo mass and mass ratio, with group centric distance leading to the smallest impact on quenching. One possibility is that local density simply represents a good average quantity, sensitive to the mass of the group or cluster and the location of the satellite within it. However, it is possible that this ranking gives more direct information on what mechanisms are likely responsible for satellite galaxy quenching. For example, if galaxy - galaxy interactions dominate over galaxy - halo interactions, this would naturally lead to similar results to what we observe. In summary, we find the tightest correlation between quenched fraction and central velocity dispersion for central galaxies, tighter than for any other parameter considered in this work. Moreover, the $f_{\rm Quench} - \sigma_{c}$ relationship is largely unaffected by varying other galaxy parameters for centrals, whereas the quenched fraction dependence on each of the other galaxy parameters is heavily affected by varying central velocity dispersion. The invariance of the dependence of central galaxy quenching on central velocity dispersion with other galaxy parameters suggests that this may be a causal relationship. If so, it is most likely explained by AGN feedback given the observed $M_{BH} - \sigma_{c}$ relation. 
Furthermore, our observational results are qualitatively in agreement with the predictions from a hydrodynamical simulation and semi-analytic model, both of which quench galaxies via AGN feedback in the radio mode. However, the details of our comparison do motivate further work in the implementation of quenching in the model and simulation, since the former is too efficient in its quenching and the latter is not efficient enough. Finally, we find that central velocity dispersion is the most significant {\it intrinsic} parameter for satellite quenching; although environment has a much larger impact on satellites than centrals. Thus, additional quenching mechanisms are clearly needed for satellite galaxies over centrals, which must be strongly related to environment, particularly local galaxy density which performs best out of the environmental parameters we consider.
16
7
1607.03318
1607
1607.03774_arXiv.txt
We introduce a parametric family of models to characterize the properties of astrophysical systems in a quasi-stationary evolution under the incidence of evaporation. We start from a one-particle distribution $f_{\gamma}\left(\mathbf{q},\mathbf{p}|\beta,\varepsilon_{s}\right)$ that considers an appropriate deformation of the Maxwell-Boltzmann form with inverse temperature $\beta$, in particular, a power-law truncation at the escape energy $\varepsilon_{s}$ with exponent $\gamma>0$. This deformation is implemented using a generalized $\gamma$-exponential function obtained from the \emph{fractional integration} of the ordinary exponential. As shown in this work, this proposal generalizes models of tidal stellar systems that predict particle distributions with \emph{isothermal cores and polytropic haloes}, e.g.: Michie-King models. We analyze the thermodynamic features of these models and their associated distribution profiles. A nontrivial consequence of this study is that profiles with isothermal cores and polytropic haloes are only obtained for low energies whenever the deformation parameter $\gamma<\gamma_{c}\simeq 2.13$. \newline \newline PACS numbers: 05.20.-y, 05.70.-a \newline Keywords: astrophysical systems, evaporation, thermo-statistics \newline E-mail: yuvineza.gomez@gmail.com; lvelazquez@ucn.cl
\begin{figure}[t] \begin{center} \includegraphics[width=4.5in]{entropy.eps} \caption{Energy dependence of the entropy difference $\Delta S=S-S_{C}$ obtained from numerical integration using definition (\ref{entropy.integral}). Only the upper branch of the microcanonical caloric curves shown in figure \ref{Curvacalorica_1.eps} corresponds to stable quasi-stationary configurations, because these configurations exhibit the higher entropy.} \label{Entropy_1.eps} \end{center} \end{figure} \subsection{Generalities} Many studies of astrophysical systems and cosmological problems \cite{Binney,chandra} start by assuming a polytropic dependence between the pressure $p$ and the particles density $\rho$: \begin{equation}\label{state.eq} p=C \rho^{\gamma^{*}}, \end{equation} where $C$ is a certain constant and $\gamma^{*}$ is the so-called \emph{polytropic index}. The phenomenological state equation (\ref{state.eq}) can be combined with the condition of hydrostatic equilibrium (\ref{hidro}) to derive a power-law relation between the density and the dimensionless potential $\Phi$: \begin{equation} \rho=K\Phi^{n}, \end{equation} where $K=\left[(\gamma^{*}-2)/\gamma^{*} m \beta C\right]^{n}$ and the exponent $n\equiv1/(\gamma^{*}-1)\geq 0$. The marginal case $\gamma^{*}=1$ is also admissible. The condition of hydrostatic equilibrium then predicts an exponential dependence between $\rho$ and $\Phi$: \begin{equation} \rho=K^{*}\exp\left[\Phi\right], \end{equation} where $K^{*}$ is a certain integration constant. This last behavior can be associated with a system under \emph{isothermal conditions} by imposing the constraint $C\equiv 1/m\beta$, where $m$ is the mass of the constituting particles and $\beta$ is the inverse temperature. The ansatz (\ref{df.model}) proposed in this study allows one to treat the above dependencies in a unified fashion. 
According to the asymptotic behavior of the truncated $\gamma$-exponential function (\ref{asymptotic}), the particles density (\ref{den}) describes a power-law dependence with exponent $n=\gamma+3/2$ for small $\Phi$, and an exponential dependence for $\Phi$ large enough: \begin{equation}\label{asymp.density} \rho\left( \Phi;\gamma\right) =\left\{ \begin{array} [c]{cc} \propto\Phi^{n}, & \Phi\ll1,\\ \propto\exp\left( \Phi\right) , & \Phi\gg1. \end{array} \right. \end{equation} Accordingly, the deformation parameter $\gamma$ can be related to the polytropic index $\gamma^{*}$ of the polytropic state equation (\ref{state.eq}) as $\gamma^{*}=(2\gamma+5)/(2\gamma+3)$. According to the nonlinear Poisson problem (\ref{p1}), the dimensionless potential $\Phi$ decreases from the inner towards the outer region of the system. This means that the truncated $\gamma$-exponential models can describe distribution profiles with \textit{isothermal cores} and \textit{polytropic haloes}. Since the particles density (\ref{den}) should vanish at the system surface, where the dimensionless potential $\Phi(\xi_{c})=0$, one should expect that the external dimensionless radius $\xi_{c}$ is finite, $0<\xi_{c}<+\infty$. However, polytropic profiles with exponent $n>5$ are infinitely extended in space and exhibit an infinite mass, $\xi_{c}\rightarrow+\infty$ and $M\rightarrow +\infty$. Such profiles cannot describe any realistic situation \cite{chandra}. On the other hand, the finite character of the quasi-stationary distribution function $\emph{f}_{qe}(\textbf{q},\textbf{p})$ is only possible if the deformation parameter $\gamma\geq0$. Consequently, the admissible values of the deformation parameter $\gamma$ must be restricted to the following interval: \begin{equation} 0\leq\gamma<\gamma_{m}=7/2. \end{equation} Accordingly, these models can only describe polytropic dependencies with exponent $n$ restricted to the interval $3/2\leq n<5$. 
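The two exponent relations quoted above, $n=\gamma+3/2$ and $\gamma^{*}=(2\gamma+5)/(2\gamma+3)$, are mutually consistent with the polytropic definition $n=1/(\gamma^{*}-1)$. A quick exact-arithmetic check (illustrative only; the function name is not from the paper):

```python
from fractions import Fraction

def polytropic_exponents(gamma):
    """Map the deformation parameter gamma to the halo polytropic
    exponent n = gamma + 3/2 and the polytropic index
    gamma* = (2*gamma + 5)/(2*gamma + 3), verifying n = 1/(gamma* - 1)."""
    g = Fraction(gamma)
    n = g + Fraction(3, 2)
    gamma_star = (2 * g + 5) / (2 * g + 3)
    assert n == 1 / (gamma_star - 1)  # algebraic consistency check
    return n, gamma_star

# gamma = 1 (a Michie-King-type profile) gives n = 5/2, gamma* = 7/5
print(polytropic_exponents(1))
```

The admissible range $0\leq\gamma<7/2$ then maps exactly onto $3/2\leq n<5$, as stated in the text.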
As shown below, this restriction manifests itself in both the thermodynamic quantities and the distribution profiles. \begin{figure}[t] \begin{center} \includegraphics[width=4.0in]{detallado.eps} \caption{Thermodynamic quantities associated with the notable points \textit{versus} the deformation parameter $\gamma$: Panel a) the inverse temperatures $\left(\eta_{A},\eta_{B}\right)$ and the modulus of the energies $\left(U_{A},U_{B},U_{C}\right)$; Panel b) the central particles densities $\left(\rho_{0A},\rho_{0B},\rho_{0C}\right)$. Inset panel: Fit of the dependence $\eta_{A}(\gamma)$ near the critical value $\gamma_{c}$ using the power-law (\ref{form.power}).} \label{CalculoFino_1.eps} \end{center} \end{figure} The nonlinear Poisson problem (\ref{p1}) is integrated using $\gamma$ and $\Phi_{0}$ as independent integration parameters. This task was accomplished with a fourth-order Runge-Kutta method implemented in FORTRAN 90. Results from this integration are shown in figure \ref{numericos.eps}, in particular, the dependencies of $\xi_{c}$ and $\eta$ \emph{versus} the central value of the dimensionless potential $\Phi_{0}$ for several values of the deformation parameter $\gamma$. According to these results, all dependencies of the dimensionless radius $\xi_{c}$ diverge when the dimensionless potential $\Phi_{0}$ approaches zero, which means that the admissible values of the parameter $\Phi_{0}$ are nonnegative. Curiously, the dependencies of the dimensionless radius $\xi_{c}$ corresponding to deformation parameters $\gamma=2.5$ and $\gamma=3.0$ diverge at a certain value $\Phi^{\infty}_{0}$ of the dimensionless potential $\Phi_{0}$ that depends on the deformation parameter $\gamma$, while the corresponding dependencies of the dimensionless inverse temperature $\eta$ simultaneously vanish. This means that values of the dimensionless potential $\Phi_{0}$ above the point $\Phi^{\infty}_{0}$ are also nonphysical. 
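The full problem (\ref{p1}) is not reproduced in this excerpt, so as a hedged stand-in the sketch below applies the same fourth-order Runge-Kutta scheme to the closely related Lane-Emden equation, $\theta''+(2/\xi)\theta'+\theta^{n}=0$, integrating outward from a series expansion at the center until the first zero of $\theta$, which plays the role of the dimensionless surface radius $\xi_{c}$. All names and step sizes are illustrative.

```python
import numpy as np

def lane_emden_radius(n, h=1e-3):
    """Integrate theta'' + (2/xi) theta' + theta**n = 0 with RK4,
    starting from the series solution theta ~ 1 - xi**2/6 near the
    center, and return the first zero crossing xi_c of theta."""
    def rhs(xi, y):
        theta, dtheta = y
        return np.array([dtheta, -max(theta, 0.0) ** n - 2.0 * dtheta / xi])

    xi = 1e-3
    y = np.array([1.0 - xi**2 / 6.0, -xi / 3.0])  # series start avoids 2/xi singularity
    while y[0] > 0.0 and xi < 50.0:
        k1 = rhs(xi, y)
        k2 = rhs(xi + 0.5 * h, y + 0.5 * h * k1)
        k3 = rhs(xi + 0.5 * h, y + 0.5 * h * k2)
        k4 = rhs(xi + h, y + h * k3)
        y_new = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        if y_new[0] <= 0.0:
            # linear interpolation to the surface, where theta = 0
            return xi + h * y[0] / (y[0] - y_new[0])
        xi, y = xi + h, y_new
    return np.inf  # no finite radius (e.g., n >= 5)
```

For $n=1$ the analytic solution is $\theta=\sin\xi/\xi$, so the sketch should recover $\xi_{c}=\pi$; for $n\geq5$ the radius is infinite, mirroring the restriction $n<5$ discussed above.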
A better understanding of the physical meaning of the behaviors observed in figure \ref{numericos.eps} is achieved by analyzing the dependencies of the thermodynamic quantities. \begin{figure}[t] \begin{center} \includegraphics[width=4.0in]{comparacion_1.eps} \caption{Panel a) Truncated $\gamma$-exponential profile with $\gamma=1$ (Michie-King profile) corresponding to the point of gravothermal collapse and its comparison with isothermal and polytropic profiles using \emph{log-log scales}. Panel b) The same dependencies, now using \emph{linear-log scales} to better appreciate the polytropic fit of the halo. Accordingly, the proposed family of models can describe distribution profiles that exhibit isothermal cores and polytropic haloes.} \label{comparacion_1.eps} \end{center} \end{figure} \subsection{Thermodynamical behavior} Dependencies of the inverse temperature $\eta$ and the central particles density $\rho(0)$ \emph{versus} the dimensionless energy $U$ are shown in figure \ref{Curvacalorica_1.eps} for different values of the deformation parameter $\gamma$. All quasi-stationary configurations obtained from these models have negative values of the energy $U$. Moreover, one can recognize the existence of three notable points: \begin{itemize} \item \textit{Critical point of gravothermal collapse }$U_{A}$: Quasi-stationary configuration with minimum energy $U_{A}$. No quasi-stationary configurations exist for energies below this point. If a system is initially prepared with an energy below this point, it will experience an instability process that leads to a sudden contraction of the system under its own gravitational field, a phenomenon commonly referred to in the literature as \textit{gravothermal collapse} \cite{Lynden-Bell}. \item \textit{Critical point of isothermal collapse} $U_{B}$: Quasi-stationary configuration with minimum temperature $T_{B}$. No stable configurations exist for temperatures $T<T_{B}$. 
If a system under \textit{isothermal conditions} (in the presence of a thermostat at constant temperature) is initially put in thermal contact with a heat reservoir with $T<T_{B}$, this system will experience an instability process fully analogous to gravitational collapse, which is referred to as \emph{isothermal collapse} \cite{Lynden-Bell}. This type of thermodynamical instability is less relevant than the gravothermal collapse because the presence of a thermostat is actually an unrealistic consideration in most astrophysical situations. \item \textit{Critical point of evaporation disruption} $U_{C}$: Quasi-stationary configuration with maximal energy; that is, no quasi-stationary configurations exist for energies $U>U_{C}$. The existence of this superior bound is a direct consequence of the incidence of evaporation, which imposes a maximum value for the individual mechanical energies of the system constituents, $\varepsilon <\varepsilon_{s}$. If the system is initially prepared with an energy above this point, it will experience a sudden evaporation in order to release its excess of energy \cite{Vel.QEM2}. Note that the inverse temperature $\eta_{C}$ always vanishes at this point regardless of the value of the deformation parameter $\gamma$. \end{itemize} All quasi-stationary configurations are located inside the energy region $U_{A}\leq U\leq U_{C}$. Moreover, there exists more than one admissible value of the dimensionless inverse temperature $\eta$ for a given energy $U$ near the point of gravothermal collapse $U_{A}$. According to the results shown in figure \ref{Entropy_1.eps}, stable quasi-stationary configurations belong to the superior branch $A-B-C$, since these configurations exhibit the higher value of entropy $S$ for a given total energy. 
Energy dependence of this thermodynamic potential was evaluated from numerical integration of the expression: \begin{equation}\label{entropy.integral} \Delta S=S(U)-S_{C}=\int_{U_{C}}^{U}\eta(U') dU', \end{equation} which employs as a reference the value $S_{C}$ corresponding to the critical point of evaporation disruption $U_{C}$. Stable quasi-stationary configurations inside the energy range $U_{A}\leq U\leq U_{B}$ exhibit \textit{negative heat capacities} $C<0$. The existence of this thermodynamic anomaly is a remarkable consequence of the long-range character of gravitation, in particular, because of the short-range divergence of its interaction potential energy: \begin{equation} \phi\left(\mathbf{r}_{i},\mathbf{r}_{j}\right)=-Gm_{i}m_{j}/\left|\mathbf{r}_{j}-\mathbf{r}_{i}\right|\rightarrow\infty \end{equation} when the particles separation distance $\left|\mathbf{r}_{j}-\mathbf{r}_{i}\right|$ drops to zero. While such configurations are unstable if the system is put into thermal contact with an environment at constant temperature (canonical ensemble), they are stable if the system is put into energetic isolation (microcanonical ensemble). \begin{figure}[t] \begin{center} \includegraphics[width=6.8in]{profiles.eps} \caption{Dependence of the distribution profiles on the deformation parameter $\gamma$ for three notable values of the internal energy $U$. While increasing the deformation parameter $\gamma$ produces profiles with denser cores and more diluted haloes, increasing the energy produces the opposite effect. Interestingly, the qualitative form of the haloes is the same for a given value of the deformation parameter $\gamma$. Panels a)-c) Distribution profiles at the energy of gravothermal collapse $U_{A}$ for three values of the deformation parameter with $\gamma<\gamma_{c}$. All of them exhibit isothermal cores and polytropic haloes. Panel d) A distribution profile at gravothermal collapse with deformation parameter $\gamma>\gamma_{c}$. 
Note that this profile does not exhibit an isothermal core, but a divergence in the central density. Panels i)-l) Distribution profiles very near the energy of evaporation disruption $U_{C}$; these profiles are everywhere polytropic. Panels e)-h) Transitional profiles at the energy of isothermal collapse $U_{B}$. These profiles hardly differ from the polytropic profiles i)-l), although they exhibit denser cores.} \label{profiles.eps} \end{center} \end{figure} To better understand the influence of the deformation parameter $\gamma$, we have calculated the dependence of several thermodynamic quantities at the notable points. These results are shown in FIG.\ref{CalculoFino_1.eps}. It is clearly evident that increasing the deformation parameter $\gamma$ provokes a systematic decrease of the inverse temperatures $(\eta_{A},\eta_{B})$, and an increase of the absolute values of the energies $\left(U_{A},U_{B},U_{C}\right)$ and their associated central particles densities $\left(\rho_{0A},\rho_{0B},\rho_{0C}\right)$. Interestingly, the inverse temperature $\eta_{A}$ at the notable point of gravothermal collapse vanishes when $\gamma\geq\gamma_{c}\simeq2.1$. This means that both the total energy $U_{A}$ and the temperature $T_{A}$ diverge at this point as a consequence of the divergence of the central density $\rho_{A}$. The existence of this divergence is also manifested as a divergence of the dimensionless radius $\xi_{c}$, which was shown in figure \ref{numericos.eps} for the particular cases with deformation parameters $\gamma=2.5$ and $\gamma=3.0$. The fact that the energy of gravothermal collapse diverges, $U_{A}\rightarrow-\infty$ when $\gamma\geq\gamma_{c}$, significantly reduces the dramatic character of this phenomenon. 
For admissible values of the deformation parameter $\gamma$ below the point $\gamma_{c}$, the system develops a gravothermal collapse at the finite energy $U_{A}$, and should evolve in a \emph{discontinuous way} towards a certain collapsed structure that is not describable with the present model. For admissible values of the deformation parameter $\gamma$ above the point $\gamma_{c}$, the system should release an infinite amount of energy to reach a collapsed structure with a divergent central density associated with the point of gravothermal collapse. However, this collapsed structure is now described within the present models, and the transition develops in a \emph{continuous way} with the decreasing of the internal energy. It is noteworthy that an analogous divergence is observed in the thermodynamic parameters of the other notable points when the deformation parameter $\gamma$ approaches its maximum admissible value $\gamma_{m}=3.5$. As expected, this second divergence point is related to the nonphysical character of polytropic dependencies with $n>5$. A more precise estimation of the critical value $\gamma_{c}$ can be obtained by fitting the dependency $\eta_{A}\left(\gamma\right)$ near $\gamma_{c}$ with a power-law form: \begin{equation}\label{form.power} \eta_{A}\left(\gamma\right)= A|\gamma-\gamma_{c}|^{p}. \end{equation} As shown in the inset panel of FIG.\ref{CalculoFino_1.eps}, the proposed form (\ref{form.power}) exhibits excellent agreement with the numerical results for the following set of parameters: $A=0.292\pm0.004$, $p=1.24\pm0.01$ and $\gamma_{c}=2.1307\pm0.003$. \subsection{Distribution profiles} We show in figure \ref{comparacion_1.eps} a distribution profile with deformation parameter $\gamma=1$ at the point of gravothermal collapse, which corresponds to a Michie-King profile with the lowest energy. 
As a consequence of gravitation, the highest concentration of particles is always located in the inner region of the system, while the particles density gradually decays with increasing radius $r$ until it vanishes at the tidal radius $R_{t}$. As clearly shown in this figure, the inner region, the core, can be well fitted with an isothermal profile \cite{Antonov}, while the outer one, the halo, is well fitted with a polytropic profile \cite{chandra}. The dependence of the distribution profiles on the deformation parameter $\gamma$ and the internal energy $U$ is illustrated in figure \ref{profiles.eps}, specifically, twelve profiles corresponding to the three notable points and four different values of the deformation parameter $\gamma$. The particles concentration in the inner regions decreases with increasing internal energy $U$. Increasing the deformation parameter $\gamma$ produces distribution profiles with more diluted haloes, and consequently, denser cores. Not all admissible profiles derived from the present family of models exhibit isothermal cores. However, all these profiles exhibit polytropic haloes. In fact, distribution profiles near the point of evaporation disruption can be regarded as everywhere polytropic with high accuracy. Finally, distribution profiles corresponding to the point of gravothermal collapse with $\gamma>\gamma_{c}$ are divergent at the origin. \begin{figure}[t] \begin{center} \includegraphics[width=4.0in]{PhiConverge.eps} \caption{Phase diagram of the truncated $\gamma$-exponential models in the plane of integration parameters $\gamma-\Phi_{0}$. The dark grey region corresponds to quasi-stationary configurations with negative heat capacities, while the light grey region corresponds to positive heat capacities. White regions are nonphysical or unstable configurations. We have emphasized inside the dark grey region those profiles that exhibit isothermal cores and polytropic haloes. 
Configurations corresponding to isothermal collapse are always outside this region. Quasi-stationary configurations on the line of gravothermal collapse (thick solid line) exhibit a weak dependence on the deformation parameter $\gamma$ if $\gamma<\gamma_{c}$. This dependence exhibits a significant change for $\gamma_{c}\leq\gamma\leq\gamma_{m}$, which accounts for a divergence in the central density.} \label{PhiConverge_1.eps} \end{center} \end{figure} We show in figure \ref{PhiConverge_1.eps} the phase diagram of the truncated $\gamma$-exponential models in the plane of the integration parameters $\gamma-\Phi_{0}$ of the nonlinear Poisson problem (\ref{p1}). For each value of the deformation parameter $\gamma$, the admissible values of the central dimensionless potential $\Phi_{0}$ are located inside the interval $0\leq\Phi_{0}\leq\Phi_{A}(\gamma)$, where $\Phi_{A}(\gamma)$ corresponds to the critical point of gravothermal collapse $U_{A}$. Central values of the dimensionless potential $\Phi_{0}$ above the dependence $\Phi_{A}(\gamma)$ correspond to nonphysical or unstable configurations (white region). Additionally, we have included the dependence $\Phi_{B}(\gamma)$ associated with the point of isothermal collapse $U_{B}$. Configurations between the dependencies $\Phi_{B}(\gamma)\leq\Phi_{0}\leq\Phi_{A}(\gamma)$ exhibit negative heat capacities (dark gray region), while those with central dimensionless potential $\Phi_{0}$ belonging to the interval $0<\Phi_{0}<\Phi_{B}(\gamma)$ exhibit positive heat capacities (light gray region). It is remarkable that the dependence $\Phi_{A}(\gamma)$ is only weakly modified by a change in the deformation parameter $\gamma$ for values below the critical point $\gamma_{c}$. However, this function experiences a sudden change above this critical value. 
As expected, this behavior accounts for a sudden change in the distribution profiles: the proposed models can exhibit profiles with isothermal cores for $\gamma<\gamma_{c}$, while they only exhibit profiles without isothermal cores for $\gamma\geq\gamma_{c}$. According to equation (\ref{asymp.density}), profiles with isothermal cores are directly related to the asymptotic tendency of the particle distribution to follow an exponential law in the local value of the dimensionless potential $\Phi(\xi)$. This asymptotic behavior is best described in terms of the \emph{deviation function} $\delta(x,\gamma)$ with respect to the exponential function, which is introduced in \ref{Exp.Append}. For convenience, we define the dependence $\Phi_{ic}(\gamma)$ through \begin{equation} \delta\left[\Phi_{ic}(\gamma);\gamma+\frac{3}{2}\right]=\epsilon, \end{equation} where the convergence error $\epsilon$ was fixed at the value $\epsilon=1.6\times 10^{-4}$. This small value guarantees that, at the critical value of the deformation parameter $\gamma_{c}$, this dependence matches the central dimensionless potential $\Phi_{0}$ associated with the point of gravothermal collapse, $\Phi_{ic}(\gamma_{c})=\Phi_{A}(\gamma_{c})$. Quasi-stationary configurations located inside the region $\Phi_{ic}(\gamma)\leq\Phi_{0}\leq\Phi_{A}(\gamma)$ with $0<\gamma<\gamma_{c}$ exhibit isothermal cores, that is, the inner region of the particle distribution can be fitted with an isothermal profile.
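The deviation function $\delta(x,\gamma)$ is defined in the paper's appendix, which is not part of this excerpt. Purely to illustrate how $\Phi_{ic}(\gamma)$ would be located in practice, the sketch below solves $\delta = \epsilon$ by bisection using a hypothetical, monotonically decreasing stand-in for $\delta$; both the stand-in function and the bracketing interval are assumptions, not the paper's definitions.

```python
# Sketch: locating Phi_ic(gamma) by bisection on the deviation function.
# The function `deviation` is a hypothetical placeholder; the paper's
# actual delta(x, gamma) is defined in its appendix.
import math

EPS = 1.6e-4  # convergence error adopted in the text


def deviation(phi, p):
    """Hypothetical stand-in for delta(phi; p): any smooth function that
    decreases monotonically in phi works for this illustration."""
    return math.exp(-phi) * (1.0 + phi) ** (-p)


def phi_ic(p, lo=0.0, hi=50.0, tol=1e-10):
    """Solve deviation(phi, p) = EPS by bisection.

    Assumes deviation is monotonically decreasing in phi on [lo, hi]."""
    if deviation(lo, p) < EPS or deviation(hi, p) > EPS:
        raise ValueError("root not bracketed")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if deviation(mid, p) > EPS:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)


# p = gamma + 3/2 with an illustrative gamma = 1
print(phi_ic(1.0 + 1.5))
```

The same bisection scheme applies unchanged once the placeholder is replaced by the paper's actual deviation function.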
arXiv:1607.03774 (2016-07)

1607/1607.06684_arXiv.txt
Results from the Solar Maximum Mission showed a close connection between the hard X-ray and transition region emission in solar flares. Analogously, the modern combination of \rhessi\ and \iris\ data can inform the details of heating processes in ways never before possible. We study a small event that was observed with \rhessi, \iris, \sdo, and \hinode, allowing us to strongly constrain the heating and hydrodynamical properties of the flare, with detailed observations presented in a previous paper. Long duration red-shifts of transition region lines observed in this event, as well as many other events, are fundamentally incompatible with chromospheric condensation on a single loop. We combine \rhessi\ and \iris\ data to measure the energy partition among the many magnetic strands that comprise the flare. Using that observationally determined energy partition, we show that a proper multi-threaded model can reproduce these red-shifts in magnitude, duration, and line intensity, while simultaneously being well constrained by the observed density, temperature, and emission measure. We comment on the implications for both \rhessi\ and \iris\ observations of flares in general, namely that: (1) a single loop model is inconsistent with long duration red-shifts, among other observables; (2) the average time between energization of strands is less than 10 seconds, which implies that for a hard X-ray burst lasting ten minutes, there were at least 60 strands within a single \iris\ pixel located on the flare ribbon; (3) the majority of these strands were explosively heated with energy distribution well described by a power law of slope $\approx -1.6$; (4) the multi-stranded model reproduces the observed line profiles, peak temperatures, differential emission measure distributions, and densities.
\label{sec:intro} The transport of energy through flaring coronal loops is well studied, both observationally and theoretically, but not yet fully understood. The release of energy from magnetic reconnection events drives the acceleration of particles, generation of waves, and {\it in situ} heating of the coronal plasma, although it is not clear how energy is partitioned between the mechanisms. Further complicating the problem is that the partition of energy amongst the loops that comprise the arcade that forms along the flare ribbon has not been determined to date. Flare energy release undoubtedly occurs across many magnetic threads, as has been known for a long time ({\it e.g.} \citealt{svestka1982}). \citet{aschwanden2001} presented an analysis of a large flare occurring across more than 100 loops, to infer cooling times across a well-observed arcade. \yohkoh\ observations pointed to a temperature gradient in the arcade, where the outermost loops are the hottest \citep{tsuneta1996}. Tracing the motion of hard X-ray (HXR) sources, \citet{grigis2005} showed that as a disturbance propagates along the arcade, it triggers reconnection and particle acceleration in successive loops as it proceeds, thus heating the loops sequentially. Multi-threaded models have been employed by a number of authors to study solar flares. \citet{hori1997,hori1998} adopted a multi-stranded model to explain the observation of stationary \ion{Ca}{19} emission during the impulsive phase of many flares, when single loop models consistently predicted strong blue-shifts. Similarly, \citet{reeves2002} developed a multi-threaded model to show that \trace\ and \yohkoh\ light-curves were more readily explained by an arcade rather than a single loop. \citet{warren2005} derived an algorithm to compute energy inputs for successive threads comprising a flare, by calculating the discrepancy between the observed and calculated GOES flux. 
They showed that the absence of strongly blue-shifted \ion{Ca}{19} emission in \yohkoh\ observations is because that emission is masked by previously heated threads. \citet{warren2006} studied the duration of heating on successive threads, concluding that short heating time scales lead to significantly higher temperatures, inconsistent with \yohkoh\ observations. \citet{falewicz2015} compared one and two-dimensional models of a flare to find that the observed dynamics were better reproduced by their 2D model, which approximated a multi-stranded model. On the other hand, \citet{doschek2015b} found that while a single loop model can reproduce high temperature evaporation flows, there were numerous discrepancies between the observed and modeled cooler, red-shifted lines. Recently, \citet{qiu2016}, using the 0D model EBTEL \citep{klimchuk2008}, studied the cooling phase of flares with a multi-threaded model, and only found consistency with EUV emission if there is prolonged gradual phase heating occurring on many threads. In the first paper (\citealt{warren2016}, hereafter Paper \rom{1}), we studied extensively a small flare that was seen with \irisLong\ (\iris, \citealt{depontieu2014}), \rhessiLong\ (\rhessi, \citealt{lin2002}), \aiaLong\ (\aia, \citealt{lemen2012}), and \eisLong\ and \xrtLong\ aboard \hinode\ (\eis\ and \xrt, respectively, \citealt{culhane2007} and \citealt{golub2007}). The combination of instruments allows coverage across a wide temperature range, from the chromosphere through the transition region (TR) and upper corona, to temperatures exceeding 10 MK. The unique perspective allowed us to measure temperatures, emission measures (EMs), non-thermal electron beam parameters, energy input, and individual TR brightenings at high cadence and spatial resolution. In Paper \rom{1}, we presented observations of \ion{Si}{4} and \ion{C}{2} as seen by \iris, both of which brightened during the rise phase along with the HXR emission measured with \rhessi. 
The two lines were red-shifted during that time period, and remained red-shifted even after the impulsive phase, gradually decreasing in magnitude over time-scales exceeding 20 minutes at some positions. Similar trends in \ion{Si}{4} and other cool lines were reported by other authors in larger flares seen with \iris, {\it e.g.} \citet{sadykov2015,brannon2015, polito2016}. In this paper, we seek to explain the persistent red-shifts by developing a model which requires a partition of energy amongst the magnetic strands comprising the flare. In Section \ref{sec:modeling}, we describe the hydrodynamic code used to model this flare. We then split up the results in Section \ref{sec:results} into two parts: a simple model (both single loop and multi-threaded loop) and a multi-threaded Monte Carlo simulation. We finally discuss the implications and conclusions of this work in Section \ref{sec:implications}.
\label{sec:implications} In Paper \rom{1}, we presented observations of the flare SOL2014-11-19T14:25UT, which was observed with many different instruments, covering a wide range of energies and temperatures. Observed red-shifts in the \ion{Si}{4} 1402.770 \AA\ and \ion{C}{2} 1334.535 \AA\ lines persisted for longer than 30 minutes, which is difficult to reconcile with simple theoretical models. Specifically, \citet{fisher1989} showed that condensation flows persist for about $45$ seconds, regardless of the strength or duration of heating. Alternatively, \citet{brosius2003} suggested that a ``warm rain'' scenario can produce long-lasting red-shifts in TR lines (see also \citealt{tian2015}). During the impulsive phase of a large M-flare, \citet{brosius2003} found \ion{O}{3}, \ion{O}{5}, \ion{Mg}{10}, and \ion{Fe}{19} were all initially blue-shifted. The two oxygen lines gradually transitioned into down-flows that lasted for half an hour, while the \ion{Mg}{10} line was found to be composed of a strong stationary component and a weaker red wing, and \ion{Fe}{19} remained stationary thereafter. Those red-wing components, termed ``warm rain'', were interpreted as signatures of the cooling and draining of a loop, and lasted for half an hour or so after the flare's onset. However, the event studied here differs in a few important respects. First, there were no signatures of blue-shifts in the TR lines during the impulsive phase. \ion{Si}{4} was fully red-shifted for the duration of the event (Figure 9 of Paper \rom{1}), in contrast to the behavior of the oxygen lines reported in \citet{brosius2003}. Second, there is insufficient time for the loops to drain. The red-shifts begin simultaneously with the HXR burst, suggesting that they are signatures of chromospheric condensation as the energy is deposited by electron beams. 
After heating ceases on a given coronal loop, there is a long time period during which the coronal density does not drain significantly, and energy losses are first dominated by thermal conduction, then by radiation, and only then by an enthalpy flux (see the thorough treatment by \citealt{bradshaw2010}). The time scales for coronal loops to cool and drain were derived analytically and checked numerically by \citet{cargill1995,bradshaw2005,bradshaw2010,cargill2013}, and typically are on the order of 45 minutes to an hour. Finally, the cooling between successive \aia\ channels often seen in flares ({\it e.g.} \citealt{petkaki2012}) was seen in the coronal section of the loops, with a cooling time-scale of about 40 minutes, suggesting they did not drain significantly for nearly as long. In this paper, by adopting a multi-threaded model, we have shown that these observations are consistent with a power-law distribution of heating occurring on a very large number of threads. The following important conclusions can be drawn from the work here. \begin{enumerate} \item \textbf{Multi-stranded heating.} The single loop model is woefully inadequate to explain the intensities or Doppler shifts observed in this event, regardless of the number of heating events on the loop or duration of heating. A simple multi-stranded model of 7 loops similarly fails, as the observed Doppler shifts are essentially continuous, not single discrete events. However, a multi-stranded loop model as presented in Section \ref{subsec:montecarlo} captures many of the observed properties of the \iris\ emission, while being within the bounds of the observed density, temperature, and emission measure. Compare the conclusions of many prior multi-threaded studies, {\it e.g.} \citet{hori1998,reeves2002,warren2006}, etc. 
\item \textbf{Energy partition among strands.} We measured the distribution of \iris\ SJI intensities to be well described by a power-law, with slope $\alpha = -1.6$ at most times (see Paper \rom{1} for details). Since the intensities of many TR lines are proportional, both spatially and temporally, to the HXR intensities (Paper \rom{1}, \citealt{cheng1981,poland1984,simoes2015b}), and since the energy flux of the electrons is proportional to the non-thermal HXR intensity \citep{brown1971,holman2011}, we take this distribution as a proxy for the partition of energy among the threads. This distribution over a large number of threads produces \iris\ \ion{Si}{4} and \ion{C}{2} intensities and Doppler shifts that are consistent with the values measured in Paper \rom{1}. In future work, we will examine this distribution for more events of varying \goes\ class to determine statistical trends and properties. \item \textbf{Resolving loop structures.} It does not seem possible to explain the observed red-shifts with a single loop model, or with a small number of strands. Further, the background level of emission does not strongly show Doppler shifts (blue or red), and the shifts correspond to brightenings above the background emission, so the red-shifts must be a signature of the flare itself. Since the red-shifts were measured in single pixels, we conclude that there is loop structure not being resolved at the sub-pixel level of \iris. In order to maintain a red-shift in these lines without sharp drops in the speed, threads must be energized with a mean interval $r < 10$\,s between successive threads, giving a lower limit on the number of threads $N > 60$ {\it rooted within a single \iris\ pixel} for the duration of the HXR burst. For the duration of the entire event, this number must be increased accordingly. In comparison, \citet{simoes2015} estimated an interval of $r = 3$\,s per thread (a total of 120 threads during the impulsive phase) over the entire reconnection region of a small C2.6 flare.
Their analysis was based on the released non-thermal energy, and constitutes a lower bound. \\ What, then, is the size of an individual strand? If the \iris\ pixel were divided evenly between strands, the diameter would be on the order of $\frac{1}{100}$\,arcsec or less, significantly smaller than previous suggestions. This may provide evidence for the fractal model of reconnection in flares \citep{shibata2001,singh2015,shibata2016}, where the current sheet becomes exceedingly thin due to the secondary tearing instability \citep{zweibel1989}. \item \textbf{Beam energy flux constraints.} \rhessi\ measures the power contained in the electron beam integrated over the entire foot-point, which can then be divided by an area to give an estimate of the beam energy flux. However, since it is integrated over the entire foot-point, that does not specify what the flux was on the many threads comprising that area. Combined with the power-law distribution, we have constrained the maximum and minimum values of the flux. The cut-off energy during this event was measured at 11-13 keV for the duration of the HXR burst (Figure \ref{fig:rhessi_params}). At that cut-off, the threshold between gentle and explosive evaporation is $\approx 3 \times 10^{9}$\,erg\,s$^{-1}$\,cm$^{-2}$ \citep{reep2015}. For lower beam fluxes, the \ion{Si}{4} line is in fact blue-shifted (compare \citealt{testa2014}), which was never observed during this event. We therefore conclude that the maximum beam flux must be greater than this value. Moreover, since small events are far more likely in a power-law distribution of energies, they strongly weight the emission and often cause sharp drops in the measured Doppler shift, so it seems likely that the majority of the threads were heated explosively. \end{enumerate} This work has given a great deal of insight into the dynamics of this small flare.
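The energy partition and thread count quoted above can be turned into a rough numerical illustration (not the authors' actual simulation): thread beam fluxes drawn from a power law of slope $-1.6$, with one thread energized every 10\,s over a 600\,s HXR burst. The flux bounds and burst duration below are illustrative assumptions; only the slope, the interval limit, and the explosive-evaporation threshold come from the text.

```python
# Sketch of a multi-thread heating sequence with the parameters quoted
# above: power-law energy partition of slope -1.6 among threads and a
# mean energization interval r < 10 s.  Flux bounds are illustrative.
import random

random.seed(42)

ALPHA = -1.6                # power-law slope measured from the SJI intensities
F_MIN, F_MAX = 3e9, 3e11    # beam-flux bounds in erg s^-1 cm^-2 (illustrative;
                            # F_MIN is the explosive-evaporation threshold)
BURST = 600.0               # duration of the HXR burst in seconds
RATE = 10.0                 # maximum mean interval between energizations (s)


def powerlaw_sample(alpha, xmin, xmax):
    """Draw one sample from p(x) ~ x**alpha on [xmin, xmax] by
    inverse-transform sampling (valid for alpha != -1)."""
    u = random.random()
    a1 = alpha + 1.0
    return (xmin ** a1 + u * (xmax ** a1 - xmin ** a1)) ** (1.0 / a1)


n_threads = int(BURST / RATE)   # lower limit from the text: >= 60 threads
fluxes = [powerlaw_sample(ALPHA, F_MIN, F_MAX) for _ in range(n_threads)]

# Most threads sit near F_MIN, yet all exceed the explosive threshold.
print(n_threads, min(fluxes), max(fluxes))
```

Because the distribution is steep, the draw is dominated by fluxes near the lower bound, which is why small events "strongly weight the emission" as noted above.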
We have found a reasonable lower limit to the number of magnetic field threads, and have determined the partition of energy among them, which allows us to build a realistic multi-threaded model. This model is well constrained by the abundance of observations from many different instruments, and can be applied to flares that do not have coverage as good as this one. There are still many areas of this work that can be improved to remove assumptions and generalize the model, such as determining how the electron beam parameters vary from thread to thread or finding an upper limit to the number of threads. It is also often the case that \ion{Si}{4} has a stronger stationary component in other flares than was seen in this one ({\it e.g.} \citealt{tian2014}), so further work may be required to determine whence the difference arises. We speculate that spectral lines seen in larger flares, such as \ion{Fe}{21} 1354.08\,\AA, may further improve our understanding of energy deposition between threads. The results of \citet{fisher1989} make it clear that the duration of condensation flows is insensitive to the heating strength and duration. However, evaporation flows are not limited in the same manner, and in fact there are indications that the flows last as long as the heating does ({\it e.g.} the flows in Figures 4 and 5 of \citealt{reep2015} or the \ion{Fe}{21} shifts in \citealt{polito2016}). Unfortunately, \ion{Fe}{21} was not observed for this event, and so no firm conclusions can yet be drawn regarding the heating durations.
arXiv:1607.06684 (2016-07)

1607/1607.07294_arXiv.txt
At the heart of today's solar magnetic field evolution models lies the alpha dynamo description. In this work, we investigate the fate of alpha dynamos as the magnetic Reynolds number $Rm$ is increased. Using Floquet theory, we are able to precisely quantify mean field effects like the alpha and beta effect (i) by rigorously distinguishing dynamo modes that involve large scale components from those that only involve small scales, and (ii) by providing a way to investigate arbitrarily large scale separations with minimal computational cost. We apply this framework to helical and non-helical flows as well as to random flows with short correlation time. Our results show that the alpha description is valid for $Rm$ smaller than a critical value $Rm_c$ at which the small scale dynamo instability starts. When $Rm$ is above $Rm_c$, the dynamo ceases to follow the mean field description: the growth rate of the large scale modes becomes independent of the scale separation, while the energy in the large scale modes is inversely proportional to the square of the scale separation. The results in this second regime do not depend on the presence of helicity. Thus alpha-type modeling for solar and stellar models needs to be reevaluated, and new directions for mean field modeling are proposed.
arXiv:1607.07294 (2016-07)

1607/1607.05634_arXiv.txt
We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.
High precision Doppler velocimeters have enabled the discovery of hundreds of exoplanets over the past two decades. While technological improvements in many aspects of precision spectroscopy continue to push the measurement capabilities of Doppler radial velocity (RV) spectrometers, future instruments will require an exquisite understanding of instrumental and astrophysical noise sources to push towards the discovery of terrestrial mass planets. As such, developing a detailed instrument performance budget is critical for both identifying instrument error sources, and estimating achievable Doppler measurement precision. As instruments begin to push below 1 {\ms} measurement precision, many individual error contributions from a variety of subsystems begin to set the measurement floor. Identifying and characterizing each of these contributions represents a significant technological challenge, though one that must be overcome for next generation instruments aiming for 10 c{\ms} precision. Here we describe the instrumental error budget for the NASA Extreme Precision Doppler Spectrometer \lq{}NEID\rq{} (NN-explore Exoplanet Investigations with Doppler spectroscopy). NEID is a highly stabilized spectrometer for the 3.5 m WIYN telescope, and is designed specifically to deliver the measurement precisions necessary for detection of Earth-like planets. The core of NEID is a single-arm, high resolution (R$\sim$100,000) optical (380 -- 930 nm) spectrometer mounted to an aluminum optical bench. NEID borrows heavily from the design of the Habitable-zone Planet Finder instrument (HPF) \citep{Mahadevan:2014a}, sharing a nearly identical vacuum chamber, radiation shield, and environmental control system \citep{Hearty:2014}.
We have developed a detailed, bottom-up Doppler radial velocity performance budget for the Extreme Precision Doppler Spectrometer concept NEID. This analysis includes many of the error terms and methodologies introduced in \cite{Podgorski:2014}, though we include many new sources of error in our Doppler budget. This budget contains a suite of contributions from a variety of instrumental error sources. While primarily focused on the performance of the NEID spectrometer, this systems engineering approach of performance budgeting could easily be applied to a number of future RV instruments to derive estimated performance. This type of analysis and systems engineering is crucial for illuminating a path towards 10 c{\ms} precision, which has long been held as the ultimate Doppler precision goal in the quest for discovering Earth-size planets orbiting Sun-like stars.
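The statement that no single error source sets the measurement floor, but rather the combination of many individual contributions, is conventionally expressed as a root-sum-square (RSS) budget for uncorrelated terms. The sketch below illustrates that bookkeeping; the term names and values are placeholders, not NEID's actual budget entries.

```python
# Minimal sketch of a root-sum-square (RSS) instrument error budget.
# Term names and 1-sigma values (cm/s) are illustrative placeholders,
# not the actual NEID budget entries.
import math

terms_cm_s = {
    "wavelength calibration":  8.0,
    "detector effects":        7.0,
    "thermo-mechanical drift": 5.0,
    "fiber illumination":      6.0,
    "software / analysis":     4.0,
}


def rss(terms):
    """Combine uncorrelated 1-sigma error terms in quadrature."""
    return math.sqrt(sum(v * v for v in terms.values()))


print(f"total instrumental floor: {rss(terms_cm_s):.1f} cm/s")
```

A budget structured this way makes the key trade-off explicit: once all terms are comparable in size, halving any single term barely moves the total, so every subsystem must be improved together to reach 10 cm/s.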
arXiv:1607.05634 (2016-07)

1607/1607.00820_arXiv.txt
{{CONTEXT} Gamma Doradus stars (hereafter $\gamma$\,Dor stars) are known to exhibit gravity- and/or gravito-intertial modes that probe the inner stellar region near the convective core boundary. The non-equidistant spacing of the pulsation periods is an observational signature of the stars' evolution and current internal structure and is heavily influenced by rotation.\\ {AIMS} We aim to constrain the near-core rotation rates for a sample of $\gamma$ Dor stars, for which we have detected period spacing patterns.\\ {METHODS} We combined the asymptotic period spacing with the traditional approximation of stellar pulsation to fit the observed period spacing patterns using $\chi^2$-optimisation. The method was applied to the observed period spacing patterns of a sample of stars and used for ensemble modelling.\\ {RESULTS} For the majority of stars with an observed period spacing pattern we successfully determined the rotation rates and the asymptotic period spacing values, though the uncertainty margins on the latter were typically large. This also resulted directly in the identification of the modes corresponding with the detected pulsation frequencies, which for most stars were prograde dipole gravity and gravito-inertial modes. The majority of the observed retrograde modes were found to be Rossby modes. We further discuss the limitations of the method due to the neglect of the centrifugal force and the incomplete treatment of the Coriolis force.\\ {CONCLUSION} Despite its current limitations, the proposed methodology was successful to derive the rotation rates and to identify the modes from the observed period spacing patterns. It forms the first step towards detailed seismic modelling based on observed period spacing patterns of moderately to rapidly rotating $\gamma\,$Dor stars.}
\label{sec:intro} Gamma Dor stars are early F- to late A-type stars (with $1.4\,M_\odot \lesssim M \lesssim 2.0\,M_\odot$) which exhibit non-radial gravity and/or gravito-inertial mode pulsations \citep[e.g.][]{Kaye1999}. This places them directly within the transition region between low-mass stars with a convective envelope and intermediate-mass stars with a convective core, where the CNO-cycle becomes increasingly important relative to the pp-chain as the dominant hydrogen burning mechanism \citep[e.g.][]{Silva2011}. The pulsations in $\gamma$\,Dor stars are excited by the flux blocking mechanism at the bottom of the convective envelope \citep{Guzik2000,Dupret2005}, though the $\kappa$ mechanism has been linked to $\gamma$\,Dor type pulsations as well \citep{Xiong2016}. The oscillations predominantly trace the radiative region near the convective core boundary. As a result these pulsators are ideal to characterise the structure of the deep stellar interior. As shown by \citet{Tassoul1980}, high order ($n \gg l$) gravity modes are asymptotically equidistant in period for non-rotating chemically homogeneous stars with a convective core and a radiative envelope. This study was further expanded upon by \citet{Miglio2008a}. The authors found characteristic dips to be present in the period spacing series when the influence of a chemical gradient is included in the analysis. The periodicity of the deviations is related to the location of the chemical gradient, while the amplitude of the dips was found to be indicative of the steepness of the gradient. \citet{Bouabid2013} further improved upon the study by including the effects of both diffusive mixing and rotation, which they introduced using the traditional approximation. The authors concluded that the mixing processes partially wash out the chemical gradients inside the star, resulting in a reduced amplitude for the dips in the spacing pattern. 
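For reference, the non-rotating asymptotic spacing of high-order g modes invoked here takes the standard Tassoul form (well known from asymptotic pulsation theory, though not reproduced in this excerpt):

```latex
% Asymptotic period spacing for high-order g modes of degree l
% (Tassoul 1980); N is the Brunt-Vaisala frequency and the integral
% runs over the g-mode propagation cavity [r_1, r_2].
\begin{equation}
  \Delta\Pi_l = \frac{2\pi^2}{\sqrt{l(l+1)}}
    \left( \int_{r_1}^{r_2} N \, \frac{\mathrm{d}r}{r} \right)^{-1}
\end{equation}
```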
Stellar rotation introduces a shift in the pulsation frequencies, leading to a slope in the period spacing pattern. Zonal and prograde modes, as seen by an observer in an inertial frame of reference, were found to have a downward slope, while the pattern for the retrograde {high order} modes has an upward slope. Over the past decade the observational study of pulsating stars has benefitted tremendously from several space-based photometric missions, such as MOST \citep{Walker2003}, CoRoT \citep{Auvergne2009} and \emph{Kepler} \citep{Koch2010}. While typically only a handful of modes could be resolved using ground-based data, the space missions have provided us with near-continuous high S/N observations of thousands of stars on a long time base, resulting in the accurate determination of dozens to hundreds of pulsation frequencies for many targets. In particular, this has proven to be invaluable for $\gamma$\,Dor stars, as their gravity and/or gravito-inertial mode frequencies form a very dense spectrum in the range of 0.3 to 3\,$\rm d^{-1}$. Period spacing patterns have now been detected for dozens of $\gamma$\,Dor stars \citep[e.g.][]{Chapellier2012, Kurtz2014, Bedding2015, Saio2015, Keen2015, VanReeth2015,Murphy2016}. In this study we focus on the period spacing patterns detected by \citet{VanReeth2015} in a sample of 68 $\gamma$\,Dor stars with spectroscopic characterisation and aim to derive the stars' internal rotation rate and the asymptotic period spacing value of the series. This serves as a first step for future detailed analyses of differential rotation, similar to the studies which have previously been carried out in slow rotators among g-mode pulsators interpreted recently in terms of angular momentum transport by internal gravity waves \citep[e.g.,][]{Triana2015,Rogers2015}. 
In this paper we present a grid of theoretical models, which we use as a starting point (Section \ref{sec:grid}), and explain our methodology to derive the rotation frequency (Section \ref{sec:method}). The method is illustrated with applications on synthetic data (Section \ref{subsec:synthdata}), a slowly rotating star with rotational splitting, KIC\,9751996, and a fast rotator with a prograde and a retrograde period spacing series, KIC\,12066947 (Section \ref{subsec:slowfast}). We then analyse the sample as a whole (Section \ref{subsec:sample}), before moving on to the discussion and plans for future in-depth modelling of individual targets (Section \ref{sec:conclusions}).
\label{sec:conclusions} In this paper, we have presented a methodology to derive the near-core interior rotation rate $f_{\rm rot}$ from an observed period spacing pattern and to identify the modes corresponding to the pulsations in the series. In a first step, we consider all combinations of $l$- and $m$-values for the mode identification. For each pair $(l,m)$ we consider the asymptotic spacing $\Delta\Pi_l$ and compute the corresponding equidistant model period spacing pattern as described by \citet{Tassoul1980}. Using the traditional approximation, the frequencies of the model pattern are subsequently shifted for an assumed rotation rate $f_{\rm rot}$ and the chosen $l$ and $m$. The optimal values of $\Delta\Pi_l$, $f_{\rm rot}$, $l$ and $m$ are then determined by fitting the model pattern to the observed period spacing series using least-squares optimisation, taking into account that different values of $\Delta\Pi_l$ are expected for different values of $l$. In most cases this method is reasonably successful. For slow rotators it may be difficult to find the correct value of the azimuthal order $m$, though this problem is resolved when multiple series with different $l$ and $m$ values are available. By fitting these series simultaneously, not only do we obtain the mode identification, but the values of $\Delta\Pi_l$ and $f_{\rm rot}$ are also considerably more precise than in the case where we do not detect multiplets. For moderate to fast rotators, the retrograde modes were found to be Rossby modes, which arise from the interaction between the stellar rotation and toroidal modes. In this study, we have used the asymptotic approximation derived by \citet{Townsend2003c} to compute their eigenvalues $\lambda$ of the Laplace tidal equation. A complete numerical treatment of these modes is required to exploit them quantitatively.
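The fitting procedure described above can be sketched schematically. The sketch below is heavily simplified: it replaces the traditional-approximation eigenvalues by the plain first-order frequency shift $f_{\rm in} = f_{\rm co} + m f_{\rm rot}$ and uses illustrative parameter values, so it is a toy version of the method, not the paper's implementation.

```python
# Toy version of the fit described above: build an equidistant model
# pattern in the corotating frame, shift it to the inertial frame with
# the first-order rotational shift f_in = f_co + m*f_rot (standing in
# for the full traditional approximation), and grid-search
# (DeltaPi, f_rot) by least squares.  All parameter values illustrative.
import numpy as np


def model_periods(dpi, f_rot, m, n_lo, n_hi, eps=0.0):
    """Inertial-frame periods (days) of radial orders n_lo..n_hi."""
    n = np.arange(n_lo, n_hi + 1)
    p_co = dpi * (n + eps)          # equidistant in the corotating frame
    f_in = 1.0 / p_co + m * f_rot   # first-order rotational shift (d^-1)
    return 1.0 / f_in


# Synthetic "observed" pattern: prograde dipole (m = +1) modes.
truth = dict(dpi=0.030, f_rot=1.2, m=1)   # days, d^-1 (illustrative)
obs = model_periods(truth["dpi"], truth["f_rot"], truth["m"], 20, 40)

# Grid search in (DeltaPi, f_rot) for fixed (l, m).
dpis = np.linspace(0.02, 0.04, 201)
rots = np.linspace(0.5, 2.0, 151)
best = min(
    ((dpi, fr) for dpi in dpis for fr in rots),
    key=lambda p: np.sum((model_periods(p[0], p[1], 1, 20, 40) - obs) ** 2),
)
print(best)
```

In the real analysis the shift depends on the eigenvalues of the Laplace tidal equation rather than on $m f_{\rm rot}$ alone, and the fit is repeated for every candidate $(l,m)$; the grid search structure, however, is the same.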
A complete and detailed analysis of such stars with multiple gravity-mode period spacings will allow us to study possible differential rotation in $\gamma$\,Dor stars, ultimately leading to proper observational constraints on rotational chemical mixing and angular momentum transport mechanisms. From the ensemble modelling of the gravity-mode period spacings of the stars in our sample, we found that there is a large range in the stellar rotation rates. Interestingly, only three out of forty targets were found to be in the superinertial regime. These three stars, KIC\,8645874, KIC\,9751996 and KIC\,11754232, are hybrid $\gamma$\,Dor/$\delta$\,Sct stars which exhibit variability in the frequency range from 5\,$\rm d^{-1}$ to 8\,$\rm d^{-1}$. This indicates that these stars' low rotation rates are likely linked to their hybrid character, making them prime targets for further asteroseismological analysis. The other stars were found to be in the subinertial regime. Their pulsation frequencies in the corotating frame are typically confined in the narrow range between 0.15 and 0.75\,$\rm d^{-1}$. This is in agreement with the theoretical expectation that $\gamma$\,Dor pulsation frequencies in the corotating frame are on the order of the thermal timescale $\tau_{th}$ at the bottom of the convective envelope \citep{Bouabid2013}. However, this frequency range does not agree with the predicted values by \citet{Bouabid2013}. With the exception of the three stars in the superinertial regime, we find that on average the observed modes have longer pulsation periods in the corotating frame than theory predicts. This is also reflected in the high spin parameter values we derived for many of the stars. The high spin parameters detected for the retrograde Rossby modes are linked to the low eigenvalues $\lambda$ of these modes as already found on theoretical grounds by \citet{Townsend2003c}. 
The global results for the mode identification are consistent with existing spectroscopic studies. The majority of the modes were found to be prograde dipole modes, in line with the results obtained by \citet{Townsend2003b} for heat-driven gravity modes in slowly pulsating B stars. In addition, several stars exhibit single high-amplitude modes rather than a full series; these are consistent with retrograde Rossby modes with $m=-1$. They are likely heavily influenced by mode trapping and, as a result, contain valuable information about these stars' internal structure. We conducted a linear regression analysis on the combined spectroscopic and photometric parameter values for the sample. The strong correlation between $v\sin i$ and $f_{\rm rot}$ independently confirmed the reliability of the obtained rotation rates. We also detected weak correlations between $R\sin i = v\sin i/f_{\rm rot}$ and $T_{\rm eff}$, and between $\log\,g$ and $f_{\rm rot}$. Indeed, as a star with a convective core evolves on the main sequence, its radius increases while its temperature and rotation rate decrease. Despite the limitations of the traditional approximation, the results obtained in this work are consistent and offer the first estimates of the interior rotation frequencies for a large sample of $\gamma\,$Dor stars. The large observed spin parameter values indicate that the pulsations are confined to a waveguide around the equator \citep{Townsend2003b,Townsend2003c}. This in turn implies that the vast majority of the stars should be seen at moderate to high inclination angles, which is also what we derive indirectly from the relation between the observed $v\sin i$ and $f_{\rm rot}$ in Fig.\,\ref{fig:vsini_frot}. From the grid of theoretical models in Section\,\ref{sec:grid}, we find radii between 1.3\,$R_{\odot}$ and 3\,$R_{\odot}$. For many stars in our sample, this results in inclination angle estimates of the order of 50\,\textdegree\ or above.
Two of the stars for which lower inclination angle estimates were found, KIC\,4846809 and KIC\,9595743, are also the stars for which we detected zonal dipole modes. This is consistent with expectations for the geometrical cancellation effects of the pulsations. These ensemble analyses now form an ideal starting point for detailed asteroseismic modelling of individual targets in the sample. This, in turn, will allow us to place constraints on the shape and extent of the convective core overshooting and the diffusive mixing processes in the radiative near-core regions, and by extension on the evolution of the convective core itself, as was recently achieved for a hybrid $\delta\,$Sct\,--\,$\gamma\,$Dor binary \citep{SchmidAerts2016} and for a slowly \citep{Moravveji2015} and a moderately \citep{Moravveji2016} rotating gravity-mode pulsator of $\sim$3.3\,$M_\odot$.
% arXiv:1607.00820 (2016-07)
% 1607.01017_arXiv.txt
{Stars are generally born in clustered stellar environments, which can affect their subsequent evolution. An example of this environmental influence can be found in globular clusters (GCs) harbouring multiple stellar populations. To explain these multiple populations, Bastian et al. recently suggested an evolutionary scenario in which a second (and possibly higher-order) population is formed by the accretion of chemically enriched material onto the low-mass stars of the initial GC population. The idea, dubbed `Early disc accretion', is that the low-mass, pre-main-sequence stars sweep up gas expelled by the more massive stars of the same generation into their protoplanetary discs as they move through the cluster core. The same process could also occur, to a lesser extent, in less dense embedded stellar systems.} {Using assumptions that represent the (dynamical) conditions in a typical GC, we investigate whether a low-mass star of $0.4\,\Msun$ surrounded by a protoplanetary disc can accrete a sufficient amount of enriched material to account for the observed abundances in `second generation' GC stars. In particular, we focus on the gas loading rate onto the disc and star, as well as on the lifetime and stability of the disc.} {We perform simulations at multiple resolutions with two different smoothed particle hydrodynamics codes and compare the results. Each code uses a different implementation of the artificial viscosity.} {We find that the gas loading rate is about a factor of two smaller than the rate based on geometric arguments, because the effective cross section of the disc is smaller than its surface area. Furthermore, the loading rate is consistent between both codes, irrespective of resolution. Although the disc gains mass in the high-resolution runs, it loses angular momentum on a time scale of $10^4$ years.
Two effects determine the loss of (specific) angular momentum in our simulations: 1) continuous ram pressure stripping and 2) accretion of material with no azimuthal angular momentum. Our study, as well as previous work, suggests that the former, dominant process is mainly caused by numerical rather than physical effects, while the latter is not. The latter process, as expected theoretically, causes the disc to become more compact and increases the surface density profile considerably at smaller radii.} {\textnormal{The disc size is determined primarily by the ram pressure exerted by the flow when it first hits the disc. Further evolution is governed by the decrease in the specific angular momentum of the disc as it accretes material with no azimuthal angular momentum. Even taking into account the uncertainties in our simulations and the result that the loading rate is within a factor of two of a simple geometric estimate}, the size and lifetime of the disc are probably not sufficient to accrete the amount of mass required in the Early disc accretion scenario.}
Stars generally form in clusters \citep{lada03}. These dense environments affect the formation and evolution of the stars they host. For example, globular clusters (GCs) were once considered the archetype of coeval, simple stellar populations, but during the last two decades they have been shown to harbour multiple stellar populations. Observations imply that a considerable fraction (up to 70\,\%) of the stars currently in GCs has a very different chemical composition from the initial population \citep[see e.g.][]{gratton12}. They indicate that a second (and in some cases even higher-order, e.g. \citet{piotto07}) population\footnote{Most scenarios proposed to date imply subsequent epochs of star formation and hence refer to multiple \emph{generations} of stars in a GC. Since it is still not clear whether GCs can facilitate an extended star formation history or if the enriched stars actually belong to the initial population, we will refer to multiple \emph{populations} of stars.} of stars has formed from material enriched by ejecta from first-generation stars. To explain the formation of these enriched stellar populations, several scenarios have been proposed \citep[see e.g.][]{decressin07-1, dercole08, de_mink09, bastian13-2}. One of the recently proposed scenarios applies in particular to star formation in GCs, but could, in theory, also leave its signature on less dense stellar systems. \citet{bastian13-2}, hereafter B13, have suggested a scenario in which the enriched population is not formed by a second star formation event, but rather by the accretion of enriched material, expelled by the high-mass stars of the initial population, onto the low-mass stars of the same (initial) population. Because Bondi-Hoyle accretion, i.e. gravitational focusing of material onto the star, is unlikely to be efficient in a GC environment with a high velocity dispersion, they suggest that the protoplanetary discs of the low-mass stars sweep up the enriched matter.
In order to account for the observed abundances in the enriched population, the low-mass stars have to accrete of the order of their own mass, i.e. a 0.25\,$\Msun$ star has to accrete about 0.25\,$\Msun$ of enriched material in the most extreme case (as inferred from, e.g., the main sequence of NGC2808 \citep{piotto07}). The time scale of this scenario is limited by the lifetimes and sizes of protoplanetary discs. B13 assume that the protoplanetary discs can accrete material for up to 20\,Myr. Disc lifetimes are currently observed to be 5--15\,Myr, but B13 argue that they may have been considerably longer in GCs. The accretion rate averaged over 20\,Myr therefore has to be about $10^{-8}\,\Msun\,\mathrm{yr}^{-1}$. In their scenario, they assume that the accretion rate is proportional to the surface area of the disc, $\pi R_{\rm disc}^2$, the density of the ISM, $\rho_{\rm ISM}$, and the velocity, $v_{\rm ISM}$, of the disc with respect to the ISM, i.e. $\dot{M} \propto \rho_{\rm ISM} v_{\rm ISM} \pi R_{\rm disc}^2$. Furthermore, they assume an average and constant disc radius of 100\,AU. In this work, we test several of these assumptions of the early disc accretion scenario. A similar scenario was studied before by \citet{moeckel09}, M09 hereafter. They performed smoothed particle hydrodynamics simulations of a protoplanetary disc embedded in a flow of interstellar medium (ISM) with a velocity of $3\,\mathrm{km\, s^{-1}}$. They found that the mean accretion rate onto the star equals the rate expected from Bondi-Hoyle theory, whether a disc is present or not. We note that for the parameters they assumed, the theoretical Bondi-Hoyle radius is almost twice the radius of the disc. Here we follow up on the work by M09 by simulating the accretion process onto a protoplanetary disc for the typical conditions expected in a GC environment.
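As a quick consistency check of these numbers, the sketch below evaluates the required mean accretion rate and inverts the geometric relation $\dot{M} = \rho_{\rm ISM} v_{\rm ISM} \pi R_{\rm disc}^2$ for the ISM density it implies. The 10\,km\,s$^{-1}$ relative velocity is an assumed, illustrative value (a typical GC velocity dispersion), not a number taken from the text.

```python
import math

MSUN = 1.989e33          # g
YR   = 3.156e7           # s
AU   = 1.496e13          # cm

# 0.25 Msun accreted over 20 Myr -> the mean rate quoted in the text
mdot_req = 0.25 * MSUN / (20e6 * YR)                # g s^-1
print(mdot_req * YR / MSUN)                         # ~1.25e-8 Msun/yr

# invert mdot = rho_ISM * v_ISM * pi * R_disc^2 for the implied density
R_disc = 100 * AU
v_ism  = 10.0e5          # 10 km/s, an assumed GC velocity dispersion
rho_req = mdot_req / (v_ism * math.pi * R_disc**2)  # g cm^-3
n_H = rho_req / 1.67e-24                            # ~7e4 H atoms cm^-3
```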
We directly compare the outcomes of two different smoothed particle hydrodynamics codes for the same set of initial conditions and different particle resolutions. We first discuss the physical and numerical effects in our reference model and subsequently compare the results of the different codes and particle numbers.
We have performed simulations of the accretion of interstellar medium (ISM) material onto a protoplanetary disc with two different smoothed particle hydrodynamics codes. We find that, as theoretically expected, when the flow of ISM first hits the disc, all disc material beyond the radius where ram pressure dominates is stripped. As ISM is accreted and disc material migrates inwards, the disc becomes more compact and the surface density profile increases at smaller radii. We find that the ISM loading rate, i.e. the rate at which ISM is entrained by the disc and star, is approximately constant across all our simulations with both codes and is a factor of two lower than the rate expected from geometric arguments (see Eq.\,\ref{eq:mdot_eda1}). This difference arises because the outskirts of the disc do not entrain ISM and therefore the effective cross section of the disc is smaller than its physical surface area. We find that, despite the accretion of ISM, the net effect is that the disc loses mass, except in the highest-resolution simulations with both codes, where the disc gains mass. This decreasing trend with resolution implies that the net mass loss from the disc in our low-resolution simulations is numerical. Considering instead the time scale on which the disc loses all of its angular momentum, rather than its mass, provides an estimate of about $10^4$ years. The angular momentum loss from the disc in our simulations is dominated by continuous stripping of disc material and by transfer of angular momentum to the ISM as it flows past the outer edge of the disc. Our convergence and consistency study, as well as previous work, indicates that these effects are predominantly numerical. The time scale estimated from the simulations with the highest resolution therefore provides a lower limit to the lifetime of the disc.
The loss of angular momentum due to accretion of disc material onto the star, which is governed by the (modelling of) viscous processes in the disc, is two orders of magnitude smaller than the loss associated with stripping. Even if the disc grows in mass, its specific angular momentum will always decrease in this scenario: if not through the aforementioned angular momentum loss processes, then through the accretion of ISM with no azimuthal angular momentum. Either way, the disc will shrink in size, thereby decreasing its effective cross section. Although our ISM loading rate agrees within a factor of two with the geometrically estimated rate, the lifetime and size of the disc are probably not sufficient to accrete the amount of mass required in the early disc accretion scenario. In future work we will extend our simulations to explore the parameter space and conditions corresponding to a broader range of stellar environments, in order to find a parametrization of the mass loading rate onto a protoplanetary disc system in terms of the density and velocity of the ambient medium and the size of the disc.
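The shrinking of the disc by zero-angular-momentum accretion can be illustrated with a toy estimate, assuming Keplerian rotation ($j \propto \sqrt{R}$) and conservation of the disc's total angular momentum while its mass grows; the numbers are purely illustrative.

```python
def shrunken_radius(R0, M_disc, dM_accreted):
    # Total angular momentum L = M * j is conserved while the disc accretes
    # mass with zero azimuthal angular momentum, so the specific angular
    # momentum is diluted by M / (M + dM); Keplerian j ~ sqrt(R) then gives
    # R_new = R0 * (M / (M + dM))**2.
    j_ratio = M_disc / (M_disc + dM_accreted)
    return R0 * j_ratio ** 2

# doubling the disc mass shrinks it to a quarter of its radius
print(shrunken_radius(100.0, 0.01, 0.01))   # -> 25.0 (AU)
```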
% arXiv:1607.01017 (2016-07)
% 1607.03224_arXiv.txt
We describe a simple and fast algorithm for identifying friends-of-friends features and prove its correctness. The algorithm avoids unnecessary expensive neighbor queries, uses minimal memory overhead, and avoids the slowdown in high over-density regions. We define our algorithm formally based on pair enumeration, a problem that has been heavily studied in fast 2-point correlation codes, and our reference implementation employs a dual KD-tree correlation function code. We construct features in a hierarchical tree structure, and use a splay operation to reduce the average cost of identifying the root of a feature from $O[\log L]$ to $O[1]$ ($L$ is the size of a feature) without additional memory costs. This reduces the overall time complexity of merging trees from $O[L\log L]$ to $O[L]$, reducing the number of operations per merge by orders of magnitude. We next introduce a pruning operation that skips merge operations between two fully self-connected KD-tree nodes. This improves the robustness of the algorithm, reducing the number of merge operations in high-density peaks from $O[\delta^2]$ to $O[\delta]$. We show that for cosmological data sets the algorithm eliminates more than half of the merge operations for typically used linking lengths $b \sim 0.2$ (relative to the mean separation). Furthermore, our algorithm is extremely simple and easy to implement on top of an existing pair enumeration code, reusing the optimization effort that has been invested in fast correlation function codes.
Friends-of-friends clustering (FOF) is a common problem in cosmology for identifying features (clusters, usually called halos or groups) in density fields. Three common uses are 1) to find halos in N-body computer simulations in the 3-dimensional configuration space \citep{1985ApJ...292..371D}; 2) to find substructures inside halos in N-body computer simulations in the 6-dimensional phase space \citep{2010MNRAS.408.1818W,2013ApJ...762..109B}; and 3) to find galaxy clusters in observational catalogs \citep{2012MNRAS.420.1861M} in the redshifted configuration space. To assemble a physical catalog based on the feature catalog from the FOF algorithm, it is typical to prune the features (with some dynamical infall model), and to compute and associate additional physical attributes (e.g. spherical over-density parameters). FOF algorithms identify features (or clusters) of points that are (spatially) separated by a distance less than a threshold (the linking length $b$, typically given in units of the mean separation between points) and assign them a common label. A typical algorithm that solves this involves a breadth-first search (henceforth BFS). During each visit of the BFS, a neighbor query returns all of the particles within the linking length of a given particle. The feature labels of these neighbors are examined and updated, and the neighbors whose labels are modified are appended to the search queue for a revisit. The first description of the friends-of-friends algorithm with breadth-first search in this paradigm, in the context of astrophysics, is by \citet{1983ApJS...52...61G}. A popular implementation is by \cite{KDFOF}, and more recently by \cite{2016MNRAS.459.2118K}. A naive BFS algorithm performs neighbor queries on the same point multiple times, which is an obvious target for optimization. For example, \cite{Kwon:2010:SCA:1876037.1876051} reduce the number of queries by skipping visited branches of the tree.
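A minimal version of this BFS paradigm can be sketched as follows (1-D points and a brute-force neighbor query stand in for a real spatial index; this is an illustration of the paradigm, not the paper's algorithm):

```python
from collections import deque

def fof_bfs(points, b):
    # Naive breadth-first friends-of-friends: every visit performs an O(N)
    # neighbor query, so points are examined many times -- exactly the
    # inefficiency described in the text.
    labels = [None] * len(points)
    for seed in range(len(points)):
        if labels[seed] is not None:
            continue
        labels[seed] = seed
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            for j in range(len(points)):          # brute-force neighbor query
                if labels[j] is None and abs(points[i] - points[j]) <= b:
                    labels[j] = seed              # join the feature
                    queue.append(j)               # revisit later
    return labels

# two features: {0.0, 0.1, 0.2} and {1.0, 1.05}
print(fof_bfs([0.0, 0.1, 0.2, 1.0, 1.05], b=0.15))   # -> [0, 0, 0, 3, 3]
```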
Another widely used algorithm creates the friends-of-friends features by hierarchical merging \citep[e.g.][]{2005MNRAS.364.1105S}. This was originally used for parallelization on large distributed computer architectures, as it allows very large concurrency with a simple decomposition of the problem onto spatially disjoint domains. The algorithm is implemented in the popular simulation software GADGET\footnote{though not available in the public version}, but probably existed long before. It has been adopted in many codes, including a publicly available version in the AMR code ENZO \citep{2014ApJS..211...19B}. To reduce the cost of spatial queries, GADGET incrementally increases the linking length over multiple iterations. During each iteration, the algorithm performs a neighbor query on a selected set of points, and merges the proto-features (proto-clusters) hosting these points by updating the labels of all constituent points of the two proto-features. The iterations are repeated until no additional merging is possible; as a result, multiple neighbor queries on a data point are performed. In the GADGET implementation, the proto-features are maintained as a forest of threaded trees, where the leaves (points) of any tree are connected by a linked list (hence the name threaded). During a merge operation, two linked lists are joined by traversing to the tail of the shorter linked list and connecting it to the head of the longer linked list. Two additional $O[N]$ storage arrays are required to keep track of the sizes of the proto-features and the threading linked list. The traversal increases the cost of merging a feature of length $L$ to $O[L \log L]$, which can be a factor of a few more than optimal in terms of wall clock time. This shortcoming of a linked-list representation is discussed in detail in Section 21 of \cite{Cormen:2009:IAT:1614191}.
Because the existing algorithms require these multiple iterations over the data, each making many expensive spatial queries (which slow down significantly as the over-density grows), FOF has generally been considered a slow algorithm. As a result, algorithms that lead to an exact solution are rarely discussed in any detail in the literature of cosmology and astrophysics, while numerous approximate FOF variants have been proposed as alternatives that trade accuracy for speed, some with more desirable physical characteristics (e.g. avoiding bridging -- counting nearby halos as one). The general idea of these approximate methods is that accurately tracking the outskirts of halos (features) is not important, as they are already dominated by shot noise in the numerical scheme of the solvers. A few examples are improving the speed by using density information \citep{1998ApJ...498..137E}, stochastic sub-sampling \citep{2008ApJ...681.1046L}, and a relaxed linking length \citep{AFOF}. Conceptually, the FOF problem of cosmology is the same as a well-known problem of computer science -- that of identifying the maximum connected components (MCC) of a graph, where the graph is induced from the data set with an adjacency matrix \[ A(i, j) = \begin{cases} 0,& Dist(i, j) > b \\ 1,& Dist(i, j) \le b, \end{cases} \] where $b$ is the linking length. Put differently, if there is a path between two points, then they belong to the same feature, which is represented by a disjoint set. This problem is well studied and has a wide range of applications beyond the field of astrophysics. Numerous example implementations are freely available and integrated into machine learning packages \citep[e.g.][]{Shun:2013:LLG:2517327.2442530}. In this paper we apply well-known data structures and algorithms from computer science to derive a fast exact friends-of-friends algorithm that avoids expensive neighbor queries, uses minimal memory overhead, and avoids the over-density slowdown.
Our main inspiration is the dual-tree algorithm introduced by \cite{2001misk.conf...71M}. The dual-tree algorithm efficiently calculates correlation functions by walking two spatial index trees simultaneously, avoiding expensive and unnecessary neighbor queries. We use a KD-tree in the example implementation, though this can be replaced with a ball tree for higher-dimensional data or a chaining mesh for low-dimensional data to achieve better performance \citep[for the latter, see][]{manodeep_sinha_2016_55161}. Most importantly, the dual-tree algorithm calculates the correlation function in a single pass, enumerating each pair of neighboring points exactly once. Rewriting the FOF algorithm with pair enumeration avoids the repeated neighbor queries of breadth-first-search (BFS) algorithms and of the GADGET hierarchical algorithm. The main issue in the hierarchical merging algorithm, as pointed out above, is the costly hierarchical merging of proto-features. We address this by representing the proto-features with a tree/forest data structure, and apply a splay operation in the merge procedure, which moves recently accessed nodes closer to the root, accelerating root-finding operations in the average case \citep[][Section 21]{Cormen:2009:IAT:1614191}. The splay operation was originally introduced by \cite{Sleator:1985:SBS:3828.3835} to balance binary tree structures. In our case, splaying reduces the average-case complexity of constructing a final feature of length $L$ to $O[L]$ (as compared to $O[L\log L]$ with a linked list, as implemented in GADGET). It also eliminates the need for additional $O[N]$ storage space for threading and balancing, resulting in an extremely simple implementation. For completeness, we give an intuitive proof that a single pass of pair enumeration with the splay tree data structure finds the correct solution.
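The core of the approach can be sketched as follows. Brute-force pair enumeration stands in for the dual KD-tree walk, and path halving plays the role of the splay-style root shortcut; this is an illustrative sketch, not the paper's reference implementation.

```python
def find(parent, i):
    # Root query with a splay-style shortcut: path halving points every
    # other visited node at its grandparent, so later queries are ~O(1).
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def fof_pairs(points, b):
    # FOF via a single pass of pair enumeration; each neighboring pair is
    # visited exactly once and the two proto-features are merged at their
    # roots (brute force stands in for the dual KD-tree enumeration).
    parent = list(range(len(points)))
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if abs(points[i] - points[j]) <= b:
                ri, rj = find(parent, i), find(parent, j)
                if ri != rj:
                    parent[rj] = ri          # merge the proto-features
    return [find(parent, i) for i in range(len(points))]

print(fof_pairs([0.0, 0.1, 0.2, 1.0, 1.05], b=0.15))   # -> [0, 0, 0, 3, 3]
```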
To further speed up our algorithm, especially in heavily over-dense regions where spatial queries become increasingly expensive (scaling as $O[((1+\delta) b^3)^2]$, where $b$ is the linking length and $\delta$ is the over-density), we implement another important optimization. We show that if two KD-tree nodes (proto-features) are known to be fully connected, the nodes need not be opened further and their respective hosting proto-features can be merged directly. This optimization eliminates most of the merge operations in dense regions and is particularly relevant for high-resolution simulations that resolve kpc-scale structures and over-density peaks of $\delta \gg 10^3$ \citep[if we push high resolution simulations such as][to a cosmological volume]{2014MNRAS.445..581H}, though even for the current generation of simulations it already reduces the number of merge operations by 20\% to 50\%. The algorithm can be directly applied as the local section of a parallel friends-of-friends halo finding routine. Our implementation of the algorithm is available at \url{https://github.com/rainwoodman/kdcount/blob/master/kdcount/kd_fof.c}. We note that our reference dual-tree pair enumeration code is not particularly optimized for performance, and hence we focus on the theoretical aspects of the algorithm and its optimizations in this work. One can easily re-implement our algorithms with existing highly optimized fast correlation function codes to further improve the performance of FOF halo identification on actual problems. The paper is organized as follows: in Section 2, we define the plain dual-tree friends-of-friends algorithm and prove its correctness; in Section 3, we discuss the optimizations; in Section 4, we perform scaling tests of the algorithm on two realistic cosmological simulation data sets.
We have described a fast algorithm for identifying friends-of-friends halos in cosmological data sets. The algorithm is defined on pair enumeration, which visits all edges of the connected graph induced by the linking length, and is constructed on a dual KD-tree correlation function code. We present two optimizations that significantly improve the speed and robustness of the FOF algorithm -- the use of a splay tree and the pruning of the enumeration of self-connected KD-tree nodes -- both of which can be very easily ported to any of the existing pair-enumeration codes. After these two optimizations we find that our algorithm reduces the number of operations for constructing friends-of-friends halos by almost two orders of magnitude compared to a naive implementation with a linked list, for the generally used linking length of $b=0.2$. We began by pointing out that although the friends-of-friends problem is identical to the maximum connected component problem in graph theory, for spatial data such as a cosmological simulation the elements of the adjacency matrix are only implied, via expensive neighbor queries. Therefore, the dual-tree pair enumeration algorithm that we use is advantageous because it systematically eliminates expensive neighbor queries by tracking two tree nodes simultaneously. We implemented two important optimizations to improve the scaling and robustness of the algorithm with respect to the input data. The first optimization is to append a splay operation to the root query in the hierarchical tree structure of proto-features. The splay operation significantly reduces the average number of traversals, making the root query an $O[1]$ process, without requiring significant additional storage space. The second optimization is to skip merge operations while visiting pairs in two self-connected KD-tree nodes. Reducing the number of merge operations and pair enumerations significantly speeds up the algorithm in high over-density regions and for large linking lengths.
We also proved the correctness of this optimization. We note that in our application, the time spent in local friends-of-friends finding is sub-dominant compared to the time spent in the global merging of the catalog. We plan to investigate an optimal distributed algorithm by combining our algorithm with a fast distributed-memory parallel algorithm \citep[e.g.][]{Fu:2010:DDS:1851476.1851527}. Finally, because we insisted on constructing the algorithm from an abstract hierarchical pair enumeration operation, we expect a further improvement in speed over our naive implementation by porting the algorithm to a highly optimized correlation function code beyond KD-trees \citep[e.g.][]{manodeep_sinha_2016_55161}.
% arXiv:1607.03224 (2016-07)
% 1607.03965_arXiv.txt
We present the Green Bank Telescope absorption survey of cold atomic hydrogen ($\lesssim 300$~K) in the inner halo of low-redshift galaxies. The survey aims to characterize the cold gas distribution and to address where condensation -- the process whereby ionized gas accreted by galaxies condenses into cold gas within the disks of galaxies -- occurs. Our sample consists of 16 galaxy-quasar pairs with impact parameters of $\le$ 20~kpc. We detected an \HI absorber associated with J0958+3222 (NGC~3067) and \HI emission from six galaxies. We also found two \ion{Ca}{2} absorption systems in the archival SDSS data, associated with the galaxies J0958+3222 and J1228+3706, although the sample was not selected based on the presence of metal lines. Our detection rate of \HI absorbers with optical depths of $\ge 0.06$ is $\sim$7\%. We also find that the cold \HI phase ($\lesssim$~300~K) comprises 44($\pm$18)\% of the total atomic gas in the sightline probing J0958+3222. We find no correlation between the peak optical depth and the impact parameter, or the stellar and \HI radii normalized impact parameters, $\rho/\rm R_{90}$ and $\rho/\rm R_{HI}$. We conclude that the condensation of inflowing gas into cold ($\lesssim$ 300~K) \HI occurs at $\rho \ll 20$~kpc. However, the warmer phase of neutral gas (T $\sim$ 1000~K) can exist out to much larger distances, as seen in emission maps. Therefore, the condensation from warm to cold \HI is likely occurring in stages: from ionized gas to warm \HI in the inner halo, and then to cold \HI very close to the galaxy disk.
} Theoretical and observational evidence indicates that gas inflow is essential to the growth of galaxies. The observed star-formation histories and stellar population metallicities (e.g., the G-dwarf problem) require continuous accretion of low-metallicity gas onto galaxies in the present epoch. Simulations by \citet[][and others]{birn03,keres05} have shown how gas ($\rm 10^4~K$) may be accreted by galaxies via filamentary structures from the cosmic web. Hence, understanding the connection between the cold atomic hydrogen (at $\rm \approx 10^2~K$) within the disks, the hot halo gas (at $\rm \approx 10^6~K$), and the cooler accreting gas (at $\rm T \approx 10^{4-5}~K$) is essential to our understanding of galaxy growth and evolution. On the observational front, cool gas traced by Lyman~$\alpha$ absorbers has been ubiquitously detected in the circumgalactic medium (CGM) of galaxies. \citet{werk14} found that the total mass in the CGM can be as large as the stellar mass of a galaxy. The strength of the CGM absorbers increases as we probe closer to the galaxies \citep[][and references therein]{lanzetta95, chen98, tripp98, chen01b, bowen02, prochaska11, stocke13, tumlinson13, liang14, borthakur15}, thus suggesting the possibility of an active condensation process in the inner regions of a galaxy halo. Recently, \citet{curran16} observed a similar inverse correlation between 21cm \HI absorption strength and impact parameter by combining data from various \HI absorption surveys. Furthermore, \citet[COS-GASS survey;][]{borthakur15} found that the strength of the \Lya absorbers in the CGM is strongly correlated with the amount of atomic hydrogen within the inner regions of the galaxies. While there are growing observational indications that the inflowing/accreting gas eventually condenses into the \HI disk, the details of the condensation process have eluded us so far.
There is very little direct observational evidence on how the cool accreting gas condenses into neutral \HI\ and descends into the disks of galaxies. Part of the problem is that the condensing cold ($\rm 10^{2-4}~K$) clouds are expected to have low column densities, which poses a serious limitation on detecting them around distant galaxies. Deep \HI\ imaging studies of the Milky Way have shown the presence of extraplanar gas \citep{kalberla08} that exists as filaments extending up to several kiloparsecs (Hartmann 1997). The extraplanar gas amounts to 10\% of the total \HI\ in our galaxy \citep{kalberla08}. Recently, there has been growing evidence of extraplanar cold gas beyond the Milky Way: in the few nearby galaxies where \HI\ imaging of faint, low column density gas is possible, such gas has been found to be ubiquitous \citep[][HALOGAS survey]{oosterloo07,heald11}. \HI\ imaging of NGC~891 by \citet{oosterloo07} revealed a large extended gaseous halo that contains almost 30\% of its total \HI\ in the form of \HI\ filaments extending up to 22~kpc vertically from the galactic disk. While the origin of such structures is not clear, possibilities include \HI\ accreted via satellite galaxies and/or from the inter-galactic medium, or condensation of hot halo gas into \HI\ structures. Besides extraplanar gas, the halo of the Milky Way contains high- and intermediate-velocity clouds (HVCs and IVCs, respectively) with velocities deviating from the \HI\ in the disk, with a covering fraction of $\sim$ 37\% at log~N(HI)$> 17.9$ \citep{lehner12}. Another population of low-mass \HI\ clouds, with peak column densities of 10$^{19} \rm cm^{-2}$ and sizes of a few tens of parsecs, exists in the Milky Way halo \citep{lockman02}. The existence of these extraplanar clouds in the Milky Way has been known for decades, yet their origin still remains controversial.
One likely scenario is that these \HI structures are tracing the gas transport route from the outer CGM (50-200 kpc) to the inner regions (5-10 kpc) of a galaxy, or vice-versa. \begin{figure*} \includegraphics[trim = 55mm 20mm 55mm 20mm, clip,scale=1.07,angle=-0]{f1.ps} \caption{Postage stamp images showing SDSS false color images of galaxies from our sample. The cross-bar is centered at the target (foreground) galaxy. The target ID is printed in the top right corner of the image. Further details on the targets are presented in Table~1. } \label{fig-SDSS_images} \end{figure*} \begin{figure*} \includegraphics[trim = 20mm 0mm 54mm 0mm, clip,scale=0.60,angle=-0]{f2b.ps} \includegraphics[trim = 10mm 0mm 0mm 0mm, clip,scale=0.60,angle=-0]{f2a.ps} \caption{Left: Impact parameter of the sightline to the foreground galaxy is plotted against the 20cm radio continuum flux of the background source for our sample of 16 galaxy-quasar pairs. The galaxies are marked with different colors indicating the galaxy ``color'' (a proxy for the star formation) and symbols indicating the orientation of the foreground galaxy with respect to the sightline. Right: The position of the sightline with respect to the major and minor axes of the foreground galaxy. The sightlines that probe face-on galaxies are plotted along the minor axis as filled circles. } \label{fig-rho-S20} \end{figure*} While present-day instruments are able to achieve sufficient surface brightness sensitivity to detect these clouds in emission around nearby galaxies (e.g., 240~hrs with the WSRT on NGC~891 to reach $\rm N(HI)= 1 \times 10^{19} ~cm^{-2}$ at a spatial resolution of 23.4$^{\prime\prime}~\times \rm 16.0^{\prime\prime}$ and spectral resolution of 8~\kms; \citet{oosterloo07}), similar experiments for more distant galaxies are extremely challenging. A solution to this problem is to conduct a statistical study using quasar absorption-line spectroscopy.
\HI\ associated with galaxies may be probed via corresponding absorption features in the spectra of background quasars. There are two main advantages to this method: (1) it allows us to probe \HI at column densities orders of magnitude lower than those reachable in emission studies; and (2) the strength of detection does not depend on the redshift of the galaxy, thus making such a study unbiased with respect to redshift-related effects. A large body of work over the past several decades has taken advantage of radio-bright background sources to probe the \HI properties of foreground galaxies \citep[][and references therein]{haschick75, rubin82, haschick83, briggs83, boisse88, corbelli90, carilli92, kanekar01, kanekar02, hwang04, keeney05, borthakur10b, gupta10, borthakur11, srianand13, gupta13, borthakur14b, reeves15, zwaan15, dutta16, reeves16, curran16}. Most of these studies targeted lower-redshift galaxies, as the galaxy redshift catalogs from which the samples were generated become increasingly sparse at higher redshifts. Some of these studies, such as those by \citet[][G10 hereafter]{gupta10} and \citet[][B11 hereafter]{borthakur11}, have been surveys exploring the covering fraction of cold \HI in the halos of foreground galaxies. G10 found the covering fraction of \HI\ to be $\sim$50\% within 15~kpc of the galaxies. B11 found the same and concluded that the covering fraction falls off rapidly with distance from the center of the galaxy. One caveat of these studies is their limited and somewhat biased samples. For a full census of cold gas around galaxies, one would require an unbiased sample as well as sensitive observations with high spatial and spectral resolution. Here we present our study utilizing highly sensitive data from the Robert C. Byrd Green Bank Telescope (GBT). Detailed descriptions of our sample, the GBT observations, and data reduction are presented in Section~2. The results are presented in Section~3 and their implications are discussed in Section~4.
Finally, we summarize our findings in Section~5. The cosmological parameters used in this study are $H_0 =70~{\rm km~s}^{-1}~{\rm Mpc}^{-1}$, $\Omega_m = 0.3$, and $\Omega_{\Lambda} = 0.7$.
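The impact parameters quoted throughout follow from the adopted cosmology just stated; a minimal sketch (not from the paper; the 30-arcsec separation at $z=0.05$ is an illustrative example) converting an angular separation into a projected separation in flat $\Lambda$CDM:

```python
import math

# Flat LCDM parameters used in the paper (H0 = 70 km/s/Mpc, Om = 0.3, OL = 0.7).
H0 = 70.0          # km/s/Mpc
OM, OL = 0.3, 0.7
C = 299792.458     # speed of light, km/s

def comoving_distance(z, n=10000):
    """Comoving distance in Mpc via trapezoidal integration of c/H(z')."""
    dz = z / n
    integrand = [1.0 / math.sqrt(OM * (1 + i * dz)**3 + OL) for i in range(n + 1)]
    integral = dz * (sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))
    return (C / H0) * integral

def impact_parameter_kpc(theta_arcsec, z):
    """Projected separation (impact parameter) of a sightline at angle theta."""
    d_a = comoving_distance(z) / (1.0 + z)            # angular diameter distance, Mpc
    theta_rad = theta_arcsec * math.pi / (180.0 * 3600.0)
    return d_a * theta_rad * 1000.0                   # kpc

# e.g. a 30 arcsec separation at z = 0.05 corresponds to roughly 30 kpc
rho = impact_parameter_kpc(30.0, 0.05)
```

At these low redshifts the result is only weakly sensitive to the integration scheme; any standard cosmology package would give the same numbers.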
\begin{itemize} \item[1.] The covering fraction of cold \HI ($\lesssim 300$~K) is $\sim$7\% for neutral gas of column density $\gtrsim \rm 10^{20}~cm^{-2}$ in the inner halo ($\rho<$20~kpc) of galaxies. Our estimate is substantially lower than the value of $\sim$50\% previously reported by G10 and B11. However, it is consistent with the covering fraction of 4\% reported by R15. This low covering fraction is also broadly consistent with the covering fraction of 18\% for HVCs at column densities of log~N(HI)$> 18.5$. \item[2.] The most likely cause of the discrepancy between the B11 covering fraction and our result is the nature of the sample. The observational setup and data reduction procedures were almost identical. While our sample was radio selected, the B11 sample (as was the G10 sample) was selected based on the optical and radio properties of the background quasars. \item[3.] We detected an \HI absorber toward the sightline J0958$+$3224 probing the foreground galaxy J0958+3222. This absorber has been observed by multiple studies, including \citet{keeney05}. Our observations are the highest spectral resolution observations of this system. The measured FWHM of the absorber is 3.2($\pm$0.5)~\kms with a peak optical depth of 0.035. The optical depth is consistent with that found by \citet{keeney05}. Based on the FWHM, we estimate a spin temperature of $\le$ 224~K and a column density of $\le \rm 4.4 \times 10^{19}~cm^{-2}$. A comparison of our spin temperature estimate for the cold component of the \HI in this system to that of the total \HI as estimated by \citet{keeney05} suggests that the ratio of cold gas to total neutral gas is about 44($\pm 18)$\%. \item[4.] We do not find any correlation of the optical depth or column density of 21~cm \HI absorption with impact parameter for absorbers with optical depths of $\geqslant 0.06$.
Furthermore, no correlation was observed between optical depths and the normalized impact parameters $\rho$/R$_{90}$ and $\rho$/R$_{HI}$. \item[5.] We also do not find any dependence of the 21~cm \HI optical depth on the orientation of the sightline with respect to the foreground galaxy. \item[6.] We also searched for \ion{Ca}{2} in the archival SDSS spectra that were available for 12/16 background quasars. \ion{Ca}{2} absorption features were detected at the redshifts of the target galaxies J0958+3222 and J1228$+$3706. While the associated \HI absorber for J0958+3222 was detected in 21~cm, no such measurement could be made for J1228$+$3706, as our spectral-line data for this sightline were severely corrupted by RFI. \end{itemize} Our characterization of the distribution of \HI in the inner halos of galaxies indicates that the process of condensation of inflowing gas into cold \HI does occur in the very inner regions of galaxies. However, evidence from \HI emission maps of the Milky Way and several other nearby galaxies suggests that the neutral gas in the halos may exist mostly in the warm phase, at temperatures of the order of 1000~K. Further observations are required to precisely estimate the covering fraction of \HI, especially for galaxies exhibiting \HI emission. We plan to carry out higher spatial resolution observations in the near future with the VLA. For future studies, we recommend sensitive observations with high (sub-\kms) spectral resolution and high spatial resolution optimized to the size of the background source, to maximize the chances of detecting cold gas in the inner halos of galaxies. \vspace{.5cm}
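The spin temperature and column density limits quoted above (item 3) follow from the standard 21~cm relations, $T_k \le 21.86\,\Delta V^2$ for a thermally broadened line and $\rm N(HI) = 1.823\times10^{18}\,(T_s/f)\int\tau\,dv$ for optically thin absorption; a minimal sketch assuming a covering factor $f=1$ (small differences from the quoted $\le \rm 4.4\times10^{19}~cm^{-2}$ reflect rounding):

```python
# Standard 21 cm absorption relations: for a purely thermally broadened line,
# T_k <= 21.86 * FWHM^2 (FWHM in km/s), and for optically thin absorption
# N(HI) = 1.823e18 * (T_s / f) * integral(tau dv)   [cm^-2, v in km/s].
# Values below are for the absorber toward J0958+3224; covering factor f = 1 assumed.
fwhm = 3.2        # km/s
tau_peak = 0.035

t_spin_max = 21.86 * fwhm**2                  # K, upper limit (pure thermal width)
tau_integral = 1.0645 * tau_peak * fwhm       # Gaussian profile: 1.0645 * tau_p * FWHM
n_hi = 1.823e18 * t_spin_max * tau_integral   # cm^-2, upper limit
```

With these inputs the spin temperature limit evaluates to $\sim$224~K, matching the value quoted in item 3.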
16
7
1607.03965
1607
1607.04068_arXiv.txt
{Galaxies which strongly deviate from the radio-far IR correlation are of great importance for studies of galaxy evolution, as they may be tracing early, short-lived stages of starbursts and active galactic nuclei (AGNs). The most extreme FIR-excess galaxy, NGC~1377, has long been interpreted as a young dusty starburst, but millimeter observations of CO lines revealed a powerful collimated molecular outflow which cannot be explained by star formation alone.} {To determine the nature of the energy source in the nucleus of NGC~1377 and to study the driving mechanism of the collimated CO outflow.} {We present new radio observations at 1.5 and 10~GHz obtained with the Jansky Very Large Array (JVLA) and \textit{Chandra} X-ray observations towards NGC~1377. The observations are compared to synthetic starburst models to constrain the properties of the central energy source.} {We obtained the first detection of the cm radio continuum and X-ray emission in NGC~1377. We find that the radio emission is distributed in two components, one on the nucleus and another offset by 4$''$.5 to the South-West. We confirm the extreme FIR excess of the galaxy, with $q_\mathrm{FIR}\simeq$4.2, which deviates by more than 7$\sigma$ from the radio-FIR correlation. Soft X-ray emission is detected on the off-nucleus component. From the radio emission we estimate, for a young ($<10$~Myr) starburst, a star formation rate SFR$<$0.1~M$_\odot$~yr$^{-1}$. Such an SFR is not sufficient to power the observed IR luminosity and to drive the CO outflow.} {We find that a young starburst cannot reproduce all the observed properties of the nucleus of NGC~1377. We suggest that the galaxy may be harboring a radio-quiet, obscured AGN of 10$^6$~M$_\odot$, accreting at near-Eddington rates. We speculate that the off-nucleus component may be tracing a hot spot in the AGN jet.}
\begin{figure} \centering \includegraphics[width=.5\textwidth,keepaspectratio]{radio_cont.eps} \caption{\label{fig:sed} Radio continuum flux of NGC~1377 compared to the free-free and synchrotron emission expected from the IR fluxes. Filled symbols refer to our detections of the radio continuum with the JVLA. The upper limits are 3-$\sigma$ estimates from the literature. } \end{figure} A small fraction of galaxies \citep[e.g., ][]{helou_85} have faint radio and bright far-infrared (FIR) emission and thus strongly deviate from the well-known radio-FIR correlation \citep[e.g., ][]{condon92}. Potential interpretations of the FIR excess include very young, synchrotron-deficient starbursts or dust-enshrouded active galactic nuclei (AGN). FIR-excess galaxies represent only a small sub-group ($\approx$1$\%$) of the IRAS Faint Galaxy Sample \citep[e.g., ][]{roussel03}, which is likely an indication that they trace a short evolutionary phase. If powered by obscured AGN, these are likely in the early stages of their evolution, when the nuclear material has not yet been dispersed and/or consumed to feed the growth of the black hole. Recent publications \citep{aalto1377,sakamoto2013,aalto2016} have shown that some such systems drive molecular outflows and are thus ideal targets to study the first stages of starburst/AGN feedback. The most extreme FIR-excess galaxy detected so far is NGC~1377, a lenticular galaxy in the Eridanus group at a distance of 21~Mpc (1$''$ = 102 pc), with a FIR luminosity of the order of 10$^{9}$~L$_\odot$ \citep{roussel03}. Despite several attempts in the past, the radio continuum in this galaxy has long remained undetected.
Deep observations with the Very Large Array (VLA) and Effelsberg telescopes \citep[e.g., ][ and references therein]{roussel03} obtained limits on the radio continuum which are about 40 times fainter than the synchrotron emission that would be expected from the radio to far-infrared correlation (Fig.~\ref{fig:sed}). Also, these limits are fainter than the free-free emission that would be expected from the star formation rate of 1-2 M$_\odot$ yr$^{-1}$ derived from the IR flux. HII regions are not detected through near-infrared hydrogen recombination lines or thermal radio continuum \citep{roussel03,roussel06}. Deep mid-infrared silicate absorption features suggest that the nucleus is very compact, and enshrouded by a large mass of dust \citep[e.g., ][]{spoon07}, which potentially absorbs all ionizing photons. The high obscuration makes the determination of the energy source a challenging task. \citet{roussel06} proposed that NGC~1377 is a nascent opaque starburst -- the radio synchrotron deficiency would then be caused by the extreme youth (pre-supernova stage) of the starburst activity when the young stars are still embedded in their birth-clouds. In contrast, \citet{imanishi06} argued, based on the small 3.3~$\mu$m PAH equivalent width and strong mid-IR H$_2$ emission, that NGC~1377 harbors a buried AGN. Furthermore, \citet{imanishi09} found an HCN/HCO$^+$ J = 1--0 line ratio exceeding unity, which they suggested is evidence of an X-ray dominated region (XDR) surrounding an AGN. The authors explained the lack of radio continuum with the presence of a large column of intervening material that causes free-free absorption. With recent SMA and ALMA observations, \citet{aalto1377,aalto2016} found a molecular outflow originating from the inner 30 pc of NGC~1377 and extending to about 150--200~pc. Given its velocity and extent, these authors calculated an age for the outflow of about 1.4~Myr, which is consistent with the young age of the central activity. 
The upper limit on the 1.4~GHz flux density falls short by a factor of 10 of what would be required to explain the outflow as supernova-driven, and the authors suggest instead that the outflow may be driven by radiation pressure from a buried AGN. In summary, there is substantial evidence that the energy source of NGC~1377 must be young, but its nature is still highly debated. Here we report the results of recent observations with the Karl G. Jansky Very Large Array (JVLA) and the \textit{Chandra} X-ray observatory, which finally reveal the energy source of NGC~1377 and shed new light on the properties of FIR-excess galaxies. The details of the observations are reported in Section~\ref{sec:obs}. In Section~\ref{sec:res} we describe the properties of the radio and X-ray emission and compare the observations with synthetic starburst models. In Section~\ref{sec:disc} we discuss the nature of the nuclear energy source and in Section~\ref{sec:conc} we summarize our conclusions.
\label{sec:conc} We report radio and X-ray observations of NGC~1377, the most extreme FIR-excess galaxy known to date, in which a highly collimated molecular outflow has recently been found. Our results suggest that the morphology and energetics of the radio, X-ray, and molecular line emission point toward a radio-faint AGN+jet system, rather than the nascent starburst previously proposed to interpret the observed properties of the galaxy. Our main results are: \begin{itemize} \item We obtained the first detection of the cm-wave radio continuum in NGC~1377. Both the 1.5 and 10~GHz emission show two components, peaking on the galaxy nucleus and 4$''$.5 ($\sim$500~pc) to the South--West. The two radio components have a steep spectral index, $\alpha\sim$0.5-0.7, consistent with optically thin synchrotron emission. \item Soft X-ray emission (0.3-7~keV) is tentatively detected for the first time in NGC~1377 by \textit{Chandra}. The emission peaks at the position of the off-nucleus radio component, with no detection at the galaxy's center. \item We confirm the extreme far-IR excess of the galaxy, with a $q_\mathrm{FIR}$ of 4.2, which deviates by more than 7$\sigma$ from the radio-FIR correlation \citep[$q_\mathrm{FIR}$=2.34$\pm$0.26, ][]{roussel03}. \item By comparing the observations with synthetic starburst models, we find that the SFR estimated from optically thin free-free emission ($<$0.1~M$_\odot$~yr$^{-1}$) falls short by 1--2 orders of magnitude of explaining the galaxy's IR luminosity and the mechanical luminosity of the CO outflow. We conclude that a young starburst cannot reproduce all the observed properties of NGC~1377. \item We estimate that an SMBH of 10$^6$~M$_\odot$ accreting at nearly Eddington rates may reproduce the observed IR luminosity and outflow energy. Such an AGN would be extremely radio-faint, with $R\equiv L_\mathrm{5 GHz}/L_\mathrm{4400\mbox{\AA}}\approx$0.02.
\item The SFR density measured by radio observations is more than 20 times lower than the SFR inferred from the Schmidt-Kennicutt relation. We find that a turbulent velocity of 50~km/s would be sufficient to stabilize the galaxy bulge against gravitational collapse and inhibit star formation. This value is similar to the velocity dispersion measured in CO observations. We suggest that turbulent feedback from the AGN may be inhibiting star formation in the bulge of NGC~1377. \item The radio and X-ray emission from the off-nucleus component are consistent with the presence of a relativistic jet hot spot. We suggest that this structure may be revealing the radio counterpart of the highly collimated CO 3--2 outflow. \end{itemize}
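The quoted 7$\sigma$ deviation from the radio-FIR correlation is straightforward arithmetic on the $q_\mathrm{FIR}$ values; a minimal check using the numbers from the text:

```python
# How far NGC 1377 sits from the radio-FIR correlation, expressed in units of
# the correlation's scatter. The mean and rms are the Roussel et al. (2003)
# values quoted in the text; q_fir for NGC 1377 is the value measured here.
q_mean, q_rms = 2.34, 0.26
q_ngc1377 = 4.2

deviation_sigma = (q_ngc1377 - q_mean) / q_rms   # just over 7, i.e. "more than 7 sigma"
```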
16
7
1607.04068
1607
1607.03154_arXiv.txt
We perform a tomographic baryon acoustic oscillations analysis using the two-point galaxy correlation function measured from the combined sample of BOSS DR12, which covers the redshift range $0.2<z<0.75$. Splitting the sample into multiple overlapping redshift slices to extract the redshift information of galaxy clustering, we obtain measurements of $D_A(z)/r_d$ and $H(z)r_d$ at nine effective redshifts, with the full covariance matrix calibrated using MultiDark-Patchy mock catalogues. Using the reconstructed galaxy catalogues, we obtain precisions of $1.3\%-2.2\%$ for $D_A(z)/r_d$ and $2.1\%-6.0\%$ for $H(z)r_d$. To quantify the gain from the tomographic information, we compare the constraints on cosmological parameters using our 9-bin BAO measurements, the consensus 3-bin BAO and RSD measurements at three effective redshifts in \citet{Alam2016}, and the non-tomographic (1-bin) BAO measurement at a single effective redshift. Compared with the 1-bin result, the 9-bin measurements improve the dark energy Figure of Merit by a factor of 1.24 for the Chevallier-Polarski-Linder parametrisation of the equation-of-state parameter $w_{\rm DE}$. The errors on $w_0$ and $w_a$ from the 9-bin constraints are also slightly improved compared to the 3-bin result.
\label{sec:intro} The accelerating expansion of the Universe was discovered through observations of type Ia supernovae \citep{Riess,Perlmutter}. Understanding the physics of the cosmic acceleration is one of the major challenges in cosmology. In the framework of general relativity (GR), a new energy component with a negative pressure, dubbed dark energy (DE), can be the source driving the cosmic acceleration. Observations reveal that the DE component dominates the current Universe \citep{DEreview}. However, the nature of DE remains unknown. Large cosmological surveys, especially galaxy redshift surveys, can provide key observational support for the study of DE. Galaxy redshift surveys are used to map the large-scale structure of the Universe and extract the signal of baryon acoustic oscillations (BAO). The BAO, produced by the competition between gravity and radiation pressure due to the coupling between baryons and photons before cosmic recombination, leave an imprint on the distribution of galaxies at late times. After the photons decouple, the acoustic oscillations are frozen and correspond to a characteristic scale, determined by the comoving sound horizon at the drag epoch, $r_d\sim150\,\rm Mpc$. This feature corresponds to an excess in the 2-point correlation function, or a series of wiggles in the power spectrum. The acoustic scale is regarded as a standard ruler to measure the cosmic expansion history and to constrain cosmological parameters \citep{Eisenstein2005}. Assuming isotropic galaxy clustering, the combined volume distance, $D_V(z) \equiv \left[cz (1+z)^2 D_A(z)^2 H^{-1}(z) \right]^{1/3}$, where $H(z)$ is the Hubble parameter and $D_A(z)$ is the angular diameter distance, can be measured using the angle-averaged 2-point correlation function, $\xi_0(s)$ \citep{Eisenstein2005,Kazin2010,Beutler2011,Blake2011}, or power spectrum $P_0(k)$ \citep{Tegmark2006,Percival2007,Reid2010}.
However, the clustering of galaxies is in principle anisotropic, and the BAO scale can be measured in the radial and transverse directions to provide the Hubble parameter, $H(z)$, and the angular diameter distance, $D_A(z)$, respectively. As proposed by \citet{Padmanabhan2008}, the ``multipole'' projections of the full 2D power spectrum, $P_{\ell}(k)$, can be used to break the degeneracy between $H(z)$ and $D_A(z)$. This multipole method was later applied to the correlation function \citep{Chuang2012, Chuang2013, Xu2013}. An alternative ``wedge'' projection of the correlation function, $\xi_{\Delta \mu}(s)$, was used to constrain $H(z)$ and $D_A(z)$ \citep{Kazin2012, Kazin2013}. In \citet{Anderson2014}, the anisotropic BAO analysis was performed using these two projections of the correlation function from the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS) DR10 and DR11 samples. BOSS \citep{Dawson2013}, which is part of SDSS-III \citep{Eisenstein2011}, has provided Data Release 12 \citep{Alam}. With a redshift cut, the whole sample is split into the `low-redshift' sample (LOWZ) in the redshift range $0.15<z<0.43$ and the `constant stellar mass' sample (CMASS) in the redshift range $0.43<z<0.7$. Using these catalogues, the BAO peak position was measured at two effective redshifts, $z_{\rm eff}=0.32$ and $z_{\rm eff}=0.57$, in the multipoles of the correlation function \citep{Cuesta} or power spectrum \citep{Gil-Mar}. \citet{CF_4bins} proposed dividing each of the LOWZ and CMASS samples into two independent redshift bins to test the extraction of redshift information from galaxy clustering, and performed measurements of the BAO and growth rate at four effective redshifts, $z_{\rm eff}=0.24,\,0.37,\,0.49$ and $0.64$ \citep{CF_4bins}. The completed data release of BOSS provides a combined sample covering the redshift range from $0.2$ to $0.75$.
The sample is divided into three redshift bins, \ie, two independent redshift bins, $0.2<z<0.5$ and $0.5<z<0.75$, and an overlapping redshift bin, $0.4<z<0.6$. The BAO signal is measured at three effective redshifts, $z_{\rm eff}=0.38, \, 0.51$ and $0.61$, using the configuration-space correlation function \citep{CF-sysweight, Mariana2016} or the Fourier-space power spectrum \citep{pk-BAO}. As the tomographic information of galaxy clustering is important for constraining the properties of DE \citep{Albornoz2014,DE-recon}, we extract as much information on the redshift evolution from the combined catalogue as possible. To achieve this, we adopt a binning method, with the binning scheme determined from a forecast using the Fisher matrix method. We split the whole sample into $\mathit{nine}$ $\mathit{overlapping}$ redshift bins to ensure that the measurement precision of the isotropic BAO signal is better than 3\% in each bin. We measure the isotropic and anisotropic BAO positions in the $\mathit{nine}$ $\mathit{overlapping}$ bins using the correlation functions of the pre- and post-reconstruction catalogues. To test the constraining power of our tomographic BAO measurements, we perform fits of cosmological parameters. The analysis is part of a series of papers analysing the clustering of the completed BOSS DR12 \citep{Alam2016, TomoPk, pk-BAO, pk-full, CF-sysweight, Sanchez2016, Sanchez2016-2, Albornoz2016, Mariana2016, Grieb2016,CF_4bins,Ibanez2016}. The same tomographic BAO analysis is performed using the galaxy power spectrum in Fourier space \citep{TomoPk}. Another tomographic analysis, using the angular correlation function in many thin redshift shells and their angular cross-correlations to extract the time evolution of the clustering signal, is presented in the companion paper \citet{Albornoz2016}. In Section 2, we introduce the data and mocks used in this paper. We present the forecast result in Section 3.
In Section 4, we describe the methodology used to measure the BAO signal from the multipoles of the correlation function. In Section 5, we constrain cosmological models using the BAO measurements from the post-reconstruction catalogues. Section 6 is devoted to the conclusions. In this paper, we use a fiducial $\Lambda$CDM cosmology with the parameters: $\Omega_m=0.307, \Omega_bh^2=0.022, h=0.6777, n_s=0.96, \sigma_8=0.8288$. The comoving sound horizon in this cosmology is $r_d^{\rm fid}=147.74 \,\rm Mpc$.
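The fiducial distances entering the BAO analysis follow directly from the parameters just quoted; a minimal sketch (simple trapezoidal integration, not the paper's fitting pipeline) evaluating the volume distance $D_V(z)/r_d$ at the three effective redshifts of the 3-bin analysis:

```python
import math

# Fiducial flat LCDM from the text: Om = 0.307, h = 0.6777, r_d = 147.74 Mpc.
OM, H0 = 0.307, 67.77          # km/s/Mpc
C, RD = 299792.458, 147.74     # km/s and Mpc

def hubble(z):
    """H(z) in km/s/Mpc for flat LCDM."""
    return H0 * math.sqrt(OM * (1 + z)**3 + (1 - OM))

def comoving_distance(z, n=20000):
    """Comoving distance in Mpc via trapezoidal integration of c/H(z')."""
    dz = z / n
    vals = [C / hubble(i * dz) for i in range(n + 1)]
    return dz * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def d_v(z):
    """Volume-averaged distance D_V = [cz (1+z)^2 D_A^2 / H]^(1/3)."""
    d_a = comoving_distance(z) / (1 + z)
    return (C * z * (1 + z)**2 * d_a**2 / hubble(z))**(1.0 / 3.0)

dv_over_rd = {z: d_v(z) / RD for z in (0.38, 0.51, 0.61)}
```

This reproduces the well-known fiducial value $D_V(0.38)/r_d \approx 10$ for the lowest redshift bin.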
\label{sec:conclusion} Measurements of the BAO distance scales have become a robust way to map the expansion history of the Universe. A precise BAO distance measurement at a single effective redshift can be achieved using all the galaxies in a survey covering a wide redshift range; however, the tomographic information is then largely lost. One way to extract the redshift information from the sample is to use overlapping redshift slices. Using the combined sample of BOSS DR12, we performed a tomographic baryon acoustic oscillations analysis using the two-point galaxy correlation function. We split the whole redshift range of the sample, $0.2<z<0.75$, into multiple overlapping redshift slices and measured the correlation functions in all the bins. With the full covariance matrix calibrated using MultiDark-Patchy mock catalogues, we obtained isotropic and anisotropic BAO measurements. In the isotropic case, the measurement precision on $D_V(z)/r_d$ from the pre-reconstruction catalogue reaches $1.8\%-3.3\%$. After reconstruction, the precision improves to $1.1\%-1.8\%$. In the anisotropic case, the measurement precision before reconstruction is within $2.3\%-3.5\%$ for $D_A(z)/r_d$ and $3.9\%-8.1\%$ for $H(z)r_d$. Using the reconstructed catalogues, the precision improves to $1.3\%-2.2\%$ for $D_A(z)/r_d$ and $2.1\%-6.0\%$ for $H(z)r_d$. We present a comparison of our measurements with those in a companion paper \citep{TomoPk}, where the tomographic BAO signal is measured using the multipole power spectrum in Fourier space, and find agreement within the $1\,\sigma$ confidence level. The derived 3-bin results from our tomographic measurements are also compared to the 3-bin measurements in \citet{CF-sysweight}, and a consistency is found. We perform cosmological constraints using the tomographic 9-bin BAO measurements, the consensus 3-bin BAO and RSD measurements, and the compressed 1-bin BAO measurement.
Comparing the constraints on $w_0w_a$CDM from the 9-bin and 1-bin BAO distance measurements, the uncertainties on the parameters $w_0$ and $w_a$ are improved by 6\% and 16\%, respectively, and the dark energy FoM is improved by a factor of 1.24. Comparing the ``9 $z$bin'' with the ``3 $z$bin'' results, the ``9 $z$bin'' BAO measurements give slightly tighter constraints. Future galaxy surveys will cover larger and larger cosmic volumes, with rich tomographic information in redshift to be extracted. The method developed in this work can be easily applied to upcoming galaxy surveys, and the gain from the temporal information is expected to be more significant.
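The Figure of Merit comparison can be made concrete with the common definition $\mathrm{FoM}=[\det \mathrm{Cov}(w_0,w_a)]^{-1/2}$, so a factor-1.24 improvement corresponds to the $w_0$-$w_a$ error ellipse shrinking in area by the same factor; a minimal sketch, with covariance entries that are illustrative only and not the paper's actual values:

```python
import math

# FoM = [det Cov(w0, wa)]^(-1/2): inverse of the area of the w0-wa error
# ellipse (up to a constant). The covariance entries below are illustrative
# placeholders, chosen only to mimic a ~1.2x improvement.
def figure_of_merit(var_w0, var_wa, cov_w0_wa):
    det = var_w0 * var_wa - cov_w0_wa**2
    return 1.0 / math.sqrt(det)

fom_1bin = figure_of_merit(0.25, 4.0, -0.90)   # looser, strongly correlated
fom_9bin = figure_of_merit(0.22, 2.8, -0.70)   # tighter, less correlated
improvement = fom_9bin / fom_1bin
```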
16
7
1607.03154
1607
1607.06771_arXiv.txt
We study the molecular gas properties of high--$z$ galaxies observed in the ALMA Spectroscopic Survey (ASPECS), a blind survey of CO emission (tracing molecular gas) in the 3\,mm and 1\,mm bands that targets a $\sim1$\,arcmin$^2$ region in the Hubble Ultra Deep Field (UDF). Of a total of 1302 galaxies in the field, 56 have spectroscopic redshifts and correspondingly well--defined physical properties. Among these, 11 have infrared luminosities $L_{\rm{}IR}>10^{11}$\,\Lsun{}, i.e., a detection in CO emission was expected. Of these, 7 are detected in CO at varying significance, and 4 are undetected in CO emission. In the CO--detected sources, we find CO excitation conditions that are lower than typically found in starburst/SMG/QSO environments. We use the CO luminosities (including limits for non-detections) to derive molecular gas masses. We discuss our findings in the context of previous molecular gas observations at high redshift (star--formation law, gas depletion times, gas fractions): the CO--detected galaxies in the UDF tend to reside on the low-$L_{\rm{}IR}$ envelope of the scatter in the $L_{\rm{}IR}-L'_{\rm{}CO}$ relation, but exceptions exist. For the CO--detected sources, we find an average depletion time of $\sim$\,1\,Gyr, with significant scatter. The average molecular--to--stellar mass ratio ($M_{\rm{}H2}$/$M_*$) is consistent with earlier measurements of main sequence galaxies at these redshifts, and again shows large variations among sources. In some cases, we also measure dust continuum emission. On average, the dust--based estimates of the molecular gas are a factor of $\sim$2--5$\times$ smaller than those based on CO. Accounting for detections as well as non--detections, we find a large diversity in the molecular gas properties of the high--redshift galaxies covered by ASPECS.
Molecular gas observations of galaxies throughout cosmic time are fundamental to understanding the cosmic history of galaxy formation and evolution \citep[see reviews by][]{kennicutt12,carilli13}. The molecular gas provides the fuel for star formation; by characterizing its properties, we place quantitative constraints on the physical processes that lead to the stellar mass growth of galaxies. This has been a demanding task in terms of telescope time. To date, only a couple of hundred sources at $z>1$ have been detected in a molecular gas tracer \citep[typically the rotational transitions of the carbon monoxide $^{12}$CO molecule; e.g.,][]{carilli13}. This sample is dominated by `extreme' sources, such as QSO host galaxies \citep[e.g.,][]{walter03,bertoldi03,weiss07,wang13} or sub-mm galaxies \citep[e.g.,][]{frayer98,neri03,greve05,bothwell13,riechers13,aravena16}, with IR luminosities $L_{\rm IR}\gg 10^{12}$\,\Lsun{} and star formation rates (SFR) $\gg 100$\,\Msun{}\,yr$^{-1}$. These extreme sources might contribute significantly to the star formation budget in the Universe at $z>4$, but their role declines with cosmic time \citep{casey14}. Indeed, the bulk of star formation up to $z\sim2$ is observed in galaxies along the so-called `main sequence' \citep{noeske07, elbaz07, elbaz11, daddi10a,daddi10b, genzel10, wuyts11, whitaker12}, a tight (rms scatter $\sim 0.3$\,dex) relation between the SFR and the stellar mass, $M_*$. Addressing the molecular gas content of main sequence galaxies beyond the local universe has become feasible only in recent years. The first step in the characterization of the molecular gas content of galaxies is the measurement of the molecular gas mass, $M_{\rm H2}$. The $^{12}$CO molecule (hereafter, CO) is the second most abundant molecule in the Universe, and it is relatively easy to target thanks to its bright rotational transitions.
The use of CO as a tracer of the molecular gas mass requires assuming a conversion factor, $\alpha_{\rm CO}$, to convert CO(1-0) luminosities into H$_2$ masses. At $z\sim0$, the conversion factor typically used is $\sim4$\,\Msun{}(\Kkmspc)$^{-1}$ for ``normal'' $M_*>10^9$\,\Msun{} star-forming galaxies with metallicities close to solar \citep[see][for a recent review]{bolatto13}. If CO transitions other than the J=1$\rightarrow$0 ground-state transition are observed, a further factor is required to account for the CO excitation \citep[see, e.g.,][]{weiss07,daddi15}. \citet{tacconi10} and \citet{daddi10a} investigated the molecular gas content of highly star-forming galaxies at $z\sim1.2$ and $z\sim2.3$ via the CO(3-2) transition. They found large reservoirs of gas, yielding molecular--to--stellar mass ratios $M_{\rm H2}/M_*\sim1$. These values are significantly higher than those observed in local galaxies \citep[$\sim 0.1$, see e.g.][]{leroy08}, suggesting a strong evolution of $M_{\rm H2}/M_*$ with redshift \citep[see also][]{riechers10, geach11, casey11, magnelli12, aravena12, aravena16, bothwell13, tacconi13, saintonge13, chapman15a, genzel15}. An alternative approach to estimating gas masses is via dust emission. The dust mass in a galaxy can be retrieved from its rest-frame sub-mm spectral energy distribution (SED) \citep[e.g.,][]{magdis11, magdis12, magnelli12, santini14, bethermin15, berta16}, in particular via the Rayleigh-Jeans tail, which is less sensitive to the dust temperature \citep[see, e.g.,][]{scoville14,groves15}. Using the dust as a proxy for the molecular gas does not require assumptions on the CO excitation and $\alpha_{\rm CO}$. However, this approach relies on the assumption of a dust-to-gas mass ratio (DGR), which typically depends on the gas metallicity \citep{sandstrom13,bolatto13,genzel15}.
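The conversion chain just described (an excitation correction to CO(1-0), then $\alpha_{\rm CO}$) can be sketched compactly; the numerical values below are illustrative assumptions, not ASPECS measurements ($\alpha_{\rm CO}=4.3$ is a commonly adopted Milky Way-like value, consistent with the $\sim4$ quoted above, and $r_{31}=0.5$ is an assumed CO(3-2)/CO(1-0) luminosity ratio):

```python
# Convert an observed CO line luminosity into a molecular gas mass:
# first correct the observed J -> J-1 line to CO(1-0) with an excitation
# ratio r_J1, then apply alpha_CO. All numbers here are illustrative.
def molecular_gas_mass(l_co_obs, r_j1=0.5, alpha_co=4.3):
    """l_co_obs in K km/s pc^2; returns M_H2 in Msun."""
    l_co10 = l_co_obs / r_j1        # inferred ground-state CO(1-0) luminosity
    return alpha_co * l_co10

m_h2 = molecular_gas_mass(5e9)      # a hypothetical 5e9 K km/s pc^2 CO(3-2) line
t_dep = m_h2 / 40.0                 # depletion time in yr for SFR = 40 Msun/yr
```

With these assumed inputs the depletion time comes out close to 1 Gyr, the order of magnitude quoted for the CO-detected ASPECS sources.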
Recent ALMA results report substantially lower values of $M_{\rm gas}$ than typically obtained in CO--based studies \citep{scoville14,scoville15}. In the present paper, we study the molecular gas properties of galaxies in ASPECS, the ALMA Spectroscopic Survey in the Hubble Ultra Deep Field (UDF): a blind search for CO emission using the Atacama Large Millimeter/sub-millimeter Array (ALMA), whose goal is to constrain the molecular gas content of an unbiased sample of galaxies. The targeted region is one of the best studied areas of the sky, with exquisitely deep photometry in $>25$ X-ray--to--far-infrared (IR) bands, photometric redshifts, and dozens of spectroscopic redshifts. This wealth of ancillary data is instrumental in placing our CO measurements in the context of galaxy properties. Thanks to the deep field nature of our approach, we avoid potential biases related to the pre-selection of targets, and include both detections and non-detections in our analysis. Our dataset combines 3mm and 1mm observations of the same galaxies, thus providing constraints on the CO excitation. Furthermore, the combination of the spectral line survey and the 1mm continuum image allows us to compare CO- and dust-based estimates of the gas mass. In other papers of this series, we present the dataset and the catalog of blindly-selected CO emitters (Paper I, Walter et al.), we study the properties of 1.2mm-detected sources (Paper II, Aravena et al.), we discuss the inferred constraints on the luminosity functions of CO (Paper III, Decarli et al.), and we search for \Cii{} emission in $z$=6--8 galaxies (Paper V, Aravena et al.). Paper VI (Bouwens et al.) places our findings in the context of the dust extinction law for $z>2$ galaxies, and Paper VII (Carilli et al.) uses ASPECS to put first direct constraints on intensity mapping experiments. Here we put the CO detections in the context of the properties of the associated galaxies. 
In sec.~\ref{sec_obs} we summarize the observational dataset; in sec.~\ref{sec_sample} we describe our sample; in sec.~\ref{sec_results} we present CO-based measurements, which are discussed in the context of galaxy properties in sec.~\ref{sec_discussion}. We summarize our findings in sec.~\ref{sec_conclusions}. Throughout the paper we assume a standard $\Lambda$CDM cosmology with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm m}=0.3$ and $\Omega_{\Lambda}=0.7$ \citep[broadly consistent with the measurements in][]{planck15}, and a \citet{chabrier03} stellar initial mass function. Magnitudes are expressed in the AB photometric system \citep{oke83}.
\label{sec_discussion} In the following we discuss the sources of our sample in the broad context of gas properties in high--redshift galaxies. \subsection{Location in the galaxy `main sequence' plot} The stellar masses of the galaxies in our sample range between $(2.8-275)\times 10^{9}$\,\Msun{} (two orders of magnitude). The $L_{\rm IR}>10^{11}$\,\Lsun{} cut in our sample definition selects sources with SFR$>$10\,\Msun\,yr$^{-1}$. The measured SFRs range between 12--150\,\Msun{}\,yr$^{-1}$ \footnote{We note that the FAST analysis by \citet{skelton14} yields consistent SFRs for ID.1, ID.3, and ID.10 but different values (by a factor $2\times$ or more) for ID.2 ($6$\,\Msun\,yr$^{-1}$), ID.4 (50\,\Msun\,yr$^{-1}$), ID.5 (21\,\Msun\,yr$^{-1}$), ID.6 ($3.7$\,\Msun{}\,yr$^{-1}$), ID.7 (230\,\Msun\,yr$^{-1}$), ID.8 ($0.01$\,\Msun\,yr$^{-1}$), ID.11 ($2.6$\,\Msun\,yr$^{-1}$). No FAST-based SFR estimate is available for ID.9. These differences are likely due to 1) different assumptions on the source redshifts; 2) different coverage of the SED photometry, in particular thanks to the addition of the 1mm continuum constraint in our MAGPHYS analysis; 3) different working assumptions in the two codes. In particular, FAST relies on relatively limited prescriptions for the dust attenuation and star formation history, and does not model the dust emission.}. Fig.~\ref{fig_ms} shows the location of our galaxies in the $M_{\rm *}$--SFR (`main sequence') plane. We plot all the galaxies in the field with a F850LP or F160W magnitude brighter than $27.5$\,mag (this cut allows us to remove sources with highly uncertain SED fits). The galaxies in the present sample are highlighted with large symbols. The different redshifts of the sources are indicated by different colors. As expected from the known evolution of the `main sequence' of star-forming galaxies \citep[e.g.,][]{whitaker12,schreiber15}, sources at higher redshifts tend to have higher SFR per unit stellar mass. 
Comparing with the {\em Herschel}--based results by \citet{schreiber15}, we find that half of the galaxies in our sample (ID.1, 2, 4, 8, 9, 10) lie on the main sequence (within a factor $3\times$) at their redshift. Three galaxies (ID.5, 7, 11) are above the main sequence (in the `starburst' region), and the remaining two galaxies (ID.3 and ID.6) show a SFR $\sim 3\times$ lower than main sequence galaxies at those redshifts and stellar masses. Similar conclusions are reached if we compare our results with the main sequence fits by \citet{whitaker12} (see Fig.~\ref{fig_ms}). \subsection{Star formation law}\label{sec_sfr_law} The relationship between the total infrared luminosity ($L_{\rm IR}$, a proxy for the star formation rate) and the total CO luminosity ($L'_{\rm CO}$, a proxy for the available gas mass) of galaxy samples is typically referred to as the `integrated Schmidt--Kennicutt' law \citep{schmidt59,kennicutt98,kennicutt12}, or, more generally, the `star formation' law. Sometimes average surface density values are derived from these quantities, resulting in average star formation rate surface densities ($\Sigma_{\rm SFR}$) and gas surface densities ($\Sigma_{\rm gas}$). Here we explore both relations. \subsubsection{Global star formation law: IR vs.\ CO luminosities}\label{sec_co_lum} \begin{figure} \includegraphics[width=0.99\columnwidth]{fig_co_lum.pdf} \caption{IR luminosity as a function of the CO(1-0) luminosity for both local galaxies (grey open symbols) and high-redshift sources ($z>1$, grey filled symbols) from the compilation in \citet{carilli13}. The sources in our sample are shown with large symbols, using the same coding as in Fig.~\ref{fig_ms}. In addition, we also plot the x-axis position of the remaining CO lines found in our 3mm blind search (downward triangles; see Paper I). The two parallel sequences of `normal' and `starburst' galaxies \citep{daddi10b,genzel10} are shown as dashed lines (in grey and red, respectively). 
Our sources cover a wide range of luminosities, both in the CO line and in the IR continuum. Most of the sources in our sample lie along the sequence of `main sequence' galaxies. Four sources lie above the relation: ID.5, which still falls close to the high-$z$ starburst region; ID.7, in which the AGN contamination may lead to an excess of IR luminosity; and ID.8 and ID.11, which are undetected in CO, and could be shifted towards the relation if one assumes very low CO excitation (as observed in other galaxies of our sample). Conversely, most of the sources detected in CO in our blind search (see Paper I) which lack an optical/IR counterpart lie significantly below the observed relation.}\label{fig_co_lum} \end{figure} In Fig.~\ref{fig_co_lum} we compare the IR and CO(1-0) luminosities of our sources with a compilation of galaxies at both low and high redshift from the review by \citet{carilli13}, and with the secure blind detections in \citet{decarli14}. For galaxies in our sample that are undetected in CO, we plot the corresponding 3-$\sigma$ limit on the line luminosities. The empirical IR--CO luminosity relation motivates the $L_{\rm IR}$ cut in our sample selection, as galaxies with $L_{\rm IR}>10^{11}$\,\Lsun{} should have CO emission brighter than $L'\approx 3\times10^9$\,\Kkmspc{} (i.e., our typical sensitivity limit in ASPECS; see Paper I). {\em All} the galaxies in our sample should therefore be detected in CO. We find that most of the CO--detected galaxies in our sample lie along the 1-to-1 relation followed by local spiral galaxies as well as color-selected main sequence galaxies at $1<z<3$ \citep{daddi10b,genzel10,genzel15,tacconi13}. Only two galaxies significantly deviate: ID.5, which appears on the upper envelope of the IR--CO relation, close to high-redshift starburst galaxies; and ID.7, which is largely underluminous in CO for its bright IR emission. 
As discussed in the previous section, these two galaxies appear as starbursts in Fig.~\ref{fig_ms}. Moreover, ID.7 hosts a bright AGN. If the AGN contamination at optical wavelengths is significant, our MAGPHYS-based SFR estimate is likely too high (since MAGPHYS would associate some of the AGN light at rest-frame optical and UV wavelengths to a young stellar population), thus explaining the large vertical offset of this galaxy with respect to the `star formation law' shown in Fig.~\ref{fig_co_lum}. Notably, out of the 4 CO non-detections in our sample, ID.9 and ID.10 are still consistent with the relation, while ID.8 and ID.11 are not. These two galaxies are located at $z=0.999$ and $z=0.895$, respectively. The lowest-J transition sampled in our study is CO(4-3). Their non-detections might be explained if the excitation in these two sources were much lower than what we assumed to infer $L'_{\rm CO(1-0)}$ ($r_{41}=0.31$; see Sec.~\ref{sec_MH2}). The sources that are also detected in the blind search for CO (ID.1, 2, 3, 4) tend to lie on the lower `envelope' of the plot. This is expected, as these galaxies have been selected based on their CO luminosity (x--axis). Fig.~\ref{fig_co_lum} also shows the x-axis position of the remaining CO blind detections from the 3mm search in Paper I. The CO luminosities of these lines are uncertain (the line identification is ambiguous in many cases, and a fraction of these lines is expected to be false positives; see Paper I); however, it is interesting to note that these sources typically populate ranges of line luminosities that were previously unexplored at $z>1$ \citep[see similar examples in][]{chapman08,chapman15b,casey11}, comparable with or even lower than the typical CO luminosities of local spiral galaxies. We emphasize that a significant fraction of these lines is expected to be real (see Paper I). Deeper data are required to better characterize these candidates. 
\subsubsection{Average surface densities: SFR vs.\ gas mass}\label{sec_ks} We infer average estimates of $\Sigma_{\rm SFR}$ and $\Sigma_{\rm gas}$ by dividing the global SFR and $M_{\rm H2}$ of the galaxy by a fiducial area set by the size of the stellar component, as CO and optical radii are typically comparable \citep{schruba11,tacconi13}. We thus use the information on the stellar morphology derived by \citet{vanderwel12} and reported in Tab.~\ref{tab_sample} to infer $\Sigma_{\rm SFR}$=SFR/($2\,\pi \, R_e^2$) and $\Sigma_{\rm H2}$=$M_{\rm H2}$/($2\,\pi \, R_e^2$), where $M_{\rm H2}$ is our CO-based measurement of the molecular gas mass, and the factor of 2 accounts for the fact that $R_e$ encloses only half of the light of the galaxy \citep[see a similar approach in][]{tacconi13}. In Fig.~\ref{fig_ks} we show the star-formation law for average surface densities. Global measurements of local spiral galaxies and starbursts are taken from \citet{kennicutt98}, corrected for the updated SFR calibration following \citet{kennicutt12} and rescaled to the $\alpha_{\rm CO}$ value adopted in this paper. We also plot the galaxies in the IRAM Plateau de Bure HIgh-z Blue Sequence Survey \citep[PHIBSS;][]{tacconi13}, again corrected to match the same $\alpha_{\rm CO}$ assumption used in this work, and the secure detections in \citet{decarli14}. Interestingly, the two CO-brightest galaxies in our sample, ID.1 and ID.2, populate opposite extremes of the density ranges observed in high-$z$ galaxies: ID.1 appears very compact, thus reaching the top-right corner of the plot ($\Sigma_{\rm gas}\approx 10000$\,\Msun{}\,pc$^{-2}$). On the other hand, in ID.2 the vast gas reservoir is spread over a large area (as apparent in Fig.~\ref{fig_mdyn}), thus yielding a globally low $\Sigma_{\rm gas}$. We also find that most of the sources in our sample lie along the $t_{\rm depl}\approx 1$\,Gyr line, in agreement with local spiral galaxies and the PHIBSS main sequence galaxies. 
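The surface density estimates defined above are straightforward to reproduce. The following minimal sketch implements the $\Sigma_{\rm SFR}$ and $\Sigma_{\rm H2}$ definitions; the input values are purely illustrative (not taken from Tab.~\ref{tab_sample}), and the function name is ours:

```python
import math

def surface_densities(sfr_msun_yr, m_h2_msun, r_e_kpc):
    """Average surface densities over the area 2*pi*R_e^2; the factor of 2
    accounts for R_e enclosing only half of the galaxy light."""
    area_pc2 = 2.0 * math.pi * (r_e_kpc * 1e3) ** 2  # area in pc^2
    # returns (Sigma_SFR [Msun/yr/pc^2], Sigma_H2 [Msun/pc^2])
    return sfr_msun_yr / area_pc2, m_h2_msun / area_pc2

# Hypothetical example values: SFR = 50 Msun/yr, M_H2 = 5e10 Msun, R_e = 2 kpc
sigma_sfr, sigma_h2 = surface_densities(50.0, 5e10, 2.0)
```

For these illustrative numbers, $\Sigma_{\rm H2}\approx 2\times10^{3}$\,\Msun{}\,pc$^{-2}$, i.e.\ within the range spanned by the compact sources discussed above.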
Only ID.7 and ID.8 lie closer to the $t_{\rm depl}\approx0.1$\,Gyr line. In particular, the offset of ID.7 with respect to the bulk of the sample in the context of the global star-formation law (Fig.~\ref{fig_co_lum}) is combined here with the very compact size of the emitting region, thus isolating the source in the top-left corner of the plot (see Fig.~\ref{fig_ks}). Once again, a significant AGN contamination in the estimates of both the rest-frame optical/UV luminosity and the size of the emitting region could explain such an outlier. We also caution that, in some of these galaxies, optical and CO radii might differ. \begin{figure} \includegraphics[width=0.99\columnwidth]{fig_ks.pdf} \caption{The `global' star-formation law relates the average star formation rate surface density ($\Sigma_{\rm SFR}$) to the average gas surface density in galaxies. Here we consider only the molecular gas phase ($\Sigma_{\rm H2}$). Each point in the plot refers to a different galaxy. We plot the reference samples from \citet{kennicutt98} \citep[corrected for the updated SFR calibration in][]{kennicutt12}, as well as the PHIBSS galaxies from \citet{tacconi13}. Data from the literature have been corrected to match the same $\alpha_{\rm CO}=3.6$\,\Msun{}(\Kkmspc)$^{-1}$ assumed in this work. The symbol code is the same as in Fig.~\ref{fig_co_lum}. The galaxies in our sample align along the $t_{\rm depl}\approx 1$\,Gyr line, with the exception of ID.7 and ID.8, which show shorter depletion times. 
It is interesting to note that the two CO-brightest galaxies in our sample, ID.1 and ID.2, populate opposite extremes of the high-$z$ galaxy distribution, with the former being very compact (thus displaying higher SFR and gas densities), and the latter being very extended (thus showing lower SFR and gas densities).}\label{fig_ks} \end{figure} \subsection{Depletion times} Fig.~\ref{fig_tdepl_ssfr} shows the depletion time, $t_{\rm depl}=M_{\rm H2}/{\rm SFR}$, as a function of the specific star formation rate. This timescale sets how quickly the gas is depleted in a galaxy given the currently observed SFR (ignoring any gas replenishment). Our data are compared again with the secure blind detections in \citet{decarli14}, with the PHIBSS sample, and with the sample of starburst galaxies studied by \citet{silverman15} (in the latter case, we do not change the adopted value of $\alpha_{\rm CO}=1.1$\,\Msun(\Kkmspc)$^{-1}$, as these are not main sequence galaxies). Starburst galaxies tend to reside in the bottom-right corner of the plot (they are highly star-forming given their stellar mass, and they are using up their gaseous reservoirs rapidly). Galaxies with large gas reservoirs and mild star formation populate the top-left corner of the plot. Since the IR luminosity is proportional to the SFR, and the CO luminosity is used to infer $M_{\rm H2}$, the y-axis of this plot conceptually corresponds to a diagonal line (top-left to bottom-right) in Fig.~\ref{fig_co_lum}. Also, diagonal lines in Fig.~\ref{fig_tdepl_ssfr} mark the loci of constant molecular--to--stellar mass ratio $M_{\rm H2}/M_*$. The sources in our sample range over almost 2 dex in sSFR and $t_{\rm depl}$. 
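Since $t_{\rm depl}=M_{\rm H2}/{\rm SFR}$ and ${\rm sSFR}={\rm SFR}/M_*$, the two axes of Fig.~\ref{fig_tdepl_ssfr} are linked by
\[
t_{\rm depl} = \frac{M_{\rm H2}/M_*}{\rm sSFR}\,,
\]
so the loci of constant $M_{\rm H2}/M_*$ are straight lines of slope $-1$ in the logarithmic plane, i.e., the diagonal lines mentioned above.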
Notably, ID.1 is highly star-forming (it resides slightly above the main sequence of star forming galaxies at $z\sim 2.5$, see Fig.~\ref{fig_ms}), so we would expect it to reside in the bottom-right corner of Fig.~\ref{fig_tdepl_ssfr}; however, its gaseous reservoir is very large for its IR luminosity (see also Fig.~\ref{fig_co_lum}), thus placing ID.1 in the top-right corner of the plot ($M_{\rm H2}/M_*=12$). On the other hand, ID.2 hosts an enormous reservoir of molecular gas, but because of its even larger stellar mass (yielding a low sSFR), it resides on the left side of the plot ($M_{\rm H2}/M_*=0.24$). Their depletion timescales, however, are comparable (1--3 Gyr). We stress that these results are based on very high-S/N CO line detections, and on well-constrained galaxy SEDs (see Fig.~\ref{fig_id1_a}). The sources that populate the starburst region in Fig.~\ref{fig_ms} and reside in the top-left part of Fig.~\ref{fig_co_lum} (in particular, ID.5 and ID.7) consistently appear in the bottom-right corner of Fig.~\ref{fig_tdepl_ssfr}, among starbursts. \begin{figure} \includegraphics[width=0.99\columnwidth]{fig_tdepl_ssfr.pdf} \caption{The depletion time $t_{\rm depl}=M_{\rm H2}/{\rm SFR}$ as a function of the specific star formation rate sSFR=SFR/$M_{\rm *}$ for the galaxies in our sample, the secure blind detections in \citet{decarli14}, the PHIBSS sample by \citet{tacconi13}, and the starburst sample in \citet{silverman15}. The symbol code is the same as in Fig.~\ref{fig_co_lum}. Starburst galaxies typically reside in the bottom-right corner of the plot. Our ASPECS sources cover a wide range in parameter space, highlighting the diverse properties of these galaxies. }\label{fig_tdepl_ssfr} \end{figure} \subsection{Gas to stellar mass ratios} A useful parameter to investigate the molecular gas content in high-$z$ galaxies is the molecular gas to stellar mass ratio, $M_{\rm H2}/M_{\rm *}$. 
We prefer this parameter over the molecular gas fraction, $f_{\rm gas}=M_{\rm H2}/(M_*+M_{\rm H2})$, as the two quantities involved ($M_{\rm H2}$ and $M_*$) appear independently in the numerator and denominator of the ratio, so that the parameter is well defined even if we only have upper limits on $M_{\rm H2}$. Fig.~\ref{fig_fgas_z} shows the dependence of $M_{\rm H2}/M_*$ on redshift for the galaxies in our sample, and for galaxies from the literature. This plot informs us about the typical gas content as a function of cosmic time, and can help us shed light on the origin of the cosmic star-formation history (see, e.g., \citealt{geach11}, \citealt{magdis12}, and Paper III of this series). Color-selected star-forming galaxies close to the epoch of galaxy assembly are claimed to show large $M_{\rm H2}/M_*$, with reservoirs of gas as large as (or even larger than) the stellar mass (i.e., $M_{\rm H2}/M_*\sim1$; see, e.g., \citealt{daddi10a}, \citealt{tacconi10,tacconi13}). Indeed, we find examples of very high gas ratios: ID.1 ($M_{\rm H2}/M_*=12$) and the starburst galaxy ID.5 ($M_{\rm H2}/M_*=2.5$) are the most extreme cases. However, it is interesting to note that we also find galaxies with very modest gas content, such as ID.2 ($M_{\rm H2}/M_*=0.24$). The CO-detected galaxies at $1.0<z<1.7$ in our sample show an average $M_{\rm H2}/M_*$ ratio that is $\sim 2\times$ lower than the average value for the PHIBSS sample at the same redshift, and closer to the global trend established in \citet{geach11} and \citet{magdis12}. The non-detection of CO in ID.8 places particularly strict limits ($M_{\rm H2}/M_*<0.03$). If the lack of detection is attributed to a very low CO excitation in this galaxy, it would take a 10$\times$ lower $r_{41}$ (i.e., $r_{41}\approx0.03$) to shift ID.8 onto the average trend reported by \citet{geach11}. 
\begin{figure} \includegraphics[width=0.99\columnwidth]{fig_fgas_z.png} \caption{Molecular gas to stellar mass ratio ($M_{\rm H2}/M_*$) as a function of redshift for various samples of galaxies in the literature (grey, from the compilation in \citealt{carilli13}), compared with the secure CO detections in \citet{decarli14}, the PHIBSS sample \citep{tacconi13}, the starburst sample in \citet{silverman15}, and our results from this work. The symbol coding is the same as in Fig.~\ref{fig_ms}. Our data seem to support the picture of a generally increasing $M_{\rm H2}/M_*$ ratio in main sequence galaxies as a function of redshift, as highlighted by the $f_{\rm gas}=0.1 \times (1+z)^2$ green line \citep{geach11,magdis12}. In particular, ID.5, which lies above the `main sequence' in Fig.~\ref{fig_ms}, also shows a large $M_{\rm H2}/M_*$ ratio. On the other hand, we also point out that significant upper limits are present (triangles).}\label{fig_fgas_z} \end{figure} \subsection{CO vs. Dust-based ISM masses}\label{sec_co_vs_dust} In addition to the CO line measurements, six of the 11 galaxies in our sample also have detections in the 1\,mm dust continuum. We can thus estimate the mass of the molecular gas independently of the CO data. The Rayleigh-Jeans part of the dust emission is only weakly dependent on the dust temperature, thus it can be used to trace the mass of dust. Using the dust-to-gas scaling \citep[see, e.g.,][]{sandstrom13}, it is possible to infer the gas mass via the dust mass. \citet{groves15} compare CO-based gas masses with the monochromatic luminosity of the dust continuum in the Rayleigh-Jeans tail. Their analysis relies on a detailed study of 37 local spiral galaxies in the KINGFISH sample \citep{kennicutt11}. 
The galaxy luminosity in the {\em Herschel}/SPIRE 500$\mu$m band is found to scale almost linearly with the gas mass, yielding: \begin{equation}\label{eq_groves} \frac{M_{\rm gas}}{10^{10}\,{\rm M_\odot}}=28.5 \, \frac{\nu L_\nu (500\mu{\rm m})}{10^{10}\,\rm L_\odot}. \end{equation} We compute the rest-frame luminosity $\nu L_\nu$(500$\mu$m) from the observed 1mm continuum of the galaxies in our sample. For the $k$-correction, we adopt a modified black body with $T_{\rm dust}$=25\,K and $\beta$=$1.6$ \citep[see, e.g.,][]{beelen06}, shifted to the redshift of each source. Since the observing frequency (242\,GHz) falls close to the rest-frame 500$\mu$m (as most of our sources reside at $z\sim1.2$), and we are sampling the Rayleigh-Jeans tail (which is almost insensitive to the dust temperature), the differences in the corrections due to the adopted templates are negligible for the purposes of this analysis. The adopted values of the $k$ correction, as well as the resulting gas masses, are listed in Tab.~\ref{tab_Mism}. A similar approach was presented by \citet{scoville14,scoville15}. Their calibration is based on a set of relatively massive [$(0.2-4)\times10^{11}$\,\Msun] star-forming galaxies (30 local star-forming galaxies, 12 low-redshift ULIRGs, and 30 SMGs at $z$=1.4--3.0), all with literature observations of the CO(1-0) transition. The tight relation observed between the CO(1-0) luminosity and the rest-frame 850$\mu$m monochromatic luminosity \citep[see Fig.~1 in][]{scoville15} suggests that a simple conversion factor can be used to derive gas masses from monochromatic dust continuum observations. 
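The $k$ corrections listed in Tab.~\ref{tab_Mism} can be verified directly: for an optically thin modified black body, the correction is the ratio of $S_\nu\propto\nu^{\beta}B_\nu(T_{\rm dust})$ evaluated at rest-frame 500\,$\mu$m and at the rest-frame frequency of the observed 242\,GHz band. A minimal sketch (SI constants; function names are ours):

```python
import math

H_PLANCK = 6.62607015e-34   # Planck constant [J s]
K_BOLTZ = 1.380649e-23      # Boltzmann constant [J/K]
C_LIGHT = 2.99792458e8      # speed of light [m/s]

def mbb_flux(nu_hz, t_dust=25.0, beta=1.6):
    """Optically thin modified black body: S_nu ∝ nu^beta * B_nu(T_dust)."""
    x = H_PLANCK * nu_hz / (K_BOLTZ * t_dust)
    return nu_hz ** (beta + 3.0) / math.expm1(x)

def k_corr(z, nu_obs_ghz=242.0, lam_ref_um=500.0):
    """S_nu(rest-frame 500um) / S_nu at the rest-frame frequency of the observed band."""
    nu_rest = nu_obs_ghz * 1e9 * (1.0 + z)   # rest-frame frequency of the 1.2mm band
    nu_ref = C_LIGHT / (lam_ref_um * 1e-6)   # rest-frame 500um
    return mbb_flux(nu_ref) / mbb_flux(nu_rest)

# ID.1 (z=2.543) and ID.2 (z=1.551): recovers the tabulated 0.374 and 0.919
print(round(k_corr(2.543), 3), round(k_corr(1.551), 3))
```

As expected, the correction approaches unity when the observed band redshifts close to rest-frame 500\,$\mu$m (e.g., ID.2 at $z=1.551$).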
Setting the dust temperature to $T_{\rm dust}=25$\,K \citep[following][]{scoville14}, from eq.~12 in their paper we derive $M_{\rm ISM}$ from our 1\,mm flux densities as follows: \begin{equation}\label{eq_scoville} \frac{M_{\rm ISM}}{10^{10}\,{\rm M_\odot}}=\frac{1.20}{(1+z)^{4.8}}\,\,\frac{F_\nu}{\rm mJy} \, \left(\frac{\nu}{\rm 350\,GHz}\right)^{-3.8} \, \frac{\Gamma_0}{\Gamma_{\rm RJ}} \, \left(\frac{D_{\rm L}}{\rm Gpc}\right)^2 \end{equation} where $F_\nu$ is the observed dust continuum flux density at the observing frequency $\nu$ ($242$\,GHz in our case), $D_{\rm L}$ is the luminosity distance, and $\Gamma_{\rm RJ}$ is a unitless correction factor that accounts for the deviation from the $\nu^2$ scaling of the Rayleigh-Jeans tail. In the reference sample of local galaxies, low-redshift ULIRGs and high-$z$ SMGs that \citet{scoville14} used to calibrate eq.~\ref{eq_scoville}, $\Gamma_{\rm RJ}=\Gamma_0=0.71$. The resulting ISM masses are listed in Tab.~\ref{tab_Mism}. \begin{table*} \caption{\rm Gas mass estimates based on the dust continuum. Only sources detected at 1mm in ASPECS are considered. (1) Source ID. (2) Redshift. (3) Observed 242\,GHz = 1.2\,mm continuum flux density (see Paper II). (4) $k$ correction, expressed as the ratio between the flux density computed at $\lambda_{\rm rest\,frame}=500$\,$\mu$m and the one at $\lambda_{\rm obs}=1.2$\,mm, assuming a modified black body template for the dust emission with $\beta$=1.6 and $T_{\rm dust}=25$\,K. (5) Gas mass based on the 1mm flux density, derived following eq.~\ref{eq_groves} \citep{groves15}. (6) Gas mass based on the 1mm flux density, derived following eq.~\ref{eq_scoville} \citep{scoville14,scoville15}. 
(7) Gas mass derived from the dust mass estimate resulting from MAGPHYS SED fitting, assuming a dust-to-gas ratio DGR=1/100.} \label{tab_Mism} \begin{center} \begin{tabular}{ccccccc} \hline ID & $z$ & $F_{\nu}$(1.2mm) & $k$-corr & log\,$M_{\rm gas,\,Groves}$ & log\,$M_{\rm ISM,\,Scoville}$ & log\,$M_{\rm gas,\,MAGPHYS}$ \\ & & [$\mu$Jy] & & [\Msun] & [\Msun] & [\Msun] \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline 1 & 2.543 & $552.7\pm13.8$ & 0.374 & $11.02_{-0.011}^{+0.011}$ & $10.69_{-0.011}^{+0.011}$ & $10.53_{-0.17}^{+0.17}$ \\ % 2 & 1.551 & $223.1\pm21.6$ & 0.919 & $10.63_{-0.04}^{+0.04}$ & $10.33_{-0.04}^{+0.04}$ & $10.09_{-0.14}^{+0.13}$ \\ % 4 & 1.088 & $ 96.5\pm24.7$ & 1.665 & $10.24_{-0.10}^{+0.13}$ & $9.95_{-0.13}^{+0.10}$ & $ 9.78_{-0.20}^{+0.18}$ \\ % 5 & 1.098 & $ 46.4\pm14.9$ & 1.641 & $ 9.93_{-0.12}^{+0.17}$ & $9.63_{-0.17}^{+0.12}$ & $ 9.25_{-0.19}^{+0.14}$ \\ % 6 & 1.094 & $ 69.6\pm18.9$ & 1.650 & $10.10_{-0.10}^{+0.14}$ & $9.81_{-0.14}^{+0.10}$ & $ 9.58_{-0.18}^{+0.17}$ \\ % 10 & 2.224 & $ 36.7\pm13.8$ & 0.478 & $ 9.86_{-0.14}^{+0.20}$ & $9.53_{-0.20}^{+0.14}$ & $ 9.25_{-0.20}^{+0.17}$ \\ % \hline \end{tabular} \end{center} \end{table*} Finally, we can infer an estimate of $M_{\rm gas}$ from the estimate of the dust mass, $M_{\rm dust}$, that we obtain via our MAGPHYS fit of the available SED, simply scaled by a fixed dust-to-gas mass ratio (DGR). \citet{sandstrom13} investigate the dust and gas content in a sample of local spiral galaxies, and find DGR$\approx$1/70. \citet{genzel15} and \citet{berta16} perform a detailed analysis of both gas and dust mass estimates in galaxies at $0.9<z<3.2$ observed with {\em Herschel}, and find a lower value of DGR$\approx$1/100, which is the value we adopt here. We stress that there is a factor $>2\times$ scatter in the estimates of DGR due to its dependence on $M_*$ and metallicity \citep{sandstrom13,berta16}. 
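As a cross-check, the $M_{\rm ISM}$ values in Tab.~\ref{tab_Mism} can be reproduced by evaluating eq.~\ref{eq_scoville} numerically. The sketch below (flat $\Lambda$CDM luminosity distance with the cosmology adopted in this paper; function names are ours) recovers the tabulated values for ID.1 and ID.2:

```python
import math

def lum_dist_gpc(z, h0=70.0, om=0.3, n=2000):
    """Luminosity distance [Gpc] in flat LCDM, via trapezoidal integration of 1/E(z)."""
    dz = z / n
    integral = 0.0
    for i in range(n + 1):
        zi = i * dz
        weight = 0.5 if i in (0, n) else 1.0
        integral += weight * dz / math.sqrt(om * (1.0 + zi) ** 3 + (1.0 - om))
    d_c = 299792.458 / h0 * integral          # comoving distance [Mpc]
    return (1.0 + z) * d_c / 1e3              # D_L [Gpc]

def m_ism(z, f_nu_mjy, nu_obs_ghz=242.0, t_dust=25.0):
    """ISM mass [Msun] from the Scoville et al. (2014) recipe as given in the text."""
    x = 4.799243e-11 * nu_obs_ghz * 1e9 * (1.0 + z) / t_dust   # h*nu_rest / (k*T_dust)
    gamma_rj = x / math.expm1(x)              # Rayleigh-Jeans departure factor
    return (1.20e10 / (1.0 + z) ** 4.8 * f_nu_mjy
            * (nu_obs_ghz / 350.0) ** -3.8
            * 0.71 / gamma_rj * lum_dist_gpc(z) ** 2)

# ID.1 (z=2.543, 552.7 uJy) and ID.2 (z=1.551, 223.1 uJy)
for z, f in [(2.543, 0.5527), (1.551, 0.2231)]:
    print(round(math.log10(m_ism(z, f)), 2))   # tabulated: 10.69 and 10.33
```

Note that the $(1+z)^{4.8}$ scaling combines the $(1+z)^{-4}$ surface brightness dimming with the $k$ correction implied by the assumed $\beta=1.8$ dust emissivity of that recipe.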
Following the fundamental metallicity relation in \citet{mannucci10}, we estimate that galaxies in our sample typically have solar metallicities (the lowest metallicity estimates are for ID.9: $Z$=$0.6$\,Z$_\odot$; and ID.5: $Z$=$0.7$\,Z$_\odot$), so we do not foresee large intra-sample variations of DGR. For simplicity, in our analysis we thus assume a fixed DGR=1/100. While SED fits are available for all the galaxies in our sample, we consider here only those with a 1mm detection, in order to best anchor the Rayleigh-Jeans tail of the dust emission. The resulting masses are listed in Tab.~\ref{tab_Mism}. Fig.~\ref{fig_mass_comparison} compares the gas estimates based on Eq.~\ref{eq_groves}, following \citet{groves15}; the ones obtained via Eq.~\ref{eq_scoville}, following \citet{scoville14}; and the estimates based on dust from the MAGPHYS SED fits, with our CO-based estimate (assuming $\alpha_{\rm CO}=3.6$\,\Msun{}[\Kkmspc]$^{-1}$). The dust--based gas estimates obtained with different approaches are strongly correlated with each other, as expected because they all scale (almost linearly) with $F_\nu$(1mm). They also correlate well with the CO-based H$_2$ mass estimates over one and a half dex of dynamic range. However, systematic offsets are observed. The \citet{groves15} estimates are on average $1.5\times$ lower than those based on CO. The estimates based on \citet{scoville15} are another $2\times$ lower, and the masses based on MAGPHYS are a factor $1.7\times$ lower than those obtained following \citet{scoville15}. What causes the discrepancies between these mass estimates? The CO masses might be overestimated because of our assumptions in terms of CO excitation and $\alpha_{\rm CO}$. A higher CO excitation would imply higher $r_{J1}$, thus lower CO(1-0) luminosity (see eq.~\ref{eq_MH2}). 
If we assume the M82 excitation template by \citet{weiss07} (see Fig.~\ref{fig_co_excit}), the inferred $M_{\rm H2}$ masses would be $1.3\times$ lower for ID.2--6, and $2.2\times$ lower for ID.1 and ID.10. This would solve the discrepancy with respect to the estimates based on the Groves recipe, and it would mitigate, but not solve, the discrepancy with the other gas mass estimates. However, such a high CO excitation scenario is ruled out by our 1mm line observations (see Fig.~\ref{fig_co_excit}). A lower value of $\alpha_{\rm CO}$ could also help. If we adopt the classical value for ULIRGs, $\alpha_{\rm CO}=0.8$\,\Msun{}(\Kkmspc)$^{-1}$ \citep{bolatto13}, the CO-based gas masses would be a factor 4.5 smaller, and thus in good agreement with the ones from the dust. Fig.~\ref{fig_co_lum} shows that the majority of our sources lie along the relation of main sequence galaxies / local spiral galaxies in the $L_{\rm IR}$--$L'_{\rm CO}$ plot. This is irrespective of the choice of $\alpha_{\rm CO}$. Thorough studies of galaxies along this sequence support our choice of a larger value of $\alpha_{\rm CO}$ \citep[e.g.][]{daddi10a,genzel10,genzel15,sargent14}. Further support for our choice comes from the position of our sources along the `main sequence' of galaxies (Fig.~\ref{fig_ms}). Among the sources listed in Tab.~\ref{tab_Mism} and appearing in Fig.~\ref{fig_mass_comparison}, only ID.5 could be considered a starburst in this respect. Adopting a lower $\alpha_{\rm CO}$ for only this source would lower its molecular gas mass by a factor $\sim 4.5$, thus bringing it close to the bulk of the `main sequence' galaxies in terms of gas fraction (Fig.~\ref{fig_fgas_z}), but pushing it away from the sequence in the star formation law plot (Fig.~\ref{fig_ks}). It would also reduce its depletion time scale (Fig.~\ref{fig_tdepl_ssfr}) and bring the CO-based gas mass closer to the dust-based estimates (Fig.~\ref{fig_mass_comparison}). 
Similar considerations could also apply to ID.1, the CO--brightest galaxy in our sample. The compact morphology and the small separation from a companion galaxy, the rising CO emission at high J, the high values of $\Sigma_{\rm SFR}$ and $\Sigma_{\rm H2}$, and the very large $M_{\rm H2}$/$M_*$ all point toward a starburst scenario for this source; however, it is located along the main sequence of galaxies at $z\sim2.5$ in Fig.~\ref{fig_ms}, and the $L_{\rm IR}$--$L'_{\rm CO(1-0)}$ plot (Fig.~\ref{fig_co_lum}) shows that this source is located along the sequence of local spirals and main sequence galaxies (not along the sequence of starbursts), irrespective of the choice of $\alpha_{\rm CO}$, even if we assume the extreme case of thermalized CO(3-2) emission in order to derive $L'_{\rm CO(1-0)}$. Because of this, and because of the lack of any starburst signature (which would justify a low $\alpha_{\rm CO}$) in all the other sources, the discrepancies between the different gas mass estimates shown in Fig.~\ref{fig_mass_comparison} cannot be resolved by tuning our assumptions on the CO-based mass estimates alone. The dust-based gas mass estimates could also be affected by systematic uncertainties. The offset between the estimates based on Eq.~\ref{eq_groves} and Eq.~\ref{eq_scoville} suggests a systematic difference in the calibration of the two recipes. For example, the luminosity range used in \citet{groves15} to derive Eq.~\ref{eq_groves} does not cover the $>10^{10}$\,\Lsun{} range, where our galaxies are found. Eq.~\ref{eq_scoville}, based on \citet{scoville14}, is pinned to a longer wavelength than that observed in ASPECS (850\,$\mu$m in the rest-frame, instead of $\sim 500$\,$\mu$m), thus the $k$ correction is significant and dependent on the adopted dust template. In particular, Eq.~\ref{eq_scoville} explicitly assumes $\beta$=$1.8$, which might not be universally valid (see discussion in Paper II). Moreover, our dust SED is only sparsely sampled. 
Most remarkably, the comparison between maps of the CO and dust emission in ID.2 suggests that the gas is optically thick over a large area, while the dust is not. We might be missing part of the dust emission due to surface brightness limits, thus affecting our estimates of the total ISM mass. In ID.3, the dust continuum emission is not detected at all, despite the bright CO emission. Since we do not detect any significant 1mm continuum associated with the extended disk of ID.2, and no dust emission at all in ID.3, it is hard to assess how large a correction should be applied. It is possible that a similar issue is present in other sources, in particular in galaxies that we detect as CO emitters but for which we do not recover any 1mm continuum emission (see, e.g., Fig.~\ref{fig_co_lum}). Finally, the underlying assumption in the dust-based gas estimates is the dust-to-gas ratio. This can change significantly as a function of metallicity and other parameters in the galaxy \citep[see][for a detailed discussion]{sandstrom13}. A lower value of DGR (e.g., DGR$\sim$1/200) would halve the discrepancy between the MAGPHYS-based estimates and the CO-based ones. While this is a possibility at the low-mass end of Fig.~\ref{fig_mass_comparison}, we point out that the relatively large stellar masses of the galaxies at the bright end support metallicity values close to solar, thus disfavoring the large DGR values needed to reconcile the two gas mass estimates. \begin{figure} \includegraphics[width=0.99\columnwidth]{fig_mass_comparison.pdf} \caption{Comparison between the H$_2$ masses that we derive from CO for the sources in our sample (x--axis), and the gas masses inferred from the 1mm continuum, following Eq.~\ref{eq_groves} \citep{groves15}, Eq.~\ref{eq_scoville} \citep{scoville14,scoville15}, and based on the MAGPHYS-based estimates of $M_{\rm dust}$, assuming a dust-to-gas ratio of 1/100 \citep{genzel15} (y--axis). The dashed line shows the 1-to-1 case.
Only sources with a 1mm continuum detection are shown. The dust-based estimates are correlated with each other, due to their strong dependence on the 1mm continuum emission. The various mass estimates are also correlated with the CO-based ones over 1.5 dex. There are, however, systematic offsets among the various gas mass recipes, with dust-based masses that appear lower than the ones inferred from CO.}\label{fig_mass_comparison} \end{figure}
arXiv:1607.06771 (2016-07)

1607/1607.04297_arXiv.txt
We use the Jeans equations for an ensemble of collisionless particles to describe the distribution of broad-line region (BLR) clouds in three classes: (A) non-disc, (B) disc-wind, and (C) pure disc structures. We propose that clumpy structures in the brightest quasars belong to class A, fainter quasars and brighter Seyferts belong to class B, and dimmer Seyfert galaxies and all low-luminosity AGNs (LLAGNs) belong to class C. We derive the virial factor, $f$, for disc-like structures and find a negative correlation between the inclination angle, $\theta_{0}$, and $f$. We find similar behaviour for $f$ as a function of the FWHM and of $\sigma_{z}$, the $z$ component of the velocity dispersion. For different values of $\theta_{0}$ we find $1.0 \lesssim f \lesssim 9.0$ in type 1 AGNs and $0.5 \lesssim f \lesssim 1.0$ in type 2 AGNs. Moreover, we find $0.5 \lesssim f \lesssim 6.5$ for different values of $\textsc{FWHM}$ and $1.4 \lesssim f \lesssim 1.8$ for different values of $\sigma_{z}$. We also find that $f$ is relatively insensitive to variations in the bolometric luminosity and in the column density of each cloud, with changes in $f$ of order 0.01. Given this wide range of $f$, the use of a single average virial factor $\langle f \rangle$ is not safe. We therefore propose that the AGN community divide samples into a few subsamples, based on the $\theta_{0}$ and $\textsc{FWHM}$ of their members, and calculate $\langle f \rangle$ for each group separately, in order to reduce the uncertainty in black hole mass estimates.
It is now widely accepted that an active galactic nucleus (AGN) is a supermassive black hole surrounded by an accretion disc. Above the accretion disc there is dense, rapidly-moving gas making up the so-called broad line region (BLR), which emits broad emission lines by reprocessing the continuum radiation from the inner accretion disc (see \citealt{Gaskell09}). The BLR is believed to consist of dense clumps of hot gas ($n_H > 10^9$ cm$^{-3}$; $T \sim 10^4$K) in a much hotter, rarefied medium. The motions of the line-emitting clouds are the main cause of the broadening of the line profiles. The profiles of the broad emission lines are therefore a major source of information about the geometry and kinematics of the BLR. Profiles can be broadly categorized into two shapes: single-peaked and double-peaked. It is believed that double-peaked profiles are emitted from a disc-like clumpy structure (e.g., \citealp{Chen89b,Chen89a,Eracleous94,Eracleous03,Strateva03}). Although obvious double-peaked profiles are seen in only a small fraction of AGNs, this does not mean that such discs are absent in other AGNs. Several studies have shown that, under specific conditions, a disc-like clumpy structure can even produce single-peaked broad emission lines (e.g., \citealp{Chen89a,Dumont90a,Kollatschny02,Kollatschny03}). Spectropolarimetric observations (\citealt{Smith05}) also imply the presence of clumpy discs in the BLR. On the other hand, other authors have suggested a two-component model with a spherical distribution of clouds surrounding the central black hole in addition to the distribution in the midplane (e.g., \citealp{Popovic04,Bon09}). In this model, while the disc is responsible for the production of the broad wings, the spherical distribution is responsible for the narrow cores. The integrated emission line profile is a combination of wings and cores.
Using the width of the broad Balmer emission lines, $\textsc{FWHM}$, and an effective BLR radius, $r_{BLR}$, which is obtained either by reverberation mapping (RM) (e.g., \citealp{Blandford82,Gaskell86,Peterson93,Peterson04}) or from the relationship between optical luminosity and $r_{BLR}$ (\citealp{Dibai77,Kaspi00,Bentz06}), masses of black holes are estimated from the virial theorem, $M = fr_{BLR}\textsc{FWHM}^{2}/G$, where $G$ is the gravitational constant and $f$ is the ``virial factor'' depending on the geometry and kinematics of the BLR and the inclination angle, $\theta_{0}$. Unfortunately, with current technology it is impossible to directly observe the structure of the BLR. Therefore the true value of $f$ for each object is unknown and we are required to use an average virial factor, $\langle f \rangle$, to estimate the mass of the supermassive black hole. Comparison of virial masses with independent estimates of black hole masses based on the $M - \sigma$ relationship (see \citealt{Kormendy13}) has given empirical estimates of the value of $\langle f \rangle$. However, each study has found a different value for $\langle f \rangle$. For example, \citet{Onken04} calculate $\langle f \rangle = 5.5 \pm 1.8$, \citet{Woo10} calculate $\langle f \rangle = 5.2 \pm 1.2$, \citet{Graham11} calculate $\langle f \rangle = 3.8^{+0.7}_{-0.6}$ and \citet{Grier13} calculate $\langle f \rangle = 4.31 \pm 1.05$. This is because each group takes a different sample. These differing values of $\langle f \rangle$ prevent us from obtaining a reliable value for the black hole mass. The situation for low-luminosity AGNs (LLAGNs) is somewhat different. The lack of broad optical emission lines in the faintest cases ($10^{-9}\,L_{Edd} < L < 10^{-6}\,L_{Edd}$) has led to two scenarios about the presence of a BLR in LLAGNs. The first, which is somewhat supported by theoretical models (e.g., \citealp{Nicastro00,Elitzur06}), simply says that the BLR is absent in such faint objects.
However, there is clear evidence in favor of the presence of the BLR in some LLAGNs, at least those with $L > 10^{-5} L_{Edd}$. This supports a second scenario, which says that the BLR exists in LLAGNs but cannot be detected in the faintest cases because the intensity of the broad emission lines is below the detection threshold set mostly by starlight in the host galaxy. In the Palomar survey, it was found that broad H$\alpha$ emission is present in a remarkably high fraction of LINERs (LLAGNs; see \citealt{Ho08}). Moreover, double-peaked broad emission lines have been found in some LINERs, including NGC~7213 (\citealt{Filippenko84}), NGC~1097 (\citealt{Storchi93}), M81 (\citealt{Bower96}), NGC~4450 (\citealt{Ho00}) and NGC~4203 (\citealt{Shields00}). Other studies have also shown the presence of variable broad emission lines in NGC~1097 (\citealt{Storchi93}), M81 (\citealt{Bower96}) and NGC~3065 (e.g., \citealt{Eracleous01}). Recently, \citet{Balmaverde14} found other LLAGNs with $L = 10^{-5} L_{Edd}$ showing BLRs. Since the optical spectra of LLAGNs are severely contaminated by the host galaxy, some authors have suggested using the widths of the Paschen lines rather than the Balmer lines to determine the $\textsc{FWHM}$ (\citealp{Landt11,Landt13,La Franca15}). Also, in order to estimate the BLR radius, $r_{BLR}$, one can use the near-IR continuum luminosity at 1 $\mu m$ (\citealp{Landt11,Landt13}) or the X-ray luminosity (\citealp{Greene10}) rather than the optical luminosity. \citet{Whittle86} used the Boltzmann equation to describe the kinematics of the BLR. More recently, \citet{Wang12} showed that the BLR can be considered as a collisionless ensemble of particles. By considering the Newtonian gravity of the black hole and a quadratic drag force, they used the collisionless Boltzmann equation (CBE) to study the dynamics of the clouds in the case where magnetic forces are unimportant.
Following this approach, some authors have included the effect of magnetic fields on the dynamics of the clouds in the BLR (e.g., \citealt{Khajenabi14}). In this paper, we use the CBE to describe the distribution of the clouds in the BLR. The structure of this paper is as follows: in Section~\ref{s2} we establish our basic formalism and apply it in order to classify the clumpy structure of the BLR. In Section~\ref{s3} we concentrate on LLAGNs and give more details about the distribution of the clouds in such systems. Moreover, we derive the virial factor $f$ for them, and in the final section the conclusions are summarized.
\label{s5} In this work, considering the clouds as a collisionless ensemble of particles, we employed the cylindrical form of the Jeans equations derived in Section 2 to describe a geometric model for their distribution in the BLR. The forces considered in this study are the Newtonian gravity of the black hole, the isotropic radiative force arising from the central source, and the drag force in the linear regime. Taking them into account, we showed that there are three classes of BLR configuration: (A) non-disc, (B) disc-wind, and (C) pure disc structures (see Figure \ref{figure1}). We also found that the distribution of BLR clouds in the brightest quasars belongs to class A, in the dimmer quasars and brighter Seyfert galaxies it belongs to class B, and in the fainter Seyfert galaxies and all LLAGNs (LINERs) it belongs to class C. We then derived the virial factor, $f$, for disc-like structures and found a negative correlation for $f$ as a function of the inclination angle, the width of the broad emission line and the $z$-component of the velocity dispersion. We also found $1.0 \lesssim f \lesssim 9.0$ for type 1 AGNs and $0.5 \lesssim f \lesssim 1.0$ for type 2 AGNs. Moreover, we saw that $f$ varies from approximately 0.5 to 6.5 for different values of $\textsc{FWHM}$ and from 1.4 to 1.8 for different values of $\sigma_{z}$. We also showed that $f$ does not change significantly with variations in the bolometric luminosity, the column density of each cloud, the density index, or $\beta = \langle v_{\phi}^{2} \rangle/\langle v_{R}^{2} \rangle$; the maximum change in the value of $f$ is of order 0.01. In the introduction, we mentioned that since each group takes a different sample of AGNs, they find different values for the average virial factor, $\langle f \rangle$. These differing values lead to significant uncertainties in the estimation of black hole masses.
On the other hand, in this paper we saw that $f$ changes significantly with the inclination angle $\theta_{0}$ and with $\textsc{FWHM}$ (Figure \ref{figure7}). Therefore, in order to obtain more accurate black hole mass estimates, we suggest that observational campaigns divide a sample of objects into a few subsamples based on the values of $\theta_{0}$ and $\textsc{FWHM}$ of the objects, and then determine the value of $\langle f \rangle$ for each subsample separately. This yields several values of $\langle f \rangle$. Finally, given the values of $\theta_{0}$ and $\textsc{FWHM}$ of an object with unknown black hole mass, the appropriate value of $\langle f \rangle$ can be used in the virial theorem to obtain a more accurate estimate of its black hole mass.
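As a purely illustrative sketch of the virial mass estimate $M = f\,r_{BLR}\,\textsc{FWHM}^{2}/G$ that underlies this procedure (the radius, line width and $\langle f \rangle$ below are generic literature-like values, not results of this work):

```python
G = 6.674e-11                    # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30                 # solar mass [kg]
LIGHT_DAY = 2.998e8 * 86400.0    # one light-day [m]

def virial_mass(r_blr_lightdays, fwhm_kms, f=4.3):
    """Black hole mass [Msun] from M = f * r_BLR * FWHM^2 / G."""
    r = r_blr_lightdays * LIGHT_DAY
    v = fwhm_kms * 1.0e3
    return f * r * v**2 / (G * M_SUN)

# Typical Seyfert-like values: 20 light-days, 4000 km/s, <f> = 4.3 (Grier et al. 2013)
m_bh = virial_mass(20.0, 4000.0)   # roughly 3e8 Msun for these inputs
```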
arXiv:1607.04297 (2016-07)

1607/1607.05717_arXiv.txt
We compare observed far infra-red/sub-millimetre (FIR/sub-mm) galaxy spectral energy distributions (SEDs) of massive galaxies ($M_{\star}\gtrsim10^{10}$~$h^{-1}$~M$_{\sun}$) derived through a stacking analysis with predictions from a new model of galaxy formation. The FIR SEDs of the model galaxies are calculated using a self-consistent model for the absorption and re-emission of radiation by interstellar dust based on radiative transfer calculations and global energy balance arguments. Galaxies are selected based on their position on the specific star formation rate (sSFR) - stellar mass ($M_{\star}$) plane. We identify a main sequence of star-forming galaxies in the model, i.e. a well defined relationship between sSFR and $M_\star$, up to redshift $z\sim6$. The scatter of this relationship evolves such that it is generally larger at higher stellar masses and higher redshifts. There is remarkable agreement between the predicted and observed average SEDs across a broad range of redshifts ($0.5\lesssim z\lesssim4$) for galaxies on the main sequence. However, the agreement is less good for starburst galaxies at $z\gtrsim2$, selected here to have elevated {sSFRs}$>10\times$ the main sequence value. We find that the predicted average SEDs are robust to changing the parameters of our dust model within physically plausible values. We also show that the dust temperature evolution of main sequence galaxies in the model is driven by star formation on the main sequence being more burst-dominated at higher redshifts.
\label{sec:Introduction} Interstellar dust plays an important role in observational probes of galaxy formation and evolution. It forms from metals produced by stellar nucleosynthesis, which are then ejected by stellar winds and supernovae into the interstellar medium (ISM), where a fraction ($\sim$~$30-50$~per~cent, e.g. Draine \& Li \citeyear{DL07}) condenses into grains. These grains then absorb stellar radiation and re-emit it at longer wavelengths. Studies of the extragalactic background light have found that the energy density of the cosmic infra-red background (CIB, $\sim10$--$1000$~$\muup$m) is similar to that found in the UV/optical/near infra-red \citep[e.g.][]{HauserDwek01,Dole06}, suggesting that much of the star formation over the history of the Universe has been obscured by dust. Thus understanding the nature of dust and its processing of stellar radiation is crucial to achieve a more complete view of galaxy formation and evolution. Observations suggest that the majority of star formation over the history of the Universe has taken place on the so-called main sequence (MS) of star-forming galaxies, a tight correlation between star formation rate (SFR) and stellar mass ($M_{\star}$) that is observed out to $z\sim4$, with a $1\sigma$ scatter of $\sim0.3$~dex (e.g. Elbaz et al. \citeyear{Elbaz07}; Karim et al. \citeyear{Karim11}; Rodighiero et al. \citeyear{Rodighiero11}; for theoretical predictions see also Mitchell et al. \citeyear{Mitchell14}). This is thought to result from the regulation of star formation through the interplay of gas cooling and feedback processes. Galaxies that have elevated SFRs (typically by factors $\sim4-10$) relative to this main sequence are often referred to as starburst galaxies (SB) in observational studies. In contrast to the secular processes thought to drive star formation on the MS, the elevated SFRs in SB galaxies are thought to be triggered by some dynamical process such as a galaxy merger or disc instability.
The SFRs in these galaxies are usually inferred from a combination of UV and IR photometry and thus a good understanding of the effects of dust in these galaxies is important. However, understanding the dust emission properties of these galaxies is both observationally and theoretically challenging. Observationally, an integrated FIR SED for the whole galaxy is required to give an indication of the luminosity from young stars that is absorbed and re-emitted by the dust. As is discussed in the following paragraph, at far infra-red/sub-mm (FIR/sub-mm) wavelengths broadband photometry is complicated by issues such as confusion due to the coarse angular resolution of single dish telescopes at these wavelengths. Evolutionary synthesis models are then required to convert the infra-red luminosity derived from the observed photometry into a star formation rate \citep[e.g.][]{Kennicutt98}. However, these must make assumptions about the star formation history of the galaxy and the stellar initial mass function (IMF). Various models for dust emission and galaxy SEDs, that often include evolutionary synthesis and make further assumptions about the composition and geometry of the dust, can be fitted to the observed FIR/sub-mm photometry (e.g. Silva et al. \citeyear{Silva98}; Draine \& Li \citeyear{DL07}; da Cunha, Charlot \& Elbaz \citeyear{daCunha08}) to give estimates for physical dust properties such as the dust temperature ($T_{\rm d}$) and dust mass ($M_{\rm dust}$). As mentioned above, a significant difficulty with FIR/sub-mm imaging surveys of high-redshift galaxies is the coarse angular resolution of single-dish telescopes at these long wavelengths [$\sim20$ arcsec full width half maximum (FWHM)]. This, coupled with the high surface density of detectable objects, means that imaging is often confusion-limited and that only the brightest objects (with the highest SFRs) can be resolved as point sources above the confusion background \citep[e.g.][]{Nguyen10}. 
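The luminosity-to-SFR conversion of \cite{Kennicutt98} mentioned above is, for a Salpeter IMF and the 8--1000~$\muup$m luminosity, ${\rm SFR}\,[{\rm M}_{\sun}\,{\rm yr}^{-1}]=4.5\times10^{-44}\,L_{\rm IR}\,[{\rm erg\,s^{-1}}]$. A minimal sketch (the $10^{12}\,{\rm L}_{\sun}$ input is illustrative):

```python
L_SUN_ERG = 3.846e33   # solar luminosity [erg/s]

def sfr_from_lir(l_ir_lsun):
    """SFR [Msun/yr] from the 8-1000 um luminosity [Lsun]; Kennicutt (1998), Salpeter IMF."""
    return 4.5e-44 * l_ir_lsun * L_SUN_ERG

sfr_ulirg = sfr_from_lir(1.0e12)   # a ULIRG-luminosity galaxy forms ~170 Msun/yr
```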
These resolved galaxies either form the massive end of the MS or have elevated SFRs relative to the MS and are thus defined as starburst galaxies (SB). At $z\sim2$, MS galaxies have SFRs high enough to be resolved in \emph{Herschel}\footnote{\url{http://sci.esa.int/herschel/}} imaging only if they have large stellar masses ($M_{\star}\gtrsim10^{10.5}$~$h^{-1}$~M$_{\sun}$), whereas SB galaxies with stellar mass approximately an order of magnitude lower can still be resolved \citep[e.g.][]{Gruppioni13}. For less massive MS galaxies and galaxies at higher redshifts, as it is not possible to individually resolve a complete sample of galaxies, stacking techniques have been developed to overcome the source confusion and derive average FIR/sub-mm SEDs for different samples \citep[e.g.][]{Magdis12,Magnelli14,Santini14,Bethermin15}. These studies typically begin with a stellar mass-selected sample and stack available FIR/sub-mm imaging at the positions of these galaxies, in bins of stellar mass and redshift. An early study using this technique, \cite{Magdis12}, fitted the dust model of \cite{DL07} to stacked FIR/sub-mm SEDs of $M_{\star}\gtrsim3.6\times10^{9}$~$h^{-1}$~M$_{\sun}$ galaxies at $z\sim1$ and $z\sim2$. The Draine and Li model describes interstellar dust as a mixture of polycyclic aromatic hydrocarbon molecules (PAHs), as well as carbonaceous and amorphous silicate grains, with the fraction of dust in PAHs determined by the parameter $q_{\rm PAH}$. The size distributions of these species are chosen such that observed extinction laws in the Milky Way, Large Magellanic Cloud and the Small Magellanic Cloud are broadly reproduced. Dust is assumed to be heated by a radiation field with constant intensity, $U_{\rm min}$, with some fraction, $\gamma$, being exposed to a radiation field ranging in intensity from $U_{\rm min}$ to $U_{\rm max}$, representing dust enclosed in photodissociation regions.
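For orientation, with the commonly adopted power-law slope ${\rm d}M_{\rm dust}/{\rm d}U \propto U^{-2}$ for the exposed fraction, the mass-weighted mean radiation field intensity in the \cite{DL07} parametrization takes a closed form (quoted here as a summary statistic; the studies discussed above fit the full model):

\[
\langle U \rangle = (1-\gamma)\,U_{\rm min}
  + \gamma\,U_{\rm min}\,
  \frac{\ln\left(U_{\rm max}/U_{\rm min}\right)}{1-U_{\rm min}/U_{\rm max}},
\qquad
T_{\rm d} \propto \langle U \rangle^{1/(4+\beta)},
\]

where the second relation, which holds for dust with emissivity index $\beta$ in thermal equilibrium, is why $\langle U\rangle$ is strongly correlated with the average dust temperature.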
This model thus provides a best fitting value for the total dust mass, $U_{\rm min}$, $\gamma$ and $q_{\rm PAH}$. The resulting average radiation field $\langle U\rangle$ is strongly correlated with average dust temperature. Magdis et al. found that the dust temperatures of MS galaxies increase with redshift. \cite{Bethermin15} extended this analysis to $z\sim4$ by stacking on a stellar mass-selected sample ($M_{\star}>2.1\times10^{10}$~$h^{-1}$~M$_{\sun}$) of galaxies derived from \uvista data \citep{Ilbert13} in the \cosmos field. B\'{e}thermin et al. found, similarly to Magdis et al., that the dust temperatures of MS galaxies increase with redshift. From fitting the \cite{DL07} dust model to their stacked SEDs, B\'{e}thermin et al. found a strong increase in the mean intensity of the radiation field, $\langle U\rangle$, which is strongly correlated with $\Td$, for MS galaxies at $z\gtrsim2$. This led these authors to suggest a break in the fundamental metallicity relation \citep[FMR,][]{Mannucci10}, which connects gas metallicity to SFR and stellar mass, and is observed to be redshift independent for $z\lesssim2$. This break has the effect of reducing the gas metallicity (and hence dust mass) at a given stellar mass for $z\gtrsim2$. This results in hotter dust temperatures than implied by simply extrapolating the FMR from lower redshifts. B\'{e}thermin et al. also performed their stacking analysis on a sample of SB galaxies, finding no evidence for dust temperature evolution with redshift for these galaxies, and that they have a similar temperature to the $z\sim2$ main sequence sample. Here we compare predictions from a state-of-the-art semi-analytic model of hierarchical galaxy formation within the $\Lambda$CDM paradigm \citep[\galform,][hereafter L16]{Lacey16} to the observations presented in \cite{Bethermin15}. B\'{e}thermin et al.
also compared their inferred dust-to-stellar mass ratios and gas fractions directly with those predicted by the \galform models of L16 and Gonzalez-Perez et al. (\citeyear{vgp14}, hereafter GP14). Here, we extend this by comparing the FIR/sub-mm SEDs directly and inferring physical properties for both the observed and simulated galaxies in a consistent manner. In the model, the FIR/sub-mm emission is calculated by solving the equations of radiative transfer for dust absorption in an assumed geometry, and by applying energy balance arguments for dust emission to solve for the dust temperature, assuming the dust emits as a modified blackbody. Importantly, this means that the dust temperature is a prediction of the model and not a free parameter. The L16 model can reproduce an unprecedented range of observational data, notably the optical and near infra-red luminosity functions of the galaxy population from $z=0$ to $z\sim3$, and the FIR/sub-mm number counts and redshift distributions \citep[from $250$ to $1100$~$\muup$m, L16, see also][]{Cowley15}. An important feature of the model is that it incorporates two modes of star formation: a quiescent mode, which is fuelled by gas accretion onto a galactic disc, and a burst mode, in which a period of enhanced star formation is triggered by a dynamical process, either a galaxy merger or disc instability. In order to avoid confusion with the definition of starburst arising from a galaxy's position on the sSFR-$M_{\star}$ plane relative to the main sequence, throughout this paper we will refer to populations of galaxies selected in this manner as MS, if they lie on the locus of the star-forming main sequence, or SB, if they are found at elevated SFRs relative to this locus. Additionally, we will refer to populations of galaxies selected according to the \galform star formation mode that dominates their current total SFR as quiescent mode dominated and burst mode dominated populations respectively.
This paper is structured as follows: In Section~\ref{sec:Model} we describe the galaxy formation model and the model for the reprocessing of stellar radiation by dust; in Section~\ref{sec:Results} we present our results\footnote{Some of the results presented here will be made available at \url{http://icc.dur.ac.uk/data/}. For other requests please contact the first author.}, which include a detailed comparison with the observed stacked FIR/sub-mm SEDs of \cite{Bethermin15}. We conclude in Section~\ref{sec:conclusion}. Throughout we assume a flat $\Lambda$CDM cosmology with cosmological parameters consistent with the $7$ year \emph{Wilkinson Microwave Anisotropy Probe} (\emph{WMAP7}) results \citep{Komatsu11} i.e. ($\Omega_{0}$, $\Lambda_{0}$, $h$, $\Omega_{\rm b}$, $\sigma_{8}$, $n_{\rm s}$) $=$ ($0.272$, $0.728$, $0.704$, $0.0455$, $0.81$, $0.967$), to match those used in L16.
\label{sec:conclusion} The re-emission of radiation by interstellar dust produces a large proportion of the extragalactic background light, implying that a significant fraction of the star formation over the history of the Universe has been obscured by dust. Understanding the nature of dust absorption and emission is therefore critical to understanding galaxy formation and evolution. However, the poor angular resolution of most current telescopes at the FIR/sub-mm wavelengths at which dust emits ($\sim$~$20$~arcsec FWHM) means that in the FIR/sub-mm imaging only the brightest galaxies (with the highest SFRs) can be resolved as point sources above the confusion background. These galaxies comprise either starburst galaxies, which lie above the main sequence of star-forming galaxies on the sSFR-$M_{\star}$ plane and do not make the dominant contribution to the global star formation budget, or the massive end (e.g. $M_{\star}\gtrsim10^{10.5}$~$h^{-1}$~M$_{\sun}$ at $z\approx2$) of the main sequence galaxy population. For less massive galaxies, and at higher redshifts, where the galaxies cannot be resolved individually in the FIR/sub-mm imaging, their dust properties can be investigated through a stacking analysis, the outcome of which is an average FIR/sub-mm SED. We present predictions for such a stacking analysis from a state-of-the-art semi-analytic model of hierarchical galaxy formation. This is coupled with a simple model for the reprocessing of stellar radiation by dust in which the dust temperatures for molecular cloud and diffuse dust components are calculated based on the equations of radiative transfer and energy balance arguments, assuming the dust emits as a modified blackbody. This is implemented within a $\Lambda$CDM Millennium style $N$-body simulation which uses the \emph{WMAP7} cosmology.
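The modified blackbody assumed for the dust emission is, in the optically thin limit, $S_{\nu}\propto\nu^{\beta}B_{\nu}(T_{\rm d})$. The following sketch (with an assumed $\beta=1.5$ and two illustrative temperatures) shows the Wien-like shift of the emission peak with dust temperature that underlies such temperature diagnostics:

```python
import numpy as np

H, KB, C = 6.626e-34, 1.381e-23, 2.998e8   # Planck, Boltzmann, speed of light (SI)

def planck_nu(nu, t):
    """Planck function B_nu(T) (SI units)."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * t))

def greybody(nu, t_d, beta=1.5):
    """Optically thin modified blackbody: S_nu ~ nu^beta * B_nu(T_d), unnormalised."""
    return nu**beta * planck_nu(nu, t_d)

nu = np.logspace(11.0, 13.5, 2000)                 # ~3 mm down to ~10 um
nu_peak_cold = nu[np.argmax(greybody(nu, 20.0))]   # T_d = 20 K
nu_peak_hot = nu[np.argmax(greybody(nu, 40.0))]    # T_d = 40 K: peak at ~2x higher nu
```

Because the peak frequency scales linearly with $T_{\rm d}$ at fixed $\beta$, doubling the dust temperature doubles the peak frequency.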
In a way consistent with observations, we define two populations of star-forming galaxies based on their location on the sSFR$^{\prime}$-$M_{\star}^{\prime}$ plane [where the prime symbol represents the value for this physical property that would be inferred assuming a universal Kennicutt \citeyear{Kennicutt83} IMF, see Section~\ref{subsubsec:infer}], namely main sequence (MS) if they lie close to the main locus of star-forming galaxies and starburst (SB) if they are elevated on that plane relative to the MS. We note that these definitions do not necessarily reflect the quiescent and burst modes of star formation as defined within the model based on physical criteria. Quiescent mode star formation takes place within the galaxy disc, and follows an empirical relation in which the star formation depends on the surface density of molecular gas in the disc. Burst mode star formation takes place in the bulge after gas is transferred to this from the disc by some dynamical process, either a merger or a disc instability. Burst mode dominated galaxies have generally hotter dust temperatures (driven by their enhanced SFRs) than quiescent mode dominated galaxies. Our model incorporates a top-heavy IMF, characterised by a slope of $x=1$, for star formation in burst mode. However, when we make comparisons to physical properties we scale all quantities to what would be inferred assuming a universal \cite{Kennicutt83} IMF (see Section~\ref{subsubsec:infer}). Most conversion factors are taken from the literature and are described in the text. However, we do not apply a conversion factor to the true stellar masses predicted by our model, despite the assumption of a top-heavy IMF for burst mode star formation. 
As discussed in Appendix~\ref{app:IMF} this has a relatively small effect on the stellar masses that would be inferred fitting the UV/optical/near-IR SED, a technique commonly used in observational studies, compared to the uncertainties and/or scatter associated with this technique. The model exhibits a tight main sequence (sSFR$^{\prime}=$sSFR$^{\prime}_{\rm MS}$) on the sSFR$^{\prime}$-$M^{\prime}_{\star}$ plane when galaxies are able to self-regulate their SFR through the interplay of the prescriptions for gas cooling, quiescent mode star formation and supernovae feedback. In instances where this is not the case through either (i) dynamical processes triggering burst mode star formation, (ii) environmental processes such as ram-pressure stripping limiting gas supply or (iii) energy input from AGN inhibiting gas cooling, this causes the scatter around sSFR$^{\prime}_{\rm MS}$ to increase. We observe a negative high mass slope for sSFR$^{\prime}_{\rm MS}$ at low redshifts ($z\lesssim1$) which we attribute to AGN feedback in high mass halos. This is also reflected in high bulge-to-total mass ratios in these galaxies. This negative slope exists at higher redshifts in quiescent mode dominated galaxies but is not seen for the total galaxy population, because at these redshifts the high mass end of the main sequence is populated predominantly by burst mode dominated galaxies. Additionally we find the model predicts that galaxies classified as being on the main sequence make the dominant contribution to the star formation rate density at all redshifts, as is seen in observations. For redshifts $z\gtrsim2$ this contribution is predicted to be dominated by galaxies that lie on the main sequence but for which the current SFR is dominated by burst mode star formation. 
We investigate the redshift evolution of the average temperature for main sequence galaxies and find that it is driven primarily by the transition from the main sequence being dominated by burst mode star formation (higher dust temperatures) at high redshifts, to quiescent mode star formation (lower dust temperatures) at low redshifts. We compare the average (stacked) FIR SEDs for galaxies with $M^{\prime}_{\star}>1.7\times10^{10}$~$h^{-1}$~M$_{\sun}$ at a range of redshifts with observations from \cite{Bethermin15}. For main sequence galaxies the agreement is very good for $0.5<z<4$. The predicted dust temperatures agree with those inferred from observations up to $z\sim3$, while at higher redshifts the observations appear to favour hotter dust temperatures than the model predicts. This appears to be due primarily to the model producing too much dust at these redshifts. It could also be that real galaxies are more heterogeneous at higher redshifts, e.g. with clumpier dust distributions resulting in a range of dust temperatures, which would not be well captured by our simple dust model. For starburst galaxies, which lie elevated relative to the main sequence on the sSFR$^{\prime}$-$M_{\star}$ plane, the agreement between the model and observations is also encouraging for $0.5\lesssim z\lesssim2$. For $z\gtrsim2$ the model appears to underpredict the average $L_{\rm IR}$ inferred from the observations. This implies that the model does not allow enough star formation at higher redshifts ($z\gtrsim2$) in extremely star-forming systems. However, the model \emph{is} calibrated to reproduce the observed $850$~$\muup$m number counts, which are composed predominantly of galaxies at $z\sim1-3$ undergoing burst mode star formation. The apparent discrepancy here is most probably due to how these populations are defined.
As we have shown, many of the model galaxies undergoing burst mode star formation at $z\gtrsim2$ would be classified as MS based on their position on the sSFR$^{\prime}$-$M_{\star}^{\prime}$ plane, and their SEDs not included in the SB stack. Thus the model can underpredict the average SEDs of objects with extreme sSFRs at high redshifts whilst still reproducing the abundance of galaxies selected by their emission at $850$~$\muup$m at similar redshifts. We investigate whether the predictions for the stacked SEDs are sensitive to choices made for the values of parameters in our dust model, mainly the fraction of dust in molecular clouds ($f_{\rm cloud}$) and the escape time of stars from their molecular birth clouds ($t_{\rm esc}$). We find that varying these parameters causes only fairly modest changes to the predicted stacked SED, thus these observational data do not provide a stronger constraint on these parameters than previously available data, e.g. the rest-frame $1500$~{\AA} luminosity function at $z\sim3$. In summary, the predictions made by our simple dust model, combined with our semi-analytic model of galaxy formation provide an explanation for the evolution of dust temperatures on the star-forming galaxy main sequence, and can reproduce the average FIR/sub-mm SEDs for such galaxies remarkably well over a broad range of redshifts. Main sequence galaxies make the dominant contribution to the star formation rate density at all epochs, and so this result adds confidence to the predictions of the model and the computation of the FIR SEDs of its galaxies.
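The dust temperatures discussed above are typically inferred observationally by fitting a modified blackbody (greybody) to the FIR photometry. As a minimal, hypothetical sketch of the trend, the snippet below assumes a single-temperature, optically thin greybody $S_\nu \propto \nu^{\beta} B_\nu(T_{\rm dust})$ with an illustrative emissivity index $\beta = 1.5$; this is the common observational fitting form, not the self-consistent dust calculation used in the model:

```python
import math

H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m s^-1]
KB = 1.381e-23  # Boltzmann constant [J K^-1]

def greybody(nu, t_dust, beta=1.5):
    """Optically thin modified blackbody, S_nu ~ nu^beta * B_nu(T).
    Normalization is arbitrary: only the SED shape matters here."""
    b_nu = (2.0 * H * nu**3 / C**2) / math.expm1(H * nu / (KB * t_dust))
    return nu**beta * b_nu

def peak_wavelength_um(t_dust, beta=1.5):
    """Brute-force search for the wavelength (in microns) at which
    S_nu peaks, scanning 5-1000 um in 1 per cent steps."""
    best_lam, best_s = None, -1.0
    lam = 5.0
    while lam < 1000.0:
        s = greybody(C / (lam * 1.0e-6), t_dust, beta)
        if s > best_s:
            best_lam, best_s = lam, s
        lam *= 1.01
    return best_lam

# Hotter dust peaks at shorter wavelengths: the qualitative trend that
# the stacked main sequence SEDs trace with redshift.
for t in (25.0, 40.0):
    print(t, peak_wavelength_um(t))
```

For a fixed $\beta$ the peak wavelength scales inversely with $T_{\rm dust}$, so the transition from quiescent mode (cooler) to burst mode (hotter) dominated star formation shifts the stacked SED peak to shorter wavelengths.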
Owing to the remarkable photometric precision of space observatories like \emph{Kepler}, stellar and planetary systems beyond our own are now being characterized en masse for the first time. These characterizations are pivotal for endeavors such as searching for Earth-like planets and solar twins, understanding the mechanisms that govern stellar evolution, and tracing the dynamics of our Galaxy. The volume of data that is becoming available, however, brings with it the need to process this information accurately and rapidly. While existing methods can constrain \mb{fundamental stellar parameters such as ages, masses, and radii} from these observations, they require substantial computational efforts to do so. We develop a method based on machine learning for rapidly estimating fundamental parameters of main-sequence solar-like stars from classical and asteroseismic observations. We first demonstrate this method on a hare-and-hound exercise and then apply it to the Sun, 16 Cyg A \& B, and 34 planet-hosting candidates that have been observed by the \emph{Kepler} spacecraft. We find that our estimates and their associated uncertainties are comparable to the results of other methods, but with the additional benefit of being able to explore many more stellar parameters while using much less computation time. We furthermore use this method to present evidence for an empirical diffusion-mass relation. Our method is open source and freely available for the community to use.\footnote{The source code for all analyses and for all figures appearing in this manuscript can be found electronically at \url{https://github.com/earlbellinger/asteroseismology} \citep{earl_bellinger_2016_55400}.}
In recent years, dedicated photometric space missions have delivered dramatic improvements to time-series observations of solar-like stars. These improvements have come not only in terms of their precision, but also in their time span and sampling, which has thus enabled direct measurement of dynamical stellar phenomena such as pulsations, binarity, and activity. Detailed measurements like these place strong constraints on models used to determine the ages, masses, and chemical compositions of these stars. This in turn facilitates a wide range of applications in astrophysics, such as testing theories of stellar evolution, characterizing extrasolar planetary systems \citep[e.g.][]{2015ApJ...799..170C, 2015MNRAS.452.2127S}, assessing galactic chemical evolution \citep[e.g.][]{2015ASSP...39..111C}, and performing ensemble studies of the Galaxy \citep[e.g.][]{2011Sci...332..213C, 2013MNRAS.429..423M, 2014ApJS..210....1C}. The motivation to increase photometric quality has in part been driven by the goal of measuring oscillation modes in stars that are like our Sun. Asteroseismology, the study of these oscillations, provides the opportunity to constrain the ages of stars through accurate inferences of their interior structures. However, stellar ages cannot be measured directly; instead, they depend on indirect determinations via stellar modelling. Traditionally, to determine the age of a star, procedures based on iterative optimization (hereinafter IO) seek the stellar model that best matches the available observations \citep{1994ApJ...427.1013B}. 
Several search strategies have been employed, including exploration through a pre-computed grid of models (i.e.\ grid-based modelling, hereinafter GBM; see \citealt{2011ApJ...730...63G, 2014ApJS..210....1C}) and \emph{in situ} optimization (hereinafter ISO) such as genetic algorithms \citep{2014ApJS..214...27M}, Markov-chain Monte Carlo \citep{2012MNRAS.427.1847B}, or the downhill simplex algorithm (\citealt{2013ApJS..208....4P}; see e.g.\ \citealt{2015MNRAS.452.2127S} for an extended discussion on the various methods of dating stars). Utilizing the detailed observations from the \emph{Kepler} and CoRoT space telescopes, these procedures have constrained the ages of several field stars to within 10\% of their main-sequence lifetimes \citep{2015MNRAS.452.2127S}. IO is computationally intensive in that it demands the calculation of a large number of stellar models (see \citealt{2009ApJ...699..373M} for a discussion). ISO requires that new stellar tracks be calculated for each target, as the combinations of stellar parameter values that the optimizer will need for its search are not known \emph{a priori}. ISO methods furthermore converge to local minima and therefore need to be run multiple times from different starting points to attain global coverage. GBM by way of interpolation in a high-dimensional space, on the other hand, is sensitive to the resolution of each parameter and thus requires a very fine grid of models to search through \citep[see e.g.][who use more than five million models that were varied in just four initial parameters]{2010ApJ...725.2176Q}. Additional dimensions such as efficiency parameters (e.g.\ overshooting or mixing length parameters) significantly increase the number of models needed, and hence the search times, for these methods. As a consequence, these approaches typically use, for example, a solar-calibrated mixing length parameter or a fixed amount of convective overshooting.
Since these values are unknown for other stars, keeping them fixed results in underestimated uncertainties. This is especially important in the case of atomic diffusion, which is essential when modelling the Sun \citep[see e.g.][]{1994MNRAS.269.1137B}, but is usually disabled for stars with M/M$_\odot > 1.4$ because it leads to the unobserved consequence of a hydrogen-only surface \citep{2002A&A...390..611M}. These concessions have been made because the relationships connecting \mb{observations} of stars to their internal \mb{properties} are non-linear and difficult to characterize. Here we will show that through the use of machine learning, it is possible to avoid these difficulties by capturing those relations statistically and using them to construct a regression model capable of relating observations of stars to their structural, chemical, and evolutionary properties. The relationships can be learned using many fewer models than IO methods require, and can be used to process entire stellar catalogs with a cost of only seconds per star. To date, only about a hundred solar-like oscillators have had their frequencies resolved, allowing each of them to be modelled in detail using costly methods based on IO. In the forthcoming era of TESS \citep{2015JATIS...1a4003R} and PLATO \citep{2014ExA....38..249R}, however, seismic data for many more stars will become available, and it will not be possible to dedicate large amounts of supercomputing time to every star. Furthermore, for many stars, it will only be possible to resolve \emph{global} asteroseismic quantities rather than individual frequencies. Therefore, the ability to rapidly constrain stellar parameters for large numbers of stars by means of global oscillation analysis will be paramount. In this work, we consider the constrained multiple-regression problem of inferring fundamental stellar \mb{parameters} from observable \mb{quantities}.
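For context, the \emph{global} asteroseismic quantities mentioned above (the frequency of maximum power $\nu_{\rm max}$ and the large frequency separation $\Delta\nu$) already constrain mass and radius through the classic scaling relations. The sketch below is illustrative only; the solar reference values are typical literature choices, and the method developed in this work uses a trained random forest rather than these relations:

```python
# Solar reference values (typical literature choices; treated as assumptions)
NU_MAX_SUN = 3090.0   # frequency of maximum power [muHz]
DELTA_NU_SUN = 135.1  # large frequency separation [muHz]
TEFF_SUN = 5772.0     # effective temperature [K]

def scaling_mass_radius(nu_max, delta_nu, teff):
    """Mass and radius in solar units from the scaling relations
    nu_max ~ M R^-2 Teff^-1/2 and Delta_nu ~ (M R^-3)^1/2."""
    mass = (nu_max / NU_MAX_SUN)**3 \
         * (delta_nu / DELTA_NU_SUN)**-4 \
         * (teff / TEFF_SUN)**1.5
    radius = (nu_max / NU_MAX_SUN) \
           * (delta_nu / DELTA_NU_SUN)**-2 \
           * (teff / TEFF_SUN)**0.5
    return mass, radius

# Solar inputs recover M = R = 1 by construction
m, r = scaling_mass_radius(NU_MAX_SUN, DELTA_NU_SUN, TEFF_SUN)
print(m, r)  # 1.0 1.0
```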
We construct a random forest of decision tree regressors to learn the relationships connecting observable quantities of main-sequence (MS) stars to their zero-age main-sequence (ZAMS) histories and current-age structural and chemical attributes. We validate our technique by inferring the parameters of simulated stars in a hare-and-hound exercise, the Sun, and the well-studied stars 16 Cyg A and B. Finally, we conclude by applying our method on a catalog of \emph{Kepler} objects-of-interest (hereinafter KOI; \citealt{2016MNRAS.456.2183D}). We explore various model physics by considering stellar evolutionary tracks that are varied not only in their initial mass and chemical composition, but also in their efficiency of convection, extent of convective overshooting, and strength of gravitational settling. We compare our results to the recent findings from GBM \citep{2015MNRAS.452.2127S}, ISO \citep{2015ApJ...811L..37M}, interferometry \citep{2013MNRAS.433.1262W}, and asteroseismic glitch analyses \citep{2014ApJ...790..138V} and find that we obtain similar estimates but with orders-of-magnitude speed-ups.
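The regression idea can be sketched in a few dozen lines. The toy below uses bootstrap-aggregated depth-1 trees ("stumps") on a hypothetical one-dimensional model grid, purely to illustrate bagging and the tree-to-tree scatter that serves as an uncertainty estimate; the actual analysis uses full random forests over many observables and stellar parameters:

```python
import random

random.seed(0)

# Toy "model grid": one observable x (think Delta_nu) mapped to one
# parameter y (think age), via a smooth nonlinear relation plus grid noise.
grid_x = [i / 50.0 for i in range(1, 101)]
grid_y = [x**0.5 + 0.01 * random.gauss(0.0, 1.0) for x in grid_x]

def fit_stump(xs, ys):
    """Depth-1 regression tree: the single split minimizing squared error."""
    best = None
    for s in xs:
        left = [y for x, y in zip(xs, ys) if x <= s]
        right = [y for x, y in zip(xs, ys) if x > s]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - ml)**2 for y in left) + sum((y - mr)**2 for y in right)
        if best is None or err < best[0]:
            best = (err, s, ml, mr)
    _, s, ml, mr = best
    return lambda x, s=s, ml=ml, mr=mr: ml if x <= s else mr

def fit_forest(xs, ys, n_trees=150):
    """Bagging: each tree is trained on a bootstrap resample of the grid."""
    n = len(xs)
    trees = []
    for _ in range(n_trees):
        idx = [random.randrange(n) for _ in range(n)]
        trees.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return trees

def predict(trees, x):
    """Mean prediction plus tree-to-tree scatter as an uncertainty proxy."""
    preds = [t(x) for t in trees]
    mean = sum(preds) / len(preds)
    var = sum((p - mean)**2 for p in preds) / len(preds)
    return mean, var**0.5

forest = fit_forest(grid_x, grid_y)
print(predict(forest, 0.1))
print(predict(forest, 1.9))
```

Once trained, a prediction is a single pass through the trees, which is why processing an entire catalog costs only seconds.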
Here we have considered the constrained multiple-regression problem of inferring fundamental stellar parameters from observations. We created a grid of evolutionary tracks varied in mass, chemical composition, mixing length parameter, overshooting coefficient, and diffusion \mb{multiplication} factor. We evolved each track in time along the main sequence and collected \mb{observable quantities} such as effective temperatures and metallicities as well as global statistics on the modes of oscillations from models along each evolutionary path. We used this matrix of \mb{stellar models} to train a machine learning algorithm to be able to discern the patterns that relate observations to \mb{fundamental stellar parameters}. We then applied this method to hare-and-hound exercise data, the Sun, 16 Cyg A and B, and 34 planet-hosting candidates that have been observed by \emph{Kepler} and rapidly obtained precise initial conditions and current-age values of these stars. Remarkably, we were able to empirically determine the value of the diffusion \mb{multiplication} factor and hence the efficiency of diffusion required to reproduce the observations instead of inhibiting it \emph{ad hoc}. A larger sample size \mb{will} better constrain the diffusion \mb{multiplication} factor and determine what other variables are relevant in its parametrization. \mb{This is work in progress.} The method presented here has many advantages over existing approaches. First, random forests can be trained and used in only seconds and hence provide substantial speed-ups over other methods. Observations of a star simply need to be fed through the forest---akin to plugging numbers into an equation---and do not need to be subjected to expensive iterative optimization procedures.
Secondly, random forests perform non-linear and non-parametric regression, which means that the method can use orders-of-magnitude fewer models for the same level of precision, while additionally attaining a more rigorous appraisal of uncertainties for the predicted quantities. Thirdly, our method allows us to investigate wide ranges and combinations of stellar parameters. And finally, the method presented here provides the opportunity to extract insights from the statistical regression that is being performed, which is achieved by examining the relationships in stellar physics that the machine learns by analyzing simulation data. This contrasts with the blind optimization processes of other methods that provide an answer but do not indicate the elements that were important in doing so. We note that the predicted quantities reflect a set of choices in stellar physics. Although such biases are impossible to propagate, varying model parameters that are usually kept fixed---such as the mixing length parameter, diffusion \mb{multiplication} factor, and overshooting coefficient---takes us a step in the right direction. Furthermore, the fact that quantities such as stellar radii and luminosities---quantities that have been measured accurately, not just precisely---can be reproduced both precisely and accurately by this method gives a degree of confidence in its efficacy. The method we have presented here is currently only applicable to main-sequence stars. We intend to extend this study to later stages of evolution.
Sunspot groups are the main source of solar flares, with the energy to power them being supplied by magnetic-field evolution (\eg\ flux emergence or twisting/shearing). To date, few studies have investigated the statistical relation between sunspot-group evolution and flaring, with none considering evolution in the McIntosh classification scheme. Here we present a statistical analysis of sunspot groups from Solar Cycle 22, focusing on 24-hour changes in the three McIntosh classification components. Evolution-dependent $\geqslant$\,C1.0, $\geqslant$\,M1.0, and $\geqslant$\,X1.0 flaring rates are calculated, leading to the following results: (i) flaring rates become increasingly higher for greater degrees of upward evolution through the McIntosh classes, with the opposite found for downward evolution; (ii) the highest flaring rates are found for upward evolution from larger, more complex, classes (\eg\ Zurich D- and E-classes evolving upward to F-class produce $\geqslant$\,C1.0 rates of 2.66\,$\pm$\,0.28 and 2.31\,$\pm$\,0.09 flares per 24\,hours, respectively); (iii) increasingly complex classes give higher rates for all flare magnitudes, even when sunspot groups do not evolve over 24\,hours. These results support the hypothesis that injection of magnetic energy by flux emergence (\ie\ increasing in Zurich or compactness classes) leads to a higher frequency and magnitude of flaring.
\label{sec:intro} Solar flares are known to originate in active regions on the Sun, and they are the result of the rapid release of large quantities of energy \citep[up to 10$^{32}$ ergs;][]{Emslie2012} from complex magnetic-field structures rooted in their sunspot groups. This release of magnetic energy can lead to the acceleration of highly energetic particles and emission of high-energy radiation that can affect the performance and reliability of technology in the near-Earth space environment (\ie\ space weather). Timescales for space-weather events, from first detection to arrival in the near-Earth environment, range from instantaneous (solar flare electromagnetic radiation) to tens of minutes (solar energetic particles) or hours \citep[coronal mass ejections;][]{Vrsnak2013}. There is a great need to develop a better understanding of the conditions that lead to the production of solar flares, due to the simultaneous nature of their initial detection and Earth impact. Historically, the complexity of sunspot groups has been investigated as an indicator of potential flaring activity. A classification scheme describing their magnetic complexity was established by \cite{Hale1919} and is known as the Mount Wilson classification scheme. It originally consisted of three parameters to describe the mixing of magnetic polarities in sunspot groups: $\alpha$ (unipolar); $\beta$ (bipolar); $\gamma$ (multipolar). Early work relating these magnetic classifications to flare productivity showed that sunspot groups of increasingly complex Mount Wilson class (\eg\ $\beta$ to $\beta\gamma$ to $\gamma$) were found to produce increasing frequencies of flaring \citep{Giovanelli1939}. This scheme was later extended to include close ($\leqslant$\,2\degree) mixing of umbral magnetic polarities within one penumbra, known as the $\delta$-configuration \citep{Kunzel1960}. 
Studies including this extended scheme have shown that groups that achieve greater magnetic complexity (\eg\ $\beta\gamma\delta$-configuration) and larger sunspot area (a proxy for total magnetic flux) produce flares of greater magnitude at some point in their lifetime \citep{Sammis2000}. Analogously, a classification scheme describing the white-light structure of sunspot groups was developed, originally by \cite{Cortie1901} and later modified and expanded to include a wider range of parameters \citep{Waldmeier1947,McIntosh1990}. Currently this is referred to as the McIntosh scheme, which always consists of three components [Zpc]: \begin{description} \item[Z] modified Zurich class, describing longitudinal extent of the sunspot group; \item[p] penumbral class, describing size/symmetry of largest sunspot's penumbra; \item[c] compactness class, describing interior spot distribution of the group. \end{description} As these classifications are the primary focus of this work, Section~\ref{sssec:mcint} describes the individual McIntosh components in greater detail. Statistical analysis has previously been carried out on sunspot-group McIntosh classifications to produce historically averaged rates of flaring. Similar to the magnetic-complexity work of \cite{Giovanelli1939}, it was found that sunspot groups with higher McIntosh structural complexity classes (corresponding to larger extent, large and asymmetric penumbrae, and more internal spots) produced higher flaring rates overall \citep{McIntosh1990,Bornmann1994}. In more recent years, several studies have investigated magnetic properties of sunspot groups that are thought to play an important role in flare production. It has been shown that flares most commonly occur in regions that display rapidly emerging flux \citep{Schmieder1994} or twisted, non-potential magnetic fields, a signature of stored free magnetic energy \citep{Hahn2005}. 
Examples of derived, point-in-time magnetic properties include strong horizontal gradients of magnetic field close to polarity inversion lines -- the $R$-value of \citet{Schrijver2007} and the $^{L}$WL$_{\mathrm{SG}}$ value of \citet{Falconer2008} -- and large effective connected magnetic field \citep[$B_{\mathrm{eff}}$;][]{Georgoulis2007}. These derived properties all show a potential for use in flare forecasting through varying degrees of correlation with flaring activity. However, there has yet to be any large-scale statistical analysis on applying these properties to forecast flares. Such large-scale statistical analyses have been carried out on historical records of sunspot properties and their relation to flaring activity. \cite{Gallagher2002} implemented a flare-forecasting method using historical McIntosh classifications to produce flare probabilities from average flare rates under the assumption of Poisson statistics. Although only taking into account morphological properties, the McIntosh--Poisson forecasting method has comparable levels of success to other much more complex techniques \citep{Bloomfield2012} and expert-based systems \citep[\eg][]{Crown2012,Bloomfield2016}. However, none of the works described so far take into account a key factor in pre-flare conditions, namely the evolution of the sunspot-group properties. The energy that is available for flaring is governed by the Poynting flux through the solar surface, which can be modified by changes in total magnetic flux (through emergence or submergence) and reorientation of the magnetic field (through twisting, shearing, or tilting). In terms of flux emergence, \citet{Schrijver2005} found that active-region non-potentiality (correlated with higher likelihood of flaring) was enhanced by flux emergence in the 10\,--\,30\,hours prior to flares.
In addition, \citet{Lee2012} studied the most flare-productive McIntosh classifications and 24-hour changes in their sunspot group area (\ie\ decreasing, steady, or increasing), finding a noticeable increase in flare-occurrence rates for sunspot groups of increasing area. Regarding the reorientation of the magnetic field, \citet{Murray2012} found that local concentrations of magnetic flux at flare locations displayed a field-vector inclination ramp-up towards the vertical before flaring. In addition, this reorientation of the field resulted in a corresponding pre-flare increase in free magnetic energy that then decreased after the flare \citep{Murray2013}. These works highlight that sunspot-group property evolution is an important indicator of flaring activity. However, there has not yet been a study of the temporal evolution of sunspot-group classifications and its potential for use in flare forecasting. Here we present statistical analysis of the evolution of sunspot groups in terms of their McIntosh white-light structural classifications and associated flaring rates. In Section \ref{sec:data_anal}, the distribution of sunspot groups across McIntosh classes is presented along with evidence for mis-classification in limb regions. Section~\ref{sec:results} then focuses on the main results and discussion of the study. Analysis of the overall evolution of sunspot-group McIntosh classes is presented in Section~\ref{ssec:class_evol}, with the class-specific evolution discussed in Section~\ref{ssec:zurich_class_evol}. Flaring rates associated with these class-specific evolution steps are included in Section~\ref{ssec:flare_rates} for three different flaring levels (\ie\ $\geqslant$\,C1.0, $\geqslant$\,M1.0, and $\geqslant$\,X1.0), while our conclusions and future direction are presented in Section~\ref{sec:conc}.
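To make the classification bookkeeping concrete, the sketch below parses three-component McIntosh codes [Zpc] and computes a 24-hour evolution step in Zurich class. The component letters follow \cite{McIntosh1990}; the example codes are hypothetical, and H-class (a unipolar decay state) is deliberately left out of the simple A--F complexity ordering:

```python
ZURICH = "ABCDEFH"        # modified Zurich classes (H: unipolar with penumbra)
PENUMBRAL = "xrsahk"      # penumbral classes of the largest spot
COMPACTNESS = "xoic"      # interior spot-distribution (compactness) classes
ZURICH_ORDER = "ABCDEF"   # complexity ordering; H is a decay state kept apart

def parse_mcintosh(code):
    """Split a three-letter McIntosh code [Zpc] into its components."""
    z, p, c = code[0].upper(), code[1].lower(), code[2].lower()
    if z not in ZURICH or p not in PENUMBRAL or c not in COMPACTNESS:
        raise ValueError("not a valid McIntosh code: " + code)
    return z, p, c

def zurich_step(code_day1, code_day2):
    """24-hour evolution step in Zurich class: positive means upward
    evolution (growth / flux emergence), negative means downward (decay)."""
    z1 = parse_mcintosh(code_day1)[0]
    z2 = parse_mcintosh(code_day2)[0]
    return ZURICH_ORDER.index(z2) - ZURICH_ORDER.index(z1)

# Hypothetical codes for a group observed on two consecutive days
print(zurich_step("Dai", "Ekc"))  # +1: one step upward, D -> E
print(zurich_step("Dai", "Cso"))  # -1: one step downward, D -> C
```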
\label{sec:conc} In this study we have examined the evolution of sunspot groups in terms of their McIntosh classification and their subsequent flaring rates. We have shown that the majority (\ie\ $\geqslant$\,60\,\%) of sunspot groups do not evolve on a 24-hour timescale for the McIntosh modified Zurich, penumbral, and compactness classes (\ie\ Figure~\ref{fig:hist_freq} and its equivalent in Appendices~\ref{app_pen} and \ref{app_comp}, respectively), with a secondary preference in overall evolution by $\pm$\,1 step in class. When examining limb-only locations (\ie\ those beyond $\pm$\,75\degree\ Heliographic longitude) we found that the overall evolution distributions show significant deviation from that observed on disk (\ie\ within $\pm$\,75\degree\ Heliographic longitude), with an inherent bias for evolution upward at the east limb and evolution downward at the west limb. This is a direct result of sunspot groups being mis-classified at both limbs due to foreshortening effects as sunspot groups rotate around the solar limb either into or out of view. This mis-classification manifests itself predominantly in the assignment of Zurich H-class (\ie\ unipolar with penumbra) whereby there is a tendency for over-classification of H-class at the east and west limbs. Taking this mis-classification into consideration, we therefore excluded both limbs from our main analysis of flaring rates and focus on only those sunspot groups within $\pm$\,75\degree\ Heliographic longitude. The evolution of specific Zurich, penumbral, and compactness classes was examined and their resulting percentage occurrences analysed (\ie\ Figure~\ref{fig:zur_evol} and its equivalent in Appendices \ref{app_pen} and \ref{app_comp}, respectively). Again, it was found that sunspot groups predominantly do not evolve over 24\,hours and preferentially evolve by just $\pm$\,1 step in class. 
The Zurich occurrence evolution at the east and west limbs displays significant bias in terms of greater frequencies of upward evolution at the east limb and opposite behaviour (\ie\ downward evolution) at the west limb, reconfirming the mis-classification of Zurich classes at the limbs. Class-specific evolution was examined further to calculate the subsequent 24-hour flaring rates associated with each evolution step. Increasingly higher flaring rates were observed in practically all starting classes for greater degrees of upward evolution in Zurich, penumbral, and compactness class (\ie\ Figure~\ref{fig:zur_flare_nolimb} and its equivalent in Appendices \ref{app_pen} and \ref{app_comp}, respectively), with opposite behaviour (\ie\ sequentially lower flaring rates) observed for greater downward evolution. For example, Figure~\ref{fig:zur_flare_nolimb} and Table~\ref{T-Zur_rates} show that sunspot groups which start as Zurich D-class and do not evolve yield a $\geqslant$\,C1.0 flaring rate of 0.68 flares per 24\,hours, while the rate for those that evolve upwards to E-class is 1.38 flares per 24\,hours and those evolving further upward to F-class is 2.67 flares per 24\,hours (\ie\ roughly double and quadruple, respectively, the rate of the no-evolution case). In contrast, the flaring rate of sunspot groups that start as D-class and evolve downward to C-class is 0.21 flares per 24\,hours and those evolving further downward to B- or H-class is 0.08 or 0.06 flares per 24\,hours (\ie\ roughly a third and a tenth, respectively, the rate of the no-evolution case). The evolution in McIntosh classification, specifically in the Zurich and compactness classes, acts as a proxy for the emergence (upward evolution) or decay (downward evolution) of magnetic flux in a sunspot group. Our analysis therefore shows that flux emergence into a region produces a higher number of flares compared to the decay of flux.
This result complements previous studies relating magnetic-flux emergence to the production of flares. \cite{Lee2012} showed that for the largest and most flare-productive McIntosh classifications, sunspot groups that were observed to increase in spot area over 24\,hours produced higher flaring rates than similarly classified groups that decreased in spot area. Our results agree very well with this and show that the McIntosh classification components can accurately characterize the growth of sunspot groups. In conjunction with this, \cite{Schrijver2005} showed that flares $\geqslant$\,C1.0 were $\approx$2.4 times more frequent in active regions undergoing flux emergence that leads to the production of current systems and non-potential coronae than in near-potential regions. This mirrors our finding that sunspot groups increasing in Zurich, penumbral, or compactness class over 24-hour timescales have systematically higher flaring rates than those showing no change in these classes. Some of the highest rates of flaring were observed for upward evolution from the larger, more complex Zurich classes -- \eg\ bipolar and large sunspot groups that start as Zurich D- and E-classes and evolve to F-class show a $\geqslant$\,C1.0 rate of 2.66\,$\pm$\,0.28 and 2.31\,$\pm$\,0.09 flares per 24\,hours, respectively. It was also found that increasingly complex Zurich classes produce higher flaring rates even when there is no evolution (\ie\ no flux emergence or decay) in a sunspot group over 24\,hours. This behaviour was observed throughout all starting Zurich classes (\ie\ A to F) and flaring magnitudes (\ie\ $\geqslant$\,C1.0, $\geqslant$\,M1.0, and $\geqslant$\,X1.0), indicating that flaring rates are correlated with the starting level of Zurich complexity as well as evolution through the three McIntosh classification components.
Finally, we calculated the associated uncertainty in our flaring rates using standard Poisson errors, in order to determine which of the rates are statistically significant (\ie\ clearly separable from zero). It was found that the majority of the evolution-dependent $\geqslant$\,C1.0 flaring rates are statistically significant -- a direct consequence of high numbers of both flares and evolution occurrence. As flare magnitude increases the flare occurrence drops significantly, leading to less statistically significant rates (\eg\ $\geqslant$\,X1.0 flaring rates are only significant for Zurich F-class groups that remain F-class, with a rate of 0.06\,$\pm$\,0.04 flares per 24\,hours). However, the same systematic behaviour of higher flaring rates for greater upward evolution (and lower rates for greater downward evolution) still persist for $\geqslant$\,M1.0 and $\geqslant$\,X1.0, despite the large uncertainties in these rates. The evolution-dependent flaring rates presented here show potential for use in flare forecasting. Future work will focus on calculating evolution-dependent flaring probabilities under the assumption of Poisson statistics \citep{Gallagher2002}. The forecast performance of flaring rates determined here from Cycle 22 will be tested against data from Cycle 23 (\ie\ 1 August 1996 to 31 December 2010, inclusive), enabling direct comparison to the benchmark performance of the standard point-in-time (\ie\ not considering evolution) McIntosh--Poisson flare forecasting method presented in \citet{Bloomfield2012}. \begin{acks} The authors thank Dr Chris Balch (NOAA/SWPC) for providing the data used in this research. AEM was supported by an Irish Research Council Government of Ireland Postgraduate Scholarship, while DSB was supported by the European Space Agency PRODEX Programme as well as the European Union's Horizon 2020 research and innovation programme under grant agreement No.~640216 (FLARECAST project). \end{acks} \newpage \appendix
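The forecasting step planned above follows directly from such rates: under the Poisson assumption of \cite{Gallagher2002}, a mean rate $\lambda$ (flares per 24\,hours) gives a probability $P = 1 - e^{-\lambda}$ of at least one flare in the next 24\,hours, with a standard error on the rate of $\sqrt{N}/N_{\rm intervals}$. The counts below are hypothetical, chosen only to reproduce a rate similar to the D-class no-evolution case quoted above:

```python
import math

def flaring_rate(n_flares, n_intervals):
    """Mean flaring rate (flares per 24 h interval) and its Poisson error."""
    rate = n_flares / n_intervals
    error = math.sqrt(n_flares) / n_intervals
    return rate, error

def flare_probability(rate):
    """Probability of at least one flare in the next 24 h, assuming flare
    occurrence is Poisson-distributed about the mean rate."""
    return 1.0 - math.exp(-rate)

# Hypothetical counts giving a rate like the non-evolving D-class case
rate, error = flaring_rate(680, 1000)
print(rate, error)                        # 0.68 flares per 24 h, +/- ~0.026
print(round(flare_probability(0.68), 2))  # 0.49
print(round(flare_probability(2.66), 2))  # 0.93
```

Under this assumption, each evolution-dependent rate translates directly into a 24-hour flare probability for forecasting.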
As a shock front interacts with turbulence, it develops corrugation which induces outgoing wave modes in the downstream plasma. For a fast shock wave, the incoming wave modes can either be fast magnetosonic waves originating from downstream, outrunning the shock, or eigenmodes of the upstream plasma drifting through the shock. Using linear perturbation theory in relativistic MHD, this paper provides a general analysis of the corrugation of relativistic magnetized fast shock waves resulting from their interaction with small amplitude disturbances. Transfer functions characterizing the linear response for each of the outgoing modes are calculated as a function of the magnetization of the upstream medium and as a function of the nature of the incoming wave. Interestingly, if the latter is an eigenmode of the upstream plasma, we find that there exists a resonance at which the (linear) response of the shock becomes large or even diverges. This result may have profound consequences on the phenomenology of astrophysical relativistic magnetized shock waves.
\label{sec:introduction} The physics of relativistic shock waves, in which the unshocked plasma enters the shock front with a relative relativistic velocity $v_{\rm sh}\,\sim\,c$, is a topic which has received increased attention since the discovery of various astrophysical sources endowed with relativistic outflows, such as radio-galaxies, micro-quasars, pulsar wind nebulae or gamma-ray bursts. In those objects, relativistic shock waves are believed to play a crucial role in the dissipation of plasma bulk energy into non-thermal particle energy, which is then channeled into non-thermal electromagnetic radiation (or possibly, high energy neutrinos and cosmic rays). The various manifestations of these high energy sources have been a key motivation to understand the physics of collisionless shock waves and of the ensuing particle acceleration processes \citep[see, e.g.,][for reviews]{2011A&ARv..19...42B,2012SSRv..173..309B,2015SSRv..191..519S}. The nature of the turbulence excited in the vicinity of these collisionless shocks remains a nagging open question, which is however central to all the above topics, since it directly governs the physics of acceleration and, possibly, radiation. The physics of shock waves in the collisionless regime has itself been a long-standing problem in plasma physics, going back to the pioneering studies of \citet{1963JNuE....5...43M}, with intense renewed interest related to the possibility of reproducing such shocks in laboratory astrophysics~\citep[e.g.][]{2011PhRvL.106q5002K,2012ApJ...749..171D,Park201238,Huntington_NP_11_173_2015,Park_PHP_22_056311_2015}. The generation of relativistic collisionless shock waves is also already envisaged with future generations of lasers~\citep[e.g.][]{Chen_PRL_114_215001_2015, 2015PhRvL.115u5003L}. One topic of general interest, with direct application to the above fields, is the stability of shock waves.
The study of the corrugation instability of a shock wave goes back to the early works of ~\citet{1958JETP....6..739D} and \citet{1958JETP....6.1179K}, see also ~\citet{1982SvAL....8..320B}, \citet{LandauLifshitz87} or more recently ~\citet{2000PhRvL..84.1180B}. General theorems assuming polytropic equations of state ensure the stability of shock waves against corrugation, in the relativistic~\citep{1986PhFl...29.2847A} and/or magnetized regime~\citep{1964PhFl....7..700G,1967PhFl...10..782L,McKW71}, although instability may exist in other regimes~\citep[e.g.][]{TCST97}. In any case, the stability against corrugation does not preclude the possibility of spontaneous emission of waves by the shock front, as discussed in the above references. The interaction of the shock front with disturbances thus represents a topic of prime interest, as it may lead to the corrugation of the shock front and to the generation of turbulence behind the shock, with possibly large amplification. The transmission of upstream Alfv\'en waves through a sub-relativistic shock front has been addressed, in particular, by~\citet{1986MNRAS.218..551A}; more recently, ~\citet{2012MNRAS.422.3118L} has reported on numerical MHD simulations of the interaction of a fast magnetosonic wave impinging on the downstream side of a relativistic shock front. The present paper proposes a general investigation of the corrugation of relativistic magnetized collisionless shock waves induced by either upstream or downstream small amplitude perturbations. This study is carried out analytically for a planar shock front in linearized relativistic MHD. This problem is addressed as follows. Section~\ref{sec:gen} provides some notations as well as the shock crossing conditions to the first order in perturbations, which relate the amplitude of shock corrugation to the amplitude of incoming and outgoing MHD perturbations of the flow. 
Section~\ref{sec:dscatt} is devoted to the interaction of a fast magnetosonic wave originating from downstream and to its scattering off the shock front, with resulting outgoing waves and shock corrugation. Section~\ref{sec:utrans} discusses the transmission of upstream entropy and Alfv\'en perturbations into downstream turbulence. It reveals, in particular, that there exist resonant wavenumbers of the turbulence for which the amplification of the incoming wave, and consequently the amplitude of the shock corrugation, becomes formally infinite. This resonant excitation of the shock front by incoming upstream turbulence may have profound implications for our understanding of astrophysical shock waves and the associated acceleration processes.
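In schematic terms (the notation below is ours, introduced purely for illustration and not necessarily that of the main text), the linear-response problem treats the corrugation as a Fourier mode and relates each outgoing-wave amplitude to the incoming one through a transfer function:

```latex
% Illustrative notation only:
\begin{equation}
  \delta X(y,z,t) \;=\; \delta X_{\bsk}\,
      e^{i(k_y y + k_z z) - i\omega t},
  \qquad
  \delta\psi_{m}^{\rm out} \;=\; T_{m}(\bsk)\,\delta\psi_{<} ,
\end{equation}
```

where $\delta\psi_{<}$ is the incoming-wave amplitude and $m$ labels the outgoing MHD modes. The resonance discussed in Section~\ref{sec:utrans} then corresponds to wavenumbers at which the $x$-velocity of the outgoing fast magnetosonic mode matches the velocity of the front, driving $T_m$ to large or formally infinite values.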
This paper has provided a general analysis of the corrugation of relativistic magnetized (fast) shock waves induced by the interaction of the shock front with moving disturbances. Two cases have been analyzed, depending on the nature of these disturbances: whether they are fast magnetosonic waves originating from the downstream side of the shock front, outrunning the shock, or whether they are eigenmodes of the upstream plasma. Working to first order in the perturbations of the flows on both sides of the shock front, as well as in the amplitude of corrugation $\delta X$ of the shock front, we have provided transfer functions relating the amplitude of the outgoing wave modes to the amplitude of the incoming wave, thus developing a linear response theory for the corrugation. We have then provided estimates of these transfer functions for different cases of interest. One noteworthy result is that the front generically responds with $\vert\partial\delta X\vert \,\sim\,{\cal O}(1)\delta\psi_{<}$, where $\delta\psi_{<}$ represents the amplitude of the incoming wave, the partial derivative being taken along $t$, $y$ or $z$. The corrugation remains linear as long as $\vert\partial\delta X\vert\,\ll\,1$; therefore the present analysis is restricted to small amplitude incoming waves. Interestingly, however, the extrapolation of the present results indicates that non-linear corrugation can be achieved in realistic situations. In this respect, we have obtained an original solution for the equations of shock crossing in the non-perturbative regime; this solution allows one to calculate the amplitude of fluctuations in density, pressure, velocity and magnetic field components at all locations (and all times) on a shock surface which is arbitrarily rippled in the $y$-direction, but smooth along the background magnetic field.
Furthermore, when corrugation is induced by upstream wave modes, we find that there exist resonant wavenumbers $\bsk$ where the linear response becomes large or even formally infinite, leading to large or formally infinite amplification of the incoming wave amplitude and of the shock corrugation. For a given pair $(k_y,k_z)$, there exists one such resonance in $k_x$ for the incoming mode, corresponding to the value at which the outgoing fast magnetosonic mode moves as fast as the shock front. The physics of shock waves interacting with turbulence containing such resonant wavenumbers should be examined with dedicated numerical simulations able to probe the deep non-linear regime, as the structure of the turbulence produced by the corrugation may have profound consequences for our understanding of astrophysical magnetized relativistic shock waves and their phenomenology.
year: 16 | month: 7 | arxiv_id: 1607.00768
subfolder: 1607 | filename: 1607.00074_arXiv.txt
Periodic dips observed in $\approx20$\% of low-mass X-ray binaries are thought to arise from obscuration of the neutron star by the outer edge of the accretion disk. We report the detection with the {\it Rossi X-ray Timing Explorer}\/ of two dipping episodes in \src, not previously a known dipper. The X-ray spectrum during the dips exhibited a neutral column density elevated by between one and almost two orders of magnitude. Dips were not observed in every cycle of the 18.95-hr orbit, so that the estimated frequency for these events is \aqldiprate. This is the first confirmed example of intermittent dipping in such a system. Assuming that the dips in \src\/ occur because the system inclination is intermediate between the non-dipping and dipping sources implies a range of \aqlincl\ for the source. This result lends support to the presence of a massive ($>2\ M_\odot$) neutron star in \src, and further implies that $\approx30$ additional LMXBs may have inclinations within this range, raising the possibility of intermittent dips in those systems also. Thus, we searched for dips from \nsource\ other bursting systems, without success. For the system with the largest number of dip phases covered, 4U~1820$-$303, the nondetection implies a 95\% upper limit to the dip frequency of \bestdipratelim.
Periodic, irregular dips in the X-ray intensity of low-mass X-ray binaries (LMXBs) were first observed in the early 1980s \cite[]{wm85}. The dips are generally attributed to partial obscuration of the neutron star by a thickened region of the accretion disk close to the line joining the two centres of mass \cite[e.g.][]{dt06}. The dips are typically irregular in shape and depth, with duty cycle between 10--30\%, and (in most systems) are accompanied by an increase in the absorption column density. XB~1916$-$050 was the first such example discovered \cite[]{white82,walter82}, with dips recurring at the $\approx50$~min orbital period. \cite{wm85} described 4 systems showing periodic dips, and one additional system (X~1624$-$490) in which the dips were not yet known to be periodic. Since then, the list of known dippers has grown to include EXO~0748$-$676, 4U~1254$-$69 (XB~1254$-$690), MXB~1659$-$298, 4U~1746$-$371, 4U~1323$-$62, XTE~J1710$-$281, GRS~1747$-$312, and possibly also XTE~J1759$-$220 and 1A~1744$-$361, \cite[e.g.][]{lmxb07}. The most recent discovery of dipping behaviour is in the 24.27-d binary and burst source, GX~13+1 \cite[]{iaria14}. The dips are characterised by a decrease in X-ray intensity, and (usually) an increase in spectral hardness, arising from additional absorption at the low-energy ($< 10$~keV) part of the X-ray spectrum. The lack of photoabsorption in shallow dips seen from X~1755$-$338 was explained by metal-poor obscuring material; the required abundances are 1/600 of the solar value. Similarly, a factor 10--60 shortfall in degree of photoelectric absorption was also noted for XB~1916$-$50 \cite[]{wm85}. Dips may occur over $\approx30$\% of the orbital cycle, generally just prior to inferior conjunction, and culminating in some sources (EXO~0748$-$676 and MXB~1659$-$298) in an eclipse \cite[e.g.][]{parmar86,cw84}. 
The inclination angle for dipping sources is thought to be higher than for the non-dippers, and for the systems which also show eclipses is higher again \cite[e.g.][]{motch87}. Conventionally, the sample of LMXBs was clearly divided into dippers (which showed a dip in almost every orbital cycle) and non-dippers, which never exhibited dips. A possible exception was the candidate absorption event reported from the ultracompact binary 4U~1820$-$303 \cite[]{csb85} following a search for high hardness ratios with HEAO A-2. Scanning observations of the source were made between 1977 August and 1978 March; one of the scans, on 1977 Sep 27, 23:33:56 UT, showed a hardness ratio increased by a factor of almost 3 compared to previous scans. The neutral column density was inferred to have increased for the ``abnormal'' scan. Because of the low duty cycle of the scanning observations, the duration of the event was not well constrained \cite[$\leq2$~hr;][]{mrg88}, although the variation was seemingly not related to the (energy-independent) orbital X-ray intensity modulation. Recently, a single dipping event was observed by the {\it Rossi X-ray Timing Explorer}\/ ({\it RXTE}) from the 18.95-hr binary \src\ \cite[]{gal12b}, not previously known as a dipping source. \src\ is one of the most prolific X-ray transients known, exhibiting bright ($L_X\approx10^{37}\ \eps$) outbursts every few hundred days since 1969 \cite[e.g.][]{campana13}. The transient outburst profiles are quite variable, with a range of durations and fluences noted by several authors \cite[e.g.][]{gungor14}. In recent years, the source has exhibited ``long high'' outbursts in 2011 and 2013, and weaker outbursts in each of the following years \cite[e.g.][]{waterhouse16}. Here we describe in more detail the properties of the event observed from \src, and also report on a search for additional dipping events in a large sample of LMXBs.
We report the detection by \xte\/ of a pair of events in \src\ with properties consistent with the dips found in a subset of low-mass X-ray binaries. These events are the first such ``intermittent'' dips confirmed for a LMXB; given the accumulated exposure on this source, the estimated rate of such events is \aqldiprate. The spectral analysis of the dip segments shows that the inferred neutral column density $n_H$ is elevated typically by an order of magnitude, and up to almost two orders of magnitude, compared to the non-dipping value. Assuming that the dips arise in \src\ by virtue of a higher-than-average inclination, the implied range is \aqlincl; this is in excess of most other estimates for the source. Given the difficulty of detecting such events in other systems, it is possible that an additional 15 systems may be undetected intermittent dippers. Of the possible explanations of the intermittent dipping behaviour, the high inclination seems the most plausible. We also report on a search of \nsource\ additional systems for dips, which was unsuccessful. For the system with the largest exposure, 4U~1636$-$536, we derive an upper limit on the dip rate of \altdipratelim\ (95\%).
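The 95\% upper limits quoted for the non-detections follow from Poisson statistics for zero observed events: the rate $\lambda$ per searched dip phase is bounded by requiring $P(0\ {\rm events}) = e^{-\lambda N} \geq 0.05$. A minimal sketch (the number of searched phases below is a placeholder, not a value from the paper):

```python
import math

def poisson_rate_upper_limit(n_phases, confidence=0.95):
    """Upper limit on the per-phase dip rate when zero dips are
    detected in n_phases independently searched dip phases.
    Solves exp(-rate * n_phases) = 1 - confidence for rate."""
    return -math.log(1.0 - confidence) / n_phases

# Placeholder: 100 searched dip phases with no dip detected
limit = poisson_rate_upper_limit(100)
print(f"rate < {limit:.4f} per dip phase (95% confidence)")
```

For zero detections the 95\% limit is simply $\ln(20)/N \approx 3/N$, so the constraint tightens inversely with the number of dip phases covered, which is why 4U~1820$-$303 (the best-covered system) yields the strongest limit.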
year: 16 | month: 7 | arxiv_id: 1607.00074
subfolder: 1607 | filename: 1607.06464_arXiv.txt
We present ALMA ultra--high--spatial resolution ($\sim 20 \, {\rm mas}$ or $150 \, {\rm pc}$) observations of dust continuum at $920 \, {\rm \mu m}$ and $1.2 \, {\rm mm}$ in a pair of submm galaxies (SMGs) at $z = 3.442$, ALMACAL--1 (A--1: $S_{\rm 870 \mu m} = 6.5 \pm 0.2 \, {\rm mJy}$) and ALMACAL--2 (A--2: $S_{\rm 870 \mu m} = 4.4 \pm 0.2 \, {\rm mJy}$). These are the brightest and most luminous SMGs discovered so far in ALMACAL, a wide and deep (sub)mm survey, which is being carried out in ALMA calibrator fields and currently contains observations at sub-arcsec resolution down to an ${\rm r.m.s.}$ of $\sim 15 \,{\rm \mu Jy \, beam^{-1}}$ in more than 250 calibrator fields. The spectroscopic redshifts of A--1 and A--2 have been confirmed via serendipitous detection of up to nine emission lines, in three different ALMA bands. Our ultra-high-spatial resolution data reveal that about half of the star formation in each of these starbursts occurs in a single compact clump (FWHM size of $\sim 350 \, {\rm pc}$). This structure is confirmed by independent datasets at $920 \, {\rm \mu m}$ and $1.2 \, {\rm mm}$. In A--1, two additional, fainter clumps are found. The star-formation rate (SFR) surface densities of all these clumps are extremely high, $\Sigma_{\rm SFR} \sim 1200$ to $\sim 3000 \, {M_\odot \, {\rm yr}^{-1} \, {\rm kpc}^{-2}}$, the highest found in high-redshift galaxies. There is a small probability that A--1 and A--2 are the lensed components of a background source gravitationally amplified by the blazar host. If this were the case, the effective radius of the dusty galaxy in the source plane would be $R_{\rm eff} \sim 40 \, {\rm pc}$, and the de-magnified SFR surface density would be $\Sigma_{\rm SFR} \sim 10000 \, {M_\odot \, {\rm yr}^{-1} \, {\rm kpc}^{-2}}$, comparable with the eastern nucleus of Arp 220.
Despite being unable to rule out an AGN contribution, our results suggest that a significant percentage of the enormous far-IR luminosity in some dusty starbursts is concentrated in very small star-forming regions. The high $\Sigma_{\rm SFR}$ in our pair of SMGs could only be measured thanks to the ultra--high--resolution ALMA observations used in this work, demonstrating that long-baseline observations are essential to study and interpret the properties of dusty starbursts in the early Universe.
Two decades ago, the first large format bolometer cameras on single-dish submm telescopes discovered a population of galaxies that were forming stars at tremendous rates, the so-called submm galaxies \citep[SMGs,][]{Smail1997ApJ...490L...5S, Barger1998Natur.394..248B, Hughes1998Natur.394..241H, Blain2002PhR...369..111B}. Later, it was reported that these starbursts were predominantly at high redshift, $z \sim 1 - 3$ \citep{Chapman2005ApJ...622..772C, Simpson2014ApJ...788..125S}. One of the main problems of these single-dish submm observations is their large beams, typically $>10''$. This complicates the multi-wavelength counterpart identification in the absence of higher-resolution (sub)mm follow-up and prevents us from studying the morphology of dust emission, needed to help interpret the properties of the ISM in dusty starbursts. Interferometric observations at arcsec and sub-arcsec resolution revealed that most SMGs are major mergers, on both morphological and kinematic grounds \citep[e.g.][]{Tacconi2008ApJ...680..246T,Engel2010ApJ...724..233E}. Building on early indications from radio imaging \citep{Ivison2007MNRAS.380..199I}, ALMA revealed that single-dish submm sources are normally resolved into several distinct components \citep{Karim2013MNRAS.432....2K,Hodge2013ApJ...768...91H}, although it is not clear that all of them are at the same redshift and, therefore, physically associated. Based on limited ALMA data, \cite{Ikarashi2015ApJ...810..133I} reported that the dust in SMGs at $z > 3$ is confined to a relatively compact region, with a FWHM size of $\sim 0.2''$ or $\sim 1.5\,{\rm kpc}$. This average value is compatible with the size of SMGs at slightly lower redshifts reported in \cite{Simpson2015ApJ...799...81S}. Due to the still modest spatial resolution in those works, it was not possible to explore any sub-galactic structure within the SMGs.
Using observations at higher spatial resolution ($\sim 0.1''$), \cite{Oteo2016arXiv160107549O} studied the morphology of two interacting starbursts at $z \sim 4.4$. The small beam size resolved the internal structure of the two sources, and revealed that the dust emission is smoothly distributed on $\sim {\rm kpc}$ scales, in contrast with the more irregular [CII] emission. Analysing strongly lensed sources offers an alternative to high-spatial-resolution observations \citep{Swinbank2010Natur.464..733S,Negrello2010Sci...330..800N,Bussmann2013ApJ...779...25B,Bussmann2015ApJ...812...43B}. Arguably, the best example is the ALMA study of SDP.81 \citep{Vlahakis2015ApJ...808L...4A}, a strongly lensed starburst at $z \sim 3$ \citep{Negrello2014MNRAS.440.1999N,Dye2014MNRAS.440.2013D,Frayer2011ApJ...726L..22F} selected from the {\it Herschel}-ATLAS \citep{Eales2010PASP..122..499E}. \cite{Dye2015MNRAS.452.2258D} modelled the lensed dust and CO emission of SDP.81 \citep[see also][]{Rybak2015MNRAS.451L..40R,Rybak2015MNRAS.453L..26R} and the dynamical analysis presented in \cite{Swinbank2015ApJ...806L..17S} revealed that SDP.81 comprises at least five star-forming clumps, which are rotating with a disk-like velocity field. However, with lensed galaxies the results (especially those lensed by galaxy-scale potential wells) must rely on accurate lens modeling ensuring that all the recovered source-plane emission is real and not an artifact of the modeling itself. Furthermore, and importantly, even relatively bright intrinsic emission can lie below the detection threshold if the geometry is not favourable, giving a misleading picture. Thanks to the unique sensitivity and long-baseline capabilities of ALMA, ultra-high-spatial resolution observations can be carried out, for the first time, in unlensed FIR-bright sources.
In this work we present ultra-high-spatial resolution observations ($\sim 20 \, {\rm mas}$) in a pair of submm galaxies (SMGs) at $z = 3.442$ selected from {\sc ALMACAL} \citep{Oteo2016ApJ...822...36O}. The main difference between our work and previous studies \citep{Simpson2015ApJ...799...81S,Ikarashi2015ApJ...810..133I} is the use of a significant number of very long baselines, providing $\sim 10\times$ better spatial resolution. Furthermore, our in-field calibrator and subsequent self-calibration ensures near-perfect phase stability on the longest baselines. Additionally, we have two independent datasets in ALMA band 6 (B6) and band 7 (B7), which prove the reliability of the structure we see. The paper is structured as follows: \S \ref{data_set_ALMACAL} presents the data set used in this work. \S \ref{section_redshift_confirmation_J1058} presents the redshift confirmation of our two sources and their FIR SED. In \S \ref{section_morphology_pc_scales} we discuss the morphology of the dust emission in our sources at $0.02''$ or $\sim 150\,{\rm pc}$ resolution. Finally, \S \ref{concluuuuu} summarizes the main conclusions of the paper. A \cite{Salpeter1955} IMF is assumed to derive star-formation rates (SFRs). We assume a flat universe with $(\Omega_m, \Omega_\Lambda, h_0)=(0.3, 0.7, 0.7)$. For this cosmology, the angular scale is $\sim 7.3\,{\rm kpc}$ per arcsec at $z = 3.442$, the redshift of the sources under study. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{./figs/calibrator_map.eps} \caption{Continuum map ($870 \, {\rm \mu m}$) of the two dusty starbursts at $z = 3.442$ (ALMACAL--1 and ALMACAL--2) discovered around the calibrator J1058+0133 at $z = 0.88$. The coordinates of the two sources can be found in Table \ref{table_properties_J1058_SMGs}. The calibrator has been subtracted from the data in the $uv$ plane by using a point--source model and is located at the position marked by the red cross.
Orange contours represent the jet emanating from J1058+0133, revealed by $3 \, {\rm mm}$ imaging. The image is $16''$ on each side, and the beam of the $870 \, {\rm \mu m}$ continuum observations is shown on the bottom left. } \vspace{5mm} \label{fig_map_calibrator} \end{figure} \begin{figure*} \centering \includegraphics[width=0.35\textwidth]{./figs/CO65_spec_SMG1.eps} \hspace{-11mm} \includegraphics[width=0.35\textwidth]{./figs/CO65_spec_SMG2.eps}\\ \vspace{-4mm} \includegraphics[width=0.35\textwidth]{./figs/CO98_spec_SMG1.eps} \hspace{-11mm} \includegraphics[width=0.35\textwidth]{./figs/CO98_spec_SMG2.eps}\\ \vspace{-4mm} \includegraphics[width=0.35\textwidth]{./figs/CO109_spec_SMG1.eps} \hspace{-11mm} \includegraphics[width=0.35\textwidth]{./figs/CO109_spec_SMG2.eps}\\ \vspace{-4mm} \includegraphics[width=0.35\textwidth]{./figs/CO1110_spec_SMG1.eps} \hspace{-11mm} \includegraphics[width=0.35\textwidth]{./figs/CO1110_spec_SMG2.eps}\\ \vspace{-4mm} \includegraphics[width=0.35\textwidth]{./figs/CO1312_spec_SMG1.eps} \hspace{-11mm} \includegraphics[width=0.35\textwidth]{./figs/CO1312_spec_SMG2.eps}\\ \vspace{-4mm} \includegraphics[width=0.35\textwidth]{./figs/CO1413_spec_SMG1.eps} \hspace{-11mm} \includegraphics[width=0.35\textwidth]{./figs/CO1413_spec_SMG2.eps}\\ \vspace{-4mm} \includegraphics[width=0.35\textwidth]{./figs/H2O312303_spec_SMG1.eps} \hspace{-11mm} \includegraphics[width=0.35\textwidth]{./figs/H2O312303_spec_SMG2.eps}\\ \vspace{-4mm} \includegraphics[width=0.35\textwidth]{./figs/H2O422413_spec_SMG1.eps} \hspace{-11mm} \includegraphics[width=0.35\textwidth]{./figs/H2O422413_spec_SMG2.eps}\\ \vspace{-4mm} \includegraphics[width=0.35\textwidth]{./figs/H2O2111_spec_SMG1.eps} \hspace{-11mm} \includegraphics[width=0.35\textwidth]{./figs/H2O2111_spec_SMG2.eps}\\ \caption{Continuum-subtracted spectra showing the coverage of emission lines in our two SMGs, ALMACAL--1 ({\it left}) and ALMACAL--2 ({\it right}).
Up to nine emission lines are detected in each source, unambiguously confirming that their redshift is $z = 3.442$. The detected emission lines (except $^{12}$CO(10-9), which is only half covered) are fitted with Gaussian profiles (plotted as the red curves) in order to calculate their observed fluxes. The absence of a Gaussian fit in a given panel means that the corresponding line has not been detected. The vertical dashed lines indicate $v = 0 \, {\rm km \, s^{-1}}$ for a redshift $z = 3.442$. It should be pointed out that the redshift confirmation has been obtained from high-$J$ CO and ${\rm H_2O}$ lines, which is unusual for FIR-bright sources, where redshift confirmation is normally achieved using spectral scans in the 3mm band \citep{Weiss2009ApJ...705L..45W_CO,Weiss2013ApJ...767...88W,Asboth2016arXiv160102665A,Oteo2016arXiv160107549O,Strandet2016arXiv160305094S}. } \label{detected_lines_SMGs} \end{figure*} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{./figs/SEDS_for_paper.eps} \caption{FIR SED of A--1 (red) and A--2 (orange). All photometric points come from the multi-band observations in ALMA bands 6, 7, 8 and 9. Since there are available data on each side of B6 and B7 we have split the data for those bands into two sub-bands. With this, we have six photometric points for each source (and two $5 \sigma$ upper limits in ALMA bands 4 and 3 indicated by the grey arrows). It should be noted that the error bars on the photometric points are smaller than the size of the filled dots. The FIR SEDs have been fitted assuming optically thin models with dust emissivity $\beta = 2.0$ (dashed curves) to derive their dust temperatures and total IR luminosities (see Table \ref{table_properties_J1058_SMGs}). For reference, we have included the templates associated with the average FIR SED of ALESS SMGs \citep{Swinbank2014MNRAS.438.1267S} and Arp 220, redshifted to $z = 3.442$ and re-scaled using the observed $460 \, {\rm \mu m}$ flux density of each source.
} \vspace{5mm} \label{figure_TDLIR_components} \end{figure}
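The angular scale of $\sim 7.3\,{\rm kpc}$ per arcsec quoted above for $z = 3.442$ with $(\Omega_m, \Omega_\Lambda, h_0)=(0.3, 0.7, 0.7)$ can be checked with a short flat-$\Lambda$CDM calculation; this sketch integrates the comoving distance numerically with Simpson's rule rather than calling an astronomy library:

```python
import math

C_KM_S = 299792.458  # speed of light [km/s]

def angular_scale_kpc_per_arcsec(z, omega_m=0.3, omega_l=0.7, h0=70.0, n=1000):
    """Proper transverse scale [kpc/arcsec] at redshift z in flat LCDM."""
    def inv_e(zp):
        # 1/E(z) = 1/sqrt(Omega_m (1+z)^3 + Omega_Lambda)
        return 1.0 / math.sqrt(omega_m * (1.0 + zp) ** 3 + omega_l)

    # Composite Simpson's rule for the comoving-distance integral (n even)
    step = z / n
    s = inv_e(0.0) + inv_e(z)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * inv_e(i * step)
    d_c = (C_KM_S / h0) * s * step / 3.0   # comoving distance [Mpc]
    d_a = d_c / (1.0 + z)                  # angular-diameter distance [Mpc]
    arcsec = math.pi / (180.0 * 3600.0)    # 1 arcsec in radians
    return d_a * arcsec * 1000.0           # Mpc -> kpc

print(f"{angular_scale_kpc_per_arcsec(3.442):.2f} kpc/arcsec")
```

The result lands close to the quoted $\sim 7.3\,{\rm kpc}$ per arcsec; a library such as astropy's `FlatLambdaCDM` would give the same number to the precision quoted here.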
In this paper we have presented ultra-high spatial resolution ($\sim 20 \, {\rm mas}$) dust continuum ($870 \, {\rm \mu m}$ and $1.2 \, {\rm mm}$) observations of two dusty starbursts, the brightest SMGs detected so far in our survey of serendipitous sources in the fields of ALMA calibrators: A--1 ($S_{\rm 870 \mu m} = 6.5 \pm 0.2 \, {\rm mJy}$) and A--2 ($S_{\rm 870 \mu m} = 4.4 \pm 0.2 \, {\rm mJy}$). The main conclusions of our work are the following: \begin{enumerate} \item We have determined the spectroscopic redshift of our two dusty starbursts to be $z = 3.442$ via detection of up to nine $^{12}$CO and ${\rm H_2O}$ emission lines in ALMA bands 4, 6, and 7. The maximum velocity shift found between the emission lines of A--1 and A--2 (which are separated on the sky by $28 \, {\rm kpc}$) is less than $100 \, {\rm km \, s^{-1}}$, significantly lower than in other high-redshift interacting starbursts. \item Using flux densities measured in ALMA bands 6, 7, 8, and 9 we have determined the dust temperature and total IR luminosity of each of the two dusty starbursts. These values are compatible with those found for the classical population of SMGs (with A--1 being warmer than A--2), and they would have been selected as SMGs in single-dish submm surveys. Uniquely, the FIR SEDs of our two dusty starbursts have been constrained with sub-arcsec resolution observations, unlike in previous work based on single-dish FIR/submm observations, which suffer from large beam sizes and source confusion problems. \item Our ALMA ultra-high-resolution imaging reveals that about half of the dust emission in A--1 and A--2 arises in compact components (with FWHM sizes of $\sim 350 \, {\rm pc}$). Two additional, fainter star-forming clumps are found in A--1. We recall that our in-field calibrator and subsequent self-calibration ensures near-perfect phase stability on the longest baselines and hence excellent image quality.
Moreover, we have two independent datasets in ALMA B6 and B7 at similar spatial resolution which prove the reliability of the reported structures. \item The high SFR and the compact size of all the star-forming clumps in A--1 and A--2 indicate extremely high SFR surface densities of up to $\Sigma_{\rm SFR} \sim 6,000 \, M_\odot \, {\rm yr}^{-1} \, {\rm kpc}^{-2}$. These values are significantly higher than those previously obtained in high-redshift dusty starbursts, and only comparable to the values found in the nuclei of Arp 220 with observations at similar (physical) spatial resolution. It should be noted that the SFR is obtained assuming that the IR luminosity is due to star formation, since with the current data we cannot quantify the possible contribution of an AGN to the IR luminosity. \item We argue that the extremely high SFR surface densities of the star-forming clumps in A--1 and A--2 might be common in high redshift dusty starbursts but are only visible thanks to the availability of ultra-high spatial resolution data. This highlights the importance of long-baseline observations for the study of the ISM of dusty starbursts in the early Universe. \item There is a small probability that this system is lensed, in the sense that the two SMGs around J1058+0133 are actually the lensed emission of a source gravitationally amplified by the blazar host. If this is actually the case, the resolution of the observations would increase to $\sim 50 \, {\rm pc}$ and we would be resolving sizes comparable to individual giant molecular clouds. The galaxy in the source plane would have an effective radius of $R_{\rm eff} \sim 40 \, {\rm pc}$, implying a de-magnified SFR surface density of $\Sigma_{\rm SFR} \sim 10,000 \, M_\odot \, {\rm yr}^{-1} \, {\rm kpc}^{-2}$, only comparable with the value found in the eastern nucleus of Arp 220. \end{enumerate}
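For reference, an SFR surface density of the kind quoted above follows from dividing a clump's SFR by its projected area. A minimal sketch, where the SFR value is an assumed placeholder (the paper's clump SFRs are not restated here) and the clump is approximated as a circle of radius FWHM/2:

```python
import math

def sfr_surface_density(sfr_msun_yr, fwhm_pc):
    """SFR surface density [Msun/yr/kpc^2], treating the clump as a
    circle of radius FWHM/2 (a common, simplistic approximation)."""
    r_kpc = 0.5 * fwhm_pc / 1000.0
    return sfr_msun_yr / (math.pi * r_kpc ** 2)

# Placeholder SFR for a single ~350 pc clump (not a value from the paper):
print(f"{sfr_surface_density(300.0, 350.0):.0f} Msun/yr/kpc^2")
```

With a few hundred ${\rm M_\odot\,yr^{-1}}$ confined to a $\sim 350\,{\rm pc}$ clump, surface densities of a few thousand ${\rm M_\odot\,yr^{-1}\,kpc^{-2}}$ follow directly, illustrating why such compact clumps dominate the quoted $\Sigma_{\rm SFR}$.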
year: 16 | month: 7 | arxiv_id: 1607.06464
subfolder: 1607 | filename: 1607.06378_arXiv.txt
{The concept of pseudomagnitude was recently introduced by \cite{Chelli16}, to estimate apparent stellar diameters using a strictly observational methodology. Pseudomagnitudes are distance indicators, which have the remarkable property of being reddening free. In this study, we use Hipparcos parallax measurements to compute the mean absolute pseudomagnitudes of solar neighbourhood dwarf stars as a function of their spectral type. To illustrate the use of absolute pseudomagnitudes, we derive the distance moduli of $360$ Pleiades stars and find that the centroid of their distribution is $5.715\pm0.018$, corresponding to a distance of $139.0\pm1.2$\,pc. We locate the subset of $\sim 50$ Pleiades stars observed by Hipparcos at a mean distance of $135.5\pm3.7$\,pc, thus confirming the frequently reported anomaly in the Hipparcos measurements of these stars. }
In astrophysics, the calculation of interstellar extinction is a complex and recurring problem. For many objects, such as those buried in star-forming regions, unreddening the photometries is a difficult and demanding task. In the case of a star, the calculation of interstellar extinction requires a detailed knowledge of its luminosity class, spectral type, and intrinsic colors. That is a lot of parameters, not always available, whose robustness is often uncertain. This leads to the accumulation of errors, and makes it nearly impossible to attempt any massive statistical analysis. We recently introduced the concept of pseudomagnitude for the calculation of the apparent size of stars, thus avoiding the problem of visual extinction \citep{Chelli14,Chelli16}. This has allowed us to compile a catalogue of $453\,000$ angular diameters, with an accuracy of the order of $1\%$ ($2\%$ systematic). Pseudomagnitudes are linear combinations of magnitudes constructed in such a way as to eliminate interstellar extinction. They are purely observational quantities that are unaffected by reddening effects, and can be applied to any type of object. As in the case of magnitudes, pseudomagnitudes are distance indicators, and absolute pseudomagnitudes, measured at a distance of $10$\,pc, are luminosity indicators. Knowledge of the pseudomagnitudes and absolute pseudomagnitudes of stars allows their distance to be estimated. In the present study, we use the parallax measurements of Hipparcos~\citep{ESAHIP,2007A&A...474..653V} to calculate the mean absolute pseudomagnitude of field dwarf stars, as a function of their spectral type. As an example, we use this technique to determine the centroid of the distance distribution of 360 stars in the Pleiades cluster. In section~\ref{sec:pseudomagnitudes}, we explain the concept of pseudomagnitudes.
In section~\ref{sec:absolutepseudomagnitudes}, we use distance-filtered parallax measurements to calculate the mean absolute pseudomagnitudes (V,J), (V,H) and (V,Ks) of dwarf stars, and the centroid of the distance distribution of our Pleiades stars is calculated and discussed in section~\ref{sec:pleiades}.
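The reddening-free property can be made explicit with a short calculation (the notation below is ours, sketching the construction rather than reproducing the exact definition of \citealt{Chelli16}). If band $i$ has interstellar extinction coefficient $c_i = A_i/A_V$, so that the apparent magnitude is $m_i = M_i + \mu + c_i A_V$ with $\mu$ the distance modulus, then the linear combination

```latex
% Illustrative two-band pseudomagnitude (notation ours):
\begin{equation}
  pm_{ij} \;\equiv\; \frac{c_j\, m_i - c_i\, m_j}{c_j - c_i}
          \;=\; \frac{c_j M_i - c_i M_j}{c_j - c_i} \;+\; \mu ,
\end{equation}
```

is independent of $A_V$, since the extinction terms enter as $(c_j c_i - c_i c_j)\,A_V/(c_j - c_i) = 0$. The distance modulus $\mu$ survives the combination, which is why a pseudomagnitude, together with the corresponding absolute pseudomagnitude, yields a distance estimate unaffected by reddening.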
Pseudomagnitudes are remarkable distance indicators, since they are free of interstellar reddening effects. We have calculated the mean absolute pseudomagnitudes of field dwarfs from O9 to M4, based on the Hipparcos parallax measurements of approximately 6000 stars, allowing us to estimate the distance of 360 Pleiades stars. We position the centroid of these stars at $139.0\pm1.2$\,pc, and we confirm that the Pleiades stellar distances measured by Hipparcos are on average underestimated by 10\%. \begin{table} \tiny \begin{tabular}{llll} \hline\T HII & SpT & PMD (pc) & VLBI distance (pc) $^{(1)}$ \B\\ \hline \hline \T 75&G7 &136.2 (3.6)& \\ 253&G1 &143.7 (2.1)& \\ 625&G5 &137.0 (2.4)&138.4 (1.1) \\ 1136&G7 &141.0 (3.3)&135.5 (0.6) \\ 1883&K2 &139.0 (1.4)& \\ 2244&K2 &145.1 (2.1)&\B\\ \hline\B \end{tabular} \caption{\tiny Pseudomagnitude distance (PMD) of 6 Pleiades stars from the \cite{Melis13} list. (1) \cite{Melis14}} \label{tab:melis_distances} \end{table} ESA's recently launched GAIA mission will make it possible to accurately determine the fine structure of absolute pseudomagnitudes, their natural width, and the influence of various parameters such as age and metallicity. It will be possible to calibrate these very accurately, in several different optical bands. But already, our initial results obtained with the Pleiades cluster, together with their comparison with VLBI measurements, are very encouraging. This technique is purely observational, direct and simple to implement, since it requires only the spectral type, two magnitudes and the corresponding absolute pseudomagnitude.
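The conversion between the quoted distance modulus ($5.715 \pm 0.018$) and the distance ($139.0 \pm 1.2$\,pc) is the standard relation $d = 10^{1 + \mu/5}$\,pc; as a quick check:

```python
def modulus_to_distance_pc(mu):
    """Distance [pc] from a distance modulus mu = 5 log10(d / 10 pc)."""
    return 10.0 ** (1.0 + mu / 5.0)

# Centroid of the Pleiades distribution quoted in the paper:
print(f"{modulus_to_distance_pc(5.715):.1f} pc")  # ~139 pc
```

Propagating the $\pm 0.018$\,mag uncertainty through the same relation reproduces the quoted $\pm 1.2$\,pc.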
16
7
1607.06378
1607
1607.03900_arXiv.txt
We report the discovery and analysis of the most metal-poor damped Lyman-$\alpha$ (DLA) system currently known, which also displays the Lyman series absorption lines of neutral deuterium. The average [O/H] abundance of this system is [O/H]~$= -2.804\pm0.015$, which includes an absorption component with [O/H]~$= -3.07\pm0.03$. Despite the unfortunate blending of many weak \DI\ absorption lines, we report a precise measurement of the deuterium abundance of this system. Using the six highest quality and self-consistently analyzed measures of D/H in DLAs, we report tentative evidence for a subtle decrease of D/H with increasing metallicity. This trend must be confirmed with future high precision D/H measurements spanning a range of metallicity. A weighted mean of these six independent measures provides our best estimate of the primordial abundance of deuterium, $10^{5}\,({\rm D/H})_{\rm P} = 2.547\pm0.033$ ($\log_{10} {\rm (D/H)_P} = -4.5940 \pm 0.0056$). We perform a series of detailed Monte Carlo calculations of Big Bang nucleosynthesis (BBN) that incorporate the latest determinations of several key nuclear cross sections, and propagate their associated uncertainty. Combining our measurement of (D/H)$_{\rm P}$ with these BBN calculations yields an estimate of the cosmic baryon density, $100\,\Omega_{\rm B,0}\,h^{2}({\rm BBN}) = 2.156\pm0.020$, if we adopt the most recent theoretical determination of the $d(p,\gamma)^3\mathrm{He}$ reaction rate. This measure of \obhh\ differs by $\sim2.3\sigma$ from the Standard Model value estimated from the \textit{Planck} observations of the cosmic microwave background. Using instead a $d(p,\gamma)^3\mathrm{He}$ reaction rate that is based on the best available experimental cross section data, we estimate $100\,\Omega_{\rm B,0}\,h^{2}({\rm BBN}) = 2.260\pm0.034$, which is in somewhat better agreement with the \textit{Planck} value. 
Forthcoming measurements of the crucial $d(p,\gamma)^3\mathrm{He}$ cross section may shed further light on this discrepancy.
\setcounter{footnote}{8} Moments after the Big Bang, a brief period of nucleosynthesis created the first elements and their isotopes \citep{HoyTay64,Pee66,WagFowHoy67}, including hydrogen (H), deuterium (D), helium-3 ($^{3}$He), helium-4 ($^{4}$He), and a small amount of lithium-7 ($^{7}$Li). The creation of these elements, commonly referred to as Big Bang nucleosynthesis (BBN), was concluded in $\lesssim15$ minutes and currently offers our earliest reliable probe of cosmology and particle physics (for a review, see \citealt{Ste07,Ioc09,Ste12,Cyb15}). The amount of each primordial nuclide that was made during BBN depends most sensitively on the expansion rate of the Universe and the number density ratio of baryons-to-photons. Assuming the Standard Model of cosmology and particle physics, the expansion rate of the Universe during BBN is driven by photons, electrons, positrons, and 3 neutrino families. Furthermore, within the framework of the Standard Model, the baryon-to-photon ratio at the time of BBN (i.e. minutes after the Big Bang) is identical to the baryon-to-photon ratio at recombination ($\sim400\,000$ years after the Big Bang). Thus, the abundances of the primordial nuclides for the Standard Model can be estimated from observations of the Cosmic Microwave Background (CMB) radiation, which was recently recorded with exquisite precision by the \textit{Planck} satellite \citep{Efs15}. Using the \textit{Planck} CMB observations\footnote{The primordial abundances listed here use the TT+lowP+lensing measure of the baryon density, $100\,\Omega_{\rm B,0}\,h^{2}({\rm CMB})=2.226\pm0.023$, (i.e. 
the second data column of Table~4 from \citealt{Efs15}).}, the predicted Standard Model abundances of the primordial elements are (68 per cent confidence limits; see Section~\ref{sec:dh}): \begin{eqnarray} Y_{\rm P}&=&0.2471\pm0.0005\nonumber\\ 10^{5}\,({\rm D/H})_{\rm P}&=&2.414\pm0.047\nonumber\\ 10^{5}\,({\rm ^{3}He/H})_{\rm P}&=&1.110\pm0.022\nonumber\\ A(^{7}{\rm Li/H})_{\rm P}&=&2.745\pm0.021\nonumber \end{eqnarray} where $Y_{\rm P}$ is the fraction of baryons consisting of $^{4}$He, $A(^{7}{\rm Li/H})_{\rm P}\equiv\log_{10}(^{7}{\rm Li/H})_{\rm P}+12$, and D/H, $^{3}$He/H and $^{7}$Li/H are the number abundance ratios of deuterium, helium-3 and lithium-7 relative to hydrogen, respectively. To test the Standard Model, the above predictions are usually compared to direct observational measurements of these abundances in near-primordial environments. High precision measures of the primordial $^{4}$He mass fraction are obtained from low metallicity \HII\ regions in nearby star-forming galaxies. Two analyses of the latest measurements, including an infrared transition that was not previously used, find $Y_{\rm P}~=~0.2551\pm0.0022$ \citep{IzoThuGus14}, and $Y_{\rm P}~=~0.2449\pm0.0040$ \citep{AveOliSki15}. These are mutually inconsistent, presumably due to some underlying difference between the analysis methods. The primordial $^{7}{\rm Li/H}$ ratio is deduced from the most metal-poor stars in the halo of the Milky Way. The latest determination \citep{Asp06,Aok09,Mel10,Sbo10,Spi15}, $A(^{7}{\rm Li})=2.199\pm0.086$, implies a $\gtrsim6\sigma$ deviation from the Standard Model value (see \citealt{Fie11} for a review). The source of this discrepancy is currently unknown. The abundance of $^{3}$He has only been measured in Milky Way \HII\ regions \citep{BanRooBal02} and in solar system meteorite samples \citep{BusBauWie00,BusBauWie01}. At this time, it is unclear if these measures are representative of the primordial value. 
However, there is a possibility that $^{3}$He might be detected in emission from nearby, quiescent metal-poor \HII\ regions with future, planned telescope facilities \citep{Coo15}. The primordial abundance of deuterium, \dhp, can be estimated using quasar absorption line systems \citep{Ada76}, which are clouds of gas that absorb the light from an unrelated background quasar. In rare, quiescent clouds of gas the $-82~{\rm km~s}^{-1}$ isotope shift of D relative to H can be resolved, allowing a measurement of the column density ratio \DI/\HI. The most reliable measures of \dhp\ come from near-pristine damped Lyman-$\alpha$ systems (DLAs). As discussed in \citet{PetCoo12a} and \citet{Coo14}, metal-poor DLAs exhibit the following properties that facilitate a high precision and reliable determination of the primordial deuterium abundance: (1) The Lorentzian damped \Lya\ absorption line uniquely determines the total column density of neutral H atoms along the line-of-sight. (2) The array of weak, high order \DI\ absorption lines depend only on the total column density of neutral D atoms along the line-of-sight. Provided that these absorption lines fall on the linear regime of the curve-of-growth, the derived $N$(\DI) should not depend on the gas kinematics or the instrument resolution. In addition, the assumption that D/H=\DI/\HI\ is justified in these systems; the ionization correction is expected to be $\lesssim0.1$~per~cent \citep{Sav02,CooPet16}. Furthermore, galactic chemical evolution models suggest that most of the deuterium atoms in these almost pristine systems are yet to be cycled through many generations of stars; the correction for astration (i.e. the processing of gas through stars) is therefore negligible (see the comprehensive list of references provided by \citealt{Cyb15,Dvo16}). 
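The $-82~{\rm km~s}^{-1}$ isotope shift quoted above follows directly from the reduced-mass scaling of hydrogenic energy levels. A quick numerical check (using standard mass ratios, which are not values taken from the paper):

```python
# H I and D I transition frequencies scale with the reduced mass
# mu = m_e * M / (m_e + M), so the D lines are displaced by
# |v|/c = mu_D / mu_H - 1 relative to H (blueward, since mu_D > mu_H).
C_KM_S = 2.99792458e5   # speed of light [km/s]
M_P = 1836.15267        # proton mass in electron masses
M_D = 3670.48297        # deuteron mass in electron masses

mu_H = M_P / (1.0 + M_P)
mu_D = M_D / (1.0 + M_D)
v_shift_km_s = C_KM_S * (mu_D / mu_H - 1.0)
print(f"D I shift magnitude: {v_shift_km_s:.1f} km/s")  # ~81.6 km/s blueward
```

The resulting $\approx 81.6$ km s$^{-1}$ blueward displacement is the $-82$ km s$^{-1}$ velocity offset at which the \DI\ absorption is sought.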
Using a sample of 5 quasar absorption line systems that satisfy a set of strict criteria, \citet{Coo14} recently estimated that the primordial abundance of deuterium is log$_{10}$\,\dhp~=~$-4.597\pm0.006$, or expressed as a linear quantity, $10^{5}\,({\rm D/H})_{\rm P} = 2.53\pm0.04$. These 5 systems exhibit a D/H plateau over at least a factor of $\sim10$ in metallicity, and this plateau was found to be in good agreement with the expected value for the cosmological model derived by \textit{Planck} assuming the Standard Model of particle physics. In this paper, we build on this work and present a new determination of the primordial abundance of deuterium obtained from the lowest metallicity DLA currently known. In Section~\ref{sec:obs}, we present the details of our observations and data reduction procedures. Our data analysis is almost identical to that described in \citet{Coo14}, and we provide a summary of this procedure in Section~\ref{sec:analysis}. In Section~\ref{sec:chemcomp}, we report the chemical composition of this near-pristine DLA. In Section~\ref{sec:dh}, we present new calculations of BBN that incorporate the latest nuclear cross sections, discuss the main results of our analysis, and highlight the cosmological implications of our findings. We summarize our conclusions in Section~\ref{sec:conc}.
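The combination of independent D/H measurements into a single primordial value can be sketched with a standard inverse-variance weighted mean (assumed here as the estimator; the input numbers below are hypothetical, purely for illustration):

```python
import math

def weighted_mean(values, errors):
    # Inverse-variance weighted mean and its 1-sigma uncertainty --
    # the standard way to combine independent measurements such as
    # log10(D/H) from individual absorption systems.
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = math.sqrt(1.0 / sum(weights))
    return mean, err

# Hypothetical log10(D/H) measurements with 1-sigma errors:
m, e = weighted_mean([-4.60, -4.59, -4.58], [0.01, 0.02, 0.01])
```

Note that the more precise measurements dominate the mean, so a single high-quality system can carry much of the weight of the final primordial value.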
\label{sec:conc} Several probes of cosmology have now pinned down the content of the Universe with exquisite detail. In this paper, we build on our previous work to obtain precise measurements of the primordial deuterium abundance by presenting high quality spectra of a DLA at $z_{\rm abs}=2.852054$ towards the quasar J1358$+$0349, taken with both the UVES and HIRES instruments. Our primary conclusions are as follows:\\ \noindent ~~(i) The absorption system reported here is the most metal-poor DLA currently known, with an average oxygen abundance [O/H]~$= -2.804\pm0.015$. Furthermore, in one of the absorption components, we estimate [O/H]~$= -3.07\pm0.03$. This environment is therefore ideally suited to estimate the primordial abundance of deuterium. On the other hand, we have found an unusual amount of unrelated absorption that contaminates many of the weak, high order, \DI\ absorption lines. Consequently, the accuracy in the determination of the D/H ratio achieved for this system is not as high as the best cases reported by \citet[][J1419$+$0829]{PetCoo12a} and \citet[][J1358$+$6522]{Coo14}, see Table~\ref{tab:dhmeasures}. \smallskip \noindent ~~(ii) Using an identical analysis strategy to that described in \citet{Coo14}, we measure a D/H abundance of $\log_{10}\,({\rm D\,\textsc{i}/H\,\textsc{i}}) = -4.582\pm0.012$ for this near-pristine DLA. We estimate that this abundance ratio should be adjusted by $(-4.9\pm1.0)\times10^{-4}$~dex to account for \DII\ charge transfer recombination with \HI. This ionization correction is a factor of $\sim25$ less than the D/H measurement precision of this system, and confirms that ${\rm D\,\textsc{i}/H\,\textsc{i}}\cong{\rm D/H}$ in DLAs. \smallskip \noindent ~~(iii) On the basis of six high precision and self-consistently analyzed D/H abundance measurements, we report tentative evidence for a decrease of the D/H abundance with increasing metallicity. 
If confirmed, this modest decrease of the D/H ratio could provide an important opportunity to study the chemical evolution of deuterium in near-pristine environments. \smallskip \noindent ~~(iv) A weighted mean of these six independent D/H measures leads to our best estimate of the primordial D/H abundance, $\log_{10}\,({\rm D/H})_{\rm P} = -4.5940\pm0.0056$. We combine this new determination of (D/H)$_{\rm P}$ with a suite of detailed Monte Carlo BBN calculations. These calculations include updates to several key cross sections, and propagate the uncertainties of the experimental and theoretical reaction rates. We deduce a value of the cosmic baryon density $100\,\Omega_{\rm B,0}\,h^{2}({\rm BBN}) = 2.156\pm0.017\pm0.011$, where the first error term represents the D/H measurement uncertainty and the second error term includes the uncertainty of the BBN calculations. \smallskip \noindent ~~(v) The above estimate of \obhh(BBN) is comparable in precision to the recent determination of \obhh\ from the CMB temperature fluctuations recorded by the \textit{Planck} satellite. However, using the best available BBN reaction rates, we find a $2.3\sigma$ difference between \obhh(BBN) and \obhh(CMB), assuming the Standard Model value for the effective number of neutrino species, $N_{\rm eff}=3.046$. Allowing \neff\ to vary, the disagreement between BBN and the CMB can be reduced to the $1.5\sigma$ significance level, resulting in a bound on the effective number of neutrino families, $N_{\rm eff} = 3.44\pm0.45$. \smallskip \noindent ~~(vi) By replacing the theoretical $d(p,\gamma)^{3}{\rm He}$ cross section with the current best empirical estimate, we derive a baryon density $100\,\Omega_{\rm B,0}\,h^{2}({\rm BBN}) = 2.260\pm0.034$, which agrees with the \textit{Planck} baryon density for the Standard Model. However, this agreement is partly due to the larger error estimate for the nuclear data. 
Forthcoming experimental measurements of the crucial $d(p,\gamma)^{3}{\rm He}$ reaction rate by the LUNA collaboration will provide important additional information regarding this discrepancy, since the empirical rate currently rests mainly on a single experiment, and absolute cross sections often turn out in hindsight to have underestimated errors. The theory of few-body nuclear systems is now precise enough that a resolution in favor of the current empirical rate would present a serious problem for nuclear physics. \smallskip Our study highlights the importance of expanding the present small statistics of high precision D/H measurements, in combination with new efforts to achieve high precision in the nuclear inputs to BBN. We believe that precise measurements of the primordial D/H abundance should be considered an important goal for the future generation of echelle spectrographs on large telescopes, optimized for wavelengths down to the atmospheric cutoff. This point is discussed further in Appendix~\ref{app:future}.
16
7
1607.03900
1607
1607.00524_arXiv.txt
We present \texttt{EVEREST}, an open-source pipeline for removing instrumental noise from \emph{K2} light curves. \texttt{EVEREST} employs a variant of pixel level decorrelation (PLD) to remove systematics introduced by the spacecraft's pointing error and a Gaussian process (GP) to capture astrophysical variability. We apply \texttt{EVEREST} to all \emph{K2} targets in campaigns 0-7, yielding light curves with precision comparable to that of the original \emph{Kepler} mission for stars brighter than $K_p \approx 13$, and within a factor of two of the \emph{Kepler} precision for fainter targets. We perform cross-validation and transit injection and recovery tests to validate the pipeline, and compare our light curves to the other de-trended light curves available for download at the MAST High Level Science Products archive. We find that \texttt{EVEREST} achieves the highest average precision of any of these pipelines for unsaturated \emph{K2} stars. The improved precision of these light curves will aid in exoplanet detection and characterization, investigations of stellar variability, asteroseismology, and other photometric studies. The \texttt{EVEREST} pipeline can also easily be applied to future surveys, such as the \emph{TESS} mission, to correct for instrumental systematics and enable the detection of low signal-to-noise transiting exoplanets. The \texttt{EVEREST} light curves and the source code used to generate them are freely available online.
\label{sec:intro} Launched in 2009, the \emph{Kepler} spacecraft has to date led to the discovery of nearly 5000 extrasolar planet candidates and to a revolution in several fields of astronomy including but not limited to exoplanet science, eclipsing binary characterization, asteroseismology and stellar variability studies. Its unprecedented photometric precision allowed for the study of astrophysical signals down to the level of $\sim 15$ parts per million \citep{GIL11}, which has enabled the discovery of small planets in the habitable zones of their stars \citep[e.g.,][]{BOR13,QUI14,TOR15}. Unfortunately, after the failure of its second reaction wheel in May 2013, the spacecraft was no longer able to achieve the fine pointing accuracy required for high precision photometry, and the nominal mission was brought to an end. Engineering tests suggested that by aligning the spacecraft along the plane of the ecliptic, pointing drift could be mitigated by the solar wind pressure and by periodic thruster firings. As of May 2014 the spacecraft has been operating in this new mode, known as \emph{K2}, and has continued to enable high precision photometry science, monitoring tens of thousands of stars near the plane of the ecliptic during campaigns lasting about 75 days each \citep{HOW14}. However, because of the reduced pointing accuracy, \emph{K2} raw aperture photometry is between 3 and 4 times less precise than that of the original \emph{Kepler} mission and displays strong instrumental artefacts with different timescales, including a $\sim$ 6 hour trend, which severely compromise its ability to detect small transits. Recently, several authors have developed powerful methods to correct for these systematics, often coming within a factor of $\sim$ 2 of the original \emph{Kepler} precision. In particular, the \texttt{K2SFF} pipeline \citep{VJ14} decorrelates \emph{K2} aperture photometry with the centroid position of the stellar images. 
Centroids are determined based either on the center-of-light or via a Gaussian fit to the stellar PSF. The motion of the centroids is then fit with a polynomial and transformed into a single parameter that relates spacecraft motion to flux variations, which is then used to de-trend the data. Similar methods are employed in the \texttt{K2VARCAT} pipeline \citep{ARM15}, developed specifically for variable \emph{K2} stars, the \texttt{K2P$^2$} pipeline \citep{LUN15}, which uses an intelligent clustering algorithm to define custom apertures, and in the pipeline of \cite{HUA15}, which employs astrometric solutions to the motion of \emph{K2} targets, determining the $X$ and $Y$ motion of a target from the behavior of multiple stars on the same spacecraft module. Finally, the \texttt{K2SC} pipeline \citep{AIG15,AIG16} and the pipeline of \cite{CRO15} both employ a Gaussian process (GP) to remove instrumental noise, using the $X$ and $Y$ coordinates of the target star as the regressors to derive a model for the instrumental systematics. The nonparametric nature of the GP results in a flexible model with increased de-trending power especially for dim \emph{K2} targets. In one way or another, all of these techniques rely on numerical methods to identify and remove correlations between the stellar position and the intensity fluctuations. Even when a nonparametric technique such as a GP is used, assumptions are still made about the nature of the correlations between spacecraft motion and instrumental variability. Moreover, the process of determining the stellar centroids is prone to uncertainties and relies on assumptions about the shape of the stellar PSF. A powerful alternative to these methods is pixel level decorrelation (PLD), a method developed by \cite{DEM15} to correct for systematics in \emph{Spitzer} observations of transiting hot Jupiters. 
The central tenet of PLD is that correcting for noise introduced by the motion of the stellar image need not involve actual measurements of the position of the star. The centroid of the stellar image is, after all, a secondary data product of photometry, and is subject to additional uncertainty. PLD skips these two numerical steps (i.e., fitting for the stellar position and solving for the correlations) by operating on the \emph{primary} data products of photometry: the intensities in each of the detector pixels. These intensities are normalized by the total flux in the chosen aperture, then used as basis vectors for a linear least-squares (LLS) fit to the aperture-summed flux. Since astrophysical signals (stellar variability, planet transits, stellar eclipses, etc.) are present in all of the pixels in the aperture, the normalization step removes astrophysical information from the basis set, ensuring that PLD is sensitive only to the signals that are \emph{different} across the aperture. PLD is therefore an ``agnostic'' method of performing robust flat-fielding corrections with minimal assumptions about either the nature of the intra-pixel variability or the correlation between spacecraft jitter and intensity fluctuations. We note that our method is similar to that of \cite{DFM15} and \cite{MON15}, who use the principal components of the variability among the full set of \emph{K2} campaign 1 light curves as ``eigen light curve'' regressors. However, rather than deriving our basis vectors from other stars in the field, whose light curves contain undesired astrophysical signals, our basis vectors are derived solely from the pixels of the star under consideration. In this paper, we build on the PLD method of \cite{DEM15}, extending it to higher order in the pixel fluxes and performing principal component analysis (PCA) on the basis vectors to limit the flexibility of the model and thus prevent overfitting. 
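A minimal first-order sketch of this idea (illustrative only; the actual pipeline is third order, with PCA and a Gaussian process on top) might look like:

```python
import numpy as np

def first_order_pld(pixel_fluxes):
    """First-order PLD sketch, after Deming et al. (2015): regress the
    aperture-summed flux on the flux-normalized pixel time series and
    subtract the best-fit systematics model.
    pixel_fluxes: array of shape (n_cadences, n_pixels)."""
    sap_flux = pixel_fluxes.sum(axis=1)        # aperture-summed light curve
    basis = pixel_fluxes / sap_flux[:, None]   # normalized pixel regressors
    coeffs, *_ = np.linalg.lstsq(basis, sap_flux, rcond=None)
    model = basis @ coeffs                     # instrumental systematics model
    return sap_flux - model + sap_flux.mean()  # de-trended light curve
```

Because the regressors are normalized by the summed flux, any signal common to all pixels (a transit, stellar variability) drops out of the basis, so only pixel-to-pixel differences, i.e. pointing-induced systematics, are fit and removed.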
We further couple PLD with a GP to disentangle astrophysical and instrumental variability. We apply our pipeline, \texttt{EVEREST} (\emph{EPIC Variability Extraction and Removal for Exoplanet Science Targets}), to the entire set of \emph{K2} light curves from campaigns 0-7 and generate a publicly-available database of processed light curves. Our code is open-source and will be made available online. The paper is organized as follows: in \S\ref{sec:pld} we review the basics of PLD and derive our third-order PLD/PCA/GP model, and in \S\ref{sec:methods} we describe our pipeline in detail. Results are presented in \S\ref{sec:results}, and in \S\ref{sec:conclusions} we conclude and outline plans for future work.
\label{sec:conclusions} In this paper, we introduced \texttt{EVEREST}, a pipeline developed to yield the highest precision light curves for \emph{K2} stars. \texttt{EVEREST} builds on the pixel level decorrelation (PLD) technique of \cite{DEM15}, extending it to third order in the pixel fluxes and combining it with principal component analysis to yield a set of basis vectors that together capture most of the instrumental variability in the data. Gaussian process (GP) regression is then performed to remove the instrumental systematics while preserving astrophysical signals. In order to prevent overfitting, we developed a method to determine the ideal number of principal components to use in the fit, yielding reliable, high precision de-trended light curves for all \emph{K2} campaigns to date. We validated our model by performing transit injection and recovery tests and showed that when transits were properly masked by our iterative sigma-clipping technique, we recovered the correct depths without any bias. When transits were not masked (the case of many low signal-to-noise transits, which are missed by our outlier detection step), PLD de-trending resulted in somewhat shallower transits by $\lesssim 10\%$. We therefore strongly encourage those making use of our light curves for transiting exoplanet characterization to run \texttt{EVEREST} while explicitly masking these transits. The same applies to those using our light curves for transiting planet searches: once features of interest are detected, one should run \texttt{EVEREST} again with those features masked to obtain unbiased estimates of the transit parameters. Our code is implemented in Python, is user-friendly, and takes only a few seconds to run for a given target. 
While the decreased transit depth can in principle preclude the detection of very low signal-to-noise planets in the \texttt{EVEREST} light curves, we find that the increased precision of these light curves relative to other pipelines is sufficient to enable the detection of previously unknown, small transiting planets around \emph{K2} host stars \citep{KRU16}. Since \texttt{EVEREST} preserves stellar signals, these light curves should also greatly aid in stellar variability and asteroseismology studies. For stars brighter than $K_p \approx 13$, we found that \texttt{EVEREST} recovers the photometric precision of the original \emph{Kepler} mission; for fainter stars, the median precision is within a factor of 2 of that of the original mission. We further compared our de-trended light curves to those produced by the other \emph{K2} pipelines available at the MAST HLSP \emph{K2} archive. \texttt{EVEREST} light curves have systematically higher precision than \texttt{K2SFF}, \texttt{K2VARCAT} and \texttt{K2SC} for all Kepler magnitudes $K_p > 11$. Currently, \texttt{EVEREST} performs poorly for saturated targets and for those in highly crowded fields. Our catalog of de-trended light curves is publicly available at \texttt{\url{https://archive.stsci.edu/prepds/everest/}} and will be constantly updated for new \emph{K2} campaigns. The code used to de-trend the light curves is open source under the MIT license and is available at \texttt{\url{https://github.com/rodluger/everest}}, along with user-friendly routines for downloading and interacting with the de-trended light curves. A static release of version \texttt{1.0} of the code is also available at \texttt{\url{http://dx.doi.org/10.5281/zenodo.56577}}. 
Since the only inputs to \texttt{EVEREST} are the pixel level light curves, the techniques developed here can be generally applied to light curves produced by any photometry mission, including the upcoming \emph{TESS} mission \citep{RIC15}, to remove instrumental noise and enable the detection of small transiting planets.
16
7
1607.00524
1607
1607.08897_arXiv.txt
Recent direct numerical simulations (DNS) of large-scale turbulent dynamos in strongly stratified layers have resulted in surprisingly sharp bipolar structures at the surface. Here we present new DNS of helically and non-helically forced turbulence with and without rotation and compare with corresponding mean-field simulations (MFS) to show that these structures are a generic outcome of a broader class of dynamos in density-stratified layers. The MFS agree qualitatively with the DNS, but the period of oscillations tends to be longer in the DNS. In both DNS and MFS, the sharp structures are produced by converging flows at the surface and might be driven in the nonlinear stage of evolution by the Lorentz force associated with the large-scale dynamo-driven magnetic field, provided the dynamo number is at least 2.5 times supercritical.
Active regions appear at the solar surface as bipolar patches with a sharply defined polarity inversion line in between. Bipolar magnetic structures are generally associated with buoyant magnetic flux tubes that are believed to pierce the surface \citep{Par55}. Furthermore, \cite{Par75} proposed that only near the bottom of the convection zone can the large-scale field evade magnetic buoyancy losses over time scales comparable with the length of the solar cycle. This led many authors to study the evolution of magnetic flux tubes rising from deep within the convection zone to the surface \citep{Caligari,Fan01,Fan08,JB09}. Shortly before flux emergence, however, the rising flux tube scenario would predict flow speeds that exceed helioseismically observed limits \citep{Birch16}. Moreover, the magnetic field expands and weakens significantly during its buoyant ascent. Therefore, some type of reamplification of magnetic field structures near the surface appears to be necessary. The negative effective magnetic pressure instability (NEMPI) may be one such mechanism of field reamplification. It has been intensively studied both analytically \citep{KRR89,KRR90,KMR93,KMR96,KR94,RK07} and numerically using direct numerical simulations (DNS) and mean-field simulations (MFS) \citep{BKR10,BKKMR11,BKKR12,BKR13,BGJKR14}; see also the recent review by \cite{BRK16}. The reamplification mechanism of magnetic structures in DNS has been studied in non-helical forced turbulence \citep{BKKMR11,BKR13,BGJKR14,WLBKR13,WLBKR16} and in turbulent convection \citep{KBKMR12,KBKKR16} with imposed weak horizontal or vertical magnetic fields. However, NEMPI seems to work only when the magnetic field is not too strong, i.e., when the magnetic energy density is less than the turbulent kinetic energy density. The formation of magnetic structures from a dynamo-generated field has recently been studied for forced turbulence \citep{MBKR14,Jabbari14,Jabbari15,Jabbari16} and in turbulent convection \citep{MS16}. 
In particular, simulations by \cite{MBKR14} have shown that much stronger magnetic structures can occur at the surface when the field is generated by a large-scale $\alpha^2$ dynamo in forced helical turbulence. Subsequent work by \cite{Jabbari16} suggests that bipolar surface structures are kept strongly concentrated by converging flow patterns which, in turn, are produced by a strong magnetic field through the Lorentz force. This raises the question of what kind of nonlinear interactions take place when a turbulent dynamo operates in a density-stratified layer. To investigate this problem in more detail, we study here the dynamics of magnetic structures both in DNS and MFS in similar parameter regimes. The original work of \cite{MBKR14} employed a two-layer system, where the turbulence is helical only in the lower part of the system, while in the upper part it is nonhelical. Such two-layer forced turbulence was also studied in spherical geometry \citep{Jabbari15}, where it was shown that several bipolar structures form, which later expand and create a band-like arrangement. The two-layer system allowed us to separate the dynamo effect in the lower layer from the formation of intense bipolar structures in the upper layer. The formation of flux concentrations from a dynamo-generated magnetic field in spherical geometry was also investigated with MFS \citep{Jabbari13}, where NEMPI was considered as the mechanism creating flux concentrations. Models similar to those of \cite{MBKR14} have also been studied by \cite{Jabbari16}, who showed that a two-layer setup is not necessary and that even a single layer with helical forcing leads to the formation of intense bipolar structures. This simplifies matters, and such systems will therefore be studied here in more detail before addressing MFS of corresponding $\alpha^2$ dynamos. 
In the earlier work of \cite{MBKR14} and \cite{Jabbari16}, no conclusive explanation for the occurrence of bipolar structures with a sharp boundary was presented. We use both DNS and MFS to understand the mechanism behind the nonlinear interactions resulting in the complex dynamics of sharp bipolar spots. One of the key features of such dynamics is the long lifetime of the sharp bipolar spots, which tend to persist for several turbulent diffusion times. It has been shown by \cite{Jabbari16} that the long-term existence of these sharp magnetic structures is accompanied by the phenomenon of turbulent magnetic reconnection in the vicinity of current sheets between opposite magnetic polarities. The measured reconnection rate was found to be nearly independent of magnetic diffusivity and Lundquist number. In this work, we study the formation and dynamics of sharp magnetic structures both in one-layer DNS and in corresponding MFS. We begin by discussing the model and the underlying equations both for the DNS and the MFS (\Sec{TheModel}), and then present the results (\Sec{Results}), where we focus on the comparison between DNS and MFS. In the DNS, the dynamo is driven either directly by helically forced turbulence or indirectly by nonhelically forced turbulence that becomes helical through the combined effects of stratification and rotation, as will be discussed at the end of \Sec{Results}. We conclude in \Sec{Conclusions}.
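For orientation, a textbook result (not a derivation from this paper): in homogeneous mean-field theory, an $\alpha^2$ dynamo mode of wavenumber $k$ grows at the rate $\lambda(k)=|\alpha|k-\eta_{\rm t}k^2$, so excitation requires the dynamo number $C_\alpha=\alpha/(\eta_{\rm t}k_1)$ to exceed unity for the smallest wavenumber $k_1$ of the domain. This is the sense in which a dynamo can be, say, 2.5 times supercritical. A one-line sketch:

```python
def alpha2_growth_rate(alpha, eta_t, k):
    # Dispersion relation of the homogeneous alpha^2 mean-field dynamo:
    # lambda(k) = |alpha| k - eta_t k^2 (textbook form; the stratified
    # setups studied in the paper modify the details).
    return abs(alpha) * k - eta_t * k ** 2

# Marginal excitation at C_alpha = alpha / (eta_t * k1) = 1:
assert alpha2_growth_rate(1.0, 1.0, 1.0) == 0.0
# A 2.5 times supercritical dynamo (C_alpha = 2.5) grows:
assert alpha2_growth_rate(2.5, 1.0, 1.0) > 0.0
```

The fastest-growing mode sits at $k=|\alpha|/(2\eta_{\rm t})$, with growth rate $\alpha^2/(4\eta_{\rm t})$, which sets the characteristic scale of the large-scale field in this idealized picture.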
\label{Conclusions} In this work we have compared DNS of helically forced turbulence in a strongly stratified layer with corresponding MFS. Compared with earlier DNS \citep{MBKR14,Jabbari16}, we have considered here a one-layer model and have shown that this simpler case also leads to the formation of sharp bipolar structures at the surface. Larger values of $\Rm$ result in more complex spatio-temporal behavior, while rotation (with $\Co\la1$) and the scale separation ratio have only minor effects. Both aspects confirm similar findings for our earlier two-layer model. The results of our MFS are generally in good qualitative agreement with the DNS. The MFS without parameterized NEMPI (i.e., neglecting the turbulence effects on the Lorentz force) demonstrate that the formation of sharp structures at the surface occurs predominantly due to the nonlinear effects associated with the mean Lorentz force of the dynamo-generated magnetic field, provided the dynamo number is at least 2.5 times supercritical. This results in converging flow structures and downdrafts in equivalent locations both in DNS and MFS. For smaller dynamo numbers, when the field strength is below equipartition, NEMPI can operate and form bipolar regions, as was shown in earlier DNS \citep{WLBKR13,WLBKR16}. Comparing MFS without and with inclusion of the parameterization of NEMPI by replacing the mean Lorentz force with the effective Lorentz force in the Navier-Stokes equation, we found that the formation of bipolar magnetic structures in the case of NEMPI is also accompanied by downdrafts, especially in the upper parts of the computational domain. In this connection, we recall that our system lacks the effects of thermal buoyancy, so our downdrafts are distinct from those in convection. In the Sun, both effects may contribute to driving convection, especially on the scale of supergranulation. 
However, from convection simulations with and without magnetic fields, no special features of magnetically driven downflows have been seen \citep{KBKKR16}. Finally, we have considered nonhelically forced turbulence, but now with sufficiently rapid rotation which, together with density stratification, leads to an $\alpha$ effect that is supercritical for the onset of dynamo action. Even in that case we find the formation of sharp bipolar structures. They begin to resemble the bipolar regions observed in the Sun. Thus, we may conclude that the appearance of bipolar structures at the solar surface may well be a generic feature of a large-scale dynamo operating some distance beneath the surface of a strongly stratified domain. As a next step, it will be important to consider more realistic modeling of the large-scale dynamo. This can be done in global spherical domains with differential rotation, which should lead to preferential east-west alignment of the bipolar structures. In addition, the effects of convectively driven turbulence would be important to include. This would automatically account for the possibility of thermally driven downflows, in addition to just magnetically driven flows. In principle, this has already been done in the many global dynamo simulations performed in recent years \citep{BMBBT11,KMB12,FF14,Hotta}, but in most of them the stratification was not yet strong enough and the resolution was insufficient to resolve small enough magnetic structures at the surface. The spontaneous formation of magnetic surface structures from a large-scale $\alpha^2$ dynamo by strongly stratified thermal convection in Cartesian geometry has recently also been studied by \cite{MS16}. They found that large-scale magnetic structures are formed at the surface only in cases with strong stratification.
However, in many other convection simulations, the scale separation between the integral scale of the turbulence and the size of the domain is not large enough for the formation of sharp magnetic structures. One may therefore hope that future simulations will not only be more realistic, but will also display surface phenomena that are closer to those observed in the Sun.
16
7
1607.08897
1607
1607.04288_arXiv.txt
We present a study of Very Degenerate Higgsino Dark Matter (DM), whose mass splitting between the lightest neutral and charged components is ${\cal O}(1)$ MeV, much smaller than the radiative splitting of 355 MeV. The scenario is realized in the minimal supersymmetric standard model by small gaugino mixing. In contrast to the pure Higgsino DM with the radiative splitting only, various observable signatures with distinct features are induced. First of all, the very small mass splitting (a) makes sizable Sommerfeld enhancement and Ramsauer-Townsend (RT) suppression relevant to $\sim$1 TeV Higgsino DM, and (b) makes the Sommerfeld-Ramsauer-Townsend effect saturate at lower velocities $v/c \lesssim 10^{-3}$. As a result, annihilation signals can be large enough to be observed from the galactic center and/or dwarf galaxies, while relative signal sizes can vary depending on the location of Sommerfeld peaks and RT dips. In addition, at collider experiments, stable chargino signatures can be searched for to probe the model in the future. The DM direct detection signal, however, depends on the Wino mass; no detectable signal may be induced if the Wino is heavier than about 10 TeV. \preprint{SLAC-PUB-16697, NSF-KITP-16-092}
The pure Higgsino (with the electroweak-radiative mass splitting $\Delta m=355$ MeV between its lightest neutral and charged components) is an attractive thermal dark matter (DM) candidate for a mass around 1 TeV~\cite{ArkaniHamed:2006mb}. As null results at Large Hadron Collider (LHC) experiments push supersymmetry (SUSY) to the TeV scale, such a Higgsino as the lightest supersymmetric particle (LSP) has recently become an important target for future collider~\cite{Low:2014cba, Acharya:2014pua, Gori:2014oua, Barducci:2015ffa, Badziak:2015qca, Bramante:2015una} and DM search experiments~\cite{Barducci:2015ffa, Badziak:2015qca, Bramante:2015una, Cirelli:2007xd, Chun:2012yt, Fan:2013faa, Chun:2015mka}. \emph{A priori}, the Higgsino mass $\mu$ and the gaugino masses $M_1, M_2$ of the Bino and Wino are not related; thus, the pure Higgsino scenario with much heavier gauginos is possible and natural, corresponding to the two distinct Peccei-Quinn and R-symmetric limits. It is, however, difficult to test the pure Higgsino LSP up to 1--2 TeV at collider experiments (including future 100 TeV options) and in dark matter detection experiments. Standard collider searches for jets plus missing energy are insensitive because of the small mass splitting of 355 MeV~\cite{Low:2014cba, Acharya:2014pua, Gori:2014oua}; at the same time, the splitting is large enough for charginos to decay promptly at colliders, so that disappearing track and stable chargino searches are not sensitive either~\cite{Low:2014cba, Thomas:1998wy}. The purity of the Higgsino states suppresses DM direct detection signals. DM indirect detection signals are not large enough because of relatively weak interactions and negligible Sommerfeld enhancements~\cite{Hisano:2003ec, Hisano:2004ds, Cirelli:2007xd, Chun:2012yt, Fan:2013faa, Chun:2015mka}.
In contrast, the pure Wino DM with the radiative mass splitting of 164 MeV, another thermal DM candidate for a mass of $\sim$3 TeV, can be tested in several ways: monojet plus missing energy due to more efficient recoil and a larger cross-section~\cite{Low:2014cba, Gori:2014oua, Cirelli:2014dsa, Bhattacherjee:2012ed}, disappearing tracks due to the longer-lived charged Wino~\cite{Low:2014cba, Cirelli:2014dsa, Bhattacherjee:2012ed}, and indirect detection due to somewhat stronger interactions and larger enhancements~\cite{Hisano:2004ds, Cirelli:2007xd, Chun:2012yt, Fan:2013faa, Cohen:2013ama, Chun:2015mka}. One of the key features of the Wino DM affecting all of these signals is the smaller mass splitting. It has been noticed that non-perturbative effects can be sizable for heavy electroweak dark matter annihilation, leading not only to the Sommerfeld enhancement~\cite{Hisano:2003ec, Hisano:2004ds} but also to the Ramsauer-Townsend (RT) suppression~\cite{Chun:2012yt, Chun:2015mka, Cirelli:2015bda, Garcia-Cely:2015dda}; both become more evident for smaller mass splitting (or equivalently heavier DM) and higher multiplets (or stronger electroweak interactions)~\cite{Chun:2012yt, Chun:2015mka}. The Higgsino-gaugino system, consisting of a weak singlet, doublet and triplet with variable mass splitting, provides a natural framework realizing drastic Sommerfeld-Ramsauer-Townsend (SRT) effects in dark matter annihilation. This motivates us to investigate the possibility of a very degenerate Higgsino DM whose mass splitting is much smaller than the electroweak-induced 355 MeV, realized in the limit $\mu \ll M_{1, 2}$ admitting slight gaugino mixtures. The Higgsino is more susceptible to nearby gauginos than the gaugino is to others, as heavier gaugino effects on the Higgsino decouple less quickly: their effects are captured by dimension-5 operators, while effects on gaugino DM are captured by dimension-6 operators~\cite{Fan:2013faa}.
This leads to a plausible situation in which heavier gauginos are almost decoupled, leaving some traces only in the Higgsino DM sector in spite of a large hierarchy between them. The Very Degenerate Higgsino DM turns out to produce distinct features in indirect detection signals from the galactic center (GC) and dwarf spheroidal satellite galaxies (DG), which can be observed in the near future. This paper is organized as follows. In Section 2, we look for the Higgsino-gaugino parameter space realizing the Very Degenerate Higgsino LSP. In Section 3, indirect signals of DM annihilation are studied to feature the SRT effect, which leads to distinct predictions for the GC and DG. In Section 4, we consider other constraints from direct detection, collider searches, and cosmology. We finally conclude in Section 5.
We have studied the Very Degenerate Higgsino DM model with ${\cal O}(1)$ MeV mass splitting, which is realized by small gaugino mixing and leads to dramatic non-perturbative effects. Owing to the very small mass splitting, SRT peaks and dips are present at around 1 TeV Higgsino mass, and velocity saturation of SRT effects is postponed to lower velocities $v/c \sim 10^{-3}$. As a result, indirect detection signals of $\sim 1$ TeV Higgsino DM can be significantly Sommerfeld-enhanced (constrained already, or observable in the near future) or even RT-suppressed. Annihilation cross-sections at the GC and DG are different in general: either of them can be larger than the other depending on the location of Sommerfeld peaks and RT dips. Another observable signature arises in stable chargino collider searches, which can probe the 1 TeV scale in the future. However, the rates of direct detection signals depend on the $M_2$ value (the smaller $M_2$, the larger the signal), so that $M_2 \sim 5$(10) TeV can(not) produce detectable signals. Because of the various unusual aspects of indirect detection signals at the DG and GC, well featured by our two benchmark models with $\Delta m=2$ and 10 MeV, future searches for, and interpretations of, Higgsino DM models should be done carefully. The Very Degenerate Higgsino DM also provides an example where ``slight'' gaugino mixing can have unexpectedly big impacts on the observation prospects of Higgsino DM. The mixing is slight in the sense that direct detection, whose leading contribution is induced by gaugino mixing, can still be small (for heavy enough Winos). At the same time, however, the phenomenology is unexpectedly interesting because such slight mixing can significantly change the indirect detection signal, which is already present in the zero-mixing limit and hence usually thought not to be sensitive to small mixing.
In all, \emph{nearly} pure Higgsino DM can have vastly different phenomena and discovery prospects from the pure Higgsino DM, and we hope that more complete studies will follow.
16
7
1607.04288
1607
1607.08773_arXiv.txt
We present the first results from a minute-cadence survey of a three square degree field obtained with the Dark Energy Camera. We imaged part of the Canada-France-Hawaii Telescope Legacy Survey area over eight half-nights. We use the stacked images to identify 111 high proper motion white dwarf candidates with $g\leq24.5$ mag and search for eclipse-like events and other sources of variability. We find a new $g=20.64$ mag pulsating ZZ Ceti star with pulsation periods of 11-13 min. However, we do not find any transiting planetary companions in the habitable zone of our target white dwarfs. Given the 1\% probability of eclipses and our observing window from the ground, the non-detection of such companions in this first field is not surprising. Minute-cadence DECam observations of additional fields will provide stringent constraints on the frequency of planets in the white dwarf habitable zone.
Transient surveys like the Palomar Transient Factory \citep{rau09}, Panoramic Survey Telescope \& Rapid Response System Medium Deep Fields \citep{kaiser10,tonry12}, Dark Energy Survey Supernova Fields \citep{flaugher05,bernstein12}, Sloan Digital Sky Survey Stripe 82 \citep{ivezic07}, and the Catalina surveys \citep{drake09}, as well as microlensing surveys like the Massive Compact Halo Objects project \citep{alcock00} and the Optical Gravitational Lensing Experiment \citep{udalski03}, have targeted large areas of the sky with hour to day cadences to identify variable objects like supernovae, novae, Active Galactic Nuclei, cataclysmic variables, eclipsing and contact binaries, and microlensing events. Several exoplanet surveys, e.g., the Wide Angle Search for Planets (WASP) and the Hungarian-made Automated Telescope Network (HATNet), have used a number of small cameras or telescopes to obtain few-minute cadence photometry for a large number of stars, providing 1\% photometry for stars brighter than 12 mag. Yet other transient surveys targeted specific types of stars, like M dwarfs for the MEarth project, to look for exoplanets around them. The largest exoplanet survey so far, the Kepler mission, provided short cadence ($\approx$1 min) data for 512 targets in the original mission, and the ongoing K2 mission is adding several dozen more short cadence targets for each new field observed. One of the unusual findings from the Kepler/K2 mission is the exciting discovery of a disintegrating planetesimal around the dusty white dwarf WD 1145+017 in a 4.5 h orbit \citep{vanderburg15,gansike16,rappaport16}. Such planetesimals around white dwarfs have not been found before because none of the previous surveys were able to observe a large number of white dwarfs for an extended period of time. These planetesimals are likely sent closer to the central star through planet-planet interactions \citep{Jura03, debes12, Vera13}.
Hence, at least some planets must survive the late stages of stellar evolution. The Large Synoptic Survey Telescope (LSST) will identify about 13 million white dwarfs, and it will provide repeated observations of the southern sky every 3 days over a period of 10 years. Each LSST visit consists of two 15 s exposures, reaching a magnitude limit of $g=24.5$ mag. However, this cadence is not optimal for identifying sources that vary on minute timescales. Here we present the first results from a new minute-cadence survey on the Cerro Tololo 4m Blanco Telescope that reaches the same magnitude limit as each of the LSST visits. We take advantage of the relatively large field of view of the Dark Energy Camera \citep[DECam,][]{flaugher05} to perform eight half-night-long observations of individual fields to explore the variability of the sky on minute timescales. For this paper, we focus on the 111 high proper motion white dwarf candidates in our first survey field. We describe the details of our observations and reductions in Section 2. Sections 3 and 4 provide proper motion measurements and the sample properties. We present the light curves for the variable white dwarfs in Section 5, and conclude in Section 6.
We present the results from the first minute-cadence survey of a large number of white dwarf candidates observed with DECam. We identify 111 high proper motion white dwarf candidates brighter than $g=24.5$ mag in a single DECam pointing. We estimate temperatures, cooling ages, and tangential velocities for each object and demonstrate that our targets are consistent with thin and thick disk white dwarfs. We create light curves for each white dwarf, spanning 8 half-nights. We identify a $g=20.64$ mag pulsating ZZ Ceti white dwarf, most likely in a binary system with an M dwarf companion. We do not find any eclipsing systems in this first field, but given the 1\% probability of eclipses and our observing window from the ground, this is not surprising. However, this work demonstrates the feasibility of using DECam to search for minute-cadence transits around white dwarfs. In addition to the high proper motion white dwarfs, the Besan\c{c}on Galaxy model predicts 400 other white dwarfs with $\mu<$20 mas yr$^{-1}$ in one of our DECam fields. Image subtraction routines can be used to search for variability for all of the sources in our DECam field, including the non-moving white dwarfs. Such a study with High Order Transform of PSF and Template Subtraction \citep[HOTPANTS,][]{becker15} is currently underway and will be presented in a future publication. Given the 0.7\% probability of finding a transit around a white dwarf, increasing the size of the white dwarf sample to several hundred would enable us to find the first solid-body planetary companion, if such systems exist.
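The sample-size argument above can be made concrete with a quick alignment-probability estimate. This is a minimal sketch, assuming (optimistically) that every target hosts such a companion and ignoring the observing-window incompleteness discussed in the text; the quoted per-system probability of 0.7\% is taken from the conclusions:

```python
def p_at_least_one(p_align, n_targets):
    """Probability that at least one of n_targets systems is aligned for
    transits, given a per-system alignment probability p_align.
    Ignores planet occurrence rates and observing-window coverage."""
    return 1.0 - (1.0 - p_align) ** n_targets

P_TRANSIT = 0.007  # ~0.7% transit probability quoted for white dwarfs

# 111 targets in this first DECam field vs. a several-hundred sample
print(f"N=111: {p_at_least_one(P_TRANSIT, 111):.2f}")
print(f"N=500: {p_at_least_one(P_TRANSIT, 500):.2f}")
```

Under these optimistic assumptions, the geometric alignment chance alone is already above 50\% for 111 targets and well above 95\% for several hundred, underlining that the limited observing window, not alignment, dominates the non-detection here.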
16
7
1607.08773
1607
1607.01522_arXiv.txt
We study the polarization properties of the jitter and synchrotron radiation produced by electrons in highly turbulent anisotropic magnetic fields. The net polarization is provided by the geometry of the magnetic field, whose directions are parallel to a certain plane. Such conditions may appear in relativistic shocks during the amplification of the magnetic field through the so-called Weibel instability. While the polarization properties of the jitter radiation allow extraction of direct information on the turbulence spectrum as well as the geometry of the magnetic field, the polarization of the synchrotron radiation reflects the distribution of the magnetic field over its strength. For the isotropic distribution of monoenergetic electrons, we found that the degree of polarization of the synchrotron radiation is larger than the polarization of the jitter radiation. For a power-law energy distribution of electrons, the relation between the degrees of polarization of synchrotron and jitter radiation depends on the spectral index of the distribution.
Turbulent magnetic fields play an important role in many astrophysical processes, such as the amplification of magnetic fields, accretion, and viscous heating and thermal conduction in a turbulent magnetised plasma \citep{Schekochihin2007}. One of the most important processes where the presence of turbulent magnetic fields is necessary is diffusive shock acceleration. In particular, the collisionless shock waves themselves are generated via magnetic turbulence, which mediates interactions between particles. In the acceleration process the turbulence is required to trap accelerated particles around the shock front for successive crossings, in which they gain energy. The acceleration of the particles is accompanied by their radiation. The character of the radiation can reveal details of the acceleration as well as the properties of the turbulent medium where the acceleration occurs. The geometry and the scale of the turbulence can be reflected in the polarization properties of the radiation. Completely isotropic turbulence does not produce any net polarization. However, if the scale of the turbulence is sufficiently large, one can detect fluctuations of the polarization and study the structure of the magnetic field \citep{Bykov2009}. In the case of small-scale turbulence, the only way to obtain polarised radiation is a specific anisotropic geometry of the turbulent magnetic field. This concerns, for example, objects like GRBs. The turbulent magnetic fields in shock waves can be generated by a variety of plasma instabilities \citep{Bret2004,Lemoine2010}. Particle-in-cell (PIC) simulations show that the Weibel instability is a key component for the generation of relativistic collisionless shock waves and the amplification of magnetic fields \citep{Spitkovsky2008,Martins2009,Medvedev2011}.
As proposed in Ref.~\cite{Medvedev1999} in the context of the magnetic field amplification in GRBs, the anisotropy of colliding beams is transferred to the energy of the small-scale turbulent magnetic field. The characteristic scale of the turbulence is of the order of the plasma skin depth. Here it is interesting to note that the amplification occurs predominantly for the components of the field which are perpendicular to the direction of the beams. The analysis of PIC simulations conducted in Ref.~\cite{Medvedev2011} also reveals a significant anisotropy of the turbulence at the saturation stage of amplification. However, the turbulence becomes more isotropic in the course of the non-linear evolution far behind the shock front \cite{Medvedev2011}. The radiation of electrons in small-scale magnetic fields significantly differs from regular synchrotron radiation \cite{Medvedev2000}. Because of the turbulence, the formation length of the radiation can be smaller than the formation length required for synchrotron radiation. In this case a qualitatively different type of radiation, jitter radiation, appears. The condition for realization of this radiation regime is the smallness of the characteristic length of the turbulence $\lambda$ compared to the {\it non-relativistic} Larmor radius $R_L=\frac{mc^2}{eB}$. Thus the appearance of the jitter radiation is determined solely by the properties of the magnetic field. Because of the smaller formation length, the characteristic frequency of jitter radiation $\omega_j$ is larger than the characteristic frequency of synchrotron radiation $\omega_0$ by the factor $\delta_j^{-1}$, i.e. $\omega_j=\omega_0/\delta_j$, where $\delta_j=\lambda/R_L\ll 1$. The mechanism of jitter radiation has been revisited in Ref.~\cite{Kelner2013}.
It has been shown that the power spectrum of radiation behaves as a constant at frequencies smaller than the characteristic frequency of jitter radiation, contrary to the earlier claimed $\sim\omega^1$ behaviour \cite{Medvedev1999}. The spectrum at high frequencies is a power law with index determined by the turbulence spectrum. The total power of the jitter radiation equals the total power of the synchrotron radiation in the isotropic magnetic field. Thus the presence of small-scale turbulence affects the radiation spectrum but does not change the total losses. In this paper we show that a similar relation holds for the polarisation properties: the turbulence influences only the spectral degree of polarisation, whereas the degree of polarisation of the total radiation is the same for jitter and synchrotron radiation. Concerning the polarisation of radiation from GRBs, synchrotron radiation is assumed to be the main mechanism for the production of polarised emission. While the Weibel instability seems to be inevitable in the generation of inner and outer shocks, it is not clear whether the Weibel instability indeed produces turbulence on small enough scales for the operation of the jitter regime. The simulations of Ref.~\cite{Medvedev2011} show that the jitter radiation regime can operate during the growing stage while the current filaments merge, and, at later times, after reaching the non-linear regime when the magnetic field starts to decay. While the growth of the magnetic field occurs very fast, the decaying stage looks more promising for the production of a significant portion of jitter radiation. At the saturation stage, it is more probable that the synchrotron regime is at work. So it makes sense to consider the polarisation properties of both the synchrotron and jitter regimes for a specific configuration of the turbulent magnetic field generated by the Weibel instability.
The polarisation properties of the radiation in a turbulent magnetic field with the so-called slab geometry have been studied in Refs.~\cite{Laing1980,Mao2013}. Assuming independence of the radiation from different parts of the emitting region, Laing \cite{Laing1980} averaged the radiation from a power-law electron distribution over the isotropically distributed directions of the magnetic field in the plane. While this approach works for synchrotron radiation, the calculations in the case of jitter radiation should take into account the coherence of the turbulent magnetic field. This has not been done in the calculations of Ref.~\cite{Mao2013}, where the polarisation of jitter radiation has been studied in the manner of Ref.~\cite{Laing1980}. This led them to the incorrect conclusion that jitter radiation can give $100\%$ polarisation. In this paper we study the polarisation properties of the radiation from isotropically distributed electrons in a turbulent magnetic field with slab geometry. The calculations have been conducted in a general tensor form which is not attached to any specific coordinate system and allows us to avoid any assumptions about the principal axes of the polarisation ellipse. For the calculations we follow the approach proposed in Ref.~\cite{Kelner2013}. The averaging of the obtained formulae for jitter radiation over all directions of observation reproduces the results obtained for the isotropic turbulence considered in Ref.~\cite{Kelner2013}. The derived formulae can be used for an arbitrary energy distribution of electrons. We compare the polarisation properties of the synchrotron and jitter radiation and show that the synchrotron radiation is more polarised in the case of monoenergetic distributions. For the power-law distribution of electrons, an analytical formula for the degree of polarisation of jitter radiation is obtained. The paper has the following structure.
Sections 2 and 3 describe the calculations of the polarization produced in the jitter and synchrotron radiation regimes, respectively. In Section 4 we present the results and compare the two cases. Finally, in Section 5 we discuss the main results and draw conclusions.
In this paper, the polarization properties of the radiation produced by isotropically distributed electrons in a {\it turbulent} magnetic field with directions strictly parallel to a plane (the so-called slab geometry) have been studied. We consider two extreme cases of small and large turbulence scales. In a large-scale turbulent magnetic field, ultrarelativistic electrons radiate in the regular synchrotron regime. The geometry of the field only insignificantly affects the spectral energy distribution of the radiation for a given turbulence spectrum. The intensity observed at different observation angles differs within a factor of two (see Fig.~\ref{fig:TotalPowerSyn}). The polarization is more sensitive to the observation angle. It changes from $0\%$ when the magnetic field plane is observed face-on to higher than $90\%$ when the plane is observed edge-on (Fig.~\ref{fig:SynPolarization}). Both the intensity and the polarization are sensitive to the distribution of the magnetic field strength (Figs.~\ref{fig:TotalPowerSynB} and \ref{fig:SynPolarization}). At high frequencies, the radiation spectrum in the cutoff region falls off more slowly in the case of a broader field distribution. At the same time, the radiation becomes less polarized in a turbulent field with a broader distribution. In a small-scale turbulent magnetic field, the properties of the radiation of electrons are substantially different from those of synchrotron radiation. Namely, if the characteristic length of the turbulence $\lambda$ is smaller than the non-relativistic Larmor radius $R_L$, electrons emit in the jitter radiation regime. The jitter radiation is determined by the scale of the turbulence and, therefore, by definition, occurs only in turbulent media. In the slab geometry of the turbulent magnetic field, which can be generated in relativistic shock waves, e.g.
by Weibel instability, we derived analytical formulae presented in Eqs.~(\ref{eq:JitTensor}) and (\ref{eq:QRJit}) in tensor form. We also derived the spectral energy distribution of the jitter radiation field as a function of the observation angle $\theta$. After averaging over $\theta$, it naturally leads to the results derived in Ref.~\cite{Kelner2013} in the case of isotropic turbulence. The jitter radiation has distinct spectral features. The maximum of the spectral energy distribution is achieved at $\omega_{j}$, which is shifted $R_L/\lambda$ times towards higher frequencies compared to the position of the maximum of the synchrotron radiation. At high frequencies the spectrum has a power-law form; the slope depends on the spectrum of the magnetic turbulence (see Eq.~(\ref{eq:JHighPower})). At low frequencies, it is described by Eq.~(\ref{eq:assymp}), which tends to a constant. As in the case of synchrotron radiation, the jitter radiation is not polarized when the magnetic field plane is observed face-on. The polarization grows as the observation angle departs from this direction. In general, the polarization of the jitter and synchrotron radiation has similar properties. In both regimes, it increases with the frequency and the observation angle. But they differ in the details. In the case of a monoenergetic distribution of electrons, the polarization of the synchrotron radiation is higher than the polarization of the jitter radiation at all observation angles and frequencies. However, for a power-law distribution of electrons the polarization of the jitter radiation can be higher.
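The frequency scalings quoted above ($R_L$, $\delta_j$, and the $R_L/\lambda$ shift of the jitter peak) can be checked numerically. This is a minimal sketch in SI units, with order-unity prefactors dropped; the field strength $B$, turbulence scale $\lambda$, and Lorentz factor $\gamma$ are arbitrary illustrative values, not taken from the paper:

```python
# Check that omega_j = omega_0 / delta_j reduces to gamma^2 * c / lam,
# with delta_j = lam / R_L and R_L = m*c/(e*B) (SI analogue of mc^2/eB).
M_E = 9.109e-31   # electron mass [kg]
E_CH = 1.602e-19  # elementary charge [C]
C = 2.998e8       # speed of light [m/s]

def jitter_vs_synchrotron(B, lam, gamma):
    R_L = M_E * C / (E_CH * B)            # non-relativistic Larmor radius [m]
    delta_j = lam / R_L                   # jitter parameter (<< 1 in jitter regime)
    omega_0 = gamma**2 * E_CH * B / M_E   # synchrotron characteristic frequency
    omega_j = omega_0 / delta_j           # jitter characteristic frequency
    return omega_0, omega_j, delta_j

# Illustrative values: B = 1 G = 1e-4 T, lambda = 1 cm, gamma = 100
omega_0, omega_j, delta_j = jitter_vs_synchrotron(B=1e-4, lam=0.01, gamma=100.0)
# Algebraic consistency check: omega_j = gamma^2 * c / lambda
assert abs(omega_j - 100.0**2 * C / 0.01) / omega_j < 1e-12
print(delta_j, omega_j / omega_0)
```

The final check confirms the algebraic identity $\omega_j=\omega_0/\delta_j=\gamma^2 c/\lambda$ that underlies the $R_L/\lambda$ shift of the peak.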
16
7
1607.01522
1607
1607.07585.txt
An unidentified infrared emission (UIE) feature at 6.0 $\mu$m is detected in a number of astronomical sources showing the UIE bands. In contrast to the previous suggestion that this band is due to C=O vibrational modes, we suggest that the 6.0 $\mu$m feature arises from olefinic double-bond functional groups. These groups are likely to be attached to aromatic rings which are responsible for the major UIE bands. The possibility that the formation of these functional groups is related to the hydrogenation process is discussed.
A family of strong unidentified infrared emission (UIE) bands at 3.3, 6.2, 7.7--7.9, 8.6, 11.3 and 12.7\,$\mu$m is commonly observed in astronomical sources \citep[for a recent review, see][]{pe13}. As the UIE bands are ubiquitous in circumstellar envelopes, molecular clouds, and active galaxies, the nature of their carrier provides key information for understanding the matter cycle and chemical evolution of galaxies. Although the exact chemical structure of the carrier of the UIE bands is still uncertain \citep{cataldo2013, papoular2013, jones2013, Kwok13,zk15}, it is generally accepted that the bands arise from C$-$H/C$-$C stretching and bending vibrational modes of aromatic and/or aliphatic compounds. Nevertheless, there also exists a small group of weak UIE bands that do not always appear together with the stronger UIE bands. Their nature is completely unknown. A valid assignment of the vibrational modes of these weaker bands would help to identify the UIE carrier. The 6.0\,$\mu$m feature is one of the members of this group. Although the 6.0\,$\mu$m feature has been detected in various objects showing UIE bands \citep[e.g.,][]{Peeters02}, there has not yet been a thorough investigation of the nature of this feature. The spectra of young stellar objects usually exhibit a strong absorption feature at 6.0\,$\mu$m, which is commonly ascribed to the bending mode of amorphous water ice \citep{ta87}. However, solid H$_2$O has a stronger absorption at 3.0\,$\mu$m, which has never been observed in these UIE sources. Furthermore, given the carbon-rich nature of UIE sources, H$_2$O can be ruled out as the carrier of the 6.0\,$\mu$m feature. An early suggestion for the origin of the 6.0 $\mu$m feature is a C$-$C stretching mode in a minor population of PAH molecules \citep{Beintema96}.
The position of the 6.0\,$\mu$m feature is close to that of the stretching mode of C=O, and it has been suggested that HCOOH can be a cause of broadening of the observed 6.0\,$\mu$m absorption feature in young stellar objects \citep{st96}. This has led to the suggestion that the 6.0\,$\mu$m emission band in UIE sources is a carbonyl stretching mode from oxygenated polycyclic aromatic hydrocarbons (PAHs) \citep{Peeters02}. However, the C=O stretching mode lies at a wavelength shorter than 6.0\,$\mu$m, so the identification is not perfect. Another possibility is that the 6.0\,$\mu$m UIE band arises from heteroatomic aromatic compounds. For example, \citet{js09} show that the C-C mode with C atoms bonded to Si atoms can shift to the 6.0\,$\mu$m range, and this feature might suggest the presence of multiple Si-atom complexes. In this paper, we attempt to understand the origin of the 6.0\,$\mu$m UIE band through comparisons between the observations, laboratory measurements, and theoretical computations resulting from quantum-chemical calculations.
In this paper, we present experimental and theoretical spectra of olefins for comparison with astronomical spectra showing the 6.0 $\mu$m UIE feature. Comparisons are also made with pure aromatic (PAH) molecules and molecules containing the carbonyl group. The advantages of the olefinic hypothesis can be summarized as follows: \begin{itemize} \item {The wavelength range of the C=C vibrational mode (5.96--6.25 $\micron$) measured experimentally is closer to the astronomically observed peak wavelength of the 6.0 $\micron$ feature than that of the C=O vibrational mode.} \item{Theoretical calculations show that the 6.0 $\micron$ feature can generally arise from simple hydrocarbon molecules with olefinic double-bond functional groups.} \item{The olefinic C=C band is inherently weak, so there is no need to invoke the extra assumption of low concentration to explain the weak intensity of the 6.0 $\micron$ feature. This also better explains why the feature is common in different objects with different chemistry.} \end{itemize} We note that the coming {\it James Webb Space Telescope (JWST)} will offer higher sensitivity and spatial resolution, which can be used to increase the sample size and better identify the nature of the 6.0 $\mu$m band. The UIE phenomenon is a complex one, consisting of major UIE bands and minor features as well as broad emission plateaus.
It is extremely unlikely that this phenomenon can be explained by simple gas-phase, purely aromatic PAH molecules. This study is part of an exercise to learn more about the nature of the vibrational bands of carbonaceous compounds with mixed aromatic/aliphatic structures. We show that the presence of olefinic side groups can lead to minor features in UIE sources. Further work is needed to fully explore the effects of aliphatic side groups as possible contributors to the UIE phenomenon. {\flushleft \bf Acknowledgements~} We thank an anonymous referee for his/her many helpful comments, which led to improvements in this paper. The Laboratory for Space Research was established by a special grant from the University Development Fund of the University of Hong Kong. This work is also supported in part by grants from the HKRGC (HKU 7027/11P and HKU 7062/13P).
16
7
1607.07585
1607
1607.04639_arXiv.txt
The young stellar cluster Westerlund~1 (Wd~1: $l$\,=\,339.6$^\circ$, $b$\,=\,$-$0.4$^\circ$) is one of the most massive in the local Universe, but accurate parameters await a better determination of its extinction and distance. Based on our photometry and data collected from other sources, we have derived a reddening law for the cluster line--of--sight, representative of the Galactic Plane ($-5^\circ<$\,b\,$<$+5$^\circ$), in the window 0.4--4.8\,$\mu$m: the power law exponent $\alpha$\,=\,2.13$\pm$0.08 is much steeper than those published a decade ago (1.6--1.8), and our index $R_V$\,=\,2.50\,$\pm$\,0.04 also differs from them, but is in very good agreement with recent works based on deep surveys of the inner Galaxy. As a consequence, the total extinction $A_{Ks}$\,=\,0.74\,$\pm$\,0.08 ($A_V$\,=\,11.40\,$\pm$\,2.40) is substantially smaller than previous results (0.91--1.13), part of which ($A_{Ks}$\,=\,0.63 or $A_V$\,=\,9.66) is from the ISM. The extinction in front of the cluster spans a range of $\Delta A_V\sim$8.7, with a gradient increasing from SW to NE across the cluster face, following the same general trend as the warm dust distribution. The map of the $J-Ks$ colour index also shows a trend of reddening in this direction. We measured the equivalent width of the diffuse interstellar band at 8620~\AA\ (the ``GAIA DIB'') for Wd~1 cluster members and derived the relation $A_{Ks}$\,=\,0.612\,$EW$ $-$ 0.191\,$EW^2$. This extends the \citet{Munari+2008} relation, valid for $E_{B-V}$\,$<$\,1, to the non--linear regime ($A_V$\,$>$\,4).
\label{section1} The majority of stars are born in large clusters, so these sites are key to understanding the stellar contents of galaxies. Until recently the Milky Way was believed to be devoid of large young clusters. The realisation that our Galaxy harbours many young clusters with masses M\,$>$\,10$^3$\,M$_{\odot}$, like Westerlund~1 (Wd~1), Arches, Quintuplet, RSG~1, RSG~3, Stephenson~2, Mercer~81, NGC3603, h+$\chi$ Persei, Trumpler~14, Cygnus~OB2, [DBS2003] and VdBH~222 \citep[see summary by][]{Negueruela2014}, reveals a different scenario than previously thought. Although the Milky Way offers the opportunity to resolve cluster members, unlike galaxies beyond the Local Group, accurate determination of the fundamental properties of these recently discovered clusters (distance, mass, age, initial mass function (IMF), and binary fraction) is still lacking in many cases. The two fundamental parameters upon which all others depend are the interstellar extinction and the distance. In the present work we use accurate techniques to derive the interstellar extinction towards Wd~1 and its surroundings, something that was not done accurately in previous works. The extinction is related to the observed magnitudes by the fundamental relation (distance modulus): \begin{equation} m_{\lambda} = M_{\lambda} + 5\log_{10} \left(\frac{d}{10}\right) + A_{\lambda}. \label{eq1} \end{equation} A set of different filters can be combined to define colour excess indices: $\displaystyle E_{\lambda1}-E_{\lambda2}=(m_{\lambda1}-m_{\lambda2})_{\rm{obs}}-(m_{\lambda1}-m_{\lambda2})_0$, where the zero subscript indicates the intrinsic colour of the star. Some authors -- like the classical work of \citet{Indebetouw+2005} -- attempted to derive $\displaystyle A_\lambda$ directly from the above relations, using a minimization procedure applied to a large number of observations from the 2MASS survey \citep{Stru06}. 
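As a minimal numerical sketch of the distance modulus relation in Eq.~\eqref{eq1} (the absolute magnitude, distance, and extinction below are illustrative values, not cluster measurements):

```python
import math

def apparent_magnitude(M_abs, d_pc, A_lam):
    """m_lambda = M_lambda + 5*log10(d/10) + A_lambda  (Eq. 1)."""
    return M_abs + 5.0 * math.log10(d_pc / 10.0) + A_lam

def extinction(m_obs, M_abs, d_pc):
    """Invert Eq. 1 to recover A_lambda for a star of known M and d."""
    return m_obs - M_abs - 5.0 * math.log10(d_pc / 10.0)

# Round trip with illustrative numbers: a hot star (M = -5)
# at 4 kpc seen through A = 1.1 mag of extinction.
m = apparent_magnitude(-5.0, 4000.0, 1.1)
```

The round trip makes the point of the text explicit: with only observed magnitudes, the split between distance modulus and extinction is degenerate unless one of the two is constrained independently.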
In fact, the number of variables is greater than the degrees of freedom of the system of equations: all possible colour excess relations are linear combinations of the same parameters. In the specific case of the NIR, after dividing by the $Ks$--band extinction, there is an infinite number of pairs $\displaystyle A_J/A_{Ks}$ and $\displaystyle A_H/A_{Ks}$ satisfying the relations. Many minimization programs (like the downhill simplex technique used in the {\it amoeba} program) simply pick a local minimum close to the first guess as a solution, hiding the existence of other possibilities. Derivation of the extinction can only be accomplished on the basis of a specific extinction law as a function of wavelength, which ultimately reflects the expected properties of dust grains. Interstellar reddening is caused by scattering and absorption of light by dust grains, and the amount of light subtracted from the incoming beam (extinction) is governed by the ratio between its wavelength ($\lambda$) and the size of the grains ($d$). For $\lambda \ll d$ all photons are scattered/absorbed (gray extinction); for $\lambda > d$ the fraction of photons that escape being scattered/absorbed increases. The interstellar dust is a mixture of grains of different sizes and refractive indices, leading to a somewhat more complicated picture than described above. This was first modelled by \citet{Hulst1946} for different dust grain mixtures. All subsequent observational works resulted in optical and NIR extinction laws similar to those of the van de Hulst models (in particular his models \#15 and \#16). A remarkable feature is that they are well represented by a power law ($A_\lambda$\,$\propto$\,$\lambda^{-\alpha}$) in the range 0.8\,$<$\,$\lambda$\,$<$\,2.4\,$\mu$m \citep[see e.g.][]{F99}. 
The $\alpha$ exponent of the extinction power law, $\displaystyle A_{\lambda}/A_{Ks} = (\lambda_{Ks}/\lambda)^\alpha$, is related to the observed colour excesses through: \begin{equation} \frac{A_{\lambda_1}-A_{\lambda_2}}{A_{\lambda_2}-A_{\lambda_{Ks}}} = \frac{\left(\frac{\lambda_2}{\lambda_1}\right)^{\alpha}-1}{1-\left(\frac{\lambda_2}{\lambda_{Ks}}\right)^{\alpha}}. \label{eq2} \end{equation} The value of the $\alpha$ exponent is driven by: a) the specific wavelength range covered by the data; b) the effective wavelengths of the filter set, which may differ from one photometric system to another, especially in the $R$ and $I$ bands; and c) the fact that the effective wavelength depends on the spectral energy distribution (SED) of the star, the transmission curve of the filter, and the amount of reddening of the interstellar medium (ISM). Power law exponents in the range 1.65\,$<$\,$\alpha$\,$<$\,2.52 have been reported \citep[see e.g.,][]{CCM89, Berdnikov+1996, Indebetouw+2005, Stead+2009, RL85, FM09, Nishiyama+2006, GF14}, but it is not clear how much of the spread in the value of the exponent is due to real physical differences in the dust along different lines-of-sight and how much comes from the method used in the determination of the exponent. As shown by \citet[their Fig.~5]{GF14} using the 2MASS survey, the ratio of colour excesses $E_{H-K}/E_{J-K}$ grows continuously from 0.615 to 0.66 as the distance modulus grows from 6 to 12 towards the inner Galactic Plane (GP). This corresponds to a change in $\alpha$ from 1.6 to 2.2, which translates into $A_J/A_{Ks}$ from 2.4 to 3.4. \citet{Zas09} also used 2MASS data to show that colour excess ratios vary as a function of Galactic longitude, indicating increasing proportions of smaller dust grains towards the inner GP. Reddening laws steeper than the ``canonical'' ones have been suggested for a long time, but their reality is now clearly evident from deep imaging surveys. 
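The inversion of Eq.~\eqref{eq2} for $\alpha$ has no closed form, but the predicted colour excess ratio is monotonic in $\alpha$, so a simple bisection suffices. The sketch below assumes the 2MASS isophotal band wavelengths; the paper instead uses reddening-dependent effective wavelengths (see Section 2.2), so the exponent recovered here is not expected to match the published value exactly.

```python
import math

# 2MASS isophotal wavelengths in microns (an assumption; the paper uses
# reddening-dependent effective wavelengths, so the alpha recovered below
# differs somewhat from the published alpha = 2.13).
LAM_J, LAM_H, LAM_KS = 1.235, 1.662, 2.159

def colour_excess_ratio(alpha):
    """E(J-H)/E(H-Ks) implied by A_lambda ~ lambda^-alpha (Eq. 2)."""
    aj = (LAM_KS / LAM_J) ** alpha   # A_J / A_Ks
    ah = (LAM_KS / LAM_H) ** alpha   # A_H / A_Ks
    return (aj - ah) / (ah - 1.0)

def solve_alpha(ratio, lo=0.5, hi=4.0, tol=1e-8):
    """Bisection; colour_excess_ratio is monotonically increasing in alpha."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if colour_excess_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, `solve_alpha(1.891)` (the observed ratio quoted in the conclusions) returns $\alpha\approx1.8$ with isophotal wavelengths, illustrating how sensitive the exponent is to the adopted band wavelengths.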
The substantial progress of recent years has revealed that there is no ``universal reddening law'', as believed in the past. Moreover, the extinction coefficients are quite variable from one line--of--sight to another, even on scales as small as arcminutes. At optical bands ($UBV$) it was established long ago that the extinction law for particular lines-of-sight can depart strongly from the average \citep{FM09,He+1995,Popowski2000,Sumi2004,Racca+2002}. In recent years this has been shown to occur also in the NIR wavelength range \citep{Larson+2005, Nishiyama+2006, Froebrich+2007, Gosling+2009}. The patchy tapestry of extinction indices is particularly impressive in the large-area NIR$/$optical work done by \citet{Nataf16,Nataf13} for the Galactic Bulge and by \citet{Scha16} for the GP. Although we cannot directly use the extinction coefficients from these works for our particular target (line--of--sight), their derived reddening relations help in checking the consistency of our results. Targets to derive the extinction must be selected among stars with well known intrinsic colour indices, in order to measure accurate colour excesses. Wd~1 cluster members with known spectral types are ideal for this, especially because the majority of them are hot stars, for which the intrinsic colours are close to zero. There are $\approx$\,92 stars in this group; the statistics can be improved by using stars in the field around the cluster. \citet{Wozniak+1996} proposed using Red Clump (RC) stars to derive the ratio of total to selective extinction. These Horizontal Branch stars are the most abundant type of luminous star in the Galaxy and have a relatively narrow range in absolute colours and magnitudes. RCs form a compact group in the colour-magnitude diagram (CMD), as shown by \citet[and references therein]{Stanek+2000}. 
This is due to the low dispersion -- a few tenths of a magnitude -- in the intrinsic colours and luminosities of RC stars \citep{Stanek+1997,Paczynski+1998}. This technique, initially designed for filters $V$ and $I$ in the OGLE survey for microlensing events, and filters $V$ and $R$ in MACHO, was adapted to the $JHKs$ bands \citep{Flaherty+2007,Indebetouw+2005,Nishiyama+2006,Nishiyama+2009}. As shown by \citet{Indebetouw+2005} and by \citet{Nishiyama+2006,Nishiyama+2009}, RC stars in the CMD (e.g. $J-Ks$ {\it versus} $Ks$) may appear as an over--density ``strip''. That strip in the CMD contains interlopers which mimic RC star colours but have different luminosities (like nearby red dwarfs and distant supergiants). This does not allow the application of relation~\eqref{eq1} to derive the absolute extinction from each particular star in the strip, but still works for the colour excess ratios in relation~\eqref{eq2}. From the measured colour excess ratio (e.g. $E_{J-H}/E_{J-Ks}$) the value of the exponent $\alpha$ can be calculated and, therefore, the ratios $A_J/A_{Ks}$ and $A_H/A_{Ks}$. \citet{Nishiyama+2006} reported $\alpha$\,=\,1.99, with $A_J/A_{Ks}$\,$\approx$\,3.02 and $A_H/A_{Ks}$\,$\approx$\,1.73, in a study of the Galactic Bulge, much higher than all previous results. \citet{Fritz+2011} also derived a large value, $\alpha$\,=\,2.11, for the Galactic Centre (GC), using a completely different technique. \citet{Stead+2009} reported $\alpha$\,=\,2.14\,$\pm$\,0.05 from UKIDSS data and similarly high exponents from 2MASS data. They did not derive $A_\lambda/A_{Ks}$, since in their approach those quantities vary because of shifts in the effective wavelengths as the extinction increases (see Section 2.2). However, to a first approximation, using the isophotal wavelengths, we can calculate from their extinction law: $A_J/A_{Ks}$\,$\approx$\,3.25 and $A_H/A_{Ks}$\,$\approx$\,1.78. 
The dream of using interstellar DIBs to evaluate the extinction has been hampered by saturation effects in the strength of the features and by the behaviour of the carriers, which differs between the hot/diffuse ISM and cold/dense clouds -- however, see \citet{Maiz+2015}. The 8620\,\AA\ DIB correlates linearly with the dust extinction \citep{Munari+2008}, at least for low reddening, and is relatively insensitive to the ISM characteristics. Since this spectral region will be observed by GAIA for a large number of stars up to relatively large extinction, we used our data to extend the \citet{Munari+2008} relation, which was derived for low reddening. This work is organised as follows. In Section~\ref{section2} we describe the photometric and spectroscopic observations and the data reduction. In Section~\ref{section3} we describe the colour excess ratio relations, the ratios between the absolute extinctions, and a suggested extinction law for the inner GP. In Section~\ref{section4} we compare our results with others reported in the literature. In Section~\ref{section5} we present $J-{Ks}$ extinction maps of the field around Wd~1 for a series of colour slices and evaluate the 3D position of obscuring clouds. In Section~\ref{section6} we analyse the relation between the interstellar extinction and the equivalent width (EW) of the 8620~\AA\ DIB. In Section~\ref{section7} we present our conclusions.
\label{section7} We present a study of the interstellar extinction in a FOV of 10$'$ $\times$ 10$'$ in the direction of the young cluster Westerlund~1 in $JHKs$, with photometric completeness $>$\,90\% at $Ks$\,=\,15. Using publicly available data, we extended the wavelength coverage from the optical to the MIR (although with less complete photometry). Colour excess ratios were derived by combining 92 Wd~1 cluster members with published spectral classification with 8463 RC stars inside the FOV. Our result for the NIR, $E_{J-H}/E_{H-Ks} = 1.891\pm0.001$, is typical of recent deep imaging towards the inner Galaxy. Using the procedure designed by \citet{Stead+2009} to obtain effective wavelengths of the 2MASS survey and Eq.\,\eqref{eq2}, we derived a power law exponent $\alpha = 2.126 \pm 0.080$, which implies $A_J/A_{Ks} = 3.23$. This extinction law is steeper than the older ones \citep{Indebetouw+2005,CCM89,RL85}, which were based on shallower imaging, and is in line with recent results based on deep imaging surveys \citep{Stead+2009, Nishiyama+2006, Nataf16}. In the NIR, this implies smaller $A_{Ks}$ and larger distances than laws based on shallower photometry, which has a large impact on inner Galaxy studies. Using our measured $A_{Ks} / E_{J-Ks} = 0.449$ (plus combinations of other filters) we obtained the extinction to Wd~1: $<A_{Ks}> = 0.736 \pm 0.056$. This is $0.2-0.3$ magnitudes smaller than previous work based on older (shallower) extinction laws \citep{Gennaro+2011, Negueruela+2010, Lim+2013, Piatti+1998}. On the other hand, our $A_V = 11.26 \pm 0.07$ is in excellent agreement with the $A_V = 11.6$ derived by \citet{Clark+2005} based on a completely different method: the OI~7774\,\AA\ EW $\times$ $M_V$ relation for six Yellow Hypergiants. The cluster extinction encompasses the range $A_{Ks} = 0.55-1.17$ (which translates into $A_{V} \approx 8.5-17$). 
Cluster members have a typical extinction $A_{Ks}=0.74\pm 0.08$, which translates into $A_V=11.4\pm 1.2$. The foreground interstellar component is $A_{Ks} = 0.63\pm 0.02$ or $A_V = 9.66\pm 0.30$. The extinction spread of $A_V \sim 2.5$ magnitudes inside a FOV of 3.5$\arcmin$ indicates that it is produced by dust connected to the cluster region. In fact, Fig.~\ref{fig:dustmap} shows a patchy distribution of warm dust. There are indications of a gradient in $A_{Ks}$ increasing from SW to NE, which is in line with the map of warm dust and with the colour density maps in the surrounding field. However, the effect is not very clear, suggesting a patchy intra-cluster extinction. The $J-Ks$ colour density maps unveiled the existence of a group of blue foreground stars, which may or may not be a real cluster. Since those stars partially overlap the Wd~1 cluster, they must be taken into account when subtracting the field population in the usual procedures to isolate Wd~1 cluster members. We measured the EW of the 8620\,\AA\ DIB for 43 Wd~1 cluster members and combined them with additional field stars and results collected from the literature, showing a good correlation with $A_{Ks}$. Although the linear relation reported by \citet{Munari+2008} is recovered for $E_{B-V}$\,$\approx$\,1, it deviates for larger values, and we present a polynomial fit extending the relation. The moderately large scatter in the Wd~1 measurements seems to reflect the uncertainties in our procedures to measure the extinction and the EW. Unfortunately our sample does not probe the range 0.4\,$<$\,EW8620\,$<$\,0.8 (see Fig.\,\ref{fig:AKEW8620}). In order to improve the situation in preparation for GAIA, the above relation should be re--derived incorporating measurements from stars with 1\,$<$\,$E_{B-V}$\,$<$\,2.5 (equivalent to 0.3\,$<$\,$A_{Ks}$\,$<$\,0.7). 
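For reference, the quadratic fit quoted in the abstract, $A_{Ks}$\,=\,0.612\,$EW$\,$-$\,0.191\,$EW^2$ (EW in \AA), can be evaluated as below; it should only be trusted over the EW range actually probed by the data.

```python
def aks_from_dib(ew):
    """A_Ks from the 8620 A DIB equivalent width (EW in Angstrom),
    using the quadratic fit quoted in this work's abstract.
    Only valid over the fitted EW range."""
    return 0.612 * ew - 0.191 * ew ** 2
```

In the small-EW limit the quadratic term is negligible and the fit reduces to a linear relation with slope 0.612, consistent with the low-reddening regime of \citet{Munari+2008}.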
Indeed, such a relation is expected to differ between the general ISM and the denser environments prevalent in regions of recent star formation. We examined our result $R_V\,=\,A_V /E_{B-V}\,=\,2.50 \pm 0.04$ with care, since previous works found it suspicious that this ratio for Wd~1 was much smaller than the usual $R_V$\,=\,3.1 \citep{Clark+2005,Lim+2013, Negueruela+2010}, although similar values have been reported by \citet{FM09} for a few stars, such as BD+45~973. Moreover, this value is in excellent agreement with \citet{Nataf13}, as deduced from $R_I/E_{V-I}$ based on OGLE~III fields in the Galactic Bulge, close to the position of Wd~1. In any case, even if there were minor systematic errors in the photometric calibration of the B--band, they would not be larger than a few tenths of a magnitude, which is dwarfed by the large extinction: $A_B \approx 15$. An interesting result is the lack of correlation between the reddening law in the optical and that in the NIR (see Fig.~\ref{fig:RJKIJxRAIVI}). This is in agreement with the result found by \citet{Nataf16} for a much larger field in the inner Galaxy. Looking at the position of \citet{CCM89} in that figure, we confirm other diagnostics showing that dust grain properties in the inner Galaxy are different from those sampled by shallower imaging. Even in our small field (12\arcmin$\times$12\arcmin), the colour ratio diagram of Fig.\,\ref{fig:RJKIJxRAIVI} shows that the Wd~1 cluster members spread into a different zone of the diagram compared to the RCs. This suggests that intra--cluster dust grains have properties different from those of the lower density ISM where the RCs are located. The large spread of indices among Wd~1 members indicates the existence of clumps of dust grains with a variety of sizes. 
We derived the extinction law for the range 0.4--4.8\,$\mu$m, which is in very good agreement with the colour excess ratios obtained by \citet{Scha16} from large photometric and spectroscopic surveys of the inner GP. We propose the law presented in Eq.\,\eqref{eq19} as representative of the average inner GP. A striking feature of this law is its close coincidence with a power law with exponent $\alpha = 2.17$ over the entire range 0.8--4\,$\mu$m. We call the reader's attention to the fact that this is an average law, useful for general purposes, since the reddening law varies from place to place inside the narrow zone of the GP.
16
7
1607.04639
1607
1607.08621_arXiv.txt
{ We explore the possibility that fermionic dark matter undergoes a BCS transition to form a superfluid. This requires an attractive interaction between fermions and we describe a possible source of this interaction induced by torsion. We describe the gravitating fermion system with the Bogoliubov-de Gennes formalism in the local density approximation. We solve the Poisson equation along with the equations for the density and gap energy of the fermions to find a self-gravitating, superfluid solution for dark matter halos. In order to produce halos the size of dwarf galaxies, we require a particle mass of $\sim 200\mathrm{eV}$. We find a maximum attractive coupling strength before the halo becomes unstable. If dark matter halos do have a superfluid component, this raises the possibility that they contain vortex lines.} \arxivnumber{1607.08621}
There has recently been a significant amount of interest in the possibility that dark matter forms a many-body quantum state in galactic halos. Much of this work has focused on bosonic dark matter \cite{Sin:1994,Ji:1994,Peebles:2000,Goodman:2000,Hu:2000,Boehmer:2007,Harko:2011,Chavanis:2011,Harko:2011a,Harko:2011b, RindlerDaller:2011,Pires:2012,Bettoni:2013,Li:2013,Diez-Tejedor:2014,Fan:2016}. A massive bosonic particle can undergo Bose-Einstein condensation and be supported by its own gravitational interactions. One of the reasons for the popularity of this model is that it provides a possible solution to the core-cusp problem of dark matter \cite{deBlok:2009}. This problem arises from large-scale simulations of dark matter structure formation which predict large cuspy densities in the centers of galaxies. These density profiles do not match observations of the galaxies. In Bose-Einstein condensates these cusps cannot form due to either repulsive interactions, or the uncertainty principle in the case of non- or weakly interacting bosons. Recent work has explored the idea that Bose condensation of dark matter and coupling to baryons could explain the MOND-like behaviour of rotation curves while keeping the large-scale success of particle dark matter \cite{Berezhiani:2015,Berezhiani:2016}. The formation of a Bose-Einstein condensate leads to the dark matter halo behaving as a superfluid. The possibility that fermionic dark matter could form a many-body quantum state has also been explored \cite{Chavanis:2002,deVega:2013,Destri:2013,Chavanis:2015,Domcke:2015}. Previous work has focused on the idea that fermion dark matter can form a degenerate Fermi gas either in the core or throughout the whole of a dwarf galaxy halo. These models also avoid the core-cusp problem as the degeneracy pressure stops large overdensities from building up. 
Unlike the bosonic case, a degenerate Fermi gas does not behave as a superfluid, however as we will see, adding an attractive interaction allows a superfluid to form via a Bardeen-Cooper-Schrieffer (BCS) transition. One significant difference between the degenerate fermion and boson models is the particle masses required to produce galactic sized haloes supported by quantum or degeneracy pressure. On dimensional grounds, the size of a bosonic halo supported by quantum pressure can be estimated to be \cite{Domcke:2015} \begin{equation} R\sim \frac{h^2}{GMm_b^2} \end{equation} where $m_b$ is the mass of the boson and $M$ the mass of the halo. For a fermionic halo supported by degeneracy pressure, the size will be \begin{equation} \label{eq:fermiMR} R\sim \left(\frac{M}{m_f}\right)^{2/3}\frac{h^2}{GMm_f^2} \end{equation} with $m_f$ the mass of the fermion. Since $M\gg m_f$ the halo radius will be much larger for a fermion of a given mass than a boson with the same mass. As an estimate for Milky Way sized galaxy, Domcke and Urbano \cite{Domcke:2015} take $R\sim 100\mathrm{kpc}$ and $M=10^{12}M_\odot$ giving a boson with $m_b\sim 10^{-25}\mathrm{eV}$ and a fermion with $m_f\sim 20\mathrm{eV}$. It should be noted that larger boson masses can be allowed if the condensate is supported primarily by a repulsive interaction rather than the quantum pressure, and in fact non-interacting bosonic dark matter is disfavored by CMB measurements \cite{Li:2013}. Here we investigate the possibility that a fermionic species can become a superfluid via a BCS transition. Fermions can undergo a transition to a superfluid state via the BCS mechanism if there is at least a small attractive force between particles. We model a self-gravitating system of fermions with an attractive self-coupling. We find self-consistent solutions for the fermion density and gap energy along with the gravitational potential.
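Inverting the fermionic mass--radius relation of equation \eqref{eq:fermiMR} for the particle mass gives $m_f=\left(h^2/(G R M^{1/3})\right)^{3/8}$. The short check below, a sketch using SI constants and the Milky Way-like numbers quoted in the text ($R\sim100$\,kpc, $M=10^{12}M_\odot$), recovers the $\sim20\,\mathrm{eV}$ estimate to within a factor of order unity (order-of-magnitude scaling only; numerical prefactors are dropped as in the text).

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
H = 6.626e-34        # Planck constant, J s
C = 2.998e8          # speed of light, m/s
KPC = 3.086e19       # metres per kiloparsec
MSUN = 1.989e30      # solar mass, kg
EV = 1.602e-19       # joules per electronvolt

def fermion_mass_ev(R_m, M_kg):
    """Invert R ~ (M/m)^(2/3) h^2 / (G M m^2), i.e.
    m = (h^2 / (G R M^(1/3)))^(3/8), and convert to eV/c^2."""
    m_kg = (H**2 / (G * R_m * M_kg ** (1.0 / 3.0))) ** (3.0 / 8.0)
    return m_kg * C**2 / EV

# Milky Way-like halo from the text: R ~ 100 kpc, M = 10^12 Msun.
m_f = fermion_mass_ev(100.0 * KPC, 1e12 * MSUN)   # ~ 20 eV
```

The steep $m_f \propto R^{-3/8}$ dependence is why dwarf-galaxy-sized halos in the body of the paper require a heavier particle, $\sim200$\,eV.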
In this work we have investigated, for simplicity, a non-rotating, spherically symmetric BCS superfluid model for the dark matter halo. In this case the density profile of the halo, and hence the associated rotation curve, is affected by the presence of the gap energy only for very strong coupling. The presence of the attractive coupling does, however, affect the density profile more directly by causing the halo to contract. The density profiles produced by the superfluid fermions all have cores rather than cusps; that is, they converge to a constant density at the center of the halo. In this way they are similar to the commonly used cored profile, the pseudo-isothermal (PI) sphere \cite{deBlok:2009}, \begin{equation} \rho_\mathrm{PI}(r) = \frac{\rho_0}{1+(r/r_C)^2} \end{equation} where $\rho_0$ is the density at the center of the halo and $r_C$ describes the radius of the core region. However, unlike the PI profile, which falls off as $r^{-2}$ at large radii, the superfluid fermion profile goes to zero at a finite radius. Increasing the strength of the attractive interaction causes the superfluid fermion profile to contract with an increased central density, but it always retains a flat core. The halos produced therefore naturally have cores and a well-defined halo mass. Adding rotation to the halo opens up the possibility of describing superfluid vortices. Such vortices in a rotating BCS superfluid occur in neutron stars, where they are important for explaining glitches in the stars' rotational frequencies \cite{Hoffberg:1970,Pines:1985,DeBlasio:1999,Rezania:2000,Elgaroy:2001,Yu:2003, Buckley:2004,Avogadro:2007}. Vortices will form lines threading the halo. We can get an estimate of the size of any vortices present from the BCS coherence length \cite{Tinkham:1996}, \begin{equation} \xi = \frac{\hbar^2k_F}{\pi m \Delta}, \end{equation} since the superfluid density should not change on length scales shorter than this. 
The actual size of the vortices depends on temperature and must be determined numerically by solving the BCS system with rotation, however the coherence length gives a reasonable guide as to the order of magnitude \cite{Gygi:1991}. The condition $|\Delta|\ll\hbar^2k_F^2/m$ is met for all but the largest value of the coupling, so we can use the approximation for the gap, equation \eqref{eq:gapsmall}. This gives an expression for the coherence length \begin{equation} \xi = \frac{1}{\pi k_F}\exp\left(\frac{2\pi^2\hbar^2}{mgk_F} \right). \end{equation} For $|\mu|\approx 10^{-7}\mathrm{eV}$ and $m=200\mathrm{eV}$, the length scale (in meters) associated with the Fermi momentum at the center of the halo is about \begin{equation} \frac{1}{\pi k_F(0)}\approx 5\times 10^{-6}\mathrm{m}. \end{equation} For a coupling constant of size $g=0.86|\mu|^{-1/2}m^{-3/2}\approx 0.96\mathrm{eV}$, the exponential factor is approximately $7\times 10^{3}$, giving a coherence length of \begin{equation} \xi \sim 10^{-2}\mathrm{m}. \end{equation} The exponential dependence on the coupling constant means that the coherence length can be much larger than this if the coupling constant is small, however in this case the critical temperature is likely to be too low for the BCS state to form. In order to consider rotating halos, the assumption of spherical symmetry must be relaxed. Axisymmetric models of dark matter halos for dwarf galaxies have previously been considered \cite{Hayashi:2012}. It would be interesting to consider both the global effect of rotation on a BCS condensed halo and the creation of vortex lines in such a situation. Berezhiani and Khoury have proposed that bosons in a superfluid state could have non-trivial effective Lagrangians that allow bosonic dark matter to reproduce the rotation curves of MOND \cite{Berezhiani:2015,Berezhiani:2016}. The effective degrees of freedom for the bosonic superfluid are the quasiparticle excitations which have non-trivial dispersion relations. 
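The coherence-length estimate above can be reproduced numerically. The sketch below works in natural units ($\hbar=c=1$, energies in eV) with the small-gap approximation for $\Delta$, and fixes $k_F$ from the quoted central value $1/(\pi k_F)\approx5\times10^{-6}$\,m; it agrees with the quoted $\xi\sim10^{-2}$\,m at the order-of-magnitude level.

```python
import math

HBARC_EV_M = 1.973e-7   # hbar*c in eV*m, to convert natural units to metres

def coherence_length_m(m_ev, g_ev, kF_ev):
    """xi = (1/(pi*kF)) * exp(2*pi^2/(m*g*kF)) in natural units
    (hbar = c = 1), converted to metres.  Uses the weak-coupling
    (small-gap) approximation for Delta quoted in the text."""
    prefactor = HBARC_EV_M / (math.pi * kF_ev)            # metres
    exponent = 2.0 * math.pi ** 2 / (m_ev * g_ev * kF_ev)  # dimensionless
    return prefactor * math.exp(exponent)

# Central k_F inferred from the quoted 1/(pi*k_F) ~ 5e-6 m:
kF = HBARC_EV_M / (math.pi * 5e-6)            # ~ 1.26e-2 eV
xi = coherence_length_m(200.0, 0.96, kF)      # m = 200 eV, g ~ 0.96 (quoted)
```

The exponential sensitivity to $m\,g\,k_F$ is evident: modest changes in the coupling move $\xi$ by orders of magnitude, as noted in the text.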
It is possible that our fermionic superfluid may also have quasi-particles which can display similar behaviour. We will consider the quasiparticle spectrum, effects of rotation, and finite temperature effects in future work.
16
7
1607.08621
1607
1607.04914_arXiv.txt
We present a polarization catalog of 533 extragalactic radio sources with 2.3 GHz total intensity above 420 mJy from the S-band Polarization All Sky Survey, S-PASS, with corresponding 1.4 GHz polarization information from the NRAO VLA Sky Survey, NVSS. We studied selection effects and found that the fractional polarization, $\pi$, of radio objects at both wavelengths depends on the spectral index, source magnetic field disorder, source size and depolarization. The relationship between depolarization, spectrum and size shows that depolarization occurs primarily in the source vicinity. The median $\pi_{2.3}$ of resolved objects in NVSS is approximately two times larger than that of unresolved sources. Sources with little depolarization are $\sim2$ times more polarized than both highly depolarized and re-polarized sources. This indicates that intrinsic magnetic field disorder is the dominant mechanism responsible for the observed low fractional polarization of radio sources at high frequencies. We predict that number counts from polarization surveys will be similar at 1.4 GHz and at 2.3 GHz, for fixed sensitivity, although $\sim$10\% of all sources may be currently missing because of strong depolarization. Objects with $\pi_{1.4}\approx \pi_{2.3} \ge 4\%$ typically have simple Faraday structures, so are most useful for background samples. Almost half of flat spectrum ($\alpha \ge -0.5$) and $\sim$25\% of steep spectrum objects are re-polarized. Steep spectrum, depolarized sources show a weak negative correlation of depolarization with redshift in the range 0 $<$ z $<$ 2.3. Previous non-detections of redshift evolution are likely due to the inclusion of re-polarized sources as well.
There are many open questions regarding the strength and geometry of the magnetic field in radio galaxies and their relation to other properties of the radio source. The observed degree of polarization depends on the intrinsic properties, such as the regularity and orientation of the source magnetic fields as well as the Faraday effects from the intervening regions of ionized gas along the line of sight. The largest current sample of polarized sources is the NRAO/VLA all sky survey, NVSS, at 1.4 GHz \citep{1998AJ....115.1693C}. It shows that the majority of extragalactic radio sources are only a few percent polarized. Polarization studies of small samples of extragalactic radio sources at other frequencies also show a similar weak average polarization, and suggest the fractional polarization increases at frequencies higher than 1.4 GHz \citep[e.g.][]{2009A&A...502...61M}. It is not clear which mechanism is dominant in reducing the fractional polarization at lower frequencies and depolarizing the sources, although several models have been suggested \citep{1966MNRAS.133...67B,1991MNRAS.250..726T,1998MNRAS.299..189S,2008A&A...487..865R,2015MNRAS.450.3579S}. One key cause for depolarization is Faraday rotation, which can be characterized to first order by a change in the angle of the linear polarization: \begin{equation} \Delta \chi=\left(0.812 \int \frac{n_e{\bf B}}{(1+z)^2}\cdot \frac{d{\bf l}}{dz} \,dz\right) \lambda^2 \equiv \phi \lambda^2 \end{equation} where $\Delta \chi$ is the amount of the rotation of the polarization vector in rad, $\lambda$ is the observation wavelength in m, $z$ is the redshift of the Faraday screen, ${\bf B}$ is the ionized medium magnetic field vector in $\mu$G, $n_e$ is the number density of electrons in the medium in cm$^{-3}$ and $\,d{\bf l}$ is the distance element along the line of sight in pc. The term in parentheses is called the Faraday depth, $\phi$. 
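As a numerical illustration of the relations above, the sketch below adopts a uniform-slab approximation ($n_e$ and $B_\parallel$ constant along the path, $z\approx0$ so the redshift factor is dropped) with illustrative Galactic-foreground values, not measurements from this survey.

```python
def faraday_depth(n_e_cm3, b_par_uG, path_pc):
    """phi = 0.812 * n_e * B_parallel * L (rad m^-2) for a uniform
    slab at z ~ 0: n_e in cm^-3, B_parallel in microgauss, L in pc."""
    return 0.812 * n_e_cm3 * b_par_uG * path_pc

def rotation_angle(phi_rad_m2, wavelength_m):
    """Delta chi = phi * lambda^2 (rad) for a thin Faraday screen."""
    return phi_rad_m2 * wavelength_m ** 2

# Illustrative values: n_e = 0.03 cm^-3, B_par = 3 uG, L = 1 kpc,
# observed at 1.4 GHz (lambda ~ 0.21 m).
phi = faraday_depth(0.03, 3.0, 1000.0)   # ~ 73 rad m^-2
dchi = rotation_angle(phi, 0.21)         # a few radians of rotation
```

The $\lambda^2$ scaling is the reason a spread of Faraday depths within the beam depolarizes sources more strongly at 1.4 GHz than at 2.3 GHz.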
For a single line of sight through a thin ionized screen, this is equivalent to the rotation measure, $\textrm{RM}$, defined by $\textrm{RM} \equiv \frac{\Delta \chi}{\Delta \lambda^2}$ which can be measured observationally. Different lines of sight to the source all within the observing beam can have different values of $\phi$. Typically, this progressively depolarizes the source at longer wavelengths, but it can also lead to constructive interference and re-polarization, i.e., higher fractional polarizations at longer wavelengths. There are at least three separate possible Faraday screens with different $\textrm{RM}$ distributions along the line of sight: the Galactic component, intervening extragalactic ionized gas, and material local to the source. Multiple studies such as \cite{2005MNRAS.359.1456G,2008ApJ...676...70K,2010MNRAS.409L..99S,2012ApJ...761..144B,2012arXiv1209.1438H,2013ApJ...771..105B,2014ApJ...795...63F,2014MNRAS.444..700B,2014PASJ...66...65A,2015aska.confE.114V,2015arXiv150900747V} have tried to identify and distinguish these separate components and study the evolution of the magnetic field of galaxies through cosmic time. When many lines of sight each have independent single Faraday depths, this problem is approached statistically. Another long standing puzzle is the anti-correlation between the total intensity of radio sources and their degree of polarization, as observed by many groups such as \cite{2002A&A...396..463M}, \cite{2004MNRAS.349.1267T}, \cite{2006MNRAS.371..898S}, \cite{2007ApJ...666..201T}, \cite{2010ApJ...714.1689G}, \cite{2010MNRAS.402.2792S} and \cite{2014ApJ...787...99S}. The physical nature of this relation has been a mystery for almost a decade, and is confused by the dependency on other source properties. \cite{2010ApJ...714.1689G} found that most of their highly polarized sources are steep spectrum, show signs of resolved structure on arc-second scales, and are lobe dominated. 
However, they found no further correlation between the spectral index and fractional polarization. The anti-correlation between total intensity and fractional polarization seems to become weak for very faint objects with 1.4 GHz total intensities in the range 0.5 mJy $< I <$ 5 mJy, as suggested in \cite{2014ApJ...785...45R}, based on a small sample of polarized radio galaxies in the GOODS-N field \citep{2010ApJS..188..178M}. Recently, \cite{2015arXiv150406679O} studied a sample of 796 radio-loud AGNs with $z < 0.7$. They found that low-excitation radio galaxies have a wide range of fractional polarizations up to $\sim$ 30 \% and are more numerous at faint Stokes I flux densities, while high-excitation radio galaxies are limited to polarization degrees less than 15\%. They suggest that the ambient gas density and magnetic fields local to the radio source might be responsible for the difference. Using WISE colors, \cite{2014MNRAS.444..700B} suggested that the observed anti-correlation primarily reflects the difference between infrared AGN and star-dominated populations. Large samples of polarization data at multiple frequencies are required to understand the magnetic field structures and depolarization mechanisms responsible for the low observed polarization fractions. \cite{2013ApJ...771..105B} showed that the polarization fraction of compact sources decreases significantly at 189 MHz compared to 1.4 GHz. They studied a sample of 137 sources brighter than 4 mJy and detected only one polarized source, with depolarization probably intrinsic to the source. Recently, \cite{2014ApJS..212...15F} used the \cite{2009ApJ...702.1230T} (hereafter TSS09) catalog, and assembled polarization spectral energy distributions for 951 highly polarized extragalactic sources over the broad frequency range 0.4 GHz to 100 GHz.
They showed that objects with flat spectra in total intensity have complicated polarization spectral energy distributions (SEDs), and are mostly re-polarized somewhere in the spectrum, while steep spectrum sources show higher average depolarization. As a result, they claimed that the dominant source of depolarization should be the local environment of the source, since the spectral index is an intrinsic property of these highly polarized sources. The current work follows up on their discovery, using a sample selected only on the basis of total intensity at 2.3 GHz. In this work, we use the data from the S-PASS survey, conducted with the Australian Parkes single-dish radio telescope at 2.3 GHz. We cross-match the data with the NVSS catalog and generate a new independent depolarization catalog of bright extragalactic radio sources. Unlike other polarization studies such as \cite{2014ApJS..212...15F} and \citet{2012arXiv1209.1438H}, our catalog is not selected on the basis of high polarized intensity, which enables us to include objects with low fractional polarization as well. We study the evolution of, and possible correlations between, quantities such as depolarization, spectral indices and $\textrm{RM}$s. We tackle the nature of the well-known observed anti-correlation between total intensity and fractional polarization, as well as the origin of the dominant component of depolarization. Section \ref{sec:obs} presents the 1.4 GHz and 2.3 GHz observations. Section \ref{sec:mapanalysis} explains the steps in our analysis of the S-PASS total intensity and polarization maps, as well as the cross-matching with the NVSS catalog. In Section \ref{quantities} we derive quantities such as the spectral index, residual rotation measure, fractional polarization and depolarization. The main results and their implications are discussed in Sections \ref{result} and \ref{discussion}, respectively. Finally, Section \ref{summary} summarizes the main findings and conclusions.
Throughout this paper we employ the $\Lambda$CDM cosmology with parameters H$_0=70$ km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_m=0.3$ and $\Omega_{\Lambda}=0.7$.
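For reference, the adopted cosmology fixes all distances used in the paper. A short stdlib-only sketch (illustrative trapezoidal integration, not the authors' code) of the comoving and luminosity distances in this flat $\Lambda$CDM model:

```python
import math

H0, OM, OL = 70.0, 0.3, 0.7      # adopted cosmology
C_KMS = 299792.458               # speed of light [km/s]

def comoving_distance_mpc(z, n=20000):
    """Line-of-sight comoving distance [Mpc] in flat LambdaCDM:
    D_C = (c/H0) * int_0^z dz'/E(z'), E(z) = sqrt(OM*(1+z)^3 + OL),
    computed by trapezoidal integration with n steps."""
    h = z / n
    e_inv = [1.0 / math.sqrt(OM * (1 + i * h) ** 3 + OL) for i in range(n + 1)]
    integral = h * (sum(e_inv) - 0.5 * (e_inv[0] + e_inv[-1]))
    return (C_KMS / H0) * integral

def luminosity_distance_mpc(z):
    """Luminosity distance [Mpc] in a flat universe: D_L = (1+z) * D_C."""
    return (1.0 + z) * comoving_distance_mpc(z)
```

At the upper end of the sample's redshift range, $z=2.3$, this gives a comoving distance of roughly 5.6 Gpc for the stated parameters.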
\label{summary} We constructed a depolarization ($D=\pi_{2.3}/\pi_{1.4}$) catalog of extragalactic radio sources brighter than $420$ mJy at 2.3 GHz, including total intensities, spectral indices, observed and residual rotation measures, fractional polarization and depolarization, as well as the redshift, 2.3 GHz luminosity and WISE magnitudes for almost half of the objects. We looked for possible correlations between these quantities and found that the fractional polarization of extragalactic radio sources depends on the spectral index, morphology and intrinsic magnetic field disorder, as well as the depolarization of these sources. We summarize our main conclusions as follows: \\ Consistent with previous studies, over half of the flat spectrum sources in our sample are re-polarized, while the majority of steep spectrum objects are depolarized. There is also a significant population of steep-spectrum sources that are re-polarized; their underlying physical structure is currently unknown. Although steep spectrum objects are more polarized at 2.3~GHz, they are fainter in total intensity, and therefore future surveys at higher frequencies will detect approximately the same number of sources at fixed sensitivity as surveys at lower frequencies. Depolarization, and thus fractional polarization, is related to the presence of Faraday structures indicated by the non-$\lambda^2$ behavior of polarization angles ($\Delta \textrm{RM}$). Future studies using polarized sources as background probes need to minimize $\textrm{RM}$ structures intrinsic to the sources. Such clean samples require high fractional polarizations ($\pi \ge 4\%$), which will severely limit the number of available sources. Sources with little or no depolarization between 1.4 GHz and 2.3 GHz have fractional polarizations ranging from a few percent to 10\%. This is much lower than the theoretical maximum, and therefore shows the dominant role of field disorder in creating low polarizations.
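The depolarization measure used throughout, $D=\pi_{2.3}/\pi_{1.4}$, and the source classes referred to above can be sketched as follows (the depolarized cut $D \ge 1.5$ follows the text; the symmetric re-polarized cut at $D \le 1/1.5$ is an assumption for illustration):

```python
def depolarization(pi_23, pi_14):
    """D = pi_2.3 / pi_1.4 from the fractional polarizations (in %)
    at 2.3 and 1.4 GHz."""
    return pi_23 / pi_14

def classify(D, thresh=1.5):
    """D >= thresh   -> depolarized (less polarized at the longer wavelength);
    D <= 1/thresh -> re-polarized (more polarized at the longer wavelength);
    otherwise little or no depolarization."""
    if D >= thresh:
        return "depolarized"
    if D <= 1.0 / thresh:
        return "re-polarized"
    return "undepolarized"
```

For example, a source with $\pi_{2.3}=6\%$ and $\pi_{1.4}=2\%$ has $D=3$ and is depolarized, while the reverse case is re-polarized.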
Compact steep spectrum objects in the NVSS catalog have more Faraday structure, and are $\sim 2$ times less polarized at 2.3 GHz than the extended sources. We found suggestive evidence for a decrease in depolarization from $z=0$ to $z=2.3$, but only when the sample is restricted to the steep spectrum ($\alpha < -0.5$), depolarized ($D \ge 1.5$) objects. More investigation is needed to confirm the depolarization trend. Assuming that it is real, it is likely, at least in part, the result of redshift dilution, but it requires more than a simple depolarizing screen local to the source.\\ The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Partial support for ML and LR comes from National Science Foundation grant AST-1211595 to the University of Minnesota. B.M.G. has been supported by the Australian Research Council through the Centre for All-sky Astrophysics (grant CE110001020) and through an Australian Laureate Fellowship (grant FL100100114). The Dunlap Institute is funded through an endowment established by the David Dunlap family and the University of Toronto. We would like to thank G. Bernardi, D. H. F. M. Schnitzeler and the referee for a number of useful conversations and comments on the manuscript.
[arXiv:1607.04914, 2016-07]

1607.08284_arXiv.txt
While dozens of stellar mass black holes have been discovered in binary systems, isolated black holes have eluded detection. Their presence can be inferred when they lens light from a background star. We attempt to detect the astrometric lensing signatures of three photometrically identified microlensing events, \OBtwotwo, \OBtwofive, and \OBsixnine~(OB110022, OB110125, and OB120169), located toward the Galactic Bulge. These events were selected because of their long durations, which statistically favor more massive lenses. Astrometric measurements were made over 1--2 years using laser-guided adaptive optics observations from the W. M. Keck Observatory. Lens model parameters were first constrained by the photometric light curves. The OB120169 light curve is well fit by a single-lens model, while the OB110022 and OB110125 light curves favor binary-lens models. Using the photometric fits as prior information, no significant astrometric lensing signal was detected and all targets were consistent with linear motion. The lack of a significant astrometric signal constrains the lens mass of OB110022 to 0.05--1.79 M$\subsun$ in a 99.7\% confidence interval, which disfavors a black hole lens. Fits to OB110125 yielded a reduced Einstein crossing time, and with insufficient observations during the peak, no mass limits were obtained. Two degenerate solutions exist for OB120169, with lens masses of 0.2--38.8 M$\subsun$ and 0.4--39.8 M$\subsun$ at 99.7\% confidence. Follow-up observations of OB120169 will further constrain the lens mass. Based on our experience, we use simulations to design optimal astrometric observing strategies and show that, with more typical observing conditions, detection of black holes is feasible.
\label{sec:intro} Core-collapse supernova events, which mark the deaths of high-mass ($\gtrsim$ 8 M$\subsun$) stars, are predicted to leave remnant black holes of order several to tens of M$\subsun$. It is estimated that 10$^{8}$--10$^{9}$ of these ``stellar mass black holes" occupy the Milky Way Galaxy \citep{Agol02,Gould:2000}. Detecting isolated black holes (BHs) and measuring their masses constrains the number density and mass function of BHs within our Galaxy. These factors have important implications for how BHs form, supernova physics, and the equation of state of nuclear matter. For example, the BH mass function can be compared to the stellar initial mass function to define the initial-final mass relation, including which stars produce BHs rather than neutron stars. Such measurements can help test different supernova explosion mechanisms, which predict different initial-final mass relations, and constrain the fraction of ``failed supernova explosions'' that lead to BH formation \citep[e.g.][]{Gould2002,Kochanek:2008,KushnirKatz:2015,Kushnir:2015,Pejcha:2015}. Additionally, the BH occurrence rate is a key input into predictions for future BH detection missions like the Laser Interferometer Space Antenna \citep[LISA,][]{Prince07} and the Laser Interferometer Gravitational-Wave Observatory \citep[LIGO,][]{Abbott09}. To date, a few dozen BHs have been detected, but discoveries have been limited to BHs in binary systems. All of these are actively accreting from a binary companion and emitting strongly at radio or X-ray wavelengths \citep[see e.g.][for a review]{Reynolds13, Casares14}. Isolated BHs, which could comprise the majority of the BH population, remain elusive, with no confirmed detections to date. These objects can only accrete from the surrounding interstellar medium, producing minimal emission presumably in soft X-rays.
While isolated BHs do not produce detectable emission of their own, their gravity can noticeably bend and focus (i.e.~lens) light from any background source in close proximity on the sky, allowing their presence to be inferred. During these chance alignments, the relative proper motion between the source and lens, $\boldsymbol{\vec{\mu}_{\mathrm{rel}}}$, produces a transient event that is observable both photometrically and astrometrically with the following signatures: (1) the background source increases in apparent brightness, and (2) the position of the source shifts astrometrically and splits into multiple images \citep{Miyamoto:1995,Hog:1995,Walker:1995}. For stellar-mass lensing events in the Galaxy, these multiple images are unresolved with current telescopes, and such events are deemed ``microlensing'' events \citep{Paczynski86}. Photometric microlensing events are frequently detected by large transient surveys such as the Optical Gravitational Lensing Experiment \citep[OGLE,][]{Udalski92} and the Microlensing Observations in Astrophysics survey \citep[MOA,][]{Bond01}. However, astrometric shifts from a microlensing event have never been detected. Depending on the mass and relative distances to the source and lens, the astrometric shift of a BH lens is $\sim$1 mas and the event duration can be months to years. Although the successful detection of such astrometric signatures is challenging, it would have significant payoff; astrometric information can be combined with photometric measurements to precisely constrain lens masses. Here, we use high-precision astrometry to search for astrometric lensing signals and constrain the masses of candidate isolated BHs. This is the first such attempt made with ground-based adaptive optics. In \S \ref{sec:mass}, we explain how the lens mass can be estimated using photometric and astrometric means.
We describe our photometric selection and observations of three candidate BH microlensing events in \S \ref{sec:obs} and outline our methods to extract high-precision astrometry in \S \ref{sec:methods}. We present photometric and astrometric microlensing models fitted to the three events in \S \ref{sec:models}. Our resulting proper motion fits and lens mass measurements are detailed in \S \ref{sec:analysis}. In \S \ref{sec:discussion}, we discuss our findings and in \S \ref{sec:future} determine the most efficient observing strategies for detecting the lensing signatures of stellar mass black holes in future campaigns. Conclusions are provided in \S \ref{sec:conclusions}.
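The $\sim$1 mas astrometric scale quoted in the introduction follows from the angular Einstein radius of a stellar-mass lens. A sketch using the standard point-lens formulas (the numbers below are illustrative, not fits from this work; the $\theta_E/\sqrt{8}$ peak centroid shift is the standard result for an unresolved, unblended image pair):

```python
import math

G = 6.674e-11           # gravitational constant [m^3 kg^-1 s^-2]
C = 299_792_458.0       # speed of light [m/s]
M_SUN = 1.989e30        # solar mass [kg]
PC = 3.0857e16          # parsec [m]
MAS_PER_RAD = 180.0 / math.pi * 3600.0e3

def einstein_radius_mas(m_lens_msun, d_lens_pc, d_source_pc):
    """Angular Einstein radius of a point lens, in mas:
    theta_E = sqrt(4GM/c^2 * (D_S - D_L) / (D_L * D_S))."""
    m = m_lens_msun * M_SUN
    d_l, d_s = d_lens_pc * PC, d_source_pc * PC
    theta_e_rad = math.sqrt(4.0 * G * m / C**2 * (d_s - d_l) / (d_l * d_s))
    return theta_e_rad * MAS_PER_RAD

def max_centroid_shift_mas(theta_e_mas):
    """Peak shift of the unresolved image centroid (no blending): theta_E / sqrt(8)."""
    return theta_e_mas / math.sqrt(8.0)

# A 10 Msun black hole halfway to a bulge source at 8 kpc (illustrative):
theta_e = einstein_radius_mas(10.0, 4000.0, 8000.0)   # ~3.2 mas
```

The corresponding peak centroid shift is $\sim$1 mas, while a $\sim$0.5 M$_\odot$ stellar lens in the same geometry produces a shift several times smaller, which is why the astrometric amplitude is such a strong mass discriminant.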
[arXiv:1607.08284, 2016-07]

1607.07851_arXiv.txt
{Investigations into the substructure of massive star forming regions are essential for understanding the observed relationships between core mass distributions and mass distributions in stellar clusters, differentiating between proposed mechanisms of massive star formation.} {We study the substructure in the two largest fragments (i.e. cores) MM1 and MM2, in the infrared dark cloud complex SDC13. As MM1 appears to be in a later stage of evolution than MM2, comparing their substructure provides insight into the early evolution of massive clumps.} {We report the results of high resolution SMA dust continuum observations towards MM1 and MM2. Combining these data with \textit{Herschel} observations, we carry out RADMC-3D radiative transfer modelling to characterise the observed substructure.} {SMA continuum data indicate 4 sub-fragments in the SDC13 region. The nature of the second brightest sub-fragment (B) is uncertain, as it does not appear as prominent at the lower MAMBO resolution or at radio wavelengths. Statistical analysis indicates that it is unlikely to be a background source, an AGB star, or the free-free emission of an HII region. It is plausible that B is a runaway object ejected from MM1. MM1, which is actively forming stars, consists of two sub-fragments, A and C. This is confirmed by 70\,$\mu$m \textit{Herschel} data. While MM1 and MM2 appear quite similar in previous low resolution observations, at high resolution the sub-fragment at the centre of MM2 (D) is much fainter than the sub-fragment at the centre of MM1 (A). RADMC-3D models of MM1 and MM2 are able to reproduce these results, modelling MM2 with a steeper density profile and higher mass than is required for MM1. The relatively steep density profile of MM2 depends on a significant temperature decrease in its centre, justified by the lack of star formation in MM2.
A final stellar population for MM1 was extrapolated, indicating a star formation efficiency typical of regions of core and cluster formation.} {The proximity of MM1 and MM2 suggests they were formed at similar times; however, despite having a larger mass and steeper density profile, the absence of stars in MM2 indicates that it is at an earlier stage of evolution than MM1. This suggests that the density profiles of such cores become shallower as they start to form stars, and that evolutionary timescales are not solely dependent on initial mass. Some studies also indicate that the steep density profile of MM2 makes it more likely to form a single massive central object, highlighting the importance of the initial density profile in determining the fragmentation behaviour in massive star forming regions.}%
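Although the radiative transfer models here are computed with RADMC-3D, the role of the density-profile slope can be illustrated with a simple power-law sphere (a sketch, not the authors' model): for $\rho(r) = \rho_0 (r/r_0)^{-p}$ with $p<3$, a steeper $p$ concentrates a larger fraction of the total mass at small radii, which is qualitatively why a steep profile favours a single massive central object.

```python
import math

def enclosed_mass(rho0, r0, p, r):
    """Mass inside radius r for rho(r) = rho0 * (r/r0)**(-p), with p < 3:
    M(r) = 4*pi*rho0*r0**p * r**(3-p) / (3-p)  (analytic integral)."""
    if p >= 3:
        raise ValueError("p must be < 3 for a finite enclosed mass")
    return 4.0 * math.pi * rho0 * r0 ** p * r ** (3.0 - p) / (3.0 - p)

def mass_fraction(r, R, p):
    """Fraction of M(R) enclosed within r <= R: (r/R)**(3-p)."""
    return (r / R) ** (3.0 - p)
```

For example, half the outer radius encloses 25\% of the mass for $p=1$ but 50\% for $p=2$; the steeper profile is more centrally concentrated.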
[arXiv:1607.07851, 2016-07]

1607.02190_arXiv.txt
The stellar halos of large galaxies represent a vital probe of the processes of galaxy evolution. They are the remnants of the initial bouts of star formation during the collapse of the proto-galactic cloud, coupled with the imprint of ancient and on-going accretion events. Previously, we have reported the tentative detection of a possible, faint, extended stellar halo in the Local Group spiral, the Triangulum Galaxy (M33). However, the presence of substructure surrounding M33 made interpretation of this feature difficult. Here, we employ the final data set from the Pan-Andromeda Archaeological Survey (PAndAS), combined with an improved calibration and a newly derived contamination model for the region, to revisit this claim. With an array of new fitting algorithms, fully accounting for contamination and the substantial substructure beyond the prominent stellar disk in M33, we reanalyse the surrounds to separate the signal of the stellar halo and the outer halo substructure. Using more robust search algorithms, we do not detect a large scale smooth stellar halo and place a limit on the maximum surface brightness of such a feature of $\mu_V=35.5$ mags per square arcsec, or a total halo luminosity of $L < 10^6 L_\odot$.
A key feature of $\Lambda$ Cold Dark Matter ($\Lambda$CDM) cosmological models is the hierarchical formation of structure \citep[see][for an overview]{2010gfe..book.....M}. With this, large galaxies are built up over time through the continual accretion of smaller structures. Accretion progenitors that fall towards the centre of their new host are heavily disrupted and the short dynamical timescales in the inner halo rapidly mix accreted structures to form a smooth stellar background. In the outer halo, where timescales are longer, ongoing accretion events can be found in the form of coherent phase-space stellar streams. However, major mergers can add to the confusion, violently disrupting the host and erasing most information of past accretions. The final resting place of many of the accretion events is the diffuse stellar halo, a faint component making up only a few per cent of the total luminosity of its host galaxy. Hence the properties of these stellar halos represent an archaeological record of the processes that shape a galaxy over cosmic time \citep[e.g.][]{2004MNRAS.349...52B,2005ApJ...635..931B,2015MNRAS.454.3185C}. Recent focus has turned to studying the stellar halos of Local Group galaxies through the identification of resolved stellar populations, with surveys such as SDSS/SEGUE revealing the extensive halo properties of our own Milky Way \citep[e.g.][]{2015ApJ...809..144X}. The other large galaxies within the Local Group, namely the Andromeda (M31) and Triangulum (M33) galaxies, have been the targets of the Pan-Andromeda Archaeological Survey (PAndAS), uncovering substantial stellar substructure and an extensive halo surrounding Andromeda \citep{Ibata2014}. 
M33 has also been found to possess extensive stellar substructure, in the form of a highly distorted outer disk, thought to have been formed in an interaction with the larger M31 \citep{McConnachie2009,McConnachie2010}; this substructure is roughly aligned with the previously detected distorted HI disk \citep{2009ApJ...703.1486P,2013ApJ...763....4L}. Being about a tenth the size of the two other large galaxies within the Local Group, the properties of any stellar halo of M33 would provide clues to galaxy evolution on a different mass scale than for the Milky Way and M31. The smooth halo component around M33 has been extremely elusive; early work presented in \citet{Ibata2007} claimed a detection, which was later shown to be the extended substructure. \citet{Cockcroft2013}, hereafter C13, after excising significant substructure and accounting for foreground contamination, presented a tentative detection of a smooth stellar halo with a scale-length of $\sim$20 kpc, and an estimated total luminosity of a few percent of the luminous disk. In this work, we revisit the detection and characterisation of the stellar halo of M33 using new analysis techniques, the final PAndAS data set with improved calibration, and a more detailed contamination model developed from the PAndAS data \citep{Martin2013}. We seek to fully characterise the smooth component of the stellar halo without resorting to masking the lumpy substructure component; ideally this should be recovered and characterised as a byproduct of our analysis. In order to monitor the validity of our results, we thoroughly test our methods using synthetic datasets generated to match the PAndAS data for M33. Section \ref{sec:data} describes the data and models we employ in investigating the M33 stellar halo. 
In Section \ref{sec:methods}, we discuss the over-arching methodology we use throughout, including colour-magnitude selection, spatial selection, masking, binning, and most importantly the synthetic data used to test the fitting algorithms. We present the results of our tests in Section \ref{sec:C13}, first of replicating the methods in C13 and then of alternative algorithms, both for the PAndAS data and synthetic mock data. Finally in Section \ref{sec:discussion} we discuss and conclude.
\label{sec:discussion} The motivation of this research was to characterise the smooth stellar halo of M33, presented as a putative detection by C13; no claim to the origin of this component is made in this earlier work, and our characterisation will provide evidence as to whether this component represents a halo formed from primordial and accreted components, or is in fact extended disk material, potentially distributed in the event that gave rise to the prominent gas/stellar warping of the disk of M33. Either result is significant for understanding galaxy evolution, with M33 sitting between the scale of the larger galaxies within the Local Group, which are known to host extensive stellar halos (e.g. \citealt{2005astro.ph..2366G,2014ApJ...787...30D}), and the Large Magellanic Cloud, in which an extensive stellar halo appears to be absent (\citealt{2010AJ....140.1719S}). Furthermore, it will provide clues to the dynamical interactions in the history of the M31-M33 system (e.g. \citealt{McConnachie2009}). Unlike previous approaches, this study employs an extant contamination model for various components, and uses robust statistical analyses to search for a signal of a smooth halo component. However, we have demonstrated a range of potential problems associated with detection of any putative halo of this spiral member of the Local Group. The model parameters are degenerate, the foreground contamination is structured and has not yet been completely characterised $-$ but principally, the signal for the halo is vanishingly faint, and completely degenerate with a significantly brighter extended substructure which pollutes the most desirable region around M33 to search for the halo. With such faint halo signatures as is expected for M33, the halo is dominated by every other component. Statistical fluctuations in any of these components can lead to significant changes to the fit of the halo.
Thus even if the halo were detected, it would likely be inaccurately characterised, or with such large uncertainty bounds as to be unenlightening. All of the fitting methods presented fail to detect any halo component in synthetic test data, which was designed to have a significantly brighter and more detectable halo component than any true halo component in the PAndAS data. These methods work on pixelation, and could be improved to avoid this (although removing pixelation from the parametrised substructure method would require significant changes), but this is unlikely to solve the key issue $-$ the PAndAS data alone is not sufficient to detect the halo of M33. The tentative detection of a possible, faint, extended stellar halo by C13, undertaken using PAndAS data with an inferior calibration and without the recent contamination model, describes an average surface brightness for the smooth halo component of less than $\mu_V=33$ magnitudes per square arc second. We find no evidence of a smooth halo component down to the limit reachable using the PAndAS data at approximately $\mu_V=35.5$. Using our methods, we are able to recover the halo parameters consistently down to ten times fainter than the halos used in our synthetic data tests. However, the presence of substructure such as is found around M33 will always complicate the fit as it cannot be separated from the signal using photometry alone, making a proper detection and characterisation of the halo impossible. While we find no evidence of a smooth halo component around M33 in this work, there are indications in other studies. The detection of a handful of remote metal-poor globular clusters in the M33 system (\citealt{Stonkute2008}; \citealt{Huxor2009}; \citealt{Cockcroft2011}) provides evidence for the presence of a halo component. 
There has also been more direct evidence, with \citet{Chandar2002} identifying a kinematic signal of what could be a halo component around M33, and several metal poor RR Lyrae detections (\citealt{Sarajedini2006}; \citealt{Yang2010}; \citealt{Pritzl2011}). So the case remains open.
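For a sense of scale, the quoted luminosity limit can be turned into a central surface density using the analytic integral of an exponential profile. This is an illustrative calculation combining the abstract's $L < 10^6\,L_\odot$ limit with the C13 scale length of $\sim$20 kpc, not a fit from this work:

```python
import math

def exp_halo_luminosity(sigma0, r_h):
    """Total luminosity of a circular exponential profile
    Sigma(r) = sigma0 * exp(-r/r_h):  L = 2*pi*sigma0*r_h**2."""
    return 2.0 * math.pi * sigma0 * r_h ** 2

def central_surface_density(L_total, r_h):
    """Invert the relation to get the central surface density [L units / r_h**2]."""
    return L_total / (2.0 * math.pi * r_h ** 2)

# PAndAS limit L < 1e6 Lsun with r_h ~ 20 kpc:
sigma0_limit = central_surface_density(1.0e6, 20.0)   # ~400 Lsun / kpc^2
```

A central surface density of only a few hundred L$_\odot$ kpc$^{-2}$ illustrates why such a halo is swamped by foreground contamination and substructure in star-count maps.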
[arXiv:1607.02190, 2016-07]

1607.05779_arXiv.txt
We present deep polarimetric observations at 154\,MHz with the Murchison Widefield Array (MWA), covering 625\,deg$^{2}$ centered on $\alpha=0\rah$, $\delta=-27\arcdeg$. The sensitivity available in our deep observations allows an in-band, frequency-dependent analysis of polarized structure for the first time at long wavelengths. Our analysis suggests that the polarized structures are dominated by intrinsic emission but may also have a foreground Faraday screen component. At these wavelengths, the compactness of the MWA baseline distribution provides excellent snapshot sensitivity to large-scale structure. The observations are sensitive to diffuse polarized emission at $\sim54\arcmin$ resolution with a sensitivity of 5.9\,mJy\,beam$^{-1}$ and to compact polarized sources at $\sim2.4\arcmin$ resolution with a sensitivity of 2.3\,mJy\,beam$^{-1}$ for a subset (400\,deg$^{2}$) of this field. The sensitivity allows the effect of ionospheric Faraday rotation to be spatially and temporally measured directly from the diffuse polarized background. Our observations reveal large-scale structures ($\sim1\arcdeg$--$8\arcdeg$ in extent) in linear polarization clearly detectable in $\sim2$ minute snapshots, which would remain undetectable by interferometers with minimum baseline lengths $>110$\,m at 154\,MHz. The brightness temperature of these structures is on average 4\,K in polarized intensity, peaking at 11\,K. Rotation measure synthesis reveals that the structures have Faraday depths ranging from $-2$\,rad\,m$^{-2}$ to 10\,rad\,m$^{-2}$, with a large fraction peaking at $\sim+1$\,rad\,m$^{-2}$. We estimate a distance of $51\pm 20$\,pc to the polarized emission based on measurements of the in-field pulsar J2330$-$2005. We detect four extragalactic linearly polarized point sources within the field in our compact source survey.
Based on the known polarized source population at 1.4\,GHz and non-detections at 154\,MHz, we estimate an upper limit on the depolarization ratio of 0.08 from 1.4\,GHz to 154\,MHz.
\label{sec:introduction} \setcounter{footnote}{0} The interstellar medium (ISM) of the Milky Way hosts a variety of physical mechanisms that define the structure and evolution of the Galaxy. It is a multi-phase medium composed of a tenuous plasma that is permeated by a large-scale magnetic field and is highly turbulent \citep{McKee:2007v45p565, Haverkorn:2015}. Despite advances in theory and simulation \citep{Burkhart:2012v749p145}, our understanding of the properties of the ISM has been limited by the dearth of observational data against which to test them. The local ISM, particularly within the local bubble \citep{Lallement:2003v411p447}, has been very poorly studied. Studies using multi-wavelength observations of diffuse emission \citep{Puspitarini:2014v566p13} show that the local bubble appears to be open-ended towards the south Galactic pole. Polarimetry of stars can be a useful probe \citep{Berdyugin2001v368p635, Berdyugin2004v424p873, Berdyugin:2014v561p24}; however, such measurements are sparsely sampled for stars within the local bubble region (a few tens of parsecs to $\sim$$100$\,pc). Observations of pulsars can also be used to probe conditions in the line of sight to the source \citep{Mao:2010v717p1170}; however, the density of such sources is low, even more so if only nearby sources are considered and for directions at mid or high Galactic latitudes. Radio observations of diffuse polarized emission have become a valuable tool for understanding the structure and properties of the ISM. At $350$\,MHz, it has been demonstrated that diffuse polarization could result from gradients in rotation measure and that they could be used to study the structure of the diffuse ionized gas \citep{Wieringa:1993v268p215, Haverkorn:2000v356p13, Haverkorn:2004v421p1011}. \citet{Gaensler:2011v478p214} observed features at 1.4\,GHz associated with the turbulent ISM using polarization gradient maps.
Such features have also been observed as part of the Canadian Galactic Plane Survey at 1.4\,GHz \citep{Taylor:2003v125p314} carried out at the Dominion Radio Astrophysical Observatory, the S-band Polarization All Sky Survey (S-PASS) at 2.3\,GHz with the Parkes radio telescope \citep{Carretti:2010v438p276, Iacobelli:2014v566p5}, and at 4.8\,GHz at Urumqi as part of the Sino-German $\lambda$6\,cm Polarization Survey of the Galactic Plane \citep{Han:2013v23p82,Sun:2011v527p74, Sun:2014v437p2936}. These centimeter-wavelength observations are significantly less affected by depth depolarization than longer wavelength ones and can probe the ISM out to kilo-parsec distances. However, as they are also sensitive to the local ISM, they cannot distinguish between nearby structures and more distant ones. Longer wavelength observations provide a means to do so; depth depolarization is so significant at these wavelengths that only local regions of the ISM can be seen. As such, they provide a valuable tool for probing the local ISM. Long wavelength polarimetric observations are particularly sensitive to small changes in Faraday rotation, as a result of fluctuations in the magnetized plasma, which are difficult to detect at shorter wavelengths. Several such studies have been performed with synthesis telescopes at long wavelengths, e.g. WSRT between $325$ and $375$\,MHz \citep{Wieringa:1993v268p215, Haverkorn:2000v356p13, Haverkorn:2003v403p1031, Haverkorn:2003v403p1045, Haverkorn:2003v404p233}, WSRT at 150\,MHz \citep{Bernardi:2009v500p965, Bernardi:2010v522p67, Iacobelli:2013v549p56}, LOFAR at 150\,MHz \citep{Jelic:2014v1407p2093}, and an MWA prototype at 189\,MHz \citep{Bernardi:2013v771p105}, but none of these were sensitive to structures larger than $\sim$$1\arcdeg$.
\edit1{LOFAR observations of the 3C196 field at 150\,MHz \citep{Jelic:2015v583p137} achieved sensitivity to spatial scales up to $\sim$$5\arcdeg$ by utilizing a dual-inner-HBA mode \citep{vanHaarlem:2013v556p2}. However, only a limited number of short baselines are available in this mode and sensitivity is compromised to provide them.} Single dish polarimetric observations at long wavelengths provide access to large-scale structure, but so far there has only been one such observation \citep{Mathewson:1965v18p635} and it suffered from poor sensitivity and spatial sampling. Furthermore, single dish observations below 300\,MHz also lack resolution. The Murchison Widefield Array (MWA) can help to bridge the gap between existing single-dish and interferometric observations at long wavelengths. The MWA is a low frequency ($72$--$300$\,MHz) interferometer located in Western Australia \citep{Tingay:2013v30p7}, with four key science themes: 1) searching for emission from the epoch of reionization (EoR); 2) Galactic and extragalactic surveys; 3) transient science; and 4) solar, heliospheric, and ionospheric science and space weather \citep{Bowman:2013v30p31B}. The array has a very wide field-of-view (over 600 deg$^{2}$ at 154\,MHz) and the dense compact distribution of baselines provides excellent sensitivity to structure on scales up to $14\arcdeg$ in extent at 154\,MHz. Most importantly for this project, the high sensitivity observations can, for the first time, enable a frequency-dependent analysis of large-scale polarized structure. The large number of baselines provides high sensitivity ($\sim$$100$\,mJy rms for a 1 s integration) and dense $(u,v)$-coverage for snapshot imaging. Visibilities can be generated with a spectral resolution of 10\,kHz and with cadences as low as 0.5 s with the current MWA correlator \citep{Ord:2015v32p6O}; however, typical imaging is performed on $>112$\,s time-scales.
In this paper, we present results from the first deep MWA survey of diffuse polarization and polarized point sources, for an EoR field situated just west of the South Galactic Pole (SGP). The primary aims of the survey are to study polarized structures in the local ISM, localize them, and gain insights into the processes that generate them. Secondary aims include a study of the polarized point source population at long wavelengths and also an overall evaluation of the polarimetric capabilities of the MWA. In Section \ref{sec:mwaobs} we describe the MWA observations and data reduction. In Section \ref{sec:results} we present our diffuse total intensity and polarization maps, apply rotation measure synthesis, analyze the effects of the ionosphere on the observed Faraday rotation, create both continuum and frequency-dependent polarization gradient maps, and search for polarized point sources. In Section \ref{sec:discussion} we explore the nature of the diffuse polarization, estimate the distance to the observed polarized features, study the linearly polarized point source population, discuss possible causes for the polarized structures based on frequency-dependent observations, perform a structure function analysis, and study the observed Faraday depth spectra. A summary and conclusion is provided in Section \ref{sec:conclusion}.
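The rotation measure synthesis applied in Section \ref{sec:results} can be illustrated with a toy calculation: a Faraday-thin source produces $P(\lambda^2) = p\,e^{2i\phi\lambda^2}$, and the Faraday dispersion function is approximated by a discrete sum over frequency channels (Brentjens \& de Bruyn 2005). The band, channelization, and source parameters below are illustrative, not the survey's actual setup:

```python
import numpy as np

c = 2.998e8  # m/s
freqs = np.linspace(138e6, 170e6, 768)   # illustrative MWA-like band
lam2 = (c / freqs) ** 2
lam2_0 = lam2.mean()

# Toy source: a Faraday-thin screen at phi_true with unit polarized flux.
phi_true = 7.0  # rad m^-2
P_obs = np.exp(2j * phi_true * lam2)

# Discrete RM synthesis: F(phi) = (1/N) sum_k P_k exp(-2i phi (lam2_k - lam2_0))
phi_axis = np.linspace(-50, 50, 2001)
F = np.array([np.mean(P_obs * np.exp(-2j * p * (lam2 - lam2_0)))
              for p in phi_axis])
phi_peak = phi_axis[np.argmax(np.abs(F))]
print(f"recovered Faraday depth: {phi_peak:.2f} rad m^-2")
```

The Faraday depth resolution of such a spectrum is set by the $\lambda^2$ coverage, roughly $2\sqrt{3}/\Delta\lambda^2$, which is why wide fractional bandwidths at long wavelengths resolve Faraday structure so finely.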
\label{sec:conclusion} We have presented a 625 square degree survey of diffuse linear polarization at 154\,MHz carried out with the MWA. The survey, centered on the MWA EoR-0 field (0\rah, -27\arcdeg), achieved a sensitivity of 5.9\,mJy\,beam$^{-1}$ at $\sim$$54\arcmin$ resolution. The compact baselines of the MWA have been shown to be particularly sensitive to diffuse structures spanning $1\arcdeg-10\arcdeg$, something that has traditionally only been within reach of single-dish instruments. Our MWA observations reveal smooth large-scale diffuse structures that are $\sim$$1\arcdeg-8\arcdeg$ in extent in linear polarization and clearly detected even in 2 minute snapshots. The brightness temperature of these structures is on average 4\,K in polarized intensity, peaking at 11\,K. We estimate a distance of $51\pm 20$\,pc to the polarized emission based on RM measurements of the in-field pulsar PSR J2330$-$2005. Rotation measure synthesis reveals that the structures have Faraday depths ranging from $-2$\,rad\,m$^{-2}$ to 10\,rad\,m$^{-2}$. A large fraction of these peak at $+1.0$\,rad\,m$^{-2}$ but smaller structures are also observed to peak at $+3.0$, $+7.1$ and $\sim$$+9$\,rad\,m$^{-2}$. The observed RM structure is smooth, particularly around the region where polarized intensity peaks, with a peak RM gradient of $\sim$$0.02$\,rad\,m$^{-2}$\,beam$^{-0.5}$. The sensitivity available in our deep observations allowed a frequency-dependent analysis of the polarized structure to be performed for the first time at long wavelengths. The results of the analysis suggested that the polarized structures are dominated by intrinsic emission but may also have a component that is due to a foreground Faraday screen. A structure function analysis of our linearly polarized images and an analysis of Faraday structure also suggest that intrinsic polarized emission tends to dominate. 
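The brightness temperatures quoted above follow from the Rayleigh-Jeans conversion of a surface brightness in Jy\,beam$^{-1}$, assuming a Gaussian beam. A minimal sketch of the conversion:

```python
import numpy as np

def jybeam_to_kelvin(S_jy_per_beam, freq_hz, bmaj_arcmin, bmin_arcmin):
    """Rayleigh-Jeans brightness temperature for a Gaussian beam:
    T = S * lam^2 / (2 k_B Omega_beam), Omega = pi*bmaj*bmin/(4 ln 2)."""
    k_B = 1.380649e-23      # J/K
    c = 2.998e8             # m/s
    lam = c / freq_hz
    to_rad = np.pi / (180.0 * 60.0)
    omega = np.pi * (bmaj_arcmin * to_rad) * (bmin_arcmin * to_rad) / (4 * np.log(2))
    return S_jy_per_beam * 1e-26 * lam**2 / (2 * k_B * omega)

# e.g. 1 Jy/beam at 154 MHz with a 54' x 54' beam corresponds to a few K
print(jybeam_to_kelvin(1.0, 154e6, 54.0, 54.0))
```

With the survey beam of $\sim$$54\arcmin$ at 154\,MHz, 1\,Jy\,beam$^{-1}$ corresponds to roughly 5\,K, so the 5.9\,mJy\,beam$^{-1}$ noise level is a few tens of mK, far below the $\sim$4\,K diffuse polarized signal.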
A 400 square degree subset of the field was re-imaged at full resolution ($\sim$$2.4\arcmin$) and 2.3\,mJy\,beam$^{-1}$ sensitivity to search for polarized point sources. We detect 4 extragalactic linearly polarized point sources within the EoR-0 field and have confirmed these by observing the shift in their RM over two epochs as a result of observably different ionospheric conditions in those epochs. Based on known polarized field sources at 1.4\,GHz and non-detections at 154\,MHz, we estimate an upper limit on the depolarization ratio of 0.08 from 1.4\,GHz to 154\,MHz. Such levels of depolarization are not surprising at long wavelengths, however, we note that the four detected sources did not exhibit significantly increased levels of depolarization compared to 1.4\,GHz. This may hint towards a small population of sources (one per 100 sq. deg) with this behaviour. We also note that these may be associated with relatively large radio galaxies where unresolved polarized hot spots lie outside of the local environment and are less likely to suffer the effects of depolarization. With its high sensitivity to large-scale structure, the MWA has proven itself to be a formidable instrument for the study of diffuse polarization in the local ISM. In combination with RM synthesis, it also provides a unique ability to measure the effect of ionospheric Faraday rotation on the diffuse polarized background both spatially ($\sim$$1\arcdeg$ resolution) and temporally ($\sim$$2$ minute resolution). This not only allows ionospheric effects to be calibrated without the need for ionospheric models but also provides an opportunity to observe local solar events by measuring their effect on the background RM. The survey presented in this paper utilized only a very small fraction of the data currently available in the EoR and GLEAM projects. 
There is great potential to extend the survey to look more deeply within the EoR fields, to look over an increased field-of-view with the GLEAM data, and also to explore the full $80$--$230$\,MHz range of frequencies available in the GLEAM data. Additional epochs of GLEAM data will aid in improving sensitivity and mitigating the sidelobe confusion that affects some snapshots. An implementation of an improved beam model will drastically reduce the apparent leakage observed in the highest frequency bands and so increase the range of reliable data available for subsequent frequency-dependent analysis. Furthermore, deconvolution of the diffuse structure and a multi-scale analysis of the gradient of linear polarization, e.g. \citet{Robitaille:2015v451p372}, would allow the observations to probe more deeply. While the sensitivity of the observations presented in this paper prevented such an analysis, a measurement of instrumental leakage from linear polarization into total intensity would provide valuable information to the EoR community. Understanding leakage of this form is of particular importance for EoR science because it can act as a possible contamination source for EoR measurements \citep{Jelic:2008v389p1319, Geil2011v418p516, Moore:2013v769p154, Asad:2015v451p3709}. This has been left for future work and will be performed once the new MWA beam model has been implemented. An improved analysis will also be possible with an extension to the MWA that is currently underway. The extension provides additional compact configurations and redundant baselines that will aid in calibration.
16
7
1607.05779
1607
1607.05123_arXiv.txt
We investigate the observed galaxy power spectrum for the thawing class of scalar field models, taking into account various general relativistic corrections that occur on very large scales. We consider the full general relativistic perturbation equations for the matter as well as the dark energy fluid. We form a single autonomous system of equations containing both the background and perturbed equations of motion, which we subsequently solve for different scalar field potentials. First we study the percentage deviation from the $\Lambda$CDM model for different cosmological parameters as well as in the observed galaxy power spectra on different scales in scalar field models for various choices of scalar field potentials. Interestingly, the difference in background expansion results in an enhancement of power relative to $\Lambda$CDM on small scales, whereas the inclusion of GR corrections results in a suppression of power relative to $\Lambda$CDM on large scales. This can be useful for distinguishing scalar field models from $\Lambda$CDM with future optical/radio surveys. We also compare the observed galaxy power spectra for tracking and thawing types of scalar field using some particular choices for the scalar field potentials. We show that thawing and tracking models can have large differences in their observed galaxy power spectra on large scales and at smaller redshifts due to different GR effects. On smaller scales and at larger redshifts, however, the difference is small and is mainly due to the difference in background expansion.
Since the first observational evidence of the late time acceleration of the universe \citep{Perlmutter:1996ds,Riess:1998cb}, it has been the biggest challenge in theoretical as well as observational cosmology to find the source of the repulsive gravitational force that causes this acceleration. We still do not have conclusive evidence on whether this is due to some extra dark component in the energy budget of the universe (commonly known as ``dark energy'') \citep{2000IJMPD...9..373S,2003RvMP...75..559P,2003PhR...380..235P} or due to some modification of Einstein gravity on cosmological scales \citep{Tsujikawa:2010zza}. Although the Planck-2015 result \citep{Ade:2015xua} shows that the concordance $\Lambda$CDM universe (containing a Cosmological Constant $\Lambda$ and Cold Dark Matter) is consistent with a whole set of observational data, there are nevertheless some recent observational results that indicate inconsistency with the $\Lambda$CDM model \citep{2014ApJ...793L..40S,2015A&A...574A..59D,Riess:2016jrr,Bonvin:2016crt}. This is in addition to the theoretical inconsistencies, like the fine-tuning \citep{2000IJMPD...9..373S} and cosmic coincidence \citep{2000IJMPD...9..373S} problems, that are present in the $\Lambda$CDM model. This motivates one to go beyond the $\Lambda$CDM paradigm and consider models with evolving dark energy. But one needs to distinguish this evolving dark energy from a cosmological constant ($\Lambda$), which does not change throughout the history of the universe. To do so, we need to study the effect of dark energy on different cosmological observables related to the expanding background universe as well as to the process of structure growth in our universe.
This can be done through observations of Type-Ia Supernovae as standard candles \citep{Betoule:2014frx}, by observing the fluctuations in the temperature of the cosmic microwave background radiation (CMBR) \citep{Ade:2015xua}, or by looking at galaxies and their distribution over different distance scales as well as at different redshifts \citep{Alam:2016hwk}. The latest Planck results from 2015, together with data from Type-Ia Supernovae and data related to Baryon Acoustic Oscillations (BAO) from different redshift surveys, have put unprecedented constraints on different cosmological parameters, including those related to dark energy properties \citep{Ade:2015xua,Ade:2015rim}. But as far as dark energy is concerned, we have so far mainly constrained its background evolution. This is because most of the observations mentioned above are either related to the background universe, or to the perturbed universe on sub-horizon scales, where the Newtonian treatment is sufficient and one can safely ignore the fluctuations in dark energy. Hence it has not yet been possible to probe the inhomogeneity in dark energy, which can be very useful in distinguishing different dark energy models. Future optical as well as radio/infrared surveys like LSST \citep{Abell:2009aa} and SKA \citep{Maartens:2015mra} have the potential to cover a larger sky area and to extend to much higher redshifts, probing horizon scales and beyond. This will give us a wealth of information about our universe on such large length scales, which is not yet known. Crucially, on these scales one needs to consider the full general relativistic (GR) treatment to study how fluctuations grow, and dark energy perturbations can no longer be neglected.
This can be a smoking gun for distinguishing evolving dark energy models from $\Lambda$CDM: $\Lambda$, being a constant, is not perturbed, whereas any other evolving dark energy component should be perturbed and hence affects the growth of matter fluctuations on large scales in a different way than in the $\Lambda$CDM model. Apart from this extra effect coming from the dark energy perturbations on large scales, there are other GR effects on the galaxy overdensity on large scales \citep{Yoo:2009au,Bonvin:2011bg,Bonvin:2014owa,Challinor:2011bk,Jeong:2011as,Yoo:2012se,Bertacca:2012tp,Duniya:2016ibg,Duniya:2015dpa}. Primarily there are two sources of GR effects on large scales. One is related to the gravitational potential \citep{Duniya:2016ibg}, which can be local, at the observed galaxies, or integrated along the line of sight; the other is related to the peculiar velocities due to the motion of galaxies relative to the observer. In recent years, there have been a number of studies related to the calculation of the power spectrum of the galaxy overdensity on large scales taking into account the dark energy perturbation as well as various GR effects on large scales. This has been done mostly in the context of the $\Lambda$CDM universe \citep{Duniya:2016ibg}. The simplest way to model evolving dark energy is a canonical scalar field rolling down its potential. The first such model considered was the ``tracker'' scalar field model \citep{Wetterich:1987fm,2006IJMPD..15.1753C,1988PhRvD..37.3406R,1998PhRvL..80.1582C,1999PhRvD..59b3509L,1999PhRvD..59l3504S}, with particular types of potential that cause the scalar field to track the background radiation/matter component until the recent past, when the slope of the potential changes so that the field can behave like $\Lambda$ and cause the universe to accelerate. This tracker behaviour helps to evade the cosmic coincidence problem that is present in the $\Lambda$CDM model.
The full observed galaxy power spectrum that incorporates various GR corrections on horizon scales has been studied recently by Duniya et al. \citep{Duniya:2013eta} for tracker scalar field models. There is another class of scalar field models, the ``thawing'' class \citep{Caldwell:2005tm}, in which the scalar field is initially frozen on some flat part of the potential due to the large Hubble friction at early times and behaves like a cosmological constant with $w \approx -1$. Later on, as the Hubble friction decreases, the scalar field slowly thaws from its initial frozen state and evolves away from cosmological-constant-like behaviour. In this case, the scalar field never evolves far from its initial frozen state and the equation of state always remains very close to $w=-1$. Thawing scalar fields are much like the inflaton that drives the acceleration of the universe at early times. It is also interesting to note that the thawing canonical scalar field model has a generic analytical behaviour for its equation of state for nearly flat potentials \citep{Scherrer:2007pu} (see also \citep{Li:2016grl} for the generalisation of this result to non-canonical scalar fields). Thawing models in the context of the tachyon field \citep{Ali:2009mr} as well as the galileon field \citep{Hossain:2012qm} have also been considered in the recent past. To the best of our knowledge, there has been no study to date of the observed galaxy power spectrum for the thawing class of models that incorporates various GR corrections on large scales. In this paper we study the full general relativistic treatment of the growth of linear fluctuations in cosmological models with a thawing scalar field as dark energy. We form a single autonomous system of eight coupled equations involving both the background as well as the perturbed equations of motion and solve it for various scalar field potentials.
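The thawing behaviour described above can be illustrated by integrating the background Klein-Gordon and Friedmann equations for a nearly flat potential. The sketch below is a simplified illustration, not the paper's eight-equation autonomous system: flat universe with matter plus a quintessence field, units $8\pi G = c = H_0 = 1$, an exponential potential with a small slope, and illustrative initial conditions (field frozen deep in matter domination):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative thawing quintessence background (units 8*pi*G = c = 1, H0 ~ 1).
Om0 = 0.3
rho_m0 = 3.0 * Om0
lam, V0 = 0.5, 3.0 * (1.0 - Om0)   # nearly flat exponential potential (assumed)

V = lambda phi: V0 * np.exp(-lam * phi)
dV = lambda phi: -lam * V0 * np.exp(-lam * phi)

def rhs(N, y):
    """Klein-Gordon equation in e-folds N = ln a:
    dphi/dN = phidot/H,  dphidot/dN = -3*phidot - V'(phi)/H."""
    phi, phidot = y
    H = np.sqrt((rho_m0 * np.exp(-3 * N) + 0.5 * phidot**2 + V(phi)) / 3.0)
    return [phidot / H, -3.0 * phidot - dV(phi) / H]

# Thawing initial condition: field frozen (phidot = 0) deep in matter domination.
sol = solve_ivp(rhs, [np.log(1e-2), 0.0], [0.0, 0.0], rtol=1e-8, atol=1e-10)
phi, phidot = sol.y[:, -1]
w0 = (0.5 * phidot**2 - V(phi)) / (0.5 * phidot**2 + V(phi))
print(f"w(z=0) = {w0:.3f}")   # slightly above -1, as expected for thawing
```

The field stays frozen while $H$ is large and only begins to roll near the present epoch, so the equation of state today ends up just above $-1$, the characteristic thawing signature.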
Subsequently we study the power spectrum of the observed galaxy overdensity, taking into account various GR corrections, for different scalar field potentials and compare them with the $\Lambda$CDM model. We also study the difference in the observed galaxy power spectrum between the thawing and tracking/freezing classes of models for some specific choices of potential. The paper is organised as follows: in section 2, we briefly describe the background equations for the thawing dark energy models; in section 3, we describe the full general relativistic perturbation equations for linear fluctuations in both dark energy and matter, form a single set of autonomous equations involving both the background evolution and the evolution of the fluctuations, and study various cosmological quantities; in section 4, we calculate the observed galaxy power spectrum taking into account various GR correction terms for different scalar field potentials and compare them with the $\Lambda$CDM model; in section 5, we study the difference between the thawing and tracking classes of models; finally, in section 6, we present our conclusions.
Future surveys like LSST and SKA have the potential to probe our universe on very large scales and at very high redshifts. This will give us the opportunity to probe general relativistic effects on large-scale structure formation. As we start probing the universe on very large scales, we cannot ignore the dark energy perturbations, and hence all the GR corrections in the observed galaxy power spectrum contain contributions from the dark energy perturbations. This is indeed very promising, as it will be possible to distinguish the $\Lambda$CDM model (where there is no dark energy perturbation) from other evolving dark energy models (where dark energy perturbations are present) in a completely new way with the new generation of future surveys. Hence it is important to study the observed galaxy power spectra with the relevant GR corrections for different non-$\Lambda$CDM dark energy models. For tracking/freezing models, this was done earlier by \citet{Duniya:2013eta}. In this work we extend this to thawing scalar field models for dark energy, which are also a natural alternative to the $\Lambda$CDM model. Interestingly, these models can also give rise to transient acceleration. We set up a very general autonomous system of equations involving both the background as well as the perturbed universe. This is valid for any form of the potential, irrespective of whether it is a thawing or a tracking model. This set of equations is the first of its kind and can easily be generalized to include other forms of matter like radiation and neutrinos. Subsequently we solve this system of equations with thawing-type initial conditions for the scalar field evolution for various forms of the scalar field potential. Our main aim is to see the effect of thawing scalar fields on the observed galaxy power spectrum on large scales with different GR corrections and compare it to the $\Lambda$CDM model.
The gravitational potential in scalar field models is enhanced relative to its $\Lambda$CDM value on large scales due to the extra contribution from the dark energy perturbation, as determined by the Poisson equation. This extra contribution from the dark energy perturbation is not present on small scales. Hence the small-scale deviation from $\Lambda$CDM in scalar field models is always driven by the difference in background expansion. Due to the interplay of these two effects, in $P_{s}$ and $P_{ks}$ there is always a suppression of power on large scales and an enhancement of power on small scales in scalar field models in comparison to $\Lambda$CDM. Once we add the GR correction terms in the observed galaxy power spectra, the small-scale behaviour remains the same (as it arises only from the background expansion), but on large scales the suppression of power is increased by $9-10\%$ due to the extra effect from the GR corrections, specifically from the term $\mathcal{A}$, which involves both the peculiar velocity as well as the gravitational potential. This deviation is expected to be probed by upcoming experiments like SKA. We also compare the observed galaxy power spectra for thawing and tracking models on large scales assuming two specific potentials. We show that on large scales and at smaller redshifts, thawing models can have a larger suppression of power relative to $\Lambda$CDM than the tracker models. This shows that the GR corrections in these two models can be substantially different. On smaller scales and at larger redshifts, where the effect of the background expansion dominates, the difference between these two models is not substantial. But we should stress that this difference between thawing and tracking models depends on the choice of the parameters in the potentials, and unless we have specific constraints on these parameters, it is difficult to say anything conclusive.
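The small-scale deviation driven purely by the background expansion can be illustrated by solving the standard sub-horizon linear growth equation for $\Lambda$CDM versus a constant-$w$ model. This is a generic textbook calculation, not the paper's perturbation system (which also evolves the dark energy fluctuations); $w=-0.9$ is an arbitrary illustrative value:

```python
import numpy as np
from scipy.integrate import solve_ivp

Om0 = 0.3

def growth(w):
    """Sub-horizon linear growth factor D(a) for a flat model with constant
    dark-energy equation of state w (w = -1 is LCDM):
    D'' + (3/a + E'/E) D' = (3/2) Om0 / (a^5 E^2) D."""
    E2 = lambda a: Om0 * a**-3 + (1 - Om0) * a**(-3 * (1 + w))
    dE2 = lambda a: (-3 * Om0 * a**-4
                     - 3 * (1 + w) * (1 - Om0) * a**(-3 * (1 + w) - 1))
    def rhs(a, y):
        D, Dp = y
        return [Dp, -(3 / a + 0.5 * dE2(a) / E2(a)) * Dp
                + 1.5 * Om0 / (a**5 * E2(a)) * D]
    a0 = 1e-3                      # matter domination: growing mode D ~ a
    sol = solve_ivp(rhs, [a0, 1.0], [a0, 1.0], rtol=1e-8, atol=1e-10)
    return sol.y[0, -1]

D_lcdm, D_w = growth(-1.0), growth(-0.9)
print(f"relative growth difference at z=0: {(1 - D_w / D_lcdm) * 100:.2f}%")
```

For $w>-1$ the dark energy density was larger in the past, so expansion-driven suppression of growth sets in earlier and $D(z=0)$ is slightly smaller than in $\Lambda$CDM, a percent-level effect that is independent of the large-scale GR corrections.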
16
7
1607.05123
1607
1607.07917_arXiv.txt
We introduce a new set of eight Milky Way-sized cosmological simulations performed using the AMR code ART + Hydrodynamics in a $\Lambda$CDM cosmology. The set of zoom-in simulations covers present-day virial masses that range from $8.3 \times 10^{11} \msun$ to $1.56 \times 10^{12} \msun$ and is carried out with our {\it simple} but {\it effective} deterministic star formation (SF) and ``explosive'' stellar feedback prescriptions. The work is focused on demonstrating the realism of the simulated set of ``field'' Milky Way-sized galaxies. To this end, we compare some of the predicted physical quantities with the corresponding observed ones. Our results are as follows. (a) In agreement with some previous works, we found circular velocity curves that are flat or slightly peaked. (b) All simulated galaxies with a significant disk component are consistent with the observed Tully-Fisher, radius-mass, and cold gas-stellar mass correlations of late-type galaxies. (c) The disk-dominated galaxies have stellar specific angular momenta in agreement with those of late-type galaxies, with values around $10^3$ km/s/kpc. (d) The SF rates at $z = 0$ of all runs but one are comparable to those estimated for star-forming galaxies. (e) The two most spheroid-dominated galaxies formed in halos with late active merger histories and late bursts of SF, but the other run that also ends up spheroid dominated never had major mergers. (f) The simulated galaxies lie on the semi-empirical stellar-to-halo mass correlation of local central galaxies, and those that end up as disk dominated evolve mostly along the low-mass branch of this correlation. Moreover, the baryonic and stellar mass growth histories of these galaxies are proportional to their halo mass growth histories since the last 6.5--10 Gyr.
(g) Within the virial radii of the simulations, $\approx 25-50\%$ of the baryons are missing; the amount of gas in the halo is similar to that in stars in the galaxy, and most of this gas is in the warm-hot phase. (h) The $z\sim 0$ vertical gas velocity dispersion profiles, $\sigma_z$($r$), are nearly flat and can mostly be explained by the kinetic energy injected by stars. The average values of $\sigma_z$ increase at higher redshifts, roughly following the shape of the SF history.
Galaxy formation and evolution, within the hierarchical structure formation scenario, is a fascinating problem. It is a complex phenomenon that involves many physical processes and scales, from the formation of the dark and gaseous halos at scales of tens to hundreds of kpc to the formation of stars in giant molecular clouds at scales of dozens of pc \citep[e.g.,][]{McKee-Ostriker2007}, passing through the formation of supermassive black holes at the centers of massive galaxies at scales of $10^{-3}$ pc or less \citep[e.g.,][]{Volonteri2010}, with their corresponding stage of active galactic nuclei (AGN). These processes are accompanied by a variety of large-scale feedback effects such as, for example, supernovae (SNe) explosions and AGN outflows. Yet we encounter even larger scales and new phenomena related to galaxy formation during the formation of groups and clusters of galaxies \citep[e.g.,][]{Kravtsov-Borgani2012}. It has long been known that disks, as observed, should form when gas cools, condenses, and collapses within dark matter halos while conserving its angular momentum, acquired through external torques \citep{WR1978, FE1980, Mo+1998, Firmani+2000}. Yet, until recently, forming extended disk galaxies with flat rotation curves in hydrodynamic simulations, in a hierarchical cosmogony, was a challenge \citep[e.g.,][]{Mayer+2008}. Earlier works showed that disks could be produced in simulations without difficulty, but they inevitably ended up with too little (specific) angular momentum and too much stellar mass at the center of the galaxy \citep{NB1991, KG1991, NW1994, NFW1995, Somer-Larsen+1999}. The problem was that, because of inefficient feedback, poor resolution, or both, ``clumps'' (composed of gas, stars and dark matter) were too concentrated by the time they were accreted by a halo/galaxy.
In the end, this lumpy mass lost most of its orbital angular momentum through a physical process called dynamical friction \citep[e.g.,][]{NB1991}. Later works, with better resolution and/or more effective stellar feedback, improved on this by producing more extended disks and less massive spheroids \citep{Abadi+2003, Somer-Larsen+2003, Governato+2004}. The relatively recent success in forming realistic galaxies can be attributed to some degree to resolution, but mainly to a better understanding of the processes that play a major role in the classical {\it overcooling} \citep{WR1978, DS1986, WF1991} and {\it angular momentum} \citep[e.g.,][]{Mayer+2008} problems. Since the pioneering works of, for example, \citet{NB1991} and \citet{NW1994}, it was clear that some kind of stellar feedback was needed to avoid transforming most of the gas into stars. This source of energy (and momentum) could not only prevent the overcooling of the gas but also solve the angular momentum problem by puffing up the gas in the clumps, making it less susceptible to the loss of angular momentum. It was soon realized that, in order to do its job, the feedback needs to be efficient. Most works in recent years achieve this in one way or another \citep{Guedes+2011, Brook+2012, Governato+2012, Agertz+2013, Hopkins+2014}. For example, in a number of works the thermal feedback becomes efficient by artificially delaying, for about $\sim 10^7$yr, the cooling of the gas that surrounds the newly formed stellar particle \citep[e.g.,][]{TC2000, Stinson+2006, Governato+2007, Agertz+2011, Colin+2010, PS2011, Guedes+2011, Teyssier+2013}. In this recipe, further star formation (SF) is stopped by the high temperatures and low densities attained by the gas as a result of turning off the cooling.
Another way of making the feedback efficient is by depositing momentum (kinetic energy) directly into the gas, which, unlike thermal energy, cannot be radiated away \citep[][]{NW1993, SH2003, Scannapieco+2006, OD2008, Marinacci+2014}. This wind method is somewhat artificial in the sense that the kick is put in by hand and the wind particles are temporarily decoupled from the hydrodynamic interaction. Traditionally, stellar feedback has been associated with the injection of energy only by SNe, and in some cases with both SNe and stellar winds from massive stars \citep{Somer-Larsen+2003, Kravtsov03}, but very recently some kind of ``early'' feedback has also been incorporated \citep{Stinson+2013, Hopkins+2014, Trujillo-Gomez+2015}. This feedback begins a few million years before the first SN explodes and includes radiation pressure \citep{Krumholz+2014} and photoheating by the ionizing radiation of massive stars. The latter has been shown to significantly affect the structure of molecular clouds and in some cases destroy them \citep{Walch+2012, Colin+2013, Lopez+2014}. On the other hand, results from \citet{Trujillo-Gomez+2015} show that radiation pressure alone has a small effect on the total stellar mass content, but see \citet{Hopkins+2014}. Nowadays these effects and others, such as cosmic rays \citep{Salem+2014} and turbulence in molecular clouds, are slowly being incorporated into simulations of galaxy formation. The MW-sized galaxies are special in the sense that they occupy the peak of the stellar mass growth efficiency within dark matter (DM) halos, measured through the $\ms/\mv$ fraction \citep[e.g.,][]{Avila-Reese+2011b}.
For less massive galaxies, this efficiency decreases because of the global effects of SN-driven outflows, and for more massive ones, because of the long cooling times of the shock-heated gas during the virialization of massive halos and the feedback from luminous AGNs \citep[for a recent review of these processes see][]{Somerville-Dave2014}. Therefore, since the baryonic mass assembly of MW-sized galaxies is less affected by these astrophysical processes, the shape of their mass assembly history is expected not to deviate dramatically from the way their halos are assembled \citep[see e.g.,][]{Conroy+2009,Firmani+2010, Behroozi+2013,Moster+2013}, making them interesting objects for constraining the $\Lambda$CDM cosmology. Moreover, since these galaxies are less susceptible to the large-scale effects of stellar and AGN feedback, they are optimal for probing simple models of large-scale SF and self-regulation, where the generation of turbulence (traced by the cold gas velocity dispersion) and its dissipation in the ISM are key ingredients. The aim of this work is to introduce a new suite of zoom-in Milky Way (MW) sized simulations run with the N-body + Hydrodynamic ART code \citep{Kravtsov03} and with the SF and stellar feedback prescription implemented for the first time in \cite[][see also Avila-Reese et al. 2011a]{Colin+2010}, and used recently in \citet{Santi+2016} for the ``Garrotxa'' simulations. Here we use the same \nsf\ and \esf\ values as in the latter paper. The DM halos were drawn from a box of 50 \mpch\ per side and each contains about one million DM particles and cells within its virial radius. The spatial resolution, the side of the cell at the maximum level of refinement, is 136 pc.
The runs use a deterministic SF approach; that is, stellar particles form {\it every} time\footnote{The timestep for star formation is given by the timestep of the root grid, which varies from about 10 Myr at high $z$ to $\sim 40$ Myr at $z \sim 0$.} the gas satisfies certain conditions, with a conversion of gas into stars of 65\%. Immediately after they are born, the stellar particles dump $2 \times 10^{51}$ ergs of thermal energy, for every star $> 8\msun$, into the gas in the cell where the particle is located, raising its temperature to several $10^7\ \Ke$. This high temperature comes from the high SF efficiency we have assumed and from the fact that all the injected energy is concentrated in time and space (see subsection \ref{feedback}). We sometimes call this form of stellar feedback ``explosive'', as opposed to the typically assumed feedback consisting of a continuous supply of thermal energy for about 40 Myr \citep[e.g.,][]{CK2009}. At this high temperature, the cooling time is much longer than the crossing time \citep{DS2012}, and so the differences between galaxies simulated under the assumption of delayed cooling and those without it are not expected to be significant. From our sample of eight simulated halos/galaxies, we distinguish four that are clearly disk dominated. Two more have kinematic bulge-to-disk (B/D) ratios of 1.3 and 2.3. The other two had a violent late assembly phase and ended up with B/D ratios of 3 and 10. Here, we study the evolution of the specific angular momentum and the stellar-to-halo mass fraction, $\ms/\mv$, as a function of stellar mass. As usual in this kind of study, we also compute the circular velocity profiles and the SF histories. The MW-sized galaxies simulated here present realistic $\ms/\mv$ fractions and are consistent with several observational correlations. We find, in agreement with our previous studies \citep[e.g.,][]{Gonzalez+2014}, nearly flat circular velocity curves.
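The quoted post-feedback temperature of several $10^7$ K can be checked with a back-of-the-envelope estimate. In the sketch below, only the $2 \times 10^{51}$ erg per massive star and the 65\% conversion efficiency come from the scheme described above; the cell density, mean molecular weight, and the assumed $\sim$1 massive star per 100 $\msun$ formed (IMF-dependent) are illustrative assumptions:

```python
# Back-of-the-envelope check of the "several 1e7 K" post-feedback temperature.
k_B = 1.380649e-16   # erg/K
m_p = 1.6726e-24     # g
Msun = 1.989e33      # g
pc = 3.086e18        # cm

erg_per_sn = 2e51        # thermal energy per star > 8 Msun (scheme's value)
eps = 0.65               # gas-to-stars conversion efficiency (scheme's value)
sn_per_msun = 1.0 / 100  # ~1 massive star per 100 Msun formed (IMF assumption)
mu = 0.6                 # mean molecular weight of ionized gas (assumed)
n_H = 1.0                # cm^-3, assumed density of the star-forming cell

V_cell = (136.0 * pc) ** 3                 # 136 pc resolution element
M_cell = n_H * m_p * V_cell                # gas mass before star formation
M_star = eps * M_cell                      # mass turned into the stellar particle
E_inj = (M_star / Msun) * sn_per_msun * erg_per_sn
M_left = (1.0 - eps) * M_cell              # gas that receives the energy

# Temperature jump of a monatomic ideal gas: dT = 2 mu m_p E / (3 k_B M)
dT = 2.0 * mu * m_p * E_inj / (3.0 * k_B * M_left)
print(f"Delta T ~ {dT:.1e} K")
```

With these fiducial numbers the estimate indeed lands at several $10^7$ K, consistent with the cooling times being much longer than the crossing time.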
However, contrary to our results for low-mass galaxies \citep{Avila-Reese+2011, Gonzalez+2014}, and to some degree also for the MW-sized galaxy of \citet{Santi+2016}, here our predicted $\ms/\mv$ values agree with those determined by semi-empirical methods.

The suite of MW-sized galaxies introduced here will be used elsewhere to study in detail the spatially-resolved SF and stellar mass growth histories, with the aim of comparing the results with look-back time studies and fossil record inferences from observational surveys like MaNGA/SDSS-IV \citep{Bundy+2015}. They will also be used to study observational biases after post-processing them to assign dust and spectral energy distributions to each stellar particle, and by further performing mock observations.

This paper is organized as follows. In Section~\ref{sec:model}, we describe the code, the SF and feedback prescriptions, as well as the simulations. In Section~\ref{sec:results}, we put our simulated galaxies into an observational context by presenting their circular velocity curves, and by comparing the predicted galaxy properties and correlations with observations. Then, we estimate the morphology of our simulated galaxies and present the spatial distribution of light in some color bands. Finally, in the rest of Section~\ref{sec:results} we present: (i) the halo mass aggregation and specific angular momentum evolution of the runs, (ii) the \ms/\mv\ ratios and their evolution, (iii) the gas-to-stellar mass ratios, and (iv) the cold gas velocity dispersion profiles. In Section~\ref{sec:discussion}, we discuss the resolution issue and the validity of our ``explosive'' stellar feedback. We also discuss the implications of the dark/stellar mass assembly of our simulated galaxies, as well as the role of SN feedback as the driver of turbulence and self-regulated SF in the disk. Our conclusions are given in Section~\ref{conclusions}.
\label{conclusions} A suite of eight MW-sized simulated galaxies (virial masses around $10^{12}\ \msun$ at $z=0$) was presented. The hydrodynamics + N-body ART code and a relatively simple subgrid scheme were used to resimulate these galaxies at high resolution. The regions where the galaxies are located are resolved mostly with cells of 136 pc per side, and their halos contain around one million DM particles. The resimulated regions were chosen from a low-resolution N-body-only DM simulation of a box of $50\ \mpch$ per side. These regions target present-day halos in relatively isolated environments. The overdensities, concentrations, and spin parameters of the runs cover a wide range of values. Our main results and conclusions are as follows:

$\bullet$ The {\it simple} but {\it effective} subgrid scheme used in our simulations works very well for producing disk galaxies at the MW scale in rough agreement with observations. They have nearly flat circular velocity curves and maximum circular velocities in agreement with, or slightly higher than, the observed Tully-Fisher relation. Other predicted correlations, such as the \re--\ms, \ms/\mg--\ms\ and \ms/\mv--\mv\ ones, are consistent with observations for disk galaxies. In fact, four of the simulated galaxies end up as disk dominated, two have a significant disk that does not dominate, and two more end up as spheroids with a small disk component. The latter, as expected, moderately deviate from some of the mentioned correlations. Our subgrid scheme is based on a deterministic prescription for forming stars in every cool and dense gas cell with a high efficiency, and on an ``explosive'' (instantaneous energy release) stellar thermal feedback recipe. The parameters of the SF+feedback scheme, for our given resolution, were fixed in order to attain a high gas pressure, above $10^7$ K\,cm$^{-3}$, in the cells where young ($< 40$ Myr) stellar particles reside. 
In this way, (1) the external pressure and the negative ``pressure'' due to gravity are overcome, allowing the gas to expand, and (2) the temperature grows high enough that the crossing time in the cell turns out to be much smaller than the cooling time. The chosen parameters are $n_{\rm SF} =1$ cm$^{-3}$, $T_{\rm SF}=9000$ K, $\epsilon=65\%$, $E_{\rm SN+Wind}=2\times 10^{51}$ erg.

$\bullet$ The disk-dominated runs are associated with halos with roughly regular MAHs (no major mergers, at least since $z\sim1$), and they have a late, quiescent stellar mass growth nearly proportional to the halo mass growth ($\ms/\mv\approx$const. over the last 6.5--10 Gyr). On the contrary, the two most spheroid-dominated runs are associated with halos that suffered (late) major mergers. However, a galaxy with a prominent spheroid is also formed in a run with a very regular (no major mergers) halo MAH; in this run, most stars are assembled in an early burst. We conclude that the halo mass aggregation and merger histories play an important role in the final galaxy morphology of our MW-sized systems, but other effects can also be at work in some cases.

$\bullet$ The disk-dominated runs present a gentle and significant increase with time in the stellar specific angular momentum, $j_s$, attaining $z=0$ values similar to those measured in observed local late-type galaxies. On the contrary, the spheroid-dominated runs present an episodic evolution of $j_s$, ending with low values similar to those measured in local early-type galaxies. Moreover, the SFR histories of seven of the runs present a strong initial burst at cosmic ages of $2-4$ Gyr and then a decline with some occasional bursts. The most spheroid-dominated galaxies present late bursts of SF associated with late (gaseous) mergers. This implies a formation scenario for spheroid-dominated galaxies in the field different from that expected for early-type galaxies in high-density environments.

$\bullet$ The cold gas velocity dispersion is moderately anisotropic. 
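Condition (2) can be checked at the order-of-magnitude level by comparing the sound-crossing time of a 136 pc cell with its cooling time at the post-feedback temperature. The free-free cooling normalization below is a rough assumption for fully ionized gas, so the numbers are indicative only.

```python
import math

# Order-of-magnitude check: gas heated to several 1e7 K in a 136 pc cell
# has a cooling time much longer than the sound-crossing time, so the hot
# bubble expands before it radiates. The cooling rate is an assumed
# free-free (bremsstrahlung) approximation, valid only at these temperatures.
K_B = 1.380649e-16   # erg/K
M_H = 1.6726e-24     # g
PC = 3.0857e18       # cm
MU = 0.6             # mean molecular weight of ionized gas (assumed)

T = 3e7              # K, post-feedback temperature
n = 1.0              # cm^-3, SF-threshold density
cell = 136 * PC      # cm, side of the finest cell

c_s = math.sqrt(5.0 / 3.0 * K_B * T / (MU * M_H))  # adiabatic sound speed
t_cross = cell / c_s                                # ~1e5 yr

LAMBDA_FF = 2.4e-27 * math.sqrt(T)                  # erg cm^3/s (assumed)
t_cool = 1.5 * K_B * T / (n * LAMBDA_FF)            # ~1e7 yr

ratio = t_cool / t_cross   # >> 1, consistent with the argument in the text
```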
The $z=0$ vertical velocity dispersion profiles, $\sigma_z$($r$), are nearly flat (except for the spheroid-dominated Sp5 galaxy), with values mostly around 25-35 \kms\ out to $\sim 1.5\rhalf$; in several cases the dispersion increases with radius at larger radii. A model of self-regulated SF, based on the vertical balance between the rate of kinetic energy injected by SNe/stellar winds and the rate of kinetic energy dissipation by turbulence decay, is able to roughly predict the $\sigma_z$($r$) profiles. However, at radii where the SF rate has strongly decreased, the high measured $\sigma_z$ values are not produced by SNe/stellar winds but probably by gas accretion. The velocity dispersions of the simulated galaxies are significantly larger at higher redshifts, when the galaxies are actively assembling and having strong bursts of SF. The average velocity dispersion histories of the simulated galaxies roughly follow the corresponding SF histories.

$\bullet$ The stellar mass growth efficiency, \ms/\mv, of our simulated galaxies at $z=0$ is in good agreement with semi-empirical inferences, showing that MW-sized systems are at the peak of this efficiency. The evolution of \ms/\mv\ happens mostly within the scatter of the present-day \ms/\mv--\mv\ correlation, which implies that this correlation should not change significantly with $z$ in the $\mv=10^{11}-10^{12}$ \msun\ range, as some look-back time semi-empirical studies have suggested. For the disk-dominated runs, \ms/\mv\ and $M_b/\mv$ were very low at the earliest epochs, then suddenly increased during the initial burst of SF, and once the halos exceeded a mass of $\sim 5\times 10^{11}$ \msun, the galaxies entered the quiescent phase of stellar/baryonic mass growth nearly proportional to the cosmological halo mass growth (\ms/\mv\ and $M_b/\mv \approx$ const. with time).

$\bullet$ The galaxy baryonic fractions, $M_b/\mv$, are much lower than the universal one, $\Omega_b/\Omega_{m}$. 
Even taking into account the gas outside the galaxies, the missing baryons within \rv\ with respect to the universal fraction amount to $\approx 25-50\%$. The baryonic budget within \rv\ for our eight MW-sized galaxies shows that, on average, $\approx$ 28\% and 5.4\% of the baryons are in stars and cold gas in the galaxy, while 3.4\%, 24.2\%, and much less than 1\% are in cool, warm-hot, and hot gas in the halo, respectively.

The main goal of our study was to show that a {\it simple} subgrid scheme, which captures the main physics of SF and stellar feedback in an {\it effective} way given our resolution, is able to produce realistic galaxies formed in halos with masses of $\sim 10^{11}-10^{12}$ \msun\ during their evolution. These galaxies end up today with total masses close to that estimated for the MW galaxy. The success of this scheme can be attributed to the fact that at these scales neither SF-driven outflows and AGN feedback, nor halo environmental effects (long gas cooling times), are as relevant as they are at lower and higher mass scales, respectively.
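The vertical self-regulation balance invoked above for the cold-gas velocity dispersion can be sketched by equating the SN/wind kinetic-energy injection rate per unit area to a turbulent dissipation rate $\Sigma_{\rm gas}\sigma_z^3/h$. The coupling efficiency $\eta$ and the fiducial disk values in the example are assumptions for illustration, not the paper's fitted values.

```python
# Sketch of the self-regulation balance: eta * (SN energy per gram of stars
# formed) * Sigma_SFR = Sigma_gas * sigma_z^3 / h. The coupling efficiency
# ETA and the fiducial disk values below are illustrative assumptions.
MSUN = 1.989e33   # g
PC = 3.086e18     # cm
YR = 3.156e7      # s

ETA = 0.25                          # fraction of SN energy in turbulence (assumed)
E_PER_GRAM = 2e51 * 0.01 / MSUN     # 2e51 erg per >8 Msun star, ~1 per 100 Msun

def sigma_z(sfr_surf, gas_surf, h):
    """sfr_surf in Msun/yr/kpc^2, gas_surf in Msun/pc^2, gas scale height h
    in pc. Returns the equilibrium vertical velocity dispersion in km/s."""
    sfr_cgs = sfr_surf * MSUN / YR / (1e3 * PC) ** 2
    gas_cgs = gas_surf * MSUN / PC ** 2
    s3 = ETA * E_PER_GRAM * sfr_cgs * (h * PC) / gas_cgs
    return s3 ** (1.0 / 3.0) / 1e5

# ~20 km/s for solar-neighbourhood-like values (illustrative):
sigma = sigma_z(3e-3, 10.0, 100.0)
```

The weak ($1/3$-power) dependence on the SFR surface density is why the predicted $\sigma_z$ profiles are nearly flat wherever SF is ongoing.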
% --- end of arXiv:1607.07917 (July 2016) ---

% --- 1607/1607.05315_arXiv.txt ---
We study the putative emission of gravitational waves (GWs), in particular from pulsars with measured braking indices. We show that an appropriate combination of both GW emission and magnetic dipole braking can naturally explain the measured braking indices, when the surface magnetic field and the angle between the magnetic dipole and rotation axes are time dependent. We then discuss the detectability of these pulsars by aLIGO and the Einstein Telescope. We call attention to the realistic possibility that aLIGO can detect the GWs generated by at least some of these pulsars, such as Vela, for example.
\label{int} Recently, gravitational waves (GWs) have finally been detected~\citep{2016PhRvL.116f1102A}. The signal was identified as coming from the final fraction of a second of the coalescence of two black holes (BHs), which resulted in a spinning remnant black hole. Such an event had been predicted \citep[see e.g.,][]{2010ApJ...715L.138B} but never observed before.

As is well known, pulsars (spinning neutron stars) are promising candidates for producing GW signals detectable by aLIGO (Advanced LIGO) and AdV (Advanced Virgo). These sources might generate continuous GWs if they are not perfectly symmetric around their rotation axes. The so-called braking index ($n$), a quantity closely related to the pulsar spindown, can provide information about the pulsars' energy loss mechanisms. Such mechanisms can include GW emission, among others. Until very recently, only eight of the $\sim 2400$ known pulsars had braking indices measured accurately. All these braking indices are remarkably smaller than the canonical value $(n = 3)$ expected for the pure magneto-dipole radiation model \citep[see e.g.,][]{1993MNRAS.265.1003L,1996Natur.381..497L,2007ApSS.308..317L,2011MNRAS.411.1917W,2011ApJ...741L..13E,2012MNRAS.424.2213R, 2015ApJ...810...67A}. Several interpretations of the observed braking indices have been put forward, such as accretion of fall-back material via a circumstellar disk \citep{2016MNRAS.455L..87C}, relativistic particle winds \citep{2001ApJ...561L..85X,2003A&A...409..641W}, or modified canonical models that account for the observed braking index ranges \citep[see e.g.,][and references therein for further models]{1997ApJ...488..409A,2012ApJ...755...54M,2016ApJ...823...34E}. Alternatively, it has been proposed that the so-called quantum vacuum friction (QVF) effect in pulsars can explain several aspects of their phenomenology~\citep{2016ApJ...823...97C}. 
However, so far no model has satisfactorily explained all measured braking indices, nor has any of the existing models been totally ruled out by current data. Therefore, the energy loss mechanisms for pulsars are still under continuous debate.

Recently, Archibald et al.~\cite{2016ApJ...819L..16A} showed that PSR J1640-4631 is the first pulsar with a braking index greater than three, namely $n=3.15\pm 0.03$. PSR J1640-4631 has a spin period of $P=206$~ms and a spindown rate of $\dot P=9.758(44)\times 10^{-13}$~s/s, yielding a spindown power $\dot E_{\rm rot}=4.4\times 10^{36}$~erg/s and an inferred dipole magnetic field $B_0=1.4\times10^{13}$~G. This source was discovered through X-ray timing observations with NuSTAR, at a measured distance of $12$~kpc~\citep[see][]{2014ApJ...788..155G}. The braking index of PSR J1640-4631 reignites the question about energy loss mechanisms in pulsars. With the exception of this pulsar, all eight others, as previously mentioned, have braking indices $n < 3$ (see Table~\ref{ta1}), which may suggest that other spindown torques act along with the energy loss via dipole radiation. Recently, we showed that such a braking index can be accounted for if the spindown model combines magnetic dipole and GW brakes~\citep[see][]{2016arXiv160305975D}; each of these mechanisms alone cannot account for the braking index found for PSR J1640-4631. Since pulsars can also spin down through gravitational emission associated with asymmetric deformations~\citep[see e.g.,][]{1969ApJ...157.1395O,1969ApJ...158L..71F}, it is appropriate to include this mechanism in a model which aims to explain the measured braking indices. Thus, our interest in this paper is to explore both the gravitational and electromagnetic contributions in the context of pulsars with accurately measured braking indices. 
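As a consistency check, the quoted spindown power and inferred dipole field of PSR J1640-4631 follow directly from its timing parameters under the standard assumptions of a fiducial moment of inertia $I = 10^{45}$ g\,cm$^2$ and the magneto-dipole estimate $B_0 = 3.2\times10^{19}\sqrt{P\dot P}$ G:

```python
import math

# Recover the quoted spindown power and dipole field of PSR J1640-4631
# from its timing parameters, using the standard fiducial moment of inertia
# I = 1e45 g cm^2 and the magneto-dipole estimate B0 = 3.2e19 sqrt(P*Pdot) G.
P = 0.206          # s, spin period
PDOT = 9.758e-13   # s/s, spindown rate
I = 1e45           # g cm^2 (fiducial)

E_rot = 4.0 * math.pi**2 * I * PDOT / P**3   # erg/s, ~4.4e36 as quoted
B0 = 3.2e19 * math.sqrt(P * PDOT)            # G,   ~1.4e13 as quoted
```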
Based on the above reasoning, the aim of this paper is to extend the analysis of~\cite{2016arXiv160305975D} to all pulsars of Table~\ref{ta1}. In the next section we revisit the fundamental energy loss mechanisms for pulsars and derive the associated energy loss, focusing mainly on the energy balance when both gravitational and classic dipole radiation are responsible for the spindown. We also elaborate upon the evolution of other characteristic pulsar parameters, such as the surface magnetic field $B_0$ and the magnetic dipole direction $\phi$. In Section~\ref{sec3} we briefly discuss the detectability of these pulsars by aLIGO and the planned Einstein Telescope (ET) in its most recent design (ET-D). Finally, in Section~\ref{sec4}, we summarize the main conclusions and remarks. We work here with Gaussian units.
\label{sec4} In this paper we have studied the putative emission of GWs, in particular by pulsars with measured braking indices. We model the braking indices of these pulsars by including both the magnetic dipole and GW brakes in the spindown, with the surface magnetic field and the angle between the magnetic and rotation axes depending on time. We show that an appropriate combination of these quantities can account for the braking indices observed for the pulsars listed in Table~\ref{ta1}. Notice that even for $\eta = 0.01$ our model predicts that some pulsars would have large values of $\epsilon$. This can be an indication that $\eta \ll 0.01$ for these pulsars, implying smaller ellipticities, i.e., $\epsilon < 10^{-3}$.

Moreover, we study the detectability of these pulsars by aLIGO and ET. Besides considering the efficiency of GW generation, we take into account the role of the moment of inertia. To do so, we adopt different values of $I$ that come from neutron star models for a given, acceptable equation of state. We show that aLIGO could detect at least some of the pulsars considered here, in particular Vela, within one year of observation.

Last, but not least, even in scenarios where alternative models are invoked to explain the pulsars' braking indices, it is quite important to include the putative contribution of GWs, since it is reasonable to expect that pulsars have deformations, implying a finite ellipticity, which is one of the main quantities in the calculation of the GW amplitudes. On the other hand, from the point of view of searches for continuous GWs, a recently published paper \cite{2016PhRvD..93f4011M} presents interesting strategies in which not only GWs are considered as emission mechanisms.
% --- end of arXiv:1607.05315 (July 2016) ---

% --- 1607/1607.08237_arXiv.txt ---
The \kepler\ Mission has discovered thousands of exoplanets and revolutionized our understanding of their population. This large, homogeneous catalog of discoveries has enabled rigorous studies of the occurrence rate of exoplanets and planetary systems as a function of their physical properties. However, transit surveys like \kepler\ are most sensitive to planets with orbital periods much shorter than the orbital periods of Jupiter and Saturn, the most massive planets in our Solar System. To address this deficiency, we perform a fully automated search for long-period exoplanets with only one or two transits in the archival \kepler\ light curves. When applied to the $\sim 40,000$ brightest Sun-like target stars, this search produces \numcands\ long-period exoplanet candidates. Of these candidates, 6 are novel discoveries and 5 are in systems with inner short-period transiting planets. Since our method involves no human intervention, we empirically characterize the detection efficiency of our search. Based on these results, we measure the average occurrence rate of exoplanets smaller than Jupiter with orbital periods in the range 2--25~years to be $2.0\pm0.7$ planets per Sun-like star.
Data from the \kepler\ Mission \citep{Borucki:2011} have been used to discover thousands of transiting exoplanets. The systematic nature of these discoveries and careful quantification of survey selection effects, search completeness, and catalog reliability has enabled many diverse studies of the detailed frequency and distribution of exoplanets \citep[for example,][]{Howard:2012, Petigura:2013, Foreman-Mackey:2014, Dressing:2015, Burke:2015, Mulders:2015}. So far, these results have been limited to relatively short orbital periods because existing transit search methods impose the requirement of the detection of at least three transits within the baseline of the data. For \kepler, with a baseline of about four years, this sets an absolute upper limit of about two years on the range of detectable periods.

In the Solar System, Jupiter~--~with a period of 12 years~--~dominates the planetary dynamics and, since it would only exhibit at most one transit in the \kepler\ data, an exo-Jupiter would be missed by most existing transit search procedures. Before the launch of the \kepler\ Mission, it was predicted that the nominal mission would discover at least 10 exoplanets with only one or two observed transits \citep{Yee:2008}, yet subsequent searches for these signals have already been more fruitful than expected \citep{Wang:2015, Uehara:2016}. However, the systematic study of the population of long-period exoplanets found in the \kepler\ data to date has been hampered due to the substantial technical challenge of implementing a search, as well as the subtleties involved in interpreting the results. For example, false alarms in the form of uncorrected systematics in the data and background eclipsing binaries can make single-transit detections ambiguous. 
Any single transit events discovered in the \kepler\ light curves are interesting in their own right, but the development of a general and systematic method for the discovery of planets with orbital periods longer than the survey baseline is also crucial for the future of exoplanet research with the transit method. All future transit surveys have shorter observational baselines than the \kepler\ Mission (\KT, \citealt{Howell:2014}; \tess, \citealt{Ricker:2015}; \plato, \citealt{Rauer:2014}) and given suitable techniques, single transit events will be plentiful and easily discovered. The methodological framework presented in the following pages is a candidate for this task.

A study of the population of long-period transiting planets complements other planet detection and characterization techniques, such as radial velocity \citep[for example][]{Cumming:2008, Knutson:2014, Bryan:2016}, microlensing \citep[for example][]{Gould:2010, Cassan:2012, Clanton:2014, Shvartzvald:2016}, direct imaging \citep[for example][]{Bowler:2016}, and transmission spectroscopy \citep[for example][]{Dalba:2015}. The marriage of the radial velocity and transit techniques is particularly powerful as exoplanets with both mass and radius measurements can be used to study planetary compositions and the formation of planetary systems \citep[for example][]{Weiss:2014, Rogers:2015, Wolfgang:2016}. Unfortunately, the existing catalog of exoplanets with measured densities is sparsely populated at long orbital periods; this makes discoveries with the transit method at long orbital periods compelling targets for follow-up observations. Furthermore, even at long orbital periods, the \kepler\ light curves should be sensitive to planets at the detection limits of the current state-of-the-art radial velocity surveys.

There are two main technical barriers to a systematic search for single transit events. 
The first is that the transit probability for long-period planets is very low, scaling as $\propto P^{-5/3}$ for orbital periods $P$ longer than the baseline of contiguous observations. Therefore, even if long-period planets are intrinsically common, they will be underrepresented in a transiting sample. The second challenge is that there are many signals in the observed light curves caused by stochastic processes~--~both instrumental and astrophysical~--~that can masquerade as transits. Even when the most sophisticated methods for removing this variability are used, false signals far outnumber the true transit events in any traditional search.

At the heart of all periodic transit search procedures is a filtering step based on ``box least squares'' \citep[\bls;][]{Kovacs:2002}. This step produces a list of candidate transit times that is then vetted to remove the substantial fraction of false signals using some combination of automated heuristics and expert curation. In practice, the fraction of false signals can be substantially reduced by requiring that at least three self-consistent transits be observed \citep{Petigura:2013, Burke:2014, Rowe:2015, Coughlin:2016}. Relaxing the requirement of three transits demands a higher signal-to-noise threshold per transit for validating candidate planets that display only one or two transits. Higher signal-to-noise allows matching the candidate transit to the expected shape of a limb-darkened light curve, as well as ruling out various false alarms. This is analogous to microlensing surveys, for which a planet can only be detected once, thus requiring high signal-to-noise for a reliable detection \citep{Gould:2004}. Recent work has yielded discoveries of long-period transiting planets with only one or two high signal-to-noise transits identified in archival \kepler\ and \KT\ light curves by visual inspection \citep{Wang:2013, Kipping:2014a, Wang:2015, Osborn:2016, Kipping:2016, Uehara:2016}. 
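The $P^{-5/3}$ scaling quoted above can be decomposed into a geometric factor ($R_\ast/a \propto P^{-2/3}$, from Kepler's third law) and a window factor ($\propto T_{\rm obs}/P$ once $P$ exceeds the baseline). The sketch below assumes a Sun-like host and a circular orbit:

```python
# Why the detection probability scales as P^(-5/3) beyond the survey
# baseline: the geometric transit probability R*/a falls as P^(-2/3)
# (Kepler's third law for a Sun-like host), and the chance that the lone
# transit lands inside the observing window adds a factor T_obs/P.
AU = 1.496e13      # cm
R_SUN = 6.957e10   # cm

def detection_probability(P_yr, T_obs_yr=4.0):
    a = AU * P_yr ** (2.0 / 3.0)          # semi-major axis, Sun-like host
    p_geom = R_SUN / a                     # geometric transit probability
    p_window = min(1.0, T_obs_yr / P_yr)   # >=1 transit within the baseline
    return p_geom * p_window

# Doubling the period beyond the baseline cuts the yield by 2**(5/3) ~ 3.2:
ratio = detection_probability(10.0) / detection_probability(20.0)
```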
These discoveries have already yielded some tantalizing insight into the population of long-period transiting planets but, since these previous results rely on human interaction, it is prohibitively expensive to reliably measure the completeness of these catalogs. As a result, the existing catalogs of long-period transiting planets cannot be used to rigorously constrain the occurrence rate of long-period planets.

In this \paper, we develop a systematic method for reliably discovering the transits of large, long-period companions in photometric time series \emph{without human intervention}. The method is similar in character to the recently published fully automated method used to generate the official DR24 exoplanet candidate catalog from \kepler\ \citep{Mullally:2016, Coughlin:2016}. Since the search methodology is fully automated, we can robustly measure the search completeness~--~using injection and recovery tests~--~and use these products to place probabilistic constraints on the occurrence rate of long-period planets. We apply this method to a subset of the archival data from the \kepler\ Mission, present a catalog of exoplanet candidates, and estimate the occurrence rate of long-period exoplanets. We finish by discussing the potential effects of false positives, evaluating the prospects for follow-up, and comparing our results to other studies based on different planet discovery methods.
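The full analysis is probabilistic, but the core idea of combining a candidate catalog with the injection-and-recovery completeness can be illustrated with a simple inverse-detection-efficiency estimator; the completeness and transit-probability numbers below are purely illustrative, not values from the paper.

```python
# Minimal sketch of the inverse-detection-efficiency idea behind an
# occurrence-rate estimate: weight each candidate by 1 / (completeness x
# geometric transit probability) and divide by the number of searched stars.
# The paper uses a full probabilistic framework; numbers are illustrative.

def occurrence_rate(candidates, n_stars):
    """candidates: list of (completeness, transit_probability) per detection.
    Returns the mean number of planets per searched star."""
    return sum(1.0 / (c * p) for c, p in candidates) / n_stars

# e.g. three detections around 40,000 stars, each with ~50% completeness
# and ~0.2% transit probability, imply ~0.075 planets per star:
occ = occurrence_rate([(0.5, 0.002)] * 3, 40000)
```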
\sectlabel{summary} We have developed a fully automated method to search for the transits of long-period exoplanets with only one or two observable transits in the \kepler\ archival light curves. This method uses probabilistic model comparison to veto non-transit signals. Applying this method to the brightest \numtargets\ G/K dwarfs in the \kepler\ target list, we discover \numcands\ systems with likely astrophysical transits and eclipses. We fit the light curve for each candidate with a physical generative model and informative priors on eccentricity and stellar density to estimate the planet's orbital period. The constraint on the period is also informed by the simplifying assumption that no other transit could occur during the baseline of \kepler\ observations of the target. Simulations of the false positive population~--~lone primary or secondary eclipses of binary systems or background eclipsing binaries~--~suggest that 13 of these candidates have a high probability of being planetary in nature.

We measure the empirical detection efficiency function of our search procedure by injecting simulated transit signals into the target light curves and measuring the recovery rate. By combining the measured detection efficiency and the catalog of exoplanet candidates, we estimate the integrated occurrence rate of exoplanets with orbital periods in the range $2-25\unit{years}$ and radii in the range $0.1-1\,R_\mathrm{J}$ to be \intocc\ planets per G/K dwarf. \response{%
This result is qualitatively consistent with estimates of the occurrence rate of long-period giant planets based on data from radial velocity and direct imaging surveys. The occurrence rate measured here~--~for Sun-like hosts~--~is higher than microlensing results for generally lower mass stars \citep{Gaudi:2012, Clanton:2014, Clanton:2016}, but this discrepancy is consistent with predictions from the core-accretion model \citep{Laughlin:2004}. 
} Using a probabilistic mass--radius relationship, we predict the masses of our candidates and report predictions for the radial velocity semi-amplitudes. Unfortunately, since the target stars are faint and the amplitudes are small, these targets are unlikely to be accessible with even the current state-of-the-art high-precision instruments. We also discuss the potential for astrometric follow-up using the forthcoming data from the \gaia\ Mission, with similarly discouraging results. Any detailed analysis of individual systems detected with only a single transit requires follow-up observations to convincingly rule out false positive scenarios and to better characterize the stellar host parameters (with, for example, parallax measurements from \gaia).

The conclusions of this work~--~and all other occurrence rate results based on \kepler\ data~--~are conditioned on the assumption that the stellar characterization of the target sample is systematic and unbiased. The main population-level results should be fairly insensitive to systematic issues with the sample, but a rigorous analysis of these effects will be required to come to more detailed conclusions about this population of long-period transiting planets.

Our method of transit discovery is especially relevant for future photometric surveys like \KT, \tess, and \plato, where the survey baseline is shorter than \kepler's. The transits of planets with orbital periods longer than the observation baselines will be plentiful in these forthcoming data sets, and this method can, in principle, be trivially generalized to discover these planets, prioritize follow-up, and study their population. \vspace{1.5em} All of the code used in this project is available from \url{https://github.com/dfm/peerless} under the MIT open-source software license. 
This code (plus some dependencies) can be run to re-generate all of the figures and results in this \paper; this version of the paper was generated with git commit \texttt{\githash} (\gitdate). The parameter estimation results represented as MCMC samplings and the injection results are available for download from Zenodo at \datareleaseurl. \vspace{1.5em} It is a pleasure to thank Jeff Coughlin, So Hattori, Heather Knutson, Phil Muirhead, Darin Ragozzine, Hans-Walter Rix, Dun Wang, and Angie Wolfgang for helpful discussions and contributions. We thank the anonymous referee for comments that improved the presentation and clarity of this manuscript. T.D.M.\ was supported by the National Aeronautics and Space Administration, under the \kepler\ participating scientist program (grant NNX14AE11G), and is grateful for the hospitality of both the Institute for Advanced Study and Carnegie Observatories that helped support this work. D.W.H.\ was partially supported by the National Science Foundation (grant IIS-1124794), the National Aeronautics and Space Administration (grant NNX12AI50G), and the Moore--Sloan Data Science Environment at NYU. E.A.\ acknowledges support from NASA grants NNX13AF20G, NNX13AF62G, and NASA Astrobiology Institutes Virtual Planetary Laboratory, supported by NASA under cooperative agreement NNH05ZDA001C. This research made use of the NASA \project{Astrophysics Data System} and the NASA Exoplanet Archive. The Exoplanet Archive is operated by the California Institute of Technology, under contract with NASA under the Exoplanet Exploration Program. This \paper\ includes data collected by the \kepler\ mission. Funding for the \kepler\ mission is provided by the NASA Science Mission directorate. We are grateful to the entire \kepler\ team, past and present. Their tireless efforts were all essential to the tremendous success of the mission and the successes of \KT, present and future. 
These data were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST is provided by the NASA Office of Space Science via grant NNX13AC07G and by other grants and contracts. Computing resources were provided by High Performance Computing at New York University. \facility{Kepler} \software{% \project{batman} \citep{Kreidberg:2015}, \project{ceres} \citep{Agarwal:2016}, \project{corner.py} \citep{Foreman-Mackey:2016}, \project{emcee} \citep{Foreman-Mackey:2013}, \project{exosyspop} \citep{Morton:2016a}, \project{george} \citep{Ambikasaran:2016}, \project{isochrones} \citep{Morton:2015}, \project{matplotlib} \citep{Hunter:2007}, \project{numpy} \citep{Van-Der-Walt:2011}, \project{scipy} \citep{Jones:2001}, \project{transit} \citep{Foreman-Mackey:2016a}, \project{vespa} \citep{Morton:2015b}}. \newpage \appendix
% --- end of arXiv:1607.08237 (July 2016) ---

% --- 1607/1607.04828_arXiv.txt ---
We present a statistical method for measuring the average \mbox{H\,{\sc i}} spin temperature in distant galaxies using the expected detection yields from future wide-field 21\,cm absorption surveys. As a demonstrative case study we consider a simulated all-southern-sky survey of 2\,h per pointing with the Australian Square Kilometre Array Pathfinder for intervening \mbox{H\,{\sc i}} absorbers at intermediate cosmological redshifts between $z = 0.4$ and 1. For example, if such a survey yielded $1000$ absorbers, we would infer a harmonic-mean spin temperature of $\overline{T}_\mathrm{spin} \sim 100$\,K for the population of damped Lyman-$\alpha$ absorbers (DLAs) at these redshifts, indicating that more than $50$\,per\,cent of the neutral gas in these systems is in a cold neutral medium (CNM). Conversely, a lower yield of only 100 detections would imply $\overline{T}_\mathrm{spin} \sim 1000$\,K and a CNM fraction of less than $10$\,per\,cent. We propose that this method can be used to provide independent verification of the spin temperature evolution reported in recent 21\,cm surveys of known DLAs at high redshift, and to measure the spin temperature at intermediate redshifts below $z \approx 1.7$, where the Lyman-$\alpha$ line is inaccessible from ground-based observatories. Increasingly sensitive and larger surveys with the Square Kilometre Array should provide stronger statistical constraints on the average spin temperature. However, these will ultimately be limited by the accuracy to which we can determine the \mbox{H\,{\sc i}} column density frequency distribution, the covering factor, and the redshift distribution of the background radio source population.
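Since the 21\,cm optical depth scales as $N_{\rm HI} f_c / T_{\rm spin}$, the detection yield above a fixed sensitivity limit falls roughly inversely with the harmonic-mean spin temperature. The sketch below encodes this inverse scaling, calibrated (as an assumption, purely for illustration) to the quoted example of 1000 detections corresponding to $\sim$100 K:

```python
# Illustrative scaling behind the yield -> spin-temperature inference:
# the 21 cm optical depth goes as N_HI * f_c / T_spin, so the number of
# absorbers detected above a fixed sensitivity limit scales roughly as
# 1 / T_spin. Calibration to the quoted example (1000 detections <-> ~100 K)
# is an assumption for illustration, not the survey simulation itself.
T_REF, N_REF = 100.0, 1000.0   # K, detections (calibration point)

def mean_spin_temperature(n_detected):
    """Harmonic-mean spin temperature implied by a survey yield."""
    return T_REF * N_REF / n_detected

T_low_yield = mean_spin_temperature(100.0)    # 1000 K: mostly WNM
T_high_yield = mean_spin_temperature(1000.0)  # 100 K: CNM-dominated
```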
Gas has a fundamental role in shaping the evolution of galaxies, from its accretion on to massive haloes, through cooling and the subsequent fuelling of star formation, to the triggering of extreme luminous activity around supermassive black holes. Determining how the physical state of gas in galaxies changes as a function of redshift is therefore crucial to understanding how these processes evolve over cosmological time. The standard model of the gaseous interstellar medium (ISM) in galaxies comprises a thermally bistable medium (\citealt*{Field:1969}) of dense ($n \sim 100$\,cm$^{-3}$) cold neutral medium (CNM) structures, with kinetic temperatures of $T_{\rm k} \sim 100$\,K, embedded within a lower-density ($n \sim 1$\,cm$^{-3}$) warm neutral medium (WNM) with $T_{\rm k} \sim 10^{4}$\,K. The WNM shields the cold gas and is in turn ionized by background cosmic rays and soft X-rays (e.g. \citealt{Wolfire:1995, Wolfire:2003}). A further hot ($T_{\rm k} \sim 10^{6}$\,K) ionized component was introduced into the model by \cite{McKee:1977}, to account for heating by supernova-driven shocks within the inter-cloud medium. In the local Universe, this paradigm has successfully withstood decades of observational scrutiny, although there is some evidence (e.g. \citealt{Heiles:2003b}; \citealt*{Roy:2013b}; \citealt{Murray:2015}) that a significant fraction of the WNM may exist at temperatures lower than expected for global conditions of stability, requiring additional dynamical processes to maintain local thermodynamic equilibrium. Since atomic hydrogen (\mbox{H\,{\sc i}}) is one of the most abundant components of the neutral ISM and readily detectable through either the 21\,cm or Lyman-$\alpha$ lines, it is often used as a tracer of the large-scale distribution and physical state of neutral gas in galaxies. The 21\,cm line has successfully been employed in surveying the neutral ISM in the Milky Way (e.g. \citealt{McClure-Griffiths:2009,Murray:2015}), the Local Group (e.g. 
\citealt{Kim:2003,Bruns:2005,Braun:2009,Gratier:2010}) and low-redshift Universe (see \citealt{Giovanelli:2016} for a review). However, beyond $z \sim 0.4$ (\citealt{Fernandez:2016}) \mbox{H\,{\sc i}} emission from individual galaxies becomes too faint to be detectable by current 21\,cm surveys and so we must rely on absorption against suitably bright background radio (21\,cm) or UV (Lyman-$\alpha$) continuum sources to probe the cosmological evolution of \mbox{H\,{\sc i}}. The bulk of neutral gas is contained in high-column-density damped Lyman-$\alpha$ absorbers (DLAs, $N_{\rm HI} \geq 2 \times 10^{20}$\,cm$^{-2}$; see \citealt*{Wolfe:2005} for a review), which at $z \gtrsim 1.7$ are detectable in the optical spectra of quasars. Studies of DLAs provide evidence that the atomic gas in the distant Universe appears to be consistent with a multi-phase neutral ISM similar to that seen in the Local Group (e.g. \citealt*{Lane:2000}; \citealt*{Kanekar:2001c}; \citealt*{Wolfe:2003b}). However, there is some variation in the cold and warm fractions measured throughout the DLA population (e.g. \citealt*{Howk:2005}; \citealt{Srianand:2005, Lehner:2008}; \citealt*{Jorgenson:2010}; \citealt{Carswell:2011, Carswell:2012, Kanekar:2014a}; \citealt*{Cooke:2015}; \citealt*{Neeleman:2015}). The 21\,cm spin temperature affords us an important line of enquiry in unravelling the physical state of high-redshift atomic gas. This quantity is sensitive to the processes that excite the ground state of \mbox{H\,{\sc i}} in the ISM (\citealt{Purcell:1956,Field:1958,Field:1959b,Bahcall:1969}) and therefore dictates the detectability of the 21\,cm line in absorption. In the CNM the spin temperature is governed by collisional excitation and so is driven to the kinetic temperature, while the lower densities in the WNM mean that the 21\,cm transition is not thermalized by collisions between the hydrogen atoms, and so photo-excitation by the background Lyman-$\alpha$ radiation field becomes important. 
Consequently the spin temperature in the WNM is lower than the kinetic temperature, in the range $\sim$1000 -- 5000\,K depending on the column density and number of multi-phase components (\citealt{Liszt:2001}). Importantly, the spin temperature measured from a single detection of extragalactic absorption is equal to the harmonic mean of the spin temperature in individual gas components, weighted by their column densities, thereby providing a method of inferring the CNM fraction in high-redshift systems. Surveys for 21\,cm absorption in known redshifted DLAs have been used to simultaneously measure the column density and spin temperature of \mbox{H\,{\sc i}} (see \citealt{Kanekar:2014a} and references therein). There is some evidence for an increase (at $4\,\sigma$ significance) in the spin temperature of DLAs at redshifts above $z = 2.4$, and a difference (at $6\,\sigma$ significance) between the distribution of spin temperatures in DLAs and the Milky Way (\citealt{Kanekar:2014a}). The implication that at least 90\,per\,cent of high-redshift DLAs may have CNM fractions significantly less than that measured for the Milky Way has important consequences for the heating and cooling of neutral gas in the early Universe and star formation (e.g. \citealt*{Wolfe:2003a}). However, these targeted observations rely on the limited availability of simultaneous 21\,cm and optical/UV data for the DLAs and assume commonality between the column density probed by the optical and radio sight-lines. The first issue can be overcome by improving the sample statistics through larger 21\,cm line surveys of high-redshift DLAs, but the latter requires improvements to our methodology and understanding of the gas distribution in these systems. There are also concerns about the accuracy to which the fraction of the source structure subtended by the absorber can be measured in each system, which can only be resolved through spectroscopic very long baseline interferometry (VLBI). 
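The column-density-weighted harmonic mean described above, and the CNM fraction it implies under a simple two-phase (CNM + WNM) decomposition, can be sketched as follows. This is a minimal illustration, not the paper's pipeline; the component columns and the two-phase temperatures (100\,K and 5000\,K) are illustrative assumptions.

```python
# Sketch: harmonic-mean spin temperature along a sight line, weighted by
# column density, and the CNM column fraction implied by a simple
# two-phase decomposition. Component values below are illustrative.

def harmonic_mean_tspin(columns, temps):
    """T_spin = sum(N_i) / sum(N_i / T_i) over gas components (N_i, T_i)."""
    total_n = sum(columns)
    return total_n / sum(n / t for n, t in zip(columns, temps))

def cnm_fraction(t_spin, t_cnm=100.0, t_wnm=5000.0):
    """Two-phase estimate of the CNM column fraction implied by T_spin."""
    return (1.0 / t_spin - 1.0 / t_wnm) / (1.0 / t_cnm - 1.0 / t_wnm)

# Equal columns of cold (100 K) and warm (5000 K) gas: the harmonic mean
# is pulled strongly towards the cold component.
tspin = harmonic_mean_tspin([1e20, 1e20], [100.0, 5000.0])
print(round(tspin, 1))              # ~196 K
print(round(cnm_fraction(tspin), 2))  # recovers the 50 per cent CNM input
```

The strong weighting towards cold gas is why even a modest CNM fraction drives the measured spin temperature far below the WNM value.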
It has been suggested that the observed evolution in spin temperature could be biased by assumptions about the radio-source covering factor (\citealt{Curran:2005}) and its behaviour as a function of redshift (\citealt{Curran:2006b, Curran:2012b}). In this paper we consider an approach that uses the statistical constraint on the average spin temperature achievable with future large 21\,cm surveys on precursor telescopes to the Square Kilometre Array (SKA). This will enable independent verification of the evolution in spin temperature at high redshift and provide a method of studying the global properties of neutral gas below $z \approx 1.7$, where the Lyman-$\alpha$ line is inaccessible using ground-based observatories. In an early attempt at a genuinely blind 21\,cm absorption survey, \cite{Darling:2011} used pilot data from the Arecibo Legacy Fast ALFA (ALFALFA) survey to obtain upper limits on the column density frequency distribution from 21\,cm absorption at low redshift ($z \lesssim 0.06$). However, they also noted that the number of detections could be used to make inferences about the ratio of the spin temperature to covering factor. Building upon this work, \cite{Wu:2015} found that their upper limits on the frequency distribution function measured from the 40\,per\,cent ALFALFA survey ({$\alpha$}.40; \citealt{Haynes:2011}) could only be reconciled with measurements from other low-redshift 21\,cm surveys if the typical ratio of spin temperature to covering factor was greater than 500\,K. At higher redshifts, \cite{Gupta:2009} found that the number density of 21\,cm absorbers in known \mbox{Mg\,{\sc ii}} absorbers appeared to decrease with redshift above $z \sim 1$, consistent with a reduction in the CNM fraction. We pursue this idea further by investigating whether future wide-field 21\,cm surveys can be used to measure the average spin temperature in distant galaxies that are rich in atomic gas.
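The core of the detection-yield argument is that, for a fixed optical-depth sensitivity, whether a given DLA is detected in 21\,cm absorption depends on the ratio of spin temperature to covering factor. A minimal sketch of that threshold logic, using the standard relation $N_{\rm HI} = 1.823 \times 10^{18}\, T_{\rm spin} \int \tau\, {\rm d}v$ with an assumed Gaussian line profile, is given below; the survey sensitivity, line width and covering factor are illustrative assumptions, not the survey specifications used in the paper.

```python
# Sketch: is a DLA of column density n_hi (cm^-2) detectable in 21 cm
# absorption given an optical-depth sensitivity tau_lim? Uses
# N_HI = 1.823e18 * T_spin * integral(tau dv), with a Gaussian profile of
# FWHM dv_fwhm (km/s), so integral(tau dv) ~ 1.064 * tau_peak * dv_fwhm.
# f_cov is the source covering factor. All thresholds are illustrative.

def peak_optical_depth(n_hi, t_spin, f_cov=1.0, dv_fwhm=10.0):
    return f_cov * n_hi / (1.823e18 * t_spin * 1.064 * dv_fwhm)

def is_detected(n_hi, t_spin, tau_lim=0.05, f_cov=1.0):
    return peak_optical_depth(n_hi, t_spin, f_cov) > tau_lim

# A DLA-threshold column (2e20 cm^-2): detectable if the gas is cold,
# missed if it is warm -- so the yield encodes the average spin temperature.
print(is_detected(2e20, 100.0))   # True
print(is_detected(2e20, 1000.0))  # False
```

Counting detections over many sight lines with known $N_{\rm HI}$ statistics then constrains the population-averaged $T_{\rm spin}/f_{\rm cov}$, which is the statistical inference the surveys above exploit.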
We have demonstrated a statistical method for measuring the average spin temperature of the neutral ISM in distant galaxies, using the expected detection yields from future wide-field 21\,cm absorption surveys. The spin temperature is a crucial property of the ISM that can be used to determine the fraction of the cold ($T_{\rm k} \sim 100$\,K) and dense ($n \sim 100$\,cm$^{-3}$) atomic gas that provides sites for the future formation of cold molecular gas clouds and star formation. Recent 21\,cm surveys for \mbox{H\,{\sc i}} absorption in \mbox{Mg\,{\sc ii}} absorbers and DLAs towards distant quasars have yielded some evidence of an evolution in the average spin temperature that might reveal a decrease in the fraction of cold dense atomic gas at high redshift (e.g. \citealt{Gupta:2009, Kanekar:2014a}). By combining recent specifications for ASKAP with available information for the population of background radio sources, we show that strong statistical constraints (approximately $\pm10$\,per\,cent) on the average spin temperature can be achieved by carrying out a shallow 2-h per pointing survey of the southern sky between redshifts of $z = 0.4$ and $1.0$. However, we find that the accuracy to which we can measure the average spin temperature is ultimately limited by the accuracy to which we can measure the distribution of the covering factor, the $N_{\rm HI}$ frequency distribution function and the evolution of the radio source population as a function of redshift. By improving our understanding of these distributions we will be able to leverage the order-of-magnitude increases in sensitivity and redshift coverage of the future SKA telescope, allowing us to measure the evolution of the average spin temperature to much higher redshifts.
We study the solar energetic particle (SEP) event associated with the 2012 July 23 extreme solar storm, for which STEREO and the spacecraft at L1 provide multi-point remote sensing and in situ observations. The extreme solar storm, with a superfast shock and extremely enhanced ejecta magnetic fields observed near 1 AU at STEREO A, was caused by the combination of successive coronal mass ejections (CMEs). Meanwhile, energetic particles were observed by STEREO and near-Earth spacecraft such as ACE and SOHO, suggestive of a wide longitudinal spread of the particles at 1 AU. Combining the SEP observations with in situ plasma and magnetic field measurements, we investigate the longitudinal distribution of the SEP event in connection with the associated shock and CMEs. Our results underscore the complex magnetic configuration of the inner heliosphere formed by solar eruptions. The examinations of particle intensities, proton anisotropy distributions, element abundance ratios, magnetic connectivity and spectra also give important clues about particle acceleration, transport and distribution.
With the launch of the \emph{Solar Terrestrial Relations Observatory} \citep[STEREO;][]{Kaiser08}, we now have multi-point measurements of SEPs including those from the L1 spacecraft such as the \emph{Advanced Composition Explorer} \citep[ACE;][]{stone98} and the \emph{SOlar and Heliospheric Observatory} \citep[SOHO;][]{Domingo95}. Both STEREO spacecraft follow heliocentric orbits in the ecliptic plane, similar to the motion of the Earth, with their longitudinal separation increasing by about 45\degree${ }$ per year. This configuration, consisting of the two well-separated STEREO spacecraft and the near-Earth spacecraft including SOHO, ACE and Wind, has proved to be an excellent platform to observe SEP events originating from any location, and their associated processes, from multiple vantage points \citep[e.g.,][]{Liu11, Dresing12, Lario13, Richardson14, Cohen14, Gomez-Herrero15}. Several physical processes have been proposed to explain the wide longitudinal distribution of an SEP event in the interplanetary (IP) medium. One view invokes a broad coronal or IP shock \citep[e.g.,][]{Heras95, Lario98, Reames10b, Dresing12}. Particles observed at 1 AU in large SEP events are accelerated by CME-driven shocks and injected onto the magnetic field lines connecting the observers and the coronal shock \citep[e.g.,][]{Wild63, Cliver04, Zank07, Battarbee11, Cliver05, Kozarev15}; the angular size of a wide coronal shock can extend up to 300\degree${ }$ \citep{Cliver95}, and an IP shock at 1 AU can provide a large acceleration region with a longitudinal extent as large as 180\degree${ }$ \citep[e.g.,][]{Cane96, Liu08}. On the other hand, the EUV wave observed in the lower corona has been used as a proxy for the longitudinal extent of the CME during the initial expansion phase \citep[e.g.,][]{Torsti99, Rouillard12, Park13}. For the 2011 March 21 SEP event, \citet{Rouillard12} show an association between the longitudinal expansion of the EUV wave and the longitudinal spread of the SEP event at 1 AU. 
However, \citet{Prise14} suggest that the longitudinal spread of the SEP event is related to CME expansion at a higher altitude in the corona than is represented by the expansion of the EUV wave. Alternatively, particle diffusion \citep[e.g.,][]{Reid64, Droge10, Dresing12, Costa13} and other processes such as drifts \citep{Marsh13} and scattering are used to explain SEP transport in the corona and interplanetary medium. Another important factor that affects SEP transport and distribution is the complex magnetic configuration of the heliosphere and IP medium \citep[e.g.,][]{Richardson91, Liu11, Leske12, Masson12}. For example, interplanetary CMEs (ICMEs) from previous solar eruptions can perturb the interplanetary magnetic field (IMF) structure and thus modify the particle travel path in the interplanetary medium. \citet{Park13} illustrate four SEP events that were probably influenced by preceding CMEs occurring less than one day before the events. They suggest that the previous CME could influence the transport of the SEP event from the following CME. The preceding CME can, however, occur more than one day earlier. \citet{Liu14} conclude that the CME launched on 2012 July 19 from the same active region as the July 23 CMEs resulted in an IP medium with low solar wind density and radial magnetic fields, which may have affected the transport of the SEP event from the July 23 solar event. Information on the magnetic configuration of the heliosphere is needed to understand the particle transport and distribution. Coronal magnetic field extrapolations based on photospheric magnetic field measurements can give a zeroth-order characterization of the large-scale magnetic configuration \citep{Luhmann03}. In situ measurements of the solar wind plasma and magnetic field may provide a general context of the conditions through which SEPs propagate. 
Measurements of energetic particle anisotropy also provide important clues on the topology of the interplanetary magnetic fields \citep[e.g.,][]{Marsden87, Richardson91, Richardson94, Bieber02, Torsti04, Tan12, Leske12}. The solar storm on 2012 July 23 is of particular interest for space weather as it produced a superfast shock and extremely enhanced magnetic fields at 1 AU \citep[e.g.,][]{Baker13, Ngwira13, Russell13, Liu14, Riley15, Temmer15}. \citet{Liu14} suggest that the extreme solar storm was caused by CME-CME interactions, with the two fast CMEs separated by about 10-15 min. The merged CME structure had a speed of about 3050 km s${{}^{-1}}$ near the Sun and resulted in a solar wind speed of about 2246 km s${{}^{-1}}$ at 1 AU \citep{Liu14}. The magnetic field strength of the ejecta reached 109 nT at 1 AU. \citet{Temmer15} and \citet{Riley15} further examine the propagation of the shock and support the view of \citet{Liu14} that an earlier CME preconditioned the upstream solar wind for the propagation of the later eruptions. In this paper we study the longitudinal distribution of the SEP event in connection with the shock and CMEs associated with the 2012 July 23 solar storm. To our knowledge, a detailed examination of the SEP event, in particular its connection with the shock and eruptions, has been lacking. \citet{Bain16} perform ENLIL MHD modeling to understand the shock connectivity associated with the SEP events during the whole 2012 July period. Their simulations start from about 20 solar radii and do not include flux-rope magnetic fields in the ejecta. Our work includes near-Sun magnetic mappings and considers the role of the ejecta in the SEP event, which will enhance the interpretation of this event. First, through the examination of in situ plasma and magnetic field measurements, we give a scenario of the complex heliospheric configuration as a context to explain the SEP distribution. 
Second, we analyze the particle intensities, proton anisotropy distributions, element abundance ratios, magnetic connectivity and spectra and illustrate how these characteristics conform to the proposed scenario for the particle distribution. Finally, we summarize and discuss the results.
We have investigated the large SEP event associated with the 2012 July 23 extreme storm, which produced a superfast shock and extremely enhanced ejecta magnetic fields at STEREO A. Our analyses of the particle intensities, proton anisotropy distributions, element abundance ratios, magnetic connectivity and spectra provide important information on the origin, transport and longitudinal distribution of the large SEP event. STEREO A and B, which were separated by about 124\degree , detected the shock(s) and ICMEs during the period of the July 23 event. STEREO A observed the shock at 20:55 UT on July 23 followed by a complex ejecta composed of twin CMEs \citep{Liu14}. STEREO B observed an IP shock at 21:22 UT on July 23 with a following ICME between 18:20 UT on July 24 and 12:00 UT on July 25. The GS reconstruction gives a left-handed structure for the ICME at STEREO B, which is different from those at STEREO A \citep{Liu14}. Therefore, we suggest that the ICME at STEREO B is not the same as the ICMEs at STEREO A. It is unclear whether the shock observed at STEREO B is the same shock as observed at STEREO A. The shock(s) and ICMEs may play an important role in the transport and longitudinal distribution of the SEP event. Enhanced particle intensities were observed with a long duration at both STEREO and near-Earth spacecraft, indicative of a wide longitudinal spread of SEPs. The Fe/O ratios at all three spacecraft were lower than the reference coronal abundance ratio reported by \citet{Reames95}. Therefore, the widely distributed SEP event on July 23 may largely originate from acceleration by the CME-driven shock. STEREO A observed a typical SEP event with a source near the central meridian. The particle intensities rose rapidly after the solar eruption, and the peak intensities of ${>}$ 10 MeV particles observed at STEREO A were around five orders of magnitude higher than the background level. 
After the prompt initial rise of the particle intensities, a plateau behind the particle onset, a peak near the shock and a decrease inside the ICMEs were observed sequentially. These can be interpreted as follows: the earliest SEPs observed at STEREO A were accelerated by the shock near the Sun (initial rise), and could generate Alfv{\'e}n waves that would scatter the particles coming behind (plateau or streaming limit); the magnetic field fluctuations near the shock could trap the SEPs accelerated by the IP shock (peak); and the strong magnetic fields inside the ICMEs could form a barrier for the SEPs (decrease). The spectra in different phases display different characteristics of particle acceleration and propagation. For STEREO A, the spectrum in the rise phase showed an apparent intensity suppression over the 0.5-10 MeV energy range, caused by a streaming limit effect. The spectrum in the peak phase was much harder at all energies compared with the spectra in other phases. These spectra indicate that STEREO A passed through the stronger acceleration region of the shock. The spectrum in the ICME phase kept a similar shape, but intensities at all energies were about two orders of magnitude lower than in the peak phase. Note that the decay phase at STEREO A was in the reservoir region. Compared with the spectrum in the background phase, the hardened spectrum in the decay phase at ${>}$ 4 MeV energies may be accounted for by the properties of the reservoir region. STEREO B observed slow particle intensity enhancements after the shock arrived at STEREO B at 1 AU. The majority of particles at STEREO B may have been accelerated by the shock beyond 1 AU and then reached STEREO B along the IMF lines connecting to the shock from behind. Inside the ICME at STEREO B, we observed fluctuations in the particle intensities and obvious anisotropic proton distributions from ${\sim}$18:20 UT on July 24 to ${\sim}$3:00 UT on July 25. 
\citet{Leske14} suggest that the anisotropic flows may arise from magnetic mirroring and that the shock may be the source of the antisunward flow. Subsequently the proton distribution became isotropic and proton intensities increased until STEREO B entered the reservoir region on July 26. We suggest that the particle intensities at STEREO B primarily originated from the July 23 solar event. The spectra at STEREO B kept a similar shape in the ICME phase and the peak phase at ${>}$ 5 MeV energies, indicative of a connection between STEREO B and the weak acceleration region of the shock. Note that the time intervals of the peak phase and the decay phase were in the reservoir region. The steepened spectrum at ${>}$ 10 MeV energies in the decay phase may be attributed to continuing acceleration by a weakening shock, preferential leakage of high-energy particles or slower cross-field transport of lower-energy particles \citep{Reames13}. The particle intensities observed by SOHO had a prompt rise at ${>}$1 MeV energies and a long duration at ${>}$20 MeV energies. The long duration at high energies may be associated with the long period of strong acceleration at the shock. Compared with the spectrum near the Earth in the background phase, the spectrum in the rise phase had little variability at ${<}$5 MeV energies and the spectrum in the peak phase declined at ${<}$0.4 MeV energies. The Earth missed the shock(s) and ICMEs at 1 AU, indicating a poor connection between the shock and the Earth. The spectrum near the Earth in the decay phase was lower at all energies; during this time STEREO A and B were in the reservoir region. We calculate the longitudinal projections of STEREO A, B and the Earth at 2.5 R$_{\odot}$ using a Parker spiral field. The result shows that the magnetic footpoint of STEREO A was the closest to the active region. SEPs accelerated by the shock near the Sun can be injected onto the open magnetic field lines connecting to STEREO A. 
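The Parker-spiral footpoint mapping used above can be sketched as a one-line rotation of the observer's longitude by $\Omega_\odot (r - r_s)/V_{\rm sw}$. This is a minimal illustration, not the paper's mapping code; the solar wind speed and the example observer longitude are illustrative assumptions.

```python
import math

# Sketch: longitudinal projection of a spacecraft's magnetic footpoint back
# to 2.5 solar radii along an ideal Parker spiral. The footpoint sits
# eastward of the observer by Omega_sun * (r - r_src) / V_sw.

OMEGA_SUN = 2.0 * math.pi / (25.4 * 86400.0)  # sidereal rotation rate, rad/s
AU_KM = 1.496e8
RSUN_KM = 6.957e5

def footpoint_longitude(lon_sc_deg, r_au, v_sw_kms, r_src_rsun=2.5):
    """Longitude (degrees) of the Parker-spiral footpoint at r_src_rsun."""
    dr_km = r_au * AU_KM - r_src_rsun * RSUN_KM
    dphi = OMEGA_SUN * dr_km / v_sw_kms  # eastward offset, radians
    return (lon_sc_deg + math.degrees(dphi)) % 360.0

# Example: an observer at 1 AU, longitude 0, in a 400 km/s wind maps back
# roughly 60 degrees east of its own longitude.
print(round(footpoint_longitude(0.0, 1.0, 400.0), 1))
```

Comparing such footpoint longitudes with the flare longitude is what identifies STEREO A as the best-connected observer in the scenario above.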
We also determine the particle release time at STEREO A using a velocity dispersion analysis (VDA) method. The linear fit gives a release time of 02:18 UT ${\pm}$ 4 minutes (02:26 UT ${\pm}$ 4 minutes, adding 8 minutes to compare with the remote-sensing observations). The CMEs were clearly observed in STEREO B/COR1 at 02:26 UT (see Figure 2(b) in \citet{Liu14}). This likely suggests that the earliest particles arriving at STEREO A were accelerated by the shock formed in the lower corona. The magnetic footpoint of the Earth was relatively far from the active region. The release time of the particles observed by SOHO is ${\sim}$05:50 UT, about 3.5 hours after the solar eruption (${\sim}$ 02:20 UT). The delay of the particle release may be attributed to the time needed to form a shock and accelerate particles to high energies, the time for the shock to reach the field lines connected to the Earth, and the time for the accelerated particles to diffuse to the footpoints of those field lines.
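The VDA linear fit mentioned above rests on the relation $t_{\rm onset}(E) = t_{\rm release} + (L/c)\,\beta^{-1}$: plotting onset time against $1/\beta$ yields the release time as the intercept and the path length from the slope. A minimal sketch with synthetic, noiseless onsets (the assumed path length of 1.2 AU and the chosen $1/\beta$ values are illustrative, not the event's measurements):

```python
import numpy as np

# Sketch of a velocity dispersion analysis (VDA): fit onset times at
# several energies as t_onset = t_release + (L/c) * (1/beta), recovering
# the release time (intercept) and apparent path length L (slope).

MIN_PER_AU = 8.33  # light travel time: ~8.33 minutes per AU

def fit_vda(inv_beta, t_onset_min):
    slope, intercept = np.polyfit(inv_beta, t_onset_min, 1)
    path_au = slope / MIN_PER_AU  # slope is in minutes per unit (1/beta)
    return intercept, path_au     # (release time [min], path length [AU])

inv_beta = np.array([1.5, 2.0, 3.0, 5.0])  # 1/beta for several energies
t_onset = 0.0 + 1.2 * MIN_PER_AU * inv_beta  # synthetic: L = 1.2 AU, t_rel = 0

t_release, path = fit_vda(inv_beta, t_onset)
print(t_release, path)  # recovers a release time near 0 and L near 1.2 AU
```

The 8-minute correction quoted above is the same $L/c$ light-travel adjustment applied when comparing the in situ release time with remote-sensing imagery.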