\section{Introduction} When an object is imaged, variations of the refractive index in the medium, as well as optical alignment and manufacturing errors, distort the recorded image. This problem is typically solved using active or adaptive optics, where a deformable mirror, spatial light modulator (SLM) or a comparable device corrects the propagating wavefront. Typically, such systems are built with a separate optical arm to measure the distorted wavefront, because extracting the wavefront information from focal-plane images alone is not trivial. However, focal-plane wavefront sensing is an active research topic -- not only to simplify the optical design but also to eliminate the non-common path aberrations limiting the performance of high-contrast adaptive optics systems. The most popular methods for focal-plane wavefront sensing are perhaps the Gerchberg-Saxton (GS) error-reduction algorithm \cite{gerchberg1972, fienup82} and its variations, for instance \cite{green2003, burruss2010}. These algorithms are numerically very efficient, and it is easy to modify them for different applications. However, they suffer from limited accuracy, in particular because their iterative improvement procedure often stagnates at a local minimum. Various alternatives have been proposed, and a popular approach is to use general numerical optimization techniques to minimize an error function; examples include \cite{sauvage2007, riaud2012, paul2013}. However, when the number of optimization parameters is increased, the computational requirements generally rise unacceptably fast. The high computational costs are problematic, for instance, in astronomy; the largest future adaptive optics system is envisioned to have a wavefront corrector with $200\times 200$ elements \cite{verinaud2010}. The numerical issues can be significantly reduced if the unknown wavefront is sufficiently small. This is the case, for example, when calibrating the non-common path aberrations.
Previous works have exploited small-phase approximations \cite{giveon2007oe, meimon2010, martinache2013, smith2013}, but the implementations are generally not easily extended to wavefront correction at very high resolution, such as over $100\times 100$ elements. In this paper, we present two algorithms capable of extremely fast control of a wavefront correcting device with 20~000--30~000 degrees of freedom. The first algorithm, Fast \& Furious (FF), has been published before \cite{keller2012spie,korkiakoski2012spie1,korkiakoski2013}. It relies on small wavefront (WF) aberrations, pupil symmetries and phase diversity to achieve very fast WF reconstruction. However, FF approximates the pupil amplitudes as an even function, which does not necessarily match the real situation exactly. To improve the WF correction beyond the accuracy of FF, a natural way is to use approaches similar to the GS algorithm. However, the standard modifications of the algorithm are sensitive to the applied phase diversities, in particular when the pupil amplitudes are not known, and they do not work with iterative wavefront correction as in FF. Therefore, our second algorithm combines FF and GS in a way that can be used not only to correct the wavefront, but also to estimate the pupil amplitudes -- for which we make no assumptions. This comes at a cost in terms of noise sensitivity, instabilities and more demanding computational requirements. First, we illustrate the motivation and principles of the FF algorithm in Section~\ref{sec:ff}. Then, Section~\ref{sec:ffgs} describes the Fast \& Furious Gerchberg-Saxton (FF-GS) algorithm in detail. Section~\ref{sec:hardware} describes the used hardware, Section~\ref{sec:results} shows simulation and experimental results, and Section~\ref{sec:conclusions} presents the conclusions. \section{Fast \& Furious} \label{sec:ff} The Fast \& Furious algorithm is based on iteratively applying a weak-phase approximation of the wavefront.
The main principle of the weak-phase solution is presented in \cite{gonsalves2001}, but we found slight modifications \cite{keller2012spie} that lead to significantly better performance. The algorithm uses focal-plane images and phase-diversity information to solve for the wavefront, and the estimated wavefront is corrected with a wavefront correcting device. The correction step produces phase-diversity information and a new image that are again used to compute the following phase update. A schematic illustration of the algorithm is shown in Fig.~\ref{fg:algoscemaff}. \begin{figure*}[hbtp] \center \includegraphics[width=\textwidth]{fig1} \caption{Schematic illustration of the FF algorithm.} \label{fg:algoscemaff} \end{figure*} An important aspect of the algorithm is to maximize the use of the most recent PSF -- denoted as Image 1 in Fig.~\ref{fg:algoscemaff}. In the weak-phase regime, a single image is sufficient to estimate both the full odd wavefront component and the modulus of the even component of the focal-plane electric field. The phase diversity is needed only for the sign determination, since we assume the wavefront aberrations are small. This makes FF substantially less prone to noise and stability issues than approaches relying more on the phase-diversity information -- such as the FF-GS. Section~\ref{sec:ffdet} explains the details of the weak-phase solution, and Section~\ref{sec:ffpractical} discusses the practical aspects of implementing the algorithm. \subsection{Weak-phase solution} \label{sec:ffdet} A monochromatic PSF can be described by Fraunhofer diffraction and is given by the squared modulus of the Fourier transform of the complex electric field in the pupil plane, \begin{equation} \label{eq:pr} p = |\ft{A \exp(i\phi)}|^2, \end{equation} where $A$ is the pupil amplitude describing transmission and $\phi$ is the wavefront in the pupil plane.
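For intuition, Eq.~\eqref{eq:pr} is straightforward to evaluate numerically with a zero-padded FFT. The following sketch (Python/NumPy; the array sizes and the circular pupil are illustrative assumptions, not the configuration used in this work) computes the PSF of an unaberrated pupil:

```python
import numpy as np

def psf(A, phi, q=4):
    # Monochromatic PSF, Eq. (pr): p = |FT{A exp(i phi)}|^2.
    # The pupil-plane field is zero-padded to q*N samples per axis so the
    # focal plane is sampled q times finer than the diffraction limit.
    N = A.shape[0]
    E = np.zeros((q * N, q * N), dtype=complex)
    E[:N, :N] = A * np.exp(1j * phi)
    return np.abs(np.fft.fftshift(np.fft.fft2(E)))**2

# Unaberrated circular pupil: the familiar Airy-like pattern, with the
# peak intensity equal to (sum of pupil amplitudes)^2.
N = 64
yy, xx = np.mgrid[-N//2:N//2, -N//2:N//2]
A = (xx**2 + yy**2 <= (N//2 - 1)**2).astype(float)
p0 = psf(A, np.zeros_like(A))
```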
The second-order approximation of the PSF, in terms of the wavefront expansion, can be written as \begin{equation} \label{eq:p} p = |\ft{A + iA\phi - 0.5A\phi^2}|^2. \end{equation} The phase $\phi$ can be represented as a sum of even and odd functions, \begin{equation} \phi = \phi_e + \phi_o, \end{equation} and Eq.~\eqref{eq:p} can then be written as \begin{multline} \label{eq:p2} p = |\ftml{A + iA\phi_e + iA\phi_o \\ - 0.5A\phi^2_e - 0.5A\phi^2_o - A\phi_e\phi_o}|^2. \end{multline} We make the assumption that $A$ is even; all the terms here are then either even or odd, and the corresponding Fourier transforms are either purely real or purely imaginary with the same symmetries. We list the corresponding terms in Table~\ref{tb:sym}. \begin{table}[hbtp] \begin{center} \caption{Notations and symmetries} \label{tb:sym} \begin{tabular}{lccclcc} \hline \multicolumn{3}{c}{Aperture plane} & & \multicolumn{3}{c}{Fourier plane} \\ Term & Re/Im & Symmetry & & Term & Re/Im & Symmetry \\ \hline $A$ & real & even && $a$ & real & even \\ $A\phi_e$ & real & even && $v$ & real & even\\ $A\phi_o$ & real & odd && $iy$ & imaginary & odd\\ $A\phi_e^2$ & real & even && $v_2$ & real & even\\ $A\phi_o^2$ & real & even && $y_2$ & real & even\\ $A\phi_e \phi_o$ & real & odd && $iz$ & imaginary & odd \\ \hline \end{tabular}\\ \end{center} \end{table} Thus, all the introduced variables in Table~\ref{tb:sym} are purely real. The quantities $a$ and $v$ denote the Fourier transforms of the pupil function and of the even wavefront aberrations, and $y$ is the imaginary part of the transform of the odd aberrations, \begin{align} a &= \ft{A} \label{eq:a} \\ v &= \ft{A\phi_e} \label{eq:v} \\ y &= \imag{\ft{A\phi_o}} \label{eq:y}.
\end{align} Using these definitions, the second-order PSF approximation can be written as \begin{equation} p = |a + iv - y - 0.5 v_2 - 0.5 y_2 -i z|^2, \end{equation} which simplifies to \begin{equation} p = a^2 + v^2 + y^2 - 2ay + \xi, \end{equation} where the first four terms constitute the first-order approximation -- in terms of the wavefront expansion -- and the second-order component is \begin{multline} \label{eq:snd} \xi = 0.25 v_2^2 +0.25 y_2^2 +z^2 -av_2 -ay_2 +0.5 v_2 y_2 \\ + yv_2 +yy_2 - 2vz. \end{multline} The above equations are best illustrated by an example. We consider a purely sinusoidal wavefront having a peak-to-valley value of 1.0~rad and an rms error of 0.37~rad -- alternative examples can be seen, for instance, in \cite{perrin2003}. The wavefront and the resulting PSF image are shown in Fig.~\ref{fg:wfsample}. The WF causes two main side lobes and additional side lobes with significantly lower intensity; one pair is shown in Fig.~\ref{fg:wfsample}. \begin{figure}[hbtp] \center \includegraphics{fig2a} \includegraphics{fig2b} \caption{Left: a purely sinusoidal wavefront. Right: resulting image raised to the power of 0.2 to compress the dynamic range.} \label{fg:wfsample} \end{figure} Fig.~\ref{fg:ffradcuts1} shows a radial cut of the second-order component $\xi$ for the example wavefront. Its most significant terms are $av_2$ and $ay_2$, and therefore the perfect image ($a^2$) scaled by a negative coefficient approximates $\xi$ reasonably well. This term accounts for energy conservation by reducing the Strehl ratio \cite{keller2012spie}; the first-order approximation always has a Strehl ratio of 1.
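The accuracy of the expansion is easy to verify numerically. The sketch below (Python/NumPy; the circular pupil and the sampling are illustrative assumptions) compares the exact PSF of Eq.~\eqref{eq:pr} with the second-order model of Eq.~\eqref{eq:p} for a sinusoidal wavefront with a 1.0~rad peak-to-valley value, similar to the example above:

```python
import numpy as np

N, q = 64, 4
yy, xx = np.mgrid[-N//2:N//2, -N//2:N//2]
A = (xx**2 + yy**2 <= (N//2 - 1)**2).astype(float)
phi = 0.5 * np.sin(2 * np.pi * 5 * xx / N)   # 1.0 rad peak-to-valley

def padded_psf(field):
    # |FT{field}|^2 with zero-padding to q*N samples per axis.
    E = np.zeros((q * N, q * N), dtype=complex)
    E[:N, :N] = field
    return np.abs(np.fft.fft2(E))**2

p_exact = padded_psf(A * np.exp(1j * phi))               # Eq. (pr)
p_2nd = padded_psf(A + 1j * A * phi - 0.5 * A * phi**2)  # Eq. (p)

# The second-order PSF is virtually identical to the exact one, while the
# aberration lowers the peak (Strehl ratio) below the perfect value a^2.
rel_err = np.abs(p_2nd - p_exact).max() / p_exact.max()
```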
\begin{figure}[hbtp] \center \includegraphics{fig3} \caption{Radial cuts of the second order component $\xi$, defined in Eq.~\eqref{eq:snd}, and an inverted and scaled perfect PSF, $a^2$.} \label{fg:ffradcuts1} \end{figure} Thus, an improved first order approximation can be obtained by subtracting a scaled version of $a^2$ from the first order PSF approximation; the scaling coefficient needs to be adjusted such that the maxima of the perfect PSF and the approximation are the same. The radial cuts of the PSF approximations are illustrated in Fig.~\ref{fg:ffradcuts2}. The improved first-order approximation captures the main lobe and the first pair of side lobes quite well, but the secondary side lobes are missed. \begin{figure}[hbtp] \center \includegraphics{fig4} \caption{Radial cuts of the perfect PSF, its improved 1st order approximation and the 2nd order approximation. The latter is virtually identical to the perfect PSF.} \label{fg:ffradcuts2} \end{figure} However, for a wavefront with an rms error of less than one radian, the improved first-order approximation is often sufficient, and it can be formulated as \begin{equation} \label{eq:p1} p = a^2+y^2+v^2 -2ay -\left(1-\frac{\max{\left(p_n\right)}}{\max\left(a^2\right)}\right)a^2, \end{equation} where $p_n$ denotes the recorded image normalized to the same energy as the perfect PSF, \begin{equation} \label{eq:pnorm} p_n = p_m \frac{\sum_{x,y} a^2(x,y)}{\sum_{x,y} p_m(x,y)}, \end{equation} where $(x,y)$ denotes the detector pixel coordinates and $p_m$ is the raw image. Therefore, to simplify the notations, it is convenient to define a modified normalization of a PSF, \begin{equation} \label{eq:scaling} p' =p_n+\left(1-\frac{\max(p_n)}{\max\left(a^2\right)}\right)a^2, \end{equation} where the normalized image, $p'$, has the same maximum as $a^2$. To solve the wavefront using Eq.~\eqref{eq:p1}, we follow the procedure of \cite{gonsalves2001}, which is repeated here for convenience. 
The recorded image is normalized and broken into its even and odd parts. It then holds that \begin{align} p'_e &= a^2 +v^2 +y^2 \label{eq:pe} \\ p'_o &= 2ay. \label{eq:po} \end{align} The odd component of the wavefront is then easily reconstructed by first solving $y$ using Eq.~\eqref{eq:po}, and then using the inverse of Eq.~\eqref{eq:y}. Due to noise and approximation errors, however, the direct application of Eq.~\eqref{eq:po} would result in division by excessively small values. We compensate for this by using a regularization as in \cite{gonsalves2001}, \begin{equation} \label{eq:yreg} y = \frac{a p'_o}{2a^2 + \epsilon}, \end{equation} where $\epsilon$ is a small number. We found it best to set $\epsilon$ to a value of 50--500 times the measured noise level of the recorded images. To compute the even wavefront component, we need additional information in the form of phase diversity. We assume that a second, previously recorded image is known, and that it was obtained with a known phase change relative to $p$. The even component of its normalized version can be written as \begin{equation} \label{eq:pe2} p'_{e2} = a^2 +(v+v_d)^2 +(y+y_d)^2, \end{equation} where $v_d$ and $y_d$ are the even and odd Fourier components of the phase diversity, obtained in analogy to Eqs.~\eqref{eq:v} and \eqref{eq:y}. Using Eqs.~\eqref{eq:pe} and \eqref{eq:pe2}, we can solve for $v$ (the even phase component in Fourier space) and write it as \begin{equation} \label{eq:vs} v_s = \frac{p'_{e2} - p'_e -v_d^2 -y_d^2 -2yy_d}{2v_d}. \end{equation} However, this formula is highly sensitive to noise due to the subtraction of two very similar images.
Therefore, as also in \cite{gonsalves2001}, we use Eq.~\eqref{eq:vs} only to compute the signs of $v$; a more robust form follows from the use of Eq.~\eqref{eq:pe}, \begin{equation} \label{eq:vsolv} v = \text{sign}\left(v_s \right) \left|p'_e - a^2 - y^2\right|^{0.5}, \end{equation} where we use the absolute value to avoid taking the square root of negative values, which occur due to noise and approximation errors; this was observed to work better than zeroing the negative values. The even wavefront component is then computed in the same way as the odd one, by using Eq.~\eqref{eq:vsolv} and the inverse of Eq.~\eqref{eq:v}. \subsection{Practical aspects} \label{sec:ffpractical} To use the FF algorithm as presented here, it is necessary to have a wavefront correcting device -- a deformable mirror or spatial light modulator -- whose phase response is known. It is then possible to translate the desired phase change to appropriate wavefront-corrector command signals. An appropriate mapping can be created using the standard adaptive optics calibration procedures as in \cite{korkiakoski2012spie1} or, as we do here, with the help of the dOTF-based calibration method \cite{korkiakoski2013}. The method is based on determining the SLM phase (and transmission) response when the control signal is changed in different pixel blocks. This data is then used to find an affine transform that maps the location of each SLM pixel to its physical location in the pupil plane. We also assume that the collected images are sufficiently sampled: without aberrations, the full width at half maximum of the PSF has to be at least two pixels. If the detector is undersampled, aliasing prevents using the intensity images as described in Section~\ref{sec:ffdet}. Large oversampling is also not desired since it increases the computational requirements. The phase array, $\phi$, needs to be sampled with sufficient resolution to also model the pupil aperture, $A$, with good accuracy.
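Collecting Eqs.~\eqref{eq:pe}--\eqref{eq:vsolv}, one weak-phase reconstruction step can be sketched as follows (Python/NumPy). This is an illustration only: it assumes centered arrays whose even/odd split can be implemented by flipping both axes, a scalar $\epsilon$, and the convention that the diversity image equals the reference image plus the known diversity.

```python
import numpy as np

def ff_weak_phase(p1, p2, a, v_d, y_d, eps):
    # One FF weak-phase solve in the focal plane, Eqs. (pe)-(vsolv).
    # p1, p2   : normalized images p' (reference and diversity image)
    # a        : FT of the (even) pupil amplitudes, real
    # v_d, y_d : even/odd Fourier components of the phase diversity
    # eps      : regularization, ~50-500x the measured image noise level
    flip = lambda x: x[::-1, ::-1]          # point reflection (assumed centering)
    p1_e = 0.5 * (p1 + flip(p1))            # even part of p'
    p1_o = 0.5 * (p1 - flip(p1))            # odd part of p'
    p2_e = 0.5 * (p2 + flip(p2))            # even part of the diversity image

    # Odd focal-plane component with regularized division, Eq. (yreg).
    y = a * p1_o / (2 * a**2 + eps)

    # Noise-sensitive difference formula, used only for the sign of v
    # (cf. Eq. (vs); the sign convention follows p'_2 = |e + d|^2)...
    v_s = (p2_e - p1_e - v_d**2 - y_d**2 - 2 * y * y_d) / (2 * v_d + 1e-12)
    # ...while the modulus comes from the single image, Eq. (vsolv).
    v = np.sign(v_s) * np.abs(p1_e - a**2 - y**2)**0.5
    return v, y

# Symmetry smoke test with synthetic centered arrays (illustrative only):
# y must come out odd and v even, as in Table (tb:sym).
rng = np.random.default_rng(0)
p1 = rng.random((8, 8)); p2 = rng.random((8, 8))
g = rng.random((8, 8)); a = g + g[::-1, ::-1]      # even
h = rng.random((8, 8)); y_d = h - h[::-1, ::-1]    # odd
v, y = ff_weak_phase(p1, p2, a, a, y_d, eps=1e-3)
```

Only the sign of $v_s$ is used, so the small constant added to the denominator merely guards against division by zero where $v_d$ vanishes.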
The values we use ($170\times 170$) are sufficient for our purpose; we expect no significant sampling errors when implementing Eqs.~\eqref{eq:v} and \eqref{eq:y} as fast Fourier transforms (FFTs). However, we need to zero-pad the recorded images such that the FFTs correctly implement the Fourier transforms in Eqs.~\eqref{eq:a}, \eqref{eq:v} and \eqref{eq:y}; the sampling of the arrays $a$, $v$ and $y$ needs to match the pixels of the camera. The amount of zero-padding is determined by the sampling coefficient, \begin{equation} \label{eq:q} q = \frac{N_\text{arr}}{N_\text{pup}}, \end{equation} where $N_\text{arr}$ is the dimension of the FFT array and $N_\text{pup}$ is the size of $\phi$. We use the dOTF method as discussed in \cite{korkiakoski2013} to find $q$. The method is based on the use of a localized phase diversity at the pupil border, which makes it possible to create, in a very straightforward way, an array where the pupil shape can be directly seen. The parameter $q$ is calculated by comparing the sizes of the pupil and the dOTF array. When performing the FFT to obtain the phase from $v$ and $y$, we combine the two real-valued FFTs into a single complex FFT \cite{keller2012spie}, \begin{equation} \label{eq:phisol} A\phi = \ift{w\left(v + iy\right)}, \end{equation} where $w$ is a windowing function; it implements the filtering necessary for numerical regularization -- typically, high spatial frequencies are detected with higher uncertainty, and they need to be damped to obtain feasible reconstructions. The regularization is also needed with noiseless images since the weak-phase solution provides only approximate wavefronts. In this work, we have used a concave parabola, whose width can be adjusted depending on the noise level. An optimal filter is the subject of future studies. To implement the iterative feedback loop that minimizes the wavefront error, we use standard leaky-integrator control.
The wavefront-corrector shape at time step $k$ is calculated as \begin{equation} \label{eq:feedback} \theta_k = g_l\theta_{k-1} -gA\phi_{k-1}, \end{equation} where $g_l$ is the leaky gain, $\theta_{k-1}$ is the previous wavefront-corrector shape, $g$ is the integrator gain, and $A\phi_{k-1}$ is the most recent small-phase solution, computed from the two most recent images using Eq.~\eqref{eq:phisol}. The integrator gain, $g$, determines the tradeoff between convergence speed and stability; a small gain results in slow convergence, while a high gain means that image noise causes larger errors after the algorithm has converged. An excessively small gain would also make the use of phase-diversity information difficult. The leaky gain is another regularization parameter. A value of $g_l=1$ would be equal to a standard integrator, and it would be optimal in the case of no errors, i.e., if the equation $p=|\ft{A\exp(i\phi)}|^2$ described the system perfectly. Values $g_l < 1$ introduce wavefront aberrations at every time step, preventing the system from reaching a perfect state. However, this also prevents creeping instabilities from destroying the performance. The result is a stable convergence to a level with a slightly higher residual wavefront error. \section{Fast \& Furious Gerchberg-Saxton} \label{sec:ffgs} The obvious limitation of the FF algorithm is the assumption of even pupil amplitudes. This holds reasonably well for most optical systems with a circular pupil, possibly with a central obstruction. However, to achieve optimal focal-plane wavefront sensing with a high-order system that does not suffer from other limiting factors, it is necessary to consider imaging models where the pupil amplitudes can have an arbitrary shape. We have approached the problem by combining the FF-style weak-phase solution and a version of the Gerchberg-Saxton (GS) algorithm. The new algorithm is referred to as FF-GS in the following.
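Both FF and FF-GS apply their phase estimates through the feedback of Eq.~\eqref{eq:feedback}. A minimal sketch of that update and of the effect of the leak (Python/NumPy; the gain values are illustrative assumptions, not the values used in this work):

```python
import numpy as np

def corrector_update(theta_prev, Aphi, g=0.3, g_l=0.98):
    # Leaky integrator, Eq. (feedback): theta_k = g_l*theta_{k-1} - g*A*phi_{k-1}.
    # g and g_l here are illustrative assumptions only.
    return g_l * theta_prev - g * Aphi

# With a constant residual Aphi, the leak (g_l < 1) makes the corrector
# settle at -g/(1 - g_l) * Aphi instead of cancelling the aberration
# exactly: a slightly higher residual error is traded for stability.
theta = np.zeros((4, 4))
Aphi = np.ones((4, 4))
for _ in range(500):
    theta = corrector_update(theta, Aphi)
```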
\begin{figure*}[hbtp] \center \includegraphics[width=\textwidth]{fig5} \caption{Schematic illustration of the FF-GS algorithm.} \label{fg:algoscema} \end{figure*} As with the GS algorithm, we maintain an iteratively updated estimate of the unknown quantities -- in our case the pupil amplitudes. The pupil amplitude estimate, phase diversities and the recorded images are used to calculate the focal-plane field; it requires three Fourier transforms and the use of the weak-phase approximation. Then, a Fourier transform is used to propagate the field to the pupil plane. The propagation results in improved estimates for the pupil-plane amplitudes and the wavefront. The schematic illustration of the FF-GS algorithm is shown in Fig.~\ref{fg:algoscema}. The FF-GS computation procedure forms a loop that could be iterated several times to obtain improved wavefront estimates. However, we found that in practice it is sufficient to run only two iterations before applying the wavefront correction with the obtained estimate. As with FF, the wavefront correction yields another image and phase-diversity information, which are used to compute the following correction step. Next, Section~\ref{sec:ffgsdet} describes the algebra that we use to compute the focal-plane electric field during the FF-GS procedure. Then, Section~\ref{sec:ffgsiters} explains the details of the iterative computation, and Section~\ref{sec:ffgspractical} discusses practical issues we face when implementing the algorithm. \subsection{A more general weak-phase solution} \label{sec:ffgsdet} In this section, we assume that an approximation of the pupil amplitudes (denoted here as $A$) is known; as a first step, a top-hat function is sufficient in the case of an unobstructed, round pupil. The estimates are updated iteratively, and we will make no restrictive assumptions about $A$. We assume that three images are collected and that the corresponding phase-diversity information is known. 
The images are normalized according to Eq.~\eqref{eq:scaling}, and it holds approximately that \begin{align} p'_1&=|e_1|^2= |\ft{A + iA\left(\phi\right)}|^2 \label{eq:ffgs1} \\ p'_2&=|e_2|^2=|\ft{A+iA\left(\phi+\phi_{d1}\right)}|^2 \label{eq:ffgs2}\\ p'_3&=|e_3|^2=|\ft{A+iA\left(\phi+\phi_{d2}\right)}|^2, \label{eq:ffgs3} \end{align} where $e_1$, $e_2$ and $e_3$ are the electric fields corresponding to the images, $\phi$ is the unknown pupil-plane phase, and $\phi_{d1}$ and $\phi_{d2}$ are the known phase diversities applied to the successively recorded images. When counting the number of unknown variables, one can see that it might be possible to solve for the unknown phase using only two images, with Eqs.~\eqref{eq:ffgs1} and \eqref{eq:ffgs2}. However, we found the following procedure with three images to be better. In addition to making the algebra easier, it is also significantly more robust, since more information is available to compensate for the errors in the estimate of $A$. Using even more images could potentially improve the results further, but studying this is outside the scope of this paper. Instead of solving the phase directly, we use the phase-diversity information to find the electric field at the focal plane. The electric field corresponding to Eq.~\eqref{eq:ffgs1} can be written as \begin{equation} \label{eq:elf} e_1 =\left(a_r +\alpha\right) +i\left(a_i+\beta\right), \end{equation} where \begin{align} a_r &= \mathrm{Re}{\ft{A}} \nonumber\\ a_i &= \imag{\ft{A}} \nonumber\\ \alpha &= -\imag{\ft{A\phi}} \nonumber\\ \beta &= \mathrm{Re}{\ft{A\phi}} \nonumber. \end{align} The unknown coefficients $\alpha$ and $\beta$ can be found by solving the equations that follow when subtracting Eq.~\eqref{eq:ffgs1} from Eqs.~\eqref{eq:ffgs2} and \eqref{eq:ffgs3}.
The subtraction cancels all the non-linear terms and results in linear equations, \begin{equation} \label{eq:focacoefs} \left[\begin{array}{cc} 2\alpha_{d1} & 2\beta_{d1} \\ 2\alpha_{d2} & 2\beta_{d2} \end{array} \right] \left[\begin{array}{c} \alpha \\ \beta \end{array} \right] = \left[\begin{array}{c} c_1 \\ c_2 \end{array} \right], \end{equation} where \begin{align} \alpha_{d1} &= -\imag{\ft{A\phi_{d1}}} \nonumber \\ \beta_{d1} &= \mathrm{Re}{\ft{A\phi_{d1}}} \nonumber \\ \alpha_{d2} &= -\imag{\ft{A\phi_{d2}}} \nonumber \\ \beta_{d2} &= \mathrm{Re}{\ft{A\phi_{d2}}}, \nonumber \end{align} and \begin{align} \label{eq:psubst} c_1 &= p'_2-p'_1 -\left(2a_r\alpha_{d1} +2a_i\beta_{d1} +\alpha_{d1}^2+\beta_{d1}^2\right)\nonumber \\ c_2 &= p'_3-p'_1 -\left(2a_r\alpha_{d2} +2a_i\beta_{d2} +\alpha_{d2}^2+\beta_{d2}^2\right). \end{align} We solve for the coefficients $\alpha$ and $\beta$ by inverting the $2\times 2$ matrix in Eq.~\eqref{eq:focacoefs}. The matrix has full rank if the applied phase diversities are linearly independent. We found this generally to be the case when applying the algorithm, and therefore it was unnecessary to use any regularization methods. The coefficients can then be substituted into Eq.~\eqref{eq:elf} to compute the focal-plane electric field. However, this estimate would again be very prone to noise due to the subtraction of similar images, as shown in Eq.~\eqref{eq:psubst}. Therefore, it is better to use the directly measured modulus and only the phase information following from Eq.~\eqref{eq:elf}. This gives a more robust focal-plane estimate, \begin{equation} \label{eq:focaplfix} e_1 =\left|p'_1\right|^{0.5}\exp\left[i\arg((a_r+\alpha)+i(a_i+\beta))\right]. \end{equation} The following section explains how this is combined with the GS approach. \subsection{Iterative computation procedure} \label{sec:ffgsiters} As the previous section indicates, we first record three images.
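Since Eq.~\eqref{eq:focacoefs} is a $2\times 2$ system per focal-plane pixel, it can be inverted in closed form. A minimal sketch (Python/NumPy; the constant test arrays are synthetic and purely illustrative):

```python
import numpy as np

def solve_alpha_beta(a_d1, b_d1, a_d2, b_d2, c1, c2):
    # Closed-form per-pixel inverse of the 2x2 matrix in Eq. (focacoefs).
    # The determinant is non-zero when the phase diversities are
    # linearly independent.
    det = 4 * (a_d1 * b_d2 - b_d1 * a_d2)
    alpha = 2 * (b_d2 * c1 - b_d1 * c2) / det
    beta = 2 * (a_d1 * c2 - a_d2 * c1) / det
    return alpha, beta
    # The result would then enter the robust field estimate of
    # Eq. (focaplfix), e.g.
    # e1 = np.sqrt(p1n) * np.exp(1j * np.angle((a_r + alpha) + 1j*(a_i + beta)))

# Consistency check with constant synthetic arrays (alpha=0.7, beta=-0.2).
one = np.ones((5, 5))
c1 = 2 * (2 * one * 0.7 + one * (-0.2))
c2 = 2 * (one * 0.7 + 2 * one * (-0.2))
alpha, beta = solve_alpha_beta(2 * one, one, one, 2 * one, c1, c2)
```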
The phase diversity can be chosen freely, as long as its peak-to-valley value stays below one radian. We use the FF algorithm for the initial steps. Then, using the collected data, we perform the computations to calculate a new wavefront update. The wavefront update is applied, and another image with different phase-diversity information is collected. The three most recent images are then used again to calculate the next phase correction to be applied. We continue until the algorithm converges. The computation consists of a cycle of two successive GS-like iterations. The complete process consists of the following steps: \begin{enumerate} \item Take the pupil amplitudes, $A$, estimated at the previous iteration. Use the procedure in Section~\ref{sec:ffgsdet} to calculate the focal-plane electric field corresponding to $p_2$, the second most recent image. This is done by solving $\alpha$ and $\beta$ in Eq.~\eqref{eq:focacoefs} and using the formula % \begin{displaymath} e_2 = \left|p'_2\right|^{0.5}\exp\left[ i\arg\left((a_r+\alpha)+i(a_i+\beta)\right)\right]. \end{displaymath} % Here, the images could be rearranged appropriately: $p_2$ should be the reference and the phase diversities interpreted accordingly. However, we found $\arg(e_2)\approx\arg(e_1)$ to be a sufficient approximation. \item Compute the pupil-plane electric field corresponding to the image $p_2$. This is done by Fourier transforming the focal-plane field, \begin{displaymath} E_2 = \ift{e_2}. \end{displaymath} \item Update the current estimate of the pupil amplitudes: \begin{displaymath} A = |E_2|. \end{displaymath} \item With the new pupil amplitude estimate, repeat the procedure in Section~\ref{sec:ffgsdet} to compute the electric field for image $p_1$, the most recent image. \item Compute the pupil-plane field corresponding to image $p_1$, \begin{displaymath} E_1 = \ift{e_1}.
\end{displaymath} \item Calculate the final estimates of the phase and pupil amplitudes, \begin{align} \phi &= \arg(E_1) \label{eq:phigs} \\ A &= |E_1|. \label{eq:ags} \end{align} \end{enumerate} The estimates of $\phi$ are then used in the feedback loop in the same way as with the FF algorithm. \subsection{Practical aspects} \label{sec:ffgspractical} The issues faced in practice by an implementation of the FF-GS differ slightly from those of the simple FF. Since the pupil amplitudes are not constrained, the imaging model is potentially much more accurate. Indeed, in practice we found that it was not necessary to apply any windowing filters to dampen the high spatial frequencies in the wavefronts reconstructed with FF-GS. The normal feedback loop, as described by Eq.~\eqref{eq:feedback}, was sufficient regularization for optimal performance. It was also not necessary to introduce any ad-hoc restrictions to constrain the pupil amplitudes. The values obtained from Eq.~\eqref{eq:ags} at any single time step do deviate significantly from the actual pupil amplitudes, but this appears to be a minor issue for the convergence of the algorithm. Moreover, averaging the values of $A$ over several iterations produces unbiased results. However, the heavier reliance on the phase-diversity information makes the algorithm more prone to stability issues. To increase the stability, we found it helpful to introduce other ad-hoc techniques. In the feedback loop, we apply amplitude gains: just as formulated in Eq.~\eqref{eq:feedback}, we multiply the applied phase correction -- obtained from Eq.~\eqref{eq:phigs} -- by the estimated amplitudes. This helps to prevent abrupt phase changes at points where $|E_1|$ has a very small value; at those points, the determination of the complex phase is likely to fail. In fact, we also set $\phi$ to zero at points where $|E_1|<0.3$. This reduces the speed of convergence, but has no impact on the accuracy of the converged solution.
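The two GS-like iterations of the cycle can be sketched as follows (Python/NumPy). The callable \texttt{focal\_field} is a hypothetical stand-in for the weak-phase solution of Section~\ref{sec:ffgsdet}, and the sketch assumes that the amplitudes are normalized so that a clear pupil has unit amplitude when applying the $|E_1|<0.3$ threshold:

```python
import numpy as np

def ffgs_cycle(A, p1n, p2n, focal_field):
    # One FF-GS cycle (steps 1-6). focal_field(A, p_norm) is assumed to
    # return the focal-plane field via Eqs. (focacoefs)-(focaplfix).
    # Steps 1-3: second most recent image -> pupil amplitude update.
    E2 = np.fft.ifft2(focal_field(A, p2n))
    A = np.abs(E2)
    # Steps 4-6: newest image -> final phase and amplitude estimates.
    E1 = np.fft.ifft2(focal_field(A, p1n))
    phi, A = np.angle(E1), np.abs(E1)
    # Guard from the text: zero the phase where |E1| < 0.3, since arg()
    # is unreliable there (amplitude normalization is an assumption).
    phi[A < 0.3] = 0.0
    return phi, A

# Smoke test with a stand-in focal_field: an unaberrated unit pupil
# should yield zero phase and unit amplitudes.
phi, A_est = ffgs_cycle(np.ones((8, 8)), None, None,
                        lambda A, pn: np.fft.fft2(A))
```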
Finally, additional regularization is used in case of numerical issues when the algorithm has converged. We observed that occasionally, every 10th iteration or so, the FF-GS algorithm produces wildly incorrect results. This is related to the fact that the solution of Eq.~\eqref{eq:focacoefs} requires phase-diversity information. Once the applied phase corrections become very small, the corresponding diversity information becomes unreliable. To make sure that such violent phase changes do not cause trouble, we simply restrict the magnitude of the applied phase change. If the rms value of the change exceeds the mean of the ten previous changes, we scale it down to the mean value. \section{Hardware used} \label{sec:hardware} To test the algorithms, we created a simple setup that consists of one spatial light modulator (SLM) and an imaging camera. The former is a reflective device (BNS P512) having a screen of $512\times 512$~pixels, a fill factor of 83.4\% and a pixel pitch of $15\times 15~\mu$m. The SLM is able to create a phase change of $2\pi$~radians at the used wavelength, and its control signal is coded with 6 bits. The imaging camera is a Basler piA640-210gm, which has a resolution of $648\times 488$~pixels and a dynamic range of 12 bits. As a light source, we use a fibre-coupled laser diode (Qphotonics QFLD-660-2S) operating at a wavelength of 656~nm. A schematic figure of the setup is shown in Fig.~\ref{fg:setupschema}. The beam first passes through a diaphragm, and it is then collimated such that it hits an area of $245\times 245$~pixels on the SLM. The device reflects several sub-beams due to strong diffraction effects, and we use only the zeroth-order beam; it is directly imaged onto the camera (beam numerical aperture NA=0.037). The other sub-beams cause no adverse effects. Before and after the SLM, we place two linear polarizers that are rotated such that their orientation matches that of the SLM.
\begin{figure}[hbtp] \center \includegraphics[width=\columnwidth]{fig6} \caption{Schematic view of the used hardware. The lenses are standard 1-inch doublets. The beam diameter is 3.7~mm at the SLM.} \label{fg:setupschema} \end{figure} The SLM phase and transmittance responses are measured with the differential optical transfer function (dOTF) method as described in \cite{korkiakoski2013}. The resulting measurements are shown in Fig.~\ref{fg:slmrespo}. The maximum control voltage causes a $\sim$$2\pi$ phase shift at 656~nm. \begin{figure}[hbtp] \center \includegraphics[width=\columnwidth]{fig7} \caption{SLM phase and amplitude responses. The dots indicate individual measurements. The lines show 5th-order polynomial fits to the data.} \label{fg:slmrespo} \end{figure} The SLM couples the transmittance and the phase change; the transmittance gradually increases when a larger phase shift is introduced with the SLM. For phase changes of less than one radian, the transmittance is $\sim$25\% lower compared to what is seen when a change of more than $\sim$4~rad is introduced. To create a mapping between the pupil-plane coordinates and the SLM pixels, we again use the dOTF method and affine transforms as described in \cite{korkiakoski2013}. This time, however, we make the dOTF recording at best focus to avoid issues with the non-telecentric beam. To compensate for signal-to-noise problems, we take more images to average out the noise: it takes $\sim$2~hours to create one dOTF array. This also makes the process more vulnerable to internal turbulence in the setup; the recorded images are blurred such that the low spatial frequencies in the images become distorted, and we have to mask out the center of the obtained dOTF arrays. Fig.~\ref{fg:dotf} shows the modulus of the best-focus dOTF array recorded with the whole SLM at zero control voltage.
Although the center of the array is masked, it is still perfectly usable for the calibration process of \cite{korkiakoski2013}, and we can accurately determine the PSF sampling as defined by Eq.~\eqref{eq:q}: $q=3.76\pm0.01$. \begin{figure}[hbtp] \center \includegraphics[width=\columnwidth]{fig8} \caption{The modulus of an averaged dOTF array.} \label{fg:dotf} \end{figure} The resulting SLM calibration is valid as long as the position of the SLM stays fixed with respect to the imaging camera, and the phase response of the device does not change. In our setup, we found this to be the case for at least one month -- from the initial calibration to the last measurements reported in this paper. As discussed in \cite{korkiakoski2013}, the resolution of the controlled phase is a free parameter when calculating the affine mapping for the SLM calibration. We obtained good results when using $\sim$30\% fewer pixels than are actually used by the SLM. Thus, we selected the size of the controlled phase array as $N_\text{pup}=170$. The resulting FFT array dimension is then $N_\text{arr}=640$. When recording images for the FF and FF-GS algorithms, we use the same high-dynamic-range (HDR) imaging approach as in \cite{korkiakoski2013}. Several snapshot images are taken with different exposure times, and we combine the images to extend the dynamic range and suppress noise. Each single-exposure component in one HDR image is an average over 40--200 images, and we used in total 16 exposure times (2, 5, 12, 25, 50, 100, 200, 400, 750, 1100, 1450, 1800, 2150, 2500, 2850 and 3200~ms). It took $\sim$15~s to record one HDR image. Increasing the integration time even further does not significantly improve the performance of the wavefront correction algorithms. Although the imaging camera has a resolution of $640\times 480$~pixels, we use only a smaller area for convenience. After acquiring the image, we crop an array of $320\times 320$~pixels such that the PSF maximum is in the center.
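The HDR combination and the cropping step can be illustrated with the following Python sketch. This is a generic recipe (scale each exposure by its time, reject saturated pixels, average the valid samples, then crop around the PSF maximum) and not necessarily the exact implementation used here; the function names and the 12-bit saturation threshold are our assumptions.

```python
import numpy as np

def combine_hdr(frames, exposures, full_well=4095):
    """Generic HDR combination: reject saturated pixels, rescale each
    frame by its exposure time, and average the valid samples per pixel."""
    frames = np.asarray(frames, dtype=float)
    times = np.asarray(exposures, dtype=float)[:, None, None]
    valid = frames < full_well            # saturated pixels are discarded
    scaled = frames / times               # counts per unit exposure time
    weight = valid.sum(axis=0)
    return np.where(weight > 0,
                    (scaled * valid).sum(axis=0) / np.maximum(weight, 1),
                    0.0)

def crop_around_max(img, size=320):
    """Crop a size x size window centred on the brightest pixel,
    clipped to the detector edges."""
    iy, ix = np.unravel_index(np.argmax(img), img.shape)
    y0 = int(np.clip(iy - size // 2, 0, img.shape[0] - size))
    x0 = int(np.clip(ix - size // 2, 0, img.shape[1] - size))
    return img[y0:y0 + size, x0:x0 + size]
```

In a real pipeline each `frames[i]` would itself already be the average of the 40--200 snapshots mentioned above.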
Outside this region, we did not observe any significant amount of light. To detect all the spatial frequencies corrected by the controlled phase array of $170\times 170$~pixels, however, we would need an array of $640\times 640$~pixels. Thus, it is possible that our control algorithms introduce high spatial frequencies that scatter light outside of the observed image. However, with FF, this is mitigated by the applied low-pass filter. With FF-GS, we observed no stability issues with the high spatial frequencies, although no explicit regularization measures were taken. \section{Results} \label{sec:results} This section illustrates the results of the FF and FF-GS algorithms. We consider only a single case: the wavefront to be corrected is what the camera sees at the beginning, when no voltage is applied to the SLM. We call this the initial situation. We concentrate on the ultimate accuracy the algorithms can achieve in a low-noise regime. Our earlier publication \cite{korkiakoski2012spie1} describes in more detail the FF performance in the presence of more noise. We showed that the algorithm works, but only the lower spatial frequencies can be reconstructed. Now, we study a case that is typical for a high-order adaptive optics test bench, with the noise level chosen such that FF-GS offers an advantage over FF -- with higher noise, FF is more robust. Section~\ref{sec:study} illustrates the properties of the converged algorithms as measured with our test setup. Section~\ref{sec:meassimu} shows a more detailed comparison of the measurements and simulations with the actual hardware modeled in sufficient detail. Finally, Section~\ref{sec:errbud} presents a simulation-based error budget that quantifies the effects of different error sources.
\subsection{Performance of the algorithms} \label{sec:study} For the results shown here, we have optimized the free parameters (FF regularization coefficient $\epsilon$, the width of the FF filtering window $w$, leaky gain $g_l$, loop gain $g$) such that the converged WF quality is best; the convergence speed has lower priority. The width of the filtering window used by the FF algorithm was chosen to be $320\times 320$~pixels, the same as the recorded images. However, during the first 10 iterations, we used a narrower window (width of 80~pixels) to avoid introducing errors at the high spatial frequencies. After the lower spatial frequencies are corrected, it is safe to increase the window size. The optimal values for the feedback loop gains were $g=0.3$ and $g_l=0.97$ (with FF) or $g_l=0.999$ (with FF-GS), and $\epsilon$ was 250 times the determined noise level in the images. For the FF algorithm, we also need to determine the pupil amplitudes, $A$. We use a perfect top-hat function having a size of $N_\text{pup} \times N_\text{pup}$, where the choice of $N_\text{pup}$ is explained in Section~\ref{sec:hardware}. It might be possible to improve the results by adjusting $A$ based on the actual pupil shape, but this is outside the scope of this paper. With these settings, both FF and FF-GS converge in 20--50 iterations to a situation where the Strehl ratio has increased from $\sim$75\% to $\sim$99\% (a more detailed analysis can be found in Section~\ref{sec:meassimu}). After the convergence, the control law, Eq.~\eqref{eq:feedback}, gives phase updates that are negligible compared to the shape of the wavefront corrector, $\theta_k$. However, we run the algorithm for a total of 400 iterations to make sure that no creeping instabilities occur. Fig.~\ref{fg:slmconv} illustrates the typical wavefronts we obtained after the convergence. Due to the applied low-pass filter, FF yields smoother wavefronts than FF-GS; otherwise, the two match well.
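The role of the loop gain $g$ and the leaky gain $g_l$ can be illustrated with a simple leaky-integrator sketch in Python. This is our simplified reading of a feedback law of the form $\theta_{k+1} = g_l\,\theta_k - g\,\Delta\hat\phi_k$ (the exact Eq.~\eqref{eq:feedback} is defined earlier in the paper), using the gains quoted above:

```python
import numpy as np

def leaky_update(theta, phase_estimate, g=0.3, g_l=0.97):
    """One step of a leaky-integrator feedback law: the corrector shape
    decays at rate g_l and a fraction g of the estimated residual phase
    is subtracted.  Illustrative sketch, not the paper's exact Eq."""
    return g_l * theta - g * phase_estimate

# With a constant residual estimate, the corrector converges geometrically
theta = np.zeros(4)
for _ in range(500):
    theta = leaky_update(theta, np.full(4, 1.0))
# Fixed point: theta = -g / (1 - g_l) = -10
```

The leak ($g_l < 1$) trades a small steady-state error for robustness: any spurious component in the accumulated correction decays away instead of persisting indefinitely, which is why FF-GS, with its better estimates, can afford the larger $g_l = 0.999$.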
The repeatability of the experiments appears reasonable: the converged wavefront shapes have experiment-to-experiment differences of at most $\sim$0.2--0.3~rad. The spread of the FF-GS results tends to be smaller compared to FF, and we see that also the higher spatial frequencies are reproduced in a repeatable way. \begin{figure}[hbtp] \center \includegraphics{fig9a} \includegraphics{fig9b} \includegraphics{fig9c} \caption{Top row: typical wavefront shapes ($170\times 170$~pixels) of the SLM after the convergence of FF and FF-GS. Bottom: radial cuts through the wavefronts; the shaded area shows the range (minima and maxima) of five independent measurements.} \label{fg:slmconv} \end{figure} Fig.~\ref{fg:pupest1} shows the reconstructed pupil amplitudes. The top left shows an average of $A$ following the application of Eq.~\eqref{eq:ags} during a total of 400 FF-GS iterations with phase updates. It can be compared with the dOTF modulus shown next to it, and we see that the shape of the diaphragm and several bigger dust particles are correctly recovered. However, it is obvious that all the finer details are lost, and the two reconstructions also deviate from each other at the very lowest spatial frequencies. The plot below in Fig.~\ref{fg:pupest1} shows radial cuts of five similarly obtained pupil amplitudes, and we see that all the features in the pupil amplitudes are nevertheless repeatedly reconstructed in the same way. \begin{figure}[hbtp] \center \includegraphics[width=0.32\columnwidth]{fig10a} \includegraphics[width=0.32\columnwidth]{fig10b} \includegraphics[width=0.32\columnwidth]{fig10c} \includegraphics{fig10d} \caption{Top row: pupil amplitudes ($170\times 170$~pixels) reconstructed with different methods. Left: FF-GS. Middle: dOTF (same as in Fig.~\ref{fg:dotf}). Right: GS post-processing from a converged PSF.
Bottom: radial cuts through the pupil amplitudes; five independent measurement runs are shown for FF-GS.} \label{fg:pupest1} \end{figure} To obtain an improved reconstruction of the finer details in the pupil amplitudes, we use the PSF that results after the FF-GS algorithm has converged. We assume that all the remaining speckles are caused by the amplitude aberrations, and reconstruct -- with a Gerchberg-Saxton-style algorithm -- a pupil that would create such a pattern. This is shown in the upper right in Fig.~\ref{fg:pupest1}, and we can see that it indeed matches the dOTF reconstruction in Fig.~\ref{fg:dotf} much better. Later, we use this pattern in simulations for analysis purposes. The differences between the independent measurement series shown here are a combination of actual small changes in the hardware and uncertainty caused by noise and systematic errors. It is difficult to separate those two effects, and therefore we continue the analysis with the help of numerical simulations. \subsection{Comparison of measurements and simulations} \label{sec:meassimu} To simulate the optical setup, we assume that the algorithms correct the wavefronts shown in Fig.~\ref{fg:slmconv} with pupil amplitudes similar to what is shown in Fig.~\ref{fg:pupest1}. We created three study cases reflecting the variability in the converged results. In the simulations, we consider eight different error sources that need to be modeled explicitly. They are: \begin{enumerate} \item SLM quantification. We use only 6 bits to control the wavefront. The plots shown in Fig.~\ref{fg:slmrespo} are used to round the simulated WF correction to what would happen in practice. \item PSF sampling. The wavefront and the resulting PSF are sampled internally at a resolution a factor of two higher than what the hardware controls or observes. The control algorithms use re-binned PSFs, and the simulated wavefront correction is interpolated bilinearly from the reconstruction at a resolution of $170\times 170$.
\item Image noise and dynamic range. We estimate the read-out noise of the HDR images to be at a level of $2.2\cdot 10^{-6}$ of the image maximum. Gaussian random noise is added to the simulated PSFs. The HDR images have maximum values of $\sim$$4\cdot 10^8$, corresponding to about 29 bits, and this is also modeled in the simulations. \item Background level. Standard background subtraction is performed on the PSF images, but a small error will still remain. Therefore, we add a constant background level, $2.7\cdot 10^{-6}$ of the image maximum, to the simulated PSFs. \item Non-perfect pupil. Instead of the perfect top-hat function, we use pupil amplitudes similar to what is illustrated in the top right of Fig.~\ref{fg:pupest1}. \item Amplitude aberrations. We simulate the coupling of the wavefront and the transmission of the SLM as illustrated by Fig.~\ref{fg:slmrespo}. \item Alignment errors. Although the dOTF calibration is rather accurate, some error could still be present in the affine transform that we use to map the wavefront to the SLM pixels. The simulations indicate that if the transform has a mismatch corresponding to a rotation larger than 0.4$^\circ$, FF and FF-GS would be unstable. In practice, with the used hardware, we saw no hints of these instabilities. Therefore, a rotation error of 0.4$^\circ$ represents the maximum misregistration that the wavefront control algorithms are likely to experience. \item Tip-tilt error. Internal turbulence in the optical setup causes frame-to-frame wavefront variations, which can be approximated to a degree as small shifts of the recorded images. We measured the difference of the center-of-gravity between two consecutive PSFs recorded with the HDR method, and it was found to be on average 0.025~pixels. This error cannot be taken into account by the phase-diversity approach, and we model its impact on the performance. \end{enumerate} Fig.~\ref{fg:converg1} shows the remaining wavefront error as a function of time step.
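The centroid comparison used to quantify the tip-tilt jitter in item~8 can be sketched as follows; this is a minimal Python illustration (the function name is ours):

```python
import numpy as np

def center_of_gravity(img):
    """Intensity-weighted centroid (y, x) of a PSF image, used here to
    quantify frame-to-frame tip/tilt jitter between consecutive frames."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return np.array([(ys * img).sum() / total, (xs * img).sum() / total])

# Jitter between two consecutive HDR frames would then be
#   shift = center_of_gravity(psf2) - center_of_gravity(psf1)
```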
The simulation plots show the exact error, but the measured value is estimated from the data. Here, we have estimated the rms error from the corresponding PSF images only. First, the Strehl ratios were estimated using method seven of \cite{roberts2004}, and the result was then converted to an rms error using the expression $S=\exp(-\sigma^2)$. The resulting estimates are highly sensitive to the estimate of the pupil amplitudes, which we know only approximately (Fig.~\ref{fg:pupest1}). Thus, the y-axis in the lower plot in Fig.~\ref{fg:converg1} is not directly comparable to the simulation plot; alternative estimates that are more easily compared are shown later in this section. \begin{figure}[hbtp] \center \includegraphics{fig11a} \includegraphics{fig11b} \caption{Tip/tilt-removed residual wavefront error as a function of time step. Top: simulations (real value). Bottom: measurements (estimated from PSF images).} \label{fg:converg1} \end{figure} Nevertheless, the speed of the convergence is clearly seen. Both FF and FF-GS reduce the WF error from $\sim$0.5~rad rms to $\sim$0.1~rad in $\sim$50~iterations. FF converges about 50\% faster, but it is plagued by an overshoot at the beginning; properly handling it would require an adaptive optimization of the low-pass filter. Regarding the simulations, it is obvious that FF-GS improves the performance over FF: the rms error is 0.08~rad as compared to 0.12~rad. This is largely due to the smaller value of the leaky integrator gain that we had to apply to make FF stable. Regarding the measurements, we can see a similar pattern, but we also see that FF-GS has two modes: the estimate of the residual rms error is either $\sim$0.10~rad or $\sim$0.13~rad. The modes are related to the finite sampling of the CCD detector.
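The Strehl-to-rms conversion $S=\exp(-\sigma^2)$ used for these estimates is the extended Maréchal approximation and is easily inverted; a small Python sketch:

```python
import numpy as np

def strehl_to_rms(strehl):
    """Invert the extended Marechal approximation S = exp(-sigma^2)
    to obtain the residual wavefront rms error sigma in radians."""
    return np.sqrt(-np.log(strehl))

# e.g. a Strehl ratio of ~75% corresponds to ~0.54 rad rms,
# and ~99% to ~0.10 rad rms
```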
Our models do not explicitly constrain the position of the PSF at the detector, which means that a random sub-pixel tip/tilt component -- different between the independent measurement series -- is left in the residual wavefront. The algorithms converge to a state that remains stable, but the different remaining tip/tilt components can cause significant changes in the measured maximum intensity, and this affects our Strehl ratio estimation process. When inspecting the re-centered PSFs carefully, as shown later in this section, no significant differences between the PSFs can be seen. A more detailed investigation reveals that the convergence of the wavefront correction depends on the spatial frequency -- low-frequency features are reconstructed faster. Fig.~\ref{fg:psdconverg} illustrates this by showing how the average intensity in different regions of the field changes as a function of time step. We show three different regions representing low, medium and high spatial frequencies; the locations correspond to Airy rings 2--4, rings 12--17 and rings further than 30. Since we consider only small wavefront aberrations, the shown intensity values are directly proportional to the average power spectral density at the matching frequency bands. \begin{figure}[hbtp] \center \includegraphics{fig12a} \includegraphics{fig12b} \includegraphics{fig12c} \caption{Average intensity at different parts of the field. Three cases are shown: the field corresponding to Airy rings 2--4, Airy rings 12--17 and Airy rings further than 30.} \label{fg:psdconverg} \end{figure} Both simulations and measurements show a similar pattern, although the absolute levels are higher in simulations due to differences in noise. At low spatial frequencies, both FF and FF-GS peak at iterations 5--10. FF converges in total in $\sim$20 iterations, and FF-GS takes $\sim$20 iterations more, although some cases show intensity reduction even until $\sim$100 iterations.
At medium spatial frequencies, the peak occurs at iteration $\sim$15, and the algorithms need in total $\sim$30 iterations to reach an intensity level $\sim$6\% lower than at the beginning. FF saturates at that level, but 30 additional iterations with FF-GS reduce the intensity by a total of $\sim$15\% from the initial level. At high spatial frequencies, FF requires almost 50 iterations to converge to a level 15\% lower than the initial intensity (in simulations the reduction is only a few percent due to higher noise). FF-GS, on the other hand, converges faster than FF, but still 150 iterations are needed to reduce the intensity by $\sim$35\%. The measurements show marginally better intensity reduction, but that requires almost 300 iterations. The residual wavefront error can obviously also be estimated using the control data that the algorithms themselves provide through Eqs.~\eqref{eq:phisol} and \eqref{eq:phigs}; the corresponding results are shown in Fig.~\ref{fg:converg3}. \begin{figure}[hbtp] \center \includegraphics{fig13a} \includegraphics{fig13b} \caption{Residual wavefront error as a function of time step. Values calculated from the actual estimates used by the algorithms. Top: simulations. Bottom: measurements.} \label{fg:converg3} \end{figure} The first striking feature is that the simulations and the measurements produce practically identical patterns. After the convergence, the WF estimates of the FF algorithm have an rms error of 0.12--0.18~rad in the simulations and 0.15--0.24~rad in the measurements. There appears to be no obvious structure in how the error varies between consecutive iterations. Since the actual correction is an average over several consecutive measurements, the actual remaining wavefront error can be smaller than the instantaneous estimates of 0.12--0.24~rad.
In the simulations, the error was observed to be $\sim$0.12~rad, and we have no reason to assume the situation with the actual hardware would be different; our estimate for the remaining WF rms error is $\sim$0.15~rad. With the FF-GS algorithm, the issue is slightly more complicated since some of the WF estimates fail when the algorithm approaches the optimum. The reason for this -- the phase-diversity failure -- is discussed in Section~\ref{sec:ffgspractical}. This is seen as prominent spikes in the plots in Fig.~\ref{fg:converg3}, although most of the rms error values are concentrated around 0.1~rad. In the simulations, the actual rms error of the residual wavefront is $\sim$0.08~rad, and a similar value is seen in the actual measurements. Four examples of the actual PSF images are shown in Fig.~\ref{fg:psfsampls}: \begin{enumerate} \item the initial PSF (measured when the SLM pixels are set to zero), \item the simulated perfect PSF resulting from the pupil amplitudes shown in Fig.~\ref{fg:pupest1}, \item the simulated PSF after the convergence of the FF-GS algorithm, \item the measured PSF after the convergence of the FF-GS algorithm. \end{enumerate} \begin{figure}[hbtp] \center \includegraphics{fig14} \caption{Examples of PSF images ($320\times 320$~pixels) raised to the 0.1 power. A) initial, measured. B) perfect, simulated. C) converged FF-GS, measured. D) converged FF-GS, simulated. } \label{fg:psfsampls} \end{figure} All the PSFs have a similar star pattern with ten radial beams gradually fading towards the edges of the images. These are caused by the blades of the diaphragm, whose shape is shown in Figs.~\ref{fg:dotf} and \ref{fg:pupest1}. The initial PSF corresponds to a wavefront like in Fig.~\ref{fg:slmconv}: a clearly deformed core, but still easily recognizable Airy rings 3--20. The simulated, noiseless and aberration-free PSF shows the speckles that we expect to remain due to the non-flat pupil amplitudes.
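The power-law stretch used for displaying the PSFs in Fig.~\ref{fg:psfsampls} is a simple compression of the dynamic range; an illustrative Python sketch (the normalization to the peak is our addition):

```python
import numpy as np

def display_stretch(psf, gamma=0.1):
    """Power-law stretch (psf ** gamma) for visualizing PSFs with a
    very large dynamic range; the image is first normalized to its peak."""
    psf = np.asarray(psf, dtype=float)
    return (psf / psf.max()) ** gamma
```

With $\gamma = 0.1$, a speckle ten orders of magnitude below the PSF core still appears at 10\% of the peak display level, which is what makes the faint diffraction rings visible.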
The dust, dirt and also the inhomogeneities of the SLM create a significant transmission distortion dominated by high spatial frequencies. This causes the halo of irregularities on top of the pattern of the perfect diffraction rings. In addition, we can see a few stronger speckles and speckle groups at a distance of approximately Airy rings 12--18. These can be attributed to the larger dust particles also clearly visible in the FF-GS estimated pupil amplitudes in Fig.~\ref{fg:pupest1}. When comparing the measured and simulated PSFs after the FF-GS algorithm has converged, we find no significant differences. Both PSFs have a regular core, which appears to match the perfect PSF exactly up to the 4th diffraction ring. At least 26 diffraction rings are, at least partially, visible. A comparison with the perfect PSF shows that several strong speckles can be identified in all the images, but the halo after the 14th diffraction ring outside the star-like beams, close to the detection limit of the camera, is dominated by speckles with no obvious structure. A more detailed comparison can be obtained by inspecting the radially averaged profiles of the PSFs. Before taking the radial average, we shift the PSFs, using Fourier transforms, to have their center-of-gravity at the location of the perfect PSF. The results are shown in Fig.~\ref{fg:psfradprofs}. \begin{figure}[hbtp] \center \includegraphics{fig15a} \includegraphics{fig15b} \caption{Averaged radial profiles of PSF images. Upper: simulated, the three study cases are shown. Lower: measured, results from five independent runs are shown; the perfect PSF is identical to the one in the upper plot.} \label{fg:psfradprofs} \end{figure} The profiles show that both the FF and FF-GS algorithms, in both the simulated and measured cases, converge to a situation very close to the perfect simulated PSF; no significant differences are seen up to the first 13 (simulated) or 20 (measured) diffraction rings.
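The Fourier-based re-centring and the radial averaging described for Fig.~\ref{fg:psfradprofs} can be sketched in Python; the shift theorem handles sub-pixel shifts naturally, while the binning details below are our assumptions:

```python
import numpy as np

def fourier_shift(img, dy, dx):
    """Shift an image by (dy, dx) pixels via the Fourier shift theorem;
    works for sub-pixel shifts as well."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    ramp = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * ramp))

def radial_profile(img, center, nbins=160):
    """Azimuthally averaged profile of a (re-centred) PSF image."""
    ys, xs = np.indices(img.shape)
    r = np.hypot(ys - center[0], xs - center[1])
    bins = np.linspace(0, r.max(), nbins + 1)
    idx = np.digitize(r.ravel(), bins) - 1
    prof = np.bincount(idx, img.ravel(), minlength=nbins)[:nbins]
    counts = np.bincount(idx, minlength=nbins)[:nbins]
    return prof / np.maximum(counts, 1)
```

In this scheme one would first measure the centroid offset from the perfect PSF, apply `fourier_shift` with its negation, and then radially average.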
After this, we can see that the performance of both algorithms slowly deviates from the perfect PSF, the intensity being a factor of $\sim$5 (simulated) or $\sim$2--3 (measured) higher at the borders. At the distances corresponding to diffraction rings 20 and higher, FF-GS is typically $\sim$20--30\% better at reducing the intensity than FF. In total, we can recognize at least 30 diffraction rings before speckle noise blurs the PSF too much for any structure to be observed. Nevertheless, compared to the initial PSF, both algorithms reduce the intensity of scattered light throughout the whole recorded field, although, in the simulated case, the difference is not significant after the 34th diffraction ring. In the measured case, on the other hand, the light intensity is reduced by a factor of $\sim$2--3 also at the edge of the recorded image. This difference between the simulations and measurements is due to a combined effect of differences in actual noise levels, wavefronts and pupil transmission. \subsection{Error budget} \label{sec:errbud} Finally, we show an error budget that illustrates the impact of the different error sources in the optical setup. In the ideal case, we have no noise and a perfectly circular pupil that is -- in the case of FF -- exactly known. The perfect case also uses exactly the same imaging model in both the WF reconstruction and when simulating the PSF images: a zero-padded FFT with a wavefront modeled at a resolution of $170\times 170$. We sequentially simulate each of the error sources listed in Section~\ref{sec:meassimu}. The resulting rms errors in the converged wavefronts are listed in Table~\ref{tb:errbud}. \begin{table}[hbtp] \begin{center} \caption{Error budget} \label{tb:errbud} \begin{tabular}{lccc} \hline & FF$^*$ & & FF-GS$^*$ \\ \hline 0. No errors & 0.03 $\pm$ 0.01 &$\hspace{0.1cm}$ & 0.00 $\pm$ 0.00 \\ 1.
SLM quantification & 0.04 $\pm$ 0.01 &$\hspace{0.1cm}$ & 0.02 $\pm$ 0.00 \\ 2. PSF sampling 2x & 0.08 $\pm$ 0.01 & & 0.01 $\pm$ 0.00 \\ 3. Image noise & 0.05 $\pm$ 0.01 & & 0.05 $\pm$ 0.00 \\ 4. Background level & 0.04 $\pm$ 0.01 & & 0.01 $\pm$ 0.00 \\ 5. Non-perfect pupil & 0.11 $\pm$ 0.00 & & 0.02 $\pm$ 0.01 \\ 6. Amplitude aberrations & 0.12 $\pm$ 0.01 & & 0.04 $\pm$ 0.01 \\ 7. Alignment errors & 0.08 $\pm$ 0.01 & & 0.01 $\pm$ 0.00 \\ 8. TT instability & 0.03 $\pm$ 0.01 & & 0.04 $\pm$ 0.01 \\ 9. All errors & 0.12 $\pm$ 0.01 & & 0.08 $\pm$ 0.00 \\ \hline \end{tabular}\\ $^*$The residual WF rms errors (rad) at spatial frequencies falling within the used images. \end{center} \end{table} In theory, both algorithms should reach zero wavefront error in the perfect case. However, in the case of FF, we still have to use numerical regularization to maintain stability, and this compromises the performance in the error-free case. This could be improved by optimizing the codes, but we have not done so here; the codes are optimized for the performance with all the error sources present. The most severe error source for the FF algorithm, as expected, is indeed the amplitude aberrations: instead of the ideal rms error of 0.03~rad, we are limited to an error of 0.11~rad. Similar errors are also seen if the imaging model does not exactly match the actual hardware; this was tested by simulating the wavefront and PSF with double sampling (case 2 in Table~\ref{tb:errbud}); the double sampling was also used in the misalignment simulation. The different error sources are coupled, so they do not add up quadratically. In the presence of all the error sources, we end up with a residual WF error of $\sim$0.12~rad. With the FF-GS algorithm, we can radically reduce the problems of the unknown pupil aberrations. The transmission we used in simulations, however, had significant fluctuations, creating speckles similar to those caused by the wavefront aberrations.
Therefore, the wavefront reconstruction problem is difficult to make unambiguous, and we saw a small residual rms error of 0.02~rad. FF-GS is limited by the combined effect of read-out noise (0.05~rad), the fact that the SLM couples the transmission and the phase change (0.04~rad) and the TT instability (0.04~rad). All the error sources add up quadratically, which indicates that they are largely independent. When comparing FF and FF-GS, we see that a significant improvement can be obtained with the FF-GS algorithm; the residual wavefront rms error is reduced from 0.12~rad to 0.08~rad. However, the method is more sensitive to uncertainties and noise: the tip-tilt jitter in our hardware has no influence on FF while being a major error source for FF-GS. \section{Conclusions and discussion} \label{sec:conclusions} We have demonstrated the performance of two numerically efficient focal-plane wavefront sensing algorithms: Fast \& Furious and its extension, Fast \& Furious Gerchberg-Saxton. Both algorithms do an excellent job of calibrating static aberrations in an adaptive or active optics system: we demonstrated an increase in the Strehl ratio from $\sim$0.75 to 0.98--0.99 with our optical setup. Although the FF-GS algorithm is more prone to noise, we observed a clear improvement. With our hardware -- a high-resolution spatial light modulator as the wavefront corrector -- we estimate the remaining residual wavefront rms error to be $\sim$0.15~rad with FF and $\sim$0.10~rad with FF-GS. The difference occurs mostly at spatial frequencies corresponding to the 20th and further Airy rings. Simulations with error sources comparable to our hardware show very similar results. This increases our confidence that the estimated performance indicators are reliable, and the simulated error budget also confirms the unknown amplitude aberrations as the main limitation of the FF algorithm in the considered framework.
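The near-quadratic behavior noted for the FF-GS error budget can be checked directly from the FF-GS column of Table~\ref{tb:errbud}; a short Python verification:

```python
import numpy as np

# FF-GS column of Table 2 (rows 1-8): the root-sum-square of the
# individual terms reproduces the "all errors" value of ~0.08 rad,
# consistent with the error sources being largely independent.
terms = np.array([0.02, 0.01, 0.05, 0.01, 0.02, 0.04, 0.01, 0.04])
rss = np.sqrt(np.sum(terms ** 2))   # ~0.082 rad
```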
To our knowledge, this is the first time that such focal-plane sensing methods have been demonstrated with $\sim$30~000 degrees of freedom -- and in the case of FF-GS, with twice that number of free parameters to estimate the pupil amplitudes. The sampling at the detector was such that the controlled wavefront of $170\times 170$~pixels would have been enough to correct all spatial frequencies inside an image of $640\times 640$~pixels. However, as we recorded only an image of $320\times 320$~pixels, we had no direct observations of the higher controlled spatial frequencies. Simulations indicate that this resulted in a small amount of light being scattered outside the recorded field, but this amount was too small to be easily detected in our optical setup. We put no particular effort into optimizing the codes; all the software was implemented in Matlab, and it was run on a standard Windows PC. Still, the required computation time was negligible compared to the $\sim$15~s we needed to collect data for a single HDR image. We implemented the FF algorithm with two $640\times 640$ FFTs per iteration step (one FFT transferring the phase-diversity information into the focal plane could likely be replaced by a convolution, as explained in \cite{keller2012spie}). Our FF-GS implementation used 8 FFTs per iteration, and that could also potentially be optimized. As with all focal-plane wavefront sensing techniques, the algorithms work best if a monochromatic light source is available. With a polychromatic light source having a sufficiently small bandwidth, perhaps $\sim$10\%, the algorithms would still work, but only with a limited corrected field. With special chromatic optics (such as in \cite{guyon2010}) or an integral field unit, it is potentially possible to use the algorithms with an even wider bandwidth. Currently, we have only demonstrated a case where an unobstructed PSF is detected, and the wavefront is driven to be flat.
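For scale, the per-iteration FFT cost quoted for the two implementations can be estimated with the usual $5N^2\log_2 N^2$ flop rule of thumb; a back-of-the-envelope Python sketch (the constant 5 is the conventional split-radix estimate, not a measured figure):

```python
import math

# Rough flop count of one 2-D FFT of an N x N array: ~5 * N^2 * log2(N^2)
N = 640
flops_per_fft = 5 * N ** 2 * math.log2(N ** 2)
ff_cost = 2 * flops_per_fft    # FF: two FFTs per iteration (~76 Mflop)
ffgs_cost = 8 * flops_per_fft  # FF-GS: eight FFTs per iteration (~305 Mflop)
# Either way, this is negligible next to the ~15 s needed per HDR image.
```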
To make the algorithms more interesting for astronomical applications in extreme adaptive optics or ultra-high-contrast imaging, a few extensions would be necessary. First, we should consider how coronagraphs and diffraction suppression optics affect the techniques. In practice, this would mean that the core of the PSF would not be detected, and we would also need to consider the moduli in a part of the focal-plane field as free parameters. Second, instead of flattening the wavefront, we should optimize the contrast in a certain part of the field. This would mean calculating a wavefront shape that, in the same way as in \cite{malbet1995, borde2006, giveon2007oe}, minimizes the light in certain regions of the field at the cost of increasing it in other parts; the updated algorithm should then drive the wavefront to this desired shape. A similar problem is faced if phase plates are used to create diffraction suppression, for instance as in \cite{codona2004}. Also in such a case, it is necessary to drive the wavefront to a particular shape that is far from flat. Another potentially interesting application is real-time operation, for instance as a high-order, second-stage sensor in an adaptive optics system. The computational load is manageable, and a successful system would greatly simplify the hardware design compared to a conventional AO approach. However, issues such as the requirement for small aberrations, chromaticity, temporal lag in the phase diversity and the limited dynamic range of the camera -- and therefore photon noise -- are major challenges. \bibliographystyle{osajnl}
\section{Details of the Case Study Experiment} \label{app:exp_details} \subsection{Details on Dataset} The synthetic dataset in Section~\ref{sec:case_study} was created with a mean function $y = \sin(x/2) + x\cos(0.8x)$ with $x\sim \text{unif}[-10, 10]$. The support $[-10, 10]$ was partitioned into 4 quadrants, and different levels of 0 mean, Gaussian noise was added to the mean function to create the $y$ observations. \begin{align*} -10\leq x < -5& \text{: noise} \sim\mathcal{N}(0, 1^2) \\ -5\leq x < 0& \text{: noise} \sim\mathcal{N}(0, 0.01^2) \\ 0\leq x < 5& \text{: noise} \sim\mathcal{N}(0, 1.5^2) \\ 5\leq x \leq 10& \text{: noise} \sim\mathcal{N}(0, 0.5^2) \end{align*} \subsection{Model Details} We used the same neural network architecture across all methods (i.e. loss functions): 3 layers of 64 hidden units with ReLU non-linearities, and 2 output units: one for the conditional mean $\hat{\mu}(x)$ and one for the conditional log-variance $\log\hat{\sigma^{2}}(x)$. We used the same learning rate $1e^{-3}$ and full batch size ($200$) for all methods. During training, we track the corresponding loss function on the validation set, and at the end of 2000 epochs, the final model was backtracked to the model with lowest validation loss. All reported test results are based on this backtracked model. \subsection{Calculation of Evaluation Metrics} This section describes how each of the reported metrics is computed within Uncertainty Toolbox, given a finite dataset $D = \{(x_i, y_i)_{i=1}^{N}\}$. \textit{Accuracy Metrics}\\ The root mean squared error (RMSE) and mean absolute error (MAE) are computed with the mean prediction $\hat{\mu}(x)$, following the standard definitions. 
\[ \text{RMSE}(D, \hat{\mu}) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(y_i- \hat{\mu}(x_i))^{2}} \] \[ \text{MAE}(D, \hat{\mu}) = \frac{1}{N}\sum_{i=1}^{N}\left|y_i- \hat{\mu}(x_i)\right| \] \textit{Calibration Metrics}\\ To measure the calibration metrics (average calibration, adversarial group calibration), expected probabilities are discretized from $0.01$ to $0.99$ in $0.01$ increments (i.e. $0.01$, $0.02$, $\dots$, $0.97$, $0.98$, $0.99$), and the observed probabilities are calculated for each of these $99$ expected probabilities. ECE (a measure of average calibration) is computed following the definition given in Section~\ref{sec:metrics}. We measure adversarial group calibration as follows. For a given test set, we scale the group size between $1\%$ and $100\%$ of the full test set size, in 10 equi-spaced intervals. For each group size, we draw 20 random groups from the test set and record the worst calibration incurred across these 20 random groups. The adversarial group calibration figure (Figure~\ref{fig:agc}) plots the mean worst calibration incurred, with $\pm1$ standard error in shades, for each group size. This is also the method used by \citet{zhao2020individual} to measure adversarial group calibration. \textit{Sharpness}\\ Sharpness is measured as the mean of the predicted standard deviations on the test set. Note that sharpness is a property of the prediction \textit{only} and does not take into consideration the true distribution. \textit{Proper Scoring Rules}\\ The proper scoring rules (NLL, CRPS, check score, interval score) are measured as the mean of the score on the test set. \section{Numerical Results Across Multiple Trials} \label{app:app_table} The results presented in Section~\ref{sec:case_study} are based on one random seed. Below, we present the numerical results across 5 random seeds: $[0,1,2,3,4]$.
\begin{table*}[h] \vspace{2mm} \centering \begin{center} \begin{tabular}{*{6}{c}} \cline{3-6} && \multicolumn{4}{c}{\textbf{Metrics}} \\ \cline{3-6} & \multicolumn{1}{c}{} & RMSE & MAE & ECE & Sharpness \\ \hline \multirow{5}{*}{\textbf{Methods}} & NLL & $2.048 \pm 0.125$ & $1.073 \pm 0.080$ & $\mathbf{0.029} \pm 0.007$ & $1.746 \pm 0.155$ \\ & CRPS & $\mathbf{1.023} \pm 0.090$ & $\mathbf{0.661} \pm 0.054$ & $0.044 \pm 0.005$ & $0.897 \pm 0.114$ \\ & Check & $1.045 \pm 0.105$ & $0.672 \pm 0.065$ & $0.050 \pm 0.011$ & $\mathbf{0.874} \pm 0.117$ \\ & Interval & $1.169 \pm 0.187$ & $0.745 \pm 0.101$ & $0.039 \pm 0.009$ & $0.915 \pm 0.130$ \\ \hhline{~=====} & Ground Truth & $\mathit{0.962 \pm 0.064}$ & $\mathit{0.618 \pm 0.042}$ & $\mathit{0.019 \pm 0.002}$ & $\mathit{0.925 \pm 0.052}$ \\ \hline \end{tabular} \end{center} \vspace{5mm} \centering \vspace{2mm} \begin{tabular}{*{6}{c}} \cline{3-6} && \multicolumn{4}{c}{\textbf{Metrics}} \\ \cline{3-6} & \multicolumn{1}{c}{} & NLL & CRPS & Check & Interval \\ \hline \multirow{5}{*}{\textbf{Methods}} & NLL & $1.677 \pm 0.343$ & $0.766 \pm 0.060$ & $0.386 \pm 0.030$ & $3.885 \pm 0.330$ \\ & CRPS & $1.112 \pm 0.111$ & $\mathbf{0.492} \pm 0.040$ & $\mathbf{0.248} \pm 0.020$ & $\mathbf{2.687} \pm 0.186$ \\ & Check & $1.635 \pm 0.661$ & $0.501 \pm 0.048$ & $0.253 \pm 0.024$ & $2.741 \pm 0.224$ \\ & Interval & $\mathbf{0.961} \pm 0.062$ & $0.546 \pm 0.073$ & $0.276 \pm 0.037$ & $2.875 \pm 0.352$ \\ \hhline{~=====} & Ground Truth & $\mathit{0.187 \pm 0.115}$ & $\mathit{0.435 \pm 0.033}$ & $\mathit{0.219 \pm 0.017}$ & $\mathit{2.122 \pm 0.177}$ \\ \hline \end{tabular} \caption{\textbf{Scalar Evaluation Metrics.} Each row shows evaluation metrics for a single method (i.e. loss function), and the mean with $\pm1$ standard error are shown. 
The best mean for each metric has been \textbf{bolded}.} \label{table:app_results} \end{table*} \section*{Appendix} \label{sec:appendix} \input{app_dataset} \input{app_table} \clearpage \section{Case Study on Training and Evaluating PNNs} \vspace{-1mm} \label{sec:case_study} To demonstrate the capabilities of Uncertainty Toolbox, we provide a case study on training PNNs with various loss objectives, and use the toolbox to examine the results. A PNN is a neural network that assumes a conditional Gaussian predictive distribution; for any input point $x$, it outputs estimates of the mean and the covariance, $\hat{\mu}(x)$ and $\hat{\Sigma}(x)$. This NN structure was proposed as early as \citet{nix1994estimating}, but it was popularized as a UQ method in deep learning by \citet{lakshminarayanan2017simple}, and it remains one of the most popular UQ methods to date. The standard method of training PNNs is to optimize the logarithmic score, i.e. the NLL loss. However, based on the training principle ``optimize a proper score to improve UQ quality'' \citep{lakshminarayanan2017simple} (also referred to as ``optimum score estimation'' by \citet{gneiting2007strictly}), we can in fact optimize many other proper scoring rules. In this study, we train PNNs by optimizing with respect to one of NLL, CRPS, the check score, or the interval score. Afterwards, we assess the predictive UQ quality with Uncertainty Toolbox. We summarize the main details of the experiment below (full details in Appendix~\ref{app:exp_details}). \vspace{-4mm} \paragraph{Dataset} The data was generated with the mean function $y = \sin(x/2) + x\cos(0.8x)$; heteroscedastic Gaussian noise was added to generate the observations $y$ for each input $x \sim \text{unif} [-10, 10]$. The train, validation and test splits consisted of $200$, $100$, and $100$ points, respectively.
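For reference, the generating process described above (mean function plus quadrant-dependent noise, detailed in Appendix~\ref{app:exp_details}) can be sketched in a few lines of numpy. The function name and seed handling below are illustrative, not part of the toolbox:

```python
import numpy as np

def generate_case_study_data(n, rng):
    """Sample the synthetic heteroscedastic dataset from the case study:
    y = sin(x/2) + x*cos(0.8x) on x ~ unif[-10, 10], plus zero-mean
    Gaussian noise whose std depends on the quadrant of the support."""
    x = rng.uniform(-10.0, 10.0, size=n)
    mean = np.sin(x / 2.0) + x * np.cos(0.8 * x)
    # Per-quadrant noise standard deviations, as given in the appendix.
    std = np.select([x < -5.0, x < 0.0, x < 5.0], [1.0, 0.01, 1.5], default=0.5)
    y = mean + rng.normal(0.0, 1.0, size=n) * std
    return x, y
```

Drawing 200/100/100 points for the train/validation/test splits reproduces the experimental setup up to the choice of random seed.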
\vspace{-4mm} \paragraph{Training} A separate model was trained for each loss function with full-batch gradient descent and learning rate $10^{-3}$, for 2000 epochs while tracking the validation loss. To optimize the check and interval scores, a batch of 30 expected probabilities $p_i \sim \text{unif}(0, 1)$ was selected and the scores for each $p_i$ were summed to compute the loss \citep{tagasovska2019single, chung2020beyond}. All reported results are based on the model with the best validation loss. \vspace{-4mm} \paragraph{Analysis} We first visually inspect UQ performance on the test set. Figure~\ref{fig:plots} (row 1) shows all of the methods approximately recovering the true level of heteroscedastic noise. Notably, NLL converges to a solution such that, for $x < -5$, there is high error in mean estimation, which is compensated for with high (and wrong) variance estimates. The widths of the prediction intervals (PIs) in Figure~\ref{fig:plots} (row 2) also show how NLL's intervals are erroneously wide. Meanwhile, comparisons with the ground truth PIs (far right plot) show that CRPS, Check, and Interval all tend to produce predictions that are too narrow (sharp). This is further confirmed via the \textit{Sharpness} metric column in Table~\ref{table:results}, and we can also observe the ramifications in \textit{average calibration} in Figure~\ref{fig:plots} (row 3): NLL's observed proportions in an interval tend to be greater than the expected proportion, signaling under-confidence (i.e. PIs that are too wide). The opposite occurs for the other methods, and over-confidence (due to PIs that are too sharp) is especially pronounced in CRPS and Check. While NLL may seem average calibrated (with the second lowest ECE), adversarial group calibration in Figure~\ref{fig:agc} shows that CRPS and Interval are better calibrated for smaller subsets of the domain, and achieve better adversarial group calibration. \begin{figure}[t!]
\centering \includegraphics[width=0.45\textwidth]{figures/adv_g_cal.pdf} \vspace{-5mm} \caption{ \textbf{Adversarial Group Calibration.} Group size refers to proportion of test dataset size, and the shades represent $\pm1$ standard error for the calibration error of the worst group. \label{fig:agc}} \vspace{-3mm} \end{figure} While the proper score metrics in Table~\ref{table:results} add another facet to the analysis, they also underscore the complex nature of assessing UQ. Each proper score has its own, separate ranking of the four methods, and they are also split on which one is best; simply given this set of proper scoring rules, we believe it would be difficult to choose a single best method. Lastly, we note how a lower proper score may not necessarily indicate better calibration (Figure~\ref{fig:plots} (row 4)). Even while the proper scores improve on the test set (until around the validated epoch), calibration tends to get worse, while the predictions get sharper. Notably, CRPS and Check converge to a solution which is sharper than the true sharpness. This is problematic for calibration since a UQ sharper than the true sharpness will never be calibrated. \vspace{-2mm} \paragraph{Conclusion} This case study demonstrates that, even with numerous evaluation metrics at our disposal, the analysis of UQ for regression problems may not be straightforward. It also highlights limitations of the evaluation metrics, as relying on a single one (or small subset), may imply a conclusion counter to what other metrics signal. In the face of such limitations, we believe it is important to examine a suite of metrics simultaneously and perform a holistic evaluation of UQ quality. Not only does Uncertainty Toolbox provide this functionality, but it also offers recalibration for pre-trained UQ models, and resources that give key terms, explanations, and seminal works for those unfamiliar with the field. 
We hope that this toolbox is useful for accelerating and uniting efforts for uncertainty in machine learning. \newpage \begin{figure*}[ht!] \centering \includegraphics[width=\textwidth]{figures/conf_bands.pdf} \includegraphics[width=\textwidth]{figures/intervals_ordered.pdf} \includegraphics[width=\textwidth]{figures/avg_cali.pdf} \includegraphics[width=\textwidth]{figures/cal_sharp.pdf} \vspace{-8mm} \caption{Rows from top to bottom: \textbf{(1)} Test observations, with predicted mean and confidence bands. \textbf{(2)} Test observations, the predicted mean, and prediction interval, in order of test observations. \textbf{(3)} Average calibration plot, with predicted proportions (\textit{expected probability}) on $x$ axis, observed proportions (\textit{observed probability}) on $y$ axis. \textbf{(4)} Training curves: {\color{Cerulean} average calibration} (left $y$ axis), {\color{BurntOrange} sharpness} (right $y$ axis). {\color{BurntOrange} GT Sharp} denotes the true sharpness (noise level) of the data, and {\color{ForestGreen} Val Ep} denotes the epoch with lowest validation loss. 
} \label{fig:plots} \end{figure*} \begin{table*}[h] \centering \vspace{2mm} \begin{tabular}{*{10}{c}} \cline{3-10} && \multicolumn{8}{c}{\textbf{Metrics}} \\ \cline{3-10} & \multicolumn{1}{c}{} & RMSE & MAE & ECE & Sharpness & NLL & CRPS & Check & Interval \\ \hline \multirow{5}{*}{\textbf{Methods}} & NLL & $1.689$ & $0.852$ & $0.057$ & $1.451$ & $2.214$ & $0.604$ & $0.305$ & $2.990$ \\ & CRPS & $\mathbf{0.864}$ & $0.568$ & $\mathbf{0.056}$ & $0.729$ & $1.266$ & $\mathbf{0.427}$ & $\mathbf{0.215}$ & $2.323$ \\ & Check & $0.880$ & $\mathbf{0.566}$ & $0.092$ & $\mathbf{0.720}$ & $4.264$ & $0.434$ & $0.219$ & $2.434$ \\ & Interval & $0.916$ & $0.600$ & $0.066$ & $0.722$ & $\mathbf{0.780}$ & $0.447$ & $0.226$ & $\mathbf{2.309}$ \\ \hhline{~=========} & Ground Truth & $\mathit{0.824}$ & $\mathit{0.530}$ & $\mathit{0.013}$ & $\mathit{0.831}$ & $\mathit{-0.083}$ & $\mathit{0.370}$ & $\mathit{0.187}$ & $\mathit{1.758}$ \\ \hline \end{tabular} \caption{\textbf{Scalar Evaluation Metrics.} Each row shows evaluation metrics for a single method (i.e. loss function). RMSE (root mean squared error) and MAE (mean absolute error) are accuracy metrics. The best method for each metric is in \textbf{bold}. While these values are based on one seed, we show results across 5 random seeds with standard error in Appendix~\ref{app:app_table}.} \label{table:results} \end{table*} \section{Introduction} \label{sec:introduction} \vspace{-1mm} As machine learning (ML) systems are increasingly deployed on an array of high-stakes tasks, there is a growing need to robustly quantify their predictive uncertainties.
Uncertainty quantification (UQ) in machine learning generally refers to the task of quantifying the confidence of a given prediction, and this measure of confidence can be especially crucial in a variety of downstream applications, including Bayesian optimization \citep{jones1998efficient, shahriari2015taking}, model-based reinforcement learning \citep{malik2019calibrated, yu2020mopo}, and in high-stakes prediction settings where errors incur large costs \citep{wexler2017computer, rudin2019stop}. UQ is often performed via \emph{distributional predictions} (in contrast with \emph{point predictions}). Hence, given inputs $x \in \mathcal{X}$ and targets $y \in \mathcal{Y}$, one common goal in UQ is to approximate the true conditional distribution of $y$ given $x$. In the supervised setting where we only have access to a limited data sample, we are then faced with the question, ``how can one verify whether a distributional prediction is close to the true distribution using only a finite dataset?'' Many works in UQ tend to be disjoint in the evaluation metric utilized, which sends divided signals about which metrics \textit{should} or \textit{should~not} be used. For example, some works report likelihood on a test set \citep{lakshminarayanan2017simple, detlefsen2019reliable, zhao2020individual}, some works use other proper scoring rules \citep{maciejowska2016probabilistic, askanazi2018comparison, bowman2020uncertainty, bracher2021evaluating}, while others focus on calibration metrics \citep{kuleshov2018accurate, cui2020calibrated}. Further, with disparate implementations for each metric, it is often the case that reported numerical results are not directly comparable across different works, even if a similar metric is used. To address this, we present \emph{Uncertainty Toolbox}: an open-source python library that helps to assess, visualize, and improve UQ. 
There are other libraries such as Uncertainty Baselines~\citep{nado2021uncertainty} and Robustness Metrics~\citep{djolonga2020robustness} that focus on aspects of UQ in the \textit{classification} setting. Uncertainty Toolbox focuses on the \textit{regression} setting and additionally aims to provide user-friendly utilities such as visualizations, a glossary of terms, and an organized collection of key paper references. We begin our discussion by first introducing the contents of Uncertainty Toolbox. We then provide an overview of evaluation metrics in UQ. Afterwards, we demonstrate the functionalities of the toolbox with a case study where we train probabilistic neural networks (PNNs) \citep{nix1994estimating, lakshminarayanan2017simple} with a set of different loss functions, and evaluate the resulting trained models using metrics and visualizations in the toolbox. This case study shows that certain evaluation metrics shed light on different aspects of UQ performance, and makes the case for using a suite of metrics for a comprehensive evaluation. \section{Evaluation Metrics in Predictive UQ} \label{sec:metrics} \vspace{-1mm} To summarize the notation and setting: \textbf{X, Y} denote random variables; $x, y$ denote realized values; and $\mathcal{X, Y}$ denote sets of possible values. Further, for any random variable, we denote the true CDF as ${\mathbb{F}}$, its inverse (i.e. the quantile function) as $\mathbb{Q}$, the corresponding density function as ${f}$, and the space of distributions as $\mathcal{F}$. Estimates of these true functions will be denoted with a hat, e.g. $\hat{\mathbb{F}}$ and $\hat{f}$. Lastly, we consider the regression setting where $\mathcal{Y} \subset \mathbb{R}$ and $\mathcal{X} \subset \mathbb{R}^n$. 
Many recent works have focused on evaluation metrics involving notions of \textit{calibration} and \textit{sharpness} \citep{gneiting2007probabilistic, guo2017calibration, kuleshov2018accurate, song2019distribution, tran2020methods,zhao2020individual, fasiolo2020fast, cui2020calibrated}. Calibration in the regression setting is defined in terms of quantiles, and broadly speaking, it requires that the probability of observing the target random variable below a predicted $p$\textsuperscript{th} quantile is equal to the \textit{expected probability} $p$, for all $p \in (0, 1)$. We refer to the former quantity as the \textit{observed probability} (also referred to as empirical probability) and denote it $p^{\text{obs}}(p)$, for an expected probability $p$. Calibration requires $p^{\text{obs}}(p) = p$, $\forall p \in (0,1)$. From this generic statement, we can describe different notions of calibration based on how $p^{\text{obs}}$ is defined. The most common form of calibration is \textbf{average calibration}, where $\hat{\mathbb{Q}}_p(x)$ is the estimated $p$\textsuperscript{th} quantile of $\textbf{Y} | x$, \vspace{-1mm} \begin{align} \label{eq:avg_cali} p^{\text{obs}}_{avg}(p) := \mathbb{E}_{x \sim \mathbb{F}_{\textbf{X}}}[\mathbb{F}_{\textbf{Y}|x}(\hat{\mathbb{Q}}_p(x))], \hspace{2mm} \forall p\in (0, 1), \end{align} i.e. the probability of observing the target below the quantile prediction, \textit{averaged over $\mathbb{F}_{\textbf{X}}$}, is equal to $p$. Average calibration is often referred to simply as ``calibration'' \citep{kuleshov2018accurate, cui2020calibrated}, and it is amenable to estimation in finite datasets, as follows. Given a dataset $D = \{(x_i, y_i)\}_{i=1}^{N}$, we can estimate $p^{\text{obs}}_{avg}(p)$ with $\hat{p}^{\text{obs}}_{avg}(D, p) = \frac{1}{N}\sum_{i=1}^{N} \mathbb{I}\{y_i \leq \hat{\mathbb{Q}}_p(x_i)\}$. 
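The finite-sample estimate above, and the average of its gaps over a grid of expected probabilities (the ECE discussed next), are straightforward to implement. A minimal numpy sketch (function names are ours, not necessarily the toolbox's API):

```python
import numpy as np

def observed_probability(y, q_pred):
    """Fraction of targets falling at or below the predicted quantiles:
    the finite-sample estimate of the observed probability p_obs_avg(p)."""
    return float(np.mean(y <= q_pred))

def expected_calibration_error(y, quantile_fn, ps):
    """Mean absolute gap between observed and expected probabilities,
    where quantile_fn(p) returns the predicted p-th quantile for each point."""
    gaps = [abs(observed_probability(y, quantile_fn(p)) - p) for p in ps]
    return float(np.mean(gaps))
```

As a sanity check, if the quantile predictions exactly match the data-generating distribution, the estimated ECE tends to zero as $N$ grows.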
The degree of error in average calibration is commonly measured by \textit{expected calibration error} \cite{guo2017calibration, tran2020methods, cui2020calibrated}, $\text{ECE}(D, \hat{\mathbb{Q}}) = \frac{1}{m}\sum_{j=1}^{m} \left | \hat{p}^{\text{obs}}_{avg}\left(D, p_j\right) - p_j \right |$, where $p_j$ is a range of expected probabilities of interest. Note that if our quantile estimate achieves average calibration then $\hat{p}^{\text{obs}}_{avg} \rightarrow p$ (and thus ECE $\rightarrow 0$) as $N\rightarrow\infty$, $\forall p \in (0, 1)$. It may be possible to have an uninformative, yet average calibrated model. For example, quantile predictions that match the true \textit{marginal} quantiles of $\mathbb{F}_{\textbf{Y}}$ will be average calibrated, but will hardly be useful since they do not depend on the input $x$. Therefore, the notion of \textbf{sharpness} is also considered, which quantifies the concentration of distributional predictions \citep{gneiting2007probabilistic}. For example, in predictions that parameterize a Gaussian, the variance of the predicted distribution is often taken as a measure of sharpness. There generally exists a tradeoff between average calibration and sharpness \citep{murphy1973new, gneiting2007probabilistic}. Recent works have suggested a notion of calibration stronger than average calibration, called adversarial group calibration \citep{zhao2020individual}. This stems from the notion of \textbf{group calibration} \citep{kleinberg2016inherent, hebert2017calibration}, which prescribes measurable subsets $\mathcal{S}_i \subset \mathcal{X}$ s.t. $P_{x \sim \mathbb{F}_{\textbf{X}}}(x \in \mathcal{S}_i) > 0$, $i=1,\dots,k$, and requires the predictions to be average calibrated within each subset. Adversarial group calibration then requires average calibration for \textit{any subset of $\mathcal{X}$ with non-zero measure}. 
Denote $\boldsymbol{X}_{\mathcal{S}}$ as a random variable that is conditioned on being in the set $\mathcal{S}$. For \textbf{adversarial group calibration}, the observed probability is \vspace{-1mm} \begin{align} \label{eq:adversarial-group-calibration} \begin{split} p^{\text{obs}}_{adv}(p) &:= \hspace{2mm} \mathbb{E}_{x \sim \mathbb{F}_{\textbf{X}_{\mathcal{S}}}}[\mathbb{F}_{\textbf{Y}|x}(\hat{\mathbb{Q}}_p(x))],\\ \hspace{2mm} \forall p \in (0, 1), \hspace{2mm} &\forall \mathcal{S} \subset \mathcal{X} \text{ s.t. } P_{x \sim \mathbb{F}_{\textbf{X}}} (x \in \mathcal{S}) > 0. \end{split} \end{align} With a finite dataset, we can measure a proxy of adversarial group calibration by measuring average calibration within all subsets of the data with sufficiently many points. An alternative but widely used family of evaluation metrics is \textbf{proper scoring rules} \citep{gneiting2007strictly}. Proper scoring rules are summary statistics of the overall performance of a distributional prediction, and are defined such that the true underlying distribution optimizes the expectation of the scoring rule. Given a scoring rule $S(\hat{\mathbb{F}}, (x,y))$, where $x\sim\mathbb{F}_{\boldsymbol{X}}$, $y \sim\mathbb{F}_{\boldsymbol{Y}|x}$, the expectation of the scoring rule is $S(\hat{\mathbb{F}}, \mathbb{F}) = \mathbb{E}_{\boldsymbol{X}, \boldsymbol{Y}}[S(\hat{\mathbb{F}}, (x,y))]$, and $S$ is said to be a proper scoring rule if $S(\mathbb{F}, \mathbb{F}) \geq S(\hat{\mathbb{F}}, \mathbb{F}), \forall \hat{\mathbb{F}} \in \mathcal{F}$. There are a variety of proper scoring rules, based on the representation of the distributional prediction. Since these rules consider both calibration and sharpness together in a single value~\citep{gneiting2007probabilistic}, they also serve as optimization objectives for UQ.
For example, the logarithmic score is a popular proper scoring rule for density predictions \citep{lakshminarayanan2017simple, pearce2018uncertainty, detlefsen2019reliable}, and it is used as a loss function via \textit{negative log-likelihood} (\textbf{NLL}). The \textbf{check score} is widely used for quantile predictions and also known as the \textit{pinball loss}. The \textbf{interval score} is commonly used for prediction intervals (a pair of quantiles with a prescribed expected coverage), and the continuous ranked probability score (\textbf{CRPS}) is popular for CDF predictions \footnote{Proper scoring rules are usually \textit{positively oriented} (i.e. greater value is more desirable), and their negative is taken as a loss function to minimize. In our work, we always report proper scoring rules in their \textit{negative orientation} (i.e. as a loss).}. We refer the reader to \citet{gneiting2007strictly} for the definition of each scoring rule. Given the wide range of metrics available, one might naturally ask, ``is there one metric to rule them all?'' Previous work has investigated some aspects of this question. For example, \citet{chung2020beyond} noted the mismatch between the check score and average calibration, and \citet{gneiting2007strictly} and \citet{bracher2021evaluating} point out cases in which disagreements can occur between some scoring rules. Still, whether there exists a golden metric in UQ is an open research problem. We instead suggest that there is virtue in inspecting various metrics simultaneously, which is made easy by Uncertainty Toolbox, as we show below. \section{Toolbox Contents} \vspace{-1mm} Uncertainty Toolbox comprises four main functionalities, which we detail below. \vspace{-4mm} \paragraph{Evaluation Metrics} The toolbox provides implementations for a suite of evaluation metrics. The main categories of metrics are: calibration, group calibration, sharpness, and proper scoring rules. 
We discuss each of these metric types in the following section (Section~\ref{sec:metrics}). \vspace{-4mm} \paragraph{Recalibration} We further implement recalibration methods that leverage isotonic regression \citep{kuleshov2018accurate}. Concretely, recalibration aims to improve the average calibration (defined in Eq.~(\ref{eq:avg_cali})) of distributional predictions. \vspace{-4mm} \paragraph{Visualizations} The toolbox offers a range of easy-to-use visualization utilities to help in inspecting and evaluating UQ quality. These plotting utilities focus on visualizing the predicted distribution, calibration, and prediction accuracy. \vspace{-4mm} \paragraph{Pedagogy} For those unfamiliar with the area of predictive UQ, we provide a glossary that communicates the core concepts in this area, and maintain a paper list which organizes some of the key papers in the field. We hope the toolbox serves as an intuitive guide for those unfamiliar but interested in utilizing UQ, and as a practical tool and point of reference for those active in UQ research. \textbf{Uncertainty Toolbox is available at the following page:} {\url{https://github.com/uncertainty-toolbox/uncertainty-toolbox}}.
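To make the recalibration step concrete, here is a heavily simplified numpy sketch of the idea behind \citet{kuleshov2018accurate}: learn a monotone map from predicted CDF values to their empirical frequencies on a held-out set. The toolbox fits this map with isotonic regression; the interpolation shortcut below is our own simplification, not the toolbox's implementation:

```python
import numpy as np

def fit_recalibrator(pred_cdf_vals):
    """Given predicted CDF values F_hat(x_i)(y_i) on a recalibration set,
    return a monotone map R such that R(F_hat(y|x)) is (approximately)
    average calibrated.  Monotonicity holds by construction, since we
    interpolate the empirical CDF over the sorted predicted values."""
    p = np.sort(np.asarray(pred_cdf_vals, dtype=float))
    emp = np.arange(1, p.size + 1) / p.size
    return lambda q: np.interp(q, p, emp, left=0.0, right=1.0)
```

If the model is miscalibrated, its CDF values are not uniformly distributed on held-out data, and $R$ pushes them back toward uniformity.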
\section{Introduction}\label{sec:introduction} Multi-agent path finding (MAPF), i.e., finding collision-free paths for multiple agents on a graph, has been a long-standing combinatorial problem. On undirected graphs, a feasible solution can be found in polynomial time, but finding a fastest (makespan-optimal) solution is NP-hard; on directed graphs, even finding a feasible solution is NP-hard in the general case \citep{nebel2020computational}. Despite the great challenges of MAPF, it is of great social and economic value because many real-life scheduling and planning problems can be formulated as MAPF problems. Many operations research (OR) algorithms have been proposed to efficiently find sub-optimal solutions \citep{cohen2019optimal,vsvancara2019online,ma2018multi,ma2017multi}. As an important branch of decision theory, reinforcement learning has attracted a lot of attention in recent years due to its super-human performance in many complex games, such as AlphaGo \citep{silver2017mastering}, AlphaStar \citep{vinyals2019grandmaster}, and OpenAI Five \citep{berner2019dota}. Inspired by the tremendous successes of RL in these complex games and decision scenarios, multi-agent reinforcement learning (MARL) is widely expected to work on MAPF problems as well. We study MARL in MAPF problems and aim to provide high-performance and scalable RL solutions. To develop and test our RL solution, we focus on a specific MAPF environment, Flatland. Flatland \citep{mohanty2020flatland,Laurent21} is a train schedule simulator developed by the Swiss Federal Railway Company (SBB). It simulates trains and rail networks in the real world and serves as an excellent environment for testing different MAPF algorithms. Since 2019, SBB has successfully held three Flatland challenges, attracting more than 200 teams worldwide and receiving thousands of submissions and over one million views.
The key reasons why we focus on this environment are as follows. \begin{itemize} \item \textbf{Support for Massive Numbers of Agents:} On the largest maps, up to hundreds of trains need to be planned. \item \textbf{Directed Graphs and Conflicts between Agents:} Trains CANNOT move back, and no decision is revocable. Deadlock occurs if the trains are not well planned (\cref{fig:deadlock}), which makes this problem very challenging. \item \textbf{Lack of High-Performance RL Solutions:} Existing RL solutions show a significant disadvantage compared with OR algorithms (27.9 vs. 141.0 scores). \end{itemize} To solve Flatland, we propose an RL solution built from standard reinforcement learning algorithms at scale. The critical components of our RL solution are (1) the application of a TreeLSTM network architecture to process the tree-structured local observations of each agent, (2) a centralized control method to promote cooperation between agents, and (3) our optimized, 20x faster feature parser. Our contributions can be summarized as follows. (1) We propose an RL solution consisting of domain-specific feature extraction and curriculum training phase design, a TreeLSTM network to process the structured observations, and a 20x faster feature parser to improve the sample efficiency. (2) Our observed strategies and performance show the potential of RL algorithms in MAPF problems. We find that standard RL methods coupled with domain-specific engineering can achieve performance comparable to OR algorithms (between the $2^{nd}$ and $3^{rd}$ best OR entries). Our solution provides implementation insights for MARL in the MAPF research community. (3) We will open-source our solution and the optimized feature parser for further research on multi-agent reinforcement learning in MAPF problems. \section{Flatland3 Environment}\label{sec:environment} Flatland is a simplified world of rail networks in which stations are connected by rails. Players control trains to run from one station to another.
Its newest version, Flatland3, consists of the following rules. \begin{itemize} \item The time is discretized into timestamps from 0 to $\Tmax$. \item There are $N$ trains and several cities. Trains are numbered from $1$ to $N$. \item Trains' action space consists of five actions, \{do\_nothing, forward, stop, left, right\}. Trains are not allowed to move backward and must go along the rails. \item Trains have different speeds. Each train $i$ has its own speed $s_i$, and can move one step every ${1}/{s_i}$ turns. The time cost of one step ${1}/{s_i}$ is guaranteed to be an integer, and the largest possible speed is 1, i.e., one step a turn. \item For each train $i$, it has an earliest departure time $A_i$ and a latest arrival time $B_i$. Each train can depart from its initial station only after its earliest departure time $A_i$ and should try its best to arrive at its target station before the latest arrival time $B_i$. \item Trains randomly break down (malfunction) while running or waiting for departure. After the breakdown, the train must stay still for a period of time before moving again. \end{itemize} \begin{figure}[tb] \centering \includegraphics[width=0.45\textwidth]{image/flatland.png} \caption{A $30\times30$ flatland map. There are two stations and two trains. The lines connecting trains and stations indicate trains' targets.} \label{fig:flatland} \end{figure} \input{image/rail/rail} \paragraph{Reward} The goal is to control all trains to reach target stations before their latest arrival time $B_i$. Every train will get a reward $R_i$ in the end. $T_i$ denotes the arrival time of each train. \begin{itemize} \item If a train arrives on time, then it scores 0. \item If it arrives late, it gets a negative reward $B_i - T_i$ as a penalty, according to how late it is. \item If a train does not manage to arrive at its target before the end time $\Tmax$, then the penalty consists of two parts, a temporal penalty and a spatial penalty. 
The temporal penalty is $B_i-\Tmax$, reflecting how late it is. The spatial penalty is determined by the shortest-path distance $d_i$ between its final location at time $\Tmax$ and its target. \end{itemize} Formally, $R_i$ is defined as \begin{equation} R_i = \left\{ \begin{aligned} & 0, && \text{if}~ T_i \le B_i; \\ & B_i - T_i, && \text{if}~ B_i < T_i \le \Tmax; \\ & B_i - \Tmax - d_i^{(\Tmax)}, && \text{if}~ T_i > \Tmax; \end{aligned} \right. \end{equation} where $d_i^{(\Tmax)}$ is the distance between train $i$ and its target at time $\Tmax$, \begin{equation} d_i^{(t)} = d\left((x_i^{(t)}, y_i^{(t)}), \;\target_i\right). \end{equation} Our goal is to maximize the sum of individual rewards \begin{equation} R = \sum_{i=1}^{N} R_i. \end{equation} Clearly, $R$ is always non-positive, and $R=0$ if and only if all trains reach their targets on time. $\lvert R\rvert$ can be arbitrarily large, as long as the map size is sufficiently large and the algorithm's performance is sufficiently poor. \paragraph{Normalized Reward} The magnitude of the total reward $R$ greatly depends on the problem scale, such as the number of trains, the number of cities, the speeds of trains, and the map size. To make rewards of different problem scales comparable, they are normalized as follows: \begin{equation} \bar{R} = 1 + \frac{R}{N\Tmax}, \end{equation} where $N$ is the number of trains. The environment generating procedure guarantees $\bar{R} \in [0, 1]$ by adjusting $\Tmax$. The normalized reward serves as the standard criterion for testing algorithms. \section{Our Approach}\label{sec:approach} The RL solution we provide is cooperative multi-agent reinforcement learning. Each agent independently observes a part of the map localized around itself and encodes the part of the map topology into tree-structured features.
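Returning to the reward scheme of \cref{sec:environment}, the per-train reward and its normalization can be expressed compactly. The sketch below uses our own variable names, with `T_i=None` marking a train that never arrives:

```python
def train_reward(T_i, B_i, T_max, d_i=0):
    """Per-train reward R_i: 0 if on time, a lateness penalty if late,
    and a temporal plus spatial penalty (shortest-path distance d_i
    from the final location to the target) if the train never arrives."""
    if T_i is not None and T_i <= B_i:
        return 0
    if T_i is not None and T_i <= T_max:
        return B_i - T_i
    return B_i - T_max - d_i

def normalized_reward(rewards, n_trains, T_max):
    """R_bar = 1 + R / (N * T_max); the environment generator adjusts
    T_max so that R_bar stays in [0, 1]."""
    return 1.0 + sum(rewards) / (n_trains * T_max)
```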
The network processes each agent's observation independently in its first several layers with a TreeLSTM, followed by several self-attention blocks that enable communication between agents, so that each train becomes aware of the others' local observations and forms its own picture of the global map. Rewards are shared by all agents to promote cooperation between them. Our network is trained with the Proximal Policy Optimization (PPO) algorithm \citep{Schulman2017ProximalPO}. \subsection{Feature Extraction} The extracted features consist of two parts, $X^\mathrm{attr}$ and $X^\mathrm{tree}$. \paragraph{Agent Attributes} The first part, $X^\mathrm{attr}=\{\mathbf{x}^\mathrm{attr}_i\}_{i=1}^N$, contains the attributes of each agent, such as its ID, earliest departure time, latest arrival time, current state, direction, and remaining time. See \cref{tab:agent attribute} for the detailed contents of $X^\mathrm{attr}$. \input{table/feature_table} \paragraph{Tree Representation of Possible Future Paths} The second part, $X^\mathrm{tree}$, is the main part of the observations. It encodes the possible future paths of each agent, together with useful information about these paths, into a tree structure. We model the rail network as a directed graph and construct a spanning tree for each agent by a depth-limited BFS (breadth-first search) starting from its current location. Each node in the tree represents a branch the agent may choose (see \cref{fig:tree}). Formally, the spanning tree we construct for each train $i$ is $\mathcal{T}_i= (\mathcal{V}_i, \mathcal{E}_i)$ with node set $\mathcal{V}_i$ and edge set $\mathcal{E}_i$. Each node $\nu\in\mathcal{V}_i$ is associated with a vector $\boldsymbol{x}_\nu$ containing useful information about this branch. See \cref{tab:node feature} for the detailed contents of $\boldsymbol{x}_\nu$. 
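The depth-limited BFS construction described above might be sketched as follows. This is an illustrative Python sketch, not the paper's C++ implementation; the graph is assumed to be given as an adjacency mapping from a node to the branch nodes reachable from it.

```python
from collections import deque

def build_tree(graph, root, max_depth):
    """Depth-limited BFS spanning tree over a directed graph.

    Each graph node is visited at most once, so nodes close to the
    root in the graph stay close to the root in the tree (which a
    DFS construction does not guarantee)."""
    tree = {root: []}            # node -> list of child nodes
    visited = {root}
    queue = deque([(root, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue             # do not expand beyond the depth limit
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                tree[node].append(nxt)
                tree[nxt] = []
                queue.append((nxt, depth + 1))
    return tree
```

The comment about BFS versus DFS mirrors the motivation given below for building trees in BFS order.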
So, $X^\mathrm{tree}$ contains both the tree structure and these associated node features: \begin{equation} X^\mathrm{tree} = \{(\mathcal{T}_i, X^\mathrm{tree}_i)\}_{i=1}^N, \end{equation} where $X^\mathrm{tree}_i = \{\boldsymbol{x}_\nu| \nu\in\mathcal{V}_i\}$. Such a tree representation is provided by the Flatland3 environment and has also been explored by other RL methods \citep{mohanty2020flatland, Laurent21}. However, no previous RL method has achieved performance comparable to ours, which we attribute to the following improvements. \begin{itemize} \item First, previous methods concatenate the node features into one long vector so that it can be fed into an MLP. Ordinary networks can only process vectors, not tree-structured input, and the concatenation discards the underlying tree structure. In contrast, we regard the tree structure as essential for decision-making and preserve it, since it encodes the map topology. In \cref{sec:netwrok}, we process such tree-structured data with a dedicated neural network architecture, TreeLSTM \cite{tai2015improved}. \item Second, our trees are much deeper. The tree depth determines the range of an agent's field of view and thus significantly affects performance. Extracting the tree representation is computationally intensive, and the built-in Flatland3 implementation is very slow, due both to the overhead of Python and to the unnecessary complete ternary trees it builds; as a result, previous RL methods used a very limited tree depth, typically 3. We re-implemented the tree construction in C++ and prune the complete ternary trees into ordinary trees. Our implementation is 20x faster than the built-in one and enables us to build trees with depths of more than 10. \item Third, we build trees in BFS order, while the built-in implementation uses DFS order. Constructing a spanning tree in DFS order can place nodes that are near the root in the graph far from the root in the tree, which is a disadvantage. 
\end{itemize} \subsection{Neural Network Architecture} \label{sec:netwrok} As shown in \cref{fig:network}, our network first processes $X^\mathrm{attr}$ with a 4-layer MLP and $X^\mathrm{tree}$ with a TreeLSTM \citep{tai2015improved}. TreeLSTM is a variant of LSTM designed for tree-structured data, whose details are elaborated below. \begin{align} H^\mathrm{attr} &= \operatorname{MLP}(X^\mathrm{attr}) \\ H^\mathrm{tree} &= \operatorname{TreeLSTM}(X^\mathrm{tree}) \end{align} Then, we concatenate $H^\mathrm{attr}$ and $H^\mathrm{tree}$ and feed the result into three consecutive self-attention blocks to enable communication between agents. With the self-attention mechanism \citep{vaswani2017attention}, each train becomes aware of the other trains' observations and forms its own picture of the global map. \begin{align} H^{(0)} & = [H^\mathrm{attr}, H^\mathrm{tree}], \\ H^{(l)} & = \operatorname{Self-Attention}\left(H^{(l-1)}\right), \quad l=1,2,3. \end{align} Finally, $H^{(3)}$ is fed into two different heads to obtain the final action logits $A\in\mathbb{R}^{N\times 5}$ and the estimated state value $v\in\mathbb{R}$. \begin{align} A &= \operatorname{MLP}(H^{(3)}) \\ V &= \operatorname{MLP}(H^{(3)}) \\ v &= \textstyle \sum_{i=1}^{N} V_i \end{align} \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{image/network-architecture.pdf} \caption{Overview of our network architecture. FC stands for fully connected layer.} \label{fig:network} \end{figure} \paragraph{TreeLSTM} LSTM \citep{hochreiter1997long}, a kind of RNN, was designed to handle sequential data. Each LSTM cell takes the state $(c_{t-1}, h_{t-1})$ of the previous cell and a new input $x_t$, and outputs the new cell state $(c_t, h_t)$ to the next cell. \begin{equation} (c_t, h_t) = \operatorname{LSTM-Cell}\left(x_t, (c_{t-1}, h_{t-1})\right) \end{equation} Sequential data is a special case of a tree in which each node has a unique child --- its successor. 
\citeauthor{tai2015improved} modified its structure to handle general trees in \citeyear{tai2015improved}. Unlike sequential data, a node in a tree may have multiple children. As a result, a TreeLSTM cell instead receives the set of its children's states as input: \begin{align} & (c_t, h_t) = \operatorname{TreeLSTM-Cell}(x_t, S_t), \end{align} where $S_t = \{(h_k, c_k)\;|\;k\in\operatorname{Child}(t)\}$. Within TreeLSTM cells, there are several ways to aggregate the children's states, leading to different variants of TreeLSTM. In our network, we adopt the Child-Sum TreeLSTM. See \citet{tai2015improved} for the detailed structure of TreeLSTM cells. \subsection{Reward Design} Agents are given rewards at every time step, according to their performance at that moment. Besides the normalized reward generated by the environment, agents are also rewarded when they depart from stations or arrive at their targets, and penalized when deadlocks happen. To promote cooperation, these rewards are shared by all agents, and no credit assignment is performed. As a result, a single agent is encouraged to wait for others if waiting leads to a global efficiency improvement. \paragraph{Environmental Reward} Agents receive the environmental reward $r^{(e)}_t$ at time step $t$: \begin{equation} r^{(e)}_t = \bar{R}_t, \end{equation} where $\bar{R}_t$ is the normalized environmental reward the agents receive at time step $t$. \paragraph{Departure Reward} We reward agents when new agents depart: \begin{equation} r^{(d)}_t = \frac {n^{(d)}_t - n^{(d)}_{t - 1}} {N}, \end{equation} where $n^{(d)}_t$ is the number of agents that have departed at or before time step $t$. \paragraph{Arrival Reward} We reward agents when there are new arrivals: \begin{equation} r^{(a)}_t = \frac {n^{(a)}_t - n^{(a)}_{t - 1}} {N}, \end{equation} where $n^{(a)}_t$ is the number of arrivals up to time step $t$. 
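The count-based reward terms above are all normalized increments of a running counter, so they can share one small helper. The sketch below is an illustrative implementation (names are ours, not the paper's); the same difference-of-counters pattern applies to any such term.

```python
class CountReward:
    """Emits (n_t - n_{t-1}) / N at each step, the shared reward
    increment used for count-based terms such as departures and
    arrivals."""

    def __init__(self, num_agents):
        self.N = num_agents
        self.prev = 0          # counter value at the previous step

    def step(self, count_now):
        r = (count_now - self.prev) / self.N
        self.prev = count_now
        return r
```

Because the counters are cumulative, each event contributes exactly $1/N$ once, at the step where it happens.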
\begin{figure}[t] \centering \includegraphics[width=0.35\textwidth]{image/deadlock.png} \caption{Because trains are not allowed to go backward, if two trains enter a single rail in opposite directions, a deadlock happens.} \label{fig:deadlock} \end{figure} \paragraph{Deadlock Penalty} Because trains are not allowed to go backward, if two trains enter a single rail in opposite directions, a deadlock happens and no train can pass this rail again (see \cref{fig:deadlock}). We therefore apply a penalty when new deadlocks occur: \begin{equation} r^{(l)}_t = \frac {n^{(l)}_t - n^{(l)}_{t - 1}} {N}, \end{equation} where $n^{(l)}_t$ is the number of deadlocks on the map at time step $t$. \paragraph{Total Reward} The final reward given to the agents at time $t$ is a weighted sum of all the terms above: \begin{equation} r_t = c_e r_t^{(e)} + c_a r_t^{(a)} + c_d r_t^{(d)} - c_l r_t^{(l)}, \end{equation} where $c_e,c_a,c_d,c_l$ are weight parameters. \section{Experiments}\label{sec:experiment} \input{table/experiments_info} \input{table/compare_phase_1_2} \input{table/generalization_up} \subsection{Experiment Settings} We largely followed the final-round (round 2) configurations of the Flatland3 challenge in our experiments, so that our results are comparable with those on the challenge leaderboard\footnotemark. There are 15 test stages in the final round, and each stage contains 10 test cases. The problem scale (\cref{tab:main-result}) and difficulty gradually increase from the initial stages to the advanced ones. The first stage is the smallest, with 7 agents on a $30\times30$ map, while the last stage contains 425 agents on a $158\times158$ map. Teams' submissions are tested stage by stage; a team proceeds to the next stage only if it passes the preceding one (i.e., the arrival ratio reaches 25\%). 
\footnotetext{\url{https://www.aicrowd.com/challenges/flatland-3/leaderboards}} \input{table/main-result} \input{table/leaderboard} \input{table/generalization_down} \subsection{Multiple Phase Training} To reduce the training difficulty, we train our models in a curriculum-learning style, and the whole training process can be roughly divided into three phases. In phases I and II, we train a model in 50-agent environments. We found that the learned model generalizes well to smaller environments but not to larger ones. In phase III, models are initialized with the one learned in phase II and fine-tuned in settings with more agents. \paragraph{Phase I} Initially, we use only the environmental reward, arrival reward, and deadlock penalty to encourage the trains to march toward their targets while avoiding deadlocks (see Phase-I in \cref{tab:experiments}). After training, 70\% of the agents in 50-agent environments reach their targets, and a normalized reward of 0.859 is achieved. However, 21\% of the agents never depart because of the deadlock penalty: agents choose not to depart and behave conservatively to avoid being penalized for deadlocks. \paragraph{Phase II} Since many trains do not depart in phase I, we add a departure reward to encourage more departures (see Phase-II in \cref{tab:experiments}). This experiment is initialized with the phase-I model; after 5 days of training, the arrival ratio increases to 86.2\%, and almost all trains depart (\cref{tab:phase_1_and_2}). \paragraph{Phase III} Finally, we deal with environments with more than 80 agents. Training models in such large environments from scratch is very difficult, so we adopt curriculum learning: models for large environments are initialized with the parameters learned in small environments. Although models learned in small environments can directly generalize to large environments (\cref{tab:generaliztion_up}), fine-tuning in large environments improves performance significantly. 
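The three-phase curriculum above can be summarized as a sequence of training configurations in which each phase warm-starts from the previous phase's checkpoint. The schedule below is a hypothetical sketch: the agent counts and the set of active reward terms follow the description above, but the exact settings are those in \cref{tab:experiments}, not these placeholders.

```python
# Illustrative curriculum schedule (placeholder values, not the
# paper's exact configuration).
phases = [
    {"name": "I",   "num_agents": 50, "init_from": None,
     "rewards": ("environmental", "arrival", "deadlock")},
    {"name": "II",  "num_agents": 50, "init_from": "I",
     "rewards": ("environmental", "arrival", "deadlock", "departure")},
    {"name": "III", "num_agents": 80, "init_from": "II",
     "rewards": ("environmental", "arrival", "deadlock", "departure")},
]

def training_order(phases):
    """Validate that every warm-start source exists, then return the
    order in which the phases are trained."""
    names = {p["name"] for p in phases}
    for p in phases:
        assert p["init_from"] is None or p["init_from"] in names
    return [p["name"] for p in phases]
```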
\subsection{Results and Analysis} Our stage-specific results are reported in \cref{tab:main-result}; the final scores, together with the top-10 teams' scores in the Flatland3 challenge, are listed in \cref{tab:leaderboard}. In summary, we scored 125.3, ranking 2nd--3rd on the leaderboard, while the previous best RL method scored only 27.9. More specifically, we observe the following phenomena: \begin{itemize} \item No previous RL method managed to pass the Test\_03 stage, while our method passed all 15 stages. \item As the number of agents increases, model performance decreases, which suggests that large-scale problems are more difficult than we expected. \item When the number of agents is the same (Test\_04 to Test\_08), model performance increases with larger maps and more cities, because these lead to lower agent density and less traffic congestion. \item Compared to the third-best team, we achieved a higher environmental reward but a lower arrival ratio. This indicates that the environmental reward is not always consistent with the arrival ratio: the arrival ratio only measures whether agents arrive, while the environmental reward also accounts for how quickly they arrive. They are two highly related but different objectives. A similar phenomenon can be observed between teams An\_Old\_Driver and Zain. \end{itemize} \paragraph{Generalization across environment scales} We are also interested in the generalization ability of our models, particularly generalization across environment scales. As \cref{tab:generaliztion_up} shows, models learned in small environments can generalize to large environments but perform worse than the one fine-tuned in large environments. \cref{tab:generalization_down} shows that models specialized in large environments can also generalize to small environments but perform worse than the ones learned in small environments. To achieve optimal performance, we thus need to train multiple scale-specific models. 
\paragraph{Agent Cooperation} We observed many self-organized cooperative patterns in the agents' behaviors. They learn to line up and march in a compact manner (\cref{fig:lineup}). Fast trains learn to overtake slow ones (\cref{fig:overtake}), and slow trains make way for fast ones (\cref{fig:makeway}). When there are two parallel rail lanes, trains spontaneously line up as if on a two-way street (\cref{fig:parallel}). \section{Conclusion}\label{sec:conclusion} We provided a new RL solution to the Flatland3 challenge and achieved a score 4x better than the previous best RL method. The key reasons behind the improvement are 1) the tree features and the TreeLSTM we adopt, and 2) the 20x faster feature parser, which enables us to train our model with far more data than previous RL methods. However, there is still a gap between our method and state-of-the-art OR methods \citep{LiICAPS21}, and our method also takes longer to run than OR methods. Another drawback is the lack of a single model that can handle environments of any scale; to achieve optimal performance, we have to train multiple scale-specific models.
\section{Introduction} Stochastic Convex Optimization (SCO) \citep{vapnik2013nature} and its empirical form, Empirical Risk Minimization (ERM), are among the most fundamental problems in supervised learning and statistics. They find numerous applications in areas such as medicine, finance, genomics and social science. A frequently encountered challenge in such applications is how to handle sensitive data, such as those in biomedical datasets. As a commonly accepted approach for preserving privacy, differential privacy \citep{dwork2006calibrating} provides provable protection against identification and is resilient to arbitrary auxiliary information that might be available to attackers. Methods to guarantee differential privacy have been widely studied, and recently adopted in industry \citep{apple,ding2017collecting}. Differentially Private Stochastic Convex Optimization and Empirical Risk Minimization ({\em i.e.,} DP-SCO and DP-ERM) have been extensively studied in the past decade, starting from \citep{chaudhuri2009privacy,chaudhuri2011differentially}. Since then, a long list of works has attacked these problems from different perspectives: \citep{bassily2014private, wang2017differentially,wang2019differentially,wu2017bolt,bassily2020} studied the problems in the low-dimensional case and the central model, \citep{kasiviswanathan2016efficient,kifer2012private,talwar2015nearly} considered the high-dimensional sparse case and the central model, and \citep{smith2017interaction,wang2018empirical,wang2019noninteractive,duchi2013local} focused on the local model. It is worth noting that all previous results need to assume that either the loss function is $O(1)$-Lipschitz or each data sample has bounded $\ell_2$ or $\ell_{\infty}$ norm. This is particularly true for the output perturbation based \citep{chaudhuri2011differentially} and objective or gradient perturbation based \citep{bassily2014private} DP methods. 
However, such assumptions may not always hold when dealing with real-world datasets, especially those from biomedicine and finance, which implies that existing algorithms may fail. The main reason is that in such applications the datasets are often unbounded or even heavy-tailed \citep{woolson2011statistical,biswas2007statistical,ibragimov2015heavy}. As pointed out by Mandelbrot and Fama in their influential finance papers \citep{mandelbrot1997variation,fama1963mandelbrot}, asset prices in the early 1960s exhibit some power-law behavior. Heavy-tailed data can lead to unbounded gradients and thus violate the Lipschitz condition. For example, consider the linear squared loss $\ell(w,x, y)=(w^Tx-y)^2$. When $x$ is heavy-tailed, the gradient of $\ell(w, x,y)$ becomes unbounded. With the above understanding, our questions now are: {\bf What is the behavior of DP-SCO on heavy-tailed data and is there any effective method for the problem?} To answer these questions, we conduct, in this paper, a comprehensive study of the DP-SCO problem. Our contributions can be summarized as follows. \begin{enumerate} \item We first consider the case where the loss function is strongly convex and smooth. For this case, we propose an $(\epsilon, \delta)$-DP method based on the sample-and-aggregate framework of \citep{nissim2007smooth} and show that under some assumptions, with high probability, the excess population risk of the output is $\tilde{O}(\frac{d^3}{n\epsilon^4}L_\mathcal{D} (w^*))$, where $n$ is the sample size, $d$ is the dimensionality and $L_\mathcal{D} (w^*)$ is the minimal value of the population risk. \item Then, we study the case with the additional assumptions that each coordinate of the gradient of the loss function is sub-exponential and Lipschitz. 
For this case, we introduce an $(\epsilon, \delta)$-DP algorithm based on the gradient descent method and a recent algorithm for private 1-dimensional mean estimation \citep{bun2019average} ({\em i.e.,} Algorithm \ref{alg:3}). We show that the expected excess population risk for this case can be improved to $\tilde{O}(\frac{ d^2 \log \frac{1}{\delta}}{ n\epsilon^2 })$. \item We also consider the general case, where the loss function does not need the above additional assumptions and can be general convex instead of strongly convex. For this case, we present a gradient descent method based on the strategy of trimming the unbounded gradient (Algorithm \ref{alg:4}). We show that if each coordinate of the gradient of the loss function has bounded second-order moment, then with high probability, the output of our algorithm achieves excess population risks of $\tilde{O}(\frac{ d^2 \log \frac{1}{\delta}}{n\epsilon^2})$ and $\tilde{O}(\frac{\log \frac{1}{\delta }d^\frac{2}{3}}{(n\epsilon^2)^\frac{1}{3}})$ for strongly convex and general convex loss functions, respectively. It is notable that, compared with Algorithm \ref{alg:4}, Algorithm \ref{alg:3} uses stronger assumptions and yields weaker results. \item Finally, we test our proposed algorithms on both synthetic and real-world datasets. The experimental results are consistent with our theoretical claims and demonstrate the effectiveness of our algorithms in handling heavy-tailed datasets. \end{enumerate} Due to the space limit, some definitions and all the proofs are relegated to the appendix in the Supplementary Material, which also includes the code for the experiments. \section{Related Work} As mentioned earlier, there is a long list of works on DP-SCO and DP-ERM. However, none of them considers the case of heavy-tailed data. Recently, a number of works have studied the SCO and ERM problems with heavy-tailed data \citep{brownlees2015empirical,minsker2015geometric,hsu2016loss,lecue2018robust}. 
However, all of them focus on the non-private version of the problem, and it is not clear whether they can be adapted to private versions. To the best of our knowledge, the work presented in this paper is the first on general DP-SCO with heavy-tailed data. The works most related to ours are perhaps those dealing with unbounded sensitivity. \citep{dwork2009differential} proposed a general framework called propose-test-release and applied it to mean estimation. They obtained asymptotic results which are incomparable with ours, and it is not clear whether their framework can be applied to our problem. In our second result, we adopt the private mean estimation procedure of \citep{bun2019average}. However, their results are in expectation form, which is not preferred in robust estimation \citep{brownlees2015empirical}. For this reason, we propose a new algorithm that yields theoretically guaranteed bounds with high probability. \citep{karwa2017finite} considered the confidence interval estimation problem for Gaussian distributions, which was later extended to general distributions \citep{feldman2018calibrating}. However, it is unknown how to extend these methods to the DP-SCO problem. \citep{abadi2016deep} proposed a DP-SGD method based on truncating the gradient, which can deal with the infinite-sensitivity issue. However, it provides no theoretical guarantee on the excess population risk. \section{Preliminaries} \begin{definition}[Differential Privacy \citep{dwork2006calibrating}]\label{def:3.1} Given a data universe $\mathcal{X}$, we say that two datasets $D,D'\subseteq \mathcal{X}$ are neighbors if they differ by only one entry, which is denoted as $D \sim D'$. 
A randomized algorithm $\mathcal{A}$ is $(\epsilon,\delta)$-differentially private (DP) if for all neighboring datasets $D,D'$ and for all events $S$ in the output space of $\mathcal{A}$, the following holds \[\mathbb{P}(\mathcal{A}(D)\in S)\leq e^{\epsilon} \mathbb{P}(\mathcal{A}(D')\in S)+\delta.\] \end{definition} \begin{definition}[DP-SCO \citep{bassily2014private}]\label{definition:1} Given a dataset $D=\{x_1,\cdots,x_n\}$ from a data universe $\mathcal{X}$ where $x_i$ are i.i.d. samples from some unknown distribution $\mathcal{D}$, a convex loss function $\ell(\cdot, \cdot)$, and a convex constraint set $\mathcal{W} \subseteq \mathbb{R}^d$, Differentially Private Stochastic Convex Optimization (DP-SCO) is to find $w^{\text{priv}}$ so as to minimize the population risk, {\em i.e.,} $L_\mathcal{D} (w)=\mathbb{E}_{x\sim \mathcal{D}}[\ell(w, x)]$ with the guarantee of being differentially private. The utility of the algorithm is measured by the \textit{(expected) excess population risk}, that is $\mathbb{E}_{\mathcal{A}}[L_\mathcal{D} (w^{\text{priv}})]-\min_{w\in \mathbb{\mathcal{W}}}L_\mathcal{D} (w),$ where the expectation of $\mathcal{A}$ is taken over all the randomness of the algorithm. Besides the population risk, we can also measure the \textit{empirical risk} of dataset $D$: $\hat{L}(w, D)=\frac{1}{n}\sum_{i=1}^n \ell(w, x_i).$ \end{definition} \begin{definition} A random variable $X$ with mean $\mu$ is called $\tau$-sub-exponential if $\mathbb{E}[\exp(\lambda (X-\mu))]\leq \exp(\frac{1}{2}\tau^2\lambda^2), \forall |\lambda|\leq \frac{1}{\tau}$. \end{definition} \begin{definition} A function $f$ is $L$-Lipschitz if for all $w, w'\in\mathcal{W}$, $|f(w)-f(w')|\leq L\|w-w'\|_2$. \end{definition} \begin{definition} A function $f$ is $\alpha$-strongly convex on $\mathcal{W}$ if for all $w, w'\in \mathcal{W}$, $f(w')\geq f(w)+\langle \nabla f(w), w'-w \rangle+\frac{\alpha}{2}\|w'-w\|_2^2$. 
\end{definition} \begin{definition} A function $f$ is $\beta$-smooth on $\mathcal{W}$ if for all $w, w'\in \mathcal{W}$, $f(w')\leq f(w)+\langle \nabla f(w), w'-w \rangle+\frac{\beta}{2}\|w'-w\|_2^2$. \end{definition} \begin{assumption}\label{ass:1} For the loss function and the population risk, we assume the following. \begin{enumerate} \item The loss function $\ell(w, x)$ is non-negative, differentiable and convex for all $w\in \mathcal{W}$ and $x \in \mathcal{X}$. \item The population risk $L_{\mathcal{D}}(w)$ is $\beta$-smooth. \item The convex constraint set $\mathcal{W}$ is bounded with diameter $\Delta=\max_{w, w'\in \mathcal{W}}\|w-w'\|_2< \infty$. \item The optimal solution $w^*=\arg\min_{w\in \mathcal{W}} L_\mathcal{D}(w)$ satisfies $\nabla L_\mathcal{D}(w^*)= 0$. \end{enumerate} \end{assumption} \begin{assumption} \label{ass:2} There exists a number $n_\alpha$ such that when the sample size $|D|\geq n_\alpha$, the empirical risk $\hat{L}(\cdot, D)$ is $\alpha$-strongly convex with probability at least $\frac{5}{6}$ over the choice of i.i.d. samples in $D$. \end{assumption} We note that Assumptions \ref{ass:1} and \ref{ass:2} are commonly used in the studies on the problem of Stochastic Strongly Convex Optimization with heavy-tailed data, such as \citep{hsu2016loss,holland2019a}. Also the probability of $\frac{5}{6}$ in Assumption \ref{ass:2} is only for convenience. \begin{assumption}\label{ass:3} We assume the following for the loss functions. \begin{enumerate} \item For any $w\in \mathcal{W}$ and each coordinate $j\in [d]$, we assume that the random variable $\nabla_j \ell(w, x)$ is $\tau$-sub-exponential and $\beta_j$-Lipschitz (that is $\ell_j(w, x)$ is $\beta_j$-smooth), where $\nabla_j$ represents the $j$-th coordinate of the gradient. \item There are known constants $a, b = O(1)$ such that $a \leq \mathbb{E}[\nabla_j \ell(w, x)]\leq b$ for all $w\in \mathcal{W}$. 
\end{enumerate} \end{assumption} \begin{assumption}\label{ass:4} For any $w\in \mathcal{W}$ and each coordinate $j\in [d]$, we have $\mathbb{E}[(\nabla_j \ell(w, x))^2]\leq v=O(1)$, where $v$ is some known constant. \end{assumption} We can see that, compared with Assumption \ref{ass:3}, Assumption \ref{ass:4} imposes fewer conditions on the loss functions, since we only assume that the gradient of the loss function has a bounded second-order moment. We also note that Assumption \ref{ass:4} is better suited to the problem of Stochastic Convex Optimization with heavy-tailed data and has been used in previous works such as \citep{holland2017efficient,brownlees2015empirical}. \section{Sample-aggregation based method} In this section, we first summarize the sample-aggregate framework introduced in \citep{nissim2007smooth}. Most existing privacy-preserving frameworks are based on the notion of \textit{global sensitivity}, which is defined as the maximum output perturbation $\|f(D)-f(D')\|_{\xi}$, where the maximum is over all neighboring datasets $D, D'$ and $\xi=1,2$. However, in some problems such as clustering \citep{nissim2007smooth,wang2015differentially}, the sensitivity can be very high and thus ruin the utility of the algorithm. To circumvent this issue, \citep{nissim2007smooth} introduced the sample-aggregate framework based on a smooth version of the \textit{local sensitivity}. Unlike the global sensitivity, the local sensitivity measures the maximum perturbation $\|f(D)-f(D')\|_\xi$ over all databases $D'$ neighboring the input database $D$. The proposed sample-aggregate framework (Algorithm \ref{alg:1}) enjoys local sensitivity and comes with the following guarantee: \begin{theorem}[Theorem 4.2 in \citep{nissim2007smooth}]\label{thm:1} Let $f: \mathcal{D}\mapsto \mathbb{R}^d$ be a function, where $\mathcal{D}$ is the collection of all databases and $d$ is the dimensionality of the output space. 
Let $d_{\mathcal{M}}(\cdot, \cdot)$ be a semi-metric on the output space of $f$. Set $\epsilon> \frac{2d}{\sqrt{m}}$ and $m=\omega(\log^2 n)$. The sample-aggregate algorithm $\mathcal{A}$ in Algorithm \ref{alg:1} is an efficient $(\epsilon, \delta)$-DP algorithm.\footnote{Here the efficiency means that the time complexity is polynomial in all terms.} Furthermore, if $f$ and $m$ are chosen such that the $\ell_1$ norm of the output of $f$ is bounded by $\Lambda$ and \begin{equation}\label{eq:1} \text{Pr}_{D_S\subseteq D}[d_{\mathcal{M}}(f(D_S), c)\leq r]\geq \frac{3}{4} \end{equation} for some $c\in \mathbb{R}^d$ and $r>0$, then the standard deviation of Gaussian noise added is upper bounded by $O(\frac{r}{\epsilon}+\frac{\Lambda}{\epsilon}e^{-\Omega(\frac{\epsilon\sqrt{m}}{d})}).$ In addition, when $m=\omega(\frac{d^2\log^2(r/\Lambda)}{\epsilon^2})$, with high probability each coordinate of $\mathcal{A}(D)-\bar{c}$ is upper bounded by $O(\frac{r}{\epsilon})$, where $\bar{c}$ depending on $\mathcal{A}(D)$ satisfies $d_{\mathcal{M}}(c, \bar{c})=O(r)$. \end{theorem} \begin{algorithm}[h] \caption{Sample-aggregate Framework \citep{nissim2007smooth}} \label{alg:1} $\mathbf{Input}$: $D=\{x_i\}_{i=1}^n\subset \mathbb{R}^d$, number of subsets $m$, privacy parameters $\epsilon, \delta$; $f, d_{\mathcal{M}}$. \begin{algorithmic}[1] \STATE {\bf Initialize:} $s=\sqrt{m}, \gamma=\frac{\epsilon}{5\sqrt{2\log(2/\delta)}}$ and $\beta= \frac{\epsilon}{4(d+\log(2/\delta))}$. \STATE {\bf Subsampling:} Select $m$ random subsets of size $\frac{n}{m}$ of $D$ independently and uniformly at random without replacement. Repeat this step until no single data point appears in more than $\sqrt{m}$ of the sets. Mark the subsampled subsets $D_{S_1}, D_{S_2}, \cdots, D_{S_m}$. \STATE Compute $\mathcal{S}=\{s_i\}_{i=1}^m$, where $s_i=f(D_{S_i})$. \STATE Compute $g(\mathcal{S})=s_{i^*}$, where $i^*=\arg\min_{i=1}^m r_i(t_0)$ with $t_0=\frac{m+s}{2}+1$. 
Here $r_i(t_0)$ denotes the distance $d_{\mathcal{M}}(\cdot, \cdot)$ between $s_i$ and the $t_0$-th nearest neighbor to $s_i$ in $\mathcal{S}$. \STATE {\bf Noise Calibration:} Compute $S(\mathcal{S})=2\max_{k}(\rho(t_0+(k+1)s)\cdot e^{-\beta k}),$ where $\rho(t)$ is the mean of the top $\lceil \frac{s}{\beta} \rceil$ values in $\{r_1(t), \cdots, r_m(t)\}$. \\ \STATE Return $\mathcal{A}(D)=g(\mathcal{S})+\frac{S(\mathcal{S})}{\gamma}u$, where $u$ is a standard Gaussian random vector. \end{algorithmic} \end{algorithm} We have the following Lemma \ref{lemma:3}, which shows that the minimum of the empirical risk satisfies (\ref{eq:1}). \begin{lemma}\label{lemma:3} Let $w_D=f(D)=\arg\min_{w\in \mathcal{W}}\hat{L}(w, D)$ where $|D|=n$. Then, under Assumptions \ref{ass:1} and \ref{ass:2}, if $n\geq n_\alpha$, the following holds \begin{equation}\label{eq:2} \text{Pr}[\|w_D-w^*\|_2\leq \eta ] \geq \frac{3}{4}, \end{equation} where $\eta=O(\sqrt{\frac{\mathbb{E}\|\nabla \ell(w^*, x)\|_2^2 }{n\alpha^2}})$. \end{lemma} Combining Lemma \ref{lemma:3} and Theorem \ref{thm:1}, we get the following upper bound for DP-SCO with heavy-tailed data and strongly convex loss functions. \begin{theorem}\label{thm:2} Under Assumptions \ref{ass:1} and \ref{ass:2}, for any $\epsilon, \delta>0$, if $n\geq \tilde{\Omega}(\frac{n_\alpha d^2}{\epsilon^2})$, $m \geq \tilde{\omega}(\frac{d^2}{\epsilon^2})$, $f(D)=\arg\min_{w\in \mathcal{W}}\hat{L}(w, D)$ and $d_\mathcal{M} (x, y) = \|x-y\|_2$, then Algorithm \ref{alg:1} is $(\epsilon, \delta)$-DP. Moreover, with high probability the output of $\mathcal{A}(D)$ ensures that \begin{equation}\label{eq:3} L_\mathcal{D}(\mathcal{A}(D))-L_\mathcal{D}(w^*)\leq \tilde{O}((\frac{\beta}{\alpha})^2 \frac{d^3}{n\epsilon^4}L_\mathcal{D} (w^*)), \end{equation} where the Big-$\tilde{O}, \Omega$ and small-$\omega$ notations omit the logarithmic terms. 
\end{theorem} \begin{remark} For DP-SCO with Lipschitz and strongly convex loss functions and bounded data, \citep{bassily2014private,wang2017differentially,bassily2020} showed that the upper bound of the excess population risk is $O(\frac{\sqrt{d}}{n\epsilon})$, and the lower bound is $\Omega(\frac{d}{n^2\epsilon^2})$ \footnote{\citep{bassily2014private} only shows the lower bound of the excess empirical risk. We can obtain the lower bound of the excess population risk by using the reduction from private ERM to private SCO \citep{bassily2020}.}. This suggests that the bound in Theorem \ref{thm:2} has some additional factors related to $d$ and $\frac{1}{\epsilon}$. We note that the upper bound in Theorem \ref{thm:2} has a multiplicative term of $L_\mathcal{D} (w^*)$. This means that when $L_\mathcal{D} (w^*)$ is small, our bound is better. For example, when $L_\mathcal{D} (w^*)=0$, our algorithm recovers $w^*$ exactly and achieves an excess risk of $0$. Note that no previous work on DP-ERM or DP-SCO has an error bound multiplicative in $L_\mathcal{D} (w^*)$. \end{remark} \section{Gradient descent based methods} There are several issues with the sample-aggregation based method presented in the last section. First, the function $f(D)$ in Theorem \ref{thm:2} needs to solve the optimization problem exactly, which can be quite inefficient in practice. Second, previous empirical evidence suggests that sample-aggregation based methods often suffer from poor utility in practice \citep{su2016differentially,wang2015differentially}. Third, Theorem \ref{thm:2} needs to assume strong convexity of the empirical risk, and it is unclear whether it can be extended to the general convex case. Finally, from Eq.(\ref{eq:3}) we can see that when $L_\mathcal{D}(w^*)=\Theta(1)$, the excess population risk is quite large compared to the ones in \citep{bassily2014private}. Thus, an immediate question is whether we can further lower the upper bound. 
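For intuition, the core loop of the subsample-and-aggregate idea behind Algorithm \ref{alg:1} can be sketched as follows. This is a rough, non-private illustration only: the smooth-sensitivity-based noise calibration is omitted, and a simple median is used in place of the nearest-neighbor-based aggregation of Algorithm \ref{alg:1}.

```python
import random
import statistics

def subsample_and_aggregate(data, m, f):
    """Split the data into m disjoint subsets, run the (non-private)
    estimator f on each, and aggregate the m outputs.

    The real mechanism then adds Gaussian noise scaled by a smooth
    upper bound of the local sensitivity; that step is not shown."""
    data = list(data)
    random.shuffle(data)                 # a stand-in for the random subsampling
    k = len(data) // m
    outputs = [f(data[i * k:(i + 1) * k]) for i in range(m)]
    # Median as a simple robust aggregator of the subset estimates.
    return statistics.median(outputs)
```

Because each data point influences only one subset, changing one point perturbs only one of the $m$ intermediate outputs, which is what makes the aggregation step amenable to local-sensitivity arguments.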
To answer this question and resolve the above issues, we propose in this section two DP algorithms based on the Gradient Descent method under different assumptions. Recently, \cite{bun2019average} studied the problem of estimating the mean of a $1$-dimensional heavy-tailed distribution and proposed algorithms based on the idea of truncating the empirical mean and the local sensitivity. Motivated by this DP algorithm's ability to handle heavy-tailed data, we develop a new method by borrowing ideas from \citep{bun2019average} and from robust gradient descent. Our method builds on the following theorem, which uses the Arsinh-Normal mechanism (see Algorithm \ref{alg:2} and Prop.~5 in \citep{bun2019average}). \begin{theorem}[Theorem 7 in \citep{bun2019average}]\label{thm:3} Let $0<\epsilon, \delta\leq 1$ be two constants and $n$ be some integer $\geq O(\log(\frac{n(b-a)/\sigma}{\epsilon}))$. Then, there exists a $\frac{1}{2}\epsilon^2$-zero-concentrated differentially private (zCDP) (see Appendix for the definition of zCDP) algorithm (Algorithm \ref{alg:2}) $M:\mathbb{R}^n \mapsto \mathbb{R}$ such that the following holds: Let $\mathcal{D}$ be a distribution with mean $\mu \in [a, b]$, where $a,b$ are given constants, and unknown variance $\sigma^2$. Then, \begin{equation*} \mathbb{E}_{X\sim \mathcal{D}^n, Z}[(M(X)-\mu)^2]\leq O(\frac{\sigma^2\log n}{n\epsilon^2}). \end{equation*} \end{theorem} The key idea of our algorithm is that, in each iteration, after getting $w^{t-1}$, we use the mechanism in Theorem \ref{thm:3} on each coordinate of $\nabla \ell(w^{t-1}, x_i)$. See Algorithm \ref{alg:3} for details. \begin{algorithm}[!h] \caption{Mechanism $\mathcal{M}$ in \citep{bun2019average}} \label{alg:2} $\mathbf{Input}$: $D=\{x_i\}_{i=1}^n\subset \mathbb{R}, \epsilon, a, b.$ \begin{algorithmic}[1] \STATE Let $t=\frac{\epsilon^2}{16}$ and $s=\frac{\epsilon}{4}$.
Sort $\{x_i\}_{i=1}^n$ in ascending order as $x_{(1)}\leq x_{(2)}\leq \cdots \leq x_{(n)}$. Calculate the upper bound of the smooth sensitivity for the trimming and truncating step: \begin{equation*} S^{t}_{[\text{trim}_m(\cdot)]_{[a, b]}}(D)= \max\{ \frac{x_{(n)}-x_{(1)}}{n-2m}, e^{-mt}(b-a)\}, \end{equation*} where $m=O(1)\leq \frac{n}{2}$ is a constant. \STATE Do the average trimming and truncating step: \begin{equation*} [\text{Trim}_{m}(D)]_{[a,b]}=[ \frac{x_{(m+1)}+\cdots+x_{(n-m)}}{n-2m}]_{[a,b]}, \end{equation*} where $[x]_{[a,b]}= x$ if $a\leq x \leq b$, $[x]_{[a,b]}=a$ if $x< a$, and $[x]_{[a,b]}=b$ otherwise. \STATE Output $[\text{Trim}_{m}(D)]_{[a,b]}+ \frac{1}{s} S^{t}_{[\text{trim}_m(\cdot)]_{[a, b]}}(D)\cdot Z$, where $Z=\text{sinh}(Y)=\frac{e^Y-e^{-Y}}{2}$ and $Y$ is a standard Gaussian random variable. \end{algorithmic} \end{algorithm} \begin{algorithm}[!h] \caption{Heavy-tailed DP-SCO with known mean} \label{alg:3} $\mathbf{Input}$: $D=\{x_i\}_{i=1}^n\subset \mathbb{R}^d$, privacy parameters $\epsilon, \delta$; loss function $\ell(\cdot, \cdot)$, initial parameter $w^0$, $a, b$ which satisfy Assumption \ref{ass:3}, and the number of iterations $T$ (to be specified later). \begin{algorithmic}[1] \STATE Let $\tilde{\epsilon}= \sqrt{2\log \frac{1}{\delta}+2\epsilon}-\sqrt{2\log \frac{1}{\delta}}$. \FOR {$t=1, 2, \cdots, T$} \STATE For each $j\in [d]$, calculate $D_{t-1, j}(w^{t-1})= \{\nabla_j \ell(w^{t-1}, x_i)\}_{i=1}^n$. \STATE Run Algorithm \ref{alg:2} for each $D_{t-1, j}$ and denote the output by $\tilde{\nabla}_{t-1, j}(w^{t-1})=\mathcal{M}(D_{t-1,j}(w^{t-1}), \frac{\tilde{\epsilon}}{\sqrt{d T}}, a, b)$. Denote $$\nabla \tilde{L}(w^{t-1}, D)= (\tilde{\nabla}_{t-1, 1}(w^{t-1}), \cdots, \tilde{\nabla}_{t-1, d}(w^{t-1})).$$ \STATE Update $w^t=\mathcal{P}_\mathcal{W} (w^{t-1}- \eta_{t-1}\nabla \tilde{L}(w^{t-1}, D))$, where $\eta_{t-1}$ is some step size and $\mathcal{P}_\mathcal{W}$ is the projection operator.
\ENDFOR \end{algorithmic} \end{algorithm} By the composition theorem and the relationship between zCDP and $(\epsilon, \delta)$-DP \citep{bun2016concentrated}, we have the DP guarantee. \begin{theorem}\label{thm:4} For any $0<\epsilon, \delta\leq 1$, Algorithm \ref{alg:3} is $(\epsilon, \delta)$-differentially private. \end{theorem} To show the \textit{expected} excess population risk of Algorithm \ref{alg:3}, we cannot use the upper bound in Theorem \ref{thm:3} directly for the following reasons. First, the upper bound is for the expectation w.r.t. both $X$ and $Z$, while the \textit{expected} excess population risk is taken only over the randomness of the algorithm, not the data. Thus, we need to obtain an upper bound for $\mathbb{E}_{ Z}[(M(X)-\mu)^2]$ (with high probability w.r.t. $X$). Second, to get an upper bound, it is sufficient to analyze the term $\|\nabla \tilde{L}(w^{t-1}, D)-\nabla L_\mathcal{D}(w^{t-1})\|_2$ in each iteration. However, since the parameter $w^{t-1}$ at any step depends on the random draw of the dataset $\{x_i\}_{i=1}^n$, upper bounds on the estimation error need to be uniform in $w\in \mathcal{W}$ in order to capture all contingencies. To resolve these two issues, we use the same technique as in \citep{chen2017distributed,vershynin2010introduction} (under Assumption \ref{ass:3}) to obtain the following lemma. \begin{lemma}\label{lemma:4} Under Assumption \ref{ass:3}, with probability at least $1-\frac{2dn}{(1+n\hat{\beta}\Delta)^d}$ the following holds for all $w\in \mathcal{W}$, \begin{equation}\label{eq:4} \mathbb{E}_{Z}\| \nabla \tilde{L}(w, D)- \nabla L_\mathcal{D}(w)\|_2\leq O( \frac{\tau d\sqrt{T\log n}}{\sqrt{n}\tilde{\epsilon}}), \end{equation} where $\hat{\beta}=\sqrt{\beta_1^2+\cdots+\beta_d^2}$, the expectation is w.r.t. the random variables $\{Z_i\}_{i=1}^d$ and the Big-$O$ notation omits other factors. \end{lemma} Next, we show the expected excess population risk for strongly convex loss functions.
\begin{theorem}[Strongly-convex case]\label{thm:5} Under Assumptions \ref{ass:1} and \ref{ass:3}, if the population risk is $\alpha$-strongly convex and $T$ and $\eta$ are set to be $T=O(\frac{\beta}{\alpha}\log n)$ and $\eta=\frac{1}{\beta}$, respectively, in Algorithm \ref{alg:3}, then with probability at least $1-\Omega(\frac{\beta}{\alpha}\frac{2dn\log n}{(1+n\hat{\beta}\Delta)^d})$ the output satisfies the following for all $D\sim \mathcal{D}^n$, \begin{equation*}\label{eq:5} \mathbb{E}[ L_\mathcal{D}(w^T)]-L_\mathcal{D} (w^*)\leq O(\frac{\Delta^2\beta^2\tau^2 d^2 \log^2 n \log \frac{1}{\delta}}{\alpha^3 n\epsilon^2 }). \end{equation*} \end{theorem} Compared with the bound in Theorem \ref{thm:2}, we can see that the bound in Theorem \ref{thm:5} improves by a factor of $\tilde{O}(\frac{d}{\epsilon^2})$ (if we omit other terms). However, it requires stronger assumptions on the distribution and the loss functions. Specifically, in Assumption \ref{ass:3} we need to assume the sub-exponential property, {\em i.e.,} the moments of $\nabla_j\ell(w, x)$ exist for every order. Also, we need to assume that $\nabla_j\ell(w, x)$ is Lipschitz and the range of its mean is known. These assumptions are quite strong, compared to those used in the literature of learning with heavy-tailed data, such as \citep{holland2017efficient,brownlees2015empirical,hsu2016loss,minsker2015geometric}. To improve the above result, we consider the following. First, we would like to relax those assumptions in the theorem. Second, in the problem of ERM with heavy-tailed data, one expects an excess population risk bound that holds \textit{with high probability} rather than only in \textit{expectation} \citep{brownlees2015empirical}. However, it is unclear whether Algorithm \ref{alg:3} can achieve a high probability bound.
This is due to the fact that the noise added in each iteration is a combination of log-normal distributions, which is not sub-exponential, making tail bounds hard to obtain. Third, Algorithm \ref{alg:3} depends on the local sensitivity and thus cannot be extended to distributed settings or the local differential privacy model. Finally, Algorithm \ref{alg:3} has poor utility and is unstable in practice due to the noise added in each iteration (see Section 6 for details), which makes it impractical. To resolve all these issues while (approximately) keeping the same upper bound, we propose a new algorithm that is simply based on the Gaussian mechanism. In the following we will study the problem under Assumptions 1 and \ref{ass:4}. Note that compared with Assumption \ref{ass:3}, we only need to assume that the second-order moment of $\nabla_j\ell(w, x)$ exists for all $w\in\mathcal{W}$ and $j\in [d]$ and that its upper bound is known. Our method is motivated by the robust mean estimator given in \citep{holland2019a}. To be self-contained, we first review their estimator. We consider a $1$-dimensional random variable $x$ and assume that $x_1, x_2, \cdots, x_n$ are i.i.d. samples of $x$. The estimator consists of the following steps: \paragraph{Scaling and Truncation} For each sample $x_i$, we first re-scale it by dividing it by $s$ (which will be specified later). Then, we apply a soft truncation function $\phi$ to the re-scaled value. Finally, we scale the truncated mean back to the original scale. That is, \begin{equation}\label{eq:6} \frac{s}{n}\sum_{i=1}^n \phi(\frac{x_i}{s})\approx \mathbb{E}x. \end{equation} Here, we use the function given in \citep{catoni2017dimension}, \begin{equation}\label{eq:7} \phi(x)= \begin{cases} x-\frac{x^3}{6}, & -\sqrt{2}\leq x\leq \sqrt{2} \\ \frac{2\sqrt{2}}{3}, & x>\sqrt{2} \\ -\frac{2\sqrt{2}}{3}, & x<-\sqrt{2}.
\end{cases} \end{equation} Note that a key property of $\phi$ is that it is bounded, that is, $|\phi(x)|\leq \frac{2\sqrt{2}}{3}$. \paragraph{Noise Multiplication} Let $\eta_1, \eta_2, \cdots, \eta_n$ be random noise generated from a common distribution $\eta\sim \chi$ with $\mathbb{E}\eta =0$. We multiply each data point $x_i$ by a factor of $1+\eta_i$, and then perform the scaling and truncation step on the term $x_i(1+\eta_i)$. That is, \begin{equation}\label{eq:8} \tilde{x}(\eta) =\frac{s}{n}\sum_{i=1}^n \phi(\frac{x_i+\eta_i x_i}{s}). \end{equation} \paragraph{Noise Smoothing} In this final step, we smooth the multiplicative noise by taking the expectation w.r.t. the noise distribution. That is, \begin{equation}\label{eq:9} \hat{x}=\mathbb{E} \tilde{x}(\eta) = \frac{s}{n}\sum_{i=1}^n \int \phi(\frac{x_i+\eta_i x_i}{s})d \chi(\eta_i). \end{equation} Computing the explicit form of each integral in (\ref{eq:9}) depends on the function $\phi(\cdot)$ and the distribution $\chi$. Fortunately, \cite{catoni2017dimension} showed that when $\phi$ is in (\ref{eq:7}) and $\chi\sim \mathcal{N}(0, \frac{1}{\beta})$ (where $\beta$ will be specified later), we have for any $a, b$ \begin{equation}\label{eq:10} \mathbb{E}_{\eta}\phi(a+b\sqrt{\beta}\eta)=a(1-\frac{b^2}{2})-\frac{a^3}{6}+C(a,b), \end{equation} where $C(a, b)$ is a correction term which is easy to implement; its explicit form is given in the Appendix. \begin{algorithm*}[!ht] \caption{Heavy-tailed DP-SCO with known variance} \label{alg:4} $\mathbf{Input}$: $D=\{x_i\}_{i=1}^n\subset \mathbb{R}^d$, privacy parameters $\epsilon, \delta$, loss function $\ell(\cdot, \cdot)$, initial parameter $w^0$, $v$ which satisfies Assumption \ref{ass:4}, the number of iterations $T$ (to be specified later), and failure probability $\delta'$.
\begin{algorithmic}[1] \STATE Let $\tilde{\epsilon}= (\sqrt{\log \frac{1}{\delta}+\epsilon}-\sqrt{\log \frac{1}{\delta}})^2$, $s=\sqrt{\frac{nv}{2\log \frac{1}{\delta'}}}$, $\beta=\log \frac{1}{\delta'}$. \FOR {$t=1, 2, \cdots, T$} \STATE For each $j\in [d]$, calculate the robust gradient by (\ref{eq:8})-(\ref{eq:10}), that is \begin{multline} g_j^{t-1}(w^{t-1})= \frac{1}{n}\sum_{i=1}^n \left(\nabla_j\ell(w^{t-1}, x_i)\big(1-\frac{\nabla^2_j\ell(w^{t-1}, x_i)}{2s^2\beta}\big)- \frac{\nabla^3_j\ell(w^{t-1}, x_i)}{6s^2}\right)\\+\frac{s}{n}\sum_{i=1}^nC\left(\frac{\nabla_j\ell(w^{t-1}, x_i)}{s}, \frac{|\nabla_j\ell(w^{t-1}, x_i)|}{s\sqrt{\beta}}\right)+ Z_{j}^{t-1}, \end{multline} where $Z_{j}^{t-1}\sim \mathcal{N}(0, \sigma^2)$ with $\sigma^2= \frac{8vdT}{9\log \frac{1}{\delta'}n\tilde{\epsilon}}$. \STATE Let the vector $g^{t-1}(w^{t-1})\in \mathbb{R}^d$ denote $g^{t-1}(w^{t-1})=(g_1^{t-1}(w^{t-1}), g_2^{t-1}(w^{t-1}), \cdots, g_d^{t-1}(w^{t-1}))$. \STATE Update $w^{t}=\mathcal{P}_{\mathcal{W}}(w^{t-1}-\eta_{t-1}g^{t-1}). $ \ENDFOR \end{algorithmic} \end{algorithm*} \cite{holland2019a} showed the following estimation error for the mean estimator $\hat{x}$ after these three steps. \begin{lemma}[Lemma 5 in \citep{holland2019a}] \label{lemma:5} Let $x_1, x_2, \cdots, x_n$ be i.i.d. samples from distribution $x\sim \mu$. Assume that there is some known upper bound on the second-order moment, {\em i.e.,} $\mathbb{E}_\mu x^2\leq v$. For a given failure probability $\delta'$, if we set $\beta= 2\log \frac{1}{\delta'}$ and $s=\sqrt{\frac{nv}{2\log\frac{1}{\delta'}}}$, then with probability at least $1-\delta'$ the following holds \begin{equation} |\hat{x}-\mathbb{E}x|\leq O(\sqrt{\frac{v\log \frac{1}{\delta'}}{n}}). \end{equation} \end{lemma} To obtain an $(\epsilon,\delta)$-DP estimator, the key observation is that the bounded function $\phi$ in (\ref{eq:7}) also makes the integral form of (\ref{eq:10}) bounded by $\frac{2\sqrt{2}}{3}$.
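As a quick numerical sanity check of this boundedness, the following minimal sketch implements $\phi$ from Eq.~(\ref{eq:7}) directly and verifies the bound on a grid (the grid range is an arbitrary illustrative choice):

```python
import numpy as np

SQRT2 = np.sqrt(2.0)
BOUND = 2.0 * SQRT2 / 3.0  # the constant 2*sqrt(2)/3 from Eq. (7)

def phi(x):
    """Soft truncation function of Eq. (7)."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= SQRT2, x - x**3 / 6.0, np.sign(x) * BOUND)

# On [-sqrt(2), sqrt(2)] the cubic x - x^3/6 is monotone and attains
# +-2*sqrt(2)/3 exactly at the endpoints, so phi is continuous and bounded.
grid = np.linspace(-100.0, 100.0, 200_001)
assert np.all(np.abs(phi(grid)) <= BOUND + 1e-12)
assert abs(phi(SQRT2) - BOUND) < 1e-12 and abs(phi(-SQRT2) + BOUND) < 1e-12
```

This boundedness is exactly what keeps the sensitivity of the smoothed mean finite in the next step.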
Thus, we know that the $\ell_2$-norm sensitivity is $\frac{s}{n}\frac{4\sqrt{2}}{3}$. Hence, the query \begin{equation}\label{eq:14} \mathcal{A}(D)=\hat{x}+ Z, Z\sim \mathcal{N}(0, \sigma^2), \sigma^2=O(\frac{s^2\log \frac{1}{\delta}}{\epsilon^2n^2}) \end{equation} will be $(\epsilon, \delta)$-DP, which leads to the following theorem. \begin{theorem}\label{theorem:6} Under the assumptions in Lemma \ref{lemma:5}, with probability at least $1-\delta'$ the following holds \begin{equation} |\mathcal{A}(D)-\mathbb{E}(x)|\leq O(\sqrt{\frac{v\log \frac{1}{\delta}\log\frac{1}{\delta'}}{n\epsilon^2}}). \end{equation} \end{theorem} Comparing with Theorem \ref{thm:3}, we can see that the upper bound in Theorem \ref{theorem:6} is in the form of `with high probability' (after transferring zCDP to $(\epsilon, \delta)$-DP \citep{bun2016concentrated}). Moreover, we improve by a factor of $O(\log n)$ in the error bound. Inspired by Theorem \ref{theorem:6} and Algorithm \ref{alg:3}, we propose a new method (Algorithm \ref{alg:4}), which uses our private mean estimator (\ref{eq:14}) on each coordinate of the gradient in each iteration. The following theorem shows the error bound when the loss function is strongly convex. \begin{theorem}\label{thm:7} For any $0<\epsilon, \delta<1$, Algorithm \ref{alg:4} is $(\epsilon, \delta)$-DP. Under Assumptions \ref{ass:1} and \ref{ass:4}, if the population risk is $\alpha$-strongly convex and $\eta_t$ and $T$ in Algorithm \ref{alg:4} are set to be $\eta_t=\frac{1}{\beta}$ and $T=O(\frac{\beta}{\alpha}\log n)$, respectively, then for any $\delta'>0$, with probability at least $1-2\delta' T$ the output $w^{T}$ satisfies \begin{equation*} L_\mathcal{D}(w^T)-L_\mathcal{D} (w^*)\leq O(\frac{v\Delta^2\beta^4 d^2 \log^2 n \log \frac{1}{\delta}\log \frac{1}{\delta'}}{\alpha^3 n\epsilon^2}). 
\end{equation*} \end{theorem} Comparing Theorems \ref{thm:7} and \ref{thm:5}, we can see that if we omit other terms, the bounds are asymptotically the same, while Theorem \ref{thm:7} needs fewer assumptions. With the high probability guarantee on the error in Theorem \ref{theorem:6}, we can actually get an upper bound for general convex loss functions. For this general convex case, we need the following mild technical assumption on the constraint set $\mathcal{W}$. \begin{assumption}\label{ass:5} The constraint set $\mathcal{W}$ contains the following $\ell_2$-ball centered at $w^*$: $\{w: \|w-w^*\|_2\leq 2\|w^0-w^*\|_2\}$. \end{assumption} \begin{theorem}[Convex case]\label{thm:8} Under Assumptions \ref{ass:1}, \ref{ass:4} and \ref{ass:5}, if we take $\eta=\frac{1}{\beta}$ and $T=\tilde{O}\left(\frac{\|w^0-w^*\|_2\sqrt{n}\sqrt{\tilde{\epsilon}}}{d}\right)^\frac{2}{3}$ in Algorithm \ref{alg:4}, then for any given failure probability $\delta'$, with probability at least $1-T\delta'$ the following holds \begin{equation} L_\mathcal{D}(w^T)-L_\mathcal{D} (w^*)\leq \tilde{O}(\frac{\log^\frac{1}{3} \frac{1}{\delta}\sqrt{\log \frac{1}{\delta' }}d^\frac{2}{3}}{(n\epsilon^2)^\frac{1}{3}}) \end{equation} when $n\geq \tilde{\Omega}(\frac{d^2}{\epsilon^2})$, where the Big-$\tilde{O}$ notation omits other logarithmic factors and the term of $v, \beta$. \end{theorem} \section{Experiments} \paragraph{Baseline Methods} As mentioned earlier, sample-aggregation based methods often have poor practical performance. Thus, we will not conduct experiments on Algorithm \ref{alg:1}. Moreover, as this is the first paper studying DP-SCO with heavy-tailed data, and almost all previous methods on SCO with heavy-tailed data that have theoretical guarantees fail to provide DP guarantees, we do not compare our methods with them, and instead focus on comparing the performance of Algorithm \ref{alg:3} and Algorithm \ref{alg:4}.
To show the effectiveness of our methods, we use the non-private heavy-tailed SCO method in \citep{holland2019a}, denoted by (stochastic) RGD in the following, as our baseline method. \paragraph{Experimental Settings} For synthetic data, we consider the linear and binary logistic models. Specifically, we generate the synthetic datasets in the following way. Each dataset has a size of $1\times 10^5$ and each data point $(x_i, y_i)$ is generated by the models $y_i = \langle \omega^*, x_i \rangle + e_i$ and $y_i = \text{sign}[\frac{1}{1+e^{\langle \omega^*, x_i \rangle + e_i}}-\frac{1}{2}]$, respectively, where $x_i \in \mathbb{R}^{10}$ and $y_i \in \mathbb{R}$. In the first model, the zero-mean noise $e_i$ is generated as follows. We first generate a noise $\Delta_i$ from the $(\mu, \sigma)$ log-normal distribution, {\em i.e.,} $\mathbb{P}(\Delta_i = x ) = \frac{1}{x\sigma\sqrt{2\pi}} e^{-\frac{(\ln x -\mu)^2}{2\sigma^2}}$, and then let $e_i = \Delta_i - \mathbb{E}[\Delta_i]$. For the second model, we first generate a noise $\Delta_i$ from the $(\mu, \sigma)$ log-logistic distribution, {\em i.e.,} $\mathbb{P}(\Delta_i = x ) = \frac{e^z}{\sigma x(1+e^z)^2}$, where $x>0$ and $z = \frac{\log(x)-\mu }{\sigma}$. Then, we let $e_i = \Delta_i - \mathbb{E}[\Delta_i]$. Accordingly, we implement Algorithm \ref{alg:3} and Algorithm \ref{alg:4}, together with RGD, on the ridge and logistic regressions. For real-world data, we use the Adult dataset from the UCI Repository \citep {Dua:2019}. We aim to predict whether the annual income of an individual is above \$50,000. We select 30,000 samples, of which 28,000 are used as the training set and the rest for testing. For the privacy parameters, we choose $\epsilon=\{0.1, 0.5, 1\}$ and $\delta=O(\frac{1}{n})$. See the Appendix for the selection of other parameters. For Algorithm \ref{alg:3}, the strength of prior knowledge is modeled by $\kappa=b-a$.
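For concreteness, the linear-model dataset described above can be generated as follows (a minimal numpy sketch; the values of $\mu$ and $\sigma$ and the standard-normal feature distribution are illustrative assumptions, since the concrete parameter choices are deferred to the Appendix):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_linear_data(n=100_000, d=10, mu=0.0, sigma=1.25):
    """y_i = <w*, x_i> + e_i with centered log-normal noise.

    e_i = Delta_i - E[Delta_i] with Delta_i ~ LogNormal(mu, sigma);
    mu, sigma and the feature distribution are illustrative choices.
    """
    w_star = rng.standard_normal(d)
    X = rng.standard_normal((n, d))
    delta = rng.lognormal(mean=mu, sigma=sigma, size=n)
    e = delta - np.exp(mu + sigma**2 / 2.0)  # E[Delta] = exp(mu + sigma^2/2)
    y = X @ w_star + e
    return X, y, w_star

X, y, w_star = make_linear_data()
residual = y - X @ w_star
# The centered noise has (near-)zero empirical mean but a heavy right tail:
# its maximum is many standard errors away from the mean.
```

With these parameters the noise variance is finite but the distribution is strongly skewed, mimicking the heavy-tailed setting of the experiments.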
\paragraph{Experimental Results} Figures \ref{fig:1} and \ref{fig:2} show the results of ridge and logistic regressions on synthetic and real datasets w.r.t.\ the number of iterations, respectively. Since there is no ground truth in the real dataset, we use the empirical risk on test data as the measurement. To test the scalability of Algorithm \ref{alg:4} on large-scale data, we also conduct experiments on stochastic versions of Algorithm \ref{alg:4} and RGD with minibatch size 1000. We can see that the performance of Algorithm \ref{alg:3} bears a larger variation compared to Algorithm \ref{alg:4}, since heavy-tailed noise must be added to match the smooth sensitivity. Moreover, the performance of Algorithm \ref{alg:3} is sensitive to the parameter $\kappa$. Thus, these results show that Algorithm \ref{alg:3} has poor performance, while the results of Algorithm \ref{alg:4} are comparable to the non-private ones. In Figures \ref{fig:3} and \ref{fig:4} we test the estimation error w.r.t.\ the dimensionality $d$ and the sample size $n$, respectively. From these results we can see that when $n$ increases or $d$ decreases, the estimation error decreases. Also, with fixed $n$ and $d$, we can see that the estimation error decreases as $\epsilon$ becomes larger. Thus, all these results confirm our previous theoretical analysis.
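The larger variation of Algorithm \ref{alg:3} observed here traces back to the sinh-transformed Gaussian noise of Algorithm \ref{alg:2}. A minimal per-coordinate sketch of that mechanism follows (the trimming constant $m$ and the example inputs are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def trimmed_mean_mechanism(x, eps, a, b, m=2):
    """Sketch of Algorithm 2: trim, truncate, add sinh-Gaussian noise.

    Follows the steps of Algorithm 2 with t = eps^2/16 and s = eps/4;
    m = O(1) is an illustrative trimming constant.
    """
    n = len(x)
    t, s = eps**2 / 16.0, eps / 4.0
    xs = np.sort(x)
    # Upper bound on the smooth sensitivity of the trim-and-truncate statistic:
    S = max((xs[-1] - xs[0]) / (n - 2 * m), np.exp(-m * t) * (b - a))
    trimmed = xs[m:n - m].mean()          # average trimming
    truncated = min(max(trimmed, a), b)   # truncation to [a, b]
    Z = np.sinh(rng.standard_normal())    # heavy-tailed sinh of a Gaussian
    return truncated + S * Z / s

out = trimmed_mean_mechanism(rng.lognormal(size=1000), eps=0.5, a=0.0, b=5.0)
```

Because $Z=\sinh(Y)$ has log-normal-like tails, individual outputs can deviate strongly from the trimmed mean, consistent with the large variation seen for Algorithm \ref{alg:3}.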
\begin{figure*}[!htbp] \centering \begin{subfigure}[b]{.24\textwidth} \includegraphics[width=\textwidth,height=0.15\textheight]{fig/rid_gd_syn.eps} \caption{$\epsilon=1$ \label{fig1:a}} \end{subfigure} ~ \begin{subfigure}[b]{.24\textwidth} \includegraphics[width=\textwidth,height=0.15\textheight]{fig/rid_sgd_syn.eps} \caption{$\epsilon=0.5$ \label{fig1:b}} \end{subfigure} \begin{subfigure}[b]{.24\textwidth} \includegraphics[width=\textwidth,height=0.15\textheight]{fig/log_gd_syn.eps} \caption{$\epsilon=1$ \label{fig1:c}} \end{subfigure} ~ \begin{subfigure}[b]{.24\textwidth} \includegraphics[width=\textwidth,height=0.15\textheight]{fig/log_sgd_syn.eps} \caption{$\epsilon=0.5$ \label{fig1:d}} \end{subfigure} \caption{Experiments on synthetic datasets. Figures \ref{fig1:a} and \ref{fig1:b} are for ridge regressions over synthetic data with Lognormal noises. Figures \ref{fig1:c} and \ref{fig1:d} are for logistic regressions over synthetic data with Loglogistic noises. \label{fig:1} } \end{figure*} \begin{figure*}[!htbp] \centering \begin{subfigure}[b]{.24\textwidth} \includegraphics[width=\textwidth,height=0.15\textheight]{fig/rid_gd_real.eps} \caption{$\epsilon=1$ \label{fig2:a}} \end{subfigure} ~ \begin{subfigure}[b]{.24\textwidth} \includegraphics[width=\textwidth,height=0.15\textheight]{fig/rid_sgd_real.eps} \caption{$\epsilon=0.5$ \label{fig2:b}} \end{subfigure} \begin{subfigure}[b]{.24\textwidth} \includegraphics[width=\textwidth,height=0.15\textheight]{fig/log_gd_real.eps} \caption{$\epsilon=1$ \label{fig2:c}} \end{subfigure} ~ \begin{subfigure}[b]{.24\textwidth} \includegraphics[width=\textwidth,height=0.15\textheight]{fig/log_sgd_real.eps} \caption{$\epsilon=0.5$ \label{fig2:d}} \end{subfigure} \caption{Experiments on UCI Adult dataset. Figures \ref{fig2:a} and \ref{fig2:b} are for ridge regressions. Figures \ref{fig2:c} and \ref{fig2:d} are for logistic regressions. 
\label{fig:2} } \end{figure*} \begin{figure*}[!htbp] \centering \begin{subfigure}[b]{.24\textwidth} \includegraphics[width=\textwidth,height=0.15\textheight]{fig/fig_dim_rid_1.eps} \caption{$\epsilon=0.5$ \label{fig3:a}} \end{subfigure} ~ \begin{subfigure}[b]{.24\textwidth} \includegraphics[width=\textwidth,height=0.15\textheight]{fig/fig_dim_rid_2.eps} \caption{$\epsilon=0.1$ \label{fig3:b}} \end{subfigure} \begin{subfigure}[b]{.24\textwidth} \includegraphics[width=\textwidth,height=0.15\textheight]{fig/fig_dim_log_1.eps} \caption{$\epsilon=0.5$ \label{fig3:c}} \end{subfigure} \begin{subfigure}[b]{.24\textwidth} \includegraphics[width=\textwidth,height=0.15\textheight]{fig/fig_dim_log_2.eps} \caption{$\epsilon=0.1$ \label{fig3:d}} \end{subfigure} \caption{Experiments for the impact of dimensionality. Figure \ref{fig3:a} and \ref{fig3:b} are for ridge regressions. Figure \ref{fig3:c} and \ref{fig3:d} are for logistic regressions. \label{fig:3} } \end{figure*} \begin{figure*}[!htbp] \centering \begin{subfigure}[b]{.24\textwidth} \includegraphics[width=\textwidth,height=0.15\textheight]{fig/fig_size_rid_1.eps} \caption{$\epsilon=0.5$ \label{fig4:a}} \end{subfigure} ~ \begin{subfigure}[b]{.24\textwidth} \includegraphics[width=\textwidth,height=0.15\textheight]{fig/fig_size_rid_2.eps} \caption{$\epsilon=0.1$ \label{fig4:b}} \end{subfigure} \begin{subfigure}[b]{.24\textwidth} \includegraphics[width=\textwidth,height=0.15\textheight]{fig/fig_size_log_1.eps} \caption{$\epsilon=0.5$ \label{fig4:c}} \end{subfigure} \begin{subfigure}[b]{.24\textwidth} \includegraphics[width=\textwidth,height=0.15\textheight]{fig/fig_size_log_2.eps} \caption{$\epsilon=0.1$ \label{fig4:d}} \end{subfigure} \caption{Experiments for the impact of the size of the dataset. Figure \ref{fig4:a} and \ref{fig4:b} are for ridge regressions. Figure \ref{fig4:c} and \ref{fig4:d} are for logistic regressions. 
\label{fig:4}} \end{figure*} \section{Discussion} In this paper, we provide the first comprehensive study of DP-SCO with heavy-tailed data; to the best of our knowledge, this is the first work on this problem. Specifically, we give a systematic analysis of the problem and design the first efficient algorithms to solve it. In various settings, we bound the (expected) excess generalization risk in both additive and multiplicative manners. However, the problem is far from being closed. First, it is unclear whether the upper bounds of the excess population risk for strongly convex and general convex loss functions can be further improved. Second, the lower bound of the excess population risk for these two cases is unknown. Finally, it is an open problem to determine whether we can further relax the assumptions in our previous theorems. We leave these open problems for future research. \section*{Acknowledgements} Di Wang and Jinhui Xu were supported in part by the National Science Foundation (NSF) under Grant No. CCF-1716400 and IIS-1919492.
\section{Introduction} \label{Sec:Introduction} The motion of active colloidal particles in complex environments is a vivid topic of recent physics research \cite{bechinger_active_2016,reichhardt_depinning_2017,gompper_2020_2020}. In particular, if self-propelled particles are moving in a heterogeneous or random medium, there is a plethora of new effects created by disorder. Examples include trapping and clogging of particles \cite{chepizhko_diffusion_2013,reichhardt_clogging_2018,reichhardt_avalanche_2018}, destruction of flocks \cite{morin_distortion_2017}, the control of crowds \cite{pince_disorder-mediated_2016,koyama_separation_2020} and subdiffusive long-time dynamics \cite{chepizhko_diffusion_2013,bertrand_optimized_2018,dor_ramifications_2019,morin_diffusion_2017}. The random environment can be established by a porous medium \cite{grancic_active_2011,blagodatskaya_active_2013}, by fixed obstacle particles \cite{takagi_hydrodynamic_2014,lozano_active_2019,jin_fine_2019,mokhtari_collective_2017,alonso-matilla_transport_2019,brun-cosme-bruny_deflection_2020} or by optical fields (such as a speckle field \cite{volpe_brownian_2014,volpe_speckle_2014,bewerunge_experimental_2016,nunes_ordering_2020,pesce_step-by-step_2015,paoluzzi_run-and-tumble_2014,bianchi_active_2016}) which can create both random external potentials \cite{bewerunge_colloids_2016,bewerunge_time-_2016,hanes_brownian_2013,evers_colloids_2013,hanes_colloids_2012,stoop_clogging_2018,chaki_escape_2020} or a motility landscape \cite{lozano_phototaxis_2016,lozano_propagating_2019}. While the control of particle motion in a random environment is crucial for many applications such as steered drug delivery and minimally invasive surgery, the fundamental physics also needs to be understood within statistical mechanics. In particular, analytical solutions for simple model systems are important here to unravel the underlying principles.
A particularly successful model for self-propelled particles is that of active Brownian motion \cite{howse_self-motile_2007,ten_hagen_brownian_2011,lowen_inertial_2020} designed for colloidal microswimmers. Basically, the particle performs overdamped motion under the action of an internal effective drive directed along its orientation, which experiences Brownian fluctuations, establishing a persistent random walk of the particle. In this model, the mean-square-displacement (MSD) of the particle exhibits a crossover from ballistic behaviour governed by directed self-propulsion to final long-time diffusion with a diffusion coefficient that scales with the square of the self-propulsion velocity. The motion of self-propelled particles in various random environments has been studied by using computer simulations of active Brownian particles or related models \cite{chepizhko_diffusion_2013,chepizhko_active_2015,chepizhko_optimal_2013, schirmacher_anomalous_2015,chepizhko_ideal_2019,chepizhko_random_2020, reichhardt_active_2014,reichhardt_aspects_2014,kumar_symmetry_2011, kumar_flocking_2014,das_polar_2018,quint_swarming_2013,simon_brownian_2016, zhu_transport_2018,ai_flow_2019,sandor_dynamic_2017,zeitz_active_2017,jakuszeit_diffusion_2019}. Some experiments on active particles in disordered landscapes have also been performed with colloids \cite{volpe_microswimmers_2011,morin_distortion_2017,pince_disorder-mediated_2016,lozano_active_2019} and bacteria \cite{bhattacharjee_confinement_2019}. However, analytical results are sparse, even for a single active particle. In one spatial dimension, exact results have been obtained for a run-and-tumble particle \cite{dor_ramifications_2019}. In higher dimensions, analytical results are available for discrete lattice models \cite{bertrand_optimized_2018} and for a highly entangled slender self-propelled rod \cite{mandal_crowding-enhanced_2020,romanczuk_active_2012}.
Here we present analytical results for the off-lattice model of active Brownian motion in two dimensions by exploring the short-time behaviour of the mean-square-displacement. The self-propelled particle is experiencing a space-dependent landscape of quenched disorder \cite{bouchaud_anomalous_1990,duan_breakdown_2020} of an external force or the internal motility field. We calculate the averaged mean-square-displacement (MSD) of the particle for arbitrary disorder strength in a systematic short-time expansion. As a result, for overdamped particles, randomness in the external force field and the particle motility both contribute to the initial ballistic regime. Spatial correlations in the force and motility landscape contribute only to the cubic and higher order powers in time for the MSD. Finally, for inertial particles which are initially almost at rest, three successive regimes can occur where the scaling exponent of the MSD with time crosses over from an initial $\alpha=2$ to a transient $\alpha=3$ and a final $\alpha=4$. The latter superballistic regimes are traced back to the initial acceleration. We remark that similar superballistic exponents have been found for an active Brownian particle in linear shear flow \cite{ten_hagen_brownian_2011} and for animal motion \cite{tilles_random_2017}, but the physical origin is different in these cases. Our predictions are confirmed by computer simulations and are in principle verifiable in experiments on self-propelled colloids in random environments. As an aside, we also present results for a passive particle in a random force landscape.
Note that the short-time behaviour we consider is also briefly mentioned in \cite{bewerunge_colloids_2016,bewerunge_time-_2016,hanes_brownian_2013,evers_colloids_2013,hanes_colloids_2012,wilkinson_flooding_2020,Zunke_PhD_Thesis}, though these works usually focus on the long-time behaviour \cite{bewerunge_colloids_2016,bewerunge_time-_2016,hanes_brownian_2013,evers_colloids_2013,hanes_colloids_2012,Zunke_PhD_Thesis} or the mean first passage time \cite{wilkinson_flooding_2020} of such systems. The paper is organized as follows: in the next section we discuss the model of a single Brownian particle interacting with an external random landscape; in the subsequent one we move on to the case of a random motility field. In both cases we consider both an overdamped and an underdamped particle. Finally, in Sec. \ref{Sec:conclusions} we conclude with a summary of our results and possible continuations of our work. \section{Active particle in a disordered potential energy landscape} \label{Sec:potential} \subsection{Overdamped active Brownian motion} \label{Sub:OPot} We start by considering a single active Brownian particle moving in the two-dimensional plane. The dynamics is assumed to be overdamped, as relevant for micron-sized swimmers and self-propelled colloids at low Reynolds number. The position of the particle center is described by its trajectory ${\vec{r}}(t)=(x(t),y(t))$ and its orientation is given by a unit vector $\hat{u}(t)=(\cos \phi (t), \sin \phi (t) )$, where $\phi$ is the angle of the orientation vector with the $x$-axis and $t$ is the time.
The equations of motion of an overdamped active Brownian particle for the translational and rotational degrees of freedom are given by \begin{align} \label{1} \gamma\dot{\vec{r}}(t)&= \gamma v_0\hat{u}(t)+\vec{f}(t)+\vec{F}(\vec{r}(t)),\\ \label{2} \gamma_R\dot{\phi}(t)&= f_R(t), \end{align} where $\gamma$ and $\gamma_R$ are, respectively, the translational and rotational friction coefficients and $v_0$ is the self-propulsion velocity, which is directed along the orientation vector $\hat{u}(t)$. The terms $\vec{f}(t)$ and $f_R(t)$ represent Gaussian white noise forces and torques originating from solvent kicks, with \begin{align} \label{3} \langle \vec{f}(t) \rangle &= 0, \\ \label{4} \langle f_i(t)f_j(t') \rangle &= 2 k_B T\gamma \delta(t-t')\delta_{ij}= 2D \gamma^2\delta(t-t')\delta_{ij},\\ \label{5} \langle f_R(t) \rangle &= 0, \\ \label{6} \langle f_R(t)f_R(t') \rangle &= 2 k_B T_R\gamma_R \delta(t-t')= 2D_R \gamma_R^2\delta(t-t'). \end{align} Here $\langle \cdot \rangle$ is the thermal noise average, $k_BT$ and $k_BT_R$ are the effective translational and rotational thermal energies and $D$ and $D_R$ are the respective free diffusion constants: \begin{align} \label{7} D &=k_BT/\gamma,\\ \label{8} D_R &=k_BT_R/\gamma_R. \end{align} Importantly, the particle is exposed to an external force field $\vec{F}(\vec{r})$ representing the static quenched disorder. We assume that the external force is conservative, i.e.\ that it can be derived as a gradient from a random potential energy $V(\vec{r})$ such that \begin{equation} \label{9} \vec{F}(\vec{r})=-\vec{\nabla}V(\vec{r}) \end{equation} holds. For the scalar potential energy we choose a general decomposition into two-dimensional Fourier modes and assume that the amplitudes in front of these modes are Gaussian distributed and uncorrelated. 
In detail, the random potential $V(\vec{r})$ is expanded as \begin{align} \label{10}V(\vec{r})=-\sum_{i,j=0}^\infty\left(\epsilon_{ij}^{(1)}\cos(k_ix+k_jy)+\epsilon_{ij}^{(2)}\sin(k_ix+k_jy)\right), \end{align} where $k_n=\frac{2\pi}{L}n$, with $L$ denoting a large periodicity length. The amplitudes $\epsilon_{ij}^{(\alpha)}$ are Gaussian random numbers which fulfil \begin{align} \label{11}\overline{\epsilon_{ij}^{(\alpha)}}=0 \text{~~~and~~~} \overline{\epsilon_{ij}^{(\alpha)}\epsilon_{mn}^{(\beta)}}=\overline{\epsilon_{ij}^{(\alpha)2}}\delta_{im}\delta_{jn}\delta^{\alpha \beta}, \end{align} where $\overline{(\cdot)}$ denotes the disorder average. We further assume the potential to be isotropic, meaning that the variances $\overline{\epsilon_{ij}^{(\alpha)2}}$ depend only on $i^2+j^2$.\\ Now we compute the mean-square-displacement (MSD) $\Delta(t)$ of the particle which at time $t=0$ is at position ${\vec r}_0$ with orientational angle $\phi_0$. In this paper, we consider a disorder-averaged MSD; more precisely, it is a {\it triple\/} average over i) the thermal noise $\langle \cdot \rangle$, ii) the disorder $\overline{(\cdot)}$, and iii) the initial conditions $\ll \cdot \gg$. Due to translational invariance and self-propulsion isotropy, the latter are assumed to be homogeneously distributed in space and in the orientational angle. Consequently, \begin{align} \label{12} \Delta(t)& := \ll \langle\overline{(\vec{r}(t)-\vec{r}_0)^2}\rangle \gg. \end{align} In order to simplify the notation, the average over both disorder and initial conditions for the various components and derivatives of the forces will be abbreviated by the symbol $\widehat{(\cdot)}$, for example $\ll \overline{F^2_x(\vec{r}_0)} \gg\equiv \widehat{F^2_x}$. In Appendix A, we detail the systematic analytical short-time expansion of the MSD in powers of time $t$. 
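To make the construction concrete, the following NumPy sketch (illustrative code, not the authors' implementation) generates a single-mode realization of the random potential of Eq.(\ref{10}) and checks the conservative force of Eq.(\ref{9}) against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 10.0
k = 2 * np.pi / L          # single mode: only k_1 = 2*pi/L contributes

# Gaussian random amplitudes eps^(1), eps^(2) for the single mode i = j = 1
eps1, eps2 = rng.normal(0.0, 1.0, size=2)

def V(x, y):
    """Single-mode random potential, Eq. (10) restricted to i = j = 1."""
    return -(eps1 * np.cos(k * x + k * y) + eps2 * np.sin(k * x + k * y))

def F(x, y):
    """Conservative force F = -grad V, computed analytically."""
    phase = k * x + k * y
    dV = k * (eps1 * np.sin(phase) - eps2 * np.cos(phase))  # dV/dx = dV/dy here
    return np.array([-dV, -dV])

# verify F = -grad V by central finite differences at a test point
x0, y0, h = 1.3, -0.7, 1e-6
Fx_num = -(V(x0 + h, y0) - V(x0 - h, y0)) / (2 * h)
Fy_num = -(V(x0, y0 + h) - V(x0, y0 - h)) / (2 * h)
```

A realization drawn this way enters the equations of motion through $\vec{F}(\vec{r}(t))$; averaging observables over many such draws implements the disorder average $\overline{(\cdot)}$.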
Up to fourth order, the final result reads as \begin{align} \label{13} \Delta(t)&=4Dt+\left[v_0^2+\frac{1}{\gamma^2}\widehat{F_i^2}\right]t^2-\left[\frac{1}{3}v_0^2D_R+\frac{D}{\gamma^2}\widehat{F_i^{j2}}\right]t^3\nonumber\\ &+\frac{1}{24}\left[2v_0^2D^2_R+10 \frac{D^2}{\gamma^2}\widehat{F_i^{jk2}}-5\frac{v_0^2}{\gamma^2}\widehat{F_i^{j2}} \right.\nonumber\\ &+\frac{1}{\gamma^4}\left(14\widehat{F_i^2F_i^{i2}}+8\widehat{F_i^3F_i^{ii}}+14\widehat{F_xF_yF_x^{y}F_i^{i}}\right.\nonumber\\ &\left. \left. +14\widehat{F_yF_xF_y^{x}F_i^{i}} -5\widehat{F_i^2F_x^{y2}}-5\widehat{F_i^2F_y^{x2}}\right)\right]t^4+\mathcal{O}\left( t^5\right). \end{align} Here our notational convention is that the presence of any index $i$, $j$ or $k$ implies an additional sum over the directions $x$ and $y$. For example, in this compact notation, we have $\widehat{F_i^2}\equiv \sum_{i=x,y}\widehat{F_i^2}$. Subscripts in $F$ indicate the Cartesian component of the force, while superscripts denote a spatial derivative. For example, $\widehat{F_i^{j2}}= \sum_{i=x,y} \sum_{j=x,y} \widehat{ (\frac {\partial F_i}{\partial j})^2 }$.\\ In order to assess the presence of scaling regimes for the MSD, it is necessary to know whether the prefactors of $t^\alpha$ are negative or positive, and hence the signs of the various force products. In Eq.(\ref{13}), it can be shown that all products are positive with the exception of $\widehat{F_i^3F_i^{ii}}$. In the special case of a single-mode potential, which we define as a potential where only $\epsilon_{11}\neq 0$, one can combine this negative product with all other terms carrying the $1/\gamma^4$ prefactor and obtain the shorter, positive expression $6\widehat{F_i^2F_j^{k2}}$ (see Appendix \ref{Appendix}). In the more general case positivity is not ensured.\\ Let us now discuss the basic result contained in Eq.(\ref{13}). 
First of all, in the absence of any external forces, we recover the analytical expression for a free active particle \cite{howse_self-motile_2007} where \begin{align} \label{14} \Delta(t)&=4Dt+2\frac{v_0^2}{D_R^2}\left( D_Rt+\text{e}^{-D_Rt}-1\right) \nonumber\\ &=4Dt+v_0^2t^2-\frac{1}{3}v_0^2D_Rt^3+\frac{1}{12}v_0^2D^2_Rt^4+\mathcal{O}\left( t^5\right) \end{align} expanded up to order $\mathcal{O}\left( t^5\right)$. Conversely, for finite forces but in the limit of no activity, $v_0=0$, we get results for a passive particle in a random potential energy landscape \cite{Zunke_PhD_Thesis}. In general, for both $v_0 \not= 0$ and ${\vec F} \not= 0$, as far as the influence of disorder is concerned, the first leading correction in the MSD is in the ballistic $t^2$-term. The physical interpretation of this term is rooted in the fact that in a disordered energy landscape the particle on average feels a non-vanishing force such that it is drifting. The resulting ballistic contribution adds to that of the activity itself, which also contributes to the transient ballistic regime. We now define the crossover time $t^c_{1\rightarrow 2}$ as the ratio $A_1/A_2$ between the two regimes scaling with $A_1t$ and $A_2t^2$. This quantity indicates the time when the ballistic regime becomes prominent over the diffusive one. In this case $t^c_{1\rightarrow 2}$ depends on the self-propulsion velocity and the strength of the potential, and more specifically it shrinks as these grow: \begin{equation} \label{15} t^c_{1\rightarrow 2}=\frac{4 D}{\widehat{F^2_i}/\gamma^2+v_0^2}, \end{equation} meaning that an active particle subject to a random force field starts to move ballistically earlier. Spatial correlations in the random potential energy landscape contribute to the $t^3$-term in lowest order and affect the higher powers in time as well. 
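For concreteness, the crossover time of Eq.(\ref{15}) can be evaluated with a small helper function; the parameter values below are illustrative only:

```python
def t_c_12(D, v0, F2_hat, gamma):
    """Crossover time t^c_{1->2} = 4 D / (F2_hat / gamma^2 + v0^2), Eq. (15).

    F2_hat is the disorder/initial-condition average of F_x^2 + F_y^2.
    """
    return 4.0 * D / (F2_hat / gamma**2 + v0**2)

# without disorder this reduces to the free active-particle value 4 D / v0^2
t_free = t_c_12(D=1.0, v0=2.0, F2_hat=0.0, gamma=1.0)   # -> 1.0
# adding disorder of the same magnitude as v0^2 halves the crossover time
t_dis = t_c_12(D=1.0, v0=2.0, F2_hat=4.0, gamma=1.0)
```

This makes explicit that either stronger disorder or faster self-propulsion shifts the onset of ballistic motion to earlier times.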
Clearly, from the result (\ref{13}), the prefactor in front of the $t^3$-term is negative such that there is no regime where a pure $t^3$-scaling in the MSD can be observed. Finally, one could deduce from Eq.(\ref{13}) that there is a special limit of parameters where the dominant regime is an acceleration where $\Delta(t) \propto t^4$. In order to see this, one can set $v_0$ and $D$ to be small, while considering large wave vectors $k$ and amplitudes $\epsilon$ in the potential decomposition Eq.(\ref{10}) such that any combination of $\epsilon^2 k^4$ is much larger than one. However, this is not a scaling regime, as the term $\mathcal{O}\left(t^6\right)$ dominates over $\mathcal{O}\left(t^4\right)$ in the same limit. We compared the result (\ref{13}) to standard Brownian dynamics computer simulations. In our simulations, we first generated a random energy landscape; the particle was then exposed to the selected landscape with a random initial position and orientation. Then we integrated the equations of motion with an Euler finite-difference scheme with a time step of typically $\Delta t = 10^{-6}/D_R$. In order to simplify calculations for the simulations, we always used single-mode potentials. The MSD was then appropriately averaged over many starting configurations, the number of which was always larger than $10^4$. Figure \ref{Fig1} shows examples for the scaling behaviour of both the MSD and its scaling exponent \begin{equation} \label{16} \alpha(t) := \frac{d(\log(\Delta(t)))}{d(\log(t))} \end{equation} as functions of time in a double logarithmic plot. As can be deduced from Fig.\ref{Fig1} (a,b), the initial diffusive regime where $\Delta(t) \propto t$ and the subsequent ballistic regime $\Delta(t) \propto t^2$ are clearly visible and reproduced by our short time expansion. As expected, for large times there are increasing deviations between theory and simulation as the theory is a short-time expansion. 
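Such an Euler scheme can be sketched in a few lines of vectorized NumPy; as an illustration (with parameters chosen for brevity, not those of Fig.\ref{Fig1}) we take the force-free limit, where the exact result Eq.(\ref{14}) is available as a benchmark, and also estimate the scaling exponent of Eq.(\ref{16}) numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
D, D_R, v0 = 1.0, 1.0, 1.0            # illustrative units
dt, n_steps, n_part = 1e-3, 500, 5000

r = np.zeros((n_part, 2))
phi = rng.uniform(0.0, 2 * np.pi, n_part)   # random initial orientations
msd = np.empty(n_steps)

for n in range(n_steps):
    u = np.column_stack((np.cos(phi), np.sin(phi)))
    # Euler step of Eqs. (1)-(2) with F = 0 (force-free benchmark)
    r += v0 * u * dt + np.sqrt(2 * D * dt) * rng.normal(size=(n_part, 2))
    phi += np.sqrt(2 * D_R * dt) * rng.normal(size=n_part)
    msd[n] = np.mean(np.sum(r**2, axis=1))

t = dt * np.arange(1, n_steps + 1)
# exact force-free result, Eq. (14)
msd_th = 4 * D * t + 2 * (v0 / D_R)**2 * (D_R * t + np.exp(-D_R * t) - 1)
# scaling exponent alpha(t) = d log(MSD) / d log(t), Eq. (16)
alpha = np.gradient(np.log(msd), np.log(t))
```

At these early times the exponent starts close to $\alpha\approx 1$, as the diffusive term $4Dt$ dominates the ballistic contribution; adding the single-mode force of Eq.(\ref{10}) to the position update reproduces the disordered case.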
For large values of $\epsilon^2 k^4$ the short time expansion approximation becomes less accurate, as for example is shown in Fig.\ref{Fig1} (c,d). \begin{figure}[!htbp] \begin{center} \includegraphics[width=8.7cm]{MSD_Fig1_ab.pdf} \includegraphics[width=8.7cm]{MSD_Fig1_cd.pdf} \caption{Mean square displacement (a,c), scaling exponent $\alpha$ (b,d) and crossover time $t^c_{1\rightarrow 2}$ (marked by a blue line) for an overdamped active particle in a random single mode potential. In (a,b) we used the parameters $v_0=100\sqrt{DD_R}$, $\epsilon=100k_BT$ and $L=100\sqrt{D/D_R}$. As described by the theory, the initial diffusive behaviour is soon replaced by the ballistic behaviour. In (c,d) the parameters $v_0=50\sqrt{DD_R}$, $\epsilon=100k_BT$ and $L=10\sqrt{D/D_R}$ also show first the diffusive and then the ballistic regimes, but for larger times the short time expansion approximation breaks down earlier, as the average $\epsilon^2 k^4$ is larger.} \label{Fig1} \end{center} \end{figure} \subsection{Underdamped active Langevin motion} \label{Sub:UPot} For macroscopic self-propelled particles or particles in a gaseous medium, inertial effects become relevant and overdamped active Brownian motion is generalized towards underdamped active Langevin motion \cite{scholz_inertial_2018,lowen_inertial_2020}. The equations of motion for an inertial active particle in a random potential energy landscape are then generalized to \begin{align} \label{17} m\ddot{\vec{r}}(t)+\gamma\dot{\vec{r}}(t)&= \gamma v_0\hat{u}(t)+\vec{F}(\vec{r}(t))+\vec{f}(t),\\ \label{18} \gamma_R\dot{\phi}(t)&= f_R(t), \end{align} where $m$ is the particle mass. For simplicity, as in many previous studies for inertia \cite{enculescu_active_2011,takatori_inertial_2017,mokhtari_collective_2017,das_local_2019}, we have neglected rotational inertia here, which could be included by using a finite moment of inertia \cite{scholz_inertial_2018,lowen_inertial_2020}. 
Now the initial condition average $\ll \cdot \gg$ has to be performed not only over particle positions and orientations but also over the initial particle velocity $\dot{\vec{r}}(0)$. The resulting triple-averaged short time expansion of the mean square displacement is now: \begin{align} \label{19} \Delta(t)&=\sigma^2_v t^2+\frac{\gamma}{m}\left[\frac{4}{3}\frac{\gamma}{m}D-\sigma^2_v\right]t^3\nonumber\\ &+\frac{1}{m^2}\left[\frac{7}{12}\gamma^2\sigma^2_v+\frac{1}{4}\widehat{F_i^2}+\frac{1}{4}\gamma^2v_0^2-\frac{\gamma^3}{m}D\right]t^4\nonumber\\ &+\mathcal{O}\left(t^5\right), \end{align} where $\sigma^2_v=\ll\dot{x}^2(0)+\dot{y}^2(0)\gg$ is the variance of the initial speed of the particle. This result exhibits several dynamical scaling regimes. First of all, for short times the MSD starts ballistically with $t^2$ due to the initial velocities. Of course, this regime is absent if the particle is initially at rest, i.e.\ when $\sigma^2_v=0$. Remarkably, for very small initial velocities, the leading behaviour is governed by the term $t^3$, {\it cubic\/} in time, as the prefactor is positive. Note that for an initially thermalized particle with a Maxwellian velocity distribution, the prefactor is negative, implying the absence of this cubic regime. Finally, the presence of an external disordered force field now contributes to the $t^4$ term, as does the self-propulsion. This is plausible: if on average a constant (external or internal self-propulsion) force is present, the particle is constantly accelerated, which leads to the $t^4$-scaling. Consequently, for $\sigma^2_v \ll D\gamma/m \ll \widehat{F_i^2}/\gamma^2$ there are {\it three} subsequent scaling regimes: from the initial ballistic regime, via the cubic regime, to the constant-acceleration regime.\\ The typical crossover times between the $t^2$ and $t^3$ scalings and between the $t^3$ and $t^4$ scalings are referred to as $t^c_{2\rightarrow 3}$ and $t^c_{3\rightarrow 4}$. 
Their values are: \begin{align} \label{20} t^c_{2\rightarrow 3}&=\frac{m}{\gamma}\frac{\sigma^2_v}{\frac{4}{3}\frac{\gamma}{m}D-\sigma^2_v},\\ \label{21} t^c_{3\rightarrow 4}&=m\gamma\frac{\frac{4}{3}\frac{\gamma}{m}D-\sigma^2_v}{\frac{7}{12}\gamma^2\sigma^2_v+\frac{1}{4}\widehat{F_i^2}+\frac{1}{4}\gamma^2v_0^2-\frac{\gamma^3}{m}D}, \end{align} where we assume that both prefactors of $t^3$ and $t^4$ in Eq.(\ref{19}) are positive. Using Langevin dynamics computer simulations, we have compared the theoretical short-time expansion with simulation data in Figure \ref{Fig2}. For the time evolution of the system we used a symmetric stochastic splitting method that separates the stochastic and deterministic parts of the differential equations \cite{bussi_accurate_2007,sivak_using_2013}, with a typical time step of $\Delta t=10^{-10}/D_R$. As for the overdamped case, we used a single-mode potential field and we averaged the MSD over more than $10^4$ configurations of the initial conditions and the potential. A double-logarithmic plot indeed reveals three distinctive regimes where the MSD scales as $t^\alpha$ with $\alpha=2,3,4$, and there is good agreement between theory and simulation if the times are not too large. It is important to note that the cubic regime can only be seen for initially cool systems which are then exposed to thermal fluctuations. These can be prepared experimentally, for example, for granular hoppers \cite{scholz_inertial_2018} which are initially at rest and then brought into motion by instantaneously changing the vibration amplitude and frequency. Hence, although the $t^3$ regime is not visible for a thermalized system, it shows up for relaxational dynamics even for passive particles. 
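As a sketch, Eqs.(\ref{20}) and (\ref{21}) translate into the following helper; the parameter values below are illustrative and chosen to satisfy the separation $\sigma^2_v \ll D\gamma/m \ll \widehat{F_i^2}/\gamma^2$:

```python
def crossover_times(m, gamma, D, sigma_v2, F2_hat, v0):
    """Crossover times of Eqs. (20)-(21), underdamped particle in a
    random potential; valid when the t^3 and t^4 prefactors of Eq. (19)
    are both positive.
    """
    # prefactor of t^3 in Eq. (19)
    a3 = (gamma / m) * ((4.0 / 3.0) * (gamma / m) * D - sigma_v2)
    # prefactor of t^4 in Eq. (19)
    a4 = (1.0 / m**2) * ((7.0 / 12.0) * gamma**2 * sigma_v2
                         + 0.25 * F2_hat + 0.25 * gamma**2 * v0**2
                         - (gamma**3 / m) * D)
    return sigma_v2 / a3, a3 / a4      # t^c_{2->3}, t^c_{3->4}

t23, t34 = crossover_times(m=1.0, gamma=1.0, D=1.0, sigma_v2=1e-4,
                           F2_hat=100.0, v0=10.0)
```

With this parameter separation the two crossover times are well ordered, $t^c_{2\rightarrow 3} \ll t^c_{3\rightarrow 4}$, so all three regimes are visible.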
\begin{figure}[!htbp] \begin{center} \includegraphics[width=8.7cm]{MSD_Fig2.pdf} \caption{Mean square displacement (a) for an underdamped active particle in a random single mode potential, with scaling exponent $\alpha$ (b) and crossover times $t^c_{2\rightarrow 3}$, $t^c_{3\rightarrow 4}$. The parameters used are $v_0=100\sqrt{DD_R}$, $\epsilon=100k_BT$, $L=100\sqrt{D/D_R}$ and $\sigma_v=0.0002\sqrt{DD_R}$, and the unit for mass is the mass of the particle $m$. The three different scalings $t^2$, $t^3$ and $t^4$ are in this case clearly distinguishable from each other.} \label{Fig2} \end{center} \end{figure} \section{Active particle in a disordered motility landscape} \label{Sec:motility} \subsection{No aligning torque, overdamped} We now consider a self-propulsion velocity that fluctuates \cite{zaburdaev_random_2008} as a function of the position of the particle. Hence we denote the fluctuating part of the self-propulsion velocity by $\delta v (\vec{r})$, while the constant part is still denoted $v_0$, leading to a total propulsion velocity $(v_0+\delta v (\vec{r}))\hat{u}(\phi)$, or motility field. 
As in the case of the random potential, the random motility field is decomposed into two-dimensional Fourier modes, with Gaussian uncorrelated amplitudes: \begin{align} \label{22}\delta v(\vec{r})=\sum_{i,j=0}^\infty\left(\zeta_{ij}^{(1)}\cos(k_ix+k_jy)+\zeta_{ij}^{(2)}\sin(k_ix+k_jy)\right), \end{align} where the $\zeta_{ij}^{(\alpha)}$ prefactors have the same statistical properties as the $\epsilon_{ij}^{(\alpha)}$ prefactors in (\ref{11}).\\ The main differences between the motility and potential fields are that the former does not enter the equations of motion as a gradient and that it is coupled to $\hat{u}(\phi)$.\\ In the absence of an aligning torque and of inertia, the system obeys the equations: \begin{align} \label{23} \gamma\dot{\vec{r}}(t)&= \gamma(v_0+\delta v (\vec{r}))\hat{u}(\phi)+\vec{f}(t),\\ \label{24} \gamma_R \dot{\phi}(t)&= f_R(t), \end{align} leading to the following short time mean square displacement: \begin{align} \label{25} \Delta(t)&=4Dt+(v_0^2+\widehat{\delta v^2})t^2\nonumber\\ &-\frac{1}{3}\left[2D\widehat{\delta v^{i2}}+D_R(v_0^2+\widehat{\delta v^2})\right]t^3\nonumber\\ &+\frac{1}{24}\left[6D^2\widehat{\delta v^{ij2}} +8DD_R\widehat{\delta v^{i2}}+2D_R^2(v_0^2+\widehat{\delta v^2})\right. \nonumber\\ &\left.+7\widehat{\delta v^2 \delta v^{i2}}+4\widehat{\delta v^3 \delta v^{ii}}-5v_0^2\widehat{\delta v^{i2}}\right]t^4+\mathcal{O}\left( t^5\right), \end{align} where we use the same notation as described for Eq.(\ref{13}): the symbol $\widehat{(\cdot)}$ indicates an average over disorder and initial conditions, while the superscripts of $\delta v$ indicate sums over derivatives. We also remark that the product $\widehat{\delta v^3 \delta v^{ii}}$ is negative, while all the others are positive.\\ From the results in Eq.(\ref{25}) we can draw conclusions similar to those discussed in \ref{Sub:OPot} for Eq.(\ref{13}). 
In the limit of a vanishing motility field $\delta v(\vec{r})=0$, the mean square displacement of an active particle with constant speed (see Eq.(\ref{14})) is recovered. For a finite total self-propulsion velocity the first correction to the linear MSD is a $t^2$ term which is always positive, leading to a ballistic regime. The typical crossover time related to this transition, $t_{1\rightarrow 2}^c$, is now \begin{equation} \label{26} t^c_{1\rightarrow 2}=\frac{4 D}{\widehat{\delta v^2}+v_0^2}. \end{equation} Similarly to Eq.(\ref{13}), the spatial structure of the field first appears in the $\mathcal{O}\left(t^3\right)$ term of the equation, as a negative contribution that does not constitute a scaling regime. The $\mathcal{O}\left(t^4\right)$ prefactor is positive for a large motility field and a small $v_0$, but as the higher order terms always overshadow it, the particle never shows a pure accelerating behaviour. All these results have been confirmed by simulations similar to those described in \ref{Sub:OPot}. In Figure \ref{Fig3} we can see an example of such a simulation, where the plots of the MSD and its scaling exponent $\alpha$ behave in accordance with our theory at short times, with first a diffusive regime and then a ballistic one. \begin{figure}[!htbp] \begin{center} \includegraphics[width=8.7cm]{MSD_Fig3.pdf} \caption{Mean square displacement (a), scaling exponent $\alpha$ (b) and crossover time $t^c_{1\rightarrow 2}$ for an overdamped active particle in a random single mode motility field. 
The parameters $v_0=20\sqrt{DD_R}$ and $\zeta=20\sqrt{DD_R}$, $L=100\sqrt{D/D_R}$ feature the initial diffusive behaviour and the ballistic behaviour.} \label{Fig3} \end{center} \end{figure} \subsection{No aligning torque, underdamped} The underdamped equations of motion for a massive particle subject to a random motility field and no aligning torque are: \begin{align} \label{27} m\ddot{\vec{r}}(t)+\gamma\dot{\vec{r}}(t)&= \gamma(v_0+\delta v (\vec{r}))\hat{u}(\phi)+\vec{f}(t),\\ \label{28} \gamma_R\dot{\phi}(t)&=f_R(t), \end{align} where we again neglect angular inertia, for the same reason as explained in \ref{Sub:UPot}. The resulting MSD, averaged over disorder, initial conditions and thermal noise, is: \begin{align} \label{29} \Delta(t)&=\sigma^2_v t^2+\frac{\gamma}{m}\left[\frac{4}{3}\frac{\gamma}{m}D-\sigma^2_v\right]t^3\nonumber\\ &+\frac{\gamma^2}{m^2}\left[\frac{7}{12}\sigma^2_v+\frac{1}{4}(v_0^2+\widehat{\delta v^2})-\frac{\gamma}{m}D\right]t^4\nonumber\\ &+\mathcal{O}\left(t^5\right). \end{align} The three consecutive scaling regimes that characterize Eq.(\ref{19}), namely $t^2$, $t^3$ and $t^4$, can also be found in Eq.(\ref{29}) by now requiring $\sigma^2_v \ll D\gamma/m \ll \widehat{\delta v^2}+v_0^2$. The crossover time $t^c_{3\rightarrow 4}$ changes accordingly, while $t^c_{2\rightarrow 3}$ remains the same as in the potential case (see Eq.(\ref{20})): \begin{align} \label{30} t^c_{2\rightarrow 3}&=\frac{m}{\gamma}\frac{\sigma^2_v}{\frac{4}{3}\frac{\gamma}{m}D-\sigma^2_v},\\ \label{31} t^c_{3\rightarrow 4}&=\frac{m}{\gamma}\frac{\frac{4}{3}\frac{\gamma}{m}D-\sigma^2_v}{\frac{7}{12}\sigma^2_v+\frac{1}{4}(v_0^2+\widehat{\delta v^2})-\frac{\gamma}{m}D}, \end{align} where we assume that both the prefactors of $t^3$ and $t^4$ in Eq.(\ref{29}) are positive. These results were compared to the numerical MSD calculated with the help of Langevin dynamics simulations. 
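As a sketch, Eqs.(\ref{30}) and (\ref{31}) can be written as the following helper; compared with the potential case of Eqs.(\ref{20})-(\ref{21}), only the $t^4$ prefactor changes, with the disorder entering through $v_0^2+\widehat{\delta v^2}$ instead of $\widehat{F_i^2}/\gamma^2$ (parameter values are illustrative):

```python
def crossover_times_motility(m, gamma, D, sigma_v2, v0, dv2_hat):
    """Crossover times of Eqs. (30)-(31): random motility, no aligning
    torque; dv2_hat is the average of delta v^2 over disorder and
    initial conditions.
    """
    num = (4.0 / 3.0) * (gamma / m) * D - sigma_v2     # t^3 prefactor (x m/gamma)
    t23 = (m / gamma) * sigma_v2 / num
    t34 = (m / gamma) * num / ((7.0 / 12.0) * sigma_v2
                               + 0.25 * (v0**2 + dv2_hat)
                               - (gamma / m) * D)
    return t23, t34

t23_m, t34_m = crossover_times_motility(m=1.0, gamma=1.0, D=1.0,
                                        sigma_v2=1e-4, v0=10.0,
                                        dv2_hat=100.0)
```

As in the potential case, the regime ordering $t^c_{2\rightarrow 3} \ll t^c_{3\rightarrow 4}$ holds when $\sigma^2_v \ll D\gamma/m \ll \widehat{\delta v^2}+v_0^2$.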
In Figure \ref{Fig4} we present the typical results that can be obtained when the limit $\sigma^2_v \ll D\gamma/m \ll \widehat{\delta v^2}+v_0^2$ applies, and hence three different regimes appear. \begin{figure}[!htbp] \begin{center} \includegraphics[width=8.7cm]{MSD_Fig4.pdf} \caption{Mean square displacement (a) for an underdamped active particle in a random single mode motility field, with scaling exponent $\alpha$ (b) and crossover times $t^c_{2\rightarrow 3}$, $t^c_{3\rightarrow 4}$. The parameters used are $v_0=100\sqrt{DD_R}$, $\zeta=100\sqrt{DD_R}$, $L=100\sqrt{D/D_R}$ and $\sigma_v=0.0002\sqrt{DD_R}$, and the unit of mass is the mass of the particle $m$. The three different scalings $t^2$, $t^3$ and $t^4$ are clearly distinguishable.} \label{Fig4} \end{center} \end{figure} \subsection{Aligning torque} In this subsection we discuss the special case of the presence of an aligning torque $\tau(\vec{r},\phi)$ that redirects the self-propulsion of the particle towards either the maxima or the minima of the motility field. An aligning torque is important for colloidal realizations of active systems \cite{lozano_phototaxis_2016,jahanshahi_realization_2020,jahanshahi_colloidal_2019,geiseler_self-polarizing_2017}. Since one common way of realizing a motility field is by the use of light fields, we refer to the self-propulsion towards the maxima of the field as \emph{positive phototaxis} and the one towards the minima as \emph{negative phototaxis}. Here, we only focus on the overdamped case, characterized by the following equations: \begin{align} \label{32} \gamma \dot{\vec{r}}(t)&= \gamma (v_0+\delta v (\vec{r}))\hat{u}(\phi)+\vec{f}(t),\\ \label{33} \gamma_R\dot{\phi}(t)&=\gamma_R \tau(\vec{r},\phi)+g(t), \end{align} where $\tau(\vec{r},\phi)\equiv q(v_0+\delta v(\vec{r}))\left(\vec{\nabla}\delta v(\vec{r})\times\vec{u}(\phi)\right)\cdot \vec{e}_z$. 
The sign of the prefactor $q$ determines whether the phototaxis is positive ($q<0$) or negative ($q>0$).\\ The averaged MSD up to $\mathcal{O}\left( t^4\right)$ is: \begin{align} \label{34} \Delta(t)&=4Dt+(v_0^2+\widehat{\delta v^2})t^2+\nonumber\\ &-\frac{1}{3}\left[2D(1+qv_0)\widehat{\delta v^{i2}}+D_R(v_0^2+\widehat{\delta v^2})\right]t^3+\nonumber\\ &+\mathcal{O}\left( t^4\right). \end{align} In the special case of no translational diffusion ($D=0$) the next order of the MSD is: \begin{align} \label{35} \Delta(t)&=\dots+\frac{1}{24}\left[ 2D_R^2(v_0^2+\widehat{\delta v^2})+7\widehat{\delta v^2 \delta v^{i2}}-5v_0^2\widehat{\delta v^{i2}}+\right. \nonumber\\ &+4\widehat{\delta v^3 \delta v^{ii}}-4q(v_0^3\widehat{\delta v^{i2}}+3v_0\widehat{\delta v^2\delta v^{i2}})+\nonumber\\ &\left.+3q^2(v_0^4\widehat{\delta v^{i2}}+6v_0^2\widehat{\delta v^2\delta v^{i2}}+\widehat{\delta v^4\delta v^{i2}})\right]t^4+\nonumber\\ &+\mathcal{O}\left( t^5\right). \end{align} Analyzing Equations (\ref{34}) and (\ref{35}), we first notice that in the limit $q=0$ we recover the previous case with no aligning torque. When $q$ is non-zero, it first appears as a prefactor of the $t^3$ term if $D>0$ and of the $t^4$ term otherwise. A peculiarity of $q$ is that its sign can change between experimental setups, and when it is negative, all the prefactors in which it appears become positive. One can intuitively understand this by noting that positive phototaxis means the particle redirects itself towards the maxima of the motility field, and hence shows an MSD which is larger than in the negative-phototaxis case. Even when $q$ is negative and large, however, this does not give rise to a regime of either order $t^3$ or $t^4$, as the higher-order terms in time feature higher powers of $q$ that overshadow the lower orders. 
\section{Conclusions and outlook} \label{Sec:conclusions} In conclusion, we have systematically computed the quenched disorder average of the mean-square-displacement for an active particle in a random potential or motility landscape. The amplitude of the ballistic regime is affected by the strength of disorder, but spatial derivatives in the landscapes only contribute to the next, cubic term in time. For an inertial particle two new superballistic scaling regimes are found where the MSD scales as $t^3$ or as $t^4$. Our method can be applied to other more complex situations. First, the generalization to an anisotropic potential is straightforward, even though tedious. Second, the landscapes can be time-dependent as for real speckle patterns \cite{paoluzzi_run-and-tumble_2014,bianchi_active_2016-1}, moving activity waves \cite{geiseler_self-polarizing_2017,merlitz_linear_2018} and propagating ratchets \cite{lozano_propagating_2019,zampetaki_taming_2019,koumakis_dynamic_2019}; the same analysis can then be performed for time-dependent disorder. Moreover, it can in principle be done for other models of active particles, including the simpler active Ornstein-Uhlenbeck particle \cite{martin_statistical_2020} or more sophisticated pusher or puller descriptions for the self-propulsion. A refreshing or resetting of the landscapes can be considered as well \cite{mano_optimal_2017,scacchi_mean_2018}. Finally, the model can be extended to a viscoelastic solvent \cite{gomez-solano_dynamics_2016,berner_oscillating_2018,qi_enhanced_2020,theeyancheri_translational_2020} with random viscoelasticity, where memory effects become important. \section{Acknowledgements} We thank S. U. Egelhaaf and C. Zunke for helpful discussions. The work of DB was supported within the EU MSCA-ITN ActiveMatter (proposal No.\ 812780). HL acknowledges funds from the German Research Foundation (DFG) within SPP 2265 within project LO 418/25-1.
\section{ Introduction} There has been some research into methods\cite{muhammad2003double} that can approximate anti-derivatives. Most of them are based on polynomial or exponential spline fitting, which often cannot capture the highly non-linear and non-elementary nature of some integrals. These methods cannot be used directly in cases where the integrand depends on various parameters; they require a different approach for every integral. Neural networks, acting as universal approximators\cite{hornik1989multilayer}, can be a potent tool for this purpose. Definite integrals have been approximated \cite{zhe2006numerical} with significant accuracy using a single hidden layer neural network. Dual neural networks \cite{li2019dual} have also been used to calculate definite integrals in cases where the integrand is represented as discrete values. A computational speedup over other numerical techniques using shallow neural networks has also been demonstrated\cite{lloyd2020using}. For the integral, \begin{equation} \int f(x)\,dx \end{equation} previous works have used a single-layer neural network ( $N_1(x)=w_j^T\sigma(w_i^Tx+b_i)+b_j$) to approximate the integrand, similar to curve fitting. This approximation of the integrand is then integrated analytically, which in a way defeats the purpose. Since analytically integrating deep neural networks is not feasible, these works used shallow neural networks, which are often insufficient for highly non-linear functions.\\ \begin{equation} Older\ Methods\ :\ \int f(x)\,dx = \int N_1(x)\,dx \end{equation} This paper presents the algorithm Deep Neural Network Integration (DNNI): \begin{equation} DNNI : \int f(x,a,b,...)\,dx = N(x,a,b,...) \end{equation} where $N(x,a,b,..)$ is a Deep Neural Network similar to figure \ref{fig 1:DNNI}. 
The breakthrough that has propelled the DNNI algorithm is automatic differentiation\cite{baydin2018automatic}, which enables us to include the derivative of the neural network in its loss function. Using DNNI, we can obtain the anti-derivative directly as a continuous function, without integrating analytically. DNNI can serve as a single method for approximating primitives, calculating the value of definite integrals, and obtaining the closed-form expression of an integral as a function of other parameters.\\ This paper is organized as follows: Section 2 describes the algorithm; in Section 3, anti-derivatives are computed using DNNI and compared with theoretical results for simple and complicated functions, non-elementary integrals, and oscillatory functions. Section 4 focuses on the applications of DNNI in obtaining closed-form expressions approximating the elliptic and Fermi-Dirac integrals, finding cumulative distribution functions, and speeding up numerical differential equation solvers. The codes used in the paper are available at \href{https://github.com/Dibyajyoti-Chakraborty/Deep-Neural-Network-Integration}{https://github.com/Dibyajyoti-Chakraborty/Deep-Neural-Network-Integration}. They were executed on an Intel® Core™ i7-9700K CPU @ 3.60GHz × 8 processor with 32 GB RAM and an NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] GPU. \section{ Methodology} Integration of a continuous real function $f$ is defined as \begin{equation} I(x,a,b,..) = \int f(x,a,b,..)\,dx \end{equation} where $I$ is the anti-derivative of $f$, since \begin{equation} \frac{\partial I} {\partial x}=f(x,a,b,..) 
\end{equation} \begin{figure}[h] \centering \begin{neuralnetwork}[height=10] \newcommand{\x}[2]{$x_{#2}$} \newcommand{\y}[2]{$N(x_1,x_2,x_3)$} \newcommand{\hfirst}[2]{\small $\sigma^{(1)}_{#2}$} \newcommand{\hsecond}[2]{\small $\sigma^{(2)}_{#2}$} \newcommand{\hthird}[2]{\small $\sigma^{(3)}_{#2}$} \newcommand{\hfourth}[2]{\small $\sigma^{(4)}_{#2}$} \inputlayer[count=3, bias=false, title=Input\\layer, text=\x] \hiddenlayer[count=10, bias=false, title=Hidden\\layer 1, text=\hfirst] \linklayers \hiddenlayer[count=10, bias=false, title=Hidden\\layer 2, text=\hsecond] \linklayers \hiddenlayer[count=10, bias=false, title=Hidden\\layer 3, text=\hthird] \linklayers \hiddenlayer[count=10, bias=false, title=Hidden\\layer 4, text=\hfourth] \linklayers \outputlayer[count=1, title=Output\\layer, text=\y] \linklayers \end{neuralnetwork} \caption{Deep Neural Network architecture with three inputs and one output. It also has four hidden layers with ten nodes each.} \label{fig 1:DNNI} \end{figure} In the DNNI algorithm, the integral $I$ is approximated by a Feed Forward Deep Neural Network $N(x)$ as shown in figure \ref{fig 1:DNNI}. The Neural Network can be represented as \begin{equation} N(x_1,x_2,...) = W_{L+1}^T\,\sigma\!\left(W_L^T\,\sigma\!\left(W_{L-1}^T\,\sigma\!\left(\cdots\right)+b_{L-1}\right)+b_L\right)+b_{L+1} \end{equation} where the $W$s and $b$s are weights and biases, respectively, and $\sigma$ is a non-linear activation function like the sigmoid $(1/(1+e^{-x}))$ or tanh. Now, in a required domain $x\in [\alpha,\beta]$, which can be arbitrarily large, we aim to make \begin{equation} N(x,a,b,...) \approx I(x,a,b,...) \end{equation} \begin{equation} \implies N(x,a,b,...) \approx \int f(x,a,b,...)\,dx \end{equation} \begin{equation} \implies \frac{\partial N(x,a,b,...)}{\partial x} \approx f(x,a,b..) \end{equation} Since a neural network is a continuous function, it can be easily differentiated using the latest developments in automatic differentiation. 
This derivative of the neural network is included in the loss function to minimize its deviation from the integrand. As the derivative of the neural network gets closer to the integrand, the network itself approximates the integral. Hence, the loss function is formed as \begin{equation} LOSS :\ \ MSE\left(\frac{\partial N(x,a,b,...)}{\partial x} , f(x,a,b,...) \right) \end{equation} where \begin{equation} MSE(x,y) :\ \ \frac{\sum_{i=1}^{n}(x_i-y_i)^2 }{n} \end{equation} is the Mean Squared Error over the $n$ training points. The weights and biases can be tuned using an optimization algorithm to reach the required accuracy. Gradient Descent or quasi-Newton optimization algorithms are most commonly used; in this paper, we have mostly used the Adam algorithm\cite{kingma2014adam}, with learning rate scheduling, for optimization. \\In the cases where the limits are defined, DNNI can give a closed-form approximation of the integral as a function of the other parameters. \begin{equation} \int_{x_0}^{x_n} f(x,a,b,...)\,dx = F(a,b,...) \approx N(x_n,a,b,...)-N(x_0,a,b,...)\end{equation} The model depth, the number of nodes in each hidden layer, and the number of epochs are selected based on the complexity of the integrand. Some activation functions used are shown in the table below. ReLU is not used since its second and higher derivatives vanish. \begin{center} \begin{tabular}{ c c c c} \hline \hline Name &Expression & Derivative & Second derivative \\ \hline \hline Sigmoid & $ \frac{1} {1 + e^{-x}}$ & $\frac{e^{x}} {(1+e^{x})^2}$ & $-\frac{e^{x}(e^{x}-1)} {(1+e^{x})^3}$ \\ \hline Tanh & $\tanh(x)$ & $\operatorname{sech}^2(x)$ & $-2\tanh(x)\operatorname{sech}^2(x)$ \\ \hline ReLU & $\max(0,x)$ & $ \begin{cases} 0 & x\leq 0 \\ 1 & x > 0 \end{cases} $ & 0\\\hline \hline \end{tabular} \end{center} \section{ Results} We tried the DNNI algorithm in many cases, setting the lower limit to an arbitrary value to fix the constant of integration.
Then, we compared the theoretical primitive, if available, with its DNNI approximation. \subsection{Simple Integrals} In this subsection, we show the application of DNNI to obtain the anti-derivatives of some ubiquitous integrals. They are plotted with their theoretical counterparts for comparison. \\ Case 1: \begin{equation} \int x^6 \,dx = x^7/7 + c \end{equation} \begin{figure}[h] \centering \includegraphics[width=7.5cm,height=5.6cm]{images/x_6.eps} \caption{Comparison of the DNNI anti-derivative and theoretical anti-derivative of $x^6$. It can be observed that the DNNI anti-derivative perfectly overlaps with the exact anti-derivative.} \label{fig:x^6} \end{figure} Case 2: \begin{equation} \int \sqrt{1+x^2} \,dx = \frac{\sinh^{-1}{x}+x\sqrt{x^2+1}}{2} + c \end{equation} \begin{figure}[h] \centering \includegraphics[width=7.5cm,height=5.6cm]{images/root_1+x_2.eps} \caption{Comparison of the DNNI and theoretical anti-derivative of $\sqrt{1+x^2}$} \label{fig:root(1+x^2)} \end{figure}\\ Case 3: \begin{equation} \int \cos{x} \,dx = \sin{x} + c \end{equation} \begin{figure}[h] \centering \includegraphics[width=7.5cm,height=5.6cm]{images/cos.eps} \caption{Comparison of the DNNI and theoretical anti-derivative of $\cos(x)$} \label{fig:cos} \end{figure} \newpage \subsection{Complex Integrals} Some anti-derivatives are very hard to obtain analytically, even when a closed form exists. Applying deep learning to symbolic mathematics\cite{lample2019deep} has proven helpful in finding primitives of complex integrands.
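Closed-form primitives can be spot-checked by numerical differentiation, in the same spirit as the DNNI loss itself. The short script below, using only the Python standard library, verifies by central differences that $\frac{d}{dx}\,\tfrac{1}{2}\big(\sinh^{-1}x + x\sqrt{1+x^2}\big) = \sqrt{1+x^2}$, the integrand of Case 2.

```python
import math

# Central finite-difference check that F'(x) = sqrt(1 + x^2) for
# F(x) = (asinh(x) + x*sqrt(1 + x^2)) / 2, the Case 2 anti-derivative.

def F(x):
    return (math.asinh(x) + x * math.sqrt(1.0 + x * x)) / 2.0

h = 1e-6
max_err = 0.0
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    approx = (F(x + h) - F(x - h)) / (2.0 * h)   # ≈ F'(x)
    exact = math.sqrt(1.0 + x * x)
    max_err = max(max_err, abs(approx - exact))
print(max_err)   # on the order of the O(h^2) truncation error
```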
This subsection shows the use of DNNI for such integrals.\\ \\ Case 4: \begin{equation}\label{com1} \int \frac{16x^3-42x^2+2x}{\sqrt{-16x^8+112x^7-204x^6+28x^5-x^4+1}} \,dx = \sin^{-1}\left( 4x^4-14x^3+x^2 \right) + c \end{equation} \begin{figure}[h] \centering \includegraphics[width=8cm,height=6cm]{images/tough2.eps} \caption{Comparison of the DNNI and theoretical anti-derivative for the integral shown in case 4} \label{fig:tough2} \end{figure} \\ \\ Case 5\cite{bronstein1998symbolic}: \begin{equation}\label{com2} \int \frac{x^2+2x+1+(3x+1)\sqrt{x+\log(x)}}{x\sqrt{x+\log(x)}(x+\sqrt{x+\log(x)})} \,dx = 2\left(\sqrt{x+\log(x)}+\log\left(x+\sqrt{x+\log(x)}\right) \right) + c \end{equation} \begin{figure}[h] \centering \includegraphics[width=7.5cm,height=5.6cm]{images/tough1.eps} \caption{Comparison of the DNNI and theoretical anti-derivative for the integral shown in Case 5} \label{fig:tough1} \end{figure} \newpage \subsection{Non-Elementary Integrals} DNNI is very useful in the case of non-elementary integrals. Though there are several numerical techniques for definite integrals, DNNI can plot the primitive, which is computationally expensive with other numerical techniques due to the repeated integrations required. \\ \\ Case 6: \begin{figure}[h] \centering \includegraphics[width=7.5cm,height=5.6cm]{images/x_x.eps} \caption{The function $x^{-x}$, which attains its maximum at $x=1/e$ and then decreases asymptotically} \label{fig:x^x} \end{figure} The variation of the function $x^{-x}$ is shown in figure \ref{fig:x^x} and its anti-derivative is shown in figure \ref{fig:integral x^x}.
\begin{figure}[h] \centering \includegraphics[width=7.5cm,height=5.6cm]{images/integral_x_x.eps} \caption{Anti-Derivative of $x^{-x}$ given by $\int_0^x t^{-t} \,dt$ where the lower limit is set to zero for eliminating the constant of integration.} \label{fig:integral x^x} \end{figure} \\The identity known as the Sophomore's dream is \begin{equation} \int_0^1 t^{-t} \,dt= \sum_{n=1}^{\infty}n^{-n} = 1.291285997 \end{equation} which can be verified using DNNI. Also, the integral \begin{equation}\label{ne2} \int_0^\infty t^{-t} \,dt= 1.99545596 \end{equation} can be obtained to any desired accuracy by changing the number of epochs and the depth of the neural network. It is obtained to an error of 0.01\% using 4 hidden layers with 10 nodes each. \\ \\ Case 7: Elliptic integrals \\ An elliptic integral is expressed as \begin{equation} E(x) = \int_c^x f(t,\sqrt{P(t)})\,dt \end{equation} where $f$ is a rational function and $P$ is a polynomial of degree 3 or 4. Something as simple as finding the perimeter of an ellipse requires solving a non-elementary integral. The perimeter of an ellipse is given by \begin{equation}\label{19} Perimeter =\int_0^{\pi/2} 4a\sqrt{1-e^2\sin^2(x)}\,dx \end{equation} where 'a' and 'b' are the semi-major and semi-minor axis lengths and 'e' is the ellipse's eccentricity.\\ \begin{center} \begin{tabular}{ c c c c} \hline\hline Sl.No. & a & b & Perimeter using Naive DNNI \\ \hline\hline 1& 8 & 7 & 47.17621557 \\ \hline 2 & 2& 1 & 9.68845137 \\ \hline3 & 10 & 5 & 48.44226631 \\ \hline 4 & 5 & 1 & 21.01007226 \\\hline\hline \end{tabular} \end{center} The values obtained have a maximum error of 0.0002\% using a two-hidden-layer neural network with ten nodes each. This case is computed using DNNI in a single variable, similar to the definite integral calculations above.
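The two $t^{-t}$ identities above also admit a quick independent check with elementary quadrature and no neural network; the sketch below (NumPy only) compares the partial sums of $\sum n^{-n}$ with a midpoint rule for $\int_0^1 t^{-t}\,dt$, and evaluates $\int_0^\infty t^{-t}\,dt$ by truncating the domain at $t=30$, where the integrand is negligible.

```python
import numpy as np

# Sophomore's dream: the integral of t^{-t} on (0, 1] equals the series
# sum n^{-n}, whose terms decay super-exponentially.
series = sum(n ** (-float(n)) for n in range(1, 25))

def midpoint(a, b, n):
    """Midpoint rule for t^{-t} = exp(-t*log(t)) on (a, b)."""
    t = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)
    return (b - a) / n * np.sum(np.exp(-t * np.log(t)))

I_01 = midpoint(0.0, 1.0, 200000)    # integral on (0, 1]
I_inf = midpoint(0.0, 30.0, 600000)  # integral on (0, inf), truncated at 30

print(series, I_01, I_inf)
```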
Another approach is to obtain an approximate closed-form formula based on the parameters 'a' and 'b'. This pioneering technique can estimate the closed-form expression of any integral based on several parameters. Further details are given in section \ref{param}. \subsection{Oscillatory Integrals} Functions of the form $f(x)\sin(\frac{\omega}{x^k})$ and $f(x)\cos(\frac{\omega}{x^k})$, where $f$ is a continuous and smooth function and $\omega$ and $k$ are real numbers, are highly oscillatory. Integrating such functions is very challenging using common numerical techniques. Mathematicians have developed special techniques, such as Haar wavelets and hybrid functions\cite{shivaram2016numerical}, to handle such integrands. This subsection shows that DNNI can handle even highly oscillatory integrals. \\Case 8: \begin{equation} \int_{0}^1 x\ \sin\left(\frac{1}{x^{10}}\right) \,dx = 0.060665 \end{equation} \clearpage \begin{figure}[h] \centering \begin{minipage}{.5\textwidth} \includegraphics[width=\textwidth,height=0.6\textwidth]{images/osc1.eps} \end{minipage}% \begin{minipage}{.5\textwidth} \includegraphics[width=\textwidth,height=0.6\textwidth]{images/int_osc1.eps} \end{minipage} \caption{Function: $x\ \sin(\frac{1}{x^{10}})$ and its DNNI anti-derivative} \label{fig: Osc1} \end{figure} Since DNNI approximates the primitive, it is bound to give the correct values of the definite integral on applying the limits. It also gives a closed-form approximation of the anti-derivative, which can be used as required.
\begin{table}[h] \centering \begin{tabular}{c c c} \hline\hline Method & Value & Error(\%) \\ \hline\hline Simpson's 1/3rd (500 points)& 0.0622533209 & 2.61818 \\ \hline Simpson's 3/8th (500 points)& 0.070571762 & 16.33028 \\ \hline Simpson's 1/3rd (1 million points)& 0.0606172467 & 0.07872 \\ \hline Simpson's 3/8th (1 million points)& 0.0605936399 &0.11763 \\ \hline Clenshaw-Curtis method (SciPy library)& 0.060524 & 0.232424 \\ \hline Global Adaptive Quadrature (MATLAB default)&0.0605935019 & 0.117857 \\ \hline DNNI& 0.06067391 & 0.01469 \\ \hline \hline \end{tabular} \caption{\label{tab:osc1}A comparison between DNNI and several common numerical techniques. The computation time of the other methods is lower than that of DNNI, but the number of points used is much higher. DNNI gives the most accurate result of all.} \end{table} \\Case 9: \begin{equation} \int_0^1 \frac{1}{x+1}\ \sin\left(\frac{1}{x}\right) \,dx = 0.28749061 \end{equation} \begin{figure}[h] \centering \begin{minipage}{.5\textwidth} \includegraphics[width=\textwidth,height=0.6\textwidth]{images/osc2.eps} \end{minipage}% \begin{minipage}{.5\textwidth} \includegraphics[width=\textwidth,height=0.6\textwidth]{images/int_osc2.eps} \end{minipage} \caption{ The function: $\frac{1}{x+1}\ \sin(\frac{1}{x})$ and its DNNI anti-derivative} \label{fig: Osc2} \end{figure} \begin{table}[h] \centering \begin{tabular}{c c c} \hline \hline Method & Value & Error(\%) \\ [0.5ex] \hline\hline Simpson's 1/3rd (500 points)& 0.28603691 & 0.50565 \\ \hline Simpson's 3/8th (500 points)& 0.28345895 & 1.402362 \\ \hline Simpson's 1/3rd (1 million points)& 0.28751143 & 0.007242 \\ \hline Simpson's 3/8th (1 million points)& 0.28750075 & 0.003527 \\ \hline Clenshaw-Curtis method (SciPy library)& 0.285857 & 0.568230 \\ \hline Global Adaptive Quadrature (MATLAB default)&0.28749060 & Exact \\ \hline DNNI& 0.28730544 & 0.064409 \\ \hline \hline \end{tabular} \caption{\label{tab:osc2} A comparison between DNNI and several common numerical techniques.
Since this function is less oscillatory than the one in Case 8, the numerical techniques perform relatively better.} \end{table} \newpage \subsection{Error Analysis} The accuracy of DNNI increases with the number of points used to train the neural network. The $\ell_2$ distance from the theoretical solution decreases asymptotically with the number of training points, apart from local fluctuations that average out. In this paper, the learning rate is of the order of $10^{-2}$ and is reduced every one-fifth of the training steps. Increasing the number of epochs with decreasing learning rates also decreases the $\ell_2$ distance asymptotically. All these parameters can be tuned to obtain the best model. \begin{table}[h] \centering \begin{tabular}{c c c} \hline \hline Parameter & Simple Integral & Complex Integral \\ [0.5ex] \hline\hline No. of training points & 20-100 & 1000-5000 \\ \hline Depth of the Neural Network& 2-4 layers & 4-8 layers \\ \hline No. of nodes in each layer& 5-10 & 10-20 \\ \hline No. of epochs& 1000-10000 & 10000-50000 \\ \hline \hline \end{tabular} \caption{We suggest using the above parameters to train the DNNI model based on the complexity of the problem.
A meshgrid has to be generated for integrals based on several parameters, so that all combinations appear in the training data.} \end{table} \begin{figure}[h] \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\linewidth,height=0.6\linewidth]{images/epoch1.eps} \label{fig:e1} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\linewidth,height=0.6\linewidth]{images/epoch2.eps} \label{fig:e2} \end{minipage} \caption{The variation of the $L_2$ norm with respect to the theoretical anti-derivative for the integrals in equations \ref{com1} and \ref{ne2}, respectively, with increasing epochs.} \label{fig:epoch} \end{figure} \begin{figure}[h] \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\linewidth]{images/Case1_points.eps} \label{fig:sub1} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\linewidth]{images/Case2_points.eps} \label{fig:sub2} \end{minipage} \caption{The variation of the $L_2$ norm with respect to the theoretical anti-derivative for the integrals in equations \ref{com1} and \ref{ne2}, respectively, on increasing the number of training points.} \label{fig:error1} \end{figure} \newpage \section{ Applications} The applications of DNNI presented here are based either on the need for anti-derivatives of non-elementary integrals or on speeding up algorithms that perform repeated integrations. \subsection{Parametric integrals}\label{param} This subsection shows the use of DNNI in obtaining anti-derivatives of integrands with several parameters. A closed-form solution is required in such cases to use the integral further and to study the effects of the various parameters. Mathematicians have successfully developed many closed-form approximations for popular non-elementary integrals like the perimeter of an ellipse and the Fermi-Dirac integral.
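Two of these classical results give convenient ground truths: Ramanujan's approximation $P \approx \pi\big(3(a+b)-\sqrt{(3a+b)(a+3b)}\big)$ for the ellipse perimeter, and the identity $F_1(0)=\int_0^\infty x/(e^x+1)\,dx = \pi^2/12$ for the non-relativistic Fermi-Dirac integral. Both can be reproduced by elementary quadrature; the sketch below assumes only NumPy.

```python
import numpy as np

def midpoint(g, a, b, n):
    """Midpoint-rule quadrature of g on (a, b)."""
    x = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)
    return (b - a) / n * np.sum(g(x))

# Ellipse perimeter: 4a * int_0^{pi/2} sqrt(1 - e^2 sin^2 t) dt, e^2 = 1 - b^2/a^2
a, b = 5.0, 1.0
e2 = 1.0 - (b / a) ** 2
per = 4 * a * midpoint(lambda t: np.sqrt(1.0 - e2 * np.sin(t) ** 2),
                       0.0, np.pi / 2, 100000)
ram = np.pi * (3 * (a + b) - np.sqrt((3 * a + b) * (a + 3 * b)))
print(per, ram)            # quadrature vs. Ramanujan's approximation

# Non-relativistic Fermi-Dirac integral F_1(0) = int_0^inf x/(e^x+1) dx = pi^2/12
# (domain truncated at 60, where the integrand is negligible)
fd = midpoint(lambda x: x / (np.exp(x) + 1.0), 0.0, 60.0, 400000)
print(fd, np.pi ** 2 / 12)
```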
Using these test cases, we claim that DNNI can be used to obtain a closed-form approximation of any given integral. Though there is no substitute for theoretical analysis, DNNI can give a relatively quick solution in very complex cases. Moreover, DNNI is a single method that works for all types of integrands.\\ Case 10:\\ Equation \ref{19} shows the integral for the perimeter of an ellipse. This integral depends on the semi-major axis 'a' and the semi-minor axis 'b'. \begin{equation} P(a,b) = 4 \int_0^{\pi/2} \sqrt{a^2-(a^2-b^2)\sin^2(\theta)} \,d\theta,\ \ \ \ a,b\in\mathbb{R} \end{equation} For any given 'a' and 'b', the value can be found easily using numerical definite-integral techniques, but a closed-form expression of $P(a,b)$ is often required. The above integral is very common and has been approximated by several mathematicians, including Ramanujan's famous formula: \begin{equation} P(a,b) \approx \pi\left(3(a+b)- \sqrt{(3a+b)(a+3b)}\right) \end{equation} DNNI can be used to obtain an approximate closed-form anti-derivative of the integrand. The inputs to the neural network are a flattened meshgrid of the 3 parameters $\theta$, $a$, and $b$ in the required domain of interest. The perimeter can then be calculated as: \begin{equation} P(a,b) \approx N(\frac{\pi}{2},a,b)-N(0,a,b) \end{equation} \begin{table}[h] \begin{center} \begin{tabular}{ c c c c} \hline \hline a & b & $N(\frac{\pi}{2},a,b)-N(0,a,b)$ & Relative Error \\ \hline \hline 5 & 1 & 21.03439167 & 0.001159 \\ \hline 6& 1.8 & 26.29762002 & 0.000858 \\ \hline 7 & 2.6 & 31.75970172 &0.000171 \\ \hline 8 & 3.4 & 37.28223005 &0.000139 \\\hline 9 & 4.2& 42.84975621 & 0.000043\\\hline 10&5& 48.40454929 & 0.0000078\\ \hline \hline \end{tabular} \end{center} \caption{DNNI is used to obtain a formula for the perimeter of an ellipse. These errors are slightly higher than the ones obtained in the single-variable elliptic-integral case above, which used fixed parameters.
The errors can be further reduced by using deeper neural networks and increasing the number of epochs.} \end{table} \newpage Case 11:\\ The non-relativistic Fermi-Dirac integral is defined as \begin{equation}\label{fermi} F_q(\eta) = \int_0^{\infty} \frac{x^q}{e^{x-\eta}+1} \,dx ,\ \ \ q\geq0,\ \ \ \eta\in\mathbb{R} \end{equation} and the relativistic Fermi-Dirac integral is \begin{equation} F_q(\eta,\beta)=\int_0^{\infty} \frac{x^q\sqrt{1+\beta x/2}}{e^{x-\eta}+1}\,dx,\ \ \ \beta\geq0,\ \ \ q\geq0,\ \ \ \eta\in\mathbb{R} \end{equation} The Fermi-Dirac integral has many applications in nuclear astrophysics and in finding the concentration of electrons and holes in a semiconductor. There has been substantial research \cite{gil2022complete,sagar1991gaussian,temme1990uniform,bhagat2003evaluation,mohankumar2016very} on obtaining theoretical and numerical approximations of the relativistic and non-relativistic Fermi-Dirac integrals. In the following tables, DNNI is used to get an approximate closed-form expression for the above integrals. The values are compared to those obtained using numerical definite-integral techniques. DNNI gives a function that outputs the integral on inputting the parameters. The neural-network approximation of the Fermi-Dirac integral can be further integrated, differentiated, and plotted as required. \begin{table}[h] \begin{center} \begin{tabular}{ c c c c} \hline \hline q & $\eta$ & $N(\zeta,q,\eta)-N(0,q,\eta)$ & Relative Error \\ \hline \hline 0 & -2 & 0.12468052 & 0.017706 \\ \hline 0.5& -1 & 0.28986771 & 0.002179 \\ \hline 1 & 0 &0.8233424 &0.001064 \\ \hline 1.5 & 1 & 2.66133345 &0.000130 \\\hline 2 & 2& 9.51024877 & 0.000254\\\hline \hline \end{tabular} \end{center} \caption{DNNI is used to obtain a formula for the non-relativistic Fermi-Dirac integral. Here, $\zeta$ is an arbitrarily large number in the given domain.
This table shows the relative errors obtained by substituting the required parameters into the DNNI function and comparing with the definite-integral counterpart. The errors can be further reduced by using deeper neural networks and increasing the number of epochs.} \end{table} \\The multi-parameter DNNI is computationally expensive compared to the single-variable DNNI, but it has extensive uses in all fields of science and engineering. \begin{table}[h] \begin{center} \begin{tabular}{ c c c c c} \hline \hline q & $\eta$ & $\beta$ & $N(\zeta,q,\eta,\beta)-N(0,q,\eta,\beta)$ & Relative Error \\ \hline \hline 1 & -1 & 0.5 & 0.41499549 & 0.0000397 \\ \hline 1.5& 0 & 1 & 1.74834439 & 0.005471 \\ \hline 2 & 1 & 1.5 &7.94319678 &0.001817 \\ \hline 2.5 & 2 &2& 38.88427763 &0.004774 \\\hline \hline \end{tabular} \end{center} \caption{DNNI is used to obtain a formula for the relativistic Fermi-Dirac integral. This table shows the relative errors obtained by substituting the required parameters into the DNNI anti-derivative and comparing with the definite-integral counterpart.} \end{table} \newpage \subsection{Cumulative Distribution function} A probability density function describes the distribution of probability over the values of a random variable. For a continuous random variable, it is represented by a function $f$. The cumulative distribution function of a random variable is given by \begin{equation} F(X) = \int_{-\infty}^X f(x) \,dx \end{equation} An anti-derivative is precisely what is needed in this case. DNNI can be a handy method for approximating any cumulative distribution function. It can replace the need for distribution tables or repeated numerical integration.
This claim is illustrated in the following cases: \\ \\Case 12:\\ For the probability density function \begin{equation} f(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}, \end{equation} as shown in figure \ref{fig:normal}, the cumulative distribution function is \begin{equation} \frac{1}{\sqrt{2\pi}}\int_{-\infty}^x e^{-t^2/2} \,dt = \frac{1}{2}\ (1+\operatorname{erf}\left( \frac{x}{\sqrt{2}}\right)) \end{equation} \begin{figure}[h] \centering \begin{minipage}{.5\textwidth} \includegraphics[width=\textwidth,height=0.8\textwidth]{images/normal.eps} \end{minipage}% \begin{minipage}{.5\textwidth} \includegraphics[width=\textwidth,height=0.8\textwidth]{images/CDF_normal.eps} \end{minipage} \caption{Standard Normal Distribution and its Cumulative Distribution function} \label{fig:normal} \end{figure} Case 13: \\ For the probability density function \begin{equation} f(x) = \frac{1}{3\sqrt{2\pi}}x^4e^{-x^2/2}, \end{equation} as shown in figure \ref{fig: A special bimodal Distribution}, the cumulative distribution function is \begin{equation} \frac{1}{3\sqrt{2\pi}}\int_{-\infty}^x t^4e^{-t^2/2} \,dt = \frac{1}{2}\ (1+\operatorname{erf}\left( \frac{x}{\sqrt{2}}\right)) - \frac{1}{3\sqrt{2\pi}}x(x^2+3)e^{-x^2/2} \end{equation} \begin{figure}[h] \centering \begin{minipage}{.5\textwidth} \includegraphics[width=\textwidth,height=0.8\textwidth]{images/PDF2.eps} \end{minipage}% \begin{minipage}{.5\textwidth} \includegraphics[width=\textwidth,height=0.8\textwidth]{images/CDF2.eps} \end{minipage} \caption{A Bimodal Distribution and its Cumulative Distribution Function} \label{fig: A special bimodal Distribution} \end{figure} \\ Using DNNI, the cumulative distribution functions of very complicated probability distributions can also be found. Numerically computing the definite integral for many different limits is computationally expensive, and DNNI is a promising solution to this.
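As a quick consistency check of Case 12, the closed form $\Phi(x)=\tfrac{1}{2}\big(1+\operatorname{erf}(x/\sqrt{2})\big)$ can be compared against direct midpoint quadrature of the correctly normalized density $e^{-x^2/2}/\sqrt{2\pi}$; the sketch below uses only the Python standard library.

```python
import math

# Standard normal CDF two ways: the erf closed form vs. quadrature of the pdf.

def phi_erf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_quad(x, n=100000, lo=-10.0):
    # Midpoint rule on [lo, x]; Phi(-10) ~ 7.6e-24, so truncation is negligible.
    h = (x - lo) / n
    s = sum(math.exp(-0.5 * (lo + (i + 0.5) * h) ** 2) for i in range(n))
    return h * s / math.sqrt(2.0 * math.pi)

max_err = max(abs(phi_quad(x) - phi_erf(x)) for x in (-1.0, 0.0, 1.0, 1.96))
print(max_err)   # the two evaluations agree closely
```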
\newpage \subsection{Galerkin Method} The Galerkin method is a numerical technique for solving differential equations by converting them to a weak integral form. The domain is discretized, and the conserved variable is approximated with basis functions. The differential equation is multiplied by a weight function and integrated such that the residual is zero. The coefficients of the basis functions are calculated by solving sets of linear equations. The Galerkin method is a very general and broad family of methods\cite{giraldo2020introduction} for solving differential equations; we have picked one such method, close to the Finite Element Method. \\Consider a differential equation \begin{equation} \alpha\frac{d^2u(x)}{dx^2}+\beta \frac{du(x)}{dx}+\gamma u(x) = f(x) \end{equation} with Dirichlet boundary condition $u(x_0)=u_0$ and Neumann boundary condition $\frac{du}{dx}=c$ at $x=x_n$, in the domain $x_0<x<x_n$. Let $u(x)$ be approximated as \begin{equation}\label{eqn28} u(x)=\sum_{i=1}^n c_i \psi_i(x) \end{equation} Using the same weight functions as the basis functions and integrating, we get \begin{equation}\label{eqn29} \int_{x_0}^{x_n} \psi_j \left(\alpha\frac{d^2u(x)}{dx^2}+\beta \frac{du(x)}{dx}+\gamma u(x) - f(x)\right) \,dx =0 \end{equation} On substituting equation \ref{eqn28}, using linear basis functions, and simplifying\cite{atluri2005methods}, equation \ref{eqn29} converts to the following set of linear equations.\\ \small \begin{equation*} \begin{bmatrix} (-\frac{\alpha}{h}+\frac{\alpha x_1}{h^2} -\frac{\beta}{2} -\frac{\gamma h}{3} -\frac{\alpha x_2}{h^2}) & (-\frac{\alpha x_1}{h^2}+\frac{\beta}{2}-\frac{\gamma h}{6}+\frac{\alpha x_2}{h^2}) & ... &0\\ (-\frac{\alpha x_1}{h^2} -\frac{\beta}{2}+\frac{\gamma h}{6} +\frac{\alpha x_2}{h^2}) & (\frac{\alpha x_1}{h^2} +\frac{2\gamma h}{3}-\frac{\alpha x_3}{h^2}) & (-\frac{\alpha x_2}{h^2}+\frac{\beta}{2}+\frac{\gamma h }{6}+\frac{\alpha x_3}{h^2})&0\\ & \vdots & ...
& \\ 0& (-\frac{\alpha x_{i-1}}{h^2}-\frac{\beta}{2} +\frac{\gamma h}{6}+\frac{\alpha x_i}{h^2}) & (\frac{\alpha x_{i-1}}{h^2} +\frac{2\gamma h}{3} -\frac{\alpha x_{i+1}}{h^2}) & (-\frac{\alpha x_i}{h^2} +\frac{\beta}{2} +\frac{\gamma h}{6}+\frac{\alpha x_{i+1}}{h^2})\\ 0&0&(-\frac{\alpha x_{n-1}}{h^2} -\frac{\beta}{2}+\frac{\gamma h}{6} -\frac{\alpha x_n}{h^2}) & (\frac{\alpha x_{n-1}}{h^2} +\frac{\beta}{2}+\frac{\gamma h}{3}-\frac{\alpha x_{n}}{h^2}) \end{bmatrix} \begin{bmatrix} c_1\\c_2\\c_3\\ \vdots\\c_n \end{bmatrix} \end{equation*} \begin{equation}\label{30} = \begin{bmatrix} 1/h\int_{x_1}^{x_2}(x_2-x)f(x)\,dx\\ \vdots \\1/h\int_{x_{i-1}}^{x_i}(x-x_{i-1})f(x)\,dx + 1/h\int_{x_{i}}^{x_{i+1}}(x_{i+1}-x)f(x)\,dx\\ \vdots\\ 1/h\int_{x_{n-1}}^{x_n}(x-x_{n-1})f(x)\,dx - c\alpha \end{bmatrix} \end{equation} \normalsize For evaluating these integrals, quadrature methods are commonly used. Since the integrands are repeated, we propose using DNNI for a substantial speedup. Once the primitive is approximated in the given domain, all the integrals can be obtained almost instantaneously by just changing the limits. Let the anti-derivatives of $f(x)$ and $xf(x)$ obtained using DNNI be $N_1(x)$ and $N_2(x)$. A typical integral on the right-hand side of equation \ref{30} can be calculated as \begin{equation*} \int_{x_{i}}^{x_{i+1}}(x_{i+1}-x)f(x)\,dx = x_{i+1}\int_{x_{i}}^{x_{i+1}}f(x)\,dx - \int_{x_{i}}^{x_{i+1}}xf(x)\,dx \end{equation*} \begin{equation} = x_{i+1}(N_1(x_{i+1})-N_1(x_i)) - (N_2(x_{i+1})-N_2(x_i)) \end{equation} The above modification significantly reduces the computation time for a large number of nodes. \begin{prop}\label{th1} For differential equations with source terms, there is a finite value of the number of nodes 'n' after which the DNNI-based Galerkin method is computationally less expensive than the naive Galerkin method.
\end{prop} \begin{proof} Let the differential equation with a source term be: \begin{equation}\label{gde} F(\frac{\partial y}{\partial t},...\frac{\partial^2y}{\partial x^2},\frac{\partial y}{\partial x},.. y,t...x) = S(x) \end{equation} Using simplifications similar to equation \ref{eqn29}, it can be converted to a system of linear equations like equation \ref{30}. The constructed equation will be similar to: \begin{equation*} \begin{bmatrix} Matrix\ depending \\ on\ the\ LHS \\ of\ equation\ \ref{gde} \end{bmatrix} \begin{bmatrix} c_1\\c_2\\c_3\\ \vdots\\c_n \end{bmatrix} = \end{equation*} \begin{equation}\label{matrix eqn} \begin{bmatrix} 1/h\int_{x_1}^{x_2}f_1(x,x^2...x_1,x_2...)S(x)+ some\ terms\\ \vdots \\1/h\int_{x_{i-1}}^{x_i}f_2(x,x^2...x_1,x_2...)S(x) + 1/h\int_{x_{i}}^{x_{i+1}}f_3(x,x^2...x_1,x_2...)S(x)\\ \vdots\\ 1/h\int_{x_{n-1}}^{x_n}f_{2n-2}(x,x^2...x_1,x_2...)S(x) + some\ terms \end{bmatrix} \end{equation} Though even the LHS of equation \ref{matrix eqn} can be computed efficiently with a clever implementation of DNNI, we ignore it here because it often forms patterns and does not require repeated integrations. For the naive Galerkin method, numerical definite-integration techniques are commonly used. For 'n' nodes, the number of integrations performed is $2n-2$. Assume that the fastest numerical technique takes time '$t$' to compute a single integral. Then the minimum time required to compute the RHS of equation \ref{matrix eqn} is $t(2n-2)$. If the rest of the method takes time '$\tau(n)$', the total time required for the naive Galerkin method is: \begin{equation} t_1 = t(2n-2)+\tau(n) \end{equation} Using DNNI, the RHS can be computed using combinations of a finite number of anti-derivatives such as $\int xS(x)\,dx$, $\int x^2S(x)\,dx$, ..., $\int S(x)\,dx$.
Let the number of such anti-derivatives be '$m$', and let the average time taken for one DNNI approximation be '$T$'. Thus, the total time to compute all the anti-derivatives is approximately '$mT$'. Also, let the time for calculating a definite integral by applying limits to a DNNI anti-derivative be '$\epsilon$'. So, \begin{equation*} Total\ time= time\ taken\ for\ \left( DNNI\ +\ applying\ limits\ +\ rest\ of\ the\ process\right) . \end{equation*} \begin{equation} \implies t_2 =\ mT + (2n-2)m\epsilon+ \tau(n) \end{equation} Now, \begin{equation} \frac{t_1}{t_2}=\frac{t(2n-2)+\tau(n)}{mT + (2n-2)m\epsilon+\tau(n)} \end{equation} is the ratio of the time taken by the naive Galerkin method to that of the DNNI-based Galerkin method. For \begin{equation*} t_1 > t_2 \end{equation*} \begin{equation*} \implies t(2n-2)+\tau(n)>mT + (2n-2)m\epsilon+\tau(n) \end{equation*} \begin{equation}\label{n} \implies n > 1+\frac{mT}{2(t-m\epsilon)} \end{equation} The value of $m$ depends on the type of basis function used. Complex basis functions will lead to several different integrals, and the value of $m$ will increase. Since $\epsilon$ is much smaller than $t$, if $m$ is not too large, there is always a finite number of nodes '$n_{critical}$' beyond which the DNNI-based Galerkin method computes faster.
\end{proof} In the following test case, the observed values are: \begin{center} \begin{tabular}{c c} \hline\hline Parameters & Value \\ \hline \hline $T$ & 2.810464692115784 \\\hline $t$ & 0.00010534977912902833 \\\hline $\epsilon$ & $5.729198455810547\times10^{-7}$\\\hline $m$ & 2\\\hline \hline \end{tabular} \end{center} Using equation \ref{n}, the theoretical break-even value of 'n' is \begin{equation}\label{quad} n_{critical} = 1+\frac{2\times2.810464692}{2(0.00010534-2\times5.7291984\times10^{-7})} \approx 26971 \end{equation} \begin{figure}[h] \centering \includegraphics[width=12cm,height=8.5cm]{images/Quad_DNNI.eps} \caption{Comparison of the computation time for the RHS of equation \ref{30} using Gaussian Quadrature and the DNNI method. The value of the critical number of nodes is of the same order as predicted in equation \ref{quad}.} \label{fig:quad_dnni} \end{figure} \begin{table} \begin{center} \begin{tabular}{c c c} \hline\hline Parameters & Case 14 & Case 15 \\ \hline \hline $\alpha$ & 0 &0 \\\hline $\beta$ & 1&1 \\\hline $\gamma$ & 0&1\\\hline $u_0$ & 0&0\\\hline $f(x)$ & $\cos(2x)$&$\cos(2x)$\\ \hline \hline \end{tabular} \end{center} \caption{The above parameters are used to solve two differential equations using the quadrature-based and DNNI-based Galerkin methods. } \end{table} The speedup achieved is not very significant for these simple test cases, but we conjecture that for more complex cases with computationally intensive repeated integrations, the DNNI-based method will outperform traditional quadrature-based Galerkin methods. \begin{figure}[h!] \centering \includegraphics[width=13cm,height=9.5cm]{images/DE1.eps} \caption{Comparison of the quadrature-based and DNNI-based Galerkin methods for Case 14. There is only a slight speedup because solving the set of linear equations is the most time-consuming step. However, the theoretical break-even point is of the same magnitude as the one obtained by computation.} \label{fig: Galerkin} \end{figure} \begin{figure}[h!]
\centering \includegraphics[width=13cm,height=9.5cm]{images/DE2.eps} \caption{Comparison of the quadrature-based and DNNI-based Galerkin methods for Case 15. } \label{fig: Galerkin2} \end{figure} \newpage \section{Conclusion} In this paper, we propose an algorithm to represent the anti-derivative of a function using Deep Neural Networks. We have shown that DNNI effectively approximates very complex, non-elementary, and oscillatory integrals. We have also used DNNI to obtain parameterized closed-form integrals, which can later be utilized to study the effects of various parameters. The closed-form representations of the Fermi-Dirac and elliptic integrals were computed with significant accuracy.\\ Cumulative distribution functions were obtained using DNNI; once the anti-derivative is approximated, the need for repeated numerical integrations to build CDF tables is eliminated. Moreover, the integrand need not be given as a continuous function: DNNI can produce a closed-form anti-derivative even when the integrand is supplied as discrete values. Another advantage of DNNI is that it will become faster and more accurate with new optimization algorithms and developments in Neural Network architectures.\\ The computational speedup using DNNI is shown for the Galerkin method, where repeated integrations are performed. DNNI can theoretically outperform quadrature-based Galerkin methods, since it can almost instantly compute all the definite integrals after the anti-derivatives are obtained. Test cases were computed to verify this proposition. The main limitation of this approach is that all the integration terms have to be reduced to a few anti-derivatives that can be used repeatedly; this paper shows an approach to overcome this limitation for Galerkin methods with linear basis functions.\\ Further research on DNNI can include its application in the complex domain and to multi-variable integrals.
DNNI can also be applied to full-scale engineering problems involving repeated or non-elementary integrals, where it may yield substantial speedups. The application of DNNI to Galerkin methods in both the temporal and spatial domains, and with higher-order polynomial basis functions, is another direction for further research. DNNI can also be applied to compute closed-form expressions of the non-elementary integrals appearing in various areas of science. \bibliographystyle{unsrt}
\section{Introduction} In dynamics, an interesting and fruitful topic of research is \emph{measured group theory}. Given a measure preserving action of a finitely generated group $\Gamma$ on a standard Borel probability space $(X,\mu)$, measured group theory studies the interplay between the algebraic properties of the group $\Gamma$ and the dynamical properties (for instance the structure of orbits) of the $\Gamma$-action on $(X,\mu)$. One of the most celebrated results in this field is the orbit equivalence rigidity theorem by Zimmer \cite[Theorem 4.3]{zimmer:annals}. Roughly speaking, two finitely generated groups $\Gamma, \Lambda$ acting in an essentially free and measure preserving way on two standard Borel probability spaces $(X,\mu)$ and $(Y,\nu)$, respectively, are \emph{orbit equivalent} if there exists a Borel isomorphism $\varphi:X \rightarrow Y$ sending $\Gamma$-orbits to $\Lambda$-orbits. More precisely, we require that the Borel isomorphism $\varphi$ respects the involved measures, that is the direct image of $\mu$ is $\nu$, and that $\varphi(\Gamma.x) = \Lambda.\varphi(x)$ for almost every $x \in X$. When $\Gamma$ and $\Lambda$ are lattices in two higher rank center-free simple Lie groups $G,H$, respectively, Zimmer proved that if the actions $\Gamma \curvearrowright (X,\mu)$ and $\Lambda \curvearrowright (Y,\nu)$ are orbit equivalent, then $G$ and $H$ must be isomorphic. Such a rigidity phenomenon is in sharp contrast with what happens, for example, in the case of amenable groups: Ornstein and Weiss \cite{OW80} proved that any two ergodic measure preserving actions of two infinite countable amenable groups must be orbit equivalent. We denote by $\calR_\Gamma$ the equivalence relation such that two points of $(X,\mu)$ are related if and only if they are in the same $\Gamma$-orbit, and we adopt the analogous notation $\calR_\Lambda$ for the $\Lambda$-action on $(Y,\nu)$.
One easily sees that the definition of orbit equivalence can be naturally rewritten in terms of the associated orbital equivalence relations. This is an easy instance of the more general idea of translating the study of measure preserving actions of countable groups into the study of their orbital equivalence relations. The latter idea inspired the theory of \emph{measured equivalence relations}, that is the study of the structural properties of a \emph{countable} equivalence relation (\emph{i.e.} one with countable equivalence classes) defined over a probability space $(X,\mu)$. An important contribution to this topic was given in the late 1970s by Feldman and Moore \cite{moore1976,feldman:moore}. They introduced the cohomology $\upH^\bullet(\calR;T)$ of a measured equivalence relation $\calR$ with coefficients in an Abelian Polish group $T$. Although Polish groups are required to give a consistent definition of higher order cohomology, one can consider the $1$-cohomology $\upH^1(\calR;G)$ with coefficients in $G$, where $G$ is any topological group. In this context a \emph{cocycle} is a Borel measurable map $c:\calR \rightarrow G$ satisfying the relation $c(x,z)=c(y,z)c(x,y)$ for almost every pair $(x,y),(y,z),(x,z) \in \calR$. In the same spirit, two cocycles $c_1,c_2$ are \emph{cohomologous} if there exists a Borel measurable map $f:X \rightarrow G$ such that $f(y)c_1(x,y)=c_2(x,y)f(x)$ for almost every $(x,y) \in \calR$. When $\calR = \calR_\Gamma$ is an orbital equivalence relation, the understanding of its $1$-cohomology $\upH^1(\Gamma \curvearrowright X;G):=\upH^1(\calR_\Gamma;G)$ has attracted the interest of many mathematicians. The study of this exotic cohomology theory in full generality may prove quite hard. For this reason, it can be helpful to restrict the attention to specific families of groups, for both $\Gamma$ and $G$.
For instance, when $G$ is algebraic, it makes sense to refer to the subset $\upH^1_{ZD}(\Gamma \curvearrowright X;G)$ of \emph{Zariski dense} cohomology classes, whose study can be easier. When $\Gamma$ is an irreducible higher rank lattice and $G$ is an algebraic group over a local field, Zimmer's superrigidity theorem \cite{zimmer:annals} ensures that every Zariski dense cohomology class contains a (Zariski dense) representation as representative. Equivalently, we have a surjection from the space $\textup{Rep}_{ZD}(\Gamma;G)$ of Zariski dense representations modulo $G$-conjugation onto the Zariski dense orbital cohomology $\upH^1_{ZD}(\Gamma \curvearrowright X;G)$. In this short expository paper we will focus our attention on the particular case when $G$ is a \emph{Hermitian} Lie group. We say that $G$ is Hermitian if the associated symmetric space $\calX$ admits a $G$-invariant complex structure compatible with its Riemannian metric. Additionally, we call $G$ \emph{of tube type} if $\calX$ can be biholomorphically realized as $V+i \Omega$, where $V$ is a real vector space and $\Omega \subset V$ is a proper convex cone. Let $\Gamma$ be a finitely generated group, let $(X,\mu)$ be an ergodic standard Borel probability $\Gamma$-space and consider a simple Hermitian Lie group $G$ not of tube type. In this setting a measurable cocycle boils down to a measurable map $\sigma:\Gamma \times X \rightarrow G$ such that $\sigma(\gamma_1\gamma_2,x)=\sigma(\gamma_1,\gamma_2.x)\sigma(\gamma_2,x)$ for every $\gamma_1,\gamma_2 \in \Gamma$ and for almost every $x \in X$. Since $G$ is Hermitian, the symmetric space $\calX$ admits a closed differential $2$-form $\omega_{\calX}$, called the \emph{K\"{a}hler form}, which induces a class $\kappa^b_G$ generating the second bounded cohomology group $\upH^2_{cb}(G;\bbR)$.
Exploiting such a class we can define its pullback $\upH^2_b(\sigma)(\kappa^b_G)$ along any measurable cocycle $\sigma$, and the pullback lies in the bounded cohomology group $\upH^2_b(\Gamma;\upL^\infty(X;\bbR))$. The main theorem in this context is that the pullback class is a complete invariant of a Zariski dense cocycle (actually of its cohomology class). In this way, we obtain an injection of $\upH^1_{ZD}(\Gamma \curvearrowright X;G)$ into $\upH^2_b(\Gamma;\upL^\infty(X;\bbR))$ whose image avoids the trivial class. The latter result, obtained in collaboration with Sarti \cite{sarti:savini3}, is a generalization of a previous theorem by Burger, Iozzi and Wienhard \cite{BI04,BIW07} for Zariski dense representations. This generalization allows us to show that $\upH^1_{ZD}(\Gamma \curvearrowright X;G)$ is empty for some lattices satisfying a suitable cohomological condition. When $\Gamma<\textup{PU}(n,1)$, where $n \geq 2$, is a lattice and $G=\textup{PU}(p,q)$, for $1 \leq p \leq q$, more can be said. Using the pullback class $\upH^2_b(\sigma)(\kappa^b_G)$ we can introduce a numerical invariant, called the \emph{Toledo invariant}, for (the cohomology class of) a measurable cocycle $\sigma$. This invariant has bounded absolute value, so we are allowed to define \emph{maximal cocycles} as those attaining the maximum. We will see that maximal Zariski dense cocycles are \emph{superrigid}, that is they admit a representation as representative \cite[Theorem 2]{sarti:savini1}. Moreover, applying a previous result by Pozzetti \cite{Pozzetti}, we immediately see that each such representation actually comes from a representation of the ambient group $\textup{PU}(n,1)$. As a consequence, the set $\upH^1_{\max,ZD}(\Gamma \curvearrowright X;G)$ must be empty whenever $1<p<q$, generalizing a result given by Pozzetti for representations.
\subsection*{Plan of the paper} Section \ref{sec herm space} is devoted to the main definitions and results about Hermitian symmetric spaces. We will quickly review the notions of tube type domains, Shilov boundary, Bergman kernels and the Hermitian triple product. Then we move to Section \ref{sec orbit rel}, where we introduce the orbital cohomology. In Section \ref{sec bound cohom} we recall the bounded K\"{a}hler class and in Section \ref{sec pullback class} we recall its pullback along a measurable cocycle. We conclude with Sections \ref{sec rigidity} and \ref{sec maximal}, where we collect the main results available in this context. \subsection*{Acknowledgements} I would like to thank Andrea Seppi and the University of Grenoble for the invitation to the TSG seminars, and Andrea Seppi for proposing that I write this paper. \section{Hermitian symmetric spaces}\label{sec herm space} In this section we are going to introduce the main definitions and results about Hermitian symmetric spaces. For more details about this topic we refer the reader either to the papers by Burger, Iozzi and Wienhard \cite{BI04,BIW07} or to the book chapter by Koranyi \cite{Kor00}. Before starting, recall that a group $\bfG$ is called \emph{algebraic} over $\bbR$ if it can be realized as the zero set of a (finite) family of $\bbR$-polynomials and both the multiplication and the inversion in $\bfG$ are $\bbR$-algebraic maps. Given a real algebraic group, we can restrict ourselves to the \emph{real points} of $\bfG$, namely the subset $\bfG(\bbR)$ of real solutions of the polynomial equations which define $\bfG$. Finally, we will denote by $\bfG(\bbR)^\circ$ the connected component of the neutral element of $\bfG(\bbR)$. \begin{defn}\label{def hermitian group} A symmetric space $\calX$ associated to a connected semisimple Lie group $G$ is \emph{Hermitian} if it admits a $G$-invariant complex structure $\calJ_{\calX}$ compatible with its Riemannian tensor.
If $\bfG$ is a connected adjoint semisimple $\bbR$-algebraic group, we say that the group $G=\bfG(\bbR)^\circ$ is \emph{Hermitian} (or of \emph{Hermitian type}) if the associated symmetric space is Hermitian. \end{defn} The first example of a Hermitian Lie group to keep in mind is $G:=\textup{SU}(p,q)$, namely the subgroup of $\mathrm{SL}(p+q,\bbC)$ of matrices preserving the Hermitian form $h_{p,q}$ of signature $(p,q)$. If we set $d=\min\{p,q\}$, the symmetric space $\calX_{p,q}$ associated to $\textup{SU}(p,q)$ parametrizes the $d$-dimensional linear subspaces of $\bbC^{p+q}$ on which the restriction of $h_{p,q}$ is positive definite. A Hermitian symmetric space $\calX$ is called \emph{of tube type} if it can be biholomorphically realized as $V+i\Omega$, where $V$ is a real vector space and $\Omega \subset V$ is a proper convex cone. When no such realization exists, we say that $\calX$ is not of tube type. Going back to our example $\calX_{p,q}$, one can see that the latter is of tube type if and only if $p=q$. In this case $\calX_{p,p}$ is biholomorphic to $\textup{Herm}(p,\bbC) + i \textup{Herm}^+(p,\bbC)$, where $\textup{Herm}(p,\bbC)$ is the space of Hermitian matrices and $\textup{Herm}^+(p,\bbC)$ is the cone of positive definite ones. It is worth noticing that for $p=q=1$, the symmetric space $\calX_{1,1}$ boils down to the upper half-plane realization of the hyperbolic plane $\bbH^2_{\bbR}$. For any Hermitian symmetric space $\calX$ there always exists a bounded domain $\calD_{\calX}$ in some finite dimensional complex space $\bbC^n$ such that $\calX$ and $\calD_{\calX}$ are biholomorphic. The domain $\calD_{\calX}$ is usually called the \emph{bounded realization} (or \emph{Harish-Chandra realization}) of $\calX$ (see \cite[Theorem III.2.6]{Kor00} for more details). The group $G$ of holomorphic isometries of $\calX$ acts via biholomorphisms on its bounded realization $\calD_{\calX}$.
Furthermore, this action can be continuously extended to the topological boundary $\partial \calD_{\calX}$. In general the latter is not a homogeneous $G$-space, but it admits a unique closed $G$-orbit, called the Shilov boundary. Here we will introduce the Shilov boundary starting from its analytic interpretation. \begin{defn}\label{def shilov boundary} Let $\calD \subset \bbC^n$ be a bounded domain. The \emph{Shilov boundary} of $\calD$ is the unique minimal closed subset $\calS_{\calD}$ of $\partial \calD$ such that, for any function $f$ continuous on the closure $\overline{\calD}$ and holomorphic in the interior $\calD$, we have that $$ |f(z)| \leq \max_{y \in \calS_{\calD}}|f(y)|, $$ for every $z \in \calD$. \end{defn} The previous definition can be restated by saying that $\calS_{\calD}$ is the unique minimal closed subset to add to $\calD$ so that the maximum principle applies to holomorphic functions which are continuous on the closure $\overline{\calD}$. In the particular case when $\calD=\calD_{\calX}$ is the bounded realization of a Hermitian symmetric space $\calX$, the Shilov boundary $\calS_{\calX}$ is a homogeneous $G$-space, being the unique closed $G$-orbit \cite[Section 2.3]{BIW07}. Returning to our favourite example, when $G=\textup{SU}(p,q)$, the Shilov boundary $\calS_{p,q}$ parametrizes all the $d$-dimensional linear subspaces of $\bbC^{p+q}$ which are totally isotropic with respect to $h_{p,q}$. Notice that the topological boundary $\partial \calD_{p,q}$ parametrizes the subspaces on which $h_{p,q}$ is semi-definite, thus $\calS_{p,q}$ is a proper subset of the topological boundary. The $G$-homogeneity of $\calS_{p,q}$ is due to the fact that it can be realized as the quotient $G/Q$, where $Q$ is the stabilizer of a fixed totally isotropic subspace of maximal dimension $d$ (say the space $\langle e_1,\ldots,e_d \rangle$ generated by the first $d$ vectors of the canonical basis).
This identification is not accidental and can be generalized. More precisely, let $\bfG$ be a connected adjoint semisimple $\bbR$-algebraic group obtained by complexifying a Lie group of Hermitian type $G=\bfG(\bbR)^\circ$. Burger, Iozzi and Wienhard \cite[Section 2.3.1]{BIW07} proved that there exists a proper \emph{maximal parabolic subgroup} $\bfQ<\bfG$ such that $\calS_{\calX}$ corresponds to the real points of the algebraic variety $\bfG/\bfQ$. More precisely, $\calS_{\calX}$ is isomorphic to the quotient $(\bfG/\bfQ)(\bbR)=G/Q$, where $Q=G \cap \bfQ$. In the product $\calS_{\calX} \times \calS_{\calX}$ we can also find a unique open $G$-orbit, denoted by $\calS_{\calX}^{(2)}$, whose elements are pairs of \emph{transverse} points. In the case $G=\textup{SU}(p,q)$, the set $\calS_{p,q}^{(2)}$ of transverse pairs is precisely the set of pairs of linear subspaces $(V,W)$ which are \emph{linearly transverse}, that is $V \cap W=\{ 0 \}$. Let $g_{\calX}$ be the Riemannian tensor of the bounded domain $\calD_{\calX}$ and let $\calJ_{\calX}$ be the $G$-invariant complex structure. If we define $$ (\omega_{\calX})_a(X,Y):=(g_{\calX})_a(X,(\calJ_{\calX})_a(Y)) , $$ for every $X,Y \in T_a\calD_{\calX}$, we obtain a differential $2$-form $\omega_{\calX}$ called the \emph{K\"{a}hler form}. The latter is clearly $G$-invariant and hence closed by Cartan's lemma \cite[VII.4]{Hel01}. As a consequence, we can consider, for any triple of points $x,y,z \in \calD_{\calX}$, the integral $$ \beta_{Berg}(x,y,z):=\int_{\Delta(x,y,z)} \omega_{\calX}, $$ where $\Delta(x,y,z)$ is any smooth triangle with geodesic sides and vertices $x,y,z$. The closedness of $\omega_{\calX}$ guarantees that $\beta_{Berg}$ does not depend on the choice of the particular filling triangle $\Delta(x,y,z)$. One of the most important properties of $\beta_{Berg}$ is that it encodes information about the complex and analytic structure of the domain $\calD_{\calX}$.
In fact the following equation holds \begin{equation}\label{eq cocycle kernel} \beta_{Berg}(x,y,z)=-(\arg k_{\calX}(x,y)+\arg k_{\calX} (y,z) + \arg k_{\calX} (z,x))\ , \end{equation} where $\arg$ is the branch of the argument with values in $(-\pi,\pi]$ and $k_{\calX}(\cdot,\cdot)$ is the \emph{Bergman kernel}. The latter is defined as follows: consider the space $\calH^2(\calD_{\calX})$ of square integrable holomorphic functions, namely the space of complex-valued holomorphic functions on $\calD_{\calX}$ which are square integrable with respect to the Lebesgue measure. We have that $\calH^2(\calD_{\calX})$ is a Hilbert space on which the evaluation at a point $w \in \calD_{\calX}$ is a bounded linear functional (since $\calD_{\calX}$ is bounded). As a consequence, we can write $f(w)=(f|K_w)$, for some $K_w \in \calH^2(\calD_{\calX})$, where $(\cdot|\cdot)$ is the Hilbert product. The function $k_{\calX}$ is then defined simply by $k_{\calX}(z,w)=(K_z|K_w)$. We denote by $\calS_{\calX}^{(3)}$ the set of triples of points that are pairwise transverse. The existence of a continuous extension of $k_{\calX}$ to pairs of transverse points in $\calS_{\calX}$ allows us to extend $\beta_{Berg}$ to $\calS_{\calX}^{(3)}$. One can see that such an extension, still denoted by $\beta_{Berg}$, is a continuous $G$-invariant alternating cocycle in the sense of Alexander-Spanier. Moreover, we have $$ \sup_{\calS_{\calX}^{(3)}}|\beta_{Berg}(\eta_0,\eta_1,\eta_2)|=\pi\mathrm{rk}\calX, $$ where $\mathrm{rk}\calX$ is the real rank of $\calX$ (that is the maximal dimension of a flat in $\calX$). The cocycle $\beta_{Berg}|_{\calS_{\calX}^{(3)}}$ can be further extended to the whole product $(\calS_{\calX})^3$, and such an extension, denoted by $\beta_{\calX}$, is measurable and satisfies the same properties as $\beta_{Berg}$.
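As a concrete illustration (a toy computation, not part of the original exposition), consider the unit disk $\bbD$, the bounded realization of the rank one symmetric space $\calX_{1,1}$. Its Bergman kernel is the classical $k_{\bbD}(z,w)=\frac{1}{\pi(1-z\overline{w})^2}$, so Equation \eqref{eq cocycle kernel} can be checked numerically: the resulting $\beta_{Berg}$ satisfies the Alexander-Spanier cocycle identity, and the bound $\pi\,\mathrm{rk}\,\calX=\pi$ is attained on a triple of boundary points.

```python
import cmath
import math

def k(z, w):
    # Bergman kernel of the unit disk (classical formula, used here only
    # to illustrate the relation between beta_Berg and arg k)
    return 1.0 / (math.pi * (1.0 - z * complex(w).conjugate()) ** 2)

def beta(x, y, z):
    # Bergman cocycle: beta(x,y,z) = -(arg k(x,y) + arg k(y,z) + arg k(z,x)),
    # with cmath.phase giving the branch of the argument in (-pi, pi]
    return -(cmath.phase(k(x, y)) + cmath.phase(k(y, z)) + cmath.phase(k(z, x)))

# Alexander-Spanier cocycle identity (delta beta = 0) at four interior points
x, y, z, w = 0.1 + 0.2j, -0.3 + 0.1j, 0.4 - 0.25j, -0.1 - 0.35j
d = beta(y, z, w) - beta(x, z, w) + beta(x, y, w) - beta(x, y, z)
assert abs(d) < 1e-12

# on the Shilov boundary (here the unit circle) the bound pi * rk = pi is attained
assert abs(beta(1, 1j, -1) - math.pi) < 1e-12
```

Here the triple $(1,i,-1)$ is positively oriented on the circle; reversing the orientation changes the sign of $\beta_{Berg}$, consistently with the alternating property.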
We conclude this introduction about Hermitian symmetric spaces by talking about the \emph{Hermitian triple product}. Exploiting Bergman kernels, we can define $$ \langle \cdot, \cdot,\cdot \rangle: \calS_{\calX}^{(3)} \rightarrow \bbC^\ast, $$ $$ \langle \eta_0,\eta_1,\eta_2 \rangle:=k_{\calX}(\eta_0,\eta_1)k_{\calX}(\eta_1,\eta_2)k_{\calX}(\eta_2,\eta_0). $$ By \cite[Proposition 2.12]{BIW07} the previous function is continuous and by Equation \eqref{eq cocycle kernel} we have that \begin{equation}\label{eq Bergman triple product} \langle \eta_0,\eta_1,\eta_2 \rangle=e^{i \beta_{\calX}(\eta_0,\eta_1,\eta_2)} \mod \bbR^\ast , \end{equation} where $\mod \bbR^\ast$ means that the two sides of the equation differ by a non-zero real number. By composing $\langle \cdot,\cdot,\cdot \rangle$ with the projection onto $\bbR^\ast \backslash \bbC^\ast$, where $\bbR^\ast$ acts on $\bbC^\ast$ via dilations, we obtain the \emph{Hermitian triple product} $$ \langle \langle \cdot , \cdot , \cdot \rangle \rangle: \calS^{(3)}_{\calX} \rightarrow \bbR^\ast \backslash \bbC^\ast. $$ Burger, Iozzi and Wienhard exploited the identification between $\calS_{\calX}$ and the real points $(\bfG /\bfQ)(\bbR)$ to extend the Hermitian triple product to the whole $\bfG/\bfQ$. We denote by $A^\ast$ the group $\bbC^\ast \times \bbC^\ast$ endowed with the involution $(\lambda,\mu) \mapsto (\overline{\mu},\overline{\lambda})$ and we let $\Delta^\ast$ be the image of $\bbC^\ast$ through the diagonal embedding.
Burger, Iozzi and Wienhard \cite[Corollary 2.17]{BIW07} showed that there exists a rational map $$ \langle \langle \cdot , \cdot , \cdot \rangle \rangle_{\bbC}: (\bfG/\bfQ)^3 \rightarrow \Delta^\ast \backslash A^\ast $$ which fits into the commutative diagram below $$ \xymatrix{ \calS^{(3)}_{\calX} \ar[rr]^{\langle \langle \cdot ,\cdot , \cdot \rangle \rangle} \ar[d]^{i^3} && \bbR^\ast \backslash \bbC^\ast \ar[d]^\Delta \\ (\bfG/\bfQ)^3 \ar[rr]^{\langle \langle \cdot , \cdot , \cdot \rangle \rangle_{\bbC}} && \Delta^\ast \backslash A^\ast , } $$ where $i:\calS_{\calX} \rightarrow \bfG/\bfQ$ identifies $\calS_{\calX}$ with the real points $(\bfG/\bfQ)(\bbR)$ and $\Delta$ is the diagonal embedding. The function $\langle \langle \cdot , \cdot , \cdot \rangle \rangle_{\bbC}$ is called the \emph{complex Hermitian triple product}. It encodes important information about the structure of the Hermitian symmetric space $\calX$. In fact, consider the (Zariski open) set $\calO_{\eta_0,\eta_1} \subset \bfG/\bfQ$ on which the map $$ P_{\eta_0,\eta_1}:\calO_{\eta_0,\eta_1} \rightarrow \bbR \ , P_{\eta_0,\eta_1}(\eta):=\langle \langle \eta_0,\eta_1,\eta \rangle \rangle_{\bbC} $$ is well-defined. By \cite[Lemma 5.1]{BIW07}, $\calX$ is not of tube type if and only if the map $P_{\eta_0,\eta_1}^m$ is not constant for any $m \in \bbN$. \section{Cohomology of orbital equivalence relation}\label{sec orbit rel} In this section we will introduce the main topic of the paper, namely the orbital cohomology. We mainly refer the reader to the papers by Feldman and Moore \cite{moore1976,feldman:moore}. A standard Borel probability space $(X,\mu)$ is a probability space which is Borel isomorphic to a Polish space (that is, a separable completely metrizable space). Consider an equivalence relation $\calR \subset X \times X$ defined on a standard Borel probability space $(X,\mu)$.
We are going to suppose that $\calR$ is \emph{countable}, that is the equivalence classes have at most countable cardinality. Feldman and Moore introduced an exotic cohomology theory associated to a countable equivalence relation with coefficients in a Polish Abelian group. Since for our purposes it will be sufficient to look at the cohomology in degree one, we will give an \emph{ad hoc} definition. An important feature of the $1$-cohomology of a countable equivalence relation is that its definition also works when the coefficients form a general topological group $G$, not only a Polish Abelian one. \begin{defn}\label{def measurable cocycle} Let $\calR$ be a countable equivalence relation on a standard Borel probability space $(X,\mu)$. Consider a topological group $G$. A \emph{measurable cocycle} for $\calR$ with coefficients in $G$ is a Borel measurable map $c:\calR \rightarrow G$ such that \begin{equation}\label{eq measurable cocycle} c(x,z)=c(y,z)c(x,y) , \end{equation} for almost every pair $(x,y),(y,z),(x,z) \in \calR$. Two measurable cocycles $c_1,c_2$ are \emph{cohomologous} if there exists a measurable function $f:X \rightarrow G$ such that \begin{equation}\label{eq cohomology} f(y)c_2(x,y)=c_1(x,y)f(x) , \end{equation} for almost every $(x,y) \in \calR$. We denote by $\upH^1(\calR;G)$ the $1$-cohomology of $\calR$ with coefficients in $G$, namely the quotient of the set of measurable cocycles modulo cohomology. \end{defn} In this paper we will be interested in the particular case when $\calR$ is an \emph{orbital equivalence relation}. More precisely, let $\Gamma$ be a finitely generated countable group. We consider a measure preserving action of $\Gamma$ on a standard Borel probability space $(X,\mu)$. The orbital equivalence relation $\calR_{\Gamma}$ is defined as follows: two points $x,y \in X$ are related if and only if there exists $\gamma \in \Gamma$ such that $y=\gamma.x$.
If we define $$ \Theta: \{ c:\calR_{\Gamma} \rightarrow G \ | \ \textup{$c$ is measurable} \} \rightarrow \{ \sigma:\Gamma \times X \rightarrow G \ | \ \textup{$\sigma$ is measurable} \}, $$ $$ c \mapsto \sigma_c(\gamma,x):=c(x,\gamma.x), $$ then the image of the set of measurable cocycles corresponds to the set of measurable functions $\sigma:\Gamma \times X \rightarrow G$ such that \begin{equation}\label{eq new measurable cocycle} \sigma(\gamma_1\gamma_2,x)=\sigma(\gamma_1,\gamma_2.x)\sigma(\gamma_2,x), \end{equation} for every $\gamma_1,\gamma_2 \in \Gamma$ and almost every $x \in X$. We will call $\sigma$ a \emph{measurable cocycle} for the orbital equivalence relation. As we did for cocycles, we can rewrite the definition of cohomology using the function $\Theta$. In fact, given two measurable cocycles $\sigma_1,\sigma_2:\Gamma \times X \rightarrow G$, we will say that they are \emph{cohomologous} if there exists a measurable function $f:X \rightarrow G$ such that \begin{equation}\label{eq new cohomology} f(\gamma.x)\sigma_2(\gamma,x)=\sigma_1(\gamma,x)f(x) \ , \end{equation} for every $\gamma \in \Gamma$ and almost every $x \in X$. We denote the $1$-cohomology of the orbital equivalence relation $\calR_{\Gamma}$ by $\upH^1(\Gamma \curvearrowright X;G)$ and we call it the \emph{orbital cohomology}. Here we will be interested in a more general equivalence relation among cocycles: we will allow different groups as targets. \begin{defn} Let $\sigma_1:\Gamma \times X \rightarrow G_1$ and $\sigma_2:\Gamma \times X \rightarrow G_2$ be two measurable cocycles. We say that they are \emph{equivalent} if there exists an isomorphism $s:G_1 \rightarrow G_2$ such that $s \circ \sigma_1$ is cohomologous to $\sigma_2$. \end{defn} It is worth noticing that a morphism $\Gamma \rightarrow G$ is precisely a measurable cocycle not depending on the space variable in $(X,\mu)$. In fact, cocycles can be viewed as generalized morphisms (they are actually morphisms of groupoids).
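As a sanity check (a toy computation, not part of the original exposition), the identity \eqref{eq new measurable cocycle} and the cohomology relation \eqref{eq new cohomology} can be verified numerically. In the sketch below, $\Gamma=\bbZ$ acts on the finite set $X=\bbZ/5\bbZ$ by rotation (with the uniform measure), $\rho(n)=A^n$ is a homomorphism into $\mathrm{SL}(2,\bbQ)$, and $\sigma(n,x)=f(n.x)^{-1}\rho(n)f(x)$ is the cocycle cohomologous to $\rho$ via the transfer function $f$; all concrete choices are illustrative.

```python
from fractions import Fraction

F = Fraction
I2 = ((F(1), F(0)), (F(0), F(1)))  # 2x2 identity matrix

def mul(M, N):
    # product of 2x2 matrices stored as nested tuples
    return tuple(
        tuple(sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def inv(M):
    # inverse of an invertible 2x2 matrix over the rationals
    (a, b), (c, d) = M
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

def power(M, n):
    # integer powers M**n, allowing negative exponents
    if n < 0:
        return power(inv(M), -n)
    R = I2
    for _ in range(n):
        R = mul(R, M)
    return R

A = ((F(1), F(1)), (F(1), F(2)))                          # rho(1): a matrix in SL(2, Q)
f = {x: ((F(1), F(x)), (F(0), F(1))) for x in range(5)}   # transfer function X -> G

def act(n, x):
    # Z acting on X = Z/5Z by rotation
    return (x + n) % 5

def sigma(n, x):
    # cocycle cohomologous to rho via f: sigma(n, x) = f(n.x)^{-1} rho(n) f(x)
    return mul(mul(inv(f[act(n, x)]), power(A, n)), f[x])

# cocycle identity: sigma(m + n, x) = sigma(m, n.x) sigma(n, x)
for m in range(-2, 3):
    for n in range(-2, 3):
        for x in range(5):
            assert sigma(m + n, x) == mul(sigma(m, act(n, x)), sigma(n, x))
```

The check runs over exact rational arithmetic, so the cocycle identity holds on the nose rather than up to rounding.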
In this way we obtain a map from the $G$-character variety $\mathrm{Rep}(\Gamma;G)$, that is homomorphisms modulo $G$-conjugation, to the $1$-cohomology $\upH^1(\Gamma \curvearrowright X;G)$. The study of the cohomology $\upH^1(\Gamma \curvearrowright X;G)$ may prove quite hard to approach. For this reason it can be easier to restrict the attention to particular classes of groups, both for $\Gamma$ and $G$. Suppose for instance that $G$ corresponds to (the connected component of) the real points of a real algebraic group $\bfG$. Then we are allowed to give the following: \begin{defn}\label{def algebraic hull} Let $\Gamma$ be a finitely generated group and let $(X,\mu)$ be an ergodic standard Borel probability $\Gamma$-space. The \emph{algebraic hull} of a measurable cocycle $\sigma:\Gamma \times X \rightarrow G$ is the $G$-conjugacy class of the smallest algebraic subgroup $\bfL<\bfG$ such that $L=\bfL(\bbR)^\circ$ contains the image of a cocycle cohomologous to $\sigma$. We say that $\sigma$ is \emph{Zariski dense} if $\bfL=\bfG$. \end{defn} The previous definition is well posed because the group $\bfG$ is algebraic and hence Noetherian \cite[Proposition 9.1]{zimmer:libro}. By the way we defined it, the algebraic hull is canonically attached to the cohomology class of a cocycle. Thus it makes sense to refer to the subset of Zariski dense cohomology classes, denoted by $\upH^1_{ZD}(\Gamma \curvearrowright X;G)$. \begin{oss} Let $\Gamma$ be a finitely generated group and let $(X,\mu)$ and $(Y,\nu)$ be two standard Borel probability $\Gamma$-spaces. Consider a topological group $G$. Given a $\Gamma$-equivariant map $\pi:X \rightarrow Y$ and a measurable cocycle $\sigma:\Gamma \times Y \rightarrow G$, one can consider the \emph{pullback cocycle}, namely $$ \pi^\ast \sigma:\Gamma \times X \rightarrow G \ , \ \ \pi^\ast\sigma(\gamma,x):=\sigma(\gamma,\pi(x)).
$$ The pullback construction naturally induces a map at the level of cohomology classes $$ \pi^\ast:\upH^1(\Gamma \curvearrowright Y;G) \rightarrow \upH^1(\Gamma \curvearrowright X;G). $$ It is interesting to try to understand when this map is injective. It is difficult to say something relevant in full generality. However, if one assumes that $G$ is (the real points of) an algebraic group, then injectivity holds on the subset of classes whose algebraic hull is semisimple (see \cite{Fur10} for more details). \end{oss} \section{The pullback of the bounded K\"{a}hler class} \subsection{Boundary theory for bounded cohomology}\label{sec bound cohom} The main goal of this section is to introduce the notion of bounded K\"{a}hler class. For more details about the background related to this topic we refer the reader to \cite{monod:libro,burger2:articolo}. We start by recalling the definition of continuous bounded cohomology. We will not give the usual definition, but will base our approach on boundary theory. Let $G$ be a locally compact group. A \emph{Lebesgue $G$-space} is a standard Borel probability space $(X,\mu)$ whose measure $\mu$ is only quasi-invariant under the $G$-action. A \emph{Banach $G$-module} $E$ is a Banach space endowed with an isometric $G$-action $\pi:G \rightarrow \mathrm{Isom}(E)$. We will always assume that $E$ is the dual of some Banach space. In this way it makes sense to refer to the weak-$^\ast$ Borel structure on $E$. \begin{es}\label{ex G module} Consider a locally compact group $G$ and a Lebesgue $G$-space $(X,\mu)$. The main examples of Banach $G$-modules we will consider in this paper are: \begin{enumerate} \item The field $\bbR$ endowed with its Euclidean structure and the trivial $G$-action.
\item The Banach space $\upL^\infty(X;\bbR)$ of essentially bounded measurable functions, with the weak-$^\ast$ structure coming from being the dual of $\upL^1(X;\bbR)$ and the isometric $G$-action given by $$ (g.f)(x):=f(g^{-1}.x) , $$ for every $f \in \upL^\infty(X;\bbR)$. With an abuse of notation, we refer to an equivalence class in $\upL^\infty$ by fixing a representative. \end{enumerate} \end{es} Given a Lebesgue $G$-space $(X,\mu)$, we define the module of \emph{bounded weak-$^\ast$ measurable functions} on $X^{\bullet+1}$ as \begin{align*} \calB^\infty_{\mathrm{w}^\ast}(X^{\bullet+1};E):=\{ \ f:X^{\bullet+1} \rightarrow E \ | \ &\textup{$f$ is weak-$^\ast$ measurable and} \\ \|&f\|_\infty:=\sup_{x_0,\ldots,x_\bullet}\| f(x_0,\ldots,x_\bullet)\|_E < \infty\} \ . \end{align*} By identifying two bounded measurable functions $f,f' \in \calB^\infty_{\mathrm{w}^\ast}(X^{\bullet+1};E)$ when they coincide almost everywhere, we define the space of \emph{essentially bounded weak-$^\ast$ measurable functions} on $X^{\bullet+1}$, namely $$ \upL^\infty_{\mathrm{w}^\ast}(X^{\bullet+1};E):=\calB^\infty_{\mathrm{w}^\ast}(X^{\bullet+1};E)/\sim , $$ where $f \sim f'$ means that they are identified. With the same abuse of notation as in Example \ref{ex G module}, we are going to refer to classes in $\upL^\infty_{\mathrm{w}^\ast}$ by fixing a representative. We can endow $\calB^\infty_{\mathrm{w}^\ast}(X^{\bullet+1};E)$ with the structure of a Banach $G$-module via the isometric action $$ (g.f)(x_0,\ldots,x_\bullet):=\pi(g)f(g^{-1}.x_0,\ldots,g^{-1}.x_\bullet) , $$ for every $f \in \calB^\infty_{\mathrm{w}^\ast}(X^{\bullet+1};E), g \in G$ and $x_0,\ldots,x_\bullet \in X$. Since the relation $\sim$ is preserved by this isometric action, the Banach $G$-module structure on $\calB^\infty_{\mathrm{w}^\ast}(X^{\bullet+1};E)$ naturally descends to a Banach $G$-module structure on $\upL^\infty_{\mathrm{w}^\ast}(X^{\bullet+1};E)$.
A function $f \in \calB^\infty_{\mathrm{w}^\ast}(X^{\bullet+1};E)$ (or a class in $\upL^\infty_{\mathrm{w}^\ast}(X^{\bullet+1};E)$) is $G$-\emph{invariant} if $g.f=f$ for every $g \in G$. Similarly, we say that it is \emph{alternating} if $$ \varepsilon(\tau)f(x_0,\ldots,x_\bullet)=f(x_{\tau(0)},\ldots,x_{\tau(\bullet)}) , $$ for every permutation $\tau \in \mathfrak{S}_{\bullet+1}$, where $\varepsilon(\tau)$ is the sign of $\tau$. We denote by $\calB^\infty_{\mathrm{w}^\ast}(X^{\bullet+1};E)^G$ (respectively $\upL^\infty_{\mathrm{w}^\ast}(X^{\bullet+1};E)^G$) the submodule of $G$-invariant vectors and we use the notation $\calB^\infty_{\mathrm{w}^\ast,\mathrm{alt}}(X^{\bullet+1};E)$ (respectively $\upL^\infty_{\mathrm{w}^\ast,\mathrm{alt}}(X^{\bullet+1};E)$) for the subspace of alternating functions. Together with the \emph{standard homogeneous coboundary operator} $$ \delta^\bullet:\calB^\infty_{\mathrm{w}^\ast}(X^{\bullet+1};E) \rightarrow \calB^\infty_{\mathrm{w}^\ast}(X^{\bullet+2};E), $$ $$ (\delta^\bullet f)(x_0,\ldots,x_{\bullet+1}):=\sum_{i=0}^{\bullet+1} (-1)^i f(x_0,\ldots,x_{i-1},x_{i+1},\ldots,x_{\bullet+1}), $$ we obtain a cochain complex $(\calB^\infty_{\mathrm{w}^\ast}(X^{\bullet+1};E),\delta^\bullet)$. In a similar way, each coboundary operator descends to the quotient, hence we also obtain the cochain complex of essentially bounded functions $(\upL^\infty_{\mathrm{w}^\ast}(X^{\bullet+1};E),\delta^\bullet)$. We will exploit this complex to define the continuous bounded cohomology of $G$. We first need to introduce the notion of boundary. \begin{defn}\label{def boundary} Let $G$ be a locally compact group and let $(B,\nu)$ be a Lebesgue $G$-space.
We say that $(B,\nu)$ is \emph{amenable} if it admits a $G$-equivariant \emph{mean}, that is a norm-one linear operator $$ m:\upL^\infty(G \times B;\bbR) \rightarrow \upL^\infty(B;\bbR) , $$ such that $m(\chi_{G \times B})=\chi_B$, $m(f)\geq0$ whenever $f$ is positive and $m(f \cdot \chi_{G \times A})=m(f) \cdot \chi_A$ for any essentially bounded function $f$ and measurable set $A \subset B$. An amenable $G$-space $(B,\nu)$ is a $G$-\emph{boundary} (in the sense of Burger and Monod \cite{burger2:articolo}) if any Borel measurable $G$-equivariant function $B \times B \rightarrow \calH$ is essentially constant, where $\calH$ varies in the set of all Hilbert $G$-modules. \end{defn} \begin{es}\label{es boundary} We give three different examples of $G$-boundaries that we will use later. \begin{enumerate} \item Let $\bbF_S$ be the free group with symmetric generating set $S$. We want to exhibit an $\bbF_S$-boundary. In this case it is sufficient to consider $B=\partial \calT_S$, the boundary of the Cayley graph of $\bbF_S$, namely the set of reduced words on $S$ of infinite length. We endow $B$ with the quasi-invariant measure $$ \mu_S(C(x))=\frac{1}{2r(2r-1)^{n-1}}, $$ where $x$ is a reduced word of length $n$, $2r=|S|$ and $C(x)$ is the cone of infinite reduced words starting with $x$. \item Consider a finitely generated group $\Gamma$ with symmetric generating set $S$. If $\rho:\bbF_S \rightarrow \Gamma$ is a representation whose kernel $N=\ker \rho$ is exactly the normal subgroup generated by the relations in $\Gamma$, we can consider the set $\upL^\infty(\partial \calT_S,\mu_S)^N$ of $N$-invariant essentially bounded functions. By the Mackey realization theorem \cite{Mackey} there exists a standard measure space $(B,\nu)$ and a measurable map $\pi:\partial \calT_S \rightarrow B$ such that $\pi_\ast(\mu_S)=\nu$ and the pullback of $\upL^\infty(B,\nu)$ via $\pi$ is exactly $\upL^\infty(\partial \calT_S,\mu_S)^N$.
By \cite[Theorem 2.7]{BF14} we have that $(B,\nu)$ is a $\Gamma$-boundary. \item When $\Gamma$ is a lattice in a semisimple Lie group $G$, its $\Gamma$-boundary can be easily realized as the quotient $G/P$, where $P$ is any minimal parabolic subgroup \cite[Theorem 2.3]{BF14}. \end{enumerate} \end{es} Using the notion of boundary we are finally ready to give the following: \begin{defn} \label{def bounded cohomology} Let $G$ be a locally compact group and let $(B,\nu)$ be a $G$-boundary. The \emph{continuous bounded cohomology} of $G$ with coefficients in the Banach $G$-module $E$ is the cohomology of the complex $$ \upH^\bullet_{cb}(G;E) := \upH^\bullet((\upL^\infty_{\mathrm{w}^\ast}(B^{\bullet+1};E)^G,\delta^\bullet)) . $$ \end{defn} \begin{oss}\label{oss alt subcomplex} The same definition remains valid if we restrict ourselves to the subcomplex of essentially bounded alternating functions, namely $$ \upH^\bullet_{cb}(G;E) \cong \upH^\bullet((\upL^\infty_{\mathrm{w}^\ast,\mathrm{alt}}(B^{\bullet+1};E)^G,\delta^\bullet)) . $$ \end{oss} We point out that our definition is not the usual one, which relies on another complex defined directly on the group. In fact, one can consider the complex $(\upC_{cb}(G^{\bullet+1};E),\delta^\bullet)$ of $E$-\emph{valued continuous bounded functions} on tuples of $G$, endowed with the same action described for the complex of essentially measurable functions. It is still true that the subcomplex of $G$-invariant vectors computes the continuous bounded cohomology of $G$ \cite[Section 6.1]{monod:libro}. Using this complex, it is also clear that any continuous representation $G \rightarrow H$ functorially induces a map between the bounded cohomologies of $G$ and $H$. This is less clear for our definition based on boundary theory, but our approach has the advantage of making computations more explicit. We will clarify this in the next section.
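As a consistency check for the measure in Example \ref{es boundary}(1), note that the cone over a reduced word $x$ of length $n \geq 1$ decomposes as the disjoint union of the cones over its $2r-1$ admissible one-letter extensions (any generator except the inverse of the last letter of $x$), and $\mu_S$ is additive on this decomposition: $$ (2r-1) \cdot \frac{1}{2r(2r-1)^{n}} = \frac{1}{2r(2r-1)^{n-1}} = \mu_S(C(x)) . $$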
\begin{es} When a group $\Gamma$ is discrete (for instance a finitely generated group or a lattice), the continuity condition is trivial. Hence we refer simply to the bounded cohomology of $\Gamma$ and we denote it by $\upH^\bullet_b$. \begin{enumerate} \item Let $\Gamma$ be a discrete countable finitely generated group. Its bounded cohomology $\upH^\bullet_b(\Gamma;E)$ with coefficients in $E$ is given by the cohomology of the complex $(\upL^\infty_{\mathrm{w}^\ast}(B^{\bullet+1};E)^{\Gamma},\delta^\bullet)$, where $B$ is the boundary described in Example \ref{es boundary}(2). \item Suppose that $\Gamma < G$ is a lattice in a semisimple Lie group $G$. If $P<G$ is a minimal parabolic subgroup, the bounded cohomology of $\Gamma$ is given by the cohomology of the complex $(\upL^\infty_{\mathrm{w}^\ast}((G/P)^{\bullet+1};E)^{\Gamma},\delta^\bullet)$ by virtue of Example \ref{es boundary}(3). \end{enumerate} \end{es} Any $G$-equivariant morphism $\alpha:E \rightarrow F$ between $G$-modules induces a map at the level of continuous bounded cohomology groups $$ \upH^\bullet_{cb}(\alpha):\upH^\bullet_{cb}(G;E) \rightarrow \upH^\bullet_{cb}(G;F). $$ In this paper we will mainly be interested in the map induced by the change of coefficients $\bbR \hookrightarrow \upL^\infty(X;\bbR)$, where $(X,\mu)$ is a Lebesgue $G$-space. We conclude this section with a few words about the complex of bounded measurable functions. Let $(Y,\nu)$ be any Lebesgue $G$-space, not necessarily amenable. Burger and Iozzi \cite[Corollary 2.2]{burger:articolo} proved that there exists a canonical non-trivial map $$ \mathfrak{c}^\bullet:\upH^\bullet((\calB^\infty_{\mathrm{w}^\ast}(Y^{\bullet+1};E),\delta^\bullet)^G) \rightarrow \upH^\bullet_{cb}(G;E) , $$ and the same holds if we restrict to the alternating subcomplex. \begin{es} \label{es kahler class} Let $G$ be a semisimple Hermitian Lie group with symmetric space $\calX$.
If $\calS_{\calX}$ is the Shilov boundary, we know that it is isomorphic to the quotient $G/Q$ (by Section \ref{sec herm space}) and hence it is a Lebesgue $G$-space (since homogeneous quotients always admit a quasi-$G$-invariant measure). The Bergman cocycle $\beta_{\calX}$ is an everywhere defined alternating cocycle that can be considered as an element $$ \beta_{\calX} \in \calB^\infty_{\mathrm{alt}}(\calS^3_{\calX};\bbR)^G . $$ By \cite[Proposition 4.3]{BIW07} the image of the class $[\beta_{\calX}]$ under the map $$ \mathfrak{c}^2:\upH^2((\calB^\infty_{\mathrm{alt}}(\calS^{\bullet+1}_{\calX};\bbR)^G,\delta^\bullet)) \rightarrow \upH^2_{cb}(G;\bbR) $$ does not vanish. \end{es} \begin{defn} Let $G$ be a semisimple Hermitian Lie group with symmetric space $\calX$. We set $$ k^b_G:=\mathfrak{c}^2[\beta_\calX] \in \upH^2_{cb}(G;\bbR) $$ and we call it the \emph{bounded K\"{a}hler class}. \end{defn} It is well-known \cite{BIW07,Pozzetti} that the bounded K\"{a}hler class is a generator of the second bounded cohomology group. We will exploit this fact in Section \ref{sec maximal}, when we discuss maximal cocycles. \subsection{Pullback along measurable cocycles}\label{sec pullback class} We are finally ready to introduce the notion of pullback along a measurable cocycle. We mainly refer to \cite{moraschini:savini,moraschini:savini:2} for a detailed discussion of this topic. We will first introduce the pullback using the complex of continuous functions on the group, then we will see how to implement it in terms of boundaries. Let $\Gamma$ be a finitely generated discrete group and let $G$ be a semisimple Hermitian Lie group. Consider a standard Borel probability $\Gamma$-space $(X,\mu)$.
Given a measurable cocycle $\sigma:\Gamma \times X \rightarrow G$ we can define $$ \upC^\bullet_b(\sigma):\upC_{cb}(G^{\bullet+1};\bbR) \rightarrow \upC_{b}(\Gamma^{\bullet+1};\upL^\infty(X;\bbR)) , $$ $$ (\upC^\bullet_b(\sigma)(\psi))(\gamma_0,\ldots,\gamma_\bullet)(x):=\psi(\sigma(\gamma_0^{-1},x)^{-1},\ldots,\sigma(\gamma^{-1}_\bullet,x)^{-1}). $$ The above map is a well-defined cochain map and it induces a map at the level of bounded cohomology \cite[Lemma 2.7]{savini2020}, namely $$ \upH^\bullet_b(\sigma):\upH^\bullet_{cb}(G;\bbR) \rightarrow \upH^\bullet_b(\Gamma;\upL^\infty(X;\bbR)) , \ \upH^\bullet_b(\sigma)([\psi]):=[\upC^\bullet_b(\sigma)(\psi)] . $$ Furthermore, when $\sigma_1$ and $\sigma_2$ are cohomologous cocycles, by \cite[Lemma 2.9]{savini2020} we have that $$ \upH^\bullet_b(\sigma_1)=\upH^\bullet_b(\sigma_2) . $$ \begin{defn}\label{def pullback kahler} Let $G$ be a semisimple Hermitian Lie group, let $\Gamma$ be a finitely generated group and let $(X,\mu)$ be a standard Borel probability $\Gamma$-space. Given a measurable cocycle $\sigma:\Gamma \times X \rightarrow G$, we define its \emph{parametrized K\"{a}hler class} as $$ \upH^2_b(\sigma)(k^b_G) \in \upH^2_b(\Gamma;\upL^\infty(X;\bbR)) . $$ \end{defn} Our next goal is to show how the pullback can be implemented explicitly in terms of boundaries. We start with the following \begin{defn}\label{def boundary map} Let $\Gamma$ be a finitely generated group with $\Gamma$-boundary $B$. Consider a standard Borel probability $\Gamma$-space $(X,\mu)$. Given a semisimple Hermitian Lie group $G$, let $(Y,\nu)$ be a Lebesgue $G$-space. A \emph{boundary map} for a measurable cocycle $\sigma:\Gamma \times X \rightarrow G$ is a Borel measurable map $$ \phi:B \times X \rightarrow Y, $$ which is $\sigma$-equivariant, namely $$ \phi(\gamma.b,\gamma.x)=\sigma(\gamma,x)\phi(b,x) , $$ for all $\gamma \in \Gamma$ and almost every $b \in B, x \in X$.
\end{defn} Given a boundary map $\phi:B \times X \rightarrow Y$, the map $$ \phi_x:B \rightarrow Y , $$ is called the $x$-\emph{slice} of $\phi$ and it is Borel measurable by \cite[Chapter VII, Lemma 1.3]{margulis:libro}. The $\sigma$-equivariance of $\phi$ implies that slices change equivariantly as follows: $$ \phi_{\gamma.x}(b)=\sigma(\gamma,x)\phi_x(\gamma^{-1}b) , $$ for all $\gamma \in \Gamma$ and almost every $b \in B,x \in X$. Recall that $G$ has an associated connected adjoint semisimple real algebraic group $\bfG$, obtained via complexification. Suppose that $Y$ corresponds to the real points of a real algebraic quotient $\bfG/\bfL$, for some real algebraic subgroup $\bfL < \bfG$. We say that the $x$-slice is \emph{Zariski dense} if the Zariski closure of the essential image of $\phi_x$ is the whole $\bfG/\bfL$. The following will be crucial for our purposes: \begin{thm}{\upshape \cite[Corollary 2.16]{sarti:savini1}}\label{teor boundary map} Let $\Gamma$ be a finitely generated group with $\Gamma$-boundary $B$ and let $(X,\mu)$ be an ergodic standard Borel probability $\Gamma$-space. Consider a Zariski dense measurable cocycle $\sigma:\Gamma \times X \rightarrow G$ into a semisimple Hermitian Lie group $G$. Then there exists a boundary map $\phi:B \times X \rightarrow G/Q$, where $G/Q$ is the algebraic realization of the Shilov boundary associated to $G$. Moreover, almost every slice is Zariski dense and preserves transversality, that is, $\phi(b_0,x),\phi(b_1,x)$ are transverse whenever $b_0,b_1$ are so. \end{thm} We want to use a boundary map to realize the pullback in bounded cohomology. A delicate point, already observed by Burger and Iozzi \cite{burger:articolo}, is that a priori the slices of a boundary map do not need to preserve the measure classes involved. To overcome this problem, we will work directly with the space of bounded measurable functions.
Given a boundary map $\phi:B \times X \rightarrow Y$ for a measurable cocycle $\sigma:\Gamma \times X \rightarrow G$, we can define $$ \upC^\bullet(\phi):\calB^\infty(Y^{\bullet+1};\bbR)^G \rightarrow \upL^\infty_{\mathrm{w}^\ast}(B^{\bullet+1};\upL^\infty(X;\bbR))^\Gamma $$ $$ (\upC^\bullet(\phi)(\psi))(b_0,\ldots,b_\bullet)(x):=\psi(\phi(b_0,x),\ldots,\phi(b_\bullet,x)) , $$ where we tacitly postcomposed with the projection onto the essentially bounded functions on $B$. By \cite[Lemma 4.2]{moraschini:savini} the map $\upC^\bullet(\phi)$ is a norm non-increasing cochain map which induces $$ \upH^\bullet(\phi):\upH^\bullet(\calB^\infty(Y^{\bullet+1};\bbR)^G,\delta^\bullet) \rightarrow \upH^\bullet_b(\Gamma;\upL^\infty(X;\bbR)) , \ \upH^\bullet(\phi)([\psi]):=[\upC^\bullet(\phi)(\psi)] . $$ By applying \cite[Proposition 2.1]{burger:articolo} we obtain the following commutative diagram \begin{equation} \label{diagram pullback} \xymatrix{ \upH^\bullet(\calB^\infty(Y^{\bullet+1};\bbR)^G,\delta^\bullet) \ar[rr]^{\mathfrak{c}^\bullet} \ar[d]^{\upH^\bullet(\phi)} && \upH^\bullet_{cb}(G;\bbR) \ar[dll]^{\upH^\bullet_b(\sigma)}\\ \upH^\bullet_b(\Gamma;\upL^\infty(X;\bbR)) . } \end{equation} \begin{es}\label{ es boundary kahler class} Let $\Gamma$ be a finitely generated group and let $G$ be a semisimple Hermitian Lie group with symmetric space $\calX$. Consider a Zariski dense measurable cocycle $\sigma:\Gamma \times X \rightarrow G$, where $(X,\mu)$ is an ergodic standard Borel probability $\Gamma$-space. By Theorem \ref{teor boundary map} there exists a boundary map $\phi:B \times X \rightarrow \calS_{\calX}$ whose slices are Zariski dense and preserve transversality. By Example \ref{es kahler class} we have that $\mathfrak{c}^2[\beta_{\calX}]$ is the bounded K\"{a}hler class $k^b_G$. By Definition \ref{def pullback kahler} we know that $\upH^2_b(\sigma)(k^b_G)$ is the parametrized K\"{a}hler class.
Thus Diagram \ref{diagram pullback} shows that a canonical non-trivial representative of the parametrized K\"{a}hler class is given by $\upC^2(\phi)(\beta_{\calX})$, namely $$ \upC^2(\phi)(\beta_{\calX})(b_0,b_1,b_2)(x):=\beta_{\calX}(\phi(b_0,x),\phi(b_1,x),\phi(b_2,x)) . $$ \end{es} \section{Main results} \label{sec main result} \subsection{Rigidity for Zariski dense cocycles}\label{sec rigidity} Let $\Gamma$ be a finitely generated group and let $(X,\mu)$ be an ergodic standard Borel probability $\Gamma$-space. Consider a simple Hermitian Lie group $G$ not of tube type. In this section we want to show how the parametrized K\"{a}hler class encodes all the information associated to a Zariski dense $G$-valued measurable cocycle. More precisely, we will see that we can embed the Zariski dense $G$-orbital cohomology in the second bounded cohomology group of $\Gamma$ with $\upL^\infty(X;\bbR)$-coefficients. To see this we start by recalling the following more general result. \begin{thm}{\upshape \cite[Theorem 2]{sarti:savini3}} \label{ teor parametrized inequivalent} Let $\sigma_i:\Gamma \times X \rightarrow G_i$, for $i=1,\ldots,n$, be measurable cocycles into simple Hermitian Lie groups $G_i$ not of tube type. Suppose that the cocycles are Zariski dense and pairwise inequivalent. Then the subset $$ \{ \upH^2_b(\sigma_i)(k^b_{G_i}) \}_{i=1,\ldots,n} \subset \upH^2_b(\Gamma;\upL^\infty(X;\bbR)) $$ is linearly independent over $\upL^\infty(X;\mathbb{Z})$. \end{thm} \begin{proof}[Sketch of the proof] By Theorem \ref{teor boundary map} there exists a boundary map $\phi_i:B \times X \rightarrow \calS_i$, where $B$ is a $\Gamma$-boundary and $\calS_i$ is the Shilov boundary of $G_i$. Notice that by \cite[Corollary 2.6]{MonShal0} there are no coboundaries in degree $2$.
Thanks to Example \ref{ es boundary kahler class}, any trivial linear combination $$ \sum_{i=1}^n m_i \upH^2_b(\sigma_i)(k^b_{G_i})=0 , $$ with $m_i \in \upL^\infty(X;\mathbb{Z})$, boils down to the following equation \begin{equation}\label{eq linear combination} \sum_{i=1}^n m_i(x) \beta_i(\phi_i(b_0,x),\phi_i(b_1,x),\phi_i(b_2,x))=0 , \end{equation} for almost every $b_0,b_1,b_2 \in B$ and $x \in X$. Here $\beta_i$ is the Bergman cocycle on the Shilov boundary $\calS_i$, for $i=1,\ldots,n$. Using Equation \eqref{eq Bergman triple product} we can rewrite the previous linear combination in terms of complex Hermitian triple products, namely $$ \prod_{i=1}^n \langle \langle \phi_i(b_0,x),\phi_i(b_1,x),\phi_i(b_2,x) \rangle \rangle_{\bbC}^{m_i(x)}=1 , $$ for almost every $b_0,b_1,b_2 \in B, x \in X$. By the transitivity of $G_i$ on transverse pairs in $\calS_i$, one can find a cocycle $\widetilde{\sigma}_i$ cohomologous to $\sigma_i$ with boundary map $\widetilde{\phi}_i:B \times X \rightarrow \calS_i$, such that the images $\widetilde{\phi}_i(b_0,x)=\eta_i$ and $\widetilde{\phi}_i(b_1,x)=\zeta_i$ do not depend on $x \in X$ and furthermore it holds that \begin{equation}\label{eq product Hermitian products} \prod_{i=1}^n \langle \langle \eta_i , \zeta_i ,\widetilde{\phi}_i(b_2,x) \rangle \rangle_{\bbC}^{m_i(x)}=1 , \end{equation} for almost every $b_2 \in B, x \in X$.
If we consider the product cocycle $$ \widetilde{\sigma}:\Gamma \times X \rightarrow \prod_{i=1}^n G_i , \ (\gamma,x) \mapsto (\widetilde{\sigma}_i(\gamma,x))_{i=1,\ldots,n} $$ with boundary map $$ \widetilde{\phi}:B \times X \rightarrow \prod_{i=1}^n \calS_i , \ (b,x) \mapsto (\widetilde{\phi}_i(b,x))_{i=1,\ldots,n} , $$ Equation \eqref{eq product Hermitian products} and the fact that each $G_i$ is not of tube type imply that almost every $x$-slice of $\widetilde{\phi}$ is not Zariski dense, since the Zariski closure of the essential image of almost every slice is contained in the proper Zariski closed set $$ \{ (\omega_1,\ldots,\omega_n) \in \prod_{i=1}^n \calO_{\eta_i,\zeta_i} \ | \ \prod_{i=1}^n P_i^{m_i(x)}(\omega_i)=1 \}. $$ Here $\calO_{\eta_i,\zeta_i}$ is the Zariski open set defined at the end of Section \ref{sec herm space}. By Theorem \ref{teor boundary map} the algebraic hull $\bfL$ of $\widetilde{\sigma}$ must be a proper subgroup of the product $\prod_{i=1}^n \bfG_i$, where $\bfG_i$ is the connected adjoint simple algebraic group obtained by complexifying $G_i$, for $i=1,\ldots,n$. Since $\bfL$ surjects onto each $\bfG_i$ via the projections and the $\bfG_i$ are simple, there must exist at least one $\bbR$-isomorphism $s:\bfG_i \rightarrow \bfG_j$ for some $i\neq j \in \{1,\ldots,n\}$. This contradicts the inequivalence of the $\sigma_i$'s. \end{proof} Using Theorem \ref{ teor parametrized inequivalent} one can show the following: \begin{thm}{\upshape \cite[Theorem 1]{sarti:savini3}}\label{kx injection} Let $\Gamma$ be a finitely generated group and $(X,\mu)$ be an ergodic standard Borel probability $\Gamma$-space. Consider a simple Hermitian Lie group $G$. The map $$ K_X:\upH^1_{ZD}(\Gamma \curvearrowright X;G) \rightarrow \upH^2_b(\Gamma;\upL^\infty(X;\bbR)) , \ \ K_X([\sigma]):=\upH^2_b(\sigma) (k^b_{G}) $$ is an injection whose image avoids the trivial class.
As a consequence, the parametrized K\"{a}hler class is a complete invariant of the orbital cohomology class of a Zariski dense cocycle $\sigma$. \end{thm} \begin{proof}[Sketch of the proof] Let $\sigma_1,\sigma_2:\Gamma \times X \rightarrow G$ be two Zariski dense cocycles. We need to show that if $\upH^2_b(\sigma_1)=\upH^2_b(\sigma_2)$, then $\sigma_1$ and $\sigma_2$ are cohomologous. By Theorem \ref{ teor parametrized inequivalent} we have that $\sigma_1$ and $\sigma_2$ are equivalent, thus there exists an $\bbR$-isomorphism $s:\bfG \rightarrow \bfG$ of the connected adjoint simple algebraic group $\bfG$ associated to $G$, such that $s \circ \sigma_1$ is cohomologous to $\sigma_2$. Since the pullback is equivariant with respect to the sign of $s$, we have that $$ 0=\upH^2_b(\sigma_1)-\upH^2_b(\sigma_2)=\upH^2_b(\sigma_1)-\varepsilon(s)\upH^2_b(\sigma_1)=(1-\varepsilon(s))\upH^2_b(\sigma_1) . $$ Again Theorem \ref{ teor parametrized inequivalent} implies that $\upH^2_b(\sigma_1)$ is not trivial, thus $\varepsilon(s)=1$ and the statement follows. \end{proof} The previous theorem has important consequences for the computation of the orbital cohomology when $\Gamma$ is either a higher rank lattice or a lattice in a product. \begin{prop}{\upshape \cite[Proposition 4.1]{sarti:savini3}}\label{prop higher rank} Let $\Gamma < H=\bfH(\bbR)^\circ$ be a lattice, where $\bfH$ is a connected, simply connected, almost simple $\bbR$-group of real rank at least $2$. Let $(X,\mu)$ be an ergodic standard Borel probability $\Gamma$-space and let $G$ be a simple Hermitian Lie group. If $\upH^2_b(\Gamma;\bbR)\cong 0$ then $$ |\upH^1_{ZD}(\Gamma \curvearrowright X;G)|=0 . $$ \end{prop} \begin{proof} Thanks to Theorem \ref{kx injection} we have an injection $$ K_X:\upH^1_{ZD}(\Gamma \curvearrowright X;G) \rightarrow \upH^2_b(\Gamma;\upL^\infty(X;\bbR)) $$ whose image avoids the trivial class.
Since $\upL^\infty(X;\bbR)$ is semiseparable as a Banach $\Gamma$-module, by \cite[Corollary 1.6]{Mon10} we have the following chain of isomorphisms $$ \upH^2_b(\Gamma;\upL^\infty(X;\bbR)) \cong \upH^2_b(\Gamma;\upL^\infty(X;\bbR)^\Gamma) \cong \upH^2_b(\Gamma;\bbR), $$ where the last isomorphism is due to the ergodicity of $(X,\mu)$. The statement now follows from the assumption. \end{proof} We refer to \cite{BM1,burger2:articolo} for conditions under which the hypothesis $\upH^2_b(\Gamma;\bbR) \cong 0$ is satisfied. By virtue of Proposition \ref{prop higher rank} we have a vanishing result for the Zariski dense orbital cohomology. Such an explicit result is usually difficult to obtain, and this is exactly why rigidity results such as Theorem \ref{kx injection} are so valuable. We conclude with the case of products. Recall that a lattice $\Gamma < H:=H_1 \times \ldots \times H_n$ in a product of locally compact second countable groups is \emph{irreducible} if it projects densely onto each $H_i$. Additionally, we say that $H$ acts \emph{irreducibly} on a standard Borel probability space $(X,\mu)$ if each subgroup obtained by omitting one factor of $H$ acts ergodically on $X$. \begin{prop}{\upshape \cite[Proposition 4.4]{sarti:savini3}}\label{prop products} Consider $n \geq 2$ and an irreducible lattice $\Gamma < H:=H_1 \times \ldots \times H_n$ in a product of locally compact second countable groups such that $\upH^2_{cb}(H_i;\bbR)=0$ for $i=1,\ldots,n$. Let $(X,\mu)$ be a standard Borel $H$-irreducible probability space and consider a simple Hermitian Lie group $G$. Then $$ |\upH^1_{ZD}(\Gamma \curvearrowright X;G)|=0 . $$ \end{prop} \begin{proof} By \cite[Corollary 9]{Mon10} the inclusion $$ \upL^\infty(X;\bbR) \rightarrow \upL^2(X;\bbR) $$ induces an injection in bounded cohomology. Precomposing with $K_X$, we obtain an injection $$ \upH^1_{ZD}(\Gamma \curvearrowright X;G) \rightarrow \upH^2_b(\Gamma;\upL^2(X;\bbR)) $$ whose image avoids the trivial class.
If we set $$ H'_i:=\prod_{j \neq i} H_j , $$ by \cite[Theorem 16]{burger2:articolo} we have that $$ \upH^2_b(\Gamma;\upL^2(X;\bbR)) \cong \bigoplus_{i=1}^n \upH^2_{cb}(H_i;\upL^2(X;\bbR)^{H'_i}) \cong \bigoplus_{i=1}^n \upH^2_{cb}(H_i;\bbR) $$ and the statement follows. \end{proof} \subsection{Maximal measurable cocycles}\label{sec maximal} So far we have seen the theory of the pullback along a Zariski dense cocycle $\Gamma \times X \rightarrow G$ with values in a simple Hermitian Lie group in full generality. Our next goal is to impose more restrictive conditions on both $\Gamma$ and $G$ and to introduce a new family of measurable cocycles, namely maximal ones. We mainly refer to \cite{sarti:savini1} for more details about this topic. We set $G_{p,q}:=\textup{PU}(p,q)$. Consider a lattice $\Gamma < G_{n,1}$, with $n \geq 2$, and a standard Borel probability $\Gamma$-space $(X,\mu)$. Since the measure $\mu$ is finite, the change of coefficients $$ \upH^2_b(\Gamma;\bbR) \rightarrow \upH^2_b(\Gamma;\upL^\infty(X;\bbR)) $$ admits a left inverse induced by integration along $X$. More precisely, if we consider $$ \upI_X^\bullet:\upC_b(\Gamma^{\bullet+1};\upL^\infty(X;\bbR)) \rightarrow \upC_b(\Gamma^{\bullet+1};\bbR) , $$ $$ \upI_X^\bullet(\psi)(\gamma_0,\ldots,\gamma_\bullet):=\int_X \psi(\gamma_0,\ldots,\gamma_\bullet)(x) d\mu(x) , $$ we have that $\upI_X^\bullet$ is a norm non-increasing cochain map which induces a map at the level of cohomology groups $$ \upI^\bullet_X:\upH^\bullet_b(\Gamma;\upL^\infty(X;\bbR)) \rightarrow \upH^\bullet_b(\Gamma;\bbR) . $$ Since $\Gamma$ is a lattice (and hence the quotient $\Gamma \backslash G_{n,1}$ has finite Haar measure), the restriction map $$ \upH^2_{cb}(G_{n,1};\bbR) \rightarrow \upH^2_b(\Gamma;\bbR) $$ also admits an inverse, this time a right one.
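Indeed, the left-inverse property can already be checked at the level of cochains: if $\psi \in \upC_b(\Gamma^{\bullet+1};\bbR)$ is regarded as constant in the variable $x \in X$ via the change of coefficients, then, since $\mu$ is a probability measure, $$ \upI_X^\bullet(\psi)(\gamma_0,\ldots,\gamma_\bullet)=\int_X \psi(\gamma_0,\ldots,\gamma_\bullet) \, d\mu(x)=\psi(\gamma_0,\ldots,\gamma_\bullet)\mu(X)=\psi(\gamma_0,\ldots,\gamma_\bullet) . $$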
If we define the \emph{transfer map} as $$ \upT_b^\bullet:\upC_b(\Gamma^{\bullet+1};\bbR) \rightarrow \upC_{cb}(G_{n,1}^{\bullet+1};\bbR) , $$ $$ (\upT_b\psi)(g_0,\ldots,g_\bullet):=\int_{\Gamma \backslash G_{n,1}} \psi(\overline{g}g_0,\ldots,\overline{g}g_\bullet)d\mu_{\Gamma \backslash G_{n,1}}(\overline{g}), $$ we obtain a cochain map inducing the \emph{cohomological transfer map} $$ \upT^\bullet_b:\upH^\bullet_b(\Gamma;\bbR) \rightarrow \upH^\bullet_{cb}(G_{n,1};\bbR) . $$ Given a measurable cocycle $\sigma:\Gamma \times X \rightarrow G_{p,q}$, with $1 \leq p \leq q$, we can consider the image of the K\"{a}hler class $k^b_{p,q} \in \upH^2_{cb}(G_{p,q};\bbR)$ under the following composition $$ (\upT^2_b \circ \upI_X^2 \circ \upH^2_b(\sigma))(k^b_{p,q}) \in \upH^2_{cb}(G_{n,1};\bbR) . $$ Since the latter group is one dimensional and generated by the K\"{a}hler class $k^b_{n,1}$, we are allowed to give the following: \begin{defn}\label{def toledo invariant} The \emph{Toledo invariant} associated to a measurable cocycle $\sigma:\Gamma \times X \rightarrow G_{p,q}$ is the real number $\mathrm{t}_b(\sigma)$ which satisfies the following identity \begin{equation}\label{eq toledo invariant} (\upT^2_b \circ \upI^2_X \circ \upH^2_b(\sigma))(k^b_{p,q})=\mathrm{t}_b(\sigma)k^b_{n,1} . \end{equation} \end{defn} The Toledo invariant of a measurable cocycle $\sigma:\Gamma \times X \rightarrow G_{p,q}$ is constant along the orbital cohomology class of $\sigma$. As a consequence, it induces a function $$ \mathrm{t}_b:\upH^1(\Gamma \curvearrowright X;G_{p,q}) \rightarrow \bbR . $$ The image of this function is contained in a bounded interval; in fact the Toledo invariant satisfies $$ |\mathrm{t}_b(\sigma)|\leq \mathrm{rk}(G_{p,q})=\min \{p,q\}=p , $$ and the cocycles attaining the extremal values are called \emph{maximal cocycles}.
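Analogously, assuming $\mu_{\Gamma \backslash G_{n,1}}$ is normalized to be a probability measure, the right-inverse property of the transfer map can be verified directly on cochains: for a continuous bounded $G_{n,1}$-invariant cochain $\psi$ one has $$ (\upT_b(\psi|_{\Gamma^{\bullet+1}}))(g_0,\ldots,g_\bullet)=\int_{\Gamma \backslash G_{n,1}} \psi(\overline{g}g_0,\ldots,\overline{g}g_\bullet)\,d\mu_{\Gamma \backslash G_{n,1}}(\overline{g})=\psi(g_0,\ldots,g_\bullet) , $$ by the $G_{n,1}$-invariance of $\psi$.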
This allows us to define the maximal orbital cohomology $\upH^1_{\max}(\Gamma \curvearrowright X;G_{p,q})$ as the preimage of the extremal values under the Toledo function. Additionally, we denote by $\upH^1_{\max,ZD}(\Gamma \curvearrowright X;G_{p,q})$ the subset of maximal Zariski dense classes. \begin{thm}{\upshape \cite[Theorem 2]{sarti:savini1}}\label{teor superrigidity} Let $\Gamma \leq G_{n,1}$, with $n \geq 2$, be a lattice and let $(X,\mu)$ be an ergodic standard Borel probability $\Gamma$-space. Any maximal Zariski dense cocycle in $G_{p,q}$, where $1 \leq p \leq q$, is cohomologous to a representation $\Gamma \rightarrow G_{p,q}$ with the same properties. \end{thm} \begin{proof}[Sketch of the proof] We assume that the Zariski dense cocycle $\sigma:\Gamma \times X \rightarrow G_{p,q}$ is maximal. Up to changing its sign by composing it with an antiholomorphic isomorphism, we can suppose that $\sigma$ is positively maximal. Additionally, since $\sigma$ is Zariski dense, we can apply Theorem \ref{teor boundary map} to get a boundary map $\phi:\partial_\infty \bbH^n_{\bbC} \times X \rightarrow \calS_{p,q}$, where $\calS_{p,q}$ is the Shilov boundary associated to $G_{p,q}$. Since in degree $2$ there are no coboundaries \cite[Corollary 2.6]{MonShal0}, we can rewrite Equation \eqref{eq toledo invariant} as follows \begin{align}\label{eq toledo boundary} \int_{\Gamma \backslash G_{n,1}} \int_X &\beta_{p,q}(\phi(\overline{g}b_0,x),\phi(\overline{g}b_1,x),\phi(\overline{g}b_2,x)) d\mu(x)d\mu_{\Gamma \backslash G_{n,1}}(\overline{g})\\ =\mathrm{t}_b(\sigma)&\beta_{n,1}(b_0,b_1,b_2) , \nonumber \end{align} for almost every $b_0,b_1,b_2 \in \partial_\infty \bbH^n_{\bbC}$ and $x \in X$. The equation can actually be extended to every triple $b_0,b_1,b_2$ of pairwise distinct points.
Since $\phi_x$ is Zariski dense \cite[Proposition 4.4]{sarti:savini1} for almost every $x \in X$, Equation \eqref{eq toledo boundary} and \cite[Theorem 1.6]{Pozzetti} imply that $\phi_x$ is the restriction of a rational map for almost every $x \in X$ (both $\partial_\infty \bbH^n_{\bbC}$ and $\calS_{p,q}$ are the real points of real algebraic varieties). Thanks to this rationality condition, one can find a measurable map $f:X \rightarrow G_{p,q}$ such that \begin{equation} \label{eq separation variables} \phi(b,x)=f(x)\phi_0(b) , \end{equation} where $\phi_0:\partial_\infty \bbH^n_{\bbC} \rightarrow \calS_{p,q}$ is still rational and Zariski dense. By setting $$ \widetilde{\sigma}:\Gamma \times X \rightarrow G_{p,q}, \ \widetilde{\sigma}(\gamma,x):=f(\gamma.x)^{-1}\sigma(\gamma,x)f(x) , $$ one can see that the separation of variables in Equation \eqref{eq separation variables} implies that $\widetilde{\sigma}$ does not depend on $x \in X$, and hence it is the desired representation $\Gamma \rightarrow G_{p,q}$. \end{proof} \begin{cor}{\upshape \cite[Proposition 3]{sarti:savini1}}\label{cor no cocycles} Let $\Gamma \leq G_{n,1}$, with $n \geq 2$, be a lattice and let $(X,\mu)$ be an ergodic standard Borel probability $\Gamma$-space. There is no maximal Zariski dense cocycle $\Gamma \times X \rightarrow G_{p,q}$ when $1 < p < q$. Equivalently, $$ |\upH^1_{\max,ZD}(\Gamma \curvearrowright X;G_{p,q})|=0 . $$ \end{cor} \begin{proof} Let $\sigma:\Gamma \times X \rightarrow G_{p,q}$ be a maximal Zariski dense cocycle. By Theorem \ref{teor superrigidity} we have a maximal Zariski dense representation $\Gamma \rightarrow G_{p,q}$ contained in the orbital cohomology class of $\sigma$. By \cite[Corollary 1.2]{Pozzetti} there are no maximal Zariski dense representations when $1 < p < q$. \end{proof} \bibliographystyle{amsalpha}
\section{} In this work, we present several heuristic-based and data-driven active vision strategies for viewpoint optimization of an arm-mounted depth camera for the purpose of aiding robotic grasping. These strategies aim to efficiently collect data to boost the performance of an underlying grasp synthesis algorithm. We created an open-source benchmarking platform in simulation (https://github.com/galenbr/2021ActiveVision), and provide an extensive study for assessing the performance of the proposed methods as well as comparing them against various baseline strategies. We also provide an experimental study with a real-world setup by utilizing an existing grasping planning benchmark in the literature. With these analyses, we were able to quantitatively demonstrate the versatility of heuristic methods that prioritize certain types of exploration, and qualitatively show their robustness to both novel objects and the transition from simulation to the real world. We identified scenarios in which our methods did not perform well and scenarios which are objectively difficult, and present a discussion on which avenues for future research show promise. 
\begin{comment} implemented a bunch of active vision strategies tried them in simulation and the real world used benchmarks that let us tell if they were doing badly and if they were doing well found that heuristics generally the most reliable, but in simulation most objects had so little room for improvement it didn't make much difference found that the object that did have room for significant improvement was qualitatively different from simpler objects found that in the real world sensor differences stopped machine learning from generalizing well \end{comment} \tiny \fontsize{8}{11}\helveticabold { \section{Keywords:} active vision, self-supervised learning, reinforcement learning, grasp synthesis, benchmarking} \end{abstract} \section{Introduction} \begin{figure}[h] \centering \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Figures/Intro_Image_v3.png} \caption{The 3D Heuristic policy guiding the camera and finding the grasp for an object} \label{fig_intro} \end{figure} Robotic grasping is a vital capability for many tasks, particularly in service robotics. Most grasping algorithms use data from a single viewpoint to synthesize a grasp \citep{Caldera2018}. This approach attempts to create a single, master algorithm that is useful for all objects in all situations. Nevertheless, these algorithms tend to suffer when the viewpoint of the vision sensor is different from the images used in training \citep{Viereck2017}. Additionally, many graspable objects have observation angles that are ``singular" from which no grasp can be synthesized: For example, if an object has only one graspable surface, which is self-occluded from the current viewpoint of the camera, the grasp synthesis algorithm would either fail to find any grasps or would need to rely on assumptions that might not always hold, and therefore lead to an unsuccessful grasp attempt. The issues of the single viewpoint approaches can be addressed via active vision frameworks, i.e.
by actively moving the camera and collecting more data about the task. At one end of this spectrum is collecting data to obtain a complete 3D model of the object. This approach is slow, difficult to carry out in the real world, and vulnerable to misalignment if conditions change during or after data collection \citep{Lakshminarayanan2017}. Our aim is to develop active vision strategies that can efficiently collect data with brief motions and allow the grasp synthesis algorithms to find sufficiently good grasps as quickly as possible. It has been shown in the grasping literature that even algorithms tailored for single viewpoints can gain a substantial performance boost from very simple data collection procedures \citep{Viereck2017}. Utilizing active vision for robotic grasping has several avenues for optimization: the exploration algorithm, the data analysis, and the grasping algorithm are all open questions. In this work, we present a wide variety of exploration algorithms along with an extensive simulation and real-world experimental analysis. Figure \ref{fig_intro} shows how an active vision policy explores different objects. In simulation, we created benchmarks to assess not only whether our policies do better than random but to measure how close each approach comes to optimal behavior for each object. In the real-world experiments, we adopted an existing grasp planning benchmark \citep{Bekiroglu2020}, and assess how well the simulation performances translate to real systems. Our exploration algorithms can be split into heuristic and machine learning approaches. In our heuristics, we attempt to identify simple properties of the visual data that are reliable indicators of effective exploration directions. These approaches use estimates of how many potentially occluded grasps lie in each direction. For machine learning, we used self-supervised and Q-learning based approaches. We compare the performance of these methods against three baseline algorithms.
The baselines are random motion (as the worst case algorithm), naive straightforward motion (as a simple algorithm that more complex efforts should outperform), and breadth-first-search (as the absolute ceiling on possible performance). The last is particularly important: because in simulation we could exhaustively test each possible exploration path, we can determine with certainty the shortest possible exploration path that leads to a working grasp. We also present a comparison study with another active vision-based algorithm \citep{Arruda2016}, which is, to the best of our knowledge, the closest strategy to ours in the literature. To summarize, the contributions of our work are as follows:
\begin{enumerate}
\item We present two novel heuristic-based viewpoint optimization methods.
\item We provide a novel Q-learning based approach for achieving an exploration policy.
\item We provide an open-source simulation platform (https://github.com/galenbr/2021ActiveVision) to develop new active vision algorithms and benchmark them.
\item We present an extensive simulation and experimental analysis, assessing and comparing the performance of 5 active vision methods against 3 baseline strategies.
\end{enumerate}
Taken together, these allow us to draw new conclusions not only about how well our algorithms work now, but also about how much it would be possible to improve them.

\section{Related Works}
Adapting robotic manipulation algorithms to work in an imperfect and uncertain world is a central concern of the robotics field, and an overview of modern approaches is given by \cite{Wang2020}. For the use of active vision to address this problem, there has been research into both algorithmic \citep{Calli2011,Arruda2016} and data-driven methods \citep{Paletta2000, Viereck2017,Calli2018,Rasolzadeh2010}, with more recent works tending to favor data-driven approaches \citep{Caldera2018}.
In particular, the work in \citep{Viereck2017} demonstrated that active vision algorithms have the potential to outperform state of the art single-shot grasping algorithms. \cite{Calli2011} proposed an algorithmic active vision strategy for robotic grasping, extending 2D grasp stability metrics to 3D space. As an extension of that work \citep{Calli2018}, the authors utilized local optimizers for systematic viewpoint optimization using 2D images. \cite{Arruda2016} employs a probabilistic algorithm whose core approach is the most similar to our heuristics presented in Section~\ref{ssec:heuristic-policies}. Our approaches differ in focus: \cite{Arruda2016} selects viewpoints based on estimated information gain as a proxy for finding successful grasps, while we prioritize grasp success likelihood and minimize the distance traveled. In our simulation study, we implemented a version of their algorithm and included it in our comparison analysis.

The data-driven approach presented in \cite{Viereck2017} avoided the problem of labeled data by automating data labeling using state of the art single shot grasp synthesis algorithms. They then used machine learning to estimate the direction of the nearest grasp along a view-sphere, and performed gradient descent along the vector field of grasp directions. This has the advantage of being continuous and fast, but did not fit in our discrete testing framework \citep{Viereck2017}. All data-driven methods analysed in this paper utilize a similar self-supervised learning framework due to its relative ease of training. One of our data-driven active vision algorithms utilizes the reinforcement learning framework. A similar strategy for active vision is used by \cite{Paletta2000} to estimate an information gain maximizing strategy for object recognition. We not only extend Q-learning to grasping, but also do away with the intermediary information gain heuristic in reinforcement learning.
Instead, we penalize our reinforcement approach for each step it takes that does not find a grasp, incentivizing short, efficient paths. Two of the data-driven methods in this paper are based on the general strategy of our prior work \citep{Calli2018}, in which a preliminary study was presented in simulation. In this paper, we present one additional variant of this strategy and a more extensive simulation analysis.

\cite{Gallos2019}, while focused on classification rather than grasping, heavily influenced our theoretical concerns and experimental design. Their paper argues that contemporary machine learning based active vision techniques outperform random searches but that this is too low a bar to call them useful, and demonstrates that none of the methods they implemented could outperform the simple heuristic of choosing a direction and moving along it in large steps. Virtually all active vision literature (e.g. \cite{DeCroon2009,Ammirato2017}) compares active vision approaches to random approaches or single shot state of the art algorithms. While there has been research on optimality comparison in machine vision \citep{Karasev}, to the best of our knowledge it has never been extended to 3D active vision, much less active vision for grasp synthesis. Our simulation benchmarks are an attempt not only to extend their approach to grasping, but also to quantify how much improvement over the best performing algorithms remains possible.

\section{Overview}
\begin{figure}[h]
\centering
\captionsetup{justification=centering}
\includegraphics[width=0.8\linewidth]{Figures/Methodology.png}
\caption{The active vision based grasp synthesis pipeline}
\label{fig_methodology}
\end{figure}
The proposed active vision based grasp synthesis pipeline is represented in Figure \ref{fig_methodology}.
It starts by collecting environment information from a viewpoint and fusing it with the previously known information about the environment (except for the first viewpoint captured). The object and table data are then extracted, and the regions which have not yet been explored by the camera (unexplored regions) are updated. This processed data is used by the grasp synthesis and active vision policies, which are explained in the following sections. An attempt is made to synthesize a grasp with the available data; if it fails, the active vision policy is called to guide the camera to its next viewpoint, and the process repeats until a grasp has been found.
\FloatBarrier
\subsection{Workspace description}
We assume an eye-in-hand system that allows us to move the camera to any viewpoint within the manipulator workspace. To reduce the dimension of the active vision algorithm's action space, the camera is constrained to move along a viewsphere centered on the target object, always pointing towards it (a common strategy also adopted in \cite{Paletta2000,Arruda2016,Calli2018a}). The radius of the viewsphere ($v_{r}$) is set based on the manipulator workspace and sensor properties. On the viewsphere, movements are discretized into individual steps defined by two parameters: step-size ($v_{s}$) and number of directions ($v_{d}$). Figure \ref{fig_viewsphere} shows the workspace we use with $v_{r}$ = 0.4m, $v_{s}$ = 20\textdegree and $v_{d}$ = 8 (N,NE,E,SE,S,SW,W,NW). In our implementation, we use an Intel RealSense D435i camera on a Franka Emika Panda arm for our eye-in-hand system.
\begin{figure}[h]
\centering
\captionsetup{justification=centering}
\includegraphics[width=0.3\linewidth]{Figures/ViewSphere.png}
\caption{Viewsphere and its next steps with parameters $v_{r}$ = 0.4m, $v_{s}$ = 20\textdegree and $v_{d}$ = 8.
The blue sphere is the expected position of the object, the green sphere the current camera position, and the red spheres the next steps it can take}
\label{fig_viewsphere}
\end{figure}

\subsection{Point Cloud Processing and Environment modelling}
The point cloud data received from the camera is downsampled before further processing to reduce sensor noise and to speed up the execution time. Figure \ref{fig_obj_modelling} shows the environment as seen by the camera after downsampling.
\begin{figure}[h]
\centering
\captionsetup{justification=centering}
\includegraphics[width=0.9\linewidth]{Figures/Object_Modelling.png}
\caption{Example with the power drill as the object, showing the processed point clouds. Left: environment as seen by the camera; top right: extracted object and table; bottom right: the unexplored regions of the environment}
\label{fig_obj_modelling}
\end{figure}
Sample Consensus based plane segmentation techniques in the Point Cloud Library \citep{Rusu_ICRA2011_PCL} are used to extract the table from the scene, after which the points above the table are extracted and marked as object points. As mentioned previously, identifying the unexplored regions is required for grasp synthesis as well as for the active vision policies. For this purpose, the region surrounding the object is populated with an evenly spaced point cloud, which is then sequentially checked to determine which points are occluded. While ray-tracing is a common visibility check, it is computationally intensive and time consuming. Instead, we take advantage of the organised nature of the point cloud data and use the camera intrinsic matrix ($K$) to project the 3D points ($X$) to the image plane (Eqn. \ref{eqn_projection}), comparing the depth value of $X$ with that of the point present in the environment at pixel coordinate $X_{p}$. This approach leads to a much faster computation.
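This projection-based visibility check (Eqn. \ref{eqn_projection}) can be sketched as below. This is a minimal NumPy illustration under our assumptions (points already expressed in the camera frame, a dense depth image, and an illustrative depth tolerance \texttt{eps}), not the actual implementation:

```python
import numpy as np

def occluded_mask(points, K, depth_image, eps=0.01):
    """Mark which 3D points (camera frame, N x 3) are occluded by
    projecting them with intrinsics K and comparing their depth with
    the depth image value at the projected pixel."""
    X = points.T                      # 3 x N
    z = X[2]                          # depths z0
    Xp = (K @ X) / z                  # projected pixels X_p = K X / z0
    u = np.round(Xp[0]).astype(int)
    v = np.round(Xp[1]).astype(int)
    h, w = depth_image.shape
    occ = np.zeros(len(points), dtype=bool)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # A point is occluded if something in the scene is closer than
    # the point along the same pixel ray.
    occ[inside] = depth_image[v[inside], u[inside]] < z[inside] - eps
    return occ
```

Because the check is a single matrix multiply plus an array comparison, it scales to the full unexplored-region cloud far more cheaply than per-point ray-tracing.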
The two images on the bottom right of Figure \ref{fig_obj_modelling} show the unexplored region generated for the drill object.
\begin{equation}
\label{eqn_projection}
\text{Projected pixels: } X_{p} = \frac{K X}{z_{0}}, \quad \text{where } K = \begin{pmatrix} f_{x} & 0 & pp_{x} \\ 0 & f_{y} & pp_{y} \\ 0 & 0 & 1 \end{pmatrix} \text{ and } X=\begin{pmatrix} x_{0} \\ y_{0} \\ z_{0} \end{pmatrix}
\end{equation}
With every new viewpoint the camera is moved to, the newly acquired point cloud is fused with the existing environment data, and the process is repeated to extract the object data and update the unexplored regions.

\subsection{Grasp synthesis}
Synthesising a successful grasp is an important part of this pipeline. Essentially, any grasp synthesis algorithm can be used in this methodology. However, such algorithms should preferably be fast (since they are run multiple times per grasp) and able to work with stitched point clouds. Most data-driven approaches in the literature are trained with single-view point clouds and might not be designed to perform well with stitched object data. Instead, we use a force-closure-based approach similar to \citep{Calli2018a}, but with the following two additional constraints to make the grasps more reliable:
\begin{enumerate}
\item Contact patch constraint: Based on the known gripper contact area and the points surrounding the point under consideration, the contact patch area is calculated by projecting the points onto the contact plane. This area must exceed a threshold for both points in the candidate pair.
\item Curvature constraint: The curvature at both points should be less than a defined threshold.
\end{enumerate}
On the stitched object data, we search for point pairs that satisfy these criteria. The angle between the normal vectors of the two grasp contact points is used as the grasp quality metric.
When both vectors point directly towards each other, the metric attains its highest value of 180\textdegree, with the lowest possible value being 0\textdegree. A minimum threshold of 150\textdegree\ is used in this study. The unexplored region point cloud is used at this stage to perform a collision check before selecting the best available grasp. Grasps close to the line of gravity and with high grasp quality are given higher preference during the grasp selection process. Any grasps that intersect with unexplored regions are omitted; therefore, the grasp candidates make no assumptions about the object shape (since they use only the surfaces already seen). Next, we explain the active vision policies designed and utilized in this paper.

\section{Active Vision Policies}
The focus of this paper is the active vision policies, which guide the eye-in-hand system to its next viewpoints. The nature of the pipeline allows us to plug in any policy which takes point clouds as its input and returns the direction to move for the next viewpoint. The policies developed and tested in this paper fall into three categories:
\begin{enumerate}
\item Baseline policies
\item Heuristic policies
\item Machine learning policies
\end{enumerate}
Each of these sets of policies is explained below.

\subsection{Baseline Policies}
As the name suggests, these policies serve as baselines against which the heuristic and machine learning policies are compared. The three baselines are described below.

\subsubsection{Random Policy}
Ignoring camera data, a random direction was selected for each step. No constraints were placed on the direction chosen, leaving the algorithm free to (for instance) oscillate infinitely between the start pose and positions one step away. This represents the worst case for an algorithm not deliberately designed to perform poorly, and all methods should be expected to perform better than it in the aggregate. This is the standard baseline in the active vision literature.
\subsubsection{Brick Policy}
Named after throwing a brick onto the gas pedal of a car, a consistent direction (North East) was selected at each timestep. This direction was selected because early testing strongly favored it, but we make no claims that it is ideal. This policy represents a naively designed but nonetheless effective baseline which any serious algorithm should be expected to outperform. Any algorithm that performed more poorly than it would need well-justified situational advantages to be usable.

\subsubsection{Breadth-First-Search (BFS) Policy}
From the starting position, an exhaustive breadth-first-search is performed, and an optimal path is selected. This policy represents optimal performance: it is mathematically impossible for a discrete algorithm to produce a shorter path from the same start point. No discrete method can exceed its performance, but measuring how close each method comes to it gives us an objective measure of each method's quality in each situation. With the baselines defined, we now discuss the other categories, starting with the heuristics.

\subsection{Heuristic Policies}
\label{ssec:heuristic-policies}
The idea behind the heuristic policies is to choose the best possible direction after considering the next available viewpoints. The metric used to define the quality of each candidate viewpoint is a value proportional to the amount of unexplored region visible from that viewpoint.

\subsubsection{2D Heuristic Policy}
The viewpoint quality is calculated by transforming the point clouds to the next possible viewpoints and projecting the object and unexplored point clouds from those viewpoints onto an image plane using the camera's projection matrix. This process makes the most optimistic estimation for exploring unexplored regions; it assumes no new object points will be discovered from the new viewpoint.
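The per-viewpoint scoring at the heart of this policy can be sketched as follows; this is a simplified NumPy illustration (boolean projection images assumed already computed; dilation and manipulator-workspace checks omitted), not the actual implementation:

```python
import numpy as np

def viewpoint_score(obj_proj, unexp_proj):
    """Count unexplored-region pixels left non-occluded by the object,
    given boolean H x W projection images for a candidate viewpoint."""
    return int((unexp_proj & ~obj_proj).sum())

def best_direction(projections):
    """projections: dict mapping direction name -> (obj_proj, unexp_proj).
    Returns the direction revealing the most unexplored area."""
    return max(projections, key=lambda d: viewpoint_score(*projections[d]))
```

The mask subtraction mirrors the $unexp\_proj - obj\_proj$ step of the policy, and the pixel count is the quantity maximized over the candidate directions.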
Since the point clouds were downsampled, their projected images were dilated to generate closed surfaces. The 2D projections are then overlapped to calculate the size of the area not occluded by the object. The direction for which the largest area of unexplored region is revealed is then selected. Figure \ref{fig_2D_3D_Heuristic} shows an illustration with the dilated projected surfaces and the calculated non-occluded region. The 2D Heuristic policy is outlined in Algorithm \ref{alg:2DHeuristic}.
\begin{algorithm}
\caption{2D Heuristic policy}
\label{alg:2DHeuristic}
\begin{algorithmic}
\REQUIRE $obj \leftarrow$ Object point cloud
\REQUIRE $unexp \leftarrow$ Unexplored point cloud
\FORALL{$viewpoint \in$ next possible viewpoints}
\IF{viewpoint within manipulator workspace}
\STATE $obj\_trf \leftarrow$ Transform $obj$ to viewpoint
\STATE $obj\_proj \leftarrow$ Project $obj\_trf$ onto image plane (B/W image) and dilate
\STATE $unexp\_trf \leftarrow$ Transform $unexp$ to viewpoint
\STATE $unexp\_proj \leftarrow$ Project $unexp\_trf$ onto image plane (B/W image) and dilate
\STATE $non\_occ\_unexp\_proj \leftarrow unexp\_proj - obj\_proj$
\ENDIF
\STATE Record the number of white pixels in $non\_occ\_unexp\_proj$
\ENDFOR
\STATE Choose the direction with maximum white pixels
\end{algorithmic}
\end{algorithm}
While this heuristic is computationally efficient, it considers only the 2D projected area, leading it to, at times, prefer wafer-thin slivers with high projected area over deep blocks with low projected area. Additionally, it is agnostic to the grasping goal and focuses only on maximizing the exploration of unseen regions.

\subsubsection{3D Heuristic Policy}\label{ssec:3d-heuristic-policy}
In the 3D heuristic, we focused only on the unexplored region which could lead to a potential grasp. This was done using the normal vectors of the currently visible object. Since our grasp algorithm relies on antipodal grasps, only points along the surface normals can produce grasps.
We found the unexplored points within the grasp width of the gripper and within an epsilon of those normal vectors, and discarded all other points from the unexplored point cloud. Next, as in the 2D heuristic, we transformed the points to the next possible viewpoints. This time, instead of projecting, we used local surface reconstruction and ray-tracing to determine all the unexplored points which would not be occluded from a given viewpoint. The direction which leads to the highest number of non-occluded unexplored points is selected. This prioritizes exploring the greatest possible region of unexplored space that, based on known information, could potentially contain a grasp. If all the viewpoints after one step have very few non-occluded points, the policy looks one step further ahead in the same direction for each before making its decision. Figure \ref{fig_2D_3D_Heuristic} shows an illustration with the non-occluded useful unexplored region. The green points mark the part of the unexplored region which is considered useful based on the gripper configuration. The 3D Heuristic policy is outlined in Algorithm \ref{alg:3DHeuristic}.
\begin{algorithm}
\caption{3D Heuristic policy}
\label{alg:3DHeuristic}
\begin{algorithmic}
\REQUIRE $obj \leftarrow$ Object point cloud
\REQUIRE $unexp \leftarrow$ Unexplored point cloud
\REQUIRE $points\_threshold \leftarrow$ Minimum number of non-occluded unexplored points needed for a new viewpoint to be considered useful
\STATE $useful\_unexp \leftarrow$ Unexplored points with potential for a successful grasp
\FORALL{$viewpoint \in$ next possible viewpoints}
\IF{viewpoint within manipulator workspace}
\STATE $obj\_trf \leftarrow$ Transform $obj$ to viewpoint
\STATE $useful\_unexp\_trf \leftarrow$ Transform $useful\_unexp$ to viewpoint
\STATE $non\_occ\_useful\_unexp \leftarrow$ Check occlusion for each $useful\_unexp\_trf$ using local surface reconstruction and ray-tracing.
\ENDIF
\STATE Record the number of points in $non\_occ\_useful\_unexp$
\ENDFOR
\STATE $max\_points \leftarrow$ Maximum points seen across the possible viewpoints
\IF{$max\_points \leq points\_threshold$}
\STATE Run the previous for loop with twice the step-size
\ENDIF
\STATE $max\_points \leftarrow$ Maximum points seen across the possible viewpoints
\STATE Choose the direction which has $max\_points$
\end{algorithmic}
\end{algorithm}
\begin{figure}[h]
\centering
\captionsetup{justification=centering}
\includegraphics[width=0.9\linewidth]{Figures/2D_vs_3D_Heuristic.png}
\caption{Set of images illustrating how the 2D and 3D Heuristics evaluate a proposed next step North with the drill object. The 3D Heuristic images are shown from a different viewpoint for representation purposes.}
\label{fig_2D_3D_Heuristic}
\end{figure}
\FloatBarrier
\subsubsection{Information Gain Heuristic Policy}
The closest approach to the heuristics presented in this paper is provided by \cite{Arruda2016}. For comparison purposes, we implemented an approximate version of their exploration policy to test our assumptions and compare it with our 3D Heuristic approach. First, we defined a set of 34 viewpoints spread across the viewsphere to replicate their search space. To calculate the information gain for each viewpoint, we modified the 3D Heuristic to consider all unexplored regions, as opposed to focusing on the regions with a potential grasp. The modified 3D Heuristic policy, instead of comparing the next $v_{d}$ viewpoints, compared all 34 viewpoints and selected the one with the highest information gain. A simulation study was performed to compare the camera travel distance and computation times of this algorithm to those of our other heuristics.

\subsection{Machine Learning Policies}
Our data-driven policies utilize a fixed-size state vector as input.
A portion of this vector is obtained by modelling the object point cloud and the unexplored-region point cloud with Height Accumulated Features (HAF), which were also used in \cite{Calli2018a}. We experimented with height-map grid sizes of 5 and 7, both of which provide similar performance in our implementation, and we chose to use 5. The state vector of a given view is composed of the flattened height maps of the extracted object and the unexplored point cloud, together with the polar and azimuthal angles of the camera on the viewsphere. The size of the state vector is $2n^2+2$, where $n$ is the grid size.

\subsubsection{Self-supervised Learning Policy}
Following the synthetic data generation used in \citep{Calli2018a}, we generated training data by randomly exploring up to five steps in each direction three times, and choosing the shortest working path in simulation. This was repeated for 1,000 random initial poses for each of the two simple rectangular prisms in Figure \ref{fig_sim_train_objects}. We then applied PCA to each vector to further compress it to 26 components. We use this data in two variations: in the first, we trained a simple logistic regression classifier to take a compressed state vector and predict the next direction to take from it; in the second, we trained an LDA classifier to predict the next direction from the compressed state vector. All the components used in this policy were implemented using the scikit-learn library \citep{scikit-learn}.
\begin{figure}[h]
\centering
\captionsetup{justification=centering}
\includegraphics[width=0.75\linewidth]{Figures/Q_learning.png}
\caption{The Deep Q-Learning policy}
\label{q_learning_arch}
\end{figure}

\subsubsection{Deep Q-Learning Policy}
A deep Q-Learning policy was trained to predict, for a given state vector, the next step that would lead to the shortest path to a viable grasp, using the Keras library \citep{chollet2015keras}.
Four fully connected layers of 128 units each and one output layer of 8 units, connected by ReLU activations, formed the deep network that made the predictions. During training, an epsilon-greedy gate replaced the network's prediction with a random direction with a probability that decreased as training progressed. The requested movement was then performed in simulation, and the resulting state vector and a binary grasp-found flag were recorded. Once enough states had been captured, experience replay randomly selected from the record to train the Q-Network on a full batch of states each iteration. The Q-Learning policy was trained in simulation on all of the objects in Figure \ref{fig_sim_train_objects}, taking roughly 1,300 simulated episodes to reach convergence. We hoped that, given the relatively constrained state space and strong similarities between states, meaningful generalizations could be drawn from the training set to completely novel objects.
\begin{figure}[h]
\centering
\captionsetup{justification=centering}
\includegraphics[width=0.75\linewidth]{Figures/Sim_train_objects.png}
\caption{The set of objects used for simulation training. Filenames left to right: prism 6x6x6, prism 10x8x4, prism 20x6x5, handle, gasket, cinder block. Only prism 10x8x4 and prism 20x6x5 were used to train the supervised learning algorithms.}
\label{fig_sim_train_objects}
\end{figure}
For all machine learning approaches, the objects used for training were never used in testing.
\FloatBarrier

\section{Simulation and Experimental Results}
The methodology discussed in the above sections was implemented and tested both in simulation and in the real world. The setups used for the testing are shown in Figure \ref{fig_lab_sim_setup}. The maximum number of steps allowed before an experiment was restarted was set to 6, based on preliminary experiments with the BFS policy.
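The exploration gate and experience replay used in training the Q-Learning policy can be sketched as below. This is an illustrative reconstruction following standard deep Q-learning conventions, not our exact training code; the buffer capacity and function names are ours:

```python
import random
from collections import deque

import numpy as np

class ReplayBuffer:
    """Fixed-capacity store of (state, action, reward, next_state, done)
    transitions; old transitions are evicted as new ones arrive."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Random draw for experience replay, breaking the temporal
        # correlation between consecutive training samples.
        return random.sample(self.buffer, batch_size)

def select_action(q_values, epsilon, rng):
    """Epsilon-greedy gate: explore a random direction with probability
    epsilon, otherwise exploit the direction with the highest Q-value."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))
```

In this sketch, epsilon would be annealed towards zero over episodes, so the policy gradually shifts from exploration to exploiting the learned Q-values.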
\begin{figure}[h]
\centering
\captionsetup{justification=centering}
\includegraphics[width=0.9\linewidth]{Figures/Lab_SIm_Setup.png}
\caption{The setup as seen in the simulation environment (left) and the lab environment (right) with the YCB power drill object (ID: 35) in place}
\label{fig_lab_sim_setup}
\end{figure}
\FloatBarrier
\subsection{Simulation Study}
The extensive testing in simulation was done on a set of 12 objects from the YCB dataset \citep{7254318}, shown in Figure \ref{fig_sim_exp_objects}. To ensure consistency, we applied each algorithm to the exact same 100 poses for each object. This allowed us to produce a representative sample of a large number of points without biasing the dataset by using regular increments, while still giving each algorithm exactly identical conditions to work in. This was done by generating a set of 100 random values between 0 and 359 before testing began. To test a given policy with a given object, the object was spawned in Gazebo in a stable pose, with 0 degrees of rotation about the z-axis. The object was then rotated by the first of the random values about the z-axis, and the policy was used to search for a viable grasp. After the policy terminated, the object was reset and rotated to the second random value, and so on.
\begin{figure}[h]
\centering
\captionsetup{justification=centering}
\includegraphics[width=0.75\linewidth]{Figures/Sim_exp_objects.png}
\caption{The set of objects used for simulation testing. YCB object IDs: 3, 5, 7, 8, 10, 13, 21, 24, 25, 35, 55, 72-a}
\label{fig_sim_exp_objects}
\end{figure}
The number of steps required to synthesise a grasp was recorded for each object and each of its 100 tested poses. The success rate after each step for each object and policy tested is shown in Figure \ref{fig_sim_res}. Each sub-image displays the fraction of poses for which a successful grasp has been reached by each policy on the same 100 pre-set poses for the given object.
For object 025, for instance, the BFS found a working grasp on the first step for every starting pose, while all the other methods found a first-step grasp only for a large majority of poses. By the second step, every policy had found a working grasp for every tested pose of object 025.
\begin{figure}[h]
\centering
\captionsetup{justification=centering}
\includegraphics[width=\linewidth]{Figures/policyComparison_1.jpg}
\includegraphics[width=\linewidth]{Figures/policyComparison_2.jpg}
\caption{Simulation results for applying each approach to each object in 100 pre-set poses. Success is defined as reaching a view containing a grasp above a user defined threshold. The number in parentheses by the policy names in the legend is the average number of steps that policy took to find a grasp. For cases where no grasp was found, the step count was considered to be 6.}
\label{fig_sim_res}
\end{figure}
The baseline policies, i.e., random for the lower limit and BFS for the upper limit, helped us classify the objects as easy, medium, or hard in terms of how difficult it is to find a path that leads to a successful grasp. Objects are ``easy'' when taking a step in almost any direction will lead to a successful grasp, and ``hard'' when a low ratio of random to BFS searches succeed, suggesting that very specific paths are needed to find a grasp. Two objects with similar optimal and random performance will have similar numbers of paths leading to successful grasps, and so differences in performance between the two would be due to algorithmic failures, not inherent difficulty. The random to BFS ratio is used for the classification. For example, if the BFS result shows that 40 out of 100 poses have a successful grasp found in the first step, and a policy is only able to find a first-step grasp for 10 poses, the policy is considered to have performed at 25\% of the optimal performance; in other words, the ratio would be 0.25.
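As a small illustration of this ratio and the difficulty classification built on it (the default thresholds follow the step-2 cutoffs adopted in this paper; the function names are ours):

```python
def optimal_ratio(policy_successes, bfs_successes):
    """Fraction of optimal performance at a given step: poses solved
    by the policy divided by poses solvable by BFS at that step."""
    return policy_successes / bfs_successes

def classify_difficulty(random_successes, bfs_successes,
                        hard_max=0.40, easy_min=0.80):
    """Classify an object by its random-to-BFS ratio at step 2."""
    r = optimal_ratio(random_successes, bfs_successes)
    if r <= hard_max:
        return "hard"
    if r > easy_min:
        return "easy"
    return "medium"
```

Because the ratio is normalized by the BFS ceiling, it compares policies on what is achievable for each object rather than on raw success counts.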
Objects with a ratio at Step 2 of $\leq 0.40$ are considered hard, objects between 0.41 and 0.80 medium, and objects with a ratio $> 0.80$ easy. With these criteria, the test objects were classified as follows:
\begin{enumerate}
\item Easy: Tomato soup can, Bowl, Mug
\item Medium: Apple, Bleach cleanser, Power drill, Baseball
\item Hard: Cracker box, Mustard bottle, Pudding box, Potted meat can, Toy airplane
\end{enumerate}
With these object classifications, Figure \ref{fig_sim_study_comp} shows the performance of the policies for Step 1 and Step 3 using the policy to BFS ratio.
\begin{figure}[h]
\centering
\captionsetup{justification=centering}
\includegraphics[width=\linewidth]{Figures/Sim_study_comparison.png}
\caption{A comparison of performance of various policies for objects categorized into easy, medium and hard, for Step 1 and Step 3}
\label{fig_sim_study_comp}
\end{figure}
Figures \ref{fig_sim_res} and \ref{fig_sim_study_comp} show that overall in simulation, the 3D Heuristic performed the best, followed by the self-supervised learning approaches, Q-Learning, and the 2D Heuristic. For half of the objects we tested, the 3D Heuristic performed best, while for objects 003, 010, 013, 021, 025, and 055 another algorithm performed better. One reason the 3D Heuristic may fail in some cases is that the heuristics are constrained to considering only the immediate next step. Our machine learning approaches can learn to make assumptions about several steps in the future, and so may be at an advantage on certain objects with complex paths. In addition, the optimistic estimations explained in Section~\ref{ssec:3d-heuristic-policy} do not always hold for all objects and cases. One reason the machine learning techniques underperform in some cases may be the HAF representation, which creates a very coarse-grained representation of the objects, obliterating fine details. A much finer grid size, or an alternative representation, could improve results.
We found that all methods consistently outperformed random, even on objects classified as hard. It is important to note that even the Brick policy was able to find successful grasps for all objects except the toy airplane (72-a), suggesting that incorporating active vision strategies even at a very basic level can improve grasp synthesis for an object. The toy airplane object (72-a) deserves special attention, as it was far and away the hardest object in our test set. It was the only object tested for which most algorithms did not achieve at least 80\% of optimal performance by step 5, as well as having the lowest random to BFS ratio at step 5. We also saw (both here and in the real world experiments) that the heuristic approaches performed best on this extremely unusual object, while the machine learning based approaches all struggled to generalize to it. Easy and medium category objects come very close to optimal performance around step 3, as seen in Figure \ref{fig_sim_study_comp}. Given how small the possible gains on these simple objects are, difficult objects should be the focus of future research.
\FloatBarrier
\subsection{Comparison with the Information Gain Heuristic}
Using the same simulation setup, the Information Gain Heuristic policy was compared to the 3D Heuristic policy. The comparison results are shown in Table \ref{tbl:comparison}, where the number of viewpoints required was converted to the effective number of steps for the 3D Heuristic for comparison. One step is the distance travelled to move to an adjacent viewpoint along the viewsphere in the discretized space with $v_{r}$ = 0.4m and $v_{s}$ = 20\textdegree.
\begin{table} \centering \caption{Comparison between the exploration pattern employed by the Information Gain Heuristic and the 3D Heuristic's grasp weighted exploration.} \label{tbl:comparison} \includegraphics[width=0.9\linewidth]{Figures/Prob_comparison_table.png} \end{table} We see an average 41\% reduction in camera movement with the 3D Heuristic policy, confirming our theory that only certain types of information warrant exploration and that, by focusing on grasp-containing regions, we can achieve good grasps with much less exploration. As a side benefit, we also see a 73\% reduction in processing time with the 3D Heuristic policy, as it considers far fewer views in each step. \FloatBarrier \subsection{Real World Study} The real world testing was done on a subset of the objects used in simulation, along with two custom objects built from Lego pieces. The grasp benchmarking protocol of \citep{Bekiroglu2020} was implemented to assess grasp quality based on the five scoring parameters specified. The 3D Heuristic and Q-Learning policies were selected and tested with these objects. The results of the tests are shown in Table \ref{tbl:lab_exp_res}. A total of 18 object-pose-policy combinations were tested with 3 trials each, and the average across the trials is reported. The objects used, along with the stable poses used for testing, are shown in Figure \ref{fig_lab_exp_objects}. \begin{figure}[h] \centering \captionsetup{justification=centering} \includegraphics[width=0.75\linewidth]{Figures/Lab_Exp_Objects.png} \caption{The left image shows the set of objects used for real world testing. On the right are the stable poses used for testing.
(a) [YCB ID : 8] Stable Pose \#1, (b) [YCB ID : 8] Stable Pose \#2, (c) [YCB ID : 6] Stable Pose \#1, (d) [YCB ID : 35] Stable Pose \#1, (e) [Custom Lego 1] Stable Pose \#1, (f) [Custom Lego 2] Stable Pose \#1} \label{fig_lab_exp_objects} \end{figure} \begin{table} \centering \caption{A list of objects tested with the 3D Heuristic and Q-Learning policies along with the benchmarking results} \label{tbl:lab_exp_res} \includegraphics[width=\linewidth]{Figures/Lab_Exp_Results.png} \end{table} In real world trials, we found that the 3D Heuristic works consistently, but Q-Learning is at times unreliable. When run in simulation, the paths Q-Learning picks for the real-world objects produce successful grasps; the difference between our depth sensor in simulation and the depth sensor in the real world appears to be causing the disconnect. Figure \ref{fig_sim_real_cam_diff} shows the difference between the depth sensors in the two environments. The sensor in simulation accurately captures all the surfaces, whereas the real-world sensor fails to capture the same level of detail. This also explains why more steps were required in the real world than in simulation. Nonetheless, the reliability of the 3D Heuristic demonstrates that simulated results can be representative of reality, despite some differences. \begin{figure}[h] \centering \captionsetup{justification=centering} \includegraphics[width=0.75\linewidth]{Figures/sim_real_cam_diff.png} \caption{Difference between the information captured by the depth sensor in simulation (left) and the real world (right)} \label{fig_sim_real_cam_diff} \end{figure} \FloatBarrier \section{Conclusions} In this paper, we presented heuristic and data-driven policies for viewpoint optimization to aid robotic grasping. In our simulation testing, we implemented a wide variety of active vision approaches and demonstrated that, for the YCB objects we tested, the 3D Heuristic outperformed both the machine learning based approaches and naive algorithms.
From our optimal search, we demonstrated that for most objects tested, most approaches work well. We identified that the most difficult object in our test set is not only dissimilar to our training objects but also objectively more difficult to synthesize a grasp for. In the real world testing, we demonstrated that while sensor differences impacted all algorithms' performance, the heuristic based approach was sufficiently robust to generalize well to the real world, while our machine learning based approaches were more sensitive to sensor noise. Finally, we demonstrated that prioritizing exploration of grasp-related locations produces both faster and more accurate policies. Future research should prioritize what we have identified as difficult objects over simple ones, as it is only on the more difficult objects that gains can be made and good policies discerned from poor ones. \section*{Conflict of Interest Statement} The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. \section*{Author Contributions} SN designed and implemented the 2D Heuristic. GB and SN designed the 3D Heuristic, and SN implemented it. SN developed the simulation with minor help from GB. GB ran the simulation testing. SN ran the real world testing. GB implemented the machine learning policies. SN and GB both analysed the data. BC provided research supervision and organized project funding. BC, SN, and GB collaborated on the writing. \section*{Funding} This work is partially supported by ``NRT-FW-HTF: Robotic Interfaces and Assistants for the Future of Work" with award number 1922761. \bibliographystyle{frontiersinSCNS_ENG_HUMS}
\section{Introduction} Because the free energy landscape of a typical macromolecular system is rough and complicated, with plenty of minima and barriers, it is difficult to locate the global free energy minimum using conventional molecular dynamics (MD) or Monte Carlo (MC) simulations. In the last decades, a variety of methods have been developed to achieve extensive sampling of configurational space. These methods include umbrella sampling~\cite{valleau1977umbre,laio2002metad}, the replica exchange method (REM)~\cite{sugita1999remd,berg1991mul}, multicanonical simulation~\cite{okabe2001remc,berg1992mul}, metadynamics~\cite{valleau1991umbre}, simulated tempering~\cite{lyubartsev1992st,john2010st}, essential dynamics sampling~\cite{berendsen1996essd}, the Wang-Landau algorithm~\cite{wang2001landau,wang2001prl}, temperature accelerated sampling~\cite{okamoto2004gen}, and so on. Many of these methods are based on generalized ensembles~\cite{vander2006tamd} in which each configuration is weighted by a non-Boltzmann probability factor, so that a random walk in energy space can be achieved, \emph{e.g.} via a multicanonical method. These generalized ensemble methods have been extensively applied to studies of, for example, spin glasses~\cite{berg1992mulspin} and protein folding~\cite{hansmann1999mulpro,Bruce1996mulcon}. However, the non-Boltzmann probability factor is usually unknown and must be determined by an iterative process. These iterations are non-trivial and can be difficult for complex systems; therefore, several methods have been proposed to accelerate their convergence~\cite{bartels1998mulcon,kumar1996multcon}. \par Recently, an integrated tempering sampling (ITS) method for enhancing sampling in energy and configuration space was proposed~\cite{gaoyq2008its1,gaoyq2009comp,gaoyq2008its2}. This method is based on a generalized (non-Boltzmann) ensemble which allows enhanced sampling over a desired broad energy and temperature range.
In this generalized ensemble, the probability of a configuration of the system under study is proportional to a summation of Boltzmann factors at a set of temperatures, with each Boltzmann factor carrying a weighting factor. These weighting factors can be determined by the condition that each term in the summation contributes a predefined fraction. \par In the original ITS method, the weighting factors are estimated iteratively, which may be time-consuming for large systems. In this study, we follow the line of the ITS method and derive the expression for the weighting factors by optimizing the energy distribution in the simulations. The values of the weighting factors depend only on the average potential energies of the system, which do not have to be very accurate and can be easily calculated by conventional MD or MC simulations. This process avoids iteration, so the weighting factors can be determined easily and quickly. Moreover, the temperature distribution of an ITS simulation is very important: a broad energy distribution cannot be generated unless a proper temperature range is chosen. Here we also propose an easy-to-use way to generate a temperature distribution that ensures a reasonable energy distribution. \par This paper is organized as follows. In section~\ref{sec:method}, the theory and computational scheme are described in detail. In section~\ref{sec:application}, we apply the method to a Lennard-Jones fluid, a small peptide and a single polymer chain to validate and benchmark the method. Conclusions are drawn in section~\ref{sec:conclusion}. \section{Method}\label{sec:method} \subsection{Generalized ensemble} The ITS method is based on a generalized ensemble designed to yield a distribution covering a broad range of energies.
We define the generalized distribution function $W(r)$ as a summation of a set of Boltzmann factors at different temperatures $T_k$: \begin{equation} \label{equ:expwr} W(r) = \sum\limits_k {{n_k}{e^{ {-\beta_k}U(r)}}} \qquad k=1,2,\ldots,N\,. \end{equation} In Eq.~(\ref{equ:expwr}), $\beta_k = {1}/{k_BT_k}$, where $k_B$ is the Boltzmann constant. In this study, we assume that the terms in the summation are ordered by increasing temperature. The probability of finding a configuration with potential energy $U$ is proportional to $W(r)$. Eq.~(\ref{equ:expwr}) shows that the generalized ensemble is closely associated with the canonical ensembles at different temperatures, and the properties of the generalized ensemble can be calculated from those canonical ensembles. For example, the partition function is: \begin{equation} \label{equ:equqw} {Q_W} = \int {W(r)dr = } \int {\sum\limits_k {{n_k}{e^{ -{\beta _k}U(r)}}} dr = \sum\limits_k {{n_k}{Q_k}} }\,. \end{equation} In Eq.~(\ref{equ:equqw}), $Q_W$ is the partition function of the generalized ensemble and $Q_k$ is the partition function of the canonical ensemble at temperature $T_k$. The ensemble average of a thermodynamic quantity $A$ is \begin{equation} \label{equ:equaw} {\left\langle A \right\rangle _W} = \frac{{\int {A(r)W(r)dr} }}{{\int {W(r)} dr}} = \frac{{\int {A(r)\sum\limits_k {{n_k}{e^{ - {\beta _k}U(r)}}dr} } }}{{\int {\sum\limits_k {{n_k}{e^{ - {\beta _k}U(r)}}dr} } }} = \frac{{\sum\limits_k {{n_k}{Q_k}{{\left\langle A \right\rangle }_k}} }}{{\sum\limits_k {{n_k}{Q_k}} }}\,. \end{equation} Here, $\left\langle A \right\rangle _W$ denotes the generalized ensemble average of $A$ and $\left\langle A \right\rangle _k$ denotes the canonical ensemble average at temperature $T_k$.
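As a minimal numerical sketch of Eq.~(\ref{equ:expwr}), the generalized weight is best evaluated in log space to avoid overflow; the inverse temperatures and weighting factors below are illustrative placeholders (reduced units with $k_B = 1$):

```python
import math

# Sketch of the generalized weight W(U) = sum_k n_k exp(-beta_k U),
# evaluated in log space (log-sum-exp) for numerical stability.
def log_w(u, betas, log_n):
    """log W(U) for potential energy U, inverse temperatures betas,
    and log weighting factors log_n (plain Python lists)."""
    terms = [ln - b * u for ln, b in zip(log_n, betas)]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

betas = [1.0, 0.9, 0.8]   # beta_k = 1 / (k_B T_k), illustrative values
log_n = [0.0, 0.0, 0.0]   # n_k = 1 for illustration
print(log_w(-10.0, betas, log_n))
```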
The potential energy probability density of the generalized ensemble $P_W(U)$ is \begin{equation} \label{equ:equpw} {P_W}(U) = \frac{{n(U)W(r)}}{{\int {W(r)} dr}} = \frac{{\sum\limits_k {{n_k}{Q_k}{P_k}(U)} }}{{\sum\limits_k {{n_k}{Q_k}} }}\,, \end{equation} in which $n(U)$ is the density of states, and $P_k(U)$ is the potential energy probability density of the canonical ensemble at temperature $T_k$. In a special case, if ${n_k} = \frac{c}{Q_k}$ ($c$ is a nonzero constant), Eq.~(\ref{equ:equpw}) becomes \begin{equation} \label{equ:simpw} {P_W}(U) = \frac{{\sum\limits_k {{n_k}{Q_k}{P_k}(U)} }}{{\sum\limits_k {{n_k}{Q_k}} }} = \frac{1}{N}\sum\limits_k {{P_k}(U)}\,. \end{equation} Importantly, the properties of any canonical ensemble whose temperature is in the desired range, \emph{i.e.} $T_j\in[T_1,T_N]$ can be calculated by a reweighting scheme from the generalized ensemble by Eq.~(\ref{equ:reweight}) and Eq.~(\ref{equ:prewei}): \begin{equation} \label{equ:reweight} {\left\langle A \right\rangle _{\beta_j} } = \frac{{\int {A(r)} {e^{ - \beta_j U(r)}}dr}}{{\int {{e^{ - \beta_j U(r)}}dr} }} = \frac{{\int {\frac{{A(r){e^{ - \beta_j U(r)}}}}{{W(r)}}W(r)dr} }}{{\int {\frac{{{e^{ - \beta_j U(r)}}}}{{W(r)}}W(r)dr} }} = \frac{{{{\left\langle {\frac{{A(r){e^{ - \beta_j U(r)}}}}{{W(r)}}} \right\rangle }_W}}}{{{{\left\langle {\frac{{{e^{ - \beta_j U(r)}}}}{{W(r)}}} \right\rangle }_W}}}\,, \end{equation} \begin{equation} \label{equ:prewei} P_{\beta_j}(U) = \frac{n(U) e^{-\beta_j U}}{Q_{\beta_j}} = \frac{ e^{-\beta_j U}}{\sum \limits_k n_k e^{-\beta_k U}} \frac{1}{ \langle \frac{e^{-\beta_j U(r)}} {W(r)} \rangle _W} P_W(U)\,. \end{equation} In Eq.~(\ref{equ:prewei}), $P_{\beta_j}(U)$ denotes the potential energy probability density at inverse temperature $\beta_j$ and $Q_{\beta_j}$ denotes the partition function of canonical ensemble at inverse temperature $\beta_j$. 
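The reweighting of Eq.~(\ref{equ:reweight}) can be sketched on a synthetic toy trajectory; the sampled energies below are placeholders, not simulation output, and the helper names are ours:

```python
import math, random

# Sketch of the reweighting estimator: recover a canonical average at
# inverse temperature beta_j from samples drawn with weight W(r).
def reweight(samples, betas, log_n, beta_j):
    """samples: list of (A_value, U) pairs drawn from the generalized ensemble."""
    def log_w(u):
        terms = [ln - b * u for ln, b in zip(log_n, betas)]
        m = max(terms)
        return m + math.log(sum(math.exp(t - m) for t in terms))

    num = den = 0.0
    for a, u in samples:
        w = math.exp(-beta_j * u - log_w(u))   # e^{-beta_j U} / W
        num += a * w
        den += w
    return num / den

# Toy "trajectory": observable A is the potential energy itself.
random.seed(0)
samples = [(u, u) for u in (random.gauss(-5.0, 1.0) for _ in range(1000))]
betas, log_n = [1.0, 0.8], [0.0, 0.0]
print(reweight(samples, betas, log_n, beta_j=0.9))
```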
\par In an ITS simulation, the generalized distribution function of Eq.~(\ref{equ:expwr}) can be obtained by running a simulation with a modified potential $U'(r)$ at the desired temperature $T$. $U'(r)$ is defined through \begin{equation} \label{equ:defuprime} e^{-\beta U'(r)} = W(r)= \sum\limits_k {{n_k}{e^{ {-\beta _k}U(r)}}}\,, \end{equation} and can be written simply as: \begin{equation} \label{equ:expuprime} U'(r) = - \frac{1}{\beta}\ln{\sum\limits_k {{n_k}e^{{-\beta_k}U(r)}}}\,. \end{equation} The biased force $F_b$ used in the Newtonian equations of motion with the modified potential $U'(r)$ becomes \begin{equation} \label{equ:expbf} F_b = - \frac{\partial{U'(r)}}{\partial{r}} = -\frac{\partial{U'(r)}}{\partial{U(r)}}\frac{\partial{U(r)}}{\partial{r}} =\frac{\sum\limits_k{n_k \beta_k e^{-\beta_kU(r)}}}{\beta\sum\limits_k{n_ke^{-\beta_kU(r)}}}{F} . \end{equation} In Eq.~(\ref{equ:expbf}), $F$ is the force calculated using the original potential function of the system under study. To implement the ITS method in an MD software package, we only need to modify the integrator, which calculates the biased force by Eq.~(\ref{equ:expbf}), leaving other parts of the code, such as the subroutines for force calculation, unchanged. Therefore, the ITS method provides an easy and efficient way to sample a broad energy distribution. \subsection{How to determine $n_k$ and $\beta _k$} The key issue in the ITS method is how to determine the weighting factors $n_k$. In the original ITS method~\cite{gaoyq2008its2}, to calculate $n_k$, $m_k$ is defined as \begin{equation} \label{equ:defmk} m_k=\left\{\begin{array}{ll} 1 &\qquad k=1\\ \frac{n_k}{n_{k-1}} &\qquad k>1 \end{array}\right.\,, \end{equation} so $n_k$ can be obtained as the product of the $m_j$, \begin{equation} \label{equ:nkmk} n_k = n_1\prod\limits_{j=1}^k{m_j}\,, \end{equation} and $P_k^{con}$ is defined as the product of $n_k$ and $Q_k$, \begin{equation} \label{equ:defgk} P_k^{con}=n_k Q_k=n_k \int{e^{-\beta_k U(r) }dr}\,.
\end{equation} In practice, an initial guess of the $m_k$ is made, then short ITS simulations are performed and the $m_k$ are updated iteratively to make the $P_k^{con}$ of adjacent temperatures equal. The values of $n_k$ are determined by Eq.~(\ref{equ:nkmk}), and the target values of $n_k$ are simply $\frac{c}{Q_k}$ ($c$ is a nonzero constant). \par In this study, we propose an alternative way to obtain the values of $n_k$ quickly, easily, and without iteration. First, we define the energy $U_k^p$ (shown in Fig.~\ref{figure:upuq} (a)) at which the values of two adjacent terms in $W(r)$ are equal. It gives \begin{equation} \label{equ:defup} {n_k}{e^{ {-\beta _k}U_k^p}} = {n_{k+1}}{e^{ {-\beta _{k + 1}}U_k^p}}\,. \end{equation} We can easily obtain the expression for $U_k^p$ as \begin{equation} \label{equ:expup} U_k^p = \frac{{\ln {n_k} - \ln {n_{k + 1}}}}{{{\beta _k} - {\beta _{k + 1}}}}\,. \end{equation} As mentioned before, the terms in $W(r)$ are ordered by increasing temperature. From the mathematical properties of the exponential function, $U_k^p$ increases with temperature: \begin{equation} \label{equ:expup2} U_1^p < U_2^p < \ldots <U_k^p < \ldots < U_{N-1}^p\,. \end{equation} This sequence divides the energy axis into $N$ ranges. Provided that the energy $U$ lies in the range $U_{k-1}^p < U < U_{k}^p$, the $k$-th term in $W(r)$ is the largest one (as illustrated in Fig.~\ref{figure:upuq} (a)): \begin{equation} \label{equ:expup3} {n_1}{e^{ {-\beta _1}U}} < {n_2}{e^{ {-\beta _2}U}} < \ldots < {n_k}{e^{ {-\beta _k}U}} > \ldots > {n_N}{e^{ {-\beta _N}U}}\,. \end{equation} If we define the weighting functions by \begin{equation} \label{equ:expup4} f_k^W(U) = \frac{n_ke^{-\beta_kU}}{\sum\limits_m n_me^{-\beta_mU}}\,, \end{equation} the weighting function $f_k^W$ attains its maximum in the range $U_{k-1}^p < U < U_{k}^p$ and normally decays rapidly as the energy $U$ moves away from this range; in the other ranges, the value of $f_k^W$ can be rather small, even negligible.
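Numerically, the weighting functions of Eq.~(\ref{equ:expup4}) form a softmax over $\ln n_k - \beta_k U$, and the force-scaling factor in Eq.~(\ref{equ:expbf}) is their $\beta_k$-weighted average divided by $\beta$; a minimal sketch with illustrative parameter values:

```python
import math

# f_k^W(U) is a softmax over (ln n_k - beta_k U); the biased-force scaling
# factor is then sum_k f_k^W(U) * beta_k / beta.
def f_weights(u, betas, log_n):
    terms = [ln - b * u for ln, b in zip(log_n, betas)]
    m = max(terms)
    z = sum(math.exp(t - m) for t in terms)
    return [math.exp(t - m) / z for t in terms]

def force_scale(u, betas, log_n, beta):
    """Factor multiplying the original force F to give the biased force F_b."""
    f = f_weights(u, betas, log_n)
    return sum(fk * bk for fk, bk in zip(f, betas)) / beta

betas, log_n = [1.0, 0.9, 0.8], [0.0, 0.0, 0.0]
print(f_weights(-10.0, betas, log_n))               # lowest-T term dominates
print(force_scale(-10.0, betas, log_n, beta=1.0))   # a value slightly below 1
```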
This property indicates that in the range $U_{k-1}^p < U < U_{k}^p$, the value of $W(r)$ is dominated by its $k$-th term, and the generalized ensemble resembles the canonical ensemble at temperature $T_k$. \par We then define the energy $U_k^q$ (shown in Fig.~\ref{figure:upuq} (b)) by the condition that the potential energy probability density function of the canonical ensemble at temperature $T_k$ is equal to that of the canonical ensemble at temperature $T_{k+1}$, that is, ideally we have \begin{equation} \label{equ:defuq} P_k(U_k^q) = P_{k+1}(U_k^q)\,. \end{equation} Eq.~(\ref{equ:defuq}) can be written as: \begin{equation} \frac{{n(U_k^q){e^{ {-\beta _k}U_k^q}}}}{{{Q_k}}} = \frac{{n(U_k^q){e^{ {-\beta _{k + 1}}U_k^q}}}}{{{Q_{k + 1}}}}\,. \end{equation} Then, we obtain the expression for $U_k^q$ as \begin{equation} \label{equ:expuq} U_k^q = \frac{{\ln {Q_{k + 1}} - \ln {Q_k}}}{{{\beta _k} - {\beta _{k + 1}}}}\,. \end{equation} Provided that the potential energy average of the system increases with temperature, $U_k^q$ also increases with temperature, \emph{i.e.} $U_1^q < U_2^q < \ldots < U_k^q < \ldots < U_{N-1}^q$. Similarly, the function $P_k(U)$ attains its maximum in the range $U_{k-1}^q < U < U_{k}^q$. \par To optimize the energy distribution generated in an ITS simulation, when $W(r)$ is dominated by the $k$-th term in the range $U_{k-1}^p < U < U_{k}^p$, the maximum of the potential energy probability density function should lie in the same range, that is, \begin{equation} \label{equ:optcon} U_k^p = U_k^q\,. \end{equation} If we substitute Eq.~(\ref{equ:expup}) and Eq.~(\ref{equ:optcon}) into Eq.~(\ref{equ:expuq}), we can conclude that \begin{equation} \label{equ:simnk} {n_k} = \frac{c}{{{Q_k}}}\,.
\end{equation} Eq.~(\ref{equ:simnk}) is consistent with the result reported for the original ITS method, indicating that the optimizing condition presented here is essentially identical to that proposed in Ref.~\cite{gaoyq2008its2}. If we only substitute Eq.~(\ref{equ:expup}) into Eq.~(\ref{equ:optcon}), we obtain the recursive relation for $n_k$: \begin{equation} \label{equ:expnk} \ln {n_k} - \ln {n_{k + 1}} = U_k^q({\beta _k} - {\beta _{k + 1}})\,. \end{equation} In Eq.~(\ref{equ:expnk}), $n_1$ can simply be set to 1, and $U_k^q$ can be estimated in the following way: \begin{equation} \label{equ:appuq} U_k^q = \frac{{\ln {Q_{k + 1}} - \ln {Q_k}}}{{{\beta _{k}} - {\beta _{k + 1}}}} \approx - \frac{1}{2}(\frac{{\partial \ln {Q_k}}}{{\partial {\beta _k}}} + \frac{{\partial \ln {Q_{k + 1}}}}{{\partial {\beta _{k + 1}}}}) = \frac{1}{2}({\left\langle U \right\rangle _k} + {\left\langle U \right\rangle _{k + 1}})\,. \end{equation} In Eq.~(\ref{equ:appuq}), the slope of a secant line is approximated by the average of the slopes of the tangent lines at its two endpoints. The potential energy averages can be evaluated through conventional MD simulations. According to Eq.~(\ref{equ:expnk}) and Eq.~(\ref{equ:appuq}), we can easily determine the values of $n_k$ one by one without estimating the partition functions. \par The temperature distribution is crucial to the energy distribution generated in an ITS simulation and strongly affects the efficiency of the ITS method. Here we also propose an easy way to determine a reasonable temperature distribution: we require that the ratio between the energy probability density functions at two adjacent temperatures equal a constant $t$ when the energy is equal to the potential energy average at the lower temperature, \begin{equation} \label{equ:defbk} \frac{{{P_k}({{\left\langle U \right\rangle }_k})}}{{{P_{k + 1}}({{\left\langle U \right\rangle }_k})}} = t\,.
\end{equation} In Eq.~(\ref{equ:defbk}), the parameter $t$ is called the overlap factor; it is related to the spacing between two adjacent temperatures and the total number of temperatures in the desired temperature range. Eq.~(\ref{equ:defbk}) can be rewritten as \begin{equation} \frac{\frac{n({\left\langle U \right\rangle }_k) e^{-\beta_k{\left\langle U \right\rangle }_k}}{Q_k}} {\frac{n({\left\langle U \right\rangle }_{k}) e^{-\beta_{k+1}{\left\langle U \right\rangle }_k}}{Q_{k+1}}}= t\,. \end{equation} Through a simple derivation, one obtains the recursive relation for the inverse temperature, \begin{equation} \label{equ:expbk} {\beta _k} - {\beta _{k + 1}} = \frac{{\ln t}}{{U_k^q - {{\left\langle U \right\rangle }_k}}}\,. \end{equation} Eq.~(\ref{equ:expbk}) contains only one adjustable parameter, $t$. Once the overlap factor $t$ is determined, the temperature distribution is completely determined. \par Because the idea of the ITS method is quite similar to that of the replica exchange method~\cite{sugita1999remd,okabe2001remc}, we choose the value of the overlap factor $t$ by comparing ITS to REM. REM is based on simultaneous simulations of multiple replicas of the same system at different temperatures. At regular intervals, the $N$ independent simulations are allowed to exchange temperatures with each other with the acceptance ratio defined in Eq.~(\ref{equ:remdex}). In this way, low temperature replicas can gradually migrate up to higher temperatures and back again. \begin{equation} \label{equ:remdex} P_{acc}(U_k,\beta_k \leftrightarrow U_{k+1},\beta_{k+1}) = \min\{1,e^{(\beta_{k+1} - \beta_{k})(U_{k+1} -U_{k})} \}\,. \end{equation} In Eq.~(\ref{equ:remdex}), $U_k$ and $U_{k+1}$ are the potential energies at temperatures $T_k$ and $T_{k+1}$, respectively.
For efficient REM simulations, the choice of temperatures should guarantee sufficient overlap between all adjacent pairs over the entire temperature range and give the same mean acceptance ratio between those adjacent pairs. Various approaches to optimizing the temperature distribution of REM simulations have been proposed. Sanbonmatsu and Garc\'{\i}a performed short simulations at a few temperatures, fitted the average energies with a polynomial, and determined the temperature distribution by solving Eq.~(\ref{equ:remdex}) iteratively~\cite{sanbonmatsu2001remd}. de Pablo and coworkers presented a similar approach and demonstrated that, under the assumption that the energy probability density function is Gaussian, the relation between the acceptance ratio $P_{acc}$ and the overlap of the energy probability density functions at two adjacent temperatures is system independent~\cite{pablo2004opt}. \par For the ITS method, substituting Eq.~(\ref{equ:appuq}) into Eq.~(\ref{equ:expbk}), we get: \begin{equation} \label{equ:simbk} {\beta _k} - {\beta _{k + 1}} = \frac{{2\ln t}}{{ {\left\langle U \right\rangle }_{k+1} - {\left\langle U \right\rangle }_k }}\,, \end{equation} so \begin{equation} \label{equ:itsex} e^{(\beta_{k+1} - \beta_{k})( {\left\langle U \right\rangle }_{k+1} - {{\left\langle U \right\rangle }_k}) } = t^{-2}\,. \end{equation} The left side of Eq.~(\ref{equ:itsex}) is the mean acceptance ratio in REM simulations. We assume that if a set of temperatures gives a reasonable acceptance ratio in REM simulations, there is sufficient overlap between adjacent temperatures, and the same set of temperatures will also work in an ITS simulation. Thus, assigning the left side of Eq.~(\ref{equ:itsex}) a proper value in the range of $0\sim 1$ determines the value of the overlap factor $t$; the complete temperature distribution then follows from Eq.~(\ref{equ:expbk}).
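The parameter generation implied by Eq.~(\ref{equ:expbk}), Eq.~(\ref{equ:appuq}) and Eq.~(\ref{equ:expnk}) can be sketched end to end; the linear model for $\langle U\rangle(\beta)$ below is a toy assumption standing in for the interpolated curve from short simulations, and the damped fixed-point solver is one possible way to solve the implicit recursion:

```python
import math

# Toy model for the average potential energy as a function of beta; in
# practice this comes from interpolating short MD/REM runs.
def avg_u(beta):
    return -100.0 * beta

def build_ladder(beta_hi, beta_lo, ln_t):
    """Build (betas, log_n) from the highest inverse temperature downward.
    Spacing: beta_k - beta_{k+1} = 2*ln(t) / (<U>_{k+1} - <U>_k),
    weights: ln n_{k+1} = ln n_k - U_k^q * (beta_k - beta_{k+1})."""
    betas, log_n = [beta_hi], [0.0]          # n_1 = 1
    while betas[-1] > beta_lo:
        b_k = betas[-1]
        d = 0.05                             # damped fixed-point iteration for the spacing
        for _ in range(200):
            du = avg_u(b_k - d) - avg_u(b_k)         # <U>_{k+1} - <U>_k
            d = 0.5 * (d + 2.0 * ln_t / du)
        b_next = b_k - d
        u_q = 0.5 * (avg_u(b_k) + avg_u(b_next))     # secant-slope estimate of U_k^q
        log_n.append(log_n[-1] - u_q * (b_k - b_next))
        betas.append(b_next)
    return betas, log_n

betas, log_n = build_ladder(beta_hi=1.0, beta_lo=0.7, ln_t=0.5)
print([round(b, 3) for b in betas])   # spacing ~0.1 for this toy model
```

For this linear toy model the spacing has the closed form $\Delta\beta = \sqrt{2\ln t/100}$, which the fixed-point iteration reproduces.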
\par \subsection{Computational procedure} We propose a new computational procedure for ITS simulations. \begin{enumerate} \item Determine the desired temperature range. \item Choose a set of temperatures in the desired range and generate short replica exchange simulation trajectories to calculate the potential energy averages of the system at those temperatures. For simple systems, conventional MD or MC simulations can also be used. The relation between the potential energy average and temperature is then obtained by interpolation. \item \label{item:param}Determine the ITS temperature distribution and the corresponding weighting factors $n_k$ through Eq.~(\ref{equ:expnk}), Eq.~(\ref{equ:appuq}) and Eq.~(\ref{equ:expbk}). \item Use the parameters generated in step~\ref{item:param} to perform the ITS simulation, which is essentially a conventional MD simulation using the biased force calculated by Eq.~(\ref{equ:expbf}). \item After the ITS simulation, canonical ensemble properties can be calculated by Eq.~(\ref{equ:reweight}) and Eq.~(\ref{equ:prewei}). \end{enumerate} \par \section{Applications}\label{sec:application} \subsection{Lennard-Jones fluid} The Lennard-Jones (LJ) fluid is a widely used benchmark system~\cite{ko1993lj}. To test the validity of our ITS method, we consider the LJ fluid system reported in Ref.~\cite{js1993lj} and compare our results with the literature data. The LJ potential is \begin{equation} \label{equ:lj} {U_{LJ}}(r) = 4\varepsilon [{(\frac{\sigma }{r})^{12}} - {(\frac{\sigma }{r})^6}]. \end{equation} The system contains $864$ particles. In our simulations of LJ particles, conventional reduced units are used. The number density is $\rho=0.8$ and the cutoff distance is $r_c=4.0$. An integration timestep of $0.001$ is used. The long range correction $U_{tail}$~\cite{allen1989bible} is applied, with \begin{equation} {U_{tail}} = \frac{8}{9}\pi \rho [{(\frac{\sigma }{{{r_c}}})^9} - 3{(\frac{\sigma }{{{r_c}}})^3}]\,.
\end{equation} \par First, we perform a set of conventional (canonical) MD simulations at different temperatures to obtain the potential energy versus temperature curve. Because our purpose is to test the validity of our newly-proposed ITS procedure, we try to obtain the potential energy curve as accurately as possible, so $15$ temperatures are chosen in the range of $1.4\sim 2.0$. At each temperature, we run a $1\times10^6$ step canonical ensemble simulation to calculate the potential energy average after equilibration. Linear interpolation is applied to obtain the potential energy average over the desired temperature range. The temperature distribution $\beta_k$ is then obtained by solving Eq.~(\ref{equ:expbk}). The overlap factor $t$ is set to $e^{0.5}$, which generates $9$ temperatures in the range of $1.4\sim 1.91$. The corresponding weighting factors $n_k$ are determined using Eq.~(\ref{equ:expnk}). After that, we perform a $1\times10^7$ step ITS simulation and calculate the canonical average of the potential energy at three different temperatures through the reweighting scheme of Eq.~(\ref{equ:reweight}). The results are shown in TABLE~\ref{table:ljpot}. The potential energies per particle at different temperatures are in good agreement with the literature data~\cite{js1993lj}. These results validate our newly proposed ITS method. \par We also calculate the potential energy probability densities of the canonical ensembles at $T_5$ and $T_6$ through the reweighting scheme of Eq.~(\ref{equ:prewei}). These two temperatures are chosen because they lie in the middle of the temperature range $1.4\sim 1.91$. The results are shown in Fig.~\ref{figure:ljpotdis}. We can see that the two curves overlap substantially. Moreover, the value of $U_5^q$ read from Fig.~\ref{figure:ljpotdis} (the crossing point of the two curves) is $-4227.7$, in agreement with the value estimated by Eq.~(\ref{equ:appuq}) and used in the ITS simulation, $-4225.2$.
Therefore, our method can ensure sufficient energy distribution overlap between adjacent temperatures, and the approximation used in Eq.~(\ref{equ:appuq}) yields good accuracy. \subsection{ALA-PRO peptide in implicit solvent} We then apply the ITS method to study the trans/cis transition of the ALA-PRO peptide. Real units are used in the following. For comparison, we also use replica exchange molecular dynamics (REMD) and conventional MD to study the conformational transition of this peptide. \par The structure of the ALA-PRO peptide is shown in Fig.~\ref{figure:ala-pro}. The dihedral angle $\omega$ indicated in Fig.~\ref{figure:ala-pro} can be taken as the reaction coordinate of the trans/cis transition of this peptide. In our simulations, we use a modified GROMACS $4.5.5$ package~\cite{hess2008gromacs} and the AMBER $99$sb force field~\cite{sorin2005amber99sb}. The generalized Born solvent accessible surface area (GBSA) implicit solvent model~\cite{tsui2000gbsa} is adopted. The LINCS~\cite{hess1997lincs} algorithm is used to constrain all bonds containing hydrogen atoms. In all simulations, the integration timestep is set to $1$ fs. We perform $1$ $\mu$s simulations using the ITS, REMD and conventional MD methods, respectively. In the REMD simulation, eight replicas with temperatures of $283$, $335$, $400$, $478$, $564$, $680$, $805$, and $905$ K are used, which results in an exchange acceptance ratio of roughly $50\%$. The exchange attempt frequency is $1$ ps$^{-1}$ in the REMD simulation. The same initial structure is used for the ITS, REMD and conventional MD simulations. \par To test the robustness of the ITS method, the potential energy average curve is obtained from the first $10$ ps of the REMD simulations (averaging over 10 frames), which is clearly only a rough estimate for determining $n_k$. In our ITS simulation, the overlap factor $t$ is set to $e^{0.05}$, which generates 22 temperatures in the range from $283.0$ to $948.82$ K.
The potential energy averages calculated from the 10 ps and $1$ $\mu$s REMD simulations and from the ITS simulation are shown in TABLE~\ref{table:enerpot}. It is clear that the potential energy averages calculated from the $10$ ps REMD trajectory are not accurate enough, whereas the potential energy averages calculated from the $1~\mu$s REMD and ITS simulations are in good agreement. The visited potential energies in the ITS, REMD, and conventional MD simulations are shown in Fig.~\ref{figure:potener}. The ITS method can explore a broad range of potential energy as effectively as the REMD method, whereas conventional MD can only explore a limited potential energy range. Importantly, although the $n_k$ values are obtained from potential energy averages of limited accuracy (i.e., from only 10 ps REMD trajectories), the ITS method is still quite efficient at exploring the configuration space. This implies that several short simulations at different temperatures suffice for estimating $n_k$. We denote the $\langle U \rangle _k $ and $U_k^{q}$ obtained from short simulations as $\langle U \rangle _k ^{s}$ and $U_k^{qs}$, and the true values of $\langle U \rangle _k $ and $U_k^{q}$ as $\langle U \rangle _k ^{t} $ and $U_k^{qt}$, respectively. The optimizing condition for the potential energy distribution is that when $W(r)$ is dominated by the $k$-th term in the range $U_{k-1}^p < U < U_{k}^p$, the maximum of the potential energy probability density should lie in the same range. In fact, this optimizing condition is achieved whenever the following condition is satisfied: \begin{equation} \label{equ:potlim} U_{k} ^{qs}<\langle U \rangle _{k+1} ^{t} <U_{k+1} ^{qs} (k=1,2,\cdots,N-2) \,. \end{equation} Thus, the potential energy averages obtained from short simulations can vary over a rather wide range and need not be very accurate.
\par To clarify the influence of the overlap factor $t$ on ITS simulations, we also try different values of $t$ for this dipeptide system. The standard deviation of the potential energy, $\sigma_d$, is employed to characterize the width of the energy range visited in the ITS simulations: \begin{equation} \label{equ:ermsd} \sigma_d = \sqrt{\langle U^2 \rangle - {\langle U \rangle}^2 }\,. \end{equation} The results are shown in TABLE~\ref{table:t}. When $t$ is set to $e^{29.8}$, the overlap between adjacent canonical ensembles is very small, so the energies of the system under study are trapped in a narrow range. As $t$ decreases, the system can visit a broader range of energies. Fig.~\ref{figure:potdist} shows the potential energy distributions generated by $t=e^{5.0}$ (corresponding to $3$ temperatures) and $t=e^{0.05}$ (corresponding to $22$ temperatures). Because the overlap between the $3$ temperatures (when $t=e^{5.0}$) is insufficient, there are clearly $3$ peaks in the potential energy distribution, and the peak corresponding to the lowest temperature is very high, indicating a low sampling efficiency in the high-energy range. For $t= e^{0.05}$ ($22$ temperatures), a more uniform potential energy distribution is generated. A good choice of the overlap factor should therefore ensure sufficient overlap between adjacent temperatures. \par To illustrate the sampling efficiency in configuration space, we compare the dihedral angle ($\omega$) distributions obtained in the ITS, REMD and conventional MD simulations, as shown in Fig.~\ref{figure:omega}. Because the free energy barrier for the trans/cis transition of this dipeptide is rather high, no transition occurs in the conventional MD simulation, and a unimodal $\omega$ distribution is observed. In the ITS as well as the REMD simulations, bimodal distributions of $\omega$ are observed. This result indicates that both the ITS and REMD methods can overcome high free energy barriers and enhance the sampling of configuration space.
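The width measure of Eq.~(\ref{equ:ermsd}) is straightforward to evaluate from a sampled potential-energy time series; a minimal sketch (the function name is ours, not from any package):

```python
import numpy as np

# Sketch: Eq. (ermsd) -- the standard deviation of the sampled potential
# energies, used above to characterize the width of the energy range
# visited in an ITS run.
def energy_width(potential_energies):
    u = np.asarray(potential_energies, dtype=float)
    return np.sqrt(np.mean(u ** 2) - np.mean(u) ** 2)
```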
\par To further compare the sampling efficiency of the ITS and REMD methods, the root mean square deviation (RMSD) of the potential of mean force (PMF) along the reaction coordinate ($\omega$) is investigated. The PMF along the dihedral angle ($\omega$) is defined as \begin{equation} \label{equ:defpmf} F^{pmf}(\omega) = -k_BT\ln{\langle \rho(\omega) \rangle }\,. \end{equation} In Eq.~(\ref{equ:defpmf}), $\langle \rho(\omega) \rangle$ is the average density function defined in Eq.~(\ref{equ:defrho}), \begin{equation} \label{equ:defrho} \langle \rho(\omega) \rangle = \frac{\int{\delta(\omega'(r)-\omega) e^{-\frac{U(r)}{k_BT} }dr} }{\int{e^{-\frac{U(r)}{k_BT} } dr} }\,, \end{equation} which can be calculated through a reweighting scheme using Eq.~(\ref{equ:reweight}). Fig.~\ref{figure:pmfrmsd} shows the time evolution of the RMSD of the PMF. The RMSD of the PMF converges much more quickly in the ITS simulation than in the REMD simulation. This result implies that the ITS method is more efficient than REMD in sampling the configuration and energy space. \par Another important advantage of ITS simulation is that it requires fewer computational resources. In a REMD simulation it is necessary to launch several simulations simultaneously, while in an ITS simulation only one trajectory is needed. The computational resources required by ITS are almost the same as for a conventional MD simulation. In this dipeptide case, the CPU time for the REMD simulation is about $263$ hours (eight trajectories in total) and for the ITS simulation about $35$ hours (only one trajectory is needed). Even for this simplest peptide system, the REMD simulation is thus nearly $8$ times as computationally expensive. \subsection{Coil-globule transition of a flexible single polymer chain} The transition of a flexible polymer chain from a random-coil conformation to a globular compact form has been extensively studied~\cite{liting2010coil,liang2000coil,seaton2010coil}.
In order to demonstrate the applicability of this ITS method, we apply it to study the coil-globule transition of a flexible single polymer chain in implicit solvent. By calculating the mean square radius of gyration ($\langle Rg^2\rangle$) of the polymer chain at different temperatures, we can determine the transition temperature of the coil-globule transition. \par Conventional reduced units are used in the following. We consider a coarse-grained model of a polymer with $100$ beads connected by the finite extensible nonlinear elastic (FENE) potential, \begin{equation} \label{equ:fene} U_{FENE}(r) = \left\{ \begin{array}{ll} -\frac{1}{2}K{r_0}^2\ln({1.0-\frac{r^2}{{r_0}^2}}) + U_{WCA} &\qquad r< r_0\\ \infty &\qquad r\ge r_0 \end{array} \right. \,, \end{equation} in which \begin{equation} \label{equ:wca} U_{WCA}(r) = \left\{ \begin{array}{ll} 4 \varepsilon_{WCA} \left[ \left( \frac{\sigma_{WCA}}{r} \right)^{12} - \left( \frac{\sigma_{WCA}}{r} \right)^{6} \right] + \varepsilon_{WCA} &\qquad r< 2^{\frac{1}{6}}\sigma_{WCA}\\ 0 &\qquad r \ge 2^{\frac{1}{6}}\sigma_{WCA} \end{array} \right.\, \end{equation} and $r_0$ is the bond extension parameter, $K$ is the attractive force strength, and $r$ is the instantaneous bond length. We set $\sigma_{WCA}=1.05$, $\varepsilon_{WCA}=1.0$, $r_0=1.5$, and $K=20$. The Lennard-Jones potential in Eq.~(\ref{equ:lj}) is used between non-bonded beads with $\sigma=1.0$ and $\varepsilon=1.0$. For comparison, we perform both ITS and MD simulations with the same initial chain configuration and the same parameter set. \par The potential energy curve is obtained from $1.0\times10^6$-step MD simulations at $8$ temperatures in the range of $1.0\sim 4.5$. We then perform a $1.0\times10^9$-step ITS simulation, for which the overlap factor $t$ is set to $e^{0.5}$ and $18$ temperatures are generated in the range of $1.0\sim 4.35$. By performing one ITS simulation, we can obtain $\langle Rg^2\rangle$ at any temperature in this range.
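The bonded potentials of Eqs.~(\ref{equ:fene}) and (\ref{equ:wca}), with the parameter values quoted above, can be sketched as follows (an illustration in reduced units, not the simulation code):

```python
import math

# Illustrative sketch of Eqs. (fene) and (wca) in reduced units, with
# the parameter values quoted in the text:
# sigma_WCA = 1.05, eps_WCA = 1.0, r0 = 1.5, K = 20.
SIG, EPS, R0, K = 1.05, 1.0, 1.5, 20.0
R_CUT = 2.0 ** (1.0 / 6.0) * SIG   # WCA cutoff, 2^(1/6) * sigma

def u_wca(r):
    """Purely repulsive WCA potential; zero at and beyond the cutoff."""
    if r >= R_CUT:
        return 0.0
    sr6 = (SIG / r) ** 6
    return 4.0 * EPS * (sr6 * sr6 - sr6) + EPS

def u_fene(r):
    """FENE bond potential; diverges as r approaches the extension limit r0."""
    if r >= R0:
        return math.inf
    return -0.5 * K * R0 ** 2 * math.log(1.0 - (r / R0) ** 2) + u_wca(r)
```

Note that the shifted WCA term goes to zero continuously at the cutoff, while the logarithmic FENE term diverges at the bond-extension limit $r_0$, preventing bond breaking.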
By calculating the first-order derivative, the transition temperature can be identified, as shown in Fig.~\ref{figure:rg2t} for $\langle Rg^2\rangle$ evaluated at $300$ temperatures. As the temperature increases, the value of $\langle Rg^2\rangle$ also increases. Because the $\langle Rg^2\rangle$ curve produced by the ITS simulation is smooth and of high resolution, we calculate the first-order and second-order derivatives directly by the finite-difference method. The peak of the first-order derivative of $\langle Rg^2\rangle \sim T$ corresponds to the transition temperature. By calculating the second-order derivative, we easily identify the transition temperature as $2.42$ for the coil-globule transition of a polymer chain with 100 beads. \par For comparison, we also use brute-force canonical MD simulations to study the coil-globule transition of this polymer chain. We perform $31$ MD simulations at $31$ temperatures spaced by $0.1$ in the range of $1.0\sim 4.0$. The length of each simulation is $1.0\times10^9$ timesteps. As shown in Fig.~\ref{figure:rg2t}, the curve of $\langle Rg^2\rangle\sim T$ produced by the canonical MD simulations essentially overlaps with the one produced by the ITS simulation. Because the resolution of the curve produced by the MD simulations is low and the noise in the data is significant, we employ a polynomial fitting method to calculate the derivative. We try different polynomial orders, and find that an 8th-order polynomial best reproduces the results of the ITS simulation. Because the computational cost of an ITS simulation is nearly the same as that of a conventional MD simulation, and one ITS simulation yields more accurate, higher-resolution data than a large set of canonical MD simulations, the ITS method is much more efficient than conventional MD at identifying the coil-globule transition temperature of a polymer chain. These results indicate that the ITS method can be used to study complex polymer systems with high efficiency and accuracy.
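The derivative-based identification of the transition temperature can be sketched as follows; the $\langle Rg^2\rangle(T)$ data below are synthetic (a smooth step centered at $T=2.42$, mimicking the reported transition), not simulation output:

```python
import numpy as np

# Sketch: locate a coil-globule transition temperature as the peak of
# d<Rg^2>/dT, computed by finite differences on a smooth <Rg^2>(T) curve.
# The curve below is synthetic (hypothetical tanh step centered at
# T = 2.42), standing in for the high-resolution ITS result.
T = np.linspace(1.0, 4.0, 300)
rg2 = 5.0 + 4.0 * np.tanh((T - 2.42) / 0.3)   # hypothetical <Rg^2>(T)

drg2 = np.gradient(rg2, T)          # first-order derivative by differences
T_transition = T[np.argmax(drg2)]   # peak of d<Rg^2>/dT marks the transition
```

For the noisy brute-force MD curve, the same derivative would instead be taken on a polynomial fit (e.g. `np.polyfit`), as described above.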
\section{Conclusion}\label{sec:conclusion} In this study, we present a new version of the ITS method that provides an easy, quick and robust way to generate suitable parameters. In this method, the only input is the potential energy average over the desired range of temperatures. Reasonable values of the weighting factors $n_k$ can be obtained directly from the potential energy averages without iteration, even when these averages are not very accurate. It is also easy to determine a temperature distribution with sufficient overlap between adjacent temperatures by choosing a reasonable overlap factor. This method is very efficient for exploring configuration space and calculating thermodynamic quantities. By running one ITS simulation (i.e., one trajectory), we can sample essentially the same configuration space as REMD simulations over the same temperature range. But in the ITS method we do not need to launch tens of parallel simulations simultaneously, so it is particularly suitable for implementation in GPU versions of typical simulation packages for enhanced sampling. \par In the proposed method, to determine the weighting factors in an ITS simulation, we use the optimizing condition that when $W(r)$ is dominated by the $k$-th term in the range $U_{k-1}^p < U < U_{k}^p$, the maximum of the potential energy probability density should lie in the same range. This condition can easily be satisfied even when the potential energy averages are not very accurate, so we can estimate these parameters from short MD simulation trajectories. \par The glass transition is a fundamental and challenging problem in solid state physics and also an important phenomenon in materials science. The debate about whether the glass transition is a thermodynamic phase transition or a dynamic phenomenon has lasted for decades~\cite{gibbs1958thermo,gordon1976dynamic,elenius2010thermo}.
Because this ITS method is a powerful tool for calculating thermodynamic quantities, we hope it will contribute to solving the problem of the glass transition. \par \section*{Acknowledgements} This work is supported by the National Basic Research Program of China (973 Program, 2012CB821500) and the National Science Foundation of China ($21025416$, $50930001$). \bibliographystyle{model1-num-names}
\section{Introduction} \label{S-Introduction} Explosive expansion of coronal structures associated with CME/flare eruptions frequently creates large-scale large-amplitude waves and shocks in the solar corona (for recent reviews, presenting a detailed overview of various aspects of this phenomenon, see \opencite{warmuth07}; \opencite{vrs08cliver}; \opencite{wills09}; \opencite{warmuth10}; \opencite{gallagher11}; \opencite{zhukov11}; \opencite{patsourakos12}). These global disturbances are observed as EUV coronal waves, chromospheric Moreton waves, type II radio bursts, moving soft X-ray and/or radio sources (see \opencite{warmuth04a}; \opencite{vrs06shock3nov}; \opencite{olmedo12}), as well as sharp fronts in white-light coronagraphic CME images ({\it e.g.} \opencite{ontiveros09}). In recent years, this phenomenon has been the subject of many studies that focused on various observational and theoretical aspects, including the morphology, kinematics, source-region characteristics, shock formation, three-dimensional propagation, {\it etc.} (for a brief overview of recent research activities see, {\it e.g.}, Section 8 in \opencite{IAU09} and Section 9 in \opencite{IAU12}). At low coronal heights, where disturbances are observed in the EUV range, waves usually become recognizable at a distance of $\approx$\,100\,--\,200 Mm from the source active region ({\it e.g.} \opencite{veronig08}; \opencite{patsourakos09}; \opencite{ines11}; \opencite{muhr11}). Thus, EUV waves are observed while propagating through the quiet corona, where the magnetic field is predominantly vertical. Consequently, a low-coronal wave segment can be considered as a perpendicular magnetohydrodynamic (MHD) wave (magnetosonic wave). Typical velocities of EUV waves are a few hundred km\,s$^{-1}$ (for details see, {\it e.g.}, \opencite{thompson09}; \opencite{warmuth11} and references therein).
New detailed observations reveal that the wave amplitude initially increases and at the same time the wave accelerates. Eventually, after a phase of approximately constant speed, the wave decelerates to velocities typically around 200\,--\,300 km\,s$^{-1}$ ({\it e.g.} \opencite{long08}; \opencite{muhr12}; \opencite{temmer12}), where faster waves show a stronger deceleration \cite{liu10,kozarev11,ma11,warmuth11,cheng12,olmedo12}. It was also found that waves of higher speed have larger amplitude \cite{ines11}. During the constant-speed and deceleration stage, the amplitude of the perturbation decreases whereas its profile broadens. Such behavior is usually interpreted as a typical signature of a freely propagating ``simple wave" (for terminology we refer to \opencite{vrs05EOS}; \opencite{warmuth07}). The fastest waves are frequently accompanied by type II radio bursts ({\it e.g.} \opencite{klassen00}; \opencite{biesecker02}; \opencite{warmuth04b}; \opencite{veronig06}; \opencite{vrs06shock3nov}; \opencite{muhr10}; \opencite{ma11}; \opencite{kozarev11}), which reveal the formation of a coronal MHD shock. Such waves may also generate Moreton waves ({\it e.g.} \opencite{warmuth04a}; \opencite{vrs06shock3nov}; \opencite{muhr10}; \opencite{asai12}; \opencite{shen12}) if the pressure jump at the shock front is strong enough to push the inert chromospheric plasma downwards, {\it i.e.} if the shock amplitude is high enough. Generally, coronal waves and shocks could be generated by the source-region expansion either related to a coronal mass ejection (CME), or a pressure pulse caused by the flare-energy release (for a discussion see \opencite{vrs08cliver}). 
Whereas in many events the source-region expansion could be clearly identified with the impulsive-acceleration stage of a CME ({\it e.g.} \opencite{patsourakos10}; \opencite{veronig10}; \opencite{grechnev11}; \opencite{kozarev11}), in some cases there are indications that the shock is initiated by a flare ({\it e.g.} \opencite{vrs06shock3nov}; \opencite{magdalenic10}; \opencite{magdalenic12}). Whatever the driver is, perpendicular MHD shocks are created by plasma motion perpendicular to the magnetic field. For example, a supersonic motion of small-scale ejecta would produce a shock (see, {\it e.g.}, \opencite{klein99}), in a similar manner to that in which supersonic projectiles create shocks in air. However, in the solar corona a much more suitable process is a source-region expansion, which acts as a three-dimensional (3D) piston. If the expansion is impulsive enough, it creates a large-amplitude perturbation whose leading edge steepens due to non-linear effects, {\it i.e.} wave elements of higher amplitude move faster. Eventually, a discontinuity occurs in the wavefront profile, meaning that the shock is formed. Whereas the 1D MHD piston problem (planar wave) can be solved analytically \cite{mann95,V&L00a}, an analogous 2D or 3D problem can be treated analytically only by applying severe assumptions and approximations, and even then a numerical evaluation is needed (see, {\it e.g.}, \opencite{zic08}; \opencite{afanasyev11}). Thus, numerical MHD simulations are required to study the 2D and 3D piston mechanism of magnetosonic-wave generation. Bearing in mind that the wave formation and evolution are strongly influenced, or even dominated, by the physical properties of the environment and the characteristics of the driver itself, there are two alternatives in approaching this complex physical problem.
One way is to set up the initial conditions as closely as possible to the real situation in which a particular wave occurred, and to perform a full 3D simulation that provides a detailed quantitative analysis of the specific event. Such an approach, which provides detailed insight into the physics behind a particular event, including coronal diagnostics, was taken by, {\it e.g.}, \inlinecite{uchida73}, \inlinecite{wang00}, \inlinecite{wu01}, \inlinecite{ofman02}, \inlinecite{ofman07}, \inlinecite{cohen09}, \inlinecite{schmidt10}, \inlinecite{downs11}, \inlinecite{selwa12}. Another way is to start from a somewhat simplified initial situation, which allows more extensive parametric studies and gives a more general view of the problem. In this type of simulation the CME is usually represented by an erupting 2D structure, anchored in the inert photosphere (see, {\it e.g.}, \opencite{chen02}, \opencite{chen05}, \opencite{pomoell08}, \opencite{wang09}). In this article we consider some simple initial configurations/geometries to gain insight into the most basic characteristics of the nonlinear processes governing MHD wave formation and evolution in general. The idea is to isolate the basic processes that stand behind the wave formation in an idealized surrounding, {\it i.e.} to identify effects that are present regardless of the specific properties of the environment. In the follow-up article, more realistic configurations will be considered, including the chromosphere/corona density and Alfv\'en speed profile, magnetic field line-tying, and the arcade expansion accompanied by an upward motion. These more advanced simulations will be compared with the results presented in this article, which will help us to distinguish the effects that are intrinsic to the MHD wave formation from those governed by the environment. Special attention is paid to the wavefront steepening, {\it i.e.}, the shock formation process, in planar and cylindrical geometry.
For the simulations we employ the Versatile Advection Code (VAC: \opencite{toth96}; \opencite{Goedbloed03}). This numerical code was developed at the Astronomical Institute at Utrecht, in collaboration with the FOM Institute for Plasma Physics, the Mathematics Department at Utrecht, and the Centrum Wiskunde and Informatica (CWI) at Amsterdam. It is suitable for the analysis of a broad spectrum of astrophysical phenomena, including magnetohydrodynamic (MHD) shock waves. In Section 2 the model employed and the simulation procedure are briefly described. In Section 3 we present the results, first considering a planar geometry, so that the outcome can be compared with the analytical results, and then switching to a cylindrical geometry, which is more closely related to a coronal-arcade eruption or a coronal-loop expansion. In Section 4 we discuss the results and compare them with observations. \section{The Model} \label{S-model} In the following, we consider perpendicular magnetosonic waves, where we focus on a planar and a cylindrical geometry. This allows us to set the magnetic field in the $z$-direction, whereas the $x$ and $y$ magnetic-field components, as well as the $z$-component of the velocity, are always kept zero ($B_x=0$, $B_y=0$, $v_z=0$). Furthermore, all quantities are invariant along the $z$-coordinate, {\it i.e.} we perform 2.5D simulations, where the input and the basic output quantities are the density [$\rho$], the momentum [$m_{x}=\rho v_{x}$, $m_{y}=\rho v_{y}$], and the magnetic field [$B_z$]. Note that although we perform 2.5D simulations, physically it is a one-dimensional problem. We use a two-dimensional [2D] numerical mesh containing $995\times995$ cells, supplemented by two ghost-cell layers at each boundary, which are used to regulate the boundary conditions (thus, the complete grid consists of $999\times999$ cells).
We apply continuous boundary conditions, meaning that gradients of all quantities are kept zero by copying the variable values from the edge of the mesh into the ghost cells. All quantities are normalized, so that distances are expressed in units of the numerical-box length [$L=1$], velocities are normalized to the Alfv\'en speed [$v_A$], and time is expressed in terms of the Alfv\'en travel time over the numerical-box length [$t_A=L/v_A$]. We apply the approximation $\beta=0$, where $\beta$ is the plasma-to-magnetic pressure ratio. The origin of the coordinate system is set at the numerical-box center. We will consider two basic initial configurations, resulting in a planar wave and a cylindrical one. In the planar option, all quantities are invariant in the $y$-direction, {\it i.e.} quantities depend only on the $x$-coordinate. In the cylindrical option, all quantities depend only on the $r$-coordinate, where $r^2=x^2+y^2$. In all runs, we set up the simulation as an initial-value problem, starting from an unstable magnetic-field configuration of the source region. Thus, the source-region expansion is not fully under control, {\it i.e.} we do not prescribe the time-profile of the ``driver" motion. The overall characteristics of the source-region expansion (the acceleration impulsiveness and the maximum speed) are regulated only indirectly, by increasing or decreasing the initial force imbalance. More precisely, we start from an initial configuration where the force balance is disturbed by an excess magnetic pressure, {\it i.e.} the space--time evolution of the plasma flow is entirely determined by the initial spatial profile of $B^2$. Specifically, we set a ``parabolic" profile for the initial magnetic field within the source region: \begin{equation} B_z(x)=\sqrt{B_{0}^{2}-b\,x^{2}}\,, \end{equation} where $B_0$ represents the magnetic field at $x=0$ and the constant $b$ determines the field-strength profile within the source region.
We employ the form $b=(B_0^2-B_e^2)/x_0^2$, where $x_0$ is the initial source-region size and $B_e$ represents the external magnetic field strength outside the source region. In the cylindrical configuration we use the same function, only replacing $x$ by $r$. The initial magnetic-field profile is drawn in Figure~\ref{1Dprofiles}a by a red line. For the initial source-region size we take $x_0=0.1$; beyond $x=x_0$ we set $B_e=1$ and $\rho_e=1$. To make the source region more inert, and to better visualize the source region, we increased the density within the source region to $\rho=2$ (see the red line in Figure~\ref{1Dprofiles}c). At the beginning, the plasma is at rest, $v=0$ (see the red line in Figure~\ref{1Dprofiles}e). The considered profile of $B_z$ is characterized by a magnetic-pressure gradient [$\partial(B_z^2/2\mu_0)/\partial x$] which causes the initial outward acceleration. The acceleration increases linearly from 0 at $x=0$ to a maximum value at the source-region boundary (hereinafter denoted also as ``contact surface", or ``piston"). The motion of the source-region boundary is tracked by identifying a surface within which the mass content equals the initial one. In the cylindrical geometry we also use another initial magnetic-field profile for the source region. It has the form: \begin{equation} B_{z}(r)= B_{0}\cos^{2}\left(\frac{\pi}{2}\frac{r}{r_{0}}\right)+B_{e}, \end{equation} where $r_0$ represents the source-region size, and $B_{e}$ is the magnetic-field strength for $r>r_0$. In this case the magnetic pressure gradient and the initial acceleration are zero at the source-region center and at the source surface, whereas the peak value is attained within the source-region body, at $r/r_0=\pi/4$. The density is again set to $\rho=2$ within $r<r_0$, and $\rho=1$ for $r>r_0$. 
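For illustration, the two initial field profiles of Equations (1) and (2), with the parameter values used here ($B_0=2$, $B_e=1$, $x_0=r_0=0.1$), can be written as follows (a sketch in normalized units, not the VAC setup files):

```python
import numpy as np

# Sketch of the two initial source-region field profiles, Eqs. (1) and
# (2), with the parameter values from the text (B0 = 2, Be = 1,
# x0 = r0 = 0.1); all quantities in normalized (Alfven) units.
B0, BE, X0 = 2.0, 1.0, 0.1

def b_parabolic(x):
    """Eq. (1): B_z = sqrt(B0^2 - b x^2) with b = (B0^2 - Be^2)/x0^2
    inside the source region; the external field Be outside."""
    b = (B0 ** 2 - BE ** 2) / X0 ** 2
    inside = np.sqrt(np.maximum(B0 ** 2 - b * np.asarray(x) ** 2, 0.0))
    return np.where(np.abs(x) < X0, inside, BE)

def b_cos2(r):
    """Eq. (2): B_z = B0 cos^2(pi r / (2 r0)) + Be inside r0; Be outside."""
    inside = B0 * np.cos(0.5 * np.pi * np.asarray(r) / X0) ** 2 + BE
    return np.where(np.asarray(r) < X0, inside, BE)
```

Both profiles join the external field continuously at the source-region boundary; the parabolic profile has its steepest magnetic-pressure gradient at the boundary, whereas the $\cos^2$ profile has zero gradient both at the center and at the boundary.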
Although we do not intend to reproduce directly any specific coronal structure, note that the evolution of the two cylindrical configurations presented hereafter can depict, to a certain degree, the coronal-wave formation caused by a lateral expansion of the source region placed in a vertical magnetic field of a quiet corona. Such an expansion can occur, {\it e.g.} in the impulsive-acceleration stage of CMEs (the so-called ``lateral overexpansion"; see, {\it e.g.}, \opencite{kozarev11}; \opencite{patsourakos10}), or presumably, in legs of impulsively heated flaring loops. \section{Results} \label{S-result} \subsection{Formation and Propagation of a Planar Shock } \label{S-1D} \begin{figure} \centerline{ \includegraphics[width=0.9\textwidth]{1D} } \caption{Formation and propagation of a perpendicular shock in the planar geometry: spatial profiles of the magnetic field (a, b), density (c, d), and flow speed (e, f). Left panels (a, c, e): the beginning of the wave formation (increasing wave amplitude); right panels (b, d, f): the shock formation phase (steepening of the wavefront profile). The initial magnetic field in the source-region center is $B_0=2$. Times are displayed in the inset. All quantities presented are normalized: distance $x$ is expressed in units of the numerical-box length [$L=1$], velocity $v_x$ is normalized to the Alfv\'en speed [$v_A$], and time $t$ is expressed in terms of the Alfv\'en travel time over the numerical-box length [$t_A=L/v_A$]. } \label{1Dprofiles} \end{figure} First we analyze the formation and propagation of a perpendicular shock in the planar geometry. The aim is to compare the numerical results with the analytic theory for planar MHD waves (\opencite{V&L00a}; \citeyear{V&L00b}) and to obtain reference results for the later analysis of the influence of the geometry. The formation and propagation of the wave is presented in Figure~\ref{1Dprofiles}.
In Figures~\ref{1Dprofiles}a and b we show the magnetic-field profiles [$B_z(x)$], in Figures~\ref{1Dprofiles}c and d the density profiles [$\rho(x)$], and in Figures~\ref{1Dprofiles}e and f the flow-speed profiles [$v_x(x)$]. The graphs in the left column show the formation phase of the wave, whereas those on the right side represent the propagation phase. The kinematics of various features recognized in the density profiles in Figures~\ref{1Dprofiles}c and d (the wavefront leading edge, the wave peak, a dip between the wave and the piston, and the source-region boundary) are displayed in Figure~\ref{1d kinem}. Due to the magnetic pressure gradient of the unstable initial configuration, the source-region expansion starts immediately at $t=0$. The acceleration is strongest at the source-region surface, whereas the source-region center ($x=0$) stays at rest. Over most of the source-region volume, the density decreases due to the expansion, whereas close to the contact surface it increases due to the velocity gradient. The kinematics of this density peak closely follows the kinematics of the contact surface, just slightly lagging behind it. A peak density, $\rho=2.34$, is attained around $t=0.06$. The source-region boundary accelerates until $t\approx0.1$, attaining a speed of $v=0.4$. After that, it gradually slows down, and stops around $t\approx0.35$ (see the kinematics shown in Figure~\ref{1d kinem}a). During the accelerated-expansion phase, the flow speed increases, attaining a value of $v\approx0.4$ around $t\approx0.1$ (Figure~\ref{1Dprofiles}e), {\it i.e.} the fastest flow elements are adjusted to the piston motion. Ahead of the contact surface, a wavefront forms as a result of the source-region expansion. It can be easily recognized in the magnetic-field and density profiles shown in Figures~\ref{1Dprofiles}a and c.
The wave detaches from the source region after $t\approx0.1$ ({\it i.e.} after the piston acceleration-phase ends), and continues to evolve as a freely propagating simple wave (for a hydrodynamic analog see Sections 101 and 102 in \opencite{L&L87}). Note that a dip in the density profile, formed between the wave peak and the contact-surface peak, never gets values $\rho<1$. On the other hand, the density in central parts of the source region becomes strongly depleted. The wavefront steepens in time, whereas its amplitude remains constant, staying at values of $\rho=1.44$, $B=1.44$, and $v=0.42$, respectively. The shock formation is completed at $t\approx0.26$. \begin{figure} \centerline{ \includegraphics[width=.6 \textwidth]{1D_kinem} } \caption{ Kinematics of various wave features and the source-region boundary (thin-solid line -- the wavefront leading edge; thick-solid line -- the wave crest; dot-dashed line -- a trailing density dip; dotted line -- the source-region boundary): a) distance {\it versus} time; b) velocity {\it versus} time. } \label{1d kinem} \end{figure} The kinematics of the wave leading edge, the wave peak, a rarefaction dip, and the piston, measured from the density profiles shown in Figure~\ref{1Dprofiles}, are displayed in Figure~\ref{1d kinem}, revealing that the piston accelerates until $t\approx0.08$. Thereafter, it continues to move at an approximately constant speed of $v\approx0.4$ until $t\approx0.13$. During this period the wave amplitude increases (see Figure~\ref{1Dprofiles}) and the wave-crest phase speed increases from $w\approx1$ to $w=1.76$. At the same time, the wavefront leading edge moves at $w\approx1$. The wave crest reaches the leading edge, {\it i.e.} the shock formation is completed, around $t\approx0.25$. After that, the shock front moves at a speed of $w=1.35$, consistent with the Rankine--Hugoniot jump relations. 
The described evolution of the source/wave system and its kinematics is fully consistent with the analytical model presented by \inlinecite{V&L00a}. After $t\approx0.13$, the source-region expansion gradually decelerates, and practically stops at $t\approx0.35$. A density dip between the wave peak and the piston, which forms around $t\approx0.11$, closely follows the kinematics of the source-region boundary, being only slightly faster than the piston. \begin{figure} \centerline{ \includegraphics[width=.6\textwidth]{w-vx} } \caption{ Relationship between the wave speed [$w$] and the flow speed [$v$] for the planar wave. Numerical results (red squares) are compared with the analytical relationship $w=1+3v/2$ (blue line) derived by Vr\v snak and Luli\'c (2000). } \label{1d} \end{figure} We repeated the simulations using various values of $B_0$, to analyze how the evolution of the piston/wave system depends on the impulsiveness of the source-region expansion. A larger $B_0$ results in a more impulsive source-region acceleration, which leads to a higher shock amplitude and Mach number. Furthermore, the shock is created earlier and closer to the piston, so in the case of extremely impulsive expansions, the shock-sheath region and the source region cannot be clearly resolved. In Figure~\ref{1d} we show the dependence of the phase speed [$w$] of the perturbation segment at the wave crest (before being shocked) as a function of the corresponding flow speed $v$. In the graph we display the results for $B_0=1.5$, 2.0, 3.0, and 5.0. For $B_0=2$ we also measured $w$ and $v$ at several suitable wavefront segments ahead of the wave crest (the lowest $v$-values in Figure~\ref{1d}). The results are fully consistent with the outcome of the analytical theory for $\beta=0$ presented by \inlinecite{V&L00a}, where the relationship $w=1+3v/2$ was established.
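For reference, the analytical simple-wave relation for the $\beta=0$ perpendicular wave is trivial to evaluate (speeds in units of the Alfv\'en speed; a sketch, not the analysis code):

```python
# Sketch: the analytical simple-wave relation for a beta = 0
# perpendicular MHD wave, w = 1 + 3v/2, with both the phase speed w and
# the flow speed v expressed in units of the Alfven speed v_A.
def phase_speed(v):
    return 1.0 + 1.5 * v

# e.g. a wavefront element carried by a flow of v = 0.4 v_A
# propagates at w = 1.6 v_A
```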
Whereas in the non-shocked phase the wave behavior is consistent with the analytical theory, in the shocked phase for $B_0\gtrsim3$, corresponding to $w\gtrsim1.8$, the numerical results start to deviate from the analytical Rankine--Hugoniot relations, the disagreement increasing with increasing amplitude. The equation of continuity and the relationship between the Mach number and the downstream flow speed behave as expected, but the relationship between the Mach number and the downstream/upstream density jump deviates from the analytical results. This is probably due to numerical effects and the fact that at very high values of $B_0$ it becomes impossible to clearly resolve the compression at the source-region surface and the shock itself. \subsection{Cylindrical Geometry} \label{S-cyl} \subsubsection{Wave Formation} \label{S-form} \begin{figure} \centerline{ \includegraphics[width=0.9\textwidth]{cyl1} } \caption{Formation and propagation of the perpendicular shock in the cylindrical geometry for the initial magnetic field profile given by Equation (1) with $B_0=2$: spatial profiles of the magnetic field (a, b), the density (c, d), and the flow speed (e, f). Left panels (a, c, and e) show the beginning of the wave formation (increasing wave amplitude); right panels (b, d, and f) present the shock formation phase (steepening of the wavefront profile). Times are displayed in the inset. All quantities presented are normalized: distance $r$ is expressed in units of the numerical-box length [$L=1$], velocity $v_r$ is normalized to the Alfv\'en speed [$v_A$], and time $t$ is expressed in terms of the Alfv\'en travel time over the numerical-box length [$t_A=L/v_A$]. } \label{cyl_profiles} \end{figure} In Figure~\ref{cyl_profiles} the formation and propagation of the wave in the cylindrical geometry is presented.
Spatial profiles of the magnetic field [$B_z(r)$] are shown in Figures~\ref{cyl_profiles}a and b, the density profiles [$\rho(r)$] are presented in Figures~\ref{cyl_profiles}c and d, whereas in Figures~\ref{cyl_profiles}e and f the flow speed [$v_r(r)$] is displayed. The initial magnetic-field and density profiles are defined in the same way as in the planar case (Equation (1)), only replacing $x\rightarrow r$. In the left column the wave formation phase is shown, whereas the right column represents the propagation phase. In Figure~\ref{cyl_kinem}, the kinematics of various features recognized in Figure~\ref{cyl_profiles} are shown. As in the planar case, the source-region expansion starts immediately at $t=0$, the acceleration being strongest at the source-region surface. The source-region center ($r=0$) remains at rest at all times (Figures~\ref{cyl_profiles}e and f). The density within the source region starts to decrease due to the expansion, whereas at the contact surface it increases due to the flow-speed gradient. The source-region expansion initially accelerates, attaining $v=0.28$ at $t\approx0.07$ (see Figure~\ref{cyl_kinem}). Note that the acceleration phase is shorter than in the planar case, the peak speed is considerably lower, and the $v$\,$\approx$\,const. phase is absent. After attaining the maximum speed, the piston gradually slows down, stops around $t\approx0.2$, and then retreats slowly towards the initial position (see the kinematics shown in Figure~\ref{cyl_kinem}). During the accelerated-expansion phase, the flow speed increases, reaching $v=0.28$ around $t\approx0.04$ (Figure~\ref{cyl_profiles}e). Note that, unlike in the planar geometry, the plasma flow is not fully synchronized with the piston motion. Ahead of the contact surface, the wavefront forms as a result of the source-region expansion. It can be readily recognized in the magnetic field and density profiles shown in Figures~\ref{cyl_profiles}a and c. 
The wave detaches from the source region around $t\approx0.08$, having an amplitude in $\rho$ and $B$ of around 1.22. After that, the perturbation continues to propagate as a freely propagating simple wave, but unlike in the planar case, the amplitude of the wave decreases with distance (Figure~\ref{cyl_profiles}). The wavefront steepens with time, whereas the peak flow-speed decreases. A discontinuity in the leading-edge profile occurs, {\it i.e.} the shock formation begins at $t\approx0.15$. The shock is fully completed at $t\approx0.28$, when it has an amplitude of $\rho=1.16$ and $v=0.16$. Note that flows within the source region are more complex than in the planar configuration. We also stress that a dip in the density profile, which forms between the wave peak and the contact surface, now deepens to a value of $\rho=0.88$, {\it i.e.} the rarefaction region forms ($\rho<1$), as in the case of cylindrical hydrodynamic waves (see Section 102 in \opencite{L&L87} and references therein). \subsubsection{Wave Kinematics} \label{S-kinem} \begin{figure} \centerline{ \includegraphics[width=.6 \textwidth]{cyl_kinem} } \caption{ Kinematics of the wave and the source-region boundary derived from the density profiles shown in Figure~\ref{cyl_profiles}: a) distance {\it versus} time; b) velocity {\it versus} time. Thin-solid line -- the wavefront leading edge; thick-solid line -- the wave peak; dashed line -- the dip front measured at $\rho=1$; dot-dashed line -- the dip minimum; dotted line -- the source-region boundary. All quantities presented are normalized: radial distance $r$ is expressed in units of the numerical-box length [$L=1$], velocities $v$ and $w$ are normalized to the Alfv\'en speed [$v_A$], and time $t$ is expressed in terms of the Alfv\'en travel time over the numerical-box length [$t_A=L/v_A$].
} \label{cyl_kinem} \end{figure} The kinematics of the piston and the wave, estimated from the density profiles displayed in Figure~\ref{cyl_profiles}, are shown in Figure~\ref{cyl_kinem}. Comparing Figures~\ref{cyl_profiles} and~\ref{cyl_kinem} one finds that during the piston acceleration the wave amplitude first increases, but then starts to decrease, even before the piston reaches its maximum velocity. The phase speed of the wave crest increases from $w\approx1$ to $w\approx1.3$ at $t\gtrsim0.15$; thereafter it gradually decreases. A dip between the wave peak and the piston, which forms around $t\approx0.08$, first closely follows the piston kinematics, but then, after the dip becomes characterized by $\rho<1$ at $t\approx0.2$, it ``detaches'' from the piston and attains a speed of $w\approx1$ around $t\approx0.35$. Note that a segment of the dip characterized by $\rho=1$ moves at a speed of $w\approx1$ all the time. \begin{figure} \centerline{ \includegraphics[width=.6 \textwidth]{hyster} } \caption{ The evolution of: a) downstream flow-speed amplitude {\it versus} phase-speed of the wave crest; b) density amplitude {\it versus} phase-speed of the wave crest. Solid-blue and dashed-red lines show results for the initial configuration defined by Equations (1) and (2), respectively, with $B_0=2$. Arrows indicate the course of the temporal sequence. Velocities $v$ and $w$ are normalized to the Alfv\'en speed [$v_A$]. } \label{hyster} \end{figure} The relationship between the shock speed and the downstream flow speed in the cylindrical geometry is more complex than in the planar case. This is illustrated in Figure~\ref{hyster}a, where the downstream flow-speed [$v$] is shown {\it versus} the phase-speed of the wave crest [$w$]. Analogously, in Figure~\ref{hyster}b we present the dependence of the downstream peak density [$\rho$] on the phase speed [$w$]. Note that the displayed values are based on smoothed curves $w(t)$, $v(t)$, and $\rho(t)$.
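The kinematic curves of Figure~\ref{cyl_kinem} are obtained by tracking features of the density profiles from snapshot to snapshot. A minimal sketch of one such tracking step follows; the synthetic data and the smoothing-free differentiation are our own illustration, not the article's actual measurement procedure.

```python
import numpy as np

def crest_positions(r, snapshots):
    """Position of the wave crest (density maximum) in each snapshot."""
    return np.array([r[np.argmax(rho)] for rho in snapshots])

def phase_speed(times, positions):
    """Central-difference estimate of the crest phase speed w(t).
    Real (noisy) position curves would be smoothed before differentiating."""
    return np.gradient(positions, times)

# Synthetic test case: a Gaussian density pulse moving at w = 1.3
# (all quantities in the normalized units used in the article).
r = np.linspace(0.0, 1.0, 20001)
times = np.linspace(0.10, 0.20, 11)
snapshots = [1.0 + 0.2 * np.exp(-((r - 0.3 - 1.3 * (t - 0.1)) / 0.02) ** 2)
             for t in times]
w = phase_speed(times, crest_positions(r, snapshots))
```

On real profiles the crest location is quantized to the grid and affected by noise, which is why the article works with smoothed $w(t)$, $v(t)$, and $\rho(t)$ curves.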
The presented graphs show that initially, during the wave formation phase, the wave phase-speed increases, while the amplitudes of $v$ and $\rho$ are almost constant, showing only a slight increase. Then, the wave speed [$w$] remains almost constant, whereas the wave amplitude decreases. Eventually, in the third step, both the wave propagation speed and its amplitude decrease. The highest values of $v$ and $\rho$ are attained roughly at a time when the maximum speed of the piston is reached. The shock formation starts, {\it i.e.} a discontinuity occurs at the leading edge of the wave profile, around the ``nose'' of the curves presented in Figure~\ref{hyster}, which also approximately coincides with the end of the piston expansion. Thus, roughly speaking, the upper branch of the $v(w)$ and $\rho(w)$ curves corresponds to the ``driven phase'' of the wave, whereas the lower branch represents the ``decay'' of a freely-propagating simple wave. The shock is completed when the wave speed attains a value of $w\approx1.11$ at the lower branch of the curve. \subsubsection{Piston Impulsiveness} \label{S-impuls} \begin{figure} \centerline{ \includegraphics[width=.6 \textwidth]{w-v_cyl} } \caption{ a) Peak phase-speed [$w$] of the wavefront (solid line and diamond symbols) and the speed [$v$] of the piston (dotted line and crosses) for five values of $B_0$. b) The phase-speed of the wavefront around the peak amplitude (squares, triangles, crosses, asterisks, and diamonds; the values of $B_0$ are given in the inset) presented as a function of corresponding flow speed. The solid line represents the peak velocity of the wave as a function of the maximum speed of the piston for the same five values of $B_0$. Velocities $v$ and $w$ are normalized to the Alfv\'en speed [$v_A$].
} \label{w(v)} \end{figure} We repeated the procedure presented in Sections \ref{S-form} and \ref{S-kinem} for several values of the maximum magnetic-field strength in the source-region center [$B_0$] to inspect the role of the impulsiveness of the piston expansion. In particular, we applied $B_0=1.1$, 1.5, 2.0, 3.0, and 5.0. A stronger field $B_0$ causes a more impulsive acceleration of the piston, which also results in a higher wave amplitude and wave-crest speed, and consequently, an earlier formation of the shock. On the other hand, the evolution of the system, as well as the relationship between different parameters, does not depend qualitatively on the impulsiveness of the piston acceleration. Morphologically, the main difference between very impulsive piston accelerations and more gradual ones is that in the former case the shock forms very close to the contact surface. Because of this, in the case of a very impulsive source-region expansion, it is not possible to follow the shock-formation phase, since the wavefront and the piston cannot be resolved. On the other hand, we note that for $B_0=1.1$ and 1.5 the shock did not form within the numerical box, which implies that in reality, particularly bearing in mind dissipative effects, the coronal shock would not be formed if the source-region acceleration is not impulsive enough. In Figure~\ref{w(v)}a the peak velocity of the wavefront is compared with the peak velocity of the piston for different values of $B_0$. In the considered range, the wave speed is much larger than the piston speed, so the distance between the wavefront and the piston rapidly increases. However, the graph shows that beyond $B_0\approx1.5$ maximal piston velocities are proportional to $B_0$, whereas the wave speed shows a nonlinear trend, {\it i.e.} the slope of the $w(B_0)$ curve gradually decreases.
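Power-law fits of the form $w-1=av^{b}$, used below to characterize the $w(v)$ dependence, reduce to linear least squares in log--log space. A minimal sketch, with illustrative points generated from the reported relation $w=1+0.9\,v^{0.45}$ rather than the actual simulation data:

```python
import numpy as np

def fit_power_law(v, w):
    """Fit w - 1 = a * v**b by linear least squares on
    log(w - 1) = log(a) + b*log(v)."""
    slope, intercept = np.polyfit(np.log(v), np.log(w - 1.0), 1)
    return np.exp(intercept), slope  # (a, b)

# Illustrative data generated from the reported fit w = 1 + 0.9 v^0.45.
v = np.linspace(0.05, 0.5, 10)
w = 1.0 + 0.9 * v**0.45
a, b = fit_power_law(v, w)
```

Note that a fit in log space weights the points differently from a direct nonlinear least-squares fit on $(v,w)$, so the two can give slightly different coefficients for noisy data.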
This implies that for a very impulsive expansion of the source region one can expect that the velocities of the shock and the piston become comparable, and that the separation is small. In Figure~\ref{w(v)}b we present the maximum speed of the wave crest as a function of the corresponding maximum downstream flow speed for all considered values of $B_0$. The displayed data points are numerical values from the ``nose'' of the $w(v)$ curves (non-smoothed), analogous to the one shown in Figure~\ref{hyster}a for $B_0=2$. Peak values of $w$, estimated from smoothed $w(v)$ curves, are presented as a function of the maximum speed of the piston by a solid line. Note that this ``piston-curve'' is shifted to the right with respect to the presented data points, implying that the piston speed is somewhat higher than the flow speed. The difference increases with the increasing piston speed, {\it i.e.} with the impulsiveness of the expansion. The main feature of Figure~\ref{w(v)}b is that the relationship between the wave speed and the downstream flow speed [$w=1+3v/2$] is not valid in cylindrical geometry. The $w(v)$ dependence is not linear, but is closer to a power law. A least-squares fit of the form $w-1=av^b$ gives $w=1+0.9\,v^{0.45}$, with a correlation coefficient of $R=0.91$. On the other hand, the relationship between the maximum wave speed and the maximum piston speed is well described ($R=0.99$) by the function $w=1+1.26\,v^{1/3}$. \subsubsection{Initial Configuration} \label{S-trigon} \begin{figure} \centerline{ \includegraphics[width=0.9\textwidth]{cyl2} } \caption{Formation and propagation of the perpendicular shock in the cylindrical geometry for the initial magnetic-field profile given by Equation (2), with $B_0=2$: spatial profiles of the magnetic field (a, b), density (c, d), and flow speed (e, f).
Left panels (a, c, and e) show the beginning of the wave formation (increasing wave amplitude); right panels (b, d, and f) present the shock formation phase (steepening of the wavefront profile). Normalized times are displayed in the inset. } \label{trig_profiles} \end{figure} To check how much the initial magnetic-field structure in the source region affects the process of the wave formation and evolution, we also applied the magnetic-field configuration defined by Equation (2). In this case, the steepest magnetic-pressure gradient is located within the source region, {\it i.e.} not at its edge as was the case described by Equation (1). The outcome of the simulation for $B_0=2$ is presented in Figure~\ref{trig_profiles}. The analysis of the data in Figure~\ref{trig_profiles} shows that there are no significant differences in the overall wave kinematics for the two configurations considered (thus, the corresponding graphs are not shown). To illustrate the similarity of the two kinematics, we also included in Figure~\ref{hyster} the results concerning the wave formation/propagation resulting from the configuration defined by Equation (2). However, Figure~\ref{trig_profiles} reveals a considerable difference in the morphology of the evolving piston/wave system. The strongest magnetic pressure gradient is initially located within the source region, whereas it is zero at its edge. Thus, the highest acceleration occurs at $r<r_0$, and consequently, the initial compression forms within the source region. This causes a more complex flow pattern within the source and obscures the initiation of the wave, {\it i.e.} the source-region boundary and the wave cannot be clearly distinguished, and the term piston becomes meaningless. The wave leading edge leaves the source region at $t=0.015$, whereas the wave crest detaches from the source-region boundary at $t=0.08$, when it has the highest amplitude ($\rho=1.22$). 
The shock formation starts at $t\approx0.19$ and is completed at $t\approx0.32$, when it has an amplitude of $\rho=1.13$ and $v=0.13$, {\it i.e.} the formation is delayed, and the amplitude is lower compared to the case based on Equation (1). Similarly, the dip between the wavefront and the source region is somewhat shallower ($\rho=0.92$). \section{Discussion and Conclusion} \label{S-concl} In this article we presented numerical simulations of the formation and evolution of large-amplitude MHD simple waves. We considered very basic initial configurations to extract general characteristics of the MHD shock formation in an idealized homogeneous environment. The main purpose is to have reference results that can be compared to the results of more sophisticated simulations that consider more realistic characteristics of the environment. Now that we have at our disposal the results of the simulations presented in this article, such a comparison will help us to distinguish which characteristics are a consequence of basic processes and which are caused by the details of a given environment. Furthermore, it should be noted that in spite of the high level of idealization, the cylindrical configurations employed can represent, to a certain degree, the initial stage of the coronal-wave formation by the lateral expansion of a CME during its impulsive-acceleration stage, or presumably, by the expanding leg of the flaring loop. The most general outcome, common to all situations analyzed, is that a more impulsive source-region expansion results in a shorter time/distance needed for the shock formation, consistent with analytical considerations ({\it e.g.} \opencite{V&L00a}; \opencite{V&L00b}; \opencite{vrs01shocks}; \opencite{zic08}) and observations \cite{vrs01shocks}. The simulations show that in the most impulsive events a shock forms very close to the source-region boundary and it is initially difficult to resolve the two entities.
This explains why in some studies the coronal EUV waves are (erroneously) identified as CME flanks (for a discussion see \opencite{cheng12}). On the other hand, when the piston acceleration is low, the wave amplitude remains small and the wavefront steepening is slow. Thus, weakly accelerated eruptions are not likely to result in an observable coronal wave. For the case of a planar magnetosonic wave we have confirmed the relationship between the wave speed and the flow speed [$w=1+3v/2$] that was analytically derived by \inlinecite{V&L00a}. At small amplitudes, the numerical simulations reproduce the Rankine--Hugoniot jump relations after the shock formation is completed. However, at large amplitudes the numerical results deviate from the analytical theory, most likely due to the numerical resolution. From the observational point of view, the cylindrical geometry is far more interesting, since it can give us insight into the process of shock formation caused by a magnetic-arcade expansion, including the amplitude fall-off due to energy conservation \cite{zic08}. In this article we have analyzed only the most general characteristics of the perpendicular-shock formation, generated by a flux-tube expansion in an idealized homogeneous environment. Such a process represents a two-dimensional piston mechanism of the shock-wave formation. The basic difference from the planar case (one-dimensional piston) lies in the fact that in the cylindrical geometry there are two competing effects involved in the shock-formation process. One is the nonlinear steepening of the wavefront profile (as in the planar geometry), and the other is a decrease of the wave amplitude with distance due to energy conservation (absent in planar geometry). Of course, spherical geometry would be more relevant than the cylindrical option, since real EUV waves expand spherically. 
This would certainly modify the results, since the decrease rate of the wave amplitude would be governed by the $\propto r^{-2}$ effect rather than $\propto r^{-1}$. This aspect was treated in detail by \inlinecite{zic08} in a semi-analytical study which showed that the shock formation time and distance depend much more on the characteristics of the piston acceleration than on the choice of the geometry. The geometrical effect ({\it i.e.} the $\propto r^{-2}$ aspect of the energy conservation) becomes dominant only after the acceleration phase. Note that the same conclusion can be drawn from the results presented in this article by comparing the outcome for the planar and cylindrical case. In this respect, let us note that \inlinecite{zic08} have also shown that the value of the plasma-to-magnetic pressure ratio [$\beta$] does not play a significant role either. In our study we considered two different types of the initial configuration: one resulting in the highest initial acceleration at the source-region boundary, and another causing the strongest acceleration within the source-region body. The performed simulations show that, although there are differences in the evolution of the source region, the process of the shock-wave formation does not differ significantly, {\it i.e.} the evolution of the perturbation and the wave kinematics are similar in both cases. The most important outcome of the analysis is that the formation of a perpendicular MHD shock is expected already for relatively low expansion velocities, as low as 10\,--\,20\,\% of the Alfv\'en speed. This implies that a lateral expansion of the eruption in the early phase of CMEs is a viable mechanism of the coronal wave formation \cite{ines09,patsourakos10,muhr10,veronig10,grechnev11,kozarev11,liu12,temmer12}.
Furthermore, our simulations show that at the beginning of the wave formation it is difficult to distinguish the wave and the source-region expansion, especially for the case defined by Equation (2), where the strongest acceleration occurs within the source region. The presented analysis shows that the wave initially accelerates from $w\gtrsim v_A$ to a maximum phase speed, which depends on the impulsiveness of the source-region expansion. In the decay phase, the wave-crest velocity decreases, $w\rightarrow v_A$. Thus, the initial and the late phase of the coronal wave could be used for coronal diagnostics, since measurements of the wave kinematics in the acceleration and deceleration phase should reflect the coronal Alfv\'en speed. Similarly, the traveling density depletion that forms in the wake of the wave travels at $w\approx v_A$. Such depletions are sometimes observed in the base--difference or base--ratio EUV images, appearing as traveling coronal dimming behind the wavefront \cite{thompson00,chen02,zhukov04,muhr11}. Thus, such features can also be used to estimate $v_A$ in the quiet corona. Finally, we note that after the acceleration stage, the compression region associated with the source-region boundary becomes a stationary feature. This might be related to stationary brightenings that are sometimes observed behind the outgoing wave \cite{muhr11}. In cylindrical geometry, the wave amplitude and the wave phase-speed are related in a relatively complex manner. In the ``driven phase'', the wave amplitude at a given wave speed is higher than in the ``decay phase''. In the transition between these two phases, the phase speed is almost constant for a certain period of time, while the wave amplitude decreases. This results in a loop form of the $\rho(w)$ and $v(w)$ evolutionary curves, which is consistent with observations presented by \inlinecite{muhr12}, where the dependence of the wave amplitude on the wave speed forms a closed, hysteresis-like, curve.
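The conversion of the normalized results to physical units, discussed next, amounts to choosing $L$ and $v_A$; the quoted numbers can be checked directly (all input values are taken from the text):

```python
# Normalization used in the article: lengths in units of the box length L,
# speeds in units of v_A, times in units of t_A = L / v_A.
Mm = 1.0e6                    # metres per megametre
r0_code = 0.1                 # source-region radius in units of L
r0_phys = 50.0 * Mm           # 2*r0 = 100 Mm  ->  r0 = 50 Mm
L = r0_phys / r0_code         # box length in metres (500 Mm)
vA = 250.0e3                  # quiet-corona Alfven speed, m/s
tA = L / vA                   # Alfven travel time, s

# The same t_A results from L = 1000 Mm with vA = 500 km/s:
tA_alt = (1000.0 * Mm) / 500.0e3
```

With these values, a feature moving at the normalized time $t\approx0.1$ corresponds to roughly three minutes of physical time, consistent with the shock-formation delays quoted below.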
In Section~\ref{S-result} we have presented results in a normalized form, where velocities are expressed in units of the Alfv\'en speed and the time is expressed in units of the Alfv\'en travel time across the numerical box. As an illustration, let us assume that the diameter of the coronal source region is $2r_0=100$ Mm, which implies that the numerical box corresponds to $L=500$ Mm, since we used $r_0=0.1$. If we assume that the Alfv\'en speed in the quiet corona is $v_A=250$ km\,s$^{-1}$ \cite{WarmMann05vA}, we obtain for the Alfv\'en travel time $t_A=L/v_A=2000$ seconds. The same would be obtained for, {\it e.g.} $L=1000$ Mm and $v_A=500$ km\,s$^{-1}$. Applying these values, one finds that the wave typically forms and steepens into a shock a few minutes after the onset of the source-region expansion. The delay is shorter for a higher source-expansion velocity, {\it i.e.} for a higher wave velocity. The distance at which the wave crest forms and detaches from the source-region boundary, {\it i.e.} the distance at which the wave should become observable, is in the range $\approx100$\,--\,200 Mm. Such time delays and starting distances are fully consistent with observations of Moreton waves ({\it e.g.} \opencite{warmuth04b}), type II bursts ({\it e.g.} \opencite{vrs95}), and EUV waves ({\it e.g.} \opencite{ines11}; \opencite{liu12}). To conclude, the simulations presented show that even the simplest initial configurations are able to reproduce most of the features observed in typical large-amplitude, large-scale coronal waves, including the morphology, kinematics, and scalings. Our next step will be to perform similar simulations, but employing more realistic initial configurations that depict a magnetic arcade anchored in the photosphere, and include a realistic density profile of the chromosphere, transition region, and corona.
This will enable us to distinguish the effects that are intrinsic to the MHD wave formation from those governed by the environment. \begin{acks} The work presented has received funding from the European Commission's Seventh Framework Programme (FP7/2007-2013) under grant agreements No. 263252 (COMESEP project, \url{www.comesep.eu}) and No. 284461 (eHEROES project, \url{soteria-space.eu/eheroes/html/}). MT, AMV, NM, IWK acknowledge the Austrian Science Fund (FWF): FWF V195-N16 and P24092-N16. The Versatile Advection Code (VAC) was developed by G\'abor T\'oth at the Astronomical Institute at Utrecht in a collaboration with the FOM Institute for Plasma Physics, the Mathematics department at Utrecht, and the CWI at Amsterdam; in particular, Rony Keppens (FOM), Mikhail Botchev (Mathematics Dept.), and Auke van der Ploeg (CWI) contributed significantly to completing the project. G. T\'oth and R. Keppens share the responsibility and work associated with the development, maintenance, distribution, and management of the software. We are grateful to Tayeb Aiouaz and Tibor T\"or\"ok for help in getting acquainted with VAC. We are grateful to the referee for very constructive comments and suggestions that helped us to improve the article. \end{acks} \bibliographystyle{spr-mp-sola}
\section{Introduction} Remarkable progress has been made in the study of the topological insulator (TI) as a new class of materials characterized by an energy gap in the bulk but gapless edge (surface) states at its boundary~\cite{PhysRevLett.95.146802, bernevig2006quantum, PhysRevLett.98.106803}. In recent experiments, the observation of such TIs is reported for HgTe/CdTe quantum wells~\cite{bernevig2006quantum,konig2007quantum} and some bismuth compounds (such as Bi$_2$Se$_3$ and Bi$_{1-x}$Sb$_x$)~\cite{zhang2009topological,xia2009observation,hsieh2008topological,hsieh2009observation}, corresponding to two- and three-dimensional TIs, respectively. Edge states of TIs are protected by non-trivial topological properties of the electronic bulk spectrum, thereby being robust against small perturbations conserving time-reversal symmetry, such as non-magnetic impurities~\cite{PhysRevLett.96.106401}. Hence, as long as the bulk gap exists, the low-energy physics of the TI is dominated by the edge states. Heterostructures involving the TIs are currently the subject of intensive studies as they might have applications in future spintronics devices. They also provide a versatile platform for exploring exotic interface phenomena, such as Majorana bound states~\cite{PhysRevLett.100.096407,PhysRevB.81.241310} and anomalous magnetoresistance~\cite{PhysRevB.81.121401}. In particular, we would like to explore the interface physics which emerges in a heterostructure composed of a TI and strongly correlated materials, since the previous studies of TI have mainly focused on heterostructures involving non-interacting electrons. Regarding heterostructures with electron correlations, recent years have seen tremendous advances in producing high-quality interfaces composed of various materials such as band-insulator/Mott-insulator (BI/MI) or different types of band insulators.
Unusual properties have been discovered at these interfaces, such as strongly confined metallic phases~\cite{ohtomo2002artificial}, magnetism~\cite{brinkman2007magnetic}, and superconductivity~\cite{reyren2007superconducting} to name a few. The occurrence of a metallic interface through electronic rearrangement is one of the intriguing features of the BI/MI heterostructures~\cite{okamoto2004electronic, PhysRevB.70.241104, PhysRevB.85.235112, PhysRevLett.108.117003, PhysRevLett.101.066802, PhysRevLett.108.246401}. Naturally we may ask how the situation is modified when the band insulator is replaced by a topological insulator (TI). The interface metallic state should be influenced by the presence of topological edge states, and the interplay between such edge states and the strong electron correlation can give rise to novel physical properties. In this study, we analyze the electronic properties at a two-dimensional heterostructure consisting of a paramagnetic Mott insulator (MI) and a TI, by using a rather simple microscopic model for both insulators of the TI/MI heterostructure, which includes the onsite Coulomb repulsion but ignores long-range interactions. The electronic correlations are treated by dynamical mean-field theory~\cite{RevModPhys.68.13} so that we can follow the renormalization of quasiparticles penetrating the MI and their interplay with the nearly localized degrees of freedom of the MI. We will elucidate how the important parameters such as the (onsite) Coulomb repulsion and the coupling (hopping) between the two insulators influence the metallic quasiparticle states. The model describes a two-dimensional single-quantum-well geometry (square lattice) composed of a TI sandwiched by identical paramagnetic MIs on both sides. In view of the fact that our TI is based on a two-orbital model, introduced below, we also choose for the MI a configuration of two independent bands at half filling with strong Coulomb repulsion.
The Hamiltonian is decomposed into $H = H_{\text{TI}} +\sum _{i=\text{R},\text{L}}(H_{\text{M}}^{i} +H_{\text{V}}^{i})$ with \begin{eqnarray} H_{\text{M}}^{\text{R},\text{L}} &=& \sum _{\langle i,j \rangle, \sigma, \alpha} t_{\alpha} \hat {c}^{\dagger}_{i\sigma \alpha}\hat{c}_{j\sigma \alpha} +U_{\text M}\sum _{i \alpha} \hat{n}_{i \uparrow \alpha}\hat{n}_{i \downarrow \alpha}, \label{eq:scm} \\ H_{\text{V}}^{\text{R},\text{L}} &=& \sum _{\langle i,j \rangle, \sigma, \alpha} V_{\alpha} \left ( \hat{c}^{\dagger}_{i \sigma \alpha}\hat{a}_{j \sigma \alpha} +\hat{a}^{\dagger}_{i \sigma \alpha}\hat{c}_{j \sigma \alpha} \right ) \label{eq:hybridization}. \end{eqnarray} Here, $H_{\text{M}}^{\text{R}}$ ($H_{\text{M}}^{\text{L}}$) denotes the Hamiltonian for the MI on the right (left) edge of the TI region, and the coupling between these regions is implemented by the hybridization matrix, $H_{\text{V}}^{\text{R}}$ ($H_{\text{V}}^{\text{L}}$). The parameters $t_{\alpha}$ and $V_{\alpha}$ are the hopping integrals for orbital $\alpha$. We assume $t_1=-t_2=-t$ and $V_1=-V_2=-V$ for simplicity. The fermion operators $\hat c^{\dagger}_{i \sigma \alpha}$ and $\hat a^{\dagger}_{i \sigma \alpha}$ ($\hat c_{i \sigma \alpha}$ and $\hat a_{i \sigma \alpha}$) create (annihilate) a spin $\sigma=\uparrow, \downarrow $ electron of orbital $\alpha=1,2$ at site $i$ on the square lattice. Note that $\hat c_{i \sigma \alpha}$ and $\hat a_{i \sigma \alpha}$ operate on the orbitals for MI and TI, respectively. 
For the TI region we introduce a generalized Bernevig-Hughes-Zhang (BHZ) model~\cite{PhysRevB.85.165138}, given by $ H_{\text{TI}} = H_{\text{BHZ}} +U_{\text{TI}}\sum_{i \alpha} \hat{n}_{i \uparrow \alpha}\hat{n}_{i \downarrow \alpha}$, where \begin{eqnarray} H_{\text{BHZ}} = \sum _{i,\sigma,\alpha} \epsilon _{\alpha} \hat {n}_{i \sigma \alpha} +\sum _{\substack {\langle i,j \rangle,\\ \sigma, \alpha, \beta}} \hat {a}^{\dagger}_{i \sigma \alpha} \left [ \hat {t}_{\sigma}(\delta) \right ]_{\alpha \beta} \hat {a}_{j \sigma \beta}, \label{eq:bhz+u} \\ \hat {t}_{\sigma}(\pm x) = \begin{pmatrix} t_1 & \pm i \sigma t_{\text{so}} \\ \pm i \sigma t_{\text{so}} & t_2 \end{pmatrix}, \:\:\: \hat {t}_{\sigma}(\pm y) = \begin{pmatrix} t_1 & \pm t_{\text{so}} \\ \mp t_{\text{so}} & t_2 \end{pmatrix}. \end{eqnarray} Here, $\hat t_{\sigma}(\delta)$ with $\delta = \pm x$ ($\pm y$) denotes the spin-dependent hopping integral along the $x$ ($y$)-direction. We assume $\epsilon _1 = -\epsilon _2 =-t$ in the following. The topologically non-trivial phase is driven via the finite inter-site and inter-orbital hybridization $t_{\text{so}}$ for $0 < |\epsilon _1| < 4t$. For our calculation we set the width of the TI (MI) region to $20$ ($10$) unit cells along the $y$-direction, while the system keeps the translation symmetry in the $x$-direction. Unless otherwise mentioned, $t_{\text{so}}$ and $V$ are fixed to $0.25t$ and $t$, respectively, and the spin index $\sigma$ is dropped. Throughout this paper, we restrict ourselves to zero temperature and half filling ($\langle \hat {n}_{i} \rangle = \sum_{\sigma \alpha} \langle \hat {n}_{i \sigma \alpha} \rangle = 2$).
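As a sanity check on the lattice model above, one can Fourier transform the hopping matrices and verify that the bulk spectrum is gapped for the quoted parameters ($\epsilon_1=-t$, $t_{\rm so}=0.25t$). The sign convention of the transform below is our own bookkeeping choice; only the magnitude of the off-diagonal element enters the gap.

```python
import numpy as np

# Spin-up Bloch Hamiltonian of the non-interacting BHZ part, obtained by
# summing t_sigma(delta) * exp(i k . delta) over the four neighbours.
t, tso = 1.0, 0.25
eps1, eps2, t1, t2 = -t, t, -t, t

def h_up(kx, ky):
    diag = 2.0 * (np.cos(kx) + np.cos(ky))
    off = -2.0 * tso * np.sin(kx) + 2.0j * tso * np.sin(ky)
    return np.array([[eps1 + t1 * diag, off],
                     [np.conj(off), eps2 + t2 * diag]])

# Smallest direct gap, scanned over a Brillouin-zone grid.
ks = np.linspace(-np.pi, np.pi, 151)
gap = min(float(np.diff(np.linalg.eigvalsh(h_up(kx, ky)))[0])
          for kx in ks for ky in ks)
```

A finite bulk gap together with the non-trivial band inversion is what produces the in-gap edge states discussed below; the check above only establishes the insulating bulk.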
We note here that correlation effects in TIs have been studied extensively, also suggesting that the strong electron correlation disturbs the topological nature~\cite{PhysRevLett.106.100403,PhysRevLett.107.010401,PhysRevB.85.165138} and even induces topological phases without gapless edge states~\cite{PhysRevB.83.205122}. Thus we may ask how the correlation effects manifest themselves at TI/MI interfaces. This is the central issue in the present paper. In the following, many-body effects in the above Hamiltonian are treated within the inhomogeneous dynamical mean-field theory (IDMFT)~\cite{PhysRevLett.101.066802, PhysRevB.59.2549, PhysRevB.70.241104,PhysRevB.79.045130}, which solves the $y$-dependent self-energy as a diagonal matrix with self-consistent equations, \begin{equation} \mathcal {G}_{0\alpha \beta}^{-1}(y,y';\omega ) = \left [ \int \frac{d k_x}{2\pi} G(y,y';k_x, \omega ) \right ]_{\alpha \beta}^{-1} +\Sigma_{\alpha \beta}(y; \omega ), \notag \end{equation} where, $G_{\alpha \beta}(y,y';k_x, \omega )$ and $\mathcal {G}_{0\alpha \beta}(y,y';\omega )$ are the lattice and cavity Green's functions, respectively. Note that we treat this system as a spatially modulated two-dimensional system with a finite strip width, following the treatment done in Ref.~\onlinecite{PhysRevB.85.165138}. The local self-energy for the quantum impurity model is obtained using the exact diagonalization methods~\cite{PhysRevLett.72.1545}, suitably extended for the IDMFT analysis~\cite{PhysRevB.79.045130}. To avoid time-consuming numerics, we neglect the inter-band self-energy, $\Sigma_{12}(y; \omega)$ and $\Sigma_{21}(y; \omega)$. The validity of this approximation has been discussed in Ref.~\onlinecite{PhysRevB.85.165138}. 
Here, we employ the Lanczos algorithm to compute the local Green's function, and the bath parameters for the finite-size system are obtained by minimizing the following distance function~\cite{liebsch2011temperature}: \begin{equation} \chi (y) = \sum _{\omega _n} | \, {\mathcal {G}_{0}^{\text{fs}}}(y; i\omega _n) -{ \mathcal{G} }_{0}(y, y; i\omega _n)|^2, \end{equation} where $\mathcal {G}_{0}^{\text{fs}}(y; \, i\omega _n)$ is the non-interacting Green's function of the impurity model on chain $y$, evaluated at the discrete Matsubara frequencies $\omega_n = (2n+1)\pi/\tilde {\beta}$. We set the number of bath levels coupled to each band to $n_{\text b}=7$, and fix the fictitious inverse temperature at $\tilde {\beta}=200$. \begin{figure}[!t] \centering \includegraphics[width=0.85\linewidth]{Dos_um133ut04hyb1_ver2.eps} \hfil \includegraphics[width=0.85\linewidth]{Disp_and_Dos-ver2.eps} \vspace{-1mm} \caption{(a) The orbital-resolved local spectral function $A_{\alpha}(y; \omega)$ for the MI region (left half of the system) with $U_{\text M}=13.3t$ and $U_{\text {TI}}=4t$. The TI (MI) region corresponds to $y \geq 0$ ($y\leq -1$), and the solid and dashed lines indicate $A_{1}$ and $A_{2}$, respectively. Plots of the corresponding momentum-resolved spectral functions: (b) at $y=0$ (TI edge) for both spin states and (c) at $y=-1$ (MI edge) restricted to the up-spin state. (d) The spectral function ($A_{1}+A_{2}$) around zero frequency inside the MI region near $y=0$. The curves from top to bottom in the vicinity of zero frequency correspond to the spectral functions at $y=-1$, $-2$ and $-3$, respectively.
(e) The same as in (d) but for the interface between a trivial band insulator (BI) and the MI, at $y=-1$ (solid line) and $y=0$ (dotted line).} \label{fig:spec-func} \end{figure} \begin{figure}[b] \centering \includegraphics[width=0.85\linewidth]{Z-SO01-SO1-Um_ver2.eps} \vspace{-3mm} \caption{The quasi-particle weight $Z(y)$ at $y=-1$ as a function of $U_{\text M}$ with $t_{\text{so}}/t = 0.1, 0.2, \dots, 1.0$ (increasing from top to bottom), fixing $Z(9)= 0.8\pm 0.001$. Inset: comparison of the corresponding $Z(y)$ at $y=-5$ (triangle) and $y=-1$ (square) for $t_{\text {so}}=0.6t$. } \label{fig:mod-z} \end{figure} Figure~\ref{fig:spec-func}(a) shows the orbital-resolved local spectral function $A_{\alpha}(y; \omega) = -(1/\pi) \text{Im} G_{\alpha \alpha}(y, y; \omega +i\delta)$ for the MI region with $U_{\text M}=13.3t$ and $U_{\text {TI}}=4t$. Note that the strength of $U_{\text M}$ is slightly larger than the critical value $U_{\text c} \sim 13.2t$ for the Mott transition in the bulk, while $U_{\text {TI}}$ is small enough to realize the non-trivial topological phase inside the heterostructure ($0\leq y \leq 19$)~\cite{PhysRevB.85.165138}. In this figure, we follow the evolution of the in-gap edge state upon approaching the TI edge ($y=0$) from $y=3$; its existence is characteristic of the interface between topologically trivial and nontrivial materials. Indeed, the momentum-resolved spectral function, $A(y; k_x, \omega) = -(1/\pi) \sum_{\alpha} \text{Im}G_{\alpha \alpha}(y,y;k_x, \omega +i\delta)$, displays the edge state at $y=0$ in Fig.~\ref{fig:spec-func} (b). The edge state also penetrates into the MI region and induces a narrow band of mid-gap states. Figures~\ref{fig:spec-func} (c) and (d) show the momentum-resolved spectral function for up-spin states and the quasi-particle peak around $\omega\sim 0$ for $y\leq -1$, respectively.
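The bath fit behind the ED solver described above can be sketched in a few lines. In the toy example below the target Weiss field is itself generated from a small discrete bath with hypothetical parameters, so an exactly matching guess must give $\chi=0$ while a mismatched one gives $\chi>0$:

```python
import numpy as np

beta_t = 200.0                                  # fictitious inverse temperature
wn = (2 * np.arange(64) + 1) * np.pi / beta_t   # Matsubara frequencies

def g0_inv(iw, eps_b, v_b):
    """Inverse non-interacting impurity Green's function with bath levels
    eps_b and couplings v_b (a standard Anderson-impurity form)."""
    return iw - np.sum(v_b[:, None] ** 2 / (iw - eps_b[:, None]), axis=0)

def chi(eps_b, v_b, target):
    """Distance function summed over Matsubara frequencies."""
    g = 1.0 / g0_inv(1j * wn, eps_b, v_b)
    return np.sum(np.abs(g - target) ** 2)

# Hypothetical "true" bath defining the target Weiss field.
eps0, v0 = np.array([-0.5, 0.5]), np.array([0.3, 0.3])
target = 1.0 / g0_inv(1j * wn, eps0, v0)

good = chi(eps0, v0, target)                       # exact match
bad = chi(np.array([-1.0, 1.0]), np.array([0.1, 0.5]), target)
print(good, bad)
```

In practice the $n_{\text b}=7$ bath parameters per band would be obtained by numerically minimizing $\chi$ rather than by guessing.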
While the width of the quasi-particle peak rapidly decreases away from $y=0$ in Fig.~\ref{fig:spec-func} (b), the existence of renormalized quasiparticle states deep inside the MI region is confirmed through the non-vanishing renormalization factor $Z(y)\equiv \left [ 1- \partial_{\omega} \text{Im} \Sigma(y; i\omega) \right ]_{\omega=0}^{-1}$. In addition, the energy spectrum is antisymmetric about $k_x=\pi$, which implies that the heavy-fermion-like mid-gap state for $y\leq -1$ is induced by the penetration of the helical edge state. This scenario is further supported by Fig.~\ref{fig:spec-func} (e), where the TI is replaced by a trivial BI with the same gap size as the present TI. In this case, the heavy quasiparticles in the MI no longer exist, in contrast to the MI/TI interface. We also confirm that once the topological order is destroyed via the strong on-site Coulomb repulsion $U_{\text{TI}}$~\cite{PhysRevB.85.165138}, the spectral weight of the mid-gap state simultaneously goes to zero (not shown here). The formation of renormalized mid-gap states is understood in terms of a ``Kondo-type screening'' mechanism between spin states in the MI and helical edge states. In the MI region, spin excitations are gapless and appear as essentially localized free spins in the DMFT treatment. Such free spins are screened by the helical edge states, forming a Kondo-type resonance and establishing strongly renormalized mid-gap states in the first few layers of the MI. In this sense, the MI/TI interface effectively realizes a Kondo lattice system with helical conduction electrons. Similar proximity effects can also be found in studies of MI/metal heterostructures~\cite{PhysRevLett.101.066802,PhysRevB.81.115134}. However, a distinctive difference becomes visible in $Z(y)$ around the interface.
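In ED/Matsubara practice, the derivative in the definition of $Z(y)$ is commonly estimated from the lowest Matsubara frequency, $Z \approx [1-\text{Im}\,\Sigma(i\omega_0)/\omega_0]^{-1}$. A minimal sketch with an artificial linear self-energy (the value $Z_{\text{true}}=0.2$ is purely illustrative) recovers the input exactly:

```python
import numpy as np

beta_t = 200.0
w0 = np.pi / beta_t          # lowest Matsubara frequency (2n+1)pi/beta, n = 0

# Toy self-energy, linear in frequency: Im Sigma(i w) = -(1/Z_true - 1) w.
z_true = 0.2
sigma = lambda w: -1j * (1.0 / z_true - 1.0) * w

# Finite-difference estimate of Z = [1 - d(Im Sigma)/dw]^{-1} at w = 0.
z_est = 1.0 / (1.0 - sigma(w0).imag / w0)
print(z_est)  # -> 0.2
```

For the linear toy form the estimate is exact; for a realistic $\Sigma(i\omega_n)$ it is accurate only at low temperature (large $\tilde{\beta}$), where $\omega_0$ is small.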
In the present case, the Kondo-type screening is driven only by the helical edge states, so that the resulting mid-gap electron states become much heavier than those at MI/metal interfaces. This originates from the band-insulating nature of the TI with the energy gap $\Delta_{ \text{SO}}$, since in inhomogeneous correlated systems the electron renormalization occurs strongly at the surface due to the reduced coordination number~\cite{PhysRevB.59.2549,PhysRevB.81.115134}. Hence, it is naturally expected that the renormalization of the mid-gap state is closely related to the gap size $\Delta_{ \text{SO}} \sim 4Zt_{ \text{so} }$~\cite{PhysRevB.85.165138}. Figure~\ref{fig:mod-z} indeed shows the monotonic suppression of $Z(y=-1)$ with increasing $t_{ \text{so}}$. The inset of this figure also confirms the strong correlation effects around the interface for weak $U_{\text M}$, while this behavior is inverted for large $U_{\text M}$, where the penetration of the edge state mainly controls the spatial modulation of $Z(y)$. \begin{figure}[!t] \centering \includegraphics[width=0.85\linewidth]{Z-HYB-ver4.eps} \hfil \includegraphics[width=0.88\linewidth]{Dos_Um133Ut04-V_ver2.eps} \vspace{-3mm} \caption{ (a) The $V$ dependence of the renormalization factor $Z(y)$ with $U_{\text M}=13.3t$ and $U_{\text {TI}}=4t$. The square and circle symbols represent $Z(y)$ at $y=-1$ and $y=-2$, respectively; note that $Z(-2)$ is enlarged 20 times. (b) The corresponding local spectral function: the left (right) panel for $y=-1$ ($y=0$).} \label{fig:eff-of-v-disp} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.90\linewidth]{Spec-Dos-V123-v12_ver4.1.eps} \vspace{-2mm} \includegraphics[width=0.90\linewidth]{Helicity_ver4.1.eps} \caption{(a) The momentum-resolved spectral function $A(y; k_x,\omega)$ for $y=-1$ (left), $y=0$ (center) and $y=1$ (right) with $U_{\text M}=13.3t$, $U_{\text{TI}}=4t$.
Here $A(-1; k_x,\omega)$ for $V/t=1.0$ (left top) and $A(0; k_x,\omega)$ for $V/t=3.0$ (center bottom) are enlarged 5 times. (b) The corresponding helicity $\eta (y)$ defined in Eq.~(\ref{helicity}) as a function of $y$ for several values of $V/t$. (c) The $V$ dependence of $\eta (y)$ for $y=0, \pm1$ (left axis) and the dimerization gap $\Delta \equiv 2V\sqrt {Z(-1)Z(0)}$ (right axis).} \label{fig:mom-spec} \end{figure} To get further insight into the mid-gap state, we now focus on the effect of the interface electron tunneling $V$ between the TI and MI regions. We present the $V$ dependence of the renormalization factor in Fig.~\ref{fig:eff-of-v-disp} (a), where a change in the slope of $Z(-2)$ is found, in spite of the monotonic evolution of $Z(-1)$. This correlation enhancement at $y=-2$ can be understood in terms of band reconstruction at the interface: the formation of a dimerized state between $y=-1$ and $y=0$, whose energy gap is roughly given by $\Delta \sim 2V\sqrt{Z(-1)Z(0)}$~\cite{PhysRevLett.95.066402,PhysRevB.73.245118}. In the present system, $\Delta$ exceeds the band width of the edge state ($\sim \Delta_{\text{SO}} \simeq t$) above $V/t\sim 1.8$, which explains the $V$ dependence of $Z(-2)$ in Fig.~\ref{fig:eff-of-v-disp} (a) [see also Fig.~\ref{fig:mom-spec} (c)]. In Fig.~\ref{fig:eff-of-v-disp} (b), we further show the evolution of the energy gap around $\omega \sim 0$ for $y=-1$ and $y=0$. As anticipated, in both cases the gap structure is formed around $V/t \sim 2.5$ and the gap size grows monotonically with increasing $V$. An important question is how the topological edge state at $y=0$ depends on the dimerization between $y=-1$ and $y=0$. To this end, in Fig.~\ref{fig:mom-spec} (a), $A(y; k_x, \omega)$ for $y=0, \pm 1$ is plotted for $k_x$ from $\pi/2$ to $3\pi/2$. From the result we see that the spectral weight at the edge sites of the TI ($y=0$) is gradually suppressed as $V$ increases.
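The estimate $\Delta \sim 2V\sqrt{Z(-1)Z(0)}$ can be caricatured by a two-site dimer with the renormalized tunneling $V\sqrt{Z(-1)Z(0)}$, whose level splitting reproduces exactly this gap. The $Z$ values below are illustrative placeholders, not values read off the figures:

```python
import numpy as np

# Illustrative (hypothetical) values: tunneling and quasi-particle weights
# on the two chains forming the dimer.
V, z_m1, z_0 = 1.8, 0.06, 0.5

# Renormalized effective tunneling between the y = -1 and y = 0 chains.
v_eff = V * np.sqrt(z_m1 * z_0)
h = np.array([[0.0, v_eff], [v_eff, 0.0]])   # two-site dimer Hamiltonian

w = np.linalg.eigvalsh(h)
delta = w[1] - w[0]                          # bonding-antibonding splitting
print(delta)                                 # equals 2 * V * sqrt(z_m1 * z_0)
```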
By contrast, the magnitude of $A(1; k_x, \omega)$ shows an upward turn across $V/t \sim 2.0$, and above $V/t \sim 3.0$ we find a characteristic $k_x$ dependence of $A(1; k_x, \omega)$ which is identical to that of $A(0; k_x, \omega)$ at $V/t=0$. Thus we conclude that in the limit $V \rightarrow \infty$, the edge state shifts its position from $y=0$ to $y=1$. Looking more closely at the bottom panels of Fig.~\ref{fig:mom-spec} (a), we find that the edge state exhibits an anomalous $y$ dependence: due to electron tunneling along the $y$-direction, the magnitude of the Dirac-cone dispersion is expected to decrease monotonically away from $y=1$, but the obtained spectral weight at $y=-1$ even exceeds that at $y=0$. This indicates that there exist two Dirac cones, at $y = \pm 1$. Since the edge state should be localized at $y=1$ in the strong-$V$ limit, we refer to the Dirac cone at $y=-1$, which looks like a copy of the edge state at $y=1$, as a {\it topological shadow edge-state} (TSE). We can characterize the edge states, including the TSE, by introducing the helicity function $\eta (y)$ defined as \begin{equation} \eta (y) = \int _{ |\omega |<\Delta_{\text{SO}} } \frac{ d\omega d k_x}{ (2\pi )^2 } | A_{\uparrow}(y; \omega, k_x) -A_{\downarrow}(y; \omega, k_x)|, \label{helicity} \end{equation} where $A_{\sigma}$ is the spectral function ($A_1+A_2$) with spin $\sigma$ and the $\omega$-integral is restricted to the gap $\Delta _{\text {SO}} = 4Zt_{\text{so}}$. We emphasize that $\eta (y)$ in Figs.~\ref {fig:mom-spec} (b) and~\ref {fig:mom-spec} (c) shows good correspondence with the edge-state behavior in Fig.~\ref {fig:mom-spec} (a). Until the dimerization gap opens at $V/t \sim 2.5$, the magnitude of $\eta(-1)$ increases while that of $\eta(0)$ decreases with increasing $V$, as shown in Fig.~\ref {fig:mom-spec} (c) [see also Fig.~\ref{fig:eff-of-v-disp}(a)]. This may be understood in terms of the topological band reconstruction.
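A numerical sketch of the helicity integral (not the actual DMFT data): a pair of counter-propagating, spin-resolved Lorentzian branches mimics a helical edge state and yields $\eta > 0$, while spin-degenerate spectra give $\eta = 0$. All parameter values and the Lorentzian form are assumptions of this illustration:

```python
import numpy as np

gap, v, eta_b = 1.0, 0.5, 0.05       # gap window, edge velocity, broadening
ws = np.linspace(-gap, gap, 201)     # frequency grid inside |omega| < gap
ks = np.linspace(0.0, 2 * np.pi, 201)
W, K = np.meshgrid(ws, ks, indexing='ij')

lor = lambda x: (eta_b / np.pi) / (x ** 2 + eta_b ** 2)
a_up = lor(W - v * (K - np.pi))      # right-moving up-spin branch
a_dn = lor(W + v * (K - np.pi))      # left-moving down-spin branch

def helicity(a1, a2):
    """Discretized version of the helicity integral over the gap window."""
    dw, dk = ws[1] - ws[0], ks[1] - ks[0]
    return np.sum(np.abs(a1 - a2)) * dw * dk / (2 * np.pi) ** 2

print(helicity(a_up, a_dn), helicity(a_up, a_up))
```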
For relatively large values of $V/t \lesssim 2.5$, the energy spectra at $y=-1$ and $y=0$ are strongly hybridized with each other, which extends the topologically non-trivial band structure toward $y=-1$. In this sense, the sites at $y=-1$, rather than $y=0$, should be regarded as the edge of the TI. We note that when $V$ increases further, the edge state should shift its position from $y=-1$ to $y=1$. Therefore, the TSE for $V/t \sim 3$ can be understood as a remnant of this displacement, as plotted in Fig.~\ref{fig:mom-spec} (c). To summarize, we presented a DMFT study of a minimal model for a heterostructure of a two-dimensional TI embedded in MIs. Our results showed that the helical edge state at the edge of the TI penetrates into the MI, even if the Hubbard gap is very large. We clarified that such proximity effects induce a strongly renormalized mid-gap state carrying a remnant of the helical energy spectrum inside the MI region. It was found that the correlation effects around the interface are strongly enhanced due to the spin-orbit gap in the TI region. We also demonstrated how the hybridization between the TI and MI affects the electron penetration, and found an enhanced correlation effect and the existence of a TSE inside the MI, driven by the band reconstruction at the interface. We acknowledge financial support by a Grant-in-Aid for the Global COE Program ``The Next Generation of Physics, Spun from Universality and Emergence'' from MEXT of Japan. N.K. is supported by KAKENHI (No.20102008) and JSPS through its FIRST Program. S.U. is supported by a JSPS Fellowship for Young Scientists and M.S. is grateful for support by the Swiss Nationalfonds and the NCCR MaNEP. \input{MI-TI_interface_paper-V3-MS.bbl} \end{document}
\section*{Introduction} This paper aims at extending to non-compact Riemannian manifolds with boundary the use of two important tools in the geometric analysis of compact spaces, namely, the integration by parts and the weak maximum principle for subharmonic functions. The motivation underlying this study is mainly the attempt to obtain new information on the geometry of graphs or, more generally, of hypersurfaces with boundary and prescribed mean curvature inside a Riemannian product of the type $N\times\mathbb{R}$.\bigskip In the setting of Riemannian manifolds without boundary, it is by now well known that parabolicity represents a good substitute for the compactness of the underlying space, see e.g. the account in \cite{PS-Fortaleza}. Thus, in order to extend the use of the classical tools alluded to above, we are naturally led to a deeper study of parabolicity for manifolds with boundary. As we shall see in Appendix \ref{appendix-different}, there are several concepts of parabolicity in this setting and they form a certain hierarchy, so one has to make a choice. In view of our geometric purposes we decided to follow the more traditional path, \cite{Gr, Gr1, Gr2, GN}, which, from the stochastic viewpoint, translates into the property that the reflected Brownian motion be recurrent. This is the strongest of the notions of parabolicity known in the literature, but it is also the one which seems to be most closely related to the geometry of the space. Thus, for instance, every proper minimal graph over a smooth domain of $\mathbb{R}^{2}$ is parabolic in our traditional sense because of its area growth property; see Appendix \ref{appendix-different}.
In order to give the precise definition of parabolicity we need to recall the notion of weak sub (super) solution subjected to Neumann boundary conditions.\bigskip Let $\left( M,g\right) $ be an oriented Riemannian manifold with smooth boundary $\partial M\neq\emptyset$ and exterior unit normal $\nu.$ By a domain in $M$ we mean a not necessarily connected open set $D\subseteq M$. We say that the domain $D$ is smooth if its topological boundary $\partial D$ is a smooth hypersurface $\Gamma$ with boundary $\partial\Gamma=\partial D\cap\partial M$. Clearly, if $\partial M=\emptyset$ then the smoothness condition reduces to the usual one. It is a standard fact that every manifold $M$ with (possibly empty) boundary has an exhaustion by smooth pre-compact domains. Simply choose a proper smooth function $\rho:M\rightarrow \mathbb{R}_{\geq0}$ and, according to Sard's theorem, take a sequence $\left\{ t_{k}\right\} \nearrow+\infty$ such that $t_{k}$ is a regular value for both $\left. \rho\right\vert _{\mathrm{int}M}$ and $\left. \rho\right\vert _{\partial M}$. Then $D_{k}=\left\{ \rho<t_{k}\right\} $ defines the desired exhaustion with smooth boundary $\partial D_{k}=\left\{ \rho=t_{k}\right\} $. Adopting a notation similar to the one in \cite{Gr1}, for any domain $D\subseteq M$ we define \[ \partial_{0}D=\partial D\cap\mathrm{int}M. \] Note also that $D$ could include part of the boundary of $M$. We therefore set \[ \partial_{1}D=\partial M\cap D. \] \bigskip Now, suppose $D\subseteq M$ is any domain. We put the following \begin{definition} By a weak Neumann solution $u\in W_{loc}^{1,2}\left( D\right) $ of the problem \begin{equation} \left\{ \begin{array} [c]{ll} \Delta u\geq0 & \text{on }D\\ \dfrac{\partial u}{\partial\nu}\leq0 & \text{on }\partial_{1}D, \end{array} \right.
\label{subneumannproblem} \end{equation} we mean that the following inequality \begin{equation} -\int_{D}\left\langle \nabla u,\nabla\varphi\right\rangle \geq0 \label{subsol} \end{equation} holds for every $0\leq\varphi\in C_{c}^{\infty}\left( D\right) $. Similarly, by taking $D=M$, one defines the notion of weak Neumann subsolution of the Laplace equation on $M$ as a function $u\in W_{loc}^{1,2}\left( M\right) $ which satisfies (\ref{subsol}) for every $0\leq\varphi\in C_{c}^{\infty }\left( M\right) $. As usual, the notions of weak supersolution and weak solution can be obtained by reversing the inequality or by replacing the inequality with an equality in (\ref{subsol}), and removing the sign condition on $\varphi$. \end{definition} \begin{remark} Clearly, in the above definition, it is equivalent to require that (\ref{subsol}) holds for every $0\leq\varphi\in Lip_{c}\left( M\right) $. Note also that standard density arguments work even for manifolds with boundary and, therefore, (\ref{subsol}) extends to all compactly supported $0\leq\varphi\in W_{0}^{1,2}\left( D\right) $. Here, as usual, $W_{0}^{1,2}\left( D\right) $ denotes the closure of $C_{c}^{\infty}\left( D\right) $ with respect to the $W^{1,2}$-norm. \end{remark} \begin{remark} Note that in the equality case we have the usual notion of variational solution of the mixed problem \[ \begin{cases} \Delta u=0 & \text{on }D\\ \dfrac{\partial u}{\partial\nu}=0 & \text{on }\partial_{1}D\\ u=0 & \text{on }\partial_{0}D. \end{cases} \] \end{remark} \begin{remark} If $\partial M=\emptyset$ or, more generally, $D\subseteq\mathrm{int}M$, the Neumann condition disappears and we recover the usual definition of weak sub- (super-)solution. Obviously, in the smooth setting, a classical solution of (\ref{subneumannproblem}) is also a weak Neumann subsolution, as one can verify using integration by parts. Actually, this is true in a more general setting.
See Definition \ref{def_weak-sol-divX} and Lemma \ref{lemma_equiv-weak-def} in Subsection \ref{subsection-divergence}. \end{remark} We are now ready to give the following definition of parabolicity in the form of a Liouville-type result. \begin{definition} \label{def_parab}An oriented Riemannian manifold $M$ with boundary $\partial M\neq\emptyset$ is said to be parabolic if any bounded above, weak Neumann subsolution of the Laplace equation on $M$ must be constant. Explicitly, for every $u\in C^{0}\left( M\right) \cap W_{loc}^{1,2}\left( M\right) $, \begin{equation} \label{def_par} \begin{array} [c]{ccc} \left\{ \begin{array} [c]{ll} \Delta u\geq0 & \text{on }M\\ \dfrac{\partial u}{\partial\nu}\leq0 & \text{on }\partial M\\ \sup_{M}u<+\infty & \end{array} \right. & \Rightarrow & u\equiv\mathrm{const}. \end{array} \end{equation} \end{definition} It is known from \cite{Gr1} that, in case $M$ is complete with respect to the intrinsic distance function $d$, geometric conditions implying parabolicity rely on volume growth properties of the space. In order to give the precise statement it is convenient to introduce some notation. Having fixed a reference origin $o\in\mathrm{int}M$, we set $B_{R}^{M}\left( o\right) =\left\{ x\in M:d\left( x,o\right) <R\right\} $ and $\partial B_{R}^{M}\left( o\right) =\left\{ x\in M:d\left( x,o\right) =R\right\} $, the metric ball and sphere of $M$ centered at $o$ and of radius $R>0$. We also denote by $r\left( x\right) =d\left( x,o\right) $ the distance function from $o$. Clearly, $r\left( x\right) $ is Lipschitz, hence differentiable a.e. in $\mathrm{int}M$. Moreover, for a.e. $x\in\mathrm{int}M$, differentiating $r$ along a minimizing geodesic from $o$ to $x$ (which exists by completeness) we easily see that the usual Gauss lemma holds, namely, $\left\vert \nabla r\right\vert =1$ a.e. in $\mathrm{int}M$. Therefore, by the co-area formula applied to $\left.
r\right\vert _{\mathrm{int}M}$ and the fact that $\mathrm{vol}B_{R}^{M}\left( o\right) =\mathrm{vol}\left( B_{R}^{M}\left( o\right) \cap\mathrm{int}M\right) $, we have \[ \frac{d}{dR}\mathrm{vol}B_{R}^{M}\left( o\right) =\mathrm{Area}\left( \partial_{0}B_{R}^{M}\left( o\right) \right) , \] for a.e. $R>0$. The following result is due to Grigor'yan \cite{Gr1}. For a proof in the $C^{2}$ case see Theorem \ref{th_areagrowth_meancurvop} and Remark \ref{rmk_areagrowth_meancurvop}. \begin{theorem} \label{th_growth}Let $\left( M,g\right) $ be a complete Riemannian manifold with boundary $\partial M\neq\emptyset$. If, for some reference point $o\in M$, either \[ \frac{R}{\mathrm{vol}B_{R}^{M}\left( o\right) }\notin L^{1}\left( +\infty\right) \] or \[ \frac{1}{\mathrm{Area}\left( \partial_{0}B_{R}^{M}\left( o\right) \right) }\notin L^{1}\left( +\infty\right) \] then $M$ is parabolic. \end{theorem} It is a usual consequence of the co-area formula that the area growth condition is weaker than the volume growth condition. On the other hand, the volume growth condition is more stable with respect to (even rough) perturbations of the metric and sometimes it characterizes the parabolicity of the space. Therefore, both are important. The first main result of the paper is the following maximum principle characterization of parabolicity. It extends to manifolds with boundary a classical result by L.V. Ahlfors. \begin{theorem} [Ahlfors maximum principle]\label{th_intro_Ahlfors}$M$ is parabolic if and only if the following maximum principle holds. For every domain $D\subseteq M$ with $\partial_{0}D\neq\emptyset$ and for every $u\in C^{0}\left( \overline{D}\right) \cap W_{loc}^{1,2}\left( D\right) $ satisfying, in the weak Neumann sense, \[ \left\{ \begin{array} [c]{ll} \Delta u\geq0 & \text{on }D\\ \dfrac{\partial u}{\partial\nu}\leq0 & \text{on }\partial_{1}D\\ \sup\limits_{D}u<+\infty, & \end{array} \right. \] it holds \[ \sup_{D}u=\sup_{\partial_{0}D}u.
\] \end{theorem} It is worth observing that, in case $D=M$, the Neumann boundary condition plays no role and the result takes the following form, which is crucial in the applications. \begin{theorem} \label{th_ahlfors-wholeM}Let $M$ be a parabolic manifold with boundary $\partial M\neq\emptyset$. If $u\in C^{0}\left( M\right) \cap W_{loc}^{1,2}\left( \mathrm{int}M\right) $ satisfies \[ \left\{ \begin{array} [c]{ll} \Delta u\geq0 & \text{on }\mathrm{int}M\\ \sup_{M}u<+\infty & \end{array} \right. \] then \[ \sup_{M}u=\sup_{\partial M}u. \] \end{theorem} It is not surprising that this global maximum principle proves to be very useful for obtaining height estimates for constant mean curvature hypersurfaces in product spaces. By way of example, we point out the following \begin{theorem} [Height estimate]\label{th_intro_hest} Let $N$ be a Riemannian manifold without boundary and Ricci curvature satisfying $Ric_{N}\geq0$. Let $\Sigma$ be a complete, oriented hypersurface in $N\times\mathbb{R}$ with boundary $\partial\Sigma\neq\emptyset$ and satisfying the following requirements: \begin{enumerate} \item[(i)] $\Sigma$ has quadratic intrinsic volume growth \begin{equation} \mathrm{vol}B_{R}^{\Sigma}\left( o\right) =O\left( R^{2}\right) ,\text{ as }R\rightarrow+\infty; \label{vg-hesimate} \end{equation} \item[(ii)] $\partial\Sigma$ is contained in the slice $N\times\left\{ 0\right\} $; \item[(iii)] For a suitable choice of the Gauss map $\mathcal{N}$ of $\Sigma$, the hypersurface $\Sigma$ has constant mean curvature $H>0$ and the angle $\Theta$ between $\mathcal{N}$ and the vertical vector field $\partial /\partial t$ is contained in the interval $[\frac{\pi}{2},\frac{3\pi}{2}]$, i.e. \[ \cos\Theta=\left\langle \mathcal{N},\frac{\partial}{\partial t}\right\rangle \leq0. \] \end{enumerate} If $\Sigma$ is contained in a slab $N\times\lbrack-T,T]$ for some $T>0$, then \[ \Sigma\subseteq N\times\left[ 0,\frac{1}{H}\right] .
\] \end{theorem} We observe explicitly that (\ref{vg-hesimate}) can be replaced by the stronger extrinsic condition \[ \mathrm{vol}\left( B_{R}^{N}\left( o\right) \cap\Sigma\right) =O\left( R^{2}\right) ,\text{ as }R\rightarrow+\infty, \] which, in turn, follows from the relation \[ B_{R}^{\Sigma}\left( o\right) \subseteq B_{R}^{N}\left( o\right) \cap\Sigma. \] We also note that there are important situations where the assumption on the Gauss map is automatically satisfied and the volume growth condition on the hypersurface is inherited from that of the ambient space. The following height estimate extends previous results for $H$-graphs over non-compact domains (\cite{He}, \cite{HoLiRo}, \cite{CheRo}, \cite{Sp}). \begin{theorem} [Height estimate for graphs]\label{th_intro_hest_graph}Let $\left( N,g\right) $ be a complete Riemannian manifold without boundary satisfying $Ric_{N}\geq0$ and \[ \mathrm{vol}B_{R}^{N}\left( o\right) =O\left( R^{2}\right) ,\text{ as }R\rightarrow+\infty. \] Let $M\subset N$ be a closed domain with smooth boundary $\partial M\neq\emptyset$. Suppose we are given a graph $\Sigma$ over $M$ with boundary $\partial\Sigma\subset M\times\left\{ 0\right\} $ and constant mean curvature $H>0$ with respect to the downward Gauss map. If $\Sigma$ is contained in a slab, then \[ \Sigma\subseteq M\times\left[ 0,\frac{1}{H}\right] . \] \end{theorem} In the particular case of graphs over a domain of a surface of non-negative Gauss curvature, we obtain the following result, which extends to non-homogeneous surfaces Theorem 4 in \cite{RoRo}. \begin{corollary} \label{coro_intro_hest_graph} Let $\left( N,g\right) $ be a complete $2$-dimensional Riemannian manifold without boundary of non-negative Gauss curvature. Let $M\subset N$ be a closed domain with smooth boundary $\partial M\neq\emptyset$.
Suppose we are given a graph $\Sigma$ over $M$ with boundary $\partial\Sigma\subset M\times\left\{ 0\right\} $ and constant mean curvature $H>0$ with respect to the downward Gauss map. Then \[ \Sigma\subseteq M\times\left[ 0,\frac{1}{H}\right] . \] \end{corollary} In the setting of manifolds without boundary, it is well known from a classical work by T. Lyons and D. Sullivan \cite{LS} that the validity of an $L^{2}$-divergence theorem is related, and in fact equivalent, to the parabolicity of the space. We shall complete the picture by extending the $L^{2}$-divergence theorem to non-compact manifolds with boundary. \begin{theorem} [$L^{2}$-divergence theorem]\label{th_intro_Stokes}Let $M$ be a Riemannian manifold with boundary $\partial M\neq\emptyset$ and outward pointing unit normal $\nu$. Then $M$ is parabolic if and only if the following holds. Let $X$ be a vector field on $M$ satisfying the following conditions: \begin{align} & \text{(a) }\left\vert X\right\vert \in L^{2}\left( M\right) \label{KNR1}\\ & \text{(b) }\left\langle X,\nu\right\rangle \in L^{1}\left( \partial M\right) \nonumber\\ & \text{(c) }\operatorname{div}X\in L_{loc}^{1}(M),\ \left( \operatorname{div}X\right) _{-}\in L^{1}\left( M\right) .\nonumber \end{align} Then \[ \int_{M}\operatorname{div}X=\int_{\partial M}\left\langle X,\nu\right\rangle . \] \end{theorem} A weaker version of the $L^{2}$-divergence theorem, involving solutions $X$ of inequalities of the type $\operatorname{div}X\geq f$ with boundary conditions $\left\langle X,\nu\right\rangle \leq0$, will be employed in our investigation of hypersurfaces in product spaces; see Proposition \ref{propineq}. In particular, from the latter we shall obtain the following result for hypersurfaces contained in a half-space of $N\times\mathbb{R}$. \begin{theorem} [Slice theorem]\label{th_intro_slice} Let $N$ be a Riemannian manifold without boundary.
Let $\Sigma\subset N\times\lbrack0,+\infty)$ be a complete, oriented hypersurface with boundary $\partial\Sigma\neq\emptyset$ contained in the slice $N\times\left\{ 0\right\} $ and satisfying the volume growth condition \[ \mathrm{vol}B_{R}^{\Sigma}\left( o\right) =O\left( R^{2}\right) ,\text{ as }R\rightarrow+\infty. \] Assume that, for a suitable choice of the Gauss map $\mathcal{N}$ of $\Sigma$, the hypersurface $\Sigma$ has non-positive mean curvature $H\left( x\right) \leq0$ and the angle $\Theta$ between $\mathcal{N}$ and the vertical vector field $\partial/\partial t$ is contained in the interval $[\frac{\pi}{2},\frac{3\pi}{2}]$, i.e. \[ \cos\Theta=\left\langle \mathcal{N},\frac{\partial}{\partial t}\right\rangle \leq0. \] If there exists some half-space $N\times\lbrack t,+\infty)$ of $N\times \mathbb{R}$ such that \[ \mathrm{vol}\left( \Sigma\cap N\times\lbrack t,+\infty)\right) <+\infty, \] then $\Sigma\subset N\times\left\{ 0\right\} $. \end{theorem} In case $\Sigma$ is given graphically over a parabolic manifold $M$, we shall obtain the following variant of the slice theorem, which involves the volumes of orthogonal projections of $\Sigma$ on $M$. Its proof requires a Liouville-type theorem for the mean curvature operator under volume growth conditions; see Theorem \ref{th_areagrowth_meancurvop}. \begin{theorem} [Slice theorem for graphs]\label{th_intro_slice_graphs}Let $M$ be a complete manifold with boundary $\partial M\neq\emptyset$, outward pointing unit normal $\nu$, and (at most) quadratic volume growth, i.e. \[ \mathrm{vol}B_{R}^{M}\left( o\right) =O\left( R^{2}\right) ,\text{ as }R\rightarrow+\infty, \] for some origin $o\in M$. Let $\Sigma$ be a graph over $M$ with non-positive mean curvature $H\left( x\right) \leq0$ with respect to the orientation given by the downward pointing Gauss map $\mathcal{N}\left( x\right) $.
Assume that $\partial\Sigma\cap M\times\left\{ T\right\} =\emptyset$ for some $T>0$ and that at least one of the following conditions is satisfied: \begin{enumerate} \item[(a)] $\partial\Sigma=\partial M\times\left\{ 0\right\} $ and $\Sigma\subset M\times\lbrack0,+\infty).$ \item[(b)] $M$ and $\Sigma$ are real analytic. \item[(c)] On $\partial\Sigma$, the Gauss map $\mathcal{N}\left( x\right) $ of $\Sigma$ and the Gauss map $\mathcal{N}_{0}\left( x\right) =\left( -\nu\left( x\right) ,0\right) $ of the boundary $\partial M\times\left\{ t\right\} $ of any slice form an angle $\theta\left( x\right) \in \lbrack-\frac{\pi}{2},\frac{\pi}{2}]$. \end{enumerate} \noindent If the portion of the graph $\Sigma$ contained in some half-space $M\times\lbrack t,+\infty)$ has finite volume projection on the slice $M\times\left\{ 0\right\} $, then $\Sigma$ is a horizontal slice of $M\times\mathbb{R}$. \end{theorem} It is worth pointing out that, in the setting of manifolds without boundary and for $H=0$, half-space properties in a spirit similar to our slice-type theorems have been obtained in the very recent paper \cite{RSS-JDG} by H. Rosenberg, F. Schulze and J. Spruck. More precisely, they are able to show that curvature restrictions and potential theoretic properties (parabolicity) of the base manifold $M$ in the ambient product space $M\times\mathbb{R}$ force properly immersed minimal hypersurfaces and entire minimal graphs in a half-space to be totally geodesic slices. This holds without any further condition on their superlevel sets. \bigskip The paper is organized as follows. In Section \ref{section-equilibrium} we recall the link between parabolicity and absolute capacity of compact subsets. We also take the occasion to give a detailed proof of the existence and regularity of the equilibrium potentials of condensers in the setting of manifolds with boundary. These rely on the solution of mixed boundary value problems in non-smooth domains.
Section \ref{section-ahlfors} contains the proof of the maximum principle characterization of parabolicity and its applications to obtain height estimates for complete CMC hypersurfaces with boundary in Riemannian products. In Section \ref{section-stokes} we relate the parabolicity of a manifold with boundary to the validity of the $L^{2}$-Stokes theorem. We also provide a weak form of this result that can be applied to obtain slice-type results for hypersurfaces with boundary in Riemannian products. Further slice-type results, based on a Liouville-type theorem for graphs, are also given. In the final Appendix we survey, and compare, different notions of parabolicity for manifolds with boundary. We also exemplify how the results of this paper can be applied in the setting of minimal surfaces. In particular, we recover, with a deterministic proof, a result by R. Neel on the parabolicity of minimal graphs.\bigskip To conclude this introductory part, we mention that there are natural and interesting applications and extensions of the results obtained in this paper both to Killing graphs and to the $p$-Laplace operator. These aspects will be presented in the forthcoming papers \cite{ILPS-Killing} and \cite{IPS-preprint}, respectively. \section{Capacity \& equilibrium potentials\label{section-equilibrium}} As in the case where $M$ has no boundary, given a compact set $K$ and an open set $\Omega$ containing $K$, the capacity of the condenser $(K, \Omega)$ is defined by \[ \text{cap}(K, \Omega) = \inf\left\{ \int_{\Omega}|\nabla u|^{2} \,:\, u\in C^{\infty}_{c}(\Omega),\ u\geq1 \text{ on } K\right\} . \] When $\Omega=M$, we write $\text{cap}(K, M) = \text{cap}(K)$ and we refer to it as the (absolute) capacity of $K$.
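For orientation, we recall the classical Euclidean computation behind this notion (a standard fact, not specific to the present paper):

```latex
For the condenser $(\overline{B}_{1},B_{R})$ in $\mathbb{R}^{2}$, the radial
function $u(x)=\log (R/\left\vert x\right\vert )/\log R$, extended by $1$ on
$\overline{B}_{1}$ and by $0$ outside $B_{R}$, is an admissible potential, and
\[
\int_{B_{R}}\left\vert \nabla u\right\vert ^{2}
=\int_{1}^{R}\frac{2\pi r\,dr}{r^{2}\log ^{2}R}
=\frac{2\pi }{\log R}\longrightarrow 0,\qquad R\rightarrow +\infty ,
\]
so $\mathrm{cap}(\overline{B}_{1})=0$; by monotonicity every compact subset of
$\mathbb{R}^{2}$ has zero absolute capacity, consistently with the
parabolicity of the plane. In $\mathbb{R}^{n}$, $n\geq 3$, the analogous
computation gives $\mathrm{cap}(\overline{B}_{1})>0$.
```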
A simple approximation argument shows that the infimum on the right hand side can be equivalently computed letting $u$ range over the set \[ \{ u \in Lip_{c}(\Omega) \,:\, u=1 \text{ on }K\} \] or even over \[ W_{0}(K,\Omega) = \{ u\in C(\overline{\Omega})\cap W^{1,2}_{0}(\Omega) \,:\, u=1 \text{ on }K \}, \] where $W^{1,2}_{0}(\Omega)=\overline{C^{\infty}_{c}(\Omega)}$. We refer to functions in $W_{0}(K,\Omega)$ as admissible potentials for the condenser $(K, \Omega)$. The usual monotonicity properties of capacity hold, namely, if $K\subseteq K_{1}$ are compact sets and $\Omega\subseteq\Omega_{1}$ are open, then $\text{cap}(K,\Omega_{1})\leq\text{cap}(K_{1},\Omega_{1})\leq\text{cap}(K_{1},\Omega)$. This allows one to define first the capacity of an open set $U\subset\Omega$ as $\text{cap}(U,\Omega) = \sup_{K\subset U,\, K \text{ compact}} \text{cap}(K,\Omega)$ and then the capacity of an arbitrary set $E\subset\Omega$ as $\text{cap}(E,\Omega) = \inf_{E\subset U,\, U \text{ open}} \text{cap}(U,\Omega)$. We are going to show that the Liouville-type definition of parabolicity given in the introduction is equivalent to the statement that every compact subset has zero capacity. This depends on the construction of equilibrium potentials for capacity, which plays a vital role also in the proof of the $L^{2}$ divergence theorem characterization of parabolicity, Theorem~\ref{th_intro_Stokes}. It should be pointed out that, while these results are in some sense well known, we have not been able to find a reference which deals explicitly with matters concerning regularity up to the boundary of these equilibrium potentials. The following simple lemma will be useful in the proof of the proposition.
\begin{lemma} \label{cap_exaustion} Let $D\Subset\Omega$ be open sets, and let $D_{n}$ and $\Omega_{n}$ be sequences of open sets such that \[ \overline D\subseteq D_{n+1}\subseteq D_{n} \subseteq\overline{D}_{n} \Subset\Omega_{n} \subseteq\Omega_{n+1}\subseteq\Omega, \quad\cap_{n} D_{n} = D,\quad\cup_{n} \Omega_{n}=\Omega. \] Then \begin{equation} \label{limcap}\lim_{n} \mathrm{cap}(\overline{D}_{n}, \Omega_{n}) = \mathrm{cap}(\overline D, \Omega). \end{equation} \end{lemma} \begin{proof} It follows from monotonicity that the sequence $\mathrm{cap}(\overline{D}_{n},\Omega_{n})$ is decreasing and bounded below by $\mathrm{cap}(\overline{D}, \Omega)$, so the limit on the left hand side of \eqref{limcap} exists and \[ \lim_{n} \mathrm{cap}(\overline{D}_{n}, \Omega_{n}) \geq\mathrm{cap}(\overline D, \Omega). \] For the converse, let $\phi\in Lip_{c}(\Omega)$ with $\phi=1$ on $\overline D$, and for $\epsilon>0$ let \[ \phi_{\epsilon}= \min\left\{ 1, \left( \frac{\phi-\epsilon}{1-2\epsilon }\right) _{+} \right\} . \] By assumption, for every sufficiently large $n$ we have \[ \overline{D}_{n} \subseteq\{x\,:\, \phi(x)>1-\epsilon\} \quad\text{and}\quad \mathrm{supp}\,\phi\subset\Omega_{n}, \] and therefore $\phi_{\epsilon}$ is an admissible potential for the condenser $(\overline D_{n}, \Omega_{n})$, so that \[ \int|\nabla\phi_{\epsilon}|^{2} \geq\mathrm{cap}(\overline{D}_{n}, \Omega _{n}), \] whence, letting $n\to\infty$, \[ \lim_{n} \mathrm{cap} (\overline{D}_{n}, \Omega_{n})\leq\int|\nabla \phi_{\epsilon}|^{2} \quad\forall\epsilon>0.
\] On the other hand, by monotone convergence, \[ \int|\nabla\phi_{\epsilon}|^{2} = \frac{1}{(1-2\epsilon)^{2}} \int_{\{x\,:\, \epsilon\leq\phi(x)\leq1-\epsilon\}} |\nabla\phi|^{2} \to\int_{\Omega}|\nabla\phi|^{2} \quad\text{as }\, \epsilon\to0, \] and we conclude that \[ \lim_{n} \mathrm{cap} (\overline{D}_{n}, \Omega_{n})\leq\int|\nabla\phi|^{2}, \] which in turn implies that \[ \lim_{n} \mathrm{cap}(\overline{D}_{n}, \Omega_{n}) \leq\mathrm{cap}(\overline D, \Omega). \] \end{proof} \begin{proposition} \label{equilibrium_potentials} Let $D\Subset\Omega$ be relatively compact domains with smooth boundaries $\overline{\partial_{0}D}$ and $\overline {\partial_{0}\Omega}$ transversal to $\partial M$. Then there exists $u\in W_{0}(\overline{D},\Omega)\cap C^{\infty}((\Omega\setminus\overline{D})\cup\partial_{1}(\Omega\setminus\overline{D}))$ such that $0\leq u\leq1$ and \[ \mathrm{cap}(\overline{D},\Omega)=\int_{\Omega}|\nabla u|^{2}. \] \end{proposition} \begin{proof} Consider the mixed boundary value problem \begin{equation} \label{cap1} \begin{cases} \Delta u = 0 \, \text{ in } \Omega\setminus\overline D & \\ \frac{\partial u}{\partial\nu} = 0 \, \text{ on } \partial_{1} (\Omega \setminus\overline D) & \\ u=0 \text{ on } \partial_{0} \Omega\,\, , u=1 \text{ on } \partial_{0} D. & \end{cases} \end{equation} It follows from \cite{Lieberman-JMAA}, and the well known local regularity theory, that \eqref{cap1} has a classical solution $u\in C(\overline{\Omega }\setminus D) \cap C^{\infty}((\Omega\setminus\overline D) \cup\partial_{1} (\Omega\setminus\overline D))$. By the strong maximum principle and the boundary point lemma, it follows that $0<u<1$ on $\Omega\setminus\overline D$. We extend $u$ to $\Omega$ by setting it equal to $1$ on $D$.
To show that $u\in W^{1,2}(\Omega)$, choose $\epsilon\in(0,1/2)$ such that $\epsilon$ and $1-\epsilon$ are regular values of $u$, and let $\Omega_{\epsilon}= \{x \,:\, u(x)\geq\epsilon\}$, $D_{\epsilon}=\{x\,:\, u(x)>1-\epsilon\}$ and \[ u_{\epsilon}= \frac{u-\epsilon}{1-2\epsilon}, \] so that $u_{\epsilon}\in C^{2} (\overline{\Omega}_{\epsilon}\setminus D_{\epsilon})$ satisfies \[ \begin{cases} \Delta u_{\epsilon}= 0 \, \text{ in } \Omega_{\epsilon}\setminus\overline D_{\epsilon} & \\ \frac{\partial u_{\epsilon}}{\partial\nu} = 0 \, \text{ on } \partial_{1} (\Omega_{\epsilon}\setminus\overline D_{\epsilon}) & \\ u_{\epsilon}=0 \text{ on } \partial_{0} \Omega_{\epsilon}\,\, , u_{\epsilon}=1 \text{ on } \partial_{0} D_{\epsilon}. & \end{cases} \] By the usual Dirichlet principle, $u_{\epsilon}$ is the equilibrium potential of the capacitor $(\overline{D}_{\epsilon}, \Omega_{\epsilon})$ and, in particular, \begin{equation} \label{cap2}\frac{1}{(1-2\epsilon)^{2}}\int_{\Omega_{\epsilon}\setminus D_{\epsilon}}|\nabla u|^{2} =\int_{\Omega_{\epsilon}\setminus D_{\epsilon}}|\nabla u_{\epsilon}|^{2} = \mathrm{cap}(\overline{D}_{\epsilon}, \Omega_{\epsilon}). \end{equation} Indeed, let $\phi\in Lip_{c}(\Omega_{\epsilon})$ with $\phi=1$ on $D_{\epsilon}$, and let $v=u_{\epsilon}-\phi$.
Then $\phi=u_{\epsilon}-v$ and we have \[ \int_{\Omega_{\epsilon}} |\nabla\phi|^{2} = \int_{\Omega_{\epsilon}\setminus D_{\epsilon}} |\nabla(u_{\epsilon}-v)|^{2}= \int_{\Omega_{\epsilon}\setminus D_{\epsilon}}(|\nabla u_{\epsilon}|^{2} + |\nabla v|^{2} - 2\langle\nabla u_{\epsilon},\nabla v\rangle). \] Since $\Delta u_{\epsilon}=0$ on ${\Omega_{\epsilon}\setminus D_{\epsilon}}$ and $v=0$ on $\partial_{0} (\Omega_{\epsilon}\setminus\overline{D}_{\epsilon })$ while $\partial u_{\epsilon}/\partial\nu= 0$ on $\partial_{1}(\Omega_{\epsilon}\setminus\overline{D}_{\epsilon})$, \[ \int_{\Omega_{\epsilon}\setminus D_{\epsilon}}\langle\nabla u_{\epsilon },\nabla v\rangle = - \int_{\Omega_{\epsilon}\setminus D_{\epsilon}} v\Delta u_{\epsilon}+\int_{\partial_{0}(\Omega_{\epsilon}\setminus D_{\epsilon})\cup\partial_{1}(\Omega_{\epsilon}\setminus D_{\epsilon})} \langle\nabla u_{\epsilon},\nu\rangle v =0, \] so that \[ \int_{\Omega_{\epsilon}} |\nabla\phi|^{2} = \int_{\Omega_{\epsilon}\setminus D_{\epsilon}}(|\nabla u_{\epsilon}|^{2} + |\nabla v|^{2})\geq\int _{\Omega_{\epsilon}\setminus D_{\epsilon}} |\nabla u_{\epsilon}|^{2}, \] as claimed. Letting $\epsilon\rightarrow0$, $\Omega_{\epsilon}\setminus D_{\epsilon}\nearrow\Omega\setminus D$, so that, by monotone convergence, the integral in \eqref{cap2} converges to \[ \int_{\Omega\setminus D}|\nabla u|^{2}. \] On the other hand, by the previous lemma, \[ \mathrm{cap}(\overline{D}_{\epsilon},\Omega_{\epsilon})\rightarrow \mathrm{cap}(\overline{D},\Omega)\quad\text{as }\,\epsilon\rightarrow0, \] and we conclude that $u\in W^{1,2}(\Omega)$ so that, in fact, $u\in W_{0}(\overline{D},\Omega)$ and \[ \int_{\Omega}|\nabla u|^{2}=\mathrm{cap}(\overline{D},\Omega), \] as required to complete the proof.
\end{proof} \begin{remark} It is worth pointing out that the equilibrium potential $u$ of the capacitor $(\overline{D},\Omega)$ constructed using Lieberman's approach coincides with the one obtained by applying the direct calculus of variations to the energy functional on the closed convex space \[ W_{\Gamma}^{1,2}\left( \Omega\backslash\overline{D}\right) =\left\{ u\in W^{1,2}\left( \Omega\backslash\overline{D}\right) :\left. u\right\vert _{\partial_{0}D}=1\text{ and }\left. u\right\vert _{\partial_{0}\Omega}=0\right\} . \] Here, Dirichlet data are understood in the trace sense. Thanks to the global $W^{1,2}$-regularity established in Proposition \ref{equilibrium_potentials}, this follows e.g. either from maximum principle considerations or from the convexity of the energy functional. \end{remark} \begin{proposition} \label{equilibrium_potentials2} Let $D$ be a relatively compact domain and let $\Omega_{j}$ be an increasing exhaustion of $M$ by relatively compact open domains with $\overline{D}\subset\Omega_{1}$. Assume that $\overline {\partial_{0}D}$ and $\overline{\partial_{0}\Omega_{j}}$ are smooth and transversal to $\partial M$, and for every $j$, let $u_{j}$ be the equilibrium potential of the capacitor $(\overline{D},\Omega_{j})$ constructed in Proposition~\ref{equilibrium_potentials}. Then $u_{j}$ converges monotonically to a function $u\in C(M)\cap W_{loc}^{1,2}(M)\cap C^{2}(M\setminus\overline{D})$ such that $0\leq u\leq1$, $u=1$ on $\overline{D}$, $u$ is harmonic on $M\setminus\overline{D}$, $\partial u/\partial\nu=0$ on $\partial _{1}(M\setminus\overline{D})$ and $u$ is a weak Neumann supersolution of the Laplace equation on $M$. Moreover, $\nabla u\in L^{2}(M)$ and \[ \mathrm{cap}(\overline{D})=\int_{M}|\nabla u|^{2}. \] \end{proposition} \begin{proof} Extend $u_{j}$ to all of $M$ by setting it equal to zero in $M\setminus \Omega_{j}$.
It follows by the comparison principle that $0\leq u_{j}\leq u_{j+1}\leq1$ in $\Omega_{j}\setminus\overline{D}$, and therefore the sequence $u_{j}$ converges monotonically to a function $u$. Note that since $u_{j}(x)\leq u(x)\leq1$ and $u_{j}(x)\rightarrow1$ as $x\rightarrow y\in\partial_{0}D$, it follows that $u$ is continuous on $\overline{D}$, where it is equal to $1$. Moreover, by the Schauder type estimate contained in Lemma 1 in \cite{Lieberman-JMAA}, for every $\alpha\in(0,1)$, every $j_{o}$ and every sufficiently small $\eta>0$ there exists a constant $C$ depending only on $\alpha$, $\eta$, $j_{o}$ and on the geometry of $M$ in a neighborhood of $B_{j_{o},\eta}=\{x\in\Omega_{j_{o}}\setminus\overline{D}\,:\,\mathrm{dist}(x,\partial_{0}D\cup\partial_{0}\Omega_{j_{o}})\geq\eta\} $ such that, for every $j\geq j_{o}$, \[ ||u_{j}||_{C^{2,\alpha}(B_{j_{o},\eta})}\leq C\sup_{B_{j_{o},\eta/2}}|u_{j}|. \] It follows immediately that (possibly passing to a subsequence) the sequence $u_{j}$ converges in $C^{2}(B_{j_{o},\eta})$ for every $j_{o}$ and $\eta>0$, so that the limit function $u$ is harmonic in $\mathrm{int}\,M\setminus \overline{D}$ and $C^{2}$ up to $\partial_{1}(M\setminus\overline{D})$, where it satisfies the Neumann boundary condition $\partial u/\partial\nu=0$. Summing up, $u\in C^{0}(M\setminus D)\cap C^{2}((M\setminus\overline{D})\cup(\partial_{1}(M\setminus\overline{D})))$ is a classical solution of the mixed boundary problem \[ \begin{cases} \Delta u\geq0\text{ on }M\setminus\overline{D}\\ \frac{\partial u}{\partial\nu}\leq0\,\text{ on }\,\partial_{1}(M\setminus \overline{D})\\ u=1\text{ on }\partial_{0}D\\ 0\leq u\leq1.
\end{cases} \] On the other hand, since \[ \int_{\Omega_{j}}|\nabla u_{j}|^{2}=\mathrm{cap}(\overline{D},\Omega _{j})\searrow\mathrm{cap}(\overline{D}), \] the sequence $u_{j}\in C^{0}(M)\cap W_{c}^{1,2}(M)$ converges pointwise to $u$ and $\nabla u_{j}$ is bounded in $L^{2}(M).$ It follows easily (see, e.g., Lemma 1.33 in \cite{HKM}) that $\nabla u\in L^{2}(M)$ and $\nabla u_{j}\rightarrow\nabla u$ weakly in $L^{2}$. By the weak lower semicontinuity of the energy functional, it follows that \[ \int_{M}|\nabla u|^{2}\leq\liminf_{j}\int_{M}|\nabla u_{j}|^{2}=\mathrm{cap}(\overline{D}). \] On the other hand, by Mazur's lemma, a convex combination $\tilde{u}_{j}$ of the $u_{j}$ is such that $\nabla\tilde{u}_{j}\rightarrow\nabla u$ strongly in $L^{2}(M),$ and since each $\tilde{u}_{j}\in C^{0}\cap W^{1,2}(M)$ is compactly supported and equal to $1$ on $\overline{D}$, it is admissible for the capacitor $(\overline{D},M)$, and we deduce that \[ \int_{M}|\nabla u|^{2}=\lim\int_{M}|\nabla\tilde{u}_{j}|^{2}\geq \mathrm{cap}(\overline{D}), \] so that \[ \int_{M}|\nabla u|^{2}=\mathrm{cap}(\overline{D}), \] as required. Finally, assume that $u$ is non-constant so that, by the strong maximum principle, $u<1$ in $M\setminus\overline D$. Let $\eta_{n}\to1$ be a sequence of regular values of $u$, and set $\Gamma_{n}=\{x \,:\, u(x)< \eta_{n}\}.$ Using the fact that $\Delta u=0$ on $\Gamma_{n} \subset M\setminus\overline D$, $\partial u/\partial\nu=0$ on $\partial_{1} \Gamma_{n}$ and $\partial u/\partial\nu\geq0$ on $\partial_{0}\Gamma_{n}$, given $0\leq\rho\in C^{\infty}_{c} (M)$, we compute \[ \int_{M} \langle\nabla u,\nabla\rho\rangle= \lim_{n} \int_{\Gamma_{n}} \langle\nabla u, \nabla\rho\rangle= \lim_{n} \left\{- \int_{\Gamma_{n}} \rho\Delta u + \int_{\partial_{0} \Gamma_{n} \cup\partial_{1}\Gamma_{n}} \rho\langle\nabla u, \nu\rangle\right\} \geq0, \] and $u$ is a weak Neumann supersolution of the Laplace equation on $M$.
\end{proof} We then obtain the announced equivalent characterization of parabolicity. \begin{theorem} \label{capacity_parabolicity} Let $(M,\langle\,,\rangle)$ be a connected Riemannian manifold with (possibly empty) boundary $\partial M$. The following are equivalent: \begin{itemize} \item[(i)] The capacity of every compact set $K$ in $M$ is zero. \item[(ii)] For every relatively compact open domain $D\Subset M$ there exists an increasing sequence of functions $h_{j}\in C^{0}(M)\cap W^{1,2}_{c}(M)$ with $h_{j}=1$ on $D$, $0\leq h_{j}\leq h_{j+1}\leq1$, $h_{j}$ harmonic in the set $\{x: 0<h_{j}(x)<1\}\cap\mathrm{int} M$, such that \[ \int_{M} |\nabla h_{j}|^{2}\to0 \quad\text{ as }\, j\to+\infty. \] \item[(iii)] $M$ is parabolic. \end{itemize} \end{theorem} \begin{proof} (i) $\Rightarrow$ (ii). Assume first that $\mathrm{cap}(K)=0$ for every compact set $K$ in $M$, let $D$ be as in (ii) and let $\Omega_{j}$ be an increasing exhaustion of $M$ by relatively compact open sets with smooth boundary transversal to $\partial M$, with $\overline D\subset\Omega_{1}$. For every $j$ let $u_{j}$ be the equilibrium potential of the capacitor $(\overline D, \Omega_{j})$, and extend $u_{j}$ to be $0$ off $\Omega_{j}$. Then $u_{j}$ has the regularity properties listed in (ii), and, by Proposition~\ref{equilibrium_potentials}, \[ \int|\nabla u_{j}|^{2} = \mathrm{cap}(\overline D, \Omega_{j}) \to \mathrm{cap}(\overline D)=0. \] (ii) $\Rightarrow$ (i) Conversely, assume that (ii) holds. Clearly it suffices to prove that $\mathrm{cap}(\overline D)=0 $ for every relatively compact open domain $D$ with smooth boundary transversal to $\partial M$. Let $h_{j}$ be the functions provided by (ii), and choose an increasing exhaustion of $M$ by relatively compact domains $\Omega_{j}$ with smooth boundary transversal to $\partial M$ such that $\mathrm{supp}\, h_{j} \Subset\Omega_{j}$. Then \[ \mathrm{cap} (\overline D) = \lim_{j} \mathrm{cap} (\overline D, \Omega_{j}) \leq\lim_{j} \int_{\Omega_{j}} |\nabla h_{j}|^{2} =0, \] as required.
(i) $\Rightarrow$ (iii) Suppose that $\mathrm{cap}(K)= 0 $ for every compact set in $M$, and let $u\in C^{0}(M)\cap W^{1,2}_{loc}(M)$ satisfy, in the weak Neumann sense, \begin{equation} \label{weak_liou} \begin{cases} \Delta u\geq0\\ \frac{\partial u}{\partial\nu}\leq0 \, \text{ on } \, \partial M\\ \sup_{M} u<+\infty. \end{cases} \end{equation} Let $v=\sup_{M} u- u +1$, so that $v\geq1$ and, by definition of weak solution of the differential problem \eqref{weak_liou}, $v$ satisfies \[ \int\langle\nabla v, \nabla\rho\rangle\geq0 \quad\forall\, 0\leq\rho\in C^{0}(M)\cap W^{1,2}_{0}(M). \] Next, for every relatively compact domain $D$, let $\varphi\in Lip_{c}(M)$ with $\varphi=1$ on $D$ and $0\leq\varphi\leq1$. Using $\rho= \varphi^{2} v^{-1}\in C^{0}(M) \cap W^{1,2}_{c}(M)$ as a test function we have \[ \begin{split} 0\leq\int\langle\nabla v, \nabla\rho\rangle & = 2\int\varphi\langle v^{-1} \nabla v, \nabla\varphi\rangle- \int\varphi^{2} |v^{-1}\nabla v |^{2}\\ & \leq2\int\varphi|v^{-1}\nabla v| |\nabla\varphi| - \int\varphi^{2} |v^{-1}\nabla v |^{2}. \end{split} \] Rearranging, using Young's inequality $2ab \leq2a^{2} + \frac12 b^{2}$ with $a=|\nabla\varphi|$ and $b=\varphi|v^{-1}\nabla v|$, and recalling that $\varphi=1$ on $D$, we obtain \[ \int_{D} |v^{-1}\nabla v |^{2}\leq4\int|\nabla\varphi|^{2}, \] and taking the infimum of the right hand side over all $Lip_{c}$ functions $\varphi$ which are equal to $1$ on $D$ we conclude that \[ \int_{D} |v^{-1}\nabla v |^{2}\leq4\,\mathrm{cap}(\overline{D}) = 0. \] Thus $v$, and therefore $u$, is constant on every relatively compact domain $D$, whence $u$ is constant on $M$, and $M$ is parabolic in the sense of Definition~\ref{def_parab}. (iii) $\Rightarrow$ (i) Assume by contradiction that there exists a compact set $K$ with nonzero capacity. Without loss of generality we can suppose that $K$ is the closure of a relatively compact open domain $D$ with smooth boundary $\partial_{0}D$ transversal to $\partial M$.
Let $u$ be the equilibrium potential of $\overline D$ constructed in Proposition~\ref{equilibrium_potentials2}, which is non-constant since the capacity of $\overline D$ is positive. But then $u\in C^{0}(M)\cap W^{1,2}(M)$ is a non-constant bounded weak Neumann superharmonic function, contradicting the assumed parabolicity of $M$. \end{proof} \section{Maximum principles \& height estimates\label{section-ahlfors}} It is a classical result by L.V. Ahlfors that a Riemannian manifold $N$ (without boundary) is parabolic if and only if, for every domain $D\subseteq N$ with $\partial D\neq\emptyset$ and for every bounded above, subharmonic function $u$ on $D$, it holds that $\sup_{D}u=\sup_{\partial D}u$. The result has been extended to the setting of $p$-parabolicity in \cite{PST}. This section aims to provide a new form of the Ahlfors characterization which is valid on manifolds with boundary. This, in turn, will be used to obtain estimates of the height function of complete hypersurfaces with constant mean curvature (CMC for short) immersed into product spaces of the form $N\times\mathbb{R}$. \subsection{Global maximum principles\label{subsection-ahlfors}} We are going to prove the Ahlfors-type characterization of parabolicity stated in Theorem \ref{th_intro_Ahlfors}. Actually, a version of this global maximum principle involving the whole manifold and without any Neumann condition will be crucial in the geometric applications. This is the content of Theorem \ref{th_ahlfors-wholeM}, which will be proved at the end of the section. \begin{proof} [Proof (of Theorem \ref{th_intro_Ahlfors})]Assume first that $M$ is parabolic and suppose, by contradiction, that there exists a domain $D\subseteq M$ and a function $u$ as in the statement of the theorem such that \[ \sup_{D}u>\sup_{\partial_{0}D}u. \] Let $\varepsilon>0$ be so small that \[ \sup_{D}u>\sup_{\partial_{0}D}u+\varepsilon.
\] Then, the open set $D_{\varepsilon}=\left\{ x\in D:u>\sup_{D}u-\varepsilon \right\} \neq\emptyset$ satisfies $\overline{D}_{\varepsilon}\subset D$ and, therefore, \[ u_{\varepsilon}= \begin{cases} \max\left\{ u,\sup_{D}u-\varepsilon\right\} & \text{on }D\\ \sup_{D}u-\varepsilon & \text{on }M\backslash D \end{cases} \] defines a $C^{0}\left( M\right) \cap W_{loc}^{1,2}\left( M\right) $-subsolution of the Laplace equation on $M$. Furthermore, $\sup _{M}u_{\varepsilon}=\sup_{D}u<+\infty$. It follows from the very definition of parabolicity that $u_{\varepsilon}$ is constant on $M$. In particular, if we suppose to have chosen $\varepsilon>0$ in such a way that $\sup_{D}u-\varepsilon$ is not a local maximum for $u$, then $u_{\varepsilon}=\sup _{D}u-\varepsilon$ on $\partial D_{\varepsilon}\neq\emptyset$ and we conclude \[ u\equiv\sup_{D}u-\varepsilon\text{ on }D, \] which is absurd. Suppose now that, for every domain $D\subseteq M$ with $\partial_{0}D\neq\emptyset$ and for every $u\in C^{0}\left( \overline{D}\right) \cap W_{loc}^{1,2}\left( D\right) $ satisfying, in the weak Neumann sense, \[ \left\{ \begin{array} [c]{ll} \Delta u\geq0 & \text{on }D\\ \dfrac{\partial u}{\partial\nu}\leq0 & \text{on }\partial_{1}D\\ \sup_{D}u<+\infty, & \end{array} \right. \] it holds \[ \sup_{D}u=\sup_{\partial_{0}D}u. \] By contradiction assume that $M$ is not parabolic. Then, there exists a non-constant function $v\in C^{0}\left( M\right) \cap W_{loc}^{1,2}\left( M\right) $ satisfying \[ \left\{ \begin{array} [c]{ll} \Delta v\geq0 & \text{on }M\\ \dfrac{\partial v}{\partial\nu}\leq0 & \text{on }\partial M\\ v^{\ast}=\sup_{M}v<+\infty. & \end{array} \right. \] Given $\eta<v^{\ast}$, consider the domain $\Omega_{\eta}=\{x\in M:v(x)>\eta \}\neq\emptyset$. We can choose $\eta$ sufficiently close to $v^{\ast}$ in such a way that $\mathrm{int}M\not \subseteq \Omega_{\eta}$. In particular, $\partial\Omega_{\eta}\subseteq\left\{ v=\eta\right\} $ and $\partial _{0}\Omega_{\eta}\neq\emptyset$.
Now, $v\in C^{0}\left( \overline{\Omega }_{\eta}\right) \cap W_{loc}^{1,2}\left( \Omega_{\eta}\right) $ is a bounded above weak subsolution on $\Omega_{\eta}$ satisfying the Neumann condition on $\partial_{1}\Omega_{\eta}$. Moreover, \[ \sup_{\partial_{0}\Omega_{\eta}}{v}=\eta<\sup_{\Omega_{\eta}}{v}, \] contradicting our assumptions. \end{proof} \begin{remark} If we take $D=M$ in the first half of the above proof, then we immediately realize that the Neumann boundary condition plays no role. This suggests the validity of the following restricted form of the maximum principle, which was adopted by F.R. De Lima \cite{DeLi} as a definition of a weak notion of parabolicity; see Appendix \ref{appendix-different}. \end{remark} \begin{proof} [Proof (of Theorem \ref{th_ahlfors-wholeM})]If, by contradiction, \[ \sup_{M}u>\sup_{\partial M}u, \] then we can choose $\varepsilon>0$ so small that \[ \sup_{M}u>\sup_{\partial M}u+2\varepsilon. \] Define $u_{\varepsilon}\in C^{0}\left( M\right) \cap W_{loc}^{1,2}\left( M\right) $ by setting \[ u_{\varepsilon}=\left\{ \begin{array} [c]{ll} \max\left( u,\sup_{M}u-\varepsilon\right) & \text{on }\Omega_{2\varepsilon }\\ \sup_{M}u-\varepsilon & \text{on }M\backslash\Omega_{2\varepsilon}, \end{array} \right. \] where we have set \[ \Omega_{2\varepsilon}=\left\{ x\in M:u\left( x\right) >\sup_{M}u-2\varepsilon\right\} . \] Since $\overline{\Omega}_{2\varepsilon}\subset\mathrm{int}M$, we have that $u_{\varepsilon}$ is constant in a neighborhood of $\partial M$. Since $\Delta u \geq0 $ weakly on $\mathrm{int} M$, it follows that $u_{\varepsilon}$ is a weak Neumann subsolution on $M$. Moreover, $\sup_{M}u_{\varepsilon}=\sup _{M}u<+\infty$ so that, by parabolicity, $u_{\varepsilon}\equiv\sup _{M}u-\varepsilon$, a contradiction.
\end{proof} \subsection{Height estimates for CMC hypersurfaces in product spaces\label{section-heightestimates}} We now present some applications of this global maximum principle to get height estimates both for $H$-hypersurfaces with boundary in product spaces and for $H$-graphs over manifolds with boundary. By an $H$-hypersurface of $N\times\mathbb{R}$ we mean an oriented hypersurface $\Sigma$ with constant mean curvature $H$ with respect to a choice of its Gauss map. An $H$-graph over the $m$-dimensional Riemannian manifold $M$ with boundary $\partial M\neq\emptyset$ is an embedded $H$-hypersurface given by $\Sigma=\Gamma _{u}\left( M\right) $, where $\Gamma_{u}:M\rightarrow M\times\mathbb{R}$ is defined, as usual, by $\Gamma_{u}\left( x\right) =\left( x,u\left( x\right) \right) $, for some smooth function $u:M\rightarrow\mathbb{R}$. The downward (pointing) unit normal to $\Sigma$ is defined by \[ \mathcal{N}=\frac{1}{\sqrt{1+\left\vert \nabla_{M}u\right\vert ^{2}}}\left( \nabla_{M}u,-1\right) . \] With respect to $\mathcal{N}$, the mean curvature of the graph writes as \[ H=-\frac{1}{m}\operatorname{div}_{M}\left( \frac{\nabla_{M}u}{\sqrt {1+\left\vert \nabla_{M}u\right\vert ^{2}}}\right) . \] On the other hand, let $M_{\Sigma}$ be the original manifold $M$ endowed with the metric pulled back from $M\times\mathbb{R}$ via $\Gamma_{u}$. Then, it is well known that the mean curvature vector field of the isometric immersion $\Gamma_{u}$, \[ \mathbf{H}\left( x\right) =H\left( x\right) \mathcal{N}\left( x\right) , \] satisfies \[ \Delta_{\Sigma}\Gamma_{u}=m\mathbf{H}, \] where $\Delta_{\Sigma}$ denotes the Laplacian on manifold-valued maps.
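For the sake of illustration, we record the following elementary check of the sign conventions in the flat case (a standard example, not needed in the sequel). For $N=\mathbb{R}^{m}$ and $M=\overline{B}_{s}$ with $s<\rho$, the spherical cap $u\left( x\right) =\sqrt{\rho^{2}-\left\vert x\right\vert ^{2}}$ satisfies $\nabla_{M}u=-x/\sqrt{\rho^{2}-\left\vert x\right\vert ^{2}}$ and $1+\left\vert \nabla_{M}u\right\vert ^{2}=\rho^{2}/(\rho^{2}-\left\vert x\right\vert ^{2})$, so that \[ \frac{\nabla_{M}u}{\sqrt{1+\left\vert \nabla_{M}u\right\vert ^{2}}}=-\frac{x}{\rho}\quad\text{and}\quad H=-\frac{1}{m}\operatorname{div}_{M}\left( -\frac{x}{\rho}\right) =\frac{1}{\rho}. \] Accordingly, $\Sigma=\Gamma_{u}\left( M\right) $ is a piece of the sphere of radius $\rho$ centered at the origin of $\mathbb{R}^{m}\times\mathbb{R}$, whose mean curvature with respect to the downward unit normal $\mathcal{N}=-\left( x,u\left( x\right) \right) /\rho$ is the constant $1/\rho$.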
Since $\Delta_{\Sigma}$ is linear with respect to the Riemannian product structure in the codomain, from the above we also get \begin{align} \Delta_{\Sigma}u & =\frac{1}{\sqrt{1+\left\vert \nabla_{M}u\right\vert ^{2}}}\operatorname{div}_{M}\left( \frac{\nabla_{M}u}{\sqrt{1+\left\vert \nabla_{M}u\right\vert ^{2}}}\right) \label{lap-meancurv}\\ & =-\frac{m}{\sqrt{1+\left\vert \nabla_{M}u\right\vert ^{2}}}H\left( x\right) .\nonumber \end{align} With this preparation, we begin by noting the following version of Lemma 1 in \cite{LW}. \begin{lemma} \label{prop_height} Let $N$ be an $m$-dimensional complete manifold without boundary and let $M\subset N$ be a closed domain with smooth boundary $\partial M\neq\emptyset$. Consider a graph $\Sigma=\Gamma_{u}\left( M\right) \subset N\times\mathbb{R}$ over $M$ with smooth boundary \[ \partial\Sigma\subset M\times\left\{ 0\right\} . \] Assume that \[ \sup_{M}|u|+\sup_{M}|H|<+\infty. \] Then there exists a constant $C=C(m,\sup_{M}|u|,\sup_{M}|H|)>0$ such that, for every $\delta>0$ and $R>1$, \[ \mathrm{vol}B_{R}^{\Sigma}\left( \bar{p}\right) \leq C\left( 1+\frac {1}{\delta R}\right) \mathrm{vol}\left( M\cap B_{\left( 1+\delta\right) R}^{N}\left( \bar{x}\right) \right) , \] where $\bar{x}$ is a reference point in $N$ and $\bar{p}=(\bar{x},u(\bar{x}))$. Moreover, the following estimate \[ \mathrm{vol}B_{R}^{\Sigma}\left( \bar{p}\right) \leq C\left\{ \mathrm{vol}B_{R}^{N}\left( \bar{x}\right) +\mathrm{Area}\left( \partial B_{R}^{N}\left( \bar{x}\right) \right) \right\} \] holds for almost every $R>1$. \end{lemma} \begin{proof} Note that \begin{align*} d_{\Sigma}\left( \left( \bar{x},u\left( \bar{x}\right) \right) ,\left( x,u\left( x\right) \right) \right) & \geq d_{N\times\mathbb{R}}\left( \left( \bar{x},u\left( \bar{x}\right) \right) ,\left( x,u\left( x\right) \right) \right) \\ & \geq\max\left\{ d_{N}\left( \bar{x},x\right) ,\left\vert u\left( \bar{x}\right) -u\left( x\right) \right\vert \right\} .
\end{align*} Set $\bar{p}=(\bar{x},u(\bar{x}))$. Therefore \begin{align*} B_{R}^{\Sigma}\left( \bar{p}\right) & \subseteq\Sigma\cap B_{R}^{N\times\mathbb{R}}\left( \bar{p}\right) \\ & \subseteq(M\cap B_{R}^{N}\left( \bar{x}\right) )\times\left( -R+u\left( \bar{x}\right) ,R+u\left( \bar{x}\right) \right) \end{align*} and it follows that \begin{align} \mathrm{vol}B_{R}^{\Sigma}\left( \bar{p}\right) & =\int_{\Pi_{N}\left( B_{R}^{\Sigma}\left( \bar{p}\right) \right) }\sqrt{1+\left\vert \nabla u\right\vert ^{2}}d\mathrm{vol}_{N}\label{LW-volest1}\\ & \leq\int_{M\cap B_{R}^{N}\left( \bar{x}\right) }\sqrt{1+\left\vert \nabla u\right\vert ^{2}}d\mathrm{vol}_{N}\nonumber\\ & =\int_{M\cap B_{R}^{N}\left( \bar{x}\right) }\frac{\left\vert \nabla u\right\vert ^{2}}{\sqrt{1+\left\vert \nabla u\right\vert ^{2}}}d\mathrm{vol}_{N}+\int_{M\cap B_{R}^{N}\left( \bar{x}\right) }\frac{1}{\sqrt{1+\left\vert \nabla u\right\vert ^{2}}}d\mathrm{vol}_{N}\nonumber\\ & \leq\int_{M\cap B_{R}^{N}\left( \bar{x}\right) }\frac{\left\vert \nabla u\right\vert ^{2}}{\sqrt{1+\left\vert \nabla u\right\vert ^{2}}}d\mathrm{vol}_{N}+\mathrm{vol}(M\cap B_{R}^{N}\left( \bar{x}\right) ).\nonumber \end{align} Here $\Pi_{N}:\Sigma\rightarrow N$ denotes the projection onto the $N$ factor. Now, for any $\delta>0$, we choose a cut-off function $\rho$ as follows: \[ \rho(x)= \begin{cases} 1 & \mathrm{on}\quad B_{R}(\bar{x})\\ \frac{\left( 1+\delta\right) R-r(x)}{\delta R} & \mathrm{on}\quad B_{\left( 1+\delta\right) R}(\bar{x})\backslash B_{R}(\bar{x})\\ 0 & \mathrm{elsewhere}, \end{cases} \] where $r(x)$ denotes the distance function on $N$ from the reference point $\bar{x}$.
Since \[ X=\rho u\frac{\nabla u}{\sqrt{1+\left\vert \nabla u\right\vert ^{2}}} \] is a compactly supported vector field that vanishes on $\partial M$ and on $\partial B_{\left( 1+\delta\right) R}^{N}\left( \bar{x}\right) $, as an application of the divergence theorem we get \begin{align*} 0= & \int_{M\cap B_{\left( 1+\delta\right) R}^{N}\left( \bar{x}\right) }\mathrm{div}(X)d\mathrm{vol}_{N}\\ = & -m\int_{M\cap B_{\left( 1+\delta\right) R}^{N}\left( \bar{x}\right) }\rho Hud\mathrm{vol}_{N}+\int_{M\cap B_{\left( 1+\delta\right) R}^{N}\left( \bar{x}\right) }\frac{\rho\left\vert \nabla u\right\vert ^{2}}{\sqrt{1+\left\vert \nabla u\right\vert ^{2}}}d\mathrm{vol}_{N}\\ & -\frac{1}{\delta R}\int_{M\cap(B_{\left( 1+\delta\right) R}^{N}\left( \bar{x}\right) \backslash B_{R}^{N}\left( \bar{x}\right) )}u\frac {\langle\nabla u,\nabla r\rangle}{\sqrt{1+\left\vert \nabla u\right\vert ^{2}}}d\mathrm{vol}_{N}. \end{align*} Hence \begin{align*} \int_{M\cap B_{R}^{N}\left( \bar{x}\right) }\frac{\left\vert \nabla u\right\vert ^{2}}{\sqrt{1+\left\vert \nabla u\right\vert ^{2}}}d\mathrm{vol}_{N}\leq & \int_{M\cap B_{\left( 1+\delta\right) R}^{N}\left( \bar{x}\right) }\frac{\rho\left\vert \nabla u\right\vert ^{2}}{\sqrt {1+\left\vert \nabla u\right\vert ^{2}}}d\mathrm{vol}_{N}\\ \leq & m\sup_{M}|u|\sup_{M}|H|\mathrm{vol}(M\cap B_{\left( 1+\delta\right) R}^{N}\left( \bar{x}\right) )\\ + & \frac{\sup_{M}|u|}{\delta R}\mathrm{vol}(M\cap(B_{\left( 1+\delta \right) R}^{N}\left( \bar{x}\right) \backslash B_{R}^{N}\left( \bar {x}\right) )). \end{align*} Inserting the latter into (\ref{LW-volest1}) gives, for every $R>1$, \begin{align*} \mathrm{vol}B_{R}^{\Sigma}\left( \bar{p}\right) \leq & C\left\{ \mathrm{vol}(M\cap B_{R}^{N}\left( \bar{x}\right) )+\mathrm{vol}(M\cap B_{\left( 1+\delta\right) R}^{N}\left( \bar{x}\right) )\right. \\ & \left. +\frac{1}{\delta R}\mathrm{vol}(M\cap(B_{\left( 1+\delta\right) R}^{N}\left( \bar{x}\right) \backslash B_{R}^{N}\left( \bar{x}\right) ))\right\} .
\end{align*} To conclude, we let $\delta\rightarrow0$ and we use the co-area formula. \end{proof} \begin{remark} We note that, actually, the somewhat weaker conclusions \[ \mathrm{vol}B_{R}^{\Sigma}\left( \bar{p}\right) \leq C\left( 1+\frac {1}{\delta}\right) \mathrm{vol}\left( M\cap B_{\left( 1+\delta\right) R}^{N}\left( \bar{x}\right) \right) \] and \[ \mathrm{vol}B_{R}^{\Sigma}\left( \bar{p}\right) \leq C\left\{ \mathrm{vol}B_{R}^{N}\left( \bar{x}\right) +R\,\mathrm{Area}\left( \partial B_{R}^{N}\left( \bar{x}\right) \right) \right\} \] hold under the assumption \[ \sup_{M}|uH|<+\infty. \] Indeed, to overcome the problem that $u$ can be unbounded, following the proof in the minimal case $H\equiv0$, one can apply the divergence theorem to the vector field \[ X=\rho u_{\sqrt{2}R}\frac{\nabla u}{\sqrt{1+\left\vert \nabla u\right\vert ^{2}}}, \] where $u_{R}$ is defined as \[ u_{R}= \begin{cases} -R & \text{if}\quad u(x)<-R\\ u(x) & \text{if}\quad|u(x)|<R\\ R & \text{if}\quad u(x)>R. \end{cases} \] \end{remark} \begin{remark} It could be interesting to observe that, in certain situations, an improved version of Lemma \ref{prop_height} can be obtained from the a priori gradient estimates due to N. Korevaar, X.-J. Wang and J. Spruck, \cite{Ko, Wa, Sp}. See also \cite{RSS-JDG}, where the injectivity radius assumption has been removed. More precisely, we have the next simple result. We explicitly note that, with respect to Lemma \ref{prop_height}, no assumption on $\partial\Sigma$ is required. Moreover, the volume estimate involves the same radius $R>0$ without any further contribution. \end{remark} \begin{lemma} \label{lemma_volume}Let $\left( N,g\right) $ be a complete, $m$-dimensional Riemannian manifold (without boundary) satisfying $Sec_{N}\geq-K$ and let $M\subset N$ be a closed domain with smooth boundary $\partial M\neq\emptyset $.
Suppose we are given a vertically bounded graph $\Sigma_{\varepsilon}=\Gamma_{u}\left( \mathcal{U}_{\varepsilon}\left( M\right) \right) $ with bounded mean curvature $H$, parametrized over an $\varepsilon$-neighborhood $\mathcal{U}_{\varepsilon}\left( M\right) $ of $M$. Let $\Sigma=\Gamma_{u}\left( M\right) .$ Then, there exists a constant $C=C\left( m,\varepsilon,K,\sup_{M}\left\vert u\right\vert ,\sup_{M}\left\vert H\right\vert \right) >0$ such that \[ \mathrm{vol}B_{R}^{\Sigma}\left( \bar{p}\right) \leq C\mathrm{vol}\left( M\cap B_{R}^{N}\left( \bar{x}\right) \right) , \] for every $R>0$, where $\bar{x}\in\mathrm{int}M$ is a reference point and $\bar{p}=\left( \bar{x},u\left( \bar{x}\right) \right) $. \end{lemma} \begin{proof} Indeed, since \[ \mathrm{vol}B_{R}^{\Sigma}\left( \bar{p}\right) =\int_{\Pi_{N}\left( B_{R}^{\Sigma}\left( \bar{p}\right) \right) }\sqrt{1+\left\vert \nabla u\right\vert ^{2}}d\mathrm{vol}_{N}\leq\int_{M\cap B_{R}^{N}\left( \bar{x}\right) }\sqrt{1+\left\vert \nabla u\right\vert ^{2}}d\mathrm{vol}_{N}, \] we only have to show that $\left\vert \nabla u\right\vert $ is uniformly bounded on $M$. To this end, note that $u:\mathcal{U}_{\varepsilon}\left( M\right) \rightarrow\mathbb{R}$ is a bounded function defining a bounded mean curvature graph $\Gamma_{u}\left( \mathcal{U}_{\varepsilon}\left( M\right) \right) $. Therefore, we can apply Theorem 1.1 in \cite{Sp} to either $w\left( x\right) =\sup_{M}u-u\left( x\right) \geq0$ or $w\left( x\right) =u\left( x\right) -\inf_{M}u\geq0$ and obtain that, in fact, $\left\vert \nabla^{M}u\right\vert $ is uniformly bounded on every ball $B_{\varepsilon/2}^{N}\left( x\right) \subset\mathcal{U}_{\varepsilon}\left( M\right) $, with $x\in M$. This completes the proof. \end{proof} \bigskip Lemma \ref{prop_height} allows us to prove Theorem \ref{th_intro_hest_graph} stated in the Introduction.
\begin{proof} [Proof (of Theorem \ref{th_intro_hest_graph})]Observe first that, according to Lemma \ref{prop_height}, since $N$ has quadratic volume growth, so has $\Sigma$. In particular, by Theorem \ref{th_growth}, if we denote by $M_{\Sigma}$ the original domain $M$ endowed with the metric pulled back from $\Sigma$ via $\Gamma_{u}$, we conclude that $M_{\Sigma}$ is parabolic. Consider now the real-valued function $w\in C^{0}\left( M_{\Sigma}\right) \cap C^{\infty}\left( \mathrm{int}M_{\Sigma}\right) $ defined by \[ w\left( x\right) =Hu\left( x\right) -\frac{1}{\sqrt{1+\left\vert \nabla u\left( x\right) \right\vert ^{2}}}. \] Since $Ric_{N}\geq0$, it is well known that $w$ is subharmonic; see e.g. \cite{AD}. Moreover, $w\leq0$ on $\partial M_{\Sigma}$ and $\sup_{M_{\Sigma}}w\leq H\sup_{M}u<+\infty$. It follows from Theorem \ref{th_ahlfors-wholeM} that \[ \sup_{M_{\Sigma}}w=\sup_{\partial M_{\Sigma}}w\leq0 \] and, therefore, \[ H\sup_{M}u-1\leq\sup_{M_{\Sigma}}w\leq0. \] This shows that $u\leq1/H$. To conclude the proof, observe that, by (\ref{lap-meancurv}), $u\in C^{0}\left( M_{\Sigma}\right) \cap C^{\infty}\left( \mathrm{int}M_{\Sigma}\right) $ is a superharmonic function. Moreover, by assumption, $u$ is bounded and $u=0$ on $\partial M_{\Sigma}$. Therefore, using again Theorem \ref{th_ahlfors-wholeM} in the form of a minimum principle, we deduce \[ \inf_{M_{\Sigma}}u=\inf_{\partial M_{\Sigma}}u=0, \] proving that $u\geq0$. \end{proof} \begin{remark} It is well known that, in case $\partial M=\emptyset$, the above volume growth assumption implies that the vertically bounded $H$-graph must be necessarily minimal, $H=0$. Actually, according to Theorem 5.1 in \cite{RS-Revista}, the same conclusion holds if $\mathrm{vol}B_{R}\leq C_{1}e^{C_{2}R^{2}}$ for some constants $C_{1},C_{2}>0$. Indeed, under this condition, the weak maximum/minimum principle at infinity for the mean-curvature operator holds on $M$.
Therefore, there exists a sequence $x_{k}$ along which \[ \begin{array} [c]{ll} (a) & u\left( x_{k}\right) <\inf_{M}u+1/k\\ (b) & mH\equiv-\operatorname{div}((1+\left\vert \nabla_{M}u\right\vert ^{2}\left( x_{k}\right) )^{-1/2}\nabla_{M}u\left( x_{k}\right) )<1/k. \end{array} \] This shows that $H\leq0$. In a similar fashion we obtain the opposite inequality, proving that $H\equiv0$. The same conclusion was also obtained in \cite{PRS-Pacific} by different methods. On the other hand, if $\partial M=\emptyset$ and the volume growth of $M$ is sub-quadratic, then $M$ is parabolic with respect to the mean curvature operator, \cite{RS-Revista}. Therefore, not only is the $H$-graph minimal, but it must be a slice of $M\times\mathbb{R}$. \end{remark} \begin{remark} Theorem \ref{th_intro_hest_graph} goes in the direction of generalizing Theorem 4 in \cite{RoRo} by A. Ros and H. Rosenberg to non-homogeneous domains. Indeed, assume that $m=2,3,4$ and $\mathrm{Sec}_{N}\geq0$. Then, for every $\left\vert H\right\vert >0$, an $H$-graph $\Sigma=\Gamma_{u}(M)$ in $N\times\mathbb{R}$ over a domain $M\subseteq N$ is necessarily bounded; \cite{RoRo, Ch, ENR}. Furthermore, in case $m=2$, it follows by the Bishop-Gromov comparison theorem that, if $\mathrm{Sec}_{N}\geq0$, then $N$ has quadratic volume growth, that is, \[ \mathrm{vol}B_{R}^{N}\left( \bar{x}\right) \leq\omega_{2}R^{2}, \] where $\omega_{2}$ denotes the area of the unit ball in $\mathbb{R}^{2}$. Moreover, if $N$ is complete, $\partial N=\emptyset$, then $\overline{M}$ is a complete parabolic manifold with boundary. Indeed, let $d_{M}$ and $d_{N}$ denote the intrinsic distance functions on $M$ and $N$, respectively. Clearly \begin{equation} d_{M}\geq\left. d_{N}\right\vert _{M\times M} \label{distances} \end{equation} and $\left( M,d_{M}\right) $ is a complete metric space.
Indeed, from (\ref{distances}), any Cauchy sequence $\left\{ x_{k}\right\} \subset\left( M,d_{M}\right) $ is Cauchy in the complete space $\left( N,d_{N}\right) $. It follows that $x_{k}\overset{d_{N}}{\rightarrow}\bar{x}\in N$ as $k\rightarrow+\infty$. Actually, since $M$ is a closed subset of $\left( N,d_{N}\right) $, we have $\bar{x}\in M$. To conclude that $x_{k}\overset{d_{M}}{\rightarrow}\bar{x}$, simply recall that the metric topology on $M$ induced by $d_{M}$ is the original topology of $M$, i.e., the subspace topology inherited from $N$. Moreover, since, by (\ref{distances}), \[ \mathrm{vol}B_{R}^{M}\left( x\right) \leq\mathrm{vol}\left( B_{R}^{N}\left( x\right) \cap M\right) \leq\mathrm{vol}\left( B_{R}^{N}\left( x\right) \right) , \] for every $x\in M$, it follows that $M$ enjoys the same volume growth property as $N$. In light of the considerations above, Corollary \ref{coro_intro_hest_graph} is now straightforward. \end{remark} We end this section by considering the more general case of an oriented CMC hypersurface in the Riemannian product $N\times\mathbb{R}$. Abstracting from the previous arguments, and using more involved computations as in \cite{AD}, we easily obtain the proof of Theorem \ref{th_intro_hest} stated in the Introduction. \begin{proof} [Proof (of Theorem \ref{th_intro_hest})]Let $f:\Sigma^{m}\rightarrow N^{m}\times\mathbb{R}$ be a complete, oriented $H$-hypersurface isometrically immersed in $N\times\mathbb{R}$, and denote by $h$ the projection of the image of $\Sigma$ on $\mathbb{R}$ under the immersion, that is, $h=\pi_{\mathbb{R}}\circ f$. Note that \begin{equation} \Delta_{\Sigma}h=m\cos\Theta H\leq0, \label{Lap h} \end{equation} where, we recall, $\Theta\in\lbrack\frac{\pi}{2},\frac{3\pi}{2}]$ stands for the angle between the Gauss map $\mathcal{N}$ and the vertical vector field $\partial/\partial t$.
Since, by Theorem \ref{th_growth}, $\Sigma$ is parabolic and $h$ is a superharmonic function which is bounded from below, we can apply the Ahlfors maximum principle to get \[ h\geq\inf_{\Sigma}h=\inf_{\partial\Sigma}h=0. \] Consider now the function $\varphi$ defined as \[ \varphi=Hh+\cos\Theta. \] We know by Theorem 3.1 in \cite{AD} that $\varphi$ is subharmonic. Since it is also bounded, applying again the Ahlfors maximum principle we conclude that \[ Hh-1\leq\varphi\leq\sup_{\Sigma}\varphi=\sup_{\partial\Sigma}\varphi\leq0. \] We have thus shown that \[ 0\leq\pi_{\mathbb{R}}\circ f\left( x\right) \leq\frac{1}{H}, \] as required. \end{proof} \section{The $L^{2}$-Stokes theorem \& slice-type results\label{section-stokes}} In this section we prove the global divergence theorem stated in the Introduction as Theorem \ref{th_intro_Stokes}. We also provide a somewhat weaker form of this result which involves differential inequalities of the type $\operatorname{div}X\geq f$; see Proposition \ref{propineq} below. The latter, together with the Ahlfors maximum principle, is then applied to prove slice-type results for hypersurfaces in product spaces and for graphs; see Theorems \ref{th_intro_slice} and \ref{th_intro_slice_graphs} in the Introduction. Actually, the graph version of this result also requires a Liouville-type theorem for the mean curvature operator on manifolds with boundary, under volume growth conditions. This is modeled on \cite{RS-Revista}. \subsection{Global divergence theorems\label{subsection-divergence}} Recall that, for a given smooth, compactly supported vector field $X$ on an oriented Riemannian manifold $M$ with boundary $\partial M\neq\emptyset$, the ordinary Stokes theorem asserts that \begin{equation} \int_{M}\operatorname{div}X=\int_{\partial M}\langle X,\nu\rangle, \label{stokes} \end{equation} where $\nu$ is the exterior unit normal to $\partial M$. In particular, this holds for every smooth vector field if $M$ is compact.
The result still holds if we relax the regularity conditions on $X$, provided we interpret its divergence in the sense of distributions. To be precise, we introduce the following definition. \begin{definition} \label{def_weakdiv}Let $X$ be a vector field on $M$ satisfying $X\in L_{loc}^{1}(M)$ and $\left\langle X,\nu\right\rangle \in L_{loc}^{1}\left( \partial M\right) $. The \textit{distributional divergence} of $X$ is defined by \begin{equation} (\operatorname{div}X,\varphi)=-\int_{M}\langle X,\nabla\varphi\rangle +\int_{\partial M}\varphi\langle X,\nu\rangle, \label{weakdiv} \end{equation} for every $\varphi\in C_{c}^{\infty}(M)$. \end{definition} \begin{remark} \label{rem_divweak}The above definition extends trivially to $\varphi\in Lip_{c}\left( M\right) $. Actually, more is true. Recall that, given a domain $D\subseteq M$, $W_{0}^{1,p}\left( D\right) $ denotes the closure of $C_{c}^{\infty}(D)$ in $W^{1,p}(D)$. Then, by a density argument, the previous definition extends to every $\varphi\in C_{c}^{0}\left( M\right) \cap W_{0}^{1,2}\left( M\right) $. Indeed, let $\varphi$ be such a function. Then, we find an approximating sequence $\varphi_{n}\in C_{c}^{\infty}\left( M\right) $ such that $\varphi_{n}\rightarrow\varphi$ in $W^{1,2}\left( M\right) $, as $n\rightarrow+\infty$. Since $\mathrm{supp}\left( \varphi\right) $ is compact, we can assume that there exists a domain $\Omega\subset\subset M$ such that $\mathrm{supp}\left( \varphi_{n}\right) \subset\Omega$, for every $n$. Moreover, a subsequence (still denoted by $\varphi_{n}$) converges pointwise a.e. to $\varphi$. Let $c=\max_{M}\left\vert \varphi\right\vert +1$ and define $\phi_{n}=f\circ\varphi_{n}\in Lip_{c}\left( M\right) $, where \[ f\left( t\right) =\left\{ \begin{array} [c]{ll} c, & t\geq c\\ t, & -c<t<c\\ -c, & t\leq-c. \end{array} \right.
\] Note that $\left\{ \phi_{n}\right\} $ is an equibounded sequence, $\mathrm{supp}\left( \phi_{n}\right) \subset\Omega$ and, furthermore, $\phi_{n}\rightarrow f\circ\varphi=\varphi$ in $W^{1,2}\left( M\right) $ and pointwise a.e. in $M$. Therefore, evaluating (\ref{weakdiv}) along $\phi_{n}$, taking limits as $n\rightarrow+\infty$ and using the dominated convergence theorem completes the proof. \end{remark} Now, suppose also that $\operatorname{div}X\in L_{loc}^{1}(M)$. Then we can write \[ (\operatorname{div}X,\varphi)=\int_{M}\varphi\operatorname{div}X \] and, therefore, from (\ref{weakdiv}) we get \[ \int_{M}\varphi\operatorname{div}X=-\int_{M}\langle X,\nabla\varphi\rangle+\int_{\partial M}\varphi\langle X,\nu\rangle. \] In particular, if $X$ is compactly supported, by choosing $\varphi=1$ on the support of $X$ we recover the Stokes formula (\ref{stokes}) for every compactly supported vector field $X$ satisfying $X\in L_{loc}^{1}(M)$, $\operatorname{div}X\in L_{loc}^{1}\left( M\right) $ and $\left\langle X,\nu\right\rangle \in L_{loc}^{1}\left( \partial M\right) $.\bigskip Note that, by similar reasoning, if the vector field $X\in L_{loc}^{1}\left( M\right) $ has a weak divergence $\operatorname{div}X\in L_{loc}^{1}\left( M\right) $ and $\left\langle X,\nu\right\rangle \in L_{loc}^{1}\left( \partial M\right) $, then, for every $\rho\in C_{c}^{0}\left( M\right) \cap W_{0}^{1,2}\left( M\right) $, we have that $\operatorname{div}\left( \rho X\right) \in L_{loc}^{1}\left( M\right) $. Moreover, as in the smooth case, \[ \int_{M}\operatorname{div}\left( \rho X\right) =\int_{M}\left\langle \nabla\rho,X\right\rangle +\int_{M}\rho\operatorname{div}X.
\] To see this, we take $\varphi\in C_{c}^{\infty}\left( M\right) $ and, using (\ref{weakdiv}) in the form of Remark \ref{rem_divweak}, we compute \begin{align*} \left( \operatorname{div}\left( \rho X\right) ,\varphi\right) & =-\int_{M}\left\langle \rho X,\nabla\varphi\right\rangle +\int_{\partial M}\rho\varphi\left\langle X,\nu\right\rangle \\ & =-\int_{M}\left\langle X,\nabla\left( \rho\varphi\right) \right\rangle +\int_{\partial M}\rho\varphi\left\langle X,\nu\right\rangle +\int_{M}\varphi\left\langle X,\nabla\rho\right\rangle \\ & =\left( \operatorname{div}X,\rho\varphi\right) +\int_{M}\varphi\left\langle X,\nabla\rho\right\rangle \\ & =\int_{M}\left( \rho\operatorname{div}X+\left\langle X,\nabla\rho\right\rangle \right) \varphi\\ & =\left( \rho\operatorname{div}X+\left\langle X,\nabla\rho\right\rangle ,\varphi\right) . \end{align*} Whence, we conclude that \[ \operatorname{div}\left( \rho X\right) =\rho\operatorname{div}X+\left\langle X,\nabla\rho\right\rangle \in L_{loc}^{1}\left( M\right) , \] as desired. All these facts will be tacitly employed several times in the rest of the section.\bigskip If $M$ is not compact, we can still prove a global version of the Stokes theorem for vector fields with prescribed asymptotic behavior at infinity. This is the content of Theorem \ref{th_intro_Stokes}.\newline \begin{proof} [Proof (of Theorem \ref{th_intro_Stokes})]Suppose $M$ is parabolic. According to Theorem~\ref{capacity_parabolicity} (ii), there exists an increasing sequence of functions $\varphi_{n}\in C_{c}(M)\cap W^{1,2}(M)$ such that $0\leq\varphi_{n}\leq1$ and \[ \varphi_{n}\to1\,\,\text{ locally uniformly on }\,M\,\,\text{ and }\int_{M}|\nabla\varphi_{n}|^{2}\to0. \] Consider now any vector field $X$ satisfying (\ref{KNR1}).
Since $\varphi_{n}X$ is compactly supported, applying the usual (weak) divergence theorem we get \begin{equation} \int_{M}\operatorname{div}\left( \varphi_{n}X\right) =\int_{\Omega_{n}}\operatorname{div}\left( \varphi_{n}X\right) =\int_{\partial_{1}\Omega_{n}}\varphi_{n}\left\langle X,\nu\right\rangle . \label{KNR2} \end{equation} On the other hand, \[ \int_{M}\operatorname{div}\left( \varphi_{n}X\right) =\int_{M}\left\langle \nabla\varphi_{n},X\right\rangle +\int_{M}\varphi_{n}\operatorname{div}X, \] where \[ \left\vert \int_{M}\left\langle \nabla\varphi_{n},X\right\rangle \right\vert \leq\left( \int_{M}\left\vert \nabla\varphi_{n}\right\vert ^{2}\right) ^{\frac{1}{2}}\left( \int_{M}\left\vert X\right\vert ^{2}\right) ^{\frac{1}{2}}\rightarrow0 \] as $n\rightarrow+\infty$. Moreover, \[ \int_{M}\varphi_{n}\operatorname{div}X=\int_{M}\varphi_{n}(\operatorname{div}X)_{+}-\int_{M}\varphi_{n}(\operatorname{div}X)_{-} \] and \[ \int_{M}\varphi_{n}(\operatorname{div}X)_{+}\leq\int_{M}\varphi_{n}(\operatorname{div}X)_{-}+\int_{\partial_{1}\Omega_{n}}\varphi_{n}\left\langle X,\nu\right\rangle -\int_{M}\left\langle \nabla\varphi_{n},X\right\rangle . \] Using the monotone convergence theorem and the fact that $0\leq\varphi_{n}\leq1$, we obtain \[ \int_{M}(\operatorname{div}X)_{+}\leq\int_{M}(\operatorname{div}X)_{-}+\int_{\partial_{1}\Omega_{n}}\varphi_{n}\left\langle X,\nu\right\rangle <+\infty. \] Hence $\operatorname{div}X\in L^{1}(M)$ and taking limits on both sides of (\ref{KNR2}) completes the first part of the proof. Conversely, assume that $M$ is not parabolic, so that $M$ possesses a smooth, finite, positive Green kernel, \cite{Gr1, GN}. We shall show that the global Stokes theorem fails. To this end, choose an exhaustion $\left\{ \Omega_{n}\right\} $ of $M$ by smooth and relatively compact domains.
Then, the Neumann Green kernel $G\left( x,y\right) $ of $M$ is obtained as the limit of the Green functions $G_{n}\left( x,y\right) $ of $\Omega_{n}$, which satisfy \[ \left\{ \begin{array} [c]{ll} \Delta G_{n}\left( x,y\right) =-\delta_{x}\left( y\right) & \text{on }\Omega_{n}\cap\mathrm{int}M\\ \dfrac{\partial G_{n}}{\partial\nu}=0 & \text{on }\partial_{1}\Omega_{n}\\ G_{n}=0 & \text{on }\partial_{0}\Omega_{n}. \end{array} \right. \] Let $f\geq0$ be a smooth function compactly supported in $\mathrm{int}M$. For each $n$ define \[ u_{n}\left( x\right) =\int_{\Omega_{n}}G_{n}\left( x,y\right) f\left( y\right) dy. \] Then, each $u_{n}$ is a positive, classical solution of the boundary value problem \[ \left\{ \begin{array} [c]{ll} \Delta u_{n}=-f & \text{on }\Omega_{n}\cap\mathrm{int}M\\ \dfrac{\partial u_{n}}{\partial\nu}=0 & \text{on }\partial_{1}\Omega_{n}\\ u_{n}=0 & \text{on }\partial_{0}\Omega_{n}. \end{array} \right. \] By the maximum principle and the boundary point lemma, the sequence is monotonically increasing and converges to a solution $u$ of \[ \left\{ \begin{array} [c]{ll} \Delta u=-f & \text{on }M\\ \dfrac{\partial u}{\partial\nu}=0 & \text{on }\partial M. \end{array} \right. \] Also, by Fatou's lemma, \[ \liminf_{n\rightarrow+\infty}\int_{M}\left\vert \nabla u_{n}\right\vert ^{2}\geq\int_{M}\left\vert \nabla u\right\vert ^{2}. \] Now consider the vector field \[ X=\nabla u. \] Clearly $X$ satisfies all the conditions in (\ref{KNR1}). On the other hand, we have \[ \int_{M}\operatorname{div}X=-\int_{M}f\neq0 \] and \[ \int_{\partial M}\left\langle X,\nu\right\rangle =\int_{\partial M}\dfrac{\partial u}{\partial\nu}=0, \] proving that the global Stokes theorem fails to hold. \end{proof} Using Definition \ref{def_weakdiv} of weak divergence, one can introduce the notion of weak solution of a differential inequality like $\operatorname{div}X\geq f$. We stress that $\operatorname{div}X$ is not required to be a function.
\begin{definition} \label{def_weak-sol-divX}Let $X\in L_{loc}^{1}\left( M\right) $ be a vector field satisfying $\left\langle X,\nu\right\rangle \in L_{loc}^{1}\left( \partial M\right) $ and let $f\in L_{loc}^{1}\left( M\right) $. We say that $\operatorname{div}X\geq f$ in the distributional sense on $M$ if \[ \left( \operatorname{div}X,\varphi\right) \geq\int_{M}f\varphi, \] for every $0\leq\varphi\in C_{c}^{\infty}\left( M\right) $. Actually, according to Remark \ref{rem_divweak}, the definition extends to every $0\leq\varphi\in C_{c}^{0}\left( M\right) \cap W^{1,2}\left( M\right) $. In the special case where $f=0$ and $X=\nabla u$ for some $u\in W_{loc}^{1,2}\left( M\right) $ satisfying $\partial u/\partial\nu\in L_{loc}^{1}\left( \partial M\right) $, we obtain the corresponding notion of weak solution of $\Delta u\geq0$ on $M$. \end{definition} Although elementary, it is important to realize that, as in the smooth setting, the above definition is compatible with that of weak Neumann subsolution given in the Introduction. \begin{lemma} \label{lemma_equiv-weak-def}Let $u\in W_{loc}^{1,2}\left( M\right) $ satisfy $\partial u/\partial\nu\in L_{loc}^{1}\left( \partial M\right) $. Then $u$ is a weak Neumann subsolution of the Laplace equation provided $u$ satisfies \[ \left\{ \begin{array} [c]{ll} \Delta u\geq0 & \text{on }M\medskip\\ \dfrac{\partial u}{\partial\nu}\leq0 & \text{on }\partial M, \end{array} \right. \] where the differential inequality is interpreted according to Definition \ref{def_weak-sol-divX}. \end{lemma} \begin{proof} Straightforward from the equation \[ \left( \Delta u,\varphi\right) \overset{\mathrm{def}}{=}-\int_{M}\left\langle \nabla u,\nabla\varphi\right\rangle +\int_{\partial M}\dfrac{\partial u}{\partial\nu}\varphi, \] with $0\leq\varphi\in C_{c}^{\infty}\left( M\right) $.
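Spelling this out (a short expansion of the one-line argument, which uses nothing beyond the displayed equation): for every $0\leq\varphi\in C_{c}^{\infty}\left( M\right) $, the conditions $\left( \Delta u,\varphi\right) \geq0$ and $\partial u/\partial\nu\leq0$ on $\partial M$ give \[ -\int_{M}\left\langle \nabla u,\nabla\varphi\right\rangle =\left( \Delta u,\varphi\right) -\int_{\partial M}\dfrac{\partial u}{\partial\nu}\varphi\geq-\int_{\partial M}\dfrac{\partial u}{\partial\nu}\varphi\geq0, \] which is precisely the inequality defining a weak Neumann subsolution of the Laplace equation.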
\end{proof} Reasoning as in the proof of Theorem \ref{th_intro_Stokes}, we can now prove the following result, which extends to manifolds with boundary a result in \cite{HPV}. \begin{proposition} \label{propineq} Let $\left( M,g\right) $ be an $m$-dimensional, parabolic manifold with smooth boundary $\partial M$. Let $X$ be a vector field on $M$ satisfying \[ \text{(a) }\left\vert X\right\vert \in L^{2}\left( M\right) \text{; (b) }0\geq\left\langle X,\nu\right\rangle \in L_{loc}^{1}\left( \partial M\right) . \] Assume that $\operatorname{div}X\geq f$ for some $f\in L^{1}(M)$ in the sense of distributions. Then \[ \int_{M}f\leq\int_{\partial M}\langle X,\nu\rangle. \] {The same conclusion holds if $0\leq f\in L_{loc}^{1}(M)$ and, since $\left\langle X,\nu\right\rangle \leq0$, it yields \[ f\equiv0. \] } Moreover, if $\operatorname{div}X\geq0$ in the distributional sense, then \[ \int_{M}\langle X,\nabla\alpha\rangle\leq\int_{\partial M}\alpha\left\langle X,\nu\right\rangle , \] for every $0\leq\alpha\in C_{c}^{\infty}\left( M\right) $. \end{proposition} \begin{proof} Choose a smooth, relatively compact exhaustion $\Omega_{n}\subset M$ and denote by $\varphi_{n}$ the equilibrium potential of the capacitor $(\overline{\Omega}_{0},\Omega_{n})$. Extend $\varphi_{n}$ to be identically $1$ on $\Omega_{0}$ and identically $0$ on $M\backslash\Omega_{n}$. Then, by assumption, \begin{align*} \int_{M}\varphi_{n}f & \leq\left( \operatorname{div}X,\varphi_{n}\right) \\ & =-\int_{M}\langle X,\nabla\varphi_{n}\rangle+\int_{\partial M}\varphi_{n}\langle X,\nu\rangle\\ & \leq\left( \int_{M}|X|^{2}\right) ^{\frac{1}{2}}\left( \int_{M}|\nabla\varphi_{n}|^{2}\right) ^{\frac{1}{2}}+\int_{\partial M}\varphi_{n}\langle X,\nu\rangle. \end{align*} The first part of the statement follows by taking the $\limsup$ as $n\rightarrow+\infty$ and applying Fatou's lemma and either the monotone convergence theorem, if $0\leq f\in L_{loc}^{1}(M)$, or the dominated convergence theorem, if $f\in L^{1}(M)$.
As for the second part, consider the test function $\eta=\varphi_{n}\alpha$. Then \begin{align*} 0 & \leq\left( \operatorname{div}X,\alpha\varphi_{n}\right) \\ & =-\int_{M}\alpha\langle X,\nabla\varphi_{n}\rangle-\int_{M}\varphi_{n}\langle X,\nabla\alpha\rangle+\int_{\partial M}\alpha\varphi_{n}\langle X,\nu\rangle\\ & \leq\sup_{M}|\alpha|\left( \int_{M}|X|^{2}\right) ^{\frac{1}{2}}\left( \int_{M}|\nabla\varphi_{n}|^{2}\right) ^{\frac{1}{2}}-\int_{M}\varphi_{n}\langle X,\nabla\alpha\rangle+\int_{\partial M}\alpha\varphi_{n}\langle X,\nu\rangle, \end{align*} and the conclusion follows as above by computing the $\limsup$ as $n\rightarrow+\infty$. \end{proof} \subsection{Slice-type theorems for hypersurfaces in a half-space} This section is devoted to the proofs of Theorems \ref{th_intro_slice} and \ref{th_intro_slice_graphs} stated in the Introduction. The first of these results involves a complete hypersurface $\Sigma$ contained in the half-space $N\times\lbrack0,+\infty)$ of the ambient product space $N\times\mathbb{R}$. It is assumed that the boundary $\partial\Sigma\neq\emptyset$ lies in the slice $N\times\left\{ 0\right\} $ and that $\Sigma$ has non-positive mean curvature $H\leq0$ with respect to the \textquotedblleft downward\textquotedblright\ Gauss map. The result states that, under a quadratic area growth assumption on $\Sigma$ and regardless of the geometry of $N$, the portion of the hypersurface $\Sigma$ in any upper half-space of $N\times\mathbb{R}$ must have infinite volume unless $\Sigma$ is contained in the totally geodesic slice $N\times\left\{ 0\right\} $. The second result provides a graphical version of this theorem when $\Sigma=\Gamma_{u}\left( M\right) $. If $M$ satisfies a quadratic volume growth assumption, then each superlevel set $M_{t}=\left\{ u\geq t>0\right\} \subseteq M$ has infinite volume unless $\Sigma$ is contained in the totally geodesic slice $M\times\left\{ 0\right\} $.
Note that $M_{t}$ is the orthogonal projection of $\Sigma\cap\left( N\times\lbrack t,+\infty)\right) $ on the slice $M\times\left\{ 0\right\} $.\bigskip Let us begin with the \begin{proof} [Proof (of Theorem \ref{th_intro_slice})]Suppose that $\Sigma$ is not contained in the slice $N\times\left\{ 0\right\} $. If the height function $h$ on $\Sigma$ is bounded from above (for the precise definition of $h$, see the proof of Theorem \ref{th_intro_hest} in Subsection \ref{section-heightestimates}), the parabolicity of $\Sigma$ in the form of the Ahlfors maximum principle implies that \[ h\leq\sup_{\Sigma}h=\sup_{\partial\Sigma}h=0. \] The conclusion is then immediate because, by assumption, $\Sigma$ is contained in the half-space $N\times\lbrack0,+\infty)$. Suppose now that $\sup_{\Sigma}h=+\infty$, so that $\Sigma\cap\left( N\times\left\{ t\right\} \right) \neq\emptyset$ for an arbitrary $t>0$. Letting \[ \Sigma_{t}=\Sigma\cap\left( N\times\lbrack t,+\infty)\right) =\left\{ p\in\Sigma:h\left( p\right) \geq t\right\} , \] and since $\mathrm{vol}\left( \Sigma_{t}\right) \geq\mathrm{vol}\left( \Sigma_{s}\right) $, for every $s\geq t$, we can assume that $\mathrm{vol}\left( \Sigma_{t}\right) <+\infty$ for every $t\gg1$. Moreover, by Sard's theorem, we can suppose that $t$ is a regular value of $\left. h\right\vert _{\mathrm{int}\Sigma}$. In particular, $\Sigma_{t}$ is a smooth complete hypersurface with boundary $\partial\Sigma_{t}=\left\{ p\in\Sigma:h\left( p\right) =t\right\} $ and exterior unit normal $\nu_{t}=-\nabla h/|\nabla h|$. Clearly, $\Sigma_{t}$ is parabolic because it has finite volume. According to (\ref{Lap h}), $h$ is a subharmonic function on $\Sigma_{t}$ and satisfies $\left\vert \nabla h\right\vert \leq1$. In particular, $\left\vert \nabla h\right\vert \in L^{2}\left( \Sigma_{t}\right) $. For any $\varepsilon>0$ define \[ h_{\varepsilon}=\max\left\{ h,t+\varepsilon\right\} .
\] Then $h_{\varepsilon}$ is again subharmonic on $\Sigma_{t}$, it has finite Dirichlet energy, $\left\vert \nabla h_{\varepsilon}\right\vert \in L^{2}\left( \Sigma_{t}\right) $, and, furthermore, $\partial h_{\varepsilon}/\partial\nu=0$ on $\partial\Sigma_{t}$. Therefore, we can apply Proposition \ref{propineq} and deduce that $h_{\varepsilon}$ has to be harmonic on $\Sigma_{t}$. Actually, since $h_{\varepsilon}$ is bounded from below on the parabolic manifold $\Sigma_{t}$, it follows that $h_{\varepsilon}$ is constant on every connected component of $\Sigma_{t}$. Whence, on noting that $h_{\varepsilon}=t+\varepsilon$ on $\partial\Sigma_{t}$, we obtain that $t\leq h\leq t+\varepsilon$ on $\Sigma_{t}$. Since this holds for every $\varepsilon>0$, we conclude that $h\equiv t$ on $\Sigma_{t}$, contradicting the assumption that $h$ is unbounded. \end{proof} The proof of Theorem \ref{th_intro_slice_graphs} is completely similar but requires some preparation. The next Liouville-type result for the mean curvature operator is adapted from \cite{RS-Revista}; see also \cite{CY-CPAM, Ch-Manuscripta}. We provide a detailed proof for the sake of completeness. \begin{theorem} \label{th_areagrowth_meancurvop}Let $(M,g)$ be a complete Riemannian manifold with boundary $\partial M\neq\emptyset$. If, for some reference point $o\in\mathrm{int}M$, \begin{equation} \frac{1}{\mathrm{Area}\left( \partial_{0}B_{R}\left( o\right) \right) }\notin L^{1}(+\infty), \label{area-varphipar} \end{equation} then the following holds. Let $u\in C^{1}\left( M\right) $ be a weak Neumann solution of the problem \begin{equation} \left\{ \begin{array} [c]{ll} \operatorname{div}\left( \dfrac{\nabla u}{\sqrt{1+\left\vert \nabla u\right\vert ^{2}}}\right) \geq0 & \text{on }M\medskip\\ \dfrac{\partial u}{\partial\nu}\leq0 & \text{on }\partial M\medskip\\ \sup_{M}u<+\infty. & \end{array} \right.
\label{parab-meancurv} \end{equation} Then $u\equiv\mathrm{const}$. \end{theorem} \begin{remark} As already pointed out for the Laplace-Beltrami operator, being a weak Neumann solution of $\operatorname{div}((1+\left\vert \nabla u\right\vert ^{2})^{-1/2}\nabla u)\geq0$ means that \begin{equation} -{\displaystyle\int_{M}}\left\langle \dfrac{\nabla u}{\sqrt{1+\left\vert \nabla u\right\vert ^{2}}},\nabla\varphi\right\rangle \geq0, \label{weak_sub_meancurv} \end{equation} for every $0\leq\varphi\in C_{c}^{\infty}\left( M\right) $. Actually, it is obvious that the same definition extends to any elliptic operator of the form $L_{\Phi}\left( u\right) =\operatorname{div}(\Phi(\left\vert \nabla u\right\vert )\nabla u)$, where $\Phi\left( t\right) $ is subject to certain structural conditions. Moreover, under the assumption \[ |\nabla u|\in L_{loc}^{1}\left( \partial M\right) , \] this definition is also coherent with the notion of weak divergence. Namely, $u$ satisfies (\ref{weak_sub_meancurv}) provided $\left( \operatorname{div}X,\varphi\right) \geq0$ and $\partial u/\partial\nu\leq0$, where we have set $X=(1+\left\vert \nabla u\right\vert ^{2})^{-1/2}\nabla u$. This follows immediately from the equation \[ \left( \operatorname{div}X,\varphi\right) \overset{\mathrm{def}}{=}-{\displaystyle\int_{M}}\left\langle \dfrac{\nabla u}{\sqrt{1+\left\vert \nabla u\right\vert ^{2}}},\nabla\varphi\right\rangle +\int_{\partial M}\frac{\varphi}{\sqrt{1+\left\vert \nabla u\right\vert ^{2}}}\frac{\partial u}{\partial\nu}. \] \end{remark} \begin{remark} \label{rmk_areagrowth_meancurvop} If we take $\Phi\left( t\right) =1$ in the argument below, we recover Theorem \ref{th_growth} by Grigor'yan, in the form of a Liouville result for $C^{1}(M)$ subsolutions of the Laplace equation. \end{remark} \begin{proof} Let $u$ be as in the statement of the theorem and assume, by contradiction, that $u$ is non-constant on the ball $B_{R_{0}}(o)$, for some $R_{0}>0$.
Without loss of generality, we can suppose that $u\leq0$ on $M$. Define \[ \Phi\left( t\right) =\frac{1}{\sqrt{1+t^{2}}}. \] Now, having fixed $R>R_{0}$ and $\varepsilon>0$, we choose $\rho=\rho_{\varepsilon,R}$ as follows: \[ \rho(x)=\begin{cases} 1 & \mathrm{on}\quad B_{R}(o)\\ \frac{R+\varepsilon-r(x)}{\varepsilon} & \mathrm{on}\quad B_{R+\varepsilon}(o)\backslash B_{R}(o)\\ 0 & \mathrm{elsewhere}. \end{cases} \] Inserting the test function $\varphi=\rho\mathrm{e}^{u}$ into (\ref{weak_sub_meancurv}) and elaborating, we get \begin{align*} 0 & \leq-\int_{M}\langle\Phi(|\nabla u|)\nabla u,\nabla(\rho\mathrm{e}^{u})\rangle\\ & =-\int_{M}\mathrm{e}^{u}\Phi(|\nabla u|)\left\langle \nabla u,\nabla\rho\right\rangle -\int_{M}\rho\mathrm{e}^{u}\Phi(|\nabla u|)\left\vert \nabla u\right\vert ^{2}. \end{align*} Then, on noting also that $\partial M$ has measure zero, we have \[ \varepsilon^{-1}\int_{(B_{R+\varepsilon}(o)\backslash B_{R}(o))\cap\mathrm{int}M}\mathrm{e}^{u}\Phi(|\nabla u|)\langle\nabla u,\nabla r\rangle\geq\int_{B_{R}(o)\cap\mathrm{int}M}\mathrm{e}^{u}\Phi(|\nabla u|)\left\vert \nabla u\right\vert ^{2}. \] Using the co-area formula and letting $\varepsilon\rightarrow0$ we get, for a.e. $R>R_{0}$, \[ \int_{\partial_{0}B_{R}\left( o\right) }\mathrm{e}^{u}\Phi(|\nabla u|)\langle\nabla u,\nabla r\rangle\geq\int_{B_{R}(o)\cap\mathrm{int}M}\mathrm{e}^{u}\Phi(|\nabla u|)|\nabla u|^{2}. \] On the other hand, using the Cauchy-Schwarz and H\"{o}lder inequalities, we obtain \begin{align*} \int_{\partial_{0}B_{R}\left( o\right) }\!\!\!\!\mathrm{e}^{u}\Phi(|\nabla u|)\langle\nabla u,\nabla r\rangle & \leq\int_{\partial_{0}B_{R}\left( o\right) }\mathrm{e}^{u}\Phi(|\nabla u|)\left\vert \nabla u\right\vert \\ & \leq\left( \int_{\partial_{0}B_{R}\left( o\right) }\!\!\!\!\mathrm{e}^{u}\Phi(|\nabla u|)\right) ^{\frac{1}{2}}\left( \int_{\partial_{0}B_{R}\left( o\right) }\!\!\!\!
\mathrm{e}^{u}\Phi(|\nabla u|)\left\vert \nabla u\right\vert ^{2}\right) ^{\frac{1}{2}}\\ & \leq\mathrm{Area}(\partial_{0}B_{R}(o))^{\frac{1}{2}}\left( \int_{\partial_{0}B_{R}\left( o\right) }\mathrm{e}^{u}\Phi(|\nabla u|)\left\vert \nabla u\right\vert ^{2}\right) ^{\frac{1}{2}}. \end{align*} Now, set \[ H(R)=\int_{B_{R}(o)\cap\mathrm{int}M}\mathrm{e}^{u}\Phi(|\nabla u|)\left\vert \nabla u\right\vert ^{2}. \] Then, by the co-area formula and the previous inequalities, \[ \frac{H^{\prime}(R)}{H(R)^{2}}\geq\frac{1}{\mathrm{Area}(\partial_{0}B_{R}(o))}. \] Integrating the latter on $[R_{0},R]$ and letting $R\rightarrow+\infty$, by assumption (\ref{area-varphipar}) we conclude \[ H(R_{0})\leq\left( \int_{R_{0}}^{+\infty}\mathrm{Area}(\partial_{0}B_{t}(o))^{-1}dt\right) ^{-1}=0, \] proving that \[ \int_{B_{R_{0}}(o)\cap\mathrm{int}M}\mathrm{e}^{u}\Phi(|\nabla u|)\left\vert \nabla u\right\vert ^{2}=0. \] Therefore, $u$ must be constant on $B_{R_{0}}(o)$, leading to a contradiction. \end{proof} We are now ready to prove the slice theorem for graphs. \begin{proof} [Proof (of Theorem \ref{th_intro_slice_graphs})]Let $\Sigma=\Gamma_{u}\left( M\right) $, with $u\in C^{0}\left( M\right) \cap C^{\infty}\left( \mathrm{int}M\right) $, and for every $s\in\mathbb{R}$ define \[ M_{s}:=\{x\in M:u(x)\geq s\}. \] By the assumption on $\partial\Sigma=\Gamma_{u}\left( \partial M\right) $, there exists $t>0$ such that $M_{t}\subset\mathrm{int}M$ and $\mathrm{vol}(M_{t})<+\infty$. Assume that $M_{t}\neq\emptyset$ for, otherwise, as in Theorem \ref{th_intro_slice}, the proof is easier. We claim that $u$ is constant on $M_{t}$. Indeed, by contradiction, suppose that this is not the case. Then, by Sard's theorem, we can choose $t<c<\sup_{M}u$ such that $c$ is a regular value of $\left. u\right\vert _{\mathrm{int}M}$. Thus, the closed subset $M_{c}$ is a complete manifold with boundary $\partial M_{c}\neq\emptyset$ and exterior unit normal $\nu_{c}=-\nabla u/|\nabla u|$.
In particular, as a complete manifold with finite volume, $M_{c}$ is parabolic. Since the smooth function $u$ satisfies \[ \operatorname{div}\left( \dfrac{\nabla_{M}u}{\sqrt{1+|\nabla_{M}u|^{2}}}\right) =-mH\geq0\text{ on }M_{c}, \] it follows that, having fixed any $\varepsilon>0$, the same differential inequality holds for \[ u_{\varepsilon}=\max\left\{ u,c+\varepsilon\right\} ; \] see e.g. \cite{PS-Fortaleza}. Note also that $\partial u_{\varepsilon }/\partial\nu=0$ on $\partial M_{c}$. Summarizing, the vector field \[ X_{\varepsilon}=\dfrac{\nabla_{M}u_{\varepsilon}}{\sqrt{1+|\nabla _{M}u_{\varepsilon}|^{2}}} \] satisfies \[ \left\{ \begin{array} [c]{ll} \operatorname{div}_{M}X_{\varepsilon}\geq0 & \text{on }M_{c}\\ 1\geq\left\vert X_{\varepsilon}\right\vert \in L^{2}\left( M_{c}\right) & \\ 0=\left\langle X_{\varepsilon},\nu_{c}\right\rangle & \text{on }\partial M_{c}. \end{array} \right. \] By applying Proposition \ref{propineq} we deduce that $\operatorname{div}_{M}X_{\varepsilon}=0$ on $M_{c}$, i.e., $\Sigma_{c}=\Gamma_{u}\left( M_{c}\right) $ is a minimal graph. Actually, since $\mathrm{vol}\left( M_{c}\right) <+\infty$, by Theorem \ref{th_areagrowth_meancurvop} we get that $u_{\varepsilon}$ must be constant on every connected component of $M_{c}$. Since $u_{\varepsilon }=c+\varepsilon$ on $\partial M_{c}$ it follows that $c\leq u\leq c+\varepsilon$ on $M_{c}$. Hence, using the fact that $\varepsilon>0$ was chosen arbitrarily, we conclude that $u\equiv c$ on $M_{c}$. This contradicts the fact that $c$ is a regular value of $u$, and the claim is proved. Since $u$ is constant on $M_{t}$ we have that $\sup_{M}u<+\infty$. We now distinguish three cases. (a) Suppose that $\partial\Sigma=\partial M\times\left\{ 0\right\} $ and $\Sigma\subset M\times\lbrack0,+\infty)$. This means that $u\geq0$ with $u=0$ on $\partial M$. In this case the conclusion $u\equiv0$ follows exactly as in the proof of Theorem \ref{th_intro_slice}. (b) Suppose that $\Sigma$ is real analytic, i.e., it is described by a real analytic function $u$. 
Since $u$ is constant on the open set $\left\{ u<c\right\} $ we must conclude that $u$ is constant everywhere. (c) Suppose that $\cos\widehat{\mathcal{N}_{0}\mathcal{N}}\leq0$ on $\partial\Sigma=\Gamma_{u}\left( \partial M\right) $. This means that $\partial u/\partial\nu\leq0$ on $\partial M$. The desired conclusion follows by a direct application of Theorem \ref{th_areagrowth_meancurvop}. \end{proof} The following corollary is a straightforward consequence of the above proof. \begin{corollary} Let $\left( M,g\right) $ be a complete manifold with boundary $\partial M$ and assume that $\mathrm{vol}(M)<+\infty$. Let $\Sigma=\Gamma_{u}\left( M\right) $ be a graph with non-positive mean curvature $H\left( x\right) $ with respect to the downward Gauss map $\mathcal{N}$. Assume also that the angle $\theta$ between the Gauss map $\mathcal{N}$ of the graph $\Sigma$ and the Gauss map $\mathcal{N}_{0}=\left( -\nu,0\right) $ of $\partial M\times\left\{ t\right\} \hookrightarrow M\times\left\{ t\right\} $ satisfies $\theta\in\lbrack-\frac{\pi}{2},\frac{\pi}{2}]$. Then $\Sigma$ is a horizontal slice of $M\times\mathbb{R}$. \end{corollary}
\section{Supplemental Material} \renewcommand{\r}{\mathbf{r}} \renewcommand{\k}{\mathbf{k}} \renewcommand{\theequation}{S\arabic{equation}} \section*{Linear response theory for the EIT spectrum} We apply linear response theory \cite{manyparticle} to the electric susceptibility of EIT.\ For the perturbation Hamiltonian $\hat{H}_1(t)$, the linear response of any operator $\hat{O}$ under the ground state $|\Psi_G\rangle_H$ of the unperturbed Hamiltonian $\hat{H}$ is ($\hbar\equiv 1$) \begin{align} \delta\left\langle \hat{O}(t)\right\rangle & =i\int_{-\infty}^{t}dt^{\prime}{\Big.}_H\left\langle \Psi_{G}\left\vert \left[ \hat{H}_{1,H}(t^{\prime }),\hat{O}_{H}(t)\right] \right\vert \Psi_{G}\right\rangle_{H},\label{linear} \end{align} where $\hat{H}_{1,H}(t)=e^{i\hat{H}t}\hat{H}_{1}(t)e^{-i\hat{H}t}$ is the operator defined in the Heisenberg picture (with subscript $H$), and similarly for $\hat{O}_{H}(t)$. To calculate the electric susceptibility of EIT, we consider the variation of the polarization operator, \begin{align} &\hat{P}(\k,t)\equiv\int d\r\hat{P}(\r,t)e^{-i\k\cdot\r}\nonumber\\&=d_{0}\sum_{\mathbf{q}}\left[ \hat{e}_{-\k+\mathbf{q}}^{\dag}(t)\hat{g}_{\mathbf{q}}(t)+\hat{g} _{\mathbf{q}}^{\dag}(t)\hat{e}_{\k+\mathbf{q}}(t)\right] , \end{align} and insert it into Eq. (\ref{linear}), which gives \begin{widetext} \begin{align} & \delta\left\langle \hat{P}(\mathbf{q},t)\right\rangle =i\int_{-\infty}^{\infty}dt^{\prime}\theta(t-t^{\prime}){\Bigg.}_H\left\langle \Psi_{G}\left\vert \left[-\frac{1}{V}\sum_{\k,\k'}e^{i\hat{H}t^{\prime}}\left(\bar{\Omega}_{1,\k}(t^{\prime})\hat{e}_{\k+\k'}^{\dag}(t^{\prime})\hat{g}_{\k'}(t^{\prime})+h.c.\right)e^{-i\hat{H}t^{\prime}},\right.\right.\right. 
\nonumber\\&\left.\left.\left.d_{0}\sum_{\mathbf{q}^{\prime}}e^{i\hat{H}t}\left(\hat{e}_{-\mathbf{q}+\mathbf{q}^{\prime}}^{\dag}(t)\hat{g}_{\mathbf{q}^{\prime}}(t)+\hat{g}_{\mathbf{q}^{\prime}}^{\dag}(t)\hat{e}_{\mathbf{q}+\mathbf{q}^{\prime}}(t)\right) e^{-i\hat{H}t}\right]\right\vert \Psi_{G}\right\rangle_H ,\nonumber\\& =-i\int_{-\infty}^{\infty}dt^{\prime}\theta(t-t^{\prime})\frac{d_{0}}{V}\sum_{\k,\k',\mathbf{q}^{\prime}}\Big\{\bar{\Omega}_{1,\k}(t^{\prime}){\Big.}_H\left\langle \Psi_{G}\left\vert\left[e^{i\hat{H}t^{\prime}}\hat{e}_{\k+\k'}^{\dag}(t^{\prime})\hat{g}_{\k'}(t^{\prime})e^{-i\hat{H}t^{\prime}},e^{i\hat{H}t}\hat{g}_{\mathbf{q}^{\prime}}^{\dag}(t)\hat{e}_{\mathbf{q}+\mathbf{q}^{\prime}}(t)e^{-i\hat{H}t}\right] \right\vert \Psi_{G}\right\rangle_H \nonumber\\ & +\bar{\Omega}_{1,\k}^{\ast}(t^{\prime}){\Big.}_H\left\langle \Psi_{G}\left\vert\left[ e^{i\hat{H}t^{\prime}}\hat{g}_{\k'}^{\dag}(t^{\prime})\hat{e} _{\k+\k'}(t^{\prime})e^{-i\hat{H}t^{\prime}},e^{i\hat{H}t}\hat{e}_{-\mathbf{q}+\mathbf{q}^{\prime}}^{\dag}(t)\hat{g}_{\mathbf{q}^{\prime}}(t)e^{-i\hat{H}t}\right] \right\vert \Psi_{G}\right\rangle_H\Big\}, \end{align} \end{widetext} where in the last line we have neglected the terms of $\hat{e}^{\dag}\hat{g}\hat{e}^{\dag}\hat{g}$ and $\hat{g}^{\dag}\hat{e}\hat{g}^{\dag}\hat{e}$, which vanish in the expectation value taken in the ground state wavefunction $|\Psi_G\rangle_H$. For the unperturbed Hamiltonian (or probe-free Hamiltonian, $\hat{H}=\hat{H}_0+\hat{H}_U$), all atoms are in the atomic ground state $|g\rangle$, and states $|e\rangle$ and $|s\rangle$ are empty. 
Therefore, further expansion of the above equation gives \begin{widetext} \begin{align} & \delta\left\langle \hat{P}(\mathbf{q},t)\right\rangle=\frac{id_{0}}{V}\int_{-\infty}^{\infty}dt^{\prime}\theta(t-t^{\prime}) \sum_{\k,\k',\mathbf{q}^{\prime}}\Big[ \tilde{\Omega}_{1,\k}(t^{\prime}){\Big.}_H\left\langle\Psi_{G}\left\vert \hat{g}_{\mathbf{q}^{\prime}}^{\dag}(t)\hat{e}_{\mathbf{q}+\mathbf{q}^{\prime}}(t)\hat{e}_{\k+\k'}^{\dag}(t^{\prime})\hat{g}_{\k'}(t^{\prime})\right\vert \Psi_{G}\right\rangle _{H} -h.c.(\mathbf{q}\rightarrow-\mathbf{q})\Big] . \end{align} \end{widetext} Since the excited state $|e\rangle$ and the second ground state $|s\rangle$ are coupled by the control field, we then express $\hat{e}_{\k+\k_1}(t)$ in the new eigenbasis of operators $\hat{a}_\k$ and $\hat{b}_\k$, with corresponding eigenenergies $\epsilon_\pm(\k)$, respectively. Here $\hat{a}_{\k}$ $=$ $\cos\phi_\k\hat{e}_{\k+\k_1}$ $+$ $\sin\phi_\k\hat{s}_{\k+\k_1-\k_2}$ and $\hat{b}_{\k}$ $=$ $\sin\phi_\k\hat{e}_{\k+\k_1}$ $-$ $\cos\phi_\k\hat{s}_{\k+\k_1-\k_2}$, where $\cos\phi_\k$$=$$\sqrt{[\epsilon_+(\k)-\epsilon_{0,\k+\k_1}+\Delta_1]/[\epsilon_+(\k)-\epsilon_-(\k)]}$. The associated eigenvalues are: $\epsilon_\pm(\k)$$=$$-\Delta_1+[\bar{\Delta}_2+\epsilon_{0,\k+\k_1}\pm\sqrt{(\bar{\Delta}_2-\epsilon_{0,\k+\k_1})^2+4\Omega_2^2}]/2$, where $\bar{\Delta}_2\equiv\Delta_2+\epsilon_{0,\k+\k_r}$, $\epsilon_{0,\k}\equiv\k^2/(2m)-\mu$, and $\k_r\equiv\k_1-\k_2$ is the recoil momentum. 
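As a numerical sanity check on the dressed-state expressions above, one can diagonalize the $2\times2$ coupling block between $|e\rangle$ and $|s\rangle$ directly and compare with the quoted closed forms for $\epsilon_\pm(\k)$ and $\cos^2\phi_\k$. This is an illustrative sketch, not code from the paper; all numerical parameter values are arbitrary assumptions (in units of $\Gamma$):

```python
import numpy as np

# Closed forms quoted in the text; eps0 plays the role of eps_{0,k+k1},
# Delta2bar of the shifted two-photon detuning. Illustrative only.
def dressed_states(eps0, Delta1, Delta2bar, Omega2):
    root = np.sqrt((Delta2bar - eps0) ** 2 + 4.0 * Omega2 ** 2)
    eps_p = -Delta1 + (Delta2bar + eps0 + root) / 2.0
    eps_m = -Delta1 + (Delta2bar + eps0 - root) / 2.0
    cos2phi = (eps_p - eps0 + Delta1) / (eps_p - eps_m)
    return eps_m, eps_p, cos2phi

eps0, D1, D2b, O2 = 0.3, 0.1, -0.2, 1.5        # assumed parameters
eps_m, eps_p, c2 = dressed_states(eps0, D1, D2b, O2)

# Same energies from the coupling matrix in the {|e>, |s>} basis
H2 = np.array([[eps0 - D1, O2], [O2, D2b - D1]])
w = np.linalg.eigvalsh(H2)                     # ascending eigenvalues
assert np.allclose(w, [eps_m, eps_p])
assert 0.0 <= c2 <= 1.0                        # valid mixing angle
```

The check confirms that the quoted $\epsilon_\pm$ are simply the eigenvalues of the control-field coupling matrix, and that $\cos^2\phi_\k$ lies in $[0,1]$ as a mixing angle must.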
A phenomenological spontaneous decay rate ($\Gamma$) of the excited state can be added by replacing $\epsilon_{0,\k+\k_1}$ with $\epsilon_{0,\k+\k_1}-i\Gamma$.\ After shifting the momentum, $\mathbf{q}\rightarrow\mathbf{q}-\k_1$ and $\k\rightarrow\k-\k_1$, we have \begin{widetext} \begin{align} & \delta\left\langle \hat{P}(\mathbf{q},t)\right\rangle =\frac{id_{0}}{V}\int_{-\infty}^{\infty}dt^{\prime}\theta(t-t^{\prime})\sum_{\k,\k',\mathbf{q}^{\prime}}\Big\{ \bar{\Omega}_{1,\k}(t^{\prime}){\big.}_H\left\langle\Psi_{G}\right\vert\hat{g}_{\mathbf{q}^{\prime}}^{\dag}(t)\left[\cos\phi_{\mathbf{q}+\mathbf{q}^{\prime}}\hat{a}_{\mathbf{q}+\mathbf{q}^{\prime}}(t)-\sin\phi_{\mathbf{q}+\mathbf{q}^{\prime}}\hat{b}_{\mathbf{q}+\mathbf{q}^{\prime}}(t)\right] \nonumber\\& \left[\cos\phi_{\k+\k'}\hat{a}_{\k+\k'}^{\dag}(t^{\prime})-\sin\phi_{\k+\k'}\hat{b}_{\k+\k'}^{\dag}(t^{\prime})\right] \hat{g}_{\k'}(t^{\prime})\left\vert\Psi_{G}\right\rangle _{H} -h.c.(\mathbf{q}\rightarrow-\mathbf{q})\Big\},\nonumber\\ & =\frac{id_{0}}{V}\int_{-\infty}^{\infty}dt^{\prime}\theta(t-t^{\prime})\sum_{\k}\Big\{\bar{\Omega}_{1,\mathbf{q}}(t^{\prime})F_{\k}(t-t^{\prime}) \left[\cos^{2}\phi_{\k+\mathbf{q}}e^{-i\epsilon_-(\k+\mathbf{q})(t-t^{\prime})}+\sin^{2}\phi_{\k+\mathbf{q}}e^{-i\epsilon_+(\k+\mathbf{q})(t-t^{\prime})}\right] \nonumber\\& -h.c.(\mathbf{q}\rightarrow-\mathbf{q})\Big\}, \end{align} \end{widetext} where we have defined the ground state correlation function as $F_{\k}(t-t^{\prime})\delta_{\k,\k^{\prime}}$$=$${}_H\langle \Psi_{G}\vert \hat{g}_{\k^{\prime}}^{\dag}(t)\hat{g}_{\k}(t^{\prime})\vert \Psi_{G}\rangle_{H}$.\ Using the Fourier transform $\delta\langle \hat{P}(\mathbf{q},\omega)\rangle$$=$$\int_{-\infty}^{\infty}dt\delta\langle \hat{P}(\mathbf{q},t)\rangle e^{-i\omega t}$, and defining the electric susceptibility as $\chi(\mathbf{q},\omega)=\delta\langle\hat{P}(\mathbf{q},\omega)\rangle/\tilde{\Omega}_{1,\mathbf{q}}(\omega)$, where $\tilde{\Omega}_{1,\mathbf{q}}(\omega)$ is the 
Fourier transform of $\bar{\Omega}_{1,\mathbf{q}}(t)$, we have (after the change of variables, $t=t^{\prime}+t^{\prime\prime}$, $dt=dt^{\prime\prime}$) \begin{align} &\chi(\mathbf{q},\omega)\nonumber\\&=\frac{id_{0}}{V}\sum_{\k}\int_{-\infty}^{\infty}dt\theta(t)F_{\k}(t) \left[ \cos^{2}\phi_{\k+\mathbf{q}}e^{-i(\omega+\epsilon_-(\k+\mathbf{q}))t}\right.\nonumber\\&\left.+\sin^{2}\phi_{\k+\mathbf{q}}e^{-i(\omega+\epsilon_+(\k+\mathbf{q}))t}\right]. \end{align} Finally, since the correlation function $F_\k(t-t')$ is related to the time-ordered retarded Green's function via $\theta(t)F_{\k}(t)=\int d\r e^{i\k\cdot\r}iG^{<}(0,0;\r,t)$, where $iG^{<}(0,0;\r,t)\equiv{}_H\langle\Psi_G|\hat{\psi}_{g}^{\dag}(\r,t)\hat{\psi}_{g}(0,0)|\Psi_G\rangle_H\theta(t)$, we can rewrite the above result as \begin{align} & \chi(\mathbf{q},\omega)=\frac{id_{0}}{V}\sum_{\k}\int_{-\infty}^{\infty}d\tilde{\omega}i\tilde{G}^{<}(\k,\tilde{\omega})\nonumber\\ &\left[ \frac{i\cos^{2}\phi_{\k+\mathbf{q}}}{\tilde{\omega}-\omega-\epsilon_-(\k+\mathbf{q})} +\frac{i\sin^{2}\phi_{\k+\mathbf{q}}}{\tilde{\omega}-\omega-\epsilon_+(\k+\mathbf{q})}\right],\label{chi1} \end{align} where $\tilde{G}^{<}(\k,\tilde{\omega})$ is the Fourier transform of the Green's function $iG^<(0,0;\r,t)$.\ The eigenvalues $\epsilon_\pm(\k)$ and $\phi_{\k}$ have been defined before. As an example, we consider a weakly interacting condensate.\ By separating the fluctuation from the condensate: $\hat{g}_\k(t)$ $=$ $\sqrt{N_c}\delta_{\k=0}$ $+$ $\delta\hat{g}_\k(t)$ with $N_c$ being the condensate particle number, we apply the Bogoliubov transformation to calculate the single particle Green's function. 
The final EIT spectrum can be separated into the condensate (C) and the non-condensate (NC) parts: $\chi(\mathbf{q},\omega)=\chi_C(\omega)+\chi_{NC}(\mathbf{q},\omega)$, where $\chi_C(\omega)=d_0 n_c[\cos^2\phi_0/(\omega+\epsilon_{-}(0))+\sin^2\phi_0/(\omega+\epsilon_{+}(0))]$, and $\chi_{NC}(\omega)=d_0V^{-1}\sum_{\k\neq 0}v_\k^2[\cos^2\phi_\k/(\omega+\epsilon_{-}(\k)+\epsilon(\k))+\sin^2\phi_\k/(\omega+\epsilon_{+}(\k)+\epsilon(\k))]$. Here $n_c\equiv N_c/V$ is the condensate density, $\epsilon(\k)\equiv(\epsilon_1^2(\k)-\epsilon_2^2)^{1/2}$ is the phonon excitation energy, and $v_\k^2\equiv\sinh^2\theta(\k)$, where $\tanh2\theta(\k)\equiv\epsilon_2/\epsilon_1(\k)$, $\epsilon_1(\k)\equiv\k^2/2m+n_cU_{gg}$, and $\epsilon_2\equiv n_cU_{gg}$. Note that the non-condensate part is contributed by all momentum channels due to the quantum depletion, consistent with results from the dark state approach \cite{Jen}. It is instructive to simplify Eq. (\ref{chi1}) further in the strong control field limit, i.e., when $\Omega_2$ is much larger than the atomic kinetic and interaction energies.\ It is straightforward to obtain the following leading-order result: \begin{align} \chi(\omega) &=-\frac{d_0}{V}\sum_\k\int_{-\infty}^{\infty}d\tilde{\omega}i\tilde{G}^<(\k,\tilde{\omega}) \Big(\frac{\cos^2\phi}{\tilde{\omega}-\omega-\epsilon_{-}}\nonumber\\ &+\frac{\sin^2\phi}{\tilde{\omega}-\omega-\epsilon_{+}}\Big), \label{chi2} \end{align} where $\cos\phi\equiv\sqrt{(\epsilon_+ +\Delta_1)/(\epsilon_+ - \epsilon_-)}$, and $\epsilon_\pm\equiv-\Delta_1+(\Delta_2\pm\sqrt{\Delta_2^2+4\Omega_2^2})/2$. Note that $\chi(\omega)$ obtained in this limit has no momentum ($\mathbf{q}$) dependence. 
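The quantum-depletion weight $v_\k^2=\sinh^2\theta(\k)$ entering $\chi_{NC}$ above can be cross-checked against the equivalent closed form $v_\k^2=(\epsilon_1(\k)/\epsilon(\k)-1)/2$, which follows from $\cosh2\theta(\k)=\epsilon_1(\k)/\epsilon(\k)$. A minimal sketch with assumed parameter values (not taken from the text):

```python
import numpy as np

# Bogoliubov weight v_k^2 = sinh^2(theta_k), with
# tanh(2 theta_k) = eps_2 / eps_1(k), eps_2 = n_c * U_gg.
def v_k2(k, m, ncU):
    e1 = k ** 2 / (2.0 * m) + ncU          # eps_1(k)
    theta = 0.5 * np.arctanh(ncU / e1)
    return np.sinh(theta) ** 2

k, m, ncU = 0.7, 1.0, 0.5                  # illustrative assumptions
e1 = k ** 2 / (2.0 * m) + ncU
eps = np.sqrt(e1 ** 2 - ncU ** 2)          # phonon energy eps(k)
assert np.isclose(v_k2(k, m, ncU), (e1 / eps - 1.0) / 2.0)
# depletion weight decays at large k: mainly small-k modes contribute
assert v_k2(3.0, m, ncU) < v_k2(0.3, m, ncU)
```

The second assertion illustrates why the non-condensate contribution to the spectrum is dominated by long-wavelength modes of the depleted cloud.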
\section*{Single particle Green's function of a Luttinger liquid} Here we demonstrate how to derive the single particle Green's function \cite{manyparticle} for a Luttinger liquid (LL).\ First, we use the density-phase representation of low-energy bosonic field operators, $\hat{\psi}_{g}^{\dagger}\left( x,t\right) =\sqrt{\hat{\rho}(x,t)}e^{-i\hat{\phi}(x,t)}$, for the Luttinger liquid \cite{LL}, where $\hat{\rho}(x,t)$ and $\hat{\phi}(x,t)$ are density and phase fluctuation operators. The single particle Green's function becomes \begin{align} &iG^<_{LL}(0,0;x,t) \nonumber\\ & =\left\langle \hat{\psi}_{g}^{\dag}(x,t)\hat{\psi}_{g}(0,0)\right\rangle\theta(t) ,\nonumber\\ & \simeq\sqrt{\hat{\rho}(x)\hat{\rho}(0)}\left\langle e^{-i\hat{\phi}(x,t)}e^{i\hat{\phi}(0,0)}\right\rangle\theta(t) ,\nonumber\\ & \simeq n\left\langle e^{-i\hat{\phi}(x,t)}e^{i\hat{\phi}(0,0)}\right\rangle\theta(t) , \end{align} where we have used the fact that the density fluctuation is suppressed and negligible for a repulsively interacting 1D gas \cite{LL}, and therefore only the phase fluctuation is kept in the low energy effective theory. $\theta(t)$ is the Heaviside step function.\ The above equation can be simplified by using $e^{\hat{A}}e^{\hat{B}}=e^{\hat{A}+\hat{B}}e^{[ \hat{A},\hat{B}] /2}$ if $\hat{A}$ and $\hat{B}$ are linear combinations of bosonic operators and $[\hat{A},\hat{B}]$ is a complex number. We can also apply $\langle e^{\hat{A}}\rangle =e^{\langle \hat{A}^{2}\rangle /2}$ for expectation values taken with a Hamiltonian bilinear in $\hat{A}$. 
In the low energy limit, it has been shown that the effective Hamiltonian of a 1D bosonic gas can be described by a LL model, where the phase operator $\hat{\phi}(x,t)$ can be calculated with periodic boundary conditions \cite{LL}, \begin{align} \hat{\phi}(x)&=\frac{1}{2}\sum_{q\neq 0}\bigg|\frac{2\pi}{qL\kappa}\bigg|^{1/2}\nonumber\\&\times e^{-a|q|/2}\text{sgn}(q)\left[e^{iqx}\hat{b}(q)+e^{-iqx}\hat{b}^\dagger(q)\right], \label{phi_b} \end{align} where $\hat{b}(q)$ is the bosonic eigenstate field operator of the effective LL model. A positive length scale $a$ is introduced as a short-distance cutoff ensuring the convergence of the integrals, and $L$ is the system size.\ Here the LL parameter is denoted as $\kappa$, and the dispersion is $\omega(q)=|q|v$ with $v$ being the phonon velocity. Note that the density-phase representation in a LL is valid in the long wavelength limit $q\ll\rho^{-1}$. The exact values of $\kappa$ and $v$ should be determined by a more microscopic calculation or from experimental measurement. As a result, the single particle Green's function of the LL model can be calculated to be \begin{align} &iG^<_{LL}(0,0;x,t)\nonumber\\ &\simeq e^{\left[ \hat{\phi}(x,t),\hat{\phi}(0)\right] /2}n\exp\left\{ -\frac{1}{2}\left\langle \hat{T}\left[ \hat{\phi}(x,t)-\hat{\phi}(0)\right] ^{2}\theta(t)\right\rangle \right\} ,\nonumber\\ &\simeq e^{\left[ \hat{\phi}(x,t),\hat{\phi}(0)\right] /2}n\exp\left\{ -\frac{1}{4\kappa}\ln\frac{x^2+(a+ivt)^2}{a^2}\right\}\theta(t), \label{G_LL1} \end{align} where $\hat{T}$ is the time-ordering operator.\ This correlation function from the LL model describes the behavior of quasi-long range order, with an exponent proportional to the interaction strength.\ The logarithmic function inside the exponential function is derived from (using $\omega_{q}=|q|v$ and Eq. 
(\ref{phi_b})) \begin{align} &\left\langle \hat{T}\left[ \hat{\phi}(x,t)-\hat{\phi}(0)\right] ^{2}\right\rangle \nonumber\\&=\frac{\theta(t)}{4}\sum_{q\neq0}\left\vert \frac{2\pi}{qL\kappa}\right\vert e^{-a\left\vert q\right\vert}\left[2-2e^{i(qx-|q|vt)}\right],\nonumber\\ & =\frac{\theta(t)}{2\kappa}\int_{0}^{\infty}dq\frac{e^{-aq}}{q}\left[ 2-e^{iq(x-vt)}-e^{-iq(x+vt)}\right],\nonumber\\ & =\frac{\theta(t)}{2\kappa}\ln\frac{x^2+(a+ivt)^2}{a^2}, \end{align} which can also be found in Ref. \cite{Giamarchi}.\ The commutation relation in the prefactor of the Green's function in Eq. (\ref{G_LL1}) can also be calculated: \begin{align} &\left[ \hat{\phi}(x,t),\hat{\phi}(0)\right] \nonumber\\ &=\frac{1}{4\kappa}\int_{-\infty}^{\infty}\frac{dq}{\left\vert q\right\vert }e^{-a\left\vert q\right\vert }\left[ e^{i(qx-\left\vert q\right\vert vt)}-e^{-i(qx-\left\vert q\right\vert vt)}\right] ,\nonumber\\ & =\frac{1}{4\kappa}\log\frac{\left[ a+i\left( x-vt\right) \right] \left[a-i\left( x+vt\right) \right] }{\left[ a-i\left( x-vt\right) \right] \left[ a+i\left( x+vt\right)\right]}=0, \end{align} where in the last line we have taken the limit $a\ll |x|,v|t|$. Finally, the full dynamical correlation function of a LL model (which is in general valid only in the low-energy and long wavelength limit of a 1D Bose gas) is then derived as \begin{align} &iG^<_{LL}(0,0;x,t)=\frac{n a^{1/(2\kappa)}}{[x^2+(a+ivt)^2]^{1/(4\kappa)}}. \label{green} \end{align} \section*{The electric susceptibility of a Luttinger liquid} Here we proceed to calculate the electric susceptibility (i.e., the EIT spectrum) of a LL. It is more instructive to consider the large control field limit (i.e., Eq. (\ref{chi2})) so that by inserting the result of Eq. 
(\ref{green}), we have \begin{align} &\chi_{LL}(\omega)\nonumber\\ &=\frac{id_{0}}{2\pi}\int_{-\infty}^{\infty}dk\int_{0}^{\infty}dt\int_{-\infty}^{\infty}dx\frac{n e^{ikx}a^{1/(2\kappa)}}{[x^2+(a+ivt)^2]^{1/(4\kappa)}}\nonumber\\ &\times\left[ \cos^{2}\phi e^{-i(\omega+\epsilon_-)t}+\sin^{2}\phi e^{-i(\omega+\epsilon_+)t}\right],\nonumber\\ &=id_{0}n\int_{0}^{\infty}dt\frac{a^{1/(2\kappa)}}{(a+ivt)^{1/(2\kappa)}}\left[ \cos^{2}\phi e^{-i(\omega+\epsilon_- )t}\right. \nonumber\\&\left.+\sin^{2}\phi e^{-i(\omega+\epsilon_+ )t}\right],\nonumber\\ &=d_0n\Big(\frac{a}{v}\Big)^{1/(2\kappa)}\Gamma(1-\frac{1}{2\kappa})\times\nonumber\\& \bigg[\frac{\cos^2\phi }{(\omega+\epsilon_{-})^{1-1/(2\kappa)}}+\frac{\sin^2\phi }{(\omega+\epsilon_{+})^{1-1/(2\kappa)}}\bigg],\label{LL_analytic} \end{align} where in the last line we have used the integral property \cite{integral}, \begin{align} &\int_{0}^{\infty}dt\frac{e^{-i\omega t}}{(a+ivt)^b}=e^{a\omega/v}(iv)^{-b}(i\omega)^{-1+b}\Gamma(1-b,\frac{a\omega}{v}), \end{align} under the conditions that Re$[a]>0$, Im$[\omega]<0$, and Re$[v]\geq 0$.\ $\Gamma(s,x)$ is the incomplete gamma function, which becomes the gamma function $\Gamma(s)$ when $a\rightarrow 0$. Eq. (\ref{LL_analytic}) indicates a nontrivial power law dependence in the EIT spectrum, as shown in Fig. 1 from strongly to weakly interacting regimes ($\kappa=0.6$--$10$)\cite{explain}.\ The standard (noninteracting) EIT spectrum is similar to the weakly interacting one (Fig. 
1(d)), where the zero of the dispersion relation coincides with the transparency point ($\Delta_1=0$), and a symmetric (anti-symmetric) absorption (dispersion) profile is retrieved around $\Delta_1=0$.\ When the interaction becomes stronger (smaller $\kappa$), the zero of the dispersion relation moves away from the transparency point, and the EIT is destroyed for even stronger interactions.\ Interestingly, the EIT profile for the $\kappa=1$ case (hard core boson limit) shows an inversion symmetry between absorption and dispersion profiles around $\Delta_1=0$, very different from the standard EIT spectrum in the noninteracting limit ($\kappa=10$). \begin{figure}[t] \centering\includegraphics[height=4.5cm, width=8.5cm]{SM_1.eps} \caption{(Color online) EIT profiles for a LL of $^{87}$Rb atoms in the large control field limit from Eq. (\ref{LL_analytic}). We take a static probe field ($\mathbf{q},\omega=0$) and a resonant control field ($\Delta_2=0$) with Rabi frequency $\Omega_2=5~\Gamma$. The excited state is chosen as the low-lying Rydberg transition of $|24\textrm{P}_{3/2}\rangle$ with $\Gamma^{-1}=28.3~\mu\text{s}$. The absorption (Re[$i\chi$], solid-blue) and dispersion (Im[$i\chi$], dash-red) profiles for (a) $\kappa=0.6$, (b) $1$, (c) $2$, and (d) $10$. The horizontal line guides the eye to the zero.} \label{SM_1} \end{figure} \section*{Single particle Green's function for a Mott state} Here we show how to derive the single particle Green's function for a Mott state. At zero temperature, and in the deep Mott-insulating limit of on average $n_0$ particles per site, we may assume only small number fluctuations about $n_0$, so that the Hilbert space can be truncated to three particle numbers per site only: $n_0-1$, $n_0$, and $n_0+1$. 
Within such a three-state model \cite{Mott,three}, the original bosonic field operator at site ${\bf R}$ can be re-expressed as: $\hat{g}^\dagger_\mathbf{R}(t)=\sqrt{n_0+1}\hat{t}^\dagger_{1,\mathbf{R}}(t)\hat{t}_{0,\mathbf{R}}(t)+\sqrt{n_0}\hat{t}^\dagger_{0,\mathbf{R}}(t)\hat{t}_{-1,\mathbf{R}}(t)$, where $\hat{t}_{\pm 1,\mathbf{R}}$ and $\hat{t}_{0,\mathbf{R}}$ are the raising and lowering operators. The conservation of the total number of particles provides an additional constraint on the Hilbert space: $\sum_{\alpha=\pm1,0}\hat{t}_{\alpha,\mathbf{R}}^{\dag}(t)\hat{t}_{\alpha,\mathbf{R}}(t)=1$. Since we are interested in the deep Mott insulator regime with little number fluctuation, we can apply the conservation of particle number shown above to obtain the following approximation \cite{Mott,three}: $\hat{t}_{0,\mathbf{R}}(t),\hat{t}_{0,\mathbf{R}}^{\dag}(t)\simeq 1-\frac{1}{2}\hat{t}_{1,\mathbf{R}}^{\dag}(t)\hat{t}_{1,\mathbf{R}}(t)-\frac{1}{2}\hat{t}_{-1,\mathbf{R}}^{\dag}(t)\hat{t}_{-1,\mathbf{R}}(t)$. 
As a result, the single particle correlation function can then be expanded to quadratic order in the small fluctuations as \begin{widetext} \begin{align} \langle \hat{g}_\mathbf{R}^{\dag}(t)\hat{g}_{\mathbf{R}^{\prime}}(0)\rangle= &(n_{0}+1)\langle \hat{t}_{1,\mathbf{R}}^{\dag}(t)\hat{t}_{0,\mathbf{R}}(t)\hat{t}_{0,\mathbf{R}^{\prime}}^{\dag}(0)\hat{t}_{1,\mathbf{R}^{\prime}}(0)\rangle +n_{0}\langle\hat{t}_{0,\mathbf{R}}^{\dag}(t)\hat{t}_{-1,\mathbf{R}}(t)\hat{t}_{-1,\mathbf{R}^{\prime}}^{\dag}(0)\hat{t}_{0,\mathbf{R}^{\prime}}(0)\rangle \nonumber\\ &+\sqrt{n_{0}(n_{0}+1)}\left[\langle\hat{t}_{1,\mathbf{R}}^{\dag}(t)\hat{t}_{0,\mathbf{R}}(t)\hat{t}_{-1,\mathbf{R}^{\prime}}^{\dag}(0)\hat{t}_{0,\mathbf{R}^{\prime}}(0)\rangle +\langle\hat{t}_{0,\mathbf{R}}^{\dag}(t)\hat{t}_{-1,\mathbf{R}}(t)\hat{t}_{0,\mathbf{R}^{\prime}}^{\dag}(0)\hat{t}_{1,\mathbf{R}^{\prime}}(0)\rangle\right], \nonumber\\ \simeq &(n_{0}+1)\langle \hat{t}_{1,\mathbf{R}}^{\dag}(t)\hat{t}_{1,\mathbf{R}^{\prime}}(0)\rangle +n_{0}\langle \hat{t}_{-1,\mathbf{R}}(t)\hat{t}_{-1,\mathbf{R}^{\prime}}^{\dag}(0)\rangle +\sqrt{n_{0}(n_{0}+1)} \nonumber\\ &\times\left[\langle \hat{t}_{1,\mathbf{R}}^{\dag}(t)\hat{t}_{-1,\mathbf{R}^{\prime}}^{\dag}(0)\rangle +\langle \hat{t}_{-1,\mathbf{R}}(t)\hat{t}_{1,\mathbf{R}^{\prime}}(0)\rangle \right]. \label{G_Mott} \end{align} \end{widetext} Within the MI state, it has been shown that we can diagonalize the effective Hamiltonian \cite{Mott} in the three-state model and express the lowering and raising operators in terms of the quasi-hole and quasi-particle excitations $\hat{\beta}_h$ and $\hat{\beta}_p$, i.e. 
$\hat{t}_{-1,\k}(t)=-B(\k)\hat{\beta}_{p,\k}^{\dag}(t)-A(\k)\hat{\beta}_{h,\k}(t)$, and $\hat {t}_{1,-\k}^{\dag}(t)=A(\k)\hat{\beta}_{p,\k}^{\dag}(t)+B(\k)\hat{\beta}_{h,\k}(t)$, where \begin{align} \begin{split} A(\k) & =\cosh(\frac{D_{\k}}{2}),~B(\k)=\sinh(\frac{D_{\k}}{2}),\\ \tanh(D_{\k}) & =\frac{-2\epsilon_{0}(\k)\sqrt{n_{0}(n_{0}+1)}}{U-\epsilon _{0}(\k)(2n_{0}+1)},\\ \epsilon_{0}(\k) & =2J\sum_{\alpha=1}^{3}\cos(k_{\alpha}a). \end{split} \end{align} The corresponding eigenenergies of the particle and hole excitations are $\epsilon_{p,h}(\k)=\mp[\epsilon_{0}(\k)/2+\delta\mu]+\tilde{\omega}(\k)$, where $\tilde{\omega}(\k)$ $=$ $\sqrt{U^{2}-U\epsilon_{0}(\k)(4n_{0}+2)+\epsilon_{0}^{2}(\k)}/2$, and $\delta\mu= -3J$ for a 3D cubic lattice \cite{Mott}. As a result, we can easily calculate a correlation function as follows: \begin{align} &\langle \hat{t}_{1,\mathbf{R}}^{\dag}(t)\hat{t}_{1,\mathbf{R}^{\prime}}(0)\rangle \nonumber\\&=\frac{1}{L^{3}}\sum_{\k}B^{2}(\k)e^{-i\epsilon_{h}(\k)t}e^{-i\k\cdot(\mathbf{R}-\mathbf{R}^{\prime})}. \end{align} Similarly, the single particle Green's function in Eq. (\ref{G_Mott}) becomes \begin{align} \langle\hat{g}_\mathbf{R}^{\dag}(t)\hat{g}_{\mathbf{R}^{\prime}}(0)\rangle=&\frac{1}{L^{3}}\sum_{\k}\left[\sqrt{(n_{0}+1)}B(\k)-\sqrt{n_{0}}A(\k)\right]^{2} \nonumber\\&\times e^{-i\k\cdot(\mathbf{R}-\mathbf{R}^{\prime})}e^{-i\epsilon_{h}(\k)t}. 
\end{align} In the deep Mott regime, we have $U\gg J$, and hence $A(\k)\to 1$ and $\langle \hat{g}_{\mathbf{R}}^{\dag}(t)\hat{g}_{\mathbf{R}^{\prime}}(0)\rangle=n_{0}e^{-i\epsilon_{h}(\k)t}\delta_{\mathbf{R},\mathbf{R}^{\prime}}$.\ The single particle Green's function for the Mott state can be further simplified (using $\tilde{\psi}_g(\k)$, the Fourier transform of $\hat{\psi}_g(\r)=\sum_\mathbf{R}\hat{g}_\mathbf{R}\omega_\mathbf{R}(\r)$): \begin{align} i\tilde{G}^<(\k,t)&=\left\langle\tilde{\psi}^\dag_g(\k,t)\tilde{\psi}_g(\k,0)\right\rangle\theta(t),\nonumber\\ &=\sum_{\mathbf{R},\mathbf{R}'}\tilde{\omega}^*_\mathbf{R}(\k)\tilde{\omega}_{\mathbf{R}'}(\k)\left\langle\hat{g}^\dag_\mathbf{R}(t)\hat{g}_{\mathbf{R}'}(0) \right\rangle\theta(t)\nonumber\\ &=\sum_\mathbf{R}\left\vert\tilde{\omega}_\mathbf{R}(\k)\right\vert^2 n_0e^{-i\epsilon_h(\k)t}\theta(t). \end{align} Note that only the hole excitation energy appears, because the Green's function we need for the EIT spectrum is time-ordered, i.e., an atom is excited from the ground state ($|g\rangle$) to the excited state ($|e\rangle$) by the probe field, leaving a hole excitation inside the strongly interacting system. \section*{Single particle Green's function for the superfluid case of two-component Fermi gases} Here we show how to derive the single particle Green's function for a BCS superfluid state of two-component Fermi gases.\ The single particle Green's function in this context is defined as \begin{align} -i\tilde{G}_{BCS}^<(\k,t)&=\left\langle\hat{g}^\dag_{\k,\uparrow}(t)\hat{g}_{\k,\uparrow}(0)\right\rangle\theta(t), \end{align} where we define the original ground state (in the EIT $\Lambda$ scheme) to be spin up. The other state, spin down, is assumed not to be directly involved in the EIT experiment. 
To evaluate the above expectation, we can express the field operator in terms of the Bogoliubov quasi-particles (denoted by $\hat{\alpha}$ and $\hat{\beta}$) of Cooper pairs in the superconducting state, \begin{align} \hat{g}_{\k,\uparrow}=\cos\theta_\k\hat{\alpha}_\k+\sin\theta_\k\hat{\beta}_{-\k}^\dagger, \end{align} where $\sin^2\theta_\k=(1-\xi_\k/E_\k)/2$.\ Here $E_\k=\sqrt{\Delta_{S}^2+\xi_\k^2}$ is the excitation energy of the quasi-particles, with $\xi_\k\equiv\k^2/(2m)-\mu$. $\mu$ is the chemical potential and is determined by the particle density. The Green's function can then be obtained as \begin{align} -i\tilde{G}^<_{BCS}(\k,t)&=\left\langle\left(\cos\theta_\k \hat{\alpha}_{\k}^\dag e^{iE_{\k}t}+\sin\theta_\k \hat{\beta}_{-\k}e^{-iE_{\k}t}\right)\right.\times\nonumber\\ &\left.\left(\cos\theta_\k \hat{\alpha}_{\k}+\sin\theta_\k \hat{\beta}_{-\k}^\dag\right)\right\rangle_{H}\theta(t),\nonumber\\ &=\sin^2\theta_\k e^{-iE_\k t}\theta(t). \end{align}
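The BCS weight $\sin^2\theta_\k=(1-\xi_\k/E_\k)/2$ appearing in $\tilde{G}^<_{BCS}$ interpolates between 1 deep inside the Fermi sea and 0 far above it, with value $1/2$ at the Fermi surface ($\xi_\k=0$). This is easy to verify numerically; the parameter values below are illustrative assumptions, not taken from the text:

```python
import numpy as np

# BCS coherence weight sin^2(theta_k) = (1 - xi_k/E_k)/2 with
# E_k = sqrt(Delta_S^2 + xi_k^2), xi_k = k^2/(2m) - mu.
def sin2theta(k, m, mu, Delta_S):
    xi = k ** 2 / (2.0 * m) - mu
    E = np.sqrt(Delta_S ** 2 + xi ** 2)
    return 0.5 * (1.0 - xi / E)

m, mu, D = 1.0, 1.0, 0.1                          # assumed units
assert sin2theta(0.0, m, mu, D) > 0.99            # k << k_F: occupied
assert sin2theta(10.0, m, mu, D) < 0.01           # k >> k_F: empty
kF = np.sqrt(2.0 * m * mu)
assert np.isclose(sin2theta(kF, m, mu, D), 0.5)   # at the Fermi surface
```

The smooth crossover of width $\sim\Delta_S$ around $k_F$, rather than a sharp step, is what distinguishes the superfluid Green's function from that of a free Fermi gas.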
\section*{Acknowledgments} M.P. thanks Laura Jenniches and Alexander M\"uck for useful discussions. C.H. has been supported by the \lq Karlsruher Schule für Elementarteilchen- und Astroteilchenphysik: Wissenschaft und Technologie (KSETA)'. This work has been supported by the DFG SFB/TR9 ``Computational Particle Physics''. The research of M.S. is supported in part by the European Commission through the ``HiggsTools'' Initial Training Network PITN-GA-2012-316704. Last but not least, we thank Carlo Oleari for making our code publicly available via the \PB\ website. \section{Conclusions} \label{ch:conclusion} One of the main tasks of the LHC is the search for physics beyond the SM, in particular supersymmetry. In the high-energy run of the LHC, coloured SUSY particles can be produced with masses up to the multi-TeV range. In order to find these particles and be able to measure their properties, reliable predictions for the production cross sections both at the inclusive and at the exclusive level are mandatory. In this paper we continue our effort in providing accurate theoretical predictions by presenting results for squark-antisquark production of the first two generations at NLO SUSY-QCD, without making any simplifying assumptions on the sparticle masses and by treating the different subchannels individually. As developed in our previous paper \cite{ownpaper}, we have performed the subtraction of possible on-shell intermediate gluinos in a gauge-invariant approach and compared to several methods proposed in the literature. While for squark pair production the differences in the total rates turned out to be negligible for the investigated scenarios, and quite small for distributions, larger differences were found in squark-antisquark production, where the contributions of the $qg$-initiated channels are more important. They amount to about 4\% for the inclusive NLO cross section in the investigated scenario. 
Even larger effects are found in the distributions, where the discrepancies between the investigated methods can be up to 30\% in the $p_T$ distribution of the radiated parton. The invariant mass distribution of the squark-antisquark pair is not affected by the chosen method, however, and only reflects the discrepancy in the total cross section. The $K$-factor for squark-antisquark production has been found to be sizeable and positive with $K \equiv \sigma_{\rm NLO}/\sigma_{\rm LO}\approx 1.4$, and the scale uncertainty is strongly reduced by taking into account the NLO corrections. The individual $K$-factors for the subchannels contributing to squark-antisquark production differ significantly from the $K$-factor obtained after summing the cross sections, so that the use of a global $K$-factor in general does not lead to accurate predictions. Combining the NLO production cross section with LO decays of the (anti)squark into the lightest neutralino and (anti)quark leads to discrepancies of about 10\% between the exact result and the one assuming a common $K$-factor. The more the branching ratios of the squarks for the specific decay channel under consideration differ, the more important the consistent treatment of the individual corrections becomes. As a next step we have combined our results for squark pair and for squark-antisquark production with the decays of the final state (anti)squarks into the lightest neutralino and (anti)quark at NLO SUSY-QCD at the fully differential level. In this context we have discussed two methods for the combination of production and decay with NLO accuracy in the kinematics. One is based on a Taylor expansion respecting unitarity, but suffers from possibly negative contributions. The second approach, which does not expand the total decay width entering the branching ratios of the decays, avoids this problem but violates unitarity. 
The results for these two approximations and for the case where no expansion in the strong coupling is performed at all differ by at most 4\% for the total cross sections. In the jet distributions the discrepancies between the two approximations can be up to 15\%, whereas in the $\slash{\!\!\!\! E}_T$ distribution they are purely given by the discrepancy in the total cross sections. In view of these small deviations, in particular for the inclusive quantities, we have adopted the unitarity preserving approach in the remaining numerical analysis. The influence of the NLO corrections on the distributions has been investigated for several observables. While in the $\slash{\!\!\!\! E}_T$ distribution the deviation of the differential $K$-factor from the total one is of ${\cal O}(5\%)$, the $K$-factor for the $p_T$ distributions of the two hardest jets can vary in a range of $\pm 40$\%, hence the assumption of a constant $K$-factor is clearly no longer valid here. In order to obtain realistic predictions for exclusive observables we have matched the NLO cross sections with parton showers using the \textsc{Powheg-Box} framework. The implementation is publicly available and can be downloaded from \cite{powhegweb}. The matched NLO results have been interfaced with the $p_T$ ordered shower of \textsc{Pythia6} as well as the default shower and the Dipole shower of \textsc{Herwig++}. To allow for a consistent comparison of the three showers, in \textsc{Pythia} the starting scale for the radiation off the decay products had to be modified. The largest differences in the three shower predictions are found in the pseudorapidity distribution of the third hardest jet. In particular, \textsc{Herwig++} predicts more jets in the central region, which is especially pronounced for squark-antisquark production in the investigated scenario. The comparison of the showered result with a pure NLO simulation shows small differences for more inclusive quantities.
In more exclusive distributions, in particular \textsc{Herwig++} shows large deviations from the pure NLO result, as {\it e.g.}\ in the predictions for the third hardest jet. To decide whether this is an effect of the missing truncated shower or a relic of the way the phase space is populated would require further investigations and is beyond the scope of this work. Finally, we performed a cut-based analysis of the total cross sections in two benchmark scenarios using realistic event selection cuts taken from an ATLAS analysis. Comparing our results with the approximate approach used by the experiments revealed small discrepancies for squark pair production, but up to 20\% differences for squark-antisquark production. This effect could be traced back to assuming a common $K$-factor for the production cross sections of all subchannels instead of using the exact results. These examples show that the effects can be sizeable and that precise theoretical predictions should take into account the full NLO calculation for the production processes, consistently combined with the squark decays at NLO. The reliable exploitation and interpretation of the LHC data in the search for new physics requires accurate theoretical predictions for production and decay of SUSY particles including higher order corrections not only for inclusive quantities but also for distributions. Our results for the fully differential calculation of the SUSY-QCD corrections to squark pair and squark-antisquark production combined with their subsequent decay at NLO SUSY-QCD and matched with parton showers show that the independent treatment of the contributing subchannels is essential and that differential $K$-factors cannot be assumed to be flat. The results presented here are the next step in our program of providing a fully differential description of SUSY particle production and decay at the LHC.
\section{Squark Decays at NLO and Combination with Production Processes} \label{ch:decay} The calculation of NLO SUSY-QCD corrections to production processes is only a first step towards a realistic prediction of possible events at the LHC. The next step requires the inclusion of the decays of the produced particles. Here, only the decay mode into a quark and the lightest neutralino for squarks, $\tilde{q} \to q \tilde{\chi}^0_1$, or into an antiquark and the lightest neutralino for antisquarks, $\bar{\tilde{q}} \to \bar{q} + \tilde{\chi}^0_1$, will be taken into account. In many SUSY scenarios, in particular the ones studied in this paper, the lightest neutralino is the lightest supersymmetric particle and stable (if R-parity conservation is assumed). The SUSY-QCD corrections to this decay have been known for several years. However, in the original calculation \cite{decay1} only results for the partial width have been given, so that a differential description is not possible. Recently, a fully differential calculation in the context of squark pair production and decay has been presented \cite{hollik} where the radiative corrections to the decay have been included by using the phase-space-slicing technique. In the next subsection we present a recalculation of the decay at the fully differential level by applying the subtraction method developed for single top production and decay \cite{proddec2} to our process. The second part of this section deals with the consistent calculation of the total squark width, which is required for the combination of the production and decay processes described in the last part of this section. 
\subsection{Decay Width for $\tilde{q} \to q + \tilde{\chi}^0_1$ at NLO} \label{mt:decay} \begin{figure}[!t] \centering \includegraphics[width=12cm]{./Plots/decays.pdf} \caption{Feynman diagrams contributing to the decay $\tilde{q}_i\rightarrow q_i \tilde{\chi}^0_1$ at LO (a) and at NLO: virtual corrections (b) and real gluon radiation (c).} \label{fig:decgraphs} \end{figure} The LO contribution to the decay width of the process \begin{equation} \tilde{q} \to q + \tilde{\chi}^0_1 \end{equation} comprises only one Feynman diagram, which is depicted in Fig.~\ref{fig:decgraphs} (a). As in the production process, at NLO virtual and real corrections have to be taken into account. The two diagrams contributing to the virtual corrections are shown in Fig.~\ref{fig:decgraphs} (b). The calculation is performed in \textsc{DR} and the external fields are renormalized on-shell. All integrals have been evaluated analytically. The package \textsc{HypExp} \cite{HuMa05, HuMa07} has been used to expand hypergeometric functions. Using \textsc{DR} requires, again, the introduction of a finite counterterm. Here, the squark-quark-neutralino Yukawa coupling $\hat{g}$ is affected. The SUSY-restoring counterterm leads to the following relation to the gauge coupling $g$ \cite{martinvaughn}, \begin{equation} \hat{g} = g \left[1-\frac{\alpha_s}{6 \pi} \right] \ . \end{equation} The real corrections involve an additional gluon, \begin{equation} \tilde{q} \to q + \tilde{\chi}^0_1 + g \ , \end{equation} emitted either from the squark in the initial state or the quark in the final state, as displayed in Fig.~\ref{fig:decgraphs} (c). The IR divergences arising from the soft and/or collinear emission of the gluon cancel against the corresponding ones in the virtual corrections. This cancellation is achieved on the differential level by applying the subtraction method developed in \cite{proddec2} for single top production and decay to our decay process. 
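The size of this SUSY-restoring shift can be illustrated numerically. A minimal sketch in Python; the input values of $g$ and $\alpha_s$ are illustrative assumptions, not numbers taken from the paper:

```python
import math

def susy_yukawa(g, alpha_s):
    """SUSY-restoring relation between the squark-quark-neutralino Yukawa
    coupling hat(g) and the gauge coupling g: hat(g) = g*(1 - alpha_s/(6*pi))."""
    return g * (1.0 - alpha_s / (6.0 * math.pi))

# Illustrative inputs (our assumption): electroweak-size g, alpha_s ~ 0.1
g = 0.65
alpha_s = 0.10
g_hat = susy_yukawa(g, alpha_s)
rel_shift = 1.0 - g_hat / g   # relative suppression of the Yukawa coupling
```

For $\alpha_s \approx 0.1$ the shift is roughly half a percent, i.e. a small but non-negligible NLO effect.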
The divergences in the real radiation process are cancelled by a local counterterm, which is constructed such that it has the same singular behaviour as the full matrix element. It takes the form of the LO matrix element squared multiplied by a function $D$, which describes the emission of the soft or collinear radiation: \begin{equation} |\mathcal{M}_r(p_{\tilde{q}_i},p_q,p_{\tilde{\chi}^0_1},p_g)|^2 \rightarrow |\mathcal{M}_0(p_{\tilde{q}_i},p'_q,p'_{\tilde{\chi}^0_1})|^2 \times D(p_g \cdot p_{\tilde{q}_i},p_g \cdot p_q,m^2_{{\tilde{q}_i}},m^2_{\tilde{\chi}^0_1}) \ . \label{eq:sing} \end{equation} In the limit of soft emission, when $p_g \to 0$, or where the momenta of the quark $p_q$ and the gluon $p_g$ are collinear, the counterterm on the right-hand side of Eq.~(\ref{eq:sing}) has the same singular structure as the full matrix element squared on the left-hand side. The LO matrix element $\mathcal{M}_0$ is evaluated with modified momenta $p'_q$ and $p'_{\tilde{\chi}^0_1}$ which absorb the momentum carried away by the gluon. They are subject to momentum conservation $p_{\tilde{q}_i} = p'_q + p'_{\tilde{\chi}^0_1}$, as well as to the on-shell conditions $p'^2_q = 0$ and $p'^2_{\tilde{\chi}^0_1} = m^2_{\tilde{\chi}^0_1}$. 
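The on-shell conditions and momentum conservation fix the energies of the mapped momenta in the decaying squark's rest frame. A minimal sketch of one possible realization (the spatial orientation chosen here is our simplification for illustration, not the specific prescription of \cite{proddec2}):

```python
def mapped_momenta(m_sq, m_chi):
    """Construct on-shell momenta p'_q, p'_chi in the rest frame of the
    decaying squark, satisfying p_sq = p'_q + p'_chi, p'_q^2 = 0 and
    p'_chi^2 = m_chi^2.  Four-vectors as (E, px, py, pz); the direction
    is taken along the z-axis for simplicity."""
    E_q = (m_sq**2 - m_chi**2) / (2.0 * m_sq)   # massless quark energy
    E_chi = (m_sq**2 + m_chi**2) / (2.0 * m_sq)
    p = E_q                                      # |three-momentum|
    return (E_q, 0.0, 0.0, p), (E_chi, 0.0, 0.0, -p)

def mass2(p):
    """Invariant mass squared of a four-vector."""
    E, px, py, pz = p
    return E * E - px * px - py * py - pz * pz
```

Evaluating the mapping for, say, $m_{\tilde{q}} = 1800$ GeV and $m_{\tilde{\chi}^0_1} = 400$ GeV confirms both on-shell conditions and momentum conservation.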
In the process at hand, advantage can be taken of the fact that the LO matrix element squared can be easily factorized from the divergent part of the real matrix element squared, \eq{|\mathcal{M}_r^\textnormal{Div}|^2 = \frac{4}{3}\ \frac{16 \pi}{m_{\tilde{q}_i}^2}\ \alpha_s\ |\mathcal{M}_0|^2 f(y,z) } with the function $f(y,z)$, calculated in $d=4-2\epsilon$ dimensions, defined as \eq{f(y,z) = - \frac12 \frac{1}{(1-\sqrt{r})^2} \left( \frac{1+z}{y} + \frac{1-z}{y} \ \epsilon \right) + \frac{1}{(1-\sqrt{r})^2} \frac{1}{y(1-z)} - \frac{1}{(1-r)^2 (1-z)^2} \; .} In this function the following substitutions have been made, \eq{p_q \cdot p_g = \frac{{m_{{\tilde{q}_i}}}^2}{2} (1-\sqrt{r})^2 y \quad \textnormal{and} \quad p_{\tilde{q}} \cdot p_g = \frac{{m_{{\tilde{q}_i}}}^2}{2} (1-r) (1-z) } with $r$ denoting the squared ratio of the neutralino and squark masses, \begin{equation} r = {m^2_{\tilde{\chi}^0_1}} / {m^2_{\tilde{q}_i}} \ . \end{equation} The coefficient of the LO matrix element squared can then be chosen to serve as the divergent part, denoted by $D$, in the counterterm in Eq.~(\ref{eq:sing}), \begin{equation} D(p_g \cdot p_{\tilde{q}_i},p_g \cdot p_q,m^2_{{\tilde{q}_i}},m^2_{\tilde{\chi}^0_1}) =\frac{4}{3}\ \frac{16 \pi}{m_{\tilde{q}_i}^2}\ \alpha_s\ f(y,z) . \end{equation} In order to cancel the IR divergences in the virtual corrections this counterterm needs to be integrated analytically over the one-particle phase space of the emitted gluon. The results for the necessary integrals can be found in Table 1 of \cite{proddec2}.
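Away from $d=4$ only the $\epsilon$-term of $f(y,z)$ changes; setting $\epsilon = 0$ allows a direct numerical look at the singular behaviour. A sketch, with an arbitrary illustrative choice of $z$ and $r$:

```python
def f_subtraction(y, z, r):
    """Subtraction function f(y, z) evaluated in d = 4 dimensions
    (epsilon = 0), with r = m_chi^2 / m_sq^2.  The 1/y terms reproduce
    the singular behaviour of soft/quark-collinear gluon emission."""
    sr = r**0.5
    return (-0.5 / (1.0 - sr) ** 2 * (1.0 + z) / y
            + 1.0 / (1.0 - sr) ** 2 / (y * (1.0 - z))
            - 1.0 / (1.0 - r) ** 2 / (1.0 - z) ** 2)

# The function grows like 1/y as the gluon becomes soft or collinear
f_mild = f_subtraction(1e-3, 0.5, 0.25)
f_singular = f_subtraction(1e-6, 0.5, 0.25)
```

Decreasing $y$ by three orders of magnitude increases $f$ by essentially the same factor, as expected from the $1/y$ poles.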
The integrated counterterm then reads \eq{\label{eqn6} \int d \Phi_1\ |\mathcal{M}_r^\textnormal{Div}|^2 = \frac{4}{3}\ \frac{\alpha_s}{\pi}\ \left(\frac{4\pi}{m_{\tilde{q}_i}^2} \right)^{\epsilon} |\mathcal{M}_0|^2 \left\langle f(y,z)(1-\sqrt{r})^2\right\rangle } with \begin{eqnarray} \hspace*{-0.9cm} \left\langle f(y,z)(1-\sqrt{r})^2\right\rangle &=& \frac{1}{2\,\epsilon^2} + \frac{5}{4\,\epsilon} - \frac1\epsilon \ln(1-r) - \frac52 \ln(1-r) + \frac{(7-5r)}{8(1-r)} -\mathrm{Li}_2(r) - \frac{7 \pi^2}{24}\nonumber \\ &-& \frac32 \frac{r}{1-r} \ln r + \frac14 \frac{r^2}{(1-r)^2} \ln r - \ln(r)\ln(1-r) + \ln^2(1-r) + \frac{11}{4} \; . \label{eqn7} \end{eqnarray} All steps of the analytical calculation have been checked against \cite{LaJe}. The results for the partial widths at LO and NLO have been compared to the result obtained from \textsc{Sdecay 1.3} \cite{sdecay}. Moreover, we have compared our results to those presented in the independent calculation of squark pair production and decay of \cite{hollik}, in particular to the results given in Table 6 for the benchmark point 10.1.5 and the corresponding distributions, and have found agreement. In addition, this decay has been implemented in the \textsc{Powheg-Box}. The virtual corrections for this independent calculation have been calculated with \textsc{FeynArts/FormCalc} and the loop integrals have been evaluated with \textsc{LoopTools}. The real matrix elements squared, calculated by hand, have been tested numerically for a multitude of phase space points against the corresponding routines obtained with \textsc{MadGraph}. In the \textsc{Powheg-Box} the cancellation of the divergences is achieved automatically via the implemented FKS subtraction method. We have found perfect agreement between the calculation presented in this section and the implementation in the \textsc{Powheg-Box}.
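The finite ($\epsilon^0$) part of the integrated counterterm can be evaluated numerically once the dilogarithm is available. A minimal sketch, with a crude numerical $\mathrm{Li}_2$ (our own helper, used here only to keep the example self-contained) and an illustrative mass ratio $r$:

```python
import math

def li2(x, n=200000):
    """Dilogarithm Li2(x) for 0 <= x <= 1 via midpoint integration of
    -ln(1-t)/t; crude but sufficient for this illustration."""
    if x == 0.0:
        return 0.0
    h = x / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        s += -math.log(1.0 - t) / t * h
    return s

def finite_part(r):
    """epsilon^0 coefficient of the integrated counterterm bracket,
    r = m_chi^2 / m_sq^2 with 0 < r < 1 (poles in epsilon dropped)."""
    lr = math.log(1.0 - r)
    return (-2.5 * lr + (7.0 - 5.0 * r) / (8.0 * (1.0 - r))
            - li2(r) - 7.0 * math.pi ** 2 / 24.0
            - 1.5 * r / (1.0 - r) * math.log(r)
            + 0.25 * r ** 2 / (1.0 - r) ** 2 * math.log(r)
            - math.log(r) * lr + lr ** 2 + 2.75)
```

The helper can be validated against the known values $\mathrm{Li}_2(1) = \pi^2/6$ and $\mathrm{Li}_2(1/2) = \pi^2/12 - \ln^2 2/2$; the finite part itself stays finite for $0 < r < 1$.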
\subsection{Total Squark Width at NLO} \label{sec:totwidth} For the calculation of the squark branching ratios we also need the total decay width $\Gamma^{\tilde{q}}_{\textnormal{tot}}$, both at LO and NLO. Furthermore, the NLO total decay width will be necessary to normalize the expressions for the combination of the production and decay processes, as we will see in the next subsection. Since we only consider the decay into a quark and the lightest neutralino as possible \lq decay chain' for the produced squarks, it is not necessary to describe all other partial decay widths differentially. Therefore, they can be extracted from the literature or their implementation in \textsc{Sdecay}. In order to implement the various decay routines from \textsc{Sdecay} in our code, the following adaptations had to be made for the individual decay modes: \begin{itemize} \item Electroweak decays: $\tilde{q}_i \rightarrow q_i \tilde{\chi}^0_k$ ($k=1,2,3,4$) and $\tilde{q}_i \rightarrow q_j \tilde{\chi}^\pm_l$ ($l=1,2$)\\ The decays into neutralinos $\tilde{\chi}^0_k$ or charginos $\tilde{\chi}^\pm_l$ are mediated by electroweak interactions. The decay into charginos is only possible for left-chiral squarks. In the routines for the (N)LO results \cite{decay1} taken from \textsc{Sdecay} only the conventions for the parameters, especially those entering the calculation of the squark-quark-gaugino vertex, had to be adapted. In our calculation the weak mixing angle $\theta_W$ is determined according to Eq.~(10.11) from \cite{pdg}, yielding \be \sin^2\theta_W = \frac{1}{2}-\sqrt{\frac{1}{4}-\frac{\pi\alpha(m_Z)}{\sqrt{2} G_F m_Z^2}}\, . \ee All other parameters needed for the numerical evaluation of the decay widths can be found in Sec.~\ref{sec:setup}. \item Strong decay: $\tilde{q}_i \rightarrow q_i \go$\\ The NLO corrections to this strong decay mode have been calculated in \cite{decay3}.
However, this calculation has been performed for degenerate squark masses and implemented in the same way in \textsc{Sdecay}. In order to incorporate the full mass dependence, we have calculated the gluino self energy using the corresponding function from the calculation of the stop decays \cite{decay2}. In these decays the correct $\tilde{t}_{1,2}$ masses, the top quark mass and the $\tilde{t}$ mixing angles have been used. For each squark of the first two generations this function is called by replacing the appropriate squark mass and setting the quark mass and mixing angles to zero. Also in the calculation of $\alpha_s$, where the heavy particles are decoupled from the running, the squark masses are assumed to be degenerate. To restore the full mass dependence in $\alpha_s$, the logarithms of the masses of the heavy, decoupled particles have been modified to obtain the logarithms given in Eq.~(3) of \cite{ownpaper}. In \textsc{Sdecay} the strong coupling constant is converted from the $\overline{\textnormal{MS}}$ scheme, used in the original calculation, to the $\overline{\textnormal{DR}}$ scheme. In order to use $\alpha_s$ as implemented in the \PB\ we calculate $\alpha_s$ in the $\overline{\textnormal{MS}}$ scheme by omitting the conversion factor introduced in \textsc{Sdecay}. \end{itemize} \subsection{Combination with the Production Processes} \label{sec:proddec} A consistent combination of the production processes at NLO with the subsequent decays of the squarks, $\tilde{q} \to q + \tilde{\chi}^0_1$, or antisquarks, $\bar{\tilde{q}} \to \bar{q} + \tilde{\chi}^0_1$, at NLO is the next necessary step. In this combination we take into account only those contributions to the process $ p p \to 2 q + 2 \tilde{\chi}_1^0$ that lead to two on-shell intermediate squarks. 
In the narrow width approximation, which is valid in the scenarios analysed here since the widths of the squarks fulfil $\Gamma_{\tilde{q}_i}/m_{\tilde{q}_i}\ll 1$, the differential cross section factorizes into the production cross section times the branching ratios of both squark decays \be \ds_{\textnormal{tot}} = \ds_{\textnormal{prod}} \ \frac{\dGa}{\Ga} \ \frac{\dGb}{\Gb}\, . \label{eq:LOproddec} \ee By applying the narrow width approximation we neglect not only contributions with off-shell squarks, which are known to be suppressed by $\Gamma_{\tilde{q}_i}/m_{\tilde{q}_i}$, but also non-factorizable higher-order contributions. The latter comprise interactions between particles of the production and decay stage or between final-state particles of the two decays. These contributions are expected to be suppressed by $\Gamma_{\tilde{q}_i}/m_{\tilde{q}_i}$ as well \cite{khoze1,khoze2}. Only long-range interactions induced by the exchange of soft gluons could still affect the results of exclusive observables. However, an analysis of these effects is beyond the scope of this publication. Aiming at a combination of the decays at NLO with the production process at NLO, the factors in Eq.~(\ref{eq:LOproddec}) have to be replaced by the NLO quantities: \be \label{eq:NLOquantities} \ds_{\textnormal{tot}} = (\ds_0 + \alpha_s \ds_1) \ \frac{\textnormal{d}\Gamma^{\tilde{q}_1 \rightarrow \tilde{\chi}^0_1 q}_0 + \alpha_s \textnormal{d}\Gamma^{\tilde{q}_1 \rightarrow \tilde{\chi}^0_1 q}_1}{\Gamma_{\textnormal{tot},0}^{\tilde{q}_1} + \alpha_s \Gamma_{\textnormal{tot},1}^{\tilde{q}_1} } \ \frac{\textnormal{d}\Gamma^{\tilde{q}_2 \rightarrow \tilde{\chi}^0_1 q}_0 + \alpha_s \textnormal{d}\Gamma^{\tilde{q}_2 \rightarrow \tilde{\chi}^0_1 q}_1}{\Gamma_{\textnormal{tot},0}^{\tilde{q}_2} + \alpha_s \Gamma_{\textnormal{tot},1}^{\tilde{q}_2} }\ . \ee This expression obviously includes beyond-NLO contributions.
In order to work strictly at NLO accuracy, this expression has to be expanded to NLO in $\alpha_s$. Two approaches to this problem exist, both developed in the context of single and pair production of top quarks \cite{proddec2,proddec1}. In the first approach a Taylor expansion of the full expression is performed. This leads to a formula which is normalized to the LO total widths and subtracts from the first term the ratios of the NLO corrections to the total widths over the LO total widths: \begin{eqnarray} \label{eq:proddec1} \ds_{\textnormal{tot}} &=& \,\,\,\frac{1}{\GaLO \GbLO} \Biggl[\ds_0\, \dGa_0 \,\dGb_0 \left(1-\frac{\alpha_s \GaNLO }{\GaLO }-\frac{\alpha_s \GbNLO }{\GbLO }\right) \\ &+ & \alpha_s \left( \ds_0\, \dGa_1 \dGb_0 + \ds_0\, \dGa_0 \dGb_1 + \ds_1\, \dGa_0 \dGb_0 \right)\Biggr]\, .\nonumber \end{eqnarray} This subtracted term might lead to negative contributions if the NLO corrections to the total width are positive and large while the corrections to the partial widths are small. However, this expansion has the advantage that the sum over all possible decay channels reproduces the production cross section, {\it i.e.} the branching ratios of all subchannels add up to one. In the second approach only the numerator is expanded in $\alpha_s$ while the NLO total widths are kept in the denominator. This expansion avoids the problem of potentially negative contributions and leads to the expression: \begin{eqnarray} \label{eq:proddec2} \ds_{\textnormal{tot}} &= &\,\,\,\frac{1}{\Ga \Gb} \Biggl[\ds_0 \,\dGa_0 \dGb_0 + \alpha_s \bigl( \ds_0\, \dGa_1 \dGb_0 \nonumber\\ & &+ \ds_0\, \dGa_0 \dGb_1 + \ds_1\, \dGa_0 \dGb_0 \bigr)\Biggr]. \end{eqnarray} In this approach summing over all possible decay channels does not reproduce the production cross section, as the branching ratios do not add up to one, and in this sense unitarity is violated.
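The unitarity properties of the two expansions can be checked numerically. The sketch below uses a toy model with a single decay leg (a simplification of the two-decay formulas above) and purely illustrative numbers; two decay channels are chosen whose partial widths add up to the total width order by order:

```python
def combine_full(s0, s1, dG0, dG1, G0, G1, a):
    """Unexpanded combination: (sigma_0 + a sigma_1) * NLO branching ratio."""
    return (s0 + a * s1) * (dG0 + a * dG1) / (G0 + a * G1)

def combine_taylor(s0, s1, dG0, dG1, G0, G1, a):
    """Taylor expansion to O(a); branching ratios sum to one (unitarity)."""
    return (s0 * dG0 * (1.0 - a * G1 / G0) + a * (s0 * dG1 + s1 * dG0)) / G0

def combine_expnum(s0, s1, dG0, dG1, G0, G1, a):
    """Expanded numerator, NLO total width kept in the denominator."""
    return (s0 * dG0 + a * (s0 * dG1 + s1 * dG0)) / (G0 + a * G1)

# Toy inputs (illustrative only): LO/NLO cross section, total width, coupling
s0, s1, a = 10.0, 3.0, 0.118
G0, G1 = 2.0, 0.4
# Two channels, (dGamma_0, dGamma_1) each, with 1.2+0.8 = G0, 0.3+0.1 = G1
channels = [(1.2, 0.3), (0.8, 0.1)]

sum_taylor = sum(combine_taylor(s0, s1, d0, d1, G0, G1, a) for d0, d1 in channels)
sum_expnum = sum(combine_expnum(s0, s1, d0, d1, G0, G1, a) for d0, d1 in channels)
```

Summing the Taylor-expanded result over all channels reproduces $\sigma_0 + \alpha_s \sigma_1$ exactly, while the second prescription does not; the individual channel results of the two prescriptions nevertheless agree up to terms of $\mathcal{O}(\alpha_s^2)$.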
Since both expansions to NLO accuracy may cause problems, the complete expression in Eq.~(\ref{eq:NLOquantities}) can be used as an alternative approach. On the one hand, in this approach the branching ratios sum up to one, but on the other hand only parts of the possible beyond-NLO corrections are included. Given a good convergence of the perturbative series we expect these terms to be small, however. In section~\ref{ch:res} results for all three possible combinations of production and decays at NLO, according to Eqs.~(\ref{eq:NLOquantities})-(\ref{eq:proddec2}), will be presented and compared. However, in all other results the Taylor expansion of the cross section, Eq.~(\ref{eq:proddec1}), will be used since in this approach the unitarity of branching ratios is preserved. In the scenarios analysed here the subtracted terms in Eq.~(\ref{eq:proddec1}) are unproblematic, {\it i.e.} the NLO corrections to the total decay widths are small (see Tab.~\ref{tab:totwidths}). \section{Introduction} \label{ch:introduction} Among the numerous extensions of the Standard Model (SM), SUSY \cite{Volkov:1973ix,golfand,ramond,Wess,Wess2,Sohnius,Nilles,HaberKane,Gunion1,Gunion2,Gunion3} constitutes one of the most attractive and most intensely studied options. SUSY addresses some of the shortcomings of the SM, such as the hierarchy problem, and with $R$-parity conservation it provides a natural Dark Matter candidate. Thus one of the main tasks of the LHC is the search for SUSY particles. With the next run of the LHC at high energy it will be possible to search for the colour-charged SUSY particles, the squarks ($\tilde{q}$) and gluinos ($\tilde{g}$), in the multi-TeV mass range \cite{CMS:2013xfa,ATLAS:2013hta,Gershtein:2013iqa}. In $R$-parity conserving SUSY, they are copiously produced in pairs through the main SUSY-QCD production processes at the LHC, $pp \to \tilde{q} \tilde{q}, \tilde{q} \overline{\tilde{q}}, \tilde{q} \tilde{g}$ and $\tilde{g} \tilde{g}$.
The pair production cross sections for strongly-interacting SUSY particles were provided at leading order (LO) quite some time ago \cite{squarklo1,squarklo2,squarklo3,squarklo4}. The NLO SUSY-QCD corrections were completed about ten years later in \cite{squarknlo1,squarknlo2,prospino,squarknlo3}. In these calculations the squark masses have been assumed to be degenerate, with the exception of stop pair production, where all squarks but the stop have been taken to be degenerate. The NLO corrections turned out to be large, increasing the cross sections by 5\% to 90\% depending on the process and on the SUSY scenario under consideration. Furthermore, the inclusion of the NLO corrections reduces the uncertainties due to the unknown higher order corrections, reflected in the dependence of the cross section on the unphysical factorization and renormalization scales, from about $\pm 50$\% at LO to $\pm 15$\% at NLO. In view of the still large corrections at NLO, calculations have been performed beyond NLO, including resummation and threshold effects \cite{sqbnlo1,sqbnlo2,sqbnlo3,sqbnlo4,sqbnlo5,sqbnlo6,sqbnlo11,Borschensky:2014cia,sqbnlo7,sqbnlo8,sqbnlo9,sqbnlo10,sqthresh1,sqthresh2,sqthresh3}. These corrections lead to a further increase by up to 10\% of the inclusive cross section and reduce the scale uncertainty further. Electroweak contributions have also been considered \cite{ewlo1,ewlo2}, and their NLO corrections, calculated in \cite{ewnlo1,ewnlo2,ewnlo3,ewnlo4,ewnlo5,ewnlo6,ewnlo7}, have been shown to be significant, depending on the model and the flavour and chirality of the final state squarks. The computation of the cross sections at LO and NLO SUSY-QCD can be performed with the publicly available computer program \textsc{Prospino} \cite{prospino_manual}. Based on the calculations in \cite{prospino,squarknlo3}, the NLO corrections, however, are only evaluated for degenerate squark masses.
Additionally, the loop-corrected cross sections for the various subchannels of the different flavour and chirality combinations are summed up. Though results for the individual subchannels can be obtained, they are provided in the approximation of scaling the exact LO cross section of the individual subchannel with a global $K$-factor, which is given by the ratio of the total NLO cross section and the total LO cross section for degenerate squark masses.\footnote{Note that this is only possible with the second version of \textsc{Prospino}, called \textsc{Prospino2}. Although the original version could be modified to return also results for the separate channels, in its public version it returns all LO and NLO subchannels summed up.} In this approximation it is assumed that the $K$-factors of the different subchannels do not vary significantly. In principle, the program also allows for the computation of the NLO differential distributions in the transverse momentum and the rapidity of the SUSY particles, based on the results in \cite{prospino}. There it was found that the distributions for the investigated SUSY scenarios were only mildly distorted by the NLO corrections, and it has thus been assumed that differential $K$-factors are rather flat in general. Recently, results have been presented for the NLO SUSY-QCD corrections to squark pair production without any simplifying assumptions on the SUSY particle spectrum \cite{hollik,hollik2}, and including the subsequent NLO decays of the final state squarks into a quark and neutralino.\footnote{A complete next-to-leading order study of top-squark pair production at the LHC, including QCD and EW corrections, has been published in \cite{hollik3}.} In \cite{plehn} completely general NLO squark and gluino production cross sections based on the \textsc{MADGOLEM} framework have been provided and compared to resummed predictions from jet merging.
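The effect of the global $K$-factor approximation can be illustrated with a short numerical sketch; the subchannel labels and all cross section values below are made-up toy numbers, not results from any calculation:

```python
# Toy LO and NLO cross sections per subchannel (illustrative values only)
lo  = {"uL_uL*": 1.00, "uL_dL*": 0.60, "dR_dR*": 0.25}
nlo = {"uL_uL*": 1.45, "uL_dL*": 0.80, "dR_dR*": 0.34}

# Global K-factor: ratio of summed NLO to summed LO cross sections
k_global = sum(nlo.values()) / sum(lo.values())
approx = {ch: k_global * lo[ch] for ch in lo}

# The rescaling reproduces the summed cross section by construction ...
total_exact = sum(nlo.values())
total_approx = sum(approx.values())
# ... but the individual subchannels are off when their K-factors differ
errors = {ch: approx[ch] / nlo[ch] - 1.0 for ch in nlo}
```

In this toy example the per-channel $K$-factors range from about 1.33 to 1.45, so the globally rescaled subchannel predictions deviate from the exact NLO values by several percent even though the sum is reproduced exactly.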
In \cite{ownpaper,evathesis,owndiss}, we have calculated the NLO corrections to the pair production of squarks of the first two generations and implemented the cross section in a fully flexible partonic Monte Carlo program, without making any simplifying assumptions on the squark masses and treating the different subchannels individually. In the course of this calculation we have developed a new gauge-independent approach for the subtraction of on-shell intermediate gluinos at the fully differential level and compared our approach to several methods proposed in the literature. Moreover, we have extended the results of \cite{hollik,hollik2,plehn} by matching our NLO calculation to parton showers using the \textsc{Powheg-Box} \cite{nason,powheg,powhegbox} framework. These recent NLO calculations, which take into account the full mass spectrum, have shown that the $K$-factors of the individual subchannels can vary by up to 20\%. Therefore, in order to improve the accuracy of the cross section predictions a proper NLO treatment of the individual subchannels is necessary, without relying on an averaged $K$-factor. Furthermore, it was found that, while the shapes of semi-inclusive distributions are only mildly affected by NLO corrections, this is not the case for more exclusive observables. Here the $K$-factors can vary by up to $\pm 20$\% depending on the kinematics, both at the production level and after including squark decays supplemented by the clustering of partons to form jets. Irrespective of the use of fixed or dynamical scales, simply scaling LO distributions with a global $K$-factor is not a good approximation for exclusive observables. In continuation of our effort to provide accurate predictions for SUSY production processes at the LHC we present in this work our results for the NLO SUSY-QCD corrections to squark-antisquark production of the first two generations.
We furthermore combine our results both for squark pair production and for squark-antisquark production with the decay of the (anti)squark into the lightest neutralino and (anti)quark at NLO SUSY-QCD. All results are obtained at fully exclusive level and without making any simplifying assumptions on the squark mass spectrum. In order to obtain realistic predictions for exclusive observables we have combined our fixed-order NLO calculations with parton showers. To this end, the processes have been implemented in the \textsc{Powheg-Box} framework \cite{owndiss,powhegbox} and interfaced with different parton shower programs. The implementation has been made publicly available and can be obtained from \cite{powhegweb}. The outline of the paper is as follows. In section~\ref{ch:nlo} we present the NLO calculation of the squark-antisquark production process. Section~\ref{ch:decay} is devoted to the computation of the squark decays at NLO and the combination with the production processes. Here we study different approaches for the consistent combination at NLO. The implementation in the \textsc{Powheg-Box} as well as our results at fixed order and including parton shower effects are presented in section~\ref{ch:res}. Finally, we compare our results with results obtained with an approximate approach used in the SUSY searches by the LHC experiments. We summarize and conclude in section~\ref{ch:conclusion}. \section{Squark-Antisquark Production at NLO} \label{ch:nlo} The calculation of the NLO corrections to squark-antisquark production is very similar to the one for squark pair production already presented in \cite{ownpaper}. Therefore, the following discussion summarizes only the main steps and points out the most important differences between the two processes. 
\subsection{Contributing Channels} The production of a squark-antisquark pair at LO either proceeds via a pair of gluons or a quark-antiquark pair in the initial state: \begin{equation} \begin{aligned} q_i\, \qbar_j &\rightarrow \tilde{q}_k^{\,c1}\, \bar{\sq}_l^{\,c2}\, ,\\ g\, g &\rightarrow \tilde{q}_i^{\,c}\, \bar{\sq}_i^{\,c}\, . \end{aligned} \label{eq:borncontri} \end{equation} Here, the lower indices indicate the flavour of the particle, whereas the upper indices for the squarks denote the respective chirality. The contributing Feynman diagrams are depicted in Fig.~\ref{fig:sqsqbarLO}. Due to the flavour conserving structure of the occurring vertices the $gg$ initiated diagrams and the $s$-channel diagram contribute only to the production of squarks of the same flavour and chirality. The results for the individual matrix elements squared can be found in \cite{owndiss}. We consider in the following only the production of squarks of the first two generations mediated by the strong interaction. Correspondingly, the higher-order calculation comprises only SUSY-QCD corrections. In total, this leads to 64 possible final state combinations. This number can be reduced to 36 independent channels if the invariance under charge conjugation is taken into account. The number of independent channels can be reduced further if some of the squark masses are degenerate, as in this case the results for the $q\qbar$ initiated contributions differ only in the respective PDFs. However, we perform the calculation for a general mass spectrum and take advantage of this point only in the numerical analysis. 
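The counting of subchannels can be reproduced with a short enumeration; the labelling of the eight light-flavour squark states is our own convention, and mass degeneracies are ignored, as in the text:

```python
from itertools import product

# Squarks of the first two generations: 4 flavours x 2 chiralities = 8 states
squarks = [f + c for f in "udcs" for c in ("L", "R")]

# All squark-antisquark final states (sq_k, anti-sq_l)
channels = list(product(squarks, repeat=2))   # 8 x 8 = 64 combinations

# Charge conjugation maps (sq_k, anti-sq_l) onto (sq_l, anti-sq_k),
# so the independent channels are labelled by unordered pairs
independent = {frozenset((k, l)) for k, l in channels}
```

The enumeration yields 64 final state combinations and 36 independent channels (8 diagonal ones plus 28 unordered off-diagonal pairs), matching the numbers quoted above.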
\begin{figure} \centering \includegraphics[width=7.5cm]{./Plots/qqbar_sqsqbar.pdf}\\ \vspace{0.5cm} \includegraphics[width=15cm]{./Plots/gg_sqsqbar.pdf} \caption{Feynman diagrams contributing to squark-antisquark production at LO.} \label{fig:sqsqbarLO} \end{figure} \subsection{Virtual and Real Corrections} \label{sec:virtreal} At NLO the squark-antisquark production processes receive contributions from virtual and real corrections. For the calculation of the virtual corrections we use the \textsc{Mathematica} packages \textsc{FeynArts 3.8} \cite{feynartsorig,feynarts,feynartsmssm} and \textsc{FormCalc 6.1} \cite{formcalc&looptools,formcalc2}. The numerical evaluation of the loop integrals is performed with \textsc{Looptools 2.7} \cite{formcalc&looptools,looptools2}. In order to regularise the occurring ultraviolet (UV) divergences we apply Dimensional Regularisation (DR) \cite{dimreg,Wilson,WilsonFisher,Bollini,Ashmore}. The UV divergences are absorbed into the fields and parameters of the theory by introducing renormalization constants. For the renormalization of the strong coupling constant we use the $\overline{\text{MS}}$ scheme and decouple the heavy particles, {\it i.e.} the gluino, the top-quark and the squarks, from the running of the strong coupling constant $\alpha_s$. In the numerical analysis the 2-loop results for the determination of $\alpha_s$ at the scale of the process are used, hence we require the 1-loop decoupling coefficient, which can be found {\it e.g.} in \cite{prospino,collins,bernreuther,decoup}. Dimensional Regularisation violates SUSY explicitly by changing the number of degrees of freedom of the gluon field, inducing a mismatch between the gauge and the Yukawa couplings beyond LO. At NLO this effect can be cured by adding a finite SUSY restoring counterterm to the counterterm of the Yukawa coupling, see \cite{martinvaughn}. 
With these steps it is possible to use the five-flavour $\alpha_s^{(5),\overline{\text{MS}}}$ in the numerical analysis. The occurring fields and masses are renormalized using on-shell renormalization conditions. As the relevant counterterms are not included in the \textsc{FormCalc} version we use, they had to be implemented by hand in the MSSM model file. The actual calculation of the corrections is performed such that the full mass dependence is preserved. In principle this requires the generation of all possible production modes with \textsc{FeynArts}, which is obviously a very inefficient procedure. Instead, we generated only the virtual contributions for $u \bar u\rightarrow \tilde{u}_L \bar{\tilde{u}}_L$, $u \bar u\rightarrow \tilde{u}_L \bar{\tilde{u}}_R$, $u \bar d\rightarrow \tilde{u}_L \bar{\tilde{d}}_L$, $d \bar d\rightarrow \tilde{u}_L \bar{\tilde{u}}_L$ and $ g g\rightarrow \tilde{u}_L \bar{\tilde{u}}_L$, where the indices $L$ and $R$ refer to the left- and right-handed chirality of the squarks. All other combinations of squarks in the final state can be traced back to one of these cases. However, this procedure requires a generalization of the masses of the internal squarks if the corresponding propagators are connected to an external squark or quark line. In the case of squark pair production this step amounted to simply replacing all internal squark masses in the vertex and box corrections with the masses of the external squarks, while the self-energy corrections could be left unchanged. For squark-antisquark production this generalization is more involved and requires a dedicated consideration of the individual diagrams. Some sample graphs are depicted in Fig.~\ref{fig:sqsqbarvirt}. The first two diagrams in the upper row are examples of the case where all internal masses have to be kept, {\it i.e.} here no changes are necessary.
In the next two graphs the masses of the squarks in the loop have to be replaced case by case, according to the flavour of the initial-state quarks. Note that both chiralities have to be taken into account. The diagrams depicted in the lower row of the figure are examples of the case where one or more internal squarks are connected directly or indirectly to the final-state squarks. The masses in the corresponding propagators and loop integrals have to be generalized accordingly. \begin{figure} \centering \includegraphics[width=15cm]{./Plots/nochange.pdf}\\ \vspace{0.5cm} \includegraphics[width=15cm]{./Plots/change.pdf} \caption{Sample Feynman diagrams contributing to the virtual corrections of squark-antisquark production.} \label{fig:sqsqbarvirt} \end{figure} The real corrections consist of the contributions with one additional gluon in the final state: \begin{equation} \begin{aligned} q_i\, \qbar_j &\rightarrow \tilde{q}_k^{\,c1}\, \bar{\sq}_l^{\,c2}\,g\, ,\\ g\, g &\rightarrow \tilde{q}_i^{\,c}\, \bar{\sq}_i^{\,c}\,g\, . \end{aligned} \label{eq:realgcontri} \end{equation} Moreover, at NLO new channels occur with a gluon and an (anti)quark in the initial state: \begin{equation} \begin{aligned} q_i\, g &\rightarrow \tilde{q}_k^{\,c1}\, \bar{\sq}_l^{\,c2} q_j\, , \\ g \, \qbar_j &\rightarrow \tilde{q}_k^{\,c1}\, \bar{\sq}_l^{\,c2} \qbar_i\, . \end{aligned} \label{eq:realcontri2} \end{equation} These channels are related to each other by invariance under charge conjugation. In order to calculate the $q_i \qbar_j$, $q_i g$ and $g \qbar_j$ channels, it is sufficient to perform the calculation for one of them and construct the other combinations by either crossing the gluon or by charge conjugating the respective process. Here, the calculation is performed analytically for the $q_i\, g \rightarrow \tilde{q}_k^{\,c1}\, \bar{\sq}_l^{\,c2} q_j$ subprocesses. The occurring traces are evaluated with \textsc{FeynCalc 8.2} \cite{feyncalc}.
The calculation is performed using two gauges for the external gluon, the Feynman gauge and a light-cone gauge. The $gg$-channels are obtained from \textsc{MadGraph 5.1.3.1} \cite{madgraph3} by generating the HELAS calls \cite{helas} for the specific process $g\, g \rightarrow \tilde{u}_L\,\bar{\tilde{u}}_L\,g$, generalizing the masses of the occurring squarks and removing the widths of the intermediate particles. All these contributions exhibit infrared (IR) divergences, which cancel against the corresponding divergences in the virtual contributions by virtue of the Kinoshita-Lee-Nauenberg theorem \cite{Ki62,LeNa64}. As appropriate for a Monte Carlo event generator, this cancellation is achieved by means of a subtraction formalism. We employ the FKS method \cite{fks}, which is automated in the \PB. In the $qg$-initiated channels $q_i\, g \rightarrow \tilde{q}_i\, \bar{\sq}_j q_j$ a second type of singularity occurs for scenarios with $m_{\go}>m_{\tilde{q}_j}$.\footnote{An equivalent problem appears in the $\qbar_i\, g \rightarrow \tilde{q}_j\, \bar{\sq}_i \qbar_j$ channels. However, these contributions are related to the $q_i g$ case by charge conjugation and have been treated accordingly. They will not be discussed explicitly in the following.} For these mass configurations the intermediate gluino in the diagrams depicted in Fig.~\ref{fig:realqg} can be produced on-shell, causing a resonant behaviour. A similar problem has already been encountered in the calculation of squark pair production \cite{prospino,ownpaper}. Being formally equivalent to the Born contribution of on-shell squark-gluino production with the gluino subsequently decaying into a quark and an antisquark, these contributions are large and require a proper definition of the process of interest. Keeping these terms would cause a double counting if the predictions for squark-antisquark production were combined with the ones for squark-gluino production.
Hence, in order to obtain a meaningful result, these on-shell contributions have to be subtracted consistently. \begin{figure} \centering \includegraphics[width=15cm]{./Plots/real_qg2.pdf}\\ \caption{Feynman diagrams contributing to the real corrections of squark-antisquark production with potentially on-shell intermediate gluinos.} \label{fig:realqg} \end{figure} There exist several methods to cope with this type of singularity, which have been developed in the context of $tW$ production \cite{twmcatnlo}, squark pair production \cite{ownpaper,hollik} and squark/gluino production \cite{prospino}. These approaches can be categorized as follows: \bi \item \textbf{Diagram Removal (DR):} In this approach the resonant contributions are removed by either completely neglecting the Feynman diagrams in Fig.~\ref{fig:realqg} (DR-I) or by keeping the interference terms with the non-resonant contributions, but removing the amplitude squared of the depicted graphs (DR-II). Both approaches are rather easy to implement in a Monte Carlo program, but break gauge invariance. \item \textbf{Diagram Subtraction (DS):} These methods aim at a pointwise subtraction of the on-shell contributions by constructing a counterterm and performing a suitable reshuffling of the momenta. Hence both the interference terms and the off-shell contributions are kept by construction. In order to regularise the singular behaviour for $(p_{\bar{\sq}_j}+p_{q_j})^2\rightarrow m_{\go}^2$, a finite width $\Gamma_{\go}$ for the resonant gluino has to be introduced (in fact this is also required in the DR-II scheme in order to regularise the integrable singularity in the interference terms). In the original proposal for $tW$ production \cite{twmcatnlo} this is achieved by replacing the corresponding propagator: \be \label{eq:bwprop} \frac{1}{(p_{\bar{\sq}_j}+p_{q_j})^2-m_{\go}^2}\rightarrow\frac{1}{(p_{\bar{\sq}_j}+p_{q_j})^2-m_{\go}^2+i m_{\go}\Gamma_{\go}}\quad.
\ee However, this approach is only gauge invariant in the limit $\Gamma_{\go}\rightarrow 0$. A fully gauge invariant modification of the DS scheme (denoted DS$^*$ in the following) has been proposed in the context of squark pair production \cite{ownpaper}. In this approach the analytic expression for the amplitude squared is expanded in the poles $(p_{\bar{\sq}_j}+p_{q_j})^2-m_{\go}^2\equiv s_{jg}$ before introducing the regularising width: \be |M_{\text{tot}}|^2 = \frac{f_0}{s_{jg}^2}+\frac{f_1}{s_{jg}}+f_2(s_{jg}). \ee The coefficients $f_k$ ($k=0,1,2$) are gauge invariant quantities, i.e. introducing a regulator $\Gamma_{\go}$ at this point preserves gauge invariance and leads to \be \label{eq:expan} |M_{\text{tot}}|^2 = \frac{f_0}{s_{jg}^2+m_{\go}^2\Gamma_{\go}^2}+\frac{s_{jg}}{s_{jg}^2+m_{\go}^2\Gamma_{\go}^2} f_1 +f_2(s_{jg}). \ee The differences between the expressions obtained with the DS$^*$ and with the \lq usual' DS method vanish for $\Gamma_{\go}\rightarrow 0$ as expected, see \cite{ownpaper}. The counterterm for the subtraction of the on-shell contributions in this method is given by $f_0$ and reproduces the one used in the DS scheme in the limit $(p_{\bar{\sq}_j}+p_{q_j})^2\rightarrow m_{\go}^2$. For more details on the momentum reshuffling and the construction of the subtraction term see \cite{ownpaper}. 
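The statement that the DS$^*$ expansion reduces to the unregulated pole structure for $\Gamma_{\go}\rightarrow 0$ can be illustrated numerically. The following sketch uses arbitrary placeholder values for the coefficients $f_k$ (they are not the actual process-specific quantities, and $f_2$ is taken constant for simplicity):

```python
m_gluino = 1840.0            # GeV; illustrative gluino mass
f0, f1, f2 = 2.0, -0.7, 0.3  # placeholder gauge-invariant coefficients

def ds_star(s_jg, width):
    # regularised expansion as in Eq. (eq:expan)
    denom = s_jg**2 + (m_gluino * width)**2
    return f0 / denom + s_jg * f1 / denom + f2

def unregulated(s_jg):
    # pole expansion before the regularising width is introduced
    return f0 / s_jg**2 + f1 / s_jg + f2

s_jg = 500.0**2  # off-resonance point in GeV^2
diffs = [abs(ds_star(s_jg, w) - unregulated(s_jg)) for w in (10.0, 1.0, 0.1)]
assert diffs[0] > diffs[1] > diffs[2]  # difference shrinks with the width
assert diffs[-1] < 1e-9                # and vanishes for Gamma -> 0
```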
\ei \begin{table} \renewcommand{\arraystretch}{1.2} \small \bc \begin{tabular}{|c || c |c || r ||r | r|c|r| }\hline Process & $\sigma^{\text{DS}^*} [\text{fb}]$ & $\sigma^{\text{DR-II}} [\text{fb}]$ & $\Delta_{\sigma} [\%]$ & $\sigma^{\text{DS}^*}_{qg} [\text{fb}]$ & $\frac{\sigma^{\text{DS}^*}_{qg}}{\sigma^{\text{DS}^*}} [\%]$ & $\sigma^{\text{DR-II}}_{qg} [\text{fb}]$ & $\frac{\sigma^{\text{DR-II}}_{qg}}{\sigma^{\text{DR-II}}} [\%]$\\\hline\hline $\tilde{u}_{L}\bar{\tilde{u}}_{L}$ & $ 1.74\cdot 10^{-1} $ & $ 1.67\cdot 10^{-1} $ & $ 4.09 $ & $1.60\cdot 10^{-3}$ & $ 0.92 $ & $-5.46\cdot 10^{-3}$ & $ -3.27 $\\ $\tilde{u}_{R}\bar{\tilde{u}}_{R}$ & $ 2.31\cdot 10^{-1} $ & $ 2.24\cdot 10^{-1} $ & $ 3.06 $ & $-5.71\cdot 10^{-4}$ & $ -0.25 $ & $-7.56\cdot 10^{-3}$ & $ -3.38 $\\ $\tilde{d}_{L}\bar{\tilde{d}}_{L}$ & $ 1.15\cdot 10^{-1} $ & $ 1.13\cdot 10^{-1} $ & $ 2.02 $ & $-3.38\cdot 10^{-3}$ & $ -2.94 $ & $-5.67\cdot 10^{-3}$ & $ -5.03 $\\ $\tilde{d}_{R}\bar{\tilde{d}}_{R}$ & $ 1.64\cdot 10^{-1} $ & $ 1.62\cdot 10^{-1} $ & $ 1.37 $ & $-6.02\cdot 10^{-3}$ & $ -3.66 $ & $-8.25\cdot 10^{-3}$ & $ -5.09 $\\ $\tilde{u}_{L}\bar{\tilde{u}}_{R}$ & $ 6.94\cdot 10^{-1} $ & $ 6.79\cdot 10^{-1} $ & $ 2.12 $ & $-9.44\cdot 10^{-3}$ & $ -1.36 $ & $-2.40\cdot 10^{-2}$ & $ -3.54 $\\ $\tilde{d}_{L}\bar{\tilde{d}}_{R}$ & $ 2.41\cdot 10^{-1} $ & $ 2.36\cdot 10^{-1} $ & $ 1.91 $ & $-3.41\cdot 10^{-3}$ & $ -1.42 $ & $-8.15\cdot 10^{-3}$ & $ -3.45 $\\ $\tilde{u}_{L}\bar{\tilde{d}}_{L}$ & $ 8.42\cdot 10^{-2} $ & $ 7.49\cdot 10^{-2} $ & $ 11.1 $ & $7.80\cdot 10^{-3}$ & $ 9.27 $ & $-1.55\cdot 10^{-3}$ & $ -2.07 $\\ $\tilde{u}_{L}\bar{\tilde{d}}_{R}$ & $ 4.92\cdot 10^{-1} $ & $ 4.83\cdot 10^{-1} $ & $ 1.88 $ & $-6.90\cdot 10^{-3}$ & $ -1.4 $ & $-1.60\cdot 10^{-2}$ & $ -3.3 $\\ $\tilde{u}_{R}\bar{\tilde{d}}_{L}$ & $ 4.84\cdot 10^{-1} $ & $ 4.74\cdot 10^{-1} $ & $ 2.09 $ & $-6.03\cdot 10^{-3}$ & $ -1.25 $ & $-1.63\cdot 10^{-2}$ & $ -3.44 $\\ $\tilde{u}_{R}\bar{\tilde{d}}_{R}$ & $ 1.09\cdot 10^{-1} $ 
& $ 1.00\cdot 10^{-1} $ & $ 8.33 $ & $7.47\cdot 10^{-3}$ & $ 6.83 $ & $-1.64\cdot 10^{-3}$ & $ -1.64 $\\ \hline\hline Sum & 2.79 & 2.71 & 2.72 & -0.0189 & -0.677 & -0.0946 & -3.49 \\\hline \end{tabular} \caption{\label{tab:qgxs}The NLO cross sections for squark-antisquark production of the first generation obtained for the CMSSM point $10.4.5$ applying the DS$^*$ scheme (second column) and the DR-II method (third column), with $\Delta_{\sigma}\equiv\left(\sigma^{\text{DS}^*} - \sigma^{\text{DR-II}}\right)/\sigma^{\text{DS}^*}$. The charge conjugate channels are combined. The last four columns contain the numerical values for the quantity $\sigma_{qg}$ as defined in the text and the respective contribution to the full NLO cross section, again for both the DS$^*$ and the DR-II method.} \ec \vspace*{-0.2cm} \end{table} The comparison of these different subtraction methods for squark pair production revealed, for the scenario considered in \cite{ownpaper}, only discrepancies in the total cross section at the per-mille level. Repeating this study for squark-antisquark production, however, leads to larger differences, as the contributions of the $qg$-initiated channels are larger in this case. To illustrate this point, the predictions for the total production cross sections of squarks of the first generation as obtained with the DR-II (using the light-cone gauge) and the DS$^*$ scheme are summarized in the second and third column of Tab.~\ref{tab:qgxs}. The scenario considered here corresponds to the mSUGRA point $10.4.5$ \cite{susybench} specified in Sec.~\ref{ch:res}. For the regularising width we choose $\Gamma_{\go}= 1\,\text{GeV}$. As can be inferred from the percentage difference between the respective numbers given in the fourth column, the predictions obtained with these two methods differ by up to 11\%, leading to a discrepancy of 2.7\% after summing these channels.
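As a quick consistency check, $\Delta_\sigma$ for the summed channels can be recomputed from the rounded entries of Tab.~\ref{tab:qgxs}; the small deviation from the quoted percentage stems from the rounding of the tabulated inputs:

```python
sigma_ds_star = 2.79  # fb, DS* result summed over channels
sigma_dr_ii   = 2.71  # fb, DR-II result summed over channels

# Delta_sigma = (sigma_DS* - sigma_DR-II) / sigma_DS*, in percent
delta = (sigma_ds_star - sigma_dr_ii) / sigma_ds_star * 100.0
assert abs(delta - 2.72) < 0.2  # table quotes 2.72 % (from unrounded inputs)
```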
Taking into account the contributions of the squarks of the second generation, too, increases this discrepancy further: \be \sigma^{\text{DS}^*} = 4.37\,\text{fb} \quad \textnormal{and} \quad \sigma^{\text{DR-II}}= 4.21\,\text{fb} \,, \ee corresponding to a discrepancy of $3.6\%$. The fifth and seventh column of Tab.~\ref{tab:qgxs} contain the respective predictions $\sigma_{qg}$ for the $qg$ contribution to each channel. This (unphysical) quantity comprises the $2\rightarrow 3$ parts of the respective channel, {\it i.e.} the real amplitudes squared and the corresponding FKS counterterm and hence allows for a direct estimation of the effects of the applied subtraction scheme. As can be inferred from the table these contributions make up several percent of the individual cross sections. Hence the large discrepancies observed between the two subtraction methods have significant effects on the predictions for the total cross sections. Even larger effects of the chosen subtraction scheme can be observed in differential distributions which are sensitive to the emitted parton of the real corrections. As an example, the $p_T$ distribution of the radiated parton obtained with the DR-II scheme and the DS$^*$ method is shown in Fig.~\ref{fig:oss_sqantisq_distri} (left). For $p_T^j>200\,\text{GeV}$ the two predictions differ by about 30\%. In contrast, the shape of the $m^{\tilde{q}\bar{\sq}}$ distribution (right plot in Fig.~\ref{fig:oss_sqantisq_distri}), which is supposed to be less sensitive to additional radiation, is not affected by the chosen method. 
Only the normalization reflects the 3.6\% discrepancy already encountered in the total cross section. \begin{figure} \begin{minipage}{0.49\textwidth} \includegraphics[width=\textwidth]{./Plots/ptj_sqantisq_DRIIvsDS3.pdf} \end{minipage} \begin{minipage}{0.49\textwidth} \includegraphics[width=\textwidth]{./Plots/msqantisq_DRIIvsDS3.pdf} \end{minipage} \caption{The distributions of the transverse momentum of the parton radiated in the real contributions, $p_T^j$ (left), and of the invariant mass $m^{\tilde{q}\bar{\sq}}$ (right) for squark-antisquark production with the subtraction methods DS$^*$ and DR-II. The lower panels show the respective ratio of the DR-II and the DS$^*$ result.} \label{fig:oss_sqantisq_distri} \end{figure} \subsection{Tests and Comparison} The calculation presented in the last section has undergone numerous checks and comparisons. An obvious test for the correctness of the calculation consists of a comparison with the public program \textsc{Prospino2} in the limit of a mass-degenerate spectrum. Unfortunately, a direct comparison of the results obtained with this public code is not straightforward, as it implicitly takes into account the sbottom production processes $g g \rightarrow \tilde{b} \bar{\tilde{b}}$ and $q \qbar \rightarrow \tilde{b} \bar{\tilde{b}}$, while the contributions $b \bar{b}\rightarrow \tilde{b} \bar{\tilde{b}}$ are neglected. Moreover, at NLO the contributions $q g \rightarrow \tilde{q} \bar{\tilde{b}} b$ and the charge conjugate processes are taken into account. Instead of mimicking the way the total $K$-factor is calculated in \textsc{Prospino2}, we have compared the numerical results of our calculation with a non-public implementation of the original results from \cite{prospino}, denoted \textsc{Prospino$^*$} in the following.
Besides testing our calculation for the special case of degenerate squarks we have intensively checked the individual building blocks: \bi \item The Born expressions have been compared with results given in the literature \cite{squarklo2,squarklo3,squarklo4}. In addition, the numerical comparison of the total cross section with \textsc{Prospino$^*$} provides a simple cross check for the correctness of the nontrivial combinatorics of the contributing channels. \item The UV finiteness of the virtual corrections has been checked both analytically and numerically. The correct structure of the IR poles has been verified by comparison with the known structure for the case of massive coloured particles in the final state, see {\it e.g.} \cite{madfks}. The correctness of the modifications performed in the virtual routines in order to generalize them to an arbitrary mass spectrum has been tested by performing this generalization for both $g g \rightarrow \tilde{u}_L \bar{\tilde{u}}_L$ and $g g \rightarrow \tilde{d}_L \bar{\tilde{d}}_L$ and comparing the outcome numerically. Likewise, the other cases mentioned in Sec.~\ref{sec:virtreal} have been checked. \item The analytic results for the real matrix elements squared have been compared numerically for a multitude of arbitrary phase space points with the routines generated with \textsc{MadGraph 5}. The cancellation of the IR poles against the FKS counterterms has been tested using the automatic procedure provided by the \PB. The gauge invariance of the DS$^*$ scheme has been explicitly checked by comparing the outcome of the two different gauges used in the calculation. Furthermore, the equivalence of the DS and the DS$^*$ scheme in the limit $\Gamma_{\go}\rightarrow 0$ has been verified numerically. \item The individual results for the three production channels $gg$, $q\qbar$ and $qg$ have been compared for degenerate mass spectra with \textsc{Prospino$^*$}. 
\ei \section{Implementation and Results} \label{ch:res} After a brief discussion of the steps required for the implementation of squark production and decay in the \PB~this section summarizes our main findings, including both numerical results in fixed-order perturbation theory and after application of different parton showers. Moreover, we present some results for total rates after applying realistic experimental search cuts. \subsection{Implementation in the \PB} The implementation of squark-antisquark production in the \PB~is essentially identical to the case of squark pair production, which has been extensively discussed in \cite{ownpaper}. Besides several changes in the main code required for the consideration of processes with strongly interacting SUSY particles, the process-dependent parts have to be provided, which comprise \bi \item all independent flavour structures contributing to the Born and real channels, \item the Born and the spin/colour-correlated matrix elements squared, \item the finite part of the virtual contributions, \item the real matrix elements squared, \item and the colour flows for the Born configurations. \ei The implementation of the various subtraction schemes discussed in Sec.~\ref{sec:virtreal} is rather involved and has also been described in detail in \cite{ownpaper}. In essence, we have implemented (besides the two DR schemes) several versions of the DS scheme by splitting the real matrix element squared into a part containing the resonant gluino contributions and the corresponding subtraction terms, and a part containing all other terms. The resonant parts do not contain any IR singularities and can therefore be treated independently from the \textsc{Powheg}-like event generation, similar to the \lq hard' part $\matR_h$ of the real matrix elements squared introduced below.
We have implemented these building blocks for squark-antisquark production into the version~2 (V2) of the \PB~and ported our previous implementation of squark pair production to the V2. This newer version of the \PB~allows for the consideration of NLO corrections to the decays of the on-shell produced squarks. We use this new option to combine our results for the NLO production processes with the corrections to the specific decay $\tilde{q}\rightarrow q \tilde{\chi}^0_1$ described in the previous section. Besides taking into account the decay products in the flavour structures as described in the manual of the \PB~V2 this requires the combination of the production and decay matrix elements according to the combination formula in Eq.~(\ref{eq:proddec1}). Moreover, the FKS subtraction of the IR divergences related to the gluon emission off either the squark or the quark in the NLO corrections to the decay process requires the specification of the colour correlated Born matrix elements squared. These are trivial in the case at hand and read in the convention of the \PB~$ \matB_{\tilde{q} q} = C_F \matB$. In order to check the correctness of the implemented results the same tests as described in \cite{ownpaper} have been performed. These comprise a comparison of numerous differential distributions evaluated at NLO with the corresponding results after generation of the hardest emission according to the \textsc{Powheg} method, both at the level of the production processes and after including the decays. While we find an excellent agreement for inclusive quantities, a strong enhancement of the \textsc{Powheg} results compared to the respective NLO distributions is observed for exclusive quantities like the transverse momentum of the squark-antisquark system, $p_T^{\tilde{q}\bar{\sq}}$. 
The same artificial enhancement has already been observed in the case of squark pair production and can be cured by using the soft/collinear limits of the real matrix elements squared $\matR$ instead of the full expressions for the generation of the hardest radiation. In the \PB~this is achieved by introducing a function $\matF$ which separates the soft/collinear part $\matR_s$ and the hard part $\matR_h$ of the real matrix elements squared: \be \matR = \matF \matR + (1-\matF) \matR \equiv \matR_s + \matR_h\, . \ee This function $\matF$ has to fulfil $\matF\rightarrow 1$ in the soft/collinear limit and should vanish far away from the corresponding phase space regions. In the \PB~the functional form \be \matF=\frac{h^2}{p_T^2+h^2} \label{eq:fdamp} \ee is used, with the transverse momentum $p_T$ of the emitted parton with respect to the emitter and a damping parameter $h$ (see \cite{nason} and \cite{powhegbox} for further details). As in our earlier studies on squark pair production, we use $h=50\,\text{GeV}$ throughout. This choice was found to damp the artificial enhancement in the $p_T^{\tilde{q}\bar{\sq}}$ distribution and reproduces the NLO prediction for $p_T^{\tilde{q}\bar{\sq}}\gtrsim 200\,\text{GeV}$, while maintaining the Sudakov damping for small transverse momenta inherent in the \textsc{Powheg} method. \subsection{Setup} \label{sec:setup} For the numerical analysis we consider two mSUGRA scenarios which are not yet excluded by data, see {\it e.g.} \cite{atlasexcl4,cmsexcl6}. The scenarios are based on the CMSSM points $10.3.6^*$\footnote{For the point $10.3.6$, $m_0$ has been modified to obtain a mass spectrum consistent with the latest exclusion bounds.} and $10.4.5$ from \cite{susybench}. The input parameters of these scenarios are summarized in Tab.~\ref{tab:msugra}. The mass spectrum of the SUSY particles has been generated with \textsc{Softsusy 3.3.4} \cite{softsusy}; the resulting on-shell masses are then used as input parameters.
For the SM parameters the following values are used \cite{pdg}: \begin{gather} m_Z = 91.1876\, \text{GeV}, \quad G_F=1.16637\cdot 10^{-5}\, \text{GeV}^{-2}, \nonumber\\ \alpha_{em}(m_Z)=1/127.934,\quad \alpha_s(m_Z)=0.118,\\ m_b^{\overline{\textnormal{MS}}}(m_b)=4.25\, \text{GeV}, \quad m_t=174.3\, \text{GeV}, \quad m_{\tau} = 1.777\, \text{GeV} , \quad m_{c}^{\overline{\textnormal{MS}}}(m_c) = 1.27\, \text{GeV}.\nonumber \end{gather} \begin{table} \bc \begin{tabular}{|c |c | c | c | c | c|} \hline Scenario & $m_0$ & $m_{1/2}$ & $A_0$ & $\tan(\beta)$ & $\text{sgn}(\mu)$\\\hline\hline $10.3.6^*$ & $825 \, \text{GeV}$ & $550\, \text{GeV}$ & $0\, \text{GeV}$ & $10$ & $+1$ \\ $10.4.5$ & $1150\, \text{GeV}$ & $690\, \text{GeV}$ & $0\, \text{GeV}$ & $10$ & $+1$ \\\hline \end{tabular} \caption{\label{tab:msugra}The input parameters for the considered scenarios.} \ec \vspace*{-0.2cm} \end{table} As \textsc{Softsusy} implements non-vanishing Yukawa corrections, there is a small difference between the masses of the second-generation squarks and the corresponding first-generation ones, {\it i.e.} $m_{\tilde{u}_L}\neq m_{\tilde{c}_L}$ etc. To simplify the analysis and save computing time, these masses are replaced by the mean of the mass pairs, {\it i.e.} $m_{\tilde{u}_L}$ and $m_{\tilde{c}_L}$ are replaced by $(m_{\tilde{u}_L}+m_{\tilde{c}_L})/2$ and so on. The obtained masses for the squarks of the first two generations and the gluino masses are summarized in Tab.~\ref{tab:sqmass}. Note that for the point $10.3.6^*$ the mass hierarchy is $m_{\tilde{q}}>m_{\go}$, while for $10.4.5$ it is $m_{\tilde{q}}<m_{\go}$; the latter point requires the subtraction of contributions with on-shell intermediate gluinos as described in Sec.~\ref{sec:virtreal}. Here, the DS$^*$ method is used, with a default value for the regulator $\Gamma_{\go}=1\,\text{GeV}$ (recall that this regulator is only needed if a subtraction is required; in all other cases it is set to zero).
\begin{table}[!ht] \renewcommand{\arraystretch}{1.3} \bc \begin{tabular}{| c |c | c | c | c | c| } \hline Scenario & $m_{\tilde{u}_L} = m_{\tilde{c}_L}$ & $m_{\tilde{u}_R} = m_{\tilde{c}_R}$ & $m_{\tilde{d}_L} = m_{\tilde{s}_L}$ & $m_{\tilde{d}_R} = m_{\tilde{s}_R}$ & $m_{\tilde{g}}$ \\\hline\hline $10.3.6^*$ & $1799.53$ & $1760.21$ & $1801.08$ & $1756.40$ & $1602.96$ \\ $10.4.5$ & $1746.64$ & $1684.31$ & $1748.25$ & $1677.82$ & $1840.58$ \\\hline \end{tabular} \caption{\label{tab:sqmass}The squark masses in $\text{GeV}$ obtained with the parameters from Tab.~\ref{tab:msugra} after averaging the masses of the first two generations as described in the text.} \ec \vspace*{-0.2cm} \end{table} Furthermore, the partial and total decay widths of the squarks depend on the masses of the charginos and neutralinos and the respective mixing matrices. The masses of the neutralinos and charginos for the two scenarios are given in Tab.~\ref{tab:neutmass}. \begin{table} \renewcommand{\arraystretch}{1.2} \bc \begin{tabular}{| c |c | c | c | c | c| c| } \hline Scenario & $m_{\tilde{\chi}^0_1}\, [\text{GeV}]$ & $m_{\tilde{\chi}^0_2}\, [\text{GeV}]$ & $m_{\tilde{\chi}^0_3}\, [\text{GeV}]$ & $m_{\tilde{\chi}^0_4}\, [\text{GeV}]$ & $m_{\tilde{\chi}^{\pm}_1}\, [\text{GeV}]$ & $m_{\tilde{\chi}^{\pm}_2}\, [\text{GeV}]$ \\\hline\hline $10.3.6^*$ & $290.83$ & $551.76$ & $-844.74$ & $856.87$ & $551.99$ & $856.40$ \\ $10.4.5$ & $347.71$ & $657.84$ & $-993.42$ & $1003.79$ & $856.06$ & $1003.46$\\\hline \end{tabular} \caption{\label{tab:neutmass}The neutralino and chargino masses for the benchmark scenarios defined in Tab.~\ref{tab:msugra}.} \ec \vspace*{-0.2cm} \end{table} The neutralino mixing matrices for the scenarios $10.3.6^*$ and $10.4.5$ read \begin{equation} \begin{aligned} N^{10.3.6^*}&= \left( \begin{array}{rrrr} 0.99759 & -0.00979 & 0.06292 & -0.02740 \\ 0.02329 & 0.97889 &-0.16595 & 0.11704 \\ -0.24682 & 0.03551 & 0.70512 & 0.70776 \\ -0.06044 & 0.20106 & 0.68651 & -0.69615 
\end{array} \right)\qquad\text{and}\\\\ N^{10.4.5} &= \left( \begin{array}{rrrr} 0.98267 & -0.00716 & 0.05338 & -0.02358 \\ -0.20847 & 0.02997 & 0.70567 & 0.70760 \\ 0.01724 & 0.98393 &-0.14473 & 0.10318 \\ -0.05226 & 0.17590 & 0.69154 & -0.69865 \end{array} \right). \end{aligned} \end{equation} In order to diagonalize the chargino mass matrix two matrices are needed, one for the left-handed components (denoted $U$) and one for the right-handed ones (denoted $V$). These $2\times2$ mixing matrices are parametrized as ($i=U,V$) \be \left( \begin{array}{cc} \cos{\theta_i} & -\sin{\theta_i}\\ \sin{\theta_i} & \cos{\theta_i} \end{array} \right)\, . \ee The mixing angles are given by $\cos{\theta_U} = 0.97213$ and $\cos{\theta_V} = 0.98594$ for the parameter point $10.3.6^*$. Likewise, those for the scenario $10.4.5$ read $\cos{\theta_U} = 0.97894$ and $\cos{\theta_V} = 0.98914$. The renormalization ($\mu_R$) and factorization ($\mu_F$) scales are chosen as $\mu_R=\mu_F=\overline{m}_{\tilde{q}}$, with $\overline{m}_{\tilde{q}}$ representing the average of the squark masses of the first two generations. For the two scenarios defined above one obtains $\overline{m}_{\tilde{q}}^{10.3.6^*}=1779.31\,\text{GeV}$ and $\overline{m}_{\tilde{q}}^{10.4.5}=1714.25\,\text{GeV}$, respectively. All subchannels for the production of first- and second-generation squarks are taken into account for the results, {\it i.e.}\ if not stated otherwise all results presented in the rest of this section are obtained by adding up the subchannels. For squark pair production the (tiny) contributions of the antisquark pair production channels are always taken into account. The PDFs are taken from the \textsc{LHAPDF} package \cite{lhapdf}. For the LO results shown in the following the LO set \textsc{CTEQ6L1} \cite{cteq6} with $\alpha_s(m_Z)=0.130$ is used, while the NLO results are calculated with the NLO set \textsc{CT10NLO} with $\alpha_s(m_Z)=0.118$ \cite{cteq}. 
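The central scale $\overline{m}_{\tilde{q}}$ quoted above follows directly from the masses in Tab.~\ref{tab:sqmass}; since each value enters twice (once per generation), averaging the four distinct masses per scenario suffices:

```python
# distinct first/second-generation squark masses in GeV (Tab. sqmass)
masses = {
    "10.3.6*": [1799.53, 1760.21, 1801.08, 1756.40],
    "10.4.5":  [1746.64, 1684.31, 1748.25, 1677.82],
}

def average_mass(values):
    return sum(values) / len(values)

assert abs(average_mass(masses["10.3.6*"]) - 1779.31) < 0.01
assert abs(average_mass(masses["10.4.5"])  - 1714.25) < 0.01
```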
The strong coupling constant for the LO results is correspondingly computed using the one-loop renormalization group equations (RGEs), while the value used in the NLO results is obtained from the two-loop equations. All results are calculated for the LHC with $\sqrt{s} = 14\,\text{TeV}$. The error bars shown in the following represent the statistical errors of the Monte Carlo integration. Taking into account the decays of the produced squarks into $q\tilde{\chi}^0_1$ or applying a parton shower algorithm leads to a potentially large number of partons in the final state. These partons are clustered into jets with \textsc{Fastjet 3.0.3} \cite{fastjet1,fastjet2}. To this end the anti-$k_T$ algorithm \cite{antikt} is adopted, using $R=0.4$. In the following only minimal cuts are applied on the transverse momentum and the pseudorapidity of the resulting jets: \be p_T^j>20\,\text{GeV}\quad \textnormal{and} \quad |\eta^j|<2.8\, . \ee Except for the results shown in Sec.~\ref{sec:totrat} no event selection cuts are imposed. \subsection{Numerical Results} \subsubsection{Results at Fixed Order} \label{sec:fixedorder} The first part of this section is devoted to a discussion of the NLO corrections to squark-antisquark production. In the second part we present some results for the combination of production and decay, both for squark-antisquark and squark pair production. Hence this part extends our previous results for the squark pair production processes in \cite{ownpaper} by also including the NLO corrections to the decay. \subsubsection*{Squark-Antisquark Production} The results for the total squark-antisquark production cross sections determined at LO and NLO for the two benchmark scenarios defined in Sec.~\ref{sec:setup} are summarized in Tab.~\ref{tab:totprod}. In order to assess the theoretical uncertainties we vary the renormalization and factorization scales by a factor two around the central value $\mu=\overline{m}_{\tilde{q}}$. 
The resulting percentage uncertainties are also given in the table. Considering the resulting $K$-factors, we note that in both cases the SUSY-QCD NLO corrections are positive and large, resulting in $K\equiv \sigma_{\rm NLO}/\sigma_{\rm LO} \approx 1.4$. The scale uncertainties are strongly reduced by taking into account the NLO corrections, as expected. \begin{table}[t] \renewcommand{\arraystretch}{1.2} \bc \begin{tabular}{|c || c |c | c |}\hline Scenario & $\sigma_{\text{LO}}^{\pm\Delta \sigma} [\text{fb}]$ & $\sigma_{\text{NLO}}^{\pm\Delta \sigma} [\text{fb}]$ & K-factor \\\hline $10.3.6^*$ & $2.319^{+34\%}_{-24\%} $ & $3.218^{+13\%}_{-14\%}$ & 1.39 \\ $10.4.5$ & $3.098^{+34\%}_{-24\%}$ & $4.366^{+14\%}_{-14\%}$ & 1.41\\ \hline \end{tabular} \caption{\label{tab:totprod}The LO and NLO cross sections for squark-antisquark production for the two benchmark scenarios defined in Sec.~\ref{sec:setup}. The theoretical error estimates $\pm\Delta\sigma$ have been obtained by varying the renormalization and factorization scales by a factor two around the central values.} \ec \vspace*{-0.2cm} \end{table} Turning next to the individual $K$-factors for the subchannels contributing to squark-antisquark production, we observe that they differ significantly from the total $K$-factor obtained after summing the cross sections for all individual channels. To illustrate this point, the LO/NLO cross sections and the resulting $K$-factors for the production channels involving only squarks of the first generation are given in Tab.~\ref{tab:totxs} for the CMSSM point $10.3.6^*$. Note that the channels with squarks of the same flavour and chirality in the final state, displayed in the first four rows of the table, have contributions from $gg$ initial states and therefore larger $K$-factors than channels with squarks of different flavour or chirality.
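The quoted total $K$-factors can be recovered directly from the cross sections in Tab.~\ref{tab:totprod}:

```python
# (sigma_LO, sigma_NLO) in fb from Tab. totprod
results = {
    "10.3.6*": (2.319, 3.218),
    "10.4.5":  (3.098, 4.366),
}

k_factors = {s: nlo / lo for s, (lo, nlo) in results.items()}
assert abs(k_factors["10.3.6*"] - 1.39) < 0.005
assert abs(k_factors["10.4.5"]  - 1.41) < 0.005
```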
Hence the assumption that the individual $K$-factors can be approximated by the total $K$-factor obtained from \textsc{Prospino} is in general not valid. \begin{table}[t] \renewcommand{\arraystretch}{1.2} \bc \begin{tabular}{|c || c |c | c |}\hline Process & $\sigma_{\text{LO}} [\text{fb}]$ & $\sigma_{\text{NLO}} [\text{fb}]$ & K-factor \\\hline\hline $\tilde{u}_{L}\bar{\tilde{u}}_{L}$ & $ 9.51\cdot 10^{-2} $ & $ 1.43\cdot 10^{-1} $ & $ 1.50 $ \\ $\tilde{u}_{R}\bar{\tilde{u}}_{R}$ & $ 1.14\cdot 10^{-1} $ & $ 1.72\cdot 10^{-1} $ & $ 1.51 $ \\ $\tilde{d}_{L}\bar{\tilde{d}}_{L}$ & $ 5.50\cdot 10^{-2} $ & $ 8.79\cdot 10^{-2} $ & $ 1.60 $ \\ $\tilde{d}_{R}\bar{\tilde{d}}_{R}$ & $ 6.89\cdot 10^{-2} $ & $ 1.11\cdot 10^{-1} $ & $ 1.61 $ \\ $\tilde{u}_{L}\bar{\tilde{u}}_{R}$ & $ 3.75\cdot 10^{-1} $ & $ 5.12\cdot 10^{-1} $ & $ 1.37 $ \\ $\tilde{d}_{L}\bar{\tilde{d}}_{R}$ & $ 1.41\cdot 10^{-1} $ & $ 1.70\cdot 10^{-1} $ & $ 1.21 $ \\ $\tilde{u}_{L}\bar{\tilde{d}}_{L}$ & $ 6.98\cdot 10^{-2} $ & $ 7.89\cdot 10^{-2} $ & $ 1.13 $ \\ $\tilde{u}_{L}\bar{\tilde{d}}_{R}$ & $ 2.98\cdot 10^{-1} $ & $ 3.54\cdot 10^{-1} $ & $ 1.19 $ \\ $\tilde{u}_{R}\bar{\tilde{d}}_{L}$ & $ 2.94\cdot 10^{-1} $ & $ 3.49\cdot 10^{-1} $ & $ 1.19 $ \\ $\tilde{u}_{R}\bar{\tilde{d}}_{R}$ & $ 8.36\cdot 10^{-2} $ & $ 9.54\cdot 10^{-2} $ & $ 1.14 $ \\ \hline\hline Sum & 1.59 & 2.07 & 1.30 \\\hline \end{tabular} \caption{\label{tab:totxs}The LO and NLO cross sections for squark-antisquark production of the first generation obtained for the CMSSM point $10.3.6^*$. The charge conjugate channels have been combined.} \ec \vspace*{-0.2cm} \end{table} Determining the individual corrections consistently is especially important if the decays are taken into account and the branching ratios of the different squarks differ significantly for the specific decay channel under consideration. 
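The channel-summed numbers in Tab.~\ref{tab:totxs} can be reproduced with a few lines of arithmetic. The following Python sketch is purely an illustrative cross-check of the table, not part of the original calculation; the channel keys are shorthand introduced here for the subprocesses listed in the table:

```python
# Illustrative cross-check of Tab. totxs (values in fb, transcribed from the
# table). Keys are shorthand, e.g. "uL uLbar" for uL-squark + uL-antisquark.
channels = {
    "uL uLbar": (9.51e-2, 1.43e-1),
    "uR uRbar": (1.14e-1, 1.72e-1),
    "dL dLbar": (5.50e-2, 8.79e-2),
    "dR dRbar": (6.89e-2, 1.11e-1),
    "uL uRbar": (3.75e-1, 5.12e-1),
    "dL dRbar": (1.41e-1, 1.70e-1),
    "uL dLbar": (6.98e-2, 7.89e-2),
    "uL dRbar": (2.98e-1, 3.54e-1),
    "uR dLbar": (2.94e-1, 3.49e-1),
    "uR dRbar": (8.36e-2, 9.54e-2),
}
# Sum the per-channel LO and NLO cross sections and recompute the K-factors.
sigma_lo = sum(lo for lo, _ in channels.values())
sigma_nlo = sum(nlo for _, nlo in channels.values())
k_total = sigma_nlo / sigma_lo
k_channels = {name: nlo / lo for name, (lo, nlo) in channels.items()}
print(f"{sigma_lo:.2f} {sigma_nlo:.2f} {k_total:.2f}")  # 1.59 2.07 1.30
```

The spread of the per-channel $K$-factors, roughly 1.13 to 1.61 around the channel-summed value of 1.30, makes quantitative why a single global $K$-factor is a poor approximation for individual subchannels.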
In order to assess the possible numerical impact of this approximation we consider the decay $\tilde{q}\rightarrow q \tilde{\chi}^0_1$ at LO at the level of total cross sections, {\it i.e.} we multiply the production cross sections for the individual squark-antisquark production channels with the respective LO branching ratios. In this step we take into account the contributions of the second generation squarks as well. We first consider the benchmark scenario 10.3.6$^*$. Using the correctly calculated NLO results for the individual production channels, multiplying them with the corresponding branching ratios and summing all channels, we obtain \be \sum_{\text{channels}}\sigma_{\text{NLO}}\cdot \text{BR}^{\text{LO}}\left(\tilde{q}\rightarrow \tilde{\chi}^0_1 q \right) \cdot \text{BR}^{\text{LO}}\left(\bar{\sq}\rightarrow \tilde{\chi}^0_1 \qbar \right) =0.139\, \text{fb}. \ee To mimic the way \textsc{Prospino} obtains the individual NLO results a common $K$-factor has to be calculated, using an averaged squark mass $m_{\tilde{q}} = 1779.31 \,\text{GeV}$. In the case at hand this leads to \begin{equation} \begin{aligned} \sigma^{\text{avg}}_{\text{LO}} = 2.315 &\,\text{fb}\; ,\qquad \sigma^{\text{avg}}_{\text{NLO}} = 3.218\,\text{fb} \\ &\Rightarrow K^{\text{avg}} =1.39\,. \end{aligned} \end{equation} Note that the difference compared to the full calculation given in Tab.~\ref{tab:totprod} is marginal and not visible when rounding to the second decimal place. This is due to the fact that the spread in the squark masses is rather small. Multiplying the LO result for each subchannel with this common $K$-factor and the corresponding branching ratios gives \be \sum_{\text{channels}}\sigma_{\text{LO}}\cdot K^{\text{avg}}\cdot \text{BR}^{\text{LO}}\left(\tilde{q}\rightarrow \tilde{\chi}^0_1 q \right) \cdot \text{BR}^{\text{LO}}\left(\bar{\sq}\rightarrow \tilde{\chi}^0_1 \qbar \right) =0.126\, \text{fb}\, . 
\ee Thus the rate obtained with the approximation relying on a constant $K$-factor for all subchannels is roughly $10\%$ smaller for this special case. Repeating this procedure for the benchmark scenario $10.4.5$ one obtains for the \textsc{Prospino}-like $K$-factor \begin{equation} \begin{aligned} \sigma^{\text{avg}}_{\text{LO}} = 3.090&\,\text{fb}\; , \qquad \sigma^{\text{avg}}_{\text{NLO}} = 4.356\,\text{fb} \\ &\Rightarrow K^{\text{avg}}=1.41\,. \end{aligned} \end{equation} Again, comparing this result to the full calculation given in Tab.~\ref{tab:totprod} the discrepancy is only marginal. Considering the individual subchannels with the correct individual NLO corrections yields \be \sum_{\text{channels}}\sigma_{\text{NLO}}\cdot \text{BR}^{\text{LO}}\left(\tilde{q}\rightarrow \tilde{\chi}^0_1 q \right) \cdot \text{BR}^{\text{LO}}\left(\bar{\sq}\rightarrow \tilde{\chi}^0_1 \qbar \right) =0.916\, \text{fb}, \ee while the approximation of the common $K$-factor gives \be \sum_{\text{channels}}\sigma_{\text{LO}}\cdot K^{\text{avg}}\cdot \text{BR}^{\text{LO}}\left(\tilde{q}\rightarrow \tilde{\chi}^0_1 q \right) \cdot \text{BR}^{\text{LO}}\left(\bar{\sq}\rightarrow \tilde{\chi}^0_1 \qbar \right) =0.807\, \text{fb} \ee and thus again a discrepancy of about $10\%$. \subsubsection*{Squark Production and Decay at NLO} As discussed in Sec.~\ref{sec:proddec} we have used three different approaches to combine the production and decay processes at NLO, differing in the way the combined expression is expanded in $\alpha_s$. All approaches require the calculation of the total squark width, either at LO or NLO accuracy. The results for the two considered benchmark scenarios are summarized in Tab.~\ref{tab:totwidths}. 
\begin{table}[ht] \renewcommand{\arraystretch}{1.2} \bc \begin{tabular}{|c || c |c || c | c | }\hline & \multicolumn{2}{c||}{$10.3.6^*$} & \multicolumn{2}{c|}{$10.4.5$} \\\cline{2-5} & $\Gamma_{\text{LO}} [\text{GeV}]$ & $\Gamma_{\text{NLO}} [\text{GeV}]$ & $\Gamma_{\text{LO}} [\text{GeV}]$ & $\Gamma_{\text{NLO}}[\text{GeV}] $\\\hline $\tilde{u}_L$ & 22.79 & 23.44 & 16.21 & 15.81\\ $\tilde{u}_R$ & 6.561 & 7.413 & 3.493 & 3.411\\ $\tilde{d}_L$ & 22.78 & 23.45 & 16.14 & 15.74\\ $\tilde{d}_R$ & 3.610 & 4.553 & 0.869 & 0.849 \\ \hline \end{tabular} \caption{\label{tab:totwidths} The total widths for first-generation squarks at LO and NLO for the two scenarios considered here. The widths for the second-generation squarks are identical. For the parameters see the main text. The scale for $\alpha_s$ has been set to $\mu_R=\overline{m}_{\tilde{q}}$.} \ec \vspace*{-0.2cm} \end{table} As a first step we compare the numerical results obtained with these approaches, both for differential distributions and total cross sections. In Fig.~\ref{fig:compapp} the distributions for the transverse momenta of the hardest and the second hardest jet, $p_T^{j_1/j_2}$, their invariant mass $m^{j_1j_2}$ and the missing transverse energy $\slashed{E}_T$ are depicted for squark pair production using the benchmark scenario $10.3.6^*$. Here, App.~1 corresponds to the Taylor expansion according to Eq.~(\ref{eq:proddec1}), whereas in App.~2 only the numerator in the combination formula is expanded, see Eq.~(\ref{eq:proddec2}). Approach 3 is the result obtained without any expansion, {\it i.e.} these distributions include contributions which are formally of higher order than NLO. The discrepancies between approaches 1 and 2 can reach $\matO(15\%)$ for the jet distributions and are largest close to threshold, while the results for $\slashed{E}_T$ reflect only the overall discrepancy in the total cross sections, which for this scenario amounts to approximately 4\%. 
The distributions obtained with the third approach do not show any large deviations from the results obtained with the other two approaches, but suffer from large statistical fluctuations. These result from the more complicated structure of the phase space integrations. The total cross sections for the combined production and decay processes as obtained with approaches 1 and 2 are summarized in Tab.~\ref{tab:proddec}, for both scenarios $10.3.6^*$ and $10.4.5$. Note that the predictions for the LO cross sections are identical in both approaches and have been calculated according to Eq.~(\ref{eq:LOproddec}) using the LO quantities. Comparing the different NLO predictions reveals only rather small discrepancies, $<4\%$, in the total rates for the scenarios considered here. In the rest of this section we use exclusively the first approach to combine production and decay processes. \bfig[t] \begin{minipage}{0.48\textwidth} \includegraphics[width=\textwidth,height=5cm]{./Plots/comb/pt_j1.pdf} \vspace{0.1cm} \newline \includegraphics[width=\textwidth,height=5cm]{./Plots/comb/pt_j2.pdf} \end{minipage} \begin{minipage}{0.04\textwidth} \end{minipage} \begin{minipage}{0.48\textwidth} \includegraphics[width=\textwidth,height=5cm]{./Plots/comb/ptinv.pdf} \vspace{0.1cm} \newline \includegraphics[width=\textwidth,height=5cm]{./Plots/comb/m_j1_j2.pdf} \end{minipage} \caption {\label{fig:compapp} Comparison of the three approaches specified in the text for the combination of NLO corrections in production and decay. Shown are the distributions obtained for squark pair production and subsequent decays for the scenario $10.3.6^*$. 
The lower panels show the differential ratios of the second/third approach with respect to the first approach.} \efig \begin{table} \renewcommand{\arraystretch}{1.2} \bc \begin{tabular}{|c || c |c |c || c | c | c | }\hline Scenario & \multicolumn{3}{c||}{$10.3.6^*$} & \multicolumn{3}{c|}{$10.4.5$} \\\hline Process & $\sigma_{\text{LO}} [\text{fb}]$ & $\sigma_{\text{NLO}} [\text{fb}]$ & $K$-factor &$\sigma_{\text{LO}} [\text{fb}]$ & $\sigma_{\text{NLO}} [\text{fb}]$ & $K$-factor \\\hline\hline $\tilde{q}\sq$ - App.~1 & 1.34 & 1.12 & 0.84 & 7.57 & 8.75 & 1.16\\ $\tilde{q}\sq$ - App.~2 & 1.34 & 1.09 & 0.81 & 7.57 & 8.89 & 1.17\\\hline\hline $\tilde{q}\bar{\sq}$ - App.~1 & $9.29\cdot 10^{-2}$ & $1.03\cdot 10^{-1}$ & $1.11$ & $5.73\cdot 10^{-1}$ & $9.15\cdot 10^{-1}$ & $1.60$\\ $\tilde{q}\bar{\sq}$ - App.~2 & $9.29\cdot 10^{-2}$ & $9.88\cdot 10^{-2}$ & $1.06$ & $5.73\cdot 10^{-1}$ & $9.32\cdot 10^{-1}$ & $1.63$\\\hline \end{tabular} \caption{\label{tab:proddec} Cross sections for squark production and decay at LO and NLO, combined according to Eq.~(\ref{eq:proddec1}) (App.~1) and Eq.~(\ref{eq:proddec2}) (App.~2). } \ec \vspace*{-0.2cm} \end{table} In order to assess the influence of the NLO corrections on differential cross sections we consider in the following the differential $K$-factors for several observables. In Fig.~\ref{fig:LONLOdec1} the LO and the NLO distributions for the transverse momentum of the hardest jet, $p_T^{j_1}$, its rapidity, $y^{j_1}$, the missing transverse energy $\slashed{E}_T$ and the effective mass $m_{\text{eff}} \equiv p_T^{j_1} + p_T^{j_2} + \slashed{E}_T$ are depicted for squark-antisquark production, using the benchmark scenario $10.3.6^*$. The results for the scenario 10.4.5 are qualitatively the same. Considering the $p_T$ distribution of the hardest jet one observes a strong enhancement of the NLO corrections for small values of $p_T$, while they turn even negative for large values. 
The result for the second hardest jet, which is not shown here, is qualitatively the same. A similar observation holds for the effective mass: the NLO curve is dragged to smaller values of $m_{\text{eff}}$ and the differential $K$-factor depicted in the lower panel is far from being flat over the whole region. For the $\slashed{E}_T$ predictions, in contrast, the deviation of the differential $K$-factor from the total one is rather small, of $\matO(5\%)$, except for events with very small or very large missing transverse energy. Likewise, the shape of the rapidity distribution of the hardest jet is hardly affected by the NLO corrections. \bfig[t] \begin{minipage}{0.48\textwidth} \includegraphics[width=\textwidth,height=5cm]{./Plots/decay_LOvsNLO_sqsqbar_point1/ptj1.pdf} \vspace{0.1cm} \newline \includegraphics[width=\textwidth,height=5cm]{./Plots/decay_LOvsNLO_sqsqbar_point1/yj1.pdf} \end{minipage} \begin{minipage}{0.04\textwidth} \end{minipage} \begin{minipage}{0.48\textwidth} \includegraphics[width=\textwidth,height=5cm]{./Plots/decay_LOvsNLO_sqsqbar_point1/Etmiss.pdf} \vspace{0.1cm} \newline \includegraphics[width=\textwidth,height=5cm]{./Plots/decay_LOvsNLO_sqsqbar_point1/meff.pdf} \end{minipage} \caption {\label{fig:LONLOdec1} Differential distributions as defined in the text for squark-antisquark production, combined with the subsequent decay $\tilde{q}\rightarrow q \tilde{\chi}^0_1$ and the corresponding decay for the antisquark for the scenario $10.3.6^*$. Shown are the LO predictions obtained using Eq.~(\ref{eq:LOproddec}) and the NLO results determined according to Eq.~(\ref{eq:proddec1}). In all plots the lower panels depict the respective differential $K$-factor (full) and the total $K$-factor from Tab.~\ref{tab:proddec} (dashed).} \efig Next we consider the same set of distributions for squark pair production with subsequent decays, this time for the scenario $10.4.5$, depicted in Fig.~\ref{fig:LONLOdec3}. 
Again the results for $10.3.6^*$ are qualitatively identical and not shown here. In essence, the behaviour is very much the same as for squark-antisquark production in Fig.~\ref{fig:LONLOdec1} and differs only in details. For example, the differential $K$-factor of the rapidity distribution $y^{j_1}$ shows slightly larger deviations from the total $K$-factor, whereas the one for $\slashed{E}_T$ is a bit flatter in the range considered here. \bfig[t] \begin{minipage}{0.48\textwidth} \includegraphics[width=\textwidth,height=5cm]{./Plots/decay_LOvsNLO_sqsq_point2/ptj1.pdf} \vspace{0.1cm} \newline \includegraphics[width=\textwidth,height=5cm]{./Plots/decay_LOvsNLO_sqsq_point2/yj1.pdf} \end{minipage} \begin{minipage}{0.04\textwidth} \end{minipage} \begin{minipage}{0.48\textwidth} \includegraphics[width=\textwidth,height=5cm]{./Plots/decay_LOvsNLO_sqsq_point2/Etmiss.pdf} \vspace{0.1cm} \newline \includegraphics[width=\textwidth,height=5cm]{./Plots/decay_LOvsNLO_sqsq_point2/meff.pdf} \end{minipage} \caption {\label{fig:LONLOdec3} Same as Fig.~\ref{fig:LONLOdec1} for squark pair production and the scenario $10.4.5$.} \efig \subsubsection{Parton Shower Effects} In order to investigate parton shower effects we have combined our implementations of the squark production and decay processes with different parton shower programs. To this end, five million events have been generated for squark-antisquark and squark pair production for each of the two benchmark scenarios defined in Sec.~\ref{sec:setup}. The results shown in the following have been obtained by setting the folding parameters of the \PB~to the values \be n_{\xi}=5,\quad n_y=5,\quad n_{\phi}=1\,, \ee thus reducing the number of events with negative weights. However, in the context of squark production and decay processes two further sources of negative weights can occur. The first one originates from the way production and decay are combined in Eq.~(\ref{eq:proddec1}), see the discussion in Sec.~\ref{sec:proddec}. 
It is not possible to apply the folding procedure described above in this case, since the negative contributions to $\overline\matB$ are directly related to the (modified) Born contribution. Using a different expansion of the combination formula, {\it e.g.} Eq.~(\ref{eq:proddec2}), would remedy this point; however, that approach violates unitarity and should therefore be avoided. The implemented subtraction schemes described in Sec.~\ref{sec:virtreal} present another source of contributions with negative weights. While these are completely absent for the DR-I method and their number can be reduced again by means of folding for the DS$^*$ and the DR-II method, they inevitably occur for the methods relying on a splitting of $\matR$. All in all, using the DS$^*$ subtraction scheme (preferable for conceptual reasons) with split real matrix elements squared and Eq.~(\ref{eq:proddec1}) for the combination of production and decay unavoidably leads to events with negative weights, which cannot be neglected. Therefore, they are kept in the generated event files by setting the \PB~flag {\tt withnegweights=1}. The generated event samples have been showered with two Monte Carlo event generators, using three different parton shower algorithms implemented in these programs: \bi \item \textbf{\textsc{Pythia 6}:} We use version 6.4.28 \cite{pythia6}. All results have been obtained with the Perugia 0 tune \cite{perugia}, invoked by setting {\tt MSTP(5) = 320}. 
A comparison with the Perugia 11 tune ({\tt MSTP(5) = 350}) yields only tiny discrepancies.\footnote{To be more precise, for squark-antisquark production, including the decays and using the benchmark scenario $10.3.6^*$, of all observables considered in this section only the $p_T^{j_3}$ distribution shows a deviation larger than 1\%, of $\matO(5\%)$.} In order to study only effects of the parton shower, hadronization and multi-parton interaction (MPI) effects have been turned off by setting {\tt MSTP(111) = 0} and {\tt MSTP(81) = 20}, thus invoking the $p_T$-ordered shower. However, in the simulation of the full process, including NLO corrections to the production and the decays, a further subtle difficulty arises when using \textsc{Pythia}, which is related to the way the starting scales for the shower are chosen. The \textsc{Powheg} approach relies on the assumption that the $p_T$ of the emitted final-state parton is larger than the transverse momentum of any subsequent splitting generated by the parton shower. This requires the application of a $p_T$ veto in the parton shower, with the maximal scale being read for each event from the event file. However, if final-state resonances are present, their masses have to be preserved by the reshuffling operations performed in the shower algorithm. Therefore, the showering of partons originating from the decays of these resonances, {\it i.e.} the produced squarks in the processes considered here, is performed separately in \textsc{Pythia}. The starting scale for these shower contributions is set to the invariant mass of all decay products, hence in the case at hand to the mass of the respective squark. In the scenarios considered here this scale is typically an order of magnitude larger than the upper scale written to the event file, leading to much more radiation and thus to a strong bias of the results. 
In order to correct for this effect, the \textsc{Pythia} routines had to be adapted to use the scale specified in the event file as starting scale in all individual contributions to the parton shower. \item \textbf{\textsc{Herwig++}:} The default shower of \textsc{Herwig++} \cite{herwigpp} is ordered in the angles of the branchings. Applying this shower to an event sample generated according to the \textsc{Powheg} method again requires the use of a $p_T$ veto. However, this combination lacks the emission of soft wide-angle partons, as the first emission in an angular-ordered shower is not necessarily the hardest one. In principle these missing parts have to be simulated in an extra step via a so-called vetoed truncated shower, which is not provided by \textsc{Herwig++} and thus not taken into account in the following. The effect of these missing contributions will be estimated by comparing the results to those obtained with the $p_T$-ordered \textsc{Dipole-Shower} \cite{herwigdp1,herwigdp2}, which is also part of the \textsc{Herwig++} framework. The results presented in the following sections have been obtained using version 2.6.1 \cite{herwigpp26}. In the following, \textsc{Herwig++}~refers only to the default shower, while the results labeled \textsc{Dipole-Shower}~or, for the sake of brevity, \textsc{Dipole}~refer to the \textsc{Dipole-Shower}~included in the \textsc{Herwig++}~framework. \ei The showered results for squark-antisquark production are shown in Fig.~\ref{fig:Shower1}, using the scenario $10.3.6^*$. Likewise, Fig.~\ref{fig:Shower2} depicts the results for squark pair production, obtained with the scenario $10.4.5$. All plots show the outcome of the three parton showers described above and the NLO prediction, which serves as normalization in the ratio plots shown in the lower panels. 
The results for squark pair production using scenario $10.3.6^*$ and squark-antisquark production with scenario $10.4.5$ do not reveal any new features compared to the depicted combinations and are therefore not shown here. \bfig[t] \begin{minipage}{0.48\textwidth} \includegraphics[width=\textwidth,height=5cm]{./Plots/shower_sqsqbar_point1/P1_ptj1.pdf} \vspace{0.1cm} \newline \includegraphics[width=\textwidth,height=5cm]{./Plots/shower_sqsqbar_point1/P1_ptj3.pdf} \end{minipage} \begin{minipage}{0.04\textwidth} \end{minipage} \begin{minipage}{0.48\textwidth} \includegraphics[width=\textwidth,height=5cm]{./Plots/shower_sqsqbar_point1/P1_ptinv.pdf} \vspace{0.1cm} \newline \includegraphics[width=\textwidth,height=5cm]{./Plots/shower_sqsqbar_point1/P1_etaj3.pdf} \end{minipage} \caption {\label{fig:Shower1} Differential distributions for squark-antisquark production, combined with the subsequent decay $\tilde{q}\rightarrow q \tilde{\chi}^0_1$ for the scenario $10.3.6^*$. The NLO predictions and the results after applying the parton showers \textsc{Pythia}, \textsc{Herwig++} and the \textsc{Dipole-Shower}\ are shown. In all plots the lower panels depict the respective ratios of the results obtained with the three parton showers and the pure NLO prediction.} \efig Comparing the predictions for the individual observables shown in the two figures, we note that in all cases considered here the $p_T^{j_1}$ result obtained with \textsc{Herwig++}~is slightly enhanced, by $\matO(10\%)$, in the low-$p_T$ region compared to the other parton showers, whereas the \textsc{Dipole-Shower}~and \textsc{Pythia}~essentially agree here. At the other end of the spectrum, however, both \textsc{Herwig++}~and the \textsc{Dipole-Shower}~predict $\matO(10\%)$ smaller rates than \textsc{Pythia}, which almost coincides with the NLO result for large values of $p_T^{j_1}$. The outcome of \textsc{Herwig++}~and the \textsc{Dipole-Shower}~is identical in this kinematic regime. 
Similar conclusions can be drawn from the $p_T^{j_2}$, the $m^{j_1j_2}$ and the $m_{\text{eff}}$ distributions not shown here. In contrast, the distributions describing the third hardest jet show more pronounced differences. Comparing first the results for $p_T^{j_3}$ obtained with \textsc{Herwig++}~and the \textsc{Dipole-Shower}~one notices that they agree within $\matO(5-10\%)$. The result for the third jet obtained with \textsc{Pythia}~is in all cases smaller than that of the other two parton showers. While the discrepancy for the benchmark scenario $10.3.6^*$ is smaller than 10\% for both squark-antisquark and squark pair production, it amounts to 10-15\% for the scenario $10.4.5$ in both cases. The largest differences in the three shower predictions emerge in the results for the pseudorapidity of the third hardest jet, $\eta^{j_3}$. While \textsc{Pythia}~and the \textsc{Dipole-Shower}~agree within 5\% in all cases and, for squark pair production, differ only in the overall normalization but not in the shape of the distributions, \textsc{Herwig++}~evidently predicts more jets in the central region $\left|\eta^{j_3}\right|\lesssim 1$. Comparing the \textsc{Herwig++}~result and the \textsc{Pythia}~outcome for squark-antisquark production, this enhancement amounts to a 20\% higher rate in the centre and a reduction of the same magnitude for $\left|\eta^{j_3}\right|\approx 2.8$. In the case of squark pair production, this effect is smaller, of $\matO(10\%)$, but still clearly visible. The predictions for the missing transverse energy $\slashed{E}_T$ agree very well and essentially reproduce the NLO result. Tiny deviations are only visible in the tails of the distributions; however, they are smaller than 5\% in all cases. 
\bfig[t] \begin{minipage}{0.48\textwidth} \includegraphics[width=\textwidth,height=5cm]{./Plots/shower_sqsq_point2/P2_ptj1.pdf} \vspace{0.1cm} \newline \includegraphics[width=\textwidth,height=5cm]{./Plots/shower_sqsq_point2/P2_ptj3.pdf} \end{minipage} \begin{minipage}{0.04\textwidth} \end{minipage} \begin{minipage}{0.48\textwidth} \includegraphics[width=\textwidth,height=5cm]{./Plots/shower_sqsq_point2/P2_ptinv.pdf} \vspace{0.1cm} \newline \includegraphics[width=\textwidth,height=5cm]{./Plots/shower_sqsq_point2/P2_etaj3.pdf} \end{minipage} \caption {\label{fig:Shower2} Same as Fig.~\ref{fig:Shower1} for squark pair production and the benchmark scenario 10.4.5.} \efig All in all, the predictions of the different parton showers for the observables depending solely on the two hardest jets agree within $\matO(10\%)$ or better. Comparing the showered results with the outcome of a pure NLO simulation, the effects of the parton showers on these observables are at most of $\matO(10-20\%)$, except for the threshold region. By and large, the two \textsc{Herwig++}~showers yield larger deviations from the NLO outcome for these observables, whereas \textsc{Pythia}~reproduces the NLO curves within $\matO(10\%)$. The $\slashed{E}_T$ distribution is in all cases hardly affected by parton shower effects. Larger deviations between the different parton showers emerge in the predictions for the third hardest jet, which is formally described only at LO in the hard process. Especially the \textsc{Herwig++}~prediction differs significantly from the other two showers and predicts more jets in the central region of the detector. At this point it is not possible to decide conclusively whether this discrepancy is an effect of the missing truncated shower or simply a relic of the way the phase space is populated in the different shower algorithms. This would require the actual implementation of a vetoed truncated shower, which is beyond the scope of this work. 
However, comparing the outcomes of the \textsc{Dipole-Shower}~and \textsc{Herwig++}~reveals only very small discrepancies in other observables. Hence the overall effect of the neglected truncated shower seems to be small. \subsubsection{Total Rates} \label{sec:totrat} In the last step the created event samples are analysed using a realistic set of event selection cuts, which corresponds to the definition of the signal region \lq A-loose' for the SUSY searches in two-jet events performed by the ATLAS collaboration \cite{atlasexcl4}. The event selection cuts used in this analysis are \begin{gather} p_T^{j_1}>130\,\text{GeV}, \quad p_T^{j_2}>60\,\text{GeV}, \quad \slashed{E}_T>160\,\text{GeV},\quad \frac{\slashed{E}_T}{m_{\text{eff}}}>0.2,\quad m_{\text{eff}}^{\text{incl}}>1\,\text{TeV},\nonumber\\ \Delta\phi(j_{1/2},\vec{\slashed{E}}_T)>0.4,\quad \Delta\phi(j_3,\vec{\slashed{E}}_T)>0.4 \,\,\,\,\,\text{if}\,\,\,\,\, p_T^{j_3}>40\,\text{GeV}\,. \label{eq:atlascuts} \end{gather} Here, the effective mass $m_{\text{eff}}$ is defined as the sum of the $p_T$ of the two hardest jets and $\slashed{E}_T$, whereas the inclusive definition of this observable includes all jets with $p_T^j>40\,\text{GeV}$, \be m_{\text{eff}}^{\text{incl}} = \sum_{i=1}^{n_j} p_T^{j_i} + \slashed{E}_T\,. \ee Moreover, $\Delta\phi(j_i,\vec{\slashed{E}}_T)$ denotes the minimal azimuthal separation between the direction of the missing transverse energy, $\vec{\slashed{E}}_T$, and the $i^{\text{th}}$ jet. The additional cut $\Delta\phi(j_3,\vec{\slashed{E}}_T)>0.4$ is only applied if a third jet with $p_T^{j_3}>40\,\text{GeV}$ is present. 
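To make the selection concrete, the following Python sketch implements the cuts of Eq.~(\ref{eq:atlascuts}) for a toy event. The event representation (a $p_T$-ordered list of jet $(p_T,\phi)$ pairs plus the missing transverse energy and its azimuth) and the helper function are simplifications introduced here for illustration; an actual analysis would operate on the jets returned by \textsc{Fastjet} and the reconstructed missing momentum:

```python
# Hedged sketch of the 'A-loose' selection; event format is hypothetical.
from math import pi

def delta_phi(phi1, phi2):
    """Azimuthal separation folded into [0, pi]."""
    dphi = abs(phi1 - phi2) % (2 * pi)
    return 2 * pi - dphi if dphi > pi else dphi

def passes_a_loose(jets, met, met_phi):
    """jets: pT-ordered list of (pT [GeV], phi); met: missing ET in GeV."""
    if len(jets) < 2:
        return False
    pt1, phi1 = jets[0]
    pt2, phi2 = jets[1]
    # Exclusive effective mass: two hardest jets plus missing ET.
    meff = pt1 + pt2 + met
    # Inclusive effective mass: all jets with pT > 40 GeV, plus missing ET.
    meff_incl = sum(pt for pt, _ in jets if pt > 40.0) + met
    if not (pt1 > 130.0 and pt2 > 60.0 and met > 160.0):
        return False
    if not (met / meff > 0.2 and meff_incl > 1000.0):
        return False
    if delta_phi(phi1, met_phi) <= 0.4 or delta_phi(phi2, met_phi) <= 0.4:
        return False
    # Delta-phi cut on the third jet only if it is above 40 GeV.
    if len(jets) > 2 and jets[2][0] > 40.0:
        if delta_phi(jets[2][1], met_phi) <= 0.4:
            return False
    return True

# A hard, back-to-back dijet event with large missing ET passes:
print(passes_a_loose([(450.0, 0.0), (300.0, 3.0)], met=400.0, met_phi=1.5))
```

Events failing any single requirement, e.g. $\slashed{E}_T$ below 160 GeV or a hard third jet aligned with $\vec{\slashed{E}}_T$, are rejected.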
\begin{table} \renewcommand{\arraystretch}{1.2} \bc \begin{tabular}{|c || c |c || c | c | }\hline & \multicolumn{2}{c||}{$10.3.6^*$} & \multicolumn{2}{c|}{$10.4.5$} \\\cline{2-5} & $\tilde{q}\sq$ & $\tilde{q}\bar{\sq}$ & $\tilde{q}\sq$ & $\tilde{q}\bar{\sq}$\\\hline\hline NLO & $0.871\,\text{fb}$ & $0.0781\,\text{fb}$ & $6.809\,\text{fb}$ & $0.696\,\text{fb}$ \\ \textsc{Pythia} & $0.883\,\text{fb}$ & $0.0797\,\text{fb}$ & $6.854\,\text{fb}$ & $0.704\,\text{fb}$ \\ \textsc{Herwig++} & $0.895\,\text{fb}$ & $0.0807\,\text{fb}$ & $6.936\,\text{fb}$ & $0.711\,\text{fb}$ \\\hline \textsc{Pythia}~(approx.) & $0.855\,\text{fb}$ & $0.0664\,\text{fb}$ & $6.844\,\text{fb}$ & $0.617\,\text{fb}$ \\ \textsc{Herwig++}~(approx.) & $0.858\,\text{fb}$ & $0.0667\,\text{fb}$ & $6.876\,\text{fb}$ & $0.620\,\text{fb}$ \\\hline \end{tabular} \caption{\label{tab:sigcuts} Total cross sections after applying the event selection cuts defined in Eq.~(\ref{eq:atlascuts}) for the different production modes in the two benchmark scenarios. The decays of the squarks (antisquarks) to $q\tilde{\chi}^0_1$ ($\bar{q}\tilde{\chi}^0_1$) are included at NLO. The given results have been obtained at the level of a pure NLO simulation and including parton shower effects with \textsc{Pythia}~and \textsc{Herwig++}, respectively. The last two rows have been obtained by rescaling LO events after application of the \textsc{Pythia}/\textsc{Herwig++}~shower with the constant $K$-factor and the individual NLO branching ratios.} \ec \vspace*{-0.2cm} \end{table} Applying these cuts at the level of a pure NLO simulation yields the total cross sections for squark (anti)squark production, combined with the subsequent decays, given in the first row of Tab.~\ref{tab:sigcuts} for the two benchmark scenarios $10.3.6^*$ and $10.4.5$. 
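As a numerical cross-check of Tab.~\ref{tab:sigcuts}, the relative deviations between the showered full simulation and the approximate rescaled-LO setup can be read off directly; the sketch below uses the \textsc{Pythia} rows of the table and is purely an illustration of the table's content:

```python
# Relative deviation between the full NLO+Pythia rates and the approximate
# (rescaled-LO) Pythia rates of Tab. sigcuts; values in fb, read off the table.
rates = {
    ("10.3.6*", "sq sq"):    (0.883, 0.855),
    ("10.3.6*", "sq sqbar"): (0.0797, 0.0664),
    ("10.4.5",  "sq sq"):    (6.854, 6.844),
    ("10.4.5",  "sq sqbar"): (0.704, 0.617),
}
for (scenario, process), (full, approx) in rates.items():
    print(scenario, process, f"{100 * (full - approx) / full:.1f}%")
```

The deviation is at most a few percent for squark pair production but sizeable, of $\matO(10-20\%)$, for squark-antisquark production, in line with the discussion below.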
Matching these NLO results with a parton shower hardly affects the outcome after applying the cuts defined in Eq.~(\ref{eq:atlascuts}), as can be inferred from the results obtained with \textsc{Pythia}~and the \textsc{Herwig++}~default shower listed in the second and third row, respectively. Note that due to the mixture of cuts on inclusive and exclusive quantities the rates predicted by the two showers are slightly larger than in the NLO case. Moreover, the rates obtained with the two parton showers agree within 1-2\%. In order to compare these results, obtained with our new calculations and implementations, with the values determined according to the setup used so far for the simulation of these processes, we proceed as follows: first, the production and decays of the squarks are simulated with LO accuracy. The resulting events are reweighted with a common $K$-factor for squark-antisquark or squark pair production, which is obtained from \textsc{Prospino}, {\it i.e.} assuming degenerate squark masses and averaging over all channels. Each individual production channel is then multiplied with the corresponding NLO branching ratios for the produced squarks. The rescaled events are subsequently processed with the \textsc{Pythia}~and the \textsc{Herwig++}~default shower, again neglecting effects of hadronization, MPI, etc. The results obtained with this approximate setup after applying the event selection cuts defined in Eq.~(\ref{eq:atlascuts}) are summarized in the last two rows of Tab.~\ref{tab:sigcuts}. Comparing these total rates with those obtained in the full simulation, one notes that the discrepancy is almost negligible in the case of squark pair production, but amounts to 15-20\% for squark-antisquark production. This discrepancy is mainly caused by assuming a common $K$-factor for all subchannels instead of using the exact results with individual $K$-factors when combining production with decay. 
This effect in squark-antisquark production has already been demonstrated in Sec.~\ref{sec:fixedorder} for the case of LO decays. In squark pair production, however, subchannels with $K$-factors close to the global $K$-factor have large branching ratios, and therefore the exact and the approximate method give similar results. These examples illustrate that, in order to obtain precise predictions, it is not always sufficient to use the approximate approach.
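The size of this effect at fixed order can be condensed into a short check of the BR-weighted totals quoted in Sec.~\ref{sec:fixedorder}; the values below (in fb) are transcribed from the text and the computation is purely illustrative:

```python
# Exact channel-by-channel NLO combination vs. the common-K-factor
# approximation, using the BR-weighted totals quoted in the text (in fb).
totals = {
    "10.3.6*": (0.139, 0.126),
    "10.4.5":  (0.916, 0.807),
}
for scenario, (exact, approx) in totals.items():
    print(scenario, f"approximation low by {100 * (exact - approx) / exact:.1f}%")
```

Both scenarios exhibit the roughly 10\% underestimate incurred by rescaling all subchannels with a single averaged $K$-factor.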
\section{Introduction} \begin{figure}[h!] \begin{center} \begin{comment} \begin{tikzpicture} \coordinate (vsep) at (0,0.3); \coordinate (hsep) at (0.4,0); \coordinate (NW) at (-2,2); \draw node at (NW) {$N$ D2}; \coordinate (SW) at (-2,-2); \draw node at ($(SW)-(1,0)$) {D2-D4 $\subset$ $N$ D0}; \coordinate (SE) at (2,-2); \draw node at ($(SE) + (1.2,0)$) {$N_4$ D4 $\supset$ D2-D0}; \coordinate (NE) at (2,2); \draw node at (NE) {$N_4$ D2}; \draw[->,thick] ($(NW) - (vsep)$) -- ($(SW) + (vsep)$); \node[rotate=90] at ($(-2,0) - (hsep)$) {T$^2$-duality}; \draw[->,thick] ($(SW) + 2*(hsep)$) -- ($(SE) - (hsep)$); \draw[->,thick] ($(SE) + (vsep)$) -- ($(NE) - (vsep)$); \node[rotate=90] at ($(2,0) + (hsep)$) {T$^2$-duality}; \draw[dashed] ($(0,-2) + (vsep)$) -- ($(0,2) + (vsep)$); \node[rotate=90] at ($(-2,0) + 2.5*(hsep)$) {non-Abelian}; \node[rotate=90] at ($(2,0) - 2.5*(hsep)$) {Abelian}; \draw[->,thick,dashed] ($(NW) + (vsep) + (hsep)$) to[out=30,in=150] ($(NE) + (vsep) - (hsep)$); \node at ($(0,2) + 6.5*(vsep)$) {\phantom{a}}; \end{tikzpicture} \end{comment} \includegraphics[scale=1.2]{name_1.pdf} \end{center} \caption{The summary of our construction.}\label{MasterFigure} \end{figure} T-branes are supersymmetric brane configurations in which two scalars and the worldvolume flux acquire non-commuting expectation values. They were first introduced in \cite{Cecotti:2010bp}, and have since received a fair bit of interest, with reasons ranging from their fundamental structure to the attractiveness of their low-energy features for model-building in string phenomenology \cite{Chiou:2011js,Donagi:2011jy,Donagi:2011dv,Marsano:2012bf,Font:2012wq,Font:2013ida,Anderson:2013rka,DelZotto:2014hpa,Collinucci:2014qfa,Collinucci:2014taa,Marchesano:2015dfa,Cicoli:2015ylx,Carta:2015eoh,Collinucci:2016hpz,Ashfaque:2017iog,Anderson:2017rpr}. Despite this, several aspects of T-branes have remained quite mysterious. 
In particular, the presence of non-Abelian scalar vevs seems to hint at a possible interpretation of T-branes in terms of higher-dimensional branes, similar in spirit to the Myers effect \cite{Myers:1999ps}. In \cite{Bena:2016oqr}, the authors and Minasian have shown that for certain classes of T-branes such an interpretation is incorrect: In the regime of large worldvolume fields (in string units) these T-branes appear rather to be described by Abelian branes wrapping certain holomorphic surfaces, whose curvature encodes the original T-brane data. Roughly speaking, the non-Abelian vacuum profiles of T-branes give rise to brane bending, and not to brane polarization. The original purpose of the present investigation was to understand how universal the connection found in \cite{Bena:2016oqr} is, by investigating other classes of T-brane solutions and, in particular, those with constant worldvolume fields. However, a surprise awaited us: We discovered that the Hitchin system describing this class of T-branes is exactly the same as the system of equations that was found by Banks, Seiberg and Shenker in \cite{Banks:1996nn} to describe longitudinal five-branes in the BFSS matrix model \cite{Banks:1996vh} (see also \cite{Ganor:1996zk,Berkooz:1996is,Castelino:1997rv}). Upon reduction to type IIA string theory, the Banks-Seiberg-Shenker equations describe a non-Abelian configuration of D$0$-branes that preserves eight supercharges and carries D$2$ and D$4$ charges. 
The fact that these equations are identical points to the existence of a more profound connection, which has to do with the fact that both the BFSS matrix model and the Hitchin system describing T-branes come from reductions of ten-dimensional super-Yang-Mills theory to lower dimensions: The BFSS matrix model is the reduction of this theory to a particular one-dimensional matrix quantum mechanics, while the Hitchin system arises from an intermediate two-dimensional compactification of the self-duality equations of the super-Yang-Mills theory \cite{Hitchin:1986vp}. Armed with this connection, one can use the extensive technology developed in the good old matrix-model days to construct, rather straightforwardly, several solutions of T-branes with constant fields. As we will show, to obtain such T-branes one has to consider infinite matrices, and we construct a map between these T-branes and their Abelian counterparts following a path similar to that of \cite{Bena:2016oqr}: The system of equations we obtain in the T-brane frame is mapped to a dual system via two T-dualities along the worldvolume of the T-brane. The resulting dual system describes a particular D$0$-D$2$-D$4$\footnote{We will mostly refer to the T-brane as made of D$2$-branes in this paper, for historical matrix-model reasons. This is however done without loss of generality; all the same conclusions can be drawn for any D$p$-brane stack for $p=2,\ldots,7$.} configuration from the perspective of D0-branes with non-Abelian worldvolume-scalar vacuum expectation values. The same system can be described as two or more D4-branes with Abelian worldvolume fluxes, which, when T-dualized back to the original frame, give rise to several intersecting D2 branes. In ``black-hole'' language, the map between the D0 and the D4 descriptions that we construct is not a microscopic map, but a macroscopic one. 
To see this, it is important to recall that the D0-D4 system has a very large number of states, of order $e^{2 \pi \sqrt{2 N_0 N_4}}$, and each of these states can be in principle described either from a D0-brane perspective, as a vacuum configuration where the scalars of the D0-brane worldvolume have non-commutative vacuum expectation values, or from the D4 perspective, as an instanton configuration on the D4-brane worldvolume. The precise map between individual microstates is only known for a few very specific microstates, and requires in general pretty complicated technology. Our purpose is not to construct this detailed microscopic map, but rather to identify ensemble representatives that have the same overall D4, D2, and D0 charges. The Abelian system that we find is then brought back to the original T-brane frame by reversing the two T-dualities. At the end of this last step, we recover a D$2$-brane system, which gives the Abelian description of the original non-Abelian {T-brane system}. Thus, we find the same underlying physics as in \cite{Bena:2016oqr}: {T-brane configurations of stacks of D$p$-branes can be mapped to Abelian systems of D$p$-branes}. Our map can clearly be made more precise, both on the lower side of Figure \ref{MasterFigure} (by finding for example relations between three-point functions in the matrix-model description and D0 density modes in the D4 worldvolume description) and on the upper side of Figure \ref{MasterFigure} (by relating the T-brane data to the precise shape of the holomorphic curves wrapped by D2-branes), and we leave this investigation for future work. The paper is organized as follows. In Section \ref{sec:set} we present our T-brane system and map it to the Banks-Seiberg-Shenker system in Matrix Theory through two T-dualities. In the language of Figure \ref{MasterFigure} we start in the upper left corner, and move downwards. 
In the lower left corner we construct an explicit solution, which is presented in Section \ref{sec:sol}. We work out a map between the lower left and right corners in Section \ref{sec:abel}, and present the resulting D$4$-brane solution. In Section \ref{sec:return}, we move to the upper right corner of Figure \ref{MasterFigure}, where we construct the Abelian intersecting-brane configuration that corresponds to our original T-brane. The paper is concluded with some observations in Section \ref{sec:inter}. \section{From T-branes to Matrix Theory}\label{sec:set} T-branes preserving eight supercharges are non-trivial solutions of the so-called Hitchin system: \begin{subequations}\label{HitchinSystem} \begin{eqnarray} \bar{\partial}_A\,\Phi&=&0\,, \label{HF}\\ F+[\Phi,\Phi^\dagger]&=&0\,.\label{HD} \end{eqnarray} \end{subequations} This system is defined on $\mathbb{C}_w \times \mathbb{C}_z$, parametrized by the complex coordinates $w$ and $z$, which are parallel and transversal to the D-brane directions, respectively. The anti-holomorphic part of the anti-Hermitian $SU(N)$ gauge connection, $A_{\bar{w}}$, has a field strength $F=\partial A_{\bar{w}}-\bar{\partial}A_w+[A_w,A_{\bar{w}}]$, where $\bar{\partial}_A \equiv \bar{\partial}+[A_{\bar{w}},\cdot]$. Moreover, $\Phi$, usually called the ``Higgs field'', is the complex combination of two of the worldvolume scalars of the D-brane stack, and is a holomorphic (1,0)-form valued in the adjoint representation of the gauge group. Before beginning, we would like to make some preliminary observations on these equations. T-brane configurations are characterized by a non-trivial commutator $[\Phi,\Phi^\dagger]$ and, because of the cyclicity of the trace, have a traceless worldvolume flux. The field $\Phi$, however, is not necessarily traceless.
In this paper we are interested in T-branes that have constant worldvolume fields, for which the equations above are written solely in terms of commutators: \begin{subequations}\label{HitchinSystemC} \begin{eqnarray} \label{HFC} [A_{\bar{w}},\Phi] &=&0\,,\\ \label{HDC} [A_w, A_{\bar{w}}] + [\Phi, \Phi^\dagger] &=&0 \,. \end{eqnarray} \end{subequations} Since, as we will reiterate below, these equations can only be non-trivially solved for infinite matrices, all commutators can in principle admit a non-trivial trace. However, since we have finite-$N$ T-branes in mind, we will still keep the commutators appearing in \eqref{HDC} traceless, whereas we will allow for non-trivial traces in \eqref{HFC} (as we will see, these just give rise to additional harmless brane charges, without spoiling supersymmetry). Upon expressing the complexified fields $A_w$ and $\Phi$ in terms of their Hermitian components\footnote{From now on we only consider the matrix-valued coefficients of the differential forms, but refrain from introducing a new notation.} \begin{equation} \begin{split} A_w &= -\frac{1}{2}(A_3 + i A_4)\,,\\ A_{\bar{w}} &= \frac{1}{2}(A_3 - i A_4)\,,\\ \Phi &= \frac{1}{2}(\Phi_1 + i \Phi_2)\,, \end{split} \end{equation} {the system \eqref{HitchinSystemC} becomes} \begin{equation}\label{HitchinSystemCR} \begin{split} [\Phi_1, A_4] &= [\Phi_2, A_3]\,,\\ [\Phi_1, A_3] &= [A_4, \Phi_2]\,,\\ [\Phi_1, \Phi_2] &= [A_3, A_4]\,, \end{split} \end{equation} where the first two equations come from the Hermitian and anti-Hermitian parts of \eqref{HFC} respectively, and the last comes from \eqref{HDC}.
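This split into Hermitian and anti-Hermitian pieces is a purely algebraic identity, valid for arbitrary Hermitian $\Phi_{1,2}$ and $A_{3,4}$. The following NumPy sketch (our own consistency check, not part of the derivation; the matrix size and random seed are arbitrary) verifies it numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(n=4):
    """A random n x n Hermitian matrix."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

def comm(a, b):
    return a @ b - b @ a

A3, A4, P1, P2 = (rand_herm() for _ in range(4))
Aw = -(A3 + 1j * A4) / 2          # A_w
Awbar = (A3 - 1j * A4) / 2        # A_wbar
Phi = (P1 + 1j * P2) / 2          # Higgs field

# F-term: [A_wbar, Phi] decomposes into a Hermitian and an anti-Hermitian piece
lhs = comm(Awbar, Phi)
herm = 0.25j * (comm(A3, P2) - comm(A4, P1))   # Hermitian part
aherm = 0.25 * (comm(A3, P1) + comm(A4, P2))   # anti-Hermitian part
assert np.allclose(lhs, herm + aherm)

# D-term: [A_w, A_wbar] + [Phi, Phi^dagger] = (i/2)([A3, A4] - [Phi1, Phi2])
lhs2 = comm(Aw, Awbar) + comm(Phi, Phi.conj().T)
assert np.allclose(lhs2, 0.5j * (comm(A3, A4) - comm(P1, P2)))
```

Since the identity holds for arbitrary Hermitian matrices, the real system is equivalent to the complexified one, not merely implied by it.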
\begin{table} \begin{center} \begin{tabular}{c||c|c|c|c} & $\mathbb{R}^{p-2,1}$ & $\mathbb{R}^{7-p}$ & {$\mathbb{C}_w \to \mathbb{R} \times \mathbb{R}$} & $\mathbb{C}_z$\\ \hline \hline T-brane & $\times$ & & $\times$ &\\ \hline & & & $A_3$ $A_4$ & $\Phi$\\ \hline \hline T-dual & & & $\downarrow\ $$\ \downarrow$ & \\ \hline \hline dual brane& $\times$ & & & \\ \hline & & &$\Phi_3$ $\Phi_4$ & $\Phi$ \end{tabular} \end{center} \caption{Illustration of the two T-dualities.} \label{tab:T-duals} \end{table} {Following a train of logic similar to that of \cite{Bena:2016oqr}, we now T-dualize the T-brane equations \eqref{HitchinSystemCR} twice along the worldvolume directions $3$ and $4$ (see Table \ref{tab:T-duals}).} This maps the gauge potentials $A_{3,4}$ into worldvolume scalars $\Phi_{3,4}$, and the T-brane equations become: \begin{equation}\label{eq:master} \begin{split} [\Phi_1, \Phi_4] &= [\Phi_2, \Phi_3]\,,\\ [\Phi_1, \Phi_3] &= [\Phi_4, \Phi_2]\,,\\ [\Phi_1, \Phi_2] &= [\Phi_3, \Phi_4]\,, \end{split} \end{equation} or, more concisely, \begin{equation}\label{eq:mastersmall} \frac{1}{2}\sum_{i,j}\epsilon_{ijkl}[\Phi_i, \Phi_j] = [\Phi_k, \Phi_l]\,. \end{equation} The first surprise in our investigation is that this system is exactly the same as the Banks-Seiberg-Shenker system of equations \cite{Banks:1996nn} that describes longitudinal five-branes in the BFSS matrix model \cite{Banks:1996vh}. Upon compactifying to type IIA string theory, these equations describe multiple D$0$-branes dissolved into D$4$-branes (with extra possible D$2$ charges) from the perspective of the worldvolume non-Abelian Born-Infeld action of the D$0$-branes. As noted in \cite{Banks:1996nn}, this system of equations admits no non-trivial solutions in terms of finite matrices; hence, from now on we will work with infinite matrices. We will further discuss the relevance of this construction for finite-$N$ T-branes in Section \ref{sec:inter}.
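As a small sanity check of the index conventions in \eqref{eq:mastersmall} (our own check; the matrices below are random placeholders), one can verify that the $\epsilon$-contraction reproduces, for each choice of $(k,l)$, the commutator of the complementary index pair, so that \eqref{eq:mastersmall} indeed unpacks into the three equations of \eqref{eq:master}:

```python
import numpy as np
from itertools import permutations

# Levi-Civita tensor in four dimensions
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    s = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                s = -s
    eps[p] = s

rng = np.random.default_rng(1)
Phi = rng.normal(size=(4, 3, 3))  # four arbitrary 3x3 matrices

def comm(a, b):
    return a @ b - b @ a

# for each (k, l), (1/2) eps_{ijkl} [Phi_i, Phi_j] equals the commutator of the
# dual pair (1-based labels, matching the three equations of eq:master)
duals = {(3, 4): (1, 2), (1, 4): (2, 3), (1, 3): (4, 2)}
for (k, l), (i, j) in duals.items():
    lhs = 0.5 * sum(eps[a, b, k - 1, l - 1] * comm(Phi[a], Phi[b])
                    for a in range(4) for b in range(4))
    assert np.allclose(lhs, comm(Phi[i - 1], Phi[j - 1]))
```

This holds identically for arbitrary matrices; the equations of motion then simply equate each such combination to $[\Phi_k,\Phi_l]$.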
To demonstrate that indeed this system contains D$2$-branes as well as D$4$-branes, we can derive an expression for their charge densities from the Wess-Zumino part of the non-Abelian Born-Infeld action of $N$ D0-branes \cite{Myers:1999ps}: \begin{equation} S^{\textrm{D}0}_{\textrm{WZ}} = \mu_0 \int C_1 + \left(- i \frac{\mu_0 \lambda}{L^2} \textrm{Tr}\, [\Phi_i,\Phi_j]\right) \int C_3 ^{ij} + \left(- \frac{\mu_0 \lambda^2}{2 L^4} \epsilon^{ijkl} \textrm{Tr}\, \Phi_l \Phi_k \Phi_j \Phi_i \right) \int C_5^{1234}\,, \end{equation} where $\lambda = 2\pi \ell_s^2 = 2\pi \alpha'$, $\mu_p = 2\pi/(2\pi \ell_s)^{p+1}$, and the extra factors of $L$ come from the fact that the volume\footnote{We are here quite liberal with the use of the phrase \emph{volume}, as $L$ is derived from the topological Wess-Zumino term: It does not strictly give a volume but rather gives information about the boundaries. However, for the flat branes we are considering here, these two agree and we will keep on slightly abusing the nomenclature.} of the D$2$-branes is $L^2$ and the volume of the D$4$-branes is $L^4$. The induced numbers of D$p$-branes, $N_p$, are given by the electric couplings between the D0-brane fields and $C_{p+1}$ \begin{equation} S_{\textrm{WZ}}^{\textrm{D}0} = \ldots + \mu_p N_p \int C_{p+1} + \ldots\,, \end{equation} and to express them in terms of matrices it is convenient to define the dimensionless quantities $\tilde{\Phi}\equiv\sqrt{\lambda}\Phi$ and $K\equiv L/\sqrt{2\pi\lambda}$. The D2 and D4 numbers are then \begin{equation}\label{eq:0branenr} \begin{split} N_2^{ij} &= -i \frac{1}{K^2} \textrm{Tr}\, \left[\tilde{\Phi}_i,\tilde{\Phi}_j\right]\,,\\ N_4 &= - \frac{1}{2 K^4} \epsilon^{ijkl} \textrm{Tr}\, \tilde{\Phi}_l \tilde{\Phi}_k \tilde{\Phi}_j \tilde{\Phi}_i\,, \end{split} \end{equation} where the $ij$ superscript on the number denoting D$2$-branes signifies their orientation, according to the left-hand side of Table \ref{tab:system}.
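The factors of $\lambda$ and $K$ in \eqref{eq:0branenr} can be cross-checked symbolically. In the SymPy sketch below (our own bookkeeping check), $T$ and $S$ stand in for the dimensionful traces $-i\,\textrm{Tr}\,[\Phi_i,\Phi_j]$ and $-\epsilon^{ijkl}\textrm{Tr}\,\Phi_l\Phi_k\Phi_j\Phi_i$:

```python
import sympy as sp

ls, L = sp.symbols('ell_s L', positive=True)
T, S = sp.symbols('T S', positive=True)   # stand-ins for the two traces
lam = 2 * sp.pi * ls**2                   # lambda = 2 pi l_s^2
mu0 = 2 * sp.pi / (2 * sp.pi * ls)        # mu_p = 2 pi / (2 pi l_s)^(p+1)
mu2 = 2 * sp.pi / (2 * sp.pi * ls)**3
mu4 = 2 * sp.pi / (2 * sp.pi * ls)**5
K = L / sp.sqrt(2 * sp.pi * lam)

# D2 number from the C_3 coupling: mu_2 N_2 = (mu_0 lam / L^2) T; in the
# dimensionless variables (Phi -> sqrt(lam) Phi) the trace gains a factor lam,
# so the claim N_2 = Tr[...]/K^2 reads N_2 = lam T / K^2
N2 = mu0 * lam * T / (mu2 * L**2)
assert sp.simplify(N2 - lam * T / K**2) == 0

# D4 number from the C_5 coupling: mu_4 N_4 = (mu_0 lam^2 / (2 L^4)) S
N4 = mu0 * lam**2 * S / (2 * mu4 * L**4)
assert sp.simplify(N4 - lam**2 * S / (2 * K**4)) == 0
```

Both assertions confirm that the $\mu_p$ ratios conspire with $K\equiv L/\sqrt{2\pi\lambda}$ to leave the purely dimensionless expressions quoted in \eqref{eq:0branenr}.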
From now on we will exclusively use the dimensionless fields $\tilde{\Phi}_i$, but drop the tilde in order to un-clutter the formulae. Note that $K$ can be thought of as the dimensionless size of the box in which our D0-branes are distributed, and, like $N$, must be taken to infinity. Equations \eqref{eq:0branenr} and the cyclicity of the trace make it clear that, to be able to induce non-trivial D2 charges, one has to use infinite matrices $\Phi_i$. As explained at the beginning of this section, we will consider T-branes for which $N_2^{12} = N_2^{34} = 0$, because Equation \eqref{HD} must remain traceless for finite matrices. We will impose this condition in order not to introduce new features unrelated to T-branes. However, at the same time, we will allow ourselves to ``dress'' the T-brane with the other D$2$-brane charges, $N_2^{13} = N_2^{42}$ and $N_2^{14} = N_2^{23}$, since these correspond to finite traces of various terms in equation \eqref{HFC}, which are allowed for finite matrices.
\begin{table} \begin{center} \begin{tabular}{r|cccc} & 1 & 2 & $\tilde{3}$ & $\tilde{4}$ \\ \hline {D$0$} & - & - & - & - \\ \hline D$4$ & $\times$ & $\times$ & $\times$ & $\times$ \\ \hline \textcolor{red}{\underline{D$2$}} & $\times$ & $\times$ & - & - \\ \textcolor{red}{\underline{D$2$}} & - & - & $\times$ & $\times$ \\ D$2$ & $\times$ & - & $\times$ & - \\ D$2$ & - & $\times$ & - & $\times$ \\ D$2$ & $\times$ & - & - & $\times$ \\ D$2$ & - & $\times$ & $\times$ & - \\ \end{tabular} $\qquad$ \begin{tabular}{r|cccc} & 1 & 2 & 3 & 4\\ \hline {D$2$} & - & - & $\times$ & $\times$ \\ \hline D$2$ & $\times$ & $\times$ & - & -\\ \hline \textcolor{red}{\underline{D$4$}} & $\times$ & $\times$ & $\times$ & $\times$ \\ \textcolor{red}{\underline{D$0$}} & - & - & - & - \\ D$2$ & $\times$ & - & - & $\times$ \\ D$2$ & - & $\times$ & $\times$ & - \\ D$2$ & $\times$ & - & $\times$ & - \\ D$2$ & - & $\times$ & - & $\times$ \\ \end{tabular} \end{center} \caption{In the left table we display the branes present in a general solution of (\ref{eq:master}). The right table displays the resulting branes after reversing the T-dualities depicted in Table \ref{tab:T-duals}, i.e.\ in the T-brane frame. The branes colored in red and underlined are not present in a T-brane solution.} \label{tab:system} \end{table} \section{Finding a solution}\label{sec:sol} The goal of this section is to find an explicit solution to the system (\ref{eq:master}). The building blocks for constructing solutions to this system of equations are two infinite Hermitian traceless matrices $D$ and $X$, analogous to momentum and position operators, satisfying the relation \begin{equation}\label{FundComm} [D,X] = i \mathbb{I}_M\,, \end{equation} where the size of the matrices, $M$, is actually infinite, but we keep track of it for the purpose of making the normalizations clear.
Explicitly, these matrices can be constructed from the creation and annihilation operators of Quantum Mechanics via \begin{equation} D \equiv \frac{1}{\sqrt{2}} \left(a + a^\dagger\right)\,,\quad X \equiv \frac{i}{\sqrt{2}} \left(a^\dagger - a\right)\,, \end{equation} with\footnote{This particular choice is not compulsory; there exist other types of infinite matrices that can represent $a$ and $a^\dagger$, but this choice makes the calculations more straightforward.} \begin{equation} a^\dagger = \begin{pmatrix}0&0&0&\dots &0&\dots \\{\sqrt {1}}&0&0&\dots &0&\dots \\0&{\sqrt {2}}&0&\dots &0&\dots \\0&0&{\sqrt {3}}&\dots &0&\dots \\\vdots &\vdots &\vdots &\ddots &\vdots &\dots \\0&0&0&0&{\sqrt {n}}&\dots \\\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \end{pmatrix}\,. \end{equation} Our dynamics takes place in four dimensions and we can construct four-dimensional momentum and position operators of size $M^4\times M^4 = N\times N$: \begin{equation}\label{eq:decomp} \begin{split} D_i &= \bigotimes_{j=1}^4 \left( (1-\delta_{ij}) \mathbb{I}_M + \delta_{ij} D\right)\,,\\ X_i &= \bigotimes_{j=1}^4 \left( (1-\delta_{ij}) \mathbb{I}_M + \delta_{ij} X\right)\,, \end{split} \end{equation} which {satisfy} \begin{equation} [D_i,X_j] = i \delta_{ij} \times \mathbb{I}_M \otimes \mathbb{I}_M \otimes \mathbb{I}_M \otimes \mathbb{I}_M = i \delta_{ij}\mathbb{I}_N\,. \end{equation} We can now construct Ans\"atze for the matrices $\Phi_i$ in terms of $D_i$ and $X_i$. As already mentioned, the goal is to find a solution that has non-vanishing charge for all the D$2$-branes except the D$2_{12}$ and D$2_{34}$, but that still has $[\Phi_1,\Phi_2] = [\Phi_3,\Phi_4]$.
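At any finite truncation level $M$ the commutator \eqref{FundComm} cannot hold exactly (its trace would have to vanish); the failure is confined to the top of the truncated tower. A short NumPy sketch (our own illustration; $M=6$ is an arbitrary choice) makes this explicit:

```python
import numpy as np

M = 6  # truncation level; the exact algebra requires M -> infinity
ad = np.diag(np.sqrt(np.arange(1, M)), -1)   # truncated creation operator
a = ad.conj().T
D = (a + ad) / np.sqrt(2)
X = 1j * (ad - a) / np.sqrt(2)

C = D @ X - X @ D
# [D, X] = i on every state below the truncation edge; the missing sqrt(M)
# matrix element produces the large compensating entry in the last slot,
# which makes the commutator traceless at finite M
expected = 1j * np.diag([1.0] * (M - 1) + [1.0 - M])
assert np.allclose(C, expected)
```

The edge entry grows with $M$, so only matrix elements taken away from the truncation edge reproduce the infinite-dimensional algebra.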
This can be achieved for example by the following three-parameter family of solutions \begin{equation}\label{eq:na-sol} \begin{split} \Phi_1 &= D_1 - A_{14} X_4 - A_{13} X_3 - \frac{\gamma}{\sqrt{2M}} (X_2 X_4 + X_1 X_3)\,,\\ \Phi_2 &= D_2 + A_{13} X_4 - A_{14} X_3\,,\\ \Phi_3 &= D_3\,,\\ \Phi_4 &= D_4 + \frac{\gamma}{\sqrt{2M}} (X_3 X_4 - X_1 X_2)\,, \end{split} \end{equation} where $A_{13}, A_{14},\gamma$ are constants whose physical meaning will become clear shortly. The matrices $\Phi_i$ in \eqref{eq:na-sol} have the commutators \begin{equation} \begin{split} [\Phi_1, \Phi_2] = [\Phi_3,\Phi_4] &= i\frac{\gamma}{\sqrt{2M}} X_4\,,\\ [\Phi_1, \Phi_3] = [\Phi_4,\Phi_2] &= iA_{13} \mathbb{I}_N + i\frac{\gamma}{\sqrt{2M}} X_1\,,\\ [\Phi_1, \Phi_4] = [\Phi_2,\Phi_3] &= iA_{14} \mathbb{I}_N \,, \end{split} \end{equation} and hence equation \eqref{eq:0branenr} implies that the D$2$-brane charges are \begin{equation}\label{eq:ND2} \begin{split} N_2^{(12)} &= N_2^{(34)} = 0\,,\\ N_2^{(13)} &= N_2^{(42)} = A_{13} \frac{N}{K^2}\,,\\ N_2^{(14)} &= N_2^{(23)} = A_{14} \frac{N}{K^2}\,. \end{split} \end{equation} These charges do not depend on $\gamma$, because the $X_i$ are traceless. However, the D4-brane charge does depend on $\gamma$: \begin{equation}\label{eq:ND4} N_4 = \frac{N}{K^4} \left( A_{14}^2 + A_{13}^2 + \gamma^2 \right)\,. \end{equation} This dependence highlights the crucial role played by the parameter $\gamma$ of our family of solutions.
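These commutators can be verified numerically with truncated oscillator matrices. Since the truncation spoils \eqref{FundComm} only on the top state of each tower, the identities hold exactly once both sides are projected onto states with all occupation numbers below the edge. The following NumPy sketch (our own check; the truncation level $M$ and the values of $A_{13}$, $A_{14}$, $\gamma$ are arbitrary) implements this:

```python
import numpy as np

M = 3  # truncation level; exact relations hold away from the edge
ad = np.diag(np.sqrt(np.arange(1, M)), -1)   # truncated a^dagger
a = ad.conj().T
D1d = (a + ad) / np.sqrt(2)
X1d = 1j * (ad - a) / np.sqrt(2)
I1 = np.eye(M)

def slot(op, i):
    """Embed a single-oscillator operator into slot i of the 4-fold tensor product."""
    out = np.ones((1, 1))
    for j in range(4):
        out = np.kron(out, op if j == i else I1)
    return out

Ds = [slot(D1d, i) for i in range(4)]
Xs = [slot(X1d, i) for i in range(4)]

A13, A14, gam = 0.7, -0.3, 1.1               # sample values of A_13, A_14, gamma
c = gam / np.sqrt(2 * M)

Phi1 = Ds[0] - A14 * Xs[3] - A13 * Xs[2] - c * (Xs[1] @ Xs[3] + Xs[0] @ Xs[2])
Phi2 = Ds[1] + A13 * Xs[3] - A14 * Xs[2]
Phi3 = Ds[2]
Phi4 = Ds[3] + c * (Xs[2] @ Xs[3] - Xs[0] @ Xs[1])

def comm(A, B):
    return A @ B - B @ A

Id = np.eye(M ** 4)
# projector onto states with all occupation numbers < M-1, away from the
# truncation edge where [D, X] deviates from i
low1 = np.diag([1.0] * (M - 1) + [0.0])
Plow = slot(low1, 0) @ slot(low1, 1) @ slot(low1, 2) @ slot(low1, 3)

checks = [
    (comm(Phi1, Phi2), 1j * c * Xs[3]),
    (comm(Phi3, Phi4), 1j * c * Xs[3]),
    (comm(Phi1, Phi3), 1j * A13 * Id + 1j * c * Xs[0]),
    (comm(Phi4, Phi2), 1j * A13 * Id + 1j * c * Xs[0]),
    (comm(Phi1, Phi4), 1j * A14 * Id),
    (comm(Phi2, Phi3), 1j * A14 * Id),
]
assert all(np.allclose(Plow @ (lhs - rhs) @ Plow, 0) for lhs, rhs in checks)
```

All six identities hold on the low-lying block, in agreement with the commutators quoted above.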
If a solution allows the following decomposition of the trace \begin{equation} \begin{split}\label{eq:BPS1} N_4 &= - \frac{1}{2 K^4} \epsilon^{ijkl} \textrm{Tr}\, \Phi_l \Phi_k \Phi_j \Phi_i = - \sum_{\textrm{D}2\textrm{-pairs}} \frac{1}{K^4}\textrm{Tr}\, \{\star[\Phi_i, \Phi_j], [\Phi_i,\Phi_j]\}\\ &= \frac{1}{N} \sum_{\textrm{D}2\textrm{-pairs}} \left(-i\frac{1}{K^2} \textrm{Tr}\, [\Phi_i,\Phi_j]\right) \left(-i\frac{1}{K^2} \textrm{Tr}\, [\Phi_i,\Phi_j]\right) \\ &= \frac{1}{N} \sum_{\textrm{D}2\textrm{-pairs}} \left(N^{ij}_2\right)^2\,, \end{split} \end{equation} then it features supersymmetry enhancement and preserves $16$ supercharges. As one can see from \eqref{eq:ND4}, this condition is broken by $\gamma$, and therefore only the solutions with non-zero $\gamma$ will preserve just $8$ supercharges. Hence, it is $\gamma$ that gives our solution its T-brane character, as it is the only parameter appearing in equation \eqref{HD}. On the other hand, the parameters $A_{13}$ and $A_{14}$ are only there to ``dress'' the T-brane with additional D2 charges, without spoiling its features. Let us now work out the finite physical quantities of our family of solutions. We have a number $N$ of D$0$-branes which we are implicitly sending to infinity. These branes are distributed over an infinite four-dimensional space of volume $K^4$, and the appropriate finite quantity in our solution is the average density of D$0$-branes: \begin{equation}\label{D0dens} \rho_0 = \frac{N}{K^4} < \infty\,. \end{equation} {The same can be said for D2-branes: they are distributed in a subspace of volume $K^2$, so their number is infinite but their density is finite:} \begin{equation}\label{D2dens} \rho_2^{ij} = \frac{N^{ij}_2}{K^2} = \rho_0 A_{ij} < \infty\,.
\end{equation} Since the D4-branes wrap the whole four-dimensional space, the D4 charge $N_4$ is the same as the D4 density, $\rho_4=N_4$, and hence Equation \eqref{eq:ND4} can be rewritten using finite brane densities \begin{equation}\label{D4dens} \rho_4 \rho_0 = \sum_{\textrm{D}2\textrm{-pairs}} (\rho_2^{ij})^2 + \rho_0^2 \gamma^2\,. \end{equation} To summarize, we may formulate our non-Abelian picture solely in terms of finite quantities as follows. We start by fixing the quantity $\rho_0$, which is the analogue of the size of finite-dimensional matrices. By rescaling our infinite matrices, we can make it appear in the fundamental commutation relation \eqref{FundComm}, so that \begin{equation}\label{FundCommResc} \frac{1}{N}\textrm{Tr}\,[D_i, X_j]=i \delta_{ij} \rho_0\,. \end{equation} Now, the three-parameter family of explicit solutions is formally given by \eqref{eq:na-sol}, from which, by computing the relevant traces and using \eqref{FundCommResc}, we can extract the D2-brane densities \eqref{D2dens} and the D4-brane charge \eqref{D4dens}. \section{The ``Abelian'' picture}\label{sec:abel} In the previous section we constructed a family of eight-supercharge configurations with D4, D2 and D0 charges, from the non-Abelian D$0$ perspective. Following the same logic as in \cite{Bena:2016oqr}, we now want to work out the corresponding D$4$-brane picture for these configurations. As we explained in the Introduction, we will only construct a macroscopic map between these pictures, by building a D4 configuration that has the same D0, D2 and D4 charges as that of the previous section. 
A system of $N_4$ flat D$4$-branes with non-trivial worldvolume flux can carry D$2$ and D$0$ charges \cite{Douglas:1995bn}, given by the electric couplings to $C_3$ and $C_1$ \begin{equation}\label{eq:D4WZ} S^{\textrm{D}4}_{\textrm{WZ}} = \mu_4 \int C_5 + \left(\mu_4 \lambda \int \textrm{Tr}\, F_2\right) \int C_3 + \left(\mu_4 \frac{\lambda^2}{2} \int \textrm{Tr}\, F_2 \wedge F_2\right) \int C_1\,, \end{equation} in the conventions of \cite{Myers:1999ps}. Just as in Section \ref{sec:set}, we prefer to use dimensionless quantities and define $\tilde{F}_2 \equiv \lambda F_2$. From here on we will exclusively use $\tilde{F}_2$ but drop the tilde, and all integrals are now over boxes with sides of (dimensionless) size $K$. In these conventions, the brane numbers are given by \begin{equation}\label{eq:4branenr} N_2^{ij} = \int \textrm{Tr}\, \star \! F_2^{ij}\,,\quad N_0 = \frac{1}{2} \int \textrm{Tr}\, F_2 \wedge F_2\,. \end{equation} Much like in the D$0$ picture, this system of branes displays an enhancement of supersymmetry if the trace can be split according to \begin{equation}\label{eq:enhanceD4} \begin{split} N_0 &= \frac{1}{2} \int \Trs{N_4} \{\star F_2 \wedge F_2\}\\ &= \frac{1}{N_4} \sum_{\textrm{D}2\textrm{-pairs}} \left[\left( \int \Trs{N_4} \{\star F_2\}\right) \times \left(\int \Trs{N_4} \{F_2\}\right) \right] \\ &= \frac{1}{N_4} \sum_{\textrm{D}2\textrm{-pairs}} N_2^2\,, \end{split} \end{equation} and our interest here is to prevent this enhancement. 
The macroscopic map between the D4 and the D0 descriptions preserves the brane numbers according to\footnote{A map similar to the one we use here to identify the D$4$ picture of the D$0$-D$2$-D$4$ state can be found in \cite{KeskiVakkuri:1997ec}, where such a map is used as a technique to find solutions, and also in \cite{Taylor:1996ik,Ganor:1996zk}, where four T-dualities are performed along a D$0$-D$2$-D$4$ system.} \begin{equation}\label{eq:024map} \begin{split} N_2^{ij} = -i\frac{1}{K^2}\textrm{Tr}\, [\Phi_i, \Phi_j]\ &\to\ N_2^{(ij)} = \int \Trs{N_4}\{ \star F_2 \}^{ij}\,,\\ N_{4} N_0 = -\frac{N_0}{2 K^4} \textrm{Tr}\, \epsilon^{ijkl} \Phi_i \Phi_j \Phi_k \Phi_l\ &\to\ N_0 N_4 = \frac{N_4}{2} \int \Trs{N_4} \{F_2 \wedge F_2\}\,. \end{split} \end{equation} A three-parameter family of D4 configurations with these charges can be obtained using a constant worldvolume flux of the form \begin{equation}\label{eq:F2ansatz} \begin{split} F_{12} = F_{34} &= 0\,,\\ F_{13} = F_{42} &= \frac{\rho_0}{\rho_4} \left(A_{13} \mathbb{I}_{N_4} + \frac{\gamma}{\sqrt{2}} \Xi\right)\,,\\ F_{14} = F_{23} &= \frac{\rho_0}{\rho_4} \left( A_{14} \mathbb{I}_{N_4} + \frac{\gamma}{\sqrt{2}} \Xi \right)\,, \end{split} \end{equation} {where $\Xi$ is any traceless $N_4\times N_4$ matrix with $\textrm{Tr}\, \Xi^2 = N_4$. It is easy to verify that the above configurations contain the same amounts of D0, D2 and D4 charges as in \eqref{D0dens}, \eqref{D2dens} and \eqref{D4dens} respectively. However, this constant Ansatz is clearly only applicable if $N_4 > 1$.} From the D$0$ point of view discussed in the previous section, nothing appears to prevent us from considering a solution to the T-brane equations whose scalar profile gives $N_4=1$. From the D$4$ perspective, however, a constant D4 worldvolume-flux solution cannot be chosen, as it would correspond to a 16-supercharge configuration. One is therefore bound to rely on non-constant flux profiles.
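Returning to the constant-flux Ansatz \eqref{eq:F2ansatz}, the charge matching can be verified explicitly in a small example. In the NumPy sketch below (our own check), we take $N_4 = 2$, $\Xi = \mathrm{diag}(1,-1)$, and sample values of $\rho_0$, $A_{13}$, $A_{14}$, with $\gamma$ tuned so that $\rho_4 = N_4$; the constant flux then reproduces the densities of the D$0$ picture, while the trace fails to factorize as in \eqref{eq:enhanceD4} whenever $\gamma \neq 0$:

```python
import numpy as np

# sample parameters (ours), tuned so that rho4 = N4 = 2
rho0, A13, A14 = 1.0, 1.0, 0.5
gamma = np.sqrt(2.0 - A13**2 - A14**2)
rho4 = rho0 * (A13**2 + A14**2 + gamma**2)
N4 = 2
Xi = np.diag([1.0, -1.0])               # traceless, Tr Xi^2 = N4

F13 = (rho0 / rho4) * (A13 * np.eye(N4) + gamma / np.sqrt(2) * Xi)
F14 = (rho0 / rho4) * (A14 * np.eye(N4) + gamma / np.sqrt(2) * Xi)
F42, F23 = F13, F14                     # pairing of the constant Ansatz

# D2 densities from Tr(star F) match rho2^ij = rho0 * A_ij of the D0 picture
assert np.isclose(np.trace(F42), rho0 * A13)
assert np.isclose(np.trace(F23), rho0 * A14)

# D0 density from (1/2) Tr F /\ F matches rho0
assert np.isclose(np.trace(F13 @ F42 + F14 @ F23), rho0)

# the trace does not factorize: supersymmetry is not enhanced for gamma != 0
factorized = (np.trace(F42)**2 + np.trace(F23)**2) / rho4
assert not np.isclose(np.trace(F13 @ F42 + F14 @ F23), factorized)
```

The last assertion shows that the traceless part proportional to $\Xi$ is precisely what obstructs the enhancement to 16 supercharges.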
If the number of D0-branes were finite, it would be impossible to describe them from the perspective of a single D4-brane with flux.\footnote{This is due to the well-known fact that there are no Abelian instantons on $\mathbb{R}^4$.} Here, however, the number of D0-branes must be infinite, which allows us to relax the finite-action requirement when trying to solve the self-duality equation for the D4-brane flux. Nevertheless, we still believe that no description of our system exists from the D$4$-brane point of view when $N_4 = 1$; our argument goes as follows. While we are forced to relax the condition of finite action, we still need to demand that the density of D$0$-branes is finite. This means that either the integral determining the D$0$-brane number scales as $K^4$ -- the same as if the integrand were a constant, or equivalently, as the volume of $\mathbb{R}^4$ -- or the expression for the worldvolume flux must contain explicit $K$ dependence. Any explicit $K$ dependence is ruled out since it would, as we will point out later, fail to produce finite and non-vanishing T-brane dynamics in the $K\to\infty$ limit. Hence we conclude that, for a solution of interest, the integral determining the D$0$-brane number must scale as $K^4$. This is in turn not possible, since a component of the worldvolume gauge potential must satisfy the Laplace equation if its field strength is to satisfy the Bianchi identity and the self-duality condition. This implies that this field strength and its derivatives in Euclidean coordinates have to obey the ``maximum principle'', which states that these functions cannot have local extrema. This in turn implies that the function cannot be bounded at infinity (and be regular at finite distances at the same time), and hence must have an integral that scales as $K^{>4}$.
Although this constitutes no formal proof, we find this argument convincing enough to believe that the $N_4 = 1$ solution cannot be described from the D$4$-brane point of view. It would be interesting to look into these discrepancies between the D$0$-brane and D$4$-brane pictures further. We hope to provide more insight into this in future work. \section{Returning to the original frame}\label{sec:return} In this section we start from the Abelian D$4$-brane perspective of the previous section and perform two T-dualities in order to return to the original T-brane frame. We will reverse the T-dualities performed in Section \ref{sec:set} along the directions $x^{3,4}$, and the resulting system will be a set of intersecting D$2$-branes. The latter will be extended along non-compact two-dimensional planes parameterized by the coordinates $x^1$ and $x^2$. Performing the T-dualities along the directions of the worldvolume flux of the previous section (Equation \eqref{eq:F2ansatz}) produces the following set of differential equations \begin{equation}\label{eq:diff} \begin{split} &\partial_1 X^3 = - \partial_2 X^4 = \frac{\rho_0}{\rho_4} \left(A_{13} \mathbb{I}_{N_4} + \frac{\gamma}{\sqrt{2}} \Xi\right)\,,\\ &\partial_1 X^4 = \partial_2 X^3 = \frac{\rho_0}{\rho_4} \left(A_{14} \mathbb{I}_{N_4} + \frac{\gamma}{\sqrt{2}} \Xi\right)\,.
\end{split} \end{equation} This system can be easily integrated and {shown to describe} the embedding \begin{equation} \begin{split} X^3 &= \frac{\rho_0}{\rho_4} \left(A_{13} \mathbb{I}_{N_4} + \frac{\gamma}{\sqrt{2}} \Xi\right) x^1 + \frac{\rho_0}{\rho_4} \left(A_{14} \mathbb{I}_{N_4} + \frac{\gamma}{\sqrt{2}} \Xi\right) x^2 + \kappa_1\,,\\ X^4 &= \frac{\rho_0}{\rho_4} \left(A_{14} \mathbb{I}_{N_4} + \frac{\gamma}{\sqrt{2}} \Xi\right) x^1 - \frac{\rho_0}{\rho_4} \left(A_{13} \mathbb{I}_{N_4} + \frac{\gamma}{\sqrt{2}} \Xi\right) x^2 + \kappa_2\,,\\ X^1 &= x^1 \mathbb{I}_{N_4}\,,\\ X^2 &= x^2 \mathbb{I}_{N_4}\,.\\ \end{split} \end{equation} It should be noted that these matrix-valued coordinates describe the embedding of $N_4$ D$2$-branes at once. Furthermore, even though this embedding has a matrix structure, the solution is still Abelian, in the sense that any commutator between the coordinates is zero, $[X^i,X^j]=0$, as long as the integration constants $\kappa_{1,2}$ allow it. By defining complex coordinates $Z=X^1 + i X^2$ and $W=X^4 + i X^3$, we can write the embedding in a holomorphic way\footnote{With some care, $W$ and $Z$ can be compared to the coordinates with the same labels in \cite{Bena:2016oqr}, although our coordinates are matrices.} \begin{equation}\label{eq:holo} W = C Z + \kappa\,, \end{equation} where $C$ and $\kappa$ are given by \begin{equation}\label{eq:holoexpl} C = \frac{\rho_0}{\rho_4} \left(A_{14} \mathbb{I}_{N_4} + \frac{\gamma}{\sqrt{2}} \Xi\right) + i \frac{\rho_0}{\rho_4} \left(A_{13} \mathbb{I}_{N_4} + \frac{\gamma}{\sqrt{2}} \Xi\right)\,,\quad \kappa = \kappa_2 +i\kappa_1\,. \end{equation} According to Equation (\ref{eq:holo}), the surface over which each D2-brane extends is a flat complex plane embedded in the $\mathbb{C}^2$ parameterized by $Z$ and $W$. Under a certain projection onto a $\mathbb{R}^2$ subspace of $\mathbb{C}^2$, the embedding for one of these branes can be described by Figure \ref{fig:linear}. 
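The rewriting of the integrated embedding in the holomorphic form \eqref{eq:holo} can be checked symbolically. In the SymPy sketch below (our own check), $\Xi$ is treated as a single commuting symbol, which suffices here since it appears only linearly, and $r$ abbreviates $\rho_0/\rho_4$:

```python
import sympy as sp

x1, x2, A13, A14, g, r, k1, k2 = sp.symbols(
    'x1 x2 A13 A14 gamma r kappa1 kappa2', real=True)
Xi = sp.Symbol('Xi')  # the matrix Xi, as a symbol since it enters linearly

C13 = r * (A13 + g / sp.sqrt(2) * Xi)   # r abbreviates rho0/rho4
C14 = r * (A14 + g / sp.sqrt(2) * Xi)

# integrated embedding
X3 = C13 * x1 + C14 * x2 + k1
X4 = C14 * x1 - C13 * x2 + k2

# the integrated profile solves the T-dual differential equations
assert sp.simplify(sp.diff(X3, x1) - C13) == 0
assert sp.simplify(sp.diff(X4, x2) + C13) == 0
assert sp.simplify(sp.diff(X4, x1) - C14) == 0
assert sp.simplify(sp.diff(X3, x2) - C14) == 0

# and takes the holomorphic form W = C Z + kappa
Z = x1 + sp.I * x2
W = X4 + sp.I * X3
C = C14 + sp.I * C13
kappa = k2 + sp.I * k1
assert sp.simplify(W - (C * Z + kappa)) == 0
```

Holomorphy of the embedding is thus automatic once the first-order system is solved, in line with the supersymmetry of the configuration.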
We see that $A_{ij}$ and $\gamma$ describe the angle the branes make, and the integration constants describe shifts of the branes. Note that in the absence of the term proportional to $\Xi$, the branes would all be parallel and the supersymmetry would be enhanced to 16 supercharges. It is only the parameter $\gamma$ that ensures that the branes are not parallel and hence that the system has only 8 supercharges. \begin{figure} \begin{center} \begin{comment} \begin{tikzpicture} \draw[->] (-3,0) -- (3,0); \draw node at (3,.3) {$x$}; \draw[->] (0,-1) -- (0,3); \draw node at (-.3,3) {$y$}; \draw[thick] (-3,-1) -- (1,3); \draw node at (1,2.5) {D$2$}; \draw (-1,0) arc (0:45:1); \draw node at (-.7,.5) {$\alpha$}; \draw[|-|] (.1,.05) -- (.1,2); \draw node at (.3,1) {$k$}; \end{tikzpicture} \end{comment} \includegraphics{name_2.pdf} \end{center} \caption{A projection of Eq.~(\ref{eq:holo}) onto the equation $y = x \tan \alpha + k$, where the angle $\alpha$ would be derived from the constant $C$, and $k$ from $\kappa$.\label{fig:linear}} \end{figure} The flat shape of these D2-branes is just a consequence of the constant-flux ``ensemble representative'' solution we chose to focus on in the previous section. A generic member of the ensemble will have non-constant fluxes on the system of D4-branes, which in turn would give rise to curved D2-brane embeddings after the T-dualities. Hence, the conclusion of our investigation is that the non-Abelian T-brane configuration we started with, made from a stack of D2-branes at $Z=0$, admits an alternative Abelian description consisting of a number of (generally curved) D2-branes intersecting in the $Z,W$ plane. As we will explain further below, since all the quantities in Equation (\ref{eq:holoexpl}) are independent of $K$ and $N$, our result is finite and hence applies to T-branes with large but finite $N$ as well.
\section{Discussion}\label{sec:inter} Our result confirms the claim made in \cite{Bena:2016oqr} that T-branes admit an alternative Abelian description in terms of branes wrapping holomorphic cycles. We focused on solutions preserving eight supercharges and we restricted to T-branes characterized by constant profiles of the worldvolume scalars, which forced us to consider stacks made of an infinite number of D-branes. The non-commutative scalar profile encodes important physical information, which we extracted and connected to the number of D-branes (of the same dimensionality) needed to describe the system from the Abelian perspective.\footnote{This quantity can be roughly seen as the analogue of the number of Jordan blocks characterizing finite-$N$ T-branes \cite{Bena:2016oqr}.} The detour we took to link the two pictures allowed us to discover an intriguing connection between the BPS equations governing T-branes and the twenty-year-old Banks-Seiberg-Shenker equations that describe longitudinal five-branes in Matrix Theory. We found a three-parameter family of explicit solutions to these equations and discussed their brane interpretation. As we have pointed out several times throughout this paper, our matrix-model-inspired construction of T-branes uses infinite matrices, and one may ask whether similar conclusions apply to T-branes made from a finite number of branes. Departing from the infinite-$N$ limit, small $1/N$ corrections are expected to affect our map, and there are at least three sources of such corrections. The first originates from higher-derivative terms in the non-Abelian Born-Infeld action \cite{Constable:1999ac}.\footnote{For D7-branes one could try to extract some of these corrections from known $\alpha'$ corrections in F-theory \cite{Grimm:2013gma,Grimm:2013bha,Minasian:2015bxa}.} The second comes from assuming that the physics of the lower part of Figure \ref{MasterFigure} takes place on an $\mathbb{R}^4$ space.
However, for a T-brane in a compactification, one expects this physics to receive corrections of order $1/K$, where $K$ is a typical size of the compactification. When the D$0$-density, $\rho_0$, is finite, $N$ and $K$ are related, and therefore our analysis is precise up to $1/N$ corrections. A third source of $1/N$ corrections (possibly related to the first) is the passage from the D$0$ to the D$4$ description. The exact relation between the finite-$N$ and infinite-$N$ maps has not yet been made concrete, and is left for future study. However, we believe that the map from the Hitchin system to an Abelian system is valid in general. In this paper we have limited ourselves to matching the macroscopic charges between the D0-brane and the D4-brane descriptions, and our purpose has not been to find the precise configuration of D4-branes with worldvolume flux providing the alternative Abelian description of the particular D0-brane solution in \eqref{eq:na-sol}. To construct such a microscopic map, one would need to find, for example, the precise distribution of D0-branes in the non-compact four-dimensional space, encoded by the details of the scalars $\Phi_i$. In analogy with the finite-dimensional systems studied in \cite{Myers:1999ps, Constable:1999ac}, one should be able to reconstruct the ``fuzzy'' distribution from traces of powers of the scalars $\Phi_i$, something which we did not attempt here. For this reason we focused on the simplest possible flux profile reproducing the same macroscopic charges from the D4 perspective, namely the constant-flux solution, which leads to a uniform distribution of D0-branes. We hope to provide a more refined analysis in future work. \section*{Acknowledgements} We would like to thank Ulf Danielsson, Giuseppe Dibitetto, Mariana Gra\~na, Fernando Marchesano, Washington Taylor, and Angel Uranga for interesting discussions. The work of I.B.~and J.B.~was supported by the John Templeton Foundation Grant 48222 and by the ANR grant Black-dS-String.
The work of J.B.~was also supported by the CEA Eurotalents program. The work of R.S.~was supported by the ERC Advanced Grant SPLE under contract ERC-2012-ADG-20120216-320421. In our calculations we have used SymPy \cite{10.7717/peerj-cs.103} and we would like to thank the developers.
\section{Introduction} Artificial neural networks (ANNs) rank among the most successful classes of machine learning models, but are -- superficial similarities to sensory processing pathways in cortex notwithstanding -- difficult to map to biologically realistic spiking neural networks. Nevertheless, we argue that such a reverse mapping is worthwhile for two reasons. First, it could help us understand information processing in the brain -- assuming that it follows similar computational principles. Second, it enables machine learning applications on fast, low-power neuromorphic architectures that are specifically developed to mimic biological neuro-synaptic dynamics. In this manuscript, we discuss several ways to answer what we consider to be a key challenge for neuromorphic architectures with analog components: Is it possible to design spiking architectures and training methods that are amenable to neuromorphic implementation and remain functionally performant despite substrate-inherent imperfections? More specifically, we review three different approaches \cite{petrovici2015fast,petrovici2016robustness,schmitt2016classification}. The first two are based on recent insights about how networks of spiking neurons can be constructed to sample from predefined joint probability distributions \cite{buesing2011neural,petrovici2016stochastic}. When these distributions are learned from data, these networks automatically build an internal, generative model, which is then straightforward to use for pattern recognition and memory recall \cite{leng2016spiking}. Practical problems arise when the hardware dynamics and parameter ranges are incompatible to the target specifications of the network, as these inevitably distort the sampled distribution. The first approach involves the addition of auxiliary network components in order to make it robust to hardware-induced distortions (Sec.\,\ref{sec:sampling}). 
The second one restricts the network topology in a way that endows it with immunity to some of these effects (Sec.\,\ref{sec:hierarchical}). We demonstrate the effectiveness of both these approaches on the Spikey neuromorphic system \cite{pfeil2013six}. The third strategy maps traditional feedforward architectures, trained offline with a backpropagation algorithm, to a network of spiking neurons on the neuromorphic device (Sec.\,\ref{sec:itl}). Here, the key to good performance is an additional learning phase where parameters are trained on hardware in the loop, while using the abstract network description as an approximation for the parameter updates. We show how this approach can restore network functionality despite having incomplete knowledge about the gradient along which the parameters need to descend. These experiments are performed on the BrainScaleS neuromorphic system \cite{schemmel2010waferscale}. While our networks are small compared to those used in contemporary machine learning applications, they showcase the potential of using accelerated analog neuromorphic systems for pattern representation and recognition. In particular, the used neuromorphic systems operate $10^4$ times faster than their biological archetypes, thereby significantly speeding up both training and practical application. 
\section{Fast sampling with spikes} \label{sec:sampling} \begin{figure} \centering \begin{tikzpicture}[] \draw[use as bounding box,inner sep=0pt] node {\includegraphics[width=\columnwidth]{fig1.pdf}}; \node at (-2.1, .5) {\includegraphics[width=.47\columnwidth]{fig1c_connection.pdf}}; \begin{scope}[ line width=1pt, shift={(2.2,2.42)}, font={\scriptsize \sffamily}, ->, shorten >=1pt, shorten <=1pt, >=latex, ] \def .5 {.5} \tikzstyle{neuron}=[circle, draw=blue, inner sep=0pt, minimum size=12pt] \tikzstyle{hid}=[dashed, opacity=.5] \tikzstyle{conn}=[-{>[flex=0.75]}, blue] \def 1.2cm {.8cm} \def 17 {17} \node[neuron] (z2) at (-30:1.2cm) {$z_2$}; \node[neuron, hid] (z0) at (90:1.2cm) {}; \node[neuron] (z1) at (210:1.2cm) {$z_1$}; \foreach \i in {1,...,3} { \node (h\i) at (120*\i - 150:2.2*1.2cm) {}; } \draw[conn] (z2) -- node[above,yshift=-2pt,xshift=2pt] {$w_{12}$} node[below,yshift=-9pt,xshift=-1pt] {$w_{21}$}(z1); \draw[conn, hid] (-30+17:1.2cm) arc (-30+17:90-17:1.2cm); \draw[conn, hid] (z1) -- (z0); \draw[conn, hid] (90+17:1.2cm) arc (90+17:210-17:1.2cm); \draw[conn, hid] (z0) -- (z2); \draw[conn] (210+17:1.2cm) arc (210+17:330-17:1.2cm) ; \draw[black!40!green, hid, shorten <=5pt] (h2) to (z0); \draw[black!40!green] (h1) to node[above,shift=({5pt,-4pt})] {$b_2$} (z2); \draw[black!40!green] (h3) to node[above,shift=({-5pt,-4pt})] {$b_1$} (z1); \end{scope} \end{tikzpicture} \caption{Sampling with LIF neurons. \tb{(A)} Exemplary membrane potential traces and mapping of refractory/non-refractory neuron states to states 1/0 of binary RVs. \tb{(B)} Exemplary structure of a BM. A subset of 2 units $(z_1, z_2)$ with biases $(b_1, b_2)$ (green) and connected by weights $w_{12}=w_{21}$ (blue) is highlighted to exemplify the neuromorphic network structure in subplot C. \tb{(C)} Sketch of sampling subnetworks representing binary RVs. 
Each subnetwork consists of a principal LIF neuron (black circle) and an associated synfire chain that implements refractoriness (red synapses), and coupling between sampling units (blue synapses). \tb{(D)} Exemplary spike activity of a sampling unit and membrane potential of its PN. \tb{(E)} Target (blue) vs. sampled (red) distribution on the Spikey chip. \tb{(F)} Evolution of the Kullback-Leibler divergence between the sampled and the target distribution for multiple experimental runs. Time given in biological units. } \label{fig:1} \end{figure} Following \cite{buesing2011neural,petrovici2016stochastic}, neural network activity can be interpreted as sampling from an underlying probability distribution over binary random variables (RVs). The mapping from spikes to states $\bs z = (z_1,\dots,z_k)$ is defined by \begin{equation} z_k^{(t)} = \left\{\begin{array}{ll} 1 \quad & \quad \text{if $t_k^s < t < t_k^s + {\tau_\mathrm{ref}}\xspace$} \ , \\ 0 \quad & \quad \text{otherwise} \ , \end{array}\right. \label{eqn:refractoriness} \end{equation} where $t_k^s$ are spike times of the $k$th neuron and ${\tau_\mathrm{ref}}\xspace$ its absolute refractory period (Fig.\,\ref{fig:1}\,A). When using leaky integrate-and-fire (LIF) neurons, Poisson background noise is used to achieve a high-conductance state, in which the stochastic response of a single neuron is well approximated by a logistic activation function \begin{equation} p(z_k = 1) = \sigma \left([\bar u_k - \bar u_k^0]/\alpha\right) \ , \label{eqn:actfctlif} \end{equation} where $\sigma(\cdot)$ is the logistic function and $\bar u_k$ represents the noise-free membrane potential of the $k$th neuron. The parameters $\bar u_k^0$ (bias parameter determining the inflection point) and $\alpha$ (slope) are controlled by the intensity of the background noise. 
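The spike-to-state mapping of Eq.~(1) can be sketched directly; the list-per-neuron representation of spike times and the (implicitly millisecond) units are our choices, not part of the original formulation.

```python
def spikes_to_state(spike_times, t, tau_ref):
    """Binary RV states of eq. (1): z_k(t) = 1 iff neuron k is refractory,
    i.e. iff it spiked within the last tau_ref before time t.
    spike_times is a list of spike-time lists, one per neuron."""
    return [int(any(ts < t < ts + tau_ref for ts in times))
            for times in spike_times]

# Neuron 0 spikes at t = 10; with tau_ref = 5 it encodes state 1 on (10, 15).
print(spikes_to_state([[10.0], []], 12.0, 5.0))  # -> [1, 0]
print(spikes_to_state([[10.0], []], 20.0, 5.0))  # -> [0, 0]
```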
With appropriate settings of synaptic weights $w_{ij}$ and bias parameters $\bar u_k^0$, these networks can be trained to sample from Boltzmann distributions \begin{equation} p(\bs z) \propto \exp[-E(\bs{z})] = \exp \left[ \bs{z}^T \bs{W} \bs{z} / 2 + \bs{z}^T \bs{b} \right] \ , \label{eqn:jointboltzmann} \end{equation} where the weight matrix $\bs{W}$ and the bias vector $\bs{b}$ can be chosen freely. This enables the emulation of Boltzmann machines (BMs) with networks of LIF neurons (Fig.\,\ref{fig:1}\,B). A core assumption of the neural sampling framework is that the membrane potential $u_k$ of a neuron reflects the state $\bs z_{\non k}\xspace$ of all presynaptic neurons at any moment in time: \begin{equation} u_k (\bs z_{\non k}\xspace) = \textstyle\sum_{j \neq k}^n W_{kj} z_j + b_k \ . \label{eqn:uabstract} \end{equation} In particular, this requires that all neurons instantaneously transmit their states (spikes) to all their postsynaptic partners. In any physical system, this assumption is necessarily violated to some degree, since signal transmission can never be instantaneous. In the particular case of accelerated neuromorphic hardware, synaptic transmission delays become even more problematic, as they can be of the same order of magnitude as the state-encoding refractory times themselves. Furthermore, the required equivalence between post-synaptic potential (PSP) durations and refractory states (\ref{eqn:refractoriness},\ref{eqn:uabstract}) can be violated if either of these is unstable. On Spikey, for example, refractory times have relative spike-to-spike variations $\sigma_{{\tau_\mathrm{ref}}\xspace}/{\tau_\mathrm{ref}}\xspace$ between \SI{2}{\percent} and \SI{20}{\percent}. These two kinds of timing mismatch pose a fundamental problem to the implementation of spiking BMs in accelerated analog substrates.
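In the ideal, mismatch-free case described by Eqs.~(2)--(4), the network dynamics reduce to Gibbs sampling from the Boltzmann distribution (3). The following sketch mimics this abstractly in software (it is not a simulation of the LIF network itself), using a hypothetical 2-unit machine to compare sampled and exact distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(u):
    return 1.0 / (1.0 + np.exp(-u))

def gibbs_sample(W, b, n_steps=50000):
    """Gibbs sampler for p(z) ~ exp(z.W.z/2 + z.b) -- an abstract
    stand-in for the spiking dynamics in the ideal case."""
    k = len(b)
    z = np.zeros(k)
    counts = {}
    for _ in range(n_steps):
        i = rng.integers(k)
        u = W[i] @ z - W[i, i] * z[i] + b[i]   # membrane potential, eq. (4)
        z[i] = float(rng.random() < sigma(u))  # logistic activation, eq. (2)
        key = tuple(z.astype(int))
        counts[key] = counts.get(key, 0) + 1
    return {s: c / n_steps for s, c in counts.items()}

# Hypothetical 2-unit BM: symmetric weights, zero diagonal.
W = np.array([[0.0, 0.8], [0.8, 0.0]])
b = np.array([-0.5, 0.2])
sampled = gibbs_sample(W, b)

# Exact target distribution, eq. (3), for comparison.
states = [(i, j) for i in (0, 1) for j in (0, 1)]
unnorm = {s: np.exp(0.5 * np.array(s) @ W @ np.array(s) + np.array(s) @ b)
          for s in states}
Z = sum(unnorm.values())
target = {s: p / Z for s, p in unnorm.items()}
```

With enough update steps the sampled frequencies converge to the target distribution, mirroring the decay of the Kullback-Leibler divergence in Fig.~1\,F.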
Here, we alleviate the issue of substrate-induced timing mismatches by using a recurrent network structure that represents each RV with a small subnetwork, called a sampling unit. The subnetworks are built such that refractory times can be well controlled and, in addition, intra-unit refractory states and inter-unit state communication across the network are inseparably coupled (Fig.\,\ref{fig:1}\,C). \begin{figure*} \centering \begin{tikzpicture} \draw[use as bounding box,inner sep=0pt,anchor=south west] node {\includegraphics{fig2.pdf}}; \begin{scope}[ shift={(0.1cm,0.1cm)}, font={\scriptsize \sffamily}, ->, shorten >=2pt, shorten <=2pt, >=latex, anchor=south west ] \def0.9cm{1.225cm} \def.9{.9} \definecolor{viscol}{HTML}{008000} \colorlet{hidcol}{orange!75} \colorlet{labcol}{blue!75} \tikzstyle{neuron}=[circle,minimum size=15pt,inner sep=0pt] \tikzstyle{visible neuron}=[neuron, fill=viscol] \tikzstyle{hidden neuron}=[neuron, fill=hidcol] \tikzstyle{label neuron}=[neuron, fill=labcol] \def2{2} \def3{3} \def4{4} \pgfmathsetmacro{\max}{max(4, 3, 2)} \pgfmathsetmacro{\texthoroffset}{1sp} \foreach \x in {1,...,4} \pgfmathparse{((\max - 4)*0.5 + \x - 1)*.9} \node[visible neuron] (V\x) at (\pgfmathresult,0) {}; \foreach \x in {1,...,3} { \pgfmathparse{((\max - 3)*0.5 + \x - 1)*.9} \node[hidden neuron] (H\x) at (\pgfmathresult, 0.9cm) {}; } \foreach \x in {1,...,2} { \pgfmathparse{((\max - 2)*0.5 + \x - 1)*.9} \node[label neuron] (L\x) at (\pgfmathresult, 0.9cm*2) {}; } \foreach \source in {1,...,4} \foreach \dest in {1,...,3} \draw[<->] (V\source) -- (H\dest); \foreach \source in {1,...,3} \foreach \dest in {1,...,2} \draw[<->] (H\source) -- (L\dest); \node[anchor=west,right=\texthoroffset of V4, align=center, text=viscol] (vl) {visible\\(144)}; \node[anchor=west,right=\texthoroffset of H3, align=center,text=hidcol] {hidden\\(50)}; \node[anchor=west,right=\texthoroffset of L2, align=center,text=labcol] {label\\(6)}; \end{scope} \end{tikzpicture} \caption{ Robustness 
from structure in hierarchical networks. \tb{(A)} Hierarchical spiking network emulating an RBM. \mbox{\tb{(B)--(E)}} Effects of hardware-induced distortions on the classification rate of the network. Each test image was presented for a duration of \SI{1000}{\milli\second}. Green: training data, blue: test data, brown: mean value and range of distortions measured on Spikey. Error bars represent trial-to-trial variations. \tb{(B)} Synaptic transmission delays. \tb{(C)} Spike-to-spike variability of refractory times. \tb{(D)} Membrane time constant. \tb{(E)} Synaptic weight discretization. \tb{(F)} Comparison of classification rates in three scenarios: software simulation of the ideal, distortion-free case (black), software simulation of combined hardware-induced distortions as measured on Spikey (purple), hybrid emulation with the hidden layer on Spikey (green). Light colors for training data, dark colors for test data. } \label{fig:2} \end{figure*} Sampling units consist of a single principal neuron (PN) and a small synfire chain of excitatory populations (EPs) and inhibitory populations (IPs). The EPs of each stage project to both populations in the following stage, thereby relaying an activity pulse in the forward direction. The IPs project backwards, ensuring that neurons from previous stages only spike once. Additionally, all IPs and the last EP also project onto the PN with large weights. Therefore, after the PN elicits a spike, the IPs sequentially pull its membrane potential close to the inhibitory reversal potential, prohibiting it from firing as long as the synfire chain is active (Fig.\,\ref{fig:1}\,D). When the pulse has reached the final synfire stage, its EP pulls the PN's membrane potential back to its equilibrium value. The total duration of this pseudo-refractory period can then be controlled by the synfire chain length and parameters. In addition to controlling refractoriness, the synfire chains also mediate the interaction between PNs.
The connections from a synfire chain to other PNs simply mirror its connections to its own PN. This guarantees a match between effective interaction durations and pseudo-refractory periods. The correct synapse parameter settings (weights, time constants) are determined in an iterative training procedure \cite{petrovici2015fast}. The results of a hardware emulation can be seen in Fig.\,\ref{fig:1}\,E,\,F. A network of four sampling units was trained on Spikey to sample from a target Boltzmann distribution. After training, the network needs about \SI{e4}{\milli\second} of biological time to achieve a good match between the sampled and the target distribution. Considering the hardware acceleration factor of $10^4$, this happens in \SI{1}{\milli\second} of wall-clock time. \section{Robust hierarchical networks} \label{sec:hierarchical} As discussed in the previous section, sampling LIF networks are ostensibly sensitive to different types of hardware-induced timing mismatch. In this subsection, we discuss how a sampling network model can be made robust by imposing a hierarchy onto the network structure \cite{petrovici2016robustness}. This is the equivalent of moving from general BMs to restricted BMs (RBMs). In addition to making their operation more robust, as we discuss below, this hierarchization has the distinct advantage of significantly speeding up training. To emulate an RBM, we construct a hierarchical LIF network model with 3 layers: a visible layer representing the data, a hidden layer that learns particular motifs in the data and a label layer for classification (Fig.\,\ref{fig:2}\,A). 
The network was trained with a contrastive learning rule \begin{align} \Delta W_{ij} &\propto {\expect{z_i z_j}}_\mathrm{data} - \expect{z_i z_j}_\mathrm{model} \ , \label{eqn:contrastivew} \\ \Delta b_i &\propto \expect{z_i}_\mathrm{data} - \expect{z_i}_\mathrm{model} \label{eqn:contrastiveb} \end{align} on a modified subset of the MNIST dataset ($\expect{\cdot}_\mathrm{data}$ and $\expect{\cdot}_\mathrm{model}$ represent expectation values when clamping training data and when the network samples freely, respectively). Due to hardware limitations, we used a small network and dataset (6 digits, 12$\times$12 pixels, each with 20 training and 20 test samples) for this proof-of-principle experiment. The specific influence of various hardware-induced distortion mechanisms was first studied in complementary software simulations. These simulations show that the classification accuracy of the network is essentially unaffected by the types of timing mismatch discussed above, even when their amplitudes are much larger than those measured on our neuromorphic substrate (Fig.\,\ref{fig:2}\,B,\,C). In order to facilitate a meaningful comparison with hardware experiments, two further distortion mechanisms were studied. An upper limit to the membrane conductance can prevent neurons from entering a high-conductance state, thereby distorting their activation functions away from their ideal logistic shape (\ref{eqn:actfctlif}) and consequently modifying the sampled distribution. However, within the range achievable on Spikey, the effect on the classification accuracy remains small (Fig.\,\ref{fig:2}\,D). The largest effect (about \SI{5.6}{\percent} regression in classification accuracy compared to ideal software simulations) stems from the discretization of synaptic weights, which have a resolution of 4 bits on Spikey (Fig.\,\ref{fig:2}\,E).
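Contrastive rules of the form (5)--(6) are commonly approximated in software by contrastive divergence, where a single Gibbs reconstruction step stands in for the free-running model phase. The following sketch (with a hypothetical toy RBM; sizes and hyperparameters are illustrative, not those used on Spikey) shows one such update.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma(u):
    return 1.0 / (1.0 + np.exp(-u))

def cd1_update(W, b_v, b_h, v_data, eta=0.1):
    """One contrastive-divergence (CD-1) step approximating eqs. (5)-(6).
    W has shape (n_visible, n_hidden); v_data is a batch of binary rows."""
    # clamped ("data") phase
    p_h = sigma(v_data @ W + b_h)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    # free ("model") phase, approximated by one Gibbs reconstruction step
    p_v = sigma(h @ W.T + b_v)
    v = (rng.random(p_v.shape) < p_v).astype(float)
    p_h2 = sigma(v @ W + b_h)
    n = len(v_data)
    W += eta * (v_data.T @ p_h - v.T @ p_h2) / n   # <z_i z_j>_data - <z_i z_j>_model
    b_v += eta * (v_data - v).mean(axis=0)
    b_h += eta * (p_h - p_h2).mean(axis=0)
    return W, b_v, b_h

# Toy RBM: 6 visible units, 3 hidden units, one repeated binary pattern.
n_v, n_h = 6, 3
W = 0.01 * rng.standard_normal((n_v, n_h))
b_v, b_h = np.zeros(n_v), np.zeros(n_h)
data = np.tile(np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0]), (20, 1))
for _ in range(200):
    W, b_v, b_h = cd1_update(W, b_v, b_h, data)
```

After training, the learned weights reconstruct the clamped pattern with high probability, which is the same attractor behaviour exploited during classification.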
The robustness of this hierarchical architecture to timing mismatches is a consequence of both the training procedure and the information flow within the network. Training has the effect of creating a steep energy landscape $E(\bs z)$ (\ref{eqn:jointboltzmann}), for which deep energy minima, corresponding to particular learned digits, represent strong attractors, in which the system is placed during classification by clamping of the visible layer. Throughout the duration of such an attractor, visible neurons represent pixels of constant intensity encoded in their spiking probability, thereby entering a quasi-rate-based information representation regime. Therefore, the information they provide to the hidden layer is unaffected by temporal shifts or zero-mean noise. As they outnumber the hidden neurons 24:1, they effectively control the state of the hidden layer. The hidden layer neurons themselves are unaffected by timing mismatches because they are not interconnected. Second-order (hidden$\rightarrow$label$\rightarrow$hidden) lateral interactions are indeed distorted, but as they are mediated by only few label neurons, their relative strength is too weak to play a critical role. These findings are corroborated by experiments on Spikey (Fig.\,\ref{fig:2}\,F). Due to the system's limitations, we used a hybrid approach, with the visible and label layers implemented in software and the hidden layer running on Spikey. In the ideal, undistorted case, the LIF network had a classification performance of \SI{86.6 +- 1.7}{\percent} (\SI{93.4 +- 0.9}{\percent}) on the test (training) set. This was reduced to \SI{78.1 +- 1.5}{\percent} (\SI{90.7 +- 1.7}{\percent}) when all distortive effects were simultaneously present in software simulations. In comparison, the hybrid emulation showed a performance of \SI{80.7 +- 2.3}{\percent} (\SI{89.8 +- 1.8}{\percent}), which closely matched the software results within the trial-to-trial variability. 
We stress that this was a result of direct-to-hardware mapping, with no additional training to compensate for hardware-induced distortions (as compared to Sec.\,\ref{sec:itl}). \section{In-the-loop training} \label{sec:itl} \begin{figure} \centering \begin{tikzpicture} \draw[use as bounding box,inner sep=0pt] node {\includegraphics[width=\columnwidth]{fig3.pdf}}; \begin{scope}[ shift={(-1.5in,-0.2cm)}, font={\scriptsize \sffamily}, ->, shorten >=2pt, shorten <=2pt, >=latex ] \def0.9cm{0.9cm} \def.9{.9} \definecolor{viscol}{HTML}{008000} \colorlet{hidcol}{orange!75} \colorlet{labcol}{blue!75} \tikzstyle{neuron}=[circle,minimum size=13pt,inner sep=0pt] \tikzstyle{visible neuron}=[neuron, fill=viscol] \tikzstyle{hidden neuron}=[neuron, fill=hidcol] \tikzstyle{label neuron}=[neuron, fill=labcol] \def2{2} \def3{3} \def4{4} \pgfmathsetmacro{\max}{max(4, 3, 2)} \pgfmathsetmacro{\texthoroffset}{1sp} \foreach \x in {1,...,4} \pgfmathparse{((\max - 4)*0.5 + \x - 1)*.9} \node[visible neuron] (V\x) at (\pgfmathresult,0) {}; \foreach \x in {1,...,3} { \pgfmathparse{((\max - 3)*0.5 + \x - 1)*.9} \node[hidden neuron] (H1\x) at (\pgfmathresult, 0.9cm) {}; } \foreach \x in {1,...,3} { \pgfmathparse{((\max - 3)*0.5 + \x - 1)*.9} \node[hidden neuron] (H2\x) at (\pgfmathresult, 0.9cm*2) {}; } \foreach \x in {1,...,2} { \pgfmathparse{((\max - 2)*0.5 + \x - 1)*.9} \node[label neuron] (L\x) at (\pgfmathresult, 0.9cm*3) {}; } \foreach \source in {1,...,4} \foreach \dest in {1,...,3} \draw[->] (V\source) -- (H1\dest); \foreach \source in {1,...,3} \foreach \dest in {1,...,3} \draw[->] (H1\source) -- (H2\dest); \foreach \source in {1,...,3} \foreach \dest in {1,...,2} \draw[->] (H2\source) -- (L\dest); \node[anchor=west,right=\texthoroffset of V4, align=center, text=viscol] (vl) {visible\\(100)}; \node[anchor=west,right=\texthoroffset of H13, align=center,text=hidcol] {hidden\\(15)}; \node[anchor=west,right=\texthoroffset of H23, align=center,text=hidcol] {hidden\\(15)}; 
\node[anchor=west,right=\texthoroffset of L2, align=center,text=labcol] {label\\(5)}; \end{scope} \begin{scope}[ font={\scriptsize \sffamily}, line width=2pt, -{>[flex=0.75]}, >=latex, shift={(2.cm,1.cm)}, line width=1pt ] \def 1.2cm {1.2cm} \pgfmathsetmacro{\aangle}{acos(1.0/3.0)} \def \angleoffset {90} \def{{"backpropagation",0,28,8},{"weight updates",\aangle,8,8},{"4\,bit weight discretization",180-\aangle,8,30},{"BrainScaleS",180,30,8},{"spikes",180+\aangle,8,8},{"ANN activity",360-\aangle,10,28}}{{{"backpropagation",0,28,8},{"weight updates",\aangle,8,8},{"4\,bit weight discretization",180-\aangle,8,30},{"BrainScaleS",180,30,8},{"spikes",180+\aangle,8,8},{"ANN activity",360-\aangle,10,28}}} \pgfmathsetmacro{\numdata}{dim({{"backpropagation",0,28,8},{"weight updates",\aangle,8,8},{"4\,bit weight discretization",180-\aangle,8,30},{"BrainScaleS",180,30,8},{"spikes",180+\aangle,8,8},{"ANN activity",360-\aangle,10,28}})} \foreach \s [count=\i from 0] in {1,...,\numdata} { \pgfmathsetmacro{\c}{{{"backpropagation",0,28,8},{"weight updates",\aangle,8,8},{"4\,bit weight discretization",180-\aangle,8,30},{"BrainScaleS",180,30,8},{"spikes",180+\aangle,8,8},{"ANN activity",360-\aangle,10,28}}[\i][1] + \angleoffset} \node[fill=white,inner sep=1pt] (\s) at (\c:1.2cm) {\pgfmathparse{{{"backpropagation",0,28,8},{"weight updates",\aangle,8,8},{"4\,bit weight discretization",180-\aangle,8,30},{"BrainScaleS",180,30,8},{"spikes",180+\aangle,8,8},{"ANN activity",360-\aangle,10,28}}[\i][0]}\pgfmathresult}; \pgfmathsetmacro{\arcstart}{mod(\c+{{"backpropagation",0,28,8},{"weight updates",\aangle,8,8},{"4\,bit weight discretization",180-\aangle,8,30},{"BrainScaleS",180,30,8},{"spikes",180+\aangle,8,8},{"ANN activity",360-\aangle,10,28}}[\i][2],360)} \pgfmathsetmacro{\arcend}{{{"backpropagation",0,28,8},{"weight updates",\aangle,8,8},{"4\,bit weight discretization",180-\aangle,8,30},{"BrainScaleS",180,30,8},{"spikes",180+\aangle,8,8},{"ANN 
activity",360-\aangle,10,28}}[mod(\s,\numdata)][1]+\angleoffset-{{"backpropagation",0,28,8},{"weight updates",\aangle,8,8},{"4\,bit weight discretization",180-\aangle,8,30},{"BrainScaleS",180,30,8},{"spikes",180+\aangle,8,8},{"ANN activity",360-\aangle,10,28}}[\i][3]} \draw (\arcstart:1.2cm) arc (\arcstart:\arcend:1.2cm); } \def .5cm {.5cm} \node[anchor=base] (7) [left=.5cm of 4] {MNIST}; \node[anchor=base] (8) [right=.5cm of 4] {prediction}; \draw (7) -- (4); \draw (4) -- (8); \node (9) [below=-2pt of 7] {}; \draw[color=purple!75] (9) -- (9-|8.south); \node[below=.1cm of 4,color=purple!75] {forward pass}; \node[above=3pt of 1,color=purple!75] {backward pass}; \begin{scope}[on background layer] \pgfmathsetmacro{\arcstart}{-35} \pgfmathsetmacro{\arcend}{215} \def1.2 {1.2} \draw[line width=2pt,color=purple!75] (\arcstart:1.2*1.2cm) arc (\arcstart:\arcend:1.2*1.2cm); \end{scope} \end{scope} \end{tikzpicture} \caption{In-the-loop training. \tb{(A)} Structure of the feed-forward, rate-based deep spiking network. \tb{(B)} Schematic of the training procedure with the hardware in the loop. \tb{(C)} Classification accuracy over training step. Left: software training phase, right: hardware in-the-loop training phase. } \label{fig:3} \end{figure} In Sec.\,\ref{sec:sampling}, we used a training procedure based on (\ref{eqn:contrastivew},\ref{eqn:contrastiveb}) to optimize the hardware-emulated sampling network. Such simple contrastive learning rules can yield very good classification performance in networks of spiking neurons \cite{leng2016spiking}. Another class of highly successful learning algorithms is based on error backpropagation. This, however, requires precise knowledge of the gradient of a cost function with respect to the network parameters, which is difficult to achieve on analog hardware. 
We propose a training method for hardware-emulated networks that circumvents this problem by using the cost function gradient with respect to the parameters of an ANN as an approximation of the true gradient with respect to the hardware parameters \cite{schmitt2016classification}. A similar method has previously been used for network training on a digital neuromorphic device \cite{esser2016convolutional}. Our training schedule consisted of two phases. In the first phase, an ANN was trained in software on a modified subset of the MNIST dataset (5 digits, 10$\times$10 pixels, with a total of 30690 training and 5083 test samples) using a simple cost function with regularization \begin{equation} \textstyle C(\bs W) = \sum_{s \in S} \left\Vert \bs{\tilde{y}}_{s} - \bs{\hat{y}}_{s} \right\Vert ^2 + \sum_{kl} \tfrac{1}{2}\lambda W_{kl}^2 \label{eq:cost} \end{equation} and backpropagation with momentum \cite{qian1999momentum} \begin{align} \Delta W_{kl} &\leftarrow \eta \nabla_{W_{kl}} C(\bs W) + \gamma \Delta W_{kl} \ , \\ W_{kl} &\leftarrow W_{kl} - \Delta W_{kl} \ . \end{align} Here, $\bs{\tilde{y}}_{s}$ and $\bs{\hat{y}}_{s}$ denote the target and network state of the label layer, respectively, and the sum runs over all samples within a minibatch $S$. The learned parameters were then translated to a feed-forward spiking neural network (Fig.\,\ref{fig:3}\,A). Here, the BrainScaleS wafer-scale system \cite{schemmel2010waferscale} was used for network emulation. Due to hardware imperfections, the ANN classification accuracy of \SI{97}{\percent} dropped to \asymunc{72}{12}{10}{\si{\percent}} after mapping the network to the hardware substrate. In the second training phase, the hardware-emulated network was trained in the loop (Fig.\,\ref{fig:3}\,B) for several iterations. Parameter updates were calculated using the same gradient descent rule as in the ANN, but the activation of all layers was measured on the hardware.
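The cost (10) and momentum update can be sketched on a toy linear model; in the in-the-loop setting, `y_pred` would instead be measured on the hardware while the gradient is still computed from the software description. All names and parameter values here are illustrative.

```python
import numpy as np

def cost(W, y_pred, y_target, lam=1e-3):
    """Squared error plus L2 regularization, as in eq. (10)."""
    return np.sum((y_target - y_pred) ** 2) + 0.5 * lam * np.sum(W ** 2)

def momentum_step(W, dW_prev, grad, eta=0.1, gamma_m=0.5):
    """The update rule above: dW <- eta*grad + gamma*dW_prev; W <- W - dW."""
    dW = eta * grad + gamma_m * dW_prev
    return W - dW, dW

# Toy linear "network" y = W x, one training sample. On hardware, y_pred
# would come from measured spike rates rather than from W @ x.
rng = np.random.default_rng(2)
x = np.array([1.0, -2.0, 0.5])
y_target = np.array([0.3, -0.7])
W = rng.standard_normal((2, 3))
dW = np.zeros_like(W)
for _ in range(200):
    y_pred = W @ x
    grad = -2.0 * np.outer(y_target - y_pred, x) + 1e-3 * W  # dC/dW
    W, dW = momentum_step(W, dW, grad)
```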
The rationale behind this approach is that the activation function of an ANN unit is sufficiently similar to that of an LIF neuron to allow using the computed gradient as an approximation of the true hardware gradient. As seen in Fig.\,\ref{fig:3}\,C, this assumption is validated by the post-training performance of the hardware-emulated network: after 40 training iterations, the classification accuracy increased back to \asymunc{95}{1}{2}{\si{\percent}}. \section{Discussion} We have reviewed three strategies for emulating performant spiking network models in analog hardware. The proposed methods tackled the problems induced by substrate-inherent imperfections from different (and complementary) angles. The three strategies were implemented and evaluated with two different analog hardware systems. An essential advantage of the employed neuromorphic platforms is provided by their accelerated dynamics. Despite possible losses in performance compared to precisely tunable software solutions, accelerated analog neuromorphic systems have the potential to vastly outperform classical simulations of neural networks in terms of both speed and energy consumption \cite{schmitt2016classification} -- an invaluable advantage for on-line learning of complex, real world data sets. The network in Sec.\,\ref{sec:sampling}, for example, is already faster than equivalent software simulations (NEST 2.2.2 default build, single-threaded, Intel Core i7-2620M) by several orders of magnitude. The studied networks serve as a proof of principle and are scalable to larger network sizes. Future research will have to address whether the results obtained for these small networks still hold as training tasks increase in complexity. Furthermore, the generative properties of the described hierarchical LIF networks remain to be studied. 
Another major step forward will be taken once training can take place entirely on the hardware, thereby rendering sequential reconfigurations between individual experiments unnecessary. Future generations of the used systems will feature on-board plasticity processor units, with early-stage experiments already showing promising results \cite{friedmann2016demonstrating}. \section*{Acknowledgments} The first five authors contributed equally to this work. This research was supported by EU grants \#269921 (BrainScaleS), \#604102 and \#720270 (Human Brain Project) and the Manfred Stärk Foundation. \bibliographystyle{IEEEtran} \input{main.bbl} \end{document}
\section{Introduction} Let $H$ be a fixed graph, and let $P(H)$ denote the set of all permutations of $V(H)$. Let $G$ be a graph with an edge-labeling $\ell : E(G) \to P(H) \times P(H)$. Each edge $xy$ of $G$ is assumed to have a label $(\pi,\rho)$ where $\pi$ is viewed as associated with $x$ and $\rho$ associated with $y$. An $\ell$-{\em correspondence homomorphism} of $G$ to $H$ is a mapping $f : V(G) \to V(H)$ such that $xy \in E(G)$ with $\ell(xy)=(\pi,\rho)$ implies $\pi(f(x))\rho(f(y)) \in E(H)$. When $H$ is the irreflexive (i.e., loopless) complete graph on $k$ vertices, correspondence homomorphisms to $H$ have been called {\em correspondence $k$-colourings}, and applied to answer a question of Borodin on choosability of planar graphs with certain excluded cycles \cite{dp}. These colourings have also been called DP-colourings, in honour of the authors of \cite{dp}, and have proved quite interesting from a graph theoretic point of view, cf. \cite{xuding,wang} and the references therein. To emphasize this connection, we sometimes call correspondence homomorphisms to $H$ also {\em correspondence $H$-homomorphisms}, or {\em correspondence $H$-colourings}. The {\em correspondence $H$-homomorphism problem} takes as input a graph $G$ with labeling $\ell$ and asks whether or not an $\ell$-correspondence homomorphism to $H$ exists. In the {\em correspondence $H$-list-homomorphism problem}, $G$ is also equipped with {\em lists} $L(x), x \in V(G)$, each a subset of $V(H)$, and the $\ell$-correspondence homomorphism $f$ also has to satisfy $f(x) \in L(x), x \in V(G)$. Clearly, these problems are generalizations of the well-known $H$-homomorphism (i.e., $H$-colouring) and list $H$-homomorphism (list $H$-colouring) problems respectively. Those problems are obtained as special cases when $\ell$ chooses two identity permutations on each edge of $G$. They have been studied in \cite{fh,fhh,fhh2,hn}, cf. \cite{hombook}.
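As a concrete restatement of the definition, a brute-force checker for small instances might look as follows. The encodings (edges as pairs, permutations as dicts on $V(H)$, $\ell$ as a dict on the edges of $G$) are our own, and the exponential search over all maps is only meant to make the definition executable, not to be efficient.

```python
# Brute-force test for the existence of an ell-correspondence homomorphism
# of G to H, directly from the definition above. Encodings are ours:
# permutations are dicts on V(H); ell[(x, y)] = (pi, rho), with pi
# associated with x and rho with y.

from itertools import product

def has_corr_hom(G_vertices, G_edges, H_vertices, H_edges, ell):
    def h_edge(u, v):
        return (u, v) in H_edges or (v, u) in H_edges
    for values in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, values))
        if all(h_edge(pi[f[x]], rho[f[y]])
               for (x, y) in G_edges
               for (pi, rho) in [ell[(x, y)]]):
            return True
    return False

# With identity permutations on every edge this is ordinary H-colouring:
# a triangle admits no correspondence homomorphism to the irreflexive K2.
ident = {0: 0, 1: 1}
tri = [("a", "b"), ("b", "c"), ("a", "c")]
ell = {e: (ident, ident) for e in tri}
tri_to_k2 = has_corr_hom(["a", "b", "c"], tri, [0, 1], {(0, 1)}, ell)
```

With identity labels throughout, the checker reproduces the classical $H$-colouring special case mentioned above.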
There is another way to think of the correspondence homomorphism problem, one that is often helpful in the proofs and illustrations. Let $G, \ell$ be an instance of the correspondence $H$-homomorphism problem. Construct a new graph $G^*$ by replacing each vertex $x$ of $G$ with its own separate copy $V_x$ of the set $V(H)$, with the following edges. If $xy$ is an edge of $G$ labelled by $\ell(xy) = (\pi,\rho)$, we join $V_x$ and $V_y$ with the edges from $\pi(u) \in V_x$ to $\rho(v) \in V_y$ for all edges $uv$ of $H$. (Recall that $\pi$ is a permutation of $V(H)$, and we view it as also a permutation of $V_x$, and similarly for $\rho$ and $V_y$.) Then an $\ell$-correspondence homomorphism corresponds precisely to a transversal of the sets $V_x, x \in V(G)$ (a choice of exactly one vertex from each $V_x$) which induces in $G^*$ an isomorphic copy of $G$. Consider the edges between two adjacent sets $V_x, V_y,$ $xy \in E(G)$. Recall the bipartite graph $H'$ {\em associated with} $H$, in which each $v \in V(H)$ yields two vertices $v_1, v_2$ in $H'$ and each edge $uv \in E(H)$ yields two edges $u_1v_2, u_2v_1$ in $E(H')$. The edges in $G^*$ between adjacent sets $V_x, V_y$ form an isomorphic copy of $H'$, with the part in $V_x$ permuted according to $\pi$, and the part in $V_y$ according to $\rho$. The label $\ell(xy)$ specifies the way $H'$ is laid out between the sets $V_x$ and $V_y$. In this note we focus on the reflexive case, i.e., we assume that $H$ has a loop at every vertex. While the standard $H$-homomorphism problem is trivial for reflexive graphs, the correspondence $H$-homomorphism problem turns out to be more interesting. Moreover, it makes sense to consider inputs $G$ that may have loops and parallel edges, as the permutation constraints on these may introduce significant restrictions. We will use this freedom in Section 4, to simplify NP-completeness proofs.
We will also show, in Section 2, that the complexity of the problem for graphs with loops and parallel edges allowed or forbidden is the same. A {\em reflexive clique} is a complete graph with all loops, a {\em reflexive co-clique} is a set of disconnected loops with no other edges. A disjoint union of cliques $K_p$ and $K_q$ will be denoted by $K_p \cup K_q$ or $2K_p$ if $p=q$, and similarly for more than two cliques. We use the same symbols for reflexive cliques and irreflexive cliques; which is intended will always be specified or clear from the context. Our main result is the following dichotomy classification of both the correspondence homomorphism problem and the correspondence list homomorphism problem. \begin{theorem}\label{main} Suppose $H$ is a reflexive graph. If $H$ is a reflexive clique, a reflexive co-clique, or a reflexive $2K_2$, then the correspondence $H$-homomorphism problem is polynomial-time solvable. In all other cases, the correspondence $H$-homomorphism problem is NP-complete. If $H$ is a reflexive clique or a reflexive co-clique, then the correspondence $H$-list-homomorphism problem is polynomial-time solvable. In all other cases the correspondence $H$-list-homomorphism problem is NP-complete. \end{theorem} In the last section we state the analogous result for general graphs (with possible loops) and for bipartite graphs, and in the Appendix we provide a rough sketch of the proofs. \section{Loops and Parallel Edges} Suppose $H$ is a fixed reflexive graph, and $G$ is a graph with loops and parallel edges allowed, with an edge-labeling $\ell$. Thus there are labels on loops, and parallel edges may have different labels. We will construct a modified simple graph $G'$ (without loops and parallel edges), and a modified edge-labeling $\ell'$ on $G'$ such that $G$ has an $\ell$-correspondence homomorphism to $H$ if and only if $G'$ has an $\ell'$-correspondence homomorphism to $H$.
The changes from $G, \ell$ to $G', \ell'$ proceed one loop and one pair of parallel edges at a time. Suppose $G$ has a loop $xx$ with label $\ell(xx) = (\pi(x),\rho(x))$. Replace $x$ by a clique with vertices $x_0, x_1, x_2, \dots, x_n$ where $n=|V(H)|$. Each edge $x_ix_j$ will have the same label $\ell(x_ix_j) = (\pi(x),\rho(x))$. Each vertex $x_i$ will have the same adjacencies, with the same labels, as $x$ did. Call the resulting graph $G_1$ and the resulting labeling $\ell_1$. Then we claim that $G$ has an $\ell$-correspondence homomorphism to $H$ if and only if $G_1$ has an $\ell_1$-correspondence homomorphism to $H$. One direction is obvious: if $f$ is an $\ell$-correspondence homomorphism of $G$ to $H$, then the same mapping, extended to all copies $x_i$ of $x$, is an $\ell_1$-correspondence homomorphism of $G_1$ to $H$. Conversely, suppose that $f$ is an $\ell_1$-correspondence homomorphism of $G_1$ to $H$. This gives $n+1$ values $f(x_i)$ among the $n$ possible images in $V(H)$. Thus the most frequent value, call it $v$, must appear on at least two distinct vertices $x_i, x_j$, and therefore $\pi(x)(v)\rho(x)(v)$ is an edge of $H$. Thus the mapping $F$ which assigns to $x$ the most frequent value amongst the $f(x_i)$ and equals $f$ on all other vertices is an $\ell$-correspondence homomorphism of $G$ to $H$. Parallel edges are removed by a similar trick using expanders instead of cliques. To be specific, assume that $(xy), (xy)'$ are two different parallel edges of $G$ joining the same vertices $x$ and $y$, with labels $\ell((xy)) = (\pi(x),\rho(y))$ and $\ell'((xy)') = (\pi'(x),\rho'(y))$. Replace $x$ and $y$ by a large number $N$ of vertices $x_i, y_j$, each having the same adjacencies and labels to other vertices as $x, y$ respectively, and all edges $x_iy_j$ for all $i$ and $j$. Some of the edges $x_iy_j$ are labelled by $\ell((xy))$ and the others by $\ell'((xy)')$.
The set of edges labelled by $\ell((xy))$ defines a bipartite graph $B$ and those labelled by $\ell'((xy)')$ form its bipartite complement $\overline{B}$. With the right choice of $B$ we will be able to conclude that between any large sets of $x_i$'s and $y_j$'s there is at least one edge of $B$ and at least one edge of $\overline{B}$. This allows the above idea of using majority values to work, namely if $f$ is a correspondence homomorphism on the replaced graph, we may define $F(x)$ to be the majority value of $f(x_i)$ and similarly for $F(y)$. Then the two values appear at both an edge labelled by $\ell((xy))$ and an edge labelled by $\ell'((xy)')$, and so $F$ is a correspondence homomorphism on the original graph $G$. There are many known proofs that such expanders $B$ do exist \cite{something}; it is also not hard to see this directly. For instance, if $|V(H)|=n$, let $N > n \log_2 n$ and take for $B$ a random bipartite graph on $N$ versus $N$ vertices. The number of ways of choosing sets of size $$t = \lceil N/n \rceil > 3 \log_2 n$$ for both sides is $A={{N} \choose{t}}^2\leq {(\frac{eN}{t})}^{2t}\leq 2^{2t\log_2 N}$. The probability that the sets will not be joined by a random choice of edges is bounded by $\frac{1}{2^{t^2-1}} < 1/A$, giving a positive probability to the existence of a suitable bipartite graph $B$. \section{Polynomial Cases for Reflexive $H$} If $H$ is a reflexive clique, the correspondence $H$-homomorphism problem can be trivially solved. Each vertex $x$ of the input graph $G$ can be assigned any image, and regardless of the permutation labels $\ell$, the mapping is a correspondence homomorphism. In fact, this observation also solves the correspondence $H$-list-homomorphism problem. If $H$ is a reflexive co-clique, the correspondence $H$-homomorphism problem also has an easy solution, since every choice of an image for a vertex $x$ of $G$ implies a unique image of any adjacent vertex $y$.
Thus for each component of $G$ we may try all images of a particular vertex $x$, and the component can be mapped to $H$ if and only if one of these images produces an $\ell$-correspondence homomorphism. This also works for the correspondence list $H$-homomorphism problem. The most interesting case occurs when $H = 2K_2$. We name the vertices of $H$ by binary 2-strings, $00, 01, 10, 11$. In addition to the loops $00-00, 01-01, 10-10, 11-11$, the two edges of $H$ are, say, $00-01$ and $10-11$. Note that because of symmetry, there are only three essentially different ways to place the two non-loop edges of $H$: this one, with edges $00-01$ and $10-11$, and two more, with edges $00-10$ and $01-11$, or with edges $00-11$ and $01-10$. Consider the alternate view via the auxiliary graph $G^*$ discussed earlier. For each edge $xy$ of an input graph $G$, the edges between $V_x$ and $V_y$ depend only on which of the above possibilities applies to each of $V_x$ and $V_y$, because of the nature of the adjacencies, in the form of two copies of $K_{2,2}$. Moreover, it does not matter which vertex on each side of the $K_{2,2}$ is chosen. (Here we use $K_{2,2}$ to denote the {\em irreflexive} complete bipartite graph with two vertices on each side; see Figure 1.) Let us associate with each vertex $x$ of $G$, two $\{0,1\}$ variables $x_a, x_b$, to be understood as describing the two coordinates of the name of the chosen vertex for $V_x$.
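These variables will be seen to satisfy linear equations modulo two, and the argument ultimately reduces to Gaussian elimination over GF(2); a minimal solver for such systems can make that endpoint concrete. The row encoding below (coefficient bit-list plus right-hand-side bit) is our own assumption, not notation from this note.

```python
# Minimal Gaussian elimination over GF(2). Each row is (coeffs, rhs) with
# coeffs a 0/1 list over the variables and rhs a 0/1 bit; returns one
# solution (free variables set to 0) or None if the system is inconsistent.

def solve_gf2(rows, nvars):
    rows = [(list(a), b) for a, b in rows]
    pivots, r = [], 0
    for c in range(nvars):
        piv = next((i for i in range(r, len(rows)) if rows[i][0][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][0][c]:
                rows[i] = ([u ^ v for u, v in zip(rows[i][0], rows[r][0])],
                           rows[i][1] ^ rows[r][1])
        pivots.append((r, c))
        r += 1
    if any(not any(a) and b for a, b in rows):
        return None            # a row reads 0 = 1: no solution
    x = [0] * nvars
    for pr, pc in pivots:
        x[pc] = rows[pr][1]
    return x

# Variables ordered (x_a, x_b, y_a, y_b): the sample edge constraint
# x_a + y_a + y_b = 1 of Figure 1, together with x_a = 0, forces
# y_a + y_b = 1.
sol = solve_gf2([([1, 0, 1, 1], 1), ([1, 0, 0, 0], 0)], 4)
```

A solvable system corresponds to a choice of one vertex in each $V_x$, i.e., to a correspondence $2K_2$-homomorphism.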
Each of the above partitions of $V(H)$ into two edges can now be described by linear equations modulo two: \begin{itemize} \item $00-01$ and $10-11$ correspond to the equations $x_a = 0$ and $x_a = 1$ \item $00-10$ and $01-11$ correspond to the equations $x_b = 0$ and $x_b = 1$ \item $00-11$ and $01-10$ correspond to the equations $x_a + x_b = 0$ and $x_a + x_b = 1$ \end{itemize} In Figure 1, we have the partition on $V_x$ corresponding to the first bullet, and the partition of $V_y$ corresponding to the second bullet, while the edges of $2K_{2,2}$ join the first part of the partition on $V_x$ with the second part of the partition on $V_y$ and vice versa. To satisfy the constraints of correspondence, we have to make sure that if $x$ selects a vertex in the part with $x_a = 0$, then $y$ selects a vertex in the part with $y_a + y_b = 1$, and if $x$ selects in $x_a = 1$ then $y$ selects in $y_a + y_b = 0$. These constraints can be expressed by the linear equation $x_a + y_a + y_b = 1$. In all the other cases it is also easy to check that there is a linear equation modulo two, which describes the constraints. Thus we have reduced the problem of existence of a correspondence $2K_2$-homomorphism to a system of linear equations modulo two, which can be solved in polynomial time by Gaussian elimination. \begin{figure}[hhhh] \includegraphics[height=6cm]{fig1.pdf} \caption{The edges are described by the equation $x_a + y_a + y_b = 1$ modulo two} \end{figure} It turns out (see the next section) that the list version of the correspondence $2K_2$-homomorphism problem is NP-complete. \section{NP-complete Cases for Reflexive $H$} We begin with the following simple example. Suppose $H$ is $(K_1 \cup K_2)$, the disjoint union of reflexive cliques $K_1$ and $K_2$. Specifically, let $(K_1 \cup K_2)$ have a loop on $a$, and two adjacent loops on $b$ and $c$. \begin{proposition} The correspondence $(K_1 \cup K_2)$-homomorphism problem is NP-complete.
\end{proposition} \begin{proof} We give a reduction from 1-IN-3-SAT (without negated variables). Consider an instance, with variables $v_1, v_2, \dots, v_n$ and triples (of variables) $T_1, T_2, \dots, T_m$. Construct the corresponding instance of the correspondence $H$-homomorphism problem as follows. The instance $G$ will contain vertices $v_1, \dots, v_n$, as well as $T_1, \dots, T_m$, and all edges are of the form $v_iT_j$ with $v_i$ appearing in the triple $T_j$. The variables in the triples are arbitrarily ordered, each $T_j$ having a first, second, and third variable. The edge between $T_j$ and its first variable $v_p$ is labeled so that the special vertex $a$ in $V_{v_p}$ is adjacent to $a$ in $V(T_j)$, the edge between $T_j$ and its second variable $v_q$ is labeled so that the special vertex $a$ in $V_{v_q}$ is adjacent to $b$ in $V(T_j)$, and similarly for the third variable and $c$ in $V(T_j)$. (See Figure 2.) Because of the adjacencies, if any one vertex $x$ is chosen in $V(T_j)$, there is exactly one of its variables $v_p, v_q, v_r$ that has its special vertex $a$ adjacent to $x$. \begin{figure}[hhhh] \includegraphics[height=9cm]{fig2.pdf} \caption{The triple $T_j$ with variables $v_p, v_q, v_r$, considered in that order} \end{figure} We claim that this instance has a correspondence homomorphism if and only if the original instance of 1-IN-3-SAT was satisfiable. Indeed, any satisfying truth assignment sets as true a set of $v_i$'s such that exactly one appears in each $T_j$; hence we can set the value of the vertex $v_i$ to be $a$ whenever the variable $v_i$ was true in the truth assignment (and, say, $b$ otherwise). The value of the vertex $T_j$ will be $a$ if its first variable was true, $b$ if its second variable was true, and $c$ if its third variable was true. It is easy to see that this defines a correspondence $(K_1 \cup K_2)$-homomorphism.
For the converse, observe that a correspondence homomorphism selects one vertex from each $V(T_j)$, which forces exactly one of its variables to select the value $a$, thus defining a satisfying truth assignment. \end{proof} We note that this result implies that the list version of the correspondence $2K_2$-homomorphism problem is also NP-complete, since the correspondence $(K_1 \cup K_2)$-homomorphism problem reduces to the correspondence $2K_2$-list-homomorphism problem. Indeed, lists may be used to restrict the input vertices never to use one of the four vertices of $2K_2$. A similar reduction from 1-IN-t-SAT shows the following fact. \begin{proposition} The correspondence $(K_1 \cup K_t)$-homomorphism problem is NP-complete, for all $t \geq 2$. \end{proposition} The method we use most often is described in the following proposition. \begin{proposition} \label{last} The correspondence $(K_p \cup K_q \cup \dots \cup K_z)$-homomorphism problem reduces to the correspondence $(K_{p+1} \cup K_{q+1} \cup \dots \cup K_z)$-homomorphism problem. \end{proposition} \begin{proof} Given an instance $G$ of the $(K_p \cup K_q \cup \dots \cup K_z)$-homomorphism problem, we create a new graph $G'$ by adding at each vertex $v$ of $G$ a loop $e_v$ (even if $v$ may already have loops), labelled by two permutations $(\pi(e_v),\rho(e_v))$ that forbid one vertex of $K_{p+1}$ and one vertex of $K_{q+1}$. Specifically, suppose we want to forbid a vertex $a$ of $K_{p+1}$ and a vertex $b$ of $K_{q+1}$. Choose $\pi(e_v)$ to be the identity and $\rho(e_v)$ to be the involution exchanging $a$ and $b$. Now we claim that $G'$ has a correspondence $(K_{p+1} \cup K_{q+1} \cup \dots \cup K_z)$-homomorphism if and only if the original graph $G$ has a correspondence $(K_p \cup K_q \cup \dots \cup K_z)$-homomorphism.
Indeed, if $f$ is a correspondence $(K_p \cup K_q \cup \dots \cup K_z)$-homomorphism of $G$, then its image contains neither $a$ nor $b$, and so the added loops create no problem, i.e., $f$ remains a correspondence $(K_{p+1} \cup K_{q+1} \cup \dots \cup K_z)$-homomorphism of $G'$. On the other hand, if $f$ is a $(K_{p+1} \cup K_{q+1} \cup \dots \cup K_z)$-homomorphism of $G'$, the label on $e_v$ ensures that $v$ cannot map to $a$ or $b$ as neither $\pi(e_v)(a) = a$ is adjacent to $\rho(e_v)(a) = b$ nor $\pi(e_v)(b) = b$ is adjacent to $\rho(e_v)(b) = a$. Thus $f$ is also a $(K_p \cup K_q \cup \dots \cup K_z)$-homomorphism. \end{proof} Proposition \ref{last} allows us to prove the NP-completeness of the correspondence $(K_{p+1} \cup K_{q+1} \cup \dots \cup K_z)$-homomorphism problem from the NP-completeness of the correspondence $(K_p \cup K_q \cup \dots \cup K_z)$-homomorphism problem. Let us call the operation of removing a vertex from each of two distinct cliques a {\em pair-deletion}. Thus any $H$ that is a union of reflexive cliques that can be transformed to $K_1 \cup K_2$ by a sequence of pair-deletions yields an NP-complete correspondence $H$-homomorphism problem. We now consider two additional important special cases, starting with $H = 2K_3$ consisting of two reflexive triangles. Note that any pair-deletion of $2K_3$ produces $2K_2$, for which the problem is polynomial-time solvable. \begin{proposition} The correspondence $2K_3$-homomorphism problem is NP-complete. \end{proposition} \begin{proof} Assume $H = 2K_3$ has triangles $abc$ and $a'b'c'$. We add to each vertex $v$ of $G$ two separate loops $e_v, e'_v$ with labels designed to make it impossible to map $v$ to $a$ or $a'$ or $b'$. This reduces the NP-complete correspondence $(K_1 \cup K_2)$-homomorphism problem to the correspondence $2K_3$-homomorphism problem. The first loop $e_v$ will have the label $(\pi(e_v),\rho(e_v))$ where $\pi(e_v)$ is the identity and $\rho(e_v)$ is the involution exchanging $a$ and $a'$.
The second loop $e'_v$ will have the label $(\pi(e'_v),\rho(e'_v))$ where $\pi(e'_v)$ is the identity and $\rho(e'_v)$ is the involution exchanging $a$ and $b'$. Now we claim that the original graph $G$ has a correspondence homomorphism to the subgraph induced on $b, c, c'$ (isomorphic to $K_1 \cup K_2$) if and only if the resulting graph $G'$ has a correspondence $2K_3$-homomorphism. If $f$ is a correspondence $(K_1 \cup K_2)$-homomorphism for the target graph on $b, c, c'$, neither of the loops creates a problem, and $f$ remains a correspondence homomorphism to $H$. The converse follows from the fact that the first loop forbids $a$ and $a'$ for any vertex $v$, since neither $\pi(e_v)(a) = a$ is adjacent to $\rho(e_v)(a) = a'$ nor $\pi(e_v)(a') = a'$ is adjacent to $\rho(e_v)(a') = a$, and similarly the second loop $e'_v$ forbids $a$ and $b'$. \end{proof} The second example is the union of two isolated loops $a, b$ and two adjacent loops $c, d$, i.e., the graph $H = K_1 \cup K_1 \cup K_2$. It also does not admit a pair-deletion that results in $K_1 \cup K_2$. \begin{proposition} The correspondence $(K_1 \cup K_1 \cup K_2)$-homomorphism problem is NP-complete. \end{proposition} \begin{proof} We reduce the NP-complete correspondence $(K_1 \cup K_3)$-homomorphism problem to the correspondence $(K_1 \cup K_1 \cup K_2)$-homomorphism problem. Suppose $G, \ell$ is an instance of the correspondence $(K_1 \cup K_3)$-homomorphism problem with the target graph consisting of the following vertices inducing $K_1 \cup K_3$: an isolated loop at $a$ and a reflexive triangle $bcd$. We form a new graph $G'$ and labeling $\ell'$ by replacing each edge $xy$ of $G$ by a path $xx', x'y', y'y$ so that if $\ell(xy) = (\pi,\rho)$ then $\ell'(xx') = (\pi,\pi')$, $\ell'(x'y') = (\pi,\pi')$, and $\ell'(y'y) = (\pi,\rho')$ where $\pi'$ is obtained by composing $\pi$ with the involution exchanging $b$ and $c$, and $\rho'$ by composing $\rho$ with the same involution of $b$ and $c$. (See Figure 3).
We observe that the path $xx', x'y', y'y$ admits a correspondence homomorphism to $K_1 \cup K_1 \cup K_2$ taking $x$ and $y$ to any of the pairs $aa, bb, cc, dd, bc, cb, bd, db, cd, dc$, but never $a$ with another vertex. It is easy to see that this implies that $G$ admits an $\ell$-correspondence homomorphism to $K_1 \cup K_3$ on $a, b, c, d$ if and only if $G'$ admits an $\ell'$-correspondence homomorphism to $K_1 \cup K_1 \cup K_2$ with edge $cd$ (and all loops). (This is a correspondence version of the ``indicator construction'' from \cite{hn,hombook}, where the method is discussed in more detail.) \begin{figure}[hhhh] \includegraphics[height=6cm]{fig3.pdf} \caption{A replacement for the edge $xy$ labeled by $\pi=a'b'c'd'$ and $\rho=a''b''c''d''$} \end{figure} It is now easy to check that any union of reflexive cliques other than a single clique, a union of disjoint $K_1$'s (i.e., a co-clique), or $2K_2$, can be transformed by pair-deletions to one of the NP-complete cases $K_1 \cup K_2, K_1 \cup K_1 \cup K_2, 2K_3$, and hence they are all NP-complete. \end{proof} Now consider a reflexive target graph $H$ that is not a union of cliques. The {\em square} $H^2$ of a graph $H$ has the same vertex-set $V(H^2)=V(H)$, and two vertices are adjacent in $H^2$ if and only if they have distance at most two in $H$. The following observation is useful. \begin{proposition}\label{indi} The correspondence $H^2$-homomorphism problem reduces to the correspondence $H$-homomorphism problem. \end{proposition} \begin{proof} Let $G'$ be obtained from $G$ by subdividing each edge $xy$ into two edges $xz, zy$. If $\ell(xy) = (\pi,\rho)$ then the label of $xz$ is $(\pi,1)$, and the label of $zy$ is $(1,\rho)$. Now it can be seen that $G$ has a correspondence $H^2$-homomorphism if and only if $G'$ has a correspondence $H$-homomorphism. (We again apply the logic of the indicator construction \cite{hombook}, since the path $xz, zy$ can map $x$ and $y$ to any edge of $H^2$.)
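For concreteness, the square used in this construction is easy to compute. The sketch below uses our own dict-of-sets adjacency encoding and tracks only the non-loop adjacencies, since a reflexive $H$ simply keeps all of its loops in $H^2$.

```python
# Computing the square H^2 of a graph H: u and v are adjacent in H^2 iff
# their distance in H is at most two. Dict-of-sets adjacency (our own
# encoding); loops are suppressed here for simplicity.

def square(adj):
    sq = {v: set(neigh) for v, neigh in adj.items()}
    for v, neigh in adj.items():
        for u in neigh:
            sq[v] |= adj[u]   # everything at distance two via u
        sq[v].discard(v)      # drop the loop created by going out and back
    return sq

# The path 0-1-2-3 squares to the path plus the chords 0-2 and 1-3.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
sq = square(path)
```

Iterating this operation quickly produces a graph of diameter two, which is how the reduction is applied to connected $H$ below.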
\end{proof} It follows that if $H$ is not connected, we can deduce the NP-completeness of the correspondence $H$-homomorphism problem from the above results on the union of reflexive cliques (with the only exceptions of $2K_2$ and $tK_1$). For connected $H$, we can apply Proposition \ref{indi} to a sufficiently high power of $H$ that has diameter two. \begin{proposition}\label{two} If $H$ has diameter two but is not the reflexive path of length two, then the correspondence $H$-homomorphism problem is also NP-complete. \end{proposition} \begin{proof} There must, in $H$, be a path $ab, bc$ where $a$ and $c$ are not adjacent. Suppose first that $H - a$ is not a clique. Then the correspondence $(H-a)$-homomorphism problem can be assumed NP-complete by induction (on $|V(H)|$), and it reduces to the correspondence $H$-homomorphism problem as follows. Suppose $G$ is an instance of the $(H-a)$-homomorphism problem, and form $G'$ by adding at each vertex of $G$ a loop labelled $(\pi,\rho)$ where $\pi$ is the identity and $\rho$ is the cyclic permutation $(a,b,c)$. The effect of these loops is to prevent any vertex from mapping to $a$, as $aa$ is not equal to $\pi(u)\rho(v)$ for any edge $uv \in E(H)$ (but $bb$ and $cc$ are, and even though $cb$ is not an edge $bc$ is an edge, which is sufficient for a loop). If $H - c$ is not a clique we proceed analogously. Otherwise there exists a vertex $d$ adjacent to $a, b, c$. In this case we can add a loop to each vertex of $G$ that effectively deletes $b$ and we can again apply induction on $|V(H)|$. These loops will have labels $(\pi,\rho)$ where $\pi$ is the involution exchanging $a$ and $b$ and $\rho$ is the involution exchanging $b$ and $c$. (See Figure 4.) The argument is similar, noting that only $bb$ is missing, because $ca$ is also equal to $ac$ for a loop.
\end{proof} \begin{figure}[hhhh] \includegraphics[height=5cm]{fig5.pdf} \caption{(a) Labelling of a loop to remove $a$; (b) labelling of a loop to remove $b$} \end{figure} \begin{proposition}\label{one} If $H$ is the reflexive path of length two, then the correspondence $H$-homomorphism problem is NP-complete. \end{proposition} \begin{proof} Suppose $H$ is the reflexive path $ab, bc$. We reduce from $3$-colourability. Thus suppose $G$ is an instance of $3$-colourability, and form $G'$ by replacing each edge $xy$ of $G$ by two parallel edges $(xy)_1, (xy)_2$ with permutations $(\pi_1,\rho_1)$ on $(xy)_1$ and $(\pi_2,\rho_2)$ on $(xy)_2$. Both $\pi_1$ and $\rho_1$ are identity permutations, both $\pi_2$ and $\rho_2$ are the involutions exchanging $a$ and $b$. They are illustrated in Figure 5. The effect of $(xy)_1$ is that if $x$ maps to $a$, then $y$ maps to either $b$ or $c$ but not $a$, and if $x$ maps to $c$ then $y$ maps to either $a$ or $b$ but not $c$. The effect of the second edge $(xy)_2$ does not preclude any of these possible images, but additionally ensures that if $x$ maps to $b$ then $y$ maps to either $a$ or $c$ but not $b$. Thus a correspondence homomorphism of $G'$ is a $3$-colouring of $G$ with the colours $a, b, c$. The converse is also easy to see. \end{proof} \begin{figure}[hhhh] \includegraphics[height=5cm]{fig4.pdf} \caption{Labelling of parallel edges $(xy)_1$ and $(xy)_2$} \end{figure} This completes the proof of Theorem \ref{main}. \section{The General Results} We have similar results for general graphs $H$, where some vertices may have loops and others not. Note that in this case isolated loopless vertices can be removed from $H$ without affecting the complexity of the correspondence $H$-homomorphism problem or list $H$-homomorphism problem. \begin{theorem}\label{mixed} Let $H$ be a graph with possible loops. Suppose moreover that if $H$ has both a vertex with a loop and a vertex without a loop, then it has no isolated loopless vertices.
\vspace{2mm} The following cases of the correspondence $H$-homomorphism problem are polynomial-time solvable. \begin{enumerate} \item $H$ is a reflexive clique \item $H$ is a reflexive co-clique \item $H$ is a reflexive $2K_2$ \item $H$ is an irreflexive $pK_2 \cup qK_1$ \item $H$ is an irreflexive $K_{2,2}$ \item $H$ is a star in which the center has a loop and the other vertices do not \item $H$ is an irreflexive $pK_2$ together with a disjoint reflexive $qK_1$. \end{enumerate} Otherwise, the correspondence $H$-homomorphism problem is NP-complete. \end{theorem} For the correspondence $H$-list-homomorphism problem, the classification is the same, except that cases (3) and (5) are NP-complete. \begin{theorem}\label{listmixed} Let $H$ be a graph with possible loops. Suppose moreover that if $H$ has both a vertex with a loop and a vertex without a loop, then it has no isolated loopless vertices. Then the correspondence $H$-list homomorphism problem is polynomial-time solvable in cases (1, 2, 4, 6, 7), and is NP-complete otherwise. \end{theorem} Note that the graphs in cases (1-3) are reflexive, in cases (4-5) irreflexive, and cases (6-7) mix loops and non-loops. In the process of proving Theorem \ref{mixed}, we also classified the complexity of a bipartite version of the correspondence $H$-homomorphism and $H$-list-homomorphism problems. Specifically, we assume that $H$ is a bipartite graph with a set of {\em black vertices} and a disjoint set of {\em white vertices}. The {\em by-side correspondence $H$-homomorphism problem} asks whether an input bipartite graph $G$ (with edge-labeling $\ell$) admits an $\ell$-correspondence homomorphism to $H$ taking black vertices of $G$ to black vertices of $H$ and white vertices of $G$ to white vertices of $H$.
The {\em by-side correspondence $H$-list homomorphism problem} asks whether an input bipartite graph $G$ (with edge-labeling $\ell$ and lists $L(x), x \in V(G)$, such that for black vertices $x$ the lists $L(x)$ contain only black vertices, and similarly for white vertices) admits an $\ell$-correspondence list homomorphism to $H$. \begin{theorem}\label{bipa} Let $H$ be a bipartite graph. Then the by-side correspondence $H$-homomorphism problem is polynomial-time solvable in case (4) above, as well as \begin{enumerate} \item[(8)] $H$ is a complete bipartite graph plus any number of isolated vertices, \item[(9)] $H$ is a tree of diameter $3$ plus any number of isolated vertices, \item[(10)] $H$ consists of two disjoint copies of $K_{1,2}$ with white leaves, plus any number of black isolated vertices, \item[(11)] $H$ consists of two disjoint copies of $K_{2,2}$. \end{enumerate} In all other cases it is NP-complete. \end{theorem} For the list version we have the following result. \begin{theorem}\label{listbipa} Let $H$ be a bipartite graph. Then the by-side correspondence $H$-list homomorphism problem is polynomial-time solvable in cases (4, 8, 9) above, and is NP-complete otherwise. \end{theorem} In the Appendix, we sketch the proofs for Theorems \ref{mixed}, \ref{listmixed}, \ref{bipa}, and \ref{listbipa}. \section*{Acknowledgement} The second author wishes to acknowledge the research support of NSERC Canada, through a Discovery Grant. \hspace{2mm}
\section{Introduction} \label{sec:Intro} Transition metal dichalcogenide (TMDC) monolayers are atomically thin crystal layers exfoliated down from bulk weakly cohesive stacks. Similarly to graphene, a hexagonal lattice of alternating lattice sites results in two inequivalent, time-reversal symmetric valleys ($K$ and $K'$), see Fig. \ref{fig:TMDC_Cartoon}~(b) \cite{wang2012electronics,suzuki2014valley,cao2011mos_2,kormanyos2015k}. Unlike graphene, the monolayer crystals possess broken inversion symmetry, see Fig. \ref{fig:TMDC_Cartoon}~(a), inducing direct band gaps in the visible range about the two valleys \cite{splendiani2010emerging,mak2010atomically,lu2013intervalley}. Furthermore, strong spin-orbit coupling from the transition metal atoms introduces a strong coupling between the spin and valley degrees of freedom, see Fig. \ref{fig:TMDC_Cartoon}~(a) \cite{xiao2012coupled,shan2013spin,xu2014spin}. TMDCs are characterised by the chemical composition MX$_{2}$, where M denotes the transition metal (Mo or W) and X denotes the chalcogenide (S or Se). The presence of a direct band gap and spin-valley coupling in a two-dimensional material allows for a number of interesting electronic, spintronic and valleytronic applications including room temperature quantum spin Hall insulators, optically pumped valley polarisation, long lived exciton spin polarisation and 2D quantum dots (QDs) \cite{yang2015long,PhysRevB.93.035442,cazalilla2014quantum,PhysRevX.4.011034,PhysRevB.93.045313,zeng2012valley}. \begin{figure}[!h] \includegraphics[width=\linewidth]{TMDC_Cartoon.pdf} \caption{(a) 3D view of a TMDC unit cell (red denoting M atoms, blue denoting X atoms) showing the three sub layers of a TMDC monolayer and the broken inversion symmetry of the crystal lattice. (b) Planar (X-Y) view of a TMDC lattice.
(c) Spin resolved conduction band (red: $\ket{0}_-=\ket{K'\uparrow}$ and $\ket{1}_-=\ket{K\downarrow}$, blue: $\ket{0}_+=\ket{K\uparrow}$ and $\ket{1}_+=\ket{K'\downarrow}$) around the $K$ valley in the BZ of Mo and W based TMDC monolayers, demonstrating the spin crossings present in Mo TMDCs and not in W TMDCs; the $K'$ valley may be visualised simply as the time reversal of the given band structure.} \label{fig:TMDC_Cartoon} \end{figure} While the strong spin-valley coupling of TMDC monolayers offers numerous interesting physical phenomena, it presents a difficulty for qubit implementation in gated QDs. Kramers pairs of the spin and valley degrees of freedom result from this coupling \cite{cazalilla2014quantum,PhysRevX.4.011034,klinovaja2013spintronics}. At low energy the $\ket{0}_-=\ket{K'\uparrow}$ and $\ket{1}_-=\ket{K\downarrow}$ states are degenerate in zero field and are energetically separated from the $\ket{0}_+=\ket{K\uparrow}$ and $\ket{1}_+=\ket{K'\downarrow}$ states \cite{PhysRevX.4.011034,song2013transport,kormanyos2013monolayer}. This effect can be observed in the spin resolved structure of the conduction band (CB) about the $K$($K'$) points \cite{kosmider2013large,zhu2011giant} as shown in Fig. \ref{fig:TMDC_Cartoon}~(c). The obvious choice for the computational basis of a qubit is therefore a spin-valley qubit consisting of the two states of the lowest lying Kramers pair, $\ket{0}_-(\ket{1}_-)$ in Mo\textit{X}$_{2}$ and $\ket{0}_+(\ket{1}_+)$ in W\textit{X}$_{2}$, where the required energy difference may be achieved by spin-valley Zeeman splitting induced by a perpendicular magnetic field \cite{flensberg2010bends,aivazian2015magnetic,srivastava2015valley,rostami2015valley}. However, such qubits are inherently limited by the necessity of coupling the valley states. 
Methods of doing so have been proposed in carbon nanotubes by means of short range disorder in the dots \cite{flensberg2010bends,palyi2011disorder}, requiring atomic level engineering, or by optical manipulation\cite{ye2016optical}. Additionally, the valley coherence of WSe$_{2}$ excitons has been measured\cite{hao2016direct}, demonstrating coherence times an order of magnitude shorter than the spin coherence times of other TMDC monolayer crystals\cite{yang2015long}. If qubits in TMDC monolayers could operate similarly to semiconductor spin qubits then the broad theoretical and experimental findings of the field \cite{hanson2007spins,zwanenburg2013silicon,petta2005coherent} may be directly utilised. In so doing, a novel breed of 2D, optically active, direct band gap, and relatively nuclear spin free\cite{PhysRevB.93.045313} semiconductor spin qubits would be gained without the need for an artificially induced band gap, as is needed in graphene\cite{zhou2007substrate}. This requires a method of manipulating the dots such that the spin-orbit coupling may be suppressed and regimes of pure spin qubits may be accessed. There is a noticeable and useful difference between the low energy band structures of Mo based and W based monolayers, as demonstrated in Fig. \ref{fig:TMDC_Cartoon}~(c): the band crossings observed in the spin resolved CB structure of Mo monolayers are absent in W monolayers, suggesting that it is possible to achieve spin degeneracy localised within a given valley. Such spin-degenerate regimes offer the possibility of implementing the desired pure spin qubits in TMDCs. Additionally, by placing a TMDC material in a perpendicular magnetic field, breaking time reversal symmetry, valley Zeeman splitting may be introduced to the system. Previous work\cite{PhysRevX.4.011034} has suggested that it may be possible to access regimes of spin degeneracy within the same valley by introducing a large magnetic field. 
In this work, we build upon previous analyses of TMDC QDs in an effective low energy regime by solving for various conditions in which a spin qubit may be viable, demonstrating a dot size tuneable spin-orbit splitting and investigating the effects of a finite potential well model as opposed to previous assumptions of an infinite potential. Here, we present methods of achieving spin degeneracy within a given valley of a QD in a TMDC monolayer at zero or moderate fields. Firstly, in Sec. \ref{sec:ZF} a zero external field model is discussed, demonstrating the Kramers pairing of states so as to derive an expression for a critical radius at which fourfold spin-valley degeneracy may be expected. We also discuss the best candidate monolayer for a pure spin qubit. Then in Sec. \ref{sec:PerpB} an external magnetic field perpendicular to the dot is considered, and numerical solutions are shown for the external field strengths at which a spin-degenerate state within a given valley is expected at a given dot radius. Next, the effects of a finite confinement potential on the two previously discussed regimes are given in Sec. \ref{sec:FinWell}. Finally, an effective implementation regime for the various methods of achieving valley independent spin degeneracy is discussed in Sec. \ref{sec:QIP} before a summary is given in Sec. \ref{sec:Summary}. \section{Zero Field} \label{sec:ZF} To describe a QD in monolayer TMDC the following effective low energy Hamiltonian about the $K$ and $K'$ point in the CB is employed\cite{PhysRevX.4.011034} \begin{equation} H_{\text{dot}}=H_{\text{el}}^{\tau,s}+H_{\text{so}}^{\text{intr}}+V=\frac{\hbar^2q_+q_-}{2m_{\text{eff}}^{\tau,s}}+\tau\Delta_{\text{cb}}s_z+V. 
\label{eq:Total_Hamiltonian} \end{equation} \noindent Here, $\tau=1(-1)$ refers to the $K$ and $K'$ valley, $s_z$ gives the spin Pauli-z matrix with eigenvalues $s=1(-1)$ for spin $\uparrow(\downarrow)$, wave number operators $q_\pm=q_x\pm iq_y$ where $q_k=-i\partial_k$, $\Delta_{cb}$ is the energy splitting in the CB due to the strong intrinsic spin-orbit coupling of the TMDC monolayer and the spin-valley dependent effective electron mass is defined as $1/m_{\text{eff}}^{\tau,s}=1/m_{el}^0-\tau s/\delta m_{\text{eff}}$ where $\delta m_{\text{eff}}$ is material dependent. Initially, it is assumed that the QD potential $V$ is sufficiently deep such that it may be described by an infinite hard walled potential \begin{equation} V=\begin{cases} 0 & \quad r\leq R_D\\ \infty & \quad r>R_D\\ \end{cases} \label{eq:Infin_Pot} \end{equation} \noindent where $r$ is the radial coordinate and $R_D$ is the radius of the dot. This may be assumed in lieu of a harmonic potential, as is often used in bulk semiconductor QD models, since the 2D nature of a TMDC allows for a more direct interface between the gates and the plane in which an electron will be confined. Additionally, such an assumption allows for edge effects at the boundary of the dot to be neglected. In 2D polar coordinates, the wave number operators may be defined as \begin{equation} q_\pm=\pm i e^{\pm i\phi}(\mp\partial_r-\frac{i}{r}\partial_{\phi}) \label{eq:radial_Wavenumber} \end{equation} \noindent where $\phi$ is the angular coordinate. Assuming the dot to be circular, rotational symmetry about the z-axis dictates that the dot's Hamiltonian will commute and share eigenstates with the z-component of the angular momentum operator ($l_z$). This allows for the normalised solution of the angular component of the wavefunction $\Psi(r,\phi)=R(r)\Phi(\phi)$ to be given as \begin{equation} \Phi(\phi)=\frac{e^{il\phi}}{\sqrt{2\pi}}. 
\label{eq:angular_Comp_WVFN} \end{equation} \noindent Since the radial component of the wavefunction observes the boundary condition $R(R_D)=0$, the following expression is derived, where $j_{n,l}$ is the $n^{th}$ zero ($n=1,2,3,\dots$) of the $l^{th}$ Bessel function of the first kind $J_{l}$ ($l=0,\pm1,\pm2,\dots$) \begin{equation} R_{n,l}(r)= \frac{(-1)^ {\frac{|l|-l}{2}}\sqrt{2}J_{|l|}\left(\frac{j_{n,|l|}}{R_D}r\right)}{R_DJ_{|l|+1}(j_{n,|l|})}. \label{eq:radial_ZF_WVFN} \end{equation} \noindent As such, the full normalised solutions of a hard wall TMDC quantum dot in zero external field are given in the spinor form as \begin{subequations} \begin{align} \Psi^{\uparrow}_{n,l}(r,\phi)&=\frac{e^{il\phi}}{\sqrt{2\pi}}\left(\begin{array}{c}1\\0\\\end{array}\right)R_{n,l}(r),\\ \Psi^{\downarrow}_{n,l}(r,\phi)&=\frac{e^{il\phi}}{\sqrt{2\pi}}\left(\begin{array}{c}0\\1\\\end{array}\right)R_{n,l}(r), \end{align} \label{eq:Total_Spinors} \end{subequations} \noindent and the spin, valley and dot radius dependent energy eigenvalues are given as \begin{equation} E^{n,l}_{\tau,s}(R_D)=\frac{\hbar^2j_{n,|l|}^2}{2m^{\tau,s}_{\text{eff}}R_D^2}+\tau s \Delta_{\text{cb}}. \label{eq:ZF_Energy} \end{equation} From the four realisations of spin and valley, only two separate energy solutions in zero field emerge, i.e. $E^{n,l}_{K,\uparrow}=E^{n,l}_{K',\downarrow}=E^{n,l}_{+}$ and $E^{n,l}_{K',\uparrow}=E^{n,l}_{K,\downarrow}=E^{n,l}_{-}$. These two possible solutions describe the $\ket{0}_+(\ket{1}_+)$ and $\ket{0}_-(\ket{1}_-)$ Kramers pairs respectively. If the two solutions are assumed to be equivalent, then Eq. (\ref{eq:ZF_Energy}) may be used to describe the radius at which fourfold degeneracy in the valley-spin Hilbert space is achieved. 
As such, a critical radius $R^{n,l}_{c}$ at which $E^{n,l}_{+}=E^{n,l}_{-}$ is given by \begin{equation} R^{n,l}_{\text{c}}=\frac{\hbar j_{n,|l|}}{2\sqrt{\Delta_{\text{cb}}}}\sqrt{\frac{1}{m^{-}_{\text{eff}}}-\frac{1}{m^{+}_{\text{eff}}}} \label{eq:ZF_Crit_Rad} \end{equation} \noindent where $m^{-}_{\text{eff}}=m^{K\downarrow/K'\uparrow}_{\text{eff}}$ and $m^{+}_{\text{eff}}=m^{K\uparrow/K'\downarrow}_{\text{eff}}$. Therefore, there are real solutions for the critical radius at which fourfold valley-spin degeneracy may exist for dots with intrinsic spin-orbit coupling such that $\Delta_{cb}>0$ and $m^{+}_{\text{eff}}>m^{-}_{\text{eff}}$. The latter condition holds for all possible TMDC monolayers while the former is only satisfied by Mo based TMDCs ($\Delta_{cb}=\unit[1.5]{meV}$ for MoS$_{2}$ and $\Delta_{cb}=\unit[11.5]{meV}$ for MoSe$_{2}$) \cite{kormanyos2015k,PhysRevX.4.011034} (see Fig. \ref{fig:MoS2_ZF_GS}). Alternatively, real solutions of $R_{\text{c}}$ may be found in materials where both $\Delta_{cb}<0$ and $m^{+}_{\text{eff}}<m^{-}_{\text{eff}}$; however, there is no known TMDC that satisfies the latter condition. In the groundstate ($n=1$, $l=0$) the critical radius at which fourfold degeneracy may be expected is $\unit[4.13]{nm}$ for MoS$_{2}$ and $\unit[1.46]{nm}$ for MoSe$_{2}$ QDs. While both radii are difficult to achieve by electrostatic gating, MoS$_{2}$ monolayers offer plausibly achievable fourfold degeneracy through their critical radii and consequently prove themselves the most viable candidate for 2D single QD pure spin qubits. For the remainder of the presented work we will focus solely on MoS$_{2}$ monolayers. 
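Eq. (\ref{eq:ZF_Crit_Rad}) is straightforward to evaluate numerically. The sketch below is illustrative only: the effective-mass values are assumed placeholders of roughly the right order of magnitude (the actual MoS$_{2}$ parameters are those of \cite{kormanyos2015k,PhysRevX.4.011034}), so the printed radius should not be read as reproducing the quoted $\unit[4.13]{nm}$.

```python
import numpy as np
from scipy.special import jn_zeros
from scipy.constants import hbar, m_e, e

def critical_radius(n, l, delta_cb_eV, m_minus, m_plus):
    """Critical radius R_c of Eq. (ZF_Crit_Rad) in metres.

    delta_cb_eV      : intrinsic spin-orbit splitting of the CB (eV)
    m_minus, m_plus  : effective masses of the two Kramers pairs (kg)
    A real solution requires delta_cb_eV > 0 and m_plus > m_minus.
    """
    j = jn_zeros(abs(l), n)[-1]      # n-th zero of the Bessel function J_|l|
    delta = delta_cb_eV * e          # convert eV -> J
    return (hbar * j / 2.0) * np.sqrt((1.0 / m_minus - 1.0 / m_plus) / delta)

# Assumed MoS2-like parameters: Delta_cb = 1.5 meV (from the text) and a small
# illustrative spin/valley-dependent mass splitting around ~0.5 m_e.
m_minus, m_plus = 0.49 * m_e, 0.53 * m_e
R_c = critical_radius(1, 0, 1.5e-3, m_minus, m_plus)
print(f"ground-state critical radius ~ {R_c * 1e9:.2f} nm")
```

With any admissible parameters the higher angular states ($|l|>0$) give larger critical radii, since $j_{n,|l|}$ grows with $|l|$, which is the trend exploited later in Sec. \ref{sec:QIP}.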
\begin{figure}[!h] \includegraphics[width=\linewidth]{Inf_GS.pdf} \caption{(a) Zero field energy spectrum of the $n=1$, $l=0$ eigenstates, blue: $\ket{0}_+(\ket{1}_+)$ and red: $\ket{0}_-(\ket{1}_-)$, of a MoS$_{2}$ hard wall QD of a given dot radius $R_D$; here a point of fourfold degeneracy of the valley-spin eigenstates is observed at a particular radius. Inset: region about which the fourfold degeneracy is observed in the spectrum. (b) Zero field energy spectrum of the $n=1$, $l=0$ eigenstates of a WS$_{2}$ hard wall QD of a given dot radius; here no point of fourfold degeneracy of the valley-spin eigenstates is observable, since $\Delta_{cb}>0$ is not satisfied by W based TMDCs.} \label{fig:MoS2_ZF_GS} \end{figure} \section{Perpendicular Magnetic Field} \label{sec:PerpB} Following the previous methods\cite{PhysRevX.4.011034}, the spin-valley eigenenergies of a TMDC monolayer QD in a constant perpendicular magnetic field ($B_z$) may be derived from the following Hamiltonian \begin{equation} \begin{split} H_{B_\perp}^{\tau,s}= \hbar\omega_c^{\tau,s}\alpha_+\alpha_-+\tau\Delta_{\text{cb}}s_z+\frac{1+\tau}{2}\frac{B_z}{|B_z|}\hbar\omega_c^{\tau,s} \\ +\frac{1}{2}(\tau g_{\text{vl}}+g_{\text{sp}} s_z )\mu_B B_z& \end{split} \label{eq:BPerp_Hamiltonian} \end{equation} where the cyclotron frequency is defined as $\omega_c^{\tau,s}=e|B_z|/m_{\text{eff}}^{\tau,s}$, $\mu_B$ is the Bohr magneton, $g_{sp}$ is the spin g-factor, $g_{vl}$ is the valley g-factor and $\alpha_{\pm}$ denote the modified wavenumber operators $\alpha_{\pm}=\mp i l_Bq_{\pm}/\sqrt{2}$ where $l_B=\sqrt{\hbar/eB_z}$ is the magnetic length. 
After appropriate gauge selection, wavefunctions in terms of the dimensionless length parameter $\rho=r^2/2l_B^2$ are given as $P_{n,l}(\rho)= \rho^{|l|/2} e^{-\rho/2}M (a_{n,l},|l|+1,\rho)$ where $a_{n,l}$ describes the $n^{th}$ solution of the bound state condition $M(a_{n,l},|l|+1,\rho_D)=0$, where $\rho_D=\rho[r=R_D]$ and $M(a,b,c)$ is the confluent hypergeometric function of the first kind. The addition of an out of plane magnetic field does not break the rotational symmetry of the dot, hence the angular component of the wavefunction is not affected by this change. The eigenenergies are therefore given as \begin{equation} \begin{split} E^{\tau,s}_{n,l}= \hbar\omega_c^{\tau,s}\left(\frac{1+\tau}{2}\frac{B_z}{|B_z|}+\frac{|l|+l}{2}-a_{n,l}\right) \\ +\tau\Delta_{\text{cb}}s_z+\frac{1}{2}(\tau g_{\text{vl}}+sg_{\text{sp}} )\mu_BB_z. \end{split} \label{eq:BPerp_total_Energy} \end{equation} \begin{figure}[!h] \includegraphics[width=\linewidth]{MoS2_Inf_BPerp_20nm.pdf} \caption{Energy spectra of the $n=1$, $l=0$ state in a QD of $\unit[20]{nm}$ radius on a MoS$_{2}$ monolayer under a perpendicular magnetic field. Here the critical field strength at which $E^{n=1,l=0}_{K',\downarrow}=E^{n=1,l=0}_{K',\uparrow}$ is observed at the high magnetic field strength of $\sim\unit[23]{T}$. Blue solid (dashed) line: $\ket{K'\uparrow}$ ($\ket{K\downarrow}$) and red solid (dashed) line: $\ket{K\uparrow}$ ($\ket{K'\downarrow}$).} \label{fig:MoS2_Inf_BPerp_20nm} \end{figure} From Eq. (\ref{eq:BPerp_total_Energy}), spectra demonstrating the effect of an out of plane magnetic field for QDs in MoS$_{2}$ monolayers may be calculated numerically. The splitting of the spin and valley states due to the external magnetic field allows for spin-degenerate crossings for a given radius within the $K'$ valley, i.e. at some external magnetic field strength $E^{n,l}_{K',\uparrow}=E^{n,l}_{K',\downarrow}$, see Fig. \ref{fig:MoS2_Inf_BPerp_20nm}. 
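The roots $a_{n,l}$ of the bound state condition $M(a_{n,l},|l|+1,\rho_D)=0$ must be found numerically before Eq. (\ref{eq:BPerp_total_Energy}) can be evaluated. One simple way to do this is a downward sign-change scan followed by bisection; the scan range and step below are ad hoc choices, not values taken from the paper.

```python
from scipy.special import hyp1f1      # confluent hypergeometric function M(a, b, x)
from scipy.optimize import brentq

def bound_state_a(n, l, rho_D, a_min=-30.0, step=0.01):
    """Return the n-th root a_{n,l} (in a) of M(a, |l|+1, rho_D) = 0.

    Roots are scanned downwards from a = 0; each sign change of M brackets
    one root, which is then refined by bisection (brentq).
    """
    b = abs(l) + 1
    f = lambda a: hyp1f1(a, b, rho_D)
    roots, a = [], 0.0
    while len(roots) < n and a > a_min:
        if f(a) * f(a - step) < 0:          # sign change brackets a root
            roots.append(brentq(f, a - step, a))
        a -= step
    return roots[n - 1]

# Example: first root for l = 0 at rho_D = R_D^2 / (2 l_B^2) = 4
a_10 = bound_state_a(1, 0, rho_D=4.0)
print(a_10)
```

Since $M(0,1,\rho_D)=1$ and $M(-1,1,\rho_D)=1-\rho_D<0$ for $\rho_D>1$, the first root for $l=0$ lies between $-1$ and $0$ in this example.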
These critical magnetic field strengths ($B_{\text{c}}$) for given dot radii may be determined for a range of radii to give the spin-degenerate regime spectra shown in Fig. \ref{fig:MoS2_Inf_BCrit_RDot_Spin}. These spectra show separate plateaus in the critical field strength at relatively large dot radii ($R>\unit[20]{nm}$) for the $l\geq0$ and $l<0$ angular states, differing by up to $\sim\unit[5]{T}$, but with both still at high field strengths. This is the limit at which the maximum Kramers pair energy difference at zero field is observed and valley Zeeman splitting alone is used to achieve spin degeneracy. At the other end of the spectra, at low external field strengths, the gradient of the regime curves increases, compromising the robustness of single dot spin qubits to fabrication error, i.e. small errors ($\sim\unit[1]{nm}$) in QD radii would make the difference between operating the qubit at $\unit[1]{T}$ and $\unit[6]{T}$ external field. Thus a spin qubit operating with a single electron in the groundstate is not easily implemented. The possibility of operation at excited states and alternative enhancement methods are considered and discussed in Sec. \ref{sec:QIP}. \begin{figure}[!h] \includegraphics[width=\linewidth]{Higher_States_Existence_Curves.pdf} \caption{Spin degeneracy curves of critical out of plane magnetic field strength $B_c$ with the radius of a QD on a MoS$_{2}$ monolayer for the first few states, black solid (dashed): $n=1$ ($2$), $l=0$, red solid (dashed): $n=1$, $l=1$ ($-1$), blue solid (dashed): $n=1$, $l=2$ ($-2$), purple solid (dashed): $n=1$, $l=2$ ($-2$).} \label{fig:MoS2_Inf_BCrit_RDot_Spin} \end{figure} \section{Finite Well} \label{sec:FinWell} Up to this point, all models used assume QDs with an infinite hard wall potential. 
Here the effects of transitioning to a finite hard wall potential \begin{equation} V=\begin{cases} 0 & \quad r\leq R_D\\ V_{0} & \quad r\geq R_D,\\ \end{cases} \label{eq:Fin_Pot} \end{equation} on the spin-degenerate regimes discussed are shown. Thus, for both the zero field and perpendicular magnetic field regimes, the $\Psi(r=R_D,\phi)=0$ boundary condition is replaced by the continuity condition at the potential interface $\partial_r \ln[\Psi^{r\geq R_D}_{n,l}(r=R_D,\phi)]=\partial_r \ln[\Psi^{r\leq R_D}_{n,l}(r=R_D,\phi)]$\cite{recher2009bound}. In zero field the unnormalised radial portions of the wavefunction within and outside of the potential barrier are described as follows \begin{equation} R_{n,l}(r)=\begin{cases} J_{|l|} (\epsilon_{n,l}^{\text{in}} r) & \quad r\leq R_D\\ e^{\frac{i l \pi}{2}} K_{|l|} (\epsilon_{n,l}^{\text{out}} r) & \quad r\geq R_D\\ \end{cases}. \label{eq:ZF_Fin_WVFN} \end{equation} Here $ K_{l}$ is the $l^{th}$ modified Bessel function of the second kind, $\epsilon_{n,l}^{\text{in}}=\sqrt{2 m_{\text{eff}}^{\tau,s}[E_{n,l}-\tau\Delta_{\text{cb}}s_z]}/\hbar$ and $\epsilon_{n,l}^{\text{out}}=\sqrt{2 m_{\text{eff}}^{\tau,s}[V_{0}-E_{n,l}+\tau\Delta_{\text{cb}}s_z]}/\hbar$. Eigenenergies as a function of potential height may then be numerically calculated by applying the continuity condition to Eq. (\ref{eq:ZF_Fin_WVFN}), \begin{equation} \frac{\epsilon_{n,l}^{\text{in}} J_{|l|+1} (\epsilon_{n,l}^{\text{in}} R_D)}{J_{|l|} (\epsilon_{n,l}^{\text{in}} R_D)}=\frac{\epsilon_{n,l}^{\text{out}} K_{|l|+1} (\epsilon_{n,l}^{\text{out}} R_D)}{K_{|l|} (\epsilon_{n,l}^{\text{out}} R_D)}. \label{eq:ZF_Fin_Char} \end{equation} From this, the fourfold degenerate critical radii as a function of potential height may be calculated, leading to the result shown in Fig. \ref{fig:MoS2_RCrit_VWell_ZF}. The effect of a finite potential is only noticeable at low potential heights $<\unit[100]{meV}$, whereafter a sharp drop in the critical radii is observed. 
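Eq. (\ref{eq:ZF_Fin_Char}) is transcendental in $E_{n,l}$. The following sketch solves it in natural units ($\hbar=m_{\text{eff}}=1$, with the spin-orbit term absorbed into the energy origin), so it illustrates the matching procedure rather than MoS$_{2}$ numbers; the well depth and radius are arbitrary choices.

```python
import numpy as np
from scipy.special import jv, kv      # Bessel J and modified Bessel K
from scipy.optimize import brentq

def match(E, l, R, V0):
    """Log-derivative matching of Eq. (ZF_Fin_Char); zero at a bound state.
    Natural units: hbar = m_eff = 1, spin-orbit shift absorbed into E."""
    k_in = np.sqrt(2.0 * E)
    k_out = np.sqrt(2.0 * (V0 - E))
    lhs = k_in * jv(abs(l) + 1, k_in * R) / jv(abs(l), k_in * R)
    rhs = k_out * kv(abs(l) + 1, k_out * R) / kv(abs(l), k_out * R)
    return lhs - rhs

R, V0 = 1.0, 50.0
# Ground state (n = 1, l = 0): the finite-well energy lies below the
# infinite-well value j_{1,0}^2 / (2 R^2), which brackets the root.
E_inf = 2.404825557695773**2 / (2.0 * R**2)
E0 = brentq(match, 1e-6, E_inf - 1e-6, args=(0, R, V0))
print(E0)
```

Deepening the well pushes the bound-state energy back up towards the infinite-well value, consistent with the finite-well correction vanishing at large $V_0$.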
\begin{figure}[!t] \includegraphics[width=\linewidth]{MoS2_RCrit_VWell_ZF.pdf} \caption{Spin-degenerate critical radii $R_c$ of QDs of finite potential height in MoS$_{2}$ monolayers at the ground and first few excited states, red: $n=1$, $l=0$, blue: $n=1$, $|l|=1$, purple: $n=1$, $|l|=2$.} \label{fig:MoS2_RCrit_VWell_ZF} \end{figure} Similarly, when a finite potential is considered with an external magnetic field over the QD, the unnormalised radial component of the wavefunction is described as \begin{equation} P_{n,l}(\rho)= \rho^{|l|/2} e^{-\rho/2}\begin{cases} M (\tilde{a}^{\text{in}}_{n,l},|l|+1,\rho) & \quad r\leq R_D\\ U (\tilde{a}^{\text{out}}_{n,l},|l|+1,\rho) & \quad r\geq R_D\\ \end{cases} \label{eq:BPerp_Fin_WVFN} \end{equation} \noindent where $U (\tilde{a}^{\text{out}}_{n,l},|l|+1,\rho)$ is Tricomi's hypergeometric function, $\tilde{a}^{\text{in}}_{n,l}$ is the $n^{th}$ numerical solution to the continuity equation at the potential barrier and $\tilde{a}^{\text{out}}_{n,l}=\tilde{a}^{\text{in}}_{n,l}+V_{0}/\hbar \omega_c^{\tau,s}$. The continuity condition may then be applied to obtain the following characteristic equation \begin{equation} \begin{split} &(1+|l|)\tilde{a}_{n,l}^{\text{out}}M(\tilde{a}^{\text{in}}_{n,l},|l|+1,\rho_D)U(1+\tilde{a}_{n,l}^{\text{out}},|l|+2,\rho_D)\\&+\tilde{a}^{\text{in}}_{n,l}M(1+\tilde{a}^{\text{in}}_{n,l},|l|+2,\rho_D)U(\tilde{a}_{n,l}^{\text{out}},|l|+1,\rho_D)=0 \end{split} \label{eq:BPerp_Fin_Char} \end{equation} \noindent from which $\tilde{a}^{\text{in}}_{n,l}$ may be numerically extracted and applied to Eq. (\ref{eq:BPerp_total_Energy}) in lieu of $a_{n,l}$. The effect of a finite potential height model on the spin-degenerate regimes of MoS$_{2}$ is shown in Fig. \ref{fig:MoS2_RCrit_VWell_BPerp}. 
\begin{figure}[!h] \includegraphics[width=\linewidth]{MoS2_RCrit_VWell_BPerp.pdf} \caption{Spin-degenerate critical magnetic field $B_c$ of QDs of finite potential height in MoS$_{2}$ monolayers in the groundstate, for heights of $\unit[1]{eV}$ (red), $\unit[0.5]{eV}$ (blue) and $\unit[0.25]{eV}$ (purple), with the infinite potential (black dashed) for reference.} \label{fig:MoS2_RCrit_VWell_BPerp} \end{figure} A similar effect on the spin degeneracy regimes is shown in both Figs. \ref{fig:MoS2_RCrit_VWell_ZF} and \ref{fig:MoS2_RCrit_VWell_BPerp}. At shallow potential heights the required critical radius of the dot decreases by $\sim\unit[1-2]{nm}$. However, at high magnetic fields, there is no discernible difference between the finite and infinite potential solutions. This result will pose little threat to the operation of dots with a single electron charged into the groundstate, as the potential height may be selected to be sufficiently high such that little to no difference in the critical radii will be observed. However, as discussed in Sec. \ref{sec:QIP}, this effect must be considered when switching to an excited operational electron state by charging. \section{Single Quantum Dots as Qubits} \label{sec:QIP} To achieve a pure spin qubit in a single MoS$_{2}$ QD, some considered parameter selection is required to gain a certain robustness of the operational regime. As previously stated in Sec. \ref{sec:PerpB}, a regime with a single electron in the lowest spin-degenerate state either requires a very large external field ($>\unit[20]{T}$) or extreme precision in the QD's radius. This is not ideal; however, these problems may be mitigated by charging the dot to operate at higher degenerate states. As can be seen in Fig. \ref{fig:MoS2_Inf_BCrit_RDot_Spin}, at reasonable external fields ($\leq\unit[10]{T}$), for each increasing excited state the necessary QD radius increases in accordance with Eq. (\ref{eq:ZF_Crit_Rad}). 
These regimes allowing for larger dot radii are more reliably achieved by gated monolayer QD fabrication methods. Moreover, the $(|l|+l)/2$ term of Eq. (\ref{eq:BPerp_total_Energy}) splits the regime curves shown in Fig. \ref{fig:MoS2_Inf_BCrit_RDot_Spin} into the higher plateaus of the $l\leq0$ states and the lower plateaus of the $l=1,2,\dots$ states. Therefore, if a charged excited state is chosen as the operational state, the ideal choice would be an $l>0$ angular state. Even in the lowest spin-degenerate state, some charging may be required. The operational electron confined to the $K'$ valley is at a higher energy than the two other possible states in the $K$ valley (see Fig. \ref{fig:MoS2_Inf_BPerp_20nm}). Although the valley lifetime is expected to be long \cite{yang2015long,sallen2012robust}, eventually the electron will decay out of the higher operational state to these empty states. Also, since each excitation state may be split into four different configurations of spin and valley, the total number of electrons needed to charge the dot up to the desired operational regime is $3+4N$ where $N$ is an integer describing the excitation level of the operational state, i.e. $N=0$ corresponds to the groundstate $n=1$ $l=0$, $N=1$ corresponds to the first excited state $n=1$ $l=-1$ etc. The direct band gap of monolayer MoS$_{2}$ is $\sim\unit[1.8]{eV}$ \cite{mak2010atomically}, and current advances in gated QD nanostructures in MoS$_{2}$ give a charging energy of $\unit[2]{meV}$ at a dot radius of $\unit[70]{nm}$ \cite{wang2016engineering}. This result was said to align well with the self capacitance model \cite{wang2016engineering,kouwenhoven2001few,hanson2007spins}; therefore, using this model, the charging energy at the radii desired for spin-degenerate regimes ($\sim\unit[10]{nm}$) may be estimated to increase to $\sim\unit[14]{meV}$. 
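The $\sim\unit[14]{meV}$ figure follows directly from the $1/R_D$ scaling of the self capacitance model: for a disc, $C_{\text{self}}\propto R_D$, so $E_{\text{ch}}=e^2/C_{\text{self}}\propto 1/R_D$, and the measured value rescales as below.

```python
# Self-capacitance scaling: E_ch = e^2 / C_self with C_self ∝ R_D,
# so E_ch ∝ 1/R_D. Rescale the 2 meV measured at R_D = 70 nm to R_D = 10 nm.
E_ref_meV, R_ref_nm = 2.0, 70.0
R_target_nm = 10.0
E_target_meV = E_ref_meV * R_ref_nm / R_target_nm
print(E_target_meV)   # 14.0, matching the estimate quoted in the text
```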
This is, however, a broad approximation; further study of the perturbation of the energy levels due to the Coulomb interaction mediated by the Keldysh potential\cite{chernikov2014exciton} is therefore warranted. Such effects are nevertheless spin and valley independent and should only serve as a renormalisation of the effects studied here. These considerations do, however, limit the choice of excited operational states; as is evident in Fig. \ref{fig:MoS2_RCrit_VWell_ZF}, at highly charged states relative to the potential height and band gap, the critical radii will be compromised. Additionally, ferromagnetic substrates may be employed to enhance the valley splitting due to an external magnetic field. Recent experiments demonstrate an effective $\sim\unit[2]{T}$ addition to the magnetic field inducing valley Zeeman splitting in WSe$_{2}$ monolayers on an EuS ferromagnetic substrate \cite{zhao2016enhanced}. Such techniques may be employed to reduce the necessary external field strength to reasonable quantities. An alternative quantum confinement method with TMDC monolayers has been proposed by way of heterostructures consisting of islands of one form of Mo based TMDC within a sea of the corresponding W based monolayer \cite{liu2014intervalley,PhysRevB.93.045313}, or by sufficiently small free standing flakes\cite{pavlovic2015electronic}. While such methods offer quantum confinement on the desired scale, high inter-valley coupling terms are introduced at small dot radii due to edge effects, offering a decoherence channel to the system. Additionally, such structures present scalability challenges, such as the lack of a method of adjusting the exchange coupling if the proposed model is extended to a double QD system. However, such studies of quantum confinement in TMDCs pay close attention to the effect of dot shape, a consideration omitted here on symmetry grounds, but one that could warrant attention in further research. 
With a suitable operational regime selected, operation of the spin qubit is relatively straightforward. The energy gap between the spin up and spin down computational basis states is tuneable by the external magnetic field, while Bychkov-Rashba spin orbit coupling induced by an external electric field perpendicular to the device may be used to provide off diagonal spin coupling terms in the spin Hilbert space \cite{PhysRevX.4.011034}. \section{Summary} \label{sec:Summary} Overall, given selection of a proper operational regime and reasonable accuracy in QD fabrication at low radii, MoS$_{2}$ monolayer QDs do offer novel pure spin qubits in 2D semiconductors. Overcoming the Kramers pairs of gated QDs on TMDC monolayers is explored, so as to achieve operational regimes of pure spin qubits, thus avoiding the problem of achieving valley state mixing and low valley coherence times. Zero field fourfold spin-valley degeneracy was demonstrated to be achievable in Mo based TMDC monolayers, unlike their W based counterparts, at low QD radii, whilst spin degeneracy solely within a given valley was shown to be achievable by application of a sufficiently high external magnetic field perpendicular to the dot. Regime restrictions for spin-degenerate MoS$_{2}$ QDs have been shown, demonstrating radially sensitive low external field regimes which may be made more robust by charging into higher operational states and by substrates enhancing valley Zeeman splitting. Switching from an infinite to a finite potential barrier model did demonstrate a drop in the expected values of spin-degenerate critical radii, but only at particularly low barrier heights. In addition to the moderate expected charging energy, this somewhat limits the usefulness of highly charged operational states, but will not substantially affect operation at the first few excited states. 
To conclude, a theoretical demonstration of QD radius dependent spin-orbit effects in TMDC monolayers is given along with descriptions of possible methods of implementing novel pure spin qubits on two-dimensional semiconductor crystals. \section{Acknowledgements} \label{ref:Acknowledgements} We acknowledge helpful discussions with A. Korm\'{a}nyos, A. Pearce, M. Ran\v{c}i\'{c} and M. Russ and funding through both the European Union by way of the Marie Curie ITN Spin-Nano and the DFG through SFB 767.
\section{Introduction} Abstract polytopes trace their roots back to classical geometric objects such as the Platonic solids and more generally convex polytopes and non-convex \textquotedblleft star\textquotedblright\ polytopes in Euclidean space. During the twentieth century, with the work of Coxeter, Grünbaum, Danzer and Schulte, the present day foundations of the subject of abstract polytopes evolved. Abstract regular polytopes are those abstract polytopes whose automorphism group acts regularly on its set of (maximal) flags. Abstract regular polytopes with finitely many flags may be viewed entirely within a group theoretic framework, the basic counterpart being that of a string C-group, $G$. We will review these ideas in detail in Section 2, but for now remark that the key feature of $G$ is a generating set of involutions satisfying certain properties; we call such a set of involutions a C-string for $G$. There is an extensive, recent, literature which explores abstract regular polytopes from the perspective of group theory. Typically, a particular group or family of groups is investigated, usually with the aim of discovering if it is the automorphism group of some abstract regular polytope. For a small selection of these see \cite{p1,p3,p4,p6,p9,p10,p11,p13,p16,p18} as well as their references. For many groups there can be a significant number of abstract regular polytopes of which the given group is the automorphism group (or, equivalently, many C-strings for that group). Nevertheless there have been successful attempts to catalogue particular abstract regular polytopes; see \cite{atlas1,atlas2}. The purpose of this paper is to introduce a certain type of abstract regular polytope which we now define. \begin{definition} Suppose $G$ is a finite group with $\{t_1,\ldots,t_n\}$ a C-string for $G$ of rank $n$. \begin{enumerate}[$(i)$] \item For $N \trianglelefteq G,$ set $\overline{G} = G/N$ and use $\overline{g}$ for the image of $g$ in $\overline{G}$, $g \in G$. 
If either $|\{\overline{t_1}, \ldots, \overline{t_n}\}| < n$ or, $|\{\overline{t_1}, \ldots, \overline{t_n}\}| = n$ and $\{\overline{t_1}, \ldots, \overline{t_n}\}$ is not a C-string for $\overline{G},$ we say that $\{t_1,\ldots,t_n\}$ is an $N$-unravelled C-string for $G$. \item If $\{t_1,\ldots,t_n\}$ is an $N$-unravelled C-string for all non-trivial proper normal subgroups $N$ of $G$, we call $\{t_1,\ldots,t_n\}$ an unravelled C-string for $G$, and refer to the associated abstract regular polytope as being unravelled. \end{enumerate} \end{definition} Clearly, if $G$ is a simple group, then all its C-strings will be unravelled (and all its abstract regular polytopes likewise). The motivation for looking at unravelled C-strings was to focus upon a smaller set of C-strings that are, in some sense, intrinsic to $G$. Hence, with this in mind, we need to concentrate upon non-simple groups. Even with only a few non-trivial proper normal subgroups, the unravelled C-strings often stand out from the crowd. For example $G = \mathrm{SL}_3(7) \rtimes \langle t \rangle \sim 3^{\boldsymbol{\cdot}} L_3(7):2,$ where $t$ acts upon $\mathrm{SL}_3(7)$ as the transpose inverse automorphism, has 3256 abstract regular polytopes, only one of which (of rank 4) is unravelled. We note here that we shall be using the \textsc{Atlas} \cite{TheATLAS} conventions to describe the shape of groups. In geometric terms we are considering the rank $n$ abstract regular polytopes that do not cover any others of the same rank. One could naturally extend this definition in a variety of ways, for example, dropping the restriction on rank or regularity. There exists much literature examining instances of this, including \cite{AllPolytopess}, \cite{Moreonquotientpolytopes}, \cite{Simplertestsforsemisparsesubgroups}, \cite{Covers}, \cite{Quotientsofpolytopes}. We restrict ourselves to only the unravelled case. 
Another example, related to a sporadic group, is $3^{\boldsymbol{\cdot}}\mathrm{M}_{22}:2$, which has 727 abstract regular polytopes, five of which are unravelled (all of rank 4). The unravelled example for $3^{\boldsymbol{\cdot}} L_3(7):2$ already mentioned appears as a specific case in our first theorem. \begin{theorem}\label{thm 1.2} Suppose that $q$ is a prime power and $G = \mathrm{SL}_3(q) \rtimes \langle t \rangle$ where $t$ acts upon $\mathrm{SL}_3(q)$ as the transpose inverse automorphism. Assume that \begin{enumerate}[$(i)$] \item $6 \mid q-1$; \item there exist $\lambda, \mu \in \mathrm{GF}(q)$ such that $2\lambda^2 = 1$ and $2\mu^2 - \lambda^2 = 0$; and \item at least one of $-3^{-1}+(3^{-2}-1)^{{1}/{2}}$ and $-3^{-1}-(3^{-2}-1)^{{1}/{2}}$ has order $q+1$ in $\mathrm{GF}(q^2)^*$. \end{enumerate} Then $G$ possesses an unravelled rank 4 C-string with Schläfli symbol $[4,q+1,4].$ \end{theorem} There are infinitely many $q$ satisfying $(i)$ and $(ii)$ of Theorem \ref{thm 1.2} (for example, taking $q=p$, a prime with $p \equiv 1\pmod 3$ and $p\equiv 7 \pmod 8$, gives infinitely many such $q$ by Dirichlet's Theorem). However, we do not know if there are infinitely many $q$ satisfying all three conditions in the theorem. Of the 157 primes $p$ less than or equal to 10000 with $p \equiv 1 \pmod 3$ and $p\equiv 7 \pmod 8$, 20 of them do not satisfy $(iii)$ (they are $199, 343, 919, 1039, 1063, 2239, 3079, 3919, 4423, 4759, 4783, 5167, 6967, 7039, 7759, 7879, 8287, 8887, 9511, 9679$). A \textsc{Magma} \cite{Magma} calculation shows there are no unravelled C-strings for $\mathrm{SL}_3(13):2 \sim 3^{\boldsymbol{\cdot}}L_3(13):2$ with Schläfli symbol $[4,14,4]$. So the requirement of $p \equiv 7 \pmod 8$ in Theorem \ref{thm 1.2} is necessary. However, we have the following companion result to Theorem \ref{thm 1.2}.
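Condition $(iii)$ of Theorem \ref{thm 1.2} lends itself to mechanical testing, which is how counts such as the one above may be reproduced. The sketch below (our own Python illustration, not code from the paper; the helper `cond_iii` is a hypothetical name) realises $\mathrm{GF}(p^2)$ as $\mathrm{GF}(p)[x]/(x^2 - s)$ with $s = 3^{-2}-1$, and computes the multiplicative order of the two candidates $-3^{-1} \pm x$ naively:

```python
# Sketch (ours, not from the paper): test condition (iii) of Theorem 1.2
# for a prime p, realising GF(p^2) as GF(p)[x]/(x^2 - s), s = 3^{-2} - 1.
def cond_iii(p):
    """True iff -3^{-1} + (3^{-2}-1)^{1/2} or -3^{-1} - (3^{-2}-1)^{1/2}
    has multiplicative order p + 1 in GF(p^2)*."""
    a = (-pow(3, -1, p)) % p              # -3^{-1} in GF(p)
    s = (pow(9, -1, p) - 1) % p           # the radicand 3^{-2} - 1
    if s == 0 or pow(s, (p - 1) // 2, p) == 1:
        # the square root lies in GF(p), so the order divides p - 1 < p + 1
        return False

    def mul(u, v):                        # product in GF(p)[x]/(x^2 - s)
        return ((u[0] * v[0] + u[1] * v[1] * s) % p,
                (u[0] * v[1] + u[1] * v[0]) % p)

    def order(e):                         # naive multiplicative order
        acc, k = e, 1
        while acc != (1, 0):
            acc, k = mul(acc, e), k + 1
        return k

    # the two candidates -3^{-1} +/- x, where x plays the role of the root
    return any(order((a, v)) == p + 1 for v in (1, p - 1))

print(cond_iii(7), cond_iii(13))
```

For $p = 7$ one candidate has order $8 = p+1$, in line with the unravelled $[4,8,4]$ example for $3^{\boldsymbol{\cdot}}L_3(7):2$; for $p = 13$ both candidates have order 7, consistent with the \textsc{Magma} computation just quoted.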
\begin{theorem}\label{thm 1.3} Let $p$ be a prime with $p \equiv 1 \pmod 3$ and $p \equiv 5 \pmod 8.$ Then $G=\mathrm{SL}_3(p) \rtimes \langle t \rangle,$ where $t$ is the transpose inverse automorphism of $\mathrm{SL}_3(p)$, has an unravelled rank 4 C-string with Schläfli symbol $[4,p,4]$. \end{theorem} In Nicolaides and Rowley \cite{BnUNravlledPaper} two further families of unravelled C-strings are uncovered, both associated with Coxeter groups of type $B_n$. The C-strings in one family all have rank 4, while those in the other have unbounded rank. The proofs of Theorems \ref{thm 1.2} and \ref{thm 1.3} occupy Sections 3 and 4, respectively. Moreover, these proofs are constructive, the involutions of the C-strings being given as explicit $6 \times 6$ matrices. As already mentioned, Section 2 reviews notation and concepts relevant to this paper. Our final section is a compendium of calculations, done with the aid of \textsc{Magma} \cite{Magma}. Table \ref{table1} is a census of the number, up to isomorphism, of C-strings for a variety of groups. This table also records how many of the C-strings are unravelled. Among these groups we have the Coxeter groups of types $B_3,B_4,B_5,B_6,B_7,B_8,D_3,D_4,D_5,D_6,D_7,D_8$, joined by a number of groups which are unusual in some respect. We single out for mention the non-split extensions $3^{\boldsymbol{\cdot}}\Sym{6}$ and $3^{\boldsymbol{\cdot}}\Sym{7}$. Each of these groups has a unique unravelled C-string of rank 4. The remainder of that section highlights various properties of these along with other unravelled C-strings. \section{Preliminaries} Although we shall work exclusively in the group theory context, we say a few words about abstract regular polytopes. An abstract polytope $\mathcal{P}$ is a ranked partially ordered set, whose elements are usually called faces, satisfying four axioms. First, it is assumed that $\mathcal{P}$ has a smallest and a largest face.
We assume the ranks are precisely $I = \{1,\ldots,n\}$, so $\mathcal{P}$ has rank $n$, with the smallest and largest faces conventionally given ranks $0$ and $n+1$ respectively. A flag (or chamber) of $\mathcal{P}$ is a maximal totally ordered chain in $\mathcal{P}$. Two flags $F_1$ and $F_2$ of $\mathcal{P}$ are said to be $i$-adjacent, where $i \in I$, if they differ in exactly one face of rank (or type) $i$. The chamber graph of $\mathcal{P}$ has the flags of $\mathcal{P}$ as its vertices, with two flags adjacent in the chamber graph if they are $i$-adjacent for some $i \in I$. The other three axioms that $\mathcal{P}$ must fulfill are that all flags of $\mathcal{P}$ have the same number of faces, that $\mathcal{P}$ be strongly connected and that it satisfy the diamond property. For $\mathcal{P}$ to be strongly connected each of its sections (including $\mathcal{P}$ itself) must be connected. The definition of sections, and further background, may be found in Sections 2B and 2E of McMullen, Schulte \cite{ARP}. The diamond property requires that for every flag $F$ and $i \in I$, there is precisely one other flag in $\mathcal{P}$ which is $i$-adjacent to $F$. An abstract polytope $\mathcal{P}$ is called regular if its automorphism group, $\mathrm{Aut}(\mathcal{P})$, acts transitively on the set of flags of $\mathcal{P}$. This leads, via $\mathrm{Aut}(\mathcal{P})$, to being able to translate abstract regular polytopes entirely into group theoretic data; see Section 2E of McMullen, Schulte \cite{ARP} for details. That group theoretic entity is a string C-group. Suppose $G$ is a group, and let $\{t_1,\ldots,t_n\}$ be a set of involutions in $G$. Set $I = \{1,\ldots,n\}$, and for $J\subseteq I$ define $G_J = \langle t_j \mid j \in J\rangle$, setting $G_J =1$ when $J = \emptyset$. Sometimes we write $G_J$ as $G_{j_1\dotsm j_k}$ when $J=\{j_1,\ldots,j_k\}$.
We say $\{t_1,\ldots,t_n\}$ is a C-string for $G$ if \begin{enumerate}[$(i)$] \item $G = \langle t_1,\ldots,t_n \rangle$; \item $t_i$ and $t_j$ commute whenever $|i-j| \ge 2$; and \item for $J,K \subseteq I$, $G_J \cap G_K = G_{J \cap K}$. \end{enumerate} We note that $(iii)$ is usually referred to as the intersection property. A group possessing a C-string (and a given group may have many) is called a string C-group. The Schläfli symbol of a C-string $\{t_1,\ldots,t_n\}$ is the sequence $[\tau_{12},\tau_{23},\ldots,\tau_{n-1n}]$ where $\tau_{jj+1}$ is the order of $t_jt_{j+1}.$ In Sections 3 and 4 we will be investigating C-strings in the group $G = \mathrm{SL}_3(q) \rtimes \langle t \rangle $ where $q$ is some prime power with $3 \mid q-1$ and $t$ is the transpose inverse automorphism of $\mathrm{SL}_3(q)$. We close this section by establishing some relevant notation. Put $H = \mathrm{SL}_3(q)$ and let $U$ be the natural 3-dimensional $\mathrm{GF}(q)H$-module. Set $V = U \oplus U^*$, where $U^*$ is the dual of $U$. Choosing a basis for $U$ and a dual basis for $U^*$ (viewing $U$ and $U^*$ as subspaces of $V$) we may take $t$ to be $ t = \left(\begin{array}{c|c} & I_3 \\ \hline I_3 & \\ \end{array}\right). $ We note that $G$ has two conjugacy classes of involutions, namely $t^G$ and $s^G$ where $s \in G' = H$. These classes may be easily distinguished, as $\dim C_V(t) = 3$ whereas $\dim C_V(s) = 2$. Also, since $3 \mid q-1$, $G$ has shape $3^{\boldsymbol{\cdot}}L_3(q):2.$ Our group theoretic notation is standard, as given, for example, in \cite{suzuki1982group}. Additionally, we use $\Sym{n}$ and $\Alt{n}$ to denote, respectively, the symmetric and alternating groups of degree $n$. \section{C-strings with Schläfli symbol $[4,q+1,4]$} In this section we prove Theorem \ref{thm 1.2} in a series of steps. We use the set-up given at the end of Section 2. Since $6 \mid q-1$, we may select $\rho \in \mathrm{GF}(q)^\ast $ such that $\rho$ has multiplicative order 6.
Further, we have $\lambda, \mu \in \mathrm{GF}(q)$ for which $2\lambda^2=1$ and $2\mu^2-\lambda^2=0$. We now introduce five other elements of $\mathrm{GF}(q)$. \par\quad\par \begin{personalEnvironment} $\mkern-7mu \boldsymbol{)}$ \label{noteworthyElements} \begin{align*} \alpha &= (\mu^{-2}-1)^{-1}\\ \beta &= 2(\mu^{-2}-1)^{-1}\lambda\mu^{-1}\\ \xi &= \rho^2+(1-\rho^2)2^{-1}\\ \eta &= (1-\rho^2)2^{-1}\\ \tau &= \rho^4 \end{align*} \end{personalEnvironment} Observe that $\alpha = 3^{-1}$: from $2\mu^2 = \lambda^2 = 2^{-1}$ we get $\mu^{2} = 4^{-1}$, and so $\mu^{-2} = 4.$ Therefore $\alpha = (\mu^{-2} -1)^{-1} = 3^{-1}$. Also, since $\beta = 2\alpha\lambda\mu^{-1}$, $$\beta^2 = 4\alpha^2\lambda^2\mu^{-2} = 8\alpha^2.$$ Hence $\alpha^2 + \beta^2 = 9\alpha^2 = 9(3^{-1})^2 = 1.$ Using these elements we now define our C-string, $\{t_1,t_2,t_3,t_4\}$. We shall show that $\{t_1,t_2,t_3,t_4\}$ is an unravelled C-string for $G$, where the $t_i$ are specified as follows.
\begin{personalEnvironment}$\boldsymbol{\mkern-7mu)}$\label{generators} \begin{align*} t_1 &= \left(\begin{array}{c c c|c c c } & & & \mu & \phantom{-}\lambda &\phantom{-}\mu\\ &\phantom{-}\makebox(0,0){\text{\huge0}} & & \lambda & \phantom{-}0 &-\lambda\\ & & & \mu & -\lambda &\phantom{-}\mu\\ \hline \mu & \phantom{-}\lambda &\phantom{-}\mu& & &\\ \lambda & \phantom{-}0 &-\lambda& &\phantom{-}\makebox(0,0){\text{\huge0}} &\\ \mu & -\lambda &\phantom{-}\mu& & &\\ \end{array}\right) \\ t_2 &= \left(\begin{array}{c c c|c c c } -1& & & & & \\ & 1& & &\makebox(0,0){\text{\huge0}} & \\ & & -1& & &\\ \hline & & &-1& & \\ & \makebox(0,0){\text{\huge0}}& && 1 & \\ & & && & -1\\ \end{array}\right) = \mathrm{diag}(-1,1,-1,-1,1,-1) \\ t_3 &= \left(\begin{array}{c c c|c c c } \phantom{-}\alpha& \phantom{-}\beta& \phantom{-}0& & & \\ \phantom{-}\beta& -\alpha&\phantom{-}0 & &\makebox(0,0){\text{\huge0}} & \\ \phantom{-}0&\phantom{-}0 & -1& & &\\ \hline & & &\phantom{-}\alpha&\phantom{-}\beta & \phantom{-}0\\ & \phantom{-}\makebox(0,0){\text{\huge0}}& &\phantom{-}\beta& -\alpha &\phantom{-}0 \\ & & &\phantom{-}0&\phantom{-}0 & -1\\ \end{array}\right) \\ t_4 &= \left(\begin{array}{c c c|c c c } & & & \xi & 0 &\eta\\ & \makebox(0,0){\text{\huge0}}& & 0 & \tau &0\\ & & & \eta &0 &\xi\\ \hline \xi\rho^{-2} & 0 &\eta\rho& & &\\ 0 & \tau\rho^{-2} &0& &\makebox(0,0){\text{\huge0}} &\\ \eta\rho &0 &\xi\rho^{-2}& & &\\ \end{array}\right) \\ \end{align*} \end{personalEnvironment} \begin{personalEnvironment}$\boldsymbol{\mkern-7mu )}$\label{involutionsLemma} For $i=1,2,3,4$, $t_i$ are involutions with $t_1,t_4 \in t^G$ and $t_2,t_3 \in s^G$. \end{personalEnvironment} \par The diagonal blocks of $t_2$ and $t_3$ are easily seen to be involutions, and so $t_2$ and $t_3$ are involutions. 
Since \begin{equation*} \left(\begin{array}{c c c } \mu & \lambda &\mu\\ \lambda & 0 &-\lambda\\ \mu & -\lambda &\mu\\ \end{array}\right)^2 \\ = \left(\begin{array}{c c c } 2\mu^2+\lambda^2 & 0 &2\mu^2-\lambda^2\\ 0 & 2\lambda^2 &0\\ 2\mu^2-\lambda^2 &0 &2\mu^2+\lambda^2\\ \end{array}\right) \\ \end{equation*} the conditions on $\mu$ and $\lambda$ imply that $t_1$ is an involution. \par Moving onto $t_4$, we look at the product \begin{equation*} \left(\begin{array}{c c c } \xi & 0 &\eta\\ 0 & \tau &0\\ \eta &0 &\xi\\ \end{array}\right) \left(\begin{array}{c c c } \xi\rho^{-2} & 0 &\eta\rho\\ 0 & \tau\rho^{-2} &0\\ \eta\rho &0 &\xi\rho^{-2}\\ \end{array}\right) \\ = \left(\begin{array}{c c c } \xi^2\rho^{-2}+\eta^2\rho & 0 &\xi\eta\rho+\eta\xi\rho^{-2}\\ 0 & \tau^2\rho^{-2} &0\\ \eta\xi\rho^{-2}+\xi\eta\rho & 0 &\eta^2\rho+\xi^2\rho^{-2}\\ \end{array}\right) = A. \end{equation*} Note that $\rho^3$ has multiplicative order 2, and so $\rho^3=-1$. Now \begin{align*} \eta\xi\rho^{-2}+\xi\eta\rho&=\eta\xi\rho^{-2}(1+\rho^3)\\ &= \eta\xi\rho^{-2}(1+-1)=0, \end{align*} and using (\ref{noteworthyElements}) we have \begin{align*} \tau^2\rho^{-2}=\rho^8\rho^{-2}=\rho^6=1 \end{align*} Again, from (\ref{noteworthyElements}) \begin{align*} \xi&=\rho^2+\eta && \\ \xi^2&=\rho^4+2\rho^2\eta+\eta^2 && \\ \xi^2\rho^{-2}&=\rho^2+2\eta+\eta^2\rho^{-2} && \\ \xi^2\rho^{-2}+\eta^2\rho&=\rho^2+2\eta+\eta^2\rho^{-2}+ \eta^2\rho&& \\ &=\rho^2+(1-\rho^2)+\eta^2\rho^{-2} +\eta^2\rho&& \\ \end{align*} as $2\eta=1-\rho^2$. Then, as $\eta^2\rho^{-2}+\eta^2\rho = \eta^2\rho^{-2}(1+\rho^3)=0$, we get $$\xi^2\rho^{-2}+\eta^2\rho = 1. $$ Hence $A=I_3$, whence $t_4$ is also an involution. Since $\dim C_V(t_i)=3$ for $i=1,4$ and $\dim C_V(t_i)=2$ for $i=2,3$, (\ref{involutionsLemma}) is proved. 
\par\quad\par\begin{personalEnvironment}$\boldsymbol{\mkern-7mu )}$\label{lemma4} \begin{align*} C_G(t) &= \langle t \rangle \times C_H(t) \cong 2 \times \mathrm{SO}_3(q) \cong 2 \times \mathrm{PGL}_2(q) \end{align*} Because $t$ acts upon $H$ as the transpose inverse automorphism, $C_H(t)$ consists of all orthogonal matrices of determinant 1. The well-known isomorphism $\mathrm{SO}_3(q) \cong \mathrm{PGL}_2(q)$ (see \cite{taylor1992geometry}) now gives (\ref{lemma4}). \end{personalEnvironment} \par We define $$ r = \left(\begin{array}{c c c|c c c } & & & \rho & 0 &0\\ & \makebox(0,0){\text{\huge0}}& & 0 & \rho &0\\ & & & 0 &0 &\rho^{-2}\\ \hline \rho^{-1} & 0 &0& & &\\ 0 & \rho^{-1} &0& &\makebox(0,0){\text{\huge0}} &\\ 0 &0 &\rho^{2}& & &\\ \end{array}\right). $$ Observe that $r \in t^G$ and so $C_G(r) \cong 2 \times \mathrm{PGL}_2(q).$ \par\quad\par\begin{personalEnvironment}$\boldsymbol{\mkern-7mu )}$\label{lemma5} $tr= \mathrm{diag}(\rho^{-1},\rho^{-1},\rho^{2},\rho,\rho,\rho^{-2}) \in H$ has order 6 and $(tr)^2 \in Z(H)$. Further, $C_G(t) \cap C_G(r) \le C_G(tr) = C_H(tr) \cong \mathrm{GL}_2(q)$. Since $[G:H]=2,$ we have $tr \in H$ and, as $\rho$ has multiplicative order 6, $tr$ has order 6 with $(tr)^2 \in Z(H).$ Thus $C_G(tr)=C_H(tr)=C_H((tr)^3) \cong \mathrm{GL}_2(q).$ \end{personalEnvironment} \par\quad\par\begin{personalEnvironment}$\boldsymbol{\mkern-7mu )}$\label{lemma6} We have $t_1,t_2,t_3 \in C_G(t)$ and $t_2,t_3,t_4 \in C_G(r)$. \end{personalEnvironment} It is straightforward to check (\ref{lemma6}), though for $t_4r=rt_4$ we use the fact that $\rho^2=\rho^{-4}$.
\par\quad\par\begin{personalEnvironment}$\boldsymbol{\mkern-7mu )}$\label{lemma7} $C_G(t) \cap C_G(r) \cong \Dih{2(q-\epsilon)}$ where $\epsilon = \pm 1.$ \end{personalEnvironment} First we observe that $C_G(t) \cap C_G(r) = C_{C_G(tr)}(t).$ Since $C_G(tr)=C_H(tr)\cong \mathrm{GL}_2(q)$ by (\ref{lemma5}) and $t$ acts by transpose inverse upon $C_H(tr)$, $C_{C_G(tr)}(t) \cong \O_2^\epsilon(q)$ (the 2-dimensional orthogonal group of type $\epsilon$). Since $\O_2^\epsilon(q) \cong \Dih{2(q-\epsilon)}$ (see \cite{taylor1992geometry}), we have (\ref{lemma7}). \par\quad\par\begin{personalEnvironment}$\boldsymbol{\mkern-7mu )}$\label{lemma8} The order of $t_1t_2$ is 4. \end{personalEnvironment} We have $t_1t_2 = \left(\begin{array}{c | c } &A\\ \hline A& \end{array}\right)$ where $A= \left(\begin{array}{c c c } -\mu&\phantom{-}\lambda&-\mu\\ -\lambda&\phantom{-}0&\phantom{-}\lambda\\ -\mu&-\lambda&-\mu\\ \end{array}\right)$. Now $A^2 = \left(\begin{array}{c c c } 2\mu^2-\lambda^2&0&2\mu^2+\lambda^2\\ 0&-2\lambda^2&0\\ 2\mu^2+\lambda^2&0&2\mu^2-\lambda^2\\ \end{array}\right)$ and hence $A^2 = \left(\begin{array}{c c c } 0&\phantom{-}0&\phantom{-}1\\ 0&-1&\phantom{-}0\\ 1&\phantom{-}0&\phantom{-}0\\ \end{array}\right)$. Therefore $t_1t_2$ has order 4. \par\quad\par\begin{personalEnvironment}$\boldsymbol{\mkern-7mu )}$\label{lemma9} $t_1t_3 = t_3t_1$. \end{personalEnvironment} Let $A= \left(\begin{array}{c c c } \mu&\phantom{-}\lambda&\phantom{-}\mu\\ \lambda&\phantom{-}0&-\lambda\\ \mu&-\lambda&\phantom{-}\mu\\ \end{array}\right)$ and $B= \left(\begin{array}{c c c } \alpha&\phantom{-}\beta&\phantom{-}0\\ \beta&-\alpha&\phantom{-}0\\ 0&\phantom{-}0&-1\\ \end{array}\right).$ Then $t_1t_3 = t_3t_1$ provided $AB=BA$.
Now \begin{align*} AB &= \left(\begin{array}{c c c } \mu\alpha + \lambda\beta&\mu\beta - \alpha\lambda&-\mu\\ \lambda\alpha&\lambda\beta&\phantom{-}\lambda\\ \mu\alpha - \lambda\beta&\mu\beta + \alpha\lambda&-\mu\\ \end{array}\right) &&\text{ and }\\ BA&= \left(\begin{array}{c c c } \alpha\mu + \beta\lambda&\alpha\lambda&\alpha\mu - \beta\lambda\\ \beta\mu - \alpha\lambda&\beta\lambda&\beta\mu + \alpha\lambda\\ -\mu&\phantom{\alpha}\lambda&-\mu\\ \end{array}\right). \end{align*} So we need to know that \begin{align*} \alpha\lambda &= \mu\beta-\alpha\lambda,\\ -\mu &= \alpha\mu-\beta\lambda \quad \text{and}\\ \lambda &= \beta\mu+\alpha\lambda.\\ \end{align*} Since $\mu\beta = 2\mu(\mu^{-2}-1)^{-1}\lambda\mu^{-1}=2(\mu^{-2}-1)^{-1}\lambda = 2\alpha\lambda,$ we have $\alpha\lambda = \mu\beta-\alpha\lambda$. From $\lambda\beta = 2\lambda(\mu^{-2}-1)^{-1}\lambda\mu^{-1}=(\mu^{-2}-1)^{-1}\mu^{-1}=\alpha\mu^{-1}$, since $2\lambda^2 = 1$, we get \begin{align*} \mu\alpha-\lambda\beta &= \mu\alpha - \alpha\mu^{-1}\\ &= \mu\alpha(1-\mu^{-2})\\ &= \mu(\mu^{-2}-1)^{-1}(1-\mu^{-2})\\ &= -\mu. \end{align*} Finally we show $\lambda=\beta\mu+\alpha\lambda$. Using $\mu\beta=2\alpha\lambda$ (established above), we have \begin{align*} \mu\beta + \alpha\lambda &= 2\alpha\lambda+\alpha\lambda\\ &= 3\alpha\lambda \\ &= 3(\mu^{-2}-1)^{-1}\lambda\\ &=3 \cdot 3^{-1}\lambda = \lambda, \end{align*} as $4\mu^2=1$ implies $\mu^{-2}-1=3.$ Hence (\ref{lemma9}) holds. \par\quad\par\begin{personalEnvironment}$\boldsymbol{\mkern-7mu )}$\label{lemma10} The order of $t_2t_3$ is $q+1$ and $C_G(t) \cap C_G(r) = \langle t_2,t_3\rangle.$ \end{personalEnvironment} We use that \begin{align*} t_2t_3 &= \left(\begin{array}{c | c } X&\\ \hline &X \end{array}\right) \text{ where } X = \left(\begin{array}{c c c } -\alpha & -\beta&0\\ \beta & -\alpha&0\\ 0&0&1\\ \end{array}\right).
\end{align*} Hence the order of $t_2t_3$ is the same as the order of $Y$ where $Y = \left(\begin{array}{c c } -\alpha & -\beta\\ \beta & -\alpha\\ \end{array}\right).$ Recalling that $\alpha^2 + \beta^2 = 1$, the characteristic polynomial of $Y$ is $$ x^2 +2\alpha x +1.$$ Therefore the eigenvalues of $Y$ are $-\alpha \pm (\alpha^2-1)^{1/2} = -3^{-1}\pm(3^{-2}-1)^{1/2}.$ If these two eigenvalues were equal, then $2(\alpha^2 - 1)^{1/2} = 0$, which implies the impossible $\alpha^2 = 1$. So the two eigenvalues of $Y$ are different. Consequently $Y$ is diagonalizable in $\mathrm{GL}_2(q^2)$ and hence, by assumption $(iii)$ of Theorem \ref{thm 1.2}, $Y$ has order $q+1$. Hence, using (\ref{lemma6}) and (\ref{lemma7}), we obtain $C_G(t) \cap C_G(r) = \langle t_2,t_3 \rangle.$ \par\quad\par\begin{personalEnvironment}$\boldsymbol{\mkern-7mu )}$\label{lemma11.5} $[t_2,t_4] = 1$. \end{personalEnvironment} Since $t_2$ is a diagonal matrix with $1$ and $-1$ as its only diagonal entries, a matrix commutes with $t_2$ if and only if it is of the form $$\left(\begin{array}{c c c |c c c } *&0&*&*&0&*\\ 0&*&0&0&*&0\\ *&0&*&*&0&*\\ \hline *&0&*&*&0&*\\ 0&*&0&0&*&0\\ *&0&*&*&0&*\\ \end{array}\right),$$ and $t_4$ is of this form. \par\quad\par\begin{personalEnvironment}$\boldsymbol{\mkern-7mu )}$\label{lemma12} $[t_1,t_4] = 1$. \end{personalEnvironment} Writing $t_1 = \left(\begin{array}{c | c } &A\\ \hline A& \end{array}\right)$ and $t_4 = \left(\begin{array}{c | c } &C\\ \hline D& \end{array}\right)$, (\ref{lemma12}) will hold if we show that $AD=CA$ and $AC=DA$.
Calculating gives \begin{align*} AD &= \left(\begin{array}{c c c } \mu\xi\rho^{-2}+\mu\eta\rho&\lambda\tau\rho^{-2}&\mu\eta\rho+\mu\xi\rho^{-2}\\ \lambda\xi\rho^{-2}-\lambda\eta\rho^{}&0&\lambda\eta\rho-\lambda\xi\rho^{-2}\\ \mu\xi\rho^{-2}+\mu\eta\rho &-\lambda\tau\rho^{-2}&\mu\eta\rho+\mu\xi\rho^{-2}\\ \end{array}\right) \quad\text{and}\\ CA &= \left(\begin{array}{c c c } \xi\mu + \eta\mu&\xi\lambda-\eta\lambda&\xi\mu + \mu\eta\\ \tau\lambda&0&-\tau\lambda\\ \eta\mu+\xi\mu&\eta\lambda-\xi\lambda&\eta\mu+\xi\mu\\ \end{array}\right). \end{align*} Therefore $AD=CA$ holds provided \begin{align*} \mu\xi\rho^{-2}+ \mu\eta\rho &= \xi\mu+\eta\mu,\\ \lambda\xi\rho^{-2}- \lambda\eta\rho &= \tau\lambda \quad \text{and}\\ \lambda\tau\rho^{-2} &= \xi\lambda-\eta\lambda.\\ \end{align*} Since $\lambda \ne 0$ and $\mu \ne 0$ this is equivalent to showing that \begin{align*} \xi\rho^{-2}+\eta\rho &= \xi+\eta, \\ \xi\rho^{-2}-\eta\rho &= \tau \quad \text{and} \\ \tau\rho^{-2} &= \xi-\eta. \end{align*} First we observe that $\xi = \rho^2 + \eta$, and recall that $\rho^3=-1$. Hence \begin{align*} \xi+\eta &= \rho^2+2\eta \\ &= \rho^2+2(1-\rho^2)2^{-1}\\ &=\rho^2+1-\rho^2 = 1. \end{align*} While \begin{align*} \xi\rho^{-2}+\eta\rho &= (\rho^2+\eta)\rho^{-2} + \eta\rho\\ &=1+\eta\rho^{-2} + \eta\rho\\ &=1+\eta\rho^{-2}(1+\rho^3)\\ &=1+\eta\rho^{-2}(1-1)=1. \end{align*} Next, \begin{align*} \xi\rho^{-2}-\eta\rho &= (\rho^2+\eta)\rho^{-2}- \eta\rho\\ &= 1 + \eta\rho^{-2}-\eta\rho\\ &=\rho^4(\rho^2+\eta-\eta\rho^{-3})\\ &=\rho^4(\rho^2+2\eta), \end{align*} and substituting for $\eta$ yields \begin{align*} \xi\rho^{-2} - \eta\rho&= \rho^4(\rho^2+2(1-\rho^2)2^{-1})\\ &=\rho^4=\tau. \end{align*} Since $\xi -\eta= \rho^2+\eta-\eta=\rho^2=\rho^4\rho^{-2}=\tau\rho^{-2},$we have shown that $AD=CA.$ Similar considerations verify that $AC=DA$, whence (\ref{lemma12}) holds. \par\quad\par\begin{personalEnvironment}$\boldsymbol{\mkern-7mu )}$\label{lemma12.5} $t_3t_4$ has order 4. 
\end{personalEnvironment} Let \begin{align*} X &= \left(\begin{array}{c c c } \alpha & \beta & 0\\ \beta & -\alpha & 0\\ 0 & 0 & -1 \end{array}\right),\\ A &= \left(\begin{array}{c c c } \xi& 0 & \eta\\ 0 & \tau & 0\\ \eta & 0 & \xi \end{array}\right) \text{ and}\\ B &= \left(\begin{array}{c c c } \xi\rho^{-2}& 0 & \eta\rho\\ 0 & \tau\rho^{-2} & 0\\ \eta\rho & 0 & \xi\rho^{-2} \end{array}\right). \end{align*} To show that $t_3t_4$ has order 4 we verify that $(t_3t_4)^2$ is an involution. Now \begin{align*} (t_3t_4)^2 &= \left(\begin{array}{ c | c } XAXB& \\ \hline &XBXA\\ \end{array}\right). \end{align*} We will see in a moment that the $(3,2)$-entry of $XBXA$ is non-zero, so $(t_3t_4)^2 \ne 1$. Thus, recalling that $X=X^{-1}$ and $A^{-1}= B$, we must show \begin{align*} XAXB &= (XAXB)^{-1} = AXBX \text{ and }\\ XBXA &= (XBXA)^{-1} = BXAX. \end{align*} Observe that $XBXA=BXAX$ implies $$A(XBXA)B = A(BXAX)B,$$ giving $AXBX = XAXB.$ Hence it suffices to show that $XBXA = BXAX.$ We calculate that \begin{align*} BXAX &= \left(\begin{array}{c c c } \alpha^2\xi^2\rho^{-2}-\alpha\eta^2\rho+\beta^2\xi\tau\rho^{-2} & \alpha\beta\xi^2\rho^{-2}-\beta\eta^2\rho-\alpha\beta\xi\tau\rho^{-2} & \eta\xi\rho-\alpha\xi\eta\rho^{-2}\\ \alpha\beta\xi\tau\rho^{-2}-\alpha\beta\tau^2\rho^{-2} & \beta^2\xi\tau\rho^{-2}+\alpha^2\tau^2\rho^{-2} & -\beta\eta\tau\rho^{-2}\\ \alpha^2\xi\eta\rho-\alpha\xi\eta\rho^{-2}+\beta^2\tau\eta\rho & \alpha\beta\xi\eta\rho-\beta\xi\eta\rho^{-2}-\alpha\beta\tau\eta\rho & \eta^2\rho^{-2}-\alpha\xi^2\rho\\ \end{array}\right) \text{ and} \\ XBXA &= \left(\begin{array}{c c c } \alpha\xi^2\alpha\rho^{-2}-\alpha\eta^2\rho+\beta^2\tau\rho^{-2}\xi & \alpha\xi\rho^{-2}\beta\tau - \beta\tau^2\rho^{-2}\alpha & \alpha^2\xi\rho^{-2}\eta - \alpha\eta\rho\xi+\beta^2\tau\rho^{-2}\eta\\ \beta\xi^2\alpha\rho^{-2}-\beta\eta^2\rho-\alpha\tau\rho^{-2}\beta\xi & \beta^2\xi\rho^{-2}\tau + \alpha^2\tau^2\rho^{-2} &
\beta\xi\rho^{-2}\alpha\eta-\beta\eta\rho\xi-\alpha\tau\rho^{-2}\beta\eta\\ -\eta\rho\alpha\xi + \xi\rho^{-2}\eta & \eta\rho\beta\tau & -\eta^2\rho\alpha+\xi^2\rho^{-2} \end{array}\right). \end{align*} First we note that $XBXA$ and $BXAX$ have the same diagonal entries. For the $(2,1)$-entry of $XBXA$ and $BXAX$ we require $$\alpha\beta\xi\tau\rho^{-2}-\alpha\beta\tau^2\rho^{-2} = \beta\xi^2\alpha\rho^{-2}-\beta\eta^2\rho-\alpha\tau\rho^{-2}\beta\xi. $$ Multiplying through by $\beta^{-1}\rho^2$ this is equivalent to $$\tau\xi\alpha - \alpha\tau^2 = \xi^2\alpha+\eta^2 - \alpha\tau\xi, $$ using $\rho^3 = -1$. Since $-2\xi = \tau$, this is equivalent to $$-2\alpha\tau^2 = \xi^2\alpha+\eta^2. $$ Substituting for $\xi,\eta$ and $\alpha = 3^{-1}$ reduces this to $$0 = 1 + \rho^2 +\rho^4, $$ which holds. Therefore $XBXA$ and $BXAX$ have the same $(2,1)$-entry. Similarly we may check that all the off-diagonal entries of $XBXA$ and $BXAX$ are equal. Therefore $XBXA = BXAX$ and hence (\ref{lemma12.5}) holds. \par\quad\par\begin{personalEnvironment}$\boldsymbol{\mkern-7mu )}$\label{lemma11} $\langle t_1,t_2,t_3 \rangle = C_G(t)$ and $\langle t_2,t_3,t_4 \rangle = C_G(r)$. \end{personalEnvironment} Since $t_1,t_2 \in C_G(t)$ with $t_2 \in C_H(t) \trianglelefteq C_G(t)$, we have $[t_1,t_2] \in C_H(t) \cong \mathrm{PGL}_2(q)$. Now, from (\ref{lemma8}), $$[t_1,t_2] = (t_1t_2)^2 = \left(\begin{array}{c c c |c c c } &&1&&&\\ &-1&&&&\\ 1&&&&&\\ \hline &&&&&1\\ &&&&-1&\\ &&&1&&\\ \end{array}\right).$$ A quick calculation reveals that $[t_1,t_2] \notin C_G(r)$, and so $[t_1,t_2] \notin C_G(t)\cap C_G(r).$ By (\ref{lemma7}) and \cite{suzuki1982group}, $C_G(t) \cap C_G(r)$ is a maximal subgroup of $C_H(t)$, whence, as $t_1 \notin C_H(t),$ we infer that $\langle t_1,t_2,t_3 \rangle = C_G(t)$.
Similar considerations show that $\langle t_2,t_3,t_4\rangle = C_G(r).$ \par\quad\par\begin{personalEnvironment}$\boldsymbol{\mkern-7mu )}$\label{lemma13} $\{t_1,t_2,t_3,t_4 \}$ is a C-string for $G$ with Schläfli symbol $[4,q+1,4].$ \end{personalEnvironment} This comes from combining the fact that $C_H(t)$ is a maximal subgroup of $H$ (see \cite{Mitchell}) with (\ref{lemma8}), (\ref{lemma9}), (\ref{lemma10}), (\ref{lemma11.5}), (\ref{lemma12}) and (\ref{lemma12.5}). \par\quad\par\begin{personalEnvironment}$\boldsymbol{\mkern-7mu )}$\label{lemma14} $\{t_1,t_2,t_3,t_4 \}$ is an unravelled C-string of $G$. \par \end{personalEnvironment} The only non-trivial proper normal subgroups of $G$ are $H$ and $Z(H)$. For $N = H$ we have $|\{\overline{t_1},\overline{t_2},\overline{t_3},\overline{t_4}\}| \le 2 < 4$, since $[G:H]=2$, so we only need show $\{t_1,t_2,t_3,t_4\}$ is $Z(H)$-unravelled. Put $G_{123}=\langle t_1,t_2,t_3\rangle$, $G_{234}=\langle t_2,t_3,t_4\rangle$ and $\overline{G}=G/Z(H)$. Since $Z(H)= \langle \mathrm{diag}(\rho^{-2},\rho^{-2},\rho^{-2},\rho^{2},\rho^{2},\rho^{2}) \rangle$, we see that $|\{\overline{t_1},\overline{t_2},\overline{t_3},\overline{t_4}\}|=4$. Also $\langle \overline{t_2},\overline{t_3}\rangle \cong C_G(t) \cap C_G(r) \cong \Dih{2(q+1)}$, as $\langle {t_2},{t_3}\rangle \cap Z(H) = 1$. Now $\overline{G}_{123}=C_{\overline{G}}(\overline{t})$ and $\overline{G}_{234}=C_{\overline{G}}(\overline{r}),$ as the orders of $t$ and $r$ are coprime to $|Z(H)|$. From (\ref{lemma5}), $\overline{t}\overline{r}=\overline{tr}$ has order 2; that is, $\overline{t}$ and $\overline{r}$ commute. So $\overline{t},\overline{r} \in \overline{G}_{123} \cap \overline{G}_{234}$ and therefore $\overline{G}_{123} \cap \overline{G}_{234} \gneqq \langle \overline{t_2},\overline{t_3} \rangle$. Consequently, the intersection property fails for $\{\overline{t_1},\overline{t_2},\overline{t_3},\overline{t_4}\}$. Together (\ref{lemma13}) and (\ref{lemma14}) prove Theorem \ref{thm 1.2}.
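Since the proof above is constructive, it can be double-checked by machine. The sketch below (our own illustration in Python rather than \textsc{Magma}; the helper names are ours and not from the paper) rebuilds the four generating matrices over $\mathrm{GF}(7)$, the smallest field satisfying hypotheses $(i)$--$(iii)$ of Theorem \ref{thm 1.2} (with $\rho = 3$, $\lambda = 2$, $\mu = 3$), and confirms the involution, commutation and order relations:

```python
# Numeric sanity check (ours, not part of the paper) of the Section 3
# C-string relations for q = 7: rho = 3 has order 6, 2*lambda^2 = 1 and
# 2*mu^2 = lambda^2 hold for lambda = 2, mu = 3 in GF(7).
import numpy as np

q, rho, lam, mu = 7, 3, 2, 3

def inv(a):
    return pow(int(a) % q, q - 2, q)          # inverse in GF(q), q prime

alpha = inv(inv(mu * mu) - 1)                 # = 3^{-1}
beta = 2 * alpha * lam * inv(mu) % q
eta = (1 - rho * rho) * inv(2) % q
xi = (rho * rho + eta) % q
tau = pow(rho, 4, q)
ri2 = pow(rho, 4, q)                          # rho^{-2} = rho^4

A = np.array([[mu, lam, mu], [lam, 0, -lam], [mu, -lam, mu]]) % q
B = np.array([[alpha, beta, 0], [beta, -alpha, 0], [0, 0, -1]]) % q
C = np.array([[xi, 0, eta], [0, tau, 0], [eta, 0, xi]]) % q
D = np.array([[xi * ri2, 0, eta * rho], [0, tau * ri2, 0],
              [eta * rho, 0, xi * ri2]]) % q
d = np.diag([-1, 1, -1]) % q
Z = np.zeros((3, 3), dtype=int)

t1 = np.block([[Z, A], [A, Z]])
t2 = np.block([[d, Z], [Z, d]])
t3 = np.block([[B, Z], [Z, B]])
t4 = np.block([[Z, C], [D, Z]])
I6 = np.eye(6, dtype=int)

def mm(X, Y):
    return (X @ Y) % q

def order(M):
    P, k = M % q, 1
    while not np.array_equal(P, I6):
        P, k = mm(P, M), k + 1
    return k

assert all(np.array_equal(mm(t, t), I6) for t in (t1, t2, t3, t4))
assert np.array_equal(mm(t1, t3), mm(t3, t1))     # non-adjacent generators
assert np.array_equal(mm(t1, t4), mm(t4, t1))     # commute, as required
assert np.array_equal(mm(t2, t4), mm(t4, t2))
assert (order(mm(t1, t2)), order(mm(t2, t3)), order(mm(t3, t4))) == (4, 8, 4)
print("C-string relations with Schlafli symbol [4, 8, 4] verified for q = 7")
```

The same script, run over $\mathrm{GF}(q)$ for any other $q$ meeting the hypotheses, offers a quick independent check of the Schläfli symbol $[4,q+1,4]$.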
\section{C-strings with Schläfli symbol $[4,p,4]$} As mentioned in Section 2, this section is concerned with proving Theorem \ref{thm 1.3}. Again we employ the notational set-up described at the end of Section 2. Here $G = \mathrm{SL}_3(p) \rtimes \langle t \rangle$ where $p$ is a prime such that $p \equiv 1 \pmod 3$ and $p \equiv 5 \pmod 8$. Because $p \equiv 1 \pmod 3$ we may choose, and keep fixed, $\rho \in \mathrm{GF}(p)$ of multiplicative order 3. Further, $p \equiv 5 \pmod 8$ means we may choose $\iota \in \mathrm{GF}(p)$, also now to be fixed, such that $\iota^2 = -1$. Set $\alpha = \sqrt{(1+\rho^2)^{-1}}$, again making a choice from the (at most) two possibilities. Now we define a number of further elements of $\mathrm{GF}(p)$. \begin{personalEnvironment}\label{(4.1)}$\boldsymbol{\mkern-7mu )}$ \end{personalEnvironment} \par \begin{align*} \lambda &= \alpha(\iota+1)(-1+\rho-\iota\rho^2)\\ \epsilon &= -\iota\lambda\\ \beta &= -2^{-1}\lambda^2\iota\\ \gamma &= 2^{-1}\lambda^2-1\\ \delta &= -1 -2^{-1}\lambda^2\\ \mu &= 1 - \rho.\\ \end{align*} Note that $\lambda \ne 0$ and $\lambda^2 = -\epsilon^2.$ Also recall that $1 + \rho+\rho^2 = 0$ and so $\alpha^2 = -\rho^2.$ Hence $\alpha \ne 0$. The elements in (\ref{(4.1)}) appear as entries in $\{t_1,t_2,t_3,t_4\}$, elements of $G$, which we now define.
\begin{personalEnvironment}\label{(4.2)}$\boldsymbol{\mkern-7mu )}$ \begin{align*} t_1 &= \left(\begin{array}{c c c|c c c } & & & \phantom{-}0 & \phantom{-}\alpha &-\alpha\rho\\ &\phantom{-}\makebox(0,0){\text{\huge0}} & & \phantom{-}\alpha & \phantom{-}\rho &\phantom{-}1\\ & & & -\alpha\rho & \phantom{-}1 &\phantom{-}\rho^2\\ \hline \phantom{-}0 & \phantom{-}\alpha &-\alpha\rho& & & \\ \phantom{-}\alpha & \phantom{-}\rho &\phantom{-}1&&\phantom{-}\makebox(0,0){\text{\huge0}} & \\ -\alpha\rho & \phantom{-}1 &\phantom{-}\rho^2& & &\\ \end{array}\right) \\ t_2 &= \left(\begin{array}{c c c|c c c } 1& & & & & \\ & -1& & &\makebox(0,0){\text{\huge0}} & \\ & & -1& & &\\ \hline & & &1& & \\ & \makebox(0,0){\text{\huge0}}& && -1 & \\ & & && & -1\\ \end{array}\right) = \mathrm{diag}(1,-1,-1,1,-1,-1) \\ t_3 &= \left(\begin{array}{c c c|c c c } 1& \lambda& \epsilon& & & \\ \lambda&\gamma&\beta & &\makebox(0,0){\text{\huge0}} & \\ \epsilon&\beta & \delta& & &\\ \hline & & &1& \lambda& \epsilon \\ & \makebox(0,0){\text{\huge0}} &&\lambda&\gamma&\beta \\ & & &\epsilon&\beta & \delta\\ \end{array}\right) \\ t_4 &= \left(\begin{array}{c c c|c c c } & & & -\rho & \phantom{-}0 &\phantom{-}0\\ & \makebox(0,0){\text{\huge0}}& & \phantom{-}0 &\phantom{-}0 & \phantom{-}\rho\\ & & & \phantom{-}0 &\phantom{-}\rho &-\mu\rho^2\\ \hline -\rho^{2} & 0 &0& & &\\ \phantom{-}0 & \mu &\rho^2& &\makebox(0,0){\text{\huge0}} &\\ \phantom{-}0 &\rho^{2}&0& & &\\ \end{array}\right). \end{align*} \end{personalEnvironment} In order to define a further element in $t^G$, we introduce more elements in $\mathrm{GF}(p)$. \begin{personalEnvironment}\label{(4.3)}$\boldsymbol{\mkern-7mu )}$ \begin{align*} a &= 2(2\rho^2+(1-\rho)\iota)^{-1}\\ x &= -2^{-1}a\rho(1-\rho)\\ y &= -x\rho\\ b &= a^{-1}(\rho^2 + x^2)\\ c &= a^{-1}(1+x^2\rho)\\ d &= a\rho\\ \end{align*} Observe that $a\ne0$, so $b$ and $c$ are well-defined. 
Now set \begin{align*} r &= \left(\begin{array}{c c c|c c c } & & & \rho & 0 &0\\ & \makebox(0,0){\text{\huge0}}& & 0 & a &x\\ & & & 0 &x &b\\ \hline \rho^{2} & 0 &0& & &\\ 0 & c &y& &\makebox(0,0){\text{\huge0}} &\\ 0 &y &d& & &\\ \end{array}\right). \end{align*} \end{personalEnvironment} \begin{personalEnvironment}\label{(4.4)}$\boldsymbol{\mkern-7mu )}$ \begin{enumerate}[$(i)$] \item $t_1,t_2,t_3,t_4$ and $r$ are involutions. \item $t_1,t_4,r \in t^G$ and $t_2,t_3 \in s^G.$ \end{enumerate} To show that $t_1$ is an involution, we must verify that $X^2 = I_3$ where $ X = \left(\begin{array}{c c c} 0&\alpha&-\alpha\rho\\ \alpha&\rho&1\\ -\alpha\rho&1&\rho^2\\ \end{array}\right).$ Now $X^2 = \left(\begin{array}{c c c} \alpha^2+\alpha^2\rho^2 & 0 &0\\ 0 & \alpha^2 + \rho^2 +1 & -\alpha^2\rho+\rho+\rho^2\\ 0 & -\alpha^2\rho+\rho+\rho^2& \alpha^2\rho^2+1+\rho^4 \end{array}\right)$ and using $\alpha^2 = -\rho^2,$ we see $X^2 = I_3.$ Similarly, using (\ref{(4.1)}), we may show $t_3$ is an involution. While it is straightforward to check that $t_2$ and $t_4$ are involutions, for $r$ it suffices, using (\ref{(4.3)}), to show that $$\left(\begin{array}{c c} a & x\\ x & b\\ \end{array}\right)^{-1}= \left(\begin{array}{c c} c & y\\ y & d\\ \end{array}\right), $$ so proving $(i)$. Since, by calculation, $\dim C_V(t_1) = \dim C_V(t_4) = \dim C_V(r) = 3$ and $\dim C_V(t_2) = \dim C_V(t_3) = 2,$ we have part $(ii)$. \end{personalEnvironment} \begin{personalEnvironment}\label{(4.5)}$\boldsymbol{\mkern-7mu )}$ $t_1t_3 = t_3t_1,$ $t_1t_4 = t_4t_1$ and $t_2t_4 = t_4t_2$. Checking $t_1t_4 = t_4t_1$ uses $\mu = 1 - \rho$, whereas $t_1t_3 = t_3t_1$ requires the definitions of $\lambda, \epsilon, \beta,\gamma$ and $\delta$. That $t_2t_4 = t_4t_2$ is easily seen. \end{personalEnvironment} \begin{personalEnvironment}\label{(4.6)}$\boldsymbol{\mkern-7mu )}$ \begin{enumerate}[$(i)$] \item $t_1t_2$ and $t_3t_4$ both have order 4. \item $t_2t_3$ has order $p$.
\end{enumerate} Part $(i)$ can be checked following the same strategy as in (\ref{lemma12.5}). Now $t_2t_3 = \left(\begin{array}{c | c} X &\\ \hline & X\\ \end{array}\right)$ where $X = \left(\begin{array}{c c c} 1 & \lambda & \epsilon\\ -\lambda & -\gamma & -\beta\\ -\epsilon & -\beta & -\delta\\ \end{array}\right).$ We demonstrate that $X$ has order $p$, from which $(ii)$ will follow. Consider $X$ acting on the 3-dimensional vector space $U$, setting $U_1 = C_U(X)$ and letting $U_2$ be the inverse image of $C_{U/U_1}(X)$ in $U$. For $(u,v,w) \in U$, we have $(u,v,w) \in U_1$ if and only if \begin{align*} u - \lambda v - \epsilon w &= u\\ \lambda u - \gamma v - \beta w &= v\\ \epsilon u - \beta v - \delta w &= w.\\ \end{align*} The first equation gives $v = -\lambda^{-1}\epsilon w = -\lambda^{-1}(-\iota\lambda)w = \iota w,$ and then the second yields $$ \lambda u = (\gamma \iota + \beta + \iota)w = 0, $$ using the definitions of $\gamma$ and $\beta$. Since $\lambda \ne 0$, $u = 0.$ Thus $U_1 = \{(0,\iota w, w ) \mid w \in \mathrm{GF}(p)\}$. Similar calculations show that $U_2 = \{ (u,\iota w,w) \mid u,w \in \mathrm{GF}(p)\}$. Now $(0,0,1)X - (0,0,1) = (-\epsilon,-\beta,-\delta-1) \in U_2$, as $\iota(-\delta-1) =-\beta.$ Hence, as $(0,0,1) \notin U_2$, we have $U = U_2 + \langle (0,0,1) \rangle$, and so $X - I_3$ acts nilpotently on $U$, whence $X$ has $p$-power order. Since Sylow $p$-subgroups of $\mathrm{SL}_3(p)$ have exponent $p$ and $X \ne I_3$, $X$ has order $p$. This completes the proof of (\ref{(4.6)}).
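As in Section 3, the involution, commutation and order statements just established can be double-checked numerically. The following sketch (our own Python illustration, with our own helper names; not part of the paper) uses the smallest admissible prime $p = 13$, taking $\rho = 3$, $\iota = 5$ and $\alpha = 2$:

```python
# Numeric check (ours, not from the paper) of the Section 4 relations at
# p = 13: rho = 3 has order 3, iota = 5 satisfies iota^2 = -1, and
# alpha = 2 satisfies alpha^2 = -rho^2 in GF(13).
import numpy as np

p, rho, iota, alpha = 13, 3, 5, 2
half = pow(2, -1, p)                           # 2^{-1} in GF(p)

lam = alpha * (iota + 1) * (-1 + rho - iota * rho * rho) % p
eps = -iota * lam % p
beta = -half * lam * lam * iota % p
gamma = (half * lam * lam - 1) % p
delta = (-1 - half * lam * lam) % p
mu = (1 - rho) % p

A1 = np.array([[0, alpha, -alpha * rho],
               [alpha, rho, 1],
               [-alpha * rho, 1, rho * rho]]) % p
B3 = np.array([[1, lam, eps], [lam, gamma, beta], [eps, beta, delta]]) % p
C = np.array([[-rho, 0, 0], [0, 0, rho], [0, rho, -mu * rho * rho]]) % p
D = np.array([[-rho * rho, 0, 0], [0, mu, rho * rho], [0, rho * rho, 0]]) % p
d = np.diag([1, -1, -1]) % p
Z = np.zeros((3, 3), dtype=int)

t1 = np.block([[Z, A1], [A1, Z]])
t2 = np.block([[d, Z], [Z, d]])
t3 = np.block([[B3, Z], [Z, B3]])
t4 = np.block([[Z, C], [D, Z]])
I6 = np.eye(6, dtype=int)

def mm(X, Y):
    return (X @ Y) % p

def order(M):
    P, k = M % p, 1
    while not np.array_equal(P, I6):
        P, k = mm(P, M), k + 1
    return k

assert all(np.array_equal(mm(t, t), I6) for t in (t1, t2, t3, t4))
assert np.array_equal(mm(t1, t3), mm(t3, t1))
assert np.array_equal(mm(t1, t4), mm(t4, t1))
assert np.array_equal(mm(t2, t4), mm(t4, t2))
assert (order(mm(t1, t2)), order(mm(t2, t3)), order(mm(t3, t4))) == (4, 13, 4)
print("C-string relations with Schlafli symbol [4, 13, 4] verified for p = 13")
```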
\end{personalEnvironment} Let $g_0 = \left(\begin{array}{c c c|c c c } & & & 1 & 0 &\phantom{-}0\\ & \makebox(0,0){\text{\huge0}}& & 0 & 0 &-1\\ & & & 0 &1 &\phantom{-}0\\ \hline 1 & 0 &\phantom{-}0& & &\\ 0 & 0 &-1& &\makebox(0,0){\text{\huge0}} &\\ 0 &1 &\phantom{-}0& & &\\ \end{array}\right)$ and $z = \mathrm{diag}(\rho,\rho,\rho,\rho^2,\rho^2,\rho^2).$ Note that $z \in Z(H)$, and straightforward calculation gives \begin{personalEnvironment}\label{(4.7)}$\boldsymbol{\mkern-7mu )}$ \begin{enumerate}[(i)] \item $g_0 \in C_G(t)$ and $zg_0 \in C_G(r).$ \item $g_0^2 = t_2 = (zg_0)^2$. \end{enumerate} \end{personalEnvironment} Set $L_{123} = G_{123} \cap C_H(t)'$ and $L_{234} = G_{234} \cap C_H(r)'.$ Note that $L_{123} \cong \mathrm{PSL}_2(p) \cong L_{234}.$ \begin{personalEnvironment}\label{(4.8)}$\boldsymbol{\mkern-7mu )}$ \begin{enumerate}[(i)] \item $C_G(t) \ge G_{123}$ and $C_G(r) \ge G_{234}.$ \item $G_{123} = \langle t_1 \rangle L_{123}$ and $G_{234} = \langle t_4 \rangle L_{234}.$ \item $G_{123} \cong \mathrm{PGL}(2,p) \cong G_{234}.$ \end{enumerate} \end{personalEnvironment} First, calculation reveals that $t_1, t_2$ and $t_3$ commute with $t$ and that $t_2,t_3$ and $t_4$ commute with $r$, so part $(i)$ holds. Observe that, as $C_G(t) = \langle t \rangle \times C_H(t)$ with $C_H(t) \cong \mathrm{PGL}(2,p)$, the quotient $C_G(t)/L_{123}$ is elementary abelian of order 4, so (\ref{(4.7)}) implies that $t_2 = g_0^2 \in L_{123}.$ Clearly we also have $t_2t_3 \in L_{123},$ so $G_{23} = \langle t_2, t_3 \rangle \le L_{123}.$ Since, by (\ref{(4.6)})$(ii)$, $\Dih{2p} \cong G_{23}$ is a maximal subgroup of $L_{123} \cong \mathrm{PSL}(2,p)$ and $t_1$ does not normalize $G_{23},$ $G_{123} = \langle t_1 \rangle L_{123}$.
A similar argument establishes $G_{234} = \langle t_4 \rangle L_{234}.$ Since $p \equiv 5 \pmod{8}$, (\ref{(4.6)})$(i)$ implies that $\langle t_1, t_2 \rangle \in \mathrm{Syl}_2 G_{123}.$ Hence $t \notin G_{123}$ and so, by $(ii)$, $G_{123} \cong \mathrm{PGL}(2,p).$ Likewise we have $G_{234} \cong \mathrm{PGL}(2,p)$, so proving (\ref{(4.8)}). \begin{personalEnvironment}\label{(*4.9*)}$\boldsymbol{\mkern-7mu )}$ $G = \langle t_1,t_2,t_3,t_4\rangle$. \end{personalEnvironment} Put $\overline{G} = G/Z(H).$ Then $\overline{H} \cong \mathrm{PSL}_3(p)$ and $\overline{G}_{123}$ contains a subgroup isomorphic to $\mathrm{PSL}_2(p)$ by (\ref{(4.8)}). Since $\overline{C_G(t)}$ is the only maximal subgroup of $\overline{G}$ containing $\overline{G}_{123}$ and $\overline{t}_4 \notin \overline{C_G(t)}$, $\overline{G} = \langle \overline{G}_{123}, \overline{t}_4 \rangle.$ Since $H$ is a non-split central extension, this then implies (\ref{(*4.9*)}). \begin{personalEnvironment}\label{(*4.10*)}$\boldsymbol{\mkern-7mu )}$ $G_{23} = G_{123} \cap G_{234} = C_G(t) \cap C_G(r)$. \end{personalEnvironment} From (\ref{(4.8)})$(i)$, $G_{23} \le G_{123} \cap G_{234} \le C_G(t) \cap C_G(r)$. Now $$tr = \left(\begin{array}{c c c|c c c} \rho^2&0&0&&&\\ 0&c&y&&\makebox(0,0){\text{\huge0}}&\\ 0&y&d&&&\\ \hline &&&\rho&0&0\\ &\makebox(0,0){\text{\huge0}}&&0&a&x\\ &&&0&x&b\\ \end{array}\right). $$ Let $g=ztr$ (recall that $z = \mathrm{diag}(\rho,\rho,\rho,\rho^2,\rho^2,\rho^2)$). Then $$g = \left(\begin{array}{c c c|c c c} 1&0&0&&&\\ 0&\rho c&\rho y&&\makebox(0,0){\text{\huge0}}&\\ 0&\rho y&\rho d&&&\\ \hline &&&1&0&0\\ &\makebox(0,0){\text{\huge0}}&&0&\rho^2a&\rho^2x\\ &&&0&\rho^2x&\rho^2b\\ \end{array}\right).$$ Investigating the action of $g$ on $V$, we discover that $g$ acts nilpotently on $V$, and therefore $g$ has order $p$. Hence $tr=z^{-1}g$ has order $3p$ with $\langle z \rangle \le \langle tr \rangle.$ Consequently $C_G(tr) \le C_G(z) = H$. So $C_G(tr) = C_H(g)$.
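The order computation just performed can be made explicit: since $g$ acts nilpotently on $V$, each of its two diagonal $3 \times 3$ blocks is unipotent, so $(g - I_6)^3 = 0$ and, in characteristic $p$, $g^p = \bigl(I_6 + (g - I_6)\bigr)^p = I_6$. Since $z$ is central of order 3 and $\gcd(3,p) = 1$,
$$ (tr)^p = (z^{-1}g)^p = z^{-p}g^p = z^{-p}, $$
which generates $\langle z \rangle$; hence $tr$ has order $3p$ and $\langle z \rangle \le \langle tr \rangle$, as claimed.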
Since $G_{23} \le C_G(t) \cap C_G(r) \le C_G(tr)$, $C_G(tr)$ has even order by (\ref{(4.6)})$(i)$. Thus, from centralizers of $p$-elements in $\mathrm{SL}_3(p)$, we have $C_G(tr) = C_H(g) \sim p^3 : (p-1).$ Let $P \in \mathrm{Syl}_p C_H(g).$ Then $P \trianglelefteq C_H(g).$ Also $t$ acts upon $C_H(g)/P$, which is cyclic of order $p-1$. If $t$ centralizes $C_H(g)/P$, then $C_H(g) = C_{C_H(g)}(t)P.$ Now $\langle t_2t_3\rangle \le C_H(t)$ and from $C_H(t) \cong \mathrm{PGL}_2(p)$ we have $N_{C_H(t)}(\langle t_2t_3\rangle) \sim p:(p-1),$ so $C_{C_H(g)}(t)$ normalizes $\langle t_2t_3\rangle$, which contradicts the structure of $C_H(g).$ Therefore $t$ does not centralize $C_H(g)/P.$ Since $C_H(g)/P$ is a cyclic group, $t$ must act by inverting it, which implies that $C_{C_G(tr)}(t)$ has order dividing $2p^3$. But the largest power of $p$ dividing $|\mathrm{PGL}_2(p)|$ is $p$ and so $|C_{C_G(tr)}(t)|=2p.$ Now we infer that $C_G(t) \cap C_G(r) = C_{C_G(tr)}(t) = G_{23}.$ \begin{personalEnvironment}\label{(*4.11*)}$\boldsymbol{\mkern-7mu )}$ $\{t_1,t_2,t_3,t_4\}$ is an unravelled C-string for $G$ with Schläfli symbol $[4,p,4].$ \end{personalEnvironment} Combining (\ref{(4.4)})$(i)$, (\ref{(4.5)}), (\ref{(4.6)}), (\ref{(*4.9*)}) and (\ref{(*4.10*)}) gives that $\{t_1,t_2,t_3,t_4\}$ is a C-string with Schläfli symbol $[4,p,4].$ We now show it is unravelled. Since $L_{123} \cong \mathrm{PSL}(2,p)$ and, by assumption, $p \equiv 5 \pmod{8},$ the Sylow 2-subgroups of $L_{123}$ are elementary abelian. In particular, $L_{123}$ contains no elements of order 4. Hence, if $h$ is an element of $G_{123}$ of order 4, then $G_{123} = \langle h \rangle L_{123}$. As a consequence, any $G_{123}$-conjugate of $h$ is an $L_{123}$-conjugate of $h$. By (\ref{(4.8)})$(iii)$, $G_{123} \cong \mathrm{PGL}(2,p)$ and so, as its Sylow 2-subgroups are isomorphic to $\Dih{8},$ it has only one $G_{123}$-conjugacy class of elements of order 4. Now \begin{align*} t_1t_2 &= \left(\begin{array}{c|c } 0 & \ast \\ \hline \ast & 0 \end{array}\right).
\end{align*} Thus we conclude, as $L_{123} \le H,$ that all order 4 elements of $G_{123}$ must have this shape. From (\ref{(4.7)})$(i)$, $g_0 \in C_G(t)$ and, since $C_G(t) = \langle t \rangle G_{123},$ either $g_0$ or $tg_0$ is in $G_{123}$. But $tg_0$ has shape $\left(\begin{array}{c|c } \ast & 0 \\ \hline 0 & \ast \end{array}\right),$ whence we deduce that $g_0 \in G_{123}.$ Because of (\ref{(4.7)}), a similar argument yields that $zg_0 \in G_{234}.$ Let $\overline{G} = G/Z(H).$ Then, as $ z \in Z(H),$ we have $$\overline{g_0} = \overline{zg_0} \in \overline{G}_{123} \cap \overline{G}_{234}, $$ but $\overline{g_0} \notin \overline{G}_{23} = \langle \overline{t}_2, \overline{t}_3 \rangle$ as $\overline{g_0}$ has order $4$. Thus $\{t_1,t_2,t_3,t_4\}$ is an unravelled C-string, so proving (\ref{(*4.11*)}). \section{Some small unravelled C-strings} This section is a pot-pourri of calculations, obtained with the aid of \textsc{Magma} \cite{Magma}, focussing mainly on C-strings of groups which display some kind of exceptional behaviour. Among the alternating groups, $\Alt{6}$ and $\Alt{7}$ stand out by virtue of being the only ones having Schur multiplier divisible by 3 (see \cite{suzuki1982group}). This means we may construct groups $G$ of shape $3^{\boldsymbol{\cdot}}\Sym{6}$ and $3^{\boldsymbol{\cdot}}\Sym{7}$ where $G'$ is isomorphic to, respectively, the non-split central extensions $3^{\boldsymbol{\cdot}}\Alt{6}$ and $3^{\boldsymbol{\cdot}}\Alt{7}$. Another kind of unusual behaviour is having a matrix representation of smaller dimension than expected. Such a phenomenon, effectively via the isomorphism $\Alt{8} \cong \mathrm{GL}_4(2),$ gives rise to groups of shape $2^4:\Alt{5}\cong \mathrm{M}_{20}$ ($\mathrm{M}_{20}$ is the \textquotedblleft Mathieu\textquotedblright\ group of degree 20), $2^4:\Sym{6},$ $2^4:\Alt{7}$ and ${2^4}:\Alt{8}$. A further example is the non-split extension ${2^4} ^{\boldsymbol{\cdot}}\Alt{8}$ (see miscellaneous groups in \cite{wilson1999atlas}).
Table \ref{table1} below is a census of C-strings for the given groups, with $B_n$ and $D_n$ as usual denoting the Coxeter groups of types $B$ and $D$ of rank $n$. In a column of the table, an entry $i(j)[k]$ indicates that $i$ is the total number of C-strings (up to isomorphism) possessing the property listed for that column, of which $j$ are self-dual and $k$ are unravelled. As a general comment we observe that $3^{\boldsymbol{\cdot}}\mathrm{G}_2(3):2$, with 725 C-strings, is bereft of any unravelled C-strings, while $3^{\boldsymbol{\cdot}}\mathrm{M}_{22}:2$ and $2^4:\Sym{6}$ are both well endowed. Meanwhile ${2^4}^{\boldsymbol{\cdot}}\Alt{8}$ and $2^4:\Alt{5}$ have no C-strings at all! \par \begin{table}[H] \begin{adjustwidth}{-1.5cm}{} \begin{tabular}{c|c | c| c| c| c| c| c } Group & Total & rank 3 & rank 4 & rank 5 & rank 6 & rank 7 & rank 8\\ \hline $3.S_6$ &11(3)[1]&3(1)[0] &8(2)[1] &0 &0 &0 &0 \\ $3.S_7$ &167(5)[1]&142(4)[0] &23(1)[1] &2 &0 &0 &0 \\ $3.\mathrm{PSL}_3(7):2$ &3256(48)[1]&3240(44)[0] & 16(4)[1] & 0& 0&0 &0 \\ $3.\mathrm{PSL}_3(13):2$ &38594(174)[1]&38534(166)[0] & 60(8)[1] & 0& 0&0 &0 \\ $3.\mathrm{M}_{22}:2$ &727(13)[5]&550(10)[0] &177(3)[5] &0 &0 &0 &0 \\ $3.\mathrm{G}_2(3):2$ &725(25)[0]&705(25)[0] & 20(0)[0] & 0& 0&0 &0 \\ $2^4:S_6$ &22(2)[11]&6(0)[0] & 8(0)[4] & 8(2)[7]& 0&0 &0 \\ \hline $B_3$ & 8(0)[0] & 8(0)[0] &0 &0 &0 &0 &0 \\ $B_4$ & 14(2)[0] & 6(2)[0] &8(0)[0] &0 &0 &0 &0 \\ $B_5$ & 165(0)[0] & 63(0)[0] &88(0)[0] &14(0)[0] &0 &0 &0 \\ $B_6$ & 130(0)[0] & 24(0)[0] &76(0)[0] &20(0)[0] &10(0)[0] &0 &0 \\ $B_7$ &2965(21)[14] & 1031(21)[0] &1428(0)[10] &400(0)[4] &84(0)[0] &22(0)[0] &0 \\ $B_8$ &3051(33)[38] &1020(32)[0] & 1494(0)[32] &304(0)[8] &192(0)[0] &27(1)[0] &14(0)[0] \\ \hline $D_3$ & 3(1)[3] & 3(1)[3] &0 &0 &0 &0 &0 \\ $D_4$ & 0 & 0 &0 &0 &0 &0 &0 \\ $D_5$ & 39(1)[16] & 21(1)[0] &16(0)[14] &2(0)[2] &0 &0 &0 \\ $D_6$ & 132(0)[2] & 24(0)[0] &48(0)[2] &60(0)[0] &0 &0 &0 \\ $D_7$ &628(16)[210] & 348(16)[0] &226(0)[166]
&42(0)[36] &10(0)[6] &2(0)[2] &0 \\ $D_8$ &3537(27)[24] & 887(19)[0] &1598(8)[14] &826(0)[10] &172(0)[0] &54(0)[0] &0 \\ \hline \end{tabular} \caption{Number of C-strings} \label{table1} \end{adjustwidth} \end{table} For $\gamma$ a chamber in an abstract regular polytope and $i \in \mathbb{N}$, $\Delta_i(\gamma)$ consists of the chambers which are at distance $i$ from $\gamma$ in the chamber graph. \begin{personalEnvironment}\label{(5.1)}$\boldsymbol{\mkern-7mu )}$ $G\sim 3^{\boldsymbol{\cdot}}\Sym{6}$ \end{personalEnvironment} Up to isomorphism, $G$ has precisely one unravelled C-string. It has rank 4 and its Schläfli symbol is $[4,5,4]$ with Betti numbers $[1,18,135,135,18,1]$. The disc structure from an arbitrary chamber $\gamma$ of the associated abstract regular polytope is \begin{figure}[H] \centering \begin{adjustwidth}{-0.5cm}{} \begin{tabular}{c|c c c c c c c c c c c c c c c c c} i&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15&16\\ \hline $|\Delta_i(\gamma)|$ & 4 & 9 & 18 & 34 & 61 & 108 & 162 & 218 & 303 & 358 & 373 & 276 & 154 & 70 & 9 & 2 \\ \end{tabular} \end{adjustwidth} \end{figure} So the diameter of this graph is 16. If we take $\gamma$ to correspond to the identity element of $G$, then the two chambers in $\Delta_{16}(\gamma)$ correspond to the two non-trivial elements of $Z(G').$ Moreover, the three chambers in $\{\gamma\} \cup \Delta_{16}(\gamma)$ are mutually at distance 16.
We briefly examine some other properties of this C-string, whose diagram is \tikzstyle{int}=[draw, fill=black!20, minimum size=2em] \tikzstyle{init} = [pin edge={to-,thin,black}] \begin{tikzpicture}[baseline={(0,-0.5)}] \draw (0,-0.5) -- (1,-0.5); \draw (1,-0.5) -- (2,-0.5); \draw (2,-0.5) -- (3,-0.5); \filldraw [black] (0,-0.5) circle (3pt); \filldraw [black] (1,-0.5) circle (3pt); \filldraw [black] (2,-0.5) circle (3pt); \filldraw [black] (3,-0.5) circle (3pt); \node (p1) at (0.5,-0.2) {$4$}; \node (p2) at (1.5,-0.2) {$5$}; \node (p3) at (2.5,-0.2) {$4$}; \node (t1) at (0,-1) {$t_1$}; \node (t2) at (1,-1) {$t_2$}; \node (t3) at (2,-1) {$t_3$}; \node (t4) at (3,-1) {$t_4$}; \end{tikzpicture}. The subgroups $G_{123}$ and $G_{234}$ of $G$ (corresponding to the so-called vertices and facets) are both isomorphic to $\Sym{5}$. Yet $G_{123}$ and $G_{234}$ are not conjugate in $G$. According to Hartley's Atlas \cite{atlas1}, there is only one abstract regular polytope for $\Sym{5}$ of each of the Schläfli types $[4,5]$ and $[5,4]$. They are both locally spherical, non-orientable compact quotients of hyperbolic space. \begin{personalEnvironment}\label{(5.2)}$\boldsymbol{\mkern-7mu )}$ $G\sim 3^{\boldsymbol{\cdot}}\Sym{7}$ \end{personalEnvironment} Just as in (\ref{(5.1)}), $G$ has, up to isomorphism, exactly one unravelled C-string. It also has rank 4, with Schläfli symbol $[4,6,4]$ and Betti numbers $[1,63,945,945,63,1].$ For $\gamma$ a chamber of the associated abstract regular polytope, the disc sizes of the chamber graph are \begin{table}[H] \centering \begin{tabular}{c | c c c c c c c c c c c } i&1&2&3&4&5&6&7&8&9&10&11\\ \hline $|\Delta_i(\gamma)|$ &4 & 9 & 18 & 34 & 62 & 113 & 204 & 366 & 601 & 963 &1454 \\ \end{tabular} \begin{tabular}{c c c c c c c c c c c c} 12&13&14&15&16&17&18&19&20&21&22\\ \hline 2036 & 2562 & 2696& 2005 &1219& 514 & 188 & 57 & 10 & 4 & 1 \\ \end{tabular} \end{table} So we have a unique chamber at distance 22 from $\gamma$.
If $\gamma$ corresponds to the identity element of $G$, then this chamber will correspond to an involution of $G$. We remark that both vertices and facets of this polytope have automorphism group isomorphic to $\mathbb{Z}_2 \times \Sym{5}$ and are non-orientable, locally spherical quotients of hyperbolic space, named $\{4,6\} \ast 240a$ in Hartley's Atlas \cite{atlas1}. \begin{personalEnvironment}\label{(5.3)}$\boldsymbol{\mkern-7mu )}$ $G\sim 3^{\boldsymbol{\cdot}}\mathrm{M}_{22}:2$ \end{personalEnvironment} Here there are five unravelled C-strings, all of rank 4, with details given in Table \ref{table2}. \begin{table}[H] \centering \begin{tabular}{c|c} Schläfli Symbol & Betti Numbers\\ \hline $[4,5,4]$&$ [ 1, 2016, 166320, 166320, 8316, 1 ]$\\ $[4,5,4]$&$ [ 1, 8316, 166320, 166320, 2016, 1 ]$\\ $[4,6,4]$&$[ 1, 693, 166320, 166320, 693, 1 ]$\\ $[4,6,4]$&$[ 1, 693, 166320, 166320, 6930, 1 ]$\\ $[4,6,4]$&$[ 1, 6930, 166320, 166320, 693, 1 ]$\\ \hline \end{tabular} \caption{Unravelled C-strings for $3^{\boldsymbol{\cdot}}\mathrm{M}_{22}:2$} \label{table2} \end{table} We note that the five polytopes in Table \ref{table2} consist of a dual pair of $[4,5,4]$ polytopes, a dual pair of $[4,6,4]$ polytopes and one self-dual $[4,6,4]$ polytope. \begin{personalEnvironment}\label{(5.4)}$\boldsymbol{\mkern-7mu )}$ $G\sim 2^4:\Sym{6}$ \end{personalEnvironment} Here, in Table \ref{table3}, we find eleven unravelled C-strings, four of which have rank 4 and the remainder rank 5.
\begin{table}[H] \centering \begin{tabular}{ c | c } Schläfli Symbol & Betti Numbers\\ \hline $[ 6, 6, 4 ]$ &$[ 1, 60, 720, 480, 16, 1 ]$\\ $[ 4, 6, 6 ]$ &$[ 1, 16, 480, 720, 60, 1 ]$\\ $[ 6, 5, 4 ]$ & $[ 1, 72, 720, 480, 16, 1 ]$\\ $[ 4, 5, 6 ]$ &$[ 1, 16, 480, 720, 72, 1 ]$\\ $[ 4, 4, 6, 3 ]$ & $[ 1, 16, 120, 240, 90, 6, 1 ]$\\ $[ 3, 6, 4, 4 ]$& $[ 1, 6, 90, 240, 120, 16, 1 ]$\\ $[ 4, 4, 4, 3 ]$ & $[ 1, 16, 120, 240, 90, 10, 1 ]$\\ $[ 3, 4, 4, 4 ]$ &$[ 1, 10, 90, 240, 120, 16, 1 ]$\\ $[ 3, 6, 4, 3 ]$& $[ 1, 6, 120, 320, 120, 16, 1 ]$\\ $[ 3, 4, 6, 3 ]$ &$[ 1, 16, 120, 320, 120, 6, 1 ]$\\ $[ 3, 4, 4, 3 ]$ &$[ 1, 16, 120, 320, 120, 16, 1 ]$\\ \hline \end{tabular} \caption{Unravelled C-strings for $2^4:\Sym{6}$} \label{table3} \end{table} Only two of the eleven, namely those with symbols $[4,5,6]$ and $[6,5,4]$, decrease in rank when quotienting, whereas the others have at least one case of the intersection property failing. We also note that the only self-dual C-string in Table \ref{table3} is the one with symbol $[3,4,4,3]$. Of course, the more normal subgroups a group has, the more stringent the unravelled condition becomes. We close this section with an example of a soluble group which possesses an unravelled C-string. \begin{personalEnvironment}\label{(5.5)}$\boldsymbol{\mkern-7mu )}$ $G$ of order $1296 = 2^4.3^4.$ \end{personalEnvironment} Let $t_1,t_2,t_3,t_4$ be the following elements of $\Sym{27}$: \begin{align*} t_1=&(4,10)(7,15)(9,17)(12,20)(14,22)(16,23)(19,25)(21,26)(24,27), \\ t_2 =&(2,4)(5,10)(6,9)(11,17)(12,15)(13,16)(18,23)(19,22)(24,26),\\ t_3 =&(2,3)(5,8)(7,9)(11,13)(12,16)(15,17)(19,21)(20,23)(25,26) \text{ and }\\ t_4 =&(1,3)(2,6)(4,9)(5,11)(7,14)(10,17)(12,19)(15,22)(20,25). \end{align*} Set $G= \langle t_1,t_2,t_3,t_4\rangle$.
Then $\{t_1,t_2,t_3,t_4\}$ is an unravelled C-string for $G$ with diagram \tikzstyle{int}=[draw, fill=black!20, minimum size=2em] \tikzstyle{init} = [pin edge={to-,thin,black}] \begin{tikzpicture}[baseline={(0,-0.5)}] \draw (0,-0.5) -- (1,-0.5); \draw (1,-0.5) -- (2,-0.5); \draw (2,-0.5) -- (3,-0.5); \filldraw [black] (0,-0.5) circle (3pt); \filldraw [black] (1,-0.5) circle (3pt); \filldraw [black] (2,-0.5) circle (3pt); \filldraw [black] (3,-0.5) circle (3pt); \node (p1) at (0.5,-0.2) {$4$}; \node (p2) at (1.5,-0.2) {$3$}; \node (p3) at (2.5,-0.2) {$4$}; \node (t1) at (0,-1) {$t_1$}; \node (t2) at (1,-1) {$t_2$}; \node (t3) at (2,-1) {$t_3$}; \node (t4) at (3,-1) {$t_4$}; \end{tikzpicture} and Betti numbers $[1,27,81,81,27,1]$. Evidently both vertices and facets of this polytope have the Coxeter group $B_3$ as their automorphism groups. The diameter of the chamber graph is 18, and for $\gamma$ a chamber the disc sizes are as follows \begin{table}[H] \centering \begin{tabular}{c | c c c c c c c c c} i&1&2&3&4&5&6&7&8&9\\ \hline $|\Delta_i(\gamma)|$ &4 & 9 & 17 & 28 & 42 & 60 & 81 & 105 & 129 \\ \end{tabular}\\ \begin{tabular}{c c c c c c c c c } 10&11&12&13&14&15&16&17&18\\ \hline 147 & 157 & 155 & 138 & 109 & 71 & 33 & 9 & 1 \\ \end{tabular} \end{table} \medskip
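The string relations recorded in the diagram above can be verified directly from the listed permutations. The computations in this section were carried out in \textsc{Magma}; purely as an illustration, the following self-contained sketch (with hypothetical class and method names) checks the edge labels $4$, $3$, $4$ and the commuting of the non-adjacent generators:

```java
import java.util.Arrays;

public class CStringCheck {
    static final int N = 27;

    // Build a permutation of {1,...,27}, stored as a 1-indexed lookup table,
    // from a list of disjoint transpositions.
    static int[] perm(int[][] swaps) {
        int[] p = new int[N + 1];
        for (int i = 1; i <= N; i++) p[i] = i;
        for (int[] s : swaps) { p[s[0]] = s[1]; p[s[1]] = s[0]; }
        return p;
    }

    // Composition: (mul(a, b))(x) = a(b(x)).
    static int[] mul(int[] a, int[] b) {
        int[] c = new int[N + 1];
        for (int i = 1; i <= N; i++) c[i] = a[b[i]];
        return c;
    }

    // Order of a permutation: smallest k >= 1 with p^k = identity.
    static int order(int[] p) {
        int[] id = perm(new int[0][]);
        int[] q = p.clone();
        int k = 1;
        while (!Arrays.equals(q, id)) { q = mul(q, p); k++; }
        return k;
    }

    // Check the diagram [4,3,4] and the commuting of non-adjacent generators.
    static boolean isStringOfType434() {
        int[] t1 = perm(new int[][]{{4,10},{7,15},{9,17},{12,20},{14,22},{16,23},{19,25},{21,26},{24,27}});
        int[] t2 = perm(new int[][]{{2,4},{5,10},{6,9},{11,17},{12,15},{13,16},{18,23},{19,22},{24,26}});
        int[] t3 = perm(new int[][]{{2,3},{5,8},{7,9},{11,13},{12,16},{15,17},{19,21},{20,23},{25,26}});
        int[] t4 = perm(new int[][]{{1,3},{2,6},{4,9},{5,11},{7,14},{10,17},{12,19},{15,22},{20,25}});
        boolean edges = order(mul(t1, t2)) == 4
                && order(mul(t2, t3)) == 3
                && order(mul(t3, t4)) == 4;
        boolean commuting = Arrays.equals(mul(t1, t3), mul(t3, t1))
                && Arrays.equals(mul(t1, t4), mul(t4, t1))
                && Arrays.equals(mul(t2, t4), mul(t4, t2));
        return edges && commuting;
    }

    public static void main(String[] args) {
        System.out.println(isStringOfType434()); // prints "true"
    }
}
```

Storing permutations as lookup tables makes composition and order computations direct array operations, so no group-theory library is needed for this check.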
\section*{Acknowledgment} This work has been partially supported by The Leverhulme Trust Fellowship \textquotedblleft QuantUn: quantification of uncertainty using Bayesian Surprises\textquotedblright\ (Grant No. RF-2019-548/9) and the EPSRC Research Project Twenty20Insight (Grant No. EP/T017627/1). \bibliographystyle{IEEEtran} \section{Architecture} The \textit{RDMSim} exemplar has been developed to facilitate the implementation of a two-layered architecture for a self-adaptive RDM, as shown in Fig. \ref{figRDMSimarchitecture}. The architecture structures a Managing System (based on a feedback loop \cite{Kephart2003,Brun2009}) on top of the Managed System (the \textit{RDMSim}). We next describe each layer. \begin{figure}[h!] \centering \includegraphics[width=.45\textwidth,keepaspectratio]{Images/rdmsimarchitecture.eps} \vspace{-2mm} \caption{RDMSim Architecture} \label{figRDMSimarchitecture} \end{figure} \subsection{\textbf{Managing System}} The Managing System, at the upper layer, is responsible for providing the self-adaptive decision-making logic. A feedback loop is implemented to monitor the environment and the managed system, adapting the latter when necessary. The feedback loop consists of Monitor-Analyse-Plan-Execute over a Knowledge base K (MAPE-K) \cite{Kephart2003}. The MAPE-K loop is considered an architectural blueprint for self-adaptive systems and is used to perform adaptation decisions on the Managed System (i.e. \textit{RDMSim} in our case). When using the \textit{RDMSim} exemplar, researchers will provide their own decision-making techniques to serve as a Managing System. The Managing System can be based on different techniques, such as Multi-Criteria Decision-Making~\cite{triantaphyllou2000multi}, Reinforcement Learning~\cite{samin2020priority} and Evolutionary Computation~\cite{Ramirez2012b,BowersFredericksCheng2018}.
\subsection{\textbf{Managed System} } \textit{RDMSim} represents the Managed System and provides probes and effectors that can be used by the Managing System to interact with the simulator. Probes are used to monitor information (M in MAPE), whereas effectors are used to execute the adaptation decisions (E in MAPE) on the Managed System. Next, we present the architecture of the Managed System, implemented as Java packages for the \textit{RDMSim} software. The components in the architecture for \textit{RDMSim}, presented in Fig. \ref{figRDMSimarchitecture}, are as follows: \subsubsection{\textbf{Management Component}} which acts as a bridge between the Managing System and the other internal components of the \textit{RDMSim}. It provides an implementation of the probes and effectors to be used by the Managing System. The functions provided by the probes and effectors are used both to monitor the status of the RDM (i.e. cost, reliability and performance) and to change the network topology and other network parameters according to the decisions made, as described in Tables \ref{tab2} and \ref{tab3} respectively.
\begin{table} \caption{Probe Functions }\label{tab2} \vspace{-4mm} \begin{center} \centering\renewcommand\cellalign{lc} \fontsize{6}{8}\selectfont \begin{tabular}{|l|l|} \hline \rowcolor{AshGrey} \textbf{Function} & \textbf{Description} \\ \hline Topology getCurrentTopology() &\makecell{Returns the current topology\\ for the network.} \\ \hline int getBandwidthConsumption() & \makecell{Returns the bandwidth consumption \\of the network.} \\ \hline int getActiveLinks()& \makecell{Returns the number of active links.} \\ \hline int getTimeToWrite()& \makecell{Returns the time to write data \\for the network.} \\ \hline Monitorables getMonitorables()&\makecell{Returns the values for all the \\monitorable metrics.}\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Effector Functions }\label{tab3} \centering\renewcommand\cellalign{lc} \fontsize{6}{8}\selectfont \vspace{-2mm} \begin{tabular}{|l|l|} \hline \rowcolor{AshGrey} \textbf{Function} & \textbf{Description} \\ \hline void setNetworkTopology(int timestep,Topology selectedtopology)& \makecell{To set the network\\ topology at a \\particular timestep. } \\ \hline void setActiveLinks(int active\_links)& \makecell{To set the number of \\active links for \\the network.} \\ \hline void setTimeToWrite(double time\_to\_write)& \makecell{To set the time to write \\data for the \\network.} \\ \hline void setBandwidthConsumption(double bandwidth\_consumption)& \makecell{To set the bandwidth \\consumption for the\\ network.}\\ \hline void setCurrentTopology(Topology current\_topology)&\makecell{To set the topology for the \\network.}\\ \hline \end{tabular} \end{table} \subsubsection{\textbf{Network Component}} which provides an implementation of the main physical elements of the RDM. These elements include the number of mirrors (i.e. servers) and the network links that represent a fully connected network of mirrors. As an example, for 25 mirrors, a network of 300 links will be created.
The users of \textit{RDMSim} can change the number of mirrors to create a custom RDM network for their experiments. The Network Component also provides an implementation of the monitorables and topologies for the network. Specifically, in the \textit{RDMSim}, we provide an implementation of three monitorables: \textit{\textbf{Mon1--} \textbf{Active Network Links:}} provides the current number of active network links, used to measure the reliability of the RDM. The RDM will provide a higher level of reliability with a larger number of active links. \textit{\textbf{Mon2--} \textbf{Bandwidth Consumption:}} provides the current bandwidth consumption, used to measure the operational cost of the RDM in terms of inter-site network traffic. Operational costs increase as more bandwidth is consumed. Bandwidth consumption is measured in gigabytes per second (GBps). \textit{\textbf{Mon3--} \textbf{Time to Write Data to Mirrors:}} measures the performance of the network in terms of the writing time needed to maintain multiple copies of data on each remote site. A large writing time reduces the performance of the RDM. Time to write data is measured in milliseconds. For the communication between the mirrors, we consider synchronous mirroring \cite{cure_chapter_2015,Keeton04}. During synchronous mirroring, sequential writing is performed to prevent data loss \cite{cure_chapter_2015}. In sequential writing, the primary mirror (i.e. the sender) waits for an acknowledgement (known as a \textit{handshake}) regarding the receipt and writing of data from the secondary mirror (i.e. the receiver). This process is performed for each active link on the communication path between the mirrors. Therefore, the time to write data is computed as \textit{Total Writing Time = (\begin{math}\alpha\end{math} * number of active links) * Time to Write Data Unit}\footnote{To implement realistic impacts, we vary the time between 10 and 20 milliseconds}.
Here, \begin{math} \alpha \end{math} represents the fraction of active links that constitute the communication path between the mirrors; \begin{math} \alpha \end{math} can take any value greater than zero and less than or equal to one. For our experiments, we have set \begin{math} \alpha = 1\end{math}. Similarly, the bandwidth consumption is also dependent on the number of active links. More active links imply more data transmission, which leads to a higher bandwidth consumption \cite{cure_chapter_2015}. Hence, we compute the Bandwidth Consumption as \textit{Total Bandwidth Consumed = (\begin{math}\alpha * \end{math}number of active links) * Bandwidth per link}\footnote{To implement realistic impacts, we vary the Bandwidth per link between 20 and 30 GBps}. \subsubsection{\textbf{Simulation Component}} which includes the implementation of the uncertainty scenarios \cite{esfahani_taming_2011,giese_living_2014} that represent the different dynamic environmental conditions that the RDM can face, and which will be simulated. It allows the setting of the simulation properties, such as the number of simulation runs and the chosen uncertainty scenario(s) to be executed by the \textit{RDMSim}. \begin{figure*}[h!] \centering \includegraphics[width=\textwidth,height=8cm]{Images/classdiagram.eps} \caption{RDMSim Class Diagram} \label{figRDMClassdiagram} \end{figure*} A partial class diagram representing the elements of the Management Component, Network Component and Simulation Component is shown in Fig. \ref{figRDMClassdiagram}. The \textit{NetworkManagement} class, along with the \textit{Probe} and \textit{Effector} interfaces, provides an implementation of the Management Component. The classes \textit{NetworkProperties}, \textit{Monitorables}, \textit{Topology} and \textit{TopologyList} are part of the Network Component and provide an implementation of the corresponding features of the RDM.
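The writing-time and bandwidth formulas given earlier in this section can be illustrated with a small self-contained sketch; the class and method names, together with the mid-range constants chosen for \begin{math}\alpha\end{math}, the time per write unit and the bandwidth per link, are illustrative assumptions rather than the exemplar's internal code:

```java
public class MonitorablesSketch {
    // Fully connected network of mirrors: n(n-1)/2 links.
    static int totalLinks(int mirrors) {
        return mirrors * (mirrors - 1) / 2;
    }

    // Total Writing Time = (alpha * number of active links) * time to write one data unit.
    static double totalWritingTime(double alpha, int activeLinks, double msPerUnit) {
        return alpha * activeLinks * msPerUnit;
    }

    // Total Bandwidth Consumed = (alpha * number of active links) * bandwidth per link.
    static double totalBandwidth(double alpha, int activeLinks, double gbpsPerLink) {
        return alpha * activeLinks * gbpsPerLink;
    }

    public static void main(String[] args) {
        int links = totalLinks(25);                             // 300 links, as in the text
        System.out.println(links);                              // prints "300"
        System.out.println(totalWritingTime(1.0, links, 15.0)); // prints "4500.0" (ms)
        System.out.println(totalBandwidth(1.0, links, 25.0));   // prints "7500.0" (GBps)
    }
}
```

Both monitorables grow linearly in the number of active links, which is why reducing active links simultaneously lowers cost and writing time while hurting reliability.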
The \textit{SimulationProperties} and \textit{UncertaintyScenario} classes are part of the Simulation Component, and are used to implement the functionalities related to the simulations to be executed. \section{Conclusion} In this paper, we have presented the \textit{RDMSim} exemplar to provide a simulation environment for the RDM. \textit{RDMSim} enables researchers to execute experiments in the domain of RDM. To the best of our knowledge, \textit{RDMSim} is the first simulator to be implemented for this domain. Using \textit{RDMSim}, researchers can compare their self-adaptive decision-making solutions with other techniques, including ours \cite{samin2021priority}. We have executed experiments for each scenario presented here, using our own decision-making technique, called MR-POMDP++ \cite{samin2021priority}. The results are provided in the \textit{RDMSimExemplar} repository, ready to be used for comparison purposes. Furthermore, \textit{RDMSim} also provides opportunities for researchers to design their own scenarios for experiments by modifying values in the configuration file, following the instructions provided in the \textit{RDMSim} user guide. We hope that the research community will use the \textit{RDMSim} to evaluate and compare novel solutions in the area of self-adaptive decision-making. \section{Experiments} In this section, we provide a simple example to describe the steps needed to develop a custom adaptation logic for performing experiments using the \textit{RDMSim}. We also demonstrate the execution of different uncertainty scenarios using the custom adaptation logic. The steps for the development of a custom adaptation logic are as follows: \subsection*{\textbf{Step: 1 Download the RDMSim Exemplar}} Download the \textit{RDMSim} package from the \textit{RDMSimExemplar} repository\footnote{https://doi.org/10.5281/zenodo.4613152} and install the required libraries.
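Before walking through Step 2 in detail, it may help to preview the monitor, analyse/plan and execute cycle that the following steps assemble. In the sketch below, the \texttt{Probe} and \texttt{Effector} method names follow Tables \ref{tab2} and \ref{tab3}, but both interfaces are stubbed (in real use they are obtained from the simulator, as shown in Step 2), and the 35 percent active-links reliability threshold is the one discussed later in this section; all other names and values are illustrative assumptions:

```java
// Stubbed counterparts of the documented monitoring and adaptation interfaces.
interface Probe { int getActiveLinks(); }
interface Effector { void setActiveLinks(int activeLinks); }

public class MapeSketch {
    // Analyse + Plan: enforce at least 35% of the links being active
    // (integer arithmetic computes the ceiling of 35% of totalLinks).
    static int plan(int activeLinks, int totalLinks) {
        int minLinks = (35 * totalLinks + 99) / 100;
        return Math.max(activeLinks, minLinks);
    }

    public static void main(String[] args) {
        final int[] state = {80};                 // stub network state: 80 active links
        Probe probe = () -> state[0];             // stands in for the simulator's probe
        Effector effector = n -> state[0] = n;    // stands in for the simulator's effector

        int monitored = probe.getActiveLinks();   // Monitor
        int target = plan(monitored, 300);        // Analyse + Plan
        if (target != monitored) {
            effector.setActiveLinks(target);      // Execute
        }
        System.out.println(state[0]); // prints "105" (ceiling of 35% of 300 links)
    }
}
```

A real Managing System would repeat this cycle at every simulation time step and would typically adapt by selecting a topology rather than by setting link counts directly.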
\subsection*{\textbf{Step: 2 Design an Adaptation Solution}} Design an adaptation solution (Managing System) using the Probe and Effector interface functions provided by the \textit{RDMSim} software as follows:\\ \subsubsection*{\textbf{A. Loading Configuration Settings and Instantiation of Probe and Effector} } The first step in implementing the custom adaptation logic is to load the configuration settings for the experiment from the \textit{configuration.json} file and to instantiate the Probe and Effector components. The Probe and Effector will enable the communication between our custom adaptation logic and \textit{RDMSim}. This can be done by using the NetworkManagement class in your program as follows: \begin{verbatim} NetworkManagment network_management; network_management=new NetworkManagment(); Probe probe; Effector effector; probe=network_management.getProbe(); effector=network_management.getEffector(); \end{verbatim} The NetworkManagement class is responsible for loading the configuration parameters and instantiating the Probe and Effector instances. The configuration settings include parameters such as the number of simulation time steps, the number of mirrors for the RDM, the number of active links and the uncertainty scenario to be considered for the experiments. The details of the configuration parameters are provided in the \textit{RDMSim Artefact: User Guide} document provided as part of the \textit{RDMSimExemplar} repository. \\ \subsubsection*{\textbf{B. Monitoring of the RDMSim network using Probe functions}} In order to monitor the \textit{RDMSim}, we can use the probe functions provided in Table \ref{tab2}. For example, to get the values of all the monitorable metrics at a particular simulation time step, we can use the \textit{getMonitorables()} function as follows: \begin{verbatim} Monitorables m=probe.getMonitorables(); \end{verbatim} \subsubsection*{\textbf{C.
Performing Adaptations on the RDMSim using Effector functions}} In order to perform adaptations on the \textit{RDMSim}, we can use the Effector functions provided in Table \ref{tab3}. For example, to change the network topology at a particular timestep, we can use the \textit{setNetworkTopology()} function as follows: \begin{verbatim} effector.setNetworkTopology(10,"mst"); \end{verbatim} The code above will set the Minimum Spanning Tree (MST) topology for the network at simulation timestep 10. A step-by-step implementation of the MAPE-K loop using steps A to C is provided in the \textit{User Guide} document. \subsection*{\textbf{Step: 3 Design and Execute Experiments to test the Adaptation Logic}} Once the adaptation solution is in place, an experiment should be designed to test the adaptation logic. For an experiment to be executed, the configuration parameters (provided in the configuration file) should be set to execute a particular simulation scenario. We have assigned default values to the configuration parameters based on the expert knowledge provided in \cite{Paucar2020}. You can change the number of simulation runs, the number of mirrors for the network, the uncertainty scenario and the ranges for the different monitorables. The details of the configuration parameters are provided in the \textit{User Guide}. \subsection*{Example: To demonstrate RDMSim working under the Default Scenario \begin{math}S_{0}\end{math} and Detrimental Scenario \begin{math}S_{1}\end{math}} We demonstrate the execution of experiments under both the default scenario \begin{math}S_{0}\end{math} and the uncertainty scenario \begin{math}S_{1}\end{math}. For our experiments, we consider an RDM network of 25 mirrors and 300 network links, creating a fully connected network. We have set the default values for the configuration parameters in the \textit{configuration.json} file. The satisfaction thresholds for the quality objectives have been set based on the expert knowledge provided in \cite{Paucar2020}.
In order to satisfy the quality objectives of minimization of operational cost and maximization of performance, the bandwidth consumption and the time to write data should be minimized. Conversely, the quality objective of maximization of reliability requires the maximization of the number of active links. Based on the expert knowledge, the bandwidth consumption should be less than or equal to 40 percent to satisfy minimization of operational cost. Similarly, the time to write data should be less than or equal to 45 percent to satisfy maximization of performance. On the other hand, the number of active links should be greater than or equal to 35 percent of the total links to satisfy maximization of reliability for the RDM. Once the configuration parameters are set up, we execute the experiments for 100 simulation runs for the scenarios, as shown in Figs. \ref{figRDMSimStableScenario} and \ref{figRDMSimDetrimental}. Under the default scenario, the \textit{RDMSim} meets the satisfaction thresholds in terms of the value ranges of bandwidth consumption, active links and time to write data. Under the uncertainty scenario \begin{math}S_{1}\end{math}, different disturbance levels are introduced that reduce the number of active links, affecting the reliability of the system when MST is the selected topology, as shown in Fig. \ref{figRDMSimDetrimental}. \begin{figure*}[h!] \includegraphics[width=\textwidth,keepaspectratio]{Images/scenario0screen.eps} \caption{Default Scenario} \label{figRDMSimStableScenario} \end{figure*} \begin{figure*}[h!] \includegraphics[width=\textwidth,keepaspectratio]{Images/scenario1screen.eps} \caption{Scenario 1} \label{figRDMSimDetrimental} \end{figure*} For further validation purposes, we have applied reinforcement-learning-based decision-making to the \textit{RDMSim}. We provide our initial evaluation results for the \textit{RDMSim} using MR-POMDP++ \cite{samin2020priority,samin2021priority} as part of the \textit{RDMSimExemplar} repository.
MR-POMDP++ is based on the Multi-Reward Partially Observable Markov Decision Process (MR-POMDP). MR-POMDP is a multi-objective reinforcement learning technique that considers a decision-making agent acting in a partially observable environment. MR-POMDP++ performs adaptations on the basis of the multi-objective utility value computed at each simulation time step. We have executed experiments considering a network of 25 RDM mirrors using the default configuration setup provided in the \textit{configuration} file. In order to test our decision-making techniques DeSIRE~\cite{RossBencomoSEAMS2018} and MR-POMDP++~\cite{SaminSubmittedSoSyM2021}, we have also used the exemplar~\cite{IftikharRBW017}. The exemplar in \cite{IftikharRBW017} and \textit{RDMSim} focus on different domains and aspects (the IoT domain and the RDM domain with its effects on quality objectives, respectively) and complement each other. \textbf{Discussion:} An RDM can be seen as a specific example of a more generic type of application, where the decision making guides self-reconfiguration by identifying a target system configuration that provides the desired system behavior~\cite{ZhangChengICE2005,Goldsby2008}. A set of reconfiguration instructions to reach the desired target configuration is applied (i.e. the E in MAPE). These reconfiguration instructions define an adaptation path. Several adaptation paths may be chosen, and most self-reconfiguration approaches select adaptation paths based on trade-offs between several objectives, such as performance and reliability~\cite{Goldsby2008}. As such, \textit{RDMSim} can be used to test decision-making techniques applicable to other domains as well. \section{Introduction} Remote Data Mirroring (RDM) is a disaster recovery technique used to protect data by storing multiple copies (i.e. replicas) on physically remote servers (i.e. mirrors) \cite{ji2003seneca,Keeton04}.
The RDM system tolerates failures by requesting or rebuilding the lost or damaged data samples from another active mirror to facilitate data recovery. Hence, the RDM helps in maintaining data availability and preventing data loss. Furthermore, to ensure that distributed data is not lost or corrupted, the RDM is required to perform the replication and distribution of data in an efficient and reliable way. Considerable research efforts have targeted the domain of Remote Data Mirroring \cite{Ramirez2012b,Fredericks2015,cure_chapter_2015,Paucar2020,samin2020priority,BowersFredericksCheng2018}. However, RDM applications are very costly to implement, as the equipment used to install such applications is expensive. To the best of our knowledge, there is no exemplar available to support research based on the RDM paradigm. In this paper, we present \textit{RDMSim}, an exemplar that simulates a Remote Data Mirroring environment. The goal of \textit{RDMSim} is to offer researchers an RDM environment to test and compare their decision-making techniques \cite{de2013software} against other techniques. Other exemplars exist; however, they focus on other domains and aspects, such as cloud environments \cite{barna2015hogna}, cyber-physical systems \cite{kit2015architecture}, traffic management systems \cite{schmid2017model}, client-server systems \cite{cheng2009evaluating} and IoT-based systems \cite{IftikharRBW017}. In comparison to \cite{barna2015hogna}, which deals mainly with the functionality of cloud environments, such as workload management through the addition and removal of virtual machines, \textit{RDMSim} focuses mainly on the simulation of the Remote Mirroring process. The \textit{RDMSim} exemplar presented here is implemented in Java, keeping in view the operational model presented in \cite{Keeton04,ji2003seneca}. It simulates the RDM as a fully connected network of mirrors.
The simulator offers the flexibility of changing the number of mirrors to create a customized RDM network according to the experiment's requirements. The focus is on the application of self-adaptive realization strategies in the form of the Minimum Spanning Tree (MST) and Redundant Topology (RT) topologies. The application of these topologies has an impact on different network parameters, such as bandwidth consumption and active network links, which affect quality objectives such as the minimization of operational costs and the maximization of the reliability of the network. A trade-off of such impacts has to be taken into account as part of the decision making \cite{elahi2011requirements,saadatmand_fuzzy_2015,ramirez2009evolving,sawyer_requirements-aware_2010, Goldsby2008}. The topological impacts have been defined based on the expert knowledge presented in \cite{Paucar2020}. Additionally, we provide an implementation of different scenarios that define possible uncertain environmental contexts for the RDM \cite{esfahani_taming_2011}. A Python version is also publicly available. Researchers can use these scenarios to test their specific decision-making techniques based on, among others, Reinforcement Learning \cite{samin2021priority}, Multi-Criteria Decision Analysis \cite{triantaphyllou2000multi} and Evolutionary Computation \cite{Ramirez2012b}. Researchers can also design their own scenarios by modifying the different parameter ranges. The paper is organized as follows: Section 2 presents the operational model of an RDM. In Section 3, we present the architecture of the \textit{RDMSim} exemplar. Section 4 provides a description of the different scenarios for the experiments that can be executed by \textit{RDMSim}. In Section 5, an example of how to execute experiments using \textit{RDMSim} is provided, followed by the conclusion in Section 6.
\section{Remote Data Mirroring} The RDM application is composed of data servers and network links~\cite{ji2003seneca,Keeton04}. It must replicate and distribute data in an efficient manner by minimizing consumed bandwidth and providing assurance that distributed data is neither lost nor corrupted \cite{ji2003seneca}. The RDM application must achieve functional objectives such as \textit{construct a connected network} and \textit{distribute data}. These functional objectives can be achieved through alternative realization strategies represented by two different topologies: \textit{Minimum Spanning Tree} (MST) and \textit{Redundant Topology} (RT). An MST topology uses the least possible number of network links to transmit data among different remote servers. Contrarily, an RT topology simultaneously uses several redundant network link paths to transmit information among remote servers. The implementation of the RDM considered in this paper should also satisfy the following three quality objectives: \textit{Maximization of Reliability} (MR), \textit{Maximization of Performance} (MP) and \textit{Minimization of Cost} (MC). The levels of satisfaction associated with reliability, performance and cost of the RDM are determined according to trade-offs based on the following: \begin{itemize} \item An RT topology \textit{offers higher levels of reliability} than an MST topology. However, the cost of maintaining an RT topology may be prohibitive in some contexts, given the additional cost of the bandwidth consumption required. \item Conversely, an MST topology \textit{offers higher levels of performance} with \textit{lower levels of cost} than an RT topology. However, the reliability of the system can be negatively impacted when an MST topology is used. \end{itemize} Based on the above, we have designed the \textit{RDMSim} exemplar. Next, we present the architecture for \textit{RDMSim}. \section{Scenarios} Six uncertainty scenarios, together with a default scenario, were defined to be used in simulations of the RDM.
These scenarios have been designed to simulate different archetypal real situations, which can cause the satisfaction of the system's quality objectives to deteriorate in relation to a scenario with stable conditions. The main goal of the scenarios described below is to evaluate how decision-making techniques and algorithms react under uncertain situations, especially ones that differ from the stable conditions. Next, a description of each scenario is presented.\\ \textbf{Default scenario S$_{0}$: } For the sake of comparison between techniques, a default scenario is provided that represents an environment envisioned by the requirements experts \cite{Fredericks2015,Paucar2020}. For the \textit{RDMSim}, the following thresholds for the levels of satisfaction associated with reliability, performance and cost are suggested: the bandwidth consumption should be on average less than or equal to 40\%. Similarly, the time to write data should be on average less than or equal to 45\%. On the other hand, the number of active links should be on average greater than or equal to 35\% of the total number of links. The initial topology being used is the MST topology.\\ \textbf{Scenario S$_{1}$ - Unexpected Packet Loss during MST: } The initial topology being used is the MST topology. A period of consecutive and unexpected data packet loss during the execution of the MST topology generates a reduction in the reliability of the system. Data packet loss represents link failures in the RDM system, which may be caused, for example, by problems with the equipment (e.g. failures in a switch or router, or power failures \cite{ji2003seneca}). \\ \textbf{Scenario S$_{2}$ - Unexpected Packet Loss during RT: } The initial topology being used is the RT topology. Unexpected data packet loss during the execution of the RT topology generates an unusual rate of data forwarding, which increases the bandwidth consumption (i.e. cost) and reduces the system's performance.
As stated before, in the RDM the cost of inter-site link communication is a function of the data sent over the links. Therefore, a \textit{Redundant Topology} (RT), which involves a larger number of inter-site network links than a \textit{Minimum Spanning Tree} (MST) topology, is more expensive. Cost increases as the number of active links increases, and a reduction in the system's performance\footnote{The performance in these systems is measured as the total time to perform the write of data, which is the sum of the response times of the writes of each copy of data on each remote site \cite{ji2003seneca}.} can also be expected. \\ \textbf{Scenario S$_{3}$ - Simultaneous occurrence of scenarios S$_{1}$ and S$_{2}$: } The current topology is randomly generated.\\ \textbf{Scenario S$_{4}$ - MST topology execution failures: } The topology being used is the MST topology. It involves the behaviour presented in scenario S$_{1}$. Additionally, during the execution of the MST topology, an increment in bandwidth consumption (MC) and a reduction of the system's performance (MP) are also produced due to synchronous mirroring.\\ \textbf{Scenario S$_{5}$ - RT topology execution failures: } The topology being used is the RT topology. It involves the behaviour presented in scenario S$_{2}$. Additionally, the RT topology also produces a reduction in the reliability of the system (MR) due to failures in equipment such as routers and switches.\\ \textbf{Scenario S$_{6}$ - Significant site failure: } The current topology is randomly generated. This scenario involves the simultaneous occurrence of scenarios S$_{4}$ and S$_{5}$. It is related to a significant site failure~\cite{ji2003seneca,Keeton04}, where both repeated and multiple concurrent failures are expected~\cite{ji2003seneca}, as in scenarios S$_{4}$ and S$_{5}$ but all at the same time.
A full-scale site failure may be caused by a power outage affecting all the buildings on different campuses, or by an earthquake or flood affecting buildings within several metropolitan areas. Under this scenario, the worst-case data loss \cite{Keeton04} may occur at different sites (RDM nodes), i.e. a site can be destroyed or rendered inoperative before the full backup of information is shipped offsite. Site failure disasters are usually modelled with a failure rate of once per year \cite{Keeton04}.
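To make the scenarios concrete, the default thresholds of S$_{0}$ together with the topology trade-offs described earlier can be encoded as a simple analysis-and-plan rule. This is our own illustrative sketch, not part of the exemplar; the class and method names are hypothetical, and only the threshold values come from the text above.

```java
// Illustrative analysis-and-plan rule reacting to the scenario disturbances:
// reliability violations (e.g. packet loss in S1) favour the redundant
// topology, while cost or performance violations (e.g. S2) favour MST.
class TopologyPlanner {
    // Default satisfaction thresholds suggested for scenario S0
    static final double MAX_BANDWIDTH_PCT = 40.0;      // bandwidth consumption
    static final double MAX_WRITE_TIME_PCT = 45.0;     // time to write data
    static final double MIN_ACTIVE_LINK_FRACTION = 0.35; // of total links

    /** Decide the topology ("mst" or "rt") for the next time step. */
    static String choose(String current, double bandwidthPct, double writeTimePct,
                         int activeLinks, int totalLinks) {
        boolean reliabilityOk = activeLinks >= MIN_ACTIVE_LINK_FRACTION * totalLinks;
        boolean costOk = bandwidthPct <= MAX_BANDWIDTH_PCT;
        boolean performanceOk = writeTimePct <= MAX_WRITE_TIME_PCT;
        if (!reliabilityOk) return "rt";              // trade cost for reliability
        if (!costOk || !performanceOk) return "mst";  // trade redundancy for cost
        return current;                               // all objectives satisfied
    }
}
```

Such a rule would be invoked at each simulation time step with the values monitored through the Probe, passing the chosen topology to \textit{setNetworkTopology()}.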
\section{Introduction} Neutrino physics has been at the forefront of particle phenomenology in recent years. After the discovery of neutrino oscillations, experiments have entered the precision era \cite{Fukuda:1998mi,Collaboration:2007zza,KamLAND2007}, measuring the relevant parameters with increasing accuracy \cite{Schwetz:2008er} and highlighting the need for an explanation of neutrino masses. The seesaw mechanism is the most popular framework to accommodate neutrino masses \cite{Minkowski:1977sc,seesaw,MohSen,Schechter:1980gr,Cheng:1980qt,Foot:1988aq}. The existence of very heavy fields, right-handed neutrinos in the most common variation, naturally explains the smallness of neutrino masses due to the famous seesaw relation, $m_\nu \sim v_{EW}^2 / M_{heavy}$. However, the structure of the seesaw makes direct tests impossible. The new heavy fields cannot be produced at colliders and therefore at best only indirect tests will be possible. In addition to the question of neutrino masses, the Standard Model has other theoretical problems that need to be addressed. Several extensions have been built with this purpose, filling the literature with a wide variety of ideas that will be put to experimental test in the coming years. Among the different choices, Supersymmetry (SUSY) is the most popular one because it technically solves the hierarchy problem \cite{Dimopoulos:1981zb} and it has the capability to address many of the other open questions, like the nature of dark matter, gauge coupling unification and radiative symmetry breaking, to mention a few. One of the key points in supersymmetric model building is R-parity violation/conservation \cite{Fayet:1974pd,Farrar:1978xj}. This discrete symmetry, defined as $R_p = (-1)^{3(B-L)+2s}$ (where $B$ and $L$ stand for baryon and lepton numbers and $s$ for the spin of the particle), is usually imposed to forbid the dangerous baryon and lepton number violating interactions, never seen in nature.
If both were simultaneously present, proton decay would be extremely fast, a phenomenological disaster that is prevented by forbidding the R-parity violating (\rpv) couplings. One of the consequences of this new symmetry is the existence of a stable particle, a natural dark matter candidate, and thus its practical importance is beyond any doubt. However, its fundamental origin is totally unknown and in most cases it is introduced by hand in the theory. Compared to the Standard Model, where baryon and lepton numbers are automatically conserved, this is a step back. A possible approach to understand the origin of R-parity is to embed the MSSM in an extended model with a gauge group containing a $U(1)_{B-L}$ piece. The original high-energy theory will conserve R-parity due to the gauge symmetry and R-parity will become a remnant subgroup at low energies if the scalar fields responsible for the breaking have even $B-L$ charges. This picture can be realized in minimal $U(1)_{B-L}$ extensions of the MSSM, see for example \cite{FileviezPerez:2010ek}, or, more ambitiously, in Left-Right (LR) symmetric models \cite{earlyLR}. Apart from the conservation of R-parity, this type of model has other motivations. The original motivation was the restoration of parity as an exact symmetry at higher energies \cite{earlyLR}. In addition, it has been shown that they provide technical solutions to the SUSY CP and strong CP problems \cite{Mohapatra:1996vg}, they give an understanding of the $U(1)$ charges and they can be embedded in $SO(10)$ Grand Unified Theories (GUTs). Here we study some phenomenological aspects of a supersymmetric LR model that leads to R-parity conservation at low energies and incorporates a type-I seesaw mechanism to generate neutrino masses. We will concentrate the discussion on lepton flavor violating signals in slepton decays, such as $\tilde{l}_i \to \tilde{\chi}_1^0 \: l_j$ with $i \neq j$.
These signatures have been known to be important in SUSY models for many years \cite{Borzumati:1986qx,Hisano:1995nq,Hisano:1995cp}, and in fact they have already been studied in great detail for minimal seesaw implementations, see for example \cite{Hirsch:2008dy,Hirsch:2008gh,Esteves:2009qr}. This work, however, studies the case of a non-minimal seesaw, which implies new features. In particular, the high-energy restoration of parity enhances the flavor violating effects in the right-handed slepton sector, contrary to the usual expectation. This way, by measuring branching ratios of LFV decays at colliders one can get valuable information on the structure of the high-energy theory. \section{The model} \subsection{How to break the LR symmetry} There are many supersymmetric LR models in the literature. From the pioneering works in the 70s, many extensions and variations have been proposed. In all of them the left-right gauge group $SU(3)_c \times SU(2)_L \times SU(2)_R \times U(1)_{B-L}$ breaks down to the SM gauge group $SU(3)_c \times SU(2)_L \times U(1)_Y$. However, one can choose different representations in the scalar sector, responsible for the breaking of the symmetry, and obtain very different low energy effective theories. In addition, there are several ingredients that cannot be forgotten if one wants to have a consistent framework. Therefore, four requirements will be imposed as guidelines for the choice of the model: (a) Automatic conservation of R-parity, (b) Parity conservation at high energies, (c) Seesaw mechanism, and (d) Cancellation of anomalies. The first LR models used $SU(2)_R$ doublets to break the gauge symmetry. The non-supersymmetric model proposed in references \cite{earlyLR} introduced two additional scalar doublets $\chi_L$ and $\chi_R$, where $\chi_L \equiv \chi_L(1,2,1,1)$ and $\chi_R \equiv \chi_R(1,1,2,-1)$ under $SU(3)_c \times SU(2)_L \times SU(2)_R \times U(1)_{B-L}$\footnote{Note the duplication in the number of fields.
This comes from parity conservation, that implies the same number of $SU(2)_L$ and $SU(2)_R$ charged fields, and anomaly cancellation, that implies that for every group representation with charge $+Q$ under $U(1)_{B-L}$ there must be another one with charge $-Q$. This will be also found in the models discussed below.}. When the neutral component of $\chi_R$ gets a vacuum expectation value (VEV), $\langle \chi_R^0 \rangle \neq 0$, the gauge symmetry is broken down to the SM gauge group and the known low energy phenomenology with broken parity is recovered. However, such models are not suited for the purpose of this study. The reason comes from the oddness of $\chi_R$ under $U(1)_{B-L}$. In a supersymmetric version of the model, the breaking of the gauge symmetry by $\langle \chi_R^0 \rangle$ also implies the breaking of R-parity. This could be solved by imposing additional discrete symmetries to the model that forbid the dangerous \rpv operators \cite{Malinsky:2005bi}, but this cannot be regarded as automatic R-parity conservation. In addition, there is no seesaw mechanism to generate neutrino masses, and additional superfields would be needed to account for them. The simplest solution is to break the gauge symmetry by $SU(2)_R$ fields with even charge under $U(1)_{B-L}$. This was in fact proposed in reference \cite{Cvetic:1983su}, where four triplets were added to the MSSM spectrum: $\Delta(1,3,1,2)$, $\Delta^c(1,1,3,-2)$, $\bar{\Delta}(1,3,1,-2)$ and $\bar{\Delta}^c(1,1,3,2)$. In these models, sometimes called MSUSYLR (Minimal Supersymmetric Left-Right), $SU(2)_R \times U(1)_{B-L}$ is broken down to $U(1)_Y$ by the VEVs of the scalar components of $\Delta^c$ and $\bar{\Delta}^c$. When this occurs, a right-handed neutrino mass is generated from the operator $L^c \Delta^c L^c$, leading to a type-I seesaw mechanism. However, the issue of R-parity conservation is not clear. 
Although one would naively expect that R-parity is automatically conserved due to the even $B-L$ charges of $\Delta^c$ and $\bar{\Delta}^c$, the scalar potential of the theory might not allow for vanishing sneutrino VEVs \cite{Kuchimanchi:1993jg}, favoring the existence of R-parity breaking minima. For many years, MSUSYLR and its minimal extensions were believed to break R-parity. Nevertheless, the authors of the recent reference \cite{Babu:2008ep} claimed that 1-loop corrections change the picture, allowing for vanishing sneutrino VEVs in the minimum of the potential. Since this is still a controversial issue that relies on not fully calculated loop corrections, we leave this possibility for future studies. Finally, Aulakh and collaborators \cite{Aulakh:1997ba,Aulakh:1997fq} extended MSUSYLR by the addition of two triplets, $\Omega(1,3,1,0)$ and $\Omega^c(1,1,3,0)$. They showed that the scalar potential offers the new possibility, within large regions of parameter space, of having minima that conserve R-parity while breaking the gauge symmetry in the proper way. Moreover, since the $\Delta$ triplets are part of the spectrum, the seesaw mechanism is present as in MSUSYLR, generating small masses for the light neutrinos. In conclusion, this model fulfills the requirements that we imposed, and thus we will concentrate on it in the following.
\subsection{Model basics} The matter content of the model is \cite{Aulakh:1997ba,Aulakh:1997fq}
\begin{center}
\begin{tabular}{c c c c c c}
\hline
Superfield & generations & $SU(3)_c$ & $SU(2)_L$ & $SU(2)_R$ & $U(1)_{B-L}$ \\
\hline
$Q$ & 3 & 3 & 2 & 1 & $\frac{1}{3}$ \\
$Q^c$ & 3 & $\bar{3}$ & 1 & 2 & $-\frac{1}{3}$ \\
$L$ & 3 & 1 & 2 & 1 & $-1$ \\
$L^c$ & 3 & 1 & 1 & 2 & $1$ \\
$\Phi$ & 2 & 1 & 2 & 2 & $0$ \\
$\Delta$ & 1 & 1 & 3 & 1 & $2$ \\
$\bar{\Delta}$ & 1 & 1 & 3 & 1 & $-2$ \\
$\Delta^c$ & 1 & 1 & 1 & 3 & $-2$ \\
$\bar{\Delta}^c$ & 1 & 1 & 1 & 3 & $2$ \\
$\Omega$ & 1 & 1 & 3 & 1 & $0$ \\
$\Omega^c$ & 1 & 1 & 1 & 3 & $0$ \\
\hline
\end{tabular}
\end{center}
Here $Q$, $Q^c$, $L$ and $L^c$ contain the quark and lepton superfields of the MSSM with the addition of a right-handed neutrino $\nu^c$. The two $\Phi$ superfields are $SU(2)_L \times SU(2)_R$ bidoublets and contain the usual $H_d$ and $H_u$ MSSM Higgs doublets. Finally, the rest of the superfields are introduced to break the LR symmetry. With these representations, the most general superpotential compatible with the gauge symmetry and parity is
\begin{eqnarray} \label{eq:Wsuppot1}
{\cal W} &=& Y_Q Q \Phi Q^c + Y_L L \Phi L^c - \frac{\mu}{2} \Phi \Phi + f L \Delta L + f^* L^c \Delta^c L^c \nonumber \\
&+& a \Delta \Omega \bar{\Delta} + a^* \Delta^c \Omega^c \bar{\Delta}^c + \alpha \Omega \Phi \Phi + \alpha^* \Omega^c \Phi \Phi \\
&+& M_\Delta \Delta \bar{\Delta} + M_\Delta^* \Delta^c \bar{\Delta}^c + M_\Omega \Omega \Omega + M_\Omega^* \Omega^c \Omega^c \nonumber
\end{eqnarray}
Family and gauge indices have been omitted in equation \eqref{eq:Wsuppot1}. $Y_Q$ and $Y_L$ are the usual quark and lepton Yukawa couplings. However, note that $Y_Q Q \Phi Q^c \equiv Y_Q^\alpha Q \Phi_\alpha Q^c$ and $Y_L L \Phi L^c \equiv Y_L^\alpha L \Phi_\alpha L^c$, with $\alpha = 1,2$, and thus there are four $3 \times 3$ Yukawa matrices. Conservation of parity implies that they must be symmetric.
$f$ is a $3 \times 3$ complex symmetric matrix, whereas $\alpha$ is a $2 \times 2$ antisymmetric matrix, and thus it only contains one complex parameter, $\alpha_{12}$. The breaking of the LR gauge group to the MSSM gauge group happens in two steps. \begin{displaymath} SU(2)_R \times U(1)_{B-L} \quad \longrightarrow \quad U(1)_R \times U(1)_{B-L} \quad \longrightarrow \quad U(1)_Y \end{displaymath} The first step is due to the VEV of the $\Omega^{c \: 0}$ field, $\langle \Omega^{c \: 0} \rangle = \frac{v_R}{\sqrt{2}}$, which breaks $SU(2)_R$. However, note that, since $T_{3R} (\Omega^{c \: 0}) = 0$, there is a $U(1)_R$ symmetry left over. Next, the group $U(1)_R \times U(1)_{B-L}$ is broken by $\langle \Delta^{c \: 0} \rangle = \frac{v_{BL}}{\sqrt{2}}$ and $\langle \bar{\Delta}^{c \: 0} \rangle = \frac{\bar{v}_{BL}}{\sqrt{2}}$. The remaining symmetry is $U(1)_Y$, with hypercharge defined as $Y = I_{3R} + \frac{B-L}{2}$. Note that, since the tadpole equations do not link $\Omega^c$, $\Delta^c$ and $\bar{\Delta}^c$ with their left-handed counterparts, the left-handed triplets can be taken to have vanishing VEVs \cite{Aulakh:1997ba}. Although a hierarchy between the two breaking scales may exist, $v_{BL} \ll v_R$, one cannot neglect the effects of the second breaking stage on the first one. The tadpole equations mix them, and only through the contribution of the $\Delta^c-\bar{\Delta}^c$ fields can one understand a non-vanishing $v_R$ VEV. In fact, there is an inverse hierarchy between the VEVs and the superpotential masses $M_\Delta$, $M_\Omega$, given by \begin{equation} \label{tadpolesol} v_R = \frac{2 M_\Delta}{a} \qquad v_{BL} = \frac{2}{a} (2 M_\Delta M_\Omega)^{1/2} \end{equation} Thus, for $v_{BL} \ll v_R$ one needs $M_\Delta \gg M_\Omega$ \cite{Aulakh:1997ba}. \section{Slepton decays and LFV} Lepton flavor violation is a well known indirect test of the seesaw mechanism \cite{Borzumati:1986qx,Hisano:1995nq,Hisano:1995cp}.
Assuming flavor-blind soft SUSY breaking terms at some high-energy scale, the RGE running down to the SUSY scale generates non-zero off-diagonal entries in the slepton soft squared masses. These flavor violating entries are connected to the effective neutrino mass matrix and thus, by making some assumptions, one can find testable relations. Moreover, they induce LFV decays, such as $l_i \to l_j \gamma$ and $\tilde{l}_i \to \tilde{\chi}_1^0 \: l_j$ with $i \neq j$. By studying these decays one can set important constraints and get valuable information on the underlying theory. In all cases studied in the literature, based on minimal seesaw models, the off-diagonal entries of the soft masses of right-handed sleptons get negligible contributions from the RGE running and thus one expects no visible signal of LFV from the right-handed sector at the LHC. However, in LR models the gauge symmetry makes the left- and right-handed sectors behave the same, and thus the LFV entries of the soft squared masses in the right-handed sector must contain non-negligible contributions. This novel LHC signal would point to a high-energy LR symmetry in a very clean way. That is the main result of this work. As we will show below, LFV in the right-handed lepton/slepton sector can be observable in this model. In order to demonstrate this, we perform a numerical calculation using the code SPheno \cite{Porod:2003um}, including 2-loop RGEs and the corresponding 1-loop threshold corrections at the intermediate scales. The Yukawa parameters $Y_L$ are fixed in order to correctly reproduce neutrino oscillation data. Finally, all analytical computations have been done with the help of the Mathematica package SARAH \cite{sarah}. Figure \ref{fig:FVvsSeesaw} shows $Br(\tilde{\tau}_i \to \tilde{\chi}_1^0 \: e)$ and $Br(\tilde{\tau}_i \to \tilde{\chi}_1^0 \: \mu)$ as a function of the seesaw scale, defined as $M_{Seesaw} \equiv f v_{BL}$.
The dependence on the seesaw scale is clearly understood from the seesaw formula: larger $M_{Seesaw}$ requires larger Yukawa parameters in order to fit neutrino masses, which, in turn, leads to larger flavor violating terms due to RGE running. \begin{figure} \begin{center} \vspace{5mm} \includegraphics[width=0.49\textwidth]{img/stauFV-1e15.ps} \end{center} \vspace{-5mm} \caption{$Br(\tilde{\tau}_i \to \tilde{\chi}_1^0 \: e)$ and $Br(\tilde{\tau}_i \to \tilde{\chi}_1^0 \: \mu)$ as a function of the seesaw scale, defined as $M_{Seesaw} \equiv f v_{BL}$, for the parameter choice $v_{BL} = 10^{15}$ GeV and $v_R = 5 \cdot 10^{15}$ GeV. The dashed lines correspond to $\tilde{\tau}_1 \simeq \tilde{\tau}_R$, whereas the solid ones correspond to $\tilde{\tau}_2 \simeq \tilde{\tau}_L$. The mSUGRA parameters have been taken as in the SPS3 benchmark point \cite{Allanach:2002nj} and the right-handed neutrino spectrum has been assumed to be degenerate, $M_{R i} = M_{Seesaw}$.} \label{fig:FVvsSeesaw} \end{figure} Furthermore, figure \ref{fig:FVvsSeesaw} also shows that right-handed staus can have LFV decays with observable rates. This is the main novelty in this model. One can see that for large $M_{Seesaw}$ values, around $10^{13} - 10^{14}$ GeV, the rates for LFV are measurable for both left- and right-handed staus. See references \cite{Andreev:2006sd,delAguila:2008iz} for the LHC discovery potential in the search for LFV. The previous result can be understood by using analytical approximations for the slepton soft squared masses. The running from the GUT scale to the SUSY scale generates off-diagonal entries $\Delta m^2$ in both left- and right-handed slepton soft masses.
In the first step, from the GUT scale to the $v_R$ scale, they can be written in leading-log approximation as \cite{Chao:2007ye} \begin{eqnarray} \Delta m_L^2 &=& - \frac{1}{4 \pi^2} \left( 3 f f^\dagger + Y_L^{(k)} Y_L^{(k) \: \dagger} \right) (3 m_0^2 + A_0^2) \ln \left( \frac{m_{GUT}}{v_R} \right) \label{rge1}\\ \Delta m_{L^c}^2 &=& - \frac{1}{4 \pi^2} \left( 3 f^\dagger f + Y_L^{(k) \: \dagger} Y_L^{(k)} \right) (3 m_0^2 + A_0^2) \ln \left( \frac{m_{GUT}}{v_R} \right) \label{rge2} \end{eqnarray} After parity breaking at $v_R$ the Yukawa coupling $Y_L$ splits into $Y_e$, the charged lepton Yukawa, and $Y_\nu$, the neutrino Yukawa. The latter contributes to LFV entries in the running down to the $v_{BL}$ scale \begin{eqnarray} \Delta m_L^2 &=& - \frac{1}{8 \pi^2} Y_\nu Y_\nu^\dagger (3 m_0^2 + A_0^2) \ln \left( \frac{v_R}{v_{BL}} \right) \\ \Delta m_{\tilde{e}^c}^2 &=& 0 \end{eqnarray} Finally, from $v_{BL}$ to the SUSY scale one recovers the MSSM RGEs, which do not add any flavor violating effect. This short discussion shows an important consequence of the symmetry breaking pattern. From the GUT scale to the $v_R$ scale parity is conserved and the magnitude of the LFV entries in the left- and right-handed sectors is the same, see eqs. \eqref{rge1} and \eqref{rge2}. However, from $v_R$ to $v_{BL}$ only the left-handed ones keep running, and thus one expects larger flavor violation in this sector. Moreover, if the difference between $v_R$ and $v_{BL}$ is increased, the difference between the LFV entries in the L and R sectors increases as well. This is shown in figure \ref{fig:difLR}, which displays the branching ratios for the LFV decays of the staus as a function of $v_{BL}$ for a fixed value of $v_R = 3 \cdot 10^{15}$ GeV. The theoretical expectation is confirmed: the difference between $Br(\tilde{\tau}_L)$ and $Br(\tilde{\tau}_R)$ strongly depends on the difference between $v_R$ and $v_{BL}$.
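This scaling can be made explicit with a schematic leading-log estimate (our illustration: it treats the flavor matrices as if they commuted and identifies the LFV part of $Y_L$ with $Y_\nu$):
\begin{equation*}
\frac{\Delta m_L^2 - \Delta m_{L^c}^2}{\Delta m_{L^c}^2} \; \sim \; \frac{Y_\nu Y_\nu^\dagger \, \ln \left( v_R / v_{BL} \right)}{2 \left( 3 f f^\dagger + Y_\nu Y_\nu^\dagger \right) \ln \left( m_{GUT} / v_R \right)}
\end{equation*}
so the relative left-right asymmetry grows logarithmically with $v_R/v_{BL}$, in agreement with the behavior seen in figure \ref{fig:difLR}.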
\begin{figure} \begin{center} \vspace{5mm} \includegraphics[width=0.49\textwidth]{img/stauFV-vBL-1e12.ps} \includegraphics[width=0.49\textwidth]{img/stauFV-vBL-1e13.ps} \end{center} \vspace{-5mm} \caption{$Br(\tilde{\tau}_i \to \tilde{\chi}_1^0 \: e)$ and $Br(\tilde{\tau}_i \to \tilde{\chi}_1^0 \: \mu)$ as a function of $v_{BL}$, for a fixed $v_R = 3 \cdot 10^{15}$ GeV. To the left, $M_{Seesaw} = 10^{12}$ GeV, whereas to the right $M_{Seesaw} = 10^{13}$ GeV. The dashed lines correspond to $\tau_1 \simeq \tau_R$, whereas the solid ones correspond to $\tau_2 \simeq \tau_L$. The mSUGRA parameters have been taken as in the SPS3 benchmark point \cite{Allanach:2002nj} and the right-handed neutrino spectrum has been assumed to be degenerate, $M_{R i} = M_{Seesaw}$.} \label{fig:difLR} \end{figure} The question arises whether one can determine the ratio $v_{BL}/v_R$ by measuring both $Br(\tilde{\tau}_L)$ and $Br(\tilde{\tau}_R)$ at the LHC. This is answered in figure \ref{fig:compLR}, where the ratio $Br(\tilde{\tau}_R \to \tilde{\chi}_1^0 \: \mu) / Br(\tilde{\tau}_L \to \tilde{\chi}_1^0 \: \mu)$ is plotted as a function of $v_{BL} / v_R$. A measurement of both branching ratios would allow one to constrain the ratio $v_{BL} / v_R$. However, there is a slight dependence on other important quantities, such as $m_{GUT}$ and $M_{Seesaw}$. This implies that more experimental information will be needed in order to set reliable constraints. \begin{figure} \begin{center} \vspace{5mm} \includegraphics[width=0.49\textwidth]{img/stauFV-LR.ps} \end{center} \vspace{-5mm} \caption{$Br(\tilde{\tau}_R \to \tilde{\chi}_1^0 \: \mu) / Br(\tilde{\tau}_L \to \tilde{\chi}_1^0 \: \mu)$ as a function of $v_{BL} / v_R$. The seesaw scale $M_{Seesaw}$ takes values in the range $[10^{12},10^{13}]$ GeV.
The rest of the parameters have been chosen as in figure \ref{fig:difLR}.} \label{fig:compLR} \end{figure} \section{Summary and conclusions} Neutrino masses and R-parity conservation are two issues not addressed in the MSSM. On the one hand, the MSSM does not provide an explanation for the observation of neutrino oscillations and the subsequent non-zero neutrino masses. These experimental results require the introduction of a mechanism that can explain the smallness of the neutrino masses, the seesaw mechanism being the most popular choice. On the other hand, R-parity is introduced in the MSSM as an ad hoc symmetry, without any theoretical motivation. It is therefore interesting to study extended symmetry groups that can lead to R-parity conservation at low energies. In this work we have studied some phenomenological aspects of a supersymmetric Left-Right model which automatically conserves R-parity and contains the seesaw mechanism to generate neutrino masses. We have found that, contrary to minimal realizations of the seesaw, large lepton flavor violating effects are obtained both in the left- and right-handed slepton sectors. This is a useful signature that allows us to get additional information on the high energy regime and clearly points to an underlying left-right symmetry. In particular, we have shown that observables like $Br(\tilde{\tau}_R \to \tilde{\chi}_1^0 \: l)$ can get strong deviations from the standard seesaw picture, allowing us to constrain the parameters of the high energy theory and get a hint of its structure. Furthermore, there are other observables which are also very sensitive to new right-handed flavor violation. Examples are slepton mass splittings and the polarization of the outgoing electrons in $\mu \to e \gamma$. We plan to address these issues in a future publication \cite{future}. \ack This talk was based on work in collaboration with J. Esteves, M. Hirsch, W. Porod, J. C. Romao and F.
Staub, and is supported by the Spanish MICINN under grants FPA2008-00319/FPA, FPA2008-04002-E and MULTIDARK Consolider CAD2009-00064, by Prometeo/2009/091 and by the EU grant UNILHC PITN-GA-2009-237920. A.V. thanks the Generalitat Valenciana for financial support. \section*{References}
\section{Introduction} Alkaline-earth-like atoms are of increasing interest for applications in quantum control, including optical atom clocks~\cite{diddams2003}, quantum computing~\cite{derevianko2004,hayes2007,daley2008,gorshkov2009}, and simulations of condensed matter systems~\cite{gorshkov2010}. Experimental advances are proceeding at a steady pace with demonstrations of a variety of important milestones, including clocks that now surpass the cesium standard~\cite{ludlow2008}, Bose-Einstein condensation in isotopes of ytterbium~\cite{takasu2003}, calcium~\cite{kraft2009}, and strontium~\cite{stellmer2009,escobar2009}, Fermi degenerate gases~\cite{fukuhara2007}, and the superfluid-to-Mott-insulator quantum phase transition~\cite{ybmott}. Another important ingredient in the quantum-control toolbox is the ability to control the interatomic interactions. Feshbach resonances have played an essential role in such manipulation of alkali-metal degenerate gases, allowing for the observation of the BEC-BCS crossover~\cite{greiner2003}. Whereas in alkali gases Feshbach resonances can be induced via magnetic fields that couple different channels in the electronic ground state, in alkaline-earth-like atoms this is not possible because of the lack of hyperfine structure in the ground $^1S_0$ state. An alternative is to employ an optical Feshbach resonance (OFR) by laser-coupling two scattering ground-state atoms to a metastable bound molecule in an excited-state potential~\cite{bohnandjulienne97}. Alkaline earths are particularly well suited to OFRs due to the existence of narrow intercombination lines of the kind studied for optical clocks, $^1S_0 \rightarrow {}^3P_J$~\cite{ciurylo2005}. Photoassociation spectroscopy has been used to measure narrow molecular resonances in the $^1S_0 + {}^3P_1$ channel~\cite{zelevinsky2006,enomoto2008}, an important first step toward implementation of OFRs.
In previous work we studied the use of OFRs associated with $s$-wave collisions to manipulate nuclear spin coherence in fermionic spin-1/2 $^{171}$Yb~\cite{iris_2009}. In the work presented here, we extend our study to $p$-wave OFRs of this species. The ability to manipulate $p$-wave collisions could open the door to studies of nonconventional superfluidity and other exotic quantum phases of matter~\cite{anisopsf}. Prior observations showed $p$-wave magnetic Feshbach resonances in alkalis to be too lossy for quantum coherent control~\cite{pwaveMFRJin}. Inelastic collisions are believed to be enhanced in these resonances because the $p$-wave scattering states are well localized behind the centrifugal barrier~\cite{pwaveMFRJin}. They thus have a very large Franck-Condon overlap with more tightly bound molecules below the Feshbach threshold, which leads to exothermic transitions. The use of OFRs can potentially mitigate this effect. In particular, purely-long-range (PLR) molecular states existing in excited-state potentials can be coupled optically to ground-state $p$-wave channels \cite{enomoto2008}. These PLR states, arising from avoided crossings in the excited-state hyperfine structure, have inner turning points at $\sim 50 a_0$, and are thus well separated from the chemical binding region. Inelastic transitions to bound ground-state molecules via excitation to PLR states should therefore be highly suppressed. In this case, heating due to spontaneous emission will be the dominant source of inelastic collisions, but this too can be suppressed through off-resonance excitation. In addition to suppressing inelastic recombination, OFRs offer opportunities for quantum control beyond what is possible with magnetic Feshbach resonances. For example, in $p$-wave collisions the projection of rotational angular momentum along a given axis is a new degree of freedom that can affect the symmetry of the order parameter in $p$-wave superfluidity~\cite{pxpy}.
In the presence of a bias magnetic field and for appropriate choices of laser polarization, we can address these degrees of freedom and control the scattering length associated with different projection quantum numbers. If an optical lattice trapping potential is added, a variety of rich phenomena can be explored with such control. For example, the three projections of angular momentum translate into three orbitals of a $p$-band in the first excited vibrational state of an optical lattice~\cite{scarola2005,pxpyhubbard}. With control of $p$-wave collisions of spin-polarized fermions, one can obtain a Hubbard model similar to the one that gives rise to 3-color superfluidity and trionic phases and is an important model of QCD~\cite{rapp2007}. In this article we study the use of an OFR to control $p$-wave collisions in $^{171}$Yb by exciting near photoassociation resonances of the $^1S_0 + {}^3P_1$ channel. After reviewing the system and the formalism for calculating the optically controlled scattering properties, we calculate the energy spectrum and scattering lengths including the presence of a magnetic field, which allows for polarization-dependent control of the interaction. We apply this to a toy model of 3-color superfluidity to give a benchmark of the performance of the $p$-wave OFR and summarize our results. \section{$p$-wave Photoassociation Resonances} We consider spin-polarized $^{171}$Yb, with nuclear spin $i = 1/2$, for which $s$-wave collisions are forbidden and $p$-waves dominate at low temperature. The essential formalism for describing the system, in the absence of an external magnetic field, was given in \cite{iris_2009}. We review the salient points here. 
The two-atom states in each of the collision channels are governed by an effective potential of the form \begin{equation} V_{\text{eff}} = \frac{R(R+1)}{2\mu r^2}+V_{\text{BO}}(r)+V_{\text{HF}}+V_{\text{mag}}, \label{Veff} \end{equation} where $V_{\text{BO}}$ is the Born-Oppenheimer potential in Hund's case-(c), $V_{\text{HF}}$ is the hyperfine interaction, and $V_{\text{mag}}$ is the interaction with external magnetic fields. Here and throughout we set $\hbar=1$ and use atomic units. For the ground $^1S_0+ {}^1S_0$ collision there is only one channel, the nuclear spin triplet state $I=1, m_I=1$. There is no hyperfine interaction and we neglect the very small magnetic interaction with the nuclear magneton. As we are interested only in the near-threshold scattering states of this channel, the ground Born-Oppenheimer potentials can be approximated in a modified Lennard-Jones form~\cite{improved_pot}, \begin{equation}\label{gnd_pot} V_{\text{BO}}^{(g)}(r) = \frac{C_{12}^{(g)}}{r^{12}}-\frac{C_6^{(g)}}{r^6}-\frac{C_8^{(g)}}{r^8}, \end{equation} where $C_6^{(g)}=1931.7 \rm{a.u.}$, $C_8^{(g)}=1.93 \times 10^5 \rm{a.u.}$, and $C_{12}^{(g)}= 1.03409\times10^9 \rm{a.u.}$ ~\cite{improved_pot}. Since we are considering $p$-wave scattering, the rotational angular momentum is $R=1$. The system is not prepared in a state with a fixed projection of $R$, and thus the atoms can scatter with any allowed value of $m_R=-1, 0, 1$ relative to a space-fixed quantization axis, defined by the magnetic field. We obtain the scattering wave functions corresponding to the above potential numerically, using the Numerov method for integration~\cite{numerov1, numerov2}. In the excited $^1S_0+{}^3P_1$ channel, the description is more complicated.
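Before turning to the excited channel, the Numerov integration used for the ground channel can be sketched as follows. The grid, energy, and potential coefficients below are illustrative placeholders in scaled units, not the atomic-unit values quoted above:

```python
import numpy as np

# Minimal Numerov integration of the radial equation
#   u''(r) = 2*mu*(V_eff(r) - E) * u(r),
#   V_eff = R(R+1)/(2 mu r^2) + C12/r^12 - C6/r^6 - C8/r^8.
# All constants are placeholders in scaled units, chosen for illustration only.
mu = 1.0                       # reduced mass (assumed units)
C6, C8, C12 = 1.0, 0.1, 0.01   # placeholder potential coefficients

def V_eff(r, R=1):
    V_BO = C12 / r**12 - C6 / r**6 - C8 / r**8
    return R * (R + 1) / (2 * mu * r**2) + V_BO

def numerov(E, r0=0.5, r1=30.0, n=20000):
    r = np.linspace(r0, r1, n)
    h = (r1 - r0) / (n - 1)
    k2 = 2 * mu * (E - V_eff(r))          # u'' = -k2 u
    f = 1 + h**2 * k2 / 12                # Numerov auxiliary function
    u = np.zeros(n)
    u[0], u[1] = 0.0, 1e-6                # regular solution near the inner wall
    for i in range(1, n - 1):
        u[i + 1] = ((12 - 10 * f[i]) * u[i] - f[i - 1] * u[i - 1]) / f[i + 1]
    return r, u

r, u = numerov(E=0.001)   # outward integration at a small positive energy
```

The background phase shift would then follow from matching $u(r)$ to free spherical waves at large $r$; that matching step is omitted here.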
The electronic Born-Oppenheimer potentials are taken in Hund's case-(c) form, \begin{equation} \label{HBO} V_{\text{BO}}^{(e)}(r)=\frac{C_{12}^{(e)}}{r^{12}}-\frac{C_6^{(e)}}{r^6}-\sigma\frac{C_3^\Omega}{r^3}, \end{equation} with parameters determined by fits to experiments as $C_6^{(e)}=2810 \rm{a.u.}$, $C_{12}^{(e)}=1.862\times10^8 \rm{a.u.}$ and $C_{3}^{\Omega=1}=-C_{3}^{\Omega=0}/2=0.09695 \rm{a.u.}$ for the $1_u$ and $0_u$ states, respectively. The Hund's case-(c) variables, however, are not good quantum numbers in the region of interest. Coriolis forces mix nuclear rotation and electronic angular momentum, and the hyperfine interaction mixes this with nuclear spin \cite{tiesinga2005}. As such, the only good quantum numbers are the total angular momentum and its projection, which we denote $T, M_T$; parity is fixed here to be $-1$ for the $p$-wave collisions. \begin{figure}[floatfix] \begin{tikzpicture} \tikzstyle{every node}=[font=\normalsize] \node (pic) at (0, 0) {\includegraphics[width=7.7cm]{adiabatic_pot_thin.png}}; \node at (2.4, 0.6) {\small $^1S_0+{}^3P_1(f_2=3/2)$}; \end{tikzpicture} \caption{\label{fig:adiabpot} Adiabatic potentials for the four channels with $T=3$ that asymptote to the $^1S_0+{}^3P_1(f_2=3/2)$ channel. Since $M_T$ takes seven values, each channel is seven-fold degenerate.} \end{figure} We are interested in the molecular bound states, or photoassociation resonances, of these electronic potentials. Dipole selection rules break the resonances into two parity classes -- those accessible from $R$-even or $R$-odd ground states~\cite{enomoto2008, iris_2009}. Of particular interest are the PLR states arising from avoided crossings due to hyperfine mixing. Figure \ref{fig:adiabpot} shows the adiabatic potentials with $T=3$ that asymptote to the $^1S_0+{}^3P_1(f_2=3/2)$ channel, where $f_2$ is the hyperfine quantum number of the excited-state atom. There exists one potential with its minimum at $\sim 75 a_0$ and a depth of 0.68 GHz.
This shallow potential nonetheless supports bound states that are well resolved and can be used for $p$-wave OFRs with suppressed three-body recombination. To determine the photoassociation resonances, we employ a multichannel integration of the Schr\"{o}dinger equation as discussed in~\cite{iris_2009}. We consider first the case of no external magnetic fields. The effective potential operator in the $^1S_0+{}^3P_1$ channel, \Eref{Veff}, is written as a matrix expanded in the extended Hund's case-(e) basis $|\epsilon(T, M_T)\rangle \equiv |f_2, F, R, T, M_T\rangle$, where $\mathbf{F}=\mathbf{f}_1+\mathbf{f}_2$ and $\mathbf{T}=\mathbf{F}+\mathbf{R}$ \cite{tiesinga2005}. Here $f_1=1/2$ is the spin of the ground-state atom, and $f_2=3/2, 1/2$ is the hyperfine spin of the excited-state atom. In the ground state $I=F=1$, $M_F=1$, $R=1$, and the total angular momentum takes the possible values $T_g=0,1,2$. By dipole selection rules, in the excited channels the allowed values are therefore $T_e=0,1,2,3$. The effective excited potential matrix thus has 19 channels, each of which is $(2T_e+1)$-fold degenerate, resulting in a total of 89 channels. We denote the multichannel excited bound states as (neglecting the subscript $e$) \begin{equation}\label{expand} |n,T, M_T\rangle=\sum_{\epsilon(T, M_T)}{\psi_{n, \epsilon(T, M_T)}(r)|\epsilon(T, M_T)\rangle }. \end{equation} In the binding energy range of $-1022$ MHz to $-3$ MHz, the system supports 2 bound states with $T=0$, 26 bound states with $T=1$, 15 bound states with $T=2$, and 23 bound states with $T=3$.
\begin{figure}[floatfix] \begin{tikzpicture} \tikzstyle{every node}=[font=\normalsize] \node (pic) at (0, 0) {\includegraphics[width=7.5cm]{PLRthin.png}}; \path[sloped] (-4.1, -2.0) -- node {$\psi_{\bm n \bm , \bm \epsilon\bm(\bm T \bm , \bm M_{\bm T}\bm )}(r)$} (-4.1, +3); \end{tikzpicture} \caption{\label{fig:PLR} Spinor components of the multichannel wave function of the PLR bound molecular state at $-355$ MHz and with $T=3$. Each curve corresponds to a component associated with one of the six basis states, $| \epsilon(T, M_T) \rangle$, that contribute to this state. Since $M_T$ can assume seven distinct values, each wave function is 7-fold degenerate.} \end{figure} Of particular interest are the PLR states, marked with a * in Table \ref{vopt}(a). Figure \ref{fig:PLR} shows an example of a multichannel spinor wave function of the PLR bound molecular state at $-355$ MHz with $T=3$. Each spinor component corresponds to one of the six different $|\epsilon(T, M_T)\rangle$ channels, each of which is 7-fold degenerate. Most of the amplitude of the wave function is supported between $50 a_0$ and $150 a_0$. As such, the inner turning point is well removed from the chemical binding region and the outer turning point is sufficiently far out to allow for a large Franck-Condon factor in optical excitation. These features are advantageous for application to OFRs. \subsection{In an external magnetic field} We now consider the effect of an external magnetic field to allow for additional control of the system. With the $\mathbf{B}$-field defining the quantization axis and in the linear Zeeman regime, the perturbing potential is \begin{equation}\label{hb} V_{\text{mag}}= \sum_{f_2, m_{f_2}} g_{f_2}\mu_B B \, m_{f_2} | f_2, m_{f_2} \rangle \langle f_2, m_{f_2} | , \end{equation} where $g_{f_2}$ is the Land\'{e} g-factor of the atomic hyperfine level.
This Hamiltonian breaks the rotational symmetry and generally couples an infinite hierarchy of states with different total angular momenta $T$. For the relatively weak magnetic fields that we consider here, we can employ perturbation theory. The field breaks the degeneracy of the states within a $T$-manifold and mixes states with the same $M_T$ when the Zeeman shift is on the order of the vibrational spacing. For these weak magnetic fields, the value of $T$ at zero magnetic field still dominates and this will be used to label the states. This is particularly true for the PLR states, where $T$ remains approximately a good quantum number for all fields we use in our calculation. To obtain the eigenenergies and eigenfunctions in the magnetic field, we diagonalize $V_{\text{mag}}$ expressed as a matrix in the basis of the bound states $|n,T, M_T\rangle$ within the energy range given in the discussion following \eqref{expand}. The matrix elements are given by \begin{widetext} \begin{equation} \langle n , T, M_T|V_{\text{mag}}|n', T', M'_{T}\rangle= \sum_{\epsilon(T, M_T), \epsilon'(T', M_{T})}{\langle\epsilon(T, M_T)|V_{\text{mag}}|\epsilon'(T', M_{T})\rangle\int{\psi^*_{n, \epsilon(T, M_T)}(r)\psi_{n', \epsilon'(T', M_{T})}(r)}dr}\delta_{M_T,M'_T}. \end{equation} \end{widetext} The term $\langle\epsilon(T, M_T)|V_{\text{mag}}|\epsilon'(T', M_{T})\rangle$ characterizes the coupling of the spin degrees of freedom and the Franck-Condon overlap, $\int{\psi^*_{n, \epsilon(T, M_T)}(r)\psi_{n', \epsilon'(T', M_{T})}(r)}dr$, is the coupling of the radial wave functions. A part of the eigenspectrum, between $-427$ MHz and $-273$ MHz, is shown in Fig.~\ref{fig:envsB}. The PLR state of interest, with a binding energy of $355$ MHz and $T=3$, exhibits an approximately linear Zeeman splitting of its 7 magnetic sublevels over a range of 80 Gauss, as shown in the inset.
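The diagonalization step just described can be sketched compactly: the zero-field bound-state energies sit on the diagonal, and $V_{\text{mag}}$ couples states sharing the same $M_T$. In the sketch below, the matrix elements (spin couplings times Franck-Condon overlaps) are random placeholders rather than computed values:

```python
import numpy as np

# Sketch of the perturbative treatment: V_mag, expressed in the basis of
# zero-field bound states |n, T, M_T> with a common M_T, is added to the
# diagonal of zero-field energies and diagonalized. Matrix elements here
# are random placeholders standing in for spin couplings times overlaps.
rng = np.random.default_rng(0)

n_states = 6                                          # states sharing one M_T (assumed)
E0 = np.sort(rng.uniform(-430.0, -270.0, n_states))   # zero-field energies, MHz

V = rng.normal(0.0, 2.0, (n_states, n_states))        # placeholder <n|V_mag|n'>, MHz
V = (V + V.T) / 2.0                                   # V_mag is Hermitian

H = np.diag(E0) + V
energies, states = np.linalg.eigh(H)                  # Zeeman-shifted spectrum
```

Repeating this for a grid of $B$ values (the off-diagonal elements scale linearly with $B$ in the linear Zeeman regime) traces out curves like those in Fig.~\ref{fig:envsB}.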
Figure~\ref{fig:envsB} also shows that the PLR state with a binding energy of $383$ MHz and $T=1$ has an approximately linear Zeeman splitting of its 3 magnetic sublevels over the 80 Gauss range, while the remaining two states ($-279$ MHz and $-416$ MHz), which are not PLR, show nonlinear Zeeman shifts over that range of perturbation. \begin{figure} \begin{tikzpicture} \node (pic) at (0, 0) {\includegraphics[width=8.5cm]{plots_en_vs_B_full.png}}; \node at (-2.4, -0.1) {$T=3$}; \node at (-2.4, 4.0) {$T=1$}; \node at (-2.4, -1.6) {$T=1$}; \node at (-2.4, -3.5) {$T=3$}; \end{tikzpicture} \caption{\label{fig:envsB} Eigenspectrum of states in Table \ref{vopt}(b) (energy range: $-427$ MHz to $-273$ MHz) as a function of an applied magnetic field, $B$. For $B=0$ the eigenenergies and the quantum number $T$ correspond to the values in the leftmost column of Table \ref{vopt}(b). We calculate the OFR associated with tuning near the $T=3$ PLR state, bound by $-355$ MHz, which splits into 7 magnetic sublevels in the linear Zeeman regime over the 80 Gauss range plotted here.} \end{figure} \section{The $p$-wave OFR} To calculate the effect of the OFR on the $p$-wave scattering volume we turn to the theory of Bohn and Julienne~\cite{bohnandjulienne99}. In that formalism the laser field is chosen detuned close to, but off resonance from, a given photoassociation resonance. Only one bound state in a closed channel is assumed to contribute to the modification of the scattering volume. In practice, the laser field can couple to multiple excited bound states, and in the far-off-resonance limit, all will contribute. How such multiple resonances interfere and affect the OFR is a subject of continued research. Here, we will choose parameters for which one PLR bound state dominates and calculate its contribution to the OFR in both elastic and inelastic terms.
For a single bound state, the effect of the OFR on the $S$-matrix in the incoming $^1S_0 + {}^1S_0$ channel is \begin{equation}\label{Smat} S=e^{2i \eta_0} \frac{2 \Delta-i\left(\Gamma -\gamma \right)}{2 \Delta+i\left(\Gamma +\gamma \right)}. \end{equation} In \Eref{Smat}, $\gamma$ is the molecular natural linewidth, $\Delta$ is the detuning of the laser from the bound molecular state (including the light-shift of that level), $\eta_0$ is the background phase shift, and the stimulated linewidth is \begin{equation} \Gamma = \frac{\pi}{2} \left(\frac{I}{I_{sat}} \right) \gamma_A^2 f_{FC}. \end{equation} Here $I$ is the laser intensity, $I_{sat} = 0.13$ mW/cm$^2$ is the atomic saturation intensity for $^3P_1$, and $\gamma_A/2\pi = 182$ kHz is the atomic linewidth. The Franck-Condon factor, with rotational corrections, is \begin{equation} f_{FC}=\frac{ |\langle n,T,M_T | \mathbf{d}\cdot \boldsymbol{\epsilon}_L | \psi_g (k_r) \rangle |^2}{2 d_A^2}, \end{equation} expressed here as the ratio of the free-to-bound molecular transition dipole moment for laser polarization $\boldsymbol{\epsilon}_L$ to the atomic dipole moment $d_A^2=(3c^3\gamma_A)/(4\omega^3)$. In the Wigner-threshold regime $ |\psi_g (k_r) |^2 \propto k_r^3 $, where $k_r$ is the wave vector of the relative coordinate momentum at the scattering energy, and thus we define $\mathcal{V}_{\rm{opt}}$ as the ``optical volume" in analogy with the ``optical length" for $s$-wave OFRs, \begin{equation} \mathcal{V}_{\text{opt}}=\frac{\Gamma}{2 k_r^3 \gamma}. \end{equation} This is the parameter that defines the strength of the $p$-wave OFR.
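The two formulas above chain together directly: the stimulated linewidth is linear in intensity and in the Franck-Condon factor, and the optical volume inherits that linearity. A short sketch, with all inputs as dimensionless placeholders in a single consistent unit system (atomic units in the paper):

```python
import numpy as np

# Sketch of the stimulated linewidth and optical volume,
#   Gamma = (pi/2) (I/I_sat) gamma_A^2 f_FC,   V_opt = Gamma / (2 k_r^3 gamma).
# gamma_A, gamma, f_FC and k_r are illustrative placeholders, and all
# quantities must share one consistent unit system for V_opt to be meaningful.
gamma_A = 1.0           # atomic linewidth (placeholder, consistent units)
gamma = 2.0 * gamma_A   # molecular linewidth, ~ twice the atomic one (assumed)

def optical_volume(I_over_Isat, f_FC, k_r):
    Gamma = 0.5 * np.pi * I_over_Isat * gamma_A**2 * f_FC  # stimulated linewidth
    return Gamma / (2.0 * k_r**3 * gamma)                  # optical volume

# V_opt scales linearly with intensity at fixed f_FC and k_r:
v1 = optical_volume(1.0, 1e-12, 1e-3)
v2 = optical_volume(2.0, 1e-12, 1e-3)
```

Doubling the intensity doubles $\mathcal{V}_{\text{opt}}$, which is why the scattering volumes quoted later are given per unit intensity.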
\begin{table*} {\Large(a)} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Energy MHz} & \multicolumn{9}{c|}{$V_{\text{opt}}$ $a_0^3$}\\ \cline{2-10} & \multicolumn{3}{|c|}{$m_R=-1$} & \multicolumn{3}{|c|}{$m_R=0$} & \multicolumn{3}{|c|}{$m_R=1$}\\ \cline{2-10} & $q=-1$ & $q=0$ & $q=1$ &$q=-1$ & $q=0$ & $q=1$ & $q=-1$ & $q=0$ & $q=1$\\ \hline $-279 (T=1)$ & $2.13712\times10^6$ & $4.3758\times10^6$ & $3.61051\times 10^6$ & $396736$ &$36724$ & &$674875$ & &\\ \hline $\mathbf{-355* (T=3)}$ & $75605$ & $113407$ & $75605$ & $113407$ &$302419$ & $378024$ &$75605$ &$378024$&$1.13407\times10^6$\\ \hline $-383* (T=1)$ & $139596$ & $231235$ & $972724$ & $11501$ &$255429$ & &$158527$ & &\\ \hline $-416 (T=3)$ & $143913$ & $215869$ & $143913$ & $215869$ &$575652$ & $719565$ &$143913$ &$719565$&$2.15869\times10^6$\\ \hline \end{tabular} \vspace{0.7cm} {\Large(b)} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Energy MHz} & \multicolumn{3}{c|}{$V_{\text{opt}}$ $a_0^3$ ($B=0$ Gauss)}& \multicolumn{3}{c|}{$V_{\text{opt}}$ $a_0^3$ ($B=30$ Gauss)}\\ \cline{2-7} $B=0$& $m_R=-1$ & $m_R=0$ & $m_R=1$ &$m_R=-1$ & $m_R=0$ & $m_R=1$ \\ \hline $-279 (T=1)$ & $3.61051\times 10^6$ & & & 844623 & &\\ \hline $\mathbf{-355* (T=3)}$ & $75605$ & $378024$ &$1.13407\times10^6$& $76339 $&$379073$& $1.13089\times 10^6$\\ \hline $-383* (T=1)$ & $972724$ & & &973575& &\\ \hline $-416 (T=3)$ & $143913$ & $719565$ &$2.15869\times10^6$ & $26884$ & $478907$& $2.14338 \times 10^6$\\ \hline \end{tabular} \caption{\label{vopt} $p$-wave optical volumes ($\mathcal{V}_{\text{opt}}$) for the coupling of all possible initial states to four of the bound molecular states of the excited potential (the energies of these states and their $T$ value are shown in the first column ) with (a) different polarizations, (b) polarization $q=1$. Blank entries indicate that the particular combination of initial state, polarization and final state is forbidden. 
A * indicates that the particular state is a PLR state. The PLR state bound at $-355$ MHz, denoted in bold face, is used for the OFR calculation presented here.} \end{table*} Selection rules dictate the allowed transitions that are accessible for an OFR. For atoms with spin-polarized nuclei scattering on the ground $^1S_0+{}^1S_0$ potential, $I=1$, $m_I=1$, $T_g=0,1,2$, and $M_{T_g}=m_R+1$, where $m_R = -1,0,1$ are the projections of the partial-wave angular momentum on the quantization axis. We can thus optically connect to excited $^1S_0+{}^3P_1$ bound molecules with $T_e=0,1,2,3$, and $M_{T_e} = M_{T_g}+q=m_R+1+q$, where $q$ denotes the projection of photon helicity $(\pi, \sigma_\pm)$. Table \ref{vopt}(b) shows the values of $\mathcal{V}_{\text{opt}}$ for the coupling of the partial-wave projections $m_R$ to four of the excited bound molecular states using $\sigma_+$ polarized light at $B=0$ Gauss and $B=30$ Gauss. A non-zero $\mathbf{B}$-field mixes the different eigenstates with the same $M_{T_e}$, changing the Franck-Condon overlap of each state with the scattering wave function of the ground potential. For the PLR states we see that $\mathcal{V}_{\text{opt}}$ is fairly constant, while for the other states $\mathcal{V}_{\text{opt}}$ is significantly changed by the magnetic field. This is because the poor overlap of the PLR states with their neighboring non-PLR states suppresses mixing. With the scattering matrix in hand, the $p$-wave scattering volume is defined as $a_p^3 = -K/k_r^3$, where the $K$-matrix element is given by~\cite{taylor} \begin{equation}\label{Kmat} K=i\frac{1-S}{1+S}=-\frac{\Gamma/2}{\Delta +i\gamma/2} , \end{equation} excluding the background phase shift.
The real and imaginary parts of the $p$-wave scattering volume are then \begin{eqnarray}\label{scatt_vol} \Re{(a_p^3)} &=& a_{bg}^3+\mathcal{V}_{\rm{opt}}\frac{\gamma\Delta}{\Delta^2 +\frac{\gamma^2}{4}},\\ \Im{(a_p^3)} &=& -\frac{\mathcal{V}_{\rm{opt}}}{2} \frac{\gamma^2}{\Delta^2 +\frac{\gamma^2}{4}}, \end{eqnarray} where the background contribution $a_{bg}^3$ has been added. We obtain the background phase shift $\eta_{0}$ by numerical integration of the $p$-wave scattering state in the Wigner threshold regime and fitting to the asymptotic wave function. We find the background scattering volume to be $a_{bg}^3 =-406446$ a.u. The real and imaginary parts of the scattering volume govern the strengths of the elastic and inelastic collisions, respectively. In principle, one can increase the ratio of good to bad collisions solely by increasing the detuning. In practice, this is limited by the available intensity that is required to ensure a sufficiently strong interaction. Moreover, our model is restricted to an OFR via a single molecular bound state, and for self-consistency, we require a sufficiently small detuning so that only one photoassociation resonance dominates the process. For these reasons, we must choose a state in a sufficiently sparse region of the density of states so that when the laser is detuned closest to this state, even for detunings large enough to avoid spontaneous scattering, the single resonance model is valid. We thus seek a PLR state that we can address with high resolution and with a sufficient optical volume to induce a strong OFR. Firstly, as we are considering spin-polarized fermions, we can ignore the nearby spectrum of $s$-wave photoassociation resonances and concentrate only on the bound states connected to $p$-waves. Secondly, by employing dipole selection rules, we can reduce the number of allowed transitions and reduce the density of states.
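The detuning dependence of eqs. \eqref{scatt_vol} can be evaluated directly; in particular, the loss-to-elastic ratio $|\Im(a_p^3)| / |\Re(a_p^3)-a_{bg}^3|$ reduces analytically to $\gamma/2|\Delta|$. A sketch using the background volume quoted above, with the optical volume and molecular linewidth as assumed inputs:

```python
import numpy as np

# Detuning dependence of the p-wave scattering volume:
#   Re(a_p^3) = a_bg^3 + V_opt * gamma * Delta / (Delta^2 + gamma^2/4)
#   Im(a_p^3) = -(V_opt/2) * gamma^2 / (Delta^2 + gamma^2/4)
# a_bg^3 is the value quoted in the text; V_opt and gamma are illustrative.
a_bg3 = -406446.0   # background scattering volume, a.u. (from the text)
V_opt = 1.13407e6   # a_0^3, as an example optical volume (from Table (a))
gamma = 0.364       # molecular linewidth in MHz, ~ twice gamma_A (assumed)

def a_p3(Delta):
    """Real and imaginary parts of a_p^3 at laser detuning Delta (MHz)."""
    lorentz = Delta**2 + gamma**2 / 4.0
    re_ap = a_bg3 + V_opt * gamma * Delta / lorentz
    im_ap = -0.5 * V_opt * gamma**2 / lorentz
    return re_ap, im_ap

re_ap, im_ap = a_p3(-3.0)   # 3 MHz red detuning, as in the example later on
```

Red detuning ($\Delta<0$) makes the resonant contribution to $\Re(a_p^3)$ negative, and the inelastic part falls off as $1/\Delta^2$ while the elastic part falls off only as $1/\Delta$.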
Using a magnetic field and polarized light, the interaction strength for scattering in states of the ground potential with a particular $m_R$ value can be selectively enhanced while suppressing the interaction strength for scattering in states with other $m_R$ values. For example, the state with $m_R=1$, corresponding to the stretched state $T_g=2, M_{T_g}=2$, couples with $\sigma_+$ polarized light only to a $T_e=3, M_{T_e}=3$ molecular bound state. Transitions to states with other values of $T_e$ are forbidden. Of course, the ground state cannot be prepared in a state with a given $m_R$, but in the presence of a magnetic field, differences in detuning and optical volumes can suppress other transitions. Given these observations, the PLR state at $-355$ MHz is promising for application to a $p$-wave OFR. This is a $T_e=3$ state which connects only to a $T_g =2$ ground state. In the presence of a magnetic field and with $\sigma_+$ polarized light, we can address the ground $M_{T_g} = 2 \rightarrow M_{T_e} = 3$ transition and make this the dominant resonance (see Fig.~\ref{fig:energy_levels}). The neighboring bound states are $T_e =1$ (see Fig.~\ref{fig:envsB}) and inaccessible with this polarization from the $T_g=2, M_{T_g}=2$ ground state. In addition, the $\mathcal{V}_{\text{opt}}$ for $m_R=1$ is substantially larger than $\mathcal{V}_{\text{opt}}$ for the other sublevels, indicating that the OFR is strongest for the $ M_{T_g}=2$ state. This leads to a further enhancement of the interaction strength for the $M_{T_g} = 2 \rightarrow M_{T_e} = 3$ transition. Figure~\ref{fig:energy_levels} shows a possible configuration for inducing the OFR. In a 30 Gauss magnetic field, and detuning $\Delta = -3\, \rm{MHz}$ below the resonance at $-355$ MHz, we dominantly couple the $T_g=2, M_{T_g} = 2 \rightarrow T_e=3, M_{T_e} = 3$ transition. Using \Eref{scatt_vol} we calculate the real part of the scattering volume arising from the OFR to be $\Re(a_p^3) = -1.44 \times 10^5 (\rm{W/cm}^2)^{-1}$.
The imaginary part is reduced by the factor $\gamma/2 \Delta = 0.057$. The effect of this spontaneous emission will depend on the application at hand. Couplings to the other transitions, $M_{T_g} = 0,1 \rightarrow M_{T_e} = 1,2$, are reduced to $\Re(a_p^3) = -4.36 \times 10^4 (\rm{W/cm}^2)^{-1}$ and $\Re(a_p^3) = -1.40 \times 10^4 (\rm{W/cm}^2)^{-1}$, respectively. In addition, off-resonant coupling of $T_g=2, M_{T_g} = 0$ to the neighboring $T_e=1, M_{T_e} = 1$ is highly suppressed at this detuning. \section{The three-color Fermi-Hubbard model} It will be extremely challenging to observe $p$-wave superfluidity in a dilute gas, even with the use of an OFR, given the ultra-low temperatures required. Nonetheless, the ability to control $p$-wave interactions can potentially lead to a rich variety of many-body phenomena, particularly if an optical lattice confining potential is included. We propose here how the combination of such tools can be used to explore a toy model of fermionic color superfluidity with three colors. Such models have been considered before~\cite{rapp2007}, where the {\em internal} degrees of freedom served as the three ``colors''. For the case of fermions, this is not a natural realization since the number of internal states will always be an even number. An alternative is to employ the {\em external} degrees of freedom associated with the three spatial orbitals of the first excited ``$p$-band'' of an optical lattice. Such colors have been considered for bosons, mediated by $s$-wave interactions. We consider here a model for spin-polarized fermions, mediated by $p$-wave interactions. Following~\cite{pxpyhubbard}, the multicolor field operator for spinless (i.e.
polarized) fermions in the first excited $p$-band is written in the Wannier basis as \begin{equation} \psi(\mathbf{x})=\sum_{i,\alpha} c_{i,\alpha} \phi_{\alpha} (\mathbf{x}- \mathbf{R}_i ) , \end{equation} where $\phi_{\alpha} ( \mathbf{x} )$ is a $p$-orbital with $\alpha = x,y,z$, and $c_{i,\alpha}$ is the fermionic annihilation operator for that orbital at the $i^{th}$ lattice site. We consider lattices of sufficient depth $V_0$ that the tight-binding approximation is valid. We restrict the dynamics to a single $p$-band, which can be metastable, as seen in recent experiments where bosons remained in the first excited band of an optical lattice for about a hundred times the tunneling time scale~\cite{pband_boson_raman, pband_boson_doublewell}. We expect a similar metastability for fermions. In addition, we assume sufficiently deep lattices such that the tunneling coefficient for a particle in the state $\alpha$ is negligible along the direction $\alpha'$ for $\alpha' \neq \alpha$. Moreover, we take the wells to be spherically symmetric. The Hamiltonian then takes the Fermi-Hubbard form for the three colors in a single band, \begin{equation} \label{hubbard} H=-J \sum_{\langle i,j\rangle_\alpha,\alpha} {c_{i,\alpha}^\dagger c_{j,\alpha}} + \sum\limits_{i, m_R, \alpha \beta, \alpha' \beta'} {c_{i,\alpha'}^\dagger c_{i,\beta'}^\dagger c_{i,\alpha} c_{i,\beta} \, V^{m_R}_{\alpha' \beta',\alpha \beta}}, \end{equation} where $J$ is the tunneling coefficient along any direction $\alpha$, $\langle i, j \rangle_\alpha$ indicates that $i$ and $j$ are nearest neighbors along $\alpha$, and $V^{m_R}_{\alpha' \beta',\alpha \beta}$ is the interaction matrix element for two atoms at the same site starting in orbitals $\alpha,\beta$ and scattering to $\alpha',\beta'$ via $p$-wave collisions of symmetry $m_R$.
The coupling matrix is \begin{equation} V^{m_R}_{\alpha' \beta',\alpha \beta} = \int \phi^*_{\alpha'} (\mathbf{x}_1) \phi^*_{\beta'} (\mathbf{x}_2) V^{m_R}_p (\mathbf{x}_1-\mathbf{x}_2) \phi_{\alpha} (\mathbf{x}_1) \phi_{\beta} (\mathbf{x}_2) \, d^3x_1 \, d^3x_2 , \end{equation} where $V^{m_R}_p (\mathbf{x}_1-\mathbf{x}_2)$ is the two-body interaction potential for $p$-wave scattering. This can be treated through a pseudopotential on a delta-shell \cite{stock2005} \begin{equation} V^{m_R}_p(\mathbf{r}) = \lim_{s \rightarrow 0}\frac{3 \Re(a_p^3)}{4\mu}\, Y_{1,m_R}(\theta_r,\phi_r) \, \frac{\delta(r-s)}{s^3} \frac{\partial^3}{\partial r^3}(r^2 \, \, \, ). \end{equation} In order to calculate the interaction matrix we transform the Wannier states from the Cartesian orbitals to spherically symmetric 3D harmonic oscillator orbitals, and to center-of-mass and relative coordinates of the two particles, specified by the projections of angular momentum, $M_R$ and $m_R$, respectively. The matrix then takes the form \begin{equation} V^{m_R}_{\alpha' \beta',\alpha \beta} = \sum\limits_{M_R} \langle \alpha' \beta' | m_R M_R \rangle U^{m_R}\langle m_R M_R | \alpha \beta \rangle, \end{equation} where $\langle m_R M_R | \alpha \beta \rangle$ is the angular part of the change-of-basis matrix, and $ U^{m_R}$ is the interaction strength coming from the radial integral of the interaction potential expressed in the relative coordinate, proportional to the real part of the $p$-wave scattering volume. Like the model studied in \cite{rapp2007}, the Fermi-Hubbard Hamiltonian \Eref{hubbard} has three colors, but differs in two important ways. Firstly, it allows for anisotropic interactions, as considered in \cite{miyatake2009}. In addition, we allow for couplings between different incoming and outgoing orbitals, $\alpha \neq \alpha', \beta \neq \beta'$, as studied for bosons in \cite{pxpyhubbard}.
Most importantly, unlike any model previously considered, the control provided by the OFR allows for the possibility to manipulate the strength of interactions in a manner that depends on the fermionic colors. We expect such control could be used to explore a variety of phenomena such as the trionic phase and color superfluids discussed in Ref.~\cite{rapp2007}. We leave the details of the many-body analysis for future work. \begin{figure} \begin{tikzpicture}[scale=1.2] \filldraw[green, ultra thick] (-1,0) circle (0.1cm); \filldraw[green, ultra thick] (0.2,0) circle (0.1cm); \filldraw[green, ultra thick] (1.4,0) circle (0.1cm); \draw[ultra thick] (-1.3,0) -- (-0.7,0); \draw[ultra thick] (-0.1,0) -- (0.5,0); \draw[ultra thick] (1.1,0) -- (1.7,0); \draw[ultra thick] (-0.1,4.3) -- (0.5,4.3); \draw[ultra thick] (1.1,3.8) -- (1.7,3.8); \draw[ultra thick] (2.3,3.3) -- (2.9,3.3); \draw[->, red, ultra thick] (-1,0) -- (0.0,2.7); \draw[->, red, ultra thick] (0.2,0) -- (1.1,2.7); \draw[->, red, ultra thick] (1.4,0) -- (2.4,2.7); \draw[dashed, blue, ultra thick] (-1,2.7) -- (2.9,2.7); \node at (-1, -0.3) {{$m_R=-1$}}; \node at (0.2, -0.3) {{$m_R=0$}}; \node at (1.4, -0.3) {{$m_R=1$}}; \node at (-1, -0.6) {{$M_{T_g}=0$}}; \node at (0.2, -0.6) {{$M_{T_g}=1$}}; \node at (1.4, -0.6) {{$M_{T_g}=2$}}; \node at (0.2, 4.5) {{{${M_{T_e}=1}$}}}; \node at (1.4, 4.0) {{{${M_{T_e}=2}$}}}; \node at (2.6, 3.5) {{{${M_{T_e}=3}$}}}; \node at (0.6, 3.3) {{$\Re{(a_p^3)}=-7.23\times 10^6$ $a_0^3$}}; \node at (-0.6, 3.8) {{$\Re{(a_p^3)}=-2.18\times 10^6$ $a_0^3$}}; \node at (-1.8, 4.3) {{$\Re{(a_p^3)}=-6.97\times 10^5$ $a_0^3$}}; \node at (0.4, 2.4) {\color{red}{$q=1$}}; \draw [dgreen,decorate,decoration={brace,amplitude=5pt},ultra thick] (2.9,3.3) -- (2.9,2.7); \node at (3.2, 3) {{$\Delta = 3$ MHz}}; \end{tikzpicture} \caption{\label{fig:energy_levels} OFR using $\sigma^+$ polarized light to couple the scattering state of the ground potential with the three different projections of $p$-wave angular
momentum, $m_R$, to the excited PLR bound state with the different total projection $M_{T_e}$. The figure shows only those states permitted by selection rules. Denoted are OFR values of the real part of the scattering volume, $\Re(a_p^3)$, for each of the three transitions, for an intensity of 50 W/cm$^2$.} \end{figure} To evaluate the potential for this system to lead to quantum critical behavior, we give here a rough back-of-the-envelope estimate. We expect that interesting many-body physics will be accessible when the ratio between the kinetic and interaction energies in the system is of order one~\cite{u6j}, i.e., $U_{m_R}\gtrsim 6J$, for some given choice of $m_R$. Choosing the lattice depth along any direction to be $V_0=18 E_r$, where $E_r$ is the recoil energy, we find $J= 0.16 E_r$ in the first excited band. For these parameters, it follows that phase transitions occur near \begin{equation}\label{criterion} |\Re{(a_p^3)}|\gtrsim 7 \times 10^6~\text{a.u.} \end{equation} Typically, the $p$-wave scattering volume arising from the background phase shift is very small and the model in \Eref{hubbard} does not result in quantum phase transitions. However, using an OFR, $\Re {(a_p^3)}$ can be tuned to larger values. Moreover, through selection rules we can control specific $m_R$-couplings that correlate with interactions of specific colors. Figure~\ref{fig:energy_levels} outlines one possible scheme. The different $M_{T_g}$ levels are coupled to specific $M_{T_e}$ levels in the excited PLR state, shown in Fig.~\ref{fig:envsB}, using $\sigma^+$ polarized light. The Zeeman splitting between the different $M_{T_e}$ levels of the excited state is approximately $0.89$ MHz at $B=30$ Gauss. The figure indicates the values of $\Re{(a_p^3)}$, calculated for the couplings between the different states for a laser of intensity $I=50~\text{W/cm}^2$ and a detuning $\Delta =3$ MHz below the $M_{T_e}=3$ state.
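Comparing the values quoted in Fig.~\ref{fig:energy_levels} with the criterion \Eref{criterion} is simple arithmetic; a sketch (ours, using the figure's numbers; the assignment of values to the three $\sigma^+$ transitions follows the level scheme):

```python
# OFR-enhanced p-wave scattering volumes at I = 50 W/cm^2, with the laser
# detuned 3 MHz below the M_Te = 3 level (values from Fig. 1, atomic units)
re_ap3 = {
    "m_R=-1 (M_Tg=0 -> M_Te=1)": -6.97e5,
    "m_R=0  (M_Tg=1 -> M_Te=2)": -2.18e6,
    "m_R=+1 (M_Tg=2 -> M_Te=3)": -7.23e6,
}
threshold = 7e6        # |Re(a_p^3)| >~ 7e6 a.u., i.e. U_{m_R} >~ 6J
passing = [k for k, v in re_ap3.items() if abs(v) >= threshold]
# only the m_R = +1 channel, closest to resonance, exceeds the threshold
```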
For this magnetic field, laser intensity, polarization, and detuning, atoms scattering in the $m_R=1$ state will experience a $p$-wave scattering volume of $\Re(a^3_p) = - 7.24 \times 10^6$ a.u., satisfying the criterion in \Eref{criterion}. With such control, we expect one can observe novel quantum critical behavior in the fermionic superfluid. \section{Summary and outlook} We have studied a highly controllable system of spin-polarized ${}^{171}$Yb atoms undergoing $p$-wave collisions as modified by an optical Feshbach resonance (OFR). By tuning near an electronically excited purely-long-range (PLR) bound state in the $^1S_0 + {}^3P_1$ channel, we expect to suppress three-body recombination losses that typify magnetically induced $p$-wave Feshbach resonances in the ground electronic manifold. We used a multichannel integration of the Schr\"{o}dinger equation to determine the photoassociation resonances and the eigenfunctions including perturbing magnetic fields. With these, we calculated the real and imaginary parts of the ``scattering volume'' associated with the $p$-wave scattering phase shift and loss rate for choices of magnetic fields and OFR polarized laser fields. Because the ${}^3P_1$ state has a relatively large linewidth as compared to the other intercombination lines, the demands on precision control of laser detuning are moderate. On the other hand, this larger linewidth implies a limitation on the strength of the real part of the Feshbach resonance before inelastic scattering can no longer be neglected. For these reasons we expect that even with an OFR, one will not be able to achieve $p$-wave superfluidity, or a BEC-BCS crossover, analogous to that seen for $s$-wave pairing. Nonetheless, the degree of control afforded by the OFR could open the door to explorations of novel quantum critical behavior in the many-body system. We began such an exploration, considering a new model of three-color fermionic superfluidity.
Here the three colors correspond to the three spatial orbitals of spinless (i.e. polarized) fermions in the first excited $p$-band of an optical lattice. Based on this toy model, we calculated the parameters of a Hubbard model including nearest neighbor hopping and on-site interaction between two fermions in different orbitals via $p$-wave collisions. Through careful choice of magnetic field, laser polarization, and detuning, we find conditions under which tunneling and interaction energy scales are comparable. For such operating conditions, we expect quantum phase transitions are possible. A full many-body exploration of the phase diagram is left for future analysis. We thank Maciej Lewenstein, Pietro Massignan, and Philipp Hauke for helpful discussions, particularly about the application of our model to the Fermi-Hubbard Hamiltonian. KG and IHD acknowledge support from the Office of Naval Research Grant No. N00014-03-1-0508 and the Center for Quantum Information and Control (CQuIC) via the National Science Foundation Grant PHY-0969997.
\section{Introduction} We report on our analytic studies of the renormalization properties of Bori\c{c}i-Creutz \cite{mind:Creutz07,mind:Borici07,mind:Creutz08,mind:Borici08} and Karsten-Wilczek \cite{mind:Karsten81,mind:Wilczek87} fermions (see \cite{mind:Capitani09,mind:Capitani_lat09,Capitani:2010nn}, and references therein), two particular realizations of minimally doubled fermions.~\footnote{For recent developments, see also \cite{Creutz:2010cz}.} These actions preserve an exact chiral symmetry for a degenerate doublet of quarks, and at the same time they remain strictly local, so that they are much cheaper to simulate than Ginsparg-Wilson fermions. They could then become a cost-effective realization of chiral symmetry at nonzero lattice spacing. This $U(1) \otimes U(1)$ chiral symmetry, which is of the same form as in the continuum, protects the quark mass from additive renormalization. As we have also verified at one loop, the renormalization of the quark mass has the same form as, say, overlap or staggered fermions. It is noteworthy that using minimally doubled fermions one can construct a conserved axial current which has a simple expression, involving only nearest-neighbour sites (see Section \ref{sec:conscurr}). These actions are then among the very few lattice discretizations which provide a simple (ultralocal) expression for a conserved axial current. It is natural to compare these realizations of minimally doubled fermions with staggered fermions, which preserve the same $U(1) \otimes U(1)$ chiral symmetry and are also ultralocal and comparably cheap. The advantage of Bori\c{c}i-Creutz and Karsten-Wilczek fermions is that they contain 2 flavours instead of 4, and thus they do not require any uncontrolled extrapolation to 2 physical light flavours \cite{Creutz:2009kx,Creutz:2009zq}. Moreover, the construction of fermionic operators is much easier than for staggered fermions, where there is also a complicated intertwining of spin and flavour. 
Minimally doubled actions thus look ideal for $N_f=2$ simulations.~\footnote{They remain rather convenient also for $N_f=2+1$ and $N_f=2+1+1$ simulations. The second doublet of minimally doubled quarks will contain chirality-breaking terms in order to give different masses to the $s$ and $c$ quarks; however, this is not so important for these larger masses.} \section{Actions} \label{sec:actions} The free Dirac operator of Bori\c{c}i-Creutz fermions is given in momentum space by \begin{equation} D(p) = i \, \sum_\mu (\gamma_\mu \sin p_\mu + \gamma'_\mu \cos p_\mu) - 2i\Gamma +m_0 , \label{eq:creutz-action} \end{equation} where \begin{equation} \Gamma = \frac{1}{2} \, (\gamma_1 + \gamma_2 + \gamma_3 + \gamma_4) \qquad (\Gamma^2=1) \end{equation} and \begin{equation} \gamma'_\mu = \Gamma \gamma_\mu \Gamma = \Gamma - \gamma_\mu . \end{equation} $D(p)$ vanishes at $p_1=(0,0,0,0)$ and $p_2=(\pi/2,\pi/2,\pi/2,\pi/2)$, and can also be seen as a linear combination of two physically equivalent naive fermions (one of them translated in momentum space). The free Karsten-Wilczek Dirac operator is given in momentum space by \begin{equation} D(p) = i \sum_{\mu=1}^4 \gamma_\mu \sin p_\mu + i \gamma_4 \sum_{k=1}^3 (1-\cos p_k) , \label{eq:wilczek-action} \end{equation} and its zeros are instead at $p_1=(0,0,0,0)$ and $p_2=(0,0,0,\pi)$. The two zeros of these actions, corresponding to the physical flavours, select a special direction in euclidean spacetime, identified by the line that connects them. It is easy to see that in the Bori\c{c}i-Creutz case the matrix $\Gamma$ selects as a special direction the major hypercube diagonal, while in the Karsten-Wilczek case it is the temporal direction that becomes the special one. As a consequence, hyper-cubic symmetry is broken, and these actions are symmetric only under the subgroup of the hyper-cubic group which preserves (up to a sign) the respective special direction. This opens the way to mixings of a new kind under renormalization.
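The algebra of $\Gamma$ and the location of the zeros are easy to verify numerically. The following check (ours; any hermitian representation of the euclidean $\gamma_\mu$ works, here a chiral-like one) confirms $\Gamma^2=1$, $\gamma'_\mu = \Gamma \gamma_\mu \Gamma = \Gamma - \gamma_\mu$, and that both free Dirac operators vanish at their two stated zeros:

```python
import numpy as np

# hermitian euclidean gamma matrices, {gamma_mu, gamma_nu} = 2 delta_{mu nu}
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
Z2, I2 = np.zeros((2, 2), complex), np.eye(2, dtype=complex)
gamma = [np.block([[Z2, -1j * s], [1j * s, Z2]]) for s in (sx, sy, sz)]
gamma.append(np.block([[I2, Z2], [Z2, -I2]]))                 # gamma_4

Gamma = 0.5 * sum(gamma)
assert np.allclose(Gamma @ Gamma, np.eye(4))                  # Gamma^2 = 1
for g in gamma:
    assert np.allclose(Gamma @ g @ Gamma, Gamma - g)          # gamma'_mu

gammap = [Gamma - g for g in gamma]

def D_bc(p):   # free Borici-Creutz operator, m0 = 0
    return (1j * sum(g * np.sin(q) + gp * np.cos(q)
                     for g, gp, q in zip(gamma, gammap, p)) - 2j * Gamma)

def D_kw(p):   # free Karsten-Wilczek operator, m0 = 0
    return (1j * sum(g * np.sin(q) for g, q in zip(gamma, p))
            + 1j * gamma[3] * sum(1 - np.cos(q) for q in p[:3]))

for p in [(0, 0, 0, 0), (np.pi / 2,) * 4]:
    assert np.linalg.norm(D_bc(p)) < 1e-12
for p in [(0, 0, 0, 0), (0, 0, 0, np.pi)]:
    assert np.linalg.norm(D_kw(p)) < 1e-12
```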
One of the main aims of our work is the investigation of the mixing patterns that appear in radiative corrections. We have elucidated the one-loop structure of these theories, and one of our main results is that everything is consistent at the one loop level, and the new mixings are very few. We also remark that, although the distance between the two zeros is the same ($p_2^2-p_1^2=\pi^2$), these two realizations of minimally doubled fermions are not equivalent. \section{Counterterms} Each of the two actions (\ref{eq:creutz-action}) and (\ref{eq:wilczek-action}) does not contain all possible operators which are invariant under the subgroup of the hyper-cubic group preserving its respective special direction. Radiative corrections then generate new contributions whose form is not matched by any term in the original bare actions. It becomes necessary to introduce counterterms to the bare actions in order to obtain a consistent renormalized theory. Enforcing the consistency requirement will allow us to uniquely determine the coefficients of these counterterms.~\footnote{It is interesting in this respect to observe that an action which contains doublers will in general select some special direction, and hence require counterterms. However, the staggered fermion formulation is very clever, because it rearranges the 16 spin-flavour components of the 4 doublers on the corners of the unit hypercube. Thanks to this, no special direction arises, and thus no extra counterterms are needed for the simulation of staggered fermions. In the case of naive fermions the 16 doublers are also uniformly distributed in the Brillouin zone, and hence there is no special direction in this case too.} One must add to the bare actions all possible counterterms allowed by the remnant symmetries. Moreover, counterterms are needed also in the pure gauge part of the actions of minimally doubled fermions. 
The reason for this is that, although at the bare level the breaking of hyper-cubic symmetry happens only in the fermionic parts of the actions, in the renormalized theory it propagates (via the interactions between quarks and gluons) also to the pure gauge sector. We consider the massless case $m_0=0$, and note that chiral symmetry strongly restricts the number of possible counterterms. It turns out that there is only one possible dimension-four fermionic counterterm, which for Bori\c{c}i-Creutz fermions is written in continuum form as \linebreak $\overline{\psi} \,\Gamma \sum_\mu D_\mu \psi$. A possible discretization for it has a form similar to the hopping term in the action: \begin{equation} c_4 (g_0) \,\, \frac{1}{2a} \sum_\mu \Big( \overline{\psi} (x) \, \Gamma \, U_\mu (x) \, \psi (x + a\widehat{\mu}) -\overline{\psi} (x + a\widehat{\mu}) \, \Gamma \, U_\mu^\dagger (x) \, \psi (x) \Big). \end{equation} There is also one counterterm of dimension three, \begin{equation} \frac{ic_3 (g_0)}{a}\,\overline{\psi} (x) \, \Gamma \, \psi (x) , \end{equation} which is already present in the bare Bori\c{c}i-Creutz action, but with a fixed coefficient $-2/a$. The appearance of this counterterm means that in the general renormalized action the coefficient of the dimension-three operator must be kept general. For Karsten-Wilczek fermions we find a similar situation. The only gauge-invariant fermionic counterterm of dimension four is \begin{equation} \overline{\psi}\,\gamma_4 D_4\,\psi , \end{equation} and a suitable discretization of it is \begin{equation} d_4 (g_0) \,\, \frac{1}{2a} \Big( \overline{\psi} (x) \, \gamma_4 \, U_4 (x) \, \psi (x + a\widehat{4}) -\overline{\psi} (x + a\widehat{4}) \, \gamma_4 \, U_4^\dagger (x) \, \psi (x) \Big) . 
\end{equation} The counterterm of dimension three is for this action \begin{equation} \frac{id_3 (g_0)}{a} \,\overline{\psi} (x) \,\gamma_4 \,\psi (x) \end{equation} (already present in the bare Karsten-Wilczek action, with a fixed coefficient). The rules for the counterterm corrections to fermion propagators, needed for our one-loop calculations, can be easily derived. For external lines, they are given in momentum space respectively by \begin{equation} -ic_4 (g_0) \,\,\Gamma \,\sum_\nu p_\nu , \quad -\frac{ic_3(g_0)}{a}\,\Gamma \label{eq:frfctbc} \end{equation} for Bori\c{c}i-Creutz fermions, and by \begin{equation} -id_4 (g_0) \,\,\gamma_4 \,p_4 , \quad -\frac{id_3(g_0)}{a}\,\gamma_4 \label{eq:frfctkw} \end{equation} for Karsten-Wilczek fermions. The gluonic counterterms must be of the form $\rm{tr}\, FF$, but with nonconventional choices of the indices, reflecting the breaking of the hyper-cubic symmetry. It turns out that there is only one purely gluonic counterterm, which for the Bori\c{c}i-Creutz action can be written in continuum form as \begin{equation} c_P(g_0) \, \sum_{\lambda\rho\tau} \rm{tr}\, F_{\lambda\rho}(x) \, F_{\rho\tau}(x) . \end{equation} At one loop this counterterm is relevant only for gluon propagators. Denoting the fixed external indices at their ends with $\mu$ and $\nu$, all possible lattice discretizations of this counterterm give in momentum space the same Feynman rule: \begin{equation} -c_P(g_0) \,\left[ (p_\mu + p_\nu)\,\sum_\lambda p_\lambda -p^2 - \delta_{\mu\nu}\Big( \sum_\lambda p_\lambda \Big)^2 \right] . \label{eq:frgctbc} \end{equation} Contributions of this kind must be taken into account for a correct renormalization of the vacuum polarization (see Section \ref{sec:detgluon}). In the case of Karsten-Wilczek fermions the counterterm which needs to be introduced can be written in continuum form as \begin{equation} d_P(g_0) \, \sum_{\rho\lambda} \rm{tr}\, F_{\rho\lambda}(x) \, F_{\rho\lambda}(x) \, \delta_{\rho 4} . 
\end{equation} The Feynman rule for the insertion of this counterterm in external gluon propagators reads \begin{equation} - d_P(g_0) \, \left[ p_\mu p_\nu \,(\delta_{\mu 4 } + \delta_{\nu 4 }) -\delta_{\mu\nu} \left( p^2\,\delta_{\mu 4 } \delta_{\nu 4 } +p_4^2 \right) \right] . \label{eq:frgctkw} \end{equation} In perturbation theory the coefficients of all counterterms are functions of the coupling which start at order $g_0^2$. We will determine (at one loop) the coefficients of all fermionic and gluonic counterterms by requiring that the renormalized self-energy and vacuum polarization, respectively, assume their standard form (see Sections \ref{sec:detfermion} and \ref{sec:detgluon}). Counterterm interaction vertices are generated as well. However, these vertex insertions are at least of order $g_0^3$, and thus they cannot contribute to the one-loop amplitudes that we study here. We also want to emphasize that counterterms not only provide additional Feynman rules for the calculation of loop amplitudes. They can also modify Ward identities and hence, in particular, contribute additional terms to the conserved currents (see Section \ref{sec:conscurr}). 
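Both gluon-propagator insertions, eqs.~(\ref{eq:frgctbc}) and (\ref{eq:frgctkw}), break hyper-cubic symmetry but remain transverse, as they must for gauge-invariant $\rm{tr}\,FF$ counterterms. A quick numerical check (ours; in eq.~(\ref{eq:frgctbc}) the terms without an explicit $\delta_{\mu\nu}$ carry no free-index structure, i.e. they are the same for every $\mu,\nu$):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.normal(size=4)       # a generic euclidean four-momentum
S, p2 = p.sum(), p @ p

# Borici-Creutz insertion, eq. (frgctbc), up to the overall -c_P factor
T_bc = np.array([[(p[m] + p[n]) * S - p2 - (m == n) * S ** 2
                  for n in range(4)] for m in range(4)])

# Karsten-Wilczek insertion, eq. (frgctkw), up to the overall -d_P factor
d = lambda a, b: float(a == b)
T_kw = np.array([[p[m] * p[n] * (d(m, 3) + d(n, 3))
                  - d(m, n) * (p2 * d(m, 3) * d(n, 3) + p[3] ** 2)
                  for n in range(4)] for m in range(4)])

assert np.allclose(p @ T_bc, 0)    # p^mu T_{mu nu} = 0
assert np.allclose(p @ T_kw, 0)
```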
\section{Determination of the fermionic counterterms} \label{sec:detfermion} Leaving aside for one moment the counterterms, the quark self-energy of a Bori\c{c}i-Creutz fermion is given at one loop by \begin{equation} \Sigma (p,m_0) = i\slash{p}\,\Sigma_1(p) +m_0\,\Sigma_2(p) + c_1 (g_0)\cdot i\, \Gamma \sum_\mu p_\mu + c_2 (g_0)\cdot i\, \frac{\Gamma}{a}, \label{eq:totalselfbc} \end{equation} where~\footnote{For our calculations we have developed programs written in the algebraic computer language {\em FORM} \cite{Vermaseren:2000nd,Vermaseren:2008kw}.} \begin{eqnarray} \Sigma_1(p) &=& \frac{g_0^2}{16\pi^2} \,C_F \,\Bigg[ \log a^2p^2 +6.80663 +(1-\alpha) \Big(-\log a^2p^2 + 4.792010 \Big) \Bigg] , \label{eq:Sigma1self} \\ \Sigma_2(p) &=& \frac{g_0^2}{16\pi^2} \,C_F \,\Bigg[ 4\,\log a^2p^2 -29.48729 +(1-\alpha) \Big(-\log a^2p^2 +5.792010 \Big) \Bigg] , \label{eq:Sigma2self} \\ c_1 (g_0)\, &=& 1.52766 \cdot\frac{g_0^2}{16\pi^2} \,C_F , \label{eq:c1self} \\ c_2 (g_0)\, &=& 29.54170 \cdot\frac{g_0^2}{16\pi^2} \,C_F , \end{eqnarray} with $C_F=(N_c^2-1)/2N_c$, and $\alpha$ denotes the gauge parameter in a general covariant gauge. The full inverse propagator at one loop can be written (without counterterms) as \begin{equation} \Sigma^{-1} (p,m_0) = \Big( 1 -\Sigma_1 \Big) \cdot \Big\{ i\slash{p} + m_0 \,\Big( 1 -\Sigma_2 +\Sigma_1 \Big) -ic_1 \,\Gamma \,\sum_\mu p_\mu -\frac{ic_2}{a}\,\Gamma \Big\} . \end{equation} We can only cast the renormalized propagator in the standard form \begin{equation} \Sigma (p,m_0) = \frac{Z_2}{i\slash{p} + Z_m\, m_0} , \end{equation} where the wave-function and quark mass renormalization factors are given by \begin{equation} Z_2 = \Big( 1 -\Sigma_1 \Big)^{-1}, \qquad Z_m = 1 - \Big( \Sigma_2 -\Sigma_1 \Big) , \end{equation} provided that we employ the counterterms to cancel the Lorentz non-invariant factors ($c_1$ and $c_2$). 
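For orientation, the size of these one-loop corrections is modest at typical couplings. A sketch (ours; the choices $g_0^2=1$, $C_F=4/3$, Feynman gauge $\alpha=1$, and $a^2p^2=1$ are purely illustrative) evaluating $Z_2$ and $Z_m$ from the $\Sigma_1$ and $\Sigma_2$ quoted above:

```python
import numpy as np

g0sq, CF, alpha, log_a2p2 = 1.0, 4.0 / 3.0, 1.0, 0.0   # a^2 p^2 = 1
pref = g0sq * CF / (16 * np.pi ** 2)

Sigma1 = pref * (log_a2p2 + 6.80663
                 + (1 - alpha) * (-log_a2p2 + 4.792010))
Sigma2 = pref * (4 * log_a2p2 - 29.48729
                 + (1 - alpha) * (-log_a2p2 + 5.792010))

Z2 = 1.0 / (1.0 - Sigma1)        # wave-function renormalization
Zm = 1.0 - (Sigma2 - Sigma1)     # quark mass renormalization
```

In Feynman gauge the gauge-dependent pieces drop out, and with these choices $Z_2 \approx 1.06$ and $Z_m \approx 1.31$.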
The term proportional to $c_1$ can be eliminated by using the dimension-four counterterm, $\overline{\psi} \, \Gamma \, \sum_\mu D_\mu \, \psi$, while the term proportional to $c_2$ can be eliminated using the dimension-three counterterm, $1/a \, \overline{\psi} \, \Gamma \, \psi$. This amounts to applying the insertions of eqs.~(\ref{eq:frfctbc}) and (\ref{eq:frfctkw}). We thus determine in this way that at one loop, for Bori\c{c}i-Creutz fermions, \begin{equation} c_3 (g_0) = 29.54170\cdot\frac{g_0^2}{16\pi^2} \,C_F+O(g_0^4), \qquad c_4 (g_0) = 1.52766\cdot\frac{g_0^2}{16\pi^2} \,C_F+O(g_0^4) . \end{equation} Things work out very similarly for Karsten-Wilczek fermions. In this case the inverse propagator at one loop (without counterterms) is \begin{equation} \Sigma^{-1} (p,m_0) = \Big( 1 -\Sigma_1 \Big) \cdot \Big( i\slash{p} + m_0 \,\Big( 1 -\Sigma_2 +\Sigma_1 \Big) - id_1 \,\gamma_4 p_4 - \frac{id_2}{a} \,\gamma_4 \Big), \label{eq:totalselfkw} \end{equation} where \begin{eqnarray} \Sigma_1(p) &=& \frac{g_0^2}{16\pi^2} \,C_F \,\Bigg[ \log a^2p^2 +9.24089 +(1-\alpha) \Big(-\log a^2p^2 + 4.792010 \Big) \Bigg] , \label{eq:Sigma1self2} \\ \Sigma_2(p) &=& \frac{g_0^2}{16\pi^2} \,C_F \,\Bigg[ 4\,\log a^2p^2 -24.36875 +(1-\alpha) \Big(-\log a^2p^2 +5.792010 \Big) \Bigg] , \label{eq:Sigma2self2} \\ d_1 (g_0)\, &=& -0.12554 \cdot\frac{g_0^2}{16\pi^2} \,C_F , \label{eq:c1self2} \\ d_2 (g_0)\, &=& -29.53230 \cdot\frac{g_0^2}{16\pi^2} \,C_F . \end{eqnarray} By using the appropriate counterterms $\overline{\psi} \, \gamma_4 \, D_4 \, \psi$ and $1/a \, \overline{\psi} \, \gamma_4 \, \psi$ the renormalized propagator can be written in the standard form. Then, at one loop we obtain \begin{equation} d_3 (g_0) = -29.53230\cdot\frac{g_0^2}{16\pi^2} \,C_F+O(g_0^4), \qquad d_4 (g_0) = -0.12554\cdot\frac{g_0^2}{16\pi^2} \,C_F+O(g_0^4) . \end{equation} One may expect that the above subtraction procedure can be carried out systematically at every order of perturbation theory. 
After the subtractions via the appropriate counterterms are properly taken into account, the extra terms appearing in the self-energy disappear. \section{Determination of the gluonic counterterms} \label{sec:detgluon} Leaving aside for one moment the counterterms, the contribution of the fermionic loops to the one-loop vacuum polarization of Bori\c{c}i-Creutz fermions comes out from our calculations as \begin{eqnarray} \Pi^{(f)}_{\mu\nu} (p) & = & \Bigg( p_\mu p_\nu-\delta_{\mu\nu}p^2 \Bigg) \Bigg[\frac{g_0^2}{16\pi^2} C_2 \Bigg( -\frac{8}{3} \log p^2a^2 + 23.6793 \Bigg) \Bigg] \\ && - \Bigg( (p_\mu + p_\nu)\,\sum_\lambda p_\lambda - p^2 - \delta_{\mu\nu}\Big( \sum_\lambda p_\lambda \Big)^2 \Bigg) \, \frac{g_0^2}{16\pi^2} \,C_2 \cdot 0.9094 , \nonumber \end{eqnarray} where $\rm{Tr} \,(t^at^b) = C_2 \,\delta^{ab}$. For Karsten-Wilczek fermions the corresponding result is \begin{eqnarray} \Pi^{(f)}_{\mu\nu} (p) & = & \Bigg( p_\mu p_\nu-\delta_{\mu\nu}p^2 \Bigg) \Bigg[\frac{g_0^2}{16\pi^2} C_2 \Bigg( -\frac{8}{3} \log p^2a^2 + 19.99468 \Bigg) \Bigg] \\ && - \Bigg( p_\mu p_\nu \,(\delta_{\mu 4 } + \delta_{\nu 4 }) -\delta_{\mu\nu} \left( p^2\,\delta_{\mu 4 } \delta_{\nu 4 } +p_4^2 \right) \Bigg)\, \frac{g_0^2}{16\pi^2} \,C_2 \cdot 12.69766 . \nonumber \end{eqnarray} We notice the appearance of non-standard terms, compared with e.g. Wilson fermions. These new terms break hyper-cubic symmetry. It is remarkable that they still satisfy the Ward identity $p^\mu \Pi^{(f)}_{\mu\nu} (p)=0$. At this stage we can employ the gluonic counterterms, which correspond to the insertions in the gluon propagator according to eqs.~(\ref{eq:frgctbc}) and (\ref{eq:frgctkw}), to cancel the hyper-cubic-breaking terms in the vacuum polarization. The coefficients of these counterterms are hence determined as \begin{equation} c_P (g_0) = -0.9094 \cdot\frac{g_0^2}{16\pi^2} \,C_2 +O(g_0^4), \qquad d_P (g_0) = -12.69766 \cdot\frac{g_0^2}{16\pi^2} \,C_2 +O(g_0^4) . 
\end{equation} It is also very important to remark that no power-divergences ($1/a^2$ or $1/a$) show up in our results for the vacuum polarization. \section{Conserved currents} \label{sec:conscurr} We have also calculated the renormalization of the local Dirac bilinears. We have found that no mixings occur for the scalar and pseudoscalar densities and the tensor current. For the vector and axial currents, instead, a mixing can be seen, which is a consequence of the breaking of hyper-cubic invariance, and their renormalization factors $Z_V$ and $Z_A$ are thus not equal to one (for their numerical values see Section \ref{sec:notation}). These local currents are indeed not conserved. Using chiral Ward identities we have then derived the expressions of the conserved currents, which are protected from renormalization. As we have previously remarked, the counterterms influence the expressions of the conserved currents. It is easy to see that the counterterm of dimension three does not modify the Ward identities, and is irrelevant in this regard. On the contrary, the dimension-four counterterm \begin{equation} \frac{c_4(g_0)}{4} \sum_\mu \sum_\nu \Big( \overline{\psi} (x) \, \gamma_\nu \, U_\mu (x) \, \psi (x + a\widehat{\mu}) +\overline{\psi} (x + a\widehat{\mu}) \, \gamma_\nu \, U_\mu^\dagger (x) \, \psi (x) \Big) \end{equation} generates new terms in the Ward identities and hence contributes to the conserved currents.
The conserved axial current for Bori\c{c}i-Creutz fermions in the renormalized theory turns out to have the expression \begin{eqnarray} A_\mu^{\mathrm c} (x) &=& \frac{1}{2} \bigg( \overline{\psi} (x) \, (\gamma_\mu+i\,\gamma'_\mu) \gamma_5 \, U_\mu (x) \, \psi (x+a\widehat{\mu}) + \overline{\psi} (x+a\widehat{\mu}) \, (\gamma_\mu-i\,\gamma'_\mu) \gamma_5 \, U_\mu^\dagger (x) \, \psi (x) \bigg) \nonumber \\ && +\frac{c_4 (g_0)}{2} \, \bigg( \overline{\psi} (x) \, \Gamma \gamma_5 \, U_\mu (x) \, \psi (x+a\widehat{\mu}) + \overline{\psi} (x+a\widehat{\mu}) \, \Gamma \gamma_5 \, U_\mu^\dagger (x) \, \psi (x) \bigg) . \label{eq:noether-axial} \end{eqnarray} For Karsten-Wilczek fermions, application of the chiral Ward identities gives for the conserved axial current \begin{eqnarray} A_\mu^{\mathrm c} (x) & = & \frac{1}{2} \bigg( \overline{\psi} (x) \, (\gamma_\mu -i\gamma_4 \, (1-\delta_{\mu 4}) ) \, \gamma_5 \, U_\mu (x) \, \psi (x+a\widehat{\mu}) \nonumber \\ && \qquad + \overline{\psi} (x+a\widehat{\mu}) \, (\gamma_\mu +i\gamma_4 \, (1-\delta_{\mu 4}) ) \, \gamma_5 \, U_\mu^\dagger (x) \, \psi (x) \bigg) \\ && + \frac{d_4 (g_0)}{2} \bigg( \overline{\psi} (x) \, \gamma_4 \gamma_5 \, U_4 (x) \, \psi (x+a\widehat{4}) + \overline{\psi} (x+a\widehat{4}) \, \gamma_4 \gamma_5 \, U_4^\dagger (x) \, \psi (x) \bigg) . \nonumber \end{eqnarray} The conserved vector currents can be obtained by simply dropping the $\gamma_5$ matrices from the above expressions. We remark that the vector current is isospin-singlet, representing the conservation of fermion number (as also discussed in \cite{Tiburzi:2010bm}). The axial current, however, is a non-singlet because the doubled fermions have opposite chirality. All these currents have a very simple structure, which involves only nearest-neighbour sites. We have computed the renormalization of these point-split currents, and verified that it is one.
As all four cases are very similar, we briefly discuss here the conserved vector current for Bori\c{c}i-Creutz fermions, for which the sum of the ``standard'' diagrams (vertex, sails and operator tadpole, without the counterterm) gives \begin{equation} \frac{g_0^2}{16\pi^2} \,C_F \,\gamma_\mu\,\Bigg[ -\log a^2p^2 -6.80664 +(1-\alpha) \Big(\log a^2p^2 -4.79202 \Big) \Bigg] +c_1^{cv} (g_0) \, \Gamma . \end{equation} The value of the coefficient of the mixing is $c_1^{cv} (g_0) \, = -1.52766 \cdot \frac{g_0^2}{16\pi^2} \,C_F + O(g_0^4)$. When one adds to this result the wave-function renormalization (that is, $\Sigma_1(p)$ of the quark self-energy), the term proportional to $\gamma_\mu$ is exactly cancelled. The mixing term, proportional to $\Gamma$, instead remains, because we have not yet taken into account the counterterm. The part of the conserved vector current due to the counterterm corresponds to the last line of eq.~(\ref{eq:noether-axial}). Its 1-loop contribution is quite easy to compute (since $c_4$ is already of order $g_0^2$), and is given by $c_4(g_0) \, \Gamma$. We now note that the value of $c_4$ is already known from the self-energy, and numerical inspection shows that $c_4(g_0) = -c_1^{cv}(g_0)$ (within the precision of our integration routines). Thus, the $\Gamma$ mixing term is finally cancelled. We emphasize that only this particular value of $c_4$, determined from the self-energy, does exactly this job. We have thus obtained that the renormalization constant of these point-split currents is one, which confirms that they are conserved currents. Everything turns out to be consistent at the one loop level. \section{Numerical simulations} \label{sec:simulations} If we use the nearest-neighbour forward covariant derivative $\nabla_\mu \psi(x) = \frac{1}{a}\,[U_\mu(x)\,\psi\,(x+a\widehat{\mu}) - \psi(x)]$ and the corresponding backward one $\nabla^\ast_\mu$, we can express the (bare) actions in position space in a rather compact form. 
It then becomes apparent that these two realizations of minimally doubled fermions bear a close formal resemblance to Wilson fermions: \begin{eqnarray} D^f_{\rm{Wilson}} & = & \frac{1}{2} \, \Bigg\{ \sum_{\mu=1}^4 \gamma_\mu (\nabla_\mu + \nabla^\ast_\mu) \, -ar \sum_{\mu=1}^4 \nabla^\ast_\mu \nabla_\mu \Bigg\} , \\ D^f_{\rm{BC}} & = & \frac{1}{2} \, \Bigg\{ \sum_{\mu=1}^4 \gamma_\mu (\nabla_\mu + \nabla^\ast_\mu) \, +ia \sum_{\mu=1}^4 \gamma'_\mu \,\nabla^\ast_\mu \nabla_\mu \Bigg\} , \\ D^f_{\rm{KW}} & = & \frac{1}{2} \, \Bigg\{ \sum_{\mu=1}^4 \gamma_\mu (\nabla_\mu + \nabla^\ast_\mu) \, -ia \gamma_4 \sum_{k=1}^3 \nabla^\ast_k \nabla_k \Bigg\} . \end{eqnarray} All these three formulations contain a dimension-five operator in the bare action, and so we expect leading lattice artefacts to be of order~$a$. However, for minimally doubled fermions these effects could numerically be small, if the results of \cite{Cichy:2008gk} are to be believed. We will not discuss here how to achieve one-loop (or nonperturbative) order~$a$ improvement for these theories. The classification of all relevant independent operators could turn out to require a lengthy analysis. Notice that additional dimension-5 operators will occur not only in the quark sector (e.g., $\overline{\psi}\,\Gamma \sum_{\mu\nu}D_\mu D_\nu \psi$), but also in the pure gauge part (e.g., $\sum_{\mu\nu\lambda}F_{\mu\nu}D_\lambda F_{\mu\nu}$). Indeed, when Lorentz invariance is broken, the statement that only operators with even dimension can appear in the pure gauge action is no longer true. We would now like to see what can be learned, from the one-loop calculations that we have carried out, regarding the numerical simulations of minimally doubled fermions. These simulations will have to employ the complete renormalized actions, including the counterterms. 
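That the position-space forms above reproduce the momentum-space operators of Section \ref{sec:actions} follows from the symbols of the lattice derivatives: $(\nabla_\mu+\nabla^\ast_\mu)/2 \to (i/a)\sin ap_\mu$ and $a\,\nabla^\ast_\mu\nabla_\mu \to -(2/a)(1-\cos ap_\mu)$. A quick check on a free periodic chain (ours; $N=12$ sites and $a=1$ are arbitrary choices):

```python
import numpy as np

N = 12
shift_plus  = np.roll(np.eye(N), 1, axis=1)    # (S+ psi)[x] = psi[x+1]
shift_minus = np.roll(np.eye(N), -1, axis=1)   # (S- psi)[x] = psi[x-1]
fwd = shift_plus - np.eye(N)                   # forward derivative, nabla
bwd = np.eye(N) - shift_minus                  # backward derivative, nabla*

p0 = 2 * np.pi / N                             # an exact lattice momentum
wave = np.exp(1j * p0 * np.arange(N))
sym_first  = (fwd + bwd) @ wave / 2            # expect  i sin(p0) * wave
sym_second = (bwd @ fwd) @ wave                # expect -2 (1 - cos p0) * wave

assert np.allclose(sym_first, 1j * np.sin(p0) * wave)
assert np.allclose(sym_second, -2 * (1 - np.cos(p0)) * wave)
```

Inserting these symbols into $D^f_{\rm{KW}}$, for instance, immediately reproduces eq.~(\ref{eq:wilczek-action}).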
The renormalized action for Bori\c{c}i-Creutz fermions in position space contains three counterterms and reads \begin{eqnarray} S^f_{BC} & = & a^4 \sum_{x} \bigg\{ \frac{1}{2a} \sum_{\mu=1}^4 \Big[ \overline{\psi} (x) \, (\gamma_\mu + c_4(\beta) \, \Gamma + i\gamma'_\mu) \, U_\mu (x) \, \psi (x + a\widehat{\mu}) \nonumber \\ && \qquad -\overline{\psi} (x + a\widehat{\mu}) \, (\gamma_\mu + c_4(\beta) \, \Gamma - i\gamma'_\mu) \, U_\mu^\dagger (x) \, \psi (x) \Big] \nonumber \\ && \qquad + \overline{\psi}(x) \, \Big(m_0+\widetilde{c}_3(\beta)\, \frac{i\,\Gamma}{a} \Big) \, \psi (x) \nonumber \\ && \qquad +\beta \sum_{\mu < \nu} \Bigg( 1 - \frac{1}{N_c} {\mathrm Re} \, \rm{tr}\, P_{\mu\nu} \Bigg) + c_P(\beta) \, \sum_{\mu\nu\rho} \rm{tr}\, F^{lat}_{\mu\rho}(x) \, F^{lat}_{\rho\nu}(x) \bigg\} , \end{eqnarray} where $F^{lat}$ is some lattice discretization of the field-strength tensor. We have here redefined the coefficient of the dimension-3 counterterm, using $\widetilde{c}_3(\beta)=-2+c_3(\beta)$ (which does not vanish at tree level).~\footnote{We assume that simulations will be carried out at very small values of $m_0$, so that our analysis of the counterterms, which assumes chiral symmetry, is essentially still valid. But note also that in our results of eqs.~(\ref{eq:totalselfbc}) and (\ref{eq:totalselfkw}), obtained for general $m_0$, no new dimension-four terms proportional to this mass appear (apart from the standard one, $\Sigma_2$). Thus, at one loop we do not need further counterterms in addition to the three which we have found. This strongly suggests that our analysis of the counterterms remains valid even when chiral symmetry is broken.} The renormalized action for Karsten-Wilczek fermions also contains three counterterms and reads \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!
S^f_{KW} & = & a^4 \sum_{x} \bigg\{ \frac{1}{2a} \sum_{\mu=1}^4 \Big[ \overline{\psi} (x) \, (\gamma_\mu(1 + d_4(\beta)\,\delta_{\mu 4}) -i\gamma_4 \, (1-\delta_{\mu 4}) ) \, U_\mu (x) \, \psi (x + a\widehat{\mu}) \nonumber \\ && \qquad -\overline{\psi} (x + a\widehat{\mu}) \, (\gamma_\mu(1 + d_4(\beta)\,\delta_{\mu 4}) +i\gamma_4 \, (1-\delta_{\mu 4}) ) \, U_\mu^\dagger (x) \, \psi (x) \Big] \nonumber \\ && \qquad + \overline{\psi}(x) \, \Big(m_0+\widetilde{d}_3(\beta)\, \frac{i\,\gamma_4}{a}\Big) \, \psi (x) \nonumber \\ && \qquad + \beta \sum_{\mu < \nu} \Bigg( 1 - \frac{1}{N_c} {\mathrm Re} \, \rm{tr} \, P_{\mu\nu} \Bigg) \, \Big( 1 + d_P(\beta) \, \delta_{\mu 4} \Big) \bigg\} \end{eqnarray} ($\widetilde{d}_3(\beta)=3+d_3(\beta)$ has a non-zero value at tree level). In perturbation theory the coefficients of the counterterms have the expansions \begin{eqnarray} \widetilde{c}_3(g_0) & = -2 + c_3^{(1)} g_0^2 + c_3^{(2)} g_0^4 + \dots; \qquad \widetilde{d}_3(g_0) & = 3 + d_3^{(1)} g_0^2 + d_3^{(2)} g_0^4 + \dots \\ c_4(g_0) & = \phantom{-2 +~} c_4^{(1)} g_0^2 + c_4^{(2)} g_0^4 + \dots; \qquad d_4(g_0) & = \phantom{3 +~} d_4^{(1)} g_0^2 + d_4^{(2)} g_0^4 + \dots \\ c_P(g_0) & = \phantom{-2 +~} c_P^{(1)} g_0^2 + c_P^{(2)} g_0^4 + \dots; \qquad d_P(g_0) & = \phantom{3 +~} d_P^{(1)} g_0^2 + d_P^{(2)} g_0^4 + \dots . \end{eqnarray} The same counterterms also appear at the nonperturbative level, and need to be taken into account for a consistent simulation of these fermions. Their nonperturbative determination is one of the most important tasks for the near future. This can be achieved using suitable renormalization conditions, and it remains to be seen which ones will turn out to be more convenient in practice. We have previously seen that in perturbation theory the four-dimensional fermionic counter\-term is necessary for the proper construction of the conserved currents.
Its coefficient, as determined from the one-loop self-energy, has exactly the right value for which the conserved currents remain unrenormalized. This suggests that one possible nonperturbative determination of $c_4$ (and $d_4$) can be accomplished by simulating matrix elements of the (unrenormalized) conserved current, and imposing (by tuning the coefficient) that the electric charge is one. Another effect of radiative corrections is to move the poles of the quark propagator away from their tree-level positions. It is the task of the dimension-three counterterm, for the appropriate value of the coefficient $c_3$ (or $d_3$), to bring the two poles back to their original locations. These shifts of the poles can introduce oscillations in some hadronic correlation functions as a function of time separation (similarly to staggered fermions). Then one possible way to determine $c_3$ ($d_3$) is to tune it in appropriately chosen correlation functions until these oscillations are removed. Such oscillations, familiar from the staggered formulation, come about since the underlying fermion field can create several different species, and these species occur in different regions of the Brillouin zone. It would be interesting to explore whether or not these oscillations could be cancelled by constructing hadronic operators spread over nearby neighbours \cite{Creutz:2010qm}. It is important to remember that because the two species are of opposite chirality, the naive $\gamma_5$ matrix is physically a flavour non-singlet. The naive on-site pseudoscalar field $\overline{\psi}\gamma_5\psi$ can create only flavour non-singlet pseudoscalar states. To create the flavour-singlet pseudoscalar meson, which gets its mass from the anomaly, one needs to combine fields on nearby sites with appropriate phases. We would like to stress that the breaking of hyper-cubic symmetry does not generate any sign problem for the Monte Carlo generation of configurations. 
The gauge action is real, and the eigenvalues of the Dirac operator come in complex conjugate pairs, so that the fermion determinant is always non-negative. The purely gluonic counterterm for Bori\c{c}i-Creutz fermions introduces in the renormalized action operators of the kind $E\cdot B$, $E_1 E_2$, $B_2 B_3$ (and similar). In a hyper-cubic invariant theory, instead, only the standard terms $E^2$ and $B^2$ are allowed. Fixing the coefficient $c_P$ could then be done by measuring $\langle E\cdot B \rangle$, $\langle E_1 E_2 \rangle$, $\cdots$, and tuning $c_P$ in such a way that one (or more) of these expectation values is restored to its proper value pertinent to a hyper-cubic invariant theory, i.e. zero. These effects could turn out to be rather small, given that only the fermionic part of the tree-level action breaks hyper-cubic symmetry. It could also be that other derived quantities are more sensitive to this coefficient, and more suitable for its nonperturbative determination. In general one can look for Ward identities in which violations of the standard Lorentz invariant form, as functions of $c_P$, occur. For Karsten-Wilczek fermions the purely gluonic counterterm introduces an asymmetry between Wilson loops containing temporal links relative to those involving spatial links only. One could then fix $d_P$ by computing a Wilson loop lying entirely in two spatial directions, and then equating its result to an ordinary Wilson loop which also has links in the time direction. In the end only Monte Carlo simulations will reveal the actual amount of symmetry breaking. This could turn out to be large or small depending on the observable considered. One important such quantity is the mass splitting of the charged pions relative to the neutral pion. Indeed, since there is only a $U(1) \otimes U(1)$ chiral symmetry, the $\pi^0$ is massless, as the unique Goldstone boson (for $m_0\to 0$), but $\pi^+$ and $\pi^-$ are massive. 
Furthermore, the magnitude of these symmetry-breaking effects could turn out to be substantially different for Bori\c{c}i-Creutz compared to Karsten-Wilczek fermions. Thus, one of these two actions could in this way emerge as the preferred one for numerical simulations. \section{A unifying notation for the two fermion discretizations} \label{sec:notation} By introducing a particular notation, some similarities between the two realizations of minimally doubled fermions can be revealed. This applies to the form of the action, operators and counterterms. For this purpose one can introduce a 4-component object $\Lambda_\mu$, defined as \begin{equation} \Lambda_{\mu} \equiv \left\{\begin{array}{ll} \delta_{\mu 4} & \,\mbox{Karsten-Wilczek} \\[0.2cm] \frac{1}{2} & \,\mbox{Bori\c{c}i-Creutz} \end{array}\right., \qquad (\Lambda\cdot\gamma) \equiv \left\{\begin{array}{ll} \gamma^4 & \,\mbox{Karsten-Wilczek} \\[0.2cm] \Gamma & \,\mbox{Bori\c{c}i-Creutz} \end{array}\right. . \end{equation} In both cases this object points from the zero of the action at the center of the Brillouin zone to the other zero (describing the second fermion, of opposite chirality). We first show that by means of this object one can cast both actions into similar (although non-equivalent) forms. Their free Dirac operators, as we have already seen in Section \ref{sec:simulations}, contain the same naive fermion piece but a different dimension-five operator. The latter can be rewritten in this new notation as \begin{eqnarray} D_{KW}^{(5)}(p) &\equiv& \frac{2i}{a} \,\sum\limits_{\mu,\nu}\, \Lambda^{\nu} \gamma^{\nu} \,\sin^2 \frac{ap_{\mu}}{2} \, \Big(1-\delta_{\mu \nu }\Big) , \\ D_{BC}^{(5)}(p) &\equiv& -\frac{2i}{a} \,\sum\limits_{\mu,\nu}\, \Lambda^{\nu} \gamma^{\nu} \,\sin^2 \frac{ap_{\mu}}{2} \, \Big(1-2\delta_{\mu \nu}\Big) .
\end{eqnarray} The factors $(1-\delta_{\mu \nu})$ and $(1-2\delta_{\mu \nu})$ cannot be transformed into each other, and this illustrates that the two actions are inequivalent and must be distinguished (as we remarked in Section \ref{sec:actions}). Although the quark propagator cannot be cast into a uniform expression using this notation, this turns out to be possible for operators (e.g. local currents and counterterms), as well as some other results such as the expression for vacuum polarization. For example, the various counterterms that we have previously discussed can be easily cast in a completely unified way for the two actions. If we rewrite the three counterterms making use of the object $\Lambda_\mu$, the counterterms of dimension three appear as \begin{equation} i \overline{\psi}(x) (\Lambda\cdot\gamma) \psi(x) , \end{equation} the fermionic ones of dimension four become \begin{equation} \overline{\psi}(x) (\Lambda\cdot\gamma)(\Lambda\cdot D) \psi(x) , \end{equation} and the gluonic ones are \begin{equation} \sum_{\mu,\nu, \rho} \Lambda_{\mu} F_{\mu\rho} F_{\rho\nu} \Lambda_{\nu} . \end{equation} Here (and in the following) objects written in this unified notation may differ by simple numerical coefficients from the corresponding quantities which we have previously used in the conventional notation. Let us now consider the results of the one-loop calculation that we have presented in the previous Sections. One can rewrite the full self-energy (without counterterms) for both actions as \begin{equation} \Sigma = i\slash{p} \,\Sigma_1 + m_0 \,\Sigma_2 + i\tilde{c}_1 \,(\Lambda\cdot\gamma)(\Lambda \cdot p) + \tilde{c}_2 \,\frac{i}{a}(\Lambda\cdot\gamma) , \end{equation} with $\tilde{c}_i$ being given by either $c_i$ or $d_i$. Also the fermionic bilinears can be expressed in a unified form. 
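The equality of the two momentum-space forms of $D^{(5)}_{BC}$ above rests on the identity $\gamma'_\mu \equiv \Gamma\gamma_\mu\Gamma = \Gamma - \gamma_\mu$ (so that $\sum_\mu \gamma'_\mu = 2\Gamma$), which follows from $\{\gamma_\mu,\Gamma\}=1$; here $\Gamma = \frac{1}{2}\sum_\mu\gamma_\mu$ and $\gamma'_\mu = \Gamma\gamma_\mu\Gamma$ are the standard Bori\c{c}i-Creutz definitions, recalled as an assumption. A quick numerical sketch in an explicit representation (with $a=1$):

```python
import numpy as np

# Chiral Euclidean gamma matrices, {gamma_mu, gamma_nu} = 2 delta_{mu nu}
sig = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]
Z, I2 = np.zeros((2, 2)), np.eye(2)
gamma = [np.block([[Z, -1j * s], [1j * s, Z]]) for s in sig] \
      + [np.block([[I2, Z], [Z, -I2]])]

Gamma = 0.5 * sum(gamma)                    # Borici-Creutz Gamma, Gamma^2 = 1
gp = [Gamma @ g @ Gamma for g in gamma]     # gamma'_mu = Gamma gamma_mu Gamma

for mu in range(4):                         # gamma'_mu = Gamma - gamma_mu
    assert np.allclose(gp[mu], Gamma - gamma[mu])

# D^(5)_BC from the action vs. the Lambda-notation form (Lambda_mu = 1/2)
p = np.random.default_rng(1).uniform(-np.pi, np.pi, size=4)
s2 = np.sin(p / 2) ** 2
direct = -2j * sum(gp[mu] * s2[mu] for mu in range(4))
unified = -2j * sum(0.5 * gamma[n] * s2[m] * (1 - 2 * (m == n))
                    for m in range(4) for n in range(4))
assert np.allclose(direct, unified)
print("identity verified")
```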
Using the abbreviations $b=\frac{g_{0}^{2}C_{F}}{16\pi^{2}}$ and $L = \log a^{2}p^{2}$, the results for the one-loop vertex diagram for the local scalar, vector and tensor bilinears are \begin{eqnarray} C^{S} &=& b \,\left\{\begin{array}{ll} \Big(-4L+24.36875 + (1-\alpha)\big(L-5.792010\big) \Big)~~\mbox{Karsten-Wilczek} \\ \Big(-4L+29.48729 + (1-\alpha)\big(L-5.792010\big) \Big)~~\mbox{Bori\c{c}i-Creutz} \end{array}\right. , \\ C^{V}_{\mu} &=& b \,\left\{\begin{array}{ll} \gamma_{\mu}\Big(-L+10.44610+(1-\alpha)\big(L-4.792010\big)\Big) -2.88914\cdot\Lambda_{\mu}(\Lambda\cdot\gamma)~~\mbox{Karsten-Wilczek} \\ \gamma_{\mu}\Big(-L+9.54612+(1-\alpha)\big(L-4.792010\big)\Big) -0.20074\cdot\Lambda_{\mu}(\Lambda\cdot\gamma)~~\mbox{Bori\c{c}i-Creutz} \end{array}\right. , \\ C^{T}_{\mu\nu} &=& b \,\left\{\begin{array}{ll} \sigma_{\mu\nu}\Big(4.17551+(1-\alpha)\big(L-3.792010\big) \Big)~~\mbox{Karsten-Wilczek} \\ \sigma_{\mu\nu}\Big(2.16548+(1-\alpha)\big(L-3.792010\big) \Big)~~\mbox{Bori\c{c}i-Creutz} \end{array}\right. . \end{eqnarray} For the conserved vector current, the sum of the standard proper diagrams (vertex, sails and operator tadpole) reads for the two actions \begin{equation} b\,\left\{\begin{array}{ll} \gamma_{\mu}\Big(-L-9.24089 + (1-\alpha)\big(L-4.792010\big)\Big) +0.12554\cdot\Lambda_{\mu}(\Lambda\cdot\gamma)~~\mbox{Karsten-Wilczek} \\ \gamma_{\mu}\Big(-L-6.80663 + (1-\alpha)\big(L-4.792010\big) \Big) -3.05532\cdot\Lambda_{\mu}(\Lambda\cdot\gamma)~~\mbox{Bori\c{c}i-Creutz} \end{array}\right. . \end{equation} Perhaps one of the most striking examples of the convenience of this notation can be observed in the case of the vacuum polarization. The contribution of fermion loops to this quantity contains structures which break hyper-cubic symmetry.
It can be written as \begin{equation} \Pi^{(f)}_{\mu\nu}(p) = \Sigma_{3}\, (p_\mu p_\nu-p^{2}\delta_{\mu\nu}) +d_{g} \Big((\Lambda\cdot p)(\Lambda_{\mu}p_{\nu}+\Lambda_{\nu}p_{\mu}) -(\Lambda_{\mu}\Lambda_{\nu}p^{2}+\delta_{\mu\nu}(\Lambda\cdot p)^{2})\Big) , \label{eq:vacpol} \end{equation} with the numerical results (as we have seen in Section \ref{sec:detgluon}) \begin{eqnarray} \Sigma_{3}(g_{0}^{2}) &=& \tilde{b}\,\left\{ \begin{array}{ll} -\frac{8}{3}L+19.99468~~\mbox{Karsten-Wilczek} \\ -\frac{8}{3}L+23.6793~~\mbox{Bori\c{c}i-Creutz} \end{array}\right. , \\ d_{g}(g_{0}^{2}) &=& \tilde{b}\,\left\{ \begin{array}{ll} -12.69766~~\mbox{Karsten-Wilczek} \\ -3.6376~~\mbox{Bori\c{c}i-Creutz} \end{array}\right., \end{eqnarray} with $\tilde{b}= \frac{g_{0}^{2}C_2}{16\pi^{2}}$ (Wilson fermions have $\Sigma_{3} = \tilde{b}(-\frac{4}{3}L+4.337002)$ and $d_{g}= 0$). Thus, a single formula can describe the structures which arise in the calculation of the vacuum polarization for both actions. With this notation we have shown that operator structures and results for Bori\c{c}i-Creutz and Karsten-Wilczek fermions, although distinct, share many common traits. As can be seen in the above expressions, another remarkable feature is that, once $\Lambda_\mu$ is introduced, the summed indices occur in pairs (as in the continuum), and the free indices match exactly on both sides of the equations. We do not know whether this will persist for more complicated quantities.
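Gauge invariance requires the fermionic vacuum polarization to be transverse, $p_\mu \Pi^{(f)}_{\mu\nu}(p)=0$, and one can verify numerically that the hyper-cubic-breaking structure multiplying $d_g$ in eq.~(\ref{eq:vacpol}) satisfies this on its own, for both choices of $\Lambda_\mu$. A small sketch with a randomly chosen momentum:

```python
import numpy as np

rng = np.random.default_rng(2)
p = rng.normal(size=4)
delta = np.eye(4)

def breaking(Lam):
    """Structure multiplying d_g in the vacuum polarization."""
    Lp = Lam @ p
    return (Lp * (np.outer(Lam, p) + np.outer(p, Lam))
            - (np.outer(Lam, Lam) * (p @ p) + delta * Lp ** 2))

standard = np.outer(p, p) - (p @ p) * delta     # structure multiplying Sigma_3

for Lam in (np.array([0., 0., 0., 1.]),         # Karsten-Wilczek
            np.full(4, 0.5)):                   # Borici-Creutz
    T = breaking(Lam)
    assert np.allclose(p @ T, 0)                # transverse, like the standard part
    assert np.allclose(T, T.T)                  # symmetric in mu <-> nu
assert np.allclose(p @ standard, 0)
print("both tensor structures are transverse")
```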
Even without using the $\Lambda$ notation, we also discovered that the hyper-cubic-breaking terms of the vacuum polarization in eq.~(\ref{eq:vacpol}) can be put for both actions in the same algebraic form, namely \begin{equation} p^2 \{\gamma_\mu,\Gamma\} \{\gamma_\nu,\Gamma\} + \delta_{\mu\nu} \{\slash{p},\Gamma\}\{\slash{p},\Gamma\} -\frac{1}{2}\,\{\slash{p},\Gamma\} \Big( \{\gamma_\mu,\slash{p}\} \{\gamma_\nu,\Gamma\} +\{\gamma_\nu,\slash{p}\} \{\gamma_\mu,\Gamma\} \Big) , \end{equation} where in the case of Karsten-Wilczek fermions $\Gamma$ must be replaced by $\gamma_4/2$. This substitution is suggested by comparison of the standard relation $\Gamma = \frac{1}{4}\,\sum_{\mu}(\gamma_\mu+\gamma'_\mu)$ of Bori\c{c}i-Creutz fermions with the formula $\gamma_4 = \frac{1}{2}\,\sum_{\mu}(\gamma_\mu+\gamma'_\mu)$ for Karsten-Wilczek fermions, expressing the symmetries of the action (as can be seen from the expression of the propagator, when one expands it around the second zero). Whether there is any deeper significance to this structural ``equivalence'' of the hyper-cubic-breaking structures in the vacuum polarizations remains an open question. \section{Conclusions} Bori\c{c}i-Creutz and Karsten-Wilczek fermions are described by a fully consistent renormalized quantum field theory. Three counterterms need to be added to the bare actions, and all their coefficients can be calculated either in perturbation theory (as we have shown), or nonperturbatively from Monte Carlo simulations (a task for the future, for which we have suggested some strategies). After these subtractions are consistently taken into account, the power divergence in the self-energy is eliminated, and no other power divergences occur for all quantities that we calculated. 
We have argued that under reasonable assumptions and following the nonperturbative determination of these counterterms, no special features of these two realizations of minimally doubled fermions should hinder their successful Monte Carlo simulation. Conserved vector and axial currents can be derived, and they have simple expressions which involve only nearest-neighbour sites. We thus have here one of the very few cases in which a simple (and ultralocal) conserved axial current can be defined. Finally, we would like to observe that this work is also an example of the usefulness of perturbation theory in helping to uncover theoretical aspects of (new) lattice formulations. \acknowledgments This work was supported by Deutsche Forschungsgemeinschaft (SFB443), the GSI Helmholtz-Zentrum f\"ur Schwerionenforschung, and the Research Centre ``Elementary Forces and Mathematical Foundations'' (EMG) funded by the State of Rhineland-Palatinate. MC was supported by contract number DE-AC02-98CH10886 with the U.S.~Department of Energy. Accordingly, the U.S. Government retains a non-exclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S.~Government purposes. MC is particularly grateful to the Alexander von Humboldt Foundation for support for multiple visits to the University of Mainz.
\section{Introduction} In the 1910s Nordstr\"{o}m proposed a theory of gravity that met the strictures of Special Relativity \cite{RennGenesis3,vonLaueNordstrom,OBergmannScalar} by having at least Lorentz transformations and space- and time-translations as symmetries, and by displaying retarded action through a field medium, as opposed to Newtonian instantaneous action at a distance. Nordstr\"{o}m's scalar gravity was a serious competitor to Einstein's program for some years during the middle 1910s. Neglecting time dependence and nonlinearity, it gives Poisson's equation just as Newton's theory does. Nordstr\"{o}m's theory was eclipsed first by the theoretical brilliance of Einstein's much more daring project, and then by the empirical success of Einstein's theory in the bending of light, a result inconsistent with Nordstr\"{o}m's theory. While representing gravity by a scalar field is no longer a viable physical proposal, it is interesting to fill a hole left by the abandonment of Nordstr\"{o}m's scalar gravitational theory caused by Einstein's inventing General Relativity (GR) so soon. Developments in group theory as applied to quantum mechanics, such as by Wigner \cite{WignerLorentz}, classified all possible fields in terms of the Lorentz group with various masses and various spins. In the late 1930s Pauli and Fierz found that the theory of a non-interacting massless spin 2 (symmetric tensor) field in Minkowski space-time was just the linear approximation of Einstein's GR \cite{PauliFierz,FierzPauli,Wentzel}. Tonnelat and some other authors associated with de Broglie pursued massive spin 2 theories \cite{Tonnelat20}. Nordstr\"{o}m's theory of a long-range scalar field is, in this particle physics terminology, a theory of a massless spin 0 field.
Both the precedent of particle physics in the 1930s and the work by Seeliger and Neumann in the 1890s in giving Newtonian gravity a finite range \cite{PockelsHelmholtzEquation,Neumann,Seeliger1896,Pauli,North,NortonWoes,EarmanLambda} show the appropriateness of considering the possibility of a finite range for the gravitational potential. A finite range corresponds in field theory to a mass term, a term in the field equation that is linear and \emph{algebraic} in the potential; the corresponding term in the Lagrangian density, a sort of potential energy, is quadratic. Einstein briefly entertained in his 1917 paper on the cosmological constant what is in effect a massive scalar gravitational theory \begin{quote} as a foil for what is to follow. In place of Poisson's equation we write \begin{eqnarray*} \hspace{2cm} \nabla^2 \phi - \lambda \phi = 4 \pi \kappa \rho \hspace{1.25cm} . \hspace{1.25cm} . \hspace{1.25cm} . \hspace{1.1cm} (2) \end{eqnarray*} where $\lambda$ denotes a universal constant. \cite[p. 179]{EinsteinCosmological} \end{quote} Thus Einstein in effect contemplated a theory of the sort that, in light of later quantum mechanics terminology, one might call a theory of gravity using a massive scalar field, with $\lambda$ equaling the square of the scalar graviton mass in relativistic units with Planck's constant and the speed of light set to 1. Relativistic massive scalar fields in the absence of interactions satisfy the Klein-Gordon equation, but interpreting the field as gravity introduces interactions, including self-interaction and hence nonlinearity. However, Einstein promptly drew a widely followed analogy to his cosmological constant---I suppress references to spare the guilty---but this analogy is erroneous \cite{DeWittDToGaF,Trautman,Treder,FMS,Schucking,CooperstockTerm,NortonWoes,HarveySchucking,EarmanLambda}. Thus Einstein obscured for himself and others the deep conceptual issues raised by the mass term.
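For a static point source, Einstein's equation (2) has the exponentially decaying solution $\phi \propto e^{-\sqrt{\lambda}\, r}/r$ familiar from Neumann and Seeliger, which makes the finite range set by the mass term explicit. A quick symbolic check that this profile solves the vacuum equation (the overall normalization is immaterial here):

```python
import sympy as sp

r, lam = sp.symbols('r lambda', positive=True)
phi = sp.exp(-sp.sqrt(lam) * r) / r          # Yukawa-type profile, up to normalization

# radial Laplacian of a spherically symmetric function
lap = sp.diff(r**2 * sp.diff(phi, r), r) / r**2

# outside the source (rho = 0): nabla^2 phi - lambda phi = 0
assert sp.simplify(lap - lam * phi) == 0
print("exp(-sqrt(lambda) r)/r solves the vacuum equation")
```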
The cosmological constant, having a zeroth order term in the field equations, is analogous to the scalar equation $$\nabla^2 \phi - \lambda (1+\phi) = 4 \pi \kappa \rho;$$ the strange term $- \lambda \cdot 1$ will tend to dominate over the intended $- \lambda \phi$ term. Two papers from \emph{c.} 1970 provide a partial exception to the remarkable silence about the possibility of relativistic massive scalar gravity theories. One by Freund and Nambu \cite{FreundNambu} writes down equations that formally could be read as applying to massive scalar gravity, but they do not consider that application. Deser and Halpern \cite{DeserHalpern} soon called attention to the identity of the Freund-Nambu field equations (less the mass term, though that omission is not mentioned!) with Nordstr\"{o}m's theory. While the alert reader could notice that Freund and Nambu had in effect provided field equations for a massive variant of Nordstr\"{o}m's theory, apparently no one has ever managed to comment on that fact, still less to discuss its significance. A more recent paper by H. Dehnen and R. Frommert reinvented massive scalar gravity in a similar way \cite{DehnenMassiveScalar}, but with an unfortunate and apparently unnoticed restriction on the allowed matter field content such that standard scalar fields are inadmissible. Massive scalar gravities, if the mass is sufficiently small, fit the data as well as does Nordstr\"{o}m's theory, as a consequence of the smoothness of the limit of a massive scalar field theory as the mass goes to zero \cite[p. 246]{WeinbergQFT1}. Thus there is a problem of underdetermination between the massless theory and its massive variants for sufficiently small masses \cite{UnderdeterminationPhoton}. The analog of this instance of underdetermination was already clearly understood by Seeliger in the 1890s. 
He wrote (as translated by John Norton) that Newton's law was ``a purely empirical formula and assuming its exactness would be a new hypothesis supported by nothing.'' \cite{Seeliger1895a,NortonWoes} While that claim might be a bit strong, in that Newton's law had virtues that not every rival formula empirically viable in the 1890s had, a certain kind of exponentially decaying formula of Neumann and Seeliger was also associated with an appropriate differential equation \cite{PockelsHelmholtzEquation,Neumann}. It is well known that Nordstr\"{o}m's theory does not bend light \cite{Kraichnan}. That is an immediate consequence of the conformal flatness of the metric in Nordstr\"{o}m's theory in geometrical form \cite{EinsteinFokker,EinsteinTrans4} and the conformal invariance of Maxwell's electromagnetism \cite{Wald}: space-time is flat in Nordstr\"{o}m's theory except for the volume element, but light doesn't see the volume element in Maxwell's theory. While scalar gravity is a museum piece as far as theoretical physics is concerned---at least as far as the dominant gravitational field is concerned---it remains a useful test bed theory for analogous phenomena for which the details in General Relativity are much more complicated technically or might play a secondary role in gravitation theory \cite{BransScalar,MisnerScalar,SundrumScalar,ReuterManrique}. Scalar gravity sheds light on some fundamental issues in space-time theory as well, not least by setting a precedent that could be entertained for massive tensor theories. \section{Massive Scalar Gravities: Relatives of Nordstr\"{o}m's Theory } Here I shall give a suitable Lagrangian density in a form adapted to the derivation of the massive variants to be introduced shortly. The Einstein-Fokker geometrization \cite{EinsteinFokker,EinsteinTrans4} suggests a useful set of variables to use.
One can isolate the conformal structure (the null cones) out of a metric by taking the part with determinant of $-1$; for a flat metric $\eta_{\mu\nu}$ one can call the resulting conformal metric density $\hat{\eta}_{\mu\nu},$ a tensor density of weight $-\frac{1}{2}$ in four space-time dimensions \cite{Anderson}. Let $\tilde{\eta}$ be a (positive) scalar density of arbitrary nonzero weight $w,$ related to $\eta_{\mu\nu}$ by $\sqrt{-det(\eta_{\mu\nu})} = \tilde{\eta}^\frac{1}{w}.$ (The expression $\sqrt{-det(\eta_{\mu\nu})}$ is often written as $\sqrt{-\eta}.$) Thus $\tilde{\eta}$ (or its $w$th root) governs volumes, at least volumes that are not distorted by gravity. Let the gravitational potential be represented by a potential $\tilde{\gamma},$ also of density weight $w.$ Then one can define a new effective volume element by $ \tilde{g} =_{def} \tilde{\eta} + k \tilde{\gamma}.$ Thus far it is unclear whether $ \tilde{g}$ actually determines the volume of anything, but the derivation shows that it determines all volumes for the matter dynamics and all (in the massless case) or most (in the massive cases) for the gravitational dynamics. It turns out that $k^2 = 64 \pi G w^2;$ the sign of $k$ does not matter much, but will be chosen to match that of $w$ to maximize continuity. Neglecting terms that do not contribute to the field equations or that disappear with the choice of appropriate coordinates, one can take the purely gravitational part of the Lagrangian density as \begin{eqnarray} \mathcal{L}_{g0}= - \frac{1}{2} \hat{\eta}^{\mu\nu} \tilde{g}^{ \frac{1}{2w} -2} (\partial_{\mu} \tilde{\gamma} ) \partial_{\nu} \tilde{\gamma}; \end{eqnarray} here $\partial_{\mu}$ is the coordinate derivative, while $\hat{\eta}^{\mu\nu}$ is the inverse of $\hat{\eta}_{\mu\nu}.$ This result will be derived below. 
It is essential to use scalar \emph{densities} in order to obtain the trace of the stress-energy tensor so readily, which then permits combining the gravitational potential with the background volume element by an additive field redefinition $ \tilde{g} =_{def} \tilde{\eta} + k \tilde{\gamma}.$ The basic postulate is universal coupling, that the full field equations are obtained by taking the free field equations and adding in the trace of the total stress-energy tensor (including gravitational energy-momentum). The universal coupling principle (initially using a brute-force direct construction of the stress-energy tensor) was employed by Einstein in his supposedly unsuccessful \emph{Entwurf} physical strategy for finding his field equations \cite{EinsteinEntwurf}, and was later brought to successful completion in finding Einstein's equations using higher mathematical technology \cite{Kraichnan}. One can show that the trace of the stress-energy tensor is given by taking the Euler-Lagrange derivative of the Lagrangian density with respect to the volume element (perhaps raised to some power), such as \begin{equation} \frac{\delta \mathcal{L}}{\delta \sqrt{-\eta} } = \frac{1}{2\sqrt{-\eta}} \frac{\delta \mathcal{L}}{\delta \eta_{\mu\nu} } \eta_{\mu\nu}. \end{equation} The universal coupling postulate can therefore be written as \begin{eqnarray} \frac{\delta \mathcal{L} }{\delta \tilde{\gamma} } = \frac{\delta \mathcal{L}_{free} }{\delta \tilde{\gamma} } + k \frac{\delta \mathcal{L}}{\delta \tilde{\eta} }\,\Big|_{\tilde{\gamma}}. \end{eqnarray} The same theory results from all values of $w$. $\mathcal{L}_{free}$ is a rather standard quadratic expression yielding the Klein-Gordon equation. The material part of the Lagrangian density can be written in terms of matter fields $u$ and the conformally flat metric built from $\hat{\eta}_{\mu\nu}$ and $\tilde{g}$; $\tilde{\eta}$ does \emph{not} appear on its own.
Both the gravitational and the material parts of the Lagrangian density therefore depend only on a conformally flat metric. While Nordstr\"{o}m's theory has been derived in terms of universal coupling previously \cite{Kraichnan,FreundNambu,DeserHalpern}, the approach outlined here admits a ready generalization yielding a distinct massive scalar gravity for every value of $w,$ only one of which has been found before. The kinetic and matter terms are of course just those of Nordstr\"{o}m's theory. Given the nonlinearity of the theories, there are different choices of field variables, often nonlinearly related, that are especially convenient for one purpose or another; thus comparison requires choosing some common set of fields. It is convenient to use $\sqrt{-\eta}= \tilde{\eta}^\frac{1}{w}$ and $\sqrt{-g}= \tilde{g}^\frac{1}{w}$; here $\eta$ and $g$ (without the $~\tilde{\ }$) are the determinants of the metrics $\eta_{\mu\nu}$ and $g_{\mu\nu}$ as usual. In the mass term only, one has $\sqrt{-\eta}$ appearing in its own right, the fact around which much of the philosophical interest of these theories revolves. For any real $w$ (including $w=1$ and $w=0$ by l'H\^{o}pital's rule), a universally coupled massive variant of Nordstr\"{o}m's theory is given by \begin{eqnarray} \mathcal{L}_{mass} = \frac{m^2}{64 \pi G} \left[ \frac{ \sqrt{-g} }{w-1} + \frac{ \sqrt{-g}^w \sqrt{-\eta}^{1-w} }{w(1-w)} - \frac{ \sqrt{-\eta} }{w} \right]. \end{eqnarray} One can express this mass term as a quadratic term in the potential and, typically, a series of higher powers using the expansion $$ \tilde{g} =\sqrt{-g}^w = \sqrt{-\eta}^w + 8 w \sqrt{\pi G} \tilde{\gamma},$$ where $\tilde{\gamma}$ is the gravitational potential; the case $w=0$ requires taking the $\frac{1}{w}$th root of this equation and taking the limit, giving an exponential function.
In all cases the result is \begin{eqnarray} \mathcal{L}_{mass} = -m^2 \left[ \frac{ \tilde{\gamma}^2 }{ 2 \sqrt{-\eta}^{ 2w-1} } + \frac{ (1-2w) 4 \sqrt{\pi G} \tilde{\gamma}^3 }{3 \sqrt{-\eta}^{3w-1} } + \ldots \right]. \end{eqnarray} This one-parameter family of theories closely resembles the two-parameter Ogievetsky-Polubarinov family of massive tensor theories \cite{OP,OPMassive2}, which can also be derived in a similar fashion \cite{MassiveGravity1}. The case $w= \frac{1}{2},$ which conveniently terminates at quadratic order, is the Freund-Nambu theory \cite{FreundNambu}. It is also useful to make a series expansion for all the theories using the special $w=0$ exponential variables, as will appear below. \section{Massive Scalar Gravities Are Strictly Special Relativistic} Features of Nordstr\"{o}m's scalar gravity are said to have shown that even the simplest and most conservative relativistic field theory of gravitation burst the bounds of Special Relativity (SR) \cite[p. 179]{MTW} \cite[p. 414]{NortonNordstromBook}. Relativistic gravity couldn't be merely special relativistic, according to this claim. Nordstr\"{o}m's theory indeed has a merely conformally flat space-time geometry \cite{EinsteinFokker}, which one can write as \begin{equation} g_{\mu\nu}=\hat{\eta}_{\mu\nu} \sqrt{-g}^\frac{1}{2}, \end{equation} where $\hat{\eta}_{\mu\nu}$ (with determinant $-1$) determines the light cones just as if for a flat metric in SR. Thus Nordstr\"{o}m's theory is invariant under the 15-parameter conformal group, a larger group than the usual 10-parameter Poincar\'{e} group of Special Relativity. By contrast the massive variants of Nordstr\"{o}m's theory are invariant only under the 10-parameter Poincar\'{e} group standard in SR and thus are special relativistic in the strict sense. The mass term breaks the conformal symmetry. It is therefore false that relativistic gravitation could not have fit within the confines of Special Relativity.
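Two claims about the mass term can be checked symbolically: the $w\to1$ and $w\to0$ cases exist as asserted via l'H\^{o}pital's rule, and the expansion in the potential reproduces the quadratic and cubic coefficients displayed above. A sympy sketch, for a few sample nonsingular weights; here \texttt{g} and \texttt{eta} stand for $\sqrt{-g}$ and $\sqrt{-\eta}$, \texttt{gam} for $\tilde{\gamma}$, and $x$ is a bookkeeping parameter tracking powers of the potential:

```python
import sympy as sp

w = sp.symbols('w')
G, m, gam, eta, g, x = sp.symbols('G m gamma eta g x', positive=True)

# bracket of L_mass, dropping the overall m^2/(64 pi G)
bracket = g / (w - 1) + g**w * eta**(1 - w) / (w * (1 - w)) - eta / w

# 1) the individually singular pieces combine to finite w -> 1, w -> 0 limits
lim1 = sp.simplify(sp.limit(bracket, w, 1))   # equals g - eta - g*log(g/eta)
lim0 = sp.simplify(sp.limit(bracket, w, 0))   # equals eta - g + eta*log(g/eta)

# 2) with sqrt(-g)^w = sqrt(-eta)^w + 8 w sqrt(pi G) gamma, the expansion
# starts at quadratic order with the coefficients quoted in the text
def check(wv):
    k = 8 * wv * sp.sqrt(sp.pi * G)           # so that k^2 = 64 pi G w^2
    sgw = eta**wv + k * gam * x               # \tilde g = sqrt(-g)^w
    L = m**2 / (64 * sp.pi * G) * bracket.subs([(w, wv), (g, sgw**(sp.S(1)/wv))])
    L = sp.series(L, x, 0, 4).removeO()
    assert sp.simplify(L.coeff(x, 0)) == 0 and sp.simplify(L.coeff(x, 1)) == 0
    assert sp.simplify(L.coeff(x, 2) + m**2 * gam**2 / (2 * eta**(2*wv - 1))) == 0
    assert sp.simplify(L.coeff(x, 3) + m**2 * (1 - 2*wv) * 4 * sp.sqrt(sp.pi * G)
                       * gam**3 / (3 * eta**(3*wv - 1))) == 0

for wv in (sp.Rational(1, 2), sp.Rational(3, 2), 3):   # sample weights
    check(wv)
print("limits finite; expansion coefficients match")
```

For $w=\frac{1}{2}$ the cubic coefficient vanishes, consistent with the Freund-Nambu case terminating at quadratic order.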
While it is true that no phenomena required the mass term, it was possible that the mere smallness of the mass parameter explained its empirical obscurity, as Seeliger had already proposed in the Newtonian case. \section{Derivation of (Massless) Nordstr\"{o}m Scalar Gravity } Having summarized the results above, I turn to their derivation. While Nordstr\"{o}m's theory has been derived in terms of universal coupling previously \cite{Kraichnan,FreundNambu,DeserHalpern}, the approach outlined here admits a ready generalization yielding a distinct massive scalar gravity for every value of $w.$ All of these theories are new, with the exception of the Freund-Nambu theory. The use of a scalar \emph{density} as the field variable permits a convenient additive change of variables and simple coupling to the trace of the total stress-energy tensor. A scalar density under arbitrary coordinate transformations is of course a scalar under Lorentz transformations (with two options regarding transformations with negative determinant \cite{Golab}, corresponding to the scalar and pseudoscalar of particle physics). It will turn out that different density weights conveniently give different massive scalar gravities, much as in the tensor case \cite{OP,MassiveGravity1}. \subsection{Tensor Densities and Irreducible Parts of the Metric and Stress Tensor } In addition to the background geometrical variables and the gravitational potential, there are also other matter fields, which I denote collectively by $u.$ The derivation that follows makes use of the Rosenfeld-type metrical definition of the stress-energy tensor \cite{RosenfeldStress,Kraichnan,Deser,GotayMarsden,SliBimGRG}. In such an approach the flatness of the metric is momentarily relaxed while a functional derivative with respect to it is taken; then flatness is restored. 
It will be most helpful to use not the metric tensor $\eta_{\mu\nu},$ its inverse, or some densitized relative thereof, but rather two irreducible geometrical objects that together build up the metric tensor. (The use of irreducible geometric objects is also important in analyzing the Anderson-Friedman geometric objects program \cite{FriedmanJones}.) The conformal metric tensor density $\hat{\eta}_{\mu\nu}$ (with determinant of $-1$) of weight $-\frac{1}{2}$ determines the null cone structure, which is untouched by gravitation in scalar gravity theories. The remainder of the flat metric tensor is supplied by a scalar density $\tilde{\eta}$ of nonzero weight $w$, which quantity we may take to be positive.\footnote{Choosing $\tilde{\eta}>0$ in all coordinate systems indicates which of the various subtypes of density \cite{Golab} is intended.} The flat metric tensor is built up as $\eta_{\mu\nu}= \hat{\eta}_{\mu\nu} (\tilde{\eta})^\frac{1}{2w}.$ Note that $w$ can be any real nonzero number; the use of a single tilde above the variable name reminds us that the field is a density, but gives no hint about which weight it has. The usual torsion-free covariant derivative compatible with $\eta_{\mu\nu}$ is written as $\partial_{\alpha}.$ The metric-compatibility condition $\partial_{\alpha} \eta_{\mu\nu}=0 $ yields $\partial_{\alpha} \tilde{\eta}=0$ and $\partial_{\alpha} \hat{\eta}_{\mu\nu}=0$ as well. One should be careful to use the correct form of the covariant derivative for tensor densities, so it is worth recalling the forms of their covariant and Lie derivatives. A $(1, 1)$ density $\tilde{ \phi }^{\alpha}_{\beta}$ of weight $w$ is representative. 
The Lie derivative is given by \cite{Schouten,Anderson,Israel} \begin{eqnarray} \pounds_{\xi} \tilde{ \phi }^{\alpha}_{\beta} = \xi^{\mu} \tilde{ \phi } ^{\alpha}_{\beta},_{\mu} - \tilde{ \phi }^{\mu}_{\beta} \xi^{\alpha},_{\mu} + \tilde{ \phi }^{\alpha}_{\mu} \xi^{\mu},_{\beta} + w \tilde{ \phi }^{\alpha}_{\beta} \xi^{\mu},_{\mu}, \end{eqnarray} where the $,\mu$ denotes partial differentiation with respect to local coordinates $x^{\mu}.$ The $\eta$-covariant derivative is given by \cite{Schouten,Anderson,Israel} \begin{eqnarray} \partial_{\mu} \tilde{ \phi }^{\alpha}_{\beta} = \tilde{ \phi }^{\alpha}_{\beta},_{\mu} + \tilde{ \phi }^{\sigma}_{\beta} \Gamma_{\sigma\mu}^{\alpha} - \tilde{ \phi } ^{\alpha}_{\sigma} \Gamma_{\beta\mu}^{\sigma} - w \tilde{ \phi }^{\alpha}_{\beta} \Gamma_{\sigma\mu}^{\sigma}. \end{eqnarray} Here $\Gamma_{\beta\mu}^{\sigma}$ are the Christoffel symbols for $\eta_{\mu\nu}.$ Once the curved metric $g_{\mu\nu}$ is defined, the analogous $g$-covariant derivative $\nabla$ with Christoffel symbols $\{ _{\sigma\mu}^{\alpha} \}$ follows. For \emph{scalar} densities, the relevant piece is the new term with the coefficient $\pm w.$ The formulas for Lie and covariant differentiation follow \cite{SzybiakLie,SzybiakCovariant} from the coordinate transformation law for scalar densities: under a change of local coordinates from an unprimed set $x^{\mu}$ to a primed set $x^{\nu^{\prime} },$ the density's component behaves as \begin{equation} \phi^{\prime} = \left| \det\left( \frac{ \partial x }{\partial x^{\prime} }\right) \right|^{w} \phi. \end{equation} The primes are perhaps opposite to where one might have expected, but this is the usual convention \cite{Anderson,Golab,Schouten}, though some authors, especially but not only Russian, define weight in the opposite fashion.
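As a toy numerical illustration of the transformation law (my own example, not from the text), consider the dilation $x^{\prime\mu} = 2 x^{\mu}$ in 4 space-time dimensions, for which $\det(\partial x/\partial x^{\prime}) = (1/2)^4 = 1/16$:

```python
# Toy numerical illustration (editor's own, not from the text): the weight-w
# scalar density transformation phi' = |det(dx/dx')|^w * phi under the
# dilation x'^mu = 2 x^mu in 4 space-time dimensions.

def transform_scalar_density(phi, jac_det, w):
    """Transform a weight-w scalar density component under a coordinate
    change whose Jacobian determinant det(dx/dx') is jac_det."""
    return abs(jac_det)**w * phi

phi = 3.0                 # component in the unprimed coordinates
jac_det = 0.5**4          # = 1/16 for x' = 2x in 4 dimensions
phi_w1 = transform_scalar_density(phi, jac_det, 1)   # weight 1, like sqrt(-eta)
phi_w0 = transform_scalar_density(phi, jac_det, 0)   # weight 0, ordinary scalar
print(phi_w1, phi_w0)     # 0.1875 3.0
```

A weight-$0$ density is unchanged (an ordinary scalar), while a weight-$1$ density rescales by the full Jacobian determinant, as the volume element does.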
For the other kind of scalar densities, which can change sign under coordinate transformations \cite{Golab}, the same Lie and covariant derivative formulas follow, because only behavior for infinitesimal transformations is relevant. For any action $S$ invariant under arbitrary coordinate transformations, one can derive a metrical stress-energy tensor. It is convenient to break the flat metric up into its irreducible parts, the conformal metric density $\hat{\eta}_{\mu\nu}$ that fixes the null cones and the volume-related scalar density $\tilde{\eta}$ \cite{SliBimGRG}. One can show that $$ \frac{\delta S}{\delta \tilde{\eta} } = \frac{1}{2w \tilde{\eta} } \frac{ \delta S}{\delta \eta_{\mu\nu} } \eta_{\mu\nu}.$$ Making an infinitesimal coordinate transformation described by the vector field $\xi^{\mu}$ gives $$ \delta S = \int d^{4}x \left[ \frac{ \delta S}{\delta \tilde{\gamma} } \pounds_{\xi} \tilde{\gamma} + \frac{ \delta S}{\delta u } \pounds_{\xi} u + \frac{ \delta S}{\delta \hat{\eta}_{\mu\nu} } \pounds_{\xi} \hat{\eta}_{\mu\nu} + \frac{ \delta S}{\delta \tilde{\eta} } \pounds_{\xi} \tilde{\eta} \right] + BT,$$ where BT is some boundary term of no interest for present purposes. Because $S$ is a scalar, $\delta S = 0.$ Integration by parts pulls all the derivatives off $\xi^{\mu}$ at the cost of more boundary terms. Choosing $\xi^{\mu}$ to vanish at the boundary annihilates all the boundary terms, while using its arbitrariness removes the integration, leaving the integrand to vanish. Going `on-shell' by using gravity's and matter's field equations $\frac{ \delta S}{\delta \tilde{\gamma} } =0,$ $\frac{ \delta S}{\delta u } =0,$ respectively, gives local conservation of stress-energy: $$-\partial_{\nu} \left[2 \hat{\eta}_{\mu\alpha} \frac{ \delta S}{\delta \hat{\eta}_{\mu\nu} } + w \delta^{\nu}_{\alpha} \tilde{\eta} \frac{ \delta S}{\delta \tilde{\eta} }\right] =0.$$ The stress-energy tensor here is broken up into traceless and trace parts. 
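The relation between $\frac{\delta S}{\delta \tilde{\eta}}$ and the trace $\frac{\delta S}{\delta \eta_{\mu\nu}}\eta_{\mu\nu}$ used above is a pure chain-rule fact. A small symbolic check (my own illustration, with a diagonal unimodular $\hat{\eta}_{\mu\nu}$ and an arbitrary algebraic function standing in for the action) confirms it:

```python
# Chain-rule check (editor's own illustration) of the identity
#   delta S / delta tilde{eta} = (1/(2 w tilde{eta})) (delta S/delta eta_mu_nu) eta_mu_nu
# for a diagonal metric eta_mu_nu = hat{eta}_mu_nu * tilde{eta}^(1/(2w)),
# with det(hat{eta}) = -1, using an arbitrary sample function for the action.
import sympy as sp

w, te, a, b, c = sp.symbols('w te a b c', positive=True)
e = sp.symbols('e0:4')                       # the four diagonal components of eta_mu_nu
F = e[0]**2*e[1] + sp.sin(e[2])*e[3]         # arbitrary algebraic stand-in for S

hat = [-a, b, c, 1/(a*b*c)]                  # diagonal hat{eta}, determinant -1
subs = {e[i]: hat[i]*te**(1/(2*w)) for i in range(4)}

lhs = sp.diff(F.subs(subs), te)
rhs = sum(sp.diff(F, e[i]).subs(subs)*subs[e[i]] for i in range(4))/(2*w*te)
assert sp.simplify(lhs - rhs) == 0
```

Since the identity follows from the chain rule alone, it holds for any functional of the metric, not merely the algebraic sample used here.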
For a scalar theory, the latter will represent the source for gravity. This stress-energy tensor contains contributions from both matter $u$ and gravity $\tilde{\gamma} $. If Maxwell's electromagnetism is included among the matter fields, then it couples only to $\hat{\eta}_{\mu\nu},$ not $\tilde{\eta}$. Rather than introducing conformal rescalings and showing that nothing interesting changes \cite{Wald,ChoquetDeWitt2}, this use of densities is the most direct way to see the theory's conformal invariance, from which the theory's failure to bend light follows immediately: electromagnetic radiation in the absence of charges doesn't know that it isn't in flat space-time, because the volume element disappears entirely. \subsection{Spinors} Foundational questions about space-time not infrequently are addressed as though there were no such thing as protons, electrons, neutrons, or other fermions, which are represented classically by spinor fields, or as if spinor fields introduced no additional issues. But consider the supposedly \emph{trivial} possibility of representing special relativity using arbitrary coordinates. While mere tensor calculus suffices for bosonic fields, the issue is more complicated for spinor fields. In the interest of overcoming this unjustifiable neglect of spinor fields, I point out that spinorial matter can be included effortlessly in the universally coupled massive gravities considered here. On account of breaking up the metric and stress tensor into their irreducible pieces, including spinors requires no work at all, because the spinor (if massless) does not notice scalar gravity.
To see this effortlessly and yet rigorously, one can use the Ogievetsky-Polubarinov spinor formalism \cite{OP,OPspinor,GatesGrisaruRocekSiegel,BilyalovSpinors}, thereby avoiding a tetrad in favor of the metric (which is possible by construction, contrary to widely held belief); the formalism is a bit like the tetrad formalism in the symmetric gauge, but is conceptually independent. The conformal covariance of the massless Dirac equation \cite{ChoquetDeWitt2,Branson} is well known. But what is rarely if ever noticed is that one must and can use density-weighted spinors to achieve conformal \emph{invariance}, with the volume element dropping out altogether---much as in Maxwell's electromagnetism, but the details are more difficult and less familiar and involve derivatives of the conformal part of the metric \cite{PittsPhilDiss}. The trace of the stress-energy tensor comes from $\frac{\delta S}{\delta \tilde{\eta} },$ but $\tilde{\eta}$ is simply absent from the suitably weighted spinor's kinetic term. That expression depends, in a highly nonlinear way, only on the components of $\hat{\eta}_{\mu\nu},$ which determines the null cones: 9 components, not the 10 of the metric, 15 of a unimodular conformal tetrad, or 16 of a tetrad. The appropriate spinor has density weight $\frac{3}{8}$ in 4 space-time dimensions or $ \frac{n-1}{2n}$ in $n$ space-time dimensions \cite{PittsPhilDiss}. The spinor, if massless, does not notice scalar gravity at all; the metrical stress-energy tensor has vanishing trace even off-shell. More familiar routes to this conclusion of vanishing trace of the metric stress tensor are less direct \cite{ImprovedEnergy,SorkinStress,DehnenHiggsScalar}. 
The Ogievetsky-Polubarinov treatment of spinors has also avoided a spurious counterexample to the Anderson-Friedman absolute objects program \cite{FriedmanJones} for understanding the difference between merely formal general covariance and the substantive kind that is supposed to be a novel feature of General Relativity. \subsection{Universal Coupling} Let us assume that the free field action $S_{f}[ \tilde{\gamma}, u, \hat{\eta}_{\mu\nu}, \tilde{\eta}]$ (with vanishing Newton's constant $G$) is known; it is given below, apart from the unspecified matter fields. The full action $S$ should reduce to $S_{f}$ in the limit of vanishing $G.$ The task at hand is to derive the full action $S$ for the theory with nonzero gravitational interaction. Lorentz covariance requires a source generalizing the mass density in a Lorentz-invariant way, so the trace of the stress-energy tensor is a natural choice. This choice is perhaps not compulsory, unlike the tensor case where free field gauge invariance necessitates that any source used be a divergenceless symmetric rank 2 tensor, of which there is only one physically significant example at hand. But it is a very natural choice. Letting $S$ be specialized once more to the interacting scalar gravitational theory that we seek, we can postulate that the gravitational free field equation is modified by the introduction of a source term that is basically the trace of the stress-energy tensor: $$ \frac{ \delta S}{\delta \tilde{\gamma} } = \frac{ \delta S_f}{\delta \tilde{\gamma} } + k \frac{ \delta S}{\delta \tilde{\eta} }|\tilde{\gamma} ,$$ where $k$ is a coupling constant related to $G$ and perhaps the density weight $w$ in ways that will be ascertained later. In anticipation of a change to bimetric variables, the $|\tilde{\gamma}$ notation has been added to emphasize that the other independent variable here besides $ \tilde{\eta}$ is $\tilde{\gamma}$.
The change to bimetric variables\footnote{ For the tensor case, the new variables involve two metric tensors or the like \cite{Kraichnan,SliBimGRG,MassiveGravity1}. For the scalar case, only the scalar density portions undergo any redefinition, because the gravitational field has no tensor piece to combine with $\hat{\eta}_{\mu\nu}.$ Nonetheless the term ``bimetric variables'' is handy.} involves the definition $$\tilde{g} = \tilde{\eta} + k \tilde{\gamma}.$$ Equating coefficients for variations of $\tilde{\eta}$ and $\tilde{\gamma}$ relates the functional derivatives as follows: $$ \frac{ \delta S}{\delta \tilde{\eta} }|\tilde{\gamma}= \frac{ \delta S}{\delta \tilde{g} } + \frac{ \delta S}{\delta \tilde{\eta} }|\tilde{g}; $$ $$\frac{ \delta S}{\delta \tilde{\gamma} }= k \frac{ \delta S}{\delta \tilde{g} }. $$ The second equation shows that the new field $\tilde{g}$ has an Euler-Lagrange equation equivalent to that of the potential $\tilde{\gamma}.$ The first equation shows that the trace of the stress-energy tensor splits into one piece that vanishes on-shell and one that does not. Using these equations in the postulated equation for universal coupling gives $$ 0= \frac{ \delta S_f}{\delta \tilde{\gamma} } + k \frac{ \delta S}{\delta \tilde{\eta} }|\tilde{g}.$$ So far little has been said about the detailed form of $S_f$. The most natural choice is given by the Lagrangian density $$\mathcal{L} = -\frac{1}{2} (\partial_{\mu}\tilde{\gamma}) (\partial_{\nu}\tilde{\gamma}) \hat{\eta}^{\mu\nu} \tilde{\eta}^{\frac{1}{2w} -2},$$ apart from a mass term for $\tilde{\gamma}$ which will be included later (and a free field term for matter $u$ which does not contain $\tilde{\gamma}$ and so does not contribute to the derivation). This choice gives the usual wave equation. The metrical signature $-+++$ is employed. 
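The claim that this Lagrangian density gives the usual wave equation can be confirmed symbolically (my own check, not from the text; Cartesian coordinates with $\tilde{\eta}=1$ and $\hat{\eta}^{\mu\nu} = \mathrm{diag}(-1,1,1,1)$, signature $-+++$):

```python
# Check (editor's own, in Cartesian coordinates with tilde{eta} = 1 and
# hat{eta}^{mu nu} = diag(-1,1,1,1), signature -+++) that the free Lagrangian
# density yields the ordinary wave equation as its Euler-Lagrange equation.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)
gamma = sp.Function('gamma')(*coords)
eta_inv = (-1, 1, 1, 1)                      # diagonal inverse flat metric

L = -sp.Rational(1, 2)*sum(eta_inv[i]*sp.diff(gamma, coords[i])**2
                           for i in range(4))

# Euler-Lagrange: d_mu (dL/d(d_mu gamma)) - dL/dgamma = 0; here dL/dgamma = 0
el = sum(sp.diff(L.diff(sp.diff(gamma, coords[i])), coords[i]) for i in range(4))
wave = sp.diff(gamma, t, 2) - sum(sp.diff(gamma, q, 2) for q in (x, y, z))
assert sp.simplify(el - wave) == 0
```

With $\tilde{\eta}$ constant the density factor $\tilde{\eta}^{\frac{1}{2w}-2}$ drops out of the field equation, which is why setting it to $1$ loses nothing in this check.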
To satisfy the universal coupling identity in bimetric guise, it is convenient \cite{Kraichnan,Anderson,SliBimGRG} to split the full (but unknown) action $S$ into a piece $S_{1}[\tilde{g}, u, \hat{\eta}_{\mu\nu}] $ (without explicit dependence on $\tilde{\eta}$) and another piece $S_2$ that, perhaps among other things, cancels the term $\frac{ \delta S_f}{\delta \tilde{\gamma} }$. For $S_1$ it is natural to build a conformally flat metric $$g_{\mu\nu}= \hat{\eta}_{\mu\nu} (\tilde{g})^\frac{1}{2w}.$$ Then it is natural to choose the Hilbert-like expression $$S_{1}= c \int d^{4}x \sqrt{-g}R[g] + S_{matter}[g_{\mu\nu},u].$$ (A cosmological constant term $\int d^4 x \sqrt{-g}$ is also available if desired.) One can show that $$S_{2}=\frac{2w}{3k} \int d^{4}x R[\eta] \tilde{\eta}^{-1 + \frac{1}{w}} \tilde{\gamma} + \int d^{4}x \partial_{\mu} \alpha^{\mu}$$ does the job of accommodating $S_{f}.$ The first piece of $S_2,$ involving the Ricci scalar for $\eta_{\mu\nu},$ does the work here. The second piece is simply a boundary term, which is deposited into $S_2$ rather than elsewhere for convenience. The boundary term can be chosen to remove the second derivatives from the Hilbert-like term $\sqrt{-g}R[g]$ in $S_1.$ One can also include a pure volume term $\int d^4 x \sqrt{-\eta}$. Note that $S_{2}$ contributes nothing to the field equations; its purpose is to contribute to the Rosenfeld metric stress-energy tensor only. (Recall that flatness of the background is relaxed briefly in taking the functional derivative and then restored.) The total action for the massless case is thus a piece $S_1$ describing an effective conformally flat geometry and a piece $S_{2}$ that does not affect the field equations. Universal coupling has completely clothed the background volume element with the gravitational potential, leaving only their sum as observable.
This is an amusingly strict realization of Einstein's Poincar\'{e}-inspired dictum that only the epistemological sum of gravity and physics is observable \cite{EinsteinGeomExp}. By requiring the usual normalization for the free gravitational field's kinetic term to lowest order, one infers that $$ c= - \frac{4w^2}{3k^2}.$$ In comparison to the free gravitational Lagrangian density \begin{eqnarray} - \frac{1}{2} \hat{\eta}^{\mu\nu} \tilde{\eta}^{ \frac{1}{2w} -2} (\partial_{\mu} \tilde{\gamma} ) \partial_{\nu} \tilde{\gamma}, \end{eqnarray} the interacting theory has the corresponding expression (apart from terms not affecting the equations of motion) \begin{eqnarray} - \frac{1}{2} \hat{\eta}^{\mu\nu} \tilde{g}^{ \frac{1}{2w} -2} (\partial_{\mu} \tilde{\gamma} ) \partial_{\nu} \tilde{\gamma}, \end{eqnarray} while the matter fields see the conformally flat effective metric rather than the flat background metric. Thus universal coupling of gravity to the trace of the total stress-energy tensor yields Nordstr\"{o}m's theory. In the massless case, the same theory obtains for every (nonzero) density weight $w$ of the gravitational potential. The case $w=0$ without a graviton mass term was handled by Kraichnan \cite{Kraichnan} using a multiplicative exponential field redefinition, rather than an additive one, and yields Nordstr\"{o}m's theory as well. \section{Derivation of Massive Gravities} Making the gravitational potential $\tilde{\gamma}$ massive implies that the free gravitational potential obeys the Klein-Gordon equation, not the wave equation (or Laplace's equation in the static case). It is therefore necessary to add a term quadratic in the gravitational potential to the free gravitational field Lagrangian density. Going through the universal coupling derivation, one then finds a crucial new term in which the flat metric's volume element remains essentially in the theory.
One also crucially uses the cosmological constant term; the cosmological constant term and the new term with the flat volume element have equal and opposite terms linear in the gravitational potential in the total action, a crucial cancellation that removes the odd behavior (from a field-theoretic point of view \cite{FMS}) of the cosmological constant. Then the pure volume term enters the action to cancel out the zeroth order parts of both the cosmological constant and the new term essentially involving the flat volume element; this last cancellation is largely a matter of good bookkeeping, letting the action vanish for flat space-time but not affecting the field equations. Thus the interacting theory has a term quadratic in the gravitational potential (a mass term) and, except in one special case, one or more higher powers (possibly infinitely many) in the gravitational potential. While the derivation is carried out using different fields for different values of density weight $w,$ making comparison mildly nontrivial, one can show (such as by using the two metrics only) that the theories obtained are all distinct. One can also show that the resulting one-parameter family of mass terms is related in the expected way to the two-parameter family of massive tensor gravities obtained some time ago by Ogievetsky and Polubarinov. (It turns out that the mass term in \cite{OP} contains a typographical error absent in the less well known summary \cite{OPMassive2}.) One expects the mass term for a free field to be quadratic in the potential and to lack derivatives. The free field action $S_{f}= S_{f0} + S_{fm}$ is now assumed to have two parts: a (mostly kinetic) part $S_{f0}$ that is as in the massless case above, and an algebraic mass term $S_{fm}$ that is quadratic. We seek a full universally coupled theory with an action $S$ that has two corresponding parts.
The two parts of $S=S_{0} + S_{ms}$ are the familiar part $S_{0}$ (yielding the Einstein tensor, the matter action, a cosmological constant, and a zeroth order 4-volume term) and the part $S_{ms}$ that essentially contains the background volume element $\tilde{\eta}$ and also has a zeroth order 4-volume term. As it turns out, the mass term is built out of \emph{both} the algebraic part of $S_{0}$ (the cosmological constant and 4-volume term) and the purely algebraic term $S_{ms}.$ In comparison to derivations using the canonical stress-energy tensor \cite{FreundNambu,FMS}, the derivation using the metric stress-energy tensor is much cleaner and in some respects more illuminating, but in other respects less transparent. The canonical tensor derivation is noticeably simpler in the known $w=\frac{1}{2}$ case of Freund and Nambu than in the other cases, for reasons that will be explained below. Again we postulate universal coupling in the form \begin{eqnarray} \frac{\delta S}{\delta \tilde{\gamma} } = \frac{\delta S_{f} }{\delta \tilde{\gamma} } +k \frac{\delta S}{\delta \tilde{\eta} }. \end{eqnarray} Changing to the bimetric variables $\tilde{g}$ and $\tilde{\eta}$ implies, as before, that \begin{eqnarray} 0 = \frac{\delta S_{f}}{\delta \tilde{\gamma} } +k \frac{\delta S}{\delta \tilde{\eta}} | \tilde{g}. \end{eqnarray} Now we introduce the relations $S_{f}= S_{f0} + S_{fm}$ and $S=S_{0} + S_{ms}$ to separate the pieces that existed in the massless case from the innovations of the massive case. Thus \begin{equation} \frac{\delta S_{f0}}{\delta \tilde{\gamma} } + \frac{\delta S_{fm}}{\delta \tilde{\gamma} } = -k \frac{\delta S_{0} }{\delta \tilde{\eta}} | \tilde{g} -k \frac{\delta S_{ms} }{\delta \tilde{\eta}} | \tilde{g}.
\end{equation} Given the assumption that the new terms $S_{fm}$ and $S_{ms}$ correspond, this equation separates into the familiar part $\frac{\delta S_{f0}}{\delta \tilde{\gamma}} = -k \frac{\delta S_{0} }{\delta \tilde{\eta}} | \tilde{g} $ as before and the new part $$ \frac{\delta S_{fm}}{\delta \tilde{\gamma} } = -k \frac{\delta S_{ms} }{\delta \tilde{\eta}} | \tilde{g}.$$ $S_{0} $ is given by $S_{0} = S_{1} [\tilde{g}_{\mu\nu}, u] + S_{2}$ as in the massless case. Once again, we choose the simplest case and get the Hilbert action along with a cosmological constant, with matter coupled only to the curved metric, along with various terms that do not affect the equations of motion. Assuming the free field mass term to be quadratic in the gravitational potential, \begin{equation} \mathcal{L}_{fm} =-\frac{m^2}{2} \tilde{\gamma}^2 \sqrt{-\eta}^{1-2w}, \end{equation} its contribution to the field equation is $$ \frac{\delta S_{fm}}{\delta \tilde{\gamma} } = - m^2 \sqrt{-\eta}^{1-2w} \tilde{\gamma}.$$ Changing to the bimetric variables gives \begin{equation} \frac{m^2}{k} \tilde{g} \tilde{\eta}^{\frac{1}{w} -2} - \frac{m^2}{k} \tilde{\eta}^{\frac{1}{w} -1} = k \frac{\delta S_{ms} }{\delta \tilde{\eta}} | \tilde{g}. \end{equation} \subsection{Cases with $w \neq 1,$ $w \neq 0$} The goal for $S_{ms}$ is to obtain a mass term, so one can omit $u$ and $\hat{\eta}_{\mu\nu}$ from the `constant' of integration, leaving a function of $\sqrt{-g}$ only---which must be linear in order to be a scalar density of weight $1.$ Thus $$S_{ms} = \int d^{4}x \left( A \tilde{g}^\frac{1}{w} + \frac{w m^2}{k^2 (1-w)} \tilde{g} \tilde{\eta}^{\frac{1}{w} -1} - \frac{w m^2}{k^2} \tilde{\eta}^{\frac{1}{w}} \right) $$ as long as $w \neq 1.$ (The case $w=1$ yields a theory as well, but must be treated separately.) 
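That this $S_{ms}$ satisfies the required equation for any constant $A,$ and that the value $A = \frac{w^2 m^2}{(w-1)k^2}$ fixed by the zeroth-order condition makes the mass term vanish at $\tilde{g}=\tilde{\eta}$ (that is, at $\tilde{\gamma}=0$), can be verified symbolically (my own sketch, not part of the derivation):

```python
# Check (editor's own) that the candidate S_ms reproduces the required source
#   k * dL_ms/d(tilde{eta}) = (m^2/k)(tilde{g} tilde{eta}^(1/w-2) - tilde{eta}^(1/w-1)),
# and that A = w^2 m^2 / ((w-1) k^2) kills the zeroth-order part.
import sympy as sp

m, k, w = sp.symbols('m k w', positive=True)
gt, et = sp.symbols('gt et', positive=True)   # tilde{g} and tilde{eta}
A = sp.Symbol('A')

L_ms = A*gt**(1/w) + w*m**2/(k**2*(1 - w))*gt*et**(1/w - 1) - w*m**2/k**2*et**(1/w)

# the mass term is algebraic, so the functional derivative is a partial one
lhs = k*sp.diff(L_ms, et)
rhs = m**2/k*(gt*et**(1/w - 2) - et**(1/w - 1))
assert sp.simplify(lhs - rhs) == 0            # holds for any A

A_val = w**2*m**2/((w - 1)*k**2)
assert sp.simplify(L_ms.subs({A: A_val, gt: et})) == 0
```

The source equation fixes only the $\tilde{\eta}$-dependence; the `constant' of integration $A$ is then determined by the zeroth-order normalization.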
Requiring $S_{ms}$ to vanish to zeroth order in $\tilde{\gamma}$ yields $$A = \frac{ w^2 m^2}{(w-1) k^2}.$$ Requiring $S_{ms}$ to vanish to first order, which is important for the field equations, gives nothing new. For the second and higher order terms, the binomial series expansion yields $$\mathcal{L}_{ms} = m^2 \sqrt{-\eta} \left( - \frac{ \tilde{ \gamma}^2 }{2 \tilde{\eta}^2 } - \frac{[1-2w]k \tilde{\gamma}^3}{6 w \tilde{\eta}^3 } + \ldots \right).$$ For the free-field limit $k \rightarrow 0$ this expression reduces to the expected quadratic expression. For the special case $w = \frac{1}{2},$ which turns out to be the Freund-Nambu theory (the only case previously obtained), the mass term contains no interaction part; the free mass term, quadratic in a weight $\frac{1}{2}$ potential, does not depend on the volume element in order to make a covariant (that is, weight $1$) Lagrangian density and so contributes nothing to the metric stress energy tensor. (This perturbative expansion indicates nothing odd about the $w=1$ case, though the above integration was not permissible in that case.) In terms of bimetric variables, the mass term for $w\neq 1$ is $$\mathcal{L}_{ms} = \frac{ w^2 m^2}{(w-1) k^2} \tilde{g}^\frac{1}{w} + \frac{w m^2}{k^2 (1-w)} \tilde{g} \tilde{\eta}^{\frac{1}{w} -1} - \frac{w m^2}{k^2} \tilde{\eta}^{\frac{1}{w}}. $$ The factor $\tilde{g}^\frac{1}{w}$ often can be treated using a binomial series expansion; the series converges for $|k \tilde{\gamma} / \tilde{\eta} | <1$ \cite{Jeffrey}. (A strong-field expansion is also possible, but will not be employed here.) One can show that the binomial series expansion for the theory labeled by $w$ in terms of the weight $w$ variables is \begin{equation} \mathcal{L}_{ms} = - \frac{m^2 \sqrt{-\eta} }{k^2} \sum_{j=2}^{\infty} \left(k \frac{ \tilde{\gamma} }{ \tilde{\eta} } \right)^j \frac{ (\frac{1}{w} -2)! }{ (\frac{1}{w} -j)! j! }. \end{equation} Here the expression $$\frac{ (\frac{1}{w} -2)! 
}{ (\frac{1}{w} -j)! }$$ is shorthand for $(\frac{1}{w} -2) (\frac{1}{w} -3) \cdots (\frac{1}{w} -j+1);$ one need not make sense of the numerator and denominator separately in terms of Gamma functions, though one could do so. This form is clearly well behaved in the vicinity of $w=1,$ so one can find the limit as $w \rightarrow 1$ to be \begin{equation} \mathcal{L}_{ms,w=1} = - \frac{m^2 \sqrt{-\eta} }{k^2} \sum_{j=2}^{\infty} \left(-k \frac{ \tilde{\gamma} }{ \tilde{\eta} } \right)^j \frac{ 1}{ j(j-1) }. \end{equation} Note that this series expression of the theories leaves them not readily commensurable, due to the use of a different potential, bearing different relations to the more physically meaningful $\sqrt{-g}$ and $\sqrt{-\eta}$, for each value of $w$. The use of a $w$-specific potential in the derivation of each theory is quite important in the context of discovery, however. \subsection{Case $w=1$} The case $w=1$ can be considered now. The equation to integrate is \begin{equation} \frac{m^2}{k} \tilde{g} \tilde{\eta}^{-1} - \frac{m^2}{k} = k \frac{\delta S_{ms} }{\delta \tilde{\eta}} | \tilde{g}. \end{equation} Performing the integration introduces a logarithm: $$\mathcal{L}_{ms} = \frac{ m^2}{ k^2} \tilde{g}[\ln(\tilde{\eta}) + f(\tilde{g})] - \frac{ m^2}{k^2} \tilde{\eta}.
$$ In the interest of getting a scalar density, one can set the `constant' of integration $f(\tilde{g})$ to be $$f(\tilde{g}) = a - \ln(\tilde{g}) $$ for some constant $a.$ For $ w \neq 1$ the quadratic and higher terms came from the formal cosmological constant term proportional to $\sqrt{-g}$, but in this case that term is linear in the gravitational potential (in this choice of field variables), and hence serves to cancel the noxious linear part of the mass-yielding nonlinear expression $ -\frac{ m^2}{ k^2} \tilde{g} \ln \left(\frac{ \tilde{g}}{ \tilde{\eta} } \right).$ This cancellation is a second service performed by the choice of $-1$ for the coefficient in $f.$ (This remark should be taken as merely heuristic, because use of nonlinear field redefinitions, such as re-expressing this mass term using a potential of a different density weight, or using the $w=0$ exponential field redefinition, would lead to a different sort of bookkeeping.) Requiring the zeroth order part of the mass term to vanish as well gives $a=1.$ Using the Taylor expansion $\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} -\ldots$, convergent for $-1<x\leq 1$ \cite[p. 564]{Shenk}, one obtains $$\mathcal{L}_{ms} = m^2 \left( - \frac{ \tilde{ \gamma}^2 }{2 \tilde{\eta} } + \frac{k \tilde{\gamma}^3}{6 \tilde{\eta}^2 } + \ldots \right),$$ matching the expansion above to this order. The full series expansion (for any $w,$ with possible exception of $w=0$---but that case will be vindicated shortly) can be shown to be \begin{equation} \mathcal{L}_{ms,w=1} = - \frac{m^2 \sqrt{-\eta} }{k^2} \sum_{j=2}^{\infty} \left(-k \frac{ \tilde{\gamma} }{ \tilde{\eta} } \right)^j \frac{ 1}{ j(j-1) }, \end{equation} in agreement with the expression above for the limit of the $w \neq 1$ family in the limit as $w \rightarrow 1.$ Thus the family of massive universally coupled scalar gravities is indeed continuous across $w=1,$ despite the need for special treatment of this case.
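Both the Taylor expansion of the $w=1$ mass term and its agreement with the $w \rightarrow 1$ limit of the general closed form can be verified symbolically (my own check, not part of the derivation; the l'H\^{o}pital step is implemented directly by differentiating numerator and denominator):

```python
# Two checks (editor's own, not from the text) on the w = 1 mass term
#   L_w1 = (m^2/k^2) [ g (1 - ln(g/eta)) - eta ],  with g = eta + k*gamma,
# where g and eta stand for the weight-1 densities tilde{g} and tilde{eta}.
import sympy as sp

m, k, G, w, gamma = sp.symbols('m k G w gamma', positive=True)
g, eta = sp.symbols('g eta', positive=True)

L_w1 = m**2/k**2*(g*(1 - sp.log(g/eta)) - eta)

# (a) Taylor expansion in the potential matches the series quoted in the text
expansion = L_w1.subs(g, eta + k*gamma).series(gamma, 0, 4).removeO()
expected = -m**2*gamma**2/(2*eta) + m**2*k*gamma**3/(6*eta**2)
assert sp.simplify(expansion - expected) == 0

# (b) l'Hopital: numerator and denominator of the general mass term (over the
# common denominator w^2 - w) both vanish at w = 1, and the ratio of their
# w-derivatives reproduces the logarithmic form, using k^2 = 64 pi G at w = 1
N = w*g - g**w*eta**(1 - w) - (w - 1)*eta
D = w**2 - w
assert N.subs(w, 1) == 0 and D.subs(w, 1) == 0
lhopital = (sp.diff(N, w)/sp.diff(D, w)).subs(w, 1)
target = (64*sp.pi*G/m**2) * L_w1.subs(k, 8*sp.sqrt(sp.pi*G))
assert sp.simplify(sp.expand_log(lhopital - target)) == 0
```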
One can also treat the case $w=1$ using l'H\^{o}pital's rule for the indeterminate form $\frac{0}{0}.$ One has \begin{eqnarray} \lim_{w \rightarrow 1} \frac{m^2}{64 \pi G} \left[ \frac{ \sqrt{-g} }{w-1} + \frac{ \sqrt{-g}^w \sqrt{-\eta}^{1-w} }{ w(1-w)} - \frac{ \sqrt{-\eta} }{ w} \right] \nonumber \\ = \lim_{w \rightarrow 1} \frac{m^2}{64 \pi G} \left[ \frac{ w\sqrt{-g} - \sqrt{-g}^w \sqrt{-\eta}^{1-w} - (w-1) \sqrt{-\eta} }{w^2-w} \right] = \nonumber \\ \lim_{w \rightarrow 1} \frac{m^2}{64 \pi G} \left[ \frac{\sqrt{-g} - \sqrt{-g}^w \sqrt{-\eta}^{1-w} (\ln\sqrt{-g} - \ln\sqrt{-\eta}) - \sqrt{-\eta} }{2w-1} \right] \nonumber \\ = \frac{m^2}{64 \pi G} \left[ {\sqrt{-g} - \sqrt{-g} \ln\sqrt{-g} + \sqrt{-g}\ln\sqrt{-\eta} - \sqrt{-\eta} } \right] \end{eqnarray} where the formula for exponentials of a non-natural base introduces the logarithms. This expression of course agrees with that given above. \subsection{Case $w=0$} The case $w=0$ is much more problematic, given the above bimetric field redefinition and the expression of universal coupling in terms of the derivative with respect to a volume element of some weight. The weight-$0$ power of the volume element is just $1,$ hardly a good field with respect to which to take a functional derivative. The additive field redefinition defining $\tilde{g}$ appears to fail also.
From the Newtonian limit it follows that $$ k^2 = 64 \pi G w^2.$$ To assess continuity of the field redefinition, one needs to know what happens to the meaning of $\tilde{\gamma}$ as $w \rightarrow 0.$ By considering $$ 1= \tilde{g}_w \tilde{g}_{-w} = (\tilde{\eta}_w + k_w \tilde{\gamma}_w) (\tilde{\eta}_{-w} + k_{-w} \tilde{\gamma}_{-w}) \approx 1 + \tilde{\eta}_w k_{-w} \tilde{\gamma}_{-w} + k_w \tilde{\gamma}_w \tilde{\eta}_{-w} $$ near $w=0,$ one sees that the physical significance of the potential $\tilde{\gamma}$ does not jump discontinuously at $w=0$ as long as $k$ and $w$ have the same sign, which I choose to be positive for positive density weights $w.$ Thus $k = 8 \sqrt{\pi G} w.$ But with $k$ proportional to $w,$ it appears that the bimetric field redefinition $$\tilde{g}_w = \tilde{\eta} + k(w) \tilde{\gamma}_w$$ (where the dependence on $w$ has now been made explicit) reduces to $1=1$ for $w=0.$ The universal coupling postulate \begin{eqnarray} \frac{\delta S}{\delta \tilde{\gamma} } = \frac{\delta S_{f} }{\delta \tilde{\gamma} } +k \frac{\delta S}{\delta \tilde{\eta} } \end{eqnarray} suffers not only from the meaninglessness of $\frac{\delta S}{\delta \tilde{\eta} },$ but also from the vanishing of the coupling constant due to the linearity of $k$ in $w.$ While these problems seem rather disastrous, in fact they are all soluble. First we recall the series expansion above, with $k$ now expressed in terms of $w:$ \begin{equation} \mathcal{L}_{ms} = - m^2 \sqrt{-\eta} \sum_{j=2}^{\infty} ( 8 \sqrt{\pi G})^{j-2} \left( \frac{ \tilde{\gamma} }{ \tilde{\eta} } \right)^j \frac{ (1-2w)(1-3w) \cdots (1-jw +w)}{ j! }, \end{equation} which has well behaved and simple coefficients as $w \rightarrow 0$. It is natural to drop the tilde on $\gamma$ and set $\tilde{\eta}$ to 1 for $w=0,$ leaving the simple form \begin{equation} \mathcal{L}_{ms,w=0} = - m^2 \sqrt{-\eta} \sum_{j=2}^{\infty} ( 8 \sqrt{\pi G})^{j-2} \frac{ \gamma^j }{ j! }. 
\end{equation} This is clearly the sum of the quadratic and higher terms of the exponential function, so one infers \begin{equation} \mathcal{L}_{ms,w=0} = - \frac{ m^2 \sqrt{-\eta} } {64 \pi G } \left[ -1 - 8 \sqrt{\pi G} \gamma + \exp ( 8 \sqrt{\pi G} \gamma) \right]. \end{equation} It remains to find a meaningful and appropriate notion of universal coupling that permits, one hopes, the derivation of an expression equivalent to this $w=0$ series. The form of this series for $w=0$ suggests that one might think in terms of exponentials or logarithms to find a suitable field redefinition. While the linear redefinition $$\tilde{g} = \tilde{\eta} + 8 \sqrt{\pi G} w \tilde{\gamma}$$ fails for $w=0,$ the $\frac{1}{w}$th root \begin{eqnarray} \sqrt{-g} = \tilde{g}^\frac{1}{w} = (\tilde{\eta} + 8 \sqrt{\pi G} w \tilde{\gamma})^\frac{1}{w} = \sqrt{-\eta} \left(1 + 8w \sqrt{\pi G} \frac{ \tilde{\gamma} }{\tilde{\eta} } \right)^\frac{1}{w} \end{eqnarray} remains meaningful for $w=0$ as well. The limit is $$ \sqrt{-g} =\sqrt{-\eta} \exp(8 \sqrt{\pi G} \gamma) .$$ An exponential change of variables very much like this was already employed by Kraichnan, though without application to massive theories \cite{Kraichnan}. By writing the trace of the stress-energy tensor in two different ways, one can show that the problems of the meaningless field variable and of the vanishing coupling can also be resolved.
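The limit underlying the $\frac{1}{w}$th root is the elementary one $\lim_{w\rightarrow 0}(1+wx)^{1/w}=e^x;$ it can be checked symbolically (a sketch using Python's sympy, with $x$ standing in for $8\sqrt{\pi G}\,\tilde{\gamma}/\tilde{\eta}$):

```python
import sympy as sp

w, x = sp.symbols('w x', positive=True)
# the 1/w-th root reduces to the classic limit (1 + w*x)**(1/w) -> e^x as w -> 0
lim = sp.limit((1 + w*x)**(1/w), w, 0)
print(lim)  # exp(x)
```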
The flat metric $\eta_{\mu\nu}$ can be written as $$ \eta_{\mu\nu} = \hat{\eta}_{\mu\nu} \tilde{\eta}^\frac{1}{2w}.$$ Thus one recalls that $$ \frac{\delta S}{\delta \tilde{\eta} } = \frac{1}{2w \tilde{\eta} } \frac{ \delta S}{\delta \eta_{\mu\nu} } \eta_{\mu\nu}.$$ Using this result in the postulate of universal coupling, the dependence on $w$ cancels out, giving for $w=0$ \begin{eqnarray} \frac{\delta S}{\delta \gamma } = \frac{\delta S_{f} }{\delta \gamma } + 8 w \sqrt{\pi G} \frac{1}{2w } \frac{ \delta S}{\delta \eta_{\mu\nu} } \eta_{\mu\nu} = \frac{\delta S_{f} }{\delta \gamma } + 4 \sqrt{\pi G} \frac{ \delta S}{\delta \eta_{\mu\nu} } \eta_{\mu\nu}, \end{eqnarray} which makes sense even for $w=0.$ It is convenient to choose the weight $1$ variable $\sqrt{-\eta},$ in terms of which universal coupling is $$\frac{\delta S}{\delta \gamma } = \frac{\delta S_{f} }{\delta \gamma } + \frac{8 \sqrt{\pi G} }{\sqrt{-\eta} } \frac{ \delta S}{\delta \sqrt{-\eta } }. $$ The exponential change of variables, while leaving $\hat{\eta}_{\mu\nu} $ and $u$ alone, gives \begin{eqnarray} \left. \frac{\delta S}{\delta \sqrt{ -\eta} } \right|_\gamma = \frac{\delta S}{\delta \sqrt{ -g} } \frac{ \sqrt{-g} }{ \sqrt{ -\eta} } + \left. \frac{\delta S}{\delta \sqrt{ -\eta} } \right|_g, \nonumber \\ \frac{\delta S}{\delta \gamma } = 8 \sqrt{\pi G} \frac{\delta S}{\delta \sqrt{ -g} } \sqrt{ -g}. \end{eqnarray} Installing these results in the universal coupling postulate yields \begin{equation} 0 = \frac{ \delta S_f }{ \delta \gamma} + 8 \sqrt{\pi G} \sqrt{-\eta} \left. \frac{\delta S}{\delta \sqrt{ -\eta} } \right|_g, \end{equation} a result that is surprisingly indifferent to the non-additive form of the field redefinition. Letting the action $S$ be a sum of $S_1 + S_2$ from the massless case and $S_{ms}$ for the mass term, one has \begin{equation} 0 = -m^2 \sqrt{-\eta} \gamma + 8 \sqrt{\pi G} \sqrt{-\eta} \left. \frac{\delta S_{ms}}{\delta \sqrt{ -\eta} } \right|_g.
\end{equation} Making the change of variables in $S_f$ as well yields \begin{equation} 0 = - \frac{ m^2 \sqrt{-\eta} }{ 8 \sqrt{\pi G } } \ln\left(\frac{\sqrt{-g} }{\sqrt{-\eta} } \right) + 8 \sqrt{\pi G} \sqrt{-\eta} \left. \frac{\delta S_{ms}}{\delta \sqrt{ -\eta} } \right|_g. \end{equation} Dividing by $\sqrt{-\eta}$ and integrating gives $$ -64 \pi G \mathcal{L}_{ms} = m^2 \left[ -\sqrt{-\eta} \ln\sqrt{-g} + \sqrt{-\eta} \ln\sqrt{-\eta} -\sqrt{-\eta} +f(g) \right], $$ where $f(g)$ is a `constant' of integration. To get a scalar action, the obvious choice is $b \sqrt{-g}$ for some constant $b$. Requiring the action to vanish to zeroth order yields $b=1;$ it vanishes to first order as well. The result is \begin{eqnarray} \mathcal{L}_{ms} = - \frac{m^2}{64 \pi G } \left[ - \sqrt{-\eta} \ln\left( \frac{ \sqrt{-g} }{\sqrt{-\eta} }\right) + \sqrt{-g} -\sqrt{-\eta} \right]= \nonumber \\ - \frac{ m^2 \sqrt{-\eta} } {64 \pi G } \left[ - 8 \sqrt{\pi G} \gamma + \exp ( 8 \sqrt{\pi G} \gamma) -1 \right], \end{eqnarray} which was already obtained above as the $w \rightarrow 0$ limit of the series derived for $w \neq 0.$ Thus the $w=0$ case in fact makes perfectly good sense and yields the theory that the $w \rightarrow 0$ limit leads one to expect. If Kraichnan had considered massive scalar gravity, then he would have obtained the $w=0$ theory readily.
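The integration step can be double-checked: substituting the result back into the $w=0$ universal-coupling equation gives zero identically (a sympy sketch, writing $a$ for $\sqrt{-g}$ and $b$ for $\sqrt{-\eta}$):

```python
import sympy as sp

a, b, m, G = sp.symbols('a b m G', positive=True)  # a = sqrt(-g), b = sqrt(-eta)
k = 8*sp.sqrt(sp.pi*G)

# the w = 0 mass term obtained by integration above
L_ms = -m**2/(64*sp.pi*G)*(-b*sp.log(a/b) + a - b)

# substitute into 0 = -(m^2 b / k) ln(a/b) + k b dL_ms/db (at fixed a)
residual = -m**2*b/k*sp.log(a/b) + k*b*sp.diff(L_ms, b)
assert sp.simplify(residual) == 0
```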
One can also treat the case $w=0$ using l'H\^{o}pital's rule for the indeterminate form $\frac{0}{0}.$ One has \begin{eqnarray} \lim_{w \rightarrow 0} \frac{m^2}{64 \pi G} \left[ \frac{ \sqrt{-g} }{w-1} + \frac{ \sqrt{-g}^w \sqrt{-\eta}^{1-w} }{ w(1-w)} - \frac{ \sqrt{-\eta} }{ w} \right]= \nonumber \\ \lim_{w \rightarrow 0} \frac{m^2}{64 \pi G} \left[ \frac{ w\sqrt{-g} - \sqrt{-g}^w \sqrt{-\eta}^{1-w} - (w-1) \sqrt{-\eta} }{w^2-w} \right] = \nonumber \\ \lim_{w \rightarrow 0} \frac{m^2}{64 \pi G} \left[ \frac{\sqrt{-g} - \sqrt{-g}^w \sqrt{-\eta}^{1-w}(\ln\sqrt{-g} - \ln\sqrt{-\eta}) - \sqrt{-\eta} }{2w-1} \right] = \nonumber \\ \frac{m^2}{64 \pi G} ( -\sqrt{-g} + \sqrt{-\eta} \ln\sqrt{-g} - \sqrt{-\eta}\ln\sqrt{-\eta} + \sqrt{-\eta} ), \end{eqnarray} in agreement with the formula given above. To sum up, while the $w=1$ case needed some special treatment and the $w=0$ case needed a great deal of special treatment, every real value of $w$ yields a (distinct) universally coupled massive scalar gravity. Thus we have found uncountably infinitely many massive scalar gravities, all derived by universal coupling, that give finite-range rivals to Nordstr\"{o}m's massless scalar theory. These theories all provide a relativistic embodiment of the Seeliger-Neumann finite-range modification of Newtonian gravity. There might be still other massive scalar gravities worthy of discovery, so no claim of exhaustiveness is made.
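Both limiting cases can also be confirmed by computer algebra rather than by hand (a sympy sketch; $a$ and $b$ again stand for $\sqrt{-g}$ and $\sqrt{-\eta}$, and the overall factor $m^2/64\pi G$ is dropped):

```python
import sympy as sp

w = sp.Symbol('w')
a, b = sp.symbols('a b', positive=True)  # a = sqrt(-g), b = sqrt(-eta)

# the bracketed expression whose w -> 1 and w -> 0 limits are taken above
expr = sp.together(a/(w - 1) + a**w*b**(1 - w)/(w*(1 - w)) - b/w)

lim1 = sp.limit(expr, w, 1)  # expect  a - a*ln(a/b) - b
lim0 = sp.limit(expr, w, 0)  # expect -a + b*ln(a/b) + b
assert sp.simplify(sp.expand_log(lim1 - (a - a*sp.log(a/b) - b))) == 0
assert sp.simplify(sp.expand_log(lim0 - (-a + b*sp.log(a/b) + b))) == 0
```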
\subsection{All Cases in $w=0$ Exponential Variables} Above the infinite family of theories was presented both using the bimetric variables $\sqrt{-g}$ and $\sqrt{-\eta}$ and using a series expansion of each theory in its own adapted perturbative field $\tilde{\gamma}_w.$ As was just seen, the $w=0$ case suggests the relationship $$ \sqrt{-g} = \exp(8 \sqrt{\pi G} \gamma) \sqrt{-\eta};$$ this field $\gamma$ (with no tilde and no density weight) is a neutral, ecumenical choice for expressing all of the massive gravities in a commensurable fashion---as compared to the series expansions above, which use different fields for different theories. The result is $$\mathcal{L}_{ms} = \frac{ m^2 \sqrt{-\eta} }{64 \pi G} \frac{ [ w e^{8 \gamma \sqrt{\pi G}} - e^{8w \gamma \sqrt{\pi G}} + 1-w ]}{w(w-1)}$$ for $w \neq 0, 1;$ these special cases are readily handled by l'H\^{o}pital's rule. Using the series expansion for the exponential function, which converges everywhere, one has $$\mathcal{L}_{ms,w} = -\frac{ m^2 \sqrt{-\eta} }{64 \pi G} \sum_{j=2}^{\infty} \frac{ (8 \gamma \sqrt{\pi G})^j}{j!} \sum_{i=0}^{j-2} w^i.$$ I will not attempt to \emph{derive} all the infinitely many theories using the $w=0$ exponential change of variables. While such a derivation must be possible in some sense, the premises might look contrived by virtue of the apparently non-linear form of some of the terms. Thus the role of the $w$-adapted field variables in the context of discovery is evident. They allow infinitely many derivations to succeed using a manifestly free field \emph{via} a quadratic Lagrangian density, coupled to the total stress-energy tensor's trace, without powers of $\gamma$ in the coefficients. \section{Stability} In the interest of avoiding runaway solutions due to a potential energy with no lower bound, one wants to investigate the behavior of the algebraic mass/self-interaction term.
One might worry, for example, about theories in which this potential behaves like an odd polynomial (or worse) for large positive values of the gravitational field $\gamma$ or some relative thereof, or for large negative values unless the singularity as $\sqrt{-g} \rightarrow 0$ matters (in fact it will prove helpful) \cite{VenezianoScalarBoson}. While such strong fields might invalidate the perturbation expansions sometimes assumed in this paper, a perturbative treatment at least suggests where trouble spots might be found. For theories with the self-interaction potential behaving like an even polynomial (or certain kinds of infinite series), that sort of instability is not an issue, but correct physical interpretation requires checking whether $\gamma = 0$ or the like is the true vacuum \cite{VenezianoScalarBoson}. Checking these issues for all values of $w$ would be a substantial task, but it is not difficult to check some interesting cases. Veneziano remarks that the Freund-Nambu theory is satisfactory on this count; I observe that for sufficiently negative values of the field there is a crushing singularity, but the mass term repels from it. While it is possible to check various isolated cases, treating all the infinitely many theories in a perturbative manner is not viable. There is a further drawback, which takes a disjunctive form depending on the choice of field variable. In the $w$-adapted variables $\tilde{\gamma}$, the gravitational field means different things for different theories. On the other hand, if one uses the neutral $w=0$ field, then all theories give an infinite series---no case is a polynomial, making the analysis difficult.
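Numerically, however, the neutral-variable series is harmless: truncating it reproduces the closed form to high accuracy (a sketch in Python, with $x = 8\gamma\sqrt{\pi G}$, the overall factor $m^2\sqrt{-\eta}/64\pi G$ stripped off, and illustrative function names):

```python
import math

def L_closed(w, x):
    # [w e^x - e^{wx} + 1 - w] / (w(w-1)), the closed form for w != 0, 1
    return (w*math.exp(x) - math.exp(w*x) + 1 - w) / (w*(w - 1))

def L_series(w, x, jmax=40):
    # -sum_{j>=2} x^j/j! * sum_{i=0}^{j-2} w^i, the commensurable expansion
    total = 0.0
    for j in range(2, jmax + 1):
        total += x**j / math.factorial(j) * sum(w**i for i in range(j - 1))
    return -total

for w in (0.3, 0.5, 2.0):
    for x in (0.1, 1.0):
        assert abs(L_closed(w, x) - L_series(w, x)) < 1e-9
```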
Fortunately one can avoid perturbative treatments altogether and extremize the mass-interaction part of the Lagrangian with respect to $\sqrt{-g}.$ One readily finds that the only critical point is the expected vacuum $\sqrt{-g}=\sqrt{-\eta}$ (which gives $\tilde{\gamma}=0$ for all $w$) and that it is indeed the ground state for all $w.$ In this sense massive scalar gravity is stable for all values of $w$. Some of the theories repel infinitely from the singularity $\sqrt{-g}=0,$ while others do not. \section{Why Canonical Tensor Derivation Is Simple Only for $w=\frac{1}{2}$ Theory} It is not difficult to see why Freund and Nambu discovered in effect the $w=\frac{1}{2}$ theory but not any of the other massive scalar gravities found here. They use the canonical stress-energy tensor in its standard simple form, as in their equation 3b, where the trace is given as $$ \frac{ \partial \mathcal{L} }{ \partial \phi,_{\mu} } \phi,_{\mu} -4\mathcal{L}.$$ It is well known that one is permitted to add terms with automatically vanishing divergence to the stress-energy tensor, terms sometimes called ``curls'' \cite{Anderson} by virtue of their resemblance to the vector calculus theorem that the divergence of a curl is $0$ (itself a consequence of $1-1=0$). It is quite understandable that in a 3-page paper Freund and Nambu did not consider this option (though terms something like this were studied, still in a non-gravitational context, by Mack, by Chang and Freund, and by Aurilia \cite{MackDilatation,ChangFreundScalar,AuriliaBrokenConformal}). 
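The extremization of the mass term in the Stability section can likewise be reproduced with a computer algebra system (a sympy sketch; $a$ and $b$ stand for $\sqrt{-g}$ and $\sqrt{-\eta}$, and the factor $m^2/64\pi G$ is dropped):

```python
import sympy as sp

a, b, w = sp.symbols('a b w', positive=True)  # a = sqrt(-g), b = sqrt(-eta)
# potential V = -L_ms for the generic bimetric mass term
V = -(a/(w - 1) + a**w*b**(1 - w)/(w*(1 - w)) - b/w)

dV = sp.diff(V, a)
assert sp.simplify(dV.subs(a, b)) == 0                   # a = b is a critical point
assert sp.simplify(sp.diff(V, a, 2).subs(a, b) - 1/b) == 0  # second derivative 1/b > 0
```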
One can show that deriving the above massive scalar gravities from the canonical tensor for arbitrary $w$ requires including in the trace of the canonical stress-energy tensor the d'Alembertian of the term $$ \frac{ \tilde{\gamma} }{2 \sqrt{\pi G}} - \frac{ (1 + 8w \sqrt{\pi G} \tilde{\gamma})^{\frac{1}{2w} }}{ 8 \pi G } + \frac{1}{8 \pi G}, $$ as expressed in Cartesian coordinates, the use of which is advantageous when the canonical stress-energy tensor is employed. This extra term, which has second derivatives, vanishes if and only if $w=\frac{1}{2}.$ Thus neglecting this term causes the derivation to fail except in the case $w=\frac{1}{2}.$ The scalar gravity theory of Dehnen and Frommert \cite{DehnenMassiveScalar} is equivalent to the Freund-Nambu theory \cite{FreundNambu} if one restricts the latter to matter fields with conformally invariant kinetic terms. This class includes not only electromagnetism and Yang-Mills theories (spin $1$), but also fermions (spin $\frac{1}{2}$). It does not include standard scalar fields, however, an important limitation on which they do not remark. With $\phi$ being gravity and $\chi$ being matter, Freund and Nambu find that a standard scalar field coupled to gravity has an interaction involving $\phi (\partial \chi)^2.$ Dehnen and Frommert assume without justification that no such terms exist, as well as assuming that no nonlinear terms $\phi (\partial \phi)^2$ arise. The latter terms can be absorbed by a nonlinear field redefinition of the gravitational potential $\phi.$ For electromagnetism and Yang-Mills fields, taking the potentials to be covectors (as one usually does) removes the $\phi (\partial \chi)^2$ terms. For spinors, I recall from above that redefining the spinor fields to have density weight $\frac{3}{8}$ removes the coupling of the gravitational potential $\phi$ to the spinor (in this case a term roughly like $\phi \psi \partial \psi$).
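That the problematic term vanishes identically only for $w=\frac{1}{2}$ can be confirmed directly (a sympy sketch):

```python
import sympy as sp

g, G, w = sp.symbols('gamma G w', positive=True)
# the term whose d'Alembertian enters the canonical-tensor trace
T = (g/(2*sp.sqrt(sp.pi*G))
     - (1 + 8*w*sp.sqrt(sp.pi*G)*g)**(1/(2*w))/(8*sp.pi*G)
     + 1/(8*sp.pi*G))

assert sp.simplify(T.subs(w, sp.Rational(1, 2))) == 0  # vanishes identically at w = 1/2
assert sp.simplify(T.subs(w, 1)) != 0                  # e.g. w = 1 leaves a residual
```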
The ability of the Dehnen-Frommert theory to accommodate spin $\frac{1}{2}$ and spin $1$ fields was exploited in some subsequent papers where only those matter fields were entertained \cite{DehnenHiggsScalar,DehnenMassiveScalarFermion}. The Dehnen-Frommert derivation is not so simple because it involves multiple field redefinitions motivated by the need to recover from the assumption of purely nonderivative coupling between gravity and matter or the desire to derive massive scalar gravity in a Higgs-looking fashion. In view of the disadvantages of the (unsymmetrized, unimproved) canonical energy-momentum tensor for various fields---asymmetry, gauge dependence, and nonvanishing trace in some contexts where one might have wanted the trace to vanish \cite{ForgerRomerStress}---the use of the metrical definition is a good deal more convenient. In principle one could use a Belinfante-Rosenfeld equivalence theorem and employ the symmetric Belinfante tensor or the like. However, it seems not very practical to derive an unknown Lagrangian density for non-scalar matter fields by looking for solutions to a messy identity involving all sorts of partial derivatives of the Lagrangian density with respect to the field derivatives. It seems not accidental that thus far only \emph{via} the metrical definition has an infinity, or even a variety, of universally coupled scalar gravities been derived. As Josep Pons has recently recalled \cite{PonsEnergy}, the process for making a flat space-time theory formally generally covariant admits considerable freedom in choosing density weights for fields: the comma-goes-to-semicolon rule thus has considerable ambiguity which tends to go unnoticed. While for many purposes the choice of density weight does not matter much (other than affecting the forms of the Lie and covariant derivatives), the metrical stress-energy tensor, in particular its trace, is significantly affected. 
Above I observed that the use of a suitably densitized spinor allowed $\sqrt{-g}$ to disappear completely from the massless Dirac equation; I observe that one can do the same thing using conformally invariantly coupled scalar fields. The scalar field is replaced by a scalar density $\phi_w$ of weight $w= \frac{n-2}{2n}$ (which comes to $\frac{1}{4}$ in four space-time dimensions) by absorbing suitable powers of $\sqrt{-g}$; the pleasant result is that $\sqrt{-g}$ thereupon disappears completely from the theory. The expression $$ \sqrt{-g}^\frac{2}{n} \left[\nabla^2 \phi_w - \frac{n-2}{4(n-1)} R \phi_w \right]$$ is the same for all conformally related metrics, from which it follows that $\sqrt{-g}$ simply cancels out altogether. Multiplying this expression by $\phi_w$ gives a scalar density of weight $1,$ which is thus a suitable Lagrangian density. One should be able to expand the expression out using $\hat{g}_{\mu\nu}$ and $\sqrt{-g}$ and watch the latter disappear; the result will not \emph{look} like a scalar density, so the expression $ \sqrt{-g}^\frac{2}{n}[\nabla^2 \phi_w - \frac{n-2}{4(n-1)} R \phi_w]$ has some advantages. One could also drop some total divergences to remove second derivatives of $\phi_w$ and/or the metric, if desired, perhaps at the expense of manifest covariance (somewhat like the Einstein $\Gamma\Gamma$ Lagrangian density for General Relativity). The absence of $\sqrt{-g}$ immediately implies a traceless metrical stress-energy tensor even off-shell. The use of densitized scalar and spinor fields thus allows one to identify and reject $\sqrt{-g}$ as surplus structure in some notable contexts.
\section{Introduction} In recent years, the effort to give a physical explanation to the cosmic acceleration observed today \citep{cosmic_acceleration} has attracted a good amount of interest in Fourth Order Gravity (FOG), considered as a viable mechanism to explain the cosmic acceleration by extending the geometric sector of the field equations without introducing Dark Matter and Dark Energy. Other issues come from Astrophysics. For example, the observed Pioneer anomaly problem \citep{anderson} can be framed within the same approach \citep{bertolami}, and a systematic analysis of such theories at small, medium and large scales is therefore urgent. Another main topic is the flatness of the rotation curves of spiral galaxies. In particular, a delicate point is to address the weak field limit of any theory of Gravity, since two main issues are extremely relevant: $i)$ preserving the results of General Relativity (GR) at local scales, since they fit Solar System experiments and observations well; $ii)$ enclosing in a self-consistent and comprehensive picture phenomena such as anomalous acceleration or Dark Matter at galactic scales. The idea of extending Einstein's theory of Gravitation is fruitful and economic also with respect to several attempts which try to solve problems by adding new and, most of the time, unjustified ingredients in order to give a self-consistent picture of dynamics. Both issues could be solved by changing the gravitational sector, \emph{i.e.} the \emph{l.h.s.} of the field equations. In particular, relaxing the hypothesis that the gravitational Lagrangian has to be only a linear function of the Ricci curvature scalar $R$, as in the Hilbert-Einstein formulation, one can take into account an effective action where the gravitational Lagrangian includes a generic function of the Ricci scalar ($f(R)$-Gravity). In this communication, we report the general approach to the Weak Field Limit of $f(R)$-Gravity in the metric approach.
We deduce the field equations and derive the weak field potentials with corrections to the Newtonian potential. \section{The Field Equations and their Solutions} Let us start with a general class of $f(R)$-Gravity given by the action \begin{eqnarray}\label{HOGaction} \mathcal{A}\,=\,\int d^{4}x\sqrt{-g}[f(R)+\mathcal{X}\mathcal{L}_m] \end{eqnarray} where $f$ is an unspecified function of the curvature invariant $R$. The term $\mathcal{L}_m$ is the minimally coupled ordinary matter contribution. In the metric approach, the field equations are obtained by varying (\ref{HOGaction}) with respect to $g_{\mu\nu}$. We get \begin{eqnarray}\label{fieldequationHOG} f'R_{\mu\nu}-\frac{f}{2}g_{\mu\nu}-f'_{;\mu\nu}+g_{\mu\nu}\Box f'\,=\,\mathcal{X}\,T_{\mu\nu} \end{eqnarray} Here, $T_{\mu\nu}\,=\,-\frac{1}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\mathcal{L}_m)}{\delta g^{\mu\nu}}$ is the energy-momentum tensor of matter, while $f'\,=\,\frac{df(R)}{dR}$, $\Box\,=\,{{}_{;\sigma}}^{;\sigma}$ and $\mathcal{X}\,=\,8\pi G$\footnote{Here we use the convention $c\,=\,1$.}. The paradigm of the Weak Field or Newtonian limit consists in expanding the spherically symmetric metric tensor with respect to the dimensionless quantity $v$. To solve the problem we must determine the metric tensor $g_{\mu\nu}$ at each order of the expansion (for details see \citep{newtonian_limit_fR_1,newtonian_limit_fR_2}). From the lowest order of the field equations (\ref{fieldequationHOG}) we have $f(0)\,=\,0$, which trivially follows from the assumption that the space-time is asymptotically Minkowskian. Such a result suggests a first consideration: if the Lagrangian can be expanded around a vanishing value of the Ricci scalar, there is no cosmological constant contribution in $f(R)$-Gravity. Let us consider a ball-like source with mass $M$ and radius $\xi$.
The energy-momentum tensor $T_{\mu\nu}$ has the components $T_{tt}\,\sim\,T^{(0)}_{tt}\,=\,\rho$ and $T_{ij}\,=\,T_{0i}\,=\,0$ where $\rho$ is the mass density (we are not interested in the internal structure). The field equations (\ref{fieldequationHOG}) at $\mathcal{O}(2)$ - order become\footnote{We set for simplicity $f'(0)\,=\,1$ (otherwise we have to renormalize the coupling constant $\mathcal{X}$ in the action (\ref{HOGaction})).} \begin{eqnarray}\label{PPN-field-equation-general-theory-fR-O2} \left\{\begin{array}{ll} R^{(2)}_{tt}-\frac{R^{(2)}}{2}+\frac{\triangle R^{(2)}}{3m^2}\,=\,\mathcal{X}\,\rho\\\\ \frac{\triangle R^{(2)}}{m^2}-R^{(2)}\,=\,\mathcal{X}\,\rho \end{array}\right. \end{eqnarray} where $\triangle$ is the Laplacian in flat space, $R^{(2)}_{tt}$ is the time component of the Ricci tensor and $m^{-2}\,\doteq\,-3f''(0)$. The second line of (\ref{PPN-field-equation-general-theory-fR-O2}) is the trace of the field equations (\ref{fieldequationHOG}) at $\mathcal{O}(2)$ - order. Note that if $f\,\rightarrow\,R$ (\emph{i.e.} $m^2$ diverges) the equations (\ref{PPN-field-equation-general-theory-fR-O2}) reduce to those of GR. The solution for the Ricci scalar $R^{(2)}$ in the second line of (\ref{PPN-field-equation-general-theory-fR-O2}) is \begin{eqnarray}\label{scalar_ricci_sol_gen} R^{(2)}(t,\textbf{x})\,=\,m^2\mathcal{X}\int d^3\mathbf{x}'\mathcal{G}(\mathbf{x},\mathbf{x}')\rho(t,\mathbf{x}') \end{eqnarray} where $\mathcal{G}(\mathbf{x},\mathbf{x}')$ is the Green function of the field operator $\triangle-m^2$.
The solution for $g^{(2)}_{tt}$, from the first line of (\ref{PPN-field-equation-general-theory-fR-O2}) by considering that $R^{(2)}_{tt}\,=\,\frac{1}{2}\triangle g^{(2)}_{tt}$, is \begin{eqnarray}\label{new_sol} g^{(2)}_{tt}(t,\mathbf{x})\,=\,-\frac{\mathcal{X}}{2\pi}\int d^3\textbf{x}'\frac{\rho(t,\textbf{x}')}{|\textbf{x}- \textbf{x}'|} \nonumber\\\\\nonumber -\frac{1}{4\pi}\int d^3\textbf{x}'\frac{R^{(2)}(t,\textbf{x}')}{|\textbf{x}- \textbf{x}'|}-\frac{2}{3m^2}R^{(2)}(t,\textbf{x}) \end{eqnarray} We can check immediately that when $f\rightarrow R$ we find $g^{(2)}_{tt}(t,\textbf{x})\rightarrow-2G\int d^3\textbf{x}'\frac{\rho(t,\textbf{x}')}{|\textbf{x}- \textbf{x}'|}$ \citep{postnewtonian_limit_fR}. The solution (\ref{new_sol}) is the gravitational potential $\Phi\,=\,g^{(2)}_{tt}/2$ of $f(R)$-Gravity. We note that $\Phi$ has a Yukawa-like behavior, depending on a characteristic length on which it evolves. As is evident, Gauss's theorem is not valid, since the force law is not $\propto|\mathbf{x}|^{-2}$. The equivalence between a spherically symmetric distribution and a point-like distribution is not valid, and how the matter is distributed in space is very important \citep{newtonian_limit_R_Ric}. From the solution (\ref{new_sol}) we can affirm that it is possible to have non-Ricci-flat solutions in vacuum: \emph{Higher Order Gravity mimics a matter source}. It is evident from (\ref{new_sol}) that the Ricci scalar acts as a ``matter source'' which can curve the spacetime even in the absence of ordinary matter. Moreover, the solutions depend only on the first two derivatives of $f$ at $R\,=\,0$. Thus theories differing from the third derivative onward admit the same solutions.
\section{The Spatial Behaviors of Gravitational Potential} If $m^2\,>\,0$, the Green function with spherical symmetry is \begin{eqnarray}\label{green_function1} \mathcal{G}(\mathbf{x},\mathbf{x}')\,=\,-\frac{1}{4\pi} \frac{e^{-\mu|\mathbf{x}-\mathbf{x}'|}} {|\mathbf{x}-\mathbf{x}'|} \end{eqnarray} where we defined $\mu\,\doteq\,\sqrt{|m^2|}$. Then the spatial behaviors of the Ricci scalar (\ref{scalar_ricci_sol_gen}) and of the gravitational potential $\Phi$, if $\rho\,=$ constant, are shown in Figs. \ref{plotricciscalar} and \ref{plotpontential00}. \begin{figure}[] \resizebox{\hsize}{!}{\includegraphics[clip=true]{fig1.eps}} \caption{ \footnotesize Plot of the dimensionless function $\zeta^4\mu^{-3}{r_g}^{-1}R^{(2)}$ for $\zeta\,\doteq\,\mu\xi\,=\,0.5$, representing the spatial behavior of the Ricci scalar at second order.} \label{plotricciscalar} \end{figure} \begin{figure}[] \resizebox{\hsize}{!}{\includegraphics[clip=true]{fig2.eps}} \caption{ \footnotesize Plot of the metric potential $2\zeta \mu^{-1}{r_g}^{-1}\Phi$ vs distance from the central mass with $\zeta\,\doteq\,\mu\xi\,=\,0.5$. The dashed line is the GR behavior.} \label{plotpontential00} \end{figure} For fixed values of the distance $|\mathbf{x}|$, the solution $g^{(2)}_{tt}$ depends on the value of the radius $\xi$; thus Gauss's theorem does not work even though the Bianchi identities hold. We can affirm: \emph{the potential depends not only on the total mass but also on the mass distribution in space}. It is interesting to note that the gravitational potential assumes a smaller value than its GR counterpart, so in terms of gravitational attraction we have a deeper potential well. Moreover, if the mass distribution occupies a larger volume, the potential increases, and vice versa.
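That (\ref{green_function1}) is the appropriate Green function is easy to verify away from the origin, where $(\triangle-\mu^2)\mathcal{G}$ must vanish (a sympy sketch using the radial part of the flat-space Laplacian):

```python
import sympy as sp

r, mu = sp.symbols('r mu', positive=True)
Gfun = -sp.exp(-mu*r)/(4*sp.pi*r)   # the Yukawa Green function of the operator Δ - μ²

# radial Laplacian: (1/r²) d/dr ( r² dG/dr )
lap = sp.diff(r**2*sp.diff(Gfun, r), r)/r**2
assert sp.simplify(lap - mu**2*Gfun) == 0   # (Δ - μ²)G = 0 for r > 0
```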
If $m^2\,<\,0$ the Green function assumes the ``oscillating'' expression \begin{eqnarray}\label{green_function_2} \mathcal{G}(\mathbf{x},\mathbf{x}')\,=\,-\frac{\cos \mu|\mathbf{x}-\mathbf{x}'|+ \sin \mu|\mathbf{x}-\mathbf{x}'|}{4\pi\,|\mathbf{x}-\mathbf{x}'|} \end{eqnarray} Now the Ricci scalar (\ref{scalar_ricci_sol_gen}) and the gravitational potential $\Phi$ are shown in Figs. \ref{plotricciscalar_oscil} and \ref{plotpontential00_oscil}. \begin{figure}[] \resizebox{\hsize}{!}{\includegraphics[clip=true]{fig6.eps}} \caption{ \footnotesize Plot of the dimensionless function $\zeta^4\mu^{-3}{r_g}^{-1}R^{(2)}$ with $\zeta\,\doteq\,\mu\xi\,=\,0.5$, representing the spatial behavior of the Ricci scalar at second order in the oscillating case.} \label{plotricciscalar_oscil} \end{figure} \begin{figure}[] \resizebox{\hsize}{!}{\includegraphics[clip=true]{fig7.eps}} \caption{ \footnotesize Plot of the metric potential $2\zeta \mu^{-1}{r_g}^{-1}\Phi$ vs distance from the central mass with the choice $\zeta\,\doteq\,\mu\xi\,=\,0.5$ in the oscillating case. The dashed line is the GR behavior.} \label{plotpontential00_oscil} \end{figure} Finally, in the limit of a point-like source, \emph{i.e.} $\rho\,=\,M\,\delta(\mathbf{x})$, we get \begin{eqnarray}\label{sol_new_pfR} \left\{\begin{array}{ll} R^{(2)}\,=\,-r_g\mu^2\frac{e^{-\mu|\mathbf{x}|}}{|\mathbf{x}|} \\\\ \Phi\,=\,-\frac{r_g}{2}\biggl(\frac{1}{|\textbf{x}|} +\frac{1}{3}\frac{e^{-\mu|\mathbf{x}|}}{|\mathbf{x}|}\biggr) \end{array}\right. \end{eqnarray} where $r_g\,=\,2GM$ is the Schwarzschild radius. If $f(R)\,\rightarrow\,R$ we recover the gravitational potential of GR. To conclude this section, we show in Fig. \ref{plotforce} the comparison between the gravitational forces induced in GR and in $f(R)$-Gravity in the Newtonian limit. Clearly the force, too, is stronger than in GR.
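The point-like solutions (\ref{sol_new_pfR}) can be checked against the $\mathcal{O}(2)$ field equations in vacuum, i.e. for $|\mathbf{x}|>0$ (a sympy sketch, using $R^{(2)}_{tt}=\triangle\Phi$ and $m^2=\mu^2$):

```python
import sympy as sp

r, mu, rg = sp.symbols('r mu r_g', positive=True)

def lap(f):
    # radial part of the flat-space Laplacian
    return sp.diff(r**2*sp.diff(f, r), r)/r**2

R2  = -rg*mu**2*sp.exp(-mu*r)/r                    # Ricci scalar at second order
Phi = -rg/2*(1/r + sp.exp(-mu*r)/(3*r))            # Yukawa-corrected potential

assert sp.simplify(lap(R2) - mu**2*R2) == 0                   # trace equation, vacuum
assert sp.simplify(lap(Phi) - R2/2 + lap(R2)/(3*mu**2)) == 0  # tt equation, vacuum
```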
\begin{figure}[] \resizebox{\hsize}{!}{\includegraphics[clip=true]{fig5.eps}} \caption{ \footnotesize Comparison between the gravitational forces induced by GR and by $f(R)$-Gravity with $\zeta\,\doteq\,\mu\xi\,=\,0.5$. The dashed line is the GR behavior.} \label{plotforce} \end{figure} \section{Conclusions} The Weak Field Limit is a crucial issue that has to be addressed in any relativistic theory of Gravity. It is also the test bed of such theories for comparison with the well-founded experimental results of GR, at least at the Solar System level. The general feature that emerges from the Weak Field Limit is that a correction to the Newtonian potential naturally comes out. This correction is a Yukawa-like term bringing a characteristic mass and length. Conversely, the standard Newtonian potential is just a feature emerging in the particular case $f(R)\,=\,R$. It is well known that the new features related to FOG could have interesting applications in other fields of Astrophysics such as galactic dynamics, large scale structure and Cosmology, in order to address the Dark Matter and Dark Energy issues. The fact that such ``dark'' structures have not been definitely discovered at fundamental quantum scales, yet operate at large astrophysical (infra-red) scales, could be due to these corrections to the Newtonian potential, which can hardly be detected at laboratory or Solar System scales. Finally, the presence of unavoidable light massive modes could open new opportunities also for gravitational wave detection in experiments like VIRGO, LIGO and the forthcoming LISA.
\section{Introduction} The design of nonlinear state observers has been an area of constant research for the last three decades and, as a result, a wide variety of design techniques for nonlinear observers exist in the literature. Despite important progress, many outstanding problems still remain unsolved. A class of nonlinear systems receiving special attention is the so-called Lipschitz systems, in which the mathematical model of the system satisfies a Lipschitz continuity condition. Many practical systems satisfy the Lipschitz condition, at least locally. Roughly speaking, in these systems the rate of growth of the trajectories is bounded by the rate of growth of the states. Observer design for Lipschitz systems was first considered by Thau in his seminal paper \cite{Thau}, where he obtained a sufficient condition to ensure asymptotic stability of the observer. Thau's condition provides a very useful analysis tool but does not address the fundamental design problem. Encouraged by Thau's result, several authors studied observer design for Lipschitz systems \cite{Raghavan, Rajamani, Rajamani2, Aboky, Pertew}. All these methods share a common structure for the error dynamics of the nonlinear systems; namely, the error dynamics can be represented as a linear system with a sector-bounded nonlinearity in feedback. This type of problem is both theoretically and numerically tractable because it can be formulated as a convex optimization problem \cite{Howell}, \cite{Boyd}. Raghavan formulated a procedure to tackle the design problem. His algorithm is based on solving an algebraic Riccati equation to obtain the static observer gain \cite{Raghavan}. Unfortunately, Raghavan's algorithm often fails to succeed even when the usual observability assumptions are satisfied. Raghavan showed that the observer design might still be tractable using state transformations.
Another shortcoming of his algorithm is that it does not provide insight into what conditions must be satisfied by the observer gain to ensure stability. A rather complete solution of these problems was later presented by Rajamani \cite{Rajamani}. Rajamani obtained necessary and sufficient conditions on the observer matrix that ensure asymptotic stability of the observer error and formulated a design procedure based on a gradient-based optimization method. He also discussed the equivalence between the stability condition and the minimization of the $H_{\infty}$ norm of a system in the standard form. However, he pointed out that the design problem is not solvable as a standard $H_{\infty}$ optimization problem, since the regularity assumptions required in the $H_{\infty}$ framework are not satisfied. Using a Riccati-based approach, Pertew et al. \cite{Pertew} showed that the condition introduced in \cite{Rajamani} is related to a modified $H_{\infty}$ norm minimization problem satisfying all of the regularity assumptions. It is worth mentioning that the $H_{\infty}$ problem in \cite{Rajamani} is associated with the nominal stability of the observer error dynamics, while no disturbance attenuation is considered. Moreover, in all of the above references, the system model is assumed to be perfectly known, with no uncertainty or disturbance. In order to guarantee robustness against unknown exogenous disturbances, nonlinear $H_{\infty}$ filtering was introduced by De Souza et al. \cite{deSouza1, deSouza2} via the Riccati approach. In an $H_{\infty}$ observer, the $\mathcal{L}_{2}$-induced gain from the norm-bounded exogenous disturbance signals to the observer error is guaranteed to be below a prescribed level. On the other hand, the restrictive regularity assumptions of the Riccati approach can be relaxed using linear matrix inequalities (LMIs).
In this paper, we introduce a novel nonlinear $H_{\infty}$ observer design method for Lipschitz nonlinear systems based on the LMI framework. Our solution follows the same approach as the original problem of Thau and tackles the design problem directly. Unlike the methods of \cite{Raghavan, Rajamani, Pertew}, the proposed LMIs can be efficiently solved using commercially available software, without any tuning parameters. In all of the aforementioned references, the Lipschitz constant of the system is assumed to be known and fixed. In this paper, the resulting LMIs are formulated so as to be linear in the Lipschitz constant of the nonlinear system. This adds an important extra feature to the observer: robustness against nonlinear uncertainty. By maximizing the admissible Lipschitz constant, the observer can tolerate some nonlinear uncertainty, for which an explicit norm-wise bound is derived. In addition to this robustness, we extend our result so that the disturbance attenuation level of the observer (its $H_{\infty}$ feature) can be optimized as well. Then, both the admissible Lipschitz constant and the disturbance attenuation level are optimized simultaneously through multiobjective convex optimization. The rest of the paper is organized as follows: Section 2 introduces the problem and some background. In Section 3, the LMI formulation of the problem and our observer design algorithm are proposed, and the guaranteed decay rate of the observer and its robustness against nonlinear uncertainty are discussed. In Section 4, we extend the result of Section 3 to an $H_{\infty}$ nonlinear observer design method. Section 5 is devoted to the simultaneous optimization of the observer features through multiobjective optimization. In Section 6, the performance of the proposed observer is demonstrated through illustrative examples.
\section{Preliminaries and Problem Statement} Consider the following continuous-time nonlinear system \begin{align} \dot{x}(t)&=Ax(t)+ \Phi(x,u)\hspace{7mm} A \in\mathbb{R}^{n\times n}\label{con1}\\ y(t)&=Cx(t)\hspace{23mm} C \in\mathbb{R}^{p\times n}\label{con2} \end{align} where $x\in {\mathbb R}^{n}$, $u\in {\mathbb R}^{m}$, $y\in {\mathbb R}^{p}$ and $\Phi(x,u)$ contains nonlinearities of second order or higher. We assume that the system (\ref{con1})-\eqref{con2} is locally Lipschitz in a region $\mathcal{D}$ containing the origin with respect to $x$, uniformly in $u$, i.e.: \begin{eqnarray} \|\Phi(x_{1},u^{*})-\Phi(x_{2},u^{*})\|\leqslant\gamma\|x_{1}-x_{2}\| \hspace{7mm}\forall \, x_{1},x_{2}\in \mathcal{D} \end{eqnarray} where $\|.\|$ is the induced 2-norm, $u^{*}$ is any admissible control signal and $\gamma>0$ is called the Lipschitz constant. If the nonlinear function $\Phi$ satisfies the Lipschitz continuity condition globally in $\mathbb{R}^{n}$, then the results will be valid globally. Consider now an observer of the following form \begin{align} \dot{\hat{x}}(t)=A\hat{x}(t)+\Phi(\hat{x},u)+L(y-C\hat{x})\label{observer1}. \end{align} The observer error dynamics is given by \begin{align} e(t)&\triangleq x(t)-\hat{x}(t) \\\dot{e}(t)&=(A-LC)e(t)+\Phi(x,u)-\Phi(\hat{x},u).\label{error1} \end{align} The goal is to find a gain $L$ such that: \begin{itemize} \item In the absence of disturbances, the observer error dynamics is asymptotically stable, i.e.\ $\lim_{t\rightarrow \infty} e(t)=0$. \item In the presence of unknown exogenous disturbances, a prescribed disturbance attenuation level is guaranteed ($H_{\infty}$ performance). \end{itemize} The result is simple and yet efficient, with no regularity assumptions. The observer error dynamics is asymptotically stable with a guaranteed decay rate (the convergence is in fact exponential, as we will see). In addition, the observer is robust against nonlinear uncertainty and exogenous disturbances.
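As a quick numerical illustration of this setup, the following sketch simulates a plant of the form \eqref{con1}-\eqref{con2} together with the observer \eqref{observer1} by forward Euler integration. The $(A,C)$ pair and the gain $L$ are the ones reported in Example 1 below; the nonlinearity $\Phi(x)=[0,\ \sin(x_{1})]^{T}$ (Lipschitz constant $\gamma=1<\gamma^{*}=1.1933$) is a hypothetical choice used for demonstration only:

```python
import math

# Illustrative only: A = [[0, 1], [1, -1]], C = [0, 1] and the gain L are
# taken from Example 1 later in the paper; Phi(x) = [0, sin(x1)] is a
# hypothetical Lipschitz nonlinearity with gamma = 1 < gamma* = 1.1933.
L1, L2 = 56.8334, 21.9074

def plant_rhs(x):
    # x_dot = A x + Phi(x)
    return [x[1], x[0] - x[1] + math.sin(x[0])]

def observer_rhs(xh, y):
    # xh_dot = A xh + Phi(xh) + L (y - C xh)
    inj = y - xh[1]
    return [xh[1] + L1 * inj, xh[0] - xh[1] + math.sin(xh[0]) + L2 * inj]

def simulate(T=5.0, dt=1e-3):
    """Forward-Euler simulation; returns the final error norm ||x - xh||."""
    x, xh = [0.1, 0.0], [0.0, 0.0]          # different initial conditions
    for _ in range(int(T / dt)):
        y = x[1]                             # y = C x
        fx, fh = plant_rhs(x), observer_rhs(xh, y)
        x = [x[i] + dt * fx[i] for i in range(2)]
        xh = [xh[i] + dt * fh[i] for i in range(2)]
    return math.hypot(x[0] - xh[0], x[1] - xh[1])

e_final = simulate()
print(e_final)   # the estimation error is driven toward zero
```

Since $\gamma$ is below the admissible value $\gamma^{*}$, the error norm decays by several orders of magnitude over the simulated horizon, as required by the first design goal above.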
The admissible Lipschitz constant, which, as will be shown, determines the robustness margin against nonlinear uncertainty, and the disturbance attenuation level (the $H_{\infty}$ cost) are optimized through LMI optimization. \addtolength{\textheight}{-3cm} \section{An Algorithm for Nonlinear Observer Design} In this section an LMI approach to the nonlinear observer design problem introduced in Section 2 is proposed, and some performance measures of the observer are optimized. \subsection{Maximizing the Admissible Lipschitz Constant} We want to maximize the admissible Lipschitz constant of the nonlinear system (\ref{con1})-(\ref{con2}) for which the observer error dynamics is asymptotically stable. The following theorem states the main result of this section.\\ \emph{\textbf{Theorem 1.} Consider the Lipschitz nonlinear system (\ref{con1})-(\ref{con2}) along with the observer (\ref{observer1}). The observer error dynamics (\ref{error1}) is (globally) asymptotically stable with maximum admissible Lipschitz constant if there exist scalars $\epsilon > 0$ and $\xi > 0$ and matrices $P > 0$ and $F$ such that the following LMI optimization problem has a solution. } \begin{align} \hspace{-2cm}\min (\xi) \notag \end{align} \hspace{4cm}\emph{s.t.} \begin{align} &A^{T}P+PA-C^{T}F^{T}-FC < -I-\epsilon I \label{LMI1} \\&\left[ \begin{array}{cc} \frac{1}{2}\xi I & P \\ P & \frac{1}{2}\xi I \end{array} \right]>0 \label{LMI2} \end{align} \emph{Once the problem is solved,} \begin{align} L&=P^{-1}F \label{L1} \\\gamma^{*}&\triangleq\max(\gamma)=\xi^{-1} \end{align} \textbf{Proof:} Set $Q=I$. The original problem, as discussed in Section 2, can be written as \begin{equation} \hspace{-5cm} \min (\lambda_{max}(P)) \notag \end{equation} \hspace{3cm} s.t. \begin{align} (A-LC)^{T}P+P^{T}(A-LC)&=-I\label{lyap2}\\ 1-2\gamma\lambda_{max}(P)&>0\label{cond2} \\ P&>0 \end{align} which is a nonlinear optimization problem that is hard, if not impossible, to solve directly. We proceed by converting it into an LMI form.
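One step in this conversion replaces the spectral bound (\ref{cond2}), i.e.\ $\lambda_{max}(P)<1/(2\gamma)$, by a block LMI of the form (\ref{LMI2}) via the Schur complement. The equivalence can be sanity-checked numerically; the sketch below (using \texttt{numpy}, with a randomly generated $P$ rather than an actual LMI solution) verifies both directions:

```python
import numpy as np

def schur_block(a, P):
    """Block matrix [[a*I, P], [P, a*I]] as in (LMI2), with a = 1/(2*gamma)."""
    n = P.shape[0]
    return np.block([[a * np.eye(n), P], [P, a * np.eye(n)]])

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
P = M @ M.T + 0.1 * np.eye(4)             # a random P > 0
lam_max = np.linalg.eigvalsh(P)[-1]       # eigvalsh returns ascending order

# lam_max(P) < 1/(2*gamma)  <=>  schur_block(1/(2*gamma), P) > 0:
# the block matrix has eigenvalues a +/- lambda_i(P).
ok  = np.linalg.eigvalsh(schur_block(1.1 * lam_max, P))[0]   # a > lam_max
bad = np.linalg.eigvalsh(schur_block(0.9 * lam_max, P))[0]   # a < lam_max
print(ok > 0, bad < 0)
```

The smallest eigenvalue of the block matrix is $a-\lambda_{max}(P)$, which makes the equivalence transparent.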
A sufficient condition for the existence of a solution of (\ref{lyap2}) is \begin{equation} \exists\hspace{1mm}\epsilon>0, \hspace{1mm}(A-LC)^{T}P+P^{T}(A-LC)<-I-\epsilon I. \end{equation} The above can be written as \begin{equation} A^{T}P+PA-C^{T}L^{T}P-PLC<-I-\epsilon I \end{equation} which is a bilinear matrix inequality. Defining the new variable \begin{equation} F\triangleq PL\rightarrow L^{T}P^{T}=L^{T}P=F^{T} \end{equation} it becomes \begin{equation} A^{T}P+PA-C^{T}F^{T}-FC < -I-\epsilon I. \end{equation} In addition, since $P$ is positive definite, $\bar{\sigma}(P)=\lambda_{max}(P)$. So, from (\ref{cond2}) we have \begin{equation} \bar{\sigma}(P)<\frac{1}{2\gamma}\label{cond3} \end{equation} which is equivalent to \begin{equation} (\frac{1}{2\gamma})^{2}I-P^{T}P>0. \end{equation} Using the Schur complement lemma, \begin{equation} \left[ \begin{array}{cc} \frac{1}{2\gamma}I & P \\ P & \frac{1}{2\gamma}I \\ \end{array} \right]>0 \label{cond4} \end{equation} and defining $\xi=\frac{1}{\gamma}$, (\ref{LMI2}) is achieved. $\triangle$\\ \emph{\textbf{Proposition 1.}} {\emph{Suppose the actual Lipschitz constant of the system is $\gamma$ and the maximum admissible Lipschitz constant achieved by Theorem 1 is $\gamma^{*}$. Then, the observer designed based on Theorem 1 can tolerate any additive Lipschitz nonlinear uncertainty with Lipschitz constant less than or equal to $\gamma^{*}-\gamma$}}.\\ \textbf{Proof:} Assume a nonlinear uncertainty as follows \begin{align} \Phi_{\Delta}(x,u)&=\Phi(x,u)+\Delta\Phi(x,u) \\\dot{x}(t)&= Ax(t) + \Phi_{\Delta}(x,u) \end{align} where \begin{align} \|\Delta\Phi(x_{1},u)-\Delta\Phi(x_{2},u)\|\leqslant\Delta\gamma\|x_{1}-x_{2}\|. \end{align} By the triangle inequality, we have \begin{eqnarray} \|\Phi_{\Delta}(x_{1},u)-\Phi_{\Delta}(x_{2},u)\|&\leq&\notag \|\Phi(x_{1},u)-\Phi(x_{2},u)\|+\|\Delta\Phi(x_{1},u)-\Delta\Phi(x_{2},u)\|\notag \\&\leq& \gamma\|x_{1}-x_{2}\|+\Delta\gamma\|x_{1}-x_{2}\|.
\end{eqnarray} According to Theorem 1, $\Phi_{\Delta}(x,u)$ can be any Lipschitz nonlinear function with Lipschitz constant less than or equal to $\gamma^{*}$, \begin{equation} \|\Phi_{\Delta}(x_{1},u)-\Phi_{\Delta}(x_{2},u)\|\leq\gamma^{*}\|x_{1}-x_{2}\| \end{equation} hence we must have \begin{eqnarray} \gamma+\Delta\gamma\leq\gamma^{*}\rightarrow\Delta\gamma\leq\gamma^{*}-\gamma. \ \ \ \triangle \end{eqnarray} \emph{\textbf{Remark 1.}} If one wants to design an observer for a given system with known Lipschitz constant, then the LMI optimization problem can be reduced to an LMI feasibility problem (just satisfying the constraints), which is easier.\\ From Theorem 1, it is clear that the gain $L$ obtained by solving the LMI optimization problem leads to stable error dynamics for every member of the class of Lipschitz nonlinear functions with Lipschitz constant less than or equal to $\gamma^{*}$. Thus, it neglects the structure of the given nonlinear function. It is possible to take advantage of the structure of $\Phi(x,u)$ in addition to the fact that its Lipschitz constant is $\gamma$. According to Proposition 1, the margin of robustness against nonlinear uncertainty is $\gamma^{*}-\gamma$. The Lipschitz constant of the system can be reduced using appropriate coordinate transformations. The transformation matrices that are picked are problem specific and reflect the structure of the given nonlinearity \cite{Raghavan}. The robustness margin can then be modified through coordinate transformations. Finding the Lipschitz constant of a function is itself a global optimization problem, since the Lipschitz constant is the supremum of the magnitudes of the directional derivatives of the function, as shown in \cite{Khalil} and \cite{Marquez}. If the analytical form of the nonlinear function and its derivatives are known explicitly, any appropriate global optimization method may be applied to find the Lipschitz constant.
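As a concrete illustration of this point, a Lipschitz constant can also be estimated by sampling difference quotients over the region of interest. The sketch below does this for the nonlinearity $-3.33\sin(x_{3})$ of Example 2 later in the paper (illustrative code only; nearby point pairs probe the directional derivative):

```python
import math, random

def phi(x3):
    # the only nonzero channel of Phi in Example 2: -3.33 * sin(x3)
    return -3.33 * math.sin(x3)

def lipschitz_estimate(n=20000, seed=0):
    """Lower-bound estimate of the Lipschitz constant by random sampling."""
    rng = random.Random(seed)
    best = 0.0
    for _ in range(n):
        a = rng.uniform(-math.pi, math.pi)
        b = a + rng.uniform(-1e-3, 1e-3)   # nearby points probe the derivative
        if a != b:
            best = max(best, abs(phi(a) - phi(b)) / abs(a - b))
    return best

est = lipschitz_estimate()
print(est)   # approaches, but never exceeds, gamma = 3.33
```

The estimate approaches $\gamma=3.33$ (attained where $|\cos(x_{3})|$ peaks) from below, in line with the supremum characterization above.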
If only the function values can be evaluated, a stochastic random search and probability density function fitting method may be used \cite{Wood}. \subsection{Guaranteed Decay Rate} The decay rate of the system (\ref{error1}) is defined to be the largest $\beta>0$ such that \begin{eqnarray} \lim_{t\rightarrow\infty} \exp(\beta t)\|e(t)\|=0 \end{eqnarray} holds for all trajectories $e$. We can use the quadratic Lyapunov function $V(e)=e^{T}Pe$ to establish a lower bound on the decay rate of (\ref{error1}). If $\frac{dV(e(t))}{dt}\leqslant-2\beta V (e(t))$ for all trajectories, then $V(e(t)) \leqslant \exp(-2\beta t)V(e(0))$, so that $\|e(t)\|\leqslant \exp(-\beta t)\kappa(P)^{\frac{1}{2}}\|e(0)\|$ for all trajectories, where $\kappa(P)$ is the condition number of $P$; therefore the decay rate of (\ref{error1}) is at least $\beta$ \cite{Boyd}. In fact, the decay rate is a measure of the observer's speed of convergence.\\ \emph{\textbf{Theorem 3.} Consider the Lipschitz nonlinear system (\ref{con1})-(\ref{con2}) along with the observer (\ref{observer1}). The observer error dynamics (\ref{error1}) is (globally) asymptotically stable with maximum admissible Lipschitz constant and guaranteed decay rate $\beta$, if there exist a fixed scalar $\beta> 0$, scalars $\epsilon> 0$ and $\xi > 0$, and matrices $P > 0$ and $F$ such that the following LMI optimization problem has a solution.} \begin{align} &\hspace{2cm} \min (\xi) \notag \\ s.t. \notag \\&A^{T}P+PA+2\beta P-C^{T}F^{T}-FC < -I-\epsilon I\label{lyap4} \\&\left[ \begin{array}{cc} \frac{1}{2}\xi I & P \\ P & \frac{1}{2}\xi I \end{array} \right]>0 \end{align} \emph{Once the problem is solved,} \begin{align} L&=P^{-1}F \\\gamma^{*}&\triangleq\max(\gamma)=\xi^{-1} \end{align} \textbf{Proof:} Consider the following Lyapunov function candidate \begin{eqnarray} V(t)=e^{T}(t)Pe(t) \end{eqnarray} then \begin{equation} \dot{V}(t)=\dot{e}^{T}(t)Pe(t)+e^{T}(t)P\dot{e}(t)=-2\beta V(t)-e^{T}Qe+2e^{T}P(\Phi(x,u)-\Phi(\hat{x},u))\label{V1}.
\end{equation} To have $\dot{V}(t)\leqslant-2\beta V(t)$, it suffices that $-e^{T}Qe+2e^{T}P(\Phi(x,u)-\Phi(\hat{x},u))\leq 0$, where: \begin{equation} (A-LC)^{T}P+P^{T}(A-LC)+2\beta P=-Q \label{lyap3}. \end{equation} The rest of the proof is the same as the proof of Theorem 1. $\triangle$ \section{Robust $H_{\infty}$ Nonlinear Observer} In this section we extend the result of the previous section into a new nonlinear robust $H_{\infty}$ observer design method. Consider the system \begin{eqnarray} \dot{x}(t)&=& Ax(t) + \Phi(x,u)+B w(t)\hspace{4mm}\label{sys3} \\ y(t)&=& Cx(t)+D w(t)\label{sys4} \end{eqnarray} where $w(t)\in\mathfrak{L}_{2}[0,\infty)$ is an unknown exogenous disturbance. Suppose that \begin{equation} z(t)=He(t) \end{equation} denotes the controlled output of the error state, where $H$ is a known matrix. Our purpose is to design the observer gain $L$ such that the observer error dynamics is asymptotically stable and the following prescribed $H_{\infty}$ norm upper bound is simultaneously guaranteed: \begin{equation} \|z\|\leq\mu\|w\|. \end{equation} The following theorem introduces a new method for nonlinear robust $H_{\infty}$ observer design. We first present an inequality that will be used in the proof of our result.\\ \emph{\textbf{Lemma 1 \cite{Xu2}}. For any $x,y\in\mathbb{R}^{n}$ and any positive definite matrix $P\in\mathbb{R}^{n\times{n}}$, we have} \begin{equation} 2x^{T}y\leq x^{T}Px+y^{T}P^{-1}y. \end{equation} \emph{\textbf{Theorem 4.} Consider the Lipschitz nonlinear system (\ref{sys3})-(\ref{sys4}) with given Lipschitz constant $\gamma$, along with the observer (\ref{observer1}).
The observer error dynamics is (globally) asymptotically stable with decay rate $\beta$ and minimum $\mathfrak{L}_{2}(w \rightarrow e)$ gain, $\mu$, if there exist a fixed scalar $\beta>0$, scalars $\alpha>1$, $\epsilon> 0$ and $\zeta>0$, and matrices $P>0$ and $F$ such that the following LMI optimization problem has a solution.}\\ \begin{equation} \hspace{-6cm} \min (\zeta) \notag \end{equation} \hspace{3cm}\emph{s.t.} \begin{align} &A^{T}P+PA+2\beta P-C^{T}F^{T}-FC < -\alpha I-\epsilon I\label{LMI8}\\ &\left[ \begin{array}{cc} \frac{1-\bar{\sigma}^{2}(H)}{2\gamma} I & P \\ P & \frac{1-\bar{\sigma}^{2}(H)}{2\gamma} I \end{array} \right]>0 \label{LMI4} \\ &\left[ \begin{array}{cc} H^{T}H+\frac{1}{2}(\gamma+\frac{1}{\gamma}-2\alpha)I & PB-FD \\ \\B^{T}P-D^{T}F^{T} & -\zeta I \\ \end{array} \right]<0 \label{LMI3} \end{align} \emph{Once the problem is solved,} \begin{align} L&=P^{-1}F \\\mu^{*}&\triangleq\min(\mu)=\sqrt{\zeta} \end{align} \textbf{Proof:} The observer error dynamics becomes \begin{eqnarray} \dot{e}(t)=(A-LC)e(t)+\Phi(x,u)-\Phi(\hat{x},u)+(B-LD)w. \end{eqnarray} Consider the following Lyapunov function candidate \begin{eqnarray} V(t)=e^{T}(t)Pe(t) \end{eqnarray} then \begin{equation} \dot{V}(t)=\dot{e}^{T}(t)Pe(t)+e^{T}(t)P\dot{e}(t)=-2\beta V(t)-e^{T}Qe\notag \end{equation} \begin{equation} +2e^{T}P(\Phi(x,u)-\Phi(\hat{x},u))+e^{T}(PB-FD)w+w^{T}(B^{T}P-D^{T}F^{T})e \end{equation}\\ where $Q$ is as in (\ref{lyap3}). We select $Q=\alpha I$. If $w=0$, the error dynamics is as in Theorem 3, so the LMIs (\ref{LMI1}) and (\ref{LMI2}), which for $Q=\alpha I$ become \begin{align} A^{T}P+PA+2\beta P-C^{T}F^{T}-FC &< -\alpha I-\epsilon I \label{LMI5} \\\left[ \begin{array}{cc} \frac{\alpha}{2\gamma} I & P \\ P & \frac{\alpha}{2\gamma} I \end{array} \right]&>0 \label{LMI6} \end{align} are sufficient for the asymptotic stability of the error dynamics.
Having $\alpha>1$, (\ref{cond3}) always implies (\ref{LMI6}).\\ \indent By the Rayleigh inequality, \begin{equation} e^{T}Qe\leq\lambda_{max}(Q)e^{T}e. \label{ineq1} \end{equation} \indent Using Lemma 1, we can write \begin{equation} 2e^{T}P(\Phi(x,u)-\Phi(\hat{x},u)) \leq e^{T}Pe+(\Phi(x,u)-\Phi(\hat{x},u))^{T}PP^{-1}P(\Phi(x,u)-\Phi(\hat{x},u))\notag \end{equation} \begin{equation} =e^{T}Pe+(\Phi(x,u)-\Phi(\hat{x},u))^{T}P(\Phi(x,u)-\Phi(\hat{x},u)).\label{ineq2} \end{equation} Based on the Rayleigh inequality, we have \begin{equation} e^{T}Pe\leq \lambda_{max}(P)\|e\|^{2}=\lambda_{max}(P)e^{T}e \end{equation} \begin{equation} (\Phi(x,u)-\Phi(\hat{x},u))^{T}P(\Phi(x,u)-\Phi(\hat{x},u))\leq\lambda_{max}(P)\|\Phi(x,u)-\Phi(\hat{x},u)\|^{2}\notag \end{equation} \begin{equation} \leq\gamma^{2}\lambda_{max}(P)\|e\|^{2}=\gamma^{2}\lambda_{max}(P)e^{T}e. \end{equation} Therefore, from the above and (\ref{cond3}), \begin{equation} 2e^{T}P(\Phi(x,u)-\Phi(\hat{x},u))\leq (1+\gamma^{2})\lambda_{max}(P) e^{T}e\leq \frac{1}{2}(\gamma+\frac{1}{\gamma})e^{T}e \label{ineq3}. \end{equation} According to (\ref{ineq1}) and (\ref{ineq3}), and knowing that $Q=\alpha I$, we have \begin{eqnarray} \dot{V}(t)\leq\frac{1}{2}(\gamma+\frac{1}{\gamma}-2\alpha)e^{T}e+e^{T}(PB-FD)w+w^{T}(B^{T}P-D^{T}F^{T})e.
\end{eqnarray} \indent Now, we define \begin{equation} J=\int^{\infty}_{0}(z^{T}z-\zeta w^{T}w)\, dt \end{equation} therefore \begin{equation} J<\int^{\infty}_{0}(z^{T}z-\zeta w^{T}w+\dot{V})\, dt. \end{equation} It follows that a sufficient condition for $J\leq0$ is that \begin{equation} \forall t\in[0,\infty),\hspace{5mm} z^{T}z-\zeta w^{T}w+\dot{V}\leq0. \end{equation} On the other hand, since $z=He$, \begin{equation} z^{T}z-\zeta w^{T}w+\dot{V}\leq e^{T}H^{T}He+\frac{1}{2}(\gamma+\frac{1}{\gamma}-2\alpha)e^{T}e\notag \end{equation} \begin{equation} +e^{T}(PB-FD)w+w^{T}(B^{T}P-D^{T}F^{T})e-\zeta w^{T}w=\notag \end{equation} \begin{eqnarray} \left[ \begin{array}{c} e \\ w \end{array} \right]^{T} \left[ \begin{array}{cc} {H^{T}H}+\frac{1}{2}(\gamma+\frac{1}{\gamma}-2\alpha)I & PB-FD \\ \\B^{T}P-D^{T}F^{T} & -\zeta I \\ \end{array} \right] \left[ \begin{array}{c} e \\ w \\ \end{array} \right]\label{ineq4}. \end{eqnarray} Thus, a sufficient condition for $J\leq0$ is that the above matrix, which is the same as (\ref{LMI3}), be negative definite. Then \begin{equation} z^{T}z-\zeta w^{T}w\leq0\rightarrow\|z\|\leq\sqrt{\zeta}\|w\|. \end{equation} \indent Up until now, we have the LMIs (\ref{LMI5}), (\ref{cond4}) and (\ref{LMI3}). If these LMIs are all feasible, then the problem is solvable and the observer synthesis is complete. However, (\ref{cond4}) can be slightly modified to improve its feasibility. We proceed as follows:\\ \indent Inequality (\ref{ineq2}) can be rewritten as \begin{equation} 2e^{T}P(\Phi(x,u)-\Phi(\hat{x},u))\leq\ 2\gamma\lambda_{max}(P)e^{T}e. \end{equation} Following the same steps, the matrix in (\ref{ineq4}) becomes \begin{eqnarray} \left[ \begin{array}{cc} H^{T}H+[2\gamma\lambda_{max}(P)-\alpha]I & PB-FD \\ \\B^{T}P-D^{T}F^{T} & -\zeta I \\ \end{array} \right] <0.
\label{ineq5} \end{eqnarray} The above matrix cannot be used together with (\ref{LMI5}) and (\ref{LMI6}) because it involves $\lambda_{max}(P)$, thus resulting in a problem that is not linear in $P$. It can, however, give us another insight about $\lambda_{max}(P)$. By the Schur complement lemma, (\ref{ineq5}) is equivalent to \begin{equation} -\zeta I < 0 \end{equation} \begin{equation} H^{T}H+[2\gamma\lambda_{max}(P)-\alpha]I+\frac{1}{\zeta}(PB-FD)(PB-FD)^{T}<0. \end{equation} The third term in the above is always positive semidefinite, so it is necessary to have \begin{equation} H^{T}H+[2\gamma\lambda_{max}(P)-\alpha]I<0. \label{ineq6} \end{equation} As for any symmetric matrix, for $H^{T}H$ we have \begin{equation} \lambda_{min}(H^{T}H)I\leq H^{T}H\leq\lambda_{max}(H^{T}H)I \end{equation} or, according to the definition of singular values, \begin{equation} \underline{\sigma}^{2}(H)I\leq H^{T}H\leq\ \bar{\sigma}^{2}(H)I. \end{equation} Therefore, a sufficient condition for (\ref{ineq6}) is \begin{equation} \bar{\sigma}^{2}(H)+2\gamma\lambda_{max}(P)-\alpha<0 \end{equation} or \begin{equation} \lambda_{max}(P)<\frac{\alpha-\bar{\sigma}^{2}(H)}{2\gamma}\label{LMI7} \end{equation} but (\ref{cond3}) must also be satisfied. To have both (\ref{cond3}) and (\ref{LMI7}), it is sufficient that \begin{equation} \lambda_{max}(P)<\frac{1-\bar{\sigma}^{2}(H)}{2\gamma} \end{equation} which is equivalent to (\ref{LMI4}). $\triangle$ \\ \emph{\textbf{Remark 2.}} Similar to Remark 1, if one wants to design an observer for a given system with known Lipschitz constant and with a prespecified $\mu$, the LMI optimization problem is reduced to an LMI feasibility problem.\\ \emph{\textbf{Remark 3.}} As an additional opportunity, we can first maximize the admissible Lipschitz constant using Theorem 3, and then minimize $\mu$ for the maximized $\gamma$, using Theorem 4.
In this case, according to Proposition 1, robustness against nonlinear uncertainty is also guaranteed. In the next section, we will show how $\gamma$ and $\mu$ can be simultaneously optimized using convex multiobjective optimization. It is clear that if no decay rate is specified, then the term $2\beta P$ is eliminated from LMI (\ref{LMI8}) in Theorem 4. \section{Combined Performance using Multiobjective Optimization} The LMIs proposed in Theorem 4 are linear in both the admissible Lipschitz constant and the disturbance attenuation level and, as mentioned earlier, each can be optimized. A more realistic problem is to choose the observer gain matrix by combining these two performance measures. This leads to a Pareto multiobjective optimization in which the optimal point is a trade-off between two or more linearly combined optimality criteria. Having a fixed decay rate, the optimization is over $\gamma$ (maximization) and $\mu$ (minimization), simultaneously. The following theorem is in fact a generalization of the results of \cite{Raghavan, Rajamani, Rajamani2, Aboky, Pertew, Zhu}, and \cite{deSouza1} (for systems of the class \eqref{con1}-\eqref{con2}), in which the Lipschitz constant is assumed to be known and fixed, and of the result of \cite{Howell}, in which a special class of sector nonlinearities is considered.\\ \emph{\textbf{Theorem 5.} Consider the Lipschitz nonlinear system (\ref{sys3})-(\ref{sys4}) along with the observer (\ref{observer1}).
The observer error dynamics is (globally) asymptotically stable with decay rate $\beta$ and simultaneously maximized admissible Lipschitz constant $\gamma^{*}$ and minimized $\mathfrak{L}_{2}(w \rightarrow e)$ gain, $\mu^{*}$, if there exist fixed scalars $0\leq\lambda\leq1$ and $\beta>0$, scalars $\alpha>1$, $\epsilon>0$, $\xi>0$ and $\zeta>0$, and matrices $P>0$ and $F$ such that the following LMI optimization problem has a solution.}\\ \begin{equation} \hspace{-5cm} \min \ [\lambda\cdot\xi+(1-\lambda)\zeta] \notag \end{equation} \hspace{2cm} \emph{s.t.} \begin{align} &A^{T}P+PA+2\beta P-C^{T}F^{T}-FC < -\alpha I-\epsilon I\\ &\left[ \begin{array}{cc} \frac{1-\bar{\sigma}^{2}(H)}{2}\cdot\xi I & P \\ P & \frac{1-\bar{\sigma}^{2}(H)}{2}\cdot\xi I \end{array} \right]>0\\ &\left[ \begin{array}{ccc} H^{T}H+\frac{1}{2}(\xi-2\alpha)I & I & PB-FD \\ I & -2\xi I & 0 \\ B^{T}P-D^{T}F^{T} & 0 & -\zeta I \\ \end{array} \right]<0\label{LMI9} \end{align} \emph{Once the problem is solved,} \begin{align} L&=P^{-1}F\\ \gamma^{*}&\triangleq\max(\gamma)=\xi^{-1}\\ \mu^{*}&\triangleq\min(\mu)=\sqrt{\zeta} \end{align} \textbf{Proof:} The above is a scalarization of a multiobjective optimization problem with two optimality criteria. Since each of these optimization problems is convex, the scalarized problem is also convex \cite{Boyd2}. The rest of the proof is the same as the proof of Theorem 4, where the LMI \eqref{LMI9} is obtained from the LMI \eqref{LMI3} using the Schur complement lemma.
\ $\triangle$ \section{Illustrative Examples} In this section the high performance of the proposed observer is shown via three design examples.\\ \hspace{0.5cm} \emph{\textbf{Example 1.}} Consider the following observable $(A,C)$ pair \begin{eqnarray} A=\left[ \begin{array}{cc} 0 & 1 \\ 1 & -1 \\ \end{array} \right], C=\left[ \begin{array}{cc} 0 & 1 \end{array} \right]\notag \end{eqnarray} The result of the iterative algorithm proposed in \cite{Rajamani} is \begin{eqnarray} \gamma^{*}&=&0.49\notag \\L&=&\left[ \begin{array}{cc} 69.5523 & 11.5679 \\ \end{array} \right]^{T}\notag \end{eqnarray} while using our proposed method in Theorem 1, \begin{eqnarray} \gamma^{*}&=&1.1933\notag \\L&=&\left[ \begin{array}{cc} 56.8334 & 21.9074 \\ \end{array} \right]^{T}\notag \end{eqnarray} which means that the admissible Lipschitz constant is improved by a factor of $2.42$.\\ \emph{\textbf{Example 2.}} The following system is the unforced fourth-order model of a flexible joint robotic arm as presented in \cite{Raghavan}, \cite{Aboky}, \cite{Rajamani2}. The reason we have chosen this example is that it is an important industrial application and has been widely used as a benchmark system to evaluate the performance of observers designed for Lipschitz nonlinear systems. \begin{eqnarray} \dot{x}&=&\left[ \begin{array}{cccc} 0 & 1 & 0 & 0 \\ -48.6 & -1.25 & 48.6 & 0 \\ 0 & 0 & 0 & 1 \\ 19.5 & 0 & -19.5 & 0 \\ \end{array} \right]x+\left[\begin{array}{c} 0 \\ 0 \\ 0 \\ -3.33\sin(x_{3}) \end{array}\right]\notag\\ y&=&\left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{array} \right]x.\notag \end{eqnarray} The system is globally Lipschitz with Lipschitz constant $\gamma=3.33$. Noticing that $\Phi$ has zero entries in three of its channels, Raghavan \cite{Raghavan} proposed the coordinate transformation $\bar{x}=Tx$, where \begin{equation} T= \mathrm{diag} \ [1,1,4,0.1] \end{equation} under which the transformed system has Lipschitz constant $\bar{\gamma}=0.083$.
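The transformed constant can be recovered directly: with $T=\mathrm{diag}[1,1,4,0.1]$, the only nonzero channel of $\bar{\Phi}(\bar{x})=T\Phi(T^{-1}\bar{x})$ is $0.1\cdot(-3.33\sin(\bar{x}_{3}/4))$, whose derivative magnitude peaks at $0.1\cdot 3.33/4=0.08325\approx 0.083$. A short numerical confirmation (illustrative sketch only):

```python
import math

def phi_bar(x3_bar):
    # fourth channel of T * Phi(T^{-1} x_bar) with T = diag(1, 1, 4, 0.1):
    # the sine argument is scaled by 1/4 and the output by 0.1
    return 0.1 * (-3.33 * math.sin(x3_bar / 4.0))

# closed-form bound on |d phi_bar / d x3_bar|
gamma_bar = 0.1 * 3.33 / 4.0
print(gamma_bar)            # ~0.083, as reported

# sampling check that difference quotients never exceed the bound
worst = max(abs(phi_bar(a) - phi_bar(a + 1e-4)) / 1e-4
            for a in [k * 0.01 - 5.0 for k in range(1001)])
```

The sampled difference quotients stay at or below $\bar{\gamma}$, matching the value used in the transformed coordinates.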
Using Theorem 3, $\gamma^{*}=0.4472$ in the original coordinates and $\bar{\gamma}^{*}=2.4177$ in the transformed coordinates. The observer gain $\bar{L}$ is obtained in the transformed coordinates and computed in the original coordinates as $L=T^{-1}\bar{L}$. Assuming \begin{eqnarray} \beta&=&0.2\notag\\ B&=&\left[\begin{array}{cccc} 1 & 1 & 1 & 1 \end{array}\right]^{T}\notag\\ D&=&\left[ \begin{array}{cc} 0.1 & 0.25 \\ \end{array} \right]^{T}\notag\\ H&=&0.5 I_{4\times4}\notag \end{eqnarray} and using Theorem 4, we get $\mu^{*}=0.5753$, $\alpha=2.0517$, $\epsilon=0.0609$, and finally the observer gain \begin{align} L=\left[ \begin{array}{cccc} 33.4865 & 129.9249 & 59.8971 & 108.2134 \\ 38.5694 & 282.8603 & 102.1561 & 171.0910 \end{array}\right]^{T}.\notag \end{align} Figure \ref{Fig1} shows the true and estimated values of the states. The actual states are shown along with the estimates obtained using Raghavan's and Aboky's methods and our proposed LMI optimization method. The initial conditions for the system are $x(0)=\left[ \begin{array}{cccc} 0 & -1 & 0 & 2 \\ \end{array} \right]^{T}$ and those of all the observers are $\hat{x}(0)=\left[ \begin{array}{cccc} 1 & 0 & -0.5 & 0 \\ \end{array} \right]^{T}$. As seen in Figure \ref{Fig1}, the observer designed using the proposed LMI optimization method has the best convergence of the three. Note that in addition to the better convergence, the proposed observer is an $H_{\infty}$ filter with maximized disturbance attenuation level, while the observers designed based on the methods of \cite{Raghavan, Rajamani, Rajamani2, Aboky, Pertew} can only guarantee stability of the observer error. \begin{figure}[!h] \centering \includegraphics[width=6.5in]{IMA2}\\ \caption{The true and estimated states of Example 2}\label{Fig1} \end{figure}\\ \emph{\textbf{Example 3.}} In this example we show the use of the multiobjective optimization of Theorem 5 in the design of $H_{\infty}$ observers.
Consider the following system \begin{eqnarray} x&=&\left[ \begin{array}{cc} x_{1} & x_{2} \\ \end{array} \right]^{T}\notag \\ \dot{x}&=&\left[ \begin{array}{cc} 0 & 1 \\ -1 & -1 \\ \end{array} \right]x + \left[ \begin{array}{c} x_{1}^{3} \\ -6x_{1}^{5}-6x_{1}^{2}x_{2}-2x_{1}^{4}-2x_{1}^{2} \\ \end{array} \right]\notag \\ y &=& \left[ \begin{array}{cc} 1 & 0 \end{array} \right]x. \end{eqnarray} The system is locally Lipschitz, and its Lipschitz constant depends on the region considered. Suppose we consider the region $\mathcal{D}$ as follows \begin{eqnarray} \mathcal{D}&=&\biggl\lbrace (x_{1},x_{2})\in \mathbb{R}^{2} \ | \ x_{1}\leq 0.25 \biggr\rbrace\notag \end{eqnarray} in which the Lipschitz constant is $\gamma=0.4167$. We choose \begin{align} H&=0.5I\notag \\ B&=\left[ \begin{array}{cc} 1 & 1 \\ \end{array} \right]^{T}\notag \\D&=0.2\notag \\\beta&=0.05\notag \end{align} and solve the multiobjective optimization problem of Theorem 5 with $\lambda=0.9$. We get \begin{align} \gamma^{*}&=0.5525\notag\\ \mu^{*}&=1.1705\notag\\ \alpha&=1.6260\notag\\ \epsilon&=2.2435 \times 10^{-4}\notag\\ L&=\left[ \begin{array}{cc} 23.7025 & 13.7272 \\ \end{array} \right]^{T}.\notag \end{align} The true and estimated values of the states are shown in Figure \ref{Fig2}. We have assumed that \begin{eqnarray} x(0)&=&\left[\begin{array}{cc} -0.2 & -1.45 \end{array}\right]^{T} \notag \\ \hat{x}(0)&=&\left[\begin{array}{cc} 0.25 & -2 \end{array}\right]^{T}\notag \\w(t)&=&0.15\exp(-t)\sin(t).\notag \end{eqnarray} \begin{figure}[!h] \centering \includegraphics[width=4.5in]{IMA3}\\ \caption{The true and estimated states of Example 3 in the presence of disturbance}\label{Fig2} \end{figure} The values of $\gamma^{*}$, $\mu^{*}$, the norm of the observer gain matrix, $\bar{\sigma}(L)$, and the optimal trade-off curve between $\gamma^{*}$ and $\mu^{*}$ over the range of $\lambda$ when the decay rate is fixed ($\beta=0.05$) are shown in Figure \ref{Fig3}.
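The shape of such a trade-off curve is a general feature of the scalarization used in Theorem 5. The following toy bi-objective problem (hypothetical convex objectives, unrelated to the observer LMIs) shows the same mechanism: sweeping $\lambda$ traces a Pareto frontier on which improving one criterion necessarily worsens the other:

```python
# min over t of  lam*f1(t) + (1 - lam)*f2(t)  with convex objectives
# f1(t) = (t - 1)^2 and f2(t) = (t + 1)^2; setting the derivative to
# zero gives the minimizer t*(lam) = 2*lam - 1 in closed form.
def pareto_point(lam):
    t = 2.0 * lam - 1.0
    return (t - 1.0) ** 2, (t + 1.0) ** 2   # (f1, f2) on the frontier

curve = [pareto_point(k / 10.0) for k in range(11)]
f1s = [p[0] for p in curve]
f2s = [p[1] for p in curve]
# along the frontier f1 decreases while f2 increases: a pure trade-off
print(curve[0], curve[-1])
```

As $\lambda$ moves from $0$ to $1$, the weight shifts from the second criterion to the first, exactly as the weight $\lambda\cdot\xi+(1-\lambda)\zeta$ in Theorem 5 shifts emphasis between the admissible Lipschitz constant and the $H_{\infty}$ cost.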
\begin{figure}[!h] \centering \includegraphics[width=5.5in]{IMA4}\\ \caption{$\gamma^{*}$, $\mu^{*}$ and $\bar{\sigma}(L)$, and the optimal trade-off curve with $\beta=0.05$}\label{Fig3} \end{figure} The optimal surfaces of $\gamma^{*}$ and $\mu^{*}$ over the range of $\lambda$ when the decay rate is variable are shown in Figures \ref{Fig4} and \ref{Fig5}, respectively. \begin{figure}[!h] \centering \includegraphics[width=5.5in]{IMA5}\\ \caption{The optimal surface of $\gamma^{*}$}\label{Fig4} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=5.5in]{IMA6}\\ \caption{The optimal surface of $\mu^{*}$}\label{Fig5} \end{figure} \section{Conclusions} A new method of robust observer design for Lipschitz nonlinear systems was proposed based on LMI optimization. The admissible Lipschitz constant of the nonlinear system can be maximized so that the observer error dynamics is not only asymptotically stable but the observer can also tolerate some additive nonlinear uncertainty. In addition, the result was extended to a robust $H_{\infty}$ nonlinear observer design. The obtained observer has three features simultaneously: asymptotic stability, robustness against nonlinear uncertainty, and a minimized guaranteed $H_{\infty}$ cost. Thanks to the linearity of the proposed LMIs in both the admissible Lipschitz constant and the disturbance attenuation level, the two can be optimized simultaneously through convex multiobjective optimization. The high performance of the observer was demonstrated through design examples. \bibliographystyle{IEEEtran}
\section{#1} \medskip} \setcounter{tocdepth}{1} \theoremstyle{plain} \newtheorem*{thm}{Theorem} \newtheorem*{thmA}{Theorem A} \newtheorem*{thmB}{Theorem B} \newtheorem*{thmC}{Theorem C} \newtheorem*{thmD}{Theorem D} \newtheorem*{prop}{Proposition} \newtheorem*{propA}{Proposition A} \newtheorem*{propB}{Proposition B} \newtheorem*{propC}{Proposition C} \newtheorem*{lem}{Lemma} \newtheorem*{lemA}{Lemma A} \newtheorem*{lemB}{Lemma B} \newtheorem*{lemC}{Lemma C} \newtheorem*{lemD}{Lemma D} \newtheorem*{cor}{Corollary} \newtheorem*{corA}{Corollary A} \newtheorem*{corB}{Corollary B} \newtheorem*{corC}{Corollary C} \newtheorem*{corE}{Corollary E} \theoremstyle{definition} \newtheorem*{rem}{Remark} \begin{document} \title{Hochschild (co)homology of the second kind I} \author{Alexander Polishchuk \ and \ Leonid Positselski} \address{Department of Mathematics, University of Oregon, Eugene, OR 97403, USA} \email{apolish@uoregon.edu} \address{Sector of Algebra and Number Theory, Institute for Information Transmission Problems, Bolshoy Karetny per.~19 str.~1, Moscow 127994, Russia} \email{posic@mccme.ru} \maketitle \tableofcontents \section*{Introduction} \medskip CDG\+algebras (where ``C'' stands for ``curved'') were introduced in connection with nonhomogeneous Koszul duality in~\cite{Pcurv}. Several years earlier, (what we would now call) $A_\infty$\+algebras with curvature were considered in~\cite{GJ} as natural generalizations of the conventional $A_\infty$\+algebras. In fact, \cite{GJ}~appears to be the first paper where the Hochschild (and even cyclic) homology of curved algebras was discussed. Recently, the interest to these algebras was rekindled by their connection with the categories of matrix factorizations~\cite{Seg,Dyck,PV,CT,Tu}. In these studies, beginnings of the theory of Hochschild (co)homology for CDG\+algebras have emerged. 
The aim of the present paper is to work out the foundations of the theory on the basis of the general formalism of \emph{derived categories of the second kind} as developed in the second author's paper~\cite{Pkoszul}. The terminology, and the notion of a \emph{differential derived functor of the second kind}, which is relevant here, go back to the classical paper~\cite{HMS}. The subtle but crucial difference between the differential derived functors of the first and the second kind lies in the way one constructs the totalizations of bicomplexes: one can take either direct sums or direct products along the diagonals. The construction of the differential $\Tor$ and $\Ext$ of the first kind, which generally looks more natural at first glance, leads to trivial functors in the case of a CDG\+algebra with nonzero curvature over a field. So does the (familiar) definition of Hochschild (co)homology of the first kind. On the other hand, with a CDG\+algebra $B$ one can associate the DG\+category $C$ of right CDG\+modules over $B$, projective and finitely generated as graded $B$\+modules. For the DG\+category $C$, the Hochschild (co)homology of the first kind makes perfect sense. The main problem that we address in this paper is the problem of comparison between the Hochschild (co)homology of the first kind of the DG\+category $C$ and the Hochschild (co)homology of the second kind of the original CDG\+algebra $B$ (defined using the differential $\Tor$/$\Ext$ of the second kind). We proceed in two steps: first, compare the Hochschild (co)homology of the second kind for $B$ and $C$, and then deal with the two kinds of Hochschild (co)homology of~$C$. The first step is relatively easy: our construction of an isomorphism works, at least, for any CDG\+algebra $B$ over a field~$k$ (see Section~\ref{dg-of-cdg-subsect}).
However, a trivial counterexample shows that the two kinds of Hochschild (co)homology of $C$ are \emph{not} isomorphic in general (see Section~\ref{counterexample}). There are natural maps between the two kinds of Hochschild (co)homology, though. A sufficient condition for these maps to be isomorphisms is formulated in terms of the derived categories of the second kind of CDG\+bimodules over~$B$. In the maximal generality that we have been able to attain, this is a kind of ``resolution of the diagonal'' condition for the CDG\+bimodule $B$ over~$B$ (see Theorems~\ref{comparison-dg-of-cdg}.C\+-D and Corollaries~\ref{cdg-koszul}.B, \ref{noetherian-cdg-rings}.B, and~\ref{matrix-factorizations}). Let us say a few more words about the first step. There is no obvious map between the Hochschild complexes of $B$ and $C$, so one cannot directly compare their cohomology. Instead, we construct a third complex (both in the homological and the cohomological versions) endowed with natural maps from/to these two complexes, and show that these maps are quasi-isomorphisms. To obtain the intermediate complex, we embed both $B$ and $C$ into a certain larger differential category. The idea of these embeddings goes back to A.~Schwarz's work~\cite{Sch}. The starting observation is that a CDG\+algebra is not a CDG\+module over itself in any natural way (even though it is naturally a CDG\+bimod\-ule over itself). It was suggested in~\cite{Sch}, however, that one can relax the conditions on differential modules over CDG\+algebras (called ``$Q$\+algebras'' in~\cite{Sch}), thereby allowing the modules to carry their own curvature endomorphisms. In recognition of A.~Schwarz's vision, we partly borrow his terminology by calling such modules \emph{QDG\+modules}. Any CDG\+algebra is naturally both a left and a right QDG\+module over itself. While CDG\+modules form a DG\+category, QDG\+modules form a \emph{CDG\+category}.
Both a CDG\+algebra $B$ (considered as a CDG\+category with a single object) and the DG\+category $C$ of CDG\+modules over it embed naturally into the CDG\+category $D$ of QDG\+modules over $B$, so the Hochschild complex of $D$ provides an intermediate object for comparison between the Hochschild complexes of $B$ and~$C$. Now let us turn to the second step. The (conventional) derived category of DG\+modules over a DG\+algebra is defined as the localization of the homotopy category of DG\+modules by the class of quasi-isomorphisms, or equivalently, by the thick subcategory of acyclic DG\+modules. This does not make sense for CDG\+modules, since their differentials have nonzero squares, so their cohomology cannot be defined. Indeed, the subcategory of acyclic DG\+modules is not even invariant under CDG\+isomorphisms between DG\+algebras~\cite[Examples~9.4]{Pkoszul}. The definition of the \emph{derived categories of the second kind}, various species of which are called the \emph{coderived}, the \emph{contraderived}, the \emph{absolute derived}, and the \emph{complete derived categories}, for DG\+ and CDG\+modules is not based on any notion of cohomology of a differential module. Rather, the classes of \emph{coacyclic}, \emph{contraacyclic}, \emph{absolutely acyclic}, and \emph{completely acyclic} CDG\+modules are built up starting from short exact sequences of CDG\+modules (with closed morphisms between them). For reasons related to the behavior of tensor products with respect to infinite direct sums and products of vector spaces, the derived categories and functors of the second kind work better for \emph{coalgebras} than for algebras, even though one is forced to use them for algebras if one is interested in curved algebras and modules. (For derived categories and functors of the first kind, it is the other way.)
That is why one has to impose additional conditions like finiteness of homological dimension, Noetherianness, etc.,\ on the underlying graded algebras of one's CDG\+algebras in order to make the derived categories of the second kind well-behaved and the relation between them and the derived functors of the second kind work properly. We did our best to make such additional conditions as weak as possible in this paper, but the price of generality is technical complexity. Unlike the $\Tor$ and $\Ext$, the Hochschild (co)homology is essentially an invariant of a pair (a field or commutative ring, an algebra over it). It is \emph{not} preserved when the ground field or ring is changed. In this paper, we always work over an arbitrary commutative ring~$k$, or a commutative ring of finite homological dimension, as needed. The only exceptions are some examples depending on the Koszul duality results from~\cite{Pkoszul}, which are established only over a field. Working over a commutative ring involves all kinds of $k$\+flatness or $k$\+projectivity conditions that need to be imposed on the algebras and modules, both in order to define the Hochschild (co)homology and to compute various (co)homology theories in terms of standard complexes. Recent studies of the categories of matrix factorizations and of the associated CDG\+algebras showed the importance of developing the relevant homological algebra using only $\Z/2$\+grading (as opposed to the conventional $\Z$\+grading). In this paper we work with CDG\+algebras and CDG\+categories graded by an arbitrary abelian group $\Gamma$ endowed with some additional data that is needed to define $\Gamma$\+graded complexes and perform operations with them. The behavior of our (co)homology theories with respect to a replacement of the grading group $\Gamma$ is discussed in detail (see Section~\ref{change-grading-group}).
We exhibit several classes of examples of DG\+algebras and DG\+categories for which the two kinds of $\Tor$, $\Ext$, and Hochschild (co)homology coincide. These examples roughly correspond to the classes of DG\+algebras for which the derived categories of the first and second kind are known to coincide~\cite[Section~9.4]{Pkoszul}. In particular, one of these classes is that of the DG\+categories that are cofibrant with respect to G.~Tabuada's model category structure (see Section~\ref{cofibrant-subsect}). Examples of CDG\+algebras $B$ such that the two kinds of $\Tor$ and $\Ext$ for the corresponding DG\+category $C$ of CDG\+modules over $B$, finitely generated and projective as graded $B$\+modules, are known to coincide are fewer; and examples where we can show that the two kinds of Hochschild (co)homology for this DG\+category $C$ coincide are fewer still. Among the former are all the CDG\+rings $B$ whose underlying graded rings are Noetherian of finite homological dimension (see Section~\ref{noetherian-cdg-rings}). In the latter class we have some CDG\+algebras over fields admitting Koszul filtrations of finite homological dimension (see Section~\ref{cdg-koszul}), curved commutative local algebras describing germs of isolated hypersurface singularities (due to the results of~\cite{Dyck}), and curved commutative smooth algebras over perfect fields with the curvature function having no other critical values but zero (due to the recent results of~\cite{LP}; see Section~\ref{matrix-factorizations}).
Our discussion of the Hochschild (co)homology of the DG\+categories of matrix factorizations concludes in Section~\ref{direct-sum}, where we show that the Hochschild (co)homology of the second kind of the DG\+category of matrix factorizations over a smooth affine variety over an algebraically closed field of characteristic zero is isomorphic to the direct sum of the Hochschild (co)homology of the first kind of the similar DG\+categories corresponding to all the critical values of the potential. We are grateful to Anton Kapustin, Ed Segal, Daniel Pomerleano, Kevin Lin, and Junwu Tu for helpful conversations. A.~P. is partially supported by the NSF grant DMS-1001364. L.~P. is partially supported by a grant from P.~Deligne's 2004 Balzan prize and an RFBR grant. \Section{CDG-Categories and QDG-Functors} This section is written in the language of CDG\+categories. Expositions in the generality of CDG\+rings, which might be somewhat more accessible to an inexperienced reader, can be found in~\cite{Pcurv,Pkoszul,Sch}. For a discussion of DG\+categories, we refer to~\cite{Kel}, \cite{Toen}, and~\cite[Section~1.2]{Pkoszul}. \subsection{Grading group} \label{grading-group} Let $\Gamma$ be an abelian group endowed with a symmetric bilinear form $\sigma\:\Gamma\times\Gamma\rarrow \Z/2$ and a fixed element $\boldsymbol{1}\in\Gamma$ such that $\sigma(\boldsymbol{1}, \boldsymbol{1}) = 1\bmod 2$. We will use $\Gamma$ as the group of values for the gradings of our complexes. The differentials will raise the degree by~$\boldsymbol{1}$, and signs like $(-1)^{\sigma(a,b)}$ will appear in the sign rules. For example, in the simplest cases one may have $\Gamma=\Z$, \ $\boldsymbol{1}=1$, and $\sigma(a,b)=ab\bmod 2$ for $a$, $b\in\Gamma$, or, alternatively, $\Gamma=\Z/2$, \ $\boldsymbol{1}=1\bmod 2$, and $\sigma(a,b)=ab$.
One can also take $\Gamma$ to be any additive subgroup of $\Q$, containing $\Z$ and consisting of fractions with odd denominators, $\boldsymbol{1}=1$, and $\sigma(a,b)=ab\bmod 2$. Of course, it is also possible that $\Gamma=\Z^d$ for any finite or infinite~$d$, etc. When working over a commutative ring~$k$ containing the field~$\mathbb F_2$, we will not need the form~$\sigma$, and so $\Gamma=\Q$ or $\Gamma=0$ become admissible choices as well. From now on, we will assume a grading group data $(\Gamma,\sigma, \boldsymbol{1})$ to be fixed. When appropriate, we will identify the integers with their images under the natural map $\Z\rarrow \Gamma$ sending $1$ to~$\boldsymbol{1}$ without presuming this map to be injective, and denote $\sigma(a,b)$ simply by $ab$ for $a$, $b\in\Gamma$. So we will write simply $1$ instead of~$\boldsymbol{1}$, etc. This map $\Z\rarrow\Gamma$ will be also used when constructing the total complexes of polycomplexes some of whose gradings are indexed by the integers and the other ones by elements of the group~$\Gamma$. Conversely, to any $a\in\Gamma$ one assigns the class $\sigma(\boldsymbol{1},a)\in\Z/2$, which we will denote simply by~$a$ in the appropriate contexts. \subsection{CDG-categories} \label{cdg-categories-subsect} A \emph{CDG\+category} $C$ is a category whose sets of morphisms $\Hom_C(X,Y)$ are $\Gamma$\+graded abelian groups (i.~e., $C$ is a $\Gamma$\+graded category) endowed with homogeneous endomorphisms $d\:\Hom_C(X,Y)\rarrow\Hom_C(X,Y)$ of degree~$1$ and fixed elements $h_X\in\Hom_C(X,X)$ of degree~$2$ for all objects $X$, $Y\in C$. The endomorphisms $d$ are called the \emph{differentials} and the elements $h_X$ are called the \emph{curvature elements}. 
The following equations have to be satisfied: $d(fg)=d(f)g+ (-1)^{|f|}fd(g)$ for any composable homogeneous morphisms $f$ and $g$ in $C$ of the degrees $|f|$ and $|g|\in\Gamma$, \ $d^2(f)=h_Yf-fh_X$ for any morphism $f\:X\rarrow Y$ in~$C$, and $d(h_X)=0$ for any object $X\in C$. The simplest example of a CDG\+category is the category $\Pre(A)$ of \emph{precomplexes} over an additive category~$A$. The objects of $\Pre(A)$ are $\Gamma$\+graded objects $X$ in $A$ endowed with an endomorphism $d_X\:X\rarrow X$ of degree~$1$. The $\Gamma$\+graded abelian group of morphisms $\Hom_{\Pre(A)}(X,Y)$ is the group of homogeneous morphisms $X\rarrow Y$ of $\Gamma$\+graded objects. The differentials $d\:\Hom(X,Y)\rarrow\Hom(X,Y)$ are given by the rule $d(f)=d_Y f-(-1)^{|f|}fd_X$, and the curvature elements are $h_X=d_X^2$. In particular, when $A=Ab$ is the category of abelian groups, we obtain the CDG\+category of precomplexes of abelian groups $\Pre(Ab)$. A CDG\+category with a single object is another name for a \emph{CDG\+ring}. A CDG\+ring $(B,d,h)$ is a $\Gamma$\+graded ring $B$ endowed with an odd derivation~$d$ of degree~$1$ and a curvature element $h\in B^2$ such that $d^2(b)=[h,b]$ for any $b\in B$ and $d(h)=0$. An \emph{isomorphism} between objects $X$ and $Y$ of a CDG\+category $C$ is (an element of) a pair of morphisms $i\:X\rarrow Y$ and $j\:Y\rarrow X$ of degree~$0$ such that $ji=\id_X$, \ $ij=\id_Y$, and $d(i)=0=d(j)$; any one of the latter two equations implies the other one. It also follows that $jh_Yi=h_X$. Let $X$ be an object of a CDG\+category $C$ and $\tau\in\Hom_C(X,X)$ be its homogeneous endomorphism of degree~$1$. An object $Y\in C$ is called the \emph{twist} of an object $X$ with an endomorphism~$\tau$ (the notation: $Y=X(\tau)$) if homogeneous morphisms $i\:X\rarrow Y$ and $j\:Y\rarrow X$ of degree~$0$ are given such that $ji=\id_X$, \ $ij=\id_Y$, and $jd(i)=\tau$. In this case one has $jh_Yi=h_X+d\tau+\tau^2$. 
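For the reader's convenience, let us indicate how the last identity follows from the definitions. Since $ij=\id_Y$, the condition $jd(i)=\tau$ implies $d(i)=ijd(i)=i\tau$; hence, by the Leibniz rule, $$ d^2(i) \;=\; d(i\tau) \;=\; d(i)\tau + i\,d(\tau) \;=\; i\bigl(\tau^2+d(\tau)\bigr). $$ On the other hand, $d^2(i)=h_Yi-ih_X$; composing with~$j$ on the left and using $ji=\id_X$, one obtains $jh_Yi-h_X=\tau^2+d(\tau)$, that is $jh_Yi=h_X+d\tau+\tau^2$.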
For any object $X\in C$ and an element $n\in\Gamma$, an object $Y\in C$ is called the \emph{shift} of $X$ with the grading~$n$ (the notation: $Y=X[n]$) if homogeneous morphisms $i\:X\rarrow Y$ and $j\:Y\rarrow X$ of the degrees $n$ and $-n$, respectively, are given such that $ji=\id_X$, \ $ij=\id_Y$, and $d(i)=0=d(j)$. In this case one has $jh_Yi=h_X$. An object $X\in C$ is called the \emph{direct sum} of a family of objects $X_\alpha\in C$ if homogeneous morphisms $i_\alpha\:X_\alpha \rarrow X$ of degree~$0$ are given such that the induced map $\Hom_C(X,Y)\rarrow\prod_\alpha\Hom_C(X_\alpha,Y)$ is an isomorphism of $\Gamma$\+graded abelian groups for any object $Y\in C$, and $di_\alpha=0$. In this case one has $h_X i_\alpha=i_\alpha h_{X_\alpha}$, so the endomorphism $h_X$ corresponds to the family of morphisms $i_\alpha h_{X_\alpha}$ under the above isomorphism for $Y=X$. The (\emph{direct}) \emph{product} of a family of objects is defined in the dual way. An object $X$ is the direct sum of a finite family of objects $X_\alpha\in C$ if and only if it is their direct product. Of course, the notions of a shift and a direct sum/product of objects make sense in a (nondifferential) $\Gamma$\+graded category, too; one just drops the conditions involving $d$ and~$h$. Twists, shifts, direct sums, and products of objects of a CDG\+category are unique up to a unique isomorphism whenever they exist. A \emph{DG\+category} is a CDG\+category in which all the curvature elements are zero. The \emph{opposite CDG\+category} to a CDG\+category $C$ is constructed as follows. The class of objects of $C^\op$ coincides with the class of objects of~$C$. For any objects $X$, $Y\in C$ the $\Gamma$\+graded abelian group $\Hom_{C^\op}(X^\op,Y^\op)$ is identified with $\Hom_C(Y,X)$, and the differential $d^\op$ on this group coincides with~$d$. The composition of morphisms in $C^\op$ differs from that in $C$ by the sign rule, $f^\op g^\op = (-1)^{|f||g|}(gf)^\op$.
Finally, the curvature elements in $C^\op$ are $h_{X^\op}=-h_X$. In particular, this defines the CDG\+ring $B^\op=(B^\op,d^\op,h^\op)$ opposite to a CDG\+ring $B=(B,d,h)$. Now let $k$ be a commutative ring. A \emph{k\+linear} CDG\+category is a CDG\+category whose $\Gamma$\+graded abelian groups of morphisms are endowed with $\Gamma$\+graded $k$\+module structures so that the compositions are $k$\+bilinear and the differentials are $k$\+linear. The \emph{tensor product} $C\ot_kD$ of two $k$\+linear CDG\+categories $C$ and $D$ is constructed as follows. The objects of $C\ot_kD$ are pairs $(X',X'')$ of objects $X'\in C$ and $X''\in D$. The $\Gamma$\+graded $k$\+module of morphisms $\Hom_{C\ot_kD}((X',X''),(Y',Y''))$ is the tensor product $\Hom_C(X',Y')\ot_k\Hom_D(X'',Y'')$; the differential~$d$ on this module is defined by the formula $d(f'\ot f'')=d(f')\ot f'' + (-1)^{|f'|}f'\ot d(f'')$. The curvature elements are $h_{(X',X'')}=h_{X'}\ot\id_{X''}+ \id_{X'}\ot h_{X''}$. \subsection{QDG-functors} Let $C$ and $D$ be CDG\+categories. A \emph{covariant CDG\+functor} $F\:C\rarrow D$ is a homogeneous additive functor between the $\Gamma$\+graded categories $C$ and $D$, endowed with fixed elements $a_X\in \Hom_D(F(X),F(X))$ of degree~$1$ for all objects $X\in C$ such that $F(df) = dF(f) + a_Y F(f) - (-1)^{|f|} F(f) a_X$ for any morphism $f\:X\rarrow Y$ in $C$ and $F(h_X) = h_{F(X)} + da_X + a_X^2$ for any object~$X$. A contravariant CDG\+functor $C\rarrow D$ is defined as a covariant CDG\+functor $C^\op\rarrow D$, or equivalently, a covariant CDG\+functor $C\rarrow D^\op$. The \emph{opposite} CDG\+functor $F^\op\:C^\op\rarrow D^\op$ to a covariant CDG\+functor $F\:C\rarrow D$ is defined by the rule $(F,a)^\op=(F^\op,-a)$. (Covariant or contravariant) CDG\+functors $C\rarrow D$ are objects of the \emph{DG\+category of CDG\+functors}. 
The $\Gamma$\+graded abelian group of morphisms between covariant CDG\+functors $F$ and $G$ is the $\Gamma$\+graded group of homogeneous morphisms, with the sign rule, between $F$ and $G$ considered as functors between $\Gamma$\+graded categories. More precisely, a morphism $f\:F\rarrow G$ of degree $n\in\Gamma$ is a collection of morphisms $f_X\:F(X)\rarrow G(X)$ of degree~$n$ in $D$ for all objects $X\in C$ such that $f_Y F(g)=(-1)^{n|g|} G(g) f_X$ for any morphism $g\:X\rarrow Y$ in~$C$. The differential~$d$ on the $\Gamma$\+graded group $\Hom(F,G)$ of morphisms between CDG-functors $F=(F,a)$ and $G=(G,b)$ is defined by the rule $(df)_X = d(f_X) + b_X f_X - (-1)^{|f|} f_X a_X$. A (covariant or contravariant) \emph{QDG\+functor} $F$ between CDG\+categories $C$ and $D$ is the same set of data as a CDG\+functor satisfying the same equations, except for the equation connecting $F(h_X)$ with $h_{F(X)}$, which is omitted. QDG\+functors $C\rarrow D$ are objects of the \emph{CDG\+category of QDG\+functors}. The $\Gamma$\+graded abelian group of morphisms between QDG\+functors and the differential on it are defined exactly in the same way as in the CDG\+functor case. The curvature element of a QDG\+functor $F\:C\rarrow D$ is the endomorphism $h_F\:F\rarrow F$ of degree~$2$ defined by the formula $(h_F)_X = h_{F(X)} + da_X + a_X^2 - F(h_X)$ for all $X\in C$. The composition of QDG\+functors $(F,a)\:C\rarrow D$ and $(G,b)\:D\rarrow E$ is the QDG\+functor $(G\circ F,\;c)$, where $c_X=G(a_X)+b_{F(X)}$ for any object $X\in C$. A CDG\+functor or QDG\+functor $F=(F,a)\:C\rarrow D$ is said to be \emph{strict} if $a_X=0$ for all objects $X\in C$. The identity CDG\+functor $\Id_C$ of a CDG\+category $C$ is the strict CDG\+functor $(\Id_C,0)$. The composition of strict QDG\+functors is a strict QDG\+functor, and the composition of (strict) CDG\+functors is a (strict) CDG\+functor.
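Let us check that the composition formula above is consistent with the defining equation of a QDG\+functor. For a morphism $f\:X\rarrow Y$ in $C$, applying $G$ to the equation for $(F,a)$ and then using the equation for $(G,b)$ with the morphism $F(f)$ in place of~$f$, one computes $$ GF(df) \;=\; dGF(f) + \bigl(b_{F(Y)}+G(a_Y)\bigr)GF(f) - (-1)^{|f|}GF(f)\bigl(b_{F(X)}+G(a_X)\bigr), $$ which is the required equation for the composition, with $c_X=G(a_X)+b_{F(X)}$.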
Two CDG\+functors $F\:C\rarrow D$ and $G\:D\rarrow C$ between CDG\+categories $C$ and $D$ are called mutually inverse \emph{equivalences} of CDG\+categories if they are equivalences of the $\Gamma$\+graded categories such that the adjunction isomorphisms $i\:GF\rarrow\Id_C$ and $j\:FG\rarrow\Id_D$ are closed morphisms of CDG\+functors, i.~e., $d(i)=0=d(j)$ (any one of the two equations implies the other one). A CDG\+functor $F\:C\rarrow D$ is an equivalence if and only if it is fully faithful as a functor between $\Gamma$\+graded categories and any object $Y\in D$ is a twist of an object $F(X)$ for some $X\in C$. An equivalence $(F,G)$ between CDG\+categories $C$ and $D$ is called a \emph{strict equivalence} if the CDG\+functors $F$ and $G$ are strict. A strict CDG\+functor $F\:C\rarrow D$ is a strict equivalence if and only if it is fully faithful as a functor between $\Gamma$\+graded categories and any object $Y\in D$ is isomorphic to an object $F(X)$ for some $X\in C$. A strict CDG\+functor between DG\+categories is called a \emph{DG\+functor}. An \emph{equivalence} of DG\+categories is their strict equivalence as CDG\+categories. If all objects of the category $D$ admit twists with all of their endomorphisms of degree~$1$, then the embedding of the DG\+category of strict CDG\+functors $C\rarrow D$ into the DG\+category of all CDG\+functors is an equivalence of DG\+categories, and the embedding of the CDG\+category of strict QDG\+functors $C\rarrow D$ into the CDG\+category of all QDG\+functors is a strict equivalence of CDG\+categories. A QDG\+functor between $k$\+linear CDG\+categories is \emph{k\+linear} if its action on the $\Gamma$\+graded $k$\+modules of morphisms in the CDG\+categories is $k$\+linear. Given three $k$\+linear CDG\+categories $C$, $D$, $E$, the functor of composition of $k$\+linear QDG\+functors $C\rarrow D$ and $D\rarrow E$ is a strict $k$\+linear CDG\+functor on the tensor product of the $k$\+linear CDG\+categories of QDG\+functors. 
The composition (on either side) with a fixed CDG\+functor is a strict CDG\+functor between the CDG\+categories of QDG\+functors, and the composition with a fixed QDG\+functor is a strict QDG\+functor between such CDG\+categories. Given two $k$\+linear QDG\+functors $F'=(F',a')\:C'\rarrow D'$ and $F''=(F'',a'')\:C''\allowbreak\rarrow D''$, their tensor product $(F'\ot F'',\;a)\:C'\ot_k C''\rarrow D'\ot_k D''$ is defined by the rule $(F'\ot F'')(X',X'')=(F'(X'),F''(X''))$ on the objects, $(F'\ot F'')(f'\ot f'')=F'(f')\ot F''(f'')$ on the morphisms, and $a_{(X',X'')}=a_{X'}\ot\id_{X''}+\id_{X'}\ot a_{X''}$. The tensor product of strict QDG\+functors is a strict QDG\+functor, and the tensor product of (strict) CDG\+functors is a (strict) CDG\+functor. \subsection{QDG\+modules} \label{qdg-modules-subsect} A \emph{left QDG\+module} over a small CDG\+category $C$ is a strict covariant QDG\+functor $C\rarrow\Pre(Ab)$. Analogously, a right QDG\+module over $C$ is a strict contravariant QDG\+functor $C^\op\rarrow\Pre(Ab)$. (Left or right) CDG\+modules over a CDG\+category $C$ are similarly defined in terms of strict CDG\+functors with values in the CDG\+category $\Pre(Ab)$. The CDG\+categories of left and right QDG\+modules over $C$ are denoted by $C\modlq$ and $\modrq C$; the DG\+categories of left and right CDG\+modules over $C$ are denoted by $C\modlc$ and $\modrc C$. Since the CDG\+category $\Pre(Ab)$ admits arbitrary twists, one obtains (strictly) equivalent (C)DG\+categories by considering not necessarily strict QDG\+ or CDG\+functors. Given a CDG\+ring or CDG\+category $C$, we will denote by $C^\#$ the underlying $\Gamma$\+graded ring or category. For a QDG\+module $M$ over $C$, we similarly denote by $M^\#$ the underlying $\Gamma$\+graded $C^\#$\+module (i.~e., homogeneous additive functor from $C^\#$ to the $\Gamma$\+graded category of $\Gamma$\+graded abelian groups) of~$M$.
If $k$ is a commutative ring and $C$ is a $k$\+linear CDG\+category, then any QDG\+functor $C\rarrow\Pre(Ab)$ can be lifted to a $k$\+linear QDG\+functor $C\rarrow\Pre(k\modl)$ in a unique way, where $k\modl$ denotes the abelian category of $k$\+modules. So the CDG\+category $C\modlq$ can be also described as the CDG\+category of (strict) $k$\+linear QDG\+functors $C\rarrow\Pre(k\modl)$. Notice that another notation for the CDG\+category $\Pre(k\modl)$ is $k\modlq$, where $k$ is considered as a CDG\+ring concentrated in degree~$0$ with the trivial differential and curvature, while $k\modlc$ is a notation for the DG\+category of complexes of $k$\+modules. Let $C$ be a small $k$\+linear CDG\+category, $N$ be a right QDG\+module over $C$, and $M$ be a left QDG\+module. The tensor product $N^\#\ot_{C^\#} M^\#$ is a $\Gamma$\+graded $k$\+module defined as the quotient module of the direct sum of $N(X)\ot_k M(X)$ over all objects $X\in C$ by the sum of the images of the maps $N(Y)\ot_k M(X)\rarrow N(X)\ot_k M(X)\oplus N(Y)\ot_k M(Y)$ over all homogeneous morphisms $X\rarrow Y$ in~$C$. There is a natural differential on $N^\#\ot_{C^\#}M^\#$ defined by the usual formula $d(n\ot m)=d(n)\ot m + (-1)^{|n|} n\ot d(m)$. The precomplex of $k$\+modules so obtained is denoted by $N\ot_CM$. The tensor product over~$C$ is a strict CDG\+functor $$ \ot_C\:\modrq C\times C\modlq\lrarrow k\modlq, $$ and its restriction to the DG\+subcategories of CDG\+modules is a DG\+functor $$ \ot_C\:\modrc C\times C\modlc\lrarrow k\modlc. $$ A QDG\+functor between CDG\+categories $F\:C\rarrow D$ induces a strict QDG\+func\-tor of inverse image (restriction of scalars) $F^*\:D\modlq\rarrow C\modlq$. Here we use the natural strict equivalence between the CDG\+categories of arbitrary and strict QDG\+functors $C\rarrow\Pre(Ab)$. When $F$ is a CDG\+functor, the functor $F^*$ is a strict CDG\+functor, and it restricts to a DG\+functor $D\modlc\rarrow C\modlc$. 
For any right QDG\+module $N$ and left QDG\+module $M$ over a $k$\+linear CDG\+category $D$ and a $k$\+linear CDG\+functor $F\:C\rarrow D$ there is a natural map of precomplexes of $k$\+modules $F^*(N)\ot_C F^*(M)\rarrow N\ot_D M$, commuting with the differentials. For any CDG\+category $B$ there is a natural strict CDG\+functor $B\rarrow\modrq B$ assigning to an object $X\in B$ the right QDG\+module $R_X\:Y\longmapsto\Hom_B(Y,X)$ over~$B$. Here the differential on $R_X(Y)$ coincides with the differential on $\Hom_B(Y,X)$. A CDG\+module over a DG\+category is called a \emph{DG\+module}. The DG\+categories of left and right DG\+modules over a small DG\+category $C$ are denoted by $C\modld$ and $\modrd C$. In particular, $k\modld$ is yet another notation for the DG\+category of complexes of $k$\+modules for a commutative ring~$k$. If $C$ is a $k$\+linear DG\+category, then the objects of $C\modld$ can be viewed as DG\+functors $C\rarrow k\modld$, and the objects of $\modrd C$ can be viewed as DG\+functors $C^\op\rarrow k\modld$. Given left QDG\+modules $M'$ and $M''$ over $k$\+linear CDG\+categories $B'$ and $B''$, their tensor product $M'\ot_k M''$ is the QDG\+module over $B'\ot_k B''$ defined as the composition of the tensor product of strict QDG\+functors $M'\ot M''\:B'\ot_k B''\rarrow \Pre(k\modl)\ot_k\Pre(k\modl)$ with the strict CDG\+functor of tensor product of precomplexes $\ot_k\:\Pre(k\modl)\ot_k\Pre(k\modl)\rarrow\Pre(k\modl)$. The latter functor assigns to two precomplexes of $k$\+modules their tensor product as $\Gamma$\+graded $k$\+modules, endowed with the differential defined by the usual formula. The tensor product of CDG\+modules is a CDG\+module. 
\subsection{Pseudo-equivalences} \label{pseudo-equi-subsect} Let us call a homogeneous additive functor $F^\#\:C^\#\rarrow D^\#$ between $\Gamma$\+graded additive categories $C^\#$ and $D^\#$ a \emph{pseudo-equivalence} if $F^\#$ is fully faithful and any object $Y\in D^\#$ can be obtained from objects $F(X)$, \ $X\in C^\#$, using the operations of finite direct sum, shift, and passage to a direct summand. {\hfuzz=3pt\par} A CDG\+functor between CDG\+categories $F\:C\rarrow D$ is called a \emph{pseudo-equivalence} if it is fully faithful as a functor between the $\Gamma$\+graded categories and any object $Y\in D$ can be obtained from objects $F(X)$, \ $X\in C$, using the operations of finite direct sum, shift, twist, and passage to a direct summand. The category of (left or right) $\Gamma$\+graded modules over a small $\Gamma$\+graded category $C^\#$ is abelian. Let us call a right $\Gamma$\+graded module $N$ over $C^\#$ (\emph{finitely generated}) \emph{free} if it is a (finite) direct sum of representable modules $R_X$, where $X\in C^\#$. A $\Gamma$\+graded module $P$ over $C^\#$ is a projective object in the abelian category of $\Gamma$\+graded modules if and only if it is a direct summand of a free $\Gamma$\+graded module. A $\Gamma$\+graded module $P$ is a compact projective object (i.~e., a projective object representing a covariant functor preserving infinite direct sums on the category of modules) if and only if it is a direct summand of a finitely generated free $\Gamma$\+graded module. In this case, a $\Gamma$\+graded module $P$ is said to be \emph{finitely generated projective}. Given a CDG\+category $B$, denote by $\modrcfp B$ and $\modrqfp B$ the DG\+category of right CDG\+modules and the CDG\+category of right QDG\+modules over $B$, respectively, which are finitely generated projective as $\Gamma$\+graded modules. The representable QDG\+modules $R_X$ are obviously objects of $\modrqfp B$, so there is a strict CDG\+functor $R\:B\rarrow \modrqfp B$.
There is also the strict CDG\+functor of tautological embedding $I\:\modrcfp B\rarrow\modrqfp B$. \begin{lemA} The CDG\+functors $R$ and $I$ are pseudo-equivalences. \end{lemA} \begin{proof} First of all, notice that any two objects of a CDG\+category $C$ that are isomorphic in the $\Gamma$\+graded category $C^\#$ are each other's twists. In particular, so are any two QDG\+modules over a CDG\+category $B$ that are isomorphic as $\Gamma$\+graded $B^\#$\+modules. Hence in order to prove that $R$ is a pseudo-equivalence, it suffices to show that any (finitely generated) projective $\Gamma$\+graded right $B^\#$\+module $P$ admits a QDG\+module structure. Indeed, if there is a $\Gamma$\+graded right $B^\#$\+module $Q$ such that the $\Gamma$\+graded module $P\oplus Q$ admits a differential~$d$ making it a QDG\+module, and $\iota\:P \rarrow P\oplus Q$ and $\pi\: P\oplus Q\rarrow P$ are the embedding of and the projection onto the direct summand $P$ in $P\oplus Q$, then the differential $\pi d\iota$ on $P$ makes it a QDG\+module. To prove that $I$ is a pseudo-equivalence, it suffices to show that the $\Gamma$\+graded $B^\#$\+module $P\oplus P[-1]$ admits a CDG\+module structure for any (finitely generated) projective right $B^\#$\+module $P$. Define the right CDG\+module $Q$ over $B$ with the group $Q(X)$ consisting of formal expressions of the form $p'+d(p'')$, \ $p'$, $p''\in P(X)$, with $P(X)$ embedded into $Q(X)$ as the set of all expressions $p+d(0)$. The differential $d$ on $Q(X)$, being restricted to $P(X)$, maps $p+d(0)$ to $0+d(p)$, and $B$ acts on $P\subset Q$ as it acts on~$P$. The action of $B$ is extended from $P$ to $Q$ in the unique way making the Leibniz rule satisfied, and the differential~$d$ is extended from $P$ to $Q$ in the unique way making the equation on $d^2$ hold (see~\cite[proof of Theorem~3.6]{Pkoszul} for explicit formulas).
There is a natural exact sequence of $\Gamma$\+graded $B^\#$\+modules $0\rarrow P^\#\rarrow Q^\#\rarrow P^\#[-1]\rarrow 0$, which splits, since $P^\#$ is projective. \end{proof} \begin{lemB} If $F\:C\rarrow D$ is a pseudo-equivalence of small CDG\+categories, then the induced strict CDG\+functors $F^*\:D\modlq\rarrow C\modlq$ and\/ $\modrq D\rarrow\modrq C$ are strict equivalences of CDG\+categories. For any QDG\+modules $N\in\modrq D$ and $M\in D\modlq$, the natural map $N\ot_D M\rarrow F^*(N)\ot_C F^*(M)$ is an isomorphism of precomplexes. Besides, the induced DG\+functors $F^*\:D\modlc\rarrow C\modlc$ and\/ $\modrc D\rarrow\modrc C$ are equivalences of DG\+categories. \end{lemB} \begin{proof} First of all, it is obvious that if $F\:C^\#\rarrow D^\#$ is a pseudo-equivalence of $\Gamma$\+graded categories, then the induced functor of restriction of scalars between the categories of (left or right) $\Gamma$\+graded modules over $D$ and $C$ is an equivalence of $\Gamma$\+graded categories. These equivalences transform the functor of tensor product of $\Gamma$\+graded modules over $D$ into the functor of tensor product of $\Gamma$\+graded modules over~$C$. Thus, it remains to check that any QDG\+module over $C$ can be extended to a QDG\+module over~$D$. And this is also straightforward. \end{proof} More generally, one can see that the assertions of Lemma~B hold for any CDG\+functor $F\:C\rarrow D$ that is a pseudo-equivalence \emph{as a $\Gamma$\+graded functor} $C^\#\rarrow D^\#$. The assertions of both Lemmas A and~B remain valid if one replaces finitely generated projective modules with finitely generated free ones. \begin{lemC} \hfuzz=4pt If a CDG\+functor $F\:C\rarrow D$ is a pseudo-equivalence of CDG\+categories, then so is the CDG\+functor $F^\op\: C^\op\rarrow D^\op$. 
If $k$\+linear CDG\+functors $F'\:C'\allowbreak\rarrow D'$ and $F''\:C''\rarrow D''$ are pseudo-equivalences of CDG\+categories, then so is the CDG\+functor $F'\ot F''\:C'\ot_k C''\rarrow D'\ot_k D''$. \qed \end{lemC} \Section{Ext and Tor of the Second Kind} This section contains an exposition of the classical theory of the two kinds of differential derived functors, largely following~\cite{HMS}, except that we deal with CDG\+categories rather than DG\+(co)algebras. The classical theory allows one to establish an isomorphism between the Hochschild (co)homology of the second kind of a CDG\+category $B$ and that of the DG\+category $C$ of right CDG\+modules over $B$ that are finitely generated and projective as graded $B$\+modules. We also construct a natural map between the two kinds of Hochschild (co)homology of any DG\+category~$C$ linear over a field~$k$. \subsection{Ext and Tor of the first kind} \label{ext-tor-first-kind} Given a DG\+category $D$, denote by $Z^0(D)$ the category whose objects are the objects of $D$ and whose morphisms are the closed (i.~e., annihilated by the differential) morphisms of degree~$0$ in~$D$. Let $H^0(D)$ denote the category whose objects are the objects of $D$ and whose morphisms are the elements of the cohomology groups of degree~$0$ of the complexes of morphisms in~$D$. The categories $Z^0(D)$ and $H^0(D)$ have preadditive category structures (i.~e., the abelian group structures on the sets of morphisms). In addition, these categories are endowed with the shift functors $X\maps X[n]$ for all $n\in\Gamma$, provided that shifts of all objects exist in~$D$ (see~\ref{cdg-categories-subsect}). Finally, let $H(D)$ denote the $\Gamma$\+graded category whose objects are the objects of $D$ and whose morphisms are the $\Gamma$\+graded groups of cohomology of the complexes of morphisms in~$D$. Let $k$ be a commutative ring and $C$ be a small $k$\+linear DG\+category. 
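For example, for the DG\+category $D=k\modld$ of $\Gamma$\+graded complexes of $k$\+modules one has, immediately from the definitions,
$$
 \Hom_{Z^0(D)}(X,Y)=Z^0\Hom_D(X,Y)
 \qquad\text{and}\qquad
 \Hom_{H^0(D)}(X,Y)=H^0\Hom_D(X,Y),
$$
i.~e., $Z^0(k\modld)$ is the category of complexes and chain maps (the closed morphisms of degree~$0$), while $H^0(k\modld)$ is the homotopy category of complexes, the degree\+$0$ coboundaries in the complex $\Hom_D(X,Y)$ being exactly the null-homotopic chain maps.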
Let us endow the additive categories $Z^0(C\modld)$ and $Z^0(\modrd C)$ with the following exact category structures. A short sequence $M'\rarrow M\rarrow M''$ of DG\+modules and closed morphisms between them is exact if and only if \emph{both} the short sequence of $\Gamma$\+graded $C^\#$\+modules $M'{}^\#\rarrow M^\#\rarrow M''{}^\#$ and the short sequence of $\Gamma$\+graded $H(C)$\+modules of cohomology $H(M')\rarrow H(M)\rarrow H(M'')$ are exact in the abelian categories of $\Gamma$\+graded modules and their homogeneous morphisms of degree~$0$. In other words, for any object $X\in C$ the sequence $M'(X)\rarrow M(X)\rarrow M''(X)$ must be a short exact sequence of complexes of $k$\+modules whose $\Gamma$\+graded cohomology modules also form a short exact sequence (i.~e., the boundary maps vanish). Denote the additive category $Z^0(k\modld)$ of $\Gamma$\+graded complexes of $k$\+modules with its exact category structure defined above by $\Comex(k\modl)$. Let $d$~denote the differentials on objects of $\Comex(k\modl)$. We will be interested in the derived categories $\DD^-(\Comex(k\modl))$ and $\DD^+(\Comex(k\modl))$ of complexes, bounded from above or below, over the exact category $\Comex(k\modl)$. The differential acting between the terms of a complex over $\Comex(k\modl)$ will be denoted by~$\d$. The objects of $\DD^-(\Comex(k\modl))$ can be viewed as bicomplexes with one grading by the integers bounded from above and the other grading by elements of the group~$\Gamma$. (The differential~$d$ preserves the grading by the integers, while changing the $\Gamma$\+valued grading; and the differential~$\d$ raises the grading by the integers by~$1$, while preserving the $\Gamma$\+valued grading.) To any such bicomplex, one can assign its $\Gamma$\+graded total complex, constructed by taking infinite direct sums along the diagonals. 
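Explicitly, if $K^{i,\gamma}$ denotes the component of such a bicomplex in integral degree~$i$ and $\Gamma$\+degree~$\gamma$, then (with one possible convention for the diagonals; the notation $\bar\imath$ for the image of~$i$ under the map $\Z\rarrow\Gamma$ determined by the shift is ours) the total complex has the components
$$
 \bigoplus\nolimits_{i\in\Z} K^{i,\,\gamma-\bar\imath},
 \qquad\gamma\in\Gamma,
$$
the total differential being $d\pm\d$, with the sign on~$\d$ fixed on each component so that the square of the total differential vanishes.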
This defines a triangulated functor from $\DD^-(\Comex(k\modl))$ to the unbounded derived category of $\Gamma$\+graded complexes of $k$\+modules, $$ \Tot^\oplus\:\DD^-(\Comex(k\modl))\lrarrow \DD(k\modl). $$ Analogously, the objects of $\DD^+(\Comex(k\modl))$ can be viewed as bicomplexes with one grading by the integers bounded from below and the other grading by elements of the group~$\Gamma$. To any such bicomplex, one can assign its $\Gamma$\+graded total complex, constructed by taking infinite products along the diagonals. This defines a triangulated functor $$ \Tot^\sqcap\:\DD^+(\Comex(k\modl))\lrarrow \DD(k\modl). $$ Any complex over $\Comex(k\modl)$ bounded from above (resp.,\ below) that becomes exact (with respect to the differential~$\d$) after passing to the cohomology of the $\Gamma$\+graded complexes of $k$\+modules (with respect to the differential~$d$) is annihilated by the functor $\Tot^\oplus$ (resp.,\ $\Tot^\sqcap$). \begin{rem} The latter assertion does not hold for the total complexes of unbounded complexes over $\Comex(k\modl)$, constructed by taking infinite direct sums or products along the diagonals. That is the reason why we define the functors $\Tot^\oplus$ and $\Tot^\sqcap$ for bounded complexes only. The assertion holds, however, for the functor of ``Laurent totalization'' of unbounded complexes, which coincides with $\Tot^\oplus$ for complexes bounded from above and with $\Tot^\sqcap$ for complexes bounded from below. See~\cite{HMS} and the introduction to~\cite{Pkoszul} (cf.\ Remark~\ref{second-kind-general}). \end{rem} Now consider the functor of two arguments (see~\ref{qdg-modules-subsect}) \begin{equation} \label{dg-tensor-product} \ot_C\:Z^0(\modrd C)\times Z^0(C\modld)\lrarrow\Comex(k\modl). \end{equation} We would like to construct its left derived functor $$ \ot_C^\L\:Z^0(\modrd C)\times Z^0(C\modld) \lrarrow\DD^-(\Comex(k\modl)). 
$$ For this purpose, notice that both exact categories $Z^0(\modrd C)$ and $Z^0(C\modld)$ have enough projective objects. Specifically, for any object $X\in C$ the representable DG\+module $R_X\in Z^0(\modrd C)$ is projective, and so is the cone of the identity endomorphism of~$R_X$ (taken in the DG\+category $\modrd C$). Any object of $Z^0(\modrd C)$ is the image of an admissible epimorphism acting from an (infinite) direct sum of shifts of objects of the above two types. Given a right DG\+module $N$ and a left DG\+module $M$ over $C$, choose a left projective resolution $Q_\bu$ of $N$ and a left projective resolution $P_\bu$ of $M$ in the exact categories $Z^0(\modrd C)$ and $Z^0(C\modld)$. When substituted as one of the arguments of the functor~$\ot_C$, any projective object of one of the exact categories of DG\+modules makes this functor an exact functor from the other exact category of DG\+modules to the exact category $\Comex(k\modl)$. This allows one to define $N\ot_C^\L M\in\DD^-(\Comex(k\modl))$ as the object represented either by the complex $Q_\bu\ot_C M$, or by the complex $N\ot_C P_\bu$, or by the total complex of the bicomplex $Q_\bu\ot_C P_\bu$. Analogously, consider the functor of two arguments \begin{equation} \label{dg-hom} \Hom^C\:Z^0(C\modld)^\op\times Z^0(C\modld)\rarrow\Comex(k\modl), \end{equation} assigning to any two left DG\+modules over $C$ the complex of morphisms between them as DG\+functors $C\rarrow k\modld$. We would like to construct its right derived functor $$ \R\!\Hom^C\:Z^0(C\modld)^\op\times Z^0(C\modld) \rarrow\DD^+(\Comex(k\modl)). $$ Notice that the exact category $Z^0(C\modld)$ has enough injective objects. For any projective object $Q\in Z^0(\modrd C)$ and an injective $k$\+module $I$, the object $\Hom_k(Q,I)\in Z^0(C\modld)$ is injective, and any injective object in the exact category $Z^0(C\modld)$ is a direct summand of an object of this type. 
To prove these assertions, it suffices to check that for any DG\+module $M\in Z^0(C\modld)$, any object $X\in C$, and any element of $M(X)$ or $H(M)(X)$ there is a DG\+module $Q$ as above and a closed morphism of DG\+modules $M\rarrow\Hom_k(Q,I)$ that is injective on the chosen element. Given left DG\+modules $L$ and $M$ over $C$, choose a left projective resolution $P_\bu$ of $L$ and a right injective resolution $J^\bu$ of $M$ in the exact category $Z^0(C\modld)$. Substituting a projective object as the first argument or an injective object as the second argument of the functor $\Hom^C$, one obtains an exact functor from the exact category of DG\+modules in the other argument to the exact category $\Comex(k\modl)$. This allows one to define $\R\!\Hom^C(L,M)\in\DD^+(\Comex(k\modl))$ as the object represented either by the complex $\Hom^C(P_\bu,M)$, or by the complex $\Hom^C(L,J^\bu)$, or by the total complex of the bicomplex $\Hom^C(P_\bu,J^\bu)$. Composing the derived functor $\ot_C^\L$ with the functor $\Tot^\oplus$, we obtain the derived functor $$ \Tor^C\:Z^0(\modrd C)\times Z^0(C\modld)\lrarrow\DD(k\modl). $$ Similarly, composing the derived functor $\R\!\Hom^C$ with the functor $\Tot^\sqcap$, we obtain the derived functor $$ \Ext_C\:Z^0(C\modld)^\op\times Z^0(C\modld)\lrarrow\DD(k\modl). $$ One can compute the derived functors $\Tor^C$ and $\Ext_C$ using resolutions of a more general type than above. Specifically, let $N$ be a right DG\+module and $M$ a left DG\+module over~$C$. Let $\dsb\rarrow F_2\rarrow F_1\rarrow F_0\rarrow M$ be a complex of left DG\+modules over $C$ (and closed morphisms between them) such that the complex of $\Gamma$\+graded $H(C)$\+modules $\dsb\rarrow H(F_2)\rarrow H(F_1)\rarrow H(F_0)\rarrow H(M)\rarrow0$ is exact. Assume that the DG\+modules $F_i$ are \emph{h\+flat} (homotopy flat), i.~e., for any $i\ge0$ and any right DG\+module $R$ over $C$ such that $H(R)=0$ one has $H(R\ot_C F_i)=0$. 
Let $Q_\bu$ be a left projective resolution of the DG\+module $N$ in the exact category of right DG\+modules over~$C$. Then the natural maps $\Tot^\oplus(Q_\bu\ot_C F_\bu)\rarrow\Tot^\oplus (Q_\bu\ot_C M)$ and $\Tot^\oplus(Q_\bu\ot_C F_\bu)\rarrow\Tot^\oplus (N\ot_C F_\bu)$ are quasi-isomorphisms, so the $\Gamma$\+graded complex of $k$\+modules $\Tot^\oplus(N\ot_C F_\bu)$ represents the object $\Tor^C(N,M)$ in $\DD(k\modl)$. Analogously, let $L$ and $M$ be left DG\+modules over~$C$. Let $\dsb\rarrow P_2\rarrow P_1\rarrow P_0\rarrow L$ be a complex of left DG\+modules over $C$ which becomes exact after passing to the $\Gamma$\+graded cohomology modules. Assume that the DG\+modules $P_i$ are \emph{h\+projective} (homotopy projective), i.~e., for any $i\ge0$ and any left DG\+module $R$ over $C$ such that $H(R)=0$ one has $H(\Hom^C(P_i,R))=0$. Then the complex of $k$\+modules $\Tot^\sqcap(\Hom^C(P_\bu,M))$ represents the object $\Ext_C(L,M)$ in $\DD(k\modl)$. Similarly, let $M\rarrow J^0\rarrow J^1\rarrow J^2\rarrow\dsb$ be a complex of left DG\+modules over $C$ which becomes exact after passing to the cohomology modules. Assume that the DG\+modules $J^i$ are \emph{h\+injective}, i.~e., for any $i\ge0$ and any left DG\+module $R$ over $C$ such that $H(R)=0$ one has $H(\Hom^C(R,J^i))=0$. Then the complex of $k$\+modules $\Tot^\sqcap(\Hom^C(L,J^\bu))$ represents the object $\Ext_C(L,M)$. In particular, it follows that the functors $\Tor^C$ and $\Ext_C$ transform quasi-isomorphisms of DG\+modules (i.~e., morphisms of DG\+modules inducing isomorphisms of the $\Gamma$\+graded cohomology modules) in any of their arguments into isomorphisms in $\DD(k\modl)$. Furthermore, consider the case when the complex of morphisms between any two objects of $C$ is an h\+flat complex of $k$\+modules. 
Then for any left DG\+module $M$ over $C$ such that the complex of $k$\+modules $M(X)$ is h\+flat for any object $X\in C$, the bar-construction \begin{multline*}\textstyle \dsb\lrarrow\bigoplus_{Y,Z\in C}C(X,Y)\ot_k C(Y,Z) \ot_k M(Z) \\ \textstyle \lrarrow \bigoplus_{Y\in C} C(X,Y)\ot_k M(Y)\lrarrow M(X), \end{multline*} where we use the simplifying notation $C(X,Y)=\Hom_C(Y,X)$ for any objects $X$, $Y\in C$, defines a left resolution of the DG\+module $M$ which consists of h\+flat DG\+modules over $C$ and remains exact after passing to the cohomology modules. Thus, for any right DG\+module $N$ over $C$ the total complex of the bar-complex $$\textstyle \dsb\lrarrow\bigoplus_{Y,Z\in C} N(Y)\ot_k C(Y,Z)\ot_k M(Z) \lrarrow\bigoplus_{Y\in C} N(Y)\ot_k M(Y), $$ constructed by taking infinite direct sums along the diagonals, represents the object $\Tor^C(N,M)$ in $\DD(k\modl)$. The h\+flatness condition on the DG\+module $M$ can be replaced with the similar condition on the DG\+module~$N$. Analogously, assume that the complex of morphisms between any two objects of $C$ is an h\+projective complex of $k$\+modules. Let $L$ and $M$ be left DG\+modules over $C$ such that either the complex of $k$\+modules $L(X)$ is h\+projective for any object $X\in C$ or the complex of $k$\+modules $M(X)$ is h\+injective for any object $X\in C$. Then the total complex of the cobar-complex $$\textstyle \prod_{X\in C}\Hom_k(L(X),M(X))\mskip-.1\thinmuskip \lrarrow\mskip -.1\thinmuskip \prod_{X,Y\in C}\Hom_k(C(X,Y)\ot_k L(Y)\;M(X)) \mskip-.1\thinmuskip\lrarrow\mskip-.1\thinmuskip\dsb, $$ constructed by taking infinite products along the diagonals, represents the object $\Ext_C(L,M)$ in $\DD(k\modl)$. Given a $k$\+linear DG\+functor $F\:C\rarrow D$, a right DG\+module $N$ over $D$, and a left DG\+module $M$ over $D$, there is a natural morphism \begin{equation} \label{tor-first-kind-F-star} \Tor^C(F^*N,F^*M)\lrarrow\Tor^D(N,M) \end{equation} in $\DD(k\modl)$. 
Analogously, given a $k$\+linear DG\+functor $F\:C\rarrow D$ and left DG\+modules $L$ and $M$ over $D$, there is a natural morphism \begin{equation} \label{ext-first-kind-F-star} \Ext_D(L,M)\lrarrow\Ext_C(F^*L,F^*M) \end{equation} in $\DD(k\modl)$. If the functor $H(F)\:H(C)\rarrow H(D)$ is a pseudo-equivalence of $\Gamma$\+graded categories, then the natural morphisms between the objects $\Tor$ and $\Ext$ over $C$ and $D$ are isomorphisms for any DG\+modules $L$, $M$, and~$N$. This follows from the fact that the similar morphisms between the objects $\Tor$ and $\Ext$ over $H(C)$ and $H(D)$ are isomorphisms. \subsection{Ext and Tor of the second kind: general case} \label{second-kind-general} Let $B$ be a small $k$\+linear CDG\+category. Then the categories $Z^0(B\modlc)$ and $Z^0(\modrc B)$ of (left and right) CDG\+mod\-ules over $B$ and closed morphisms of degree~$0$ between them are abelian. In particular, consider the abelian category $Z^0(k\modlc)$ of $\Gamma$\+graded complexes of $k$\+modules and denote it by $\Comab(k\modl)$. We will be interested in the derived categories $\DD^-(\Comab(k\modl))$ and $\DD^+(\Comab(k\modl))$ of complexes, bounded from above or below, over the abelian category $\Comab(k\modl)$. The objects of $\DD^-(\Comab(k\modl))$ can be viewed as bicomplexes with one grading by the integers bounded from above and the other grading by elements of the group~$\Gamma$. To any such bicomplex, one can assign its $\Gamma$\+graded total complex, constructed by taking infinite products along the diagonals. This defines a triangulated functor $$ \Tot^\sqcap\:\DD^-(\Comab(k\modl))\lrarrow\DD(k\modl). $$ Analogously, the objects of $\DD^+(\Comab(k\modl))$ can be viewed as bicomplexes with one grading by the integers bounded from below and the other grading by elements of the group~$\Gamma$. To any such bicomplex, one can assign its $\Gamma$\+graded total complex, constructed by taking infinite direct sums along the diagonals. 
This defines a triangulated functor $$ \Tot^\oplus\:\DD^+(\Comab(k\modl))\lrarrow\DD(k\modl). $$ \begin{rem} \emergencystretch=3em\hbadness=5000 The functors of total complexes of unbounded complexes over $\Comab(k\modl)$, constructed by taking infinite direct sums or infinite products along the diagonals, are not well-defined on the derived category $\DD(\Comab(k\modl))$. The procedure of ``Laurent totalization'' of unbounded complexes, which coincides with $\Tot^\sqcap$ for complexes bounded from above and with $\Tot^\oplus$ for complexes bounded from below, defines a functor on $\DD(\Comab(k\modl))$, though. Notice that this Laurent totalization is different from the one discussed in Remark~\ref{ext-tor-first-kind} (the chosen direction along the diagonals is opposite in the two cases). \end{rem} Now consider the functor of two arguments (see~\ref{qdg-modules-subsect}) \begin{equation} \label{cdg-tensor-product} \ot_B\:Z^0(\modrc B)\times Z^0(B\modlc)\lrarrow\Comab(k\modl). \end{equation} We would like to construct its left derived functor $$ \ot_B^\L\:Z^0(\modrc B)\times Z^0(B\modlc)\lrarrow \DD^-(\Comab(k\modl)). $$ Notice that the abelian categories $Z^0(\modrc B)$ and $Z^0(B\modlc)$ have enough projective objects. More precisely, for any projective left $\Gamma$\+graded module $P$ over $B^\#$ the corresponding freely generated CDG\+module $Q$, as constructed in the proof of Lemma~\ref{pseudo-equi-subsect}.A, is a projective object of $Z^0(B\modlc)$. Any projective object in $Z^0(B\modlc)$ is a direct summand of an object of this type. For any projective object $Q$ in $Z^0(B\modlc)$, the underlying left $\Gamma$\+graded $B^\#$\+module $Q^\#$ is projective. Let us call a left $\Gamma$\+graded $B^\#$\+module $P^\#$ \emph{flat} if the functor of tensor product with $P^\#$ over $B^\#$ is exact on the abelian category of right $\Gamma$\+graded $B^\#$\+modules. 
Given a left CDG\+module $P$ over $B$, if the left $B^\#$\+module $P^\#$ is flat, then the functor of tensor product with $P$ is exact as a functor $Z^0(\modrc B)\rarrow\Comab(k\modl)$. Any projective $\Gamma$\+graded $B^\#$\+module is flat. Given a right CDG\+module $N$ and a left CDG\+module $M$ over $B$, choose a left resolution $Q_\bu$ of $N$ in $Z^0(\modrc B)$ and a left resolution $P_\bu$ of $M$ in $Z^0(B\modlc)$ such that the $\Gamma$\+graded $B^\#$\+modules $Q_i^\#$ and $P_i^\#$ are flat. In view of the above remarks, we can define $N\ot_B^\L M\in\DD^-(\Comab(k\modl))$ as the object represented either by the complex $Q_\bu\ot_B M$, or by the complex $N\ot_B P_\bu$, or by the total complex of the bicomplex $Q_\bu\ot_B P_\bu$. Analogously, consider the functor of two arguments \begin{equation} \label{cdg-hom} \Hom^B\:Z^0(B\modlc)^\op\times Z^0(B\modlc)\lrarrow \Comab(k\modl), \end{equation} assigning to any two left CDG\+modules over $B$ the complex of morphisms between them as strict CDG\+functors $B\rarrow k\modlc$. We would like to construct its right derived functor $$ \R\!\Hom^B\:Z^0(B\modlc)^\op\times Z^0(B\modlc)\lrarrow \DD^+(\Comab(k\modl)). $$ Notice that the abelian category $Z^0(B\modlc)$ has enough injective objects. For any injective object $J$ in $Z^0(B\modlc)$, the underlying left $\Gamma$\+graded $B^\#$\+module $J^\#$ is injective. One can construct these injective CDG\+modules as the duals to projective (or flat) right CDG\+modules (see the discussion of injective DG\+modules in~\ref{ext-tor-first-kind}) or obtain them as the CDG\+modules cofreely cogenerated by injective $\Gamma$\+graded $B^\#$\+modules (see the construction of injective resolutions in~\cite[proof of Theorem~3.6]{Pkoszul}). 
Given left CDG\+modules $L$ and $M$ over $B$, choose a left resolution $P_\bu$ of $L$ and a right resolution $J^\bu$ of $M$ in $Z^0(B\modlc)$ such that the $\Gamma$\+graded $B^\#$\+modules $P_i^\#$ are projective and the $\Gamma$\+graded $B^\#$\+modules $J^i{}^\#$ are injective. Define $\R\!\Hom^B(L,M)\in\DD^+(\Comab(k\modl))$ as the object represented either by the complex $\Hom^B(P_\bu,M)$, or by the complex $\Hom^B(L,J^\bu)$, or by the total complex of the bicomplex $\Hom^B(P_\bu,J^\bu)$. Composing the derived functor $\ot^\L_B$ with the functor $\Tot^\sqcap$, we obtain the derived functor $$ \Tor^{B,I\!I}\:Z^0(\modrc B)\times Z^0(B\modlc)\lrarrow \DD(k\modl). $$ Similarly, composing the derived functor $\R\!\Hom^B$ with the functor $\Tot^\oplus$, we obtain the derived functor $$ \Ext^{I\!I}_B\:Z^0(B\modlc)^\op\times Z^0(B\modlc)\lrarrow \DD(k\modl). $$ The derived functors $\Tor^{B,I\!I}$ and $\Ext^{I\!I}_B$ are called the \emph{Tor and Ext of the second kind} of CDG\+modules over~$B$. Notice that the derived functors $\ot_B^\L$ and $\R\!\Hom^B$ assign distinguished triangles to short exact sequences of CDG\+modules in any argument, hence so do the derived functors $\Tor^{B,I\!I}$ and $\Ext^{I\!I}_B$. Given a $k$\+linear CDG\+functor $F\:B\rarrow C$, a right CDG\+module $N$ over $C$, and a left CDG\+module $M$ over $C$, there is a natural morphism \begin{equation} \label{tor-second-kind-F-star} \Tor^{B,I\!I}(F^*N,F^*M)\lrarrow\Tor^{C,I\!I}(N,M) \end{equation} in $\DD(k\modl)$. Analogously, given a $k$\+linear CDG\+functor $F\:B\rarrow C$ and left CDG\+modules $L$ and $M$ over $C$, there is a natural morphism \begin{equation} \label{ext-second-kind-F-star} \Ext_C^{I\!I}(L,M)\lrarrow\Ext_B^{I\!I}(F^*L,F^*M) \end{equation} in $\DD(k\modl)$. If the functor $F^\#\:B^\#\rarrow C^\#$ is a pseudo-equivalence of $\Gamma$\+graded categories, then these natural morphisms are isomorphisms for any CDG\+modules $L$, $M$, and~$N$. 
Now let $C$ be a small $k$\+linear DG\+category. Then the identity functors from the exact categories $Z^0(\modrd C)$ and $Z^0(C\modld)$ to the abelian categories $Z^0(\modrc C)$ and $Z^0(C\modlc)$ are exact, so any resolution in $Z^0(\modrd C)$ or $Z^0(C\modld)$ is also a resolution in $Z^0(\modrc C)$ or $Z^0(C\modlc)$. Besides, any DG\+module that is projective or injective in the exact category $Z^0(\modrd C)$ or $Z^0(C\modld)$ is also projective or injective as a $\Gamma$\+graded $C^\#$\+module. It follows that there are natural morphisms \begin{align} \label{tor-first-second} \Tor^C(N,M)&\lrarrow\Tor^{C,I\!I}(N,M) \\ \intertext{and} \label{ext-first-second} \Ext_C^{I\!I}(L,M)&\lrarrow\Ext_C(L,M) \end{align} in $\DD(k\modl)$ for any DG\+modules $L$, $M$, and $N$ over~$C$. \subsection{Flat/projective case} \label{second-kind-flat} Let $B$ be a small $k$\+linear CDG\+category, $N$ a right CDG\+module over $B$, and $M$ a left CDG\+module over $B$. Consider the $\Gamma$\+graded complex of $k$\+modules $\Br^\sqcap(N,B,M)$ constructed in the following way. As a $\Gamma$\+graded $k$\+module, $\Br^\sqcap(N,B,M)$ is obtained by totalizing a bigraded $k$\+module with one grading by elements of the group $\Gamma$ and the other grading by nonpositive integers, the totalizing being performed by taking infinite products along the diagonals. The component of degree $-i\in\Z$ of that bigraded module is the $\Gamma$\+graded $k$\+module $$\textstyle \bigoplus_{X_0,\dsc,X_i\in B} N(X_0)\ot_k B(X_0,X_1)\ot_k \dsb\ot_k B(X_{i-1},X_i)\ot_k M(X_i), $$ where, as in~\ref{ext-tor-first-kind}, we use the simplifying notation $B(X,Y)=\Hom_B(Y,X)$. 
The differential on $\Br^\sqcap(N,B,M)$ is the sum of the three components $\d$, $d$, and $\delta$ given by the formulas \begin{align*} &\d(n\ot b_1\ot\dsb\ot b_i\ot m) = nb_1\ot b_2\ot\dsb\ot b_i\ot m - n\ot b_1b_2\ot b_3\ot\dsb\ot b_i\ot m \\ &+ \dsb + (-1)^{i-1} n\ot b_1\ot\dsb\ot b_{i-2}\ot b_{i-1}b_i\ot m + (-1)^i n\ot b_1\ot\dsb \ot b_{i-1}\ot b_im, \end{align*} where the products $b_jb_{j+1}$ denote the composition of morphisms in $B$ and the products $nb_1$ and $b_im$ denote the action of morphisms in $B$ on the CDG\+modules, \begin{align*} (-1)^id(n\ot b_1\ot\dsb\ot b_i\ot m) &= d(n)\ot b_1\ot\dsb\ot b_i\ot m \\ &+ (-1)^{|n|} n\ot d(b_1)\ot b_2\ot\dsb\ot b_i\ot m + \dsb \\ &+ (-1)^{|n|+|b_1|+\dsb+|b_i|} n\ot b_1\ot\dsb\ot b_i\ot d(m), \end{align*} and \begin{multline*} \delta(n\ot b_1\ot \dsb\ot b_i\ot m) = n\ot h\ot b_1\ot\dsb\ot b_i \ot m \\ - n\ot b_1\ot h\ot b_2\ot\dsb\ot b_i\ot m + \dsb + (-1)^i n\ot b_1\ot\dsb\ot b_i\ot h\ot m. \end{multline*} \begin{propA} Assume that all the $\Gamma$\+graded $k$\+modules $B^\#(X,Y)$ are flat, and either all the $\Gamma$\+graded $k$\+modules $N^\#(X)$ are flat, or all the $\Gamma$\+graded $k$\+modules $M^\#(X)$ are flat. Then the complex\/ $\Br^\sqcap(N,B,M)$ represents the object\/ $\Tor^{B,I\!I} (N,M)$ in the derived category\/ $\DD(k\modl)$. \end{propA} \begin{proof} Choose a left resolution $Q_\bu$ of the right CDG\+module $N$ and a left resolution $P_\bu$ of the left CDG\+module $M$ such that the $\Gamma$\+graded $B^\#$\+modules $P_j^\#$ and $Q_j^\#$ are flat. Consider the tricomplex $\Br^\sqcap(Q_\bu,B,P_\bu)$ and construct its $\Gamma$\+graded total complex by taking infinite products along the diagonals. Then this total complex maps naturally to both the complex $\Br^\sqcap(N,B,M)$ and the total complex $\Tot^\sqcap(Q_\bu\ot_B P_\bu)$ of the tricomplex $Q_\bu\ot_B P_\bu$, constructed also by taking infinite products along the diagonals. 
These morphisms of $\Gamma$\+graded complexes are both quasi-isomorphisms. Cf.\ the proof of Proposition~\ref{hochschild-subsect}.A below, where some additional details can be found. \end{proof} Let $F\:B\rarrow C$ be a $k$\+linear CDG\+functor, $N$ be a right CDG\+module over $C$, and $M$ be a left CDG\+module over~$C$. Then there is a natural morphism of complexes of $k$\+modules $F_*\:\Br^\sqcap(F^*N,B,F^*M)\rarrow\Br^\sqcap(N,C,M)$ given by the rule \begin{multline} \label{bar-cdg-functorial} \textstyle F_*(n\ot b_1\ot\dsb\ot b_i\ot m) = \sum_{j_0,\dsc,j_i=0}^\infty (-1)^{\rho(j_0,\dsc,j_i;\.|n|,|b_1|,\dsc,|b_i|)} \\ n\ot a^{\ot j_0}\ot F(b_1)\ot a^{\ot j_1}\ot\dsb\ot F(b_i) \ot a^{\ot j_i}\ot m, \end{multline} where \begin{multline} \label{rho-sign-formula} \rho(j_0,\dsc,j_i;\.t_0,t_1,\dsc,t_i) = (j_0+\dsb+j_i-1)(j_0+\dsb+j_i)/2 \\ +j_0(i+1)+j_1i+\dsb+j_i +j_0t_0+j_1(t_0+t_1)+\dsb+j_i(t_0+t_1+\dsb+t_i). \end{multline} The image of an arbitrary element in $\Br^\sqcap(F^*N,B,F^*M)$ is constructed as the sum of the images of (the infinite number of) its bihomogeneous components, the sum being convergent bidegree-wise in $\Br^\sqcap(N,C,M)$. Suppose the CDG\+categories $B$ and $C$ satisfy the assumptions of Proposition~A, and so does one of the CDG\+modules $N$ and~$M$. Then the morphism of bar-complexes $F_*$ represents the morphism~\eqref{tor-second-kind-F-star} of the objects $\Tor$ in $\DD(k\modl)$. Now let $L$ and $M$ be left CDG\+modules over~$B$. Consider the $\Gamma$\+graded complex of $k$\+modules $\Cb^\oplus(L,B,M)$ constructed as follows. As a $\Gamma$\+graded $k$\+module, $\Cb^\oplus(L,B,M)$ is obtained by totalizing a bigraded $k$\+module with one grading by elements of the group $\Gamma$ and the other grading by nonnegative integers, the totalizing being done by taking infinite direct sums along the diagonals. 
The component of degree $i\in\Z$ of that bigraded module is the $\Gamma$\+graded $k$\+module $$ \textstyle \prod_{X_0,\dsc,X_i\in B} \Hom_k(B(X_0,X_1)\ot_k\dsb\ot_k B(X_{i-1},X_i)\ot_k L(X_i)\;M(X_0)). $$ The differential on $\Cb^\oplus(L,B,M)$ is the sum of the three components $\d$, $d$, and $\delta$ given by the formulas \begin{gather*} \begin{split} &(\d f)(b_1,\dsc,b_{i+1},l) = (-1)^{|f||b_1|} b_1f(b_2,\dsc,b_{i+1},l) - f(b_1b_2,b_3,\dsc,b_{i+1},l) \\ &+ \dsb + (-1)^i f(b_1,\dsc,b_{i-1}, b_ib_{i+1},l) + (-1)^{i+1} f(b_1,\dsc,b_i,b_{i+1}l), \end{split} \displaybreak[1]\\ \begin{split} &(-1)^i(df)(b_1,\dsc,b_i,l) = d(f(b_1,\dsc,b_i,l)) - (-1)^{|f|}f(db_1,b_2,\dsc,b_i,l) \\ &- (-1)^{|f|+|b_1|} f(b_1,db_2,b_3,\dsc,b_i,l) - \dsb - (-1)^{|f|+|b_1|+\dsb+|b_i|} f(b_1,\dsc,b_i,dl), \end{split} \end{gather*} and \begin{align*} (\delta f)(b_1,\dsc,b_{i-1},l) &= -f(h,b_1,\dsc,b_{i-1},l) \\ &+ f(b_1,h,\dsc,b_{i-1},l) - \dsb + (-1)^i f(b_1,\dsc,b_{i-1},h,l). \end{align*} \begin{propB} Assume that all the $\Gamma$\+graded $k$\+modules $B^\#(X,Y)$ are projective, and either all the $\Gamma$\+graded $k$\+modules $L^\#(X)$ are projective, or all the $\Gamma$\+graded $k$\+modules $M^\#(X)$ are injective. Then the complex\/ $\Cb^\oplus(L,B,M)$ represents the object\/ $\Ext_B^{I\!I}(L,M)$ in the derived category\/ $\DD(k\modl)$. \end{propB} \begin{proof} Choose a left resolution $P_\bu$ of the left CDG\+module $L$ and a right resolution $J^\bu$ of the left CDG\+module $M$ such that the $\Gamma$\+graded $B^\#$\+modules $P_j^\#$ are projective and the $\Gamma$\+graded $B^\#$\+modules $J^j{}^\#$ are injective. Consider the tricomplex $\Cb^\oplus(P_\bu,B,J^\bu)$ and construct its $\Gamma$\+graded total complex by taking infinite direct sums along the diagonals. Both the complex $\Cb^\oplus(L,B,M)$ and the total complex $\Tot^\oplus(\Hom^B(P_\bu,J^\bu))$ of the tricomplex $\Hom^B(P_\bu,J^\bu)$ map quasi-isomorphically into the above total complex. 
\end{proof} Let $F\:B\rarrow C$ be a $k$\+linear CDG\+functor, and $L$ and $M$ be left CDG\+modules over~$C$. Then there is a natural morphism of complexes of $k$\+modules $F^*\:\Cb^\oplus(L,C,\allowbreak M)\rarrow\Cb^\oplus(F^*L,B,F^*M)$ given by the rule \begin{multline} \label{cobar-cdg-functorial}\textstyle (F^*f)(b_1\ot\dsb\ot b_i\ot l)=\sum_{j_0,\dsc,j_i=0}^\infty (-1)^{\lambda(j_0,\dsc,j_i;|f|,|b_1|,\dsc,|b_i|)} \\ f(a^{\ot j_0}\ot F(b_1)\ot a^{\ot j_1}\ot\dsb \ot F(b_i)\ot a^{\ot j_i}\ot l), \end{multline} where \begin{align} \label{sigma-sign-formula} \lambda(j_0,\dsc,j_i;\.t_0,t_1,\dsc,t_i) &= j_0i+j_1(i-1)+\dsb+j_{i-1} \\ \notag &+j_0t_0+j_1(t_0+t_1)+\dsb+j_i(t_0+t_1+\dsb+t_i). \end{align} Suppose the CDG\+categories $B$ and $C$ satisfy the assumptions of Proposition~B, and so does one of the CDG\+modules $L$ and~$M$. Then the morphism of cobar-complexes $F^*$ represents the morphism~\eqref{ext-second-kind-F-star} of the objects $\Ext$ in $\DD(k\modl)$. Denote by $\Br^\oplus(N,B,M)$ the $\Gamma$\+graded complex of $k$\+modules constructed in the same way as $\Br^\sqcap(N,B,M)$, except that the totalization is being done by taking infinite direct sums along the diagonals. Similarly, denote by $\Cb^\sqcap(L,B,M)$ the $\Gamma$\+graded complex of $k$\+modules constructed in the same way as $\Cb^\oplus(L,B,M)$ except that the totalization is being done by taking infinite products along the diagonals. Assume that $C$ is a small DG\+category in which the complex of morphisms between any two objects is an h\+flat complex of flat $k$\+modules, and either a right DG\+module $N$ or a left DG\+module $M$ over $C$ is such that all the complexes of $k$\+modules $N(X)$ or $M(X)$ are h\+flat complexes of flat $k$\+modules. Then the natural map $\Br^\oplus(N,C,M)\rarrow\Br^\sqcap(N,C,M)$ represents the morphism $\Tor^C(N,M)\rarrow \Tor^{C,I\!I}(N,M)$ in $\DD(k\modl)$. 
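As a consistency check on the sign~\eqref{sigma-sign-formula}, one can specialize~\eqref{cobar-cdg-functorial} to the case $i=0$: the sign exponent reduces to $\lambda(j_0;\.|f|)=j_0|f|$, so that
$$\textstyle
 (F^*f)(l)=\sum_{j=0}^\infty(-1)^{j|f|}\, f(a^{\ot j}\ot l).
$$
In particular, when the CDG\+functor $F$ is strict, i.~e., $a=0$, all the summands with $j\ge1$ vanish and $F^*$ becomes the plain restriction of cochains along~$F$.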
Analogously, assume that the complex of morphisms between any two objects in a DG\+category $C$ is an h\+projective complex of projective $k$\+modules, and either a left DG\+module $L$ over $C$ is such that all the complexes of $k$\+modules $L(X)$ are h\+projective complexes of projective $k$\+modules, or a left DG\+module $M$ over $C$ is such that all the complexes of $k$\+modules $M(X)$ are h\+injective complexes of injective $k$\+modules. Then the natural map $\Cb^\oplus(L,C,M)\rarrow\Cb^\sqcap(L,C,M)$ represents the morphism $\Ext_C^{I\!I}(L,M)\rarrow\Ext_C(L,M)$ in $\DD(k\modl)$. Notice that the complexes $\Br^\oplus(N,B,M)$ and $\Cb^\sqcap (L,B,M)$ are \emph{not} functorial with respect to nonstrict CDG\+functors between CDG\+categories $B$ because of the infinite summation in the formulas \eqref{bar-cdg-functorial} and~\eqref{cobar-cdg-functorial}. \begin{propC} Let $B$ be a small $k$\+linear CDG\+category. Assume that the maps $k\rarrow\Hom_B(X,X)$ corresponding to the curvature elements $h_X\in\Hom_B(X,X)$ admit $k$\+linear retractions $\Hom_B(X,X)\rarrow k$, i.~e., they are embeddings of $k$\+module direct summands. In particular, this holds when $k$ is a field and all the elements $h_X$ are nonzero. Then for any CDG\+modules $L$, $M$ and $N$ the complexes $\Br^\oplus(N,B,M)$ and $\Cb^\sqcap(L,B,M)$ are acyclic. \end{propC} \begin{proof} This follows from the fact that the differentials~$\delta$ on the bigraded bar- and cobar-complexes are acyclic. \end{proof} \subsection{Hochschild (co)homology} \label{hochschild-subsect} Let $B$ be a small $k$\+linear CDG\+category. Consider the CDG\+category $B\ot_k B^\op$; since it is naturally isomorphic to its opposite CDG\+category, there is no need to distinguish between the left and the right CDG\+modules over it. Furthermore, there is a natural (left) CDG\+module over the CDG\+category $B\ot_k B^\op$ assigning to an object $(X,Y^\op)\in B\ot_k B^\op$ the precomplex of $k$\+modules $B(X,Y)=\Hom_B(Y,X)$. 
By an abuse of notation, we will denote this CDG\+module (as well as the corresponding right CDG\+module) simply by~$B$. Assume that the $\Gamma$\+graded $k$\+modules $B^\#(X,Y)$ are flat for all objects $X$, $Y\in B$. The \emph{Hochschild homology of the second kind} $HH^{I\!I}_*(B,M)$ of a $k$\+linear CDG\+cate\-gory $B$ with coefficients in a (left) CDG\+module $M$ over $B\ot_k B^\op$ is defined as the homology of the object $\Tor^{B\ot_k B^\op\;I\!I}(B,M)\in\DD(k\modl)$. In particular, the Hochschild homology of the second kind of the CDG\+module $M=B$ over $B\ot_k B^\op$ is called simply the Hochschild homology of the second kind of the $k$\+linear CDG\+category $B$ and denoted by $HH^{I\!I}_*(B,B) = HH^{I\!I}_*(B)$. The \emph{Hochschild cohomology of the second kind} $HH^{I\!I\;*} (B,M)$ of a $k$\+linear CDG\+category $B$ with coefficients in a (left) CDG\+module $M$ over $B\ot_k B^\op$ is defined as the cohomology of the object $\Ext_{B\ot_k B^\op}^{I\!I}(B,M)\in\DD(k\modl)$. In particular, the Hochschild cohomology of the second kind of the CDG\+module $M=B$ over $B\ot_k B^\op$ is called simply the Hochschild cohomology of the second kind of the $k$\+linear CDG\+category $B$ and denoted by $HH^{I\!I\;*}(B,B)=HH^{I\!I\;*}(B)$. \begin{rem} We define the Hochschild (co)homology of the second kind for CDG\+cate\-gories $B$ satisfying the above flatness assumption only, even though our definition makes sense without this requirement. In fact, this assumption is never used in this paper (except in the discussion of explicit complexes below in this section, which requires a stronger projectivity assumption in the cohomology case anyway). However, we believe that our definition is not the \emph{right} one without the flatness assumption, since one is not supposed to use underived nonexact functors when defining (co)homology theories. 
So to define the Hochschild (co)homology of the second kind in the general case one would need to replace a CDG\+category $B$ with a CDG\+category, equivalent to it in some sense and satisfying the flatness requirement. We do not know what such a replacement might look like. The analogue of this procedure for Hochschild (co)homology of the first kind is well-known (in this case it suffices to replace a DG\+category $C$ with a quasi-equivalent DG\+category with h\+flat complexes of morphisms; see below). \end{rem} By the result of~\ref{second-kind-flat}, the Hochschild homology $HH^{I\!I}_*(B,M)$ is computed by the explicit bar-complex $\Br^\sqcap(B\;B\ot_k B^\op\;M)$. When the $\Gamma$\+graded $k$\+modules $B^\#(X,Y)$ are projective for all objects $X$, $Y\in B$, the Hochschild cohomology $HH^{I\!I\;*}(B,M)$ is computed by the explicit cobar-complex $\Cb^\oplus(B\;B\ot_k B^\op\;M)$. However, these complexes are too big and apparently not very useful. There are smaller and much more important complexes computing the Hochschild (co)homology, namely, the Hochschild complexes. The homological Hochschild complex of the second kind $\Hoch_\bu^\sqcap(B,M)$ is constructed in the following way. As a $\Gamma$\+graded $k$\+module, $\Hoch_\bu^\sqcap(B,M)$ is obtained by taking infinite products along the diagonals of a bigraded $k$\+module with one grading by elements of the group $\Gamma$ and the other grading by nonpositive integers. The component of degree $-i\in\Z$ of that bigraded $k$\+module is the $\Gamma$\+graded $k$\+module $$\textstyle \bigoplus_{X_0,\dsc,X_i\in B} M(X_i,X_0^\op)\ot_k B(X_0,X_1) \ot_k\dsb\ot_k B(X_{i-1},X_i).
$$ The differential on $\Hoch_\bu^\sqcap(B,M)$ is the sum of the three components $\d$, $d$, and $\delta$ given by the formulas \begin{gather*} \begin{split} \d(m\ot b_1\ot\dsb \ot b_i) &= mb_1\ot b_2\ot\dsb\ot b_i - m\ot b_1b_2\ot b_3\ot\dsb\ot b_i \\&+ \dsb + (-1)^{i-1} m\ot b_1\ot\dsb\ot b_{i-2}\ot b_{i-1}b_i \\&+ (-1)^{i+|b_i| (|m|+|b_1|+\dsb+|b_{i-1}|)} b_im\ot b_1\ot \dsb\ot b_{i-1}, \end{split} \displaybreak[1]\\ \begin{split} (-1)^i d(m\ot b_1\ot\dsb\ot b_i) &= d(m)\ot b_1\ot\dsb\ot b_i + (-1)^{|m|} m\ot d(b_1)\ot b_2\ot\dsb\ot b_i \\&+ \dsb + (-1)^{|m|+|b_1|+\dsb+|b_{i-1}|} m\ot b_1\ot\dsb\ot b_{i-1}\ot d(b_i), \end{split} \end{gather*} and \begin{multline*} \delta(m\ot b_1\ot\dsb\ot b_i) = m\ot h\ot b_1\ot\dsb\ot b_i \\ - m\ot b_1\ot h\ot b_2\ot\dsb\ot b_i + \dsb + (-1)^i m\ot b_1\ot\dsb \ot b_i\ot h. \end{multline*} \begin{propA} The homology of the complex\/ $\Hoch_\bu^\sqcap(B,M)$ is naturally isomorphic to the Hochschild homology of the second kind $HH_*^{I\!I}(B,M)$ as a $\Gamma$\+graded $k$\+module. \end{propA} \begin{proof} Choose a left resolution $P_\bu$ of the CDG\+module $M$ such that the $\Gamma$\+graded $B^\#\ot_k B^\#{}^\op$\+modules $P_j^\#$ are flat. Consider the bicomplex $\Hoch_\bu^\sqcap(B,P_\bu)$ and construct its total complex by taking infinite products along the diagonals. This total complex maps naturally to both the complex $\Hoch_\bu^\sqcap(B,M)$ and the total complex of the bicomplex $B\ot_{B\ot_k B^\op}P_\bu$, constructed by taking infinite products along the diagonals. These morphisms of $\Gamma$\+graded complexes are both quasi-isomorphisms. Indeed, the morphism $\Hoch_\bu^\sqcap(B,P_\bu)\rarrow \Hoch_\bu^\sqcap(B,M)$ is a quasi-isomorphism, because the functor $\Hoch_\bu^\sqcap(B,{-})$ transforms exact sequences of CDG\+modules over $B\ot_k B^\op$ into exact sequences of complexes. 
The morphism $\Hoch_\bu^\sqcap(B,P_\bu)\rarrow B\ot_{B\ot_k B^\op} P_\bu$ is a quasi-isomorphism, since the morphism $\Hoch_\bu^\sqcap(B,P)\rarrow P$ is a quasi-isomorphism for any CDG\+module $P$ over $B\ot_k B^\op$ such that the $\Gamma$\+graded $B^\#\ot_k B^\#{}^\op$\+module $P^\#$ is flat. The latter assertion follows from the similar statement for the bigraded Hochschild complex of the $\Gamma$\+graded $B^\#\ot_k B^\#{}^\op$\+module $P^\#$ with the differential~$\d$. \end{proof} Let $F\:B\rarrow C$ be a $k$\+linear CDG\+functor and $M$ be a CDG\+module over $C\ot_k C^\op$. Let us denote the CDG\+module $(F\ot F^\op)^*M$ over $B\ot_k B^\op$ simply by $F^*M$. There is a natural morphism of complexes of $k$\+modules $F_*\:\Hoch_\bu^\sqcap(B,F^*M)\rarrow\Hoch_\bu^\sqcap(C,M)$ defined by the rule \begin{multline} \label{ho-hoch-cdg-functorial} \textstyle F_*(m\ot b_1\ot\dsb\ot b_i) = \sum_{j_0,\dsc,j_i=0}^\infty (-1)^{\rho(j_0,\dsc,j_i;\.|m|,|b_1|,\dsc,|b_i|)} \\ m\ot a^{\ot j_0}\ot F(b_1)\ot a^{\ot j_1}\ot\dsb\ot F(b_i) \ot a^{\ot j_i}, \end{multline} where the value of~$\rho$ in the exponent is given by the formula~\eqref{rho-sign-formula}. The image of an arbitrary element in $\Hoch_\bu^\sqcap(B,F^*M)$ is constructed as the sum of the images of (the infinite number of) its bihomogeneous components, the sum being convergent bidegree-wise in $\Hoch_\bu^\sqcap(C,M)$. The morphism $F_*$ of Hochschild complexes computes the map of Hochschild homology \begin{equation} \label{ho-hoch-second-kind-F-star} HH_*^{I\!I}(B,F^*M)\lrarrow HH_*^{I\!I}(C,M) \end{equation} obtained by passing to the homology in the morphism of $\Tor$ objects~\eqref{tor-second-kind-F-star} for the CDG\+functor $F\ot F^\op$.
Furthermore, there is a natural closed morphism $B\rarrow F^*C$ of CDG\+modules over $B\ot_k B^\op$, inducing a map of Hochschild homology \begin{equation} \label{ho-hoch-second-kind-of-itself-F-star} HH_*^{I\!I}(B)\lrarrow HH_*^{I\!I}(C) \end{equation} and a morphism of Hochschild complexes $F_*\:\Hoch_\bu^\sqcap(B,B) \rarrow\Hoch_\bu^\sqcap(C,C)$ computing this homology map. The cohomological Hochschild complex of the second kind $\Hoch^{\oplus,\bu}(B,M)$ is constructed as follows. As a $\Gamma$\+graded $k$\+module, $\Hoch^{\oplus,\bu}(B,M)$ is obtained by taking infinite direct sums along the diagonals of a bigraded $k$\+module with one grading by elements of the group $\Gamma$ and the other grading by nonnegative integers. The component of degree $i\in\Z$ of that bigraded $k$\+module is the $\Gamma$\+graded $k$\+module $$\textstyle \prod_{X_0,\dsc,X_i\in B}\Hom_k(B(X_0,X_1)\ot_k\dsb\ot_k B(X_{i-1},X_i)\;M(X_0,X_i^\op)). $$ The differential on $\Hoch^{\oplus,\bu}(B,M)$ is the sum of the three components $\d$, $d$, and $\delta$ given by the formulas \begin{gather*} \begin{split} (\d f)(b_1,\dsc,b_{i+1}) &= (-1)^{|f||b_1|}b_1 f(b_2,\dsc,b_{i+1}) - f(b_1b_2,b_3,\dsc,b_{i+1}) \\ &+ \dsb + (-1)^i f(b_1,\dsc,b_{i-1},b_ib_{i+1}) + (-1)^{i+1}f(b_1,\dsc,b_i)b_{i+1}, \end{split} \displaybreak[1]\\ \begin{split} (-1)^i(df)(b_1,\dsc,b_i) &= d(f(b_1,\dsc,b_i)) - (-1)^{|f|}f(db_1,b_2,\dsc,b_i) \\ &- \dsb - (-1){}^{|f|+|b_1|+\dsb+|b_{i-1}|}f(b_1,\dsc,b_{i-1},db_i), \end{split} \end{gather*} and \begin{align*} (\delta f)(b_1,\dsc,b_{i-1}) &= - f(h,b_1,\dsc,b_{i-1}) \\ &+ f(b_1,h,b_2,\dsc,b_{i-1}) - \dsb + (-1)^if(b_1,\dsc,b_{i-1},h). \end{align*} \begin{propB} Assume that all the $\Gamma$\+graded $k$\+modules $B^\#(X,Y)$ are projective. Then the cohomology of the complex\/ $\Hoch^{\oplus,\bu}(B,M)$ is naturally isomorphic to the Hochschild cohomology of the second kind $HH^{I\!I\;*}(B,M)$ as a $\Gamma$\+graded $k$\+module.
\end{propB} \begin{proof} Choose a right resolution $J^\bu$ of the CDG\+module $M$ such that the $\Gamma$\+graded $B^\#\ot_k B^\#{}^\op$\+modules $J^j{}^\#$ are injective. Consider the bicomplex $\Hoch^{\oplus,\bu}(B,J^\bu)$ and construct its total complex by taking infinite direct sums along the diagonals. Both the complex $\Hoch^{\oplus,\bu}(B,M)$ and the total complex of the bicomplex $\Hom^{B\ot_k B^\op}(B,J^\bu)$ map quasi-isomorphically into the above total complex. \end{proof} For a $k$\+linear CDG\+functor $F\:B\rarrow C$ and a CDG\+module $M$ over $C\ot_k C^\op$, there is a natural morphism of complexes of $k$\+modules $F^*\:\Hoch^{\oplus,\bu}(C,M)\rarrow \Hoch^{\oplus,\bu}(B,F^*M)$ defined by the rule \begin{multline} \label{coho-hoch-cdg-functorial}\textstyle (F^*f)(b_1\ot\dsb\ot b_i)=\sum_{j_0,\dsc,j_i=0}^\infty (-1)^{\lambda(j_0,\dsc,j_i;|f|,|b_1|,\dsc,|b_i|)} \\ f(a^{\ot j_0}\ot F(b_1)\ot a^{\ot j_1}\ot\dsb \ot F(b_i)\ot a^{\ot j_i}), \end{multline} where the value of~$\lambda$ in the exponent is given by the formula~\eqref{sigma-sign-formula}. Suppose the CDG\+categories $B$ and $C$ satisfy the assumptions of Proposition~B\hbox{}. Then the morphism $F^*$ of Hochschild complexes computes the map of Hochschild cohomology \begin{equation} \label{coho-hoch-second-kind-F-star} HH^{I\!I\;*}(C,M)\lrarrow HH^{I\!I\;*}(B,F^*M) \end{equation} obtained by passing to the cohomology in the morphism of $\Ext$ objects~\eqref{ext-second-kind-F-star} for the CDG\+functor $F\ot F^\op$. Notice that, unlike the Hochschild homology, the Hochschild cohomology of CDG\+categories $HH^{I\!I\;*}(B)$ is \emph{not} functorial with respect to arbitrary CDG\+functors $F\:B\rarrow C$. It \emph{is} contravariantly functorial, however, with respect to CDG\+functors $F$ for which the functor $F^\#\:B^\#\rarrow C^\#$ is fully faithful, since the closed morphism of CDG\+modules $B\rarrow F^*C$ is an isomorphism in this case.
The (co)homology of the complexes $\Hoch_\bu^\sqcap(B,M)$ and $\Hoch^{\oplus,\bu}(B,M)$ is what is called the ``Borel--Moore Hochschild homology'' and the ``compactly supported Hochschild cohomology'', respectively, in~\cite{CT}. Now denote by $\Hoch_\bu^\oplus(B,M)$ the $\Gamma$\+graded complex of $k$\+modules constructed in the same way as $\Hoch_\bu^\sqcap (B,M)$, except that the totalization is being done by taking infinite direct sums along the diagonals. Similarly, denote by $\Hoch^{\sqcap,\bu}(B,M)$ the $\Gamma$\+graded complex of $k$\+modules constructed in the same way as $\Hoch^{\oplus,\bu}(B,M)$, except that the totalization is being done by taking infinite products. The complexes $\Hoch_\bu^\oplus(B,M)$ and $\Hoch^{\sqcap,\bu}(B,M)$ play an important role when $B$ is a DG\+category, but apparently not otherwise, as we will see below. Let $C$ be a small $k$\+linear DG\+category. Assume that the complexes of $k$\+modules $C(X,Y)$ are h\+flat for all objects $X$, $Y\in C$. The (conventional) Hochschild homology (of the first kind) $HH_*(C,M)$ of a $k$\+linear DG\+category $C$ with coefficients in a DG\+module $M$ over $C\ot_k C^\op$ is the homology of the object $\Tor^{C\ot_k C^\op}(C,M)\in\DD(k\modl)$. In particular, the Hochschild homology of the DG\+module $M=C$ over $C\ot_k C^\op$ is called simply the Hochschild homology of $C$ and denoted by $HH_*(C,C)=HH_*(C)$. The (conventional) Hochschild cohomology (of the first kind) $HH^*(C,M)$ of a $k$\+linear DG\+category $C$ with coefficients in a DG\+module $M$ over $C\ot_k C^\op$ is the cohomology of the object $\Ext_{C\ot_k C^\op}(C,M)\in\DD(k\modl)$. In particular, the Hochschild cohomology of the DG\+module $M=C$ over $C\ot_k C^\op$ is called simply the Hochschild cohomology of $C$ and denoted by $HH^*(C,C)=HH^*(C)$. Let $F\:C\rarrow D$ be a $k$\+linear DG\+functor between DG\+categories whose complexes of morphisms are h\+flat complexes of $k$\+modules.
Then for any DG\+module $M$ over $D\ot_k D^\op$ passing to the homology in the morphism of $\Tor$ objects~\eqref{tor-first-kind-F-star} for the DG\+functor $F\ot F^\op$ provides a natural map of $\Gamma$\+graded $k$\+modules \begin{equation} \label{ho-hoch-first-kind-F-star} HH_*(C,F^*M)\lrarrow HH_*(D,M). \end{equation} Composing this map with the map induced by the closed morphism $C\rarrow F^*D$ of DG\+modules over $C\ot_k C^\op$, we obtain a natural map \begin{equation} \label{ho-hoch-first-kind-of-itself-F-star} HH_*(C)\lrarrow HH_*(D). \end{equation} Passing to the cohomology in the morphism of $\Ext$ objects~\eqref{ext-first-kind-F-star} for the DG\+functor $F\ot F^\op$ provides a natural map \begin{equation} \label{coho-hoch-first-kind-F-star} HH^*(D,M)\lrarrow HH^*(C,F^*M). \end{equation} Unlike the Hochschild homology, the Hochschild cohomology of DG\+categories $HH^*(C)$ is \emph{not} functorial with respect to arbitrary DG\+functors $F\:C\rarrow D$. It \emph{is} contravariantly functorial, however, with respect to DG\+functors $F$ such that the functor $H(F)\:H(C)\rarrow H(D)$ is fully faithful, since the closed morphism of DG\+modules $C\rarrow F^*D$ is a quasi-isomorphism in this case. When the functor $H(F)$ is a pseudo-equivalence of $\Gamma$\+graded categories, the maps (\ref{ho-hoch-first-kind-F-star}\+-\ref{coho-hoch-first-kind-F-star}) are isomorphisms, as is the natural map $HH^*(D)\rarrow HH^*(C)$. Indeed, under our assumptions on the DG\+categories $C$ and $D$ the $\Gamma$\+graded category $H(C\ot_k C^\op)$ is isomorphic to $H(C)\ot_k H(C)^\op$ and similarly for $D$, so the assertion follows from Lemma~\ref{pseudo-equi-subsect}.C and the results of~\ref{ext-tor-first-kind}. Just as in~\ref{ext-tor-first-kind}, one shows that the complex $\Hoch_\bu^\oplus(C,M)$ computes the Hochschild homology $HH_*(C,M)$. 
The morphism of complexes $F_*\:\Hoch_\bu^\oplus(C,F^*M)\rarrow \Hoch_\bu^\oplus(D,M)$ induced by a DG\+functor $F$ computes the map of Hochschild homology~\eqref{ho-hoch-first-kind-F-star}. In particular the morphism of complexes $F_*\:\Hoch_\bu^\oplus (C,C)\rarrow\Hoch_\bu^\oplus(D,D)$ induced by~$F$ computes the map~\eqref{ho-hoch-first-kind-of-itself-F-star}. When all the complexes of morphisms in $C$ are h\+projective complexes of $k$\+modules, the complex $\Hoch^{\sqcap,\bu}(C,M)$ computes the Hochschild cohomology $HH^*(C,M)$. When both DG\+categories $C$ and $D$ satisfy the same condition, the morphism of complexes $F^*\:\Hoch^{\sqcap,\bu}(D,M)\rarrow \Hoch^{\sqcap,\bu}(C,F^*M)$ computes the map~\eqref{coho-hoch-first-kind-F-star}. When the complexes $C(X,Y)$ are h\+flat complexes of flat $k$\+modules for all objects $X$, $Y\in C$, both the Hochschild (co)homology of the first and the second kind are defined for any DG\+module $M$ over $C\ot_k C^\op$. In this case, there are natural morphisms of $\Gamma$\+graded $k$\+modules \begin{equation} \label{hoch-first-second-kind} HH_*(C,M)\lrarrow HH^{I\!I}_*(C,M) \quad\text{and}\quad HH^{I\!I\;*}(C,M)\lrarrow HH^*(C,M) \end{equation} and, in particular, \begin{equation} \label{hoch-of-itself-first-second-kind} HH_*(C)\lrarrow HH^{I\!I}_*(C) \quad\text{and}\quad HH^{I\!I\;*}(C)\lrarrow HH^*(C). \end{equation} All of these are obtained from the comparison morphisms~(\ref{tor-first-second}--\ref{ext-first-second}) for the two kinds of functors $\Tor$ and $\Ext$. The morphism $HH_*(C,M)\rarrow HH^{I\!I}_*(C,M)$ is computed by the morphism of complexes $\Hoch_\bu^\oplus(C,M)\rarrow \Hoch^\sqcap_\bu(C,M)$. When the complexes $C(X,Y)$ are h\+projective complexes of projective $k$\+modules for all objects $X$, $Y\in C$, the morphism $HH^{I\!I\;*}(C,M)\rarrow HH^*(C,M)$ is computed by the morphism of complexes $\Hoch^{\oplus,\bu}(C,M)\rarrow \Hoch^{\sqcap,\bu}(C,M)$. 
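For a familiar special case, included only for illustration: suppose $\Gamma=\Z$, the DG\+category $C=A$ is an ordinary $k$\+algebra placed in degree~$0$ (with zero differential), and $M$ is an $A$\+bimodule placed in degree~$0$. Then every diagonal of the bigraded Hochschild complex contains at most one nonzero term, so $\Hoch_\bu^\oplus(A,M)=\Hoch_\bu^\sqcap(A,M)$ is the classical Hochschild complex, and (assuming the $k$\+module $A$ is flat, resp.\ projective) the maps~\eqref{hoch-first-second-kind} are isomorphisms: for ordinary algebras, the Hochschild (co)homology of the first and the second kind coincide. For example, for $A=k[x]$ the bimodule resolution $$ 0\rarrow A\ot_k A\rarrow A\ot_k A\rarrow A\rarrow 0, $$ in which the middle arrow is multiplication by $x\ot 1-1\ot x$, yields $HH_0(k[x])\simeq HH_0^{I\!I}(k[x])\simeq k[x]$, \ $HH_1(k[x])\simeq HH_1^{I\!I}(k[x])\simeq k[x]$, and $HH_n(k[x])=0$ for $n\ge 2$.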
On the other hand, assume that the maps $k\rarrow\Hom_B(X,X)$ corresponding to the curvature elements $h_X\in\Hom_B(X,X)$ are embeddings of $k$\+linear direct summands. Then for any CDG\+module $M$ over $B\ot_k B^\op$ the complexes $\Hoch_\bu^\oplus(B,M)$ and $\Hoch^{\sqcap,\bu}(B,M)$ are acyclic~\cite[Lemma~3.9 and Theorem~4.2(a)]{CT}. Notice that these complexes are \emph{not} functorial with respect to nonstrict CDG\+functors between CDG\+categories $B$ because of the infinite summation in the formulas \eqref{ho-hoch-cdg-functorial} and~\eqref{coho-hoch-cdg-functorial}. The complexes $\Hoch_\bu^\oplus(B,M)$ and $\Br^\oplus (B\;B\ot_k B^\op\;M)$ are \emph{not} quasi-isomorphic in general, even when $k$~is a field, $B$ is a CDG\+algebra considered as a CDG\+category with a single object, and $M=B$. Neither are the complexes $\Hoch^{\sqcap,\bu}(B,M)$ and $\Cb^\sqcap(B\;B\ot_k B^\op\;M)$. \subsection{Change of grading group} \label{change-grading-group} Let us first introduce some terminology that will be used throughout the rest of the paper. A $\Gamma$\+graded module $N^\#$ over a $\Gamma$\+graded category $B^\#$ is said to have \emph{flat dimension~$d$} if $d$~is the minimal length of a left flat resolution of~$N^\#$ in the abelian category of $\Gamma$\+graded $B^\#$\+modules, or equivalently, if the functor of tensor product with $N^\#$ over $B^\#$ has homological dimension~$d$. \emph{Projective} and \emph{injective} dimensions of $\Gamma$\+graded $B^\#$\+modules are defined in a similar way. The \emph{left homological dimension} of a $\Gamma$\+graded category $B^\#$ is the homological dimension of the abelian category of $\Gamma$\+graded left $B^\#$\+modules, and the \emph{weak homological dimension} of $B^\#$ is the homological dimension of the functor of tensor product of $\Gamma$\+graded modules over~$B^\#$.
Let $(\Gamma,\sigma,\boldsymbol{1})$ and $(\Gamma',\sigma', \boldsymbol{1}')$ be two different grading group data (see~\ref{grading-group}) and $\phi\:\Gamma\rarrow\Gamma'$ be a morphism of abelian groups taking $\boldsymbol{1}$ to $\boldsymbol{1}'$ such that $\sigma$ is the pull-back of~$\sigma'$ by~$\phi$. Then to any $\Gamma'$\+graded $k$\+module $V'$ one can assign a $\Gamma$\+graded $k$\+module $\phi^*V'$ defined by the rule $(\phi^*V')^n=V'{}^{\phi(n)}$ for $n\in\Gamma$. The functor $\phi^*$ has a left adjoint functor~$\phi_!$ and a right adjoint functor~$\phi_*$. The former assigns to a $\Gamma$\+graded $k$\+module $V$ the $\Gamma'$\+graded $k$\+module constructed by taking the direct sums of the grading components of $V$ over all the preimages in $\Gamma$ of a given element $n'\in\Gamma'$, while the latter involves taking direct products over the preimages of~$n'$ in~$\Gamma$. All three functors $\phi_!$, $\phi_*$, and $\phi^*$ are exact. Besides, they transform (pre)complexes of $k$\+modules to (pre)complexes of $k$\+modules and commute with passing to the cohomology of the complexes of $k$\+modules. So they induce triangulated functors between the derived categories $\DD_\Gamma(k\modl)$ and $\DD_{\Gamma'}(k\modl)$ of $\Gamma$\+graded and $\Gamma'$\+graded complexes of $k$\+modules. Given a $\Gamma$\+graded $k$\+linear CDG\+category $B$, one can apply the functor $\phi_!$ to all its precomplexes of morphisms, obtaining a $\Gamma'$\+graded $k$\+linear CDG\+category $\phi_!B$. To a (left or right) CDG\+module $M'$ over $\phi_!B$ one can assign a CDG\+module $\phi^*M'$ over $B$, and to a CDG\+module $M$ over $B$ one can assign CDG\+modules $\phi_!M$ and $\phi_*M$ over~$\phi_!B$. The functors $\phi_!$, $\phi_*$, and $\phi^*$ are compatible with the functors of tensor product and $\Hom$ of CDG\+modules in the following sense.
For any left CDG\+modules $L$, \ $M$ and right CDG\+module $N$ over $B$ there are natural isomorphisms \begin{equation} \label{phi-push-compatibility} \phi_!N\ot_{\phi_!B}\phi_!M\simeq\phi_!(N\ot_BM) \quad\text{and}\quad \Hom^{\phi_!B}(\phi_!L,\phi_*M)\simeq\phi_*\Hom^B(L,M). \end{equation} For any left CDG\+modules $L'$ and $M'$ over $\phi_!B$, and $L$, $M$, $N$ over $B$ as above, there are natural isomorphisms \begin{equation} \label{phi-projection-formula} \begin{aligned} \phi^*(\phi_!N\ot_{\phi_!B}M')&\simeq N\ot_B\phi^*M', \\ \phi^*\Hom^{\phi_!B}(\phi_!L,M')&\simeq\Hom^B(L,\phi^*M'), \\ \phi^*\Hom^{\phi_!B}(L',\phi_*M)&\simeq\Hom^B(\phi^*L',M). \end{aligned} \end{equation} It follows from the isomorphisms~\eqref{phi-projection-formula} that the functors $\phi_!$ preserve all the flatness and projectivity properties of CDG- and DG-modules considered above in this paper, while the functors $\phi_*$ preserve the injectivity properties. Furthermore, the functors $\phi_!$ commute with the functors $\Tot^\oplus$, while the functors $\phi_*$ commute with the functors $\Tot^\sqcap$. Therefore, in view of the isomorphisms~\eqref{phi-push-compatibility}, for any $\Gamma$\+graded DG\+category $C$ and DG\+modules $L$, $M$, and $N$ over it there are natural isomorphisms \begin{equation} \Tor^{\phi_!C}(\phi_!N,\phi_!M)\simeq\phi_!\Tor^C(N,M) \.\ \ \text{and}\ \ \Ext_{\phi_!C}(\phi_!L,\phi_*M)\simeq\phi_*\Ext_C(L,M) \end{equation} in $\DD_{\Gamma'}(k\modl)$. Furthermore, the functor $\phi_!$ preserves tensor products of $k$\+linear (C)DG\+categories. Thus, assuming that the complexes of morphisms in the DG\+category $C$ are h\+flat complexes of $k$\+modules, for any DG\+module $M$ over $C\ot_k C^\op$ there are natural isomorphisms of Hochschild (co)homology \begin{equation} HH_*(\phi_!C,\phi_!M)\simeq\phi_!HH_*(C,M) \quad\text{and}\quad HH^*(\phi_!C,\phi_*M)\simeq\phi_*HH^*(C,M). \end{equation} In particular, there is an isomorphism \begin{equation} HH_*(\phi_!C)\simeq \phi_!HH_*(C)
\end{equation} and a natural morphism \begin{equation} HH^*(\phi_!C) \lrarrow HH^*(\phi_!C,\phi_*C) \.\simeq\. \phi_* HH^*(C). \end{equation} The latter morphism is an isomorphism when the kernel of the map $\phi\:\Gamma\rarrow\Gamma'$ is finite (so the functors $\phi_!$ and $\phi_*$ are isomorphic). The analogous results for (co)homology theories of the second kind hold under more restrictive conditions, since the functor $\phi_!$ does not commute with $\Tot^\sqcap$ in general, nor does the functor $\phi_*$ commute with $\Tot^\oplus$. However, there are morphisms of functors $\phi_!\Tot^\sqcap \rarrow\Tot^\sqcap\phi_!$ and $\Tot^\oplus\phi_*\rarrow \phi_*\Tot^\oplus$. Hence for any $\Gamma$\+graded CDG\+category $B$ and CDG\+modules $L$, $M$, and $N$ over it there are natural morphisms \begin{gather} \label{tor-second-phi-push} \phi_!\Tor^B(N,M)\lrarrow\Tor^{\phi_!B}(\phi_!N,\phi_!M), \\ \label{ext-second-phi-push} \Ext_{\phi_!B}(\phi_!L,\phi_*M)\lrarrow\phi_*\Ext_B(L,M) \end{gather} in $\DD_{\Gamma'}(k\modl)$. The morphisms (\ref{tor-second-phi-push}\+-\ref{ext-second-phi-push}) are always isomorphisms when the kernel of the map $\phi\:\Gamma\rarrow\Gamma'$ is finite. They are also isomorphisms when the derived functors in question can be computed using \emph{finite} resolutions (cf.~\ref{functors-second-kind}). So the morphism~\eqref{tor-second-phi-push} is an isomorphism whenever one of the $\Gamma$\+graded $B^\#$\+modules $N^\#$ and $M^\#$ has finite flat dimension. The morphism~\eqref{ext-second-phi-push} is an isomorphism whenever either the $\Gamma$\+graded $B^\#$\+module $L^\#$ has finite projective dimension, or the $\Gamma$\+graded $B^\#$\+module $M^\#$ has finite injective dimension.
Thus, assuming that the $\Gamma$\+graded $k$\+modules of morphisms in the category $B^\#$ are flat, for any CDG\+module $M$ over $B\ot_k B^\op$ there are natural morphisms of Hochschild (co)homology \begin{gather} \label{ho-hoch-second-phi-push} \phi_!HH_*^{I\!I}(B,M)\lrarrow HH_*^{I\!I}(\phi_!B,\phi_!M) \\ \label{coho-hoch-second-phi-push} HH^{I\!I\;*}(\phi_!B,\phi_*M) \lrarrow\phi_*HH^{I\!I\;*}(B,M), \end{gather} which are always isomorphisms when the kernel of the map $\phi\:\Gamma\rarrow\Gamma'$ is finite. The map~\eqref{ho-hoch-second-phi-push} is an isomorphism whenever one of the $\Gamma$\+graded $B^\#\ot_k B^\#{}^\op$\+modules $B^\#$ and $M^\#$ has finite flat dimension. The map~\eqref{coho-hoch-second-phi-push} is an isomorphism whenever either the $\Gamma$\+graded $B^\#\ot_k B^\#{}^\op$\+module $B^\#$ has finite projective dimension, or the $\Gamma$\+graded $B^\#\ot_k B^\#{}^\op$\+module $M^\#$ has finite injective dimension. In particular, there is a natural map \begin{equation} \phi_!HH_*^{I\!I}(B)\lrarrow HH_*^{I\!I}(\phi_!B), \end{equation} which is an isomorphism when either the kernel of the map $\phi\:\Gamma\rarrow\Gamma'$ is finite, or the $\Gamma$\+graded $B^\#\ot_k B^\#{}^\op$\+module $B^\#$ has finite flat dimension. There are also natural maps \begin{equation} HH^{I\!I\;*}(\phi_!B)\lrarrow HH^{I\!I\;*}(\phi_!B,\phi_*B) \lrarrow \phi_*HH^{I\!I\;*}(B), \end{equation} which are both isomorphisms when the kernel of the map $\phi\:\Gamma\rarrow\Gamma'$ is finite. Given a $\Gamma'$\+graded $k$\+linear CDG\+category $B'$, one can apply the functor $\phi^*$ to all of its precomplexes of morphisms, obtaining a $\Gamma$\+graded $k$\+linear CDG\+category $\phi^*B'$. To a (left or right) CDG\+module $M'$ over $B'$ one can assign a CDG\+module $\phi^*M'$ over~$\phi^*B'$. Assume that the map $\phi\:\Gamma\rarrow\Gamma'$ is surjective.
Then the functors $\phi^*\:Z^0(B'\modlc)\rarrow Z^0(\phi^*B'\modlc)$ and $Z^0(\modrc B')\rarrow Z^0(\modrc \phi^*B')$ are equivalences of abelian categories. For any left CDG\+modules $L'$, \ $M'$ and right CDG\+module $N'$ over $B'$ there are natural isomorphisms \begin{equation} \begin{aligned} \phi^*N'\ot_{\phi^*B'}\phi^*M'&\simeq \phi^*(N'\ot_{B'}M'), \\ \Hom^{\phi^*B'}(\phi^*L',\phi^*M')&\simeq\phi^*\Hom^{B'}(L',M'). \end{aligned} \end{equation} Furthermore, the functor $\phi^*$ commutes with the functors $\Tot^\oplus$ and $\Tot^\sqcap$ when applied to polycomplexes with one grading by elements of the group $\Gamma'$ and the remaining gradings by the integers. Therefore, there are natural isomorphisms \begin{align} \Tor^{\phi^*B'\;I\!I}(\phi^*N',\phi^*M')&\simeq \phi^*\Tor^{B'\;I\!I}(N',M'), \\ \Ext_{\phi^*B'}^{I\!I}(\phi^*L',\phi^*M')&\simeq \phi^*\Ext_{B'}^{I\!I}(L',M') \end{align} in $\DD_\Gamma(k\modl)$, and similar isomorphisms for the $\Tor$ and $\Ext$ of the first kind over a $k$\+linear DG\+category~$C$. There is a natural strict CDG\+functor $\phi^*B'\ot_k\phi^*B'{}^\op \rarrow\phi^*(B'\ot_k B'{}^\op)$. So, assuming that the $\Gamma'$\+graded $k$\+modules of morphisms in the category $B'{}^\#$ are flat, for any CDG\+module $M'$ over $B'\ot_k B'{}^\op$ there are natural maps \begin{gather} \label{hoch-second-phi-pull} HH_*^{I\!I}(\phi^*B',\phi^*M')\lrarrow \phi^*HH_*^{I\!I}(B',M'), \\ \phi^*HH^{I\!I\;*}(B',M')\lrarrow HH^{I\!I\;*}(\phi^*B',\phi^*M'), \end{gather} and, in particular, \begin{equation} \label{hoch-second-of-itself-phi-pull} HH_*^{I\!I}(\phi^*B')\lrarrow \phi^*HH_*^{I\!I}(B') \quad\text{and}\quad \phi^*HH^{I\!I\;*}(B')\lrarrow HH^{I\!I\;*}(\phi^*B'). \end{equation} One can see that the maps~(\ref{hoch-second-phi-pull}\+-\ref{hoch-second-of-itself-phi-pull}) are isomorphisms whenever the kernel $\Gamma''$ of the map $\phi\:\Gamma\rarrow\Gamma'$ is finite and its order $|\Gamma''|$ is invertible in~$k$.
Indeed, the CDG\+category $\phi^*B'\ot_k\phi^*B'{}^\op$ is linear over the group ring $k[\Gamma'']$ of the abelian group~$\Gamma''$, and the CDG\+category $\phi^*(B'\ot_k B'{}^\op)$ is strictly equivalent (in fact, isomorphic) to $(\phi^*B'\ot_k\phi^*B'{}^\op) \ot_{k[\Gamma'']}k$. The same assertions apply to Hochschild (co)homology of the first kind of a $\Gamma'$\+graded DG\+category $C$ whose complexes of morphisms are h\+flat complexes of $k$\+modules. \subsection{DG-category of CDG-modules} \label{dg-of-cdg-subsect} Let $B$ be a small $k$\+linear CDG\+category such that the $\Gamma$\+graded $k$\+modules $B^\#(X,Y)$ are flat for all objects $X$, $Y\in B$. Denote by $C=\modrcfp B$ the DG\+category of right CDG\+modules over $B$, projective and finitely generated as $\Gamma$\+graded $B^\#$\+modules, and by $D=\modrqfp B$ the CDG\+category of right QDG\+modules over~$B$ satisfying the same condition. The results below also apply to finitely generated free modules in place of finitely generated projective ones. There are strict $k$\+linear CDG\+functors $R\:B\rarrow D$ and $I\:C\rarrow D$, and moreover, these CDG\+functors are pseudo-equivalences of CDG\+categories (see~\ref{pseudo-equi-subsect}). Strictly speaking, the categories $C$ and $D$ as we have defined them are only \emph{essentially} small rather than small, i.~e., they are strictly equivalent to small CDG\+categories. So from now on we will tacitly assume that $C$ and $D$ have been replaced with their small full subcategories containing at least one object in every isomorphism class and such that the functors $R$ and $I$ are still defined. The pseudo-equivalences $R$ and $I$ induce equivalences between the DG\+categories of (left or right) CDG\+modules over the CDG\+categories $B$, $C$, and $D$.
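To fix ideas, here is the standard example, stated informally. Let $\Gamma=\Z/2$ and let $B$ be the CDG\+algebra $(k[x_1,\dsc,x_n],0,w)$, considered as a CDG\+category with a single object, where $w\in k[x_1,\dsc,x_n]$ is an (even) element. A right CDG\+module over $B$, finitely generated and free as a $\Gamma$\+graded module, is then a pair of finitely generated free modules $M^{\bar0}$, $M^{\bar1}$ together with odd maps $$ \delta_0\:M^{\bar0}\rarrow M^{\bar1}, \qquad \delta_1\:M^{\bar1}\rarrow M^{\bar0}, \qquad \delta_1\delta_0=w\cdot\mathrm{id}, \quad \delta_0\delta_1=w\cdot\mathrm{id} $$ (up to the sign conventions for right CDG\+modules fixed earlier in the paper), i.~e., a matrix factorization of~$w$. Thus in this case $C=\modrcfp B$ is the DG\+category of matrix factorizations of~$w$.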
Let $N$ be a right CDG\+module and $L$, $M$ be left CDG\+modules over $B$; denote by $N_C$, $N_D$, $L_C$, $L_D$, etc.\ the corresponding CDG\+modules over $C$ and~$D$ (which are defined uniquely up to a unique isomorphism). By the results of~\ref{second-kind-general} (see~(\ref{tor-second-kind-F-star}\+-\ref{ext-second-kind-F-star})), the CDG\+functors $R$ and $I$ induce isomorphisms \begin{align} \Tor^{B,I\!I}(N,M)\rarrow\Tor^{D,I\!I}(N_D,M_D) \quad&\text{and}\quad \Tor^{C,I\!I}(N_C,M_C)\rarrow\Tor^{D,I\!I}(N_D,M_D) \notag \\ \Ext^{I\!I}_D(L_D,M_D)\rarrow\Ext^{I\!I}_B(L,M) \ \ &\text{and}\ \ \Ext^{I\!I}_D(L_D,M_D)\rarrow\Ext^{I\!I}_C(L_C,M_C). \end{align} There are also the induced pseudo-equivalences $R\ot R^\op\: B\ot_k B^\op\rarrow D\ot_k D^\op$ and $I\ot I^\op\:C\ot_k C^\op \rarrow D\ot_k D^\op$. These pseudo-equivalences induce equivalences between the DG\+categories of CDG\+modules over the CDG\+categories $B\ot_k B^\op$, \ $C\ot_k C^\op$, and $D\ot_k D^\op$. In particular, the CDG\+module $B$ over $B\ot_k B^\op$ corresponds to the CDG\+module $C$ over $C\ot_k C^\op$ and to the CDG\+module $D$ over $D\ot_k D^\op$ under these equivalences of DG\+categories. Indeed, the closed morphisms $B\rarrow R^*D$ and $C\rarrow I^*D$ of CDG\+modules over $B\ot_k B^\op$ and $C\ot_k C^\op$ induced by the functors $R$ and $I$ are isomorphisms, since the functors $R^\#$ and $I^\#$ are fully faithful. Let $M$ be a CDG\+module over $B\ot_k B^\op$; denote by $M_C$ and $M_D$ the corresponding CDG\+modules over $C\ot_k C^\op$ and $D\ot_k D^\op$. Then the CDG\+functors $R\ot R^\op$ and $I\ot I^\op$ induce isomorphisms (see~\eqref{ho-hoch-second-kind-F-star}, \eqref{coho-hoch-second-kind-F-star}) \begin{align} HH^{I\!I}_*(B,M)\rarrow HH^{I\!I}_*(D,M_D) \quad&\text{and}\quad HH^{I\!I}_*(C,M_C)\rarrow HH^{I\!I}_*(D,M_D); \\ HH^{I\!I\;*}(D,M_D)\rarrow HH^{I\!I\;*}(B,M) \quad&\text{and}\quad HH^{I\!I\;*}(D,M_D)\rarrow HH^{I\!I\;*}(C,M_C). 
\notag \\ \intertext{In particular, we obtain natural isomorphisms} \label{hoch-B-C-isomorphisms} HH^{I\!I}_*(B)\simeq HH^{I\!I}_*(C) \quad&\text{and}\quad HH^{I\!I\;*}(B)\simeq HH^{I\!I\;*}(C). \end{align} This is a generalization of~\cite[Theorem~2.14]{Seg}. When the ring $k$ has finite weak homological dimension, any $\Gamma$\+graded complex of flat $k$\+modules is h\+flat. So if the $\Gamma$\+graded $k$\+modules of morphisms in the category $B^\#$, and hence also in the category $C^\#$, are flat, then the complexes of morphisms in the DG\+category $C$ are h\+flat. Thus, both the Hochschild (co)homology of the first and the second kind are defined for the DG\+category $C$, and therefore the natural maps between the Hochschild (co)homology of the first and second kind of the DG\+category $C$ with coefficients in any DG\+module over $C\ot_k C^\op$ are defined. \Section{Derived Categories of the Second Kind} In this section we interpret, under certain homological dimension assumptions, the $\Ext$ and $\Tor$ of the second kind over a CDG\+category in terms of the derived categories of the second kind of CDG\+modules over it. This makes it possible to obtain sufficient conditions for an isomorphism of the Hochschild (co)homology of the first and second kind for a DG\+category, and in particular, for the DG\+category $C$ of CDG\+modules over a CDG\+category $B$, projective and finitely generated as $\Gamma$\+graded $B^\#$\+modules. \subsection{Conventional derived category} \label{derived-first-kind} Given a DG\+category $D$, the additive category $H^0(D)$ has a natural triangulated category structure provided that a zero object and all shifts and cones exist in~$D$. In particular, for any small DG\+category $C$ the categories $H^0(C\modld)$ and $H^0(\modrd C)$ are triangulated. These are called the \emph{homotopy categories} of (left and right) DG\+modules over $C$.
A (left or right) DG\+module $M$ over $C$ is said to be \emph{acyclic} if the complexes $M(X)$ are acyclic for all objects $X\in C$, i.~e., $H(M)=0$. Acyclic DG\+modules form thick subcategories, closed under both infinite direct sums and infinite products, in the homotopy categories of DG\+modules. The quotient categories by these thick subcategories are called the (conventional) derived categories (of the first kind) of DG\+modules over $C$ and denoted by $\DD(C\modld)$ and $\DD(\modrd C)$. The full subcategory of h\+projective DG\+modules $H^0(C\modld)_\prj\subset H^0(C\modld)$ is a triangulated subcategory whose functor to $\DD(C\modld)$ is an equivalence of categories~\cite{Kel}, and the same applies to the full subcategory of h\+injective DG\+modules $H^0(C\modld)_\inj\subset H^0(C\modld)$. To prove these results, one notices first of all that any projective object in the exact category $Z^0(C\modld)$ is an h\+projective DG\+mod\-ule, and similarly for injectives (see~\ref{ext-tor-first-kind} for the discussion of this exact category and its projective/injective objects). Let $P_\bu$ be a left projective resolution of a DG\+module $M$ in $Z^0(C\modld)$; then the total DG\+module of $P_\bu$, constructed by taking infinite direct sums along the diagonals, is an h\+projective DG\+module quasi-isomorphic to~$M$. Similarly, if $J^\bu$ is a right injective resolution of a DG\+module $M$ in $Z^0(C\modld)$, then the total DG\+module of $J^\bu$, constructed by taking infinite products along the diagonals, is an h\+injective DG\+module quasi-isomorphic to~$M$ \cite[Section~1]{Pkoszul}. {\hbadness=1500 Furthermore, the full subcategory of h\+flat DG\+modules $H^0(C\modld)_\fl\subset H^0(C\modld)$ is a triangulated subcategory whose quotient category by its intersection with the thick subcategory of acyclic DG\+modules is equivalent to $\DD(C\modld)$. This follows from the above result for h\+projective DG\+modules and the fact that any h\+projective DG\+module is h\+flat. 
The same applies to the full subcategory of h\+flat right DG\+modules $H^0(\modrd C)_\fl\subset H^0(\modrd C)$. \par} Let $k$ be a commutative ring and $C$ be a small $k$\+linear DG\+category. Restricting the triangulated functor of two arguments (see~\eqref{dg-hom}) $$ \Hom^C\:H^0(C\modld)^\op\times H^0(C\modld)\lrarrow \DD(k\modl) $$ to the full subcategory of h\+projective DG\+modules in the first argument, one obtains a functor that factors through the derived category in the second argument, providing the derived functor $$ \Ext_C\:\DD(C\modld)^\op\times\DD(C\modld)\lrarrow \DD(k\modl). $$ Alternatively, restricting the functor $\Hom^C$ to the full subcategory of h\+injective DG\+modules in the second argument, one obtains a functor that factors through the derived category in the first argument, leading to the same derived functor $\Ext_C$. The composition of this derived functor with the localization functor $Z^0(C\modld)\rarrow\DD(C\modld)$ is isomorphic to the derived functor $\Ext_C$ constructed in~\ref{ext-tor-first-kind}. For any left DG\+modules $L$ and $M$ over $C$ there is a natural isomorphism $$ H^*\Ext_C(L,M)\simeq\Hom_{\DD(C\modld)}(L,M[*]). $$ Analogously, restricting the triangulated functor of two arguments (see~\eqref{dg-tensor-product}) $$ \ot_C\:H^0(\modrd C)\times H^0(C\modld)\lrarrow \DD(k\modl) $$ to the full subcategory of h\+flat DG\+modules in the first argument one obtains a functor that factors through the Cartesian product of the derived categories, providing the derived functor $$ \Tor^C\:\DD(\modrd C)\times\DD(C\modld)\lrarrow\DD(k\modl). $$ The same derived functor can be obtained by restricting the functor $\ot_C$ to the full subcategory of h\+flat DG\+modules in the second argument. Up to composing with the localization functors $Z^0(\modrd C)\rarrow\DD(\modrd C)$ and $Z^0(C\modld)\rarrow \DD(C\modld)$, this is the same derived functor $\Tor^C$ that was constructed in~\ref{ext-tor-first-kind}. 
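In down-to-earth terms, both derived functors are computed by the replacement recipe implicit in the constructions above: given a quasi-isomorphism $P\rarrow L$ with $P$ an h\+projective left DG\+module and a quasi-isomorphism $Q\rarrow N$ with $Q$ an h\+flat right DG\+module, there are natural isomorphisms
$$
 \Ext_C(L,M)\,\simeq\,\Hom^C(P,M)
 \quad\text{and}\quad
 \Tor^C(N,M)\,\simeq\,Q\ot_C M
$$
in $\DD(k\modl)$.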
\subsection{Derived categories of the second kind} \label{categories-second-kind} Let $B$ be a small CDG\+category. As in~\ref{derived-first-kind}, the \emph{homotopy categories of CDG\+modules} $H^0(B\modlc)$ and $H^0(\modrc B)$ over $B$ are naturally triangulated. Given a short exact sequence $0\rarrow K'\rarrow K\rarrow K'' \rarrow 0$ in the abelian category $Z^0(B\modlc)$, one can consider it as a finite complex of closed morphisms in the DG\+category $B\modlc$ and take the corresponding total object in $B\modlc$ \cite[Section~1.2]{Pkoszul}. A left CDG\+module over $B$ is called \emph{absolutely acyclic} if it belongs to the minimal thick subcategory of $H^0(B\modlc)$ containing the total CDG\+modules of exact triples of CDG\+modules. The quotient category of $H^0(B\modlc)$ by the thick subcategory of absolutely acyclic CDG\+modules is called the \emph{absolute derived category} of left CDG\+modules over $B$ and denoted by $\DD^\abs(B\modlc)$ \cite[Section~3.3]{Pkoszul}. A left CDG\+module over $B$ is called \emph{coacyclic} if it belongs to the minimal triangulated subcategory of $H^0(B\modlc)$ containing the total CDG\+modules of exact triples of CDG\+modules and closed under infinite direct sums. The quotient category of $H^0(B\modlc)$ by the thick subcategory of coacyclic CDG\+modules is called the \emph{coderived category} of left CDG\+modules over $B$ and denoted by $\DD^\co(B\modlc)$. The definition of a \emph{contraacyclic} CDG\+module is dual to the previous one. A left CDG\+module over $B$ is called contraacyclic if it belongs to the minimal triangulated subcategory of $H^0(B\modlc)$ containing the total CDG\+modules of exact triples of CDG\+modules and closed under infinite products. The quotient category of $H^0(B\modlc)$ by the thick subcategory of contraacyclic CDG\+modules is called the \emph{contraderived category} of left CDG\+modules over $B$ and denoted by $\DD^\ctr(B\modlc)$. 
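To fix ideas, let us describe the totalization entering these definitions a bit more explicitly; the precise signs and the directions of the grading shifts depend on the conventions of~\cite[Section~1.2]{Pkoszul}. Given an exact triple $0\rarrow K'\rarrow K\rarrow K''\rarrow 0$ of CDG\+modules, viewed as a three-term complex of closed morphisms, the underlying $\Gamma$\+graded module of its total CDG\+module $T$ is
$$
 T^\#\,\simeq\,K'{}^\#\oplus K^\#\oplus K''{}^\#,
$$
with the grading shifted by one in opposite directions on the two outer summands, while the differential of $T$ combines the differentials of $K'$, $K$, and~$K''$ with the two closed morphisms of the triple. The coacyclic, contraacyclic, and absolutely acyclic CDG\+modules are exactly the objects generated, in the respective senses, by such totalizations.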
Coacyclic, contraacyclic, and absolutely acyclic right CDG\+modules are defined in an analogous way. The corresponding exotic derived (quotient) categories are denoted by $\DD^\co(\modrc B)$, \ $\DD^\ctr(\modrc B)$, and $\DD^\abs(\modrc B)$. We will use similar notation $\DD^\co(C\modld)$, \ $\DD^\ctr(C\modld)$, etc.,\ in the particular case of the coderived, contraderived, and absolute derived categories of DG\+modules over a small DG\+category $C$. Notice that any coacyclic or contraacyclic DG\+module is acyclic. The converse is not true~\cite[Examples~3.3]{Pkoszul}. Furthermore, given an exact subcategory in the abelian category of $\Gamma$\+graded $B^\#$\+modules, one can define the class of absolutely acyclic CDG\+modules with respect to this exact subcategory (or the DG\+category of CDG\+modules whose underlying $\Gamma$\+graded modules belong to this exact subcategory). For this purpose, one considers exact triples of CDG\+modules whose underlying $\Gamma$\+graded modules belong to the exact subcategory, takes their total CDG\+modules, and uses them to generate a thick subcategory of the homotopy category of all CDG\+modules whose underlying $\Gamma$\+graded modules belong to the exact subcategory. When the exact subcategory is closed under infinite direct sums (resp.,\ infinite products), the class of coacyclic (resp.,\ contraacyclic) CDG\+modules with respect to this exact subcategory is defined. Taking the quotient category, one obtains the coderived, contraderived, or absolute derived category of CDG\+modules with the given restriction on the underlying $\Gamma$\+graded modules. We will be particularly interested in the coderived and absolute derived categories of CDG\+modules over $B$ whose underlying $\Gamma$\+graded $B^\#$\+modules are flat or have finite flat dimension (see~\ref{change-grading-group} for the terminology). 
Denote the DG\+categories of right CDG\+modules over $B$ with such restrictions on the underlying $\Gamma$\+graded modules by $\modrcfl B$ and $\modrcffd B$, and their absolute derived categories by $\DD^\abs(\modrcfl B)$ and $\DD^\abs(\modrcffd B)$. The coderived category of $\modrcfl B$, defined as explained above, is denoted by $\DD^\co(\modrcfl B)$. The definition of the coderived category $\DD^\co(\modrcffd B)$ requires a little more care because the class of modules of finite flat dimension is not closed under infinite direct sums; only the classes of modules of flat dimension not exceeding a fixed number~$d$ are. Let us call a CDG\+module $N$ over $B$ \emph{$d$\+flat} if its underlying $\Gamma$\+graded $B^\#$\+module $N^\#$ has flat dimension not greater than~$d$. Define an object $N\in H^0(\modrcffd B)$ to be \emph{coacyclic with respect to $\modrcffd B$} if there exists an integer $d\ge 0$ such that the CDG\+module $N$ is coacyclic with respect to the DG\+category of $d$\+flat CDG\+modules over $B$. The coderived category $\DD^\co(\modrcffd B)$ is the quotient category of the homotopy category $H^0(\modrcffd B)$ by the thick subcategory of CDG\+modules coacyclic with respect to $\modrcffd B$. Similarly, let $B\modlc_\prj$, \ $B\modlc_\fpd$, $B\modlc_\inj$, and $B\modlc_\fid$ denote the DG\+cate\-gories of left CDG\+modules over $B$ whose underlying $\Gamma$\+graded $B^\#$\+modules are projective, of finite projective dimension, injective, and of finite injective dimension, respectively. The notation for the homotopy categories and exotic derived categories of these DG\+categories is similar to the above. The definition of the coderived category $\DD^\co(B\modlc_\fpd)$ and the contraderived category $\DD^\ctr(B\modlc_\fid)$ involves the same subtle point as discussed above. 
It is dealt with in the same way, i.~e., the class of CDG\+modules coacyclic with respect to $B\modlc_\fpd$ or contraacyclic with respect to $B\modlc_\fid$ is defined as the union of the classes of CDG\+modules coacyclic or contraacyclic with respect to the category of modules of projective or injective dimension bounded by a fixed integer. \begin{thm} \textup{(a)} The functors\/ $\DD^\co(\modrcfl B)\rarrow\DD^\co(\modrcffd B)$ and\/ $\DD^\abs(\modrcfl B)\allowbreak\rarrow\DD^\abs(\modrcffd B)$ induced by the embedding\/ $\modrcfl B\rarrow\modrcffd B$ are equivalences of triangulated categories. \par \textup{(b)} The functors $H^0(B\modlc_\prj)\rarrow\DD^\abs(B\modlc_\fpd) \rarrow\DD^\co(B\modlc_\fpd)$, the first of which is induced by the embedding\/ $B\modlc_\prj\rarrow B\modlc_\fpd$ and the second is the localization functor, are equivalences of triangulated categories. \par \textup{(c)} The functors $H^0(B\modlc_\inj)\rarrow\DD^\abs(B\modlc_\fid) \rarrow\DD^\ctr(B\modlc_\fid)$, the first of which is induced by the embedding\/ $B\modlc_\inj\rarrow B\modlc_\fid$ and the second is the localization functor, are equivalences of triangulated categories. \end{thm} \begin{proof} The first equivalence in part~(b) is easy to prove. By~\cite[Theorem~3.5(b)]{Pkoszul}, CDG\+modules that are projective as $\Gamma$\+graded modules are semiorthogonal to all contraacyclic CDG\+modules in $H^0(B\modlc)$. The construction of~\cite[proof of Theorem~3.6]{Pkoszul} shows that any object of $H^0(B\modlc_\fpd)$ is a cone of a morphism from a CDG\+module that is absolutely acyclic with respect to $B\modlc_\fpd$ to an object of $H^0(B\modlc_\prj)$. It follows that the functor $H^0(B\modlc_\prj)\rarrow \DD^\abs(B\modlc_\fpd)$ is an equivalence of triangulated categories. Moreover, any object of $H^0(B\modlc_\fpd)$ that is contraacyclic with respect to $B\modlc$ is absolutely acyclic with respect to $B\modlc_\fpd$. 
To prove the second equivalence in part~(b), it suffices to show that any object of $H^0(B\modlc_\prj)$ that is coacyclic with respect to $B\modlc_\fpd$ is coacyclic with respect to $B\modlc_\prj$ (as any object of the latter kind is clearly contractible). The proof of this is analogous to the proof of part~(a) below. It follows that any CDG\+module coacyclic with respect to $B\modlc_\fpd$ is absolutely acyclic with respect to $B\modlc_\fpd$. The proof of part~(c) is analogous to the proof of part~(b) up to the duality. To prove part~(a), notice that the same construction from~\cite[proof of Theorem~3.6]{Pkoszul} allows one to present any object of $H^0(\modrcffd B)$ as a cone of a morphism from a CDG\+module that is absolutely acyclic with respect to $\modrcffd B$ to an object of $H^0(\modrcfl B)$. By~\cite[Lemma~1.6]{Pkoszul}, it remains to show that any object of $H^0(\modrcfl B)$ that is coacyclic (absolutely acyclic) with respect to $\modrcffd B$ is coacyclic (absolutely acyclic) with respect to $\modrcfl B$. We follow the idea of the proof of~\cite[Theorem~7.2.2]{Psemi}. Given an integer $d\ge0$, let us call a $d$\+flat right CDG\+module $N$ over $B$ \ \emph{$d$\+coacyclic} if it is coacyclic with respect to the exact category of $d$\+flat CDG\+modules over $B$. We will show that for any $d$\+coacyclic CDG\+module $N$ there exists a $(d-1)$-coacyclic CDG\+module $L$ together with a surjective closed morphism of CDG\+modules $L\rarrow N$ whose kernel $K$ is also $(d-1)$-coacyclic. It will follow that any $(d-1)$-flat $d$\+coacyclic CDG\+module $N$ is $(d-1)$-coacyclic, since the total CDG\+module of the exact triple $K\rarrow L\rarrow N$ is $(d-1)$\+coacyclic, as is the cone of the morphism $K\rarrow L$. By induction we will conclude that any $0$\+flat $d$\+coacyclic CDG\+module is $0$\+coacyclic. The argument for absolutely acyclic CDG\+modules will be similar. 
To prove that a $d$\+coacyclic CDG\+module can be presented as a quotient of a $(d-1)$-coacyclic CDG\+module by a $(d-1)$-coacyclic CDG\+submodule, we will first construct such a presentation for totalizations of exact triples of $d$\+flat CDG\+modules, and then check that the class of $d$\+flat CDG\+modules presentable in this form is stable under taking cones and homotopy equivalences. \begin{lemA} Let $N$ be the total CDG\+module of an exact triple of $d$\+flat CDG\+modules $N'\rarrow N''\rarrow N'''$. Then there exists a surjective closed morphism onto $N$ from a $0$\+coacyclic CDG\+module $P$ with a $(d-1)$\+coacyclic kernel~$K$. \end{lemA} \begin{proof} Choose projective objects $P'$ and $P'''$ in the abelian category of CDG\+modules $Z^0(\modrc B)$ (see~\ref{second-kind-general}) such that there are surjective morphisms $P'\rarrow N'$ and $P'''\rarrow N'''$. Then there exists a surjective morphism from the exact triple of CDG\+modules $P'\rarrow P''= P'\oplus P'''\rarrow P'''$ onto the exact triple $N'\rarrow N''\rarrow N'''$. Let $K'\rarrow K''\rarrow K'''$ be the kernel of this morphism of exact triples; then the CDG\+modules $P^{(i)}$ are $0$\+flat, while the CDG\+modules $K^{(i)}$ are $(d-1)$-flat. Therefore, the total CDG\+module $P$ of the exact triple $P'\rarrow P''\rarrow P'''$ is $0$\+coacyclic (in fact, $0$\+flat and contractible), while the total CDG\+module $K$ of the exact triple $K'\rarrow K''\rarrow K'''$ is $(d-1)$-coacyclic. \end{proof} \begin{lemB} \textup{(a)} Let $K'\rarrow L'\rarrow N'$ and $K''\rarrow L''\rarrow N''$ be exact triples of CDG\+modules such that the CDG\+modules $K'$, $L'$, $K''$, $L''$ are $(d-1)$-coacyclic, and let $N'\rarrow N''$ be a closed morphism of CDG\+modules. Then there exists an exact triple of CDG\+modules $K\rarrow L \rarrow N$ with $N=\cone(N'\to N'')$ and\/ $(d-1)$-coacyclic CDG\+modules $K$ and~$L$. 
\par \textup{(b)} In the situation of~\textup{(a)}, assume that the morphism $N'\rarrow N''$ is injective with a $d$\+flat cokernel $N_0$. Then there exists an exact triple of CDG\+modules $K_0\rarrow L_0\rarrow N_0$ with $(d-1)$-coacyclic CDG\+modules $K_0$ and $L_0$. \end{lemB} \begin{proof} Denote by $L'''$ the CDG\+module $L'\oplus L''$; then there is the embedding of a direct summand $L'\rarrow L'''$ and the surjective closed morphism of CDG\+modules $L'''\rarrow N''$ whose components are the composition $L'\rarrow N'\rarrow N''$ and the surjective morphism $L''\rarrow N''$. These two morphisms form a commutative square with the morphisms $L'\rarrow N'$ and $N'\rarrow N''$. The kernel $K'''$ of the morphism $L'''\rarrow N''$ is the middle term of an exact triple of CDG\+modules $K''\rarrow K'''\rarrow L'$. Since the CDG\+modules $K''$ and $L'$ are $(d-1)$-coacyclic, the CDG\+module $K'''$ is $(d-1)$-coacyclic, too. Set $L=\cone(L'\to L''')$ and $K=\cone(K'\to K''')$. To prove part~(b), notice that the above morphisms of CDG\+modules $L'\rarrow L'''$ and $K'\rarrow K'''$ are injective; denote their cokernels by $L_0$ and~$K_0$. Then the CDG\+module $L_0\simeq L''$ is $(d-1)$-coacyclic. In the assumptions of part~(b), the CDG\+module $K_0$ is the kernel of the surjective morphism $L_0\rarrow N_0$, so it is $(d-1)$-flat. Hence it follows from the exact triple $K'\rarrow K'''\rarrow K_0$ that $K_0$ is $(d-1)$-coacyclic. \end{proof} \begin{lemC} For any contractible $d$\+flat CDG\+module $N$ there exists an exact triple $K\rarrow P\rarrow N$ with a $0$\+flat contractible CDG\+module $P$ and a $(d-1)$-flat contractible CDG\+module~$K$. \end{lemC} \begin{proof} It is easy to see using the explicit description of projective objects in $Z^0(\modrc B)$ given in~\ref{second-kind-general} that any projective CDG\+module is contractible. Let $p\:P\rarrow N$ be a surjective morphism onto $N$ from a projective CDG\+module~$P$. 
Let $t\:N\rarrow N$ be a contracting homotopy for $N$ and $\theta\:P\rarrow P$ be a contracting homotopy for~$P$. Then $p\theta-tp\:P\rarrow N$ is a closed morphism of CDG\+modules of degree~$-1$. Since $P$ is projective and $p$ is surjective, there exists a closed morphism $b\:P\rarrow P$ of degree~$-1$ such that $p\theta-tp=pb$. Hence $\theta-b$ is another contracting homotopy for $P$ making a commutative square with the contracting homotopy~$t$ and the morphism~$p$. It follows that the restriction of $\theta-b$ to the kernel $K$ of the morphism~$p$ is a contracting homotopy for the CDG\+module~$K$. \end{proof} \begin{lemD} Let $N\rarrow N'$ be a homotopy equivalence of $d$\+flat CDG\+modules, and suppose that there is an exact triple of CDG\+modules $K'\rarrow L'\rarrow N'$ with $(d-1)$-coacyclic CDG\+modules $K'$ and~$L'$. Then there exists an exact triple of CDG\+modules $K\rarrow L \rarrow N$ with $(d-1)$-coacyclic CDG\+modules $K$ and~$L$. \end{lemD} \begin{proof} The cone of the morphism $N\rarrow N'$, being a contractible $d$\+flat CDG\+module, is the cokernel of an injective morphism of $(d-1)$-coacyclic CDG\+modules by Lemma~C\hbox{}. By Lemma~B(a), the cocone $N''$ of the morphism $N'\rarrow \cone(N\to N')$ can be also presented in such form. The CDG\+module $N''$ is isomorphic to the direct sum of the CDG\+module $N$ and the cocone $N'''$ of the identity endomorphism of the CDG\+module~$N'$. The CDG\+module $N'''$ can be also presented in the desired form. Hence, by Lemma~B(b), so can the cokernel $N$ of the injective morphism $N'''\rarrow N''$. \end{proof} It is clear that the property of a CDG\+module to be presentable as the quotient of a $(d-1)$-coacyclic CDG\+module by a $(d-1)$-coacyclic CDG\+submodule is stable under infinite direct sums. The assertion that all $d$\+coacyclic CDG\+modules can be presented in such form now follows from Lemmas~A, B(a), and~D. 
\end{proof} In particular, it follows from part~(b) of the Theorem that there is a natural fully faithful functor $\DD^\co(B\modlc_\fpd)\rarrow \DD^\ctr(B\modlc)$. Indeed, the functor $H^0(B\modlc_\prj)\allowbreak\simeq \DD^\abs(B\modlc_\fpd)\rarrow\DD^\ctr(B\modlc)$ is fully faithful by~\cite[Theorem~3.5(b) and Lemma~1.3]{Pkoszul}. Similarly, the functor $H^0(B\modlc_\inj)\simeq\DD^\abs(B\modlc_\fid) \rarrow\DD^\co(B\modlc)$ is fully faithful by~\cite[Theorem~3.5(a)]{Pkoszul}, so there is a natural fully faithful functor $\DD^\ctr(B\modlc_\fid)\rarrow \DD^\co(B\modlc)$. \subsection{Derived functors of the second kind} \label{functors-second-kind} Let $B$ be a small $k$\+linear CDG\+cate\-gory and $L$ be a left CDG\+module over $B$ such that the $\Gamma$\+graded left $B^\#$\+module $L^\#$ has finite projective dimension. Then the CDG\+module $L$ admits a finite left resolution $P_\bu$ in the abelian category $Z^0(B\modlc)$ such that the $\Gamma$\+graded $B^\#$\+modules $P_i^\#$ are projective. This resolution can be used to compute the functor $\Ext_B^{I\!I}(L,{-})$ as defined in~\ref{second-kind-general}. On the other hand, let $P$ denote the total CDG\+module of the finite complex of CDG\+modules $P_\bu$. Then for any left CDG\+module $M$ over $B$ the $k$\+module of morphisms from $L$ into $M$ in $\DD^\abs(B\modlc)$ or $\DD^\ctr(B\modlc)$ is isomorphic to the $k$\+module of morphisms from $P$ into $M$ in the homotopy category $H^0(B\modlc)$ \cite[Theorem~3.5(b) and Lemma~1.3]{Pkoszul}. Thus, $$ H^*\Ext_B^{I\!I}(L,M)\.\simeq\.\Hom_{\DD^\abs(B\modlc)}(L,M[*]) \.\simeq\.\Hom_{\DD^\ctr(B\modlc)}(L,M[*]). $$ Similar isomorphisms $$ H^*\Ext_B^{I\!I}(L,M)\.\simeq\.\Hom_{\DD^\abs(B\modlc)}(L,M[*]) \.\simeq\.\Hom_{\DD^\co(B\modlc)}(L,M[*]) $$ hold if one assumes, instead of the condition on $L^\#$, that the $\Gamma$\+graded $B^\#$\+module $M^\#$ has finite injective dimension. 
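Restating the computation just described: if $P$ is the total CDG\+module of a finite left resolution $P_\bu$ of $L$ by CDG\+modules whose underlying $\Gamma$\+graded $B^\#$\+modules are projective, then there is a natural isomorphism
$$
 \Ext_B^{I\!I}(L,M)\,\simeq\,\Hom^B(P,M)
$$
in $\DD(k\modl)$, and the isomorphisms of cohomology modules displayed above are obtained by passing to $H^*$.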
One can lift these comparison results from the level of cohomology modules to the level of the derived category $\DD(k\modl)$ in the following way. Consider the functor (see~\eqref{cdg-hom}) $$ \Hom^B\:H^0(B\modlc)^\op\times H^0(B\modlc)\lrarrow\DD(k\modl) $$ and restrict it to the subcategory $H^0(B\modlc_\prj)^\op$ in the first argument. This restriction factors through the contraderived category $\DD^\ctr(B\modlc)$ in the second argument. Taking into account Theorem~\ref{categories-second-kind}(b), we obtain a right derived functor \begin{equation} \label{ext-second-kind-proj} \DD^\co(B\modlc_\fpd)^\op\times\DD^\ctr(B\modlc)\lrarrow \DD(k\modl). \end{equation} The composition of this derived functor with the localization functors $Z^0(B\modlc_\fpd)\allowbreak\rarrow\DD^\co(B\modlc_\fpd)$ and $Z^0(B\modlc)\rarrow\DD^\ctr(B\modlc)$ agrees with the derived functor $\Ext_B^{I\!I}$ where the former is defined. In the same way one can use Theorem~\ref{categories-second-kind}(c) to construct a right derived functor \begin{equation} \label{ext-second-kind-inj} \DD^\co(B\modlc)^\op\times\DD^\ctr(B\modlc_\fid)\lrarrow\DD(k\modl), \end{equation} which agrees with the functor $\Ext_B^{I\!I}$ where the former is defined, up to composing with the localization functors. Analogously, consider the functor (see~\eqref{cdg-tensor-product}) $$ \ot_B\:H^0(\modrc B)\times H^0(B\modlc)\lrarrow\DD(k\modl) $$ and restrict it to the subcategory $H^0(\modrcfl B)$ in the first argument. This restriction factors through the Cartesian product $\DD^\co(\modrcfl B)\times\DD^\co(B\modlc)$. Indeed, the tensor product of a CDG\+module that is flat as a $\Gamma$\+graded module with a coacyclic CDG\+module is clearly acyclic, as is the tensor product of a CDG\+module coacyclic with respect to $\modrcfl B$ with any CDG\+module over~$B$. 
Taking into account Theorem~\ref{categories-second-kind}(a), we obtain a left derived functor \begin{equation} \label{tor-second-kind} \DD^\co(\modrcffd B)\times\DD^\co(B\modlc)\lrarrow\DD(k\modl). \end{equation} Up to composing with the localization functors $Z^0(\modrcffd B) \rarrow\DD^\co(\modrcffd B)$ and $Z^0(B\modlc)\rarrow \DD^\co(B\modlc)$, this derived functor agrees with the derived functor $\Tor^{B,I\!I}$ where the former is defined. To see this, it suffices, as above, to choose for a CDG\+module $N\in \modrcffd B$ a \emph{finite} left resolution $Q_\bu$ in the abelian category $Z^0(\modrc B)$ such that the $\Gamma$\+graded $B^\#$\+modules $Q_i^\#$ are flat. \begin{rem} The functor $\Tor^{B,I\!I}$ factors through the Cartesian product of the absolute derived categories, defining a triangulated functor of two arguments $$ \DD^\abs(\modrc B)\times\DD^\abs(B\modlc)\lrarrow\DD(k\modl) $$ \cite[Section~3.12]{Pkoszul}. This functor agrees with the functor~\eqref{tor-second-kind} in the sense that the composition of the former with the functor $\DD^\abs(\modrcffd B)\rarrow\DD^\abs(\modrc B)$ in the first argument is isomorphic to the composition of the latter with the functors $\DD^\abs(\modrcffd B)\rarrow\DD^\co(\modrcffd B)$ and $\DD^\abs(B\modlc)\rarrow\DD^\co(B\modlc)$ in the first and second arguments, respectively. Analogously, the functor $\Ext_B^{I\!I}$ descends to a triangulated functor of two arguments $$ \DD^\abs(B\modlc)^\op\times\DD^\abs(B\modlc)\lrarrow\DD(k\modl), $$ which agrees with the functors \eqref{ext-second-kind-proj} and~\eqref{ext-second-kind-inj} in a similar sense. \end{rem} \subsection{Comparison of the two theories} \label{comparison-subsect} Let $C$ be a small $k$\+linear DG\+category. Recall (see~\ref{derived-first-kind}) the notation $H^0(C\modld)_\prj$ for the homotopy category of h\+projective left DG\+modules over $C$. 
As in~\ref{categories-second-kind}, let $C\modld_\prj$ and $H^0(C\modld_\prj)$ denote the DG\+category of left DG\+modules over $C$ whose underlying $\Gamma$\+graded $C^\#$\+modules are projective, and its homotopy category. Finally, denote by $H^0(C\modld_\prj)_\prj$ the full triangulated subcategory in $H^0(C\modld_\prj)$ formed by the h\+projective left DG\+modules over $C$ whose underlying $\Gamma$\+graded $C^\#$\+modules are projective. The functors $$ H^0(C\modld_\prj)_\prj\lrarrow H^0(C\modld)_\prj\lrarrow\DD(C\modld) $$ are equivalences of triangulated categories. Moreover, for any left DG\+module $L$ over $C$ there exists a DG\+module $P\in H^0(C\modld_\prj)_\prj$ together with a quasi-isomorphism $P\rarrow L$ of DG\+modules over~$C$ (see \cite{Kel} or~\cite[Section~1]{Pkoszul}). The equivalence of categories $H^0(C\modld_\prj)_\prj\rarrow \DD(C\modld)$ factors as the following composition $$ H^0(C\modld_\prj)_\prj\lrarrow H^0(C\modld_\prj)\lrarrow \DD^\co(C\modld_\fpd)\lrarrow\DD(C\modld), $$ where the middle arrow is also an equivalence of categories (by Theorem~\ref{categories-second-kind}(b)). Besides, there is the localization functor $\DD^\ctr(C\modld) \rarrow\DD(C\modld)$. This allows one to construct a natural morphism \begin{equation} \label{ext-derived-first-second-proj} \Ext_C^{I\!I}(L,M)\lrarrow\Ext_C(L,M) \end{equation} in $\DD(k\modl)$ for any objects $L\in\DD^\co(C\modld_\fpd)$ and $M\in\DD^\ctr(C\modld)$. Specifically, for a given DG\+module $L$ choose a DG\+module $F\in H^0(C\modld_\prj)$ and a closed morphism $F\rarrow L$ with a cone coacyclic with respect to $C\modld_\fpd$. Next, for the DG\+module $F$ choose a DG\+module $P\in H^0(C\modld_\prj)_\prj$ together with a quasi-isomorphism $P\rarrow F$. 
Then the complex $\Hom^C(F,M)$ represents the object $\Ext_C^{I\!I}(L,M)$, the complex $\Hom^C(P,M)$ represents the object $\Ext_C(L,M)$, and the morphism $\Hom^C(F,M)\rarrow \Hom^C(P,M)$, induced by the morphism $P\rarrow F$, represents the desired morphism~\eqref{ext-derived-first-second-proj}. This morphism does not depend on the choices of the objects $F$ and~$P$. To see that the comparison morphism~\eqref{ext-derived-first-second-proj} coincides with the morphism~\eqref{ext-first-second} constructed in~\ref{second-kind-general}, choose a projective resolution $P_\bu$ of the object $L$ in the exact category $Z^0(C\modld)$. Then both the resolution $P_\bu$ and its finite canonical truncation $\tau_{\ge -d}P_\bu$ for $d$~large enough are resolutions of $L$ that can be used to compute $\Ext_C^{I\!I}(L,M)$ by the procedure of~\ref{second-kind-general}, while the whole resolution $P_\bu$ can also be used to compute $\Ext_C(L,M)$ by the procedure of~\ref{ext-tor-first-kind}. Set $F$ to be the total DG\+module of the finite complex of DG\+modules $\tau_{\ge -d}P_\bu$ and $P$ the total DG\+module of the complex of DG\+modules $P_\bu$, constructed by taking infinite direct sums along the diagonals. Then the morphism of complexes $\Hom^C(F,M)\rarrow\Hom^C(P,M)$ represents both the morphisms \eqref{ext-first-second} and~\eqref{ext-derived-first-second-proj} in $\DD(k\modl)$. Analogously, denote by $H^0(C\modld_\inj)_\inj$ the full triangulated subcategory in $H^0(C\modld_\inj)$ formed by the h\+injective left DG\+modules over $C$ whose underlying $\Gamma$\+graded $C^\#$\+modules are injective. Here, as above, the notation $H^0(C\modld)_\inj$ for the category of h\+injective DG\+modules comes from~\ref{derived-first-kind}, while the notation $C\modld_\inj$ and $H^0(C\modld_\inj)$ for the categories of DG\+modules whose underlying $\Gamma$\+graded modules are injective is similar to that in~\ref{categories-second-kind}. 
The functors $$ H^0(C\modld_\inj)_\inj\lrarrow H^0(C\modld)_\inj\lrarrow \DD(C\modld) $$ are equivalences of triangulated categories; moreover, for any left DG\+module $M$ over $C$ there exists a DG\+module $J\in H^0(C\modld_\inj)_\inj$ together with a quasi-isomorphism $M\rarrow J$ of DG\+modules over~$C$. The equivalence of categories $H^0(C\modld_\inj)_\inj\rarrow \DD(C\modld)$ factors as the following composition $$ H^0(C\modld_\inj)_\inj\lrarrow H^0(C\modld_\inj)\lrarrow \DD^\ctr(C\modld_\fid)\lrarrow\DD(C\modld), $$ where the middle arrow is also an equivalence of categories (by Theorem~\ref{categories-second-kind}(c)). Besides, there is the localization functor $\DD^\co(C\modld)\rarrow \DD(C\modld)$. This allows one to construct a natural morphism \begin{equation} \label{ext-derived-first-second-inj} \Ext_C^{I\!I}(L,M)\lrarrow\Ext_C(L,M) \end{equation} in $\DD(k\modl)$ for any objects $L\in\DD^\co(C\modld)$ and $M\in\DD^\ctr(C\modld_\fid)$. Specifically, for a given DG\+module $M$ choose a DG\+module $I\in H^0(C\modld_\inj)$ and a closed morphism $M\rarrow I$ with a cone contraacyclic with respect to $C\modld_\fid$. Next, for the DG\+module $I$ choose a DG\+module $J\in H^0(C\modld_\inj)_\inj$ together with a quasi-isomorphism $I\rarrow J$. Then the complex $\Hom^C(L,I)$ represents the object $\Ext_C^{I\!I}(L,M)$, the complex $\Hom^C(L,J)$ represents the object $\Ext_C(L,M)$, and the morphism $\Hom^C(L,I)\rarrow \Hom^C(L,J)$ represents the desired morphism~\eqref{ext-derived-first-second-inj}. This comparison morphism agrees with the comparison morphism~\eqref{ext-first-second} from~\ref{second-kind-general} where the former is defined. Finally, denote by $H^0(\modrdfl C)_\fl$ the full triangulated subcategory in $H^0(\modrdfl C)$ formed by h\+flat right DG\+modules over $C$ whose underlying $\Gamma$\+graded $C^\#$\+mod\-ules are flat. 
As above, $H^0(\modrd C)_\fl$ is the homotopy category of h\+flat right DG\+mod\-ules over $C$, while $\modrdfl C$ and $H^0(\modrdfl C)$ denote the DG\+category of right DG\+modules whose underlying $\Gamma$\+graded $C^\#$\+modules are flat, and its homotopy category. The functors between the quotient categories of $H^0(\modrdfl C)_\fl$ and $H^0(\modrd C)_\fl$ by their intersections with the thick subcategory of acyclic DG\+mod\-ules and the derived category $\DD(\modrd C)$ are equivalences of triangulated categories. Moreover, for any right DG\+module $N$ over $C$ there exists a DG\+module $Q\in H^0(\modrdfl C)_\fl$ together with a quasi-isomorphism of DG\+modules $Q\rarrow N$ \cite[Section~1.6]{Pkoszul}. The localization functor $H^0(\modrdfl C)_\fl\rarrow\DD(\modrd C)$ factors into the composition $$ H^0(\modrdfl C)_\fl\lrarrow H^0(\modrdfl C)\lrarrow \DD^\co(\modrdffd C)\lrarrow\DD(\modrd C) $$ (the middle arrow being described by Theorem~\ref{categories-second-kind}(a)). There is also the localization functor $\DD^\co(C\modld)\rarrow \DD(C\modld)$. This allows one to construct a natural morphism \begin{equation} \label{tor-derived-first-second} \Tor^C(N,M)\lrarrow\Tor^{C,I\!I}(N,M) \end{equation} in $\DD(k\modl)$ for any objects $N\in\DD^\co(\modrdffd C)$ and $M\in\DD^\co(C\modld)$ in the same way as above. Specifically, for a given DG\+module $N$ choose a DG\+module $F\in H^0(\modrdfl C)$ and a closed morphism $F\rarrow N$ with a cone coacyclic with respect to $\modrdffd C$. Next, for the DG\+module $F$ choose a DG\+module $Q\in H^0(\modrdfl C)_\fl$ together with a quasi-isomorphism $Q\rarrow F$. Then the complex $F\ot_C M$ represents the object $\Tor^{C,I\!I}(N,M)$, the complex $Q\ot_C M$ represents the object $\Tor^C(N,M)$, and the morphism $Q\ot_C M\rarrow F\ot_C M$ represents the desired morphism~\eqref{tor-derived-first-second}. 
This comparison morphism agrees with the morphism~\eqref{tor-first-second} from~\ref{second-kind-general} where the former is defined. \begin{prop} \textup{(a)} The natural morphism $\Tor^C(N,M)\rarrow\Tor^{C,I\!I}(N,M)$ is an isomorphism whenever the $\Gamma$\+graded $C^\#$\+module $N^\#$ has finite flat dimension and there exists a closed morphism $Q\rarrow N$ from a DG\+module $Q\in H^0(\modrdfl C)_\fl$ with a cone that is coacyclic with respect to $\modrdffd C$. \par \textup{(b)} The natural morphism $\Ext_C^{I\!I}(L,M)\rarrow\Ext_C(L,M)$ is an isomorphism whenever the $\Gamma$\+graded $C^\#$\+module $L^\#$ has finite projective dimension and the object $L\in H^0(C\modld_\fpd)$ belongs to the triangulated subcategory generated by $H^0(C\modld_\prj)_\prj$ and the subcategory of objects coacyclic with respect to $C\modld_\fpd$. \par Equivalently, the latter conclusion holds whenever the $\Gamma$\+graded $C^\#$\+module $L^\#$ has finite projective dimension and the object $L\in\DD^\ctr(C\modld)$ belongs to the image of the functor $H^0(C\modld)_\prj\rarrow\DD^\ctr(C\modld)$. \par\textup{(c)} The natural morphism $\Ext_C^{I\!I}(L,M)\rarrow\Ext_C(L,M)$ is also an isomorphism if the $\Gamma$\+graded $C^\#$\+module $M^\#$ has finite injective dimension and the object $M\in H^0(C\modld_\fid)$ belongs to the triangulated subcategory generated by $H^0(C\modld_\inj)_\inj$ and the subcategory of objects contraacyclic with respect to $C\modld_\fid$. \par Equivalently, the latter conclusion holds whenever the $\Gamma$\+graded $C^\#$\+module $M^\#$ has finite injective dimension and the object $M\in\DD^\co(C\modld)$ belongs to the image of the functor $H^0(C\modld)_\inj\rarrow\DD^\co(C\modld)$. \end{prop} Notice that the equivalence of categories $H^0(C\modld)_\prj\simeq \DD(C\modld)$ identifies the functor $H^0(C\modld)_\prj\rarrow \DD^\ctr(C\modld)$ with the functor left adjoint to the localization functor $\DD^\ctr(C\modld)\rarrow\DD(C\modld)$.
Analogously, the equivalence of categories $H^0(C\modld)_\inj\simeq \DD(C\modld)$ identifies the functor $H^0(C\modld)_\inj\rarrow \DD^\co(C\modld)$ with the functor right adjoint to the localization functor $\DD^\co(C\modld)\rarrow\DD(C\modld)$. Before we prove the proposition, let us introduce some more notation. The triangulated category of (C)DG\+modules coacyclic (resp.,\ contraacyclic) with respect to a given DG\+category of (C)DG\+modules $D$ will be denoted by $\Ac^\co(D)$ (resp.,\ $\Ac^\ctr(D)$). So $\Ac^\co(D)$ and $\Ac^\ctr(D)$ are triangulated subcategories of $H^0(D)$. Similarly, $\Ac^\abs(D)$ denotes the triangulated subcategory of absolutely acyclic (C)DG\+modules. Finally, given a DG\+category $C$, we denote by $\Ac(C\modld)$ and $\Ac(\modrd C)$ the full subcategories of acyclic DG\+modules in the homotopy categories $H^0(C\modld)$ and $H^0(\modrd C)$. \begin{proof} Part~(a) follows immediately from the above construction of the morphism~\eqref{tor-derived-first-second}. To prove the first assertion of part~(b), notice that any morphism from an object of $H^0(C\modld_\prj)_\prj$ to an object of $\Ac^\co(C\modld_\fpd)$ vanishes in $H^0(C\modld)$. In fact, any morphism from an object of $H^0(C\modld_\prj)$ to an object of $\Ac^\co(C\modld_\fpd)$ vanishes, and any morphism from an object of $H^0(C\modld)_\prj$ to an object of $\Ac(C\modld)$ vanishes in the homotopy category. By the standard properties of semiorthogonal decompositions (see, e.~g., \cite[Lemma~1.3]{Pkoszul}), it follows that any object $L$ in the triangulated subcategory generated by $H^0(C\modld_\prj)_\prj$ and $\Ac^\co(C\modld_\fpd)$ in $H^0(C\modld_\fpd)$ admits a closed morphism $P\rarrow L$ from an object $P\in H^0(C\modld_\prj)_\prj$ with a cone in $\Ac^\co(C\modld_\fpd)$. 
To prove the equivalence of the two conditions in part~(b), notice that, by the same semiorthogonality lemma, a DG\+module $L\in C\modld_\fpd$ belongs to the triangulated subcategory generated by $H^0(C\modld_\prj)_\prj$ and $\Ac^\co(C\modld_\fpd)$ in $H^0(C\modld_\fpd)$ if and only if, as an object of $\DD^\co(C\modld_\fpd)$, it belongs to the image of $H^0(C\modld_\prj)_\prj$ in $\DD^\co(C\modld_\fpd)$. Then use the concluding remarks in~\ref{categories-second-kind}. Part~(c) is similar to part~(b) up to duality. \end{proof} In particular, if the left homological dimension of the $\Gamma$\+graded category $C^\#$ is finite (see~\ref{change-grading-group} for the terminology), then the classes of coacyclic, contraacyclic, and absolutely acyclic left DG\+modules over $C$ coincide~\cite[Theorem~3.6(a)]{Pkoszul}. In this case, for any left DG\+modules $L$ and $M$ over $C$ the morphism of $\Gamma$\+graded $k$\+modules $H^*\Ext_C^{I\!I}(L,M) \rarrow H^*\Ext_C(L,M)$ is naturally identified with the morphism $$ \Hom_{\DD^\abs(C\modld)}(L,M)\lrarrow\Hom_{\DD(C\modld)}(L,M). $$ So if the class of absolutely acyclic left DG\+modules also coincides with the class of acyclic DG\+modules, then the natural morphism $\Ext_C^{I\!I}(L,M)\rarrow\Ext_C(L,M)$ is an isomorphism for any DG\+modules $L$ and $M$ over~$C$. Analogously, if the weak homological dimension of the $\Gamma$\+graded category $C^\#$ is finite and the category $H^0(\modrdfl C)$ coincides with its full subcategory $H^0(\modrdfl C)_\fl$, then the natural morphism $\Tor^C(N,M)\rarrow\Tor^{C,I\!I}(N,M)$ is an isomorphism for any DG\+modules $N$ and $M$ over~$C$. This follows from part~(a) of the Proposition. \subsection{Comparison for DG\+category of CDG\+modules} \label{comparison-dg-of-cdg} Let $B$ be a small $k$\+linear CDG\+category and $C=\modrcfp B$ the DG\+category of right CDG\+modules over $B$, projective and finitely generated as $\Gamma$\+graded $B^\#$\+modules.
The results below also apply, mutatis mutandis, to finitely generated free modules in place of finitely generated projective ones. The DG\+categories of (left or right) CDG\+modules over $B$ and DG\+modules over $C$ are naturally equivalent; let $M_C$ denote the DG\+module over $C$ corresponding to a CDG\+module $M$ over~$B$ (see \ref{pseudo-equi-subsect} and~\ref{dg-of-cdg-subsect}). Denote by $B\modlc_\fp$ the DG\+category of left CDG\+modules over $B$, finitely generated and projective as $\Gamma$\+graded $B^\#$\+modules. Let $k\spcheck$ be an injective cogenerator of the abelian category of $k$\+modules; for example, one can take $k\spcheck=k$ when $k$~is a field, or $k\spcheck=\Hom_\Z(k,\Q/\Z)$ for any ring~$k$. Recall that an object $X$ of a triangulated category $T$ with infinite direct sums is called \emph{compact} if the functor $\Hom_T(X,{-})$ preserves infinite direct sums. A set of compact objects $S\subset T$ generates $T$ as a triangulated category with infinite direct sums if and only if any object $Y\in T$ for which all morphisms $X\rarrow Y[*]$ in $T$ with $X\in S$ vanish is itself a zero object \cite[Theorem~2.1(2)]{Neem}. \begin{thmA} \textup{(a)} If the $\Gamma$\+graded category $B^\#$ has finite weak homological dimension and the image of the functor $H^0(\modrcfp B)\rarrow \DD^\co(\modrc B)$ generates $\DD^\co(\modrc B)$ as a triangulated category with infinite direct sums, then for any right DG\+module $N_C$ and left DG\+module $M_C$ over $C$ the natural morphism $\Tor^C(N_C,M_C)\rarrow \Tor^{C,I\!I}(N_C,M_C)$ is an isomorphism. \par The same conclusion holds if the $\Gamma$\+graded category $B^\#$ has finite weak homological dimension and all objects of $H^0(\modrcfl B)$ can be obtained from objects of $H^0(\modrcfp B)$ using the operations of shift, cone, filtered inductive limit, and passage to a homotopy equivalent CDG\+module over~$B$.
\par \textup{(b)} If the $\Gamma$\+graded category $B^\#$ has finite left homological dimension and the image of the functor $H^0(B\modlc_\fp)\rarrow\DD^\co(B\modlc)$ generates $\DD^\co(B\modlc)$ as a triangulated category with infinite direct sums, then for any left DG\+modules $L_C$ and $M_C$ over $C$ the natural morphism $\Ext_C^{I\!I}(L_C,M_C)\rarrow \Ext_C(L_C,M_C)$ is an isomorphism. \end{thmA} \begin{proof} The proof is based on the results of~\ref{comparison-subsect}. The DG\+category $B\modlc_\fp$ is equivalent to the DG\+category $C^\op$; the equivalence assigns to a right CDG\+module $F$ the left CDG\+module $G=\Hom^{B^\op}(F,B)$ and to a left CDG\+module $G$ the right CDG\+module $F=\Hom^B(G,B)$ over~$B$. Given a left CDG\+module $M$ over $B$, the corresponding left DG\+module $M_C$ over $C$ assigns to a CDG\+module $F\in\modrcfp B$ the complex of $k$\+modules $F\ot_B M\simeq\Hom^B(G,M)$. Given a right CDG\+module $N$ over $B$, the corresponding right DG\+module $N_C$ over $C$ assigns to a CDG\+module $F\in\modrcfp B$ the complex of $k$\+modules $\Hom^{B^\op}(F,N)\simeq N\ot_B G$. The categories of (left or right) $\Gamma$\+graded modules over the $\Gamma$\+graded categories $B^\#$ and $C^\#$ are also equivalent. $\Gamma$\+graded modules corresponding to each other under these equivalences have equal flat, projective, and injective dimensions. So the (weak, left, or right) homological dimensions of the $\Gamma$\+graded categories $B^\#$ and $C^\#$ are equal. The equivalence between the DG\+categories of (left or right) CDG\+modules over $B$ and DG\+modules over $C$ preserves the classes of coacyclic, acyclic, and absolutely acyclic (C)DG\+modules. Given a left CDG\+module $M$ over $B$, the DG\+module $M_C$ is acyclic if and only if the complex $F\ot_BM\simeq\Hom^B(G,M)$ is acyclic for any CDG\+modules $F\in\modrcfp B$ and $G\in B\modlc_\fp$ (related to each other as above); similarly for a right CDG\+module $N$ over~$B$.
For any small CDG\+category $B$ the functor $H^0(B\modlc_\fp) \rarrow\DD^\co(B\modlc)$ is fully faithful, and the objects in its image are compact in the coderived category~\cite{KLN}. Thus, the classes of acyclic and coacyclic left DG\+modules over $C$ coincide if and only if $\DD^\co(B\modlc)$ is generated by $H^0(B\modlc_\fp)$ as a triangulated category with infinite direct sums. Now parts (a) and~(b) follow from Proposition~\ref{comparison-subsect}(a-b); see also the concluding remarks in~\ref{comparison-subsect}. For details related to the proof of the second assertion of part~(a), see the last paragraph of the proof of Theorem~B below. \end{proof} The next, more technical result is a generalization of Theorem~A to the case of $\Gamma$\+graded categories $B^\#$ of infinite homological dimension. Let us denote by $\langle T_i\rangle_\oplus\subset T$ (resp.,\ $\langle T_i\rangle_\sqcap\subset T$) the minimal triangulated subcategory of a triangulated category $T$ containing subcategories $T_i$ and closed under infinite direct sums (resp.,\ infinite products). Given a class of CDG\+modules $E\subset Z^0(\modrc B)$, we denote by $\langle E\rangle_\cup\subset Z^0(\modrc B)$ the full subcategory of all CDG\+modules that can be obtained from the objects of $E$ using the operations of shift, cone, filtered inductive limit, and passage to a homotopy equivalent CDG\+module. \begin{thmB} \textup{(a)} If for a right CDG\+module $N\in H^0(\modrcffd B)$ there exist a right CDG\+module $$ Q\in Z^0(\modrcfl B)\cap\langle Z^0(\modrcfp B)\rangle_\cup, $$ and a closed morphism $Q\rarrow N$ with a cone in $\Ac^\co(\modrcffd B)$, then for any left DG\+module $M_C$ over $C$ the natural morphism $\Tor^C(N_C,M_C)\rarrow\Tor^{C,I\!I}(N_C,M_C)$ is an isomorphism. \par \textup{(b)} If a left CDG\+module $L\in H^0(B\modlc_\fpd)$ belongs to $$ \langle\. H^0(B\modlc_\fp),\Ac^\co(B\modlc_\fpd)\.\rangle_\oplus \.\subset\.
H^0(B\modlc_\fpd), $$ then for any left DG\+module $M_C$ over $C$ the natural morphism $\Ext_C^{I\!I}(L_C,M_C)\rarrow \Ext_C(L_C,M_C)$ is an isomorphism. \par Equivalently, the same conclusion holds if an object $L\in \DD^\co(B\modlc_\fpd)$ belongs to the minimal triangulated subcategory of\/ $\DD^\co(B\modlc_\fpd)$ containing the image of $H^0(B\modlc_\fp)$ and closed under infinite direct sums. \par \textup{(c)} If a left CDG\+module $M\in H^0(B\modlc_\fid)$ belongs to $$ \langle\.\{\Hom_k(F,k\spcheck)\},\Ac^\ctr(B\modlc_\fid)\.\rangle_\sqcap\.\subset\. H^0(B\modlc_\fid), \quad F\in H^0(\modrcfp B), $$ then for any left DG\+module $L_C$ over $C$ the natural morphism $\Ext_C^{I\!I}(L_C,M_C)\rarrow\Ext_C(L_C,M_C)$ is an isomorphism. \par Equivalently, the same conclusion holds if an object $M\in\DD^\ctr(B\modlc_\fid)$ belongs to the minimal triangulated subcategory of\/ $\DD^\ctr(B\modlc_\fid)$ containing the objects $\Hom_k(F,k\spcheck)$, where $F\in H^0(\modrcfp B)$, and closed under infinite products. \end{thmB} \begin{proof} Parts (a-c) follow from the corresponding parts of Proposition~\ref{comparison-subsect}. Indeed, a DG\+module over any small DG\+category $C$ is h\+projec\-tive if and only if it belongs to the minimal triangulated subcategory of $H^0(C\modld)$ containing the representable DG\+modules and closed under infinite direct sums~\cite{Kel,Pkoszul}. Representable left DG\+modules over $C$ correspond to the objects of $B\modlc_\fp$ under the equivalence between the DG\+categories $C\modld$ and $B\modlc$ (see the proof of Theorem~A).
It follows that the DG\+module $L_C\in H^0(C\modld_\fpd)$ belongs to the triangulated subcategory generated by $H^0(C\modld_\prj)_\prj$ and the objects coacyclic with respect to $C\modld_\fpd$ if and only if a CDG\+module $L$ over $B$ belongs to the minimal triangulated subcategory of $H^0(B\modlc_\fpd)$ containing $H^0(B\modlc_\fp)$ and all objects coacyclic with respect to $B\modlc_\fpd$ and closed under infinite direct sums. Similarly, a left DG\+module over a $k$\+linear DG\+category $C$ is h\+injective if and only if it belongs to the minimal triangulated subcategory of $H^0(C\modld)$ containing the DG\+modules $\Hom_k(R_X,k\spcheck)$, where $R_X$ are the representable right DG\+modules over~$C$, and closed under infinite products. Representable right DG\+modules over $C$ correspond to the objects of $\modrcfp B$ under the equivalence between the DG\+categories $\modrd C$ and $\modrc B$. So the DG\+module $M_C\in H^0(C\modld_\fid)$ belongs to the subcategory generated by $H^0(C\modld_\inj)_\inj$ and the objects contraacyclic with respect to $C\modld_\fid$ if and only if a CDG\+module $M$ over $B$ belongs to the minimal triangulated subcategory of $H^0(B\modlc_\fid)$ containing $\Ac^\ctr(B\modlc_\fid)$ and all CDG\+modules $\Hom_k(F,k\spcheck)$ for $F\in H^0(\modrcfp B)$, and closed under infinite products. Finally, a right DG\+module over a DG\+category $C$ is h\+flat whenever it can be obtained from the representable right DG\+modules using the operations of shift, cone, filtered inductive limit, and passage to a homotopy equivalent DG\+module (we do not know whether the converse is true). Indeed, the class of h\+flat DG\+modules is closed under shifts, cones, filtered inductive limits, and homotopy equivalences, since these operations commute with the tensor product of DG\+modules over $C$ and preserve acyclicity of complexes of $k$\+modules.
Thus, if a right CDG\+module $Q$ over $B$ can be obtained from objects of $\modrcfp B$ using the operations of shift, cone, filtered inductive limit, and passage to a homotopy equivalent CDG\+module, then the corresponding DG\+module $Q_C$ over $C$ is h\+flat. The equivalence of the two conditions both in~(b) and in~(c) follows from the same semiorthogonality arguments as in the proof of Proposition~\ref{comparison-subsect}. \end{proof} Now assume that the commutative ring $k$ has finite weak homological dimension and the $\Gamma$\+graded $k$\+modules $B^\#(X,Y)$ are flat for all objects $X$, $Y\in B$. Recall that the DG\+categories of left and right CDG\+modules over $B\ot_k B^\op$ are naturally isomorphic. To any left CDG\+module $L$ and right CDG\+module $N$ over $B$ one can assign the (left) CDG\+module $L\ot_k N$ over the CDG\+category $B\ot_k B^\op$. \begin{thmC} \textup{(a)} If the $\Gamma$\+graded category $B^\#\ot_k B^\#{}^\op$ has finite weak homological dimension and the image of the functor of tensor product \begin{equation} \label{bi-cdg-tensor-product} \ot_k\:H^0(B\modlc_\fp)\times H^0(\modrcfp B)\lrarrow\DD^\co (B\ot_k B^\op\modlc) \end{equation} generates\/ $\DD^\co(B\ot_k B^\op\modlc)$ as a triangulated category with infinite direct sums, then the natural map $HH_*(C,M_C)\rarrow HH_*^{I\!I}(C,M_C)$ is an isomorphism for any DG\+module $M_C$ over the DG\+category $C\ot_k C^\op$. \par The same conclusion holds if the $\Gamma$\+graded category $B^\#\ot_k B^\#{}^\op$ has finite weak homological dimension and all objects of $H^0(B\ot_k B^\op\modlc_\fl)$ can be obtained from objects in the image of~\textup{\eqref{bi-cdg-tensor-product}} using the operations of shift, cone, filtered inductive limit, and passage to a homotopy equivalent CDG\+module over $B\ot_k B^\op$. 
\par \textup{(b)} If the $\Gamma$\+graded category $B^\#\ot_k B^\#{}^\op$ has finite left homological dimension and the image of the functor~\textup{\eqref{bi-cdg-tensor-product}} generates\/ $\DD^\co(B\ot_k B^\op\modlc)$ as a triangulated category with infinite direct sums, then the natural map $HH^{I\!I\;*}(C,M_C)\rarrow HH^*(C,M_C)$ is an isomorphism for any DG\+module $M_C$ over the DG\+category $C\ot_k C^\op$. \end{thmC} \begin{proof} This is a particular case of the next Theorem~D. \end{proof} \begin{thmD} \textup{(a)} Suppose that the $\Gamma$\+graded $B^\#\ot_k B^\#{}^\op$\+module $B^\#$ has finite flat dimension and there exists a CDG\+module $$ Q\in Z^0(\modrcfl B\ot_k B^\op)\cap \langle\{F\ot_k G\}\rangle_\cup, \quad F\in H^0(\modrcfp B), \ G\in H^0(B\modlc_\fp), $$ and a closed morphism $Q\rarrow B$ of CDG\+modules over $B\ot_k B^\op$ with a cone in $\Ac^\co(\modrcffd B\ot_k B^\op)$. Then the natural map $HH_*(C,M_C)\rarrow HH_*^{I\!I}(C,M_C)$ is an isomorphism for any DG\+module $M_C$ over $C\ot_k C^\op$. \par \textup{(b)} Suppose that the $\Gamma$\+graded $B^\#\ot_k B^\#{}^\op$\+module $B^\#$ has finite projective dimension and the CDG\+module $B$ over $B\ot_k B^\op$ belongs to $$ \langle\.\{G\ot_k F\},\Ac^\co(B\ot_k B^\op\modlc_\fpd)\. \rangle_\oplus, \quad F\in H^0(\modrcfp B), \ G\in H^0(B\modlc_\fp). $$ Then the natural map $HH^{I\!I\;*}(C,M_C)\rarrow HH^*(C,M_C)$ is an isomorphism for any DG\+module $M_C$ over $C\ot_k C^\op$. \par Equivalently, the same conclusion holds if the $\Gamma$\+graded $B^\#\ot_k B^\#{}^\op$\+module $B^\#$ has finite projective dimension and the object $B\in\DD^\co(B\ot_k B^\op\modlc_\fpd)$ belongs to the minimal triangulated subcategory of\/ $\DD^\co(B\ot_k B^\op\modlc_\fpd)$, containing the CDG\+modules $G\ot_kF$, where $F\in H^0(\modrcfp B)$ and\/ $G\in H^0(B\modlc_\fp)$, and closed under infinite direct sums.
\end{thmD} \begin{proof} It suffices to notice that CDG\+modules $G\ot_k F$ over $B\ot_k B^\op$ correspond precisely to representable DG\+modules over $C\ot_k C^\op$ under the equivalence of DG\+categories $B\ot_k B^\op\modlc\simeq C\ot_k C^\op\modld$. The rest of the argument is similar to the proof of Theorem~B and is based on Proposition~\ref{comparison-subsect}(a-b). \end{proof} \subsection{Derived tensor product functor} \label{derived-tensor-product-subsect} The following discussion is relevant in connection with the role that the external tensor products of CDG\+modules play in the above Theorems~\ref{comparison-dg-of-cdg}.C--D. Let $k$ be a commutative ring of finite weak homological dimension, and let $B'$ and $B''$ be $k$\+linear CDG\+categories such that the $\Gamma$\+graded $k$\+modules of morphisms in the categories $B'{}^\#$ and $B''{}^\#$ are flat. Consider the functor of tensor product $$ \ot_k\:H^0(B'\modlc)\times H^0(B''\modlc)\lrarrow H^0(B'\ot_k B''\modlc). $$ We would like to construct its left derived functor $$ \ot_k^\L\:\DD^\co(B'\modlc)\times\DD^\co(B''\modlc)\lrarrow \DD^\co(B'\ot_k B''\modlc). $$ Denote by $B'\modlc_\kfl$ the DG\+category of left CDG\+modules $M'$ over $B'$ for which all the $\Gamma$\+graded $k$\+modules $M'{}^\#(X)$ are flat, and similarly for CDG\+modules over~$B''$. Notice that the natural functor from the quotient category of $H^0(B'\modlc_\kfl)$ by its intersection with $\Ac^\co(B'\modlc)$ to the coderived category $\DD^\co(B'\modlc)$ is an equivalence of triangulated categories. Indeed, the construction of~\cite[proof of Theorem~3.6]{Pkoszul} shows that for any left CDG\+module $M'$ over $B'$ there exists a closed morphism $F'\rarrow M'$, where $F'\in H^0(B'\modlc_\kfl)$, with a coacyclic cone. So it remains to use~\cite[Lemma~3.6]{Pkoszul}. Restrict the above functor $\ot_k$ to the subcategory $H^0(B'\modlc_\kfl)$ in the first argument.
Clearly, this restriction factors through the coderived category $\DD^\co(B''\modlc)$ in the second argument. Let us show that it also factors through the coderived category $\DD^\co(B'\modlc)$ in the first argument (cf.~\cite[Lemma~2.7]{Psemi}). Indeed, let $M'$ be an object of $H^0(B'\modlc_\kfl)\cap \Ac^\co(B'\modlc)$ and $M''$ be a left CDG\+module over~$B''$. Choose a CDG\+module $F''\in H^0(B''\modlc_\kfl)$ such that there is a closed morphism $F''\rarrow M''$ with a coacyclic cone. Then the CDG\+module $M'\ot_k F''$ is coacyclic, since $M'$ is coacyclic and $F''$ is $k$\+flat; at the same time, the cone of the morphism $M'\ot_k F''\rarrow M'\ot_k M''$ is coacyclic, since the cone of the morphism $F''\rarrow M''$ is coacyclic and $M'$ is $k$\+flat. Thus, the CDG\+module $M'\ot_k M''$ is also coacyclic. We have constructed the desired derived functor $\ot_k^\L$. Clearly, the same derived functor can be obtained by restricting the functor $\ot_k$ to the subcategory $H^0(B''\modlc_\kfl)$ in the second argument. Analogously, one can construct a derived functor $$ \ot_k^\L\:\DD^\co(B'\modlc_\fpd)\times\DD^\co(B''\modlc_\fpd) \lrarrow\DD^\co(B'\ot_k B''\modlc_\fpd), $$ or the similar functor with modules of finite projective dimension replaced by those of finite flat dimension. All one has to do is to restrict the functor $\ot_k$ to the homotopy category of CDG\+modules whose underlying $\Gamma$\+graded modules satisfy both conditions of $k$\+flat\-ness and finiteness of the projective dimension over $B'$ or $B''$. In these situations one does not even need the condition that the weak homological dimension of~$k$ is finite. However, one has to use the fact that the tensor product over $k$ preserves finiteness of projective/flat dimensions, provided that at least one of the $\Gamma$\+graded modules being multiplied is $k$\+flat. \section{Examples} The purpose of this section is mainly to illustrate the results of Section~3.
Examples of DG\+categories $C$ for which the two kinds of Hochschild (co)homology are known to coincide are exhibited in~\ref{zero-differentials}\+-\ref{dga-koszul}. Examples of CDG\+algebras $B$ such that the two kinds of Hochschild (co)homology can be shown to coincide for the DG\+category of CDG\+modules $C=\modrcfp B$ are considered in~\ref{cdg-koszul}\+-\ref{matrix-factorizations}. Counterexamples are discussed in~\ref{counterexample} and~\ref{direct-sum}. Hochschild (co)homology of matrix factorizations is considered in~\ref{matrix-factorizations}\+-\ref{direct-sum}. \subsection{DG\+category with zero differentials} \label{zero-differentials} Let $C$ be a small $k$\+linear DG\+category such that the differentials in the complexes $C(X,Y)$ vanish for all objects $X$, $Y\in C$. \begin{prop} \textup{(a)} If $N$ is a right DG\+module over $C$ such that the differentials in the complexes $N(X)$ vanish for all objects $X\in C$ and the $\Gamma$\+graded $C^\#$\+module $N^\#$ has finite flat dimension, then the natural morphism $\Tor^C(N,M)\rarrow\Tor^{C,I\!I}(N,M)$ is an isomorphism for any left DG\+module $M$ over~$C$. \par \textup{(b)} If $L$ is a left DG\+module over $C$ such that the differentials in the complexes $L(X)$ vanish for all objects $X\in C$ and the $\Gamma$\+graded $C^\#$\+module $L^\#$ has finite projective dimension, then the natural morphism $\Ext_C^{I\!I}(L,M) \rarrow\Ext_C(L,M)$ is an isomorphism for any left DG\+module $M$ over~$C$. \par \textup{(c)} If $M$ is a left DG\+module over $C$ such that the differentials in the complexes $M(X)$ vanish for all objects $X\in C$ and the $\Gamma$\+graded $C^\#$\+module $M^\#$ has finite injective dimension, then the natural morphism $\Ext_C^{I\!I}(L,M)\rarrow\Ext_C(L,M)$ is an isomorphism for any left DG\+module $L$ over~$C$.
\end{prop} \begin{proof} To prove part~(a), notice that a finite flat left resolution $P_\bu$ of the $\Gamma$\+graded $C^\#$\+module $N^\#$, with every term of it endowed with a zero differential, can be used to compute both kinds of derived functor $\Tor$ that we are interested in. The proofs of parts (b) and (c) are similar. \end{proof} \begin{corA} \textup{(a)} If the $\Gamma$\+graded category $C^\#$ has finite weak homological dimension, then the natural morphism $\Tor^C(N,M)\rarrow\Tor^{C,I\!I}(N,M)$ is an isomorphism for any DG\+modules $N$ and~$M$. \par \textup{(b)} If the $\Gamma$\+graded category $C^\#$ has finite left homological dimension, then the natural morphism $\Ext_C^{I\!I}(L,M)\rarrow\Ext_C(L,M)$ is an isomorphism for any left DG\+modules $L$ and $M$ over $C$. \end{corA} \begin{proof} Any DG\+module over a DG\+category with vanishing differentials is an extension of two DG\+modules with vanishing differentials. Indeed, the kernel and the image of the differential~$d$ on such a DG\+module are DG\+submodules. So it remains to use the fact that both kinds of functors $\Ext$ and $\Tor$ assign distinguished triangles to short exact sequences of DG\+modules in any argument, together with the preceding proposition. Part (b) also follows from the fact that the classes of acyclic and absolutely acyclic left DG\+modules over $C$ coincide under its assumptions; see~\cite{KLN}. \end{proof} \begin{corB} Let $C$ be a DG\+category such that the complexes $C(X,Y)$ are complexes of flat $k$\+modules with zero differentials for all objects $X$, $Y\in C$. \par \textup{(a)} If the $\Gamma$\+graded $C^\#\ot_k C^\#{}^\op$\+module $C^\#$ has finite flat dimension, then the natural morphism of Hochschild homology $HH_*(C,M)\rarrow HH_*^{I\!I}(C,M)$ is an isomorphism for any DG\+module $M$ over $C\ot_k C^\op$.
\par \textup{(b)} If the $\Gamma$\+graded $C^\#\ot_k C^\#{}^\op$\+module $C^\#$ has finite projective dimension, then the natural morphism of Hochschild cohomology $HH^{I\!I\;*}(C,M)\rarrow HH^*(C,M)$ is an isomorphism for any DG\+module $M$ over $C\ot_k C^\op$. \end{corB} \begin{proof} This follows directly from the Proposition. \end{proof} \subsection{Nonpositive DG\+category} \label{nonpositive-subsect} Assume that our grading group $\Gamma$ is isomorphic to $\Z$ and the isomorphism identifies $\boldsymbol{1}$ with~$1$ (see~\ref{grading-group}). Let $C$ be a small $k$\+linear DG\+category. Assume that the complexes of $k$\+modules $C(X,Y)$ are concentrated in nonpositive degrees for all objects $X$, $Y\in C$. Let us call a (left or right) DG\+module $M$ over $C$ \emph{bounded above} if all the complexes of $k$\+modules $M(X)$ are bounded above uniformly, i.~e., there exists an integer~$n$ such that the complexes $M(X)$ are concentrated in degrees~$\le n$ for all~$X$. DG\+modules \emph{bounded below} are defined in a similar way. \begin{propA} \textup{(a)} If a right DG\+module $N$ and a left DG\+module $M$ over $C$ are bounded above, then the natural morphism $\Tor^C(N,M) \rarrow\Tor^{C,I\!I}(N,M)$ is an isomorphism. \par \textup{(b)} If a left DG\+module $L$ over $C$ is bounded above and a left DG\+module $M$ is bounded below, then the natural morphism $\Ext_C^{I\!I}(L,M)\rarrow\Ext_C(L,M)$ is an isomorphism. \end{propA} \begin{proof} The proof is based on the construction of the natural morphisms (\ref{tor-first-second}\+-\ref{ext-first-second}) from~\ref{second-kind-general}. To prove part~(a), notice that there exists a left projective resolution $Q_\bu$ of the DG\+module $N$ in the exact category $Z^0(\modrd C)$ consisting of DG\+modules bounded above with the same constant~$n$ as the DG\+module $N$, and then there is no difference between the two kinds of totalizations of the bicomplex $Q_\bu\ot_C M$. The proof of part~(b) is similar.
\end{proof} \begin{propB} \textup{(a)} If a right DG\+module $N$ over $C$ is bounded above and the graded $C^\#$\+module $N^\#$ has finite flat dimension, then the natural morphism $\Tor^C(N,M)\rarrow\Tor^{C,I\!I}(N,M)$ is an isomorphism for any left DG\+module $M$ over~$C$. \par \textup{(b)} If a left DG\+module $L$ over $C$ is bounded above and the graded $C^\#$\+module $L^\#$ has finite projective dimension, then the natural morphism $\Ext_C^{I\!I}(L,M)\rarrow\Ext_C(L,M)$ is an isomorphism for any left DG\+module $M$ over~$C$. \par \textup{(c)} If a left DG\+module $M$ over $C$ is bounded below and the graded $C^\#$\+module $M^\#$ has finite injective dimension, then the natural morphism $\Ext_C^{I\!I}(L,M)\rarrow\Ext_C(L,M)$ is an isomorphism for any left DG\+module $L$ over~$C$. \end{propB} \begin{proof} Parts~(a-c) follow from the corresponding parts of Proposition~\ref{comparison-subsect}. To prove part~(b), let us choose a finite left resolution $P_\bu$ of the DG\+module $L$ in the abelian category $Z^0(C\modld)$ such that the DG\+modules $P_i$ are bounded above and their underlying graded $C^\#$\+modules are projective. Then the total DG\+module $P$ of $P_\bu$ maps into $L$ with a cone absolutely acyclic with respect to $C\modld_\fpd$, so it suffices to show that $P$ is h\+projective. Indeed, any left DG\+module $P$ over $C$ that is bounded above and projective as a graded $C^\#$\+module is h\+projective. To prove the latter assertion, one can construct by induction on~$n$ an increasing filtration of $P$ by DG\+submodules such that the associated quotient DG\+modules are direct summands of direct sums of representable DG\+modules shifted by the degree determined by the number of the filtration component. The proof of part~(c) is similar up to duality, and to prove part~(a) one has to show that a right DG\+module $Q$ over $C$ that is bounded above and flat as a graded $C^\#$\+module is h\+flat.
This can be done, e.~g., by using (the graded version of) the Govorov--Lazard flat module theorem to construct a filtration similar to the one in the projective case, except that the associated quotient DG\+modules are filtered inductive limits of direct sums of (appropriately shifted) representable DG\+modules. \end{proof} Now assume that the complexes $C(X,Y)$ are complexes of flat $k$\+modules concentrated in nonpositive cohomological degrees. \begin{cor} \textup{(a)} For any DG\+module $M$ over $C\ot_k C^\op$ bounded above, the natural morphism $HH_*(C,M)\rarrow HH_*^{I\!I}(C,M)$ is an isomorphism. If the graded $C^\#\ot_k C^\#{}^\op$\+module $C^\#$ has finite flat dimension, then the latter morphism is an isomorphism for any DG\+module~$M$. \par \textup{(b)} For any DG\+module $M$ over $C\ot_k C^\op$ bounded below, the natural morphism $HH^{I\!I\;*}(C,M)\rarrow HH^*(C,M)$ is an isomorphism. If the graded $C^\#\ot_k C^\#{}^\op$\+module $C^\#$ has finite projective dimension, then the latter morphism is an isomorphism for any DG\+module~$M$. \end{cor} \begin{proof} Apply Propositions A and~B(a-b) to the DG\+category $C\ot_k C^\op$. \end{proof} So the map $HH_*(C)\rarrow HH_*^{I\!I}(C)$ is an isomorphism under our assumptions on~$C$. The map $HH^{I\!I\;*}(C)\rarrow HH^*(C)$ is an isomorphism provided that either the DG\+module $C$ over $C\ot_k C^\op$ is bounded below~\cite[Proposition~3.15]{CT}, or the graded $C^\#\ot_k C^\#{}^\op$\+module $C^\#$ has finite projective dimension. \subsection{Strongly positive DG\+category} As in~\ref{nonpositive-subsect}, we assume that the grading group $\Gamma$ is isomorphic to $\Z$ and the isomorphism identifies $\boldsymbol{1}$ with~$1$. 
Let $k$ be a field and $C$ be a $k$\+linear DG\+category such that the complexes of $k$\+vector spaces $C(X,Y)$ are concentrated in nonnegative degrees for all objects $X$, $Y\in C$, the component $C^1(X,Y)$ vanishes for all $X$ and $Y$, the component $C^0(X,Y)$ vanishes for all nonisomorphic $X$ and $Y$, and the $k$\+algebra $C^0(X,X)$ is semisimple for all~$X$. Here a noncommutative ring is called (classically) semisimple if the abelian category of (left or right) modules over it is semisimple. We keep the terminology from~\ref{nonpositive-subsect} related to bounded DG\+modules. \begin{propA} \textup{(a)} If a right DG\+module $N$ and a left DG\+module $M$ over $C$ are bounded below, then the natural morphism $\Tor^C(N,M)\rarrow\Tor^{C,I\!I}(N,M)$ is an isomorphism. \par \textup{(b)} If a left DG\+module $L$ over $C$ is bounded below and a left DG\+module $M$ is bounded above, then the natural morphism $\Ext_C^{I\!I}(L,M)\rarrow\Ext_C(L,M)$ is an isomorphism. \end{propA} \begin{proof} The proof uses the construction of the natural morphisms (\ref{tor-first-second}--\ref{ext-first-second}) from~\ref{second-kind-general}. To prove part~(a), one can compute both kinds of $\Tor$ in question using the reduced bar-resolution of the DG\+module $N$ over $C$ relative to $C^0$, i.~e., $$ \dsb\lrarrow N\ot_{C^0}C/C^0\ot_{C^0}C/C^0\ot_{C^0}C \lrarrow N\ot_{C^0}C/C^0\ot_{C^0}C\lrarrow N\ot_{C^0}C. $$ Here $C^0$ is considered as a DG\+category with complexes of morphisms concentrated in degree~$0$ and endowed with zero differentials, $C/C^0$ is a DG\+module over $C^0\ot_k C^0{}^\op$, and $C$ is a DG\+module over $C^0\ot_k C^\op$. The semisimplicity condition on $C^0$ guarantees projectivity of right DG\+modules of the form $R\ot_{C^0}C$ as objects of the exact category $Z^0(\modrd C)$ for all right DG\+modules $R$ over~$C^0$.
Due to the positivity/boundedness conditions on $C$, $N$, and $M$, there is no difference between the two kinds of totalizations of the resulting bar-bicomplex. The proof of part~(b) is similar. \end{proof} \begin{propB} \textup{(a)} If a right DG\+module $N$ over $C$ is bounded below and the graded $C^\#$\+module $N^\#$ has finite flat dimension, then the natural morphism $\Tor^C(N,M)\rarrow\Tor^{C,I\!I}(N,M)$ is an isomorphism for any left DG\+module $M$ over~$C$. \par \textup{(b)} If a left DG\+module $L$ over $C$ is bounded below and the graded $C^\#$\+module $L^\#$ has finite projective dimension, then the natural morphism $\Ext_C^{I\!I}(L,M)\rarrow\Ext_C(L,M)$ is an isomorphism for any left DG\+module $M$ over~$C$. \par \textup{(c)} If a left DG\+module $M$ over $C$ is bounded above and the graded $C^\#$\+module $M^\#$ has finite injective dimension, then the morphism $\Ext_C^{I\!I}(L,M)\rarrow\Ext_C(L,M)$ is an isomorphism for any left DG\+module~$L$. \end{propB} \begin{proof} The proof is similar to that of Proposition~\ref{nonpositive-subsect}.B\hbox{}. E.~g., in part~(b) the key is to show that any DG\+module over $C$ that is bounded below and projective as a graded $C^\#$\+module is h\+projective. One constructs an increasing filtration similar to that in~\ref{nonpositive-subsect} with the only difference that the associated quotient DG\+modules are projective objects of the exact category $Z^0(C\modld)$. \end{proof} \begin{cor} \textup{(a)} For any DG\+module $M$ over $C\ot_k C^\op$ bounded below, the natural morphism $HH_*(C,M)\rarrow HH_*^{I\!I}(C,M)$ is an isomorphism. If the graded $C^\#\ot_k C^\#{}^\op$\+module $C^\#$ has finite flat dimension, then the latter morphism is an isomorphism for any DG\+module~$M$. \par \textup{(b)} For any DG\+module $M$ over $C\ot_k C^\op$ bounded above, the natural morphism $HH^{I\!I\;*}(C,M)\rarrow HH^*(C,M)$ is an isomorphism. 
If the graded $C^\#\ot_k C^\#{}^\op$\+module $C^\#$ has finite projective dimension, then the latter morphism is an isomorphism for any DG\+module~$M$. \qed \end{cor} So the map $HH_*(C)\rarrow HH_*^{I\!I}(C)$ is an isomorphism under our assumptions on~$C$. The map $HH^{I\!I\;*}(C)\rarrow HH^*(C)$ is an isomorphism provided that either the DG\+module $C$ over $C\ot_k C^\op$ is bounded above, or the graded $C^\#\ot_k C^\#{}^\op$\+module $C^\#$ has finite projective dimension. \subsection{Cofibrant DG\+category} \label{cofibrant-subsect} A small $k$\+linear DG\+category is called \emph{cofibrant} if it is a retract (in the category of DG\+categories and functors between them) of a DG\+category $k\langle x_{n,\alpha}\rangle$ of the following form. As a $\Gamma$\+graded category, $k\langle x_{n,\alpha}\rangle$ is freely generated by a set of homogeneous morphisms $x_{n,\alpha}$, where $n$ runs over nonnegative integers and $\alpha$~belongs to some set of indices. This means that the morphisms in $k\langle x_{n,\alpha}\rangle$ are the formal $k$\+linear combinations of formal compositions of the morphisms~$x_{n,\alpha}$. It is additionally required that the element $dx_{n,\alpha}$ belongs to the class of morphisms multiplicatively and additively generated by the morphisms $x_{m,\beta}$ with $m<n$. The cofibrant DG\+categories are exactly (up to the zero object issue) the cofibrant objects in the model category structure constructed in~\cite{Tab} (see also~\cite{Toen}). The following lemmas will be used in conjunction with the results of~\ref{comparison-subsect} in order to prove comparison results for the two kinds of $\Ext$, $\Tor$, and Hochschild (co)homology for cofibrant DG\+categories. \begin{lemA} Let $D$ be a DG\+category of the form $k\langle x_{n,\alpha}\rangle$ as above. 
\par \textup{(a)} If a right DG\+module $N$ over $D$ is such that all the complexes of $k$\+modules $N(X)$ are h\+flat complexes of flat $k$\+modules, then there exists a closed morphism $Q\rarrow N$, where $Q\in H^0(\modrdfl D)_\fl$, with a cone absolutely acyclic with respect to $\modrdffd D$. \par \textup{(b)} If a left DG\+module $L$ over $D$ is such that all the complexes of $k$\+modules $L(X)$ are h\+projective complexes of projective $k$\+modules, then there exists a closed morphism $P\rarrow L$, where $P\in H^0(D\modld_\prj)_\prj$, with a cone absolutely acyclic with respect to $D\modld_\fpd$. \par \textup{(c)} If a left DG\+module $M$ over $D$ is such that all the complexes of $k$\+modules $M(X)$ are h\+injective complexes of injective $k$\+modules, then there is a closed morphism $M\rarrow J$, where $J\in H^0(D\modld_\inj)_\inj$, with a cone absolutely acyclic with respect to $D\modld_\fid$. \end{lemA} \begin{lemB} \textup{(a)} If $C$ is a cofibrant $k$\+linear DG\+category and the ring $k$ has finite weak homological dimension, then the weak homological dimension of the $\Gamma$\+graded category $C^\#$ is also finite. Moreover, the categories $H^0(\modrdfl C)$ and $H^0(\modrdfl C)_\fl$ coincide in this case. \par \textup{(b)} If $C$ is a cofibrant $k$\+linear DG\+category and the ring $k$ has finite homological dimension, then the left homological dimension of the $\Gamma$\+graded category $C^\#$ is finite. Moreover, the classes of acyclic and absolutely acyclic left DG\+modules over $C$ coincide in this case. \end{lemB} \begin{proof}[Proof of Lemmas A and~B] Let us first prove parts~(b) of both lemmas. The following arguments generalize the proof of \cite[Theorem~9.4]{Pkoszul} to the DG\+category case. For any objects $X$, $Y\in D$ denote by $V(X,Y)$ the free $\Gamma$\+graded $k$\+module spanned by those elements $x_{n,\alpha}$ that belong to $D(X,Y)$. 
Consider the short exact sequence of $\Gamma$\+graded $D^\#$\+modules $$\textstyle \bigoplus_{Y,Z\in D} D(X,Y)\ot_k V(Y,Z)\ot_k L(Z)\lrarrow \bigoplus_{Y\in D} D(X,Y)\ot_k L(Y)\lrarrow L(X). $$ The middle and right terms are endowed with DG\+module structures, so the left term also acquires such a structure. There is a natural increasing filtration on the left term induced by the filtration of $V$ related to the indices $n$ of the generators $x_{n,\alpha}$. It is a filtration by DG\+submodules and the differentials on the associated quotient modules are the differentials on the tensor product induced by the differentials on the factors $D$ and $L$ (as is the differential on the middle term). It follows that whenever all the complexes of $k$\+modules $L(X)$ are coacyclic (absolutely acyclic), both the middle and the left terms of the exact sequence are coacyclic (absolutely acyclic) DG\+modules, so $L$ is also a coacyclic (absolutely acyclic) DG\+module. In particular, if the homological dimension of $k$ is finite and $L$ is acyclic, then it is absolutely acyclic. Furthermore, when all the complexes $L(X)$ are h\+projective complexes of projective $k$\+modules, both the middle and the left terms belong to $H^0(D\modld_\prj)_\prj$. So it suffices to take the cone of the left arrow as the DG\+module~$P$. It also follows from the same exact sequence considered as an exact sequence of $\Gamma$\+graded $D^\#$\+modules that the $\Gamma$\+graded $D^\#$\+module $L^\#$ has projective dimension at most~$1$ whenever all $L^\#(X)$ are projective $\Gamma$\+graded $k$\+modules. Since for any projective $\Gamma$\+graded $D^\#$\+module $F^\#$ the $\Gamma$\+graded $k$\+modules $F^\#(X)$ are projective, the left homological dimension of $D^\#$ can exceed the homological dimension of~$k$ by at most~$1$. 
Since $\Ext$ and $\Tor$ over $\Gamma$\+graded categories are functorial with respect to $\Gamma$\+graded functors, the (weak, left, or right) homological dimension of a retract $C^\#$ of a $\Gamma$\+graded category $D^\#$ does not exceed that of~$D^\#$. To prove the second assertion of Lemma~B(b) for a retract $C$ of a DG\+category $D$ as above, consider DG\+functors $I\:C\rarrow D$ and $\Pi\:D\rarrow C$ such that $\Pi I=\Id_C$. Let $M$ be an acyclic DG\+module over $C$; then the DG\+module $\Pi^*M$ over $D$ is acyclic, hence absolutely acyclic, and it follows that $M=I^*\Pi^*M$ is also absolutely acyclic. It remains to prove the second assertion of Lemma~B(a). If the underlying $\Gamma$\+graded $D^\#$\+module of a right DG\+module $N$ over $D$ is flat, then the above exact sequence remains exact after taking the tensor product with $N$ over~$D$. Besides, the $\Gamma$\+graded $k$\+modules $N^\#(X)$ are flat, since the $\Gamma$\+graded $k$\+modules $D^\#(X,Y)$ are. If the weak homological dimension of~$k$ is finite, it follows that the complexes of $k$\+modules $N(X)$ are h\+flat. Now if the complexes of $k$\+modules $L(X)$ are acyclic, then the tensor products of the left and the middle terms with $N$ over $D$ are acyclic, hence the complex $N\ot_D L$ is also acyclic. Finally, let us deduce the same assertion for a retract $C$ of the DG\+category~$D$. For this purpose, notice that for any DG\+functor $F\:C\rarrow D$ the functor $F^*\:H^0(\modrd D)\rarrow H^0(\modrd C)$ has a left adjoint functor $F_!$ given by the rule $F_!(N) = N\ot_CD$. In other words, the DG\+module $F_!(N)$ assigns the complex of $k$\+modules $N\ot_C F^*S_X$ to an object $X\in D$, where $S_X$ is the left (covariant) representable DG\+module over $D$ corresponding to~$X$. 
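Spelled out, the adjunction just mentioned is the natural isomorphism $$ \Hom_{H^0(\modrd D)}(F_!N,M')\simeq\Hom_{H^0(\modrd C)}(N,F^*M') $$ for a right DG\+module $N$ over $C$ and a right DG\+module $M'$ over~$D$.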
The functor $F_!$ transforms objects of $H^0(\modrdfl C)$ to objects of $H^0(\modrdfl D)$ and h\+flat DG\+modules to h\+flat DG\+modules, since for any right DG\+module $N$ over $C$ and left DG\+module $M$ over $D$ one has $F_!N\ot_D M\simeq N\ot_C F^*M$. Now if $(I,\Pi)$ is our retraction and $N\in H^0(\modrdfl C)$, then $I_!N\in H^0(\modrdfl D)$, hence $I_!N$ is h\+flat, and it follows that $N=\Pi_!I_! N$ is also h\+flat. \end{proof} \begin{corC} Let $C$ be a cofibrant $k$\+linear DG\+category. \par \textup{(a)} Given a right DG\+module $N$ and a left DG\+module $M$ over $C$, the natural morphism $\Tor^C(N,M)\rarrow\Tor^{C,I\!I}(N,M)$ is an isomorphism provided that either all the complexes $N(X)$, or all the complexes $M(X)$ are h\+flat complexes of flat $k$\+modules. When the ring $k$ has finite weak homological dimension, this morphism is an isomorphism for any DG\+modules $N$ and~$M$. \par \textup{(b)} Given two left DG\+modules $L$ and $M$ over $C$, the natural morphism $\Ext_C^{I\!I}(L,M)\allowbreak\rarrow\Ext_C(L,M)$ is an isomorphism provided that either all the complexes $L(X)$ are h\+projective complexes of projective $k$\+modules, or all the complexes $M(X)$ are h\+injective complexes of injective $k$\+modules. When the ring $k$ has finite homological dimension, this morphism is an isomorphism for any DG\+modules $L$ and~$M$. \end{corC} \begin{proof} Since the morphisms (\ref{tor-first-second}--\ref{ext-first-second}) are functorial with respect to DG\+functors $F\:C\rarrow D$, i.~e., make commutative squares with the morphisms (\ref{tor-first-kind-F-star}--\ref{ext-first-kind-F-star}) and (\ref{tor-second-kind-F-star}--\ref{ext-second-kind-F-star}), it suffices to prove the statements of the Corollary for a DG\+category $D=k\langle x_{n,\alpha}\rangle$. Now the first assertions in both (a) and~(b) follow from Lemma~A and Proposition~\ref{comparison-subsect}, while the second ones follow from Lemma~B and the concluding remarks in~\ref{comparison-subsect}. 
\end{proof} \begin{lemD} Let $D$ be a DG\+category of the form $k\langle x_{n,\alpha}\rangle$. Then the $\Gamma$\+graded $D^\#\ot_k D^\#{}^\op$\+module $D^\#$ has projective dimension at most~$1$. There exists an $h$\+projective DG\+module $P$ over $D\ot_k D^\op$ and a closed morphism of DG\+modules $P\rarrow D$ with a cone absolutely acyclic with respect to $D\ot_k D^\op\modld_\fpd$. \end{lemD} \begin{proof} It suffices to consider the short exact sequence \begin{multline*} \textstyle \bigoplus_{Y',Y''\in D} D(X,Y')\ot_k V(Y',Y'')\ot_k D(Y'',Z) \\ \textstyle \lrarrow\bigoplus_{Y\in D} D(X,Y)\ot_k D(Y,Z)\lrarrow D(X,Z) \end{multline*} and argue as above. \end{proof} \begin{corE} Let $C$ be a cofibrant $k$\+linear DG\+category. Then for any DG\+module $M$ over $C\ot_k C^\op$, the natural morphisms of Hochschild (co)homology $HH_*(C,M)\rarrow HH_*^{I\!I}(C,M)$ and $HH^{I\!I\;*}(C,M)\rarrow HH^*(C,M)$ are isomorphisms. \end{corE} \begin{proof} The assertions for a DG\+category $D=k\langle x_{n,\alpha}\rangle$ follow from Lemma~D and Proposition~\ref{comparison-subsect}(a-b). To deduce the same results for a retract $C$ of a DG\+category $D$, use the fact that the comparison morphisms~\eqref{hoch-first-second-kind} make commutative squares with the morphisms \eqref{ho-hoch-second-kind-F-star}, \eqref{coho-hoch-second-kind-F-star} and \eqref{ho-hoch-first-kind-F-star}, \eqref{coho-hoch-first-kind-F-star}. \end{proof} \subsection{DG\+algebra with Koszul filtration} \label{dga-koszul} Let $A$ be a DG\+algebra over a field~$k$ endowed with an increasing filtration $F_iA$, \ $i\ge0$, such that $F_0A=k$, \ $F_iA\cdot F_jA \subset F_{i+j}A$, and $dF_iA \subset F_{i+1}A$. Assume that the associated graded algebra $\gr_FA$ is Koszul (in the grading~$i$ induced by the filtration~$F$) and has finite homological dimension (here we use the Koszulity condition without the assumption of finite-dimensionality of the components of $\gr_FA$, see e.~g.~\cite{PVi}). 
Then one can assign to $A$ a coaugmented CDG\+coalgebra $\CC$ endowed with a finite decreasing filtration~$G$ \cite[Section~6.8]{Pkoszul} (cf.~\ref{cdg-koszul} below). \begin{cor} Assume that the coaugmented coalgebra $\CC$ is conilpotent \textup{(}see~\cite{PVi} or~\cite[Section~6.4]{Pkoszul}\textup{)}. Then for any right DG\+module $N$ and left DG\+module $M$ over $A$ the natural morphism $\Tor^A(N,M)\rarrow\Tor^{A,I\!I}(N,M)$ is an isomorphism. For any left DG\+modules $L$ and $M$ over $A$ the natural morphism $\Ext_A^{I\!I}(L,M)\allowbreak\rarrow\Ext_A(L,M)$ is an isomorphism. For any DG\+module $M$ over $A\ot_k A^\op$, the natural maps of Hochschild (co)homology $HH_*(A,M)\rarrow HH_*^{I\!I}(A,M)$ and $HH^{I\!I\;*}(A,M)\rarrow HH^*(A,M)$ are isomorphisms. \end{cor} \begin{proof} The (left or right) homological dimension of the graded algebra $A^\#$ is finite, since one can compute the spaces $\Ext$ over it using the nonhomogeneous Koszul resolution. By~\cite[Corollary~6.8.2]{Pkoszul}, the classes of acyclic and absolutely acyclic DG\+modules over $A$ coincide. Hence the first two assertions follow from the concluding remarks in~\ref{comparison-subsect}. To prove the last assertion, notice that the DG\+algebra $A\ot_k A^\op$ is endowed with the induced filtration having the same properties as required above of the filtration on~$A$; the corresponding CDG\+coalgebra is naturally identified with $\CC\ot_k \CC^\op$. Since $\CC$ is conilpotent, so is $\CC\ot_k \CC^\op$. Thus, the classes of acyclic and absolutely acyclic DG\+modules over $A\ot_k A^\op$ coincide, too. \end{proof} \subsection{CDG\+algebra with Koszul filtration} \label{cdg-koszul} Let $B=(B,d,h)$ be a CDG\+algebra over a field~$k$ endowed with an increasing filtration $F_iB$, \ $i\ge0$, such that $F_0B=k$, \ $F_iB\cdot F_jB \subset F_{i+j}B$, \ $dF_iB\subset F_{i+1}B$, and $h\in F_2B$. Assume that the associated graded algebra $\gr_FB$ is Koszul and has finite homological dimension. 
Then one can assign to the filtered CDG\+algebra $(B,F)$ a CDG\+coalgebra $\CC$ endowed with a finite decreasing filtration $G$ \cite[Section~6.8]{Pkoszul}. Let $C=\modrcfp B$ be the DG\+category of right CDG\+modules over $B$, projective and finitely generated as $\Gamma$\+graded $B^\#$\+modules. All the results below will also hold for finitely generated free modules in place of finitely generated projective ones. \begin{corA} For any right DG\+module $N$ and left DG\+module $M$ over $C$, the natural map $\Tor^C(N,M)\rarrow\Tor^{C,I\!I}(N,M)$ is an isomorphism. For any left DG\+modules $L$ and $M$ over $C$, the natural map $\Ext_C^{I\!I}(L,M)\rarrow\Ext_C(L,M)$ is an isomorphism. \end{corA} \begin{proof} The homological dimension of the graded algebra $B^\#$ is finite (see~\ref{dga-koszul}). By \cite[Corollary~6.8.1]{Pkoszul}, the coderived category $\DD^\co(B\modlc)$ is generated by $H^0(B\modlc_\fp)$ as a triangulated category with infinite direct sums. Thus, the assertions of the corollary follow from Theorem~\ref{comparison-dg-of-cdg}.A. \end{proof} Let $\CC^\ss$ denote the maximal cosemisimple $\Gamma$\+graded subcoalgebra of the $\Gamma$\+graded coalgebra $\CC$ \cite[Section~5.5]{Pkoszul}. Assume that the differential~$d$ and the curvature linear function~$h$ on $\CC$ annihilate $\CC^\ss$, and the tensor product coalgebra $\CC^\ss\ot_k\CC^{\ss\,\.\op}$ is cosemisimple. The latter condition always holds when the field $k$ is perfect and the grading group $\Gamma$ contains no torsion of the order equal to the characteristic of~$k$. \begin{corB} Under the above assumptions, the natural maps of Hochschild (co)homology $HH_*(C,M)\rarrow HH_*^{I\!I}(C,M)$ and $HH^{I\!I\;*}(C,M)\rarrow HH^*(C,M)$ are isomorphisms for any DG\+module $M$ over the DG\+category $C\ot_k C^\op$. 
\end{corB} \begin{proof} The CDG\+algebra $B\ot_k B^\op$ is endowed with the induced filtration having the same properties; the corresponding CDG\+coalgebra is naturally identified with $\CC\ot_k\CC^\op$. The coderived category of CDG\+modules $\DD^\co(B\modlc)$ is equivalent to the coderived category of CDG\+comodules $\DD^\co(\CC\comodlc)$ \cite[Theorem~6.8]{Pkoszul}. This equivalence transforms the functor of tensor product $$ \ot_k\:\DD^\co(B\modlc)\times\DD^\co(B^\op\modlc)\lrarrow \DD^\co(B\ot_k B^\op\modlc) $$ into the similar functor of tensor product $$ \ot_k\:\DD^\co(\CC\comodlc)\times\DD^\co(\CC^\op\comodlc) \lrarrow\DD^\co(\CC\ot_k\CC^\op\comodlc). $$ When the coalgebra $\CC^\ss\ot_k\CC^{\ss\,\.\op}$ is cosemisimple, any DG\+comodule over it (considered as a DG\+coalgebra with zero differential) can be obtained from tensor products of DG\+comodules over $\CC^\ss$ and $\CC^\ss{}^\op$ using the operations of cone and passage to a direct summand. The coderived category $\DD^\co(\CC\ot_k\CC^\op\comodlc)$ of CDG\+comodules over $\CC\ot_k\CC^\op$ is generated by DG\+comodules over $\CC^\ss\ot_k\CC^{\ss\,\.\op}$ as a triangulated category with infinite direct sums, since the coalgebra without counit $(\CC\ot_k\CC^\op)/(\CC^\ss\ot_k\CC^{\ss\,\.\op})$ is conilpotent~\cite[Section~5.5]{Pkoszul}. Therefore, the conditions of Theorem~\ref{comparison-dg-of-cdg}.C are satisfied for the CDG\+algebra~$B$. \end{proof} \subsection{Noetherian CDG\+ring} \label{noetherian-cdg-rings} Let $B$ be a CDG\+algebra over a commutative ring $k$ and $C=\modrcfp B$ the DG\+category of right CDG\+modules over $B$, projective and finitely generated as $\Gamma$\+graded $B^\#$\+modules. \begin{corA} Assume that the $\Gamma$\+graded ring $B^\#$ is graded left Noetherian and has finite left homological dimension. 
Then \par \textup{(a)} the natural map $\Tor^C(N,M)\rarrow\Tor^{C,I\!I}(N,M)$ is an isomorphism for any right DG\+module $N$ and left DG\+module $M$ over~$C$; \par \textup{(b)} the natural map $\Ext_C^{I\!I}(L,M)\rarrow\Ext_C(L,M)$ is an isomorphism for any left DG\+modules $L$ and $M$ over~$C$. \end{corA} \begin{proof} Notice that for a left Noetherian (graded) ring the weak and left homological dimensions coincide. Whenever the graded ring $B^\#$ is left Noetherian, the coderived category $\DD^\co(B\modlc)$ is compactly generated by CDG\+modules whose underlying $\Gamma$\+graded modules are finitely generated (a result of D.~Arinkin, \cite[Theorem~3.11.2]{Pkoszul}). Assuming additionally that the left homological dimension of $B^\#$ is finite, it follows easily that $\DD^\co(B\modlc)$ is compactly generated by $H^0(B\modlc_\fp)$. (See the beginning of~\ref{comparison-dg-of-cdg} for a brief discussion of compact generation.) It remains to apply Theorem~\ref{comparison-dg-of-cdg}.A(a-b) to deduce the assertions of the corollary. \end{proof} Before formulating our next result, let us define yet another exotic derived category of CDG\+modules. Given a small CDG\+category $D$, the \emph{complete derived category} $\DD^\cmp(D\modlc)$ of left CDG\+modules over $D$ is the quotient category of the homotopy category $H^0(D\modlc)$ by its minimal triangulated subcategory, containing $\Ac^\abs(D\modlc)$ and closed under \emph{both} infinite direct sums and products. CDG\+mod\-ules belonging to the latter subcategory are called \emph{completely acyclic}. Now assume that the ring $k$ has finite weak homological dimension and the $\Gamma$\+graded $k$\+module $B^\#$ is flat. Assume further that the $\Gamma$\+graded ring $B^\#$ is both left and right Noetherian of finite homological dimension, the $\Gamma$\+graded ring $B^\#\ot_k B^\#{}^\op$ is graded Noetherian and the $\Gamma$\+graded module $B^\#$ over $B^\#\ot_k B^\#{}^\op$ has finite projective dimension. 
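Note that, directly from the definitions, absolutely acyclic CDG\+modules are coacyclic and coacyclic CDG\+modules are completely acyclic, so for any small CDG\+category $D$ there is a chain of Verdier quotient functors $$ \DD^\abs(D\modlc)\lrarrow\DD^\co(D\modlc)\lrarrow \DD^\cmp(D\modlc). $$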
\begin{corB} Suppose the CDG\+module $B$ over $B\ot_k B^\op$ belongs to the minimal triangulated subcategory of\/ $\DD^\cmp(B\ot_k B^\op\modlc)$, closed under infinite direct sums and containing all CDG\+modules of the form $L\ot_k N$, where $L$ and $N$ are a left and a right CDG\+module over $B$ and at least one of the $\Gamma$\+graded $k$\+modules $L^\#$ and $N^\#$ is flat. Then for any DG\+module $M$ over $C\ot_k C^\op$ the natural maps $HH_*(C,M)\rarrow HH_*^{I\!I}(C,M)$ and $HH^{I\!I\;*}(C,M) \rarrow HH^*(C,M)$ are isomorphisms. \end{corB} \begin{proof} Let us check the conditions of Theorem~\ref{comparison-dg-of-cdg}.D\hbox{}. In view of \cite[Theorem~3.11.2]{Pkoszul} and the discussion in~\ref{derived-tensor-product-subsect}, the triangulated subcategory with infinite direct sums in $\DD^\cmp(B\ot_k B^\op\modlc)$ generated by the CDG\+modules $L\ot_k N$ with $L$ and $N$ as above coincides with the triangulated subcategory with infinite direct sums generated by the CDG\+modules $G\ot_k F$ with $G\in H^0(B\modlc_\fp)$ and $F\in H^0(\modrcfp B)$. The construction from~\cite[proof of Theorem~3.6]{Pkoszul} shows that there exists a closed morphism from a CDG\+module $P\in H^0(B\ot_k B^\op\modlc_\fp)$ into the CDG\+module $B$ with the cone absolutely acyclic with respect to $B\ot_k B^\op\modlc_\fpd$. The triangulated subcategory with infinite direct sums generated by $H^0(B\ot_k B^\op\modlc_\fp)$ in $H^0(B\ot_k B^\op\modlc)$ is semiorthogonal to all completely acyclic CDG\+modules, and maps fully faithfully to $\DD^\co(B\ot_k B^\op\modlc_\fpd)$ and to $\DD^\cmp(B\ot_k B^\op\modlc)$ \cite{KLN}. So the condition that the object $P$ is generated by the objects $G\ot_k F$ can be equivalently checked in any of these triangulated categories. 
Notice that since the objects of $H^0(B\ot_k B^\op\modlc_\fp)$ are compact in these triangulated categories, it does not matter whether $P$ is generated from $G\ot_k F$ using shifts, cones, and infinite direct sums, or using shifts, cones, and passages to direct summands only. \end{proof} One can drop the assumption that the $\Gamma$\+graded ring $B^\#\ot_k B^\#{}^\op$ is graded Noetherian by replacing the complete derived category $\DD^\cmp(B\ot_k B^\op\modlc)$ with the coderived category $\DD^\co(B\ot_k B^\op\modlc_\fpd)$ in the formulation of Corollary~B\hbox{}. Notice also that when the left homological dimension of $B^\#\ot_k B^\#{}^\op$ is finite, all the exotic derived categories $\DD^\cmp(B\ot_k B^\op\modlc)$, \ $\DD^\abs(B\ot_k B^\op\modlc)$, \ $\DD^\co(B\ot_k B^\op\modlc_\fpd)$, etc.\ coincide~\cite[Theorem~3.6(a)]{Pkoszul}. \subsection{Matrix factorizations} \label{matrix-factorizations} Set $\Gamma=\Z/2$. Let $R$ be a commutative regular local ring; suppose that $R$ is also an algebra of essentially finite type over its residue field~$k$. Let $w\in R$ be a noninvertible element whose zero locus has an isolated singularity at the closed point of the spectrum of~$R$. Consider the CDG\+algebra $(B,d,h)$ over~$k$, where $B$ is the algebra $R$ placed in degree~$0$, \ $d=0$, and $h=-w$. Let $C=\modrcfp B$ be the corresponding DG\+category of right CDG\+modules; its objects are conventionally called the \emph{matrix factorizations} of~$w$. The computations in~\cite{Seg} and~\cite{Dyck} show that the two kinds of Hochschild (co)ho\-mology for the $k$\+linear DG\+category $C$ are isomorphic. The somewhat stronger assertion that the natural maps $HH_*(C,M)\rarrow HH_*^{I\!I}(C,M)$ and $HH^{I\!I\;*}(C,M) \rarrow HH^*(C,M)$ are isomorphisms for any DG\+module $M$ over $C\ot_k C^\op$ follows from our Corollary~\ref{noetherian-cdg-rings}.B\hbox{}. 
Indeed, according to~\cite[Theorem~4.1 and the discussion in Section~6.1]{Dyck} the assumption of the corollary is satisfied in this case. More generally, let $X$ be a smooth affine variety over a field~$k$ and $R$ be the $k$\+algebra of regular functions on~$X$. Let $w\in R$ be such a function; consider the CDG\+algebra $(B,d,h)$ constructed from $R$ and~$w$ as above. Let $C=\modrcfp B$ be the DG\+category of right CDG\+modules over~$B$, projective and finitely generated as $\Gamma$\+graded $B^\#$\+modules. \begin{corA} Assume that the morphism $w\:X\setminus\{w=0\}\rarrow \mathbb A^1_k$ from the open complement of the zero locus of~$w$ in $X$ to the affine line is smooth. Assume moreover that either \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item there exists a smooth closed subscheme $Z\subset X$ such that $w\:X\setminus Z\rarrow\mathbb A^1_k$ is a smooth morphism and $w|_Z=0$, or \item the field $k$~is perfect. \end{enumerate} Then the natural maps $HH_*(C,M)\rarrow HH_*^{I\!I}(C,M)$ and $HH^{I\!I\;*}(C,M)\rarrow HH^*(C,M)$ are isomorphisms for any DG\+module $M$ over $C\ot_k C^\op$. \end{corA} \begin{proof} The proof is based on Corollary~\ref{noetherian-cdg-rings}.B\hbox{}, Orlov's theorem connecting matrix factorizations with the triangulated categories of singularities~\cite{Or}, and some observations from the paper~\cite{LP}. We will show that all objects of $H^0(B\ot_k B^\op\modlc_\fp)$ can be obtained from the objects $G\ot_k F$ with $G\in H^0(B\modlc_\fp)$ and $F\in H^0(\modrcfp B)$ using the operations of cone and passage to a direct summand. By Orlov's theorem, the triangulated categories $H^0(B\modlc_\fp)$ and $H^0(\modrcfp B)$ can be identified with the triangulated category $\DD_\Sing^\b(X_0)$ of singularities of the zero locus $X_0\subset X$ of the function~$w$. 
Similarly, the triangulated category $H^0(B\ot_k B^\op\modlc_\fp)$ is identified with the triangulated category $\DD_\Sing^\b(Y_0)$ of singularities of the zero locus $Y_0\subset X\times_k X$ of the function $w\times 1-1\times w$ on the Cartesian product $X\times_k X$. \begin{lemB} The equivalences of categories $H^0(B\modlc_\fp)\simeq\DD_\Sing^\b (X_0)\simeq H^0(\modrcfp\allowbreak B)$ and $H^0(B\ot_k B^\op\modlc_\fp) \simeq \DD_\Sing^\b(Y_0)$ transform the external tensor product functor $H^0(B\modlc_\fp)\times H^0(\modrcfp B)\rarrow H^0(B\ot_k B^\op\modlc_\fp)$ into the functor $\DD^\b_\Sing(X_0)\times\DD^\b_\Sing(X_0)\rarrow\DD^\b_\Sing(Y_0)$ induced by the composition of the external tensor product of coherent sheaves on two copies of $X_0$ and the direct image under the closed embedding $X_0\times_k X_0\rarrow Y_0$. \end{lemB} \begin{proof} Rather than checking the assertion of the lemma for Orlov's cokernel functor $\Sigma\:H^0(B\modlc_\fp) \rarrow \DD_\Sing^\b(X_0)$, one can use the construction of the inverse functor $\Upsilon\:\DD_\Sing^\b (X_0)\rarrow H^0(B\modlc_\fp)$ given in~\cite{Porl}, for which the desired compatibility is easy to establish. Alternatively, it suffices to use the result of~\cite[Lemma~2.18]{LP}. Let $\DD^\abs(B\modlc_\fg)$ denote the absolute derived category of left CDG\+modules over $B$ whose underlying $\Gamma$\+graded $B^\#$\+modules are finitely generated; the notation $\DD^\abs(\modrcfg B)$ for right CDG\+modules will have the similar meaning. Then the external tensor product of finitely generated CDG\+modules induces a functor $\DD^\abs(B\modlc_\fg)\times\DD^\abs(\modrcfg B) \rarrow\DD^\abs(B\ot_k B^\op\modlc_\fg)$; furthermore, the natural functor $H^0(B\modlc_\fp)\rarrow\DD^\abs(B\modlc_\fg)$ is an equivalence of categories, since $B^\#$ is Noetherian of finite homological dimension. 
{\hfuzz=4.5pt\par} Let $M\in H^0(B\modlc_\fp)$; the direct image of the corresponding coherent sheaf $\Sigma(M)$ on $X_0$ under the closed embedding $X_0\rarrow X$ can be viewed as an object of $\DD^\abs(B\modlc_\fg)$. It is clear from the above-mentioned lemma from~\cite{LP} that this object is naturally isomorphic to the image of $M$ in $\DD^\abs(B\modlc_\fg)$. Let $N\in H^0(\modrcfp B)$; then the coherent sheaf $\Sigma(N)$ on $X_0$, viewed as an object of $\DD^\abs(\modrcfg B)$, is isomorphic to $N$. Since the external tensor product is well-defined on the absolute derived categories of finitely generated CDG\+modules, it follows that the coherent sheaf $\Sigma(M)\bt_k\Sigma(N)$ on $X_0\times_k X_0$, viewed as an object of $\DD^\abs(B\ot_k B^\op\modlc_\fg)$, is isomorphic to $M\ot_k N$. Applying the same lemma from~\cite{LP} again, we conclude that $\Sigma(M\ot_kN)\simeq\Sigma(M)\bt_k\Sigma(N)$ in $\DD^\b_\Sing(Y_0)$. The assertion of Lemma~B is proven. \end{proof} Now we can finish the proof of the corollary. Recall that in the case~(a) we have a closed subvariety $Z\subset X_0$; in the case~(b), let $Z=X_0$ (or any closed subvariety of $X_0$ such that the morphism $w\:X\setminus Z \rarrow\mathbb A^1_k$ is smooth). It suffices to show that the external tensor products of coherent sheaves on two copies of $Z$, considered as coherent sheaves on $Y_0$, generate the triangulated category of singularities of~$Y_0$. The open complement to $Z\times Z$ in $Y_0$ is a smooth variety. Indeed, we have $Y_0=X\times_{\mathbb A^1_k}X$. The complement to $Z\times Z$ in $Y_0$ is covered by its open subschemes $(X\setminus Z)\times_{\mathbb A^1_k}X$ and $X\times_{\mathbb A^1_k}(X\setminus Z)$, which are both smooth, since $X$ is smooth over~$k$ and $X\setminus Z$ is smooth over~$\mathbb A^1_k$. 
By~\cite[Proposition~2.7]{Or2} (see also~\cite[Theorem~3.5]{LP} and~\cite[Theorem~2.1.5 and/or Lemma~2.6]{Neem}), it follows that the triangulated category of singularities of $Y_0$ is generated by coherent sheaves on $Z\times_k Z$. It remains to show that the derived category of coherent sheaves on $Z\times_k Z$ is generated by the external tensor products of coherent sheaves on the Cartesian factors. This is true, at least, (a)~for any smooth affine scheme $Z$ of finite type over a field~$k$, and (b)~for any affine scheme $Z$ of finite type over a perfect field~$k$ (the affineness assumption can be weakened, of course). The case~(a) is clear, since $Z\times_k Z$ is (affine and) regular of finite Krull dimension. In the case~(b), any reduced scheme of finite type over~$k$ contains an open dense smooth subscheme~\cite[Corollaire~(17.15.13)]{EGAIV4}, and~\cite[(proof of) Theorem~3.7]{LP} applies. When $X$ contains connected components on which $w$~is identically zero, Orlov's theorem is not applicable. On such components, one has to consider the $\Z/2$\+graded derived category of coherent sheaves in place of the triangulated category of singularities of the zero locus. Otherwise, the above argument remains unchanged. \end{proof} \subsection{Trivial counterexample} \label{counterexample} The two kinds of Hochschild (co)homology of DG\+algebras cannot be always isomorphic for very general reasons. The Hochschild homology and cohomology of the first kind $HH_*(A)$ and $HH^*(A)$ of a DG\+algebra $A$ are invariant with respect to quasi-isomorphisms of DG\+algebras (see~\ref{hochschild-subsect}). On the other hand, the Hochschild homology and cohomology of the second kind $HH_*^{I\!I}(A)$ and $HH^{I\!I\;*}(A)$ are invariant with respect to isomorphisms of DG\+algebras in the category of CDG\+algebras (since the Hochschild (co)homology of the second kind are generally functorial with respect to CDG\+functors; see~\ref{second-kind-general}\+-\ref{hochschild-subsect}). 
These are two incompatible types of invariance properties; indeed, any two DG\+alge\-bras over a field can be connected by a chain of transformations some of which are quasi-isomorphisms and the others are CDG\+isomorphisms~\cite[Examples~9.4]{Pkoszul}. Here is a rather trivial example of a CDG\+algebra $B$ over a field~$k$ such that for the corresponding DG\+category $C=\modrcfp B$ the two kinds of Hochschild (co)homology are different. This example also shows that one cannot drop the conditions on the differential~$d$ and curvature~$h$ of the CDG\+coalgebra $\CC$ in Corollary~\ref{cdg-koszul}.B, nor can one drop the condition on the critical values of the potential~$w$ in Corollary~\ref{matrix-factorizations}.A. Set $\Gamma=\Z/2$ and $(B,d,h)=(k,0,1)$. So $B$ is the $k$\+algebra $k$ placed in the grading $0\bmod 2$ and endowed with the zero differential and a nonzero curvature element. Then any CDG\+module over $B$ is contractible: the square of its differential equals the action of $h=1$, so either one of the two components of the differential is a contracting homotopy. So the DG\+category $C$ is quasi-equivalent to zero, hence $HH_*(C)=0=HH^*(C)$. On the other hand, the CDG\+algebra $B\ot_k B^\op$ is simply the $\Z/2$\+graded $k$\+algebra $k$ with the zero differential and curvature. The CDG\+module $B$ over $B\ot_k B^\op$ is the $\Z/2$\+graded $k$\+module $k$ concentrated in degree $0\bmod 2$. So $\Tor^{B\ot_k B^\op}(B,B)\simeq\Tor^{B\ot_k B^\op,I\!I}(B,B) \simeq k\simeq\Ext_{B\ot_k B^\op}^{I\!I}(B,B)\simeq \Ext_{B\ot_k B^\op}(B,B)$. We conclude that $HH_*^{I\!I}(B)\simeq k\simeq HH^{I\!I\;*}(B)$ and, by the isomorphisms~\eqref{hoch-B-C-isomorphisms} from~\ref{dg-of-cdg-subsect}, $HH_*^{I\!I}(C)\simeq k\simeq HH^{I\!I\;*}(C)$. \subsection{Direct sum formula} \label{direct-sum} Set $\Gamma=\Z/2$. Let $k$ be a commutative ring and $B$ be a small $k$\+linear CDG\+category such that all $\Gamma$\+graded $k$\+modules of morphisms in $B^\#$ are $k$\+flat. 
Given a constant $c\in k$, denote by $B_{(c)}$ the $k$\+linear CDG\+category obtained from the CDG\+category $B$ by adding~$c$ to all the curvature elements in~$B$. Then there is a natural (strict) isomorphism $B_{(c)}\ot_k B_{(c)}^\op\simeq B\ot_k B^\op$, hence a natural isomorphism between the DG\+categories of CDG\+modules over $B_{(c)}\ot_k B_{(c)}^\op$ and $B\ot_k B^\op$. This isomorphism transforms the diagonal CDG\+bimodule $B_{(c)}$ over $B_{(c)}\ot_k B_{(c)}^\op$ to the diagonal CDG\+bimodule $B$ over $B\ot_k B^\op$. Therefore, we have natural isomorphisms \begin{equation} HH_*^{I\!I}(B_{(c)})\simeq HH_*^{I\!I}(B) \quad\textup{and}\quad HH^{I\!I\;*}(B_{(c)})\simeq HH^{I\!I\;*}(B), \end{equation} and consequently similar isomorphisms for the Hochschild (co)homology of the second kind of the DG\+categories $C=\modrcfp B$ and $C_{(c)}=\modrcfp B_{(c)}$. Hochschild (co)homology of the first kind of the DG\+categories $C$ and $C_{(c)}$ are \emph{not} isomorphic in general (and in fact can be entirely unrelated, as the following example illustrates). Let $k$~be an algebraically closed field of characteristic~$0$, \ $X$ be a smooth affine variety over~$k$, and $w$~be a regular function on~$X$. Let $B$ be the CDG\+algebra associated with $X$ and~$w$ as in~\ref{matrix-factorizations}. The function~$w$ has a finite number of critical values $c_i\in k$. When $c$~is not a critical value, the Hochschild (co)homology of the first kind $HH_*(C_{(c)})$ and $HH^*(C_{(c)})$ vanish, since the category $H^0(C_{(c)})$ does. We have natural maps $$ HH_*(C_{(c_i)})\lrarrow HH_*^{I\!I}(C_{(c_i)})\.\simeq\. HH_*^{I\!I}(C) $$ and $$ HH^{I\!I\;*}(C)\.\simeq\. HH^{I\!I\;*}(C_{(c_i)})\lrarrow HH^*(C_{(c_i)}). $$ \begin{cor} The induced maps \begin{equation} \label{ho-hoch-direct-sum} \textstyle\bigoplus_i HH_*(C_{(c_i)})\lrarrow HH_*^{I\!I}(C) \end{equation} and \begin{equation} \label{coho-hoch-direct-sum} HH^{I\!I\;*}(C)\lrarrow\textstyle\bigoplus_i HH^*(C_{(c_i)})
\end{equation} are isomorphisms. \end{cor} \begin{proof} It follows from the spectral sequence computation for the Hochschild (co)ho\-mology of the second kind in~\cite[proof of Theorem~4.2(b)]{CT} (see also~\cite[Lemma~3.3]{LP}) that $HH_*^{I\!I}(C)$ and $HH^{I\!I\;*}(C)$ decompose into direct sums over the critical values of~$w$. More precisely, let $X_{c_i}\subset X$ denote the closed subscheme defined by the equation $w=c_i$, and let $X_i'\subset X$ denote the open complement to the union of $X_{c_j}$ over all $j\ne i$. Clearly, $X_i'$ is an affine scheme. Let $B_i'$ denote the CDG\+algebra associated with $X'_i$ and~$w$ as above, and let $C_i'=\modrcfp B_i'$ be the related DG\+category of CDG\+modules. Then the natural map $HH^{I\!I}_*(C)\rarrow\bigoplus_i HH^{I\!I}_*(C_i')$ induced by the DG\+functors $C\rarrow C_i'$ is an isomorphism, since the natural map $HH^{I\!I}_*(B)\rarrow \bigoplus_i HH^{I\!I}_*(B_i')$ is. Furthermore, the diagonal CDG\+bimodule $B_i'$ over $B_i'$ can be considered as a CDG\+bimodule over $B$ by means of the strict CDG\+functor $B\rarrow B_i'$, and similarly the diagonal DG\+bimodule $C'_i$ over $C'_i$ can be considered as a DG\+bimodule over~$C$. The CDG\+bimodule $B_i'$ over $B$ corresponds to the DG\+bimodule $C_i'$ over $C$ under the equivalence of DG\+categories $B\ot_k B^\op\modlc\simeq C\ot_k C^\op\modld$ from~\ref{dg-of-cdg-subsect}. The natural maps $HH^{I\!I\;*}(C_i')\rarrow HH^{I\!I\;*}(C,C_i')$ are isomorphisms, since the natural maps $HH^{I\!I\;*}(B_i')\rarrow HH^{I\!I\;*}(B,B_i')$ are; and the map $HH^{I\!I\;*}(C)\rarrow \bigoplus_i HH^{I\!I\;*}(C,C_i')$ is an isomorphism, for a similar reason. On the other hand, Orlov's theorem~\cite{Or} implies that the DG\+functor $C_{(c_i)}\rarrow C'_{i\.(c_i)}$, where $C'_{i\.(c_i)} = \modrcfp B'_{i\.(c_i)}$ and $B'_{i\.(c_i)}$ is the CDG\+algebra associated with the variety $X'_i$ and the function $w-c_i$, is a quasi-equivalence.
Therefore, the natural maps $HH_*(C_{(c_i)})\rarrow HH_*(C'_{i\.(c_i)})$ are isomorphisms, as are the natural maps $HH^*(C_{(c_i)})\rarrow HH^*(C_{(c_i)},C'_{i\.(c_i)})\longleftarrow HH^*(C'_{i\.(c_i)})$. Besides, one has $H^0(C_{i\.(c_j)})=0$, hence $HH_*(C_{i\.(c_j)}) = 0 = HH^*(C_{i\.(c_j)})$, for all $i\ne j$. The isomorphisms~(\ref{ho-hoch-direct-sum}\+-\ref{coho-hoch-direct-sum}) now follow from Corollary~\ref{matrix-factorizations}.A applied to the varieties $X_i'$ with the functions $w-c_i$ on them and commutativity of the natural diagrams. \end{proof} \bigskip
\section{Convex sets in Coxeter complexes} Let $\Sigma$ be an $n$-dimensional spherical Coxeter complex, let $\bar\Sigma$ be a simplicial complex which refines the triangulation of $\Sigma$ and which is invariant under the Coxeter group $W$ and $\pm id$. Examples of such triangulations are $\Sigma$ itself and its barycentric subdivisions. In the geometric realization, the simplices are assumed to be spherical. The \emph{span} of a subset of a sphere is the smallest subsphere containing the set. We assume now that $A\subseteq\bar\Sigma$ is an $m$-dimensional subcomplex whose geometric realization $|A|$ is convex. \begin{Lem} Let $a\in A$ be an $m$-simplex. Then $|A|\subseteq \mathrm{span}|a|$. \par\medskip\rm\emph{Proof. } Assume this is false. Let $u\in|A|\setminus\mathrm{span}|a|$. Then $-u\not\in|a|$ and $Y=\bigcup\{[u,v]\mid v\in|a|\}$ is contained in $|A|$. But $Y$ is a cone over $|a|$ and in particular $m+1$-dimensional, a contradiction. \ \hglue 0pt plus 1filll $\Box$ \end{Lem} We choose an $m$-simplex $a\in A$ and put \[ S=\mathrm{span}|a|\cap|\bar\Sigma|; \] this is an $m$-sphere containing $|A|$. Recall that an $m$-dimensional simplicial complex is called \emph{pure} if every simplex is contained in some $m$-simplex. \begin{Lem} $A$ is pure. \par\medskip\rm\emph{Proof. } It suffices to consider the case $m\geq 1$. Let $a\in A$ be an $m$-simplex, and assume that $b\in A$ is a lonely simplex of maximal dimension $\ell<m$. Then $int(-b)$ is disjoint from $int(a)$. Let $v$ be an interior point of $a$ and $u$ an interior point of $b$ and consider the geodesic segment $[u,v]\subseteq|A|$. If $b$ is a point, the existence of the geodesic shows that $b$ is contained in some higher dimensional simplex, a contradiction. If $\ell\geq 1$, then $[u,v]$ intersects $int(b)$ in more than two points (because $b$ is lonely), so $v$ is in the span of $b$. This contradicts $dim(a)>dim(b)$. 
\ \hglue 0pt plus 1filll $\Box$ \end{Lem} \begin{Lem} \label{OppLem} If there exists an $m$-simplex $a\in A$ with $-a\in A$, then $|A|=S$. \par\medskip\rm\emph{Proof. } Then any point in $S$ lies on some geodesic of length $<\pi$ joining a point in $|a|$ with a point in $|-a|$. \ \hglue 0pt plus 1filll $\Box$ \end{Lem} Topologically, the convex set $|A|$ is either an $m$-sphere or homeomorphic to a closed $m$-ball. For $m\geq 2$, these spaces are strongly connected (i.e. they cannot be separated by $m-2$-dimensional subcomplexes \cite{Alex}). It follows that $A$ is a chamber complex, i.e. the chamber graph $C(A)$ (whose vertices are the $m$-simplices and whose edges are the $m-1$ simplices) is connected \cite{Alex}. If $m=1$, then $|A|$ is a connected graph and hence strongly connected. \begin{Lem} If $m\geq 1$, then $A$ is a chamber complex. \ \hglue 0pt plus 1filll $\Box$ \end{Lem} \section{Results by Balser-Lytchak and Serre} We now assume that $X$ is a simplicial spherical building modeled on the Coxeter complex $\Sigma$. By means of the coordinate charts for the apartments we obtain a metric simplicial complex $\bar X$ refining $X$, which is modeled locally on $\bar\Sigma$. In this refined complex $\bar X$, we call two simplices $a,b$ \emph{opposite} if $a=-b$ in some (whence any) apartment containing both. We let $opp(a)$ denote the collection of all simplices in $\bar X$ opposite $a$. The geometric realization $|\bar X|$ is CAT$(1)$. Furthermore, any geodesic arc is contained in some apartment. We assume that $A\subseteq\bar X$ is an $m$-dimensional subcomplex and that $|A|$ is convex. For any two simplices $a,b\in A$, we can find an apartment $\bar\Sigma$ containing $a$ and $b$. The intersection $|A|\cap|\bar\Sigma|$ is then convex, so we may apply the results of the previous section to it. We note also that $|A|$ is CAT$(1)$. \begin{Lem} $A$ is a pure chamber complex. \par\medskip\rm\emph{Proof. 
} Let $a\in A$ be an $m$-simplex and let $b\in A$ be any simplex. Let $\bar\Sigma$ be an apartment containing $a$ and $b$. Since $|\bar\Sigma|\cap|A|$ is $m$-dimensional and convex, we find an $m$-simplex $c\in A\cap\bar\Sigma$ containing $b$. Similarly we see that $A$ is a chamber complex. \end{Lem} The next results are due to Serre \cite{Serre} and Balser-Lytchak \cite{BL1,BL2}. \begin{Lem} If there is a simplex $a\in A$ with $opp(a)\cap A=\emptyset$, then $|A|$ is contractible. \par\medskip\rm\emph{Proof. } We choose $u$ in the interior of $a$. Then $d(u,v)<\pi$ for all $v\in|A|$, so $|A|$ can be contracted to $u$ along these unique geodesics. \ \hglue 0pt plus 1filll $\Box$ \end{Lem} \begin{Prop} \label{SerreProp} If there is an $m$-simplex $a$ in $A$ with $opp(a)\cap A\neq\emptyset$, then every simplex $a\in A$ has an opposite in $A$. \par\medskip\rm\emph{Proof. } Let $a,b\in A$ be opposite $m$-simplices, let $\bar\Sigma$ be an apartment containing both and let $S\subseteq|\bar\Sigma|$ denote the sphere spanned by $a,b$. Then $S\subseteq|A|$. Let $c$ be any $m$-simplex in $A$. If $c$ is not opposite $a$, we find interior points $u,v$ of $c,a$ with $d(u,v)<\pi$. The geodesic arc $[u,v]$ has a unique extension in $S$. Along this extension, let $w$ be the point with $d(u,w)=\pi$ and let $c'$ be the smallest simplex containing $w$. Then $c'$ is opposite $c$. Thus every $m$-simplex in $A$ has an opposite, and therefore every simplex in $A$ has an opposite. \ \hglue 0pt plus 1filll $\Box$ \end{Prop} In this situation where every simplex has an opposite, $A$ is called \emph{completely reducible}. If every simplex of a fixed dimension $k\leq m$ has an opposite in $A$, then clearly every vertex in $A$ has an opposite. Serre \cite{Serre} observed that the latter already characterizes complete reducibility. \begin{Prop} \label{SerreProp2} If every vertex in $A$ has an opposite, then $A$ is completely reducible. \par\medskip\rm\emph{Proof. 
} We show inductively that $A$ contains a pair of opposite $k$-simplices, for $0\leq k\leq m$. This holds for $k=0$ by assumption, and we are done if $k=m$ by \ref{SerreProp}. So we assume that $0\leq k<m$. Let $a,a'$ be opposite $k$-simplices in $A$ and let $b\in A$ be a vertex which generates together with $a$ a $k+1$-simplex (recall that $A$ is pure, so such a vertex exists). We fix an apartment $\bar\Sigma$ containing $a$, $b$ and $a'$. The geodesic convex closure $Y$ of $b$ and $|a|\cup|a'|$ in the sphere $|\bar\Sigma|$ is a $k+1$-dimensional hemisphere (and is contained in $|A|$). Let $b'\in A$ be a vertex opposite $b$. A small $\varepsilon$-ball in $Y$ about $b$ generates together with $b'$ a $k+1$-sphere $S\subseteq|A|$. Because $\dim S=k+1$, there exists a point $u\in S$ such that the minimal simplex $c$ containing $u$ has dimension at least $k+1$. Let $u'$ be the opposite of $u$ in $S$, and $c'$ the minimal simplex containing $u'$. Then $c,c'$ is a pair of opposite simplices in $A$ of dimensions at least $k+1$. \ \hglue 0pt plus 1filll $\Box$ \end{Prop} \section{Completely reducible subcomplexes are buildings} We assume that $A$ is $m$-dimensional, convex and completely reducible. If $m=0$, then $A$ consists of a set of vertices which have pairwise distance $\pi$. This set is, trivially, a $0$-dimensional spherical building. So we assume now that $1\leq m\leq n$. Two opposite $m$-simplices $a,b\in A$ determine an $m$-sphere $S(a,b)$ which we call a \emph{Levi sphere}. \begin{Lem} If $a,b\in A$ are $m$-simplices, then there is a Levi sphere containing $a$ and $b$. \par\medskip\rm\emph{Proof. } This is true if $b$ is opposite $a$. If $b$ is not opposite $a$, we choose interior points $u\in int(a)$ and $v\in int(b)$, and a simplex $c\in A$ opposite $b$. The geodesic $[u,v]$ has a unique continuation $[v,w]$ in the Levi sphere $S(b,c)$, such that $d(u,w)=\pi$. 
Let $\bar\Sigma$ be an apartment containing the geodesic arc $[u,v]\cup[v,w]$ and let $d$ be the smallest simplex in $\bar\Sigma$ containing $w$. Then $d$ is in $A$ and opposite $a$, so there is a Levi sphere $S(a,d)$ containing $[u,v]\cup[v,w]$. Since $b$ is the smallest simplex containing $v$, it follows that $b\in S(a,d)$. \ \hglue 0pt plus 1filll $\Box$ \end{Lem} Since $A$ is pure, we have the following consequence. \begin{Cor} Any two simplices $a,b\in A$ are in some Levi sphere. \ \hglue 0pt plus 1filll $\Box$ \end{Cor} We call an $m-1$-simplex $b\in A$ \emph{singular} if it is contained in three different $m$-simplices. The following idea is taken from Caprace \cite{Cap}. Two $m$-simplices are \emph{t-equivalent} if there is a path between them in the dual graph which never crosses a singular $m-1$-simplex. The t-class of $a$ is contained in all Levi spheres containing $a$. \begin{Lem} Let $b$ be a singular $m-1$-simplex. Let $S$ be a Levi sphere containing $b$ and let $H\subseteq S$ denote the great $m-1$-sphere spanned by $|b|$. Then $H$ is the union of singular $m-1$-simplices. \par\medskip\rm\emph{Proof. } Let $a$ be an $m$-simplex containing $b$ which is not in $S$ and let $-b$ denote the opposite of $b$ in $S$. Let $S'$ be a Levi sphere containing $a$ and $-b$ and consider the convex hull $Y$ of $|a|\cup|-b|$ in $S'$. Then $Y$ is an $m$-hemisphere. The intersection $Y\cap S$ is convex, contains the great sphere $H$, and is different from $Y$, so $Y\cap S=H$. \ \hglue 0pt plus 1filll $\Box$ \end{Lem} We call $H$ a \emph{singular great sphere}. Along singular great spheres, we can do `surgery': \begin{Lem} Let $S,H,Y$ be as in the previous lemma. Let $Z\subseteq S$ be a hemisphere with boundary $H$. Then $Z\cup Y$ is a Levi sphere. \par\medskip\rm\emph{Proof. } We use the same notation as in the previous lemma. Let $c\subseteq Z$ be an $m$-simplex containing $-b$; then $|c|\cup H$ generates $Z$. Let $S'$ be a Levi sphere containing $c$ and $a$.
Then $Z\cup Y\subseteq S'$ and $Z\cap Y=H$, whence $S'=Z\cup Y$. \ \hglue 0pt plus 1filll $\Box$ \end{Lem} \begin{Lem} Let $S$ be a Levi sphere and let $H,H'\subseteq S$ be singular great spheres. Let $s$ denote the metric reflection of $S$ along $H$. Then $s(H')$ is again a singular great sphere. \par\medskip\rm\emph{Proof. } We use the notation of the previous lemma. Let $b'$ be a singular $m-1$-simplex in $H'\cap Z$. Let $-b'$ denote its opposite in the Levi sphere $S'=Z\cup Y$. We note that the interior of $-b'$ is disjoint from $S$. Let $b''$ be the opposite of $-b'$ in the Levi sphere $S''=(S\setminus Z)\cup Y$. Then $b''$ is a singular $m-1$-simplex in $S$, and $b''$ is precisely the reflection $s(b')$ of $b'$ along $H$. \ \hglue 0pt plus 1filll $\Box$ \end{Lem} For every Levi sphere $S$ we obtain in this way a finite reflection group $W_S$ which permutes the singular great spheres in $S$. As a representation sphere, $S$ may split off a trivial factor $S_0$, the intersection of all singular great spheres in $S$. We let $S_+$ denote its orthogonal complement, $S=S_0*S_+$. The intersections of the singular great spheres with $S_+$ turn $S_+$ into a spherical Coxeter complex, with Coxeter group $W_S$. Let $F\subseteq S$ be a fundamental domain for $W_S$, i.e. $F=C*S_0$, where $C\subseteq S_+$ is a Weyl chamber. The geometric realization of the t-class of any $m$-simplex in $F$ is precisely $F$. \begin{Lem} If two Levi spheres $S,S'$ have an $m$-simplex $a$ in common, then there is a unique isometry $\phi:S\rTo S'$ fixing $S\cap S'$ pointwise. The isometry fixes $S_0$ and maps $W_S$ isomorphically onto $W_{S'}$. \par\medskip\rm\emph{Proof. } The intersection $Y=S\cap S'$ contains the fundamental domain $F$. Since $F$ is relatively open in $S$, there is a unique isometry $\phi:S\rTo S'$ fixing $Y$. The Coxeter group $W_S$ is generated by the reflections along the singular $m-1$-simplices in $Y$. Therefore $\phi$ conjugates $W_S$ onto $W_{S'}$.
Finally, $Y$ contains $S_0$. \ \hglue 0pt plus 1filll $\Box$ \end{Lem} \begin{Cor} If two Levi spheres $S,S'$ have a point $u$ in common, then $S_0=S_0'$. Furthermore, there exists an isometry $\phi:S\rTo S'$ which fixes $S\cap S'$ and which conjugates $W_S$ to $W_{S'}$. \par\medskip\rm\emph{Proof. } Let $a,a'$ be $m$-simplices in $S$ and $S'$ containing $u$, and let $S''$ be a Levi sphere containing $a$ and $a'$. We compose $S\rTo S''\rTo S'$. \end{Cor} \begin{Thm} Let $A$ be completely reducible. Then there is a thick spherical building $Z$ such that $|A|$ is the metric realization of $Z*\SS^0*\cdots*\SS^0$. \par\medskip\rm\emph{Proof. } Let $S$ be a Levi sphere and let $k=\dim S_0+1$. We make $S_0$ into a Coxeter complex with Coxeter group $W_0=(\mathbb{Z}/2)^k$ (we fix an action, this is not canonical). By the previous Corollary, we can transport the simplicial structure on $S$ unambiguously to any Levi sphere in $A$. \ \hglue 0pt plus 1filll $\Box$ \end{Thm} For $A=X$, this is Scharlau's reduction theorem for weak spherical buildings \cite{Scha} \cite{Cap}.
\section{Introduction} The two-Higgs-doublet model (2HDM) is one of the simplest extensions of the SM, only based on the enlargement of the scalar sector by one more doublet. With this small assumption a rich phenomenology is provided, making the model very interesting not only by itself but also as part of some other extensions of the SM. Without any other model-building constraint, the structure of the Yukawa Lagrangian results in \begin{eqnarray} -\mathcal L_Y = \frac{\sqrt{2}}{v} \left\{ \bar{Q}_L' (M_d' \Phi_1 + Y_d' \Phi_2) d_R' + \bar{Q}_L' (M_u' \tilde{\Phi}_1 + Y_u' \tilde{\Phi}_2)u_R' + \mbox{} \bar{L}_L' (M_l' \Phi_1 + Y_l' \Phi_2)l_R'\; +\; \mathrm{h.c.} \right\} \, . \end{eqnarray} where $ \bar{Q}_L'$ and $\bar{L}_L' $ are the left-handed quark and lepton doublets respectively and $f'_R$ ($f=u,d,l$) the right-handed fermions. The scalar fields are defined in the \emph{Higgs basis} \begin{eqnarray*} \Phi_1 = \left[ \begin{array}{c} G^+ \\ \frac{1}{\sqrt{2}} (v+S_1 + i G^0) \end{array} \right] \; , \qquad \Phi_2 = \left[ \begin{array}{c} H^+ \\ \frac{1}{\sqrt{2}} (S_2 + i S_3) \end{array} \right] \; , \end{eqnarray*} where only the first doublet acquires a vacuum expectation value (VEV), $v$, and contains the Goldstone bosons, $G^{\pm}$ and $G^0$. The physical degrees of freedom of the scalars are five: two charged fields, $H^{\pm}$, and three neutrals that need to be rotated to be mass-eigenstates, $\{ S_1, S_2, S_3 \} \rightarrow \{ h, H, A \}$. $\tilde{\Phi}_{1,2}(x)\equiv i \tau_2\Phi_{1,2}^*$ are the charge-conjugated scalar fields and $M'_f$ and $Y'_f$ the corresponding Yukawa matrices. Since each right-handed fermion is coupled to two unrelated matrices that in general cannot be diagonalized simultaneously, dangerous FCNC interactions are automatically generated when going to the mass-eigenstate Lagrangian. To avoid (or suppress) these FCNCs, which are strongly constrained by the experiments, many models have been developed from the general 2HDM.
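The statement that two unrelated Yukawa matrices generically cannot be diagonalized simultaneously is easy to check numerically. The sketch below is purely illustrative (it is not from the paper; the matrices and random seed are arbitrary): it diagonalizes a random ``mass'' matrix by a bi-unitary transformation and shows that an unrelated second matrix stays non-diagonal in that basis, which is the origin of the tree-level FCNCs.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_complex(n):
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Hypothetical 3x3 Yukawa matrices; M plays the role of M'_d, Y of Y'_d.
M = rand_complex(3)
Y = rand_complex(3)   # unrelated to M

# Bi-unitary transformation diagonalizing M (singular value decomposition):
# M = UL @ diag(s) @ URh, so UL^dagger @ M @ UR = diag(s).
UL, s, URh = np.linalg.svd(M)
UR = URh.conj().T

D_M = UL.conj().T @ M @ UR   # diagonal, real and non-negative ("masses")
D_Y = UL.conj().T @ Y @ UR   # generically NOT diagonal in the same basis

off_diagonal = D_Y - np.diag(np.diag(D_Y))
assert np.allclose(D_M, np.diag(s))
assert np.linalg.norm(off_diagonal) > 1e-3   # flavour-changing entries survive
```

The surviving off-diagonal entries of `D_Y` are exactly the couplings that would mediate tree-level FCNCs in the mass-eigenstate Lagrangian.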
A new approach based on the alignment of the Yukawa matrices in flavour space was presented in \cite{Pich:2009sp}. It opens an alternative where FCNCs are absent at tree level and, in addition, the presence of three complex parameters preserves the possibility of having new $CP$ violating sources, which is not possible in other models. \section{The Aligned two-Higgs-doublet model}\label{athdm} The alignment condition in flavour space means \begin{eqnarray} Y'_d = \varsigma_d \; M'_d \; , \qquad Y'_u= \varsigma_u^* \; M'_u \; , \qquad Y'_l = \varsigma_l \; M'_l \; , \end{eqnarray} with $\varsigma_f$ arbitrary complex numbers. These conditions imply that $Y'_{f}$ are not arbitrary anymore but proportional to $M'_f$ so they can be simultaneously diagonalized: $Y_{d,l} = \varsigma_{d,l} M_{d,l}$ and $Y_u = \varsigma_u^* M_u$. In terms of mass-eigenstate fields, the Yukawa Lagrangian takes then the form \begin{eqnarray} -\mathcal L_Y = \frac{\sqrt{2}}{v} H^+ \left\{ \; \bar{u} \left[ \varsigma_d V M_d \mathcal P_R - \varsigma_u M_u V \mathcal P_L \right] d + \varsigma_l \,\bar{\nu} M_l \mathcal P_R l \; \right\} + \frac{1}{v} \,\sum_{\varphi, f} \varphi^0_i y^{\varphi^0_i}_f \; \bar{f}\; M_f \mathcal P_R f \; +\; \mathrm{h.c.} \; , \end{eqnarray} where $V$ is the CKM matrix, $\mathcal P_{R,L}\equiv \frac{1\pm \gamma_5}{2}$ and the neutral couplings $y^{\varphi^0_i}_f$ are given in \cite{Pich:2009sp}. This Lagrangian has the following features: all fermionic couplings are proportional to the mass matrices and the neutral Yukawas are diagonal in flavour. The only flavour-changing structure is the matrix $V$, appearing in the charged current part of the quark sector, like in the SM. There are three new complex parameters, $\varsigma_f$, encoding all the possible freedom allowed by the alignment conditions. 
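One can also verify numerically that the alignment condition removes the flavour-changing entries: with $Y' = \varsigma\, M'$, the same bi-unitary rotation that diagonalizes $M'$ diagonalizes $Y'$. A small illustrative sketch (random matrix and an arbitrary complex $\varsigma$; not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "mass" Yukawa matrix and an aligned partner Y = zeta * M,
# with zeta an arbitrary complex (flavour-blind) alignment parameter.
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
zeta = 0.7 - 0.3j
Y = zeta * M

UL, s, URh = np.linalg.svd(M)          # M = UL @ diag(s) @ URh
D_M = UL.conj().T @ M @ URh.conj().T   # = diag(s): the mass basis
D_Y = UL.conj().T @ Y @ URh.conj().T   # the SAME rotation applied to Y

# Y is diagonal in the mass basis -> no tree-level FCNCs,
# and its diagonal entries are just zeta times the masses.
assert np.allclose(D_Y, zeta * D_M)
assert np.allclose(D_Y - np.diag(np.diag(D_Y)), 0.0, atol=1e-10)
```

This is the numerical counterpart of the statement $Y_{d,l} = \varsigma_{d,l} M_{d,l}$ above: alignment trades the second Yukawa matrix for a single flavour-blind complex number per fermion type.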
They are universal (flavour blind), do not depend on the scalar basis (contrary to the usual $\tan \beta$), recover in some limits all $\mathcal Z_2$-type models, and their phases introduce new sources of $CP$ violation without tree-level FCNCs. This fact represents a counterexample to the very well established idea that the only way of having new $CP$ violation in the electroweak sector of a 2HDM is breaking flavour conservation in neutral current interactions. \subsection{Radiative corrections} The alignment condition is not directly protected by any symmetry; therefore, quantum corrections could induce some misalignment generating small FCNCs, which are suppressed by the corresponding loop factors. Nevertheless, the flavour structure of the A2HDM strongly constrains the possible FCNC interactions. The Lagrangian is invariant under flavour dependent phase transformations of the fermion mass eigenstates ($f=u,d,l,\nu$, $X=L,R$, $\alpha^{\nu,L}_i = \alpha^{l,L}_i$), $f_X^i(x)\,\rightarrow \, e^{i \alpha^{f,X}_i}\, f_X^i(x)$ while $V$ and $M_f$ transform like $V_{ij}\,\rightarrow \,e^{i \alpha^{u,L}_i} V_{ij}\, e^{-i\alpha^{d,L}_j}$ and $M_{f,ij}\, \rightarrow \,e^{i \alpha^{f,L}_i} M_{f,ij}\, e^{-i\alpha^{f,R}_j}$. Due to this symmetry, lepton flavour violation is zero to all orders in perturbation theory, while in the quark sector the $V$ matrix remains the only source of flavour-changing phenomena. The possible FCNC structures, $\bar u_L F_u^{nm} u_R$ and $\bar d_L F_d^{nm} d_R$, are of the type $F_u^{nm} = V (M_d^{\phantom{\dagger}} M_d^\dagger)^n V^\dagger (M_u^{\phantom{\dagger}} M_u^\dagger)^m M_u^{\phantom{\dagger}}$ and $F_d^{nm}=V^\dagger (M_u^{\phantom{\dagger}} M_u^\dagger)^n V (M_d^{\phantom{\dagger}} M_d^\dagger)^m M_d^{\phantom{\dagger}}$,\ or similar structures with additional factors of $V$, $V^\dagger$ and quark mass matrices.
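The rephasing argument can be made concrete: under the phase transformations of the mass eigenstates, a structure such as $F_d^{01}=V^\dagger (M_u^{\phantom{\dagger}} M_u^\dagger) V M_d^{\phantom{\dagger}}$ transforms exactly like $M_d$ itself, which is what makes it an admissible FCNC structure. A quick numerical check of this covariance, with a random unitary stand-in for the CKM matrix and random phases (illustrative only, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

def rand_phases():
    return np.diag(np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n)))

# Toy stand-ins: a random unitary "CKM" matrix and diagonal mass matrices.
V = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))[0]
Mu = np.diag(rng.uniform(1.0, 5.0, n)).astype(complex)
Md = np.diag(rng.uniform(1.0, 5.0, n)).astype(complex)

def F_d01(V, Mu, Md):
    return V.conj().T @ (Mu @ Mu.conj().T) @ V @ Md

# Flavour-dependent rephasings: f_X -> exp(i alpha^{f,X}) f_X.
uL, uR, dL, dR = rand_phases(), rand_phases(), rand_phases(), rand_phases()
V2  = uL @ V  @ dL.conj().T    # V   -> e^{i a^{u,L}} V   e^{-i a^{d,L}}
Mu2 = uL @ Mu @ uR.conj().T    # M_u -> e^{i a^{u,L}} M_u e^{-i a^{u,R}}
Md2 = dL @ Md @ dR.conj().T    # M_d -> e^{i a^{d,L}} M_d e^{-i a^{d,R}}

# F_d^{01} picks up exactly the same phases as M_d.
assert np.allclose(F_d01(V2, Mu2, Md2), dL @ F_d01(V, Mu, Md) @ dR.conj().T)
```

The phases attached to $M_u$ and the interior factors of $V$ cancel pairwise, leaving only the $M_d$-like transformation, as the general argument in the text requires.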
Therefore, at the quantum level the A2HDM provides an explicit implementation of the popular Minimal Flavour Violation (MFV) scenarios \cite{Chivukula:1987py, Hall:1990ac, Buras:2000dm, DAmbrosio:2002ex, Cirigliano:2005ck, Kagan:2009bn, Buras:2010mh, Trott:2010iz}, but allowing at the same time for new $CP$ violating phases. Using the renormalization group equations \cite{Cvetic:1998uw, Ferreira:2010xe, Braeuninger:2010td}, one finds that the one-loop gauge corrections preserve the alignment, and the only FCNC operator is \cite{Jung:2010ik} \begin{eqnarray} \mathcal L^{1Loop}_{\mathrm{FCNC}} &=& \frac{C(\mu)}{4\pi^2 v^3}\; (1+\varsigma_u^*\varsigma_d^{\phantom{*}})\; \sum_i\, \varphi^0_i(x) \times\nonumber\\ &\times& \left\{ (\mathcal{R}_{i2} + i\,\mathcal{R}_{i3})\, (\varsigma_d^{\phantom{*}}-\varsigma_u^{\phantom{*}})\; \left[\bar d_L\, F_d^{01} \, d_R\right] - (\mathcal{R}_{i2} - i\,\mathcal{R}_{i3})\, (\varsigma_d^*-\varsigma_u^*)\; \left[\bar u_L\, F_u^{10} \, u_R\right] \right\} \nonumber \\ &+&\; \mathrm{h.c.} \; , \end{eqnarray} which of course vanishes in all $\mathcal Z_2$-type models. It is suppressed by $m_qm_{q'}/v^3$ and quark-mixing factors, which implies interesting effects in heavy quark systems like $\bar{s}_L b_R$ and $\bar{c}_L t_R$. \section{Phenomenology}\label{pheno} One of the most important features of the A2HDM is the presence of a charged Higgs. In \cite{Jung:2010ik} we analyzed the most relevant flavor-changing processes that are sensitive to charged-scalar exchange and determined the corresponding constraints on the model parameters. We discussed tree-level processes, $\tau \rightarrow \mu/e$, $P^-\rightarrow l^- \nu_l$ and $P\rightarrow P' l^- \bar\nu_l$, where $P$ is a pseudoscalar meson, and loop-induced processes, $\Delta M_{B_s}$, $\epsilon_K$, $Z \rightarrow b \bar b $ and $\bar B \rightarrow X_s \gamma$. Pure leptonic decays give a direct bound on $|\varsigma_l|/M_{H^{\pm}} \leq 0.40$ GeV$^{-1}$ at $95\%$ CL. 
Combining this and the information from the other tree-level processes discussed in \cite{Jung:2010ik}, bounds on the $\varsigma_u \varsigma_l^*/M_{H^{\pm}}^2$ and $\varsigma_d \varsigma_l^*/M_{H^{\pm}}^2$ parameter space were obtained. Figure \ref{global} shows the resulting limits at $95\%$ CL. These limits are rather weak, allowing the model to fit the experiments in a wide range of its parameter space. Thus, the A2HDM results in a more versatile model than other two-Higgs-doublet models. \begin{figure} \begin{center} \includegraphics[width=5.5cm]{figures/zldc.eps} \qquad \includegraphics[width=5.5cm]{figures/zluc.eps} \caption{\it $\varsigma_d\varsigma_l^*/M_{H^{\pm}}^2$ (left) and $\varsigma_u\varsigma_l^*/M_{H^{\pm}}^2$ (right) in the complex plane, in units of $GeV^{-2}$, constrained by leptonic and semileptonic decays. The inner yellow area shows the allowed region at 95\% CL, in the case of $\varsigma_d\varsigma_l^*/M_{H^{\pm}}^2$ using additional information.\label{global}} \end{center} \end{figure} Loop-induced processes offer direct bounds on $\varsigma_u$ and $\varsigma_d$. No significant constraints on $\varsigma_d$ can be found from $\Delta M_{B_s}$, $\epsilon_K$ and $Z \rightarrow b \bar b$, because $\varsigma_u$-terms are enhanced by the top mass in comparison to $\varsigma_d$-terms. However, quite strong bounds on $\varsigma_u$ are found. Figure \ref{epsk} shows the allowed $|\varsigma_u| - M_{H^{\pm}}$ parameter space at 95$\%$ CL for values of $\varphi \in [0,2\pi]$ and $\varsigma_d \in [0,50]$, given by $\epsilon_K$, which is the most constraining. \begin{figure} \centering{ \includegraphics[width=5.5cm]{figures/EpsK.eps} \caption{\it 95\% CL constraints from $\epsilon_K$. \label{epsk}}} \end{figure} $\bar B \rightarrow X_s \gamma$ gives information on both $\varsigma_u$ and $\varsigma_d$. Figure \ref{ud} shows the resulting constraints on the $\varsigma_u - \varsigma_d$ plane for complex (left) and real (right) couplings.
$M_{H^{\pm}}$ is scanned over the range $[80,500]$ GeV while $\varphi \in [0,2\pi]$ in the complex case. The main conclusions coming from this figure are the following: First, large couplings of opposite sign are forbidden in the real case. Second, the role of a non-vanishing relative phase is to allow some parameter space which is excluded in the real case. And finally, the product satisfies $|\varsigma_d \varsigma_u^*| \leq 20$. Again, we find weaker limits than in other models, showing that the A2HDM gives more possibilities to satisfy the experimental constraints. \begin{figure}[tb] \begin{center} \begin{tabular}{cc} \includegraphics[width=5.3cm]{figures/bsgamma_ud.eps} & \includegraphics[width=5.6cm]{figures/bsgamma_reud.eps} \\ \end{tabular} \caption{\label{ud} \it Constraints on $\varsigma_u$ and $\varsigma_d$ from $\bar{B} \rightarrow X_s \gamma$, taking $M_{H^\pm}\in[80,500]$~GeV. The white areas are excluded at $95\%$ CL. The black line corresponds to the upper limit from $\epsilon_K, Z\to\bar{b}b$ on $|\varsigma_u|$. In the left panel, the relative phase has been varied in the range $\varphi \in [0,2\pi]$. The right panel assumes real couplings.} \end{center} \end{figure} \section{$CP$ violation} \label{cp} The $\bar{B}\rightarrow X_s \gamma$ decay is known to be very sensitive to new physics because the SM prediction for the $CP$ rate asymmetry ($a_{CP}$) is tiny. Requiring that the experimental branching ratio should be correctly reproduced (at $95\%$ CL), we find the results shown in figure \ref{acpbsgamma}. We see that the maximal asymmetry is compatible with the experimental measurement at $95\%$ CL within the scale dependence of the prediction. Then, we conclude that the A2HDM prediction for this observable does not give new constraints on the parameter space that are not already given by the branching ratio, although it reaches the experimental bound.
Therefore, a precise measurement of this asymmetry and a more accurate calculation in both SM and 2HDM parts of the branching ratio would be very interesting. \begin{figure} \centering{ \includegraphics[width=9cm]{figures/acpBsgamma.eps} \caption{\it Maximal $a_{CP}$ over the relative phase $\varphi$ at NLO for $M_{H^{\pm}}\in [80,500]$ GeV, $|\varsigma_u|\in [0,2]$ and $|\varsigma_d|\in[0,50]$, taking into account the experimental constraint on the $\bar{B}\rightarrow X_s \gamma$ branching ratio at $95\%$ CL. The three curves correspond to the maximal $a_{CP}$ at $\mu_b=2,2.5,5$ GeV (outer, center and inner respectively); the minimal $a_{CP}$ (black) is always zero, independent of the scale. The dotted (continuous) horizontal lines denote the band of the experimental $a_{CP}$ at $1.96\sigma$ ($1\sigma$). \label{acpbsgamma}}} \end{figure} Some months ago, the D0 experiment measured an enhanced like-sign dimuon charge asymmetry \cite{Abazov:2010hv} in the $B_s$ system incompatible with a purely SM rate. In \cite{Jung:2010ik} we concluded that although the D0 central value is quite unlikely, it is possible to accommodate an enhanced $a_{CP}$ within the A2HDM. Figure \ref{acpbmix} shows how large this enhancement ($a_{CP}^{A2HDM}/a_{CP}^{SM}$) can be, depending on the relative phase $\varphi$, where the bound on the product $|\varsigma_u^* \varsigma_d| < 20$ coming from $\bar{B}\rightarrow X_s \gamma$ has been taken into account. From the plot we see that the asymmetry can be enhanced by up to a factor of 60 compared to the SM. The preferred negative sign of $a_{sl}^s$ constrains $\varphi \in [\pi/2,\pi], [3\pi/2, 2\pi]$. A large asymmetry requires large $|\varsigma_d|$ values and small values for the charged Higgs mass.
\begin{figure} \centering{ \includegraphics[width=8cm]{figures/acpBmix.eps} \caption{\it Dependence of $a_{CP}^{A2HDM}/a_{CP}^{SM}$ on $\varphi$, constraining $|\varsigma_u^* \varsigma_d| \leq 20$, for $M_{H^{\pm}}$, $|\varsigma_u|$ and $\varsigma_d$ scanned in the same ranges as in figure \ref{acpbsgamma}. \label{acpbmix}}} \end{figure} Due to the different structure of the $b\rightarrow s \gamma$ and $B_s^0-\bar{B}_s^0$ amplitudes, the enhanced asymmetry in each case corresponds to different regions in the A2HDM parameter space, giving complementary information on the relative phase. \section{Conclusions} \label{conclusions} The A2HDM provides a powerful realization of a 2HDM with no tree-level FCNCs and three complex parameters $\varsigma_f$. These parameters are new sources of $CP$ violation, flavour blind, scalar basis independent and recover in some limits all the models implemented by discrete $\mathcal Z_2$ symmetries. This parametrization allows for more freedom and satisfies all the experimental constraints. Some misalignment can arise from quantum corrections, but only in the form of MFV structures, which are under control and occur only in the quark sector. Processes involving a charged Higgs give information on the parameters $\varsigma_f$, although they result in weaker limits compared to the usual scenarios with $\mathcal Z_2$, where the freedom introduced by the $\varsigma_f$ phases does not exist. The $CP$ asymmetries generated in $\bar{B}\rightarrow X_s \gamma$ and in the $B_s$ systems within the A2HDM enhance the SM prediction in complementary regions. The predicted asymmetry for $\bar{B}\rightarrow X_s \gamma$ does not give new bounds on the parameter space compared to the branching ratio; regarding this process, a more precise measurement and a complete calculation reducing the theoretical error are essential. On the other hand, if the experiments confirm a large asymmetry in $B_s$, it could be perfectly accommodated in this framework.
\section*{Acknowledgements} This work has been done in collaboration with Martin Jung and Antonio Pich. It has been supported in part by the EU MRTN network FLAVIAnet [Contract No. MRTN-CT-2006-035482] and by MICINN, Spain [FPU No. AP2006-04522, Grants FPA2007-60323 and Consolider-Ingenio 2010 Program CSD2007-00042 --CPAN--]. \section*{References} \bibliographystyle{iopart-num}
\section{Introduction} The space ${\operatorname{Prim}U(\mathfrak{g})}$ of primitive ideals in the universal enveloping algebra of the Lie algebra $\mathfrak{g} := \mathfrak{gl}_N({\mathbb C})$ has an unbelievably rich structure which has been studied intensively since the 1970s. In this article we revisit several of the foundational results about ${\operatorname{Prim}U(\mathfrak{g})}$ from the perspective of the theory of finite $W$-algebras that has been developed in the last few years by Premet \cite{Pslice, Pslodowy, Pfinite, Pabelian, Pnew}, Losev \cite{Lsymplectic, Lclass, LcatO, L1D} and others \cite{BrG,BG,BGK,BKshifted,BKrep,GG,Gin}. This article was inspired by the most recent breakthrough of Premet in \cite{Pnew}, so we start by discussing that in more detail. Given a nilpotent element $e \in \mathfrak{g}$ there is associated a finite $W$-algebra $U(\mathfrak{g},e)$, and Skryabin proved that the category of $U(\mathfrak{g},e)$-modules is equivalent to a certain category of generalized Whittaker modules for $\mathfrak{g}$; see \cite{Pslice,Skryabin}. If $L$ is any irreducible $U(\mathfrak{g},e)$-module, we define $I(L) \in {\operatorname{Prim}U(\mathfrak{g})}$ by applying Skryabin's equivalence of categories to get an irreducible $\mathfrak{g}$-module, then taking the annihilator of that module. Premet's theorem \cite[Theorem B]{Pnew} can be stated for $\mathfrak{g} = \mathfrak{gl}_N({\mathbb C})$ as follows. \begin{Theorem}[Premet]\label{pti} If $L$ is a finite dimensional irreducible $U(\mathfrak{g},e)$-module and $I := I(L) \in {\operatorname{Prim}U(\mathfrak{g})}$, then the Goldie rank of $U(\mathfrak{g}) / I$ is equal to the dimension of $L$. 
\end{Theorem} Premet actually works with the finite $W$-algebra attached to a nilpotent element in an arbitrary reductive Lie algebra, showing in analogous notation in that general context that ${\operatorname{rk}\:} U(\mathfrak{g}) / I$ always {\em divides} $\dim L$, with equality if the Goldie field of $U(\mathfrak{g}) / I$ is isomorphic to the ring of fractions of a Weyl algebra. The fact that this condition for equality is satisfied for all $I \in {\operatorname{Prim}U(\mathfrak{g})}$ when $\mathfrak{g} = \mathfrak{gl}_N({\mathbb C})$ follows from a result of Joseph from \cite[$\S$10.3]{Jkos}. A key step in Joseph's proof involved showing in \cite[$\S$9.1]{Jkos} that the ring of fractions of $U(\mathfrak{g}) / {\operatorname{Ann}}_{U(\mathfrak{g})} M$ is isomorphic to the ring of fractions of $\mathscr L(M,M)$, the $\operatorname{ad}\mathfrak{g}$-locally finite maps from $M$ to itself, for all irreducible highest weight modules $M$. This is the weak form of Kostant's problem; see also \cite[12.13]{Je}. In the same article, Joseph proved an additivity principle for certain Goldie ranks which, when combined with the solution of the weak form of Kostant's problem just mentioned, led Joseph to the discovery of a systematic method for computing the Goldie ranks of all primitive quotients of enveloping algebras in Cartan type $A$; see \cite[$\S$8.1]{Jkos}. Soon afterwards in \cite[$\S$5.1]{Jgoldie}, Joseph worked out a general approach to compute Goldie ranks of primitive quotients in arbitrary Cartan types via his remarkable theory of {Goldie rank polynomials}. These polynomials involve some mysterious constants which even today are only determined explicitly in Cartan type $A$; see the discussion in \cite[$\S$1.5]{JIII} and use \cite[Lemma 5.1]{Jcyclic} to treat Cartan type $A$. 
Much more recently, in \cite[$\S$8.5]{BKrep}, we described a method for computing the dimensions of all finite dimensional irreducible representations of finite $W$-algebras, again only in Cartan type $A$. As should come as no surprise given Theorem~\ref{pti}, these two methods, Joseph's method for computing Goldie ranks in Cartan type $A$ and our method for computing dimensions, reduce after some book-keeping to performing exactly the same computation with Kazhdan-Lusztig polynomials. In the last section of the article, we will use this observation to give another proof of Theorem~\ref{pti}, quite different from Premet's argument in \cite{Pnew} which involves reduction modulo $p$ techniques. Premet's theorem allows several other classical problems about ${\operatorname{Prim}U(\mathfrak{g})}$ to be attacked using finite $W$-algebra techniques. Perhaps our most striking accomplishment along these lines is a new proof of M\oe glin's theorem from \cite{M}, asserting that all completely prime primitive ideals of $U(\mathfrak{g})$ are induced from one dimensional representations of parabolic subalgebras. In the rest of the introduction we will discuss this in more detail and formulate some other results about Goldie ranks of primitive quotients in Cartan type $A$ obtained using the link to finite $W$-algebras. We will also make some other apparently new observations about Joseph's Goldie rank polynomials. Before we give any more details, we introduce some combinatorial language. \begin{itemize} \item A {\em tableau} $A$ is a left-justified array of complex numbers with $\lambda_1$ entries in the bottom row, $\lambda_2$ entries in the next row up, and so on, for some partition $\lambda = (\lambda_1 \geq \lambda_2 \geq \cdots)$ of $N$; we refer to $\lambda$ as the {\em shape} of $A$. \item Two tableaux $A$ and $B$ are {\em row-equivalent}, denoted $A \sim B$, if one can be obtained from the other by permuting entries within rows. 
\item A tableau is {\em column-strict} if its entries are strictly increasing from bottom to top within each column with respect to the partial order $\geq$ on ${\mathbb C}$ defined by $a \geq b$ if $a-b \in {\mathbb Z}_{\geq 0}$. \item A tableau is {\em column-connected} if every entry in every row apart from the bottom row is one more than the entry immediately below it. \item A tableau is {\em column-separated} if it is column-strict and no two of its columns are linked, where we say that two columns are {\em linked} if the sets $I$ and $J$ of entries from the two columns satisfy the following: \begin{itemize} \item[$\circ$] if $|I| > |J|$ then $i > j > i'$ for some $i,i' \in I \setminus J$ and $j \in J \setminus I$; \item[$\circ$] if $|I| < |J|$ then $j' < i < j$ for some $i \in I \setminus J$ and $j,j' \in J \setminus I$; \item[$\circ$] if $|I| = |J|$ then either $i > j > i' > j'$ or $i' < j' < i < j$ for some $i,i' \in I \setminus J$ and $j,j' \in J \setminus I$. \end{itemize} \item A tableau is {\em standard} if its entries are $1,\dots,N$ and they increase from bottom to top in each column and from left to right in each row. \end{itemize} Now go back to the Lie algebra $\mathfrak{g} = \mathfrak{gl}_N({\mathbb C})$. Let $\mathfrak{t}$ and $\mathfrak{b}$ be the usual choices of Cartan and Borel subalgebras consisting of diagonal and upper triangular matrices in $\mathfrak{g}$, respectively. Let $W := S_N$ be the Weyl group of $\mathfrak{g}$ with respect to $\mathfrak{t}$, identified with the group of all permutation matrices in $G := GL_N({\mathbb C})$. Let $\ell$ be the usual length function and $w_0 \in W$ be the longest element. Let ${\varepsilon}_1,\dots,{\varepsilon}_N \in \mathfrak{t}^*$ be the dual basis to the basis $x_1,\dots,x_N\in\mathfrak{t}$ given by the diagonal matrix units. 
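The linkage condition in the definition of column-separated tableaux above is intricate, so it may help to see it spelled out algorithmically. The following is a minimal Python sketch, assuming integer entries (so the partial order $\geq$ reduces to the usual order) and columns that are already strictly increasing; the function names are our own, purely for illustration.

```python
def linked(I, J):
    """Test whether two columns with entry sets I and J are linked,
    following the three cases in the definition (integer entries)."""
    A = sorted(set(I) - set(J))   # entries of the first column only
    B = sorted(set(J) - set(I))   # entries of the second column only
    if len(I) > len(J):
        return any(i > j > i2 for i in A for i2 in A for j in B)
    if len(I) < len(J):
        return any(j2 < i < j for i in A for j in B for j2 in B)
    # equal lengths: pattern i > j > i' > j' or i' < j' < i < j
    return any(i > j > i2 > j2 or i2 < j2 < i < j
               for i in A for i2 in A for j in B for j2 in B)

def column_separated(columns):
    """A column-strict tableau, given by its columns (each listed
    bottom to top), is column-separated if no two columns are linked."""
    return not any(linked(set(columns[r]), set(columns[s]))
                   for r in range(len(columns))
                   for s in range(r + 1, len(columns)))
```

For instance, columns with entry sets $\{1,3\}$ and $\{2,4\}$ interleave and so are linked, while $\{1,2\}$ and $\{3,4\}$ are not.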
Given any tableau $A$, we attach a weight $\gamma(A) \in \mathfrak{t}^*$ by letting $a_1,\dots,a_N \in {\mathbb C}$ be the sequence obtained by reading the entries of $A$ in order down columns starting with the leftmost column, then setting \begin{equation}\label{newgamma} \gamma(A) := \sum_{i=1}^N a_i {\varepsilon}_i. \end{equation} Finally let $\Phi^+$ be the positive roots corresponding to $\mathfrak{b}$ and set \begin{equation}\label{rhodef} \rho := -{\varepsilon}_1-2{\varepsilon}_2-\cdots-N{\varepsilon}_N, \end{equation} which is the usual half-sum of positive roots up to a convenient normalization. Given $\alpha \in \mathfrak{t}^*$, let $L(\alpha)$ denote the {\em irreducible} $\mathfrak{g}$-module generated by a $\mathfrak{b}$-highest weight vector of weight $\alpha - \rho$. By Duflo's theorem \cite{Duflo}, the map \begin{equation*} I:\mathfrak{t}^* \rightarrow {\operatorname{Prim}U(\mathfrak{g})}, \qquad \alpha \mapsto I(\alpha) := {\operatorname{Ann}}_{U(\mathfrak{g})} L(\alpha) \end{equation*} is surjective. In \cite[Th\'eor\`eme 1]{Jduflo} (see also \cite[5.26(1)]{Je}), Joseph described the fibers of this map explicitly via the Robinson-Schensted algorithm, as follows. Take $\alpha \in \mathfrak{t}^*$ and set $a_i := x_i(\alpha)$. Construct a tableau $Q(\alpha)$ by starting from the empty tableau $A_0$, then recursively inserting the numbers $a_1,\dots,a_N$ into the bottom row using the Schensted insertion algorithm. So at the $i$th step we are given a tableau $A_{i-1}$ and need to insert $a_i$ into the bottom row of $A_{i-1}$. If there is no entry $b > a_i$ on this row then we simply add $a_i$ to the end of the row; otherwise we replace the leftmost $b > a_i$ on the row with $a_i$, then repeat the procedure to insert $b$ into the next row up. It is clear from this construction that $Q(\alpha)$ is always row-equivalent to a column-strict tableau. 
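The insertion procedure just described is easy to mechanize. Here is an illustrative Python sketch, assuming integer entries (so the comparison $b > a_i$ is the usual one) and representing a tableau as a list of rows from bottom to top; the function names are ours.

```python
def schensted_insert(tableau, x):
    """Insert x into the bottom row, bumping displaced entries upward,
    exactly as in the Schensted row-insertion algorithm."""
    rows = [list(r) for r in tableau]
    for row in rows:
        bigger = [j for j, b in enumerate(row) if b > x]
        if not bigger:
            row.append(x)        # no entry exceeds x: add x at the end
            return rows
        j = bigger[0]
        row[j], x = x, row[j]    # replace leftmost b > x, bump b upward
    rows.append([x])             # bumped entry starts a new row on top
    return rows

def Q(alphas):
    """The tableau Q(alpha) built by inserting a_1, ..., a_N in turn,
    starting from the empty tableau."""
    t = []
    for a in alphas:
        t = schensted_insert(t, a)
    return t
```

For the sequence $(3,1,2)$ this produces the tableau with bottom row $(1,2)$ and top row $(3)$, which is indeed column-strict.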
Now Joseph's fundamental result is that \begin{equation}\label{josephs} I(\alpha) = I(\beta) \quad\Leftrightarrow\quad Q(\alpha) \sim Q(\beta) \end{equation} for any $\alpha,\beta \in \mathfrak{t}^*$. Thus we have a complete classification of the primitive ideals in $U(\mathfrak{g})$. Our first new result identifies the primitive ideals $I$ in this classification that are completely prime, i.e. the ones for which the quotient $U(\mathfrak{g}) / I$ is a domain. \begin{Theorem}\label{mt} For $\alpha \in \mathfrak{t}^*$, the primitive ideal $I(\alpha)$ is completely prime if and only if $Q(\alpha)$ is row-equivalent to a column-connected tableau. \end{Theorem} Of course $I(\alpha)$ is completely prime if and only if ${\operatorname{rk}\:} U(\mathfrak{g}) / I(\alpha) = 1$. So in view of Theorem~\ref{pti} the completely prime primitive ideals of $U(\mathfrak{g})$ are related to one dimensional representations of the finite $W$-algebras $U(\mathfrak{g},e)$. This is the basic idea for the proof of Theorem~\ref{mt}: we deduce it from a classification of one dimensional representations of $U(\mathfrak{g},e)$ obtained via another result of Premet \cite[Theorem 3.3]{Pabelian} describing the maximal commutative quotient $U(\mathfrak{g},e)^{\operatorname{ab}}$. Our next theorem constructs a large family of primitive ideals which are {\em induced} in the spirit of \cite[Th\'eor\`eme 8.6]{CB}; again our proof of this uses finite $W$-algebras in an essential way. \begin{Theorem}\label{sep} Suppose we are given $\alpha \in \mathfrak{t}^*$ such that $Q(\alpha) \sim A$ for some column-separated tableau $A$. Let ${\lambda}' = ({\lambda}_1' \geq {\lambda}_2' \geq \cdots)$ be the transpose of the shape of $A$. 
Then we have that \begin{align*} I(\alpha) &= {\operatorname{Ann}}_{U(\mathfrak{g})} (U(\mathfrak{g}) \otimes_{U(\mathfrak{p})} F),\\ {\operatorname{rk}\:} U(\mathfrak{g}) / I(\alpha) &= \dim F, \end{align*} where $\mathfrak{p}$ is the standard parabolic subalgebra with diagonally embedded Levi factor $\mathfrak{gl}_{{\lambda}_1'}({\mathbb C}) \oplus \mathfrak{gl}_{{\lambda}_2'}({\mathbb C}) \oplus \cdots$, and $F$ is the finite dimensional irreducible $\mathfrak{p}$-module generated by a $\mathfrak{b}$-highest weight vector of weight $\gamma(A)-\rho$; cf. (\ref{newgamma})--(\ref{rhodef}). \end{Theorem} Using these two results we can already recover M\oe glin's theorem. \vspace{2mm} \noindent {\bf Corollary}\:(M\oe glin){\bf .} {\em Every completely prime primitive ideal $I$ of $U(\mathfrak{g})$ is the annihilator of a module induced from a one dimensional representation of a parabolic subalgebra of $\mathfrak{g}$.} \begin{proof} Take a completely prime $I \in {\operatorname{Prim}U(\mathfrak{g})}$ and represent it as $I(\alpha)$ for $\alpha \in \mathfrak{t}^*$. By Theorem~\ref{mt}, there exists a column-connected tableau $A \sim Q(\alpha)$. Since column-connected tableaux are obviously column-separated, we then apply Theorem~\ref{sep} to deduce that $I = {\operatorname{Ann}}_{U(\mathfrak{g})} (U(\mathfrak{g}) \otimes_{U(\mathfrak{p})} F)$ for some parabolic $\mathfrak{p}$ and some $\mathfrak{p}$-module $F$. Finally observe from its explicit description in Theorem~\ref{sep} that $F$ is actually one dimensional in the case that $A$ is column-connected. \end{proof} We record another piece of folklore peculiar to Cartan type $A$; it justifies the decision to restrict attention for the remainder of the introduction just to weights from the lattice $P := \bigoplus_{i=1}^N {\mathbb Z} {\varepsilon}_i$ of {\em integral weights}. We will give a natural proof of this via finite $W$-algebras, though it also follows from more classical techniques. 
\begin{Theorem}\label{red} Suppose we are given $\alpha \in \mathfrak{t}^*$ and set $a_i := x_i(\alpha)$. For fixed $z \in {\mathbb C}$, let $\mathfrak{g}_z := \mathfrak{gl}_{n}({\mathbb C})$ where $n := \#\{i=1,\dots,N\:|\:a_i \in z + {\mathbb Z}\}$, then set $\alpha_z := \sum_{j=1}^{n} (a_{i_j}-z) {\varepsilon}_j$ where $i_1 < \cdots < i_{n}$ are all the $i \in \{1,\dots,N\}$ such that $a_i \in z + {\mathbb Z}$. So $\alpha_z$ is an integral weight for $\mathfrak{g}_z$. We have that $$ {\operatorname{rk}\:} U(\mathfrak{g}) / I(\alpha) = \prod_{z} {\operatorname{rk}\:} U(\mathfrak{g}_z) / I(\alpha_z), $$ where the product is over a set of representatives for the cosets of ${\mathbb C}$ modulo ${\mathbb Z}$. \end{Theorem} In order to say more about Goldie ranks, we need some language related to the geometry of $P$. A weight $\alpha \in P$ is {\em anti-dominant} (resp.\ {\em regular anti-dominant}) if it satisfies $x_i(\alpha) \leq x_{i+1}(\alpha)$ (resp.\ $x_i(\alpha) < x_{i+1}(\alpha)$) for each $i=1,\dots,N-1$. Given any $\alpha \in P$, we let $\delta$ be its {\em anti-dominant conjugate}, the unique anti-dominant weight in its $W$-orbit, and then define $d(\alpha)\in W$ to be the unique element of minimal length such that $\alpha = d(\alpha) \delta$. Note that the stabilizer $W_\delta$ of $\delta$ in $W$ is a parabolic subgroup, and the element $d(\alpha)$ belongs to the set $D_\delta$ of minimal length $W / W_\delta$-coset representatives. For $w \in W$ let \begin{equation}\label{uc} \widehat{C}_w := \left\{\alpha \in P\:|\:d(\alpha) = w\right\}, \end{equation} which is the set of integral weights lying in the {\em upper closure} of the chamber containing $w(-\rho)$, i.e. we have $\alpha \in \widehat{C}_w$ if and only if the following hold for every $1 \leq i < j \leq N$: $$ \begin{array}{lcl} w^{-1}(i) < w^{-1}(j)&\Rightarrow&x_i(\alpha) \leq x_j(\alpha),\\ w^{-1}(i) > w^{-1}(j)&\Rightarrow&x_i(\alpha) > x_j(\alpha). 
\end{array} $$ The upper closures $\widehat{C}_w$ for all $w \in W$ partition the set $P$ into disjoint subsets. Recall also the {\em left cells} of $W$, which in the case of the symmetric group can be defined in purely combinatorial terms as the equivalence classes of the relation $\sim_L$ on $W$ defined by \begin{equation*} x \sim_L y \Leftrightarrow Q(x) = Q(y). \end{equation*} The map $Q$ here comes from the classical Robinson-Schensted bijection \begin{equation*} w \mapsto (P(w), Q(w)) \end{equation*} from $W$ to the set of all pairs of standard tableaux of the same shape as in e.g. \cite[ch.1]{fulton}; so $P(w)$ is the {\em insertion tableau} and $Q(w)$ is the {\em recording tableau}. Comparing with our earlier notation, we have that \begin{equation}\label{rel} Q(w) = P(w^{-1}) = Q(w(-\rho)), \end{equation} hence the connection between left cells in $W$ and the Duflo-Joseph classification of primitive ideals from (\ref{josephs}). We say that $w \in W$ is {\em minimal in its left cell} if $P(w)$ has the entries $1,\dots,N$ appearing in order up columns starting from the leftmost column. It is clear from the Robinson-Schensted correspondence that every left cell has a unique such minimal representative. Given any $\alpha \in \widehat{C}_w$, the Robinson-Schensted algorithm assembles the tableaux $Q(\alpha)$ and $Q(w(-\rho))=P(w^{-1})$ in exactly the same order, i.e. they have the same recording tableau $Q(w^{-1}) = P(w)$. If $w$ is minimal in its left cell, so this recording tableau has entries $1,\dots,N$ in order up columns, we therefore have that \begin{equation}\label{minimal} \alpha = \gamma(Q(\alpha)) \end{equation} for any $\alpha\in\widehat{C}_w$ and $w$ that is minimal in its left cell. This is the reason that the minimal left cell representatives are particularly convenient to work with. At last we can resume the main discussion of Goldie ranks. 
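The Robinson-Schensted bijection and the left-cell relation just described can be sketched in a few lines of Python (rows listed bottom to top, matching the convention that entries increase up columns; the function names are our own):

```python
def rs_pair(w):
    """Robinson-Schensted: permutation w in one-line notation
    -> (P, Q), the insertion and recording tableaux."""
    P, Q = [], []
    for step, x in enumerate(w, start=1):
        r = 0
        while True:
            if r == len(P):
                P.append([x])            # bumped entry starts a new row
                Q.append([step])         # record where the shape grew
                break
            row = P[r]
            bigger = [j for j, b in enumerate(row) if b > x]
            if not bigger:
                row.append(x)
                Q[r].append(step)
                break
            j = bigger[0]
            row[j], x = x, row[j]        # bump to the next row up
            r += 1
    return P, Q

def same_left_cell(x, y):
    """x ~_L y if and only if the recording tableaux coincide."""
    return rs_pair(x)[1] == rs_pair(y)[1]
```

For example, in $S_3$ the permutations $213$ and $312$ share the recording tableau with bottom row $(1,3)$ and top row $(2)$, so they lie in the same left cell, while $231$ does not.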
In \cite[$\S$5.12]{JgoldieI}, Joseph made the striking discovery that for each $w \in W$ there is a unique polynomial $p_w \in {\mathbb C}[\mathfrak{t}^*]$ with the property that \begin{equation}\label{goldiedef} {\operatorname{rk}\:} U(\mathfrak{g}) / I(\alpha) = p_w(\delta) \end{equation} for each $\alpha \in \widehat{C}_w$, where $\delta$ denotes the anti-dominant conjugate of $\alpha$. The $p_w$'s are Joseph's {\em Goldie rank polynomials}, which have many remarkable properties. We recall in particular that $p_w$ only depends on the left cell of $w$. To see this, take any regular anti-dominant $\delta \in P$. Assuming $w \sim_L w'$ we have that $Q(w\delta) = Q(w'\delta)$ so $I(w \delta) = I(w' \delta)$ by (\ref{josephs}). Also $w \delta$ and $w' \delta$ belong to (the interior of) $\widehat{C}_w$ and $\widehat{C}_{w'}$, respectively, by regularity. Hence (\ref{goldiedef}) gives that $p_w(\delta) = p_{w'}(\delta)$. Since the regular anti-dominant weights are Zariski dense this implies that $p_w = p_{w'}$ whenever $w \sim_L w'$. The following theorem, which is ultimately deduced from Theorem~\ref{sep}, gives an explicit formula for Goldie rank polynomials in several important cases, e.g. it includes the extreme cases $w = 1$ (when $p_w=1$) and $w=w_0$ (when it is essentially Weyl's dimension formula), as well as all situations when the tableau $Q(w)$ has just two rows. \begin{Theorem}\label{myg} Suppose we are given $w \in W$ such that $Q(w) \sim A$ for some column-separated tableau $A$. Then we have that $$ p_w = \prod_{(i,j)} \frac{x_i-x_j}{d(i,j)} $$ where the product is over all pairs $(i,j)$ of entries from the tableau $A$ such that $i$ is strictly above and in the same column as $j$, and $d(i,j) > 0$ is the number of rows that $i$ is above $j$. \end{Theorem} For general $w$, the polynomials $p_w$ are more complicated but can be written explicitly in terms of Kazhdan-Lusztig polynomials. 
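As a quick numerical illustration of the product formula in Theorem~\ref{myg}, the following Python sketch evaluates $\prod_{(i,j)} (x_i-x_j)/d(i,j)$ at a given weight; it assumes integer weight coordinates, takes the tableau by its columns listed bottom to top, and the function name is ours.

```python
from fractions import Fraction

def goldie_poly_value(columns, a):
    """Evaluate the product formula of Theorem myg at a weight alpha:
    columns = columns of the column-separated tableau (bottom to top),
    a = dict with a[i] = x_i(alpha) for each tableau entry i."""
    val = Fraction(1)
    for col in columns:
        for r2 in range(1, len(col)):
            for r1 in range(r2):
                i, j = col[r2], col[r1]   # i sits r2 - r1 rows above j
                val *= Fraction(a[i] - a[j], r2 - r1)
    return val
```

For a single column with entries $1,2,3$ and the regular weight $x_1=1$, $x_2=2$, $x_3=3$ this gives $\frac{(2-1)}{1}\cdot\frac{(3-1)}{2}\cdot\frac{(3-2)}{1} = 1$, consistent with Weyl's dimension formula for the trivial module.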
To explain this, and for later use, we must make one more notational digression. Recall that the irreducible module $L(\alpha)$ is the unique irreducible quotient of the {\em Verma module} $M(\alpha) := U(\mathfrak{g}) \otimes_{U(\mathfrak{b})} {\mathbb C}_{\alpha-\rho}$, where ${\mathbb C}_{\alpha-\rho}$ is the one dimensional $\mathfrak{b}$-module of weight $\alpha-\rho$. We have the usual {\em decomposition numbers} $[M(\alpha):L(\beta)] \in {\mathbb Z}_{\geq 0}$ and the {\em inverse decomposition numbers} $(L(\alpha):M(\beta)) \in {\mathbb Z}$ defined from \begin{equation}\label{first} \operatorname{ch} L(\alpha) = \sum_{\beta} (L(\alpha):M(\beta)) \operatorname{ch} M(\beta). \end{equation} For $w \in W$, we denote $L(w(-\rho))$ and $M(w(-\rho))$ simply by $L(w)$ and $M(w)$, respectively; in particular, $L(w_0)$ is the trivial module. By the translation principle (see \cite[4.12]{Je}), we have that \begin{align}\label{kldef} [M(\alpha):L(\beta)] &= [M(x):L(y)],\\ (L(\alpha):M(\beta)) &= \sum_{z \in W_\delta} (L(x):M(yz)),\label{klform} \end{align} for any $\alpha,\beta \in P$ with the same anti-dominant conjugate $\delta$, where $x := d(\alpha)$ and $y := d(\beta)$. Moreover, by the Kazhdan-Lusztig conjecture established in \cite{BB, BK}, it is known for $x,y\in W$ that \begin{align}\label{bigkl} [M(x):L(y)] &= P_{x w_0,y w_0}(1),\\ (L(x):M(y)) &= (-1)^{\ell(x)+\ell(y)} P_{y,x}(1)\label{bigkl2} \end{align} where $P_{x,y}(t)$ denotes the Kazhdan-Lusztig polynomial attached to $x,y \in W$ from \cite{KL}. The following theorem gives an explicit formula for the Goldie rank polynomials $p_w$. It is a straightforward consequence of Joseph's original approach for computing Goldie ranks in Cartan type $A$ from \cite{Jkos}, which we already mentioned in the discussion after Theorem~\ref{pti}. 
As was explained to me by Joseph, it can also be deduced from Joseph's general formula for Goldie rank polynomials (bearing in mind that all the scale factors are known in Cartan type $A$). We give yet another proof in the last section of the article via finite $W$-algebras, exploiting Theorem~\ref{pti}. Recall for the statement that $p_w$ depends only on the left cell of $w$, so it is sufficient to compute $p_w$ just for the minimal left cell representatives. \begin{Theorem}[Joseph]\label{foldie} Suppose $w \in W$ is minimal in its left cell. Let ${\lambda}$ be the shape of the tableau $Q(w)$ with transpose ${\lambda}' = ({\lambda}_1' \geq {\lambda}_2' \geq \cdots)$. Let $W^{\lambda}$ denote the parabolic subgroup $S_{{\lambda}'_1} \times S_{{\lambda}_2'}\times\cdots$ of $W=S_N$ and $D^{\lambda}$ be the set of maximal length $W^{\lambda} \backslash W$-coset representatives. Then \begin{equation}\label{bform} p_w= \sum_{z \in D^\lambda} (L(w):M(z)) z^{-1}(h_{\lambda}) \end{equation} where $h_{\lambda} := \!\!\displaystyle\prod_{(i\:j) \in W^{\lambda}} \!\frac{x_i - x_j}{j-i}$ (product over all transpositions $(i\:j) \in W^{\lambda}$). \end{Theorem} \iffalse Soon after working out how to compute Goldie ranks in type $A$, Joseph discovered another formula for the Goldie rank polynomials \cite[$\S$5.1]{Jgoldie}: for any $w \in W$ we have that \begin{align}\label{jform} p_w &= \frac{1}{c_w} \sum_{z \in W} (L(w):M(z)) z^{-1}(h^{m_w}) \end{align} where $h := \frac{1}{2}\sum_{1 \leq i < j \leq N} (x_i - x_j)$, $m_w := |\Phi^+| - {\operatorname{gkdim}\:} L(w)$, and $c_w \in {\mathbb Q}_{> 0}$ is some unknown constant. Remarkably, with appropriate modifications, this formula is valid in arbitrary type, although the constants $c_w$ are problematic. 
But at least in type $A$ the $c_w$'s can in principle be worked out, by picking some weight $\alpha$ on which $p_w$ is non-zero and then comparing the values of (\ref{bform}) and (\ref{jform}) on this weight; one needs to be careful since $c_w$ depends on $w$ not just on the left cell containing $w$, i.e. the map $w \mapsto c_w$ is typically {\em not} constant on left cells. \fi Joseph has directed a great deal of attention to the problem of determining the unknown constants in the Goldie rank polynomials in Cartan types different from $A$. This led Joseph to conjecture in \cite[Conjecture 8.4(i)]{Jsur} that Goldie rank polynomials always take the value $1$ on some integral weight. Our final result verifies this conjecture in Cartan type $A$. The proof is a surprisingly simple computation from (\ref{bform}). \begin{Theorem}\label{one} Every Goldie rank polynomial takes the value one on some element of $P$. More precisely, if $w \in W$ is minimal in its left cell and $C$ is the unique tableau of the same shape as $Q(w)$ that has all $1$'s on its bottom row, all $2$'s on the next row up, and so on, then $p_w(\alpha) =1$ where $\alpha := w^{-1} \gamma(C)$. \end{Theorem} The remainder of the article is organized as follows. In $\S$2, we recall the highest weight classification of finite dimensional irreducible representations of the finite $W$-algebra $U(\mathfrak{g},e)$ from \cite[Theorem 7.9]{BKrep}. Then we compare this with \cite[Theorem 3.3]{Pabelian} to determine the highest weights of all the one dimensional $U(\mathfrak{g},e)$-modules explicitly. In particular we see from this that every one dimensional representation of a finite $W$-algebra in Cartan type $A$ can be obtained as the restriction of a one dimensional representation of a parabolic subalgebra of $\mathfrak{g}$, a statement which is closely related to M\oe glin's theorem. 
Then in $\S$\ref{swhitt} we gather together various existing results about Whittaker functors and primitive ideals in Cartan type $A$. In fact we need to exploit both sorts of Whittaker functor (invariants and coinvariants) to deduce our main results. We point out in particular Remark~\ref{brundans}, in which we formulate a conjecture which would imply a classification of primitive ideals in $U(\mathfrak{g},e)$ exactly in the spirit of the Joseph-Duflo classification of ${\operatorname{Prim}U(\mathfrak{g})}$. In $\S$\ref{sm} we use the criterion for irreducibility of standard modules from \cite[Theorem 8.25]{BKrep} to establish the first equality in Theorem~\ref{sep}. In $\S$\ref{sco} we review the Whittaker coinvariants construction of finite dimensional irreducible $U(\mathfrak{g},e)$-modules from \cite[Theorem 8.21]{BKrep}. In $\S$\ref{sgoldie} we explain the method from \cite[$\S$8.5]{BKrep} for computing dimensions of finite dimensional irreducible $U(\mathfrak{g},e)$-modules, and extract the polynomial on the right hand side of the formula (\ref{bform}) from this. Finally we explain the alternative proof of Theorem~\ref{pti} and derive all the other new results formulated in this introduction in $\S$\ref{sproofs}. \vspace{2mm} \noindent {\em Acknowledgements.} My interest in reproving M\oe glin's theorem using finite $W$-algebras was sparked in the first place by a conversation with Alexander Premet and Anthony Joseph at the Oberwolfach meeting on ``Enveloping Algebras'' in March 2005. I would like to thank Alexander Premet for some inspiring discussions and encouragement since then, most recently at the ``Representation Theory of Algebraic Groups and Quantum Groups'' conference in Nagoya in August, 2010 where I learnt about the new results in \cite{Pnew}. I also thank Anthony Joseph for his helpful comments on the first draft of the article. 
\section{One dimensional representations}\label{s1d} In this section we recall some basic facts about the representation theory of finite $W$-algebras in Cartan type $A$ from \cite{BKrep}, and then deduce a classification of one dimensional representations of these algebras. We continue with the basic Lie theoretic notation from the introduction, in particular, $\mathfrak{g} = \mathfrak{gl}_N({\mathbb C})$ and $\mathfrak{t}$ and $\mathfrak{b}$ are the usual choices of Cartan and Borel subalgebra. Let $\lambda = (p_n \geq \cdots \geq p_1)$ be a fixed partition of $N$. For each $i=1,\dots,n-1$, pick non-negative integers $s_{i,i+1}$ and $s_{i+1,i}$ such that $s_{i,i+1}+s_{i+1,i} = p_{i+1}-p_i$. Then set $s_{i,j} := s_{i,i+1}+s_{i+1,i+2}+\cdots+s_{j-1,j}$ and $s_{j,i} := s_{j,j-1}+\cdots+s_{i+2,i+1}+s_{i+1,i}$ for $1 \leq i \leq j \leq n$. This defines a {\em shift matrix} $\sigma = (s_{i,j})_{1 \leq i,j \leq n}$ in the sense of \cite[(2.1)]{BKshifted}. Let $l := p_n$ for short, which is called the {\em level} in \cite{BKshifted}. We visualize this data by means of a {\em pyramid} $\pi$ of boxes drawn in an $n \times l$ rectangle, so that there is a box in row $i$ and column $j$ for each $1 \leq i \leq n$ and $1+s_{n,i} \leq j \leq l - s_{i,n}$ (where rows and columns are indexed as in a matrix). Note that there are $p_i$ boxes in the $i$th row for each $i=1,\dots,n$. Let $q_j$ be the number of boxes in the $j$th column for $j=1,\dots,l$. Also number the boxes of $\pi$ by $1,\dots,N$ working in order down columns starting from the leftmost column, and write $\operatorname{row}(k)$ and $\operatorname{col}(k)$ for the row and column numbers of the $k$th box. 
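The placement of boxes in the pyramid $\pi$ and their numbering down columns can be sketched as follows; this is an illustrative Python fragment in which the function name and the $0$-indexed encoding of the shift matrix are our own conventions.

```python
def pyramid(shifts, p):
    """Boxes of the pyramid for a shift matrix with entries
    shifts[i-1][j-1] = s_{i,j} and row lengths p = (p_1, ..., p_n):
    row i occupies columns 1 + s_{n,i} <= j <= l - s_{i,n}, with l = p_n.
    Boxes are numbered 1, ..., N down columns from the leftmost column;
    returns {k: (row(k), col(k))}."""
    n, l = len(p), max(p)
    boxes = [(i, j) for j in range(1, l + 1)          # columns left to right
                    for i in range(1, n + 1)          # rows top to bottom
                    if 1 + shifts[n - 1][i - 1] <= j <= l - shifts[i - 1][n - 1]]
    return {k: rc for k, rc in enumerate(boxes, start=1)}
```

For the first (left-justified) shift matrix in the example below, this recovers the Young diagram of $\lambda=(3,2,1)$ with the numbering shown there.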
For example, for $\lambda = (3,2,1)$ there are four possible choices for $\sigma$ with corresponding pyramids $$ \sigma = \left(\begin{array}{lll}0&1&2\\0&0&1\\0&0&0\end{array}\right) \leftrightarrow \:\pi= \begin{picture}(39,0) \put(3,-16){\line(0,1){36}} \put(15,-16){\line(0,1){36}} \put(27,-16){\line(0,1){24}} \put(39,-16){\line(0,1){12}} \put(3,-16){\line(1,0){36}} \put(3,-4){\line(1,0){36}} \put(3,8){\line(1,0){24}} \put(3,20){\line(1,0){12}} \put(9,14){\makebox(0,0){$1$}} \put(9,2){\makebox(0,0){$2$}} \put(9,-10){\makebox(0,0){$3$}} \put(21,2){\makebox(0,0){$4$}} \put(21,-10){\makebox(0,0){$5$}} \put(33,-10){\makebox(0,0){$6$}} \end{picture}\:, \qquad \sigma = \left(\begin{array}{lll}0&1&1\\0&0&0\\1&1&0\end{array}\right) \leftrightarrow \:\pi= \begin{picture}(39,0) \put(3,-16){\line(0,1){12}} \put(15,-16){\line(0,1){36}} \put(27,-16){\line(0,1){36}} \put(39,-16){\line(0,1){24}} \put(3,-16){\line(1,0){36}} \put(3,-4){\line(1,0){36}} \put(15,8){\line(1,0){24}} \put(27,20){\line(-1,0){12}} \put(9,-10){\makebox(0,0){$1$}} \put(21,14){\makebox(0,0){$2$}} \put(21,2){\makebox(0,0){$3$}} \put(21,-10){\makebox(0,0){$4$}} \put(33,-10){\makebox(0,0){$6$}} \put(33,2){\makebox(0,0){$5$}} \end{picture}\:, $$ $$ \sigma = \left(\begin{array}{lll}0&0&1\\1&0&1\\1&0&0\end{array}\right) \leftrightarrow \:\pi=\begin{picture}(39,0) \put(3,-16){\line(0,1){24}} \put(15,-16){\line(0,1){36}} \put(27,-16){\line(0,1){36}} \put(39,-16){\line(0,1){12}} \put(3,-16){\line(1,0){36}} \put(3,-4){\line(1,0){36}} \put(3,8){\line(1,0){24}} \put(15,20){\line(1,0){12}} \put(9,2){\makebox(0,0){$1$}} \put(9,-10){\makebox(0,0){$2$}} \put(21,14){\makebox(0,0){$3$}} \put(21,2){\makebox(0,0){$4$}} \put(21,-10){\makebox(0,0){$5$}} \put(33,-10){\makebox(0,0){$6$}} \end{picture}\:, \qquad \sigma = \left(\begin{array}{lll}0&0&0\\1&0&0\\2&1&0\end{array}\right) \leftrightarrow \:\pi=\begin{picture}(39,0) \put(3,-16){\line(0,1){12}} \put(15,-16){\line(0,1){24}} \put(27,-16){\line(0,1){36}} 
\put(39,-16){\line(0,1){36}} \put(3,-16){\line(1,0){36}} \put(3,-4){\line(1,0){36}} \put(15,8){\line(1,0){24}} \put(27,20){\line(1,0){12}} \put(9,-10){\makebox(0,0){$1$}} \put(21,2){\makebox(0,0){$2$}} \put(21,-10){\makebox(0,0){$3$}} \put(33,14){\makebox(0,0){$4$}} \put(33,2){\makebox(0,0){$5$}} \put(33,-10){\makebox(0,0){$6$}} \end{picture}\:. $$ If $\sigma$ is upper-triangular then $\pi$ coincides with the usual Young diagram of the partition $\lambda$; we refer to this as the {\em left-justified} case. By a {\em $\pi$-tableau}, we mean a filling of the boxes of the pyramid $\pi$ by complex numbers; the left-justified tableaux from the introduction are a special case. The definitions of {\em column-strict}, {\em column-connected} and {\em row-equivalence} formulated in the introduction in the left-justified case extend without change to $\pi$-tableaux. Also say a $\pi$-tableau $A$ is {\em row-standard} if its entries are non-decreasing along rows from left to right, meaning that $a \not> b$ whenever $a$ and $b$ are two entries from the same row with $a$ located to the left of $b$. We next define two essential maps from $\pi$-tableaux to $\mathfrak{t}^*$, denoted $\gamma$ and $\rho$ and called {\em column reading} and {\em row reading}, respectively. First, for a $\pi$-tableau $A$, we let \begin{equation}\label{gammadef} \gamma(A) := \sum_{i=1}^N a_i {\varepsilon}_i \end{equation} where $(a_1,\dots,a_N)$ is the sequence of complex numbers obtained by reading the entries of $A$ in order down columns starting with the leftmost column; so $a_i$ is the entry in the $i$th box of $A$. For $\rho(A)$, we first need to convert $A$ into a row-standard $\pi$-tableau, which we do by repeatedly transposing pairs of entries $a > b$ in the same row with $a$ located to the left of $b$ until we get to a (uniquely determined) row-standard tableau $A'$. 
Then let \begin{equation}\label{rhoAdef} \rho(A) := \sum_{i=1}^N a_i' {\varepsilon}_i \end{equation} where $(a_1',\dots,a_N')$ is the sequence obtained by reading the entries of $A'$ in order along rows starting with the top row. Note the map $\gamma$ is obviously bijective, but $\rho$ is definitely not. Let $e \in \mathfrak{g}$ be the nilpotent matrix $$ e := \sum_{\substack{1 \leq i,j \leq N\\ \operatorname{row}(i) = \operatorname{row}(j)\\ \operatorname{col}(i) =\operatorname{col}(j)-1}} e_{i,j} $$ of Jordan type $\lambda$. Here $e_{i,j}$ denotes the $ij$-matrix unit. Introduce a ${\mathbb Z}$-grading $\mathfrak{g} = \bigoplus_{d \in {\mathbb Z}} \mathfrak{g}(d)$ by declaring that $e_{i,j}$ is of degree $2(\operatorname{col}(j)-\operatorname{col}(i))$; in particular, $e$ is homogeneous of degree $2$. Let $\mathfrak{m} := \bigoplus_{d < 0} \mathfrak{g}(d)$, $\mathfrak{h} := \mathfrak{g}(0)$ and $\mathfrak{p} := \bigoplus_{d \geq 0} \mathfrak{g}(d)$. So $\mathfrak{p}$ is the standard parabolic subalgebra with Levi factor $\mathfrak{h}$, and $\mathfrak{h}$ is just the diagonally embedded subalgebra $\mathfrak{gl}_{q_1}({\mathbb C})\oplus\cdots\oplus \mathfrak{gl}_{q_l}({\mathbb C})$. Let $\mathfrak{g}^e$ (resp.\ $\mathfrak{t}^e$) be the centralizer of $e$ in $\mathfrak{g}$ (resp.\ $\mathfrak{t}$). It is important that $\mathfrak{g}^e \subseteq \mathfrak{p}$. Let $\chi:\mathfrak{m} \rightarrow {\mathbb C}$ be the Lie algebra homomorphism $x \mapsto (x,e)$ where $(.,.)$ is the trace form. Let $\mathfrak{m}_\chi := \{x - \chi(x)\:|\:x \in \mathfrak{m}\} \subseteq U(\mathfrak{m})$. The {\em finite $W$-algebra} is the following subalgebra of $U(\mathfrak{p})$: \begin{equation}\label{fw} U(\mathfrak{g},e) := \{u \in U(\mathfrak{p})\:|\: \mathfrak{m}_\chi u \subseteq U(\mathfrak{g}) \mathfrak{m}_\chi\}.
\end{equation} This definition originates in work of Kostant \cite{K}, Lynch \cite{Ly} and M\oe glin \cite{MW}, and is a special case of the construction due to Premet \cite{Pslice} and then Gan and Ginzburg \cite{GG} of non-commutative filtered deformations of the coordinate algebra of the Slodowy slice associated to the nilpotent orbit $G \cdot e$; the terminology ``finite $W$-algebra'' has emerged because they are the finite dimensional analogues of the vertex $W$-algebras constructed in \cite{KRW}. Of course the definition depends implicitly on the choice of grading (hence on $\pi$), but up to isomorphism the algebra $U(\mathfrak{g},e)$ is independent of this choice; see \cite[Corollary 10.3]{BKshifted}. More conceptual proofs of this independence (valid in all Cartan types) were given subsequently in \cite[Theorem 1]{BG} and \cite[Proposition 3.1.2]{Lsymplectic}. A special feature of the Cartan type $A$ case is that a complete set of generators and relations for $U(\mathfrak{g},e)$ is known; see \cite[Theorem 10.1]{BKshifted}. The generators are certain explicit elements \begin{align*} \{D_i^{(r)}\:&|\:1 \leq i \leq n, r > 0\}\\ \{E_i^{(r)}\:&|\:1 \leq i < n, r > s_{i,i+1}\}\\ \{F_i^{(r)}\:&|\:1 \leq i < n, r > s_{i+1,i}\} \end{align*} of $U(\mathfrak{p})$ defined in \cite[$\S$9]{BKshifted}, and the relations are the defining relations for the shifted Yangian $Y_n(\sigma)$ recorded in \cite[(2.4)--(2.15)]{BKshifted}, together with the relations $D_1^{(r)} = 0$ for $r > p_1$. These generators and relations were exploited in \cite{BKrep} to classify the finite dimensional irreducible $U(\mathfrak{g},e)$-modules. To recall this classification in more detail, by a {\em highest weight vector} in a $U(\mathfrak{g},e)$-module, we mean a common eigenvector for all $D_i^{(r)}$ which is annihilated by all $E_j^{(s)}$. Assume that $v_+$ is a non-zero highest weight vector in a left module. 
Let $a_i^{(r)}\in{\mathbb C}$ be defined from $D_i^{(r)} v_+ = a_i^{(r)} v_+$ and define $a_{i,1},\dots,a_{i,p_i} \in {\mathbb C}$ by factoring \begin{equation}\label{factor} u^{p_i} + a_i^{(1)} u^{p_i-1}+\cdots+a_i^{(p_i)} = (u+a_{i,1})\cdots(u+a_{i,p_i}). \end{equation} Combining \cite[Theorem 3.5]{BKrep} for $j=i$ with the definition \cite[(2.34)]{BKrep}, it follows that the elements $D_i^{(r)}$ for $r > p_i$ lie in the left ideal of $U(\mathfrak{g},e)$ generated by all $E_j^{(s)}$, hence $a_i^{(r)} = 0$ for $r > p_i$. So we have for all $r > 0$ that \begin{equation}\label{esf} D_i^{(r)} v_+ = e_r(a_{i,1},\dots,a_{i,p_i}) v_+, \end{equation} where $e_r(a_{i,1},\dots,a_{i,p_i})$ is the $r$th elementary symmetric polynomial in the complex numbers $a_{i,1},\dots,a_{i,p_i}$. We record this by writing the complex numbers $a_{i,1}-i,\dots,a_{i,p_i}-i$ into the boxes on the $i$th row of the pyramid $\pi$ to obtain a $\pi$-tableau $A$, which we refer to as the {\em type} of the original highest weight vector $v_+$. Of course $A$ here is defined only up to row-equivalence. Conversely, given a $\pi$-tableau $A$, there is a unique (up to isomorphism) irreducible left $U(\mathfrak{g},e)$-module $L(A,e)$ generated by a highest weight vector of type $A$, with $L(A,e) \cong L(B,e)$ if and only if $A \sim B$. The module $L(A,e)$ is constructed in \cite[$\S$6.1]{BKrep} as the unique irreducible quotient of the {\em Verma module} $M(A,e)$, which is the universal highest weight module of type $A$; see also \cite[$\S$4.2]{BGK} for a different construction of Verma modules which avoids the explicit use of generators and relations (so makes sense in other Cartan types). \begin{Remark}\rm A basic question is to compute the composition multiplicities $[M(A,e):L(B,e)]$. 
In \cite[Conjecture 7.17]{BKrep}, we conjectured for any $\pi$-tableaux $A$ and $B$ with integer entries that \begin{equation}\label{conj} [M(A,e):L(B,e)] = [M(\rho(A)):L(\rho(B))], \end{equation} the numbers on the right hand side being known by (\ref{kldef}) and (\ref{bigkl}). Although not needed in the present article, we want to point out that this conjecture is now a theorem of Losev; see \cite[Theorems 4.1 and 4.3]{LcatO}. Strictly speaking, to get from Losev's result to (\ref{conj}) one needs to identify the Verma modules $M(A,e)$ defined here with the ones in \cite{LcatO}, but this has now been checked thanks to some recent work of Brown and Goodwin \cite{BrG}; see the proof of Theorem~\ref{labels} below for a fuller discussion. In arbitrary standard Levi type, there is an analogous conjecture formulated roughly in \cite{VD}, which can also be proved using Losev's work. \end{Remark} The highest weight classification of finite dimensional irreducible $U(\mathfrak{g},e)$-modules is as follows. \begin{Theorem}[{\cite[Theorem 7.9]{BKrep}}]\label{fdc} For a $\pi$-tableau $A$, $L(A,e)$ is finite dimensional if and only if $A$ is row-equivalent to a column-strict tableau. Hence, as $A$ runs over a set of representatives for the row-equivalence classes of column-strict $\pi$-tableaux, the modules $\{L(A,e)\}$ give a complete set of pairwise inequivalent finite dimensional irreducible left $U(\mathfrak{g},e)$-modules. \end{Theorem} The proof of the ``if'' part of Theorem~\ref{fdc} given in \cite{BKrep} is quite straightforward, and is based on the construction of another family of $U(\mathfrak{g},e)$-modules called {\em standard modules} indexed by column-strict tableaux. To define these, recall the weight $\rho$ from (\ref{rhodef}), and also introduce the special weight \begin{equation}\label{betadef} \beta :=\!\!\!\!\!\! \sum_{\substack{1 \leq i,j \leq N \\ \operatorname{col}(i) > \operatorname{col}(j)}} ({\varepsilon}_i - {\varepsilon}_j) = \!
\sum_{i=1}^N ((q_1+\cdots+q_{\operatorname{col}(i)-1}) - (q_{\operatorname{col}(i)+1} + \cdots + q_l)){\varepsilon}_i \in \mathfrak{t}^*. \end{equation} This is the same as the weight $\beta$ defined in \cite{BGK}, which is important because of \cite[Corollary 2.9]{BGK} (reproduced in Theorem~\ref{twist} below). Notice that $A$ is column-strict if and only if $\gamma(A) - \beta - \rho$ is a dominant weight for the Lie algebra $\mathfrak{h} = \mathfrak{g}(0)$ with respect to the Borel subalgebra $\mathfrak{b}\cap\mathfrak{h}$. Assuming that is the case, there is a finite dimensional irreducible $\mathfrak{p}$-module $V(A)$ generated by a $\mathfrak{b}$-highest weight vector of this weight. Then we restrict the left $U(\mathfrak{p})$-module $V(A)$ to the subalgebra $U(\mathfrak{g},e)$ to obtain the {\em standard module} denoted $V(A,e)$. Thus $V(A,e) = V(A)$ as vector spaces, but we use different notation since one is a $U(\mathfrak{g},e)$-module and the other is a $U(\mathfrak{p})$-module. As observed in the last paragraph of the proof of \cite[Theorem 7.9]{BKrep}, the original $\mathfrak{b}$-highest weight vector in $V(A)$ is a highest weight vector of type $A$ in $V(A,e)$; this can also be checked directly by arguing as in the proof of \cite[Lemma 5.4]{BGK}. It follows that $L(A,e)$ is a composition factor of the finite dimensional module $V(A,e)$, hence $L(A,e)$ is indeed finite dimensional when $A$ is column-strict. We are interested next in one dimensional modules. It is obvious from the definitions that $V(A)$ is one dimensional if and only if $A$ is column-connected. Since $L(A,e)$ is a subquotient of $V(A,e)$, it follows that $L(A,e)$ is one dimensional if $A$ is row-equivalent to a column-connected tableau. We are going to prove the converse of this statement to obtain the following classification of one dimensional $U(\mathfrak{g},e)$-modules.
The possibility of doing this was suggested already by Losev in the discussion in the paragraph after \cite[Theorem 5.2.1]{L1D}. \begin{Theorem}\label{class} For a $\pi$-tableau $A$, $L(A,e)$ is one dimensional if and only if $A$ is row-equivalent to a column-connected tableau. Hence, as $A$ runs over a set of representatives for the row-equivalence classes of column-connected $\pi$-tableaux, the modules $\{L(A,e)\}$ give a complete set of pairwise inequivalent one dimensional left $U(\mathfrak{g},e)$-modules. \end{Theorem} \begin{Corollary}\label{mc} Every one dimensional left $U(\mathfrak{g},e)$-module is isomorphic to a standard module $V(A,e)$ for some column-connected $\pi$-tableau $A$, so arises as the restriction of a one dimensional $U(\mathfrak{p})$-module. \end{Corollary} The rest of the section is devoted to proving Theorem~\ref{class} and its corollary. To do this, we need to review the following theorem of Premet describing the algebra $U(\mathfrak{g},e)^{\operatorname{ab}}$, that is, the quotient of $U(\mathfrak{g},e)$ by the two-sided ideal generated by all commutators $[x,y]$ for $x,y \in U(\mathfrak{g},e)$. Of course one dimensional $U(\mathfrak{g},e)$-modules are identified with one dimensional $U(\mathfrak{g},e)^{\operatorname{ab}}$-modules. It is convenient at this point to set $p_0 := 0$. \begin{Theorem}[{\cite[Theorem 3.3]{Pabelian}}]\label{pt} The algebra $U(\mathfrak{g},e)^{\operatorname{ab}}$ is a free polynomial algebra of rank $l$ generated by the images of the elements \begin{equation}\label{elts} \{D_i^{(r)}\:|\:1 \leq i \leq n, 1 \leq r \leq p_{i} - p_{i-1}\}. \end{equation} \end{Theorem} Premet's proof of Theorem~\ref{pt} is in two parts. The first step is to show that $U(\mathfrak{g},e)^{\operatorname{ab}}$ is generated by the images of the commuting elements listed in (\ref{elts}). 
This is a straightforward consequence of the defining relations for $U(\mathfrak{g},e)$ from \cite{BKshifted}, and is explained in the first two paragraphs of the proof of \cite[Theorem 3.3]{Pabelian}. Thus, letting $X \cong \mathbb{A}^l$ be the affine space with algebraically independent coordinate functions $\{T_i^{(r)}\:|\:1 \leq i \leq n, 1 \leq r \leq p_i - p_{i-1}\}$, there is a surjective map \begin{equation}\label{mor} {\mathbb C}[X] \twoheadrightarrow U(\mathfrak{g},e)^{\operatorname{ab}}, \qquad T_i^{(r)} \mapsto D_i^{(r)}. \end{equation} This map identifies $\operatorname{Specm} U(\mathfrak{g},e)^{\operatorname{ab}}$ with a closed subvariety of $X$. Then to complete the proof Premet shows quite indirectly that $\dim \operatorname{Specm} U(\mathfrak{g},e)^{\operatorname{ab}} \geq l$, hence $\operatorname{Specm} U(\mathfrak{g},e)^{\operatorname{ab}} =X$ and the surjective map is an isomorphism. In the next paragraph, we will explain an alternative argument for this second step using the following elementary lemma. \begin{Lemma}\label{Stup} Given complex numbers $a_i^{(r)}$ for $1 \leq i \leq n$ and $1 \leq r \leq p_i - p_{i-1}$, there are complex numbers $a_{i,j}$ for $1 \leq i \leq n$ and $1 \leq j \leq p_i$ such that \begin{align}\label{id1} a_{i,p_i-p_{i-1}+r} &= a_{i-1,r}&&\text{for }1 \leq r \leq p_{i-1},\\ e_r(a_{i,1},\dots,a_{i,p_i}) &= a_{i}^{(r)}&&\text{for }1 \leq r \leq p_i-p_{i-1}.\label{id2} \end{align} \end{Lemma} \begin{proof} We prove existence of numbers $a_{i,j}$ for $1 \leq j \leq p_i$ satisfying (\ref{id1})--(\ref{id2}) by induction on $i=1,\dots,n$. For the base case $i=1$, we define $a_{1,1},\dots,a_{1,p_1}$ from the factorization (\ref{factor}), and (\ref{id1})--(\ref{id2}) are clear. For the induction step, suppose we have already found $a_{i-1,1},\dots,a_{i-1,p_{i-1}}$. Define $a_{i,p_i-p_{i-1}+1},\dots,a_{i,p_i}$ so that (\ref{id1}) holds. 
Then we need to find complex numbers $a_{i,1},\dots,a_{i,p_i-p_{i-1}}$ satisfying (\ref{id2}). The equations (\ref{id2}) are equivalent to the equations $$ b_i^{(r)} = a_i^{(r)} - \sum_{s=0}^{r-1} b_i^{(s)} e_{r-s}(a_{i-1,1},\dots,a_{i-1,p_{i-1}}) $$ for $1 \leq r \leq p_i -p_{i-1}$, where $b_i^{(r)}$ denotes $e_r(a_{i,1},\dots,a_{i,p_i-p_{i-1}})$ and $b_i^{(0)} := 1$. Proceeding by induction on $r=1,\dots,p_i-p_{i-1}$, we solve these equations uniquely for $b_i^{(r)}$ and then define $a_{i,1},\dots,a_{i,p_i-p_{i-1}}$ by factoring $$ u^{p_i-p_{i-1}} + b_i^{(1)} u^{p_i-p_{i-1}-1}+\cdots+b_i^{(p_i-p_{i-1})} = (u+a_{i,1}) \cdots (u+a_{i,p_i-p_{i-1}}). $$ This does the job. \end{proof} Now take any point $x \in X$, set $a_i^{(r)} := T_i^{(r)}(x)$, and then define $a_{i,j}$ according to Lemma~\ref{Stup}. Because of (\ref{id1}), there is a column-connected $\pi$-tableau $A$ having entries $a_{i,1}-i,\dots,a_{i,p_i}-i$ in its $i$th row for each $i=1,\dots,n$. This tableau $A$ is unique up to row-equivalence; indeed, any two choices for $A$ agree up to reordering columns of the same height. As we have already observed, the assumption that $A$ is column-connected means that the standard module $V(A,e)$ is one dimensional, hence so is $L(A,e) \cong V(A,e)$. By (\ref{esf}) and (\ref{id2}), we see that $D_i^{(r)}$ acts on $L(A,e)$ by the scalar $a_i^{(r)}$, showing that the point $x$ lies in $\operatorname{Specm} U(\mathfrak{g},e)^{\operatorname{ab}}$. Thus we have established that $\operatorname{Specm} U(\mathfrak{g},e)^{\operatorname{ab}} =X$, so the map (\ref{mor}) is indeed an isomorphism as required for the alternative proof of the second part of Theorem~\ref{pt} promised above. This argument shows moreover that every one dimensional left $U(\mathfrak{g},e)$-module is isomorphic to $L(A,e) \cong V(A,e)$ for some column-connected $\pi$-tableau $A$, which is enough to complete the proofs of Theorem~\ref{class} and Corollary~\ref{mc}.
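The construction just described can be made completely explicit in the smallest non-trivial case. For example, take $n = 2$, $p_1 = 1$ and $p_2 = 2$, so that a point of $X$ is a pair of scalars $(a_1^{(1)}, a_2^{(1)})$. The base case of Lemma~\ref{Stup} gives $a_{1,1} = a_1^{(1)}$, then (\ref{id1}) forces $a_{2,2} = a_{1,1}$, and (\ref{id2}) reads $$ e_1(a_{2,1},a_{2,2}) = a_{2,1} + a_{2,2} = a_2^{(1)}, $$ so that $a_{2,1} = a_2^{(1)} - a_1^{(1)}$. The resulting tableau $A$ has entry $a_{1,1}-1$ in its first row and entries $a_{2,1}-2,\, a_{2,2}-2$ in its second row; arranging the second row so that $a_{2,2}-2$ sits below the first-row box, the entries of that column are $a_{1,1}-1$ and $a_{1,1}-2$, confirming that $A$ is column-connected.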
\section{Whittaker functors and Duflo-Joseph classification}\label{swhitt} In this section we review the definitions of the two sorts of Whittaker functors and explain some of the results of Premet and Losev linking finite dimensional $U(\mathfrak{g},e)$-modules to ${\operatorname{Prim}U(\mathfrak{g})}$. For any associative algebra $A$, we denote the category of all left (resp.\ right) $A$-modules by $A\operatorname{\!-mod}$ (resp.\ $\operatorname{mod-\!} A$). If $M$ is a left $U(\mathfrak{g})$-module, it is clear from (\ref{fw}) that the space $H^0(\mathfrak{m}_\chi, M) := \{v \in M \:|\:\mathfrak{m}_\chi v = \hbox{\boldmath{$0$}}\}$ of {\em Whittaker invariants} is stable under left multiplication by elements of $U(\mathfrak{g},e)$, hence it is a left $U(\mathfrak{g},e)$-module. So we get the functor \begin{equation}\label{inv} H^0(\mathfrak{m}_\chi, ?):U(\mathfrak{g})\operatorname{\!-mod} \rightarrow U(\mathfrak{g},e)\operatorname{\!-mod}. \end{equation} Instead suppose that $M$ is a right $U(\mathfrak{g})$-module. Then, by (\ref{fw}) again, the space $H_0(\mathfrak{m}_\chi, M) := M / M \mathfrak{m}_\chi$ of {\em Whittaker coinvariants} is naturally a right $U(\mathfrak{g},e)$-module. So we have the functor \begin{equation}\label{coinv} H_0(\mathfrak{m}_\chi, ?):\operatorname{mod-\!} U(\mathfrak{g}) \rightarrow \operatorname{mod-\!} U(\mathfrak{g},e). \end{equation} In the remainder of the section we review some of the basic properties of these two Whittaker functors. Although not used here, we remark that one can also combine these functors to obtain a remarkable functor $H^0_0(\mathfrak{m}_\chi, ?)$ on bimodules introduced originally by Ginzburg; see \cite[$\S$3.3]{Gin} and \cite[$\S$3.5]{Lclass}. We begin with the functor $H^0(\mathfrak{m}_\chi,?)$. Let $(U(\mathfrak{g}),\mathfrak{m}_\chi)\operatorname{\!-mod}$ be the full subcategory of $U(\mathfrak{g})\operatorname{\!-mod}$ consisting of all modules on which $\mathfrak{m}_\chi$ acts locally nilpotently. 
By Skryabin's theorem \cite{Skryabin} (see also \cite[$\S$6]{GG}), the functor $H^0(\mathfrak{m}_\chi,?)$ restricts to an equivalence of categories \begin{equation*} H^0(\mathfrak{m}_\chi, ?):(U(\mathfrak{g}),\mathfrak{m}_\chi)\operatorname{\!-mod} \rightarrow U(\mathfrak{g},e)\operatorname{\!-mod}. \end{equation*} The quasi-inverse equivalence is the {\em Skryabin functor} \begin{equation}\label{skryabinf} S_\chi : U(\mathfrak{g},e)\operatorname{\!-mod} \rightarrow (U(\mathfrak{g}),\mathfrak{m}_\chi)\operatorname{\!-mod} \end{equation} defined by tensoring with the $(U(\mathfrak{g}), U(\mathfrak{g},e))$-bimodule $U(\mathfrak{g}) / U(\mathfrak{g}) \mathfrak{m}_\chi$. This equivalence has proved useful for the study of primitive ideals in $U(\mathfrak{g})$. For a two-sided ideal $I$ of $U(\mathfrak{g})$, we define its associated variety $\mathcal{V\!A}(I)$ as in \cite[$\S$9.3]{Ja}, viewing it as a closed subvariety of $\mathfrak{g}$ via the trace form. Let $\mathcal{V\!A}'(I)$ denote the image of $\mathcal{V\!A}(I)$ under the natural projection $\mathfrak{g} \twoheadrightarrow [\mathfrak{g},\mathfrak{g}] = \mathfrak{sl}_N({\mathbb C})$. By Joseph's irreducibility theorem, it is known that $\mathcal{V\!A}'(I)$ is the closure of a single nilpotent orbit for every $I \in {\operatorname{Prim}U(\mathfrak{g})}$. This follows in Cartan type $A$ from \cite[$\S$3.3]{JI}; for other Cartan types see \cite[$\S$3.10]{Joseph} as well as \cite[Corollary 4.7]{Vogan} and \cite[Remark 3.4.4]{Lclass} for alternative proofs (the second of which goes via finite $W$-algebras in the spirit of the present article). Let ${\operatorname{Prim}_\lambda U(\mathfrak{g})}$ denote the set of $I \in {\operatorname{Prim}U(\mathfrak{g})}$ such that $\mathcal{V\!A}'(I)$ is the closure of the orbit $G \cdot e$ of all nilpotent matrices of Jordan type $\lambda$. 
Given any non-zero left $U(\mathfrak{g},e)$-module $L$, we get a two-sided ideal \begin{equation}\label{il} I(L) := {\operatorname{Ann}}_{U(\mathfrak{g})} S_\chi(L) \end{equation} of $U(\mathfrak{g})$ by applying Skryabin's functor (\ref{skryabinf}) and then taking the annihilator. If $L$ is irreducible then Skryabin's theorem implies that $I(L) \in {\operatorname{Prim}U(\mathfrak{g})}$. The following fundamental theorem of Premet implies moreover that $I(L) \in {\operatorname{Prim}_\lambda U(\mathfrak{g})}$ if $L$ is finite dimensional and irreducible. Premet's proof of this result also uses Joseph's irreducibility theorem. Although not needed here, we remark that the converse of the second statement of the theorem is also true by \cite[Theorem 1.2.2(ii),(ix)]{Lsymplectic}. \begin{Theorem}[{\cite[Theorem 3.1]{Pslodowy}}]\label{t31} For any non-zero left $U(\mathfrak{g},e)$-module $L$ we have that $\mathcal{V\!A}'(I(L)) \supseteq \overline{G \cdot e}$. Moreover, if $L$ is finite dimensional then $\mathcal{V\!A}'(I(L)) = \overline{G \cdot e}$. \end{Theorem} Recalling Theorem~\ref{fdc}, this gives us an ideal $I(L(A,e)) \in {\operatorname{Prim}_\lambda U(\mathfrak{g})}$ for each column-strict $\pi$-tableau $A$. The next theorem explains how to identify this primitive ideal in the Duflo labelling from the introduction. It is a special case of a general result of Losev \cite[Theorem 5.1.1]{L1D} (a closely related statement was conjectured in \cite[$\S$5.1]{BGK}). \begin{Theorem}\label{labels} For any $\pi$-tableau $A$, we have that $$ I(L(A,e)) = I(\rho(A)), $$ where $\rho(A) \in \mathfrak{t}^*$ is defined by (\ref{rhoAdef}). \end{Theorem} \begin{proof} Recall we have labelled the boxes of $\pi$ in order down columns starting from the leftmost column. Let $1',2',\dots,N'$ be the sequence of integers obtained by reading these labels from left to right along rows starting from the top row. 
There is a unique permutation $w \in W$ such that $w(i) = i'$ for each $i=1,\dots,N$. Let $\mathfrak{b}' := w\cdot\mathfrak{b} = \langle e_{i',j'}\:|\:1 \leq i \leq j \leq N\rangle$ and $\rho' := w\rho = -\sum_{i=1}^N i {\varepsilon}_{i'}$. For any $\alpha' \in \mathfrak{t}^*$, let $L'(\alpha')$ be the irreducible $\mathfrak{g}$-module generated by a $\mathfrak{b}'$-highest weight vector of weight $\alpha' - \rho'$. Now take a $\pi$-tableau $A$ and let $\rho'(A) := w \rho(A)$. An easy argument involving twisting the action by $w$ shows that ${\operatorname{Ann}}_{U(\mathfrak{g})} L'(\rho'(A)) = {\operatorname{Ann}}_{U(\mathfrak{g})} L(\rho(A)) \stackrel{\text{def}}{=} I(\rho(A))$. Thus to complete the proof of the theorem it suffices to show that \begin{equation}\label{prob} I(L(A,e)) \stackrel{\text{def}}{=} {\operatorname{Ann}}_{U(\mathfrak{g})} S_\chi(L(A,e)) = {\operatorname{Ann}}_{U(\mathfrak{g})} L'(\rho'(A)). \end{equation} We will ultimately deduce this from \cite[Theorem 5.1.1]{L1D}, which is phrased in terms of the highest weight theory of \cite{BGK}. To recall a little of this theory, for $\mathfrak{a} \in \{\mathfrak{g}, \mathfrak{p}, \mathfrak{h}, \mathfrak{m}, \mathfrak{b}, \mathfrak{b}'\}$, let $\mathfrak{a}_0$ be the zero weight space of $\mathfrak{a}$ for the adjoint action of the torus $\mathfrak{t}^e$. In particular, we have that $\mathfrak{g}_0 = \langle e_{i,j}\:|\:\operatorname{row}(i) = \operatorname{row}(j)\rangle \cong \mathfrak{gl}_{p_1}({\mathbb C}) \oplus\cdots\oplus \mathfrak{gl}_{p_n}({\mathbb C})$, while $\mathfrak{p}_0 = \mathfrak{b}_0 = \mathfrak{b}'_0$ and $\mathfrak{h}_0 = \mathfrak{t}$.
We have in front of us the necessary data to define another finite $W$-algebra $U(\mathfrak{g}_0, e) \subseteq U(\mathfrak{p}_0)$, which plays the role of ``Cartan subalgebra.'' Choose a parabolic subalgebra $\mathfrak{q}$ of $\mathfrak{g}$ with Levi factor $\mathfrak{g}_0$ by setting $\mathfrak{q} := \mathfrak{g}_0 + \mathfrak{b}' = \langle e_{i,j}\:|\:\operatorname{row}(i) \leq \operatorname{row}(j)\rangle$. This choice determines a certain $(U(\mathfrak{g},e),U(\mathfrak{g}_0,e))$-bimodule denoted $U(\mathfrak{g},e) / U(\mathfrak{g},e)_\sharp$ in \cite[$\S$4.1]{BGK}; the right $U(\mathfrak{g}_0,e)$-module structure here is defined using a homomorphism constructed in \cite[Theorem 4.3]{BGK}. Then given any finite dimensional irreducible left $U(\mathfrak{g}_0,e)$-module ${\Lambda}$ we can form the Verma module \begin{equation}\label{ve} M({\Lambda},e) := U(\mathfrak{g},e) / U(\mathfrak{g},e)_\sharp \otimes_{U(\mathfrak{g}_0,e)} {\Lambda} \end{equation} as in \cite[$\S$5.2]{BGK}. As usual, it has a unique irreducible quotient denoted $L({\Lambda},e)$; see \cite[Theorem 4.5(4)]{BGK}. On the other hand, in \cite[$\S$4.3]{L1D}, Losev makes a very similar construction of Verma modules, but replaces the homomorphism from \cite[Theorem 4.3]{BGK} with a map constructed in a completely different way in \cite[(5.6)]{LcatO}. It is far from clear that Losev's map is the same as the one in \cite{BGK}, but fortunately this has recently been checked by Brown and Goodwin (in standard Levi type); see \cite[Proposition 3.12]{BrG}. Hence, as noted in \cite[$\S$3.5]{BrG}, the Verma modules constructed in \cite{L1D} are the same as the Verma modules $M({\Lambda},e)$ above coming from \cite{BGK}. This is a crucial point. As we are in standard Levi type, i.e. $e$ is regular in $\mathfrak{g}_0$, we have simply that $U(\mathfrak{g}_0,e) \cong Z(\mathfrak{g}_0)$, the center of $U(\mathfrak{g}_0)$, as goes back to Kostant \cite[$\S$2]{K}.
More precisely, there is a canonical algebra isomorphism \begin{equation}\label{ko} \operatorname{Pr}_0:Z(\mathfrak{g}_0) \stackrel{\sim}{\rightarrow} U(\mathfrak{g}_0,e) \end{equation} induced by the unique linear projection $\operatorname{Pr}_0:U(\mathfrak{g}_0) \twoheadrightarrow U(\mathfrak{b}_0)$ that sends $u (x - \chi(x))$ to zero for each $u \in U(\mathfrak{g}_0)$ and $x \in \mathfrak{m}_0$. For $\alpha' \in \mathfrak{t}^*$, let $L_0'(\alpha')$ denote the irreducible $U(\mathfrak{g}_0)$-module generated by a $\mathfrak{b}_0'$-highest weight vector of weight $\alpha' - \rho'$. Let $W_0$ be the subgroup of $W$ consisting of all permutations $w$ such that $\operatorname{row}(i) = \operatorname{row}(w(i))$ for each $1 \leq i \leq N$, which is the Weyl group of $\mathfrak{g}_0$. Then we have the Harish-Chandra isomorphism \begin{equation}\label{HC} \Psi_0:Z(\mathfrak{g}_0) \stackrel{\sim}{\rightarrow} S(\mathfrak{t})^{W_0}, \end{equation} which we normalize so that $z\in Z(\mathfrak{g}_0)$ acts on $L_0'(\alpha')$ by the scalar $\alpha'(\Psi_0(z))$ for each $\alpha'\in\mathfrak{t}^*$. Let ${\Lambda}$ be the one dimensional left $U(\mathfrak{g}_0,e)$-module corresponding under the isomorphisms (\ref{ko}) and (\ref{HC}) to the $S(\mathfrak{t})^{W_0}$-module ${\mathbb C}_{\rho'(A)}$ of weight $\rho'(A)$. By the proof of \cite[Theorem 5.5]{BGK} and \cite[Lemma 5.1]{BGK}, we have that $M({\Lambda},e) \cong M(A,e)$ as left $U(\mathfrak{g},e)$-modules, hence $L({\Lambda},e) \cong L(A,e)$. So we have identified $L(A,e)$ with a highest weight module exactly as in \cite{L1D}, and our problem (\ref{prob}) now becomes to show that \begin{equation}\label{prob2} {\operatorname{Ann}}_{U(\mathfrak{g})} S_\chi(L(\Lambda,e)) = {\operatorname{Ann}}_{U(\mathfrak{g})} L'(\rho'(A)). \end{equation} By the definition of ${\Lambda}$ and (\ref{HC}), the character of $Z(\mathfrak{g}_0)$ arising from $\Lambda$ via (\ref{ko}) is the same as the central character of $L'_0(\rho'(A))$.
Moreover, by the definition of $\rho'(A)$, $L'_0(\rho'(A))$ is an ``anti-dominant'' irreducible Verma module, so by \cite[Theorem 8.4.3]{Dix} its annihilator in $U(\mathfrak{g}_0)$ is the minimal primitive ideal generated by the kernel of this central character. By \cite[Theorem 3.9]{K}, this minimal primitive ideal is also the annihilator of the $U(\mathfrak{g}_0)$-module obtained from ${\Lambda}$ by applying the $\mathfrak{g}_0$-version of Skryabin's equivalence. Now apply \cite[Theorem 5.1.1]{L1D} to deduce (\ref{prob2}). \end{proof} Theorem~\ref{labels} has a number of important consequences. Recalling the definition of the left-justified tableau $Q(\alpha)$ from the introduction, let \begin{equation} \mathfrak{t}^*_\lambda := \{\alpha \in\mathfrak{t}^*\:|\: \text{$Q(\alpha)$ has shape $\lambda$}\}. \end{equation} For $\alpha \in \mathfrak{t}^*_\lambda$, we define a $\pi$-tableau $Q_\pi(\alpha)$ by taking $Q(\alpha)$ and sliding the boxes to the right as necessary in order to convert it to a $\pi$-tableau. Note $Q_\pi(\alpha)$ is row-equivalent to a column-strict $\pi$-tableau. \begin{Lemma}\label{tr} For any column-strict $\pi$-tableau $A$, we have that $\rho(A)\in\mathfrak{t}^*_\lambda$ and $A \sim Q_\pi(\rho(A))$. \end{Lemma} \begin{proof} This follows easily from the algorithm to compute $Q(\rho(A))$. \end{proof} \begin{Theorem}\label{labels2} For $\alpha \in \mathfrak{t}_\lambda^*$ we have that $I(\alpha) = I(L(A,e))$, where $A$ is any column-strict $\pi$-tableau with $A \sim Q_\pi(\alpha)$. \end{Theorem} \begin{proof} Lemma~\ref{tr} implies that $Q_\pi(\rho(A)) \sim A \sim Q_\pi(\alpha)$. Hence $Q(\rho(A)) \sim Q(\alpha)$, and we get that $I(\rho(A)) = I(\alpha)$ by (\ref{josephs}). Also by Theorem~\ref{labels} we have that $I(L(A,e)) = I(\rho(A))$. Hence $I(\alpha) = I(L(A,e))$. 
\end{proof} The next two corollaries are certainly not new, but we have still included self-contained proofs in order to illustrate the usefulness of Theorems~\ref{labels} and \ref{labels2}. The first recovers fully the result of Joseph from \cite[$\S$3.3]{JI}. \begin{Corollary}[Joseph]\label{jcor} ${\operatorname{Prim}_\lambda U(\mathfrak{g})} = \{I(\alpha)\:|\:\alpha \in \mathfrak{t}^*_\lambda\}$. \end{Corollary} \begin{proof} This follows from Theorem~\ref{labels2} and Theorem~\ref{t31}, since we know already by Duflo's theorem and Joseph's irreducibility theorem that ${\operatorname{Prim}U(\mathfrak{g})} = \{I(\alpha)\:|\:\alpha \in \mathfrak{t}^*\}$ is the disjoint union of the ${\operatorname{Prim}_\lambda U(\mathfrak{g})}$'s for all ${\lambda}$. \end{proof} The next corollary is a special case of a result proved in arbitrary Cartan type by Losev; see \cite[Theorem 1.2.2(viii)]{Lsymplectic} for the surjectivity of the map in the statement of the corollary, and Premet's conjecture formulated in \cite[Conjecture 1.2.1]{Lclass} and proved in \cite[$\S$4.2]{Lclass} for the injectivity (which simplifies in Cartan type $A$ because centralizers are connected). \begin{Corollary}[Losev]\label{lbij} The map \begin{align*} \left\{ \begin{array}{l} \text{isomorphism classes of}\\ \text{finite dimensional irreducible}\\ \text{left $U(\mathfrak{g},e)$-modules} \end{array} \right\} &\rightarrow {\operatorname{Prim}_\lambda U(\mathfrak{g})}, \qquad [L] \mapsto I(L) \end{align*} is a bijection. \end{Corollary} \begin{proof} By Corollary~\ref{jcor}, any $I \in {\operatorname{Prim}_\lambda U(\mathfrak{g})}$ can be represented as $I(\alpha)$ for some $\alpha \in \mathfrak{t}_\lambda^*$. By Theorem~\ref{labels2}, we see that $I(\alpha) = I(L)$ for some finite dimensional irreducible left $U(\mathfrak{g},e)$-module, hence the map is surjective.
For injectivity, by Theorem~\ref{fdc}, it suffices to show that $I(L(A,e)) = I(L(B,e))$ implies $A \sim B$ for any column-strict $\pi$-tableaux $A$ and $B$. To prove this, use Theorem~\ref{labels} and (\ref{josephs}) to see that $I(L(A,e))=I(L(B,e))$ implies $Q(\rho(A)) \sim Q(\rho(B))$, hence $A \sim B$ by Lemma~\ref{tr}. \end{proof} \begin{Remark}\label{brundans}\rm Let $\operatorname{Prim} U(\mathfrak{g},e)$ denote the space of all primitive ideals in $U(\mathfrak{g},e)$. In \cite{Lsymplectic}, Losev shows that there is a well-defined map \begin{equation*} ?^\dagger: \operatorname{Prim} U(\mathfrak{g},e) \rightarrow \bigcup_{\mu \geq {\lambda}} \operatorname{Prim}_\mu U(\mathfrak{g}) \end{equation*} such that $({\operatorname{Ann}}_{U(\mathfrak{g},e)} M)^\dagger = I(M)$ for any irreducible left $U(\mathfrak{g},e)$-module $M$; here $\geq$ is the usual dominance ordering on partitions. Using Theorem~\ref{labels}, Corollary~\ref{jcor} and (\ref{josephs}), it is a purely combinatorial exercise to check that this map sends the subset $$ \operatorname{Prim}_{hw} U(\mathfrak{g},e):= \{{\operatorname{Ann}}_{U(\mathfrak{g},e)} L(A,e)\:|\:\text{$A$ a $\pi$-tableau}\} \subseteq \operatorname{Prim} U(\mathfrak{g},e) $$ of {\em highest weight} primitive ideals surjectively onto $\bigcup_{\mu \geq {\lambda}} \operatorname{Prim}_\mu U(\mathfrak{g})$, hence Losev's map $?^\dagger$ is surjective. We conjecture that it is also injective (in Cartan type $A$). Combined with the preceding observations, this conjecture would imply that $\operatorname{Prim} U(\mathfrak{g},e) = \operatorname{Prim}_{hw} U(\mathfrak{g},e)$ and moreover \begin{equation} {\operatorname{Ann}}_{U(\mathfrak{g},e)} L(A,e) = {\operatorname{Ann}}_{U(\mathfrak{g},e)} L(B,e) \quad\Leftrightarrow\quad Q(\rho(A)) \sim Q(\rho(B)).
\end{equation} This would give a classification of $\operatorname{Prim}U(\mathfrak{g},e)$ exactly in the spirit of the Duflo-Joseph classification of ${\operatorname{Prim}U(\mathfrak{g})}$ from (\ref{josephs}). \end{Remark} Now we turn our attention to deriving some basic properties of the coinvariant Whittaker functor from (\ref{coinv}). This functor has its origins in the work of Kostant and Lynch (see e.g. \cite[$\S$3.8]{K} and \cite[ch.4]{Ly}) though we give a self-contained treatment here. \begin{Lemma}\label{fin} The functor $H_0(\mathfrak{m}_\chi,?)$ sends right $U(\mathfrak{g})$-modules that are finitely generated over $\mathfrak{m}$ to finite dimensional right $U(\mathfrak{g},e)$-modules. \end{Lemma} \begin{proof} Obvious from the definition (\ref{coinv}). \end{proof} \begin{Lemma}\label{bronson} For any right $U(\mathfrak{p})$-module $V$, $H_0(\mathfrak{m}_\chi, V \otimes_{U(\mathfrak{p})} U(\mathfrak{g}))$ is isomorphic to the restriction of $V$ to $U(\mathfrak{g},e)$. \end{Lemma} \begin{proof} By the PBW theorem, $V \otimes_{U(\mathfrak{p})} U(\mathfrak{g}) \cong V \otimes U(\mathfrak{m})$ as a right $U(\mathfrak{m})$-module. It follows easily that the map $V \rightarrow H_0(\mathfrak{m}_\chi, V \otimes_{U(\mathfrak{p})} U(\mathfrak{g}))$ sending $v$ to the image of $v \otimes 1$ is a vector space isomorphism. For $u \in U(\mathfrak{g},e)$, this map sends $vu$ to the image of $vu \otimes 1$, which is the same as the image of $(v \otimes 1)u$. Hence our map is a homomorphism of right $U(\mathfrak{g},e)$-modules. \end{proof} Given a vector space $M$, let $M^*$ be the full linear dual ${\operatorname{Hom}}_{{\mathbb C}}(M,{\mathbb C})$, and denote the annihilator in $M^*$ of a subspace $N \leq M$ by $N^\circ$ (which is of course canonically isomorphic to $(M / N)^*$). If $M$ is a left module over an associative algebra $A$, then $M^*$ is naturally a right module with action $(fa)(v) := f(av)$ for $f \in M^*, a \in A$ and $v \in M$. 
Similarly if $M$ is a right module then $M^*$ is a left module with action $(af)(v) = f(va)$. For a right $U(\mathfrak{m})$-module $M$, its {\em $\mathfrak{m}_\chi$-restricted dual} $M^\#$ is defined from \begin{equation}\label{hash} M^\# := \bigcup_{i \geq 0} (M \mathfrak{m}_\chi^i)^\circ \subseteq M^*. \end{equation} This gives a functor $?^\#$ from $\operatorname{mod-\!} U(\mathfrak{m})$ to vector spaces. \begin{Lemma}\label{exact} The functor $?^\#$ is exact. \end{Lemma} \begin{proof} Let $I_\chi$ be the two-sided ideal of $U(\mathfrak{m})$ generated by $\mathfrak{m}_\chi$. The subspace $(I_\chi^i)^\circ$ of $U(\mathfrak{m})^*$ is naturally a right $U(\mathfrak{m})$-module with action $(fx)(y) = f(xy)$. For any right $U(\mathfrak{m})$-module $M$, we claim that the linear map $$ \theta:{\operatorname{Hom}}_{\mathfrak{m}}(M, (I_\chi^i)^\circ) \rightarrow (M \mathfrak{m}_\chi^i)^\circ,\quad f \mapsto {\operatorname{ev}} \circ f $$ is an isomorphism, where ${\operatorname{ev}}:U(\mathfrak{m})^* \rightarrow {\mathbb C}$ is evaluation at $1$. To see this, take $f \in {\operatorname{Hom}}_{\mathfrak{m}}(M, (I_\chi^i)^\circ)$ and observe that $\theta(f)$ annihilates $M \mathfrak{m}_\chi^i$; indeed, $$ ({\operatorname{ev}}\circ f)(v x) = f(vx)(1) = (f(v) x)(1) = f(v)(x) = 0 $$ for $v \in M$ and $x \in \mathfrak{m}_\chi^i$. Hence the map makes sense. To prove that it is an isomorphism, construct a two-sided inverse ${\varphi}:(M \mathfrak{m}_\chi^i)^\circ \rightarrow {\operatorname{Hom}}_{\mathfrak{m}}(M, (I_\chi^i)^\circ)$ by defining ${\varphi}(g) \in {\operatorname{Hom}}_{\mathfrak{m}}(M, (I_\chi^i)^\circ)$ for $g \in (M \mathfrak{m}_\chi^i)^\circ$ from ${\varphi}(g)(v)(u) := g(vu)$ for $v \in M$ and $u \in U(\mathfrak{m})$. Now let $E_\chi := \bigcup_{i \geq 0} (I_\chi^i)^\circ$, the space of all $f: U(\mathfrak{m})\rightarrow {\mathbb C}$ which annihilate $I_\chi^i$ for sufficiently large $i$.
The result from the previous paragraph taken for all $i$ gives us a natural isomorphism $$ {\operatorname{Hom}}_{\mathfrak{m}}(M, E_\chi) =\bigcup_{i \geq 0} {\operatorname{Hom}}_{\mathfrak{m}}(M, (I_\chi^i)^\circ) \stackrel{\sim}{\rightarrow} \bigcup_{i \geq 0} (M \mathfrak{m}_\chi^i)^\circ = M^\#, \quad f \mapsto {\operatorname{ev}} \circ f $$ for every right $U(\mathfrak{m})$-module $M$. Hence the functors $?^\#$ and ${\operatorname{Hom}}_{\mathfrak{m}}(?, E_\chi)$ are isomorphic. The latter functor is exact because $E_\chi$ is an injective right $U(\mathfrak{m})$-module; see \cite[Assertion 2]{Skryabin}. \end{proof} Now suppose that $M$ is a right $U(\mathfrak{g})$-module. We observe that the subspace $M^\#$ of $M^*$ from (\ref{hash}) is actually a left $U(\mathfrak{g})$-submodule belonging to the category $(U(\mathfrak{g}), \mathfrak{m}_\chi)\operatorname{\!-mod}$. So we can view $?^\#$ as an exact functor from $\operatorname{mod-\!} U(\mathfrak{g})$ to $(U(\mathfrak{g}),\mathfrak{m}_\chi)\operatorname{\!-mod}$. \begin{Lemma}\label{mainlem} For any right $U(\mathfrak{g})$-module $M$, we have that $$ H^0(\mathfrak{m}_\chi, M^\#) = H^0(\mathfrak{m}_\chi, M^*) = (M \mathfrak{m}_\chi)^\circ $$ as subspaces of $M^*$. Moreover there is a natural isomorphism of left $U(\mathfrak{g},e)$-modules $(M \mathfrak{m}_\chi)^\circ \cong H_0(\mathfrak{m}_\chi, M)^*.$ \end{Lemma} \begin{proof} For the first statement, we observe that \begin{align*} H^0(\mathfrak{m}_\chi, M^*) &= \{f \in M^*\:|\:xf = 0\text{ for all }x \in \mathfrak{m}_\chi\}\\ &= \{f \in M^*\:|\:(xf)(v) = 0\text{ for all }x \in \mathfrak{m}_\chi, v \in M\}\\ &= \{f \in M^*\:|\:f(vx) = 0\text{ for all }v \in M, x \in \mathfrak{m}_\chi\}\\ &= \{f \in M^*\:|\:f(v) = 0\text{ for all }v \in M\mathfrak{m}_\chi\} = (M \mathfrak{m}_\chi)^\circ. 
\end{align*} We get that $(M \mathfrak{m}_\chi)^\circ = H^0(\mathfrak{m}_\chi, M^\#)$ too since there are obviously inclusions $(M \mathfrak{m}_\chi)^\circ \subseteq H^0(\mathfrak{m}_\chi, M^\#) \subseteq H^0(\mathfrak{m}_\chi, M^*)$. Then for the second isomorphism just use the usual natural isomorphism $(M \mathfrak{m}_\chi)^\circ \cong (M / M \mathfrak{m}_\chi)^*$. \end{proof} \begin{Theorem}\label{altdef} There are natural isomorphisms of right $U(\mathfrak{g},e)$-modules $$ H^0(\mathfrak{m}_\chi, M^\#)^*\cong H_0(\mathfrak{m}_\chi, M) \cong H^0(\mathfrak{m}_\chi,M^*)^* $$ for any right $U(\mathfrak{g})$-module $M$ that is finitely generated over $\mathfrak{m}$. \end{Theorem} \begin{proof} Take the duals of the isomorphisms $$ H^0(\mathfrak{m}_\chi, M^\#) \cong H_0(\mathfrak{m}_\chi, M)^*\cong H^0(\mathfrak{m}_\chi, M^*)$$ from Lemma~\ref{mainlem} and note that $(H_0(\mathfrak{m}_\chi, M)^*)^* \cong H_0(\mathfrak{m}_\chi, M)$ since $H_0(\mathfrak{m}_\chi, M)$ is finite dimensional by Lemma~\ref{fin}. \end{proof} The following corollary is equivalent to \cite[Lemma 4.6]{Ly} (attributed there to N. Wallach). \begin{Corollary}\label{lynchc} The functor $H_0(\mathfrak{m}_\chi, ?)$ sends short exact sequences of right $U(\mathfrak{g})$-modules that are finitely generated over $\mathfrak{m}$ to short exact sequences of finite dimensional right $U(\mathfrak{g},e)$-modules. \end{Corollary} \begin{proof} In view of Theorem~\ref{altdef} it suffices to show that the functor $H^0(\mathfrak{m}_\chi, ?^\#)^*$ is exact. This is clear as it is a composition of three exact functors: the functor $?^\#:\operatorname{mod-\!} U(\mathfrak{g}) \rightarrow (U(\mathfrak{g}),\mathfrak{m}_\chi)\operatorname{\!-mod}$ which is exact by Lemma~\ref{exact}, then the functor $H^0(\mathfrak{m}_\chi, ?):(U(\mathfrak{g}),\mathfrak{m}_\chi)\operatorname{\!-mod} \rightarrow U(\mathfrak{g},e)\operatorname{\!-mod}$ which is exact as it is an equivalence of categories by Skryabin's theorem, then the duality $?^*$.
\end{proof} \section{Irreducible standard modules and induced primitive ideals}\label{sm} Continuing with our fixed pyramid $\pi$, we define {\em column-separated} $\pi$-tableaux in exactly the same way as was done in the introduction in the left-justified case. The following theorem explains the significance of this notion from a representation theoretic perspective. (We point out that there is a typo in the definition of ``separated'' in \cite{BKrep} in which the inequalities $r < s$ and $r > s$ are the wrong way round.) \begin{Theorem}[{\cite[Theorem 8.25]{BKrep}}]\label{sep2} For a column-strict $\pi$-tableau $A$, the standard module $V(A,e)$ is irreducible if and only if $A$ is column-separated, in which case $V(A,e) \cong L(A,e)$. \end{Theorem} In the rest of the section we are going to apply this to deduce (a slight generalization of) the first equality in Theorem~\ref{sep}; see Theorem~\ref{msup} below. \begin{Lemma}\label{dizz} Let $M$ be a right $U(\mathfrak{g})$-module that is free as a $U(\mathfrak{m})$-module. Then ${\operatorname{Ann}}_{U(\mathfrak{g})} M = {\operatorname{Ann}}_{U(\mathfrak{g})}(M^\#)$, where $M^\#$ is the left $U(\mathfrak{g})$-module defined in the previous section. \end{Lemma} \begin{proof} Take $u \in {\operatorname{Ann}}_{U(\mathfrak{g})} M$ and $f \in M^\#$. Then $(uf)(v) = f(vu) = 0$ for every $v \in M$, so $uf = 0$. This shows that ${\operatorname{Ann}}_{U(\mathfrak{g})}M \subseteq {\operatorname{Ann}}_{U(\mathfrak{g})}(M^\#)$. Conversely, by the definition (\ref{hash}), we have that $$ {\operatorname{Ann}}_{U(\mathfrak{g})} (M^\#) = \bigcap_{i \geq 0} {\operatorname{Ann}}_{U(\mathfrak{g})} (M \mathfrak{m}_\chi^i)^\circ. $$ So any $u \in {\operatorname{Ann}}_{U(\mathfrak{g})} (M^\#)$ satisfies $f(vu)=(uf)(v) = 0$ for all $i \geq 0$, $f \in (M \mathfrak{m}_\chi^i)^\circ$ and $v\in M$. This implies for any $v \in M$ that $vu \in M \mathfrak{m}_\chi^i$ for every $i \geq 0$.
It remains to observe that $\bigcap_{i \geq 0} M \mathfrak{m}_\chi^i = \hbox{\boldmath{$0$}}$. To see this, it suffices in view of the assumption that $M$ is a free $U(\mathfrak{m})$-module to check that $\bigcap_{i \geq 0} U(\mathfrak{m}) \mathfrak{m}_\chi^i = \hbox{\boldmath{$0$}}$. Twisting by the automorphism of $U(\mathfrak{m})$ sending $x \in \mathfrak{m}$ to $x + \chi(x)$, this is equivalent to the statement $\bigcap_{i \geq 0} U(\mathfrak{m}) \mathfrak{m}^i = \hbox{\boldmath{$0$}}$, which is easy to see by considering the (strictly negative) grading on $\mathfrak{m}$. \end{proof} \begin{Lemma} Let $V$ be a finite dimensional left $U(\mathfrak{p})$-module and $V^*$ be the dual right $U(\mathfrak{p})$-module as in the previous section. Then $$ (V^* \otimes_{U(\mathfrak{p})} U(\mathfrak{g}))^\# \cong S_\chi(V) $$ as left $U(\mathfrak{g})$-modules. (On the right hand side we are viewing $V$ as a left $U(\mathfrak{g},e)$-module by the natural restriction.) \end{Lemma} \begin{proof} Both modules belong to the category $(U(\mathfrak{g}),\mathfrak{m}_\chi)\operatorname{\!-mod}$. So by Skryabin's equivalence of categories, it suffices to show that $$ H^0(\mathfrak{m}_\chi, (V^* \otimes_{U(\mathfrak{p})} U(\mathfrak{g}))^\#) \cong V $$ as left $U(\mathfrak{g},e)$-modules. By Lemma~\ref{mainlem}, we have that $$ H^0(\mathfrak{m}_\chi, (V^* \otimes_{U(\mathfrak{p})} U(\mathfrak{g}))^\#) \cong H_0(\mathfrak{m}_\chi, V^* \otimes_{U(\mathfrak{p})} U(\mathfrak{g}))^*. $$ It remains to observe by Lemma~\ref{bronson} that $H_0(\mathfrak{m}_\chi, V^* \otimes_{U(\mathfrak{p})} U(\mathfrak{g})) \cong V^*$, hence $H_0(\mathfrak{m}_\chi, V^* \otimes_{U(\mathfrak{p})} U(\mathfrak{g}))^* \cong V$ as $V$ is finite dimensional. \end{proof} Let $A$ be a column-strict $\pi$-tableau. 
Recall the weight $\gamma(A)$ from (\ref{gammadef}) and the subsequent definition of the standard module $V(A,e)$; it is the restriction of the left $U(\mathfrak{p})$-module $V(A)$ to the subalgebra $U(\mathfrak{g},e)$. \begin{Lemma}\label{mainid} For any column-strict $\pi$-tableau $A$, we have that \begin{equation}\label{mainidf} {\operatorname{Ann}}_{U(\mathfrak{g})} (V(A)^*\otimes_{U(\mathfrak{p})} U(\mathfrak{g})) = I(V(A,e)). \end{equation} \end{Lemma} \begin{proof} This is a consequence of the previous two lemmas and the definition (\ref{il}). \end{proof} It is a bit awkward at this point that the module on the left hand side of (\ref{mainidf}) is a right module. We will get around this by twisting with a suitable anti-automorphism, at the price of a shift by the special weight $\beta$ from (\ref{betadef}) (and some temporary notational issues). Observe that $\beta$ extends uniquely to a character of $\mathfrak{p}$. Let ${\mathbb C}_{\beta}$ be the corresponding one dimensional left $U(\mathfrak{p})$-module. We need to work momentarily with a different pyramid $\pi^t$ associated to the transpose $\sigma^t$ of the shift matrix $\sigma$; in other words $\pi^t$ is obtained from $\pi$ by reversing the order of the columns. 
For example if \begin{equation} \pi=\begin{picture}(39,20) \put(3,-16){\line(0,1){24}} \put(15,-16){\line(0,1){36}} \put(27,-16){\line(0,1){36}} \put(39,-16){\line(0,1){12}} \put(3,-16){\line(1,0){36}} \put(3,-4){\line(1,0){36}} \put(3,8){\line(1,0){24}} \put(15,20){\line(1,0){12}} \put(9,2){\makebox(0,0){$1$}} \put(9,-10){\makebox(0,0){$2$}} \put(21,14){\makebox(0,0){$3$}} \put(21,2){\makebox(0,0){$4$}} \put(21,-10){\makebox(0,0){$5$}} \put(33,-10){\makebox(0,0){$6$}} \end{picture} \qquad\text{then}\qquad \pi^t=\begin{picture}(39,20) \put(3,-16){\line(0,1){12}} \put(15,-16){\line(0,1){36}} \put(27,-16){\line(0,1){36}} \put(39,-16){\line(0,1){24}} \put(3,-16){\line(1,0){36}} \put(3,-4){\line(1,0){36}} \put(15,8){\line(1,0){24}} \put(27,20){\line(-1,0){12}} \put(9,-10){\makebox(0,0){$1$}} \put(21,14){\makebox(0,0){$2$}} \put(21,2){\makebox(0,0){$3$}} \put(21,-10){\makebox(0,0){$4$}} \put(33,-10){\makebox(0,0){$6$}} \put(33,2){\makebox(0,0){$5$}} \end{picture}\,.\label{eg} \end{equation} \vspace{0.5mm} \noindent Let $\mathfrak{p}^t$ (resp.\ $e^t$, resp.\ $U(\mathfrak{g},e^t)$) be defined in the same way as $\mathfrak{p}$ (resp.\ $e$, resp.\ $U(\mathfrak{g},e)$) but starting from the pyramid $\pi^t$ instead of $\pi$. If $A$ is any $\pi$-tableau, we obtain a $\pi^t$-tableau $A^t$ by reversing the order of the columns again. It makes sense to talk about $V(A^t)$, $V(A^t,e^t)$ and $L(A^t,e^t)$, which are $U(\mathfrak{p}^t)$- and $U(\mathfrak{g},e^t)$-modules. Now we define the appropriate anti-automorphism. As usual label the boxes of $\pi$ in order down columns starting from the leftmost column. Let $i'$ be the entry in the $i$th box of the tableau obtained by writing the numbers $1,\dots,N$ into the boxes of $\pi$ working in order down columns starting from the rightmost column; for example, in the situation of (\ref{eg}) we have that $1'=5,2'=6,3'=2,4'=3,5'=4,6'=1$. 
Let $t:U(\mathfrak{g}) \rightarrow U(\mathfrak{g})$ be the anti-automorphism with $t(e_{i,j}) = e_{j', i'}$. Then we have that $t(e) = e^t$ and $t(\mathfrak{p}) = \mathfrak{p}^t$, so $t$ restricts to an anti-isomorphism $t:U(\mathfrak{p}) \stackrel{\sim}{\rightarrow} U(\mathfrak{p}^t)$. \begin{Lemma}\label{booby1} Suppose that $A$ is a column-strict $\pi$-tableau, so that $A^t$ is a column-strict $\pi^t$-tableau. The pull-back $t^*(V(A^t)^*)$ of the right $U(\mathfrak{p}^t)$-module $V(A^t)^*$ is a left $U(\mathfrak{p})$-module isomorphic to ${\mathbb C}_\beta \otimes V(A)$. Hence we have that \begin{equation}\label{pv0} t^*(V(A^t)^* \otimes_{U(\mathfrak{p}^t)} U(\mathfrak{g})) \cong U(\mathfrak{g}) \otimes_{U(\mathfrak{p})} ({\mathbb C}_\beta \otimes V(A)) \end{equation} as left $U(\mathfrak{g})$-modules. \end{Lemma} \begin{proof} Suppose that $M$ is a finite dimensional left $U(\mathfrak{p}^t)$-module and we are given an isomorphism of left $U(\mathfrak{p})$-modules $\theta:K \rightarrow t^*(M^*)$. Then it is clear that the map $U(\mathfrak{g}) \otimes_{U(\mathfrak{p})} K \rightarrow t^*(M^*\otimes_{U(\mathfrak{p}^t)} U(\mathfrak{g})), u \otimes v \mapsto \theta(v) \otimes t(u)$ is an isomorphism. So the second part of the lemma follows from the first part. The first part is a routine exercise in highest weight theory. \iffalse We just explain this in the situation of (\ref{eg}). Suppose $A$ is a column-strict $\pi$-tableau with $\gamma(A) = a_1{\varepsilon}_1+a_2{\varepsilon}_2+a_3{\varepsilon}_3+a_4{\varepsilon}_4+a_5{\varepsilon}_5+a_6{\varepsilon}_6$. Then $\gamma(A^t) = a_6 {\varepsilon}_1 + a_3 {\varepsilon}_2+a_4 {\varepsilon}_3+a_5 {\varepsilon}_4 + a_1 {\varepsilon}_5+a_2 {\varepsilon}_6$. Also $\beta = -4{\varepsilon}_1-4{\varepsilon}_2+{\varepsilon}_3+{\varepsilon}_4+{\varepsilon}_5-t{\varepsilon}_6$ and $\beta^t = -5 {\varepsilon}_1-{\varepsilon}_2-{\varepsilon}_3-{\varepsilon}_4+4{\varepsilon}_5+4{\varepsilon}_6$.
Hence $\gamma(A^t)-\beta^t-\rho$, the highest weight of $V(A^t)$ hence $V(A^t)^*$, is equal to $(a_6+6){\varepsilon}_1+(a_3+3){\varepsilon}_2+ (a_4+4){\varepsilon}_3+ (a_5+5){\varepsilon}_4+(a_1+1){\varepsilon}_5+(a_2+2){\varepsilon}_6$. Pulling back through $t^*$ we get a finite dimensional irreducible $U(\mathfrak{p})$-module of highest weight $(a_1+1){\varepsilon}_1+(a_2+2){\varepsilon}_2+(a_3+3){\varepsilon}_3+(a_4+4){\varepsilon}_4+(a_5+5){\varepsilon}_5+(a_6+6){\varepsilon}_6 = \gamma(A) - \rho$, which is the highest weight of ${\mathbb C}_\beta \otimes V(A)$. \fi \end{proof} The module on the right hand side of (\ref{pv0}) is a parabolic Verma module attached to the parabolic $\mathfrak{p}$ in the usual sense. Let us give it a special name: for a column-strict $\pi$-tableau $A$ we set \begin{equation}\label{pv} M(A) := U(\mathfrak{g}) \otimes_{U(\mathfrak{p})} ({\mathbb C}_\beta \otimes V(A)). \end{equation} This module has irreducible head \begin{equation}\label{pv2} L(A) := M(A) / \operatorname{rad} M(A). \end{equation} As $V(A)$ has highest weight $\gamma(A) - \beta - \rho$, $L(A)$ is the usual irreducible highest weight module $L(\gamma(A))$ of highest weight $\gamma(A) - \rho$. \begin{Theorem}\label{msup} If $A$ is a column-separated $\pi$-tableau then $$ I(L(A,e)) = {\operatorname{Ann}}_{U(\mathfrak{g})} M(A). $$ \end{Theorem} \begin{proof} We need to work with the finite $W$-algebra $U(\mathfrak{g}, e^t)$, notation as introduced just before Lemma~\ref{booby1}. Let $A$ be a column-separated $\pi$-tableau. Then $A^t$ is a column-separated $\pi^t$-tableau, so $V(A^t,e^t) \cong L(A^t,e^t)$ by Theorem~\ref{sep2}. By Lemma~\ref{mainid} (for $\pi^t$ rather than $\pi$) we get that $$ I(L(A^t,e^t)) = {\operatorname{Ann}}_{U(\mathfrak{g})} (V(A^t)^* \otimes_{U(\mathfrak{p}^t)} U(\mathfrak{g})). $$ Note that $Q(\rho(A)) \sim Q(\rho(A^t))$ by Lemma~\ref{tr}, hence $I(L(A,e)) = I(L(A^t,e^t))$ by Theorem~\ref{labels} and (\ref{josephs}).
Also Lemma~\ref{booby1} implies that $$ {\operatorname{Ann}}_{U(\mathfrak{g})} (V(A^t)^* \otimes_{U(\mathfrak{p}^t)} U(\mathfrak{g})) = t({\operatorname{Ann}}_{U(\mathfrak{g})} M(A)).$$ So we have established that $ I(L(A,e)) = t({\operatorname{Ann}}_{U(\mathfrak{g})} M(A))$ or equivalently $$ t^{-1}(I(L(A,e))) = {\operatorname{Ann}}_{U(\mathfrak{g})} M(A). $$ It remains to observe for any $I \in {\operatorname{Prim}U(\mathfrak{g})}$ that $t^{-1}(I) = I$; this follows from \cite[5.2(2)]{Je} on noting that $t^{-1}$ is equal to the usual Chevalley anti-automorphism up to composing with an inner automorphism. \end{proof} \section{Irreducible modules and Whittaker coinvariants}\label{sco} In this section we recall the construction of the finite dimensional irreducible left $U(\mathfrak{g},e)$-modules from \cite[$\S$8.5]{BKrep} by taking Whittaker coinvariants in certain irreducible highest weight modules for $\mathfrak{g}$. Before we can begin, we need to modify the definition (\ref{coinv}), since we want now to use the coinvariant Whittaker functor in the context of left modules. Actually both of the definitions (\ref{inv})--(\ref{coinv}) are rather asymmetric with respect to left and right modules. The reason for this goes back to the original definition of the finite $W$-algebra from (\ref{fw}): one could just as naturally consider \begin{equation}\label{fw2} \overline{U}(\mathfrak{g},e) := \{u \in U(\mathfrak{p})\:|\: u \mathfrak{m}_\chi \subseteq \mathfrak{m}_\chi U(\mathfrak{g})\}. \end{equation} We call this the {\em opposite finite $W$-algebra} since there is an {\em anti-isomorphism} between $\overline{U}(\mathfrak{g},e)$ and $U(\mathfrak{g},e)$. More precisely, let $U(\mathfrak{g},-e)$ be defined exactly as in (\ref{fw}) but with $e$ replaced by $-e$ (hence $\chi$ replaced by $-\chi$). 
The antipode $S:U(\mathfrak{g}) \rightarrow U(\mathfrak{g})$ sending $x \mapsto -x$ for each $x \in \mathfrak{g}$ obviously sends $\overline{U}(\mathfrak{g},e)$ to $U(\mathfrak{g},-e)$, and then $U(\mathfrak{g},-e)$ is isomorphic to $U(\mathfrak{g},e)$ since $-e$ is conjugate to $e$. Composing, we get an anti-isomorphism $\overline{U}(\mathfrak{g},e) \stackrel{\sim}{\rightarrow} U(\mathfrak{g},e)$. Using this anti-isomorphism, it is rather routine to deduce opposite versions of most of the results in $\S$\ref{swhitt} with $U(\mathfrak{g},e)$ replaced by $\overline{U}(\mathfrak{g},e)$. For example, the opposite versions of the functors (\ref{inv})--(\ref{coinv}) are functors \begin{align}\label{bup} \overline{H}^0(\mathfrak{m}_\chi,?):\operatorname{mod-\!} U(\mathfrak{g}) &\rightarrow \operatorname{mod-\!}\overline{U}(\mathfrak{g},e), &M &\mapsto \{v \in M\:|\:v \mathfrak{m}_\chi = \hbox{\boldmath{$0$}}\},\\ \overline{H}_0(\mathfrak{m}_\chi,?):U(\mathfrak{g})\operatorname{\!-mod} &\rightarrow \overline{U}(\mathfrak{g},e)\operatorname{\!-mod}, &M &\mapsto M / \mathfrak{m}_\chi M.\label{bdown} \end{align} The first of these functors gives an equivalence between $\operatorname{mod-\!} (U(\mathfrak{g}),\mathfrak{m}_\chi)$ and $\operatorname{mod-\!} \overline{U}(\mathfrak{g},e)$, where $\operatorname{mod-\!} (U(\mathfrak{g}),\mathfrak{m}_\chi)$ is the full subcategory of $\operatorname{mod-\!} U(\mathfrak{g})$ consisting of all modules that are locally nilpotent over $\mathfrak{m}_\chi$ (the opposite version of Skryabin's theorem). 
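Note in (\ref{bup}) that the subspace $\overline{H}^0(\mathfrak{m}_\chi, M)$ really is stable under the right action of $\overline{U}(\mathfrak{g},e)$; this is immediate from the definition (\ref{fw2}): if $v \in M$ satisfies $v \mathfrak{m}_\chi = \hbox{\boldmath{$0$}}$ and $u \in \overline{U}(\mathfrak{g},e)$, then $$ (vu)\, \mathfrak{m}_\chi \subseteq v\, \mathfrak{m}_\chi U(\mathfrak{g}) = \hbox{\boldmath{$0$}}, $$ so $vu \in \overline{H}^0(\mathfrak{m}_\chi, M)$ too.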
Defining $\#:U(\mathfrak{g})\operatorname{\!-mod} \rightarrow \operatorname{mod-\!} (U(\mathfrak{g}),\mathfrak{m}_\chi)$ in the opposite way to that in $\S$\ref{swhitt}, the second of these functors satisfies \begin{equation}\label{duzz} \overline{H}_0(\mathfrak{m}_\chi, M) \cong \overline{H}^0(\mathfrak{m}_\chi, M^\#)^* \end{equation} for any left $U(\mathfrak{g})$-module $M$ that is finitely generated over $\mathfrak{m}$ (the opposite version of Theorem~\ref{altdef}). Less obviously, there is also a canonical {\em isomorphism} between $U(\mathfrak{g},e)$ and $\overline{U}(\mathfrak{g},e)$. To record this, recall that the weight $\beta$ from (\ref{betadef}) extends uniquely to a character of $\mathfrak{p}$. The following theorem was proved originally (in Cartan type $A$ only) by explicit computation in \cite[Lemma 3.1]{BKrep}, but we cite instead a more conceptual proof found subsequently (which is valid in all Cartan types). \begin{Theorem}[{\cite[Corollary 2.9]{BGK}}]\label{twist} The automorphisms $S_{\pm\beta}:U(\mathfrak{p}) \rightarrow U(\mathfrak{p})$ sending $x \in \mathfrak{p}$ to $x \pm \beta(x)$ restrict to mutually inverse isomorphisms $$ S_\beta:\overline{U}(\mathfrak{g},e)\stackrel{\sim}{\rightarrow} U(\mathfrak{g},e),\qquad S_{-\beta}:U(\mathfrak{g},e)\stackrel{\sim}{\rightarrow} \overline{U}(\mathfrak{g},e). $$ \end{Theorem} \iffalse \begin{Remark}\label{rem}\rm Actually in \cite{BKrep} we worked with a slightly different subalgebra $W(\pi)$ of $U(\mathfrak{p})$ such that $S_\eta(W(\pi)) = U(\mathfrak{g},e)$ and $S_{\overline{\eta}}(W(\pi)) = \overline{U}(\mathfrak{g},e)$ for certain automorphisms $S_\eta, S_{\overline{\eta}}$ of $U(\mathfrak{p})$ induced by weights $\eta, \overline{\eta} \in \mathfrak{t}^*$ with $\eta - \overline{\eta} = \beta$; see \cite[(3.7), (3.23)]{BKrep}.
\end{Remark} \fi We get an isomorphism of categories $S_{-\beta}^*:\overline{U}(\mathfrak{g},e)\operatorname{\!-mod} \rightarrow U(\mathfrak{g},e)\operatorname{\!-mod}$ by pulling back the action through $S_{-\beta}$. Composing with $S_{-\beta}^*$, we will always from now on view the functors (\ref{bup})--(\ref{bdown}) as functors \begin{align}\label{bup2} \overline{H}^0(\mathfrak{m}_\chi,?):\operatorname{mod-\!} U(\mathfrak{g}) &\rightarrow \operatorname{mod-\!} U(\mathfrak{g},e),\\ \overline{H}_0(\mathfrak{m}_\chi,?):U(\mathfrak{g})\operatorname{\!-mod} &\rightarrow U(\mathfrak{g},e)\operatorname{\!-mod}.\label{bdown2} \end{align} Of course we are abusing notation here, but we won't mention $\overline{U}(\mathfrak{g},e)$ again so there should be no confusion. Now let $\mathcal O_\pi$ be the parabolic category $\mathcal O$ consisting of finitely generated $\mathfrak{g}$-modules that are locally finite over $\mathfrak{p}$ and semisimple over $\mathfrak{h}$. The basic objects in $\mathcal O_\pi$ are the parabolic Verma modules $M(A)$ and their irreducible quotients $L(A)$ from (\ref{pv})--(\ref{pv2}). Recall that both of these modules are of highest weight $\gamma(A)-\rho$. \begin{Lemma}\label{exactagain} The restriction of the functor $\overline{H}_0(\mathfrak{m}_\chi,?)$ to $\mathcal{O}_\pi$ is exact and it sends modules in $\mathcal{O}_\pi$ to finite dimensional left $U(\mathfrak{g},e)$-modules. \end{Lemma} \begin{proof} Every module in $\mathcal O_\pi$ has a composition series with composition factors of the form $L(A)$ for various column-strict $\pi$-tableaux $A$. Since $L(A)$ is a quotient of $M(A)$ it is clearly finitely generated as an $\mathfrak{m}$-module. Hence every object in $\mathcal O_\pi$ is finitely generated over $\mathfrak{m}$ and we are done by the opposite version of Corollary~\ref{lynchc}.
\end{proof} \begin{Lemma}\label{gform} For a column-strict $\pi$-tableau $A$, we have that \begin{equation*} \overline{H}_0(\mathfrak{m}_\chi, M(A)) \cong V(A,e) \end{equation*} as left $U(\mathfrak{g},e)$-modules. \end{Lemma} \begin{proof} By the definition of $M(A)$ and the opposite version of Lemma~\ref{bronson}, we have that $\overline{H}_0(\mathfrak{m}_\chi, M(A)) \cong S_{-\beta}^*({\mathbb C}_\beta\otimes V(A,e)) \cong V(A,e)$. \end{proof} Call a $\pi$-tableau $A$ {\em semi-standard} if it is column-strict and $\gamma(A) \in \mathfrak{t}^*_\lambda$, i.e. $Q(\gamma(A))$ has shape $\lambda$. In the left-justified case, it is an easy exercise to check that $A$ is semi-standard if and only if $A$ is both column-strict and row-standard, which hopefully justifies our choice of language. In other cases the semi-standard $\pi$-tableaux are harder to characterize from a combinatorial point of view. For example, here are all the semi-standard $\pi$-tableaux for one particular $\pi$ with entries $1,2,3,3,4,4$: \begin{align*} \phantom{Q_\pi(\gamma())} A &= \begin{picture}(39,22) \put(3,-16){\line(1,0){36}} \put(3,-4){\line(1,0){36}} \put(3,8){\line(1,0){24}} \put(15,20){\line(1,0){12}} \put(3,-16){\line(0,1){24}} \put(15,-16){\line(0,1){36}} \put(27,-16){\line(0,1){36}} \put(39,-16){\line(0,1){12}} \put(9,-10){\makebox(0,0){$2$}} \put(21,-10){\makebox(0,0){$1$}} \put(33,-10){\makebox(0,0){$4$}} \put(9,2){\makebox(0,0){$3$}} \put(21,2){\makebox(0,0){$3$}} \put(21,14){\makebox(0,0){$4$}} \end{picture} & \phantom{Q_\pi(\gamma())} B &= \begin{picture}(39,22) \put(3,-16){\line(1,0){36}} \put(3,-4){\line(1,0){36}} \put(3,8){\line(1,0){24}} \put(15,20){\line(1,0){12}} \put(3,-16){\line(0,1){24}} \put(15,-16){\line(0,1){36}} \put(27,-16){\line(0,1){36}} \put(39,-16){\line(0,1){12}} \put(9,-10){\makebox(0,0){$2$}} \put(21,-10){\makebox(0,0){$1$}} \put(33,-10){\makebox(0,0){$3$}} \put(9,2){\makebox(0,0){$4$}} \put(21,2){\makebox(0,0){$3$}} \put(21,14){\makebox(0,0){$4$}} \end{picture}
& \phantom{Q_\pi(\gamma())} C &= \begin{picture}(39,22) \put(3,-16){\line(1,0){36}} \put(3,-4){\line(1,0){36}} \put(3,8){\line(1,0){24}} \put(15,20){\line(1,0){12}} \put(3,-16){\line(0,1){24}} \put(15,-16){\line(0,1){36}} \put(27,-16){\line(0,1){36}} \put(39,-16){\line(0,1){12}} \put(9,-10){\makebox(0,0){$1$}} \put(21,-10){\makebox(0,0){$2$}} \put(33,-10){\makebox(0,0){$4$}} \put(9,2){\makebox(0,0){$3$}} \put(21,2){\makebox(0,0){$3$}} \put(21,14){\makebox(0,0){$4$}} \end{picture}\:. \end{align*} \vspace{0.5mm} \noindent To illustrate the next lemma, we note for these that \begin{align*} Q_\pi(\gamma(A)) &\sim \begin{picture}(39,22) \put(3,-16){\line(1,0){36}} \put(3,-4){\line(1,0){36}} \put(3,8){\line(1,0){24}} \put(15,20){\line(1,0){12}} \put(3,-16){\line(0,1){24}} \put(15,-16){\line(0,1){36}} \put(27,-16){\line(0,1){36}} \put(39,-16){\line(0,1){12}} \put(9,-10){\makebox(0,0){$3$}} \put(21,-10){\makebox(0,0){$1$}} \put(33,-10){\makebox(0,0){$4$}} \put(9,2){\makebox(0,0){$4$}} \put(21,2){\makebox(0,0){$2$}} \put(21,14){\makebox(0,0){$3$}} \end{picture} &Q_\pi(\gamma(B)) &\sim \begin{picture}(39,22) \put(3,-16){\line(1,0){36}} \put(3,-4){\line(1,0){36}} \put(3,8){\line(1,0){24}} \put(15,20){\line(1,0){12}} \put(3,-16){\line(0,1){24}} \put(15,-16){\line(0,1){36}} \put(27,-16){\line(0,1){36}} \put(39,-16){\line(0,1){12}} \put(9,-10){\makebox(0,0){$3$}} \put(21,-10){\makebox(0,0){$1$}} \put(33,-10){\makebox(0,0){$3$}} \put(9,2){\makebox(0,0){$4$}} \put(21,2){\makebox(0,0){$2$}} \put(21,14){\makebox(0,0){$4$}} \end{picture} & Q_\pi(\gamma(C)) &\sim \begin{picture}(39,22) \put(3,-16){\line(1,0){36}} \put(3,-4){\line(1,0){36}} \put(3,8){\line(1,0){24}} \put(15,20){\line(1,0){12}} \put(3,-16){\line(0,1){24}} \put(15,-16){\line(0,1){36}} \put(27,-16){\line(0,1){36}} \put(39,-16){\line(0,1){12}} \put(9,-10){\makebox(0,0){$1$}} \put(21,-10){\makebox(0,0){$2$}} \put(33,-10){\makebox(0,0){$4$}} \put(9,2){\makebox(0,0){$3$}} \put(21,2){\makebox(0,0){$3$}} 
\put(21,14){\makebox(0,0){$4$}} \end{picture}\:. \end{align*} \vspace{4mm} \noindent Two semi-standard $\pi$-tableaux $A$ and $B$ are {\em parallel}, denoted $A \parallel B$, if one is obtained from the other by a sequence of transpositions of pairs of columns of the same height whose entries lie in different cosets of ${\mathbb C}$ modulo ${\mathbb Z}$. \begin{Lemma}\label{rect} There is a unique map $R$ making the following into a commuting diagram of bijections: \begin{align*} \left\{\begin{array}{l} \text{parallel classes of}\\\text{semi-standard $\pi$-tableaux} \end{array} \right\}\\ &\qquad{\operatorname{Prim}_\lambda U(\mathfrak{g})}.\\ \left\{ \begin{array}{l} \text{row-equivalence classes of}\\\text{column-strict $\pi$-tableaux} \end{array} \right\} \begin{picture}(0,0) \put(11,44){\makebox(0,0){$\searrow$}} \put(37,50){\makebox(0,0){$\scriptstyle[A] \mapsto I(\gamma(A))$}} \put(11,12){\makebox(0,0){$\nearrow$}} \put(37,6){\makebox(0,0){$\scriptstyle[B] \mapsto I(\rho(B))$}} \put(-75,26){\makebox(0,0){${\scriptstyle R} {\Big\downarrow}$}} \end{picture} \end{align*} More explicitly, $R$ maps $[A]$ to $[B]$ where $B$ is any column-strict $\pi$-tableau such that $B \sim Q_\pi(\gamma(A))$. In the special case that $\pi$ is left-justified (when a $\pi$-tableau is semi-standard if and only if it is both column-strict and row-standard) the map $R$ is induced by the natural inclusion of semi-standard $\pi$-tableaux into column-strict $\pi$-tableaux. \end{Lemma} \begin{proof} In \cite[$\S$4.1]{BKrep}, the following purely combinatorial statement is established: there is a well-defined bijection $R$ from parallel classes of semi-standard $\pi$-tableaux to row-equivalence classes of column-strict $\pi$-tableaux sending $[A]$ to $[B]$ where $B \sim Q_\pi(\gamma(A))$. 
To deduce the first part of the lemma from this, note for such $A$ and $B$ that $B \sim Q_\pi(\rho(B))$ by Lemma~\ref{tr}, hence our bijection $R$ sends $[A]$ to $[B]$ where $Q(\gamma(A)) \sim Q(\rho(B))$. In view of (\ref{josephs}) we deduce that the diagram in the statement of the lemma commutes. It remains to observe that the top right map in the diagram is already known to be a bijection, thanks to Corollary~\ref{lbij}, Theorem~\ref{fdc} and Theorem~\ref{labels}. The last statement of the lemma is clear as $Q_\pi(\gamma(A)) \sim A$ in case $\pi$ is left-justified and $A$ is semi-standard. \end{proof} Now we can state (and slightly extend) the main result from \cite[$\S$8.5]{BKrep} which identifies some of the $\overline{H}_0(\mathfrak{m}_\chi, L(A))$'s with $L(B,e)$'s. The equivalences in this theorem originate in work of Irving \cite{I} and proofs in varying degrees of generality can be found in several places in the literature. \begin{Theorem}\label{bigt} Let $A$ be a column-strict $\pi$-tableau. The following conditions are equivalent: \begin{itemize} \item[(1)] $A$ is semi-standard; \item[(2)] the projective cover of $L(A)$ in $\mathcal{O}_\pi$ is self-dual; \item[(3)] $L(A)$ is isomorphic to a submodule of a parabolic Verma module in $\mathcal{O}_\pi$; \item[(4)] ${\operatorname{gkdim}\:} L(A) = \dim \mathfrak{m}$, which is the maximum Gelfand-Kirillov dimension of any module in $\mathcal{O}_\pi$; \item[(5)] ${\operatorname{gkdim}\:} (U(\mathfrak{g}) / {\operatorname{Ann}}_{U(\mathfrak{g})} L(A)) = \dim G\cdot e = 2 \dim \mathfrak{m}$; \item[(6)] the associated variety $\mathcal{V\!A}'({\operatorname{Ann}}_{U(\mathfrak{g})} L(A))$ is the closure of $ G \cdot e$; \item[(7)] the module $\overline{H}_0(\mathfrak{m}_\chi, L(A))$ is non-zero.
\end{itemize} Assuming these conditions hold, we have that \begin{equation*} \overline{H}_0(\mathfrak{m}_\chi, L(A)) \cong L(B,e) \end{equation*} where $B$ is a column-strict $\pi$-tableau with $B \sim Q_\pi(\gamma(A))$, i.e. $[B]$ is the image of $[A]$ under the bijection from Lemma~\ref{rect}. \end{Theorem} \begin{proof} By (\ref{duzz}) and the first paragraph of the proof of \cite[Lemma 8.20]{BKrep}, the restriction of the functor $\overline{H}_0(\mathfrak{m}_\chi, ?)$ to $\mathcal{O}_\pi$ is isomorphic to the restriction of the functor $\mathbb{V}$ defined in \cite[$\S$8.5]{BKrep}. Given this and assuming just that (1) holds, the existence of an isomorphism $\overline{H}_0(\mathfrak{m}_\chi, L(A)) \cong L(B,e)$ follows from \cite[Corollary 8.24]{BKrep}. In particular $\overline{H}_0(\mathfrak{m}_\chi, L(A)) \neq \hbox{\boldmath{$0$}}$, establishing that (1) $\Rightarrow$ (7). (In fact \cite[Corollary 8.24]{BKrep} also proves (7) $\Rightarrow$ (1) but via an argument that uses the Kazhdan-Lusztig conjecture; we will give an alternative argument shortly avoiding that.) The equivalence (1) $\Leftrightarrow$ (6) follows from Corollary~\ref{lbij}, since $L(A) \cong L(\gamma(A))$ and by definition $A$ is semi-standard if and only if $Q(\gamma(A))$ is of shape $\lambda$. The equivalence of (4) $\Leftrightarrow$ (5) follows by standard properties of Gelfand-Kirillov dimension; see \cite[Proposition 2.7]{Joldest}. We refer to \cite[Theorem 4.8]{BKschur} for (1) $\Leftrightarrow$ (2) $\Leftrightarrow$ (3) and postpone (4) until the next paragraph. Note that \cite{BKschur} proves a slightly weaker result (integral weights, left-justified $\pi$) but the argument there extends. It remains to check (5) $\Leftrightarrow$ (6) $\Leftarrow$ (7). We have that $$ {\operatorname{Ann}}_{U(\mathfrak{g})} L(A)\supseteq {\operatorname{Ann}}_{U(\mathfrak{g})} M(A) = {\operatorname{Ann}}_{U(\mathfrak{g})} (M(A)^\#), $$ using the opposite version of Lemma~\ref{dizz}. 
Hence $$ \mathcal{V\!A}'({\operatorname{Ann}}_{U(\mathfrak{g})} L(A))\subseteq \mathcal{V\!A}'({\operatorname{Ann}}_{U(\mathfrak{g})} (M(A)^\#)). $$ Since $\overline{H}_0(\mathfrak{m}_\chi, M(A))^* \cong \overline{H}^0(\mathfrak{m}_\chi, M(A)^\#)$ by (\ref{duzz}), we see using also Lemma~\ref{gform} that $\overline{H}^0(\mathfrak{m}_\chi, M(A)^\#)$ is finite dimensional and non-zero. Hence we can invoke the opposite version of Theorem~\ref{t31} to deduce $\mathcal{V\!A}'({\operatorname{Ann}}_{U(\mathfrak{g})} (M(A)^\#)) = \overline{G\cdot e}$. Hence $\mathcal{V\!A}'({\operatorname{Ann}}_{U(\mathfrak{g})} L(A)) \subseteq \overline{ G \cdot e }$ and the equivalence of (5) and (6) follows by standard dimension theory. Also it is obvious that $$ {\operatorname{Ann}}_{U(\mathfrak{g})} L(A)\subseteq {\operatorname{Ann}}_{U(\mathfrak{g})} (L(A)^\#) $$ so $$ \overline{G\cdot e} \supseteq \mathcal{V\!A}'({\operatorname{Ann}}_{U(\mathfrak{g})} L(A))\supseteq \mathcal{V\!A}'({\operatorname{Ann}}_{U(\mathfrak{g})} (L(A)^\#)). $$ Finally we repeat the earlier argument with (\ref{duzz}) and the opposite version of Theorem~\ref{t31} to see that $\mathcal{V\!A}'({\operatorname{Ann}}_{U(\mathfrak{g})} (L(A)^\#)) = \overline{G\cdot e}$ assuming (7) holds. Hence (7) $\Rightarrow$ (6). \end{proof} From this, we obtain the following alternative classification of the finite dimensional irreducible left $U(\mathfrak{g},e)$-modules; cf. Theorem~\ref{fdc}. \begin{Corollary}\label{altclass} As $A$ runs over a set of representatives for the parallel classes of semi-standard $\pi$-tableaux, the modules $\{\overline{H}_0(\mathfrak{m}_\chi,L(A))\}$ give a complete set of pairwise non-isomorphic irreducible $U(\mathfrak{g},e)$-modules. \end{Corollary} \begin{proof} Combine Theorem~\ref{fdc}, Theorem~\ref{bigt} and the bijection in Lemma~\ref{rect}.
\end{proof} \section{Dimension formulae}\label{sgoldie} Now we are ready to look more closely at the dimensions of the finite dimensional irreducible $U(\mathfrak{g},e)$-modules. We note for column-strict $\pi$-tableaux $A$ and $B$ that the composition multiplicity $[M(A):L(B)]$ is zero unless $A$ and $B$ have the same {\em content} (multiset of entries), as follows by central character considerations. Define $(L(A):M(B)) \in {\mathbb Z}$ from the expansion \begin{equation}\label{Stupid} [L(A)] = {\sum_B} (L(A):M(B)) [M(B)], \end{equation} equality in the Grothendieck group of $\mathcal{O}_\pi$, where we adopt the convention here and for the rest of the section that summation over $B$ always means summation over all column-strict $\pi$-tableaux $B$ having the same content as $A$. Also define \begin{equation*} h_\pi := \prod_{\substack{1 \leq i < j \leq N \\ \operatorname{col}(i) = \operatorname{col}(j)}} \frac{x_i - x_j}{j-i} \in {\mathbb C}[\mathfrak{t}^*], \end{equation*} which is relevant because the Weyl dimension formula tells us that \begin{equation}\label{wdf} \dim V(A,e) = \dim V(A) = \dim ({\mathbb C}_\beta \otimes V(A)) =h_\pi(\gamma(A)) \end{equation} for any column-strict $\pi$-tableau $A$. \begin{Theorem}\label{mydim} For any column-strict $\pi$-tableau $A$, we have that \begin{equation*} \dim \overline{H}_0(\mathfrak{m}_\chi, L(A))= {\sum_B} (L(A):M(B)) h_\pi(\gamma(B)). \end{equation*} Moreover $\dim \overline{H}_0(\mathfrak{m}_\chi, L(A)) = 0$ unless $A$ is semi-standard, in which case it is equal to $\dim L(B,e)$ where $B$ is any column-strict $\pi$-tableau with $B \sim Q_\pi(\gamma(A))$. \end{Theorem} \begin{proof} The final statement of the theorem is clear from Theorem~\ref{bigt}. 
For the first statement, we know by Lemma~\ref{exactagain} that the functor $\overline{H}_0(\mathfrak{m}_\chi, ?)$ induces a linear map between the Grothendieck group of $\mathcal O_\pi$ and the Grothendieck group of the category of finite dimensional left $U(\mathfrak{g},e)$-modules. Applying this map to (\ref{Stupid}) and using Lemma~\ref{gform} gives the identity $$ [\overline{H}_0(\mathfrak{m}_\chi, L(A))] = {\sum_B} (L(A):M(B)) [V(B,e)]. $$ The dimension formula follows immediately from this and (\ref{wdf}). \end{proof} In the rest of the section we explain how to rewrite the sum appearing in Theorem~\ref{mydim} in terms of the Kazhdan-Lusztig polynomials from (\ref{bigkl2}). Actually for simplicity we will restrict attention from now on to integral weights, an assumption which can be justified in several different ways, one being the following result from \cite{BKrep}. \begin{Theorem}[{\cite[Theorem 7.14]{BKrep}}]\label{dimred} Suppose $A$ is a column-strict $\pi$-tableau. Partition the set $\{1,\dots,l\}$ into subsets $\{i_1 < \cdots < i_k\}$ and $\{j_1 < \cdots < j_{l-k}\}$ in such a way that no entry in any of the columns $i_1,\dots,i_k$ of $A$ is in the same coset of ${\mathbb C}$ modulo ${\mathbb Z}$ as any of the entries in the columns $j_1,\dots,j_{l-k}$. Let $A'$ (resp.\ $A''$) be the column-strict tableau consisting just of columns $i_1,\dots,i_k$ (resp.\ $j_1,\dots,j_{l-k}$) of $A$ arranged in order from left to right. Then $$ \dim L(A,e) = \dim L(A',e') \times \dim L(A'',e'') $$ where $e'$ and $e''$ are the nilpotent elements associated to the pyramids of shapes $A'$ and $A''$, respectively. \end{Theorem} For an anti-dominant weight $\delta \in P$, recall from the introduction that $W_\delta$ denotes its stabilizer and $D_\delta$ is the set of minimal length $W / W_\delta$-coset representatives. 
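To make the polynomial $h_\pi$ appearing in (\ref{wdf}) and Theorem~\ref{mydim} concrete, here is a minimal computational sketch. It assumes, as in the conventions above, that the entries $x_1,\dots,x_N$ are indexed in order down columns starting from the leftmost column; the function name and interface are illustrative only, not part of the text.

```python
from fractions import Fraction

def h_pi(column_heights, x):
    """Evaluate h_pi(x) = prod over i < j with col(i) = col(j) of (x_i - x_j)/(j - i).

    Entries are indexed in order down columns, starting from the leftmost
    column (an assumed convention; the interface is illustrative).
    """
    val = Fraction(1)
    start = 0
    for height in column_heights:
        col = range(start, start + height)  # 0-based indices of this column
        for a in col:
            for b in col:
                if a < b:
                    val *= Fraction(x[a] - x[b], b - a)
        start += height
    return val

# A single column of height m recovers the Weyl dimension formula for gl_m;
# at the point x = (-1, -2, ..., -m) every factor equals 1.
print(h_pi([3], [-1, -2, -3]))      # -> 1
# Only pairs within a single column contribute, so distinct columns
# just multiply: two columns of height 2 give one factor per column.
print(h_pi([2, 2], [0, -1, 5, 3]))  # -> 2
```

Since only same-column pairs contribute, $h_\pi$ factors as a product of Weyl dimension formulae, one for each block $\mathfrak{gl}_{\lambda_i'}$, consistent with the role it plays in (\ref{wdf}).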
Also let \begin{equation}\label{colstab} W^\pi := \{w \in W\:|\:\operatorname{col}(w(i)) = \operatorname{col}(i)\text{ for all }i=1,\dots,N\}, \end{equation} the {\em column stabilizer} of our pyramid $\pi$, and $D^\pi$ denote the set of all {maximal} length $W^\pi \backslash W$-coset representatives. \begin{Lemma}\label{klthm} For column-strict $\pi$-tableaux $A$ and $B$, we have that $$ (L(A):M(B)) = (L(\gamma(A)):M(\gamma(B))). $$ If $A$ and $B$ have integer entries these numbers can be expressed in terms of Kazhdan-Lusztig polynomials using (\ref{bigkl2}) and (\ref{klform}). \end{Lemma} \begin{proof} We'll work in the Grothendieck group $[\mathcal O]$ of the full BGG category $\mathcal O$. By the Weyl character formula, we have that $$ [M(B)]=\sum_{x \in W^\pi} (-1)^{\ell(x)} [M(x \gamma(B))]. $$ Substituting this into (\ref{Stupid}) and comparing with the identity (\ref{first}) for $\alpha = \gamma(A)$, we get that \begin{multline*} {\sum_B} \sum_{x \in W^\pi} (-1)^{\ell(x)} (L(A):M(B)) [M(x \gamma(B))] = \sum_{\beta}(L(\gamma(A)):M(\beta)) [M(\beta)]. \end{multline*} Equating coefficients of $[M(\gamma(B))]$ on both sides gives the conclusion. \end{proof} Finally for each $w \in W$ we introduce the {polynomial} \begin{equation}\label{newgoldie} p_{w}^\pi := \sum_{z \in D^\pi} (L(w): M(z)) z^{-1}(h_\pi) \in {\mathbb C}[\mathfrak{t}^*]. \end{equation} Comparing the following with Theorem~\ref{mydim} and recalling Corollary~\ref{altclass}, these can be viewed as {\em dimension polynomials} computing the dimensions of finite dimensional irreducible $U(\mathfrak{g},e)$-modules in families. \begin{Theorem}\label{maing} Let $A$ be a column-strict $\pi$-tableau such that $\gamma(A) \in W \delta$ for some anti-dominant $\delta \in P$. Then $$ p^\pi_w(\delta) = {\sum_B} (L(A):M(B)) h_\pi(\gamma(B)) $$ where $w = d({\gamma(A)})$ and the sum is over all column-strict $\pi$-tableaux $B$ having the same content as $A$. 
\end{Theorem} \begin{proof} Let $A$ and $\delta$ be fixed as in the statement of the theorem. Let $\mathscr{T}$ be the set of all $\pi$-tableaux having the same content as $A$. Notice that $\gamma$ restricts to a bijection $\gamma:\mathscr{T} \rightarrow W \delta$. Using this bijection we lift the action of $W$ on $\mathfrak{t}^*$ to an action on $\mathscr{T}$, which is just the natural left action of the symmetric group $S_N$ on tableaux given by place permutation of entries, indexing entries in order down columns starting from the leftmost column as usual. Similarly we view functions in ${\mathbb C}[\mathfrak{t}^*]$ now as functions on $\mathscr{T}$, so $x_i(B)$ is just the $i$th entry of $B$. Let $S \in \mathscr{T}$ be the special tableau with $\gamma(S) = \delta$ and write simply $d(B)$ for $d({\gamma(B)})$ for $B \in \mathscr{T}$. We make several routine observations: \begin{itemize} \item[(1)] The map $\mathscr{T} \rightarrow D_\delta, B \mapsto d(B)$ is a bijection with inverse $x \mapsto x S$. \item[(2)] For any $x \in W$, we have that $h_\pi(x S) \neq 0$ if and only if $xS$ has no repeated entries in any column. \item[(3)] The set $D^\pi_\delta := D^\pi \cap D_\delta$ is a set of $(W^\pi, W_\delta)$-coset representatives. \item[(4)] Assume $x \in W$ is such that $h_\pi(x S) \neq 0$. Then we have that $x \in D^\pi$ if and only if $x S$ is column-strict. \item[(5)] The restriction of the bijection from (1) is a bijection between the set of all column-strict $B \in \mathscr{T}$ and the set $\{x \in D_\delta^\pi \:|\:h_\pi(x\delta)\neq 0\}$. \item[(6)] For $x \in D^\pi_\delta$ with $h_\pi(x\delta) \neq 0$, we have that $D^\pi \cap (W^\pi x W_\delta) = x W_\delta$. 
\end{itemize} By Lemma~\ref{klthm} and (\ref{klform}), then (5), then (3) and (6), we get that \begin{align*} {\sum_B} (L(A):M(B)) h_\pi(\gamma(B)) &= {\sum_{B}} \sum_{y \in W_\delta} (L(d(A)):M(d(B) y)) h_\pi(B)\\ &=\sum_{x \in D^\pi_\delta} \sum_{y \in W_\delta} (L(d(A)):M(xy)) h_\pi(x \delta)\\ &= \sum_{z \in D^\pi} (L(d(A)):M(z)) z^{-1}(h_\pi)(\delta). \end{align*} Comparing with (\ref{newgoldie}) this proves the theorem. \end{proof} \section{Main results}\label{sproofs} In this section we prove Theorems~\ref{pti}--\ref{one} exactly as formulated in the introduction. We begin with the promised reproof of Premet's theorem. \begin{proof}[Proof of Premet's Theorem~\ref{pti}] We recall Joseph's algorithm for computing Goldie ranks of primitive quotients of $U(\mathfrak{g})$ mentioned already in the introduction. Let $\mathscr L(M,M)$ denote the space of all $\operatorname{ad} \mathfrak{g}$-locally finite maps from a left $U(\mathfrak{g})$-module $M$ to itself. Joseph established the following statements. \begin{itemize} \item[(1)] (\cite[$\S$5.10]{JJI}) For any column-strict $\pi$-tableau $A$ we have that $$ {\operatorname{rk}\:} \mathscr L(M(A), M(A)) = h_\pi(\gamma(A)). $$ (To state Joseph's result in this way we have used (\ref{pv}) and (\ref{wdf}).) \item[(2)] (\cite[$\S$8.1]{Jkos}) The following additivity principle holds: $$ {\operatorname{rk}\:} \mathscr L(M(A), M(A)) = \sum_B [M(A):L(B)] {\operatorname{rk}}(B) $$ where ${\operatorname{rk}}(B):={\operatorname{rk}\:} \mathscr L(L(B), L(B))$ if $L(B)$ is a module of maximal Gelfand-Kirillov dimension in $\mathcal{O}_\pi$, and ${\operatorname{rk}}(B) := 0$ otherwise. (Again we are using the convention that summation over $B$ means summation over all column-strict $\pi$-tableaux $B$ having the same content as $A$.) \item[(3)] (\cite[$\S$9.1]{Jkos}) For any $\alpha \in \mathfrak{t}^*$, ${\operatorname{rk}\:} \mathscr L(L(\alpha), L(\alpha)) = {\operatorname{rk}\:} U(\mathfrak{g}) / I(\alpha)$. 
\end{itemize} By (1)--(2) we get that $h_\pi(\gamma(A)) = {\sum_B} [M(A):L(B)] {\operatorname{rk}}(B)$. Inverting this gives that ${\operatorname{rk}}(A) = \sum_B (L(A):M(B)) h_\pi(\gamma(B)).$ Recall also from (\ref{pv2}) that $L(A) \cong L(\gamma(A))$. So using (3) and the implication (1)$\Rightarrow$(4) from Theorem~\ref{bigt} we have established that \begin{equation}\label{josep} {\operatorname{rk}\:} U(\mathfrak{g}) / I(\gamma(A)) = {\sum_B} (L(A):M(B)) h_\pi(\gamma(B)) \end{equation} for any semi-standard $\pi$-tableau $A$. Now take any finite dimensional irreducible left $U(\mathfrak{g},e)$-module $L$. By Corollary~\ref{altclass}, we may assume $L = \overline{H}_0(\mathfrak{m}_\chi, L(A))$ for a semi-standard $\pi$-tableau $A$. Comparing Theorem~\ref{mydim} with Joseph's formula (\ref{josep}), we see that $\dim L = {\operatorname{rk}\:} U(\mathfrak{g}) / I(\gamma(A))$. Finally observe that $I(\gamma(A)) = I(L)$ by Lemma~\ref{rect}, Theorem~\ref{labels} and Theorem~\ref{bigt}. \end{proof} For the rest of the section we assume that the pyramid $\pi$ is left-justified, keeping $\lambda$ fixed as before. \begin{proof}[Proof of Theorem~\ref{mt}] It suffices to show for $\alpha \in \mathfrak{t}^*_{\lambda}$ that ${\operatorname{rk}\:} U(\mathfrak{g}) / I(\alpha) = 1$ if and only if $Q(\alpha)$ is row-equivalent to a column-connected tableau. By Theorem~\ref{labels2}, we have that $I(\alpha) = I(L(A,e))$ where $A$ is any column-strict tableau that is row-equivalent to $Q(\alpha)$. Hence by Theorem~\ref{pti}, we see that ${\operatorname{rk}\:} U(\mathfrak{g}) / I(\alpha) = \dim L(A,e)$. Now apply Theorem \ref{class}. \end{proof} \begin{proof}[Proof of Theorem~\ref{sep}] We may assume that $\alpha \in \mathfrak{t}^*_\lambda$ and that $Q(\alpha) \sim A$ for some column-separated tableau $A$. By Theorem~\ref{labels2} and Theorem~\ref{msup}, we deduce that $I(\alpha) = I(L(A,e)) = \operatorname{ann}_{U(\mathfrak{g})} M(A)$. 
Moreover by Theorem~\ref{pti} and Theorem~\ref{sep2}, we have that $$ {\operatorname{rk}\:} U(\mathfrak{g}) / I(\alpha) = {\operatorname{rk}\:} U(\mathfrak{g}) / I(L(A,e)) = \dim L(A,e) = \dim V(A,e) = \dim V(A). $$ It remains to observe from the definition (\ref{pv}) that $M(A) \cong U(\mathfrak{g}) \otimes_{U(\mathfrak{p})} F$ where $F$ is as in the statement of Theorem~\ref{sep}, and also $\dim V(A) = \dim F$ since they are equal up to tensoring by a one dimensional representation. \end{proof} \begin{proof}[Proof of Theorem~\ref{red}] Take any $\alpha \in \mathfrak{t}_{\lambda}^*$ and set $A := Q(\alpha)$. Then for each $z \in {\mathbb C}$ let $A_z$ be the tableau obtained by erasing all entries of $A$ that are not in $z + {\mathbb Z}$, subtracting $z$ from all remaining entries, and then sliding all boxes to the left to get a left-justified tableau with integer entries. It is clear from the definition of $Q(\alpha)$ that each $A_z$ is a column-strict tableau, indeed, $A_z = Q(\alpha_z)$ for $\alpha_z$ as in the statement of Theorem~\ref{red}. Finally let $e_z$ be the nilpotent in $\mathfrak{g}_z$ associated to the pyramid of the same shape as $A_z$. Applying Theorem~\ref{dimred} (perhaps several times) we get that $$ \dim L(A,e) = \prod_{z} \dim L(A_z,e_z) $$ where the product is over a set of coset representatives for ${\mathbb C}$ modulo ${\mathbb Z}$. This implies Theorem~\ref{red} thanks to Theorem~\ref{labels2} and Theorem~\ref{pti}. \end{proof} \begin{proof}[Proof of Theorem~\ref{myg}] We may assume that $w$ is minimal in its left cell and that $Q(w)$ is of shape $\lambda$. Take any regular anti-dominant $\delta$ and set $\alpha := w \delta \in \widehat{C}_w$. Since the entries of $Q(\alpha)$ satisfy the same system of inequalities as the entries of $Q(w)$, we see that $Q(\alpha) \sim B$ for a column-separated tableau $B$ which is obtained from $Q(\alpha)$ by permuting entries within rows in exactly the same way as $A$ is obtained from $Q(w)$. 
Theorem~\ref{sep} tells us that ${\operatorname{rk}\:} U(\mathfrak{g}) / I(\alpha)$ is the dimension of the irreducible $\mathfrak{h}$-module of highest weight $\gamma(B) - \rho$, where $\mathfrak{h}$ is the standard Levi subalgebra $\mathfrak{gl}_{\lambda_1'}({\mathbb C}) \oplus \mathfrak{gl}_{\lambda_2'}({\mathbb C}) \oplus\cdots$ and ${\lambda}'=({\lambda}'_1\geq{\lambda}'_2\geq\cdots)$ is the transpose of ${\lambda}$. Using the Weyl dimension formula for $\mathfrak{h}$ we deduce that $$ {\operatorname{rk}\:} U(\mathfrak{g}) / I(\alpha) = h_{\lambda}(\gamma(B)). $$ Using (\ref{minimal}), the definition of $h_\lambda$ from the statement of Theorem~\ref{foldie}, and the assumption that $w$ is minimal in its left cell, the right hand side here is the same as $$ \bigg(\prod_{(i,j)} \frac{x_{w(i)}-x_{w(j)}}{d(i,j)} \bigg)(\gamma(Q(\alpha))) =\bigg(\prod_{(i,j)} \frac{x_i-x_j}{d(i,j)}\bigg) (\delta) $$ where the product is over pairs $(i,j)$ as in the statement of the theorem. By the definition (\ref{goldiedef}), this establishes that $p_w$ and $\prod_{(i,j)} (x_i-x_j) / d(i,j)$ take the same values on all regular anti-dominant $\gamma$. The theorem follows by density. \end{proof} \begin{proof}[Proof of Joseph's Theorem~\ref{foldie}] Take any $w \in W$ that is minimal in its left cell, and assume that $Q(w)$ has shape $\lambda$. Take any regular anti-dominant $\delta$. Set $\alpha := w \delta \in \widehat{C}_w$ and $A := Q(\alpha)$, which is a semi-standard tableau of shape $\lambda$. By (\ref{uc}) and (\ref{minimal}), we have that $d(\alpha) = w$ and $\gamma(A) = \alpha$. So Theorems~\ref{mydim} and \ref{maing} give that $\dim \overline{H}_0(\mathfrak{m}_\chi, L(A))= p^\pi_w(\delta).$ By Lemma~\ref{rect}, Theorem~\ref{labels} and Theorem~\ref{bigt} we know that $I(\overline{H}_0(\mathfrak{m}_\chi, L(A))) = I(\alpha)$. Hence by Theorem~\ref{pti} we deduce that $$ {\operatorname{rk}\:} U(\mathfrak{g}) / I(\alpha) = p^\pi_w(\delta). 
$$ (This equality can also be deduced without finite $W$-algebras using Theorem~\ref{maing} and Joseph's (\ref{josep}) directly.) Comparing with (\ref{goldiedef}) we have therefore shown that $p_w(\delta) = p^\pi_w(\delta)$ for all $\delta$ in a Zariski dense subset of $\mathfrak{t}^*$, so $p_w = p^\pi_w$. It remains to observe that the polynomial $p^\pi_w$ from (\ref{newgoldie}) is the same as the one on the right hand side of (\ref{bform}) in the left-justified case. \end{proof} \begin{proof}[Proof of Theorem~\ref{one}] Let $w \in W$ be minimal in its left cell, and assume that $Q(w)$ is of shape $\lambda$. As in the proof of Theorem~\ref{maing}, we use the map $\gamma$ from (\ref{newgamma}) to lift the action of $W$ on $\mathfrak{t}^*$ to an action on tableaux of shape $\lambda$ by place permutation. Let $\mathscr{T}$ be the set of all tableaux of shape $\lambda$ with entries $\{1,\dots,N\}$ and $S\in \mathscr{T}$ be the unique tableau with $\gamma(S) = -\rho$. We obviously get a bijection $W \rightarrow \mathscr{T}, w \mapsto w S$. For any $x \in W$ we have that $x \in D^\lambda$ if and only if $x S$ is column-strict, so our bijection identifies $D^\lambda$ with the column-strict tableaux in $\mathscr{T}$. Under this identification, it is well known that the usual Bruhat order $\geq$ on $D^\lambda$ corresponds to the partial order $\geq$ on column-strict tableaux such that $A \geq B$ if and only if we can pass from column-strict tableau $A$ to column-strict tableau $B$ by repeatedly applying the following basic move: \begin{itemize} \item[(1)] find entries $i > j$ in $A$ such that the column containing $i$ is strictly to the left of the column containing $j$; \item[(2)] interchange these entries, then re-order entries within columns to obtain another column-strict tableau. \end{itemize} Now to prove the result, let $C$ be the tableau from the statement of Theorem~\ref{one}.
Using the explicit formula for $p_w$ from Theorem~\ref{foldie}, we need to show that $$ \sum_{z \in D^\lambda} (L(w):M(z)) h_\lambda(z w^{-1} \gamma(C)) = 1. $$ By (\ref{rel}) and (\ref{minimal}) we know that $w S = Q(w)$, which is standard, so certainly column-strict, hence $w \in D^\lambda$. So there is a term in the above sum with $z=w$, and for this $z$ it is obvious that $(L(w):M(z)) h_\lambda(z w^{-1}\gamma(C)) = h_\lambda(\gamma(C)) = 1$. Since $(L(w):M(z)) = 0$ unless $z \leq w$ in the Bruhat order on $W$, it remains to show that $h_\lambda(z w^{-1} \gamma(C)) = 0$ for any $z \in D^\lambda$ such that $z < w$. To see this, take such an element $z$ and let $A := w S$ and $B := z S$, so $A$ is standard, $B$ is column-strict and $A > B$ (in the partial order on column-strict tableaux defined in the first paragraph of the proof). In the next paragraph, we show that there exist $1 \leq i < j \leq N$ such that the numbers $i$ and $j$ appear in the same row of $A$ and in the same column of $B$. We deduce in the notation from $\S$\ref{s1d} that $\operatorname{row}(w(i)) = \operatorname{row}(w(j))$ and $\operatorname{col}(z(i)) = \operatorname{col}(z(j))$. Hence $$ (x_{z(i)} - x_{z(j)})(z w^{-1} \gamma(C)) = (x_{i} - x_{j})(w^{-1} \gamma(C)) = (x_{w(i)} - x_{w(j)})(\gamma(C)) = 0 $$ and $x_{z(i)}-x_{z(j)}$ is a linear factor of $h_\lambda$. This implies that $h_\lambda(z w^{-1} \gamma(C)) = 0$ as required. It remains to prove the following claim: given tableaux $A > B$ of shape $\lambda$ with $A$ standard and $B$ column-strict, there exist $1 \leq i < j \leq N$ such that $i$ and $j$ appear in the same row of $A$ and in the same column of $B$. To see this, let $A_{\leq j}$ (resp.\ $B_{\leq j}$) denote the diagram obtained from $A$ (resp.\ $B$) by removing all boxes containing entries $> j$. Choose $1 \leq j \leq N$ so that $A_{\leq (j-1)} = B_{\leq (j-1)}$ but $A_{\leq j} \neq B_{\leq j}$.
Suppose that $j$ appears in column $c$ of $B$, and observe as $A > B$ that this column is strictly to the left of the column of $A$ containing $j$. Suppose also that $j$ appears in row $r$ of $A$, and observe as $A$ is standard that this row is strictly below the row of $B$ containing $j$. As $A_{\leq (j-1)} = B_{\leq (j-1)}$ and $B$ is column-strict, $A$ and $B$ have the same entry $i \leq j-1$ in row $r$ and column $c$. Thus the entries $i$ and $j$ lie in the same row $r$ of $A$, and in the same column $c$ of $B$. \end{proof} \iffalse \section{Examples}\label{sexamples} We give some examples of Goldie rank polynomials $p_w$. Since they depend only on the left cell of $w$, they are naturally indexed by standard tableaux via the map $w \mapsto Q(w)$. When $N \leq 5$, there are just eight standard tableaux that are not row-equivalent to a column-separated tableau (thus not covered by Theorem~\ref{myg}). We computed their Goldie rank polynomials directly from the formula in Theorem~\ref{foldie}. They are listed below, adopting the shorthand $x_{i,j} := x_i - x_j$. The same computation was performed in \cite[$\S$11.4]{Jkos}, where Joseph writes $\lambda_i$ in place of our $x_{i+1,i}$. 
$$ \begin{array}{|l|l|} \hline Q(w)&p_w\\\hline \diagram{ $\scriptstyle 4$\cr$\scriptstyle 2$\cr$\scriptstyle 1$&$\scriptstyle 3$\cr } & \frac{1}{2}x_{2,1}x_{4,3} (x_{3,1} + x_{4,2}) \\\hline \diagram{ $\scriptstyle 4$\cr $\scriptstyle 2$\cr $\scriptstyle 1$&$\scriptstyle 3$&$\scriptstyle 5$\cr } & \frac{1}{2}x_{2,1}x_{4,3} (x_{3,1} + x_{4,2}) \\ \diagram{ $\scriptstyle 5$\cr $\scriptstyle 3$\cr $\scriptstyle 1$&$\scriptstyle 2$&$\scriptstyle 4$\cr } & \frac{1}{2}x_{3,2}x_{5,4}(x_{4,2} + x_{5,3}) \\ \diagram{ $\scriptstyle 5$\cr $\scriptstyle 2$\cr $\scriptstyle 1$&$\scriptstyle 3$&$\scriptstyle 4$\cr } & \frac{1}{2}x_{2,1}x_{5,4}(x_{4,1} + x_{5,2}) \\ \hline \diagram{ $\scriptstyle 4$\cr $\scriptstyle 2$&$\scriptstyle 5$\cr $\scriptstyle 1$&$\scriptstyle 3$\cr } & \frac{1}{2} x_{2,1}x_{4,3}(x_{4,1} x_{5,2} + x_{3,2}x_{5,4}) \\ \diagram{ $\scriptstyle 5$\cr $\scriptstyle 3$&$\scriptstyle 4$\cr $\scriptstyle 1$&$\scriptstyle 2$\cr } & \frac{1}{2}x_{3,2}x_{5,4} (x_{4,1}x_{5,2} +x_{2,1}x_{4,3}) \\ \hline \diagram{ $\scriptstyle 5$\cr $\scriptstyle 3$\cr $\scriptstyle 2$\cr $\scriptstyle 1$&$\scriptstyle 4$\cr } & \frac{1}{12} x_{2,1}x_{3,1} x_{3,2}x_{5,4}( x_{4,1}x_{4,3} + x_{4,3}x_{5,2}+ x_{5,1}x_{5,2}) \\ \diagram{ $\scriptstyle 5$\cr $\scriptstyle 4$\cr $\scriptstyle 2$\cr $\scriptstyle 1$&$\scriptstyle 3$\cr } & \frac{1}{12} x_{2,1}x_{4,3}x_{5,3}x_{5,4} (x_{4,1}x_{5,1} +x_{3,2}x_{4,1} + x_{3,2}x_{5,2}) \\ \hline\end{array} $$ \fi
\section{Introduction} Decoherence lies at the core of all difficulties in implementing quantum information technologies. It degrades information being transmitted, stored, and processed, in an irreversible way. All these processes can be thought of as different kinds of quantum channels, decoherence inevitably affecting all of them. We are interested in assessing the deviation from ideality in Gaussian channels, and the precision attainable when tested with Gaussian resources. Namely, we consider a dissipative thermal bath with mean photon number $N$ and damping constant $\gamma$, and explore how well different Gaussian resources perform in identifying these parameters. Our question is immediately relevant to the field of quantum information, since assessing the deviation from the \emph{identity channel}, \emph{i.e.}, the ideal information transmitter, is the principal requirement to implement large scale quantum communication. The need for an efficient characterization of dissipation in continuous variable systems is becoming a requisite for a number of quantum information tasks, such as quantum repeaters~\cite{azuma_optimal_2009, azuma_tight_2010} or quantum memories~\cite{kozhekin_quantum_2000,htet_characterization_2008, jensen_quantum_2010}, among others. The burden of dissipation is also hindering advances in cavity QED~\cite{brune_process_2008, deleglise_reconstruction_2008} and superconducting quantum circuits~\cite{wang_measurement_2008}. On the other hand, measuring decoherence is not only relevant for quantum information technology. In several contexts, decoherence can be related to physical quantities of practical interest, \emph{e.g.}, photon loss is strongly related to impurity doping concentration in semiconductor lasers~\cite{hunsperger_photon_1969}.
Nonlinear magneto-optical effects~\cite{budker_resonant_2002} can be understood as photon loss, an effect with several technological applications such as low-field magnetometry and gas density measurements~\cite{budker_nonlinear_1998, budker_sensitive_2000, shah_subpicotesla_2007}. Additionally, photon-photon scattering in vacuum is still an unobserved prediction of both quantum electrodynamics and non-standard models of elementary particles~\cite{tommasini_precision_2009}. These are only a few among the several applications that involve the precise determination of losses in dispersive media. For this reason, we will pose our problem and formulate our results in a general theoretical formalism, in order to keep our results as widely applicable as possible. We address the problem of estimating the parameters of a Gaussian channel describing the dynamics of a bosonic mode $a$, coupled with strength $\gamma$ to a thermal reservoir with mean photon number $N$. In the interaction picture, and within the Markovian approximation (at any time the mode and the bath remain unentangled), the completely positive dynamics and the action on the mode $a$ are described by the superoperator \begin{equation}\label{eq:channel} \mathcal S(\gamma,N)=\exp\frac{\gamma}{2}\left(NL[a^\dagger]+(N+1)L[a]\right), \end{equation} with $L[o]\rho=2o\rho o^\dagger-o^\dagger o \rho-\rho o^\dagger o$. For convenience we arrange the channel parameters in the two-dimensional vector $\theta=(\gamma,N)$ and let $\hat\theta$ be the estimator, corresponding to the outcome of the final measurement. \begin{figure}[b] \begin{center} \includegraphics[width=.5\textwidth]{fig_one.eps} \end{center} \caption{\label{fig:strategy} (Color online)~Scheme for measuring the values of parameters $\gamma$ and $N$. $k$ copies of a bipartite state $\rho$ are independently prepared, and acted upon by $k$ instances of the unknown channel $\mathcal I\otimes\mathcal S(\theta)$.
A collective measurement $\Lambda$ is finally performed.} \end{figure} Previous related work in the literature addresses the problem of estimating a state within a Gaussian family in several different situations. Among others we should mention the works of Yuen \& Lax~\cite{yuen_multiple-parameter_1973}, Helstrom~\cite{helstrom_quantum_1976}, Holevo~\cite{holevo_probabilistic_1982}, Hayashi~\cite{hayashi_asymptotic_2008}, Adesso \& Chiribella~\cite{adesso_chiribella} and Hayashi \& Matsumoto~\cite{hayashi_quantum_2009}. However, all of these works focus on estimating only some of the possible parameters of a Gaussian state. Most of them focus on estimating either displacement or temperature for fixed degrees of squeezing, while others consider the degree of squeezing in a vacuum state. No work exists, to the best of our knowledge, that addresses the problem of estimating \emph{all} parameters of a Gaussian state. If such a work existed, the problem of estimating a Gaussian channel with Gaussian probe states would reduce to a subproblem of the former. However, the lack of such a general result demands a dedicated solution. The setup that we consider is quite generic. We allow for \emph{i)} extending the channel of interest to include an ancillary mode $b$, unaffected by the channel (\emph{i.e.}, identity superoperator $\mathcal I$), obtaining the channel $\mathcal S^\star=\mathcal S\otimes\mathcal I$, \emph{ii)} choosing any bipartite Gaussian state $\rho_0$, of which $k$ copies will be sent through the channel, and \emph{iii)} performing a generalized measurement $M^{(k)}$ [characterized by a POVM $\{M^{(k)}_{\hat\theta}\}$, $M^{(k)}_{\hat\theta}\geq0$, $\int d{\hat\theta}\,M^{(k)}_{\hat\theta}=\openone$], on the collective state $\rho_\theta^{(k)}=(\mathcal S^\star \rho_0)^{\otimes k}$.
This allows for a rather general scheme, which tests the channel with independent and identically prepared probes, while the generalized collective measurement may include arbitrary quantum transformations applied to $\rho_\theta^{(k)}$ prior to the measurement [see Fig.~\ref{fig:strategy}]. As a quantifier of the quality of the estimate one can use the covariance matrix \begin{equation} V_\theta(M)=\int d\hat\theta\,(\hat\theta-\theta)(\hat\theta-\theta)^\top\mathrm{tr}[\rho_\theta M_{\hat\theta}], \end{equation} which may depend on the chosen measurement $M=\{M_{\hat\theta}\}$ and the particular channel being tested, $\theta$. Alternatively, as is customary in standard statistical inference, we can relate any error cost function $\ell(\theta,\hat\theta)$ to the covariance matrix by performing a Taylor expansion of $\ell$ in $\hat\theta$ around $\theta$, and defining $G_\theta=\frac{1}{2}\partial^2 \ell(\theta,\hat\theta)|_{\hat\theta=\theta}$ as the Hessian of $\ell/2$, thus \begin{equation}\label{eq:objective} \langle\ell\rangle=\mathrm{tr}[G_\theta V_\theta(M)]+o(\hat\theta-\theta)^2. \end{equation} More generally, an arbitrary positive semidefinite weight matrix $G_\theta$ can be defined to account for the relevance assigned to each parameter. Two extreme cases, where one only cares about one or the other parameter, can be accounted for by the choices $G_\gamma=\textrm{diag}(1,0)$ and $G_N=\textrm{diag}(0,1)$. This approach also allows one to define strategies where one is only interested in a particular linear combination of the parameters $X=x^\mu\theta_\mu$, by setting $G_X=XX^\top$. With these considerations, and given a large number $k$ of copies of the fixed probe state $\rho_0$, one can ask what is the smallest possible value of $\langle\ell\rangle=\lim_{k\rightarrow\infty} k\,\mathrm{tr}[G_\theta\,V_\theta(M^{(k)})]$ that is allowed by the laws of Quantum Mechanics.
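Since all probes considered are Gaussian, the action of the channel (\ref{eq:channel}) on a single mode can be tracked entirely at the level of first and second moments: the mean vector is damped by $e^{-\gamma/2}$ while the covariance matrix relaxes towards that of the bath. The following sketch illustrates this standard update rule numerically; it assumes the symmetrized convention in which the vacuum covariance matrix is $\openone/2$, and all helper names are illustrative.

```python
import numpy as np

def thermal_loss(mean, cov, gamma, N):
    """Action of the channel S(gamma, N) on a single-mode Gaussian state.

    Convention (an assumption of this sketch): quadratures (x, p) with
    vacuum covariance I/2, so a thermal state with N photons has
    covariance (N + 1/2) I.
    """
    eta = np.exp(-gamma)
    mean_out = np.sqrt(eta) * mean
    cov_out = eta * cov + (1.0 - eta) * (N + 0.5) * np.eye(2)
    return mean_out, cov_out

def mean_photons(mean, cov):
    # <a^dag a> = (tr cov + |mean|^2 - 1) / 2 in this convention
    return 0.5 * (np.trace(cov) + mean @ mean - 1.0)

# A coherent probe with n0 = 4 photons through a channel with (gamma, N):
mean0, cov0 = np.array([2.0 * np.sqrt(2.0), 0.0]), 0.5 * np.eye(2)
gamma, N = 0.7, 1.5
m1, c1 = thermal_loss(mean0, cov0, gamma, N)
# The photon number interpolates: e^{-gamma} n0 + (1 - e^{-gamma}) N.
print(mean_photons(m1, c1))
```

One sees directly the two roles of the parameters: $\gamma$ sets the rate of damping of the signal, while $N$ sets the noise floor the state relaxes towards.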
The main question we wish to answer is the following: To what extent can one optimize expression~\eqref{eq:objective} by using different Gaussian resources? More precisely, we will compare the performance of thermal, coherent, single-mode squeezed and two-mode squeezed vacuum states in estimating the damping $\gamma$ and the temperature $N$ in different parameter regimes, comparing the performance of each probe state at equal energy input to the channel, $n=\mathrm{tr}[\rho_0 a^\dagger a]$. In order to elucidate the role that each one of these resources plays in the estimation problem, we will make some simplifying assumptions. Namely, we will not consider the combination of different resources, e.g. two-mode squeezing and displacement. A priori it would seem that the problem may have several different variants depending on the chosen cost function (or $G$ matrix). We will see, however, that some general statements can be made. Anticipating the results that will be presented in the present work, we will prove that: \begin{enumerate} \item Choosing a two-mode squeezed vacuum input state $\rho_0$, the parameters $\gamma$ and $N$ can be optimally estimated \emph{simultaneously}. That is, no compromise is required in the optimization of $V_\theta(M)$. This holds true even when the optimal measurements for $\gamma$ and $N$ do not commute. \item For both parameters $\gamma$ and $N$, and at any given energy, two-mode squeezed states always outperform any other class of Gaussian states. \end{enumerate} The combination of these two statements unveils a strong compatibility between the problem of estimating the damping $\gamma$ and the temperature $N$ in the Gaussian setting.
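For reference, the two-mode squeezed vacuum appearing in these statements is fully specified by its covariance matrix, and at squeezing parameter $r$ the energy in the probe mode is $n = \sinh^2 r$. The following is a minimal sketch of that covariance matrix, again in the (assumed) convention where the vacuum covariance is $\openone/2$; the function names are illustrative.

```python
import numpy as np

def tmsv_cov(r):
    """Covariance matrix of a two-mode squeezed vacuum, quadratures
    ordered (x1, p1, x2, p2), vacuum covariance I/2 (assumed convention)."""
    c, s = 0.5 * np.cosh(2.0 * r), 0.5 * np.sinh(2.0 * r)
    Z = np.diag([1.0, -1.0])
    return np.block([[c * np.eye(2), s * Z],
                     [s * Z, c * np.eye(2)]])

r = 0.8
V = tmsv_cov(r)
# Mean photon number in either mode: (V_xx + V_pp - 1)/2 = sinh(r)^2.
n = 0.5 * (V[0, 0] + V[1, 1] - 1.0)
# Purity check: a pure two-mode Gaussian state has det V = 1/16
# in this convention.
print(n, np.linalg.det(V))
```

This makes explicit the equal-energy comparison used throughout: a two-mode squeezed vacuum probe at energy $n$ corresponds to $r = \operatorname{arcsinh}\sqrt{n}$.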
In Section~\ref{sec:Holevo} we develop our approach and derive the first result of our paper, namely, that optimal precision with two-mode squeezed states can be attained simultaneously for both parameters, thus allowing us to divide the problem of computing precision bounds into two independent problems. Sections \ref{sec:gamma} and \ref{sec:N} explore the precision bounds for estimating $\gamma$ and $N$ individually, comparing the performances of different Gaussian resources, focusing especially on some physically relevant regimes of the parameters. In Section~\ref{sec:nonG} we move on to explore numerically the non-Gaussian arena and compare the relative performances with respect to Gaussian probes. Section~\ref{sec:discuss} concludes the paper with a discussion and an overview of the obtained results. Details of the technical proofs are provided in two appendices. \section{The ultimate quantum limits}\label{sec:Holevo} The Heisenberg relations place a fundamental limit on the precision with which one can measure any given observable. When it comes to quantities not associated with an observable, as is our case, it is necessary to resort to quantum estimation theory, which studies the fundamental quantum mechanical limits to the precision of measurements in a variety of situations. A first lower bound can be obtained from Helstrom's Fisher information matrix~\cite{helstrom_quantum_1976, holevo_probabilistic_1982}, \begin{equation}\label{eq:CRbound} V_\theta(M)\geq J(\theta)^{-1}~~~~\forall M \end{equation} where $J(\theta)=\Re\,\mathrm{tr}[\rho_\theta\,\Lambda \,\Lambda^\top]$ is defined as the covariance matrix of the \emph{symmetric logarithmic derivatives} (SLD) $\Lambda_\mu$ fulfilling $\partial\rho_\theta/\partial\theta^\mu=\Lambda_\mu\circ\rho_\theta$, with $A\circ B=(AB+BA)/2$. The inherent non-commutativity of Quantum Mechanics forbids, in general, saturating this inequality when the problem is multi-parametric, as in our case.
Optimizing the measurement for one parameter will in general compromise the measurement precision on the others. When considering single-parameter estimation problems, it is well known that local adaptive measurements attain Eq.~\eqref{eq:CRbound}~\cite{gill_state_2000, hayashi_statistical_2003}. However, even if the optimal measurements for both parameters do not commute, it may still be possible to devise a measurement strategy to attain simultaneously both bounds. Recent progress in the theory of Local Asymptotic Normality for quantum states~\cite{gu_local_2007, kahn_local_2009} suggests that equality in Eq.~\eqref{eq:CRbound} is asymptotically attainable if and only if~\cite{guta_unpub}, \begin{equation}\label{EQ:COMMUTE} \mathrm{tr}[\rho_\theta\,[\Lambda_\mu,\Lambda_\nu]]=0. \end{equation} The SLD's for our problem, as well as in more general contexts, were obtained by the authors in~\cite{monras_information_2009}. In Appendix~\ref{sec:commute} we prove Eq.~\eqref{EQ:COMMUTE} for the case of two-mode squeezed vacuum probe states. The implications of Eq.~\eqref{EQ:COMMUTE} are two-fold. On one hand, it allows us to prove that, asymptotically, the estimation problems for the two parameters become independent, so that they can be analyzed separately. This will be the subject of the next two Sections. On the other hand, we will show that two-mode squeezed states form the optimal Gaussian class of states. As a consequence, we will prove the existence of, and provide explicit expressions for, precision bounds for the simultaneous estimation of $\gamma$ and $N$ for error cost functions that have diagonal $G$ matrices. The single-parameter precision is quantified by the asymptotic standard deviation, \begin{equation} \Delta\theta=\sqrt{\lim_{k\rightarrow\infty}k\int d{\hat\theta} \,(\hat\theta-\theta)^2\mathrm{tr}[\rho_\theta^{(k)} M_{\hat\theta}^{(k)}]}, \end{equation} which is bounded by the quantum Fisher information.
The latter will be generically denoted as $J_\gamma$ or $J_N$, depending on whether we are considering the yield for parameter $\gamma$ or $N$, respectively. When no confusion arises, we will omit the subscript. When we are referring to a particular yield of QFI, specific to a given class of states, we will denote it with the corresponding subscript $J_\textrm{coh.}$, $J_\textrm{th.}$, $J_\textrm{sq.}$ and $J_{\textrm{2-m}}$ for coherent, thermal, single-mode squeezed and two-mode squeezed vacuum states, respectively. Precision bounds are then given by \begin{eqnarray} \Delta \theta&\geq&\frac{1}{\sqrt{J_\theta}}. \end{eqnarray} We will call $J_\theta$ the \emph{yield}, or \emph{performance}. The obtained results should be interpreted in the following way. The two single-parameter problems have, as optimal observables, the corresponding SLD's~\cite{braunstein_statistical_1994}, which can be explicitly computed from~\cite{monras_information_2009} and are quadratic in the creation and annihilation operators. In the multiparametric case, with diagonal $G$ matrices, the bounds obtained in the following two sections provide all the quantities needed to determine the asymptotic error cost. The optimal measurement, however, will require a general collective measurement, which is likely to be beyond the capabilities of present-day technology. We will focus on the theoretically attainable precision and not discuss the details of the implementation of the optimal observables. This will, nevertheless, provide a means to gauge the efficiency of more applied studies such as tomographic~\cite{bellomo_reconstruction_2009}, single-mode Gaussian~\cite{monras_optimal_2007} and non-Gaussian~\cite{adesso_optimal_2009}, or entanglement-assisted schemes~\cite{venzl_quantum_2007}.
In order to determine the precision attainable with different Gaussian resources we consider single-mode probes parameterized as \begin{equation} \rho_0=D({\bf d}_0)S(r_0)\rho_{\nu_0} S^\dagger(r_0)D^\dagger({\bf d}_0), \end{equation} where $\rho_\nu\propto \left(\frac{\nu-1/2}{\nu+1/2}\right)^{a^\dagger a}$ is a single-mode thermal state, $D({\bf d})=\exp i(d_2Q-d_1P)$ [${\bf d}=(d_1,d_2)$] and $S(r)=\exp\frac{1}{2}(r a^{\dagger}{}^2-r^* a^2)$. The first moments and covariance matrix of the single-mode states ($\Sigma_0$) are given by ${\bf d}_0$ and \begin{eqnarray}\label{eq:CM.sm} \Sigma_0&=&\frac{\nu_0}2\left(\begin{array}{cc}e^{2r_0} & 0 \\0 & e^{-2r_0}\end{array}\right), \end{eqnarray} while the energy in the probe is given by \begin{equation} n=\frac{\nu_0\cosh 2r_0+|{\bf d}_0|^2-1}2. \end{equation} Using the results of~\cite{monras_information_2009} it is easy to obtain the yield in Fisher information as a function of the final parameters for both the single- and two-mode probe states. The yields for single-mode probes are \begin{subequations}\label{eq:singlemode} \begin{eqnarray} \nonumber J_\gamma&=&\frac{d_1^2e^{-2r}+d_2^2e^{2r}}{2\nu}+\frac{\nu^2}{\nu^2-1}\\ \nonumber &&+4\left(N+\frac{1}{2}\right)^2\frac{1+\nu^2 \cosh4r}{\nu^4-1}\\ \label{eq:singlemodegamma} &&-4\left(N+\frac{1}{2}\right)\frac{\nu\cosh2r}{\nu^2-1}.\\ \label{eq:singlemodeN} J_N&=&4(e^\gamma-1)^2\,\frac{1+\nu^2\cosh4r}{\nu^4-1}, \end{eqnarray} \end{subequations} expressed as functions of the state parameters after the action of the channel $\mathcal S$. In order to obtain the final values for the yield, one needs to consider the three different constraints, $n=|{\bf d}_0|^2/2$, $2n+1=\nu_0$ and $2n+1=\cosh2r_0$ (coherent, thermal and single-mode squeezed probes, respectively), and substitute in Eqs.~\eqref{eq:singlemode}. Turning to entangled probe states, we know from~\cite{monras_information_2009} that the optimal state for constraints of the form $\mathrm{tr}[\rho_0\,a^\dagger a]\leq n$ can only be pure.
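As a quick consistency check, the energy expression $n=(\nu_0\cosh 2r_0+|{\bf d}_0|^2-1)/2$ indeed reduces to the three constraints just listed in the corresponding limits. A minimal numerical sketch:

```python
import numpy as np

def energy(nu0, r0, d0):
    """Mean photon number n of the single-mode probe (text convention)."""
    return (nu0 * np.cosh(2 * r0) + np.dot(d0, d0) - 1) / 2

d = np.array([0.6, 0.8])          # arbitrary displacement, |d|^2 = 1
# Coherent probe: nu0 = 1, r0 = 0  ->  n = |d|^2 / 2
assert np.isclose(energy(1.0, 0.0, d), np.dot(d, d) / 2)
# Thermal probe: r0 = 0, d = 0    ->  2n + 1 = nu0
nu0 = 3.0
assert np.isclose(2 * energy(nu0, 0.0, np.zeros(2)) + 1, nu0)
# Squeezed vacuum: nu0 = 1, d = 0 ->  2n + 1 = cosh(2 r0)
r0 = 0.7
assert np.isclose(2 * energy(1.0, r0, np.zeros(2)) + 1, np.cosh(2 * r0))
```

Each branch sets the other two resources to their vacuum values, so the three single-mode classes are compared at the same total energy $n$.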
Since we are interested in evaluating the sensitivity of the different Gaussian resources, and two-mode squeezing is the only genuinely entangling resource, we will restrict ourselves to two-mode squeezed vacuum states, which we will denote by $\rho_0^\star$: \begin{equation} \rho_0^\star=S_\textrm{2m}(r_0)\ket0\bra0 S^\dagger_\textrm{2m}(r_0) \, , \end{equation} where $\ket0\bra0$ denotes the two-mode vacuum. $S_\textrm{2m}(r)=\exp{\frac{1}{2}(r a^\dagger b^\dagger-r^* a b)}$ denotes the two-mode squeezing operator, where $b$ is the ancillary mode. Notice that we consistently denote parameters in the probe state with the subscript $0$. The covariance matrix for the two-mode squeezed state is \begin{eqnarray}\label{eq:CM.tm} \Sigma_0^\star&=&\frac{1}{2}\small\left(\begin{array}{cccc}\cosh2r_0 & 0 & -\sinh2r_0 & 0 \\0 & \cosh2r_0 & 0 & \sinh2r_0 \\-\sinh2r_0 & 0 & \cosh2r_0 & 0 \\0 & \sinh2r_0 & 0 & \cosh2r_0\end{array}\right), \end{eqnarray} whereas the energy reads \begin{equation} n=\frac{1}{2}(\cosh2r_0-1) \end{equation} for the two-mode entangled state. We will consistently use $n$ to compare the performances of the different classes of states. \section{Estimating loss $\gamma$}\label{sec:gamma} We now proceed to analyze the problem of estimating $\gamma$ alone, under the assumption that the mean photon number $N$ is known. This problem has been partially addressed in the literature \cite{venzl_quantum_2007, monras_optimal_2007, adesso_optimal_2009} with different degrees of generality. In previous studies, emphasis is placed on zero-temperature channels ($N=0$). In \cite{venzl_quantum_2007} several distinct probe states are considered, always with fixed tomographic measurements $X$ and $P$, whereas~\cite{monras_optimal_2007, adesso_optimal_2009} focus their attention on the optimal probe states, respectively within the single-mode Gaussian states and non-Gaussian states.
Moreover, while~\cite{venzl_quantum_2007} considers the use of entangled probes, no consideration is made of the optimality of the measurement scheme. On the other hand, References \cite{monras_optimal_2007, adesso_optimal_2009} consider optimality of both the measurement and the single-mode probe states, but they do not consider the use of entangled probes. In this section we will combine both approaches, namely, considering different Gaussian resources, including entanglement, while still using the powerful tools of quantum estimation theory in order to take into account the corresponding optimal measurement for each probe state. The yield for two-mode squeezed probe states as a function of the final state parameters is a highly involved expression which provides no physical insight. Remarkably, plugging in the dependence of the final parameters as functions of the initial mean photon number and the channel parameters provides manageable expressions, reported in Appendix~\ref{app:exact} along with the yields for single-mode states. Notice that in both cases, squeezing is taken at fixed phase. This does not affect the generality of the analysis since single- and two-mode squeezing along different quadratures can always be taken to the standard forms~\eqref{eq:CM.sm} and \eqref{eq:CM.tm} by means of single-mode phase shifts. The phase insensitivity of the considered channels guarantees that this will not affect the yield in Fisher information. We thus proceed to compare the different resources by writing the output parameters as functions of the initial ones~\cite{serafini_quantifying_2005}, which in turn are functions of the available energy. The resulting general expressions are exceedingly complex and offer little additional physical insight. We will, instead, explore specific parameter regimes of physical interest.
In order to present the forthcoming results in a manageable form, we will use, when convenient, the following definitions, \begin{subequations} \begin{eqnarray} x&=&n(n+1),\\ y&=&N(N+1),\\ z&=&e^\gamma-1. \end{eqnarray} \end{subequations} \subsection{Zero-temperature baths, $N=0$} \begin{figure}[t] \includegraphics[width=.45 \textwidth]{fig_two_gamma_zeroN.eps}~~~~~~~~ \caption{\label{fig:zeroN}[Color online] Log-log plot of the yield for the $\gamma$ parameter using thermal states (red), coherent states (blue), single-mode squeezed (green) and two-mode squeezed states (magenta) at different energy regimes, for $N=0$ and $\gamma=0.01$. The saturation of the performance for thermal and squeezed states is clear. On the other hand, the linear dependence of coherent and two-mode entangled states is readily visible. This is in accordance with Eqs.~\eqref{eq:zeroN}.} \end{figure} \begin{figure*}[t] \includegraphics[width=.475 \textwidth]{fig_threeA_gamma_low_n.eps}~~ \includegraphics[width=.475 \textwidth]{fig_threeB_gamma_high_n.eps} \caption{\label{fig:fisher_gamma}[Color online] Yield for the $\gamma$ parameter using thermal states (red), coherent states (blue), single-mode squeezed (green) and two-mode squeezed states (magenta) at different energy regimes, for $N=0.9$ and $\gamma=0.3$. [Right] Log-log plot of the yield for $n$ values greater than 1. The linear behavior of coherent and two-mode squeezed states is readily apparent, whereas saturation of the yield occurs both for thermal and single-mode squeezed states. At these parameter values we have $(e^\gamma-1)(2N+1)\simeq0.98$, thus $J_\textrm{2-m}/J_\textrm{coh.}\simeq 2.0$ for $n\gg1$.~[Left] Detail of the yield at low $n$ values in linear scale.
Different slopes corresponding to the different values of $J^{(1)}$ are apparent.} \end{figure*} As a first approach, and in order to put our results in context with previous related studies \cite{venzl_quantum_2007, monras_optimal_2007, adesso_optimal_2009}, we analyze the limiting case of baths at zero temperature. Zero-temperature baths are the most commonly encountered in quantum optics, and the case to which most of the existing literature is dedicated. Under the assumption that $N=0$, the general expressions obtained in Appendix~\ref{app:exact} reduce to \begin{subequations}\label{eq:zeroN} \begin{eqnarray} J_\text{coh.}&=& \frac{n}{z+1},\\ J_\text{th.}&=&\frac{n}{z+1+n},\\ J_\text{sq.}&=& \frac{n}{z}\cdot\frac{1+z^2}{1+z(z+2(n+1))},\\ J_{\textrm{2-m}}&=&\frac{n}{z}. \end{eqnarray} \end{subequations} It is easy to verify that $J_{\textrm{2-m}}$ is the largest of all these quantities. The relations $J_{\textrm{2-m}}\geq J_\text{coh.}\geq J_\text{th.}$ are obvious. On the other hand, the relation $J_{\textrm{2-m}}\geq J_{\textrm{sq.}}$ only requires observing that $(1+z^2)(1+z(z+2(n+1)))^{-1}\leq 1$. A characteristic feature that we will encounter later on with greater generality is the fact that both thermal and single-mode squeezed states saturate their performance when $n$ is large [see Fig.~\ref{fig:zeroN}]. On the other hand, the performance of coherent and two-mode squeezed states grows linearly with $n$. This clearly leaves the latter as the two candidates for optimality when $n$ is large. A relevant question is: when does the performance of one become \emph{much} larger than that of the other? As will be seen, two-mode squeezed vacuum states always perform better than coherent ones. On the other hand, it is easy to see that the increase in performance of two-mode squeezed states is most relevant when $z\ll z+1$ ($\gamma\ll1$), since then $J_{\textrm{2-m}}\gg J_{\textrm{coh.}}$. This increase in performance is independent of the amount of energy in the probe state.
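The ordering claimed for Eqs.~\eqref{eq:zeroN} is also easy to confirm numerically. A minimal sketch (the parameter grid is arbitrary):

```python
import numpy as np

def yields_zero_N(n, z):
    """Eqs. (zeroN): yields J_gamma at bath temperature N = 0; z = e^gamma - 1."""
    J_coh = n / (z + 1)
    J_th  = n / (z + 1 + n)
    J_sq  = (n / z) * (1 + z**2) / (1 + z * (z + 2 * (n + 1)))
    J_2m  = n / z
    return J_coh, J_th, J_sq, J_2m

for n in np.linspace(0.1, 50, 25):
    for z in np.linspace(0.01, 5, 25):
        J_coh, J_th, J_sq, J_2m = yields_zero_N(n, z)
        # two-mode squeezed vacuum dominates; coherent beats thermal
        assert J_2m >= J_sq and J_2m >= J_coh >= J_th
```

The ratio $J_{\textrm{2-m}}/J_\textrm{coh.}=(z+1)/z$ is independent of $n$ and diverges as $\gamma\to0$, in line with the statement above that the advantage is largest for small losses.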
This can be observed also from Fig.~\ref{fig:zeroN}. \subsection{Low energy regime} We now turn to the most general situation where the thermal bath has nonzero temperature, \emph{i.e.} photons can \emph{leak into} the quantum system in addition to \emph{leaking out} of it. We consider the two interesting parameter regimes with practical relevance, namely, that of low-energy probes and that of high-energy probes. The former is best suited for situations where the properties of the channel (bath) under inspection are sensitive to the effect of intrusive probing. It is worth stressing that in some cases, using a small amount of energy may not provide a substantial gain with respect to the performance of the vacuum. On the other hand, there are situations where the choice of the probe state critically determines the attainable accuracy. We wish to identify those situations. In the low energy regime ($n\ll1$) we perform the Taylor expansion \begin{equation} J_\gamma=J^{(0)}+J^{(1)} n+O(n^2), \end{equation} where obviously $J^{(0)}$ is independent of the kind of state being considered, and corresponds to the performance of the vacuum. The common leading constant is thus given by \begin{equation} J^{(0)}=\frac{N/z}{1+z(N+1)}. \end{equation} Corrections of order $n$ contribute with coefficients \begin{subequations} \begin{eqnarray} J^{(1)}_\text{coh.}&=&\frac{1}{1+z(1+2N)},\\ J^{(1)}_\text{th.}&=&-\frac{(z+1) (1+2z(N+1))}{z^2 (1+z(N+1))^2},\\ \nonumber J^{(1)}_\text{sq.}&=& \frac{2N+1}{z}-\frac{z(1+N)^2(1+2N)}{(1+z(N+1))^2} \\ &&+\frac{2(1+2N)^2}{(1+z)^2+2Nz(1+z(N+1))},\\ J^{(1)}_{\textrm{2-m}}&=& \frac{(z+1)^2+N (z (z+2)+2)}{z (1+z(N+1))^2}. \end{eqnarray} \end{subequations} A few comments are in order. First of all, notice from Eq.~\eqref{eq:exact.gamma.coherent} that the yield for coherent states is a polynomial of first degree in $n$. Therefore, $J^{(0)}+J_\textrm{coh.}^{(1)}n$ gives the exact expression.
On the other hand, the thermal correction is nonpositive, $J^{(1)}_\textrm{th.}\leq 0$, which implies that weak thermal fields perform worse than the vacuum [see Fig.~\ref{fig:fisher_gamma}~(Left)]. The first question to ask is when coherent states provide any significant improvement over the vacuum. For this we consider the condition $J^{(0)}\ll J_\textrm{coh.}^{(1)}n$. This reduces to \begin{equation}\label{eq:coherent.cond} \frac{N(z(2N+1)+1)}{z(z(N+1)+1)}\ll n. \end{equation} On the other hand, the same condition for two-mode squeezed probe states, $J^{(0)}\ll J^{(1)}_\textrm{2-m}n$, reduces to \begin{equation}\label{eq:2m.cond} \frac{N(z (N+1)+1)}{(N+1)(z+1)^2+2N}\ll n. \end{equation} Notice that for moderate values of $N$ and large values of $\gamma$ ($z\gg1$) Eqs.~\eqref{eq:coherent.cond} and \eqref{eq:2m.cond} reduce to $N (2N+1)/(z(N+1))\ll n$ and $N/z\ll n$ respectively, which shows that the regime required for Eq.~\eqref{eq:2m.cond} is entered earlier than that of Eq.~\eqref{eq:coherent.cond} for increasing $z$. In the limit of small losses ($z\ll1$), we can expand Eqs.~\eqref{eq:coherent.cond} and \eqref{eq:2m.cond} to zeroth order in $z$ to obtain, for $J^{(0)}\ll J_\textrm{coh.}^{(1)}n$, \begin{equation}\label{eq:coherent.cond2} \frac{N}{z}+N^2\ll n, \end{equation} whereas for $J^{(0)}\ll J^{(1)}_\textrm{2-m}n$ we get \begin{equation} \frac{N}{1+2N}\ll n. \end{equation} The condition for $J^{(0)}\ll J^{(1)}_\textrm{2-m}n$ thus reduces to $N\ll n/(1-2n)\simeq n$. Notice that this is not a sufficient condition to achieve a significant improvement using coherent probes [Eq.~\eqref{eq:coherent.cond2}]. On the other hand, one can see that $\gamma\ll1$ (that is, $z\ll1$) with moderate $N$ ($z(1+2N)\ll1$) implies $J_\textrm{coh.}^{(1)}\ll J_\textrm{2-m}^{(1)}$. This means that at moderate temperatures, and with low energy in the probe state, two-mode squeezed states significantly outperform coherent states in the regime of small losses.
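Two properties of the first-order coefficients used above, namely that $J^{(1)}_\textrm{th.}$ is never positive and that $J^{(1)}_{\textrm{2-m}}$ exceeds $J^{(1)}_\textrm{coh.}$ (consistent with the general dominance of two-mode squeezing), can be spot-checked numerically. A quick sketch over an arbitrary parameter grid:

```python
import numpy as np

# First-order (in n) coefficients of J_gamma; z = e^gamma - 1, N = bath photons.
def J1_coh(z, N):
    return 1 / (1 + z * (1 + 2 * N))

def J1_th(z, N):
    return -((z + 1) * (1 + 2 * z * (N + 1))) / (z**2 * (1 + z * (N + 1))**2)

def J1_2m(z, N):
    return ((z + 1)**2 + N * (z * (z + 2) + 2)) / (z * (1 + z * (N + 1))**2)

for z in np.linspace(0.05, 5, 30):
    for N in np.linspace(0.05, 3, 30):
        assert J1_th(z, N) <= 0             # weak thermal noise never helps
        assert J1_2m(z, N) > J1_coh(z, N)   # two-mode squeezing beats coherent
```

For $z\ll1$ one finds $J^{(1)}_{\textrm{2-m}}\approx(2N+1)/z\gg J^{(1)}_\textrm{coh.}\approx1$, reproducing the small-loss advantage discussed in the text.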
\begin{figure*}[t] \includegraphics[width=.475 \textwidth]{fig_fourA_N_low_n.eps}~~ \includegraphics[width=.475 \textwidth]{fig_fourB_N_high_n.eps} \caption{\label{fig:fisher_N}[Color online] Yield for the $N$ parameter using thermal states (red), coherent states (blue), single-mode squeezed (green) and two-mode squeezed states (magenta) at different energy regimes, for $N=0.9$ and $\gamma=0.3$. [Left] Detail of the yield at low $n$ values. The $X$ parameter is negative, which implies that single-mode squeezing is detrimental (at small $n$ values). [Right] Behavior at high $n$ values. } \end{figure*} \subsection{High energy regime} High-energy probes are of interest when the channel being probed is not as delicate, or need not be preserved. A natural instance of this situation is in probing the photon-photon scattering predicted by QED and non-standard models of elementary particles. The high energy regime has a substantially different behavior. Expanding the relevant yield functions from Appendix~\ref{app:exact} in inverse powers of $n$ we obtain series of the form \begin{equation} J_\gamma=J^{(-1)}n+J^{(0)}+o(1). \end{equation} In some cases $J^{(-1)}$ will vanish, rendering the corresponding class of states useless compared to those for which $J^{(-1)}$ does not vanish. Explicitly, we have \begin{subequations} \begin{eqnarray} J_\text{coh.}&=&\frac{n}{1+z(2 N+1)}+O(1),\\ J_\text{th.}&=&1+o\left(\frac{1}{n}\right),\\ J_\text{sq.}&=&\frac{1}{2} \left(1+\frac{1}{z^2}\right)+o\left(\frac{1}{n}\right),\\ J_{\textrm{2-m}}&=&\frac{n}{z(2 N+1)}+O(1). \end{eqnarray} \end{subequations} The first relevant fact to notice is that, contrary to the low energy regime, different Gaussian resources perform differently in the limit of large energy. This is hardly a surprise. We observe that the asymptotic performance is bounded for thermal and single-mode squeezed states.
This is in contrast to the fact that single-mode squeezed states have proven highly efficient for other precision measurements such as optical phase~\cite{caves_quantum-mechanical_1981, klauder_squeezed_states, monras_optimal_2006} and magnetometry~\cite{molmer_estimation_2004}, among others. On the other hand, coherent and two-mode squeezed states provide an unbounded yield, as is manifest from the nonvanishing $J^{(-1)}$ terms, giving a linear growth in $J_\gamma$ with increasing $n$. This makes the constant zeroth order correction irrelevant. A very important difference between coherent states and two-mode squeezed states becomes readily apparent: while for coherent states the growth rate $J^{(-1)}$ is bounded, for two-mode squeezed states it is not. In particular, the difference between the two yields becomes most significant when $z(2 N+1)\ll1$ and negligible when $z(2 N+1)\gg1$. This is certainly relevant for detecting very small damping parameters, for which the inverse dependence on $z$ may even be sufficient to overcome the practical limitations of achieving very high $n$ values. \section{Estimating temperature}\label{sec:N} Quantum thermometry has become a subject of high physical relevance with the advent of ultracold atomic gases~\cite{hofferberth_probing_2008,gottlieb_quantum_2009,manz_two-point_2010}. At low temperatures, new methods need to be envisaged to determine the magnitude of thermal fluctuations in atomic clouds. In this section we analyze the quantum-limited precision bounds to the estimation of temperature (mean photon number $N$) in a bosonic thermal bath, coupled to a probe system prepared in a Gaussian state, with the coupling strength not necessarily large, \emph{i.e.}, far from thermalization. A straightforward method to measure temperature is to let the probe system, coupled to the bath, thermalize.
Quadrature measurements then provide an estimator of the mean photon number, which in turn provides an estimator of the bath temperature. This approach has several drawbacks. Most importantly, it requires in general a large coupling constant $\gamma\gg1$ in order to reach the steady state. However, there may be situations where the coupling constant cannot be chosen at will. On the other hand, this does not necessarily provide the optimal estimation accuracy. As we will see, two-mode squeezed states can outperform the sensitivity of the vacuum state or other classes of Gaussian states, for any value of the parameters. Contrary to the decay parameter $\gamma$, which affects both first and second moments, the temperature of the bath only affects the second moments. First moments evolve independently of the bath temperature~\cite{serafini_quantifying_2005}. This has immediate consequences for the sensitivity of coherent states, which will perform equivalently to the vacuum state. Thus, we will not consider coherent probe states in this section. We will follow the same approach taken in the previous section, by addressing different energy regimes in the probe states. \subsection{Low energy regime} \begin{figure*}[t] \includegraphics[width=.45 \textwidth]{fig_fiveA_N_low_n_bis.eps}~~~~~~~~ \includegraphics[width=.45 \textwidth]{fig_fiveB_N_high_n_bis.eps} \caption{\label{fig:fisher_N2}[Color online] Yield for the $N$ parameter using thermal states (red), coherent states (blue), single-mode squeezed (green) and two-mode squeezed states (magenta) at different energy regimes, for $N=0.7$ and $\gamma=0.08$. [Left] Detail of the yield at low $n$ values. The $X$ parameter is positive, as can be seen from the positive slope for single-mode squeezed states ($n\ll1$). [Right] Behavior at high $n$ values.
} \end{figure*} As in the case of $J_\gamma$, $J_N$ will be limited to a constant value corresponding to the sensitivity of the vacuum state $J^{(0)}$ at small values of $n$. However, since first moments are unaffected by the temperature of the bath, the yield of coherent states will equal that of the vacuum. As in the previous section, we will expand $J_N$ in powers of $n$, in order to obtain and analyze the low-energy yield for each class of states, \begin{equation} J_N=J^{(0)}+J^{(1)}n+O(n^2). \end{equation} Taking the expressions from Appendix~\ref{app:exact} we get \begin{equation} J^{(0)}=J_\textrm{coh.}=\frac{z (z+1)^2}{N (1+z(N+1))}, \end{equation} to which corrections of order $n$ contribute with factors \begin{subequations} \begin{eqnarray} J^{(1)}_\text{th.}&=&-\frac{(z+1)^2 (z(2 N+1)+1)}{N^2 (z(N+1)+1)^2},\\ \nonumber J^{(1)}_\text{sq.}&=& \frac{32 z (z+1)^2 \left(2(\xi-1)-z \left(4 z \xi ^3+(z+2) \xi +1\right)\right)} {\left(4 z \xi ^2+4 \xi -z-2\right)^2\left(z \left(4 z \xi ^2+4 \xi +z+2\right)+2\right)}\\ &&\\ J^{(1)}_{\textrm{2-m}}&=&\frac{(2 N+1) z (z+1)^2}{N\left(N+1\right) (z(N+1)+1)^2}, \end{eqnarray} \end{subequations} where we have defined $\xi=N+1/2$. It is immediate to observe that $J^{(1)}_\text{th.}\leq0$, which implies that, similarly to the situation for $J_\gamma$, small thermal fluctuations in the probe state can only be detrimental. On the other hand, the small-$n$ correction for single-mode squeezed states has no definite sign, the latter being positive only when $X=2(\xi-1)-z \left(4 z \xi ^3+(z+2) \xi +1\right)>0$. Solving the inequality $X>0$ for $z$ we obtain \begin{equation} z<\frac{(2 \xi-1) \sqrt{8 \xi ^2+1} -2 \xi-1 }{2\xi(4 \xi ^2+1) }. \end{equation} The right hand side is positive only when $\xi>1$, which means that only for $N>1/2$ is it possible to have a gain in yield by using single-mode squeezed probes.
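The sign analysis of $X$ and the threshold just derived can be verified numerically. A short sketch (the sampled $N$ values are arbitrary):

```python
import numpy as np

def X(z, N):
    """Sign-determining factor of J^(1)_sq, with xi = N + 1/2."""
    xi = N + 0.5
    return 2 * (xi - 1) - z * (4 * z * xi**3 + (z + 2) * xi + 1)

def z_threshold(N):
    """Positive root of X = 0 in z (the bound quoted in the text)."""
    xi = N + 0.5
    return ((2 * xi - 1) * np.sqrt(8 * xi**2 + 1) - 2 * xi - 1) \
        / (2 * xi * (4 * xi**2 + 1))

for N in [0.6, 1.0, 2.5]:          # N > 1/2: threshold is positive
    zs = z_threshold(N)
    assert zs > 0
    assert abs(X(zs, N)) < 1e-12   # X vanishes at the threshold
    assert X(0.5 * zs, N) > 0      # squeezing helps below threshold
    assert X(2.0 * zs, N) < 0      # ... and hurts above it

assert z_threshold(0.3) < 0        # N < 1/2: no positive threshold
```

Since $X$ is a downward-opening quadratic in $z$ with $X(0)=2(\xi-1)$, the quoted bound is simply its positive root, which exists only for $\xi>1$.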
We thus conclude that for $0<N<1/2$ the best single-mode probe (at low energies, $n\ll1$) is the vacuum state. However, if the bath temperature is sufficiently high [$N>1/2$], it is possible to improve on the sensitivity of the vacuum by single-mode squeezing. We now turn to analyze the yield of two-mode squeezed states. As can be readily seen in Figs.~\ref{fig:fisher_N} and \ref{fig:fisher_N2}, the behavior is rather simple. The slope at small values of $n$ is always positive and, moreover, as follows from the relations in Appendix~\ref{app:exact}, always larger than that of single-mode squeezed states. Imposing that $J_\textrm{2-m}^{(1)}n\gg J^{(0)}$ yields $n(2N+1)\gg z(N+1)^2+N+1$. Since the right hand side is greater than unity, we have $n(2N+1)\gg 1$ and, given that $n$ is small, it follows that $N$ has to be large. Therefore we can reduce the condition to $z\ll(2n-1)/N$, which is never satisfied for small $n$. We thus conclude that low energy entangled probes always outperform single-mode ones, for any regime of the parameters, although the improvement cannot always be of significant magnitude ($J_\textrm{2-m}^{(1)}n\gg J^{(0)}$ cannot be achieved for small $n$ values). Most remarkably, two-mode squeezed states are the only class of Gaussian states that perform better than the vacuum for all values of the parameters. \subsection{High energy regime} The high energy limit for $J_N$ is quite uninteresting. The reason is that all yields saturate, and no significant improvement can be achieved using two-mode squeezed probes as compared to the use of coherent states. The limiting expressions read, to order $1/n$, \begin{eqnarray} J_\text{th.}&=&\frac{z^2(1+z)^2}{n^2}\simeq 0,\\ J_\text{sq.}&=&\frac{2 (1+z)^2}{(2 N+1)^2}-\frac{(1+z)^2}{(2 N+1)^3zn},\\ J_{\textrm{2-m}}&=&\frac{(1+z)^2}{N(N+1)}-\frac{(1+z)^2}{N\left(2 N^2+3 N+1\right) zn}. \end{eqnarray} We have presented the second order term for $J_\textrm{th.}$ because it is the leading one.
The yield, however, tends to vanish, as can be expected by observing that in the limit of a highly energetic probe, the thermal fluctuations in the probe are infinitely larger than those induced by the bath, and therefore no inference about the latter can be obtained. Single- and two-mode squeezed states have nonzero limiting yields but, as anticipated, the yield saturates for highly energetic probes. It is easy to check that in the limit $n\rightarrow\infty$ we have $J_{\textrm{2-m}}\geq J_\text{sq.}$, but no order can be established between the vacuum state and single-mode squeezed states. A crossover can occur between the performances of the vacuum and single-mode squeezed states depending on the values of $\gamma$ and $N$. Concentrating on the asymptotic yield, a natural question is to understand in what situations two-mode squeezing performs much better than the vacuum state. This is simple to answer by observing that \begin{equation} \frac{\lim_{n\rightarrow\infty}J_{\textrm{2-m}}}{J_\textrm{coh.}}=1+\frac{1}{(N+1)z}, \end{equation} so that if $(N+1)z\ll1$ (in the high $n$ limit) then $J_{\textrm{2-m}}\gg J_\textrm{coh.}$. This condition is likely to occur in several practical situations, for channels close to ideal, \emph{i.e.}, when the decay rate $\gamma$ is very low and the temperature is low or moderate. Notice that this is not the first time that we encounter this condition for a significant improvement of squeezed states over coherent ones. All results indicate that whenever there is an improvement of two-mode squeezed states over the vacuum, it is always in the limit of small coupling to the bath ($\gamma\ll1$, i.e., a close to ideal channel).
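This asymptotic ratio follows directly from the $n\rightarrow\infty$ limit of $J_{\textrm{2-m}}$ together with the coherent-state yield $J_\textrm{coh.}=J^{(0)}$. A small numerical cross-check (the parameter grid is arbitrary):

```python
import numpy as np

def J2m_limit(z, N):
    """n -> infinity limit of the two-mode squeezed yield for N."""
    return (1 + z)**2 / (N * (N + 1))

def Jcoh(z, N):
    """Coherent states equal the vacuum: J_coh = J^(0)."""
    return z * (z + 1)**2 / (N * (1 + z * (N + 1)))

for z in np.linspace(0.05, 5, 20):
    for N in np.linspace(0.1, 3, 20):
        ratio = J2m_limit(z, N) / Jcoh(z, N)
        assert np.isclose(ratio, 1 + 1 / ((N + 1) * z))
```

The factors $(1+z)^2$ cancel in the ratio, leaving $(1+z(N+1))/(z(N+1))=1+1/((N+1)z)$, which makes the smallness condition $(N+1)z\ll1$ explicit.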
\section{Entanglement and the performance of Non-Gaussian states}\label{sec:nonG} \begin{figure*}[t] \includegraphics[width=.46\textwidth]{fig_sixA_random.eps}~~~~ \includegraphics[width=.5\textwidth]{fig_sixB_JvsE.eps} \caption{\label{fig:random} [Color online] Scatter plot of 4000 random probe states with at most 3 photons in each mode (dots), maximally entangled states at dimension cutoff $3\leq d\leq 6$, $\ket\psi=\frac{1}{\sqrt d}\sum_k\ket{k}\ket{k}$ (crosses), and two-mode squeezed vacuum states (solid line). \emph{Left}: Yield against mean photon number in the channel mode $a$ as a reference. The dashed line interpolates the behavior of maximally entangled states. Color code corresponds to the entropy of entanglement in the probe state $E(\rho)=-\mathrm{tr}\rho_a\log\rho_a$, $\rho_a=\mathrm{tr}_b \rho$. \emph{Right}: Ratio between the yield and the mean photon number against the entropy of entanglement. Observe that several highly efficient probes (those with high ratio $J_\gamma/n$) are relatively unentangled as compared to the maximally entangled ones with similar performance. Clearly, the squeezed vacuum state is much more entangled than randomly sampled states with similar yield.} \end{figure*} So far we have seen that, in order to optimally estimate the channel parameters in Eq.~\eqref{eq:channel}, two-mode squeezing is the most effective resource within the arena of Gaussian inputs. This fact suggests two immediate questions. \emph{1)} Is there a direct relation between entanglement and the performance in channel estimation? And \emph{2)} Are there non-Gaussian states outperforming the squeezed vacuum for the same energy supply? Questions similar to \emph{1)} have arisen previously in the literature, both in the finite dimensional case~\cite{fujiwara_quantum_2003,fujiwara_estimation_2004, boixo_operational_2008} and in the continuous variable one (albeit in somewhat different setups~\cite{tan_quantum_2008}), considering different kinds of channels. 
Question \emph{2)} has also been addressed in the context of quantum channel estimation~\cite{adesso_optimal_2009}, and regarding the performance in other quantum information tasks, especially continuous-variable quantum teleportation~\cite{dellanno_continuous-variable_2007}. With the available techniques it is difficult to give a precise \emph{quantitative} answer, since numerical techniques are not well-suited to deal with infinite-dimensional systems, and analytic methods are not yet well developed beyond the Gaussian regime. The difficulty with numerical methods resides in the fact that Hilbert-space truncation fails at $N\neq0$: thermal baths immediately populate all levels in Fock space, rendering a direct numerical approach futile or, at best, a crude approximation. In order to address the question of the sensitivity of non-Gaussian states for parameter estimation, techniques beyond those developed so far are needed, and it is beyond the scope of this work to pursue them. Instead, we will provide a simple \emph{qualitative} answer by restricting the channels of interest to those at zero temperature, \emph{i.e.}, $N=0$. In this case, populated levels in Fock space only decay to lower energy levels and Hilbert-space truncation provides exact numerical results. We have computed numerically the quantum Fisher information $J_\gamma$ at $\gamma=0.1$ for $4000$ states picked from $\mathbb C^4\otimes\mathbb C^4$, the subspace with at most 3 photons in each mode, randomly distributed according to the $SU(16)$ Haar measure. In Fig.~\ref{fig:random} we report our findings. The left plot displays the values of $J_\gamma$ against the mean photon number in mode $a$, and the right plot displays the \emph{efficiency} (\emph{i.e.}, the ratio $J_\gamma/n$) against the entropy of entanglement $E(\rho)=-\mathrm{tr}\rho_a \log \rho_a,~\rho_a=\mathrm{tr}_b\rho$. 
Along with the random states we display the results for maximally entangled states within the cutoff $3\leq d\leq 6$ and the two-mode squeezed vacuum. Our findings reveal that most states have a very high performance in relation to the amount of energy they carry in the channel mode. In particular, for a fixed dimension cutoff, the more entangled the states are, the better they perform on average. Moreover, the maximally entangled state attains the bound set by the squeezed vacuum. It is of course interesting to ask how the performance changes when one introduces temperature in the channel. This is beyond the capabilities of numerical methods relying on a dimensional cutoff, and more advanced methods would need to be envisaged. \section{Discussion}\label{sec:discuss} We have obtained sharp precision bounds on the estimation of $\gamma$ and $N$ for four classes of Gaussian states, namely, coherent, thermal, single-mode squeezed vacuum and two-mode squeezed vacuum states. We have shown that the two-mode squeezed vacuum always outperforms any other class of Gaussian states. The improvement of two-mode squeezed vacuum states over coherent states is most relevant when the coupling parameter $\gamma$ is weak. In particular, at zero or finite temperature, the yield $J_\gamma$ of two-mode squeezed states increases much faster with $n$ than the yield of coherent states at small values of $\gamma$. For $J_N$, comparing the yield of two-mode squeezed vacuum states against the yield of the vacuum, we find that, for small values of $n$, no significant improvement can be obtained. The situation changes dramatically in the high-energy regime, where, despite a saturation of the yield (all yields saturate to a maximum value), saturation occurs at much higher yields when $z(N+1)\ll1$. 
Summarizing, we have shown that the two-mode squeezed vacuum always outperforms any other Gaussian resource for both estimation problems ($\gamma$ and $N$), and we have identified the situations in which this improvement is most significant. We have also provided numerical evidence that the squeezed vacuum state provides an upper bound to the performance of arbitrary states with a dimensional cutoff. This suggests a deeper analysis, and the causes of this optimality should be investigated. Turning to the multiparametric problem of simultaneously estimating both parameters of the channel, we have shown that the optimal state (two-mode squeezed vacuum) provides optimal sensitivity for both parameters, and that in the many-copy limit ($k\rightarrow\infty$) a collective measurement exists which saturates both bounds simultaneously, as a consequence of Eq.~\eqref{EQ:COMMUTE}. These results imply that the problems of estimating $\gamma$ and $N$ enjoy a strong form of compatibility. Not only is the optimal probe state common to both problems, but the corresponding optimal measurements also commute in the asymptotic limit. Therefore an overall protocol optimizing both tasks can be envisaged (in the asymptotic $k\rightarrow\infty$ limit). It is a relevant question to determine whether lifting the restriction to Gaussian states can provide increased performance, and whether this kind of compatibility remains true in the more general setting involving arbitrary probe states. ~\\* \noindent\textbf{Acknowledgements.} The authors are thankful to Dr. M. Gu\c{t}\u{a} for very helpful discussions. We acknowledge financial support from the European Commission of the European Union under the FP7 STREP Project HIP (Hybrid Information Processing), Grant Agreement n. 221889.
\section{Introduction} Over the past decade we have seen an explosion of quantum programming languages (QPLs) and quantum programming paradigms \cite{quantum-languages-overview, heim2020quantum} that enable users to interact with quantum hardware and implement quantum algorithms. To date, most QPL development effort has been focused on circuit-level description languages \cite{qasm, cross2021openqasm, cirq, jaqal, pyquil}, which allow users to express quantum algorithms in the standard quantum circuit picture consisting of a series of time-independent single- and multi-qubit gates \cite{nielsen2002quantum}. However, the level of abstraction that this circuit layer offers is limited in its usefulness as long as digital quantum computing is dominated by errors. Gate fidelities must be improved by orders of magnitude to reach the thresholds for error correcting codes small enough to be run on current quantum computers \cite{knill1998resilient}. To get us closer to that goal, using quantum computers as the analog machines that they really are may help realize the necessary operational fidelity for the highly anticipated breakthrough in quantum computing \cite{shi2020resource, jordan2008quantum}. Analog control pulse engineering for example has seen broad adoption across various hardware platforms with encouraging results \cite{roos2008ion, milne2020phase, khaneja2005optimal, goswami2003optical, shapira2018robust}, suggesting that control pulse design will remain a productive area of research for some time. Apart from general-purpose quantum computing, there are several promising near-term applications (\emph{e.g.} analog quantum simulation \cite{monroe2021programmable}, digital-analog quantum computation \cite{parra2020digital}, simulation \cite{rajabi2019dynamical}) that also require a level of control over the quantum hardware that goes far beyond the digital circuit layer. 
A ``full-stack'' quantum programming language that integrates all the required low-level analog hardware controls seamlessly with digital circuit abstraction layers would be ideal for these applications. Though promising candidates for ``pulse-level'' control have been proposed \cite{qiskit-openpulse, cross2021openqasm}, they do not expose hardware controls beyond the generation of control and readout waveforms. However, the ability to access hardware controls beyond simple waveform generation opens up the possibility for much broader fields of research, such as transport-based quantum control in the trapped-ion QCCD architecture \cite{pino2021demonstration}. Additionally, in the case of remote-access quantum hardware, it reduces the need for ``behind-the-scenes'' compilation and allows for full transparency in not only how abstract gates are implemented on the hardware, but also in the calibration processes that are required to tune up the whole system. These calibration processes pose an essential challenge in quantum hardware platforms. The tune-up routines to achieve peak device performance are generally not modifiable or even exposed to the user, since they require full access to the experimental hardware and a language that is expressive enough to describe this level of access. To address these challenges, we have developed a ``true'' full-stack programming language, Quala, that provides the complete connection between the high-level circuit layer and the low-level timing commands for FPGA hardware. Quala also addresses the calibration challenge in a transparent way. A powerful run-time decision logic allows conditional execution directly in the user's code. A reusable library of standard gates allows algorithm designers to focus on high-level circuits. 
We designed this language to run on the QuantumION platform, which is a remote-access trapped-ion quantum processor built at the Institute for Quantum Computing in Waterloo, Canada\footnote{A separate publication about the hardware platform is forthcoming, and we will thus keep its description here brief.}. Our programming language fully describes \emph{all} of the operations required to realize a trapped-ion quantum computer. While the accompanying hardware setup for our experiment is tailored to the trapped-ion architecture, the concepts and ideas that went into our language design are generic enough to allow portability to other experimental setups. The language even supports other implementations of quantum computing so long as they are controlled with FPGA hardware. Many academic as well as industrial labs face problems such as the transition between the circuit and the timing layer, real-time decision logic, and the incorporation of calibration data with symbolic algebra. In this manuscript we present an overview of this generic quantum programming language and discuss our proposed solutions to these common experimental problems. This manuscript is structured as follows: We begin with an outline of the design philosophy and principles that we set out to achieve with this language. Next, we articulate the core requirements that are necessary to unambiguously specify quantum programs more broadly. Following this, we describe the three elements that we have identified as key abstractions to organize a fully-specified quantum program. After this, we continue with an in-depth discussion of the formal structure of the language and the individual language elements that make up a full program. We discuss the compilation process of programs written at any layer of the stack and provide references for integration of the language with FPGA hardware. 
We conclude our work with a brief summary of the highlights of the language and make recommendations for user adoption and future development work. \section{The Quala Programming Language} Quala (pronounced \textipa{/'kw{\"a}l@/}) is a true full-stack programming language designed originally in the context of the QuantumION platform. While the name is inspired by the project itself (\emph{QUA}ntumion \emph{LA}nguage), the design principles and suggestions we put forth here range far beyond this specific platform. It is a meta-language that we have defined in XML, and users can interact with it through various language bindings in languages such as Python, Matlab and Julia. Before we go into the detailed implementation of the language, we begin with a discussion of our design philosophy. \subsection{Background and design philosophy} Our design philosophy behind Quala is to create a programming interface that is fully transparent to its users, \emph{i.e.} with no ``behind-the-scenes'' compilation of input circuits or hidden parameters and calibration routines that may yield unexpected results. To achieve this, we have designed our language according to the following principles: \begin{itemize} \item \textbf{Full-stack control.} Users can program at all layers of the programming stack from hardware-agnostic, high-level quantum circuits all the way to precisely timed hardware commands. The transition between our \emph{Gate Layer} and \emph{Timing Layer} is seamless; it is easy to switch between these two complementary views of a quantum program (even within the same program). \item \textbf{Transparent calibration.} All calibration data are global to the machine and stored in a historical database. Calibration programs are written in the same language and users can inspect every routine. 
\item \textbf{Controllability over simplicity.} We sacrifice the simple circuit-level programming picture and replace it with an interface that is more complex but far more expressive. \item \textbf{Open source design.} We go beyond simply publishing the source code\footnote{With the exception of proprietary information about selected hardware drivers.}, and embrace the spirit of open-source design by exposing the design of gates, timing functions, and the calibration operations within the language itself. \end{itemize} The most striking consequence of these design principles is that our language not only supports programming of hardware-agnostic quantum circuits but enables a framework for much broader areas of research. Quala can be used for expressing control pulse design problems, analog quantum simulation, and, in the context of our platform, even fundamental ion trap research in a QCCD architecture. In the next section, we describe the features that support all these experimental problems. \subsection{Unique requirements of quantum programs} The following features are required to span the space of possible quantum computing experiments: \begin{itemize} \item \textbf{Support for high-level quantum circuits.} In the quantum circuit picture, quantum information is manipulated by applying temporal sequences of quantum gates to a register of qubits. This abstraction assumes that the key information is the \textit{order} in which quantum gates are applied, not the time at which they occur. Thus, a QPL must support specifications that look like a sequence of quantum gates (perhaps applied in parallel) with no explicit timing information beyond ``which gate comes after which''. We refer to this programming paradigm as our Gate Layer. \item \textbf{Support for precision-timed events.} As previously discussed, quantum hardware is highly dependent on the precise details of which operations are applied at which times. 
For example, a quantum gate composed of a laser pulse will be highly error-prone if the laser is turned off 100 nanoseconds too early. Thus, a QPL capable of specifying all the details of what occurs ``under the hood'' must have a consistent mechanism to specify the time at which different experimental parameters (such as laser amplitudes or trap voltages) are changed. Moreover, these specification mechanisms must integrate seamlessly with Gate Layer programming if both are to be realized within the same language. We refer to this programming paradigm as our Timing Layer. \item \textbf{Custom waveform control.} Beyond simply specifying the time at which a quantum gate or other operation occurs, it is necessary to specify the full details of the control pulses to maximize system performance. For example, a square laser pulse may have much worse performance as a quantum gate than a laser pulse with a custom amplitude shape \cite{choi2014optimal}. Our language supports custom waveforms in multiple ways: from the direct application of Timing Layer controls to build up a detailed description of a waveform (e.g. by specifying a different voltage change at each time step), to allowing users to provide and play back their own waveforms, to allowing control of custom digital hardware (such as digital oscillators and interpolators) that simplifies the parametrization of the waveforms. \item \textbf{Real-time decision logic.} Many quantum programs benefit from the ability to measure a subset of qubits and subsequently perform \textit{different} operations based on the outcome of the qubit measurements. Our language provides a generic mechanism to branch between different, reusable segments of code. 
\end{itemize} Having listed the core requirements the language must satisfy to support all of our envisaged applications, we now turn our attention to key enabling features we have designed to aid in the usability and modularity of specifying time sequences for an arbitrarily large number of physical voltage channels that are used to orchestrate a quantum program. \subsection{Key technical features} \noindent Our language must implement transparent, full-stack control without incorporating hardware-specific commands into the integral structure of the language. Users must be able to access all valid control parameters and calibration values necessary to run experiments. To this end, our language framework incorporates the following key technical features: \begin{enumerate} \item A \emph{Calibration Database}, to store and manage all experimental parameters relevant to user programs \item A \emph{Symbolic Algebra} framework that allows for flexible integration of calibration parameters into user programs \item A \emph{Standard Library} that contains all relevant experimental subroutines to facilitate the programming process. Standard gates are part of this library, and are built up of fundamental Timing Layer functions utilizing parameters from the Calibration Database. \end{enumerate} \noindent While the information in the Calibration Database and the Standard Library is hardware-specific, the concepts are hardware-agnostic. In the following sections, we use example parameters applicable to our trapped-ion system to showcase how these features work together. Similar examples could be conceived with parameters and routines for different types of quantum hardware. \subsubsection{Calibration Database} The Calibration Database contains a large collection of machine parameters that users may require to run their experiments. These include both parameters which are actively calibrated and parameters which remain constant. 
Parameters in the first category could include laser beam intensities and alignments, pulse durations and amplitudes, as well as higher-level parameters such as SPAM (state preparation and measurement) errors and gate fidelities. Parameters in the second category are not actively calibrated but may be relevant to different calculations that include calibration parameters. These parameters include physical constants (\emph{e.g.} ion energy level splittings) and experimental constants (\emph{e.g.} the direct digital synthesis (DDS) sample clock frequency). Parameters that are calibrated regularly are stored with associated calibration dates and times, and users have access to the entire calibration history of those values. To use a calibration parameter within a program, we provide language constructs like the \code{NamedConstant} directive. In the Python binding to our language, users can access parameters as follows: \begin{lstlisting}[language=mypython] import quala as ql ql.NamedConstant("RamanRedSidebandFrequency", date="most-recent") \end{lstlisting} When used within a program, the language compiler (see \autoref{sec::compiler}) will insert the corresponding value at the time of compilation. Until then, the expression remains symbolic. This is particularly useful in the context of shared-access quantum computers, where several users might submit their programs to a queue: when calibration processes in the background update certain parameters while the user's program is still in the queue, the compiler will automatically insert the most recent calibration value that is available at the time the program is actually run. At any point, the user can also inspect the values before compiling their program through a dedicated call to the database: \begin{lstlisting}[language=mypython] >>> ql.query_database("DefaultMicrowaveRabiRate", date="most-recent") .. 
<DatabaseEntry name="DefaultMicrowaveRabiRate" value="1" units="MHz" date="2021-05-31-08-55"> \end{lstlisting} \subsubsection{Symbolic Algebra} The Calibration Database alone is not sufficient to allow quantum programs that behave predictably over variations in machine parameters. Calibrating every derived quantity separately would be inefficient, and the resulting values would likely conflict with each other. Instead, the Calibration Database collects only critical parameters, which can then be combined mathematically. We developed a Symbolic Algebra as an integral part of our language to allow for this. In the binding languages, the Symbolic Algebra is enabled through overloading the standard algebraic operators. For example, given the example parameter for a microwave Rabi rate from above, we can calculate the corresponding time it takes to perform a $\pi$-pulse via: \begin{lstlisting}[language=mypython] pi_time = 3.14159 / ql.NamedConstant("DefaultMicrowaveRabiRate") \end{lstlisting} \noindent In the actual XML language, this expression would be written as: \begin{lstlisting}[language=XML] <qi:DivisionOperator> <qi:NumericLiteral>3.14159</qi:NumericLiteral> <qi:NamedConstant name="DefaultMicrowaveRabiRate"/> </qi:DivisionOperator> \end{lstlisting} Our language supports all standard algebraic operations as well as basic boolean logic. \subsubsection{Standard Library} We combine the Calibration Database and the Symbolic Algebra in what we call the Standard Library: a set of pre-defined \emph{calculations}, time-dependent \emph{functions} and time-independent \emph{gates}. 
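Before detailing the Standard Library contents, it may help to sketch how operator overloading of this kind can defer evaluation and produce an XML expression tree. The following is an illustrative, self-contained Python sketch, not the actual quala implementation; only the names \code{NumericLiteral} and \code{NamedConstant} are taken from the examples above, everything else is an assumption:

```python
class Expr:
    """Base class for symbolic expression nodes (illustrative only)."""

    def __truediv__(self, other):
        return BinaryOp("DivisionOperator", self, _wrap(other))

    def __rtruediv__(self, other):
        return BinaryOp("DivisionOperator", _wrap(other), self)


def _wrap(x):
    # Plain Python numbers are promoted to NumericLiteral nodes.
    return x if isinstance(x, Expr) else NumericLiteral(x)


class NumericLiteral(Expr):
    def __init__(self, value, units=""):
        self.value, self.units = value, units

    def encode_xml(self):
        return f'<qi:NumericLiteral units="{self.units}">{self.value}</qi:NumericLiteral>'


class NamedConstant(Expr):
    def __init__(self, name):
        self.name = name

    def encode_xml(self):
        # Stays symbolic: the compiler substitutes the calibrated value later.
        return f'<qi:NamedConstant name="{self.name}"/>'


class BinaryOp(Expr):
    def __init__(self, tag, left, right):
        self.tag, self.left, self.right = tag, left, right

    def encode_xml(self):
        return (f"<qi:{self.tag}>{self.left.encode_xml()}"
                f"{self.right.encode_xml()}</qi:{self.tag}>")


# Dividing a number by a named constant builds a tree instead of a value.
pi_time = 3.14159 / NamedConstant("DefaultMicrowaveRabiRate")
print(pi_time.encode_xml())
```

The key design point is that no arithmetic happens when the user writes the expression; the division merely records a \code{DivisionOperator} node, so the most recent calibration value can be inserted at compile time.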
For example, the $\pi$-time from above is available through the \code{NamedCalculation} directive: \begin{lstlisting}[language=mypython] ql.NamedCalculation("DefaultMicrowavePiTime") \end{lstlisting} Standard experimental routines that are naturally described in the Timing Layer, such as the laser cooling of ions and shuttling routines, are available in the Standard Library through the \code{FunctionCall} directive: \begin{lstlisting}[language=mypython] ql.FunctionCall("DopplerCooling", duration=ql.NumericLiteral(3, "ms")) \end{lstlisting} \noindent Quantum gates, unlike timing functions, focus on the logical structure of operations on qubits rather than on precisely timed commands. For example, the two-qubit controlled-not (CNOT) gate can be called using the \code{GateCall} directive: \begin{lstlisting}[language=mypython] ql.GateCall("CNOT", qubits=[0, 1]) \end{lstlisting} \noindent Named calculations, functions and gates form the whole of the Standard Library. All definitions within it exist on the user's computer and can be copied, used as-is, or used as a template for custom user functions and gates. Standard Library procedures used within a program are automatically included in the program Header (see \autoref{sec::program_components}). There is no further modification or addition to these routines on the compiler side. The user program thus contains the \emph{entire} code required to carry out the experiment. \subsection{Demonstration of full-stack control} The Calibration Database, the Symbolic Algebra and the Standard Library are the core features of our language. Taken together, they enable users to program our machine at an abstraction level of their choosing and fully customize any of the intermediate operations. This interplay is illustrated in \autoref{fig::cnot} using the ``bottom-up'' construction of a CNOT gate \cite{nielsen2002quantum} as an example. 
Fundamentally, a CNOT gate (or any ``gate'', for that matter) is an abstract operation that does not correspond directly to a single experimental action, but instead to a series of precisely timed operations. For laser-based gates in our ion trap qubits, these operations are implemented through laser-atom interactions \cite{molmer1999multiparticle, sackett2000experimental}. Our language allows users to specify all relevant parameters for this laser-ion interaction, from pulse amplitude and frequency profiles to phases and even beam pointing and polarization control. This control allows users to generate pulses with custom amplitude, phase, or frequency modulation. These techniques have all been shown to greatly increase gate fidelities and reduce the required interaction time to perform the desired gate \cite{choi2014optimal, leung2018robust, milne2020phase, bentley2020numeric}. Users wishing to program at this level can use the Calibration Database to retrieve the relevant values such as the calibrated Rabi rate, the phase and frequency of the individual laser beams, and several more. Our Symbolic Algebra then allows users to generate symbolic expressions using these parameters, to create \emph{e.g.} customized M\o lmer-S\o rensen interactions to implement entangling gates \cite{molmer1999multiparticle}. Users who prefer to program at the level of machine-specific gates instead of time-specific hardware instructions have the option to work in the native Gate Layer. This layer is enabled through a set of pre-defined pulse sequences in the Standard Library that implement standard gates such as individual $x$, $y$ and $z$-rotations and the $XX$ entangling gate. Combining these gates, users can construct their own version of a CNOT gate and customize gate parameters like rotation angles. Lastly, the highest level of the programming stack is designed for users focusing purely on high-level quantum algorithms. 
Here, users can work directly with computational gates as provided by our Standard Library. The implementation of our CNOT gate, for example, can be traced back exactly in the manner we described. This example illustrates how these three key components (the Calibration Database, the Symbolic Algebra and the Standard Library) come together to enable fully customizable quantum operations in our system. \begin{figure}[t!] \includegraphics[scale=1.0]{figure_2.pdf} \caption{Features of the Quala language illustrated through the ``top-down'' reconstruction of a CNOT gate. \textbf{a} The Calibration Database stores all parameters relevant for the experiment. Insets show schematics of the Raman laser beam configuration with laser frequency configurations required for single- and two-qubit interactions. \textbf{b} The Symbolic Algebra provides the framework through which calibration parameters can be incorporated into programs. Parameters are referred to by name and can be used in standard algebraic expressions. \textbf{c} The Standard Library uses the Symbolic Algebra framework to define a set of re-usable experimental routines such as Timing Layer functions and high-level gates. The frequency configurations depicted in \textbf{a} and \textbf{b} are absorbed into functions for single- and two-ion laser pulses which form the basis for single- and two-qubit gates in the native Gate Layer. The flowchart depicts all functions and native gates required to realize a CNOT gate between two qubits. \textbf{d-f} show a schematic breakdown of the CNOT gate into native gates and individual Timing Layer instructions, illustrated through amplitude and frequency profiles of selected DDS channels in the setup. } \label{fig::cnot} \end{figure} \section{Technical details} We will now take a look at the formal definition and actual syntax of the language, discuss the concept of language ``bindings'', and explain the language compilation process. 
\subsection{Formal language definition} Quala is defined in XML. We chose XML due to its mature schema definition specification, its seamless integration with modern web security protocols, and the compatibility of its functional structure with our Symbolic Algebra needs. Additionally, defining the language elements in XML enables us to create an interface that is easily extensible to various high-level programming languages that have the ability to generate text-based output. These high-level language extensions are called language bindings. Language bindings allow the users to write their code in a higher-level language of their choice. This can facilitate the process of writing programs with high-level programming paradigms such as loops and functions. The first binding we have implemented is in Python. We are planning on releasing bindings for Julia and Matlab in the future. All bindings use the same naming conventions and object relationships as in the XML language, and they all provide the ability to generate the required XML code at the end of the program definition. The Python binding, for example, provides a stand-alone Python module called \code{quala} that contains a 1:1 mapping of XML language elements to Python classes. All these classes are derived from a common base class that implements the translation of Python objects to XML tags. For example, the \code{NumericLiteral} element, which has the following definition in the language schema: \begin{lstlisting}[language=XML] <?xml version="1.0" encoding="UTF-8" ?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:qi="https://iqc.uwaterloo.ca/quantumion"> <xs:element name="NumericLiteral"> <xs:annotation> <xs:documentation>A fixed numeric value with units. 
Can be any real number.</xs:documentation> </xs:annotation> <xs:complexType> <xs:simpleContent> <xs:extension base="xs:double"> <xs:attribute name="units" type="xs:string"/> </xs:extension> </xs:simpleContent> </xs:complexType> </xs:element> </xs:schema> \end{lstlisting} \noindent can be used in Python through a class of the same name. Attributes and child tags are passed as arguments: \begin{lstlisting}[language=mypython] >>> import quala as ql >>> value = ql.NumericLiteral(100, units="MHz") >>> print(value.encode_xml()) .. <qi:NumericLiteral units="MHz"> 100 </qi:NumericLiteral> \end{lstlisting} \noindent The \code{encode\_xml} function is a special function that all objects in the binding languages share and which enables the translation between the binding object and the corresponding XML element. Following the strict definitions set out in the language schema provides a natural way to enable the validation of user programs with linting tools such as \code{xmllint} and to flag syntax errors before the program reaches the compiler. This provides an additional layer of security as programs that do not abide by the language definition cannot be passed through to the experiment. \subsection{Elements of a Quala program} \label{sec::program_components} \begin{figure}[t!] \includegraphics[scale=1.0]{figure_3.pdf} \caption{Building blocks of a Quala program. This schematic illustrates the key elements of the language, represented here as rectangular boxes with background-colored titles, and their relationship to one another. For example, the \code{Experiment} element contains \code{Resources}, \code{Headers \& Definitions}, the \code{Initial Setup} and the \code{Program} element. The latter in turn is made up of a series of \code{Segment} elements, which can contain either \code{Event} or \code{GateBlock} elements as well as a \code{Decision} element at the end. For more information on the individual elements, see main text. 
Two full examples that utilize these elements are shown in \autoref{sec::examples}.} \label{fig::program} \end{figure} The fundamental syntactical elements of our language and their relationships to each other are schematically shown in \autoref{fig::program}. The outermost component of every Quala program is called \code{Experiment}, which contains four different containers: \begin{itemize} \item \code{Resources} specify containers for data storage such as measurement results or pulse waveform parameters. \item The \code{InitialSetup} contains a set of instructions for the experimental setup that are applied at the beginning of the experiment, such as the desired number of ions/qubits and static oscillator frequencies. \item \code{Headers} and \code{Definitions} declare and define the Standard Library procedures that are used in this experiment, if any. These include gates, functions and named calculations. We formally separate the declarations and definitions to allow for the possibility of encrypted function definitions. \item The \code{Program} contains the actual experimental instructions, which are stored in individual \code{Segment} objects. \end{itemize} All experimental actions that are carried out during the user's program are defined within \code{Segment}s. There are two interchangeable ways of specifying the content of these elements, depending on whether the user would like to write programs in the Timing Layer with explicit timing specifications, or in the Gate Layer in which timing is implicit. In the Timing Layer, a \code{Segment} contains a series of \code{Event} objects that specify which experimental actions are to take place at a given time. As such, every \code{Event} must define a \code{StartTime} that can be interpreted to be \emph{absolute}, with respect to the start of the experiment, or \emph{relative}, with respect to the \code{StartTime} of the previous \code{Event}. 
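The absolute/relative timing semantics can be illustrated with a minimal sketch. The tuple-based event representation and the function name below are simplifications for illustration only, not the actual Quala data model:

```python
def resolve_start_times(events):
    """Resolve a list of (mode, offset_ns) events into absolute times.

    mode is "absolute" (offset measured from the start of the experiment)
    or "relative" (offset measured from the StartTime of the previous
    Event; the first Event is measured from the experiment start).
    """
    absolute = []
    t = 0
    for mode, offset in events:
        t = offset if mode == "absolute" else t + offset
        absolute.append(t)
    return absolute

# One absolute Event followed by two Events scheduled relative to
# their respective predecessors.
print(resolve_start_times([("absolute", 100), ("relative", 50), ("relative", 25)]))
# -> [100, 150, 175]
```

In a full implementation a pass of this kind would run at compile time, after implicit Gate Layer durations have been filled in from calibration data.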
The other allowed elements within an \code{Event} are \code{Action}, \code{FunctionCall} and finally other \code{Event} tags. An \code{Action} specifies a direct and instantaneous experimental action, such as changing a DDS parameter or starting or stopping a photon counter. \code{FunctionCall} tags allow users to call pre-defined routines from the Standard Library, which are themselves made up of a series of \code{Event} tags. In the Gate Layer, on the other hand, a \code{Segment} contains a series of \code{GateBlock} objects that form a time-independent container for pre-defined gates that are scheduled to start at the same time. Users can access pre-defined gates in the Standard Library through \code{GateCall} objects. Similar to the \code{Event} tag, \code{GateBlock} tags can also be nested. The main difference between a \code{GateBlock} and an \code{Event} is that the time-dependence in the former is implicit. The introduction of \code{GateBlock}s is a mere convenience for users who are interested in circuit-level programming only; in principle every program can be fully expressed in a series of \code{Event}s, and these two containers may also be used interchangeably. At the end of every \code{Segment}, users have the option of implementing a \code{Decision} block that can alter the execution flow of the program by jumping between different \code{Segment}s depending on the outcome of the conditions that are declared in the \code{Decision} object. Conditions are typically based on measurement outcomes that are stored in \code{Resource} elements. While measurement outcomes in their raw form are photon counts, users can threshold the counts in real-time to obtain a 0 or 1 outcome for a qubit readout. It is also possible to combine multiple resources to compare the outcome of multiple qubits against multi-element bit strings.
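As a toy illustration of this thresholding and bit-string logic, consider the following plain-Python sketch. It is not part of the Quala API; the cutoff value, segment names and helper function are invented for illustration:

```python
def threshold(counts, cutoff=3):
    """Read photon counts above the cutoff as a bright ion ("1")."""
    return [1 if c > cutoff else 0 for c in counts]

# Three qubits measured; combine the thresholded outcomes into a bit string
outcome = threshold([0, 7, 2])               # [0, 1, 0]
bitstring = "".join(str(b) for b in outcome)

# A Decision block would then compare the bit string against its conditions
next_segment = {"010": "segment-2"}.get(bitstring, "segment-3")
```

In the real language, this mapping from measurement outcomes to destination \code{Segment}s is declared through \code{Condition} objects rather than a Python dictionary.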
Taking all these elements together, a complete (but largely empty) program in the Python binding may be constructed as follows: \begin{lstlisting}[language=mypython]
import quala as ql

# Create a default InitialSetup element
setup = ql.InitialSetup(use_predefined="default")

# Generate a generic resource container
# (A measurement Action would fill it with data)
resources = ql.Resource(name="my_measurement")

# Generate a Segment with an empty Event,
# an empty GateBlock and a Decision
segment_1 = ql.Segment(
    ql.Event(
        # Note: Events must always specify a start time
        start_time=ql.NumericLiteral(0, "ns")
    ),
    ql.GateBlock(),
    ql.Decision(
        resource="my_measurement",
        conditions=[
            # If the outcome is "0", advance to "segment-2"
            ql.Condition("0", destination_segment="segment-2"),
            # If the outcome is "1", advance to "segment-3"
            ql.Condition("1", destination_segment="segment-3")
        ]
    )
)

# Generate two additional, empty Segments
segment_2 = ql.Segment(name="segment-2")
segment_3 = ql.Segment(name="segment-3")

# Combine all Segments within a Program
program = ql.Program(program_segments=[segment_1, segment_2, segment_3])

# Assemble the full Experiment
experiment = ql.Experiment(
    initial_setup=setup,
    resources=resources,
    program=program)
\end{lstlisting} \noindent Note that we have omitted the \code{Headers \& Definitions} element here, as those are inserted automatically when a program contains a call to the Standard Library in either the \code{Events} or the \code{GateBlocks}. This abstract example highlights the relationships of the language elements to one another without making any assumptions about the actual experimental actions and the underlying physical hardware. To illustrate how our language can be used to implement realistic experiments, we provide two full-code examples in \autoref{sec::examples}.
\subsection{Language Compiler} \label{sec::compiler} Quala programs may contain abstract programming concepts such as looping constructs, functions, gates and symbolic calculations. Additionally, the decision logic and the ability to specify \code{Events} within \code{Events} allow for programs with several time-lines that are not directly translatable to hardware instructions. To bridge that gap, we have developed a sophisticated language compiler that reduces the functions and gates, flattens time-lines and solves the symbolic expressions. Compiling a Quala program follows three phases: frontend, middle, and backend processing, which is similar to the design philosophy of the GNU Compiler Collection (GCC) \cite{gcc-internals}. At the first stage, the compiler parses the user's XML program into an object-oriented C++ data structure. This includes a validity check of the program through a linting tool; if problems are detected, the user is given warnings or errors depending on their severity. The compiler-proper, or middle-end, performs a series of expansions that re-write the XML in successively less expressive forms. This re-writing compiler process ensures that at all steps, the program is still a valid XML program equivalent to what the user described. The expressiveness is reduced at each step (meaning progressively simpler commands are used), leading to a simpler, albeit longer, program. The reduction steps are schematically shown in \autoref{fig:branching-logic} and can be summarized as follows: \begin{itemize} \item The structural circuit model is decomposed by expanding \code{GateCalls} and \code{GateBlocks} into their definitions via macro expansion. \item The abstract Gate Layer is removed by expanding each \code{GateCall} into function definitions. The Standard Library provides this mapping. This effectively \emph{resolves} both the natural and the computational gate descriptions into a pure Timing Layer program.
\item The \code{FunctionCalls} are expanded into their definitions from the Standard Library, or user functions, resulting only in Timing Layer \code{Events} and \code{Actions}. Recursive calls are similarly resolved. \item The resulting Timing Layer \code{Events} and \code{Actions} are \emph{flattened} by solving all relative start times and reordering \code{Events} into a single timeline. \item All symbolic expressions are solved with the latest database values (or a specified historical value). The result is simple numeric literal values. Start times are now specified in units of $\unit[0.5]{ns}$ or `ticks'. \item The composite actions, like \code{DDSAction}, are expanded into \emph{channelized} actions. The result is simple, fully qualified names for each execution engine involved. The resulting output program is viewable by the user in XML form as well. \end{itemize} At the last step of compilation, the so-called backend processing converts the terminal XML form into a series of binary instructions, called `opcodes', that are used by our FPGA execution engine. \begin{figure}[t!] \includegraphics[scale=1]{figure_4.pdf} \caption{Language compiler and real-time decision logic. \textbf{a} illustrates the key steps of the compilation process that successively remove abstraction layers and simplify the program until it can be expressed as a flat time-line with direct experimental actions. \textbf{b} shows the schematic flow graph of the real-time branching logic. A pre-defined lookup table dictates the program execution flow.} \label{fig:branching-logic} \end{figure} \subsection{FPGA Execution Engine} The language is designed to execute on FPGA hardware via the use of \emph{Execution Engines} and \emph{Action Cores}. These modules reside inside the FPGA itself, and provide the programmability needed to implement language elements.
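As a rough illustration of the compiler's \emph{flattening} step described above, the following Python sketch resolves relative start times into one absolute timeline. The event representation is invented for illustration and is not the compiler's internal form:

```python
def flatten(events, t0=0):
    """Toy model of the flattening pass: resolve relative StartTimes
    into a single absolute, sorted timeline (times in ticks)."""
    timeline, t = [], t0
    for ev in events:
        if ev.get("relative"):
            t += ev["start"]      # relative to the previous Event
        else:
            t = ev["start"]       # absolute, from experiment start
        for action in ev["actions"]:
            timeline.append((t, action))
    return sorted(timeline)

events = [
    {"start": 0, "actions": ["dds_on"]},                      # absolute
    {"start": 100, "relative": True, "actions": ["dds_off"]}, # +100 ticks
    {"start": 50, "relative": True, "actions": ["count"]},    # +50 ticks
]
flat = flatten(events)  # [(0, 'dds_on'), (100, 'dds_off'), (150, 'count')]
```

The real middle-end performs this rewriting on the XML representation itself, so the output of each pass remains a valid (if less expressive) Quala program.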
Action cores, as the name suggests, provide core logic for a particular action; examples include the phase accumulator and sine-lookup of a DDS core, or pulse counting of a photon measurement sensor. Each type of Action element of a program controls an independent Action Core. Execution engines are lightweight, synthetic microprocessors that provide all timing and interface to the user program. Each user-controlled parameter, such as DDS phase, amplitude, or photon-counter start/stop, is attached to a single, dedicated execution engine. The small footprint of these engines allows many hundreds of units to be instantiated on a single FPGA chip. With only slight changes in data word size, the execution engine module is the same regardless of the type of action core parameter being controlled. The execution engine interfaces with an action core via standard Register Transfer Level (RTL) methods. The engines receive instructions in the form of operation codes (opcodes) from the terminal XML channelized output of the compiler. A very simple opcode format allows for a small logic footprint, as shown in \autoref{tbl:opcodes}. The primary opcode, \code{SETVALUE}, engages a change at a precise time. The first generation execution engine does not contain internal state variables, but does allow for repetition loops. \begin{table} \begin{tabular} { p{0.3\linewidth} p{0.55\linewidth} } \toprule \textbf{OPCODE} & \textbf{DESCRIPTION} \\ \midrule \code{NOP} & No operation \\ \code{SETVALUE} $x$ & Set parameter value to $x$ \\ \code{SETLOOP} $x$ & Set the loop counter to $x$ \\ \code{JNZ} $pc$ & Jump to instruction $pc$ if loop count is nonzero \\ \code{JZ} $pc$ & Jump to instruction $pc$ if loop count is zero \\ \code{DECLOOP} & Decrement the loop counter \\ \code{GOTO} $pc$ & Unconditionally jump to $pc$ \\ \code{BRANCHLUT} $m$, $t$ & Jump to instruction indicated by a look-up table $t$ based on measurement $m$ \\ \bottomrule \end{tabular} \caption{FPGA opcodes.
Each corresponds to a fundamental execution engine operation to implement user language features.} \label{tbl:opcodes} \end{table} Embedded in each opcode is a field indicating the time delay after which the instruction should be processed. Unlike traditional microprocessors, which execute instructions at the next available cycle\footnote{Next-cycle execution is an oversimplification of pipelined, superscalar processors.}, the execution engine embeds a delay counter in every instruction. These delay counters operate at the $\unit[2]{GHz}$ experiment rate, and define the precision timing of all system changes. To overcome the timing closure demands of modern FPGA chips, the experiment clock rate of $\unit[2]{GHz}$ is interleaved in four clock phase-lanes, each operating at $\unit[500]{MHz}$. \subsection{Decision Logic and Realtime Communications} The decision logic forms an integral part of our language and allows users to specify programs that execute certain blocks of code \emph{conditioned} on measurement outcomes. This is enabled through the \code{Decision} element (see \autoref{fig::program}) that can be placed at the end of each \code{Segment} to instruct the program which \code{Segment} to execute next, as illustrated in \autoref{fig:branching-logic}b. To support the decision logic in our hardware setup, each measurement result $m$ is broadcast over a low-latency Infiniband network. Each FPGA module contains a copy of the decision block's lookup table $t$, generated by the language compiler. When a \code{BRANCHLUT} $m$, $t$ instruction is executed, the execution engine performs a jump to the opcodes indicated by the appropriate destination segment based on that measurement $m$. \section{Illustrative Examples} \label{sec::examples} Now we turn to two examples that highlight the core features and capabilities of our language for both digital and analog quantum programs. For the sake of brevity and clarity, we are omitting the machine-specific initial setup elements.
\subsection{5-Qubit error-correction code} The five-qubit code is a distance-three error-correction code that encodes a single logical qubit using 5 physical qubits \cite{laflamme1996perfect}. The fault-tolerant syndrome measurement circuit (FT-SMC) we use in our example requires two ancillary qubits, one to measure the syndrome bit and another to flag errors in the measurement process \cite{chao2018quantum}. The procedure begins by performing an FT-SMC. If a fault is ever detected during this process, it aborts, performs a non-fault-tolerant syndrome measurement circuit (NFT-SMC), and uses the syndrome information to correct the fault. Overall, the algorithm can be broken down into the following high-level steps: \begin{enumerate} \item Prepare all qubits in the computational ground state \item Run the first FT-SMC and measure the flag qubit \begin{enumerate} \item If the flag is raised, interrupt, perform the NFT-SMC, correct the fault, and terminate error correction \end{enumerate} \item Run the second FT-SMC, measure the flag qubit, repeat 2a) \item Run the third FT-SMC, measure the flag qubit, repeat 2a) \item Run the fourth FT-SMC, measure the flag qubit, repeat 2a) \item If no flag was raised, perform error correction based on the syndrome measurements \end{enumerate} The QuantumION language is uniquely suited to run programs of this form. We can express this algorithm through a series of \code{Segments} linked together through \code{Decision} blocks. The gates for each syndrome measurement circuit and the individual measurements at the end can be packaged into a series of \code{GateBlocks}. To store the measurement results and make them available for the decision logic, we also need to create an empty \code{Resources} object at the beginning of our program.
With that, the $i$-th FT-SMC segment can be generated as follows: \begin{lstlisting}[language=mypython]
import quala as ql

resources = ql.Resources(length=12)

def make_ft_smc(i: int) -> ql.Segment:
    """ Creates the i-th syndrome measurement circuit """
    ft_smc = ql.Segment(
        ql.GateBlock(
            ql.GateCall("H", qubit=i, port="Target"),
            ql.GateCall("CX", qubit=(i, i+5),
                        port=("Control", "Target")),
            # [more gates here]
            ql.GateCall("Measure", qubit=i+5,
                        resource=resources[2*i]),
            ql.GateCall("Measure", qubit=i+6,
                        resource=resources[2*i+1])
        ),
        name=f"FT-SMC-{i}"
    )
    # Add the Decision block at the end
    decision_block = make_decisions(i)
    ft_smc.add(decision_block)
    return ft_smc
\end{lstlisting} \noindent The branching decision logic allows whole code blocks to be run (or not) conditioned on the result of a mid-circuit measurement. To implement the required \code{Decision} blocks, we need to generate a series of \code{Condition} objects that take in the measured state (0 or 1 in this example) and a ``destination'' \code{Segment}, to tell the compiler which \code{Segment} to execute next. These are referred to by their name. \begin{lstlisting}[language=mypython]
def make_decisions(i: int) -> ql.Decision:
    """ Create the decision block for the i-th SMC segment """
    next_segment_name = f"FT-SMC-{i+1}"
    decision = ql.Decision(
        resource=resources[2*i+1],
        conditions=[
            # If the outcome is 1, we want to jump to the
            # non-fault-tolerant SMC segment (not shown)
            ql.Condition(state=1, destination_segment="NFT-SMC"),
            # Otherwise, we want to continue to the next segment
            # (definition shown above)
            ql.Condition(state=0, destination_segment=next_segment_name)
        ]
    )
    return decision
\end{lstlisting} \noindent Finally, we can create a series of these FT-SMC segments and string them together inside a \code{Program}.
\begin{lstlisting}[language=mypython]
program = ql.Program(
    # Create the FT-SMC segments
    segments=[make_ft_smc(i) for i in range(4)]
    # Add auxiliary correction segments and
    # non-fault-tolerant segments (not shown)
    + aux_segments
)
experiment = ql.Experiment(program, resources)
\end{lstlisting} \noindent Thus, the QuantumION language can run the whole non-fault-tolerant syndrome measurement circuit with a single decision call from anywhere in the fault-tolerant circuit: the segment structure and branching decision logic remove the need for code duplication, and shuttling operations allow mid-circuit measurement without disturbing the rest of the computation. \subsection{Analog quantum simulation} In this example, we generate an effective Ising interaction between two spins that can be described by the following Hamiltonian \cite{monroe2021programmable} $$ H_\mathrm{Ising}(t) = \sum_{ij}^N J_{ij} \sigma_x^{(i)}\sigma_x^{(j)} + B_y(t) \sum_{i} \sigma_y^{(i)}. $$ Here, $N$ is the number of spin particles, $B_y(t)$ is the effective transverse magnetic field acting on each spin, and the $J_{ij}$ terms are the spin-spin coupling terms that depend on the drive parameters. In order to generate this interaction in our setup, we need to encode three different waveforms on the laser beams that mediate the ion-ion interactions, two with static amplitudes and frequencies, and one with a time-dependent, decreasing amplitude (see \cite{richerme2013experimental} for more details). To achieve this, we use a language element called \code{DDSAction} that allows for direct control over every DDS (direct digital synthesizer, \emph{i.e.}, RF signal generator) in the system. Like any other \code{Action} in our language, a \code{DDSAction} refers to an instantaneous experimental instruction and needs to be integrated into \code{Event} blocks to specify its start time and duration.
We will define these actions first, and then demonstrate how they are embedded into \code{Events} and the overall program. Every \code{DDSAction} takes a channel name as a parameter and can optionally take further parameters such as the amplitude, frequency, absolute and relative phases, interpolation parameters and several more. For example, the static waveforms can be defined as follows: \begin{lstlisting}[language=mypython]
import quala as ql

# Specify waveform frequencies relative to resonance
f0 = ql.NamedConstant("RamanCarrierResonanceFrequency")
f1 = f0 + ql.NumericLiteral(2, "MHz")
f2 = f0 - ql.NumericLiteral(2, "MHz")

# Set default amplitude
a0 = ql.NamedConstant("DefaultRamanIndividualDDSAmplitude")

ddsaction_1 = ql.DDSAction(
    channel="channels.aom.raman.individual1.dds0",
    amplitude=a0,
    frequency=f1
)
ddsaction_2 = ql.DDSAction(
    channel="channels.aom.raman.individual1.dds1",
    amplitude=a0,
    frequency=f2
)
\end{lstlisting} \noindent Here we have made use of the Calibration Database to import the system-specific Raman laser parameters through the \code{NamedConstant} element combined with our Symbolic Algebra. In order to program the time-dependent waveform, we can make use of the interpolation capabilities of our DDSs. For example, we can implement a linear sweep of the form $p(t) = p_1 + p_2 t$ between two amplitudes \code{a1} and \code{a2} over a fixed duration as follows: \begin{lstlisting}[language=mypython]
# Define sweep duration
t_sweep = ql.NumericLiteral(10, "us")

# Set start and stop amplitudes (decreasing)
a1 = ql.NamedConstant("DefaultRamanIndividualDDSAmplitude")
a2 = a1 - ql.NumericLiteral(50, "mV")

ddsaction_3 = ql.DDSAction(
    channel="channels.aom.raman.individual1.dds2",
    frequency=f0,
    interp_type="polynomial",
    interp_p0=a1,
    interp_p1=(a2 - a1)/(t_sweep * ql.NamedConstant("DDSSampleClockFrequency"))
)
\end{lstlisting} \noindent where we have used the Calibration Database parameter for the DDS clock frequency to calculate the interpolation parameter.
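To make the arithmetic behind \code{interp\_p1} concrete, the following sketch evaluates the linear sweep $p(k) = p_0 + p_1 k$ at each DDS sample clock tick. The $\unit[1]{GHz}$ clock rate used here is an assumed placeholder; in the real system this value comes from the Calibration Database:

```python
# Assumed illustrative numbers; "DDSSampleClockFrequency" would be
# supplied by the Calibration Database in the real system.
f_clk = 1.0e9            # DDS sample clock, Hz (assumed)
t_sweep = 10e-6          # sweep duration, s
a1, a2 = 0.50, 0.45      # start/stop amplitudes, V (a 50 mV drop)

n_samples = int(t_sweep * f_clk)      # clock ticks in the sweep
p0 = a1                               # interp_p0: starting amplitude
p1 = (a2 - a1) / (t_sweep * f_clk)    # interp_p1: per-sample increment

# The DDS evaluates p(k) = p0 + p1*k at tick k, ending at a2:
final = p0 + p1 * n_samples
```

Since the DDS steps once per clock tick, the per-sample increment is the total amplitude change divided by the number of ticks in the sweep, which is exactly the `(a2 - a1)/(t_sweep * clock)` expression in the listing above.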
These three \code{DDSActions} form the heart of the analog quantum simulation routine. But before we can integrate them into a full quantum experiment, we first need to wrap \code{Events} around them that allow us to specify timing information. Two \code{Events} are needed: one for turning these three DDSs on, followed by another \code{Event} to turn them off. We can define those as follows: \begin{lstlisting}[language=mypython]
# Define Ising interaction Events
ising_events = [
    # Event 1: Turn all DDSs on
    ql.Event(
        starttime=ql.StartTime(0, "ns"),
        event_items=[ddsaction_1, ddsaction_2, ddsaction_3]
    ),
    # Event 2: Turn all DDSs off
    ql.Event(
        starttime=ql.StartTime(t_sweep, stype="since-last-action"),
        event_items=[
            ql.DDSAction(
                channel="channels.aom.raman.individual1.dds0",
                amplitude=ql.NumericLiteral(0, "V")),
            ql.DDSAction(
                channel="channels.aom.raman.individual1.dds1",
                amplitude=ql.NumericLiteral(0, "V")),
            ql.DDSAction(
                channel="channels.aom.raman.individual1.dds2",
                amplitude=ql.NumericLiteral(0, "V"))
        ]
    )
]
\end{lstlisting} \noindent Note that we have set the \code{StartTime} of the second \code{Event} to begin at time \code{t\_sweep}, \emph{i.e.}, our desired interaction time. To turn the interactions off, we simply set all DDS amplitudes to zero. We can now put everything together into a full experiment. In addition to the Ising interaction, we need to include several preparation routines specific to our ion trap hardware. We have implemented all these routines in the Standard Library, and we envision that users of other hardware platforms can add their hardware-specific routines too.
In our setup, the full experimental protocol that we need to implement is as follows: \begin{enumerate} \item Doppler Cooling \item Optical pumping to the ground state \item Sideband Cooling \item Global $\pi/2$ rotation \item Perform effective Ising interaction with the pre-defined \code{ising\_events} \item Global $\pi/2$ rotation \item Measurement \end{enumerate} \noindent Since there is no decision logic in this example, we can package the entire experiment code into a single \code{Segment}. For most auxiliary routines, we can furthermore use the Standard Library. Since we want to include a measurement in this experimental sequence, we also need to declare a \code{Resource} that the measurement can be stored in. \begin{lstlisting}[language=mypython]
# Define measurement resource
r0 = ql.APDCounterResource()

segment = ql.Segment(
    segment_items=[
        ql.Event(  # Step 1
            ql.FunctionCall("DopplerCooling",
                            duration=ql.NumericLiteral(1, "ms")),
            start_time=ql.NumericLiteral(0, "ns")
        ),
        ql.Event(  # Step 2
            ql.FunctionCall("OpticalPumping",
                            duration=ql.NumericLiteral(0.1, "ms")),
            start_time=ql.NumericLiteral(0, "ns")
        ),
        ql.Event(  # Step 3
            ql.FunctionCall("SidebandCooling"),
            start_time=ql.NumericLiteral(0, "ns")
        ),
        ql.GateBlock(  # Step 4
            ql.GateCall("XPi/2", ion=0)
        ),
        # Step 5: Insert Ising interaction events (defined above)
        *ising_events,
        ql.GateBlock(  # Step 6
            ql.GateCall("XPi/2", ion=0)
        ),
        ql.Event(  # Step 7
            ql.FunctionCall("GlobalReadout",
                            duration=ql.NumericLiteral(0.5, "ms"),
                            resource=r0),
            start_time=ql.NumericLiteral(0, "ns")
        ),
    ]
)
\end{lstlisting} \noindent It is possible to combine the Timing Layer's \code{Event} element with the Gate Layer's \code{GateBlock} in the same program. By default, the \code{start\_time} is taken relative to the last \code{Action} in the experiment, which in this case would be the last \code{Action} required to carry out the $\pi/2$ gates.
And finally, we can take all these elements together to construct the full \code{Experiment}: \begin{lstlisting}[language=mypython]
experiment = ql.Experiment(
    program=ql.Program(segments=[segment]),
    resources=r0
)
\end{lstlisting} The definitions of the auxiliary routines from the Standard Library are automatically packaged into the \code{Headers} and \code{Definition} elements, and the user does not need to specify those. Overall, this example implements the full experiment specification of an analog quantum simulation experiment including all hardware controls required for its implementation. \section{Conclusion and outlook} We have presented a full-stack quantum programming language that is suitable both for hardware-agnostic circuit-level programming and for low-level, hardware-specific timing layer programming. These two complementary views of a quantum program are integrated seamlessly in our language, which offers an unprecedented degree of transparency into the analog nature of quantum hardware at a time when purely digital quantum computing still depends heavily on sophisticated analog control design due to the high error budget of current quantum hardware. While this language was designed within the scope of the remote-access QuantumION platform, we believe the design philosophy and principles are applicable to a much broader range of hardware implementations, and we hope that our work will provide a meaningful contribution to the quantum computing community and inspire discussions and collaborations on developing unified software frameworks for quantum programming. \section{Acknowledgements} We acknowledge support from the TQT (Transformative Quantum Technologies) research initiative at the University of Waterloo and the Natural Sciences and Engineering Research Council of Canada (NSERC). \bibliographystyle{ieeetr}
\section{Tensor deflation process and tensor power method} \label{sec:deflation} In this section we will first discuss the basic tensor deflation process for orthogonal tensor decomposition. Then we show the connection between the tensor power method and gradient flow. \paragraph{Tensor deflation} For orthogonal tensor decomposition, a popular approach is to first fit the largest ground truth component in the tensor, then subtract it out and recurse on the residual. The general process is given in Algorithm~\ref{algo:deflate}. In this process, there are multiple ways to find the best rank-1 approximation. For example, \cite{anandkumar2014tensor} uses the tensor power method, which picks many random vectors $w$ and updates them as $w = T^*(w^{\otimes 3},I)/\n{T^*(w^{\otimes 3},I)}$. \begin{algorithm}[htbp] \caption{Tensor Deflation Process}\label{algo:deflate} \begin{algorithmic} \STATE \textbf{Input:} Tensor $T^*$ \STATE \textbf{Output:} Components $W$ such that $T^* \approx \sum_{w\in \text{col}(W)} w^{\otimes 4}/\|w\|^2$ \STATE{Initially let the residual $R$ be $T^*$.} \WHILE{$\|R\|_F$ is large} \STATE Find the best rank-1 approximation $w^{\otimes 4}/\|w\|^2$ for $R$. \STATE Add $w$ as a new column in $W$, and let $R = R - w^{\otimes 4}/\|w\|^2$. \ENDWHILE \end{algorithmic} \end{algorithm} \paragraph{Tensor power method and gradient flow} If we run the tensor power method on a tensor $T^*$ that is equal to $\sum_{i=1}^d a_i e_i^{\otimes 4}$, then a component $w$ will converge to the direction of $e_i$ where $i$ is equal to $\arg\max_i a_i \bar{w}_i^2$. If there is a tie (which happens with probability 0 for random $w$), then the point will be stuck at a saddle point. Let's consider running gradient flow on $W$ with objective function $\frac{1}{2}\fns{T-T^*}$, where $T:=\sum_{w\in\text{col}(W)}w^{\otimes 4}/\|w\|^2$. If $T$ does not change much, the residual $R = T^*-T$ is close to a constant.
In this case the trajectory of one component $w$ is determined by the following differential equation: \begin{align} \frac{\mathrm{d} w}{\mathrm{d} t} =& 4 R(\bar{w}^{\otimes 2},w, I) - 2R(\bar{w}^{\otimes 4})w \label{eq:powerdynamics} \end{align} To understand how this process works, we can take a look at $\frac{\mathrm{d} w_i^2/\mathrm{d} t}{w_i^2}$ (intuitively this corresponds to the growth rate for $w_i^2$). If $R \approx T^*$ then we have: $$ \frac{\mathrm{d} w_i^2/\mathrm{d} t}{w_i^2} \approx 8a_i\bar{w}_i^2 - 4\sum_{j\in [d]}a_j\bar{w}_j^4. $$ From this formula it is clear that the coordinate with larger $a_i \bar{w}_i^2$ has a faster growth rate, so eventually the process will converge to $e_i$ where $i$ is equal to $\arg\max_i a_i \bar{w}_i^2$, the same as the tensor power method. Because of this similarity, we will later refer to the dynamics in Eqn.~\eqref{eq:powerdynamics} as the tensor power dynamics. \iffalse \paragraph{Gradient Flow and Differences} The basic gradient flow algorithm simply initializes the components $W$ and follows the differential equation \[ \frac{\mathrm{d} W}{\mathrm{d} t} = -\nabla \|T-T^*\|_F^2, \] where $T^*$ is the ground truth and $T$ is parametrized as Equation~\ref{eqn:parametrization}. This is clearly very different from the general tensor deflation process. However, intuitively if we use a very small initialization, then at the beginning $T$ is close to $0$ and each component will evolve under a similar dynamic as the tensor power method. There are still some key differences that prevent us from analyzing this algorithm directly. First, gradient flow is run on all the columns of $W$, including the columns that are already used to fit previous ground truth components. Second, in the deflation process the step to find the best rank 1 approximation is reinitialized for every rank. We modify the gradient flow algorithm to make it easier to analyze.
\fi \section{Our algorithm} \label{sec:algorithm} Our algorithm is a modified version of gradient flow as described in Algorithm~\ref{algo:main}. First, we change the loss function to \[ L(W) = \frac{1}{2}\fns{T-T^*}+\frac{\lambda}{2}\fns{W}. \] The additional small regularization $\frac{\lambda}{2}\fns{W}$ allows us to prove a {\em local stability} result showing that if there are components $w$ that are close to the ground truth components in direction, then they will not move too much (see Section~\ref{sec:induction_sketch}). Our algorithm runs in multiple epochs with increasing length. We use $W^{(s,t)}$ to denote the weight matrix in epoch $s$ at time $t$. We use similar notation for tensor $T^{(s,t)}.$ In each epoch we try to fit ground truth components with $a_i \ge \beta^{(s)}.$ In general, the time it takes to fit one ground truth direction is inversely proportional to its magnitude $a_i$. The earlier epochs have shorter lengths, so only large directions can be fitted, and later epochs are longer to fit small directions. In the middle of each epoch, we reinitialize all components that do not have a large norm. This serves several purposes: first, we will show that all components that exceed the norm threshold will have good correlation with one of the ground truth components, therefore giving an initial condition to the local stability result; second, the reinitialization will reduce the dependencies between different epochs and allow us to analyze each epoch almost independently. These modifications do not change the dynamics significantly; however, they allow us to carry out a rigorous analysis.
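As a numerical sanity check of the tensor power dynamics, the following numpy sketch Euler-discretizes Eqn.~\eqref{eq:powerdynamics} for a diagonal residual $R = T^* = \sum_i a_i e_i^{\otimes 4}$, for which the contractions reduce coordinate-wise to $4 a_j \bar{w}_j^2 w_j - 2(\sum_i a_i \bar{w}_i^4) w_j$. The weights, initialization and step size are arbitrary choices for illustration:

```python
import numpy as np

a = np.array([1.5, 1.2, 1.0, 0.8, 0.6])          # ground-truth weights a_i
w = 1e-3 * np.array([0.1, 0.3, 0.5, 0.2, 0.4])   # small initialization

wbar = w / np.linalg.norm(w)
winner = np.argmax(a * wbar**2)                   # predicted limit direction

dt = 1e-2
for _ in range(2000):
    wbar = w / np.linalg.norm(w)
    # dw/dt = 4 R(wbar^{x2}, w, I) - 2 R(wbar^{x4}) w, coordinate-wise:
    dw = 4 * a * wbar**2 * w - 2 * np.sum(a * wbar**4) * w
    w = w + dt * dw

# The direction of w concentrates on e_winner (here, index 2),
# matching the tensor power method prediction argmax_i a_i wbar_i^2.
```

Note that the gap in $\log(a_j \bar{w}_j^2)$ between the leading coordinate and the rest only grows along the flow, so the winner is determined already at initialization, as claimed in Section~\ref{sec:deflation}.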
\begin{algorithm}[htbp] \caption{Modified Gradient Flow}\label{algo:main} \begin{algorithmic} \STATE \textbf{Input:} Number of components $m$, initialization scale $\delta_0$, re-initialization threshold $\delta_1$, increasing rate of epoch length $\gamma$, target accuracy $\epsilon$, regularization coefficient $\lambda$ \STATE \textbf{Output:} Tensor $T$ satisfying $\fn{T-T^*}\leq \epsilon.$ \STATE {Initialize $W^{(0,0)}$ as a $d\times m$ matrix with each column $w^{(0,0)}$ i.i.d. sampled from $\delta_0 \text{Unif}(\mathbb{S}^{d-1})$}. \STATE {$\beta^{(0)}\leftarrow \fn{T^{(0,0)}-T^*}$; $s\leftarrow 0$} \WHILE {$\fn{T^{(s,0)}-T^*}>\epsilon$} \STATE {Phase 1: Starting from $W^{(s,0)}$, run gradient flow for time $t_1^{(s)}= O(\frac{d}{\beta^{(s)}\log(d)})$.} \STATE {Reinitialize all components that have $\ell_2$ norm less than $\delta_1$ by sampling i.i.d. from $\delta_0 \text{Unif}(\mathbb{S}^{d-1})$.} \STATE {Phase 2: Starting from $W^{(s,t_1^{(s)})}$, run gradient flow for $t_2^{(s)}-t_1^{(s)}= O(\frac{\log(1/\delta_1)+\log(1/\lambda)}{\beta^{(s)}})$ time} \STATE {$W^{(s+1,0)}\leftarrow W^{(s,t_2^{(s)})};\ \beta^{(s+1)}\leftarrow \beta^{(s)}(1-\gamma)$; $s\leftarrow s+1$} \ENDWHILE \end{algorithmic} \end{algorithm} \section{Experiments}\label{sec:experiment} In Section~\ref{sec:exp_detail}, we give detailed settings for our experiments in Figure~\ref{fig:ortho}. Then, we give additional experiments on non-orthogonal tensors in Section~\ref{sec:exp_nonortho}. \subsection{Experiment settings for orthogonal tensor decomposition}\label{sec:exp_detail} We chose the ground truth tensor $T^*$ as $\sum_{i\in [5]} a_i e_i^{\otimes 4}$ with $e_i\in {\mathbb R}^{10}$ and $a_i/a_{i+1} = 1.2.$ We normalized $T^*$ so its Frobenius norm equals $1$. Our model $T$ was over-parameterized to have $50$ components.
Each component $W[:,i]$ was randomly initialized from $\delta_0 \text{Unif}(\mathbb{S}^{d-1})$ with $\delta_0 =10^{-15}.$ The objective function is $\frac{1}{2}\fns{T-T^*}.$ We ran gradient descent with step size $0.1$ for $2000$ steps. We repeated the experiment with $5$ different random initializations and plotted the results in Figure~\ref{fig:ortho}. Our experiments were run on a normal laptop and took a few minutes. \subsection{Additional results on non-orthogonal tensor decomposition}\label{sec:exp_nonortho} In this subsection, we give some empirical observations suggesting that non-orthogonal tensor decomposition may not follow the greedy low-rank learning procedure in~\cite{li2020towards}. \paragraph{Ground truth tensor $T^*$:} The ground truth tensor is a $10\times 10\times 10\times 10$ tensor with rank $5$. It is a symmetric, non-orthogonal tensor with $\fn{T^*}=1.$ The specific ground truth tensor we used is in the code. \paragraph{Greedy low-rank learning (GLRL):} We first generate the trajectory of the greedy low-rank learning. In our setting, GLRL consists of $5$ epochs. At initialization, the model has no component. At each epoch, the algorithm first adds a small component (with norm $10^{-60}$) that maximizes the correlation with the current residual to the model, then runs gradient descent until convergence. To find the component that has the best correlation with residual $R$, we ran gradient descent on $R(w^{\otimes 4})$ and normalized $w$ after each iteration. In other words, we ran projected gradient descent to solve $\min_{w\mid \n{w}=1}R(w^{\otimes 4}).$ We repeated this process from $50$ different initializations and chose the best component among them. In the experiment, we chose the step size as $0.3$, and at the $s$-th epoch we ran $s\times 2000$ iterations to find the best rank-one approximation and also ran $s\times 2000$ iterations on our model after we included the new component. After each epoch, we saved the current tensor as a saddle point.
We also included the zero tensor as a saddle point so there are $6$ saddles in total. Figure~\ref{fig:greedy_loss} shows that the loss decreases sharply in each epoch and eventually converges to zero. \begin{figure}[h] \centering \subfigure{ \includegraphics[width=3in]{figures/greedy_loss.eps} } \caption{Loss trajectory of greedy low-rank learning. } \label{fig:greedy_loss} \end{figure} \paragraph{Over-parameterized gradient descent:} If over-parameterized gradient descent follows the greedy low-rank learning procedure, one should expect that the model passes the same saddles as the tensor rank increases. To verify this, we ran experiments with gradient descent and computed the distance to the closest GLRL saddle at each iteration. Our model has $50$ components and each component is initialized from $\delta_0 \text{Unif}(\mathbb{S}^{d-1})$ with $\delta_0 = 10^{-60}.$ We ran gradient descent with step size $0.3$ for $1000$ iterations. \begin{figure}[h] \centering \subfigure{ \includegraphics[width=2.65in]{figures/saddle_distance.eps} } \subfigure{ \includegraphics[width=2.65in]{figures/norm_growth.eps} } \caption{Non-orthogonal tensor decomposition with number of components $m=50$ and initialization scale $\delta_0=10^{-60}.$ The left figure shows the loss trajectory and the distance to the closest GLRL saddle; the right figure shows the norm trajectories of different components.} \label{fig:distance_norm_growth} \end{figure} Figure~\ref{fig:distance_norm_growth} (left) shows that after fitting the first direction, over-parameterized gradient descent has a very different trajectory from GLRL. After roughly $450$ iterations, the loss continues decreasing but the distance to the closest saddle remains high. After $800$ iterations, gradient descent converges and the distance to the closest saddle (which is $T^*$) becomes low. In Figure~\ref{fig:distance_norm_growth} (right), we plotted the norm trajectories for $10$ of the components.
The figure shows that some of the already large components become even larger at roughly $450$ iterations, which corresponds to the second drop of the loss. We picked two of these components and found that their correlation $\inner{\bar{w}}{\bar{v}}$ drops from $1$ at the $400$-th iteration to $0.48$ at the $550$-th iteration. This suggests that two large components in the same direction can actually split into two different directions during training. One might suspect that this phenomenon would disappear if we used more aggressive over-parameterization and even smaller initialization. We therefore let our model have $1000$ components, set the initialization scale to $10^{-100}$, and repeated the experiments. We observed almost the same behavior as before. Figure~\ref{fig:distance_norm_growth_large} (left) shows the same pattern for the distance to the closest GLRL saddle as in Figure~\ref{fig:distance_norm_growth}. In Figure~\ref{fig:distance_norm_growth_large} (right), we randomly chose $10$ of the $1000$ components and plotted their norm change; we again observe that one large component becomes even larger at roughly iteration $700$, which corresponds to the second drop of the loss function. \begin{figure}[h] \centering \subfigure{ \includegraphics[width=2.65in]{figures/saddle_distance_large.eps} } \subfigure{ \includegraphics[width=2.65in]{figures/norm_growth_large.eps} } \caption{Non-orthogonal tensor decomposition with number of components $m=1000$ and initialization scale $\delta_0=10^{-100}.$ The left figure shows the loss trajectory and the distance to the closest GLRL saddle; the right figure shows the norm trajectories of different components.} \label{fig:distance_norm_growth_large} \end{figure} \subsection{Counterexample} \label{sec: induction, counterexample} We prove Claim~\ref{clm:example} as follows.
\example* \begin{proof} As in Lemma~\ref{lemma: d vtk2}, we can compute $\frac{\mathrm{d}}{\mathrm{d} t}\bar{v}_k^2$ as follows, \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}\bar{v}_k^2 =& 8(1-\bar{v}_k^2)\bar{v}_k^4\\ &- 8(1-\bar{v}_k^2)\pr{\ns{v}\inner{\bar{v}}{\bar{v}}^4 + \ns{w}\inner{\bar{w}}{\bar{v}}^4} \\ &+ 8\pr{\ns{w}\inner{\bar{w}}{\bar{v}}^3\inner{\bar{w}_{-k}}{\bar{v}_{-k}} + \ns{v}\inner{\bar{v}}{\bar{v}}^3\inner{\bar{v}_{-k}}{\bar{v}_{-k}}}. \end{align*} Since $\bar{v}_k^2 = 1-\alpha, \bar{v}_k = \bar{w}_k$ and $\bar{v}_{-k} = - \bar{w}_{-k},$ we have $\inner{\bar{w}}{\bar{v}}^4, \inner{\bar{w}}{\bar{v}}^3 \geq 1-O(\alpha)$ and $\inner{\bar{w}_{-k}}{\bar{v}_{-k}}=-\alpha.$ Therefore, we have \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}\bar{v}_k^2 \leq 8\alpha - 8\alpha(\ns{v}+\ns{w}(1-O(\alpha)))-8\ns{w}(1-O(\alpha))\alpha + 8\ns{v}\alpha. \end{align*} Rearranging, we obtain \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}\bar{v}_k^2 \leq 8\alpha\pr{(1-\ns{w}-\ns{v})-\ns{w}(1-O(\alpha)) + \ns{v}} <0, \end{align*} where the last inequality assumes $\ns{w}+\ns{v}\in [2/3, 1]$ and $\ns{v},\alpha$ smaller than a certain constant. \end{proof} \subsection{Continuity argument}\label{sec: continuity argument} { \newcommand{\ps}[1]{^{(#1)}} \newcommand{\indi}[1]{\mathbbm{1}_{#1}} We mostly use the following version of the continuity argument, which is adapted from Proposition~1.21 of \cite{tao_nonlinear_2006}. \begin{lemma} \label{lemma: continuity argument} Let $\mathrm{I}\ps{t}$ be a statement about the structure of some object. $\mathrm{I}\ps{t}$ is true for all $t \ge 0$ as long as the following hold. \begin{enumerate}[(a)] \item $\mathrm{I}\ps{0}$ is true. \item $\mathrm{I}$ is closed in the sense that for any sequence $t_n \to t$, if $\mathrm{I}\ps{t_n}$ is true for all $n$, then $\mathrm{I}\ps{t}$ is also true.
\item If $\mathrm{I}\ps{t}$ is true, then there exists some $\delta > 0$ s.t. $\mathrm{I}\ps{s}$ is true for $s \in [t, t+\delta)$. \end{enumerate} In particular, if $\mathrm{I}\ps{t}$ has the form $\bigwedge_{i=1}^N \bigvee_{j=1}^N p\ps{t}_{i, j} \le q_{i, j}$, then we can replace (b) and (c) by the following. \begin{enumerate} \item[(b')] $p\ps{t}_{i, j}$ is $C^1$ for all $i,j$. \item[(c')] Suppose at time $t$, $\mathrm{I}\ps{t}$ is true but some clause $\bigvee_{j=1}^N p\ps{t}_{i,j} \le q_{i,j}$ is tight, in the sense that $p\ps{t}_{i,j} \ge q_{i,j}$ for all $j$ with at least one equality. Then there exists some $k$ s.t. $p_{i,k}\ps{t} = q_{i,k}$ and $\dot{p}\ps{t}_{i, k} < 0$. \end{enumerate} \end{lemma} \begin{proof} Define $t' := \sup\{t\ge 0 \;:\; \mathrm{I}\ps{t} \text{ is true}\}$. Since $\mathrm{I}\ps{0}$ is true, $t' \ge 0$. Assume, to obtain a contradiction, that $t' < \infty$. Since $\mathrm{I}$ is closed, $\mathrm{I}\ps{t'}$ is true, whence there exists a small $\delta > 0$ s.t. $\mathrm{I}\ps{t}$ is true in $[t', t'+\delta)$. Contradiction. For the second set of conditions, first note that the continuity of $p_{i,j}\ps{t}$ and the non-strict inequalities imply that $\mathrm{I}$ is closed. Now we show that (b') and (c') imply (c). If none of the clauses is tight at time $t$, by the continuity of $p_{i,j}\ps{t}$, $\mathrm{I}$ holds in a small neighborhood of $t$. If some constraint is tight, by (c') and the $C^1$ condition, we have $p\ps{t}_{i, k} < q_{i, k}$ in a small right neighborhood of $t$. \end{proof} \begin{remark} Despite the name ``continuity argument'', it is possible to generalize it to certain classes of discontinuous functions. In particular, we consider impulsive differential equations here, that is, for almost every $t$, $p\ps{t}$ behaves like a usual differential equation, but at some $t_i$, it will jump from $p\ps{t_i-}$ to $p\ps{t_i} = p\ps{t_i-} + \delta_i$.
See, for example, \cite{lakshmikantham_theory_1989} for a systematic treatment of this topic. Suppose that we still want to maintain the property $p\ps{t} \le 0$. If the total amount of impulses is small and we have some cushion in the sense that $\dot{p}\ps{t} < 0$ whenever $p\ps{t} \in [-\varepsilon, 0]$, then we can still hope that $p\ps{t} \le 0$ holds for all $t$, since, intuitively, only the jumps can lead $p\ps{t}$ into $[-\varepsilon, 0]$, and the normal $\dot{p}\ps{t}$ will try to take it back to $(-\infty, -\varepsilon)$. As long as the amount of impulses is smaller than the size $\varepsilon$ of the cushion, the impulses will never break things. We formalize this idea in the next lemma. \end{remark} \begin{lemma}[Continuity argument with impulses] \label{lemma: continuity argument with impulses} Let $0 < t_1 < \cdots < t_N < \infty$ be the moments at which the impulses happen and $\delta_1, \dots, \delta_N \in {\mathbb R}$ the sizes of the impulses at each $t_i$. Let $p: [0, \infty) \to {\mathbb R}$ be a function that is $C^1$ on $[0, t_1)$, on every $(t_i, t_{i+1})$ and on $(t_N, \infty)$, and $p\ps{t_i} = p\ps{t_i-} + \delta_i$. Write $\Delta = \sum_{i=1}^N \max\{0, \delta_i\}$. If (a) $p\ps{0} \le -\Delta$ and (b) for every $t \notin \{t_i\}_{i=1}^N$ with $p\ps{t} \in [-\Delta, 0]$, we have $\dot{p}\ps{t} < 0$, then $p\ps{t} \le 0$ always holds. \end{lemma} \begin{remark} Note that if there are no impulses, then $p\ps{t}$ is a usual $C^1$ function and we recover conditions (b') and (c') of Lemma~\ref{lemma: continuity argument}. Also, though the statement here only concerns a single function $p\ps{t}$, one can incorporate it into Lemma~\ref{lemma: continuity argument} by replacing (b') and (c') with the hypotheses of this lemma and modifying (a) to be $p\ps{0}_{i, j} \le q_{i, j} - \Delta_{i, j}$. \end{remark} \begin{proof} We claim that $p\ps{t} \le -\Delta + \sum_{i=1}^N \indi{t_i \le t} \max\{0, \delta_i\} =: q\ps{t}$.
Define $t' = \sup\{t \ge 0 \;:\; p\ps{t} \le q\ps{t}\}$. Since $p\ps{0} \le -\Delta$ and $t_1 > 0$, $t' \ge 0$. Assume, to obtain a contradiction, that $t' < \infty$ and consider $p\ps{t'}$. If $t' = t_k$ for some $k$, then, by the definition of $t'$, $p\ps{t'-} \le -\Delta + \sum_{i=1}^{k-1} \max\{0, \delta_i\}$, whence $p\ps{t'} = p\ps{t'-} + \delta_k \le -\Delta + \sum_{i=1}^{k} \max\{0, \delta_i\}$. Contradiction. If $t' \notin \{t_i\}_{i=1}^N$, then by the continuity of $p$, we have $p\ps{t'} = q\ps{t'}$. Then, since $\dot{p}\ps{t'} < 0$ and $p$ is $C^1$, we have $p\ps{t} < p\ps{t'} = q\ps{t'} = q\ps{t}$ in $[t', t'+\tau]$ for some small $\tau > 0$, which contradicts the maximality of $t'$. Thus, $p\ps{t} \le 0$ holds for all $t \ge 0$. \end{proof} } \subsection{Condition (\ref{itm: Ist, individual}): the individual bound}\label{sec:individual} In this section, we show Lemma~\ref{lemma: individual bound}, which implies that condition~(\ref{itm: Ist, individual}) of Proposition~\ref{prop:main} always holds. \lemmaindividualbound* \begin{proof} Recall the definition of $G_1$, $G_2$ and $G_3$ from Lemma~\ref{lemma: d vtk2}. Now we estimate each of these three terms. By Lemma~\ref{lemma: Ez4 approx vk4}, the first two terms of $G_1$ can be lower bounded by $8 \tilde{a}^{(t)}_k \left( 1 - [\bar{v}^{(t)}_k]^2 \right) [\bar{v}^{(t)}_k]^4 - O(\hat{a}^{(t)}_k \alpha^{1.5})$; for the third term, we replace $|z^{(t)}|$ with $1$, and then, by the Cauchy-Schwarz inequality and Jensen's inequality, it is bounded by $O(\hat{a}^{(t)}_k \alpha^{1.5})$. By Lemma~\ref{lemma: cross interaction, individual}, $G_2$ and $G_3$ can be bounded by $O(1) \sum_{i \ne k} \hat{a}^{(t)}_i \alpha^2$.
Thus, \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t} [\bar{v}^{(t)}_k]^2 &\ge 8 \tilde{a}^{(t)}_k \left( 1 - [\bar{v}^{(t)}_k]^2 \right) [\bar{v}^{(t)}_k]^4 - O(1) \sum_{i=1}^d \hat{a}^{(t)}_i \alpha^{1.5} - O(m \delta_1^2) \\ &\ge 8 \tilde{a}^{(t)}_k \left( 1 - [\bar{v}^{(t)}_k]^2 \right) [\bar{v}^{(t)}_k]^4 - O\left( \alpha^{1.5}\right). \end{align*} Now suppose that $[\bar{v}^{(t)}_k]^2 = 1 - \alpha$. By Proposition~\ref{prop:main}, we have $\tilde{a}^{(t)}_k \ge \lambda/6$. Hence, \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t} [\bar{v}^{(t)}_k]^2 \ge \lambda \alpha (1 - \alpha)^2 - O\left( \alpha^{1.5}\right) \ge \lambda \alpha - O\left( \alpha^{1.5}\right). \end{align*} \end{proof} \subsection{Condition (\ref{itm: Ist, average}): the average bound}\label{sec:average} \subsubsection*{Bounding the total amount of impulses} Note that there are two sources of impulses. First, when $\hat{a}^{(t)}_k$ is large (i.e., $\hat{a}^{(t)}_k \ge \alpha$), the correlation of the newly-entered components is $1 - \alpha$ instead of $1 - \alpha^2$ and, second, the reinitialization may throw some components out of $S^{(t)}_k$. First we consider the first type of impulses. Suppose that at time $t$, $\hat{a}^{(t)}_k \ge \alpha$, $\E^{(t)}_{k, w}\left\{ [\bar{w}^{(t)}_k]^2 \right\} = B$, and one particle $v^{(t)}$ enters $S^{(t)}_k$. The deterioration of the average bound can be bounded as \begin{align*} B - \left( \frac{\hat{a}^{(t)}_k}{\hat{a}^{(t)}_k + \ns{v^{(t)}}} B + \frac{\ns{v^{(t)}}}{\hat{a}^{(t)}_k + \ns{v^{(t)}}} (1 - \alpha) \right) &= \frac{\ns{v^{(t)}}}{\hat{a}^{(t)}_k + \ns{v^{(t)}}}\left( B - (1 - \alpha)\right) \\ &\le \frac{\ns{v^{(t)}}}{\alpha} 2 \alpha \\ &= 2 \ns{v^{(t)}}. \end{align*} Hence, the total amount of impulses caused by the entrance of new components can be bounded by $2m \delta_1^2$. Now we consider the reinitialization. Again, it suffices to consider the case where $\hat{a}^{(t)}_k \ge \alpha$.
Suppose that at time $t$, $\hat{a}^{(t)}_k \ge \alpha$, $\E^{(t)}_{k, w}\left\{ [\bar{w}^{(t)}_k]^2\right\} = B$ and one particle $v^{(t)} \in S^{(t)}_k$ is reinitialized. By the definition of the algorithm, its norm is at most $\delta_1$. Hence, the deterioration of the average bound can be bounded as\footnote{The second term is obtained by solving the equation $B = \frac{\hat{a}^{(t)}_k - \ns{v^{(t)}}}{\hat{a}^{(t)}_k} B' + \frac{\ns{v^{(t)}}}{\hat{a}^{(t)}_k} [\bar{v}^{(t)}_k]^2$ for $B'$.} \begin{align*} B - \frac{\hat{a}^{(t)}_k}{\hat{a}^{(t)}_k - \ns{v^{(t)}}} \left( B - \frac{\ns{v^{(t)}}}{\hat{a}^{(t)}_k} [\bar{v}^{(t)}_k]^2 \right) &= \frac{\ns{v^{(t)}}}{\hat{a}^{(t)}_k - \ns{v^{(t)}}} \left( [\bar{v}^{(t)}_k]^2 - B\right) \\ &\le \frac{\ns{v^{(t)}}}{\hat{a}^{(t)}_k} 2\alpha \\ &\le 2 \ns{v^{(t)}}. \end{align*} Since there are at most $m$ components, the amount of impulses caused by reinitialization is bounded by $2m\delta_1^2$. Combining these two estimates, we conclude that the total amount of impulses is bounded by $4m\delta_1^2$. This gives the epoch correction term of condition (c). \subsubsection*{The average bound} First we derive a formula for the evolution of $\E^{(t)}_{k, v} \left\{ [\bar{v}^{(t)}_k]^2\right\}$. \begin{lemma} \label{lemma: average bound, formula} For any $k$ with $S^{(t)}_k \ne \varnothing$, we have \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t} \E^{(t)}_{k, v} [\bar{v}^{(t)}_k]^2 =& \E^{(t)}_{k, v} \br{\frac{d}{dt}[\bar{v}^{(t)}_k]^2} \\ +& 4 \E^{(t)}_{k,v} \br{\pr{(T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 4})}\pr{[\bar{v}^{(t)}_k]^2}} - 4\pr{\E^{(t)}_{k,v}(T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 4})}\pr{\E^{(t)}_{k,v}[\bar{v}^{(t)}_k]^2}. \end{align*} \end{lemma} \begin{remark} The first term corresponds to the tangent movement and the two terms in the second line correspond to the norm change of the components.
\end{remark} \begin{proof} Recall that \[ \E^{(t)}_{k,v}[\bar{v}^{(t)}_k]^2 = \frac{1}{\hat{a}^{(t)}_k}\sum_{v^{(t)}\in S^{(t)}_k}\ns{v^{(t)}}[\bar{v}^{(t)}_k]^2. \] Taking the derivative, we have \begin{align*} \frac{d}{dt}\E^{(t)}_{k,v}[\bar{v}^{(t)}_k]^2 =& \frac{1}{\hat{a}^{(t)}_k}\sum_{v^{(t)}\in S^{(t)}_k}\ns{v^{(t)}}\pr{\frac{d}{dt}[\bar{v}^{(t)}_k]^2} + \frac{1}{\hat{a}^{(t)}_k}\sum_{v^{(t)}\in S^{(t)}_k}\pr{\frac{d}{dt}\ns{v^{(t)}}}[\bar{v}^{(t)}_k]^2\\ &+ \pr{\frac{d}{dt}\frac{1}{\hat{a}^{(t)}_k}}\sum_{v^{(t)}\in S^{(t)}_k}\ns{v^{(t)}}[\bar{v}^{(t)}_k]^2. \end{align*} The first term is just $\E^{(t)}_{k, v}\frac{d}{dt}[\bar{v}^{(t)}_k]^2$. Denote $R(\bar{v}^{(t)}) = 2(T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 4})-\lambda.$ We can write the second term as follows: \begin{align*} \frac{1}{\hat{a}^{(t)}_k}\sum_{v^{(t)}\in S^{(t)}_k}\pr{\frac{d}{dt}\ns{v^{(t)}}}[\bar{v}^{(t)}_k]^2 =& \frac{1}{\hat{a}^{(t)}_k}\sum_{v^{(t)}\in S^{(t)}_k}2R(\bar{v}^{(t)})\ns{v^{(t)}} [\bar{v}^{(t)}_k]^2 \\ =& 2\E^{(t)}_{k,v}\br{R(\bar{v}^{(t)})[\bar{v}^{(t)}_k]^2}. \end{align*} Finally, let's consider $\frac{d}{dt}\frac{1}{\hat{a}^{(t)}_k}$ in the third term, \begin{align*} \frac{d}{dt}\frac{1}{\hat{a}^{(t)}_k} =& -\frac{1}{[\hat{a}^{(t)}_k]^2}\frac{d}{dt}\hat{a}^{(t)}_k\\ =& -\frac{1}{[\hat{a}^{(t)}_k]^2}\frac{d}{dt}\sum_{v^{(t)}\in S^{(t)}_k}\ns{v^{(t)}}\\ =& -\frac{2}{[\hat{a}^{(t)}_k]^2}\sum_{v^{(t)}\in S^{(t)}_k}R(\bar{v}^{(t)})\ns{v^{(t)}}\\ =& -\frac{2}{\hat{a}^{(t)}_k}\E^{(t)}_{k,v}R(\bar{v}^{(t)}).
\end{align*} Overall, we have \begin{align*} \frac{d}{dt}\E^{(t)}_{k,v}[\bar{v}^{(t)}_k]^2 =& \E^{(t)}_{k,v}\br{\frac{d}{dt}[\bar{v}^{(t)}_k]^2}\\ &+ 4\E^{(t)}_{k,v}\br{\pr{(T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 4})}\pr{[\bar{v}^{(t)}_k]^2}} - 4\pr{\E^{(t)}_{k,v}(T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 4})}\pr{\E^{(t)}_{k,v}[\bar{v}^{(t)}_k]^2}. \end{align*} \end{proof} \begin{lemma}[Bound for the average tangent speed] Suppose that $m\delta_1^2 = O(\alpha^3)$ and, at time $t$, Proposition~\ref{prop:main} is true and $S^{(t)}_k \ne \varnothing$. Then we have \[ \E^{(t)}_{k,v} \br{\frac{d}{dt}[\bar{v}^{(t)}_k]^2} \geq 8(a_k-\hat{a}^{(t)}_k)(1-\E^{(t)}_{k,v}[\bar{v}^{(t)}_k]^2) - O(\alpha^3). \] \end{lemma} \begin{proof} Recall the definition of $G_1$, $G_2$ and $G_3$ from Lemma~\ref{lemma: d vtk2}. \begin{itemize} \item \textbf{Lower bound for $\E^{(t)}_{k, v} G_1$.} By \eqref{eq: Ekvw wk3 vk3 x >= 0}, we have $\E^{(t)}_{k, v, w}\left\{ [z^{(t)}]^3 \inner{\bar{w}_{-k}}{\bar{v}_{-k}} \right\} \ge 0$, and hence this term can be ignored. Meanwhile, note that $\E^{(t)}_{k, w} \left\{ [z^{(t)}]^4 \right\} \le 1$. Therefore, \begin{align*} \E^{(t)}_{k, v} G_1 &\ge 8 a_k \E^{(t)}_{k, v}\left\{ \left( 1 - [\bar{v}^{(t)}_k]^2 \right) [\bar{v}^{(t)}_k]^4 \right\} - 8 \hat{a}^{(t)}_k \E^{(t)}_{k, v}\left\{ 1 - [\bar{v}^{(t)}_k]^2 \right\}.
\end{align*} For the first term, we compute \begin{align*} \E^{(t)}_{k, v}\left\{ \left( 1 - [\bar{v}^{(t)}_k]^2 \right) [\bar{v}^{(t)}_k]^4 \right\} &= \E^{(t)}_{k, v}\left\{ \left( 1 - [\bar{v}^{(t)}_k]^2 \right) \left( 1 - \left(1 - [\bar{v}^{(t)}_k]^4\right) \right) \right\} \\ &= \E^{(t)}_{k, v}\left\{ 1 - [\bar{v}^{(t)}_k]^2 \right\} - \E^{(t)}_{k, v}\left\{ \left( 1 - [\bar{v}^{(t)}_k]^2 \right)^2 \left( 1 + [\bar{v}^{(t)}_k]^2 \right)\right\} \\ &\ge \E^{(t)}_{k, v}\left\{ 1 - [\bar{v}^{(t)}_k]^2 \right\} - 2 \E^{(t)}_{k, v}\left\{ \left( 1 - [\bar{v}^{(t)}_k]^2 \right)^2 \right\} \\ &\ge \E^{(t)}_{k, v}\left\{ 1 - [\bar{v}^{(t)}_k]^2 \right\} - O(\alpha^3). \end{align*} Thus, \[ \E^{(t)}_{k, v} G_1 \ge 8 \tilde{a}^{(t)}_k \E^{(t)}_{k, v}\left\{ 1 - [\bar{v}^{(t)}_k]^2 \right\} - O\left( \hat{a}^{(t)}_k \alpha^3 \right). \] \item \textbf{Upper bound for $\E^{(t)}_{k, v} |G_2|$ and $\E^{(t)}_{k, v} |G_3|$.} It follows from Lemma~\ref{lemma: cross interaction, average} that both terms are $O(1) \sum_{i \ne k} \hat{a}^{(t)}_i \alpha^3$. \end{itemize} Combining these two bounds and absorbing the $m\delta_1^2$ term into $O(\alpha^3)$ completes the proof. \end{proof} \begin{lemma}[Bound for the norm fluctuation] Suppose that at time $t$, Proposition~\ref{prop:main} is true and $S^{(t)}_k \ne \varnothing$.
Then at time $t$, we have \[ 4\E^{(t)}_{k,v} \br{\pr{(T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 4})}\pr{[\bar{v}^{(t)}_k]^2}} - 4\pr{\E^{(t)}_{k,v}(T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 4})}\pr{\E^{(t)}_{k,v}[\bar{v}^{(t)}_k]^2} \geq -O(\alpha^3). \] \end{lemma} \begin{proof} We can express $(T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 4})$ as follows: \begin{align*} &(T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 4}) \\ =& (a_k-\hat{a}^{(t)}_k)[\bar{v}^{(t)}_k]^4 + \hat{a}^{(t)}_k\pr{[\bar{v}^{(t)}_k]^4-\E^{(t)}_{k,w} \inner{\bar{w}^{(t)}}{\bar{v}^{(t)}}^4} + \sum_{i\neq k}a_i[\bar{v}^{(t)}_i]^4 - \sum_{i\neq k}\hat{a}^{(t)}_i \E^{(t)}_{i,w}\inner{\bar{w}^{(t)}}{\bar{v}^{(t)}}^4 \pm O(m\delta_1^2). \end{align*} It is clear that $\E^{(t)}_{k,v} \sum_{i\neq k}a_i[\bar{v}^{(t)}_i]^4 = O(\alpha^3)$ and $\E^{(t)}_{k,v} \sum_{i\neq k}\hat{a}^{(t)}_i \E^{(t)}_{i,w}\inner{\bar{w}^{(t)}}{\bar{v}^{(t)}}^4 = O(\alpha^3)$, so their influence can be bounded by $O(\alpha^3)$. Let's then focus on the first two terms in $(T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 4})$. For the first term, we have \begin{align*} &4\E^{(t)}_{k,v}(a_k-\hat{a}^{(t)}_k)[\bar{v}^{(t)}_k]^4 [\bar{v}^{(t)}_k]^2 - 4\E^{(t)}_{k,v}(a_k-\hat{a}^{(t)}_k)[\bar{v}^{(t)}_k]^4 \E^{(t)}_{k,v}[\bar{v}^{(t)}_k]^2\\ =& 4(a_k-\hat{a}^{(t)}_k)\pr{ \E^{(t)}_{k,v}[\bar{v}^{(t)}_k]^6 - \E^{(t)}_{k,v}[\bar{v}^{(t)}_k]^4 \E^{(t)}_{k,v}[\bar{v}^{(t)}_k]^2} \geq 0. \end{align*} Let's now turn our focus to the second term. Denote $x = \inner{\bar{w}^{(t)}_{-k}}{\bar{v}^{(t)}_{-k}}$ and write $\inner{\bar{w}^{(t)}}{\bar{v}^{(t)}}^4 = [\bar{w}^{(t)}_{k}]^4[\bar{v}^{(t)}_k]^4 + 4[\bar{w}^{(t)}_{k}]^3[\bar{v}^{(t)}_k]^3 x + O(x^2)$. Let $m=\E^{(t)}_{k,v}[\bar{v}^{(t)}_k]^2$; we know $m\in [1-O(\alpha^2), 1]$. We also know that $[\bar{v}^{(t)}_k]^2\in[1-\alpha, 1]$ for every $\bar{v}^{(t)} \in S^{(t)}_k,$ so we have $|[\bar{v}^{(t)}_k]^2-m|=O(\alpha)$.
We have \begin{align*} \absr{\E^{(t)}_{k,v}\E^{(t)}_{k,w} ([\bar{v}^{(t)}_k]^2 -m)[\bar{v}^{(t)}_k]^4(1-[\bar{w}^{(t)}_k]^4) } &= O(\alpha^3), \\ \absr{\E^{(t)}_{k,v}\E^{(t)}_{k,w} ([\bar{v}^{(t)}_k]^2 -m)(\bar{w}^{(t)}_k\bar{v}^{(t)}_k)^3 x} &= O(\alpha^3), \\ \E^{(t)}_{k,v}\E^{(t)}_{k,w} x^2 &= O(\alpha^4). \end{align*} Therefore, \begin{align*} &4\E^{(t)}_{k,v}\br{\hat{a}^{(t)}_k\pr{[\bar{v}^{(t)}_k]^4-\E^{(t)}_{k,w} \inner{\bar{w}^{(t)}}{\bar{v}^{(t)}}^4}[\bar{v}^{(t)}_k]^2} - 4\E^{(t)}_{k,v}\hat{a}^{(t)}_k\pr{[\bar{v}^{(t)}_k]^4-\E^{(t)}_{k,w}\inner{\bar{w}^{(t)}}{\bar{v}^{(t)}}^4} \E^{(t)}_{k,v}[\bar{v}^{(t)}_k]^2\\ \geq& -O(\hat{a}^{(t)}_k \alpha^3). \end{align*} Combining the bounds for all four terms, we conclude that \begin{align*} 4\E^{(t)}_{k,v}\br{(T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 4})[\bar{v}^{(t)}_k]^2} - 4\E^{(t)}_{k,v}(T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 4})\E^{(t)}_{k,v}[\bar{v}^{(t)}_k]^2 \geq -O(\alpha^3). \end{align*} \end{proof} \lemmaaveragebound* \begin{proof} It suffices to combine the previous three lemmas together. \end{proof} \subsection{Condition (\ref{itm: Ist, residual}): bounds for the residual} \label{sec:residual} In this section, we consider condition~(\ref{itm: Ist, residual}) of Proposition~\ref{prop:main}. Again, we need to estimate the derivative of $\tilde{a}^{(t)}_k$ when $\tilde{a}^{(t)}_k$ touches the boundary. \paragraph{On the impulses} Similar to the average bound in condition (\ref{itm: Ist, average}), we need to take the impulses into consideration. For the lower bound on $\tilde{a}^{(t)}_k$, we only need to consider the impulses caused by the entrance of new components since the reinitialization will only increase $\tilde{a}^{(t)}_k$. By Proposition~\ref{prop:main} and Assumption~\ref{assumption: induction, oracle}, the total amount of impulses is upper bounded by $m\delta_1^2$.
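As a toy numerical illustration of the impulse argument invoked throughout this section (Lemma~\ref{lemma: continuity argument with impulses}), the following sketch uses made-up dynamics, not the actual $\hat{a}^{(t)}_k$ flow: a drift satisfying the cushion condition (b) keeps $p^{(t)} \le 0$ despite upward jumps of total size $\Delta$, and the trajectory stays below the staircase bound $q^{(t)}$ from the proof.

```python
import numpy as np

def simulate_with_impulses(p0=-1.0, t_max=6.0, dt=1e-3,
                           impulses=((1.0, 0.3), (2.0, 0.3), (3.0, 0.3))):
    """Euler-simulate dp/dt = -(p + 2)/2 (a hypothetical drift with a cushion:
    dp/dt < 0 whenever p >= -Delta) plus upward jumps of size dj at times tj."""
    delta_total = sum(max(0.0, dj) for _, dj in impulses)
    assert p0 <= -delta_total  # condition (a) of the lemma
    ts = np.arange(0.0, t_max, dt)
    p, traj, jumped = p0, [], set()
    for t in ts:
        for tj, dj in impulses:
            if tj not in jumped and t >= tj:
                p += dj                    # impulse: instantaneous upward jump
                jumped.add(tj)
        p += dt * (-(p + 2.0) / 2.0)       # the continuous drift
        traj.append(p)
    return ts, np.array(traj), delta_total
```

Between jumps the drift restores the slack, so each jump of size $\delta_i$ only consumes part of the initial cushion $\Delta$, mirroring how the $\lambda/6 - (s-1)m\delta_1^2$ slack at the start of an epoch absorbs the $m\delta_1^2$ worth of impulses.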
At the beginning of epoch $s$, we have $\tilde{a}^{(t)}_k \ge \lambda/6 - (s-1) m \delta_1^2$, which is guaranteed by the induction hypothesis from the last epoch. (At the beginning of the first epoch, we have $\tilde{a}^{(t)}_k = a_k$.) Thus, following Lemma~\ref{lemma: continuity argument with impulses}, it suffices to show that $\frac{\mathrm{d}}{\mathrm{d} t} \tilde{a}^{(t)}_k > 0$ when $\tilde{a}^{(t)}_k \le \lambda/6$. The upper bound on $\tilde{a}^{(t)}_k$ can be proved in a similar fashion. The only difference is that now the impulses that matter are caused by the reinitialization, the total amount of which can again be bounded by $m \delta_1^2$. \begin{lemma} \label{lemma: d hattk} Suppose that at time $t$, Proposition~\ref{prop:main} is true and no impulses happen at time $t$. Then we have \[ \frac{1}{\hat{a}^{(t)}_k} \frac{\mathrm{d}}{\mathrm{d} t} \hat{a}^{(t)}_k = 2 \sum_{i=1}^d a_i \E^{(t)}_{k, v} [\bar{v}^{(t)}_i]^4 - 2 \sum_{i=1}^d \hat{a}^{(t)}_i \E^{(t)}_{k, v} \E^{(t)}_{i, w} [z^{(t)}]^4 - \lambda - O(m \delta_1^2). \] \end{lemma} \begin{proof} Recall that $\hat{a}^{(t)}_k = \sum_{v^{(t)} \in S^{(t)}_k} \ns{v^{(t)}}$ and Lemma~\ref{lemma: d |v|2} implies that \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t} \ns{v^{(t)}} &= 2 \sum_{i=1}^d a_i \ns{v^{(t)}} [\bar{v}^{(t)}_i]^4 - 2 \sum_{i=1}^d \hat{a}^{(t)}_i \ns{v^{(t)}} \E^{(t)}_{i, w} \left\{ [z^{(t)}]^4 \right\} \\ &\quad - \lambda \ns{v^{(t)}} - \ns{v^{(t)}} O(m \delta_1^2). \end{align*} Summing both sides over $v^{(t)} \in S^{(t)}_k$ and dividing by $\hat{a}^{(t)}_k$ completes the proof. \end{proof} \begin{lemma} \label{lemma: d hattk, upper bound} Suppose that at time $t$, Proposition~\ref{prop:main} is true and no impulses happen at time $t$. Assume $\delta_1^2 = O(\alpha^2/m).$ Then we have \[ \frac{1}{\hat{a}^{(t)}_k} \frac{\mathrm{d}}{\mathrm{d} t} \hat{a}^{(t)}_k \le 2\tilde{a}^{(t)}_k - \lambda + O(\alpha^2). \] In particular, when $\tilde{a}^{(t)}_k \le \lambda/6$, we have $\frac{\mathrm{d}}{\mathrm{d} t} \hat{a}^{(t)}_k < 0$.
\end{lemma} \begin{proof} By Lemma~\ref{lemma: d hattk}, we have \begin{align*} \frac{1}{\hat{a}^{(t)}_k} \frac{\mathrm{d}}{\mathrm{d} t} \hat{a}^{(t)}_k \le 2 a_k - 2 \hat{a}^{(t)}_k \E^{(t)}_{k, v} \E^{(t)}_{k, w} [z^{(t)}]^4 + 2 \sum_{i \ne k} a_i \E^{(t)}_{k, v} [\bar{v}^{(t)}_i]^4 - \lambda. \end{align*} By Lemma~\ref{lemma: Evw z4}, we have \[ 2 a_k - 2 \hat{a}^{(t)}_k \E^{(t)}_{k, v} \E^{(t)}_{k, w} [z^{(t)}]^4 \le 2 \tilde{a}^{(t)}_k + O( a_k \alpha^2 ). \] For each term in the summation, we have \[ \E^{(t)}_{k, v} [\bar{v}^{(t)}_i]^4 \le \E^{(t)}_{k, v} \left\{ \left(1 - [\bar{v}^{(t)}_k]^2\right)^2 \right\} \le \alpha \E^{(t)}_{k, v} \left\{1 - [\bar{v}^{(t)}_k]^2 \right\} \le \alpha^3. \] Thus, \begin{align*} \frac{1}{\hat{a}^{(t)}_k} \frac{\mathrm{d}}{\mathrm{d} t} \hat{a}^{(t)}_k &\le 2 \tilde{a}^{(t)}_k + O( a_k \alpha^2 ) + 2 \sum_{i \ne k} a_i \alpha^3 - \lambda \\ &\le 2\tilde{a}^{(t)}_k - \lambda + O(\alpha^2). \end{align*} \end{proof} \begin{lemma} \label{lemma: d hattk, lower bound} Suppose that at time $t$, Proposition~\ref{prop:main} is true and no impulses happen at time $t$. Then at time $t$, we have \[ \frac{1}{\hat{a}^{(t)}_k} \frac{\mathrm{d}}{\mathrm{d} t} \hat{a}^{(t)}_k \ge 2 \tilde{a}^{(t)}_k - \lambda - O\left(\alpha^2 \right). \] In particular, when $\tilde{a}^{(t)}_k \ge \lambda$, we have $\frac{\mathrm{d}}{\mathrm{d} t} \hat{a}^{(t)}_k > 0$. \end{lemma} \begin{proof} By Lemma~\ref{lemma: d hattk} (and the fact $\hat{a}^{(t)}_i \le a_i$), we have \begin{align*} \frac{1}{\hat{a}^{(t)}_k} \frac{\mathrm{d}}{\mathrm{d} t} \hat{a}^{(t)}_k \ge 2 a_k \E^{(t)}_{k, v} [\bar{v}^{(t)}_k]^4 - 2 \hat{a}^{(t)}_k - 2\sum_{i \ne k} a_i \E^{(t)}_{k, v} \E^{(t)}_{i, w} [z^{(t)}]^4 - \lambda - O(m\delta_1^2). \end{align*} Note that $\E^{(t)}_{k, v} [\bar{v}^{(t)}_k]^4 \ge 1 - O(\alpha^2)$, whence \[ 2 a_k \E^{(t)}_{k, v} [\bar{v}^{(t)}_k]^4 - 2 \hat{a}^{(t)}_k \ge 2 \tilde{a}^{(t)}_k - O\left( a_k \alpha^2 \right).
\] For each term in the summation, by Lemma~\ref{lemma: cross interaction, average}, we have $\E^{(t)}_{k, v} \E^{(t)}_{i, w} [z^{(t)}]^4 \le O(\alpha^3)$. Thus, \begin{align*} \frac{1}{\hat{a}^{(t)}_k} \frac{\mathrm{d}}{\mathrm{d} t} \hat{a}^{(t)}_k \ge 2 \tilde{a}^{(t)}_k - \lambda - O\left(\alpha^2 \right). \end{align*} \end{proof} \section{Proofs for Proposition~\ref{prop:main}} \label{sec: appendix, induction hypothesis} The goal of this section is to prove Proposition~\ref{prop:main} under Assumption~\ref{assumption: induction, oracle}. We also prove Claim~\ref{clm:example} in Section~\ref{sec: induction, counterexample}. \paragraph{Notations} Recall we defined \[\E^{(s,t)}_{i,w} f(w^{(s,t)}):=\frac{1}{\hat{a}^{(s,t)}_i}\sum_{w^{(s,t)}\in S^{(s,t)}_i}\ns{w^{(s,t)}} f(w^{(s,t)}).\] We will use this notation extensively in this section. For simplicity, we shall drop the superscript of epoch $s$. Further, we sometimes consider the expectation with two variables $v$ and $w$: \[\E^{(s,t)}_{i,v,w} f(v^{(s,t)}, w^{(s,t)}):=\frac{1}{\left[\hat{a}^{(s,t)}_i\right]^2}\sum_{v^{(s,t)},w^{(s,t)}\in S^{(s,t)}_i}\ns{v^{(s,t)}}\ns{w^{(s,t)}} f(v^{(s,t)}, w^{(s,t)}).\] We will also write $z^{(t)} := \inner{\bar{v}^{(t)}}{\bar{w}^{(t)}}$ and $\tilde{a}^{(t)}_k := a_k - \hat{a}^{(t)}_k$. Note that $v$ and $w$ in this section (and later in the proof) just serve as arbitrary components in columns of $W$. \begin{assumption} \label{assumption: induction, oracle} Throughout this section, we assume the following. \begin{enumerate}[(a)] \item For any $k \in [d]$, in phase 1, when $v^{(t)}$ enters $S^{(t)}_k$, that is, when $\|v^{(t)}\| = \delta_1$, we have $[\bar{v}^{(t)}_k]^2 \ge 1 - \alpha^2$ if $\hat{a}^{(t)}_k < \alpha$ and $[\bar{v}^{(t)}_k]^2 \ge 1 - \alpha$ if $\hat{a}^{(t)}_k \ge \alpha$. \item There exists a small constant $c > 0$ s.t.~for any $k \in [d]$ with $a_k < c\beta^{(s)}$, in phase 1, no components will enter $S^{(t)}_k$.
\item For any $k \in [d]$, in phase 2, no components will enter $S^{(t)}_k$. \item For the parameters, we assume $m\delta_1^2 \le \alpha^3$ and $\Omega\left( \sqrt{\alpha} \right) \le \lambda \le O\left( \min_{s} \beta^{(s)} \right) = O(\varepsilon / \sqrt{d})$. \end{enumerate} \end{assumption} \begin{remark} As we mentioned, the entire proof is an induction and we only need the assumption up to the point that we are analyzing. The assumption will be proved later in Appendices~\ref{sec:proof_init_phase1} and~\ref{sec:proof_phase2} to finish the induction/continuity argument. The reason we state this assumption here, and state it as an assumption, is to make the dependencies more transparent. \end{remark} \begin{remark}[Remark on the choice of $\lambda$] The lower bound $\lambda = \Omega(\sqrt{\alpha})$ comes from Lemma~\ref{lemma: individual bound}. For the upper bound, first note that when $\lambda$ is larger than $a_k$, the norm of components in $S^{(t)}_k$ can actually decrease (cf.~Lemma \ref{lemma: d |v|2}). Hence, we require $\lambda < c \min_s \beta^{(s)} / 10$ where $c$ is the constant in (b). This makes sure that in phase 2 the growth rate of $\hat{a}^{(t)}_k$ is not too small. \iffalse For those $k$ with $a_k < c \min_s \beta^{(s)} / 10$, we do not need to fit them to get $\varepsilon$-accuracy. In fact, this inequality itself is just another way to say that the algorithm stops before $\beta^{(s)}$ becomes smaller than those $a_k$. The only subtlety here is that in condition (\ref{itm: Ist, residual}) of Proposition~\ref{prop:main}, we require $\tilde{a}^{(t)}_k \ge \lambda / 6$, which may not hold for those small $a_k$. However, this lower bound is only used to maintain the local stability around $e_k$ and since by condition (c) of Assumption~\ref{assumption: induction, oracle}, no components will ever enter $S^{(t)}_k$, we do not need a local stability result for those small $a_k$ in the first place.
\fi \end{remark} \proposition* Before we move on to the proof, we collect some further remarks on Proposition~\ref{prop:main} and the proof overview here. \begin{remark}[Remark on the epoch correction term] Note that conditions (\ref{itm: Ist, average}) and (\ref{itm: Ist, residual}) have an additional term of the form $O(s m\delta_1^2)$. This is because these average bounds may deteriorate a little when the content of $S^{(t)}_k$ changes, which will happen when new components enter $S^{(t)}_k$ or the reinitialization throws some components out of $S^{(t)}_k$. The norm of the components involved in these fluctuations is upper bounded by $\delta_1$ and their number by $m$, whence the $O(m\delta_1^2)$ factor. The factor $s$ accounts for the accumulation across epochs. We need this to guarantee that at the beginning of each epoch, the conditions hold with some slackness (cf.~Lemma~\ref{lemma: continuity argument with impulses}). Though this issue could be fixed by slightly sharper estimations of the ending state of each epoch, adding one epoch correction term is simpler; since we only have $\log (d/\epsilon)$ epochs, it does not change the bounds too much and, in fact, we can always absorb these terms into the coefficients of $\lambda$ and $\alpha^2$, respectively. \end{remark} \begin{remark}[Remark on condition~(\ref{itm: Ist, individual})] Note that Assumption~\ref{assumption: induction, oracle} makes sure that when a component enters $S^{(t)}_k$, we always have $[\bar{v}^{(t)}_k]^2 \ge 1 - \alpha$. Hence, essentially this condition says that such components will remain basis-like. Following the spirit of the continuity argument, to maintain this condition, it suffices to prove Lemma~\ref{lemma: individual bound}, the proof of which is deferred to Section~\ref{sec:individual}. Also note that by Assumption~\ref{assumption: induction, oracle} and the definition of $S^{(s,t)}_k$, neither the entrance of new components nor the reinitialization will break this condition.
\end{remark} \begin{restatable}{lemma}{lemmaindividualbound} \label{lemma: individual bound} Suppose that at time $t$, Proposition~\ref{prop:main} is true. Assuming $\delta_1^2 = O(\alpha^{1.5}/m)$, for any $v^{(t)} \in S^{(t)}_k$, we have \[ \frac{\mathrm{d}}{\mathrm{d} t} [\bar{v}^{(t)}_k]^2 \ge 8 \tilde{a}^{(t)}_k \left( 1 - [\bar{v}^{(t)}_k]^2 \right) [\bar{v}^{(t)}_k]^4 - O\left( \alpha^{1.5}\right). \] In particular, if $\lambda = \Omega\left(\sqrt{\alpha}\right)$, then $\frac{\mathrm{d}}{\mathrm{d} t} [\bar{v}^{(t)}_k]^2 > 0$ whenever $[\bar{v}^{(t)}_k]^2 = 1 - \alpha$. \end{restatable} \begin{remark}[Remark on condition~(\ref{itm: Ist, average})] The proof idea of condition~(\ref{itm: Ist, average}) is similar to condition~(\ref{itm: Ist, individual}) and we prove Lemma~\ref{lemma: average bound} in Section~\ref{sec:average}. In Section~\ref{sec:average}, we also handle the impulses caused by the entrance of new components and the reinitialization. \end{remark} \begin{restatable}{lemma}{lemmaaveragebound} \label{lemma: average bound} Suppose that at time $t$, Proposition~\ref{prop:main} is true and $S^{(t)}_k \ne \varnothing$. Assuming $\delta_1^2 = O(\alpha^3/m)$, we have \[ \frac{\mathrm{d}}{\mathrm{d} t} \E^{(t)}_{k, v}[\bar{v}^{(t)}_k]^2 \geq 8\tilde{a}^{(t)}_k (1-\E^{(t)}_{k,v}[\bar{v}^{(t)}_k]^2) - O(\alpha^3). \] In particular, if $\lambda = \Omega(\alpha)$, then $\frac{\mathrm{d}}{\mathrm{d} t} \E^{(t)}_{k, v}[\bar{v}^{(t)}_k]^2 > 0$ when $\E^{(t)}_{k, v}[\bar{v}^{(t)}_k]^2 < 1 - \alpha^2/2$. \end{restatable} \begin{remark}[Remark on condition~(\ref{itm: Ist, residual})] This condition says that the residual along direction $k$ is always $\Omega(\lambda)$. This guarantees the existence of a small attraction region around $e_k$, which will keep basis-like components basis-like. We rely on the regularizer to maintain this condition. The second part of condition~(\ref{itm: Ist, residual}) means that fitted directions will remain fitted.
We prove Lemma~\ref{lemma: d hattk, upper and lower bound} and handle the impulses in Section~\ref{sec:residual}. \end{remark} \begin{lemma}[Lemma~\ref{lemma: d hattk, upper bound} and Lemma~\ref{lemma: d hattk, lower bound}] \label{lemma: d hattk, upper and lower bound} Suppose that at time $t$, Proposition~\ref{prop:main} is true and no impulses happen at time $t$. Then at time $t$, we have \[ \frac{1}{\hat{a}^{(t)}_k} \frac{\mathrm{d}}{\mathrm{d} t} \hat{a}^{(t)}_k = 2 \tilde{a}^{(t)}_k - \lambda \pm O\left(\alpha^2 \right). \] In particular, $\frac{\mathrm{d}}{\mathrm{d} t} \hat{a}^{(t)}_k$ is negative (resp.~positive) when $\hat{a}^{(t)}_k > a_k - \lambda / 6$ (resp.~$\hat{a}^{(t)}_k < a_k - \lambda$). \end{lemma} \input{induction-hypothesis/continuity-argument} \input{induction-hypothesis/preliminaries} \input{induction-hypothesis/individual-bound} \input{induction-hypothesis/average-bound} \input{induction-hypothesis/residual} \input{induction-hypothesis/counter-example} \subsection{Preliminaries} \label{sec: induction, preliminaries} The next two lemmas give formulas for the norm growth rate and tangent speed of each component. \begin{lemma}[Norm growth rate] \label{lemma: d |v|2} For any $v^{(t)}$, we have \begin{equation*} \frac{1}{2 \ns{v^{(t)}}} \frac{\mathrm{d}}{\mathrm{d} t} \ns{v^{(t)}} = \sum_{i=1}^d a_i [\bar{v}^{(t)}_i]^4 - \sum_{i=1}^d \hat{a}^{(t)}_i \E^{(t)}_{i, w} \left\{ [z^{(t)}]^4 \right\} - T^{(t)}_\varnothing \left( [\bar{v}^{(t)}]^{\otimes 4} \right) - \frac{\lambda}{2}. \end{equation*} \end{lemma} \begin{proof} Due to the $2$-homogeneity, we have\footnote{In mean-field terminology, the RHS is just the first variation (or functional derivative) of the loss at $\bar{v}^{(t)}$.} \begin{align*} \frac{1}{2 \ns{v^{(t)}}} \frac{\mathrm{d}}{\mathrm{d} t} \ns{v^{(t)}} &= \left( T^* - T^{(t)} \right) \left( [\bar{v}^{(t)}]^{\otimes 4} \right) - \frac{\lambda}{2}.
\end{align*} The ground truth term can be rewritten as \[ T^*\left( [\bar{v}^{(t)}]^{\otimes 4} \right) = \sum_{i=1}^d a_i [\bar{v}^{(t)}_i]^4. \] Decomposing the $T^{(t)}$ term accordingly, we get \[ T^{(t)}\left( [\bar{v}^{(t)}]^{\otimes 4} \right) = \sum_{i=1}^d \hat{a}^{(t)}_i \E^{(t)}_{i, w} \left\{ [z^{(t)}]^4 \right\} + T^{(t)}_\varnothing \left( [\bar{v}^{(t)}]^{\otimes 4} \right). \] \end{proof} \begin{lemma}[Tangent speed] \label{lemma: d vtk2} Suppose that at time $t$, Proposition~\ref{prop:main} is true. Then at time $t$, for any $v^{(t)} \in W^{(t)}$ and any $k \in [d]$, we have \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t} [\bar{v}^{(t)}_k]^2 = G_1 - G_2 - G_3 \pm O(m\delta_1^2), \end{align*} where \begin{align*} G_1 &:= 8 a_k \left( 1 - [\bar{v}^{(t)}_k]^2 \right) [\bar{v}^{(t)}_k]^4 - 8 \hat{a}^{(t)}_k \left( 1 - [\bar{v}^{(t)}_k]^2 \right) \E^{(t)}_{k, w}\left\{ [z^{(t)}]^4 \right\} \\ &\qquad + 8 \hat{a}^{(t)}_k \E^{(t)}_{k, w}\left\{ [z^{(t)}]^3 \inner{\bar{w}_{-k}}{\bar{v}_{-k}} \right\}, \\ G_2 &:= 8 \sum_{i \ne k} \hat{a}^{(t)}_i \E^{(t)}_{i, w} \left\{ [z^{(t)}]^3 \bar{v}^{(t)}_k \bar{w}^{(t)}_k \right\}, \\ G_3 &:= 8 [\bar{v}^{(t)}_k]^2 \sum_{i \ne k} \left( a_i [\bar{v}^{(t)}_i]^4 - \hat{a}^{(t)}_i \E^{(t)}_{i, w} \left\{ [z^{(t)}]^4 \right\} \right). \end{align*} \end{lemma} \begin{remark} Intuitively, $G_1$ captures the local dynamics around $e_k$, while $G_2$ and $G_3$ characterize the cross interaction between different ground truth directions.
\end{remark} \begin{proof} Let us compute the derivative of $[\bar{v}^{(t)}_k]^2$ with respect to time $t$: \begin{align*} \frac{\mathrm{d} [\bar{v}^{(t)}_k]^2}{\mathrm{d} t} &= 2\bar{v}^{(t)}_k\cdot \frac{d}{dt}\frac{v^{(t)}_k}{\n{v^{(t)}}}\\ &= 2\bar{v}^{(t)}_k\cdot \frac{1}{\n{v^{(t)}}}\frac{d}{dt}v^{(t)}_k + 2[\bar{v}^{(t)}_k]^2 \n{v^{(t)}} \cdot \frac{d}{dt}\frac{1}{\n{v^{(t)}}}\\ &= 2\bar{v}^{(t)}_k\cdot \frac{1}{\n{v^{(t)}}} [-\nabla L(v^{(t)})]_k - 2[\bar{v}^{(t)}_k]^2 \cdot \frac{\inner{\bar{v}^{(t)}}{-\nabla L(v^{(t)})}}{\n{v^{(t)}}}\\ &= 2\bar{v}^{(t)}_k\cdot \frac{1}{\n{v^{(t)}}} [-(I-\bar{v}^{(t)}[\bar{v}^{(t)}]^\top)\nabla L(v^{(t)})]_k. \end{align*} Note that \[ \nabla L(v^{(t)}) = \n{v^{(t)}} \left( 4(T^{(t)}-T^*)([\bar{v}^{(t)}]^{\otimes 2}, \bar{v}^{(t)}, I) - 2(T^{(t)}-T^*)([\bar{v}^{(t)}]^{\otimes 4})\bar{v}^{(t)} + \lambda \bar{v}^{(t)} \right), \] where the last two terms vanish when left-multiplied by $(I-\bar{v}^{(t)}[\bar{v}^{(t)}]^\top)$. Therefore, \[ \frac{\mathrm{d} [\bar{v}^{(t)}_k]^2}{\mathrm{d} t} = 8\bar{v}^{(t)}_k\br{(T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 3},I) - (T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 4})\bar{v}^{(t)} }_k. \] We can write $T^*$ as $\sum_{i\in[d]}a_i e_i^{\otimes 4}$ and write $T^{(t)}$ as $\sum_{i\in [d]} T^{(t)}_i +T^{(t)}_\varnothing$. Since Proposition~\ref{prop:main} is true at time $t$, we know that any $w^{(t)}$ in $W^{(t)}_\varnothing$ has norm upper bounded by $\delta_1$, which implies $\fn{T^{(t)}_\varnothing}\leq m\delta_1^2$. Therefore, we have \[ \absr{8\bar{v}^{(t)}_k\br{-T^{(t)}_\varnothing([\bar{v}^{(t)}]^{\otimes 3},I) +T^{(t)}_\varnothing([\bar{v}^{(t)}]^{\otimes 4})\bar{v}^{(t)} }_k } \leq O(m\delta_1^2).
\] For any $i\in [d]$, we have \begin{align*} \br{T^{(t)}_i([\bar{v}^{(t)}]^{\otimes 3}, I)}_k =& \sum_{w^{(t)}\in S^{(t)}_i}\ns{w^{(t)}} \inner{\bar{w}^{(t)}}{\bar{v}^{(t)}}^3 \bar{w}^{(t)}_k \\ =& \hat{a}^{(t)}_i \E^{(t)}_{i,w} \inner{\bar{w}^{(t)}}{\bar{v}^{(t)}}^3 \bar{w}^{(t)}_k, \end{align*} and \begin{align*} \br{T^{(t)}_i([\bar{v}^{(t)}]^{\otimes 4})\bar{v}^{(t)}}_k =& \sum_{w^{(t)}\in S^{(t)}_i}\ns{w^{(t)}} \inner{\bar{w}^{(t)}}{\bar{v}^{(t)}}^4 \bar{v}^{(t)}_k \\ =& \hat{a}^{(t)}_i \E^{(t)}_{i,w} \inner{\bar{w}^{(t)}}{\bar{v}^{(t)}}^4 \bar{v}^{(t)}_k. \end{align*} Similarly, for any $i\in [d]$, we have \begin{align*} \br{T^*_i([\bar{v}^{(t)}]^{\otimes 3}, I)}_k = a_i [\bar{v}^{(t)}_i]^3 \indic{i=k} \end{align*} and \begin{align*} \br{T^*_i([\bar{v}^{(t)}]^{\otimes 4})\bar{v}^{(t)}}_k = a_i [\bar{v}^{(t)}_i]^4 \bar{v}^{(t)}_k. \end{align*} Based on the above calculations, we can see that \begin{align*} G_1 &= 8\bar{v}^{(t)}_k\br{(T^*_k-T^{(t)}_k)([\bar{v}^{(t)}]^{\otimes 3},I) - (T^*_k-T^{(t)}_k)([\bar{v}^{(t)}]^{\otimes 4})\bar{v}^{(t)} }_k,\\ G_2 &= 8\bar{v}^{(t)}_k\br{\sum_{i\neq k}T^{(t)}_i([\bar{v}^{(t)}]^{\otimes 3},I)}_k,\\ G_3 &= 8[\bar{v}^{(t)}_k]^2 \sum_{i\neq k}(T^*_i-T^{(t)}_i)([\bar{v}^{(t)}]^{\otimes 4}), \end{align*} and the error term $O(m\delta_1^2)$ comes from $T^{(t)}_\varnothing$. To complete the proof, use the identity $\inner{\bar{w}}{\bar{v}} = \bar{w}_k \bar{v}_k + \inner{\bar{w}_{-k}}{\bar{v}_{-k}}$ to rewrite $G_1$. \end{proof} One may wish to skip the following estimations and come back to them when needed. \begin{lemma} \label{lemma: inner, bvk>=, any w} For any $\bar{v}$ with $\bar{v}_k^2 \ge 1 - \alpha$ and any $\bar{w} \in \mathbb{S}^{d-1}$, we have $|\inner{\bar{v}}{\bar{w}}| = |\bar{w}_k| \pm \sqrt{\alpha}$. \end{lemma} \begin{proof} Assume w.l.o.g.~that $k=1$. Note that the set $\{\bar{v} \in \mathbb{S}^{d-1} \;:\; \bar{v}_k^2 \ge 1 - \alpha \}$ is invariant under rotations of the other coordinates, whence we may further assume w.l.o.g.
that $\bar{w} = \bar{w}_1 e_1 + \sqrt{1 - \bar{w}_1^2} e_2$. Then, since $|\bar{v}_1| \ge \sqrt{1-\alpha}$ and $|\bar{v}_2| \le \sqrt{1 - \bar{v}_1^2} \le \sqrt{\alpha}$, \begin{align*} |\inner{\bar{w}}{\bar{v}}| &= \left|\bar{w}_1 \bar{v}_1 + \sqrt{1 - \bar{w}_1^2}\, \bar{v}_2 \right| \\ &\ge |\bar{w}_1| \sqrt{1 - \alpha} - \sqrt{\alpha} \sqrt{1 - \bar{w}_1^2} \\ &= \frac{\bar{w}_1^2 (1 - \alpha) - \alpha (1 - \bar{w}_1^2)} {|\bar{w}_1| \sqrt{1 - \alpha} + \sqrt{\alpha} \sqrt{1 - \bar{w}_1^2}} \\ &= \frac{\bar{w}_1^2 - \alpha} {|\bar{w}_1| \sqrt{1 - \alpha} + \sqrt{\alpha} \sqrt{1 - \bar{w}_1^2}} \ge \frac{\bar{w}_1^2 - \alpha}{|\bar{w}_1| + \sqrt{\alpha} } = |\bar{w}_1| - \sqrt{\alpha}. \end{align*} The other direction follows immediately from \[ |\inner{\bar{w}}{\bar{v}}| \le |\bar{w}_1| |\bar{v}_1| + \sqrt{1 - \bar{w}_1^2}\, |\bar{v}_2| \le |\bar{w}_1| + \sqrt{\alpha}. \] \end{proof} The next two lemmas bound the cross interaction between different $S^{(t)}_k$. \begin{lemma} \label{lemma: cross interaction, individual} Suppose that at time $t$, Proposition~\ref{prop:main} is true. Then for any $v^{(t)} \in S^{(t)}_k$ and $l \ne k$, the following hold. \begin{enumerate}[(a)] \item $[\bar{v}^{(t)}_l]^4 \le \alpha^2$. \item $\E^{(t)}_{l, w}\left\{ [z^{(t)}]^4 \right\} \le O(\alpha^2)$. \item $\E^{(t)}_{l, w}\left\{ [z^{(t)}]^3 \bar{v}_l \bar{w}_l \right\} \le O(\alpha^2)$. \end{enumerate} \end{lemma} \begin{proof} (a) follows immediately from $[\bar{v}^{(t)}_l]^4 \le \left(1 - [\bar{v}^{(t)}_k]^2\right)^2 \le \alpha^2$. For (b), apply Lemma~\ref{lemma: inner, bvk>=, any w} and we get \[ \E^{(t)}_{l, w}\left\{ [z^{(t)}]^4 \right\} \le \E^{(t)}_{l, w}\left\{ \left( |\bar{w}_k| + \sqrt{\alpha}\right)^4 \right\} \le \E^{(t)}_{l, w}\left\{ [\bar{w}_k]^4 + 4 |\bar{w}_k|^3 \sqrt{\alpha} + 6 [\bar{w}_k]^2 \alpha + 4 |\bar{w}_k| \alpha^{1.5} + \alpha^2 \right\}. \] For the first three terms, it suffices to note that $\E^{(t)}_{l, w}\left\{ [\bar{w}_k]^2 \right\} \le \alpha^2$. For the fourth term, it suffices to additionally recall Jensen's inequality.
Combine these together and we get $\E^{(t)}_{l, w}\left\{ [z^{(t)}]^4 \right\} = O(\alpha^2)$. The proof of (b), \textit{mutatis mutandis}, yields (c). \end{proof} \begin{lemma} \label{lemma: cross interaction, average} Suppose that at time $t$, Proposition~\ref{prop:main} is true. Then for any $k \ne l$, the following hold. \begin{enumerate}[(a)] \item $\E^{(t)}_{k, v} [\bar{v}^{(t)}_l]^4 \le O(\alpha^3)$. \item $\E^{(t)}_{k, v} \E^{(t)}_{l, w} [z^{(t)}]^4 \le O(\alpha^3)$. \item $\E^{(t)}_{k, v} \E^{(t)}_{l, w} \left\{ [z^{(t)}]^3 \bar{v}_k \bar{w}_k \right\} \le O(\alpha^3)$. \end{enumerate} \end{lemma} \begin{proof} For (a), we compute \[ \E^{(t)}_{k, v} [\bar{v}^{(t)}_l]^4 \le \E^{(t)}_{k, v} \left\{ \left( 1 - [\bar{v}^{(t)}_k]^2 \right)^2 \right\} \le \alpha \E^{(t)}_{k, v} \left\{1 - [\bar{v}^{(t)}_k]^2 \right\} \le O(\alpha^3), \] where the second inequality comes from condition (\ref{itm: Ist, individual}) of Proposition~\ref{prop:main} and the third from condition (\ref{itm: Ist, average}) of Proposition~\ref{prop:main}. Now we prove (b); (c) can be proved in a similar fashion. For simplicity, write $x^{(t)} = \inner{\bar{w}^{(t)}_{-l}}{\bar{v}^{(t)}_{-l}}$. It is clear that $|x^{(t)}| \le \sqrt{1 - [\bar{w}^{(t)}_l]^2}$ and, by Jensen's inequality and condition (\ref{itm: Ist, average}) of Proposition~\ref{prop:main}, $\E^{(t)}_{l, w} \sqrt{1 - [\bar{w}^{(t)}_l]^2} \le O(\alpha)$. We compute \begin{align*} \E^{(t)}_{k, v} \E^{(t)}_{l, w} [z^{(t)}]^4 = \E^{(t)}_{k, v} \E^{(t)}_{l, w} \bigg\{ [\bar{w}^{(t)}_l]^4 [\bar{v}^{(t)}_l]^4 & + 4 [\bar{w}^{(t)}_l]^3 [\bar{v}^{(t)}_l]^3 x^{(t)} + 6 [\bar{w}^{(t)}_l]^2 [\bar{v}^{(t)}_l]^2 [x^{(t)}]^2 \\ & + 4 \bar{w}^{(t)}_l \bar{v}^{(t)}_l [x^{(t)}]^3 + [x^{(t)}]^4 \bigg\}. \end{align*} We bound each of these five terms as follows.
\begin{align*} \E^{(t)}_{k, v} \E^{(t)}_{l, w} \left\{ [\bar{w}^{(t)}_l]^4 [\bar{v}^{(t)}_l]^4 \right\} &\le \E^{(t)}_{k, v} [\bar{v}^{(t)}_l]^4 \le O(\alpha^3), \\ % \E^{(t)}_{k, v} \E^{(t)}_{l, w} \left\{ [\bar{w}^{(t)}_l]^3 [\bar{v}^{(t)}_l]^3 x^{(t)} \right\} &\le \E^{(t)}_{k, v} [\bar{v}^{(t)}_l]^3 \E^{(t)}_{l, w} \left\{ \sqrt{1 - [\bar{w}^{(t)}_l]^2} \right\} \le O(\alpha^3), \\ \E^{(t)}_{k, v} \E^{(t)}_{l, w} \left\{ [\bar{w}^{(t)}_l]^2 [\bar{v}^{(t)}_l]^2 [x^{(t)}]^2 \right\} &\le \E^{(t)}_{k, v} [\bar{v}^{(t)}_l]^2 \E^{(t)}_{l, w} \left\{ 1 - [\bar{w}^{(t)}_l]^2 \right\} \le O(\alpha^3), \\ % \E^{(t)}_{k, v} \E^{(t)}_{l, w} \left\{ \bar{w}^{(t)}_l \bar{v}^{(t)}_l [x^{(t)}]^3 \right\} &\le \E^{(t)}_{k, v} \bar{v}^{(t)}_l \E^{(t)}_{l, w} \left\{ \left(1 - [\bar{w}^{(t)}_l]^2\right)^{1.5} \right\} \le O(\alpha^3), \\ % \E^{(t)}_{k, v} \E^{(t)}_{l, w} [x^{(t)}]^4 &\le \E^{(t)}_{l, w} \left\{ \left(1 - [\bar{w}^{(t)}_l]^2\right)^2 \right\} \le O(\alpha^3). \end{align*} Combine these together and we complete the proof. \end{proof} \begin{lemma} \label{lemma: Ez4 approx vk4} Suppose that at time $t$, Proposition~\ref{prop:main} is true. Then, for any $v^{(t)} \in S^{(t)}_k$, we have $\E^{(t)}_{k, w} \left\{ [z^{(t)}]^4 \right\} = [\bar{v}^{(t)}_k]^4 \pm O(\alpha^{1.5})$. \end{lemma} \begin{proof} For simplicity, put $x^{(t)} = \inner{\bar{w}^{(t)}_{-k}}{\bar{v}^{(t)}_{-k}}$. Note that $|x^{(t)}| \le \sqrt{1 - [\bar{v}^{(t)}_k]^2} \sqrt{1 - [\bar{w}^{(t)}_k]^2} \le \sqrt{\alpha} \sqrt{1 - [\bar{w}^{(t)}_k]^2}$. Then \begin{align*} \E^{(t)}_{k, w} \left\{ [z^{(t)}]^4 \right\} = \E^{(t)}_{k, w} \left\{ \left[\bar{w}^{(t)}_k \bar{v}^{(t)}_k + x^{(t)}\right]^4 \right\} = [\bar{v}^{(t)}_k]^4 \E^{(t)}_{k, w} \left\{ [\bar{w}^{(t)}_k]^4 \right\} \pm O(1) \E^{(t)}_{k, w} x^{(t)}. 
\end{align*} For the first term, note that \begin{align*} \E^{(t)}_{k, w} \left\{ [\bar{w}^{(t)}_k]^4 \right\} = 1 - \E^{(t)}_{k, w} \left\{ (1 - [\bar{w}^{(t)}_k]^2)(1 + [\bar{w}^{(t)}_k]^2) \right\} \ge 1 - 2 \alpha^2. \end{align*} For the second term, by Jensen's inequality, we have \[ \left|\E^{(t)}_{k, w} x^{(t)}\right| \le \sqrt{\alpha \E^{(t)}_{k, w}\left[1 - [\bar{w}^{(t)}_k]^2\right]} \le \alpha^{1.5}. \] Thus, \[ \E^{(t)}_{k, w} \left\{ [z^{(t)}]^4 \right\} = [\bar{v}^{(t)}_k]^4 \left(1 \pm 2\alpha^2 \right) \pm O(\alpha^{1.5}) = [\bar{v}^{(t)}_k]^4 \pm O(\alpha^{1.5}). \] \end{proof} \begin{lemma} \label{lemma: Evw z4} Suppose that at time $t$, Proposition~\ref{prop:main} is true. Then we have $\E^{(t)}_{k, v, w}\left\{ [z^{(t)}]^4 \right\} \ge 1 - O(\alpha^2)$. \end{lemma} \begin{proof} For simplicity, put $x^{(t)} = \inner{\bar{w}^{(t)}_{-k}}{\bar{v}^{(t)}_{-k}}$. We have \begin{align*} \E^{(t)}_{k, v, w} \left\{ [z^{(t)}]^4 \right\} &= \E^{(t)}_{k, v, w} \left\{ \left(\bar{w}^{(t)}_k \bar{v}^{(t)}_k + x^{(t)} \right)^4 \right\} \\ &\ge \E^{(t)}_{k, v, w} \left\{ [\bar{w}^{(t)}_k]^4 [\bar{v}^{(t)}_k]^4 + 4 [\bar{w}^{(t)}_k]^3 [\bar{v}^{(t)}_k]^3 x^{(t)} + 4 \bar{w}^{(t)}_k \bar{v}^{(t)}_k [x^{(t)}]^3 \right\}. \end{align*} Note that \begin{equation} \label{eq: Ekvw wk3 vk3 x >= 0} \begin{split} \E^{(t)}_{k, v, w} \left\{ [\bar{w}^{(t)}_k]^3 [\bar{v}^{(t)}_k]^3 x^{(t)} \right\} &= \sum_{i\ne k} \E^{(t)}_{k, v, w} \left\{ [\bar{w}^{(t)}_k]^3 [\bar{v}^{(t)}_k]^3 \bar{w}^{(t)}_i \bar{v}^{(t)}_i \right\} \\ &= \sum_{i\ne k} \left( \E^{(t)}_{k, w} \left\{ [\bar{w}^{(t)}_k]^3 \bar{w}^{(t)}_i \right\} \right)^2 \ge 0. \end{split} \end{equation} Similarly, $\E^{(t)}_{k, v, w} \left\{ \bar{w}^{(t)}_k \bar{v}^{(t)}_k [x^{(t)}]^3 \right\} \ge 0$ also holds.
Finally, by Jensen's inequality, we have \begin{align*} \E^{(t)}_{k, v, w} \left\{ [z^{(t)}]^4 \right\} &\ge \E^{(t)}_{k, v, w} \left\{ [\bar{w}^{(t)}_k]^4 [\bar{v}^{(t)}_k]^4 \right\} \\ &= \left( \E^{(t)}_{k, w} \left\{ [\bar{w}^{(t)}_k]^4 \right\} \right)^2 \ge \left( \E^{(t)}_{k, w} \left\{ [\bar{w}^{(t)}_k]^2 \right\} \right)^4 \ge \left( 1 - \alpha^2 \right)^4 = 1 - O(\alpha^2). \end{align*} \end{proof} \section{Introduction} Recently, over-parametrization has been recognized as a key feature of neural network optimization. A line of work known as the Neural Tangent Kernel (NTK) showed that it is possible to achieve zero training loss when the network is sufficiently over-parametrized \citep{jacot2018neural,du2018gradient,allen2018convergence}. However, the theory of NTK implies a particular dynamics called lazy training, where the neurons do not move much \citep{chizat2019lazy}, which is not natural in many settings and can lead to worse generalization performance~\citep{arora2019exact}. Many works explored other regimes of over-parametrization \citep{chizat2018global,mei2018mean} and analyzed dynamics beyond lazy training \citep{allen2018learning,li2020learning,wang2020beyond}. Over-parametrization does not only help neural network models. In this work, we focus on the closely related problem of tensor (CP) decomposition. In this problem, we are given a tensor of the form \[ T^* = \sum_{i=1}^r a_i (U[:,i])^{\otimes 4}, \] where $a_i\geq 0$ and $U[:,i]$ is the $i$-th column of $U\in {\mathbb R}^{d\times r}$. The goal is to fit $T^*$ using a tensor $T$ of a similar form: \begin{equation*} T = \sum_{i=1}^m \frac{(W[:,i])^{\otimes 4}}{\|W[:,i]\|^2}. \end{equation*} Here $W$ is a $d\times m$ matrix whose columns are the components of tensor $T$. The model is over-parametrized when the number of components $m$ is larger than $r$.
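To make the setup concrete, the following is a small illustrative sketch (ours, not the paper's code; the helper names and the toy sizes $d=10$, $r=5$, $m=50$ are our own choices) that builds an orthogonal $T^*$ and the over-parametrized model $T$ with NumPy and evaluates the fitting loss $\frac{1}{2}\|T-T^*\|_F^2$.

```python
import numpy as np

def fourth_power(u):
    # Rank-one fourth-order tensor u ⊗ u ⊗ u ⊗ u.
    return np.einsum('i,j,k,l->ijkl', u, u, u, u)

def ground_truth(U, a):
    # T* = sum_i a_i (U[:, i])^{⊗4}.
    return sum(a[i] * fourth_power(U[:, i]) for i in range(U.shape[1]))

def model(W):
    # T = sum_i (W[:, i])^{⊗4} / ||W[:, i]||^2.
    return sum(fourth_power(W[:, i]) / np.dot(W[:, i], W[:, i])
               for i in range(W.shape[1]))

rng = np.random.default_rng(0)
d, r, m = 10, 5, 50                        # dimension, rank, over-parametrization
a = 1.2 ** -np.arange(r)                   # decaying coefficients
T_star = ground_truth(np.eye(d)[:, :r], a)  # orthogonal ground truth
W = 1e-2 * rng.standard_normal((d, m))      # small random initialization
loss = 0.5 * np.sum((model(W) - T_star) ** 2)
```

Note that each summand $(W[:,i])^{\otimes 4}/\|W[:,i]\|^2$ is $2$-homogeneous in the column, so scaling a column by $c$ scales its contribution by $c^2$.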
The choice of the normalization factor $1/\|W[:,i]\|^2$ is made to accelerate gradient flow (similar to \citet{li2020learning,wang2020beyond}). Suppose we run gradient flow on the standard objective $\frac{1}{2}\|T-T^*\|_F^2$, that is, we evolve $W$ according to the differential equation \[ \frac{\mathrm{d} W}{\mathrm{d} t} = - \nabla\pr{ \frac{1}{2}\|T-T^*\|_F^2}. \] Can we expect $T$ to fit $T^*$ with good accuracy? Empirical results (see Figure~\ref{fig:ortho}) show that this is true for an orthogonal tensor $T^*$\footnote{We say $T^*$ is an orthogonal tensor if the ground truth components $U[:,i]$'s are orthonormal.} as long as $m$ is large enough. Further, the training dynamics exhibits a behavior that is similar to a {\em tensor deflation process}: it finds the ground truth components one-by-one, from larger components to smaller ones (if multiple ground truth components have similar norms they might be found simultaneously). \begin{figure}[t] \subfigure{ \includegraphics[width=2.65in]{figures/ortho_loss.eps} } \subfigure{ \includegraphics[width=2.65in]{figures/ortho_corr.eps} } \vspace{-0.2cm} \caption{The training trajectory of gradient flow on orthogonal tensor decompositions. We chose $T^* = \sum_{i\in [5]}a_i e_i^{\otimes 4}$ with $e_i \in {\mathbb R}^{10}$ and $a_i/a_{i+1}=1.2.$ Our model $T$ has $50$ components and each component is randomly initialized with small norm $10^{-15}$. We ran the experiments from $5$ different initializations and plotted the results separately.
The left figure shows the loss $\frac{1}{2}\fns{T-T^*}$ and the right figure shows the residual on each $e_i$ direction, defined as $(T^*-T)(e_i^{\otimes 4}).$ }\label{fig:ortho} \vspace{-0.5cm} \end{figure} In this paper we show that with a slight modification, gradient flow on over-parametrized tensor decomposition is guaranteed to follow this tensor deflation process, and can fit any orthogonal tensor to the desired accuracy (see Section~\ref{sec:algorithm} for the algorithm and Theorem~\ref{thm:main} for the main theorem). This shows that for orthogonal tensors, the trajectory of the modified gradient flow is similar to the greedy low-rank process that was used to analyze the implicit bias of low-rank matrix factorization~\citep{li2020towards}. Our results can serve as a first step towards understanding the implicit bias of low-rank tensor problems. \subsection{Our approach and technique} To understand the tensor deflation process shown in Figure~\ref{fig:ortho}, intuitively we can think about the discovery and fitting of a ground truth component in two phases. Consider the beginning of the gradient flow as an example. Initially all the components in $T$ are small, which makes $T$ negligible compared to $T^*$. In this case each component $w$ in $W$ evolves according to a simpler dynamics that is similar to the tensor power method, where one updates $w$ to $T^*(w^{\otimes 3},I)/\n{T^*(w^{\otimes 3},I)}$ (see Section~\ref{sec:deflation} for details). For orthogonal tensors, it is known that the tensor power method with random initializations is able to discover the largest ground truth components (see \cite{anandkumar2014tensor}). Once the largest ground truth component has been discovered, the corresponding component (or multiple components) $w$ will quickly grow in norm and eventually fit the ground truth component.
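The power-method dynamics described above can be simulated directly; below is a hedged sketch (our own toy instance and parameter choices, not the paper's experiment) that iterates $w \mapsto T^*(w^{\otimes 3},I)/\n{T^*(w^{\otimes 3},I)}$ on an orthogonal tensor and converges to a coordinate basis vector.

```python
import numpy as np

def power_map(T, w):
    # One tensor power step: contract T with w in three slots, then normalize.
    v = np.einsum('ijkl,i,j,k->l', T, w, w, w)
    return v / np.linalg.norm(v)

d, r = 10, 5
a = 1.2 ** -np.arange(r)                       # a_i / a_{i+1} = 1.2
T_star = np.zeros((d, d, d, d))
for i in range(r):
    e = np.zeros(d); e[i] = 1.0
    T_star += a[i] * np.einsum('i,j,k,l->ijkl', e, e, e, e)

rng = np.random.default_rng(1)
w = rng.standard_normal(d)
w /= np.linalg.norm(w)
for _ in range(50):                            # tensor power iterations
    w = power_map(T_star, w)
# For orthogonal T*, the iterate converges (doubly exponentially fast) to the
# basis vector e_i maximizing sqrt(a_i) * |w_i| at initialization
# (cf. Anandkumar et al.).
```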
The flat regions of the trajectory in Figure~\ref{fig:ortho} correspond to the periods of time where the components $w$ are small and $T-T^*$ remains stable, while the decreasing regions correspond to the periods of time where a ground truth component is being fitted. However, there are many challenges in analyzing this process. The main problem is that the gradient flow introduces a lot of dependencies throughout the trajectory, making it harder to analyze the fitting of later ground truth components, especially ones that are much smaller. We modify the algorithm to include a reinitialization step per epoch, which alleviates the dependency issue. Even after the modification we still need a few more techniques: \paragraph{Local stability} One major problem in analyzing the dynamics in a later stage is that the components used to fit the previous ground truth components are still moving according to their gradients; therefore it might be possible for these components to move away. To address this problem, we add a small regularizer to the objective, and give a new local stability analysis that bounds the distance to the fitted ground truth component both individually and on average. The idea of bounding the distance on average is important, as just assuming each component $w$ is close enough to the fitted ground truth component is not sufficient to prove that $w$ cannot move far. While similar ideas were considered in \cite{chizat2021sparse}, the setting of tensor decomposition is different. \paragraph{Norm/Correlation relation} A key step in our analysis establishes a relationship between norm and correlation: we show that if a component $w$ crosses a certain norm threshold, then it must have a very large correlation with one of the ground truth components. This offers an initial condition for local stability and makes sure the residual $T^*-T$ remains close to an orthogonal tensor.
Establishing this relation is difficult since, unlike the high-level intuition suggests, we cannot guarantee that $T^*-T$ remains unchanged even within a single epoch: it is possible that one ground truth component is already fitted while no large component is near another ground truth component of the same size. In previous work, \cite{li2020learning} dealt with a similar problem for neural networks using gradient truncation, which prevents components from growing in the first phase (and as a result has a super-exponential dependency on the ratio between the largest and smallest $a_i$). We give a new technique to control the influence of ground truth components that are fitted within this epoch, so we do not need gradient truncation and can characterize the deflation process. \subsection{Related works} \paragraph{Neural Tangent Kernel} There is a recent line of work showing the connection between the Neural Tangent Kernel (NTK) and sufficiently wide neural networks trained by gradient descent \citep{jacot2018neural,allen2018convergence,du2018gradient,du2019gradient,li2018learning,arora2019exact,arora2019fine,zou2020gradient,oymak2020towards,ghorbani2021linearized}. These papers show that when the width of a neural network is large enough, it will stay around the initialization and its training dynamics are close to those of kernel regression with the NTK. In this paper we go beyond the NTK setting and analyze the trajectory from a very small initialization. \paragraph{Mean-field analysis} There is another line of work that uses the mean-field approach to study the optimization of infinitely wide neural networks \citep{mei2018mean,chizat2018global,nguyen2020rigorous,nitanda2017stochastic,wei2019regularization,rotskoff2018trainability,sirignano2020mean}. \cite{chizat2019lazy} showed that, unlike in the NTK regime, the parameters can move away from their initialization in the mean-field regime.
However, most of the existing works need the width to be exponential in the dimension and do not provide a polynomial convergence rate. \paragraph{Beyond NTK} There are many works showing the gap between neural networks and NTK \citep{allen2019can,allen2018learning,yehudai2019power,ghorbani2019limitations,ghorbani2020neural,dyer2019asymptotics,woodworth2020kernel,bai2019beyond,bai2020taylorized,huang2020dynamics,chen2020towards}. In particular, \cite{li2020learning} and \cite{wang2020beyond} are closely related to our setting. While \cite{li2020learning} focused on learning two-layer ReLU neural networks with orthogonal weights, they relied on the connection between tensor decomposition and neural networks \citep{ge2017learning} and essentially worked with tensor decomposition problems. In their result, all the $a_i$'s are within a constant factor and all components are learned simultaneously. We allow ground truth components with very different scales and show a deflation phenomenon. \cite{wang2020beyond} studied learning a low-rank non-orthogonal tensor but did not characterize the training trajectory. \paragraph{Implicit regularization} Many works recently showed that different optimization methods tend to converge to different optima in several settings \citep{soudry2018implicit,nacson2019convergence,ji2018gradient,ji2018risk,ji2019refined,ji2020directional,gunasekar2018characterizing,gunasekar2018implicit,moroshko2020implicit,arora2019implicit,lyu2019gradient,chizat2020implicit}. In particular, \cite{li2020towards} studied the matrix factorization problem and showed that gradient descent with infinitesimal initialization is similar to greedy low-rank learning, a multi-epoch algorithm that finds the best approximation within the rank constraint and relaxes the constraint after every epoch. \cite{razin2021implicit} studied the tensor factorization problem and showed that it biases towards low-rank tensors.
Both of these works considered partially observed matrices or tensors and were only able to fully analyze the first epoch (i.e., recover the largest direction). We focus on a simpler setting with a fully observed ground truth tensor and give a complete analysis of learning all the ground truth components. \subsection{Outline} In Section~\ref{sec:prelim} we introduce the basic notations and problem setup. In Section~\ref{sec:deflation} we review the tensor deflation process and the tensor power method. We then give our algorithm in Section~\ref{sec:algorithm}. Section~\ref{sec:sketch} gives the formal main theorem and discusses the high-level proof ideas. We conclude in Section~\ref{sec:conclude} and discuss some limitations of the work. The detailed proofs and additional experiments are deferred to the appendix. \section{Conclusion}\label{sec:conclude} In this paper we analyzed the dynamics of gradient flow for over-parametrized orthogonal tensor decomposition. With very mild modifications to the algorithm (a small regularizer and some re-initializations), we showed that the trajectory is similar to a tensor deflation process and the greedy low-rank procedure in~\citet{li2020towards}. These modifications allowed us to prove strong guarantees for orthogonal tensors of any rank, while not changing the empirical behavior of the algorithm. We believe such techniques would be useful in later analyses of the implicit bias of tensor problems. A major limitation of our work is that it only applies to orthogonal tensors. Going beyond this would require significant new ideas: we observed that for general tensors, over-parametrized gradient flow may have a very different behavior compared to the greedy low-rank procedure, as it is possible for one large component to split into two correlated components (see more details in Appendix~\ref{sec:experiment}). We leave that as an interesting open problem.
\section*{Acknowledgements} Rong Ge, Xiang Wang and Mo Zhou are supported in part by NSF Awards CCF-1704656, CCF-1845171 (CAREER), CCF-1934964 (Tripods), a Sloan Research Fellowship, and a Google Faculty Research Award. \section{Dynamics for Non-Orthogonal Tensors} The proof in the orthogonal setting gives intuition for the dynamics in the non-orthogonal setting. As before, we expect the directions of the randomly initialized columns to evolve according to gradient flow while the residual does not change by much, and the column with the largest correlation with the residual will become much larger and eventually reduce the residual. However, unlike the orthogonal setting, in this case the columns that were already large will not stay where they are. This suggests a greedy-low-rank learning process as in Algorithm... \rong{write the greedy-low rank algorithm} The greedy-low-rank tensor decomposition algorithm is similar to the greedy-low-rank algorithm in \cite{li2020towards} for analyzing implicit regularization for matrix problems. However, in the matrix case the column with the largest correlation is easy to compute, while it is in general NP-hard to find the best rank-1 approximation of a tensor, so the algorithm needs to rely on random initializations and hope to find a component with good correlation. In the worst case, the greedy-low-rank process may need a large number of columns to fit the original tensor. \rong{Maybe we can have a theorem for the worst case, saying that greedy-low-rank will reduce the residual by at least something in each iteration --- this should follow from a similar technique as our previous paper but much easier} \begin{theorem}\label{thm:nonortho} Suppose $\fn{T^*} =1$ and $\delta_0 = {\text{poly}}(\epsilon/d).$ After $O(\frac{d^4 \log(d/\epsilon)}{\epsilon^6})$ epochs, we have $$\fn{T-T^*}\leq \epsilon.$$ \end{theorem} The proof of Theorem~\ref{thm:nonortho} is in Appendix~\ref{sec:non_ortho_proof}.
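As an illustration of the process discussed above, here is a hedged sketch of one natural instantiation of greedy-low-rank tensor decomposition (our own code; the restart heuristic and all function names and parameters are assumptions, not the paper's algorithm): random restarts of power iteration on the negated residual stand in for the NP-hard best rank-1 approximation, and each epoch adds one rank-one term with the loss-minimizing weight.

```python
import numpy as np

def contract3(T, u):
    # T(u, u, u, I).
    return np.einsum('ijkl,i,j,k->l', T, u, u, u)

def find_direction(R, tries=20, iters=100, rng=None):
    # Heuristic for min_u R(u^{⊗4}): random restarts of power iteration on -R.
    rng = rng or np.random.default_rng()
    best_u, best_val = None, np.inf
    for _ in range(tries):
        u = rng.standard_normal(R.shape[0]); u /= np.linalg.norm(u)
        for _ in range(iters):
            v = contract3(-R, u)
            n = np.linalg.norm(v)
            if n < 1e-12:
                break
            u = v / n
        val = float(np.einsum('ijkl,i,j,k,l->', R, u, u, u, u))
        if val < best_val:
            best_u, best_val = u, val
    return best_u, best_val

def greedy_low_rank(T_star, epochs=10, rng=None):
    # Greedy low-rank fitting: each epoch adds one rank-one term along the
    # direction most negatively correlated with the residual T - T*.
    rng = rng or np.random.default_rng()
    T = np.zeros_like(T_star)
    for _ in range(epochs):
        R = T - T_star
        u, val = find_direction(R, rng=rng)
        if val >= 0:
            break                     # no descent direction found
        # For unit u, the weight minimizing ||R + c u^{⊗4}||_F^2 is c = -R(u^{⊗4}).
        T = T + (-val) * np.einsum('i,j,k,l->ijkl', u, u, u, u)
    return T
```

On an orthogonal ground truth this sketch reproduces the deflation behavior: each epoch removes (approximately) one remaining $a_i e_i^{\otimes 4}$.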
However, in practice we observe that the greedy-low-rank algorithm is highly effective, and the dynamics of gradient flow with small initialization is very similar to the greedy-low-rank algorithm. \rong{Describe experiment result} \section{Proof for Non-orthogonal setting}\label{sec:non_ortho_proof} \begin{lemma}\label{lem:norm_bound} Suppose $T = \sum_{w \in S} w^{\otimes 4}/\ns{w}$. Then $$\sum_{w\in S} \ns{w}\leq d\fn{T}.$$ \end{lemma} \begin{proofof}{Lemma~\ref{lem:norm_bound}} We can lower bound $\fns{T}$ as follows: \begin{align*} \fns{T} = \fns{\sum_{w \in S} \frac{w^{\otimes 4}}{\ns{w}}} =& \sum_{i,j,k,l \in [d]} \pr{\sum_{w\in S}\frac{w_i w_j w_k w_l}{\ns{w}} }^2\\ \geq& \sum_{i,j \in [d]} \pr{\sum_{w\in S}\frac{w_i^2 w_j^2}{\ns{w}} }^2\\ =& \sum_{i,j \in [d]} \pr{\sum_{w\in S}w_i^2 \bar{w}_j^2 }^2\\ \geq& \frac{\pr{\sum_{w\in S}\sum_{i,j \in [d]} w_i^2 \bar{w}_j^2 }^2}{d^2} = \frac{\pr{\sum_{w\in S}\sum_{i\in [d]} w_i^2 }^2}{d^2} = \frac{\pr{\sum_{w\in S}\ns{w}}^2}{d^2}, \end{align*} where the last inequality follows from the Cauchy-Schwarz inequality and the second-to-last equality holds because $\sum_{j\in [d]} \bar{w}_j^2 = 1.$ \end{proofof} \begin{lemma}\label{lem:good_correlation} Assume $\fn{T^*}=1.$ Suppose the residual is $T-T^*$ and the gradient is zero. Then $$\min_{u} (T-T^*)(\bar{u}^{\otimes 4}) \leq -\frac{\fns{T-T^*}}{d}.$$ \end{lemma} \begin{proofof}{Lemma~\ref{lem:good_correlation}} Denote $T^* = \sum_{u \in S^*} u^{\otimes 4}/\ns{u}.$ According to Lemma~\ref{lem:norm_bound}, we have $\sum_{u\in S^*}\ns{u}\leq d.$ Suppose $T= \sum_{w\in S}w^{\otimes 4}/\ns{w}.$ We have \begin{align*} 0 =& \frac{1}{2}\fns{ T-T^* - (\sum_{w\in S}w^{\otimes 4}/\ns{w} - \sum_{u \in S^*} u^{\otimes 4}/\ns{u})}\\ =& \fns{T-T^*} -\sum_{w\in S}\ns{w}(T-T^*)(\bar{w}^{\otimes 4}) + \sum_{u\in S^*}\ns{u}(T-T^*)(\bar{u}^{\otimes 4}).
\end{align*} Since the gradient equals zero, we know $\sum_{w\in S}\ns{w}(T-T^*)(\bar{w}^{\otimes 4})=0.$ Therefore, we have $$\sum_{u\in S^*}\ns{u}(T-T^*)(\bar{u}^{\otimes 4}) = -\fns{T-T^*},$$ which together with $\sum_{u\in S^*}\ns{u}\leq d$ implies $\min_{u} (T-T^*)(\bar{u}^{\otimes 4}) \leq -\frac{\fns{T-T^*}}{d}.$ \end{proofof} \begin{proofof}{Theorem~\ref{thm:nonortho}} Since the function value only decreases, we have $\fn{T}=O(1),$ which then implies $\fns{W} = O(d).$ Suppose at the beginning of one epoch we have $\fns{T-T^*}\geq \epsilon^2.$ According to Lemma~\ref{lem:good_correlation}, we know $$\min_{u} (T-T^*)(\bar{u}^{\otimes 4}) \leq -\frac{\epsilon^2}{d}.$$ The running time of each epoch is $O(\frac{d}{\epsilon^2}\log(d/\epsilon)).$ Suppose the best correlation at initialization is $\delta,$ $\fns{W}$ is bounded by $M$ and the running time is $t$. According to Lemma 9 in \cite{wang2020beyond}, $$\Delta f\geq \frac{\delta^2 }{M t} \geq \frac{1}{d}\cdot \frac{ \epsilon^2}{ d\log(d/\epsilon)} \cdot \pr{\frac{\epsilon^2}{d}}^2 = \frac{\epsilon^6}{d^4 \log(d/\epsilon)}.$$ Therefore, after $O(\frac{d^4 \log(d/\epsilon)}{\epsilon^6})$ epochs, we have $$\fn{T-T^*}\leq \epsilon. $$ \end{proofof} \subsubsection{Proof overview}\label{sec:phase1} We give the proof overview in this subsection and present the proofs of Lemma~\ref{lem-phase1-summary-trajectory} and Lemma~\ref{lem:phase1} at the end of this subsection. We remark that the proof idea in this phase is inspired by \citep{li2020learning}. We describe the high-level proof plan for phase 1. Recall that at the beginning of this epoch, we know $S_{bad}=\varnothing$, which implies there is at most one large coordinate for every component. Roughly speaking, we will show that the small coordinates remain small in phase 1, and the only way for a component to gain a larger norm is to grow in its large direction. This intuitively suggests that all components that have a relatively large norm in phase 1 are basis-like components.
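This winner-take-all intuition can be illustrated numerically. The sketch below (our own illustration; step size, horizon and seed are arbitrary) integrates the idealized normalized-coordinate dynamics $\frac{\mathrm{d}}{\mathrm{d} t}[\bar v_k]^2 = 8[\bar v_k]^2\big(a_k[\bar v_k]^2 - \sum_i a_i[\bar v_i]^4\big)$ in the orthogonal setting with uniform weights $a_i = 1/d$, ignoring the perturbation term:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 50
a = np.full(d, 1.0 / d)            # uniform ground truth weights (illustrative)
v = rng.standard_normal(d)
b = (v / np.linalg.norm(v)) ** 2   # b[k] = [bar v_k]^2, so b sums to 1
b0 = b.copy()

eta = 0.1                          # Euler step size (illustrative)
for _ in range(20000):
    drift = a * b - np.sum(a * b ** 2)   # a_k [bar v_k]^2 - sum_i a_i [bar v_i]^4
    b = np.clip(b + eta * 8.0 * b * drift, 0.0, None)
    b = b / b.sum()                # re-normalize so the squares sum to one

winner = int(np.argmax(b))
```

The coordinate that starts largest ends up dominating (its entry of `b` approaches 1) while all other coordinates shrink, mirroring the claim that at most one coordinate per component becomes large.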
We first show that within $t_1^\prime = c_t d/(8\beta\log d)$ time, there are components that can improve their correlation with some ground truth component $e_i$ to a non-trivial $\mathrm{polylog}(d)/d$ correlation. This lemma also suggests that there is at most one coordinate that can grow above $O(\log d/d)$. Note that we should view the analysis in this section and the analysis in Appendix~\ref{sec: appendix, induction hypothesis} as a whole induction/continuity argument. It is easy to verify that at any time $0\leq t\leq t_1^{(s)}$, Assumption~\ref{assumption: induction, oracle} holds and Proposition~\ref{prop:main} holds. \begin{lemma}\label{lem-phase1-lottery} In the setting of Lemma~\ref{lem:phase1}, suppose $\ninf{\bar{v}^{(0)}}^2\le \log^4(d)/d$. Then, for every $k\in[d]$ \begin{enumerate} \item for $v\not\in S_{pot}$, $[\bar{v}^{(t)}_i]^2=O(\log(d)/d)$ for all $i\in[d]$ and $t\le t_1^\prime$. \item if $S^{(t)}_k=\varnothing$ for $t\le t_1^\prime$, then for $v\in S_{k,good}$, there exists $t\le t_1^\prime$ such that $[\bar{v}^{(t)}_k]^2 \ge \log^4(d)/d$ and $[\bar{v}^{(t)}_i]^2= O(\log(d)/d)$ for all $i\ne k$. \item for $v\in S_{k,pot}\setminus (S_{good}\cup S_{bad})$, $[\bar{v}^{(t)}_i]^2= O(\log(d)/d)$ for all $i\ne k$ and $t\le t_1^\prime$. \end{enumerate} \end{lemma} The above lemma is in fact a direct corollary of the following lemma, considering the definitions of $S_{good}$ and $S_{pot}$. It says that if a direction is below a certain threshold, it will remain $O(\log d/d)$, while if a direction is above a certain threshold and there are no basis-like components for this direction, it will grow to have a $\mathrm{polylog}(d)$ improvement. \begin{restatable}{lemma}{lemphaseonepolyloggap}\label{lem-phase1-polyloggap} In the setting of Lemma~\ref{lem:phase1}, we have \begin{enumerate} \item if $[\bar{v}^{(0)}_k]^2\le \min\{\Gamma_k-\rho_k,\Gamma_{max}\}$, then $[\bar{v}^{(t)}_k]^2=O( \log(d)/d)$ for $t\le t_1^\prime$.
\item if $S^{(t)}_k=\varnothing$ for $t\le t_1^\prime$, $[\bar{v}^{(0)}_k]^2\ge \Gamma_k+\rho_k$, $[\bar{v}^{(0)}_i]^2\le \Gamma_i-\rho_i$ for all $i\ne k$ and $\ninf{\bar{v}^{(0)}}^2\le \log^4(d)/d$, then there exists $t\le t_1^\prime$ such that $[\bar{v}^{(t)}_k]^2 \ge \log^4(d)/d$. \end{enumerate} \end{restatable} The following lemma shows that if $[\bvx{t_1^\prime}_i]^2=O(\log d/d)$ at $t_1^\prime$, it will remain $O(\log d/d)$ to the end of phase 1. This implies that components not in $S_{pot}$ will not have a large correlation with any ground truth component in phase 1. \begin{restatable}{lemma}{lemphaseoneremainsmall}\label{lem-phase1-remainsmall} In the setting of Lemma~\ref{lem:phase1}, suppose $[\bvx{t_1^\prime}_i]^2= O(\log(d)/d)$. Then we have $[\bar{v}^{(t)}_i]^2= O(\log(d)/d)$ for $t_1^\prime\le t\le t_1$. \end{restatable} The following two lemmas show that good components (those that have $\mathrm{polylog}(d)/d$ correlation before $t_1^\prime$) will quickly grow to have constant correlation and $\delta_1$ norm. Note that the condition $a_k=\Omega(\beta)$ below holds in our setting because when $a_i<\beta c_a$, we have $S_{i,good}=S_{i,pot}=\varnothing$ (this means that for those small directions there are no components that can reach $\mathrm{polylog}(d)/d$ correlation, as shown in Lemma~\ref{lem-phase1-lottery}). \begin{restatable}[Good component, constant correlation]{lemma}{lemphaseoneconstantgap}\label{lem-phase1-constantgap} In the setting of Lemma~\ref{lem:phase1}, suppose $S^{(t)}_k=\varnothing$ for $t\le t_1$, $a_k=\Omega(\beta)$. If there exists $\tau_0\le t_1$ such that $[\bvx{\tau_0}_k]^2> \log^4(d)/d$ and $[\bvx{\tau_0}_i]^2= O(\log(d)/d)$ for all $i\ne k$, then for any constant $c\in (0,1)$ we have $[\bar{v}^{(t)}_k]^2> c$ and $[\bar{v}^{(t)}_i]^2= O(\log(d)/d)$ for all $i\ne k$ when $\tau_0+t_1^{\prime\prime} \le t\le t_1$ with $t_1^{\prime\prime}=\Theta(d/(\beta \log^3 d))$.
\end{restatable} \begin{restatable}[Good component, norm growth]{lemma}{lemphaseonenormgrow}\label{lem-phase1-normgrow} In the setting of Lemma~\ref{lem:phase1}, suppose $S^{(t)}_k=\varnothing$ for $t\le t_1$, $a_k=\Omega(\beta)$. If there exists $\tau_0^\prime\le t_1$ such that $[\bvx{\tau_0^\prime}_k]^2> c$ and $[\bvx{\tau_0^\prime}_i]^2= O(\log(d)/d)$ for all $i\ne k$, then we have $\n{v^{(t)}}_2\ge \delta_1$ for some $\tau_0^\prime \le t\le \tau_0^\prime+t_1^{\prime\prime\prime}$ with $t_1^{\prime\prime\prime}=\Theta(\log(d/\alpha)/\beta)$. \end{restatable} Recall from Lemma~\ref{lem-phase1-polyloggap} that there is at most one coordinate that can be large. Thus, intuitively we can expect that if the norm is above a certain threshold, the component will become basis-like, since this large direction will contribute most of the norm and the other directions will remain small. In fact, we can show (1) the norm of ``small and dense'' components (e.g., those that are not in $S_{pot}$) is smaller than $\delta_1$; (2) once a component reaches norm $\delta_1$, it is a basis-like component. \begin{restatable}{lemma}{lemphaseonenormthreshold}\label{lem-phase1-normthreshold} In the setting of Lemma~\ref{lem:phase1}, we have \begin{enumerate} \item if $\ninf{\bar{v}^{(t)}}^2\le\log^4 (d)/d$ for all $t\le t_1$, then $\n{v^{(t)}}_2=O(\delta_0)$ for all $t\le t_1$. \item Let $\tau_0=\inf\{t\in[0,t_1]|\ninf{\bar{v}^{(t)}}^2\ge \log^4 d/d\}$. Suppose $[\bvx{\tau_0}_k]^2\ge \log^4 d/d$ and $[\bvx{\tau_0}_i]^2=O( \log d/d)$ for $i\ne k$. If there exists $\tau_1$ such that $\tau_0 < \tau_1 \le t_1$ and $\n{\vx{\tau_1}}_2\ge \delta_1$ for the first time, then there exists $k\in[d]$ such that $[\bvx{\tau_1}_k]^2\ge 1-\alpha^2$ if $\hat{a}^{(t)}_k\le \alpha$ for $t\le \tau_1$ and $[\bvx{\tau_1}_k]^2\ge 1-\alpha$ otherwise. \end{enumerate} \end{restatable} One might worry that a component can first exceed the $\delta_1$ threshold, then drop below it, and eventually get re-initialized.
Next, we show that re-initialization at the end of Phase 1 cannot remove all the components in $S^{(t_1)}_k.$ \begin{restatable}{lemma}{lemabovedelta}\label{lem:above_delta1} If $S^{(0)}_k = \varnothing$ and $S^{(t')}_k \neq \varnothing$ for some $t'\in (0, t_1]$, we have $S^{(t_1)}_k\neq \varnothing$ and $\hat{a}_k^{(t_1)}\geq \delta_1^2.$ \end{restatable} Given the above lemma, we are now ready to prove Lemma~\ref{lem-phase1-summary-trajectory} and the main lemma for Phase 1. \lemphaseonesummarytrajectory* \begin{proof} We show the statements one by one. \paragraph{Part 1.} The statement follows from Lemma~\ref{lem-phase1-lottery}, Lemma~\ref{lem-phase1-remainsmall} and Lemma~\ref{lem-phase1-normthreshold}. \paragraph{Part 2.} Suppose $S^{(t)}_k=\varnothing$ for all $t\le t_1$. By Lemma~\ref{assumption-phase1-init} we know $S_{k,good}\neq \varnothing$. Then by Lemma~\ref{lem-phase1-lottery}, Lemma~\ref{lem-phase1-constantgap} and Lemma~\ref{lem-phase1-normgrow}, we know there exists $v$ such that $\n{v^{(t)}}_2\ge \delta_1$ within time $t_1=t_1^\prime + t_1^{\prime\prime} + t_1^{\prime\prime\prime}$. Then by Lemma~\ref{lem-phase1-normthreshold} we know $[\bar{v}^{(t)}_k]^2\ge 1-\alpha$. Therefore, we know there exists $t\le t_1$ such that $S^{(t)}_k\neq\varnothing$. Finally, by Lemma~\ref{lem:above_delta1}, $S^{(t)}_k$ remains nonempty until $t_1$. \paragraph{Part 3.} The statement directly follows from Lemma~\ref{lem-phase1-normthreshold} and Lemma~\ref{lem:above_delta1}. \end{proof} \lemphaseone* \begin{proof} By Lemma~\ref{assumption-phase1-init} we know the number of reinitialized components is always $\Theta(m)$, so Lemma~\ref{assumption-phase1-init} holds with probability $1-1/{\text{poly}}(d)$ for every epoch. In the following, assume Lemma~\ref{assumption-phase1-init} holds. The second and third statements directly follow from Lemma~\ref{assumption-phase1-init} and Lemma~\ref{lem-phase1-summary-trajectory} as $S_{k,pot}=\varnothing$ when $a_k\le \beta c_a$.
For the first statement, combining the proof in Appendix~\ref{sec: appendix, induction hypothesis} and Lemma~\ref{lem-phase1-normthreshold}, we know the statement holds (see also the remark at the beginning of Appendix~\ref{sec: appendix, induction hypothesis}). \end{proof} \section{Preliminaries}\label{sec:prelim} \paragraph{Notations} We use upper-case letters to denote matrices and tensors, and lower-case letters to denote vectors. For any positive integer $n,$ we use $[n]$ to denote the set $\{1,2,\cdots, n\}.$ We use $I_d$ to denote the $d\times d$ identity matrix, and omit the subscript $d$ when the dimension is clear. We use $\delta_0\,$Unif$(\mathbb{S}^{d-1})$ to denote the uniform distribution over the $(d-1)$-dimensional sphere of radius $\delta_0.$ For a vector $v$, we use $\|v\|$ to denote its $\ell_2$ norm. We use $v_k$ to denote the $k$-th entry of vector $v$, and use $v_{-k}$ to denote vector $v$ with its $k$-th entry removed. We use $\bar{v}$ to denote the normalized vector $\bar{v}=v/\n{v}$, and use $\bar{v}_k$ to denote the $k$-th entry of $\bar{v}.$ For a matrix $A$, we use $A[:,i]$ to denote its $i$-th column and $\mathrm{col}(A)$ to denote the set of all column vectors of $A$. For a matrix $M$ or tensor $T$, we use $\|M\|_F$ and $\|T\|_F$ to denote their Frobenius norm, which is equal to the $\ell_2$ norm of their vectorization. For simplicity we restrict our attention to symmetric 4-th order tensors. For a vector $v\in {\mathbb R}^d$, we use $v^{\otimes 4}$ to denote a $d\times d\times d\times d$ tensor whose $(i,j,k,l)$-th entry is equal to $v_iv_jv_kv_l$. For $T=\sum_{w} w^{\otimes 4},$ we define $T(v^{\otimes 4})$ as $\sum_{w} \inner{w}{v}^4$ and $T(v^{\otimes 3}, I)$ as $\sum_{w} \inner{w}{v}^3 w.$ For clarity, we always call a component of $T^*$ a ground truth component and call a component of our model $T$ simply a component. \paragraph{Problem setup} We consider the problem of fitting a 4-th order tensor.
The components of the ground truth tensor are arranged as columns of a matrix $U \in {\mathbb R}^{d\times r}$, and the tensor $T^*$ is defined as \[T^* = \sum_{i=1}^r a_i (U[:,i]^{\otimes 4}),\] where $a_1\geq a_2\geq \cdots \geq a_r\geq 0$ and $\sum_{i=1}^r a_i = 1$. For convenience in the analysis, we assume $a_i\geq \epsilon/\sqrt{d}$ for all $i\in [r].$ This is without loss of generality because the target accuracy is $\epsilon$ and we can safely ignore very small ground truth components with $a_i<\epsilon/\sqrt{d}$. In this paper, we focus on the case where the components are orthogonal---that is, the columns $U[:,i]$'s are orthonormal. For simplicity we assume without loss of generality that $U[:,i] = e_i$ where $e_i$ is the $i$-th standard basis vector\footnote{This is without loss of generality because gradient flow (and our modifications) is invariant under rotation of the ground truth parameters.}. To reduce the number of parameters we also assume $r = d$; again, this is without loss of generality because we can simply set $a_i = 0$ for $i > r$. There can be many different ways to parametrize the tensor that we use to fit $T^*$. Following previous works~\citep{wang2020beyond,li2020learning}, we use an over-parameterized and two-homogeneous tensor \[T=\sum_{i=1}^m \frac{W[:,i]^{\otimes 4}}{\ns{W[:,i]}} .\] Here $W\in {\mathbb R}^{d\times m}$ is a matrix with $m$ columns that correspond to the components in $T$. It is over-parameterized when $m > r$. Since the tensor $T$ only depends on the set of columns $W[:,i]$ rather than their ordering, for most of the paper we will instead write the tensor $T$ as \begin{equation*} T=\sum_{w\in \mathrm{col}(W)} \frac{w^{\otimes 4}}{\ns{w}}, \end{equation*} where $\text{col}(W)$ is the set of all the column vectors in $W$. This allows us to discuss the dynamics of coordinates for a component $w$ without using the index for the component.
In particular, $w_i$ always represents the $i$-th coordinate of the vector $w$. This representation is similar to the mean-field setup \citep{chizat2018global,mei2018mean} where one considers a distribution on $w$; however, since we do not rely on analysis related to the infinite-width limit, we use the sum formulation instead. \section{Proofs for (Re)-initialization and Phase 1}\label{sec:proof_init_phase1} We specify the constants that will be used in the proof of initialization (Section~\ref{sec:proof_init}) and Phase 1 (Section~\ref{sec:proof_phase1}). We will assume they always hold in the proofs of Section~\ref{sec:proof_init} and Section~\ref{sec:proof_phase1}. We omit the superscript $s$ for simplicity. \begin{prop}[Choice of parameters]\label{assumption-phase1-param} The following hold with proper choices of constants $\gamma,c_e,c_\rho, c_{max}, c_t$ \begin{enumerate} \item $t_1^\prime := \frac{c_t d}{8\beta \log d} \le t_1\le \frac{(1-\gamma)}{8\beta c_e}\cdot \frac{d}{\log d}$ , \item $\Gamma_i=\frac{1}{8a_i t_1^\prime}$ if $S_i^{(s,0)}=\varnothing$, and $\Gamma_i = \frac{1}{8\lambda t_1^\prime}$ otherwise. $\rho_i=c_\rho \Gamma_i$. $\Gamma_{max}=c_{max}\log d/d$. \item $c_e < \frac{c_\rho c_{max}}{2(1-c_\rho)}$, $c_\rho/c_t > 4c_e$, $c_tc_{max}\ge 4$. \item $c_a =(1-c_\rho)/(c_t c_{max})$ \end{enumerate} \end{prop} \begin{proof} The results hold if we let $\gamma,c_e,c_\rho,c_t$ be small enough constants and $c_{max}$ be a large enough constant. For example, we can choose $c_e < c_\rho/4 < 0.01$, $c_t,\gamma<0.01$ and $c_{max}>10/c_t$. \end{proof} \subsection{Initialization}\label{sec:proof_init} We give a more detailed version of initialization with specified constants to fit the definitions of $S_{good}$, $S_{pot}$ and $S_{bad}$. We show that at the beginning of any epoch $s$, the following conditions hold with high probability. Intuitively, they suggest that every direction we will discover satisfies $a_i=\Omega(\beta)$, as $S_{i,pot}\neq \varnothing$.
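The constraints in Proposition~\ref{assumption-phase1-param} can be checked mechanically for a concrete assignment. Below is a minimal sketch in the spirit of the example values given in its proof (the specific numbers are our own illustration, not canonical choices):

```python
# One concrete assignment following the example in the proof of the
# "Choice of parameters" proposition: c_e < c_rho/4 < 0.01,
# c_t, gamma < 0.01, and c_max > 10/c_t.
c_rho, c_e, c_t, gamma = 0.039, 0.009, 0.009, 0.009
c_max = 10.0 / c_t + 1.0

ok_item3a = c_e < c_rho * c_max / (2 * (1 - c_rho))  # c_e < c_rho c_max / (2(1-c_rho))
ok_item3b = c_rho / c_t > 4 * c_e                    # c_rho/c_t > 4 c_e
ok_item3c = c_t * c_max >= 4                         # c_t c_max >= 4
c_a = (1 - c_rho) / (c_t * c_max)                    # item 4 is a definition
```

All three inequality checks pass for this assignment, and $c_a$ is a small positive constant, as the later lemmas require.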
\begin{lemma}[(Re-)Initialization space]\label{assumption-phase1-init} In the setting of Theorem~\ref{thm:main}, the following hold at the beginning of the current epoch with probability $1-1/{\text{poly}}(d)$. \begin{enumerate} \item For all $a_i-\hat{a}^{(0)}_i\ge \beta$, we have $S_{i,good}\ne\varnothing$. \item For all $a_i-\hat{a}^{(0)}_i<\beta c_a$, we have $S_{i,pot}=\varnothing$. \item $S_{bad}=\varnothing$ \item $\n{v^{(0)}}_2=\Theta(\delta_0)$, $[\bar{v}^{(0)}_i]^2\le \Gamma_{max}=c_{max}\log d/d$ \item For every $v$, there are at most $O(\log d)$ many $i\in[d]$ such that $[\bar{v}^{(0)}_i]^2\ge c_e\log(d)/(10d)$. \item $|\{v|v \text{ was reinitialized in epoch $s$}\}|=(1-O(1/\log^2 d))m$. \end{enumerate} \end{lemma} \begin{proof} Let the constants in Lemma~\ref{lem-init-calculation} be $\eta=1/c_t$, $c_i = \Gamma_i d/\log d$ and satisfy Proposition~\ref{assumption-phase1-param}; then we know that at the time of (re-)initialization, all statements hold. Since we further know from Lemma~\ref{lem:phase2} that $\n{v}=\Theta(\delta_0)$ and $\bar{v}_i^2$ will only change by $o(\log d/d)$, all statements hold at the beginning of every epoch. \end{proof} \begin{lemma}\label{lem-init-calculation} There exist $m_0={\text{poly}}(d)$ and $m_1={\text{poly}}(d)$ such that if $m\in[m_0,m_1]$ and we randomly sample $m$ vectors $v$ from Unif$(\mathbb{S}^{d-1})$, with probability $1-1/{\text{poly}}(d)$ the following hold with proper absolute constants $\eta$, $\gamma$, $c_\rho$, $c_i$, $c_e$, $c_{max}$ satisfying $\eta(1-\gamma)\le c_i$, $c_{max}\ge 4 \eta$, $\gamma,c_\rho$ are small enough and $c_{max}, \eta$ are large enough \begin{enumerate} \item For every $i\in[d]$ such that $c_i \le \eta$, there exists $v$ such that $[\bar{v}^{(0)}_i]^2\ge c_i(1+2c_\rho)\log d /d$ and $[\bar{v}^{(0)}_j]^2\le c_j(1-2c_\rho)\log d/d$ for $j\neq i$.
\item For every $v$, there does not exist $i\neq j$ such that $[\bar{v}^{(0)}_i]^2\ge c_i(1-2c_\rho)\log d /d$ and $[\bar{v}^{(0)}_j]^2\ge c_j(1-2c_\rho)\log d /d$. \item For every $v$ and $i\in[d]$, $[\bar{v}^{(0)}_i]^2\le c_{max}\log d /2d$. \item For every $v$, there are at most $O(\log d)$ many $i\in[d]$ such that $[\bar{v}^{(0)}_i]^2\ge c_e\log(d)/11d$. \item $|\{v|\text{there exists } i\in[d] \text{ such that } [\bar{v}^{(0)}_i]^2\ge c_i(1-2c_\rho)\log d/d\}| \le m/\log^2 (d).$ \end{enumerate} \end{lemma} \begin{proof} It is equivalent to consider sampling $v$ from $\mathcal{N}(0,I)$. Let $x\in {\mathbb R}$ be a standard Gaussian variable. According to Proposition 2.1.2 in \cite{vershynin2018high}, for any $t>0$ we have \[\pr{\frac{2}{t}-\frac{2}{t^3}}\cdot \frac{1}{\sqrt{2\pi}}e^{-t^2/2}\leq \Pr\br{x^2 \geq t^2}\leq \frac{2}{t}\cdot \frac{1}{\sqrt{2\pi}}e^{-t^2/2}.\] Therefore, for any $i\in[d]$ and any constant $c>0$, we have \[\Pr\br{v_i^2 \geq c\log(d)}=\Theta(d^{-c/2}\log^{-1/2} d).\] According to Theorem 3.1.1 in \cite{vershynin2018high}, we know with probability at least $1-2\exp(-\Omega(d)),$ $(1-r) d \leq \ns{v}\leq (1+r)d$ for any constant $0<r<1$. Hence, we have \[\Pr\br{\bar{v}_i^2 \geq \frac{c\log(d)}{d}}\ge\Theta(d^{-c(1+r)/2}\log^{-1/2} d),\] \[\Pr\br{\bar{v}_i^2 \geq \frac{c\log(d)}{d}}\le\Theta(d^{-c(1-r)/2}\log^{-1/2} d).\] \paragraph{Part 1.} For fixed $i\in[d]$ such that $\eta(1-\gamma)\le c_i \le \eta$, we have \[\Pr\br{\bar{v}_i^2 \geq c_i(1+2c_\rho)\log(d)/d}\ge\Theta(d^{-c_i(1+2c_\rho)(1+r)/2}\log^{-1/2} d).\] For a given $j\neq i$, we have \begin{align*} &\Pr\br{\bar{v}_i^2 \geq c_i(1+2c_\rho)\log(d)/d,\ \bar{v}_j^2 \geq c_j(1-2c_\rho)\log(d)/d}\\ &\le\Theta(d^{-c_i(1+2c_\rho)(1-r)/2 -c_j(1-2c_\rho)(1-r)/2 }) =O(d^{-\eta(1-\gamma)(1-r)}). \end{align*} Since $c_i\le\eta$, we know the desired event happens with probability $\Theta(d^{-\eta(1+2c_\rho)(1+r)/2}-d^{-\eta(1-\gamma)(1-r)+1})$.
Since $\gamma,c_\rho$ are small enough constants, when $m_0\ge \Omega(d^{\eta(1+2c_\rho)(1+r)/2+1})$, with probability $1-O(e^{-d})$ there exists at least one $v$ such that $\bar{v}_i^2 \geq c_i(1+2c_\rho)\log(d)/d$ and $[\bar{v}^{(0)}_j]^2\le c_j(1-2c_\rho)\log d/d$ for $j\neq i$. Taking a union bound over all $i\in[d]$, we know that when $m_0\ge \Omega(d^{\eta(1+2c_\rho)(1+r)/2+2})$, the desired statement holds with probability $1-O(de^{-d})$. \paragraph{Part 2.} For any given $i\neq j$, we have \[\Pr\br{[\bar{v}^{(0)}_i]^2\ge c_i(1-2c_\rho)\log d /d,\ [\bar{v}^{(0)}_j]^2\ge c_j(1-2c_\rho)\log d /d} \le O(d^{-(c_i+c_j)(1-2c_\rho)(1-r)/2}).\] Since $\eta(1-\gamma)\le c_i$, the probability that there exist $i\neq j$ such that the above happens is at most $O(d^{-\eta(1-\gamma)(1-2c_\rho)(1-r)+2})$. Thus, with $m_1\le O(d^{\eta(1-\gamma)(1-2c_\rho)(1-r)-2}/{\text{poly}}(d))$, the desired statement holds with probability $1-1/{\text{poly}}(d)$. \paragraph{Part 3.} We know \[\Pr\br{\text{for all } i\in[d],\ \bar{v}_i^2\le c_{max}\log d/2d }\ge 1-O(d^{-c_{max}(1-r)/4+1}).\] With $m_1\le O(d^{c_{max}(1-r)/4-1}/{\text{poly}}(d))$ the desired statement holds with probability $1-1/{\text{poly}}(d)$. \paragraph{Part 4.} Since $m\le m_1={\text{poly}}(d)$, we know for any constant $c_e$, this statement holds with probability $1-O(e^{-\log^2 d})$. \paragraph{Part 5.} We have \[\Pr\br{\text{there exists } i\in[d] \text{ such that } [\bar{v}^{(0)}_i]^2\ge c_i(1-2c_\rho)\log d/d} \le O(d^{-c_i(1-2c_\rho)/2+1}).\] Let $p$ be the above probability and let $A$ be the set of $v$ satisfying the above condition. By the Chernoff bound, we have \[\Pr\br{|A|\ge m/\log^2 d}\le e^{-pm}\left(\frac{epm}{m/\log^2 d}\right)^{m/\log^2 d}=O(e^{-d}).\] Combining all parts above, we know that as long as $r,\gamma,c_\rho$ are small enough, $c_{max}\ge 4\eta$ and $\eta$ is large enough, the results hold when $m_0\ge\Omega(d^{0.6\eta })$ and $m_1\le O(d^{0.9\eta})$.
\end{proof} \subsection{Proof of Phase 1}\label{sec:proof_phase1} In this section, we first give a proof overview of Phase 1 and then give the detailed proof of each lemma in later subsections. \input{phase1} \subsubsection{Preliminary} To simplify the proof in this section, we introduce more notation and give the following lemma. \begin{lemma} In the setting of Lemma~\ref{lem:phase1}, we have $T^* - T^{(t)} =\sum_{i\in [d]} \tilde{a}^{(t)}_i e_i^{\otimes 4} + \Delta^{(t)}$, where $\tilde{a}^{(t)}_i=a_i-\hat{a}^{(t)}_i$ and $\n{\Delta}_F =O(\alpha+m\delta_1^2)$. We know $\tilde{a}^{(0)}_i=a_i$ if $S_i^{(s,0)}= \varnothing$ and $\tilde{a}^{(t)}_i=\Theta(\lambda)$ if $S_i^{(s,0)}\neq \varnothing$. That is, the residual tensor is roughly the ground truth tensor $T^*$ restricted to the directions that were unfitted at the beginning of this epoch, plus a small perturbation $\Delta$. \end{lemma} \begin{proof} We can decompose $T^{(t)}$ as \[T^{(t)} = \sum_{i\in[d]}T^{(t)}_i + T^{(t)}_\varnothing= \sum_{i\in[d]}\left(\hat{a}^{(t)}_ie_i^{\otimes 4} + (T^{(t)}_i-\hat{a}^{(t)}_ie_i^{\otimes 4})\right)+T^{(t)}_\varnothing,\] where $T^{(t)}_i=\sum_{w\in S_i^{(t)}} \n{w}^2\bar{w}^{\otimes 4}$ and $T^{(t)}_\varnothing=\sum_{w\in S_\varnothing^{(t)}} \n{w}^2\bar{w}^{\otimes 4}$. Note that when $S_i^{(t)}=\varnothing$, $\hat{a}^{(t)}_i=0$, and when $S_i^{(t)}\neq\varnothing$ we have $\fn{(T^{(t)}_i-\hat{a}^{(t)}_ie_i^{\otimes 4})}=O(\hat{a}^{(t)}_i\alpha)$ and $\fn{T^{(t)}_\varnothing}\leq m\delta_1^2.$ This gives the desired form of $T^*-T^{(t)}$. \end{proof} We give the dynamics of $[\bar{v}^{(t)}_k]^2$ and $[v^{(t)}_k]^2$ here, which will be used frequently in our analysis.
\begin{equation}\label{eq-dynamic-bvt} \begin{aligned} \frac{\mathrm{d} [\bar{v}^{(t)}_k]^2}{\mathrm{d} t} &= 2\bar{v}^{(t)}_k\cdot \frac{\mathrm{d}}{\mathrm{d} t}\frac{v^{(t)}_k}{\n{v^{(t)}}}\\ &= 2\bar{v}^{(t)}_k\cdot \frac{1}{\n{v^{(t)}}}\frac{\mathrm{d}}{\mathrm{d} t}v^{(t)}_k + 2[\bar{v}^{(t)}_k]^2\n{v^{(t)}}\cdot \frac{\mathrm{d}}{\mathrm{d} t}\frac{1}{\n{v^{(t)}}}\\ &= 2\bar{v}^{(t)}_k\cdot \frac{1}{\n{v^{(t)}}} [-\nabla L(v^{(t)})]_k - 2[\bar{v}^{(t)}_k]^2\cdot \frac{\inner{\bar{v}^{(t)}}{-\nabla L(v^{(t)})}}{\n{v^{(t)}}}\\ &= 2\bar{v}^{(t)}_k\cdot \frac{1}{\n{v^{(t)}}} [-(I-\bar{v}^{(t)}[\bar{v}^{(t)}]^\top)\nabla L(v^{(t)})]_k\\ &= 8\bar{v}^{(t)}_k\br{(T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 3},I) - (T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 4})\bar{v}^{(t)} }_k\\ &= 8[\bar{v}^{(t)}_k]^2\left(\tilde{a}^{(t)}_k[\bar{v}^{(t)}_k]^2 - \sum_{i\in [d]} \tilde{a}^{(t)}_i[\bar{v}^{(t)}_i]^4 \pm \frac{\n{\Delta^{(t)}}_F}{|\bar{v}^{(t)}_k|}\right). \end{aligned} \end{equation} \begin{equation}\label{eq-dynamic-vt} \begin{aligned} \frac{\mathrm{d} [v^{(t)}_k]^2}{\mathrm{d} t} &= 2v^{(t)}_k\cdot \frac{\mathrm{d} v^{(t)}_k}{\mathrm{d} t}\\ &= 2v^{(t)}_k\cdot [-\nabla L(v^{(t)})]_k \\ &= 4v^{(t)}_k\br{2(T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 3},I)\n{v^{(t)}}_2 - (T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 4})v^{(t)} }_k\\ &= 4[v^{(t)}_k]^2 \left(2\tilde{a}^{(t)}_k [\bar{v}^{(t)}_k]^2 - \sum_{i\in[d]}\tilde{a}^{(t)}_i [\bar{v}^{(t)}_i]^4 \pm\frac{\n{\Delta^{(t)}}_F\n{v^{(t)}}_2}{|v^{(t)}_k|}\right). \end{aligned} \end{equation} The following lemma allows us to ignore the already fitted directions, as they will remain as small as at their (re-)initialization in phase 1.
\begin{lemma}\label{lem-phase1-fitteddir} In the setting of Lemma~\ref{lem:phase1}, if direction $e_k$ has been fitted before the current epoch (i.e., $S_k^{(s,0)}\ne \varnothing$), then for $v$ that was reinitialized in the previous epoch, we have $[\bar{v}^{(t)}_k]^2=O( \log(d)/d)$ for all $t\le t_1.$ \end{lemma} \begin{proof} Since direction $e_k$ has been fitted before the current epoch, we know $\tilde{a}^{(t)}_k = \Theta(\lambda)$. We only need to consider the time when $[\bar{v}^{(t)}_k]^2\ge \log d/d$. By \eqref{eq-dynamic-bvt} we have \begin{align*} \frac{\mathrm{d} [\bar{v}^{(t)}_k]^2}{\mathrm{d} t} &= 8[\bar{v}^{(t)}_k]^2\left(\tilde{a}^{(t)}_k[\bar{v}^{(t)}_k]^2 - \sum_{i\in [d]} \tilde{a}^{(t)}_i[\bar{v}^{(t)}_i]^4 \pm \frac{\n{\Delta^{(t)}}_F}{|\bar{v}^{(t)}_k|}\right) \le [\bar{v}^{(t)}_k]^2 O\left(\lambda+ d\n{\Delta^{(t)}}_F\right). \end{align*} Since $\lambda$ and $\n{\Delta^{(t)}}_F=O(\alpha+m\delta_1^2)$ are small enough and $[\bar{v}^{(0)}_k]^2=O(\log d/d)$, we know $[\bar{v}^{(t)}_k]^2=O(\log d/d)$ for $t\le t_1$. \end{proof} \subsubsection{Proof of Lemma~\ref{lem-phase1-lottery} and Lemma~\ref{lem-phase1-polyloggap}} Lemma~\ref{lem-phase1-lottery} directly follows from Lemma~\ref{lem-phase1-polyloggap} and the definitions of $S_{good}$, $S_{pot}$ and $S_{bad}$ as in Definition~\ref{def-phase1-partition}. We focus on Lemma~\ref{lem-phase1-polyloggap} in the rest of this section. We need the following lemma for the proof of Lemma~\ref{lem-phase1-polyloggap}. \begin{lemma}\label{lem-phase1-bv4} In the setting of Lemma~\ref{lem:phase1}, if $\ninf{\bar{v}^{(t)}}^2\le \log^4(d)/d$, we have $\sum_i [\bar{v}^{(t)}_i]^4\le c_e\log d/d$ for all $t\le t_1$. \end{lemma} \begin{proof} We claim that for all $t\le t_1$, there are at most $O(\log d)$ many $i\in[d]$ such that $[\bar{v}^{(t)}_i]^2\ge c_e\log(d)/2d$.
Based on this claim, we know \begin{align*} \sum_{i\in[d]} [\bar{v}^{(t)}_i]^4 \le O(\log d) \frac{\log^8d}{d^2} + \sum_{i:[\bar{v}^{(t)}_i]^2<c_e\log(d)/2d } [\bar{v}^{(t)}_i]^4 \le O\left(\frac{\log^9d}{d^2}\right) + \frac{c_e\log(d)}{2d} \le \frac{c_e\log(d)}{d}, \end{align*} which gives the desired result. In the following, we prove the above claim. From Lemma~\ref{assumption-phase1-init}, we know when $t=0$, the claim is true. For any $[\bar{v}^{(0)}_k]^2\le c_e\log(d)/10d$, we will show $[\bar{v}^{(t)}_k]^2\le c_e\log(d)/2d$ for all $t\le t_1$. By \eqref{eq-dynamic-bvt} we have \begin{align*} \frac{\mathrm{d} [\bar{v}^{(t)}_k]^2}{\mathrm{d} t} &= 8[\bar{v}^{(t)}_k]^2\left(\tilde{a}^{(t)}_k[\bar{v}^{(t)}_k]^2 - \sum_{i\in [d]} \tilde{a}^{(t)}_i[\bar{v}^{(t)}_i]^4 \pm \frac{\n{\Delta^{(t)}}_F}{|\bar{v}^{(t)}_k|}\right). \end{align*} In fact, we only need to show that for any $\tau_0$ such that $[\bar{v}^{(\tau_0)}_k]^2= c_e\log(d)/10d$ and $[\bar{v}^{(t)}_k]^2\ge c_e\log(d)/10d$ when $\tau_0\le t \le \tau_0+t_1$, we have $[\bar{v}^{(t)}_k]^2\le c_e\log(d)/2d$. To show this, we have \begin{align*} \frac{\mathrm{d} [\bar{v}^{(t)}_k]^2}{\mathrm{d} t} \le 8[\bar{v}^{(t)}_k]^2\left(\tilde{a}^{(t)}_k[\bar{v}^{(t)}_k]^2 + \frac{\n{\Delta^{(t)}}_F}{|\bar{v}^{(t)}_k|}\right) \le [\bar{v}^{(t)}_k]^2 \cdot 16\tilde{a}^{(t)}_k[\bar{v}^{(t)}_k]^2 \le [\bar{v}^{(t)}_k]^2 \cdot \frac{\beta}{1-\gamma}\cdot \frac{8c_e\log(d)}{d}, \end{align*} where we use $\n{\Delta^{(t)}}_F=O(\alpha+m\delta_1^2)$ and $\tilde{a}^{(t)}_k\le \beta/(1-\gamma)$. Therefore, with our choice of $t_1$, we know $[\bar{v}^{(t)}_k]^2\le c_e\log(d)/2d$. This finishes the proof. \end{proof} We are now ready to give the proof of Lemma \ref{lem-phase1-polyloggap}. \lemphaseonepolyloggap* \begin{proof} We focus on the dynamics of $[\bar{v}^{(t)}_k]^2$. For the already fitted directions $e_k$, we have $\Gamma_k=1/(8\lambda t_1^\prime)$, which means $\Gamma_{max}\le \Gamma_k-\rho_k$.
From Lemma~\ref{lem-phase1-fitteddir} we know $[\bar{v}^{(t)}_k]^2=O(\log d/d)$ for $t\le t_1^\prime$. In the rest of the proof, we focus on the unfitted directions $e_k$. By \eqref{eq-dynamic-bvt} we have \begin{align*} \frac{\mathrm{d} [\bar{v}^{(t)}_k]^2}{\mathrm{d} t} &= 8[\bar{v}^{(t)}_k]^2\left(\tilde{a}^{(t)}_k[\bar{v}^{(t)}_k]^2 - \sum_{i\in [d]} \tilde{a}^{(t)}_i[\bar{v}^{(t)}_i]^4 \pm \frac{\n{\Delta^{(t)}}_F}{|\bar{v}^{(t)}_k|}\right). \end{align*} \paragraph{Part 1.} Define the following dynamics $p^{(t)}$, \begin{align*} \frac{\mathrm{d} p^{(t)}}{\mathrm{d} t} &= 8p^{(t)} \left(a_kp^{(t)} + \frac{a_k c_e\log d}{d}\right),\quad p^{(0)}=[\bar{v}^{(0)}_k]^2. \end{align*} Given that $\tilde{a}^{(t)}_i\le a_i$ and $\n{\Delta^{(t)}}_F=O(\alpha+m\delta_1^2)$ is small enough, it is easy to see $[\bar{v}^{(t)}_k]^2\le\max\{\log(d)/d,p^{(t)}\}$. Then it suffices to bound $p^{(t)}$ to have a bound for $[\bar{v}^{(t)}_k]^2$. Consider the following dynamics $x^{(t)}$ \begin{equation}\label{eq-xdynamic} \begin{aligned} \frac{\mathrm{d} x^{(t)}}{\mathrm{d} t} = \tau_1 [x^{(t)}]^2,\quad x^{(0)} = \tau_2. \end{aligned} \end{equation} We know $x^{(t)} = 1/(1/\tau_2 - \tau_1 t)$. Set $\tau_1 = 8a_k$ and $\tau_2=1/(\tau_1 t_1^\prime)=\Gamma_k$. Then, with our choice of $\rho_k=c_\rho\Gamma_k$, we know \begin{enumerate} \item $p^{(0)}=[\bar{v}^{(0)}_k]^2\le \Gamma_k-\rho_k\le \Gamma_{max}$. As long as $\rho_k\ge \frac{2c_e\log d}{d}$ and $x^{(0)}=p^{(0)}+\rho_k/2$, we have $p^{(t)}\le x^{(t)}-\rho_k/2$ for $t\le t_1^\prime$. Therefore, $p^{(t_1^\prime)}\le x^{(t_1^\prime)}\le 2\Gamma_k^2/\rho_k=O(\log d/d)$. \item $p^{(0)}=[\bar{v}^{(0)}_k]^2\le \Gamma_{max} < \Gamma_k-\rho_k$. As long as $x^{(0)}=p^{(0)}+\frac{c_e\log d}{d}$, we have $p^{(t)}\le x^{(t)}-\frac{c_e\log d}{d}$ for $t\le t_1^\prime$. Therefore, $p^{(t_1^\prime)}\le x^{(t_1^\prime)}=O(\log d/d)$. \end{enumerate} Together, we know $[\bar{v}^{(t)}_k]^2=O(\log d/d)$ for $t\le t_1^\prime$.
\paragraph{Part 2.} Define the following dynamics $q^{(t)}$, \begin{align*} \frac{\mathrm{d} q^{(t)}}{\mathrm{d} t} &= 8q^{(t)} \left(a_kq^{(t)} - \frac{2\beta c_e\log d}{d}\right),\quad q^{(0)}=[\bar{v}^{(0)}_k]^2. \end{align*} Since $S^{(t)}_k=\varnothing$, we know $\tilde{a}^{(t)}_k=a_k$. Given that $\n{\Delta^{(t)}}_F=O(\alpha+m\delta_1^2)$ and Lemma \ref{lem-phase1-bv4}, it is easy to see that as long as $\ninf{\bar{v}^{(t)}}^2\le \log^4 d/d$, if $q^{(0)}\ge[\bar{v}^{(0)}_k]^2\ge\Theta(\log d/d)$ and $a_kq^{(0)} -\frac{2\beta c_e\log d}{d}>0$, we have $[\bar{v}^{(t)}_k]^2\geq q^{(t)}$. Then it suffices to bound $q^{(t)}$ to get a bound on $[\bar{v}^{(t)}_k]^2$. Consider the same dynamics \eqref{eq-xdynamic} with the same $\tau_1$ and $\tau_2$; as long as $q^{(0)}=[\bar{v}^{(0)}_k]^2\ge \Gamma_k+\rho_k$, $\rho_k\ge \frac{4\beta c_e\log d}{a_k d}$ and $x^{(0)}=q^{(0)}-\rho_k/2$, we have $q^{(t)}\ge x^{(t)}+\rho_k/2$ if $\ninf{\bar{v}^{(t)}}^2\le \log^4 d/d$ holds. We can verify that $x^{(t_1^\prime)} = +\infty$, which implies there exists $t\le t_1^\prime$ such that $\ninf{\bar{v}^{(t)}}^2 > \log^4 d/d$. \end{proof} \subsubsection{Proof of Lemma \ref{lem-phase1-remainsmall}} \lemphaseoneremainsmall* \begin{proof} Recall that $t_1-t_1^\prime=t_1^{\prime\prime}+t_1^{\prime\prime\prime}=o(d/(\beta\log d))$. It suffices to show that if $[\bvx{t_1^\prime}_i]^2= c_1\log(d)/d$, then $[\bar{v}^{(t)}_i]^2$ will be at most $2c_1\log(d)/d$ in $t_{max}^\prime=o(d/(\beta \log d))$ time. Suppose there exists time $\tau_1\le t_{max}^\prime$ such that $[\bvx{\tau_1}_i]^2\ge 2c_1\log(d)/d$ for the first time. We only need to show that if $[\bar{v}^{(t)}_i]^2\ge c_1\log(d)/d$ for $t\le \tau_1$, we have $[\bar{v}^{(t)}_i]^2< 2c_1\log(d)/d$.
We know the dynamics of $[\bar{v}^{(t)}_i]^2$: \begin{align*} \frac{\mathrm{d} [\bar{v}^{(t)}_i]^2}{\mathrm{d} t} &= 8[\bar{v}^{(t)}_i]^2\left(\tilde{a}^{(t)}_i[\bar{v}^{(t)}_i]^2 - \sum_{j\in [d]} \tilde{a}^{(t)}_j[\bar{v}^{(t)}_j]^4 \pm \frac{\n{\Delta^{(t)}}_F}{|\bar{v}^{(t)}_i|}\right) \le [\bar{v}^{(t)}_i]^2O\left(\frac{\beta\log d}{d}\right), \end{align*} where we use that $\n{\Delta^{(t)}}_F=O(\alpha+m\delta_1^2)$ is small enough and $\tilde{a}^{(t)}_i\le 1$. This implies $[\bar{v}^{(t)}_i]^2\le 2c_1\log d/d$ as $t_{max}^\prime=o(d/(\beta \log d))$. \end{proof} \subsubsection{Proof of Lemma \ref{lem-phase1-constantgap}} \lemphaseoneconstantgap* \begin{proof} By Lemma \ref{lem-phase1-remainsmall} we know $[\bar{v}^{(t)}_i]^2$ will remain $O(\log d /d)$ for those $i$ with $[\bvx{\tau_0}_i]^2=O(\log d/d)$. We now show $[\bar{v}^{(t)}_k]^2$ will grow to a constant within $t_1^{\prime\prime}$ time. We know $\sum_{i\ne k} \tilde{a}^{(t)}_i[\bar{v}^{(t)}_i]^4\le \beta c_1\log d /d$ for some constant $c_1$. Hence, with the facts $S^{(t)}_k=\varnothing$, $a_k=\Omega(\beta)$, $[\bvx{\tau_0}_k]^2> \log^4(d)/d$ and $\n{\Delta^{(t)}}_F=O(\alpha+m\delta_1^2)$, \begin{align*} \frac{\mathrm{d} [\bar{v}^{(t)}_k]^2}{\mathrm{d} t} &= 8[\bar{v}^{(t)}_k]^2\left(\tilde{a}^{(t)}_k[\bar{v}^{(t)}_k]^2(1-[\bar{v}^{(t)}_k]^2) - \sum_{i\ne k} \tilde{a}^{(t)}_i[\bar{v}^{(t)}_i]^4 \pm \frac{\n{\Delta^{(t)}}_F}{|\bar{v}^{(t)}_k|}\right)\\ &\ge 8(1-2c)[\bar{v}^{(t)}_k]^2 a_k[\bar{v}^{(t)}_k]^2 = [\bar{v}^{(t)}_k]^2 \Omega\left(\frac{\beta \log^4 d}{d}\right). \end{align*} This implies that within $t_1^{\prime\prime}$ time, we have $[\bar{v}^{(t)}_k]^2\ge c$. Since $[\bar{v}^{(t)}_i]^2$ will remain $O(\log d /d)$ for $i\neq k$ and $t\le t_1$, following the same argument as above, it is easy to see $\frac{\mathrm{d} [\bar{v}^{(t)}_k]^2}{\mathrm{d} t} \ge 0$ after $[\bar{v}^{(t)}_k]^2$ reaches $c$. Therefore, $[\bar{v}^{(t)}_k]^2\ge c$ for $t\le t_1$. 
\end{proof} \subsubsection{Proof of Lemma~\ref{lem-phase1-normgrow}} \lemphaseonenormgrow* \begin{proof} For $\n{v^{(t)}}_2^2$, we have \begin{align*} \frac{\mathrm{d} \n{v^{(t)}}_2^2}{\mathrm{d} t} = \n{v^{(t)}}^2\left(4\sum_{i\in[d]}\tilde{a}^{(t)}_i[\bar{v}^{(t)}_i]^4\pm \n{\Delta^{(t)}}_F - 2\lambda\right). \end{align*} Given that $\n{\Delta^{(t)}}_F=O(\alpha+m\delta_1^2)$ and $\lambda$ are small enough, it is easy to see $\n{\vx{\tau_0^\prime}}_2\ge \delta_0/2$ as $\tau_0^\prime\le t_1$. We now show that there exists a time $\tau_1\le t_1^\prime+t_1^{\prime\prime}+t_1^{\prime\prime\prime}= t_1$ such that $\n{\vx{\tau_1}}_2\ge\delta_1$. By Lemma \ref{lem-phase1-constantgap} we know $[\bar{v}^{(t)}_k]^2\ge c$ after time $\tau_0+t_1^\prime\le t_1^\prime+t_1^{\prime\prime}$. Since $S^{(t)}_k= \varnothing$, we know $\tilde{a}^{(t)}_k=a_k=\Omega(\beta)$. Then with the fact that $\n{\Delta^{(t)}}_F=O(\alpha+m\delta_1^2)$ and $\lambda$ are small enough, we have \begin{align*} \frac{\mathrm{d} \n{v^{(t)}}^2}{\mathrm{d} t} \ge \n{v^{(t)}}^2 \Omega(\beta). \end{align*} This implies that $\n{\vx{\tau_1}}_2^2\ge \delta_1^2$ as $t_1^{\prime\prime\prime}=\Theta( \log (d/\alpha)/\beta)$. \end{proof} \subsubsection{Proof of Lemma \ref{lem-phase1-normthreshold}} \lemphaseonenormthreshold* \begin{proof} For $\n{v^{(t)}}_2^2$, we have \begin{align*} \frac{\mathrm{d} \n{v^{(t)}}_2^2}{\mathrm{d} t} = \n{v^{(t)}}^2\left(4\sum_{i\in[d]}\tilde{a}^{(t)}_i[\bar{v}^{(t)}_i]^4\pm \n{\Delta^{(t)}}_F - 2\lambda\right). \end{align*} \paragraph{Part 1.} By Lemma \ref{lem-phase1-bv4} and $\n{\Delta^{(t)}}_F=O(\alpha+m\delta_1^2)$, we know \begin{align*} \frac{\mathrm{d} \n{v^{(t)}}^2}{\mathrm{d} t} \le \n{v^{(t)}}^2 \frac{5\beta c_e \log d}{d}. \end{align*} This implies $\n{v^{(t)}}_2^2=O(\delta_0^2)$ as $t_1= O(\frac{d}{\beta\log d })$. \paragraph{Part 2.} By Part 1, we know $\n{\vx{\tau_0}}_2=O( \delta_0)$ and $[\vx{\tau_0}_i]^2=O(\delta_0^2\log d /d)$ for $i\ne k$. 
For $[\bvx{\tau_0}_i]^2=O(\log d/d)$, we know $[\bar{v}^{(t)}_i]^2=O(\log d/d)$ for $\tau_0\le t\le \tau_1$ by Lemma \ref{lem-phase1-remainsmall}. We consider the following cases separately. \begin{enumerate} \item Case 1: Suppose $\hat{a}^{(t)}_k\le \alpha$ for $t\le \tau_1$. In the following we show there exists some constant $C$ such that for all $i\ne k$, $[v^{(t)}_i]^2\le C\delta_0^2 \log d/d$ for $\tau_0\le t \le \tau_1$. Let $\tau_2$ be the first time that the above claim is false, which means for all $i\ne k$, $[v^{(t)}_i]^2\le C\delta_0^2 \log d/d$ when $t\le \tau_2$. For any $i\ne k$, we only need to consider the time period $t\le \tau_2$ whenever $[v^{(t)}_i]^2\ge \delta_0^2 \log d/d$. By Lemma~\ref{lem-phase1-calculation}, we have \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}[v^{(t)}_i]^2 =&4[v^{(t)}_i]^2 \left(2\tilde{a}^{(t)}_i [\bar{v}^{(t)}_i]^2 - \sum_{i\in[d]}\tilde{a}^{(t)}_i [\bar{v}^{(t)}_i]^4 \pm O(\alpha+m\delta_1^2)\right.\\ &\pm \left.O\left(\frac{(\alpha^2+ \alpha (1-[\bar{v}^{(t)}_k]^2)^{1.5}+m\delta_1^2)\n{v^{(t)}}}{|v^{(t)}_i|}\right)\right)\\ \le& [v^{(t)}_i]^2 \left(O\left(\frac{\beta\log d}{d}\right) + O\left(\frac{(\alpha^2+ \alpha (1-[\bar{v}^{(t)}_k]^2)^{1.5}+m\delta_1^2)\n{v^{(t)}}}{|v^{(t)}_i|}\right)\right). \end{align*} Since for all $i\ne k$, $[v^{(t)}_i]^2\le C\delta_0^2 \log d/d$, we know $\sum_{i\ne k}[v^{(t)}_i]^2=\n{v^{(t)}}^2(1-[\bar{v}^{(t)}_k]^2)=O(\delta_0^2\log d)$. Together with the fact $[v^{(t)}_i]^2\ge \delta_0^2 \log d/d$, we have \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}[v^{(t)}_i]^2 \le [v^{(t)}_i]^2 O\left(\frac{\beta\log d}{d}\right). \end{align*} Since $t_1 = O(d/(\beta\log d))$, we know that if we choose a large enough $C$, we must have $\tau_2\ge \tau_1$. Therefore, for all $i\ne k$, $[v^{(t)}_i]^2\le C\delta_0^2 \log d/d$ for $\tau_0\le t \le \tau_1$. 
Then at time $\tau_1$ when $\n{\vx{\tau_1}}_2\ge\delta_1$, it must be that $[\bvx{\tau_1}_k]^2\ge 1-\alpha^2$ since $\delta_1=\Theta(\delta_0\log^{1/2} (d)/\alpha)$. \item Case 2: We do not make any assumption on $\hat{a}^{(t)}_k$. In the following we show that for all $i\ne k$, $[v^{(t)}_i]^2\le \delta_1^2\alpha/d$ for $\tau_0\le t \le \tau_1$. Let $\tau_2$ be the first time that the above claim is false, which means for all $i\ne k$, $[v^{(t)}_i]^2\le \delta_1^2\alpha/d$ when $t\le \tau_2$. For any $i\ne k$, we only need to consider the time period $t\le \tau_2$ whenever $[v^{(t)}_i]^2\ge \delta_1^2 \alpha/2d$. We have \begin{align*} \frac{\mathrm{d} [v^{(t)}_i]^2}{\mathrm{d} t} &= 4[v^{(t)}_i]^2 \left(2\tilde{a}^{(t)}_i [\bar{v}^{(t)}_i]^2 - \sum_{i\in[d]}\tilde{a}^{(t)}_i [\bar{v}^{(t)}_i]^4 \pm\frac{\n{\Delta^{(t)}}_F\n{v^{(t)}}_2}{|v^{(t)}_i|}\right)\\ &\le [v^{(t)}_i]^2 \left(O\left(\frac{\beta\log d}{d}\right) + O\left(\frac{\alpha+m\delta_1^2}{\alpha^{1/2}d^{-1/2}}\right)\right). \end{align*} Since $m\delta_1^2=O(\alpha)$ and $t_1 = O(d/(\beta\log d))$, we must have $\tau_2\ge \tau_1$. Therefore, for all $i\ne k$, $[v^{(t)}_i]^2\le \delta_1^2 \alpha/d$ for $\tau_0\le t \le \tau_1$. Then at time $\tau_1$ when $\n{\vx{\tau_1}}_2\ge\delta_1$, it must be that $[\bvx{\tau_1}_k]^2\ge 1-\alpha$. 
\end{enumerate} \end{proof} \subsubsection{Proof of Lemma~\ref{lem:above_delta1}} To prove Lemma~\ref{lem:above_delta1}, we need the following calculation on $\frac{d}{dt} \ns{v^{(t)}}.$ \begin{lemma}\label{lem:norm_individual} Suppose $v^{(t)} \in S^{(t)}_k$. Then we have \[\frac{d}{dt} \ns{v^{(t)}} = \pr{4\tilde{a}^{(t)}_k -2\lambda \pm O(\alpha+m\delta_1^2) }\ns{v^{(t)}}.\] \end{lemma} \begin{proof} We can write down $\frac{d}{dt}\ns{v^{(t)}}$ as follows: \begin{align*} \frac{d}{dt} \ns{v^{(t)}} =& \pr{ 4(T^*-T^{(t)})([\bar{v}^{(t)}]^{\otimes 4}) -2\lambda} \ns{v^{(t)}}\\ =& \left(4\sum_{i\in[d]}\tilde{a}^{(t)}_i[\bar{v}^{(t)}_i]^4\pm \n{\Delta^{(t)}}_F - 2\lambda\right)\n{v^{(t)}}^2. \end{align*} Since $[\bar{v}^{(t)}_k]^2\ge 1-\alpha$, $[\bar{v}^{(t)}_i]^{2}\leq \alpha$ for any $i\neq k$ and $\n{\Delta^{(t)}}_F=O(\alpha+m\delta_1^2)$, we have \begin{align*} \frac{d}{dt} \ns{v^{(t)}} =& \left(4\tilde{a}^{(t)}_k - 2\lambda \pm O(\alpha+m\delta_1^2)\right)\n{v^{(t)}}^2. \end{align*} \end{proof} Now we are ready to prove Lemma~\ref{lem:above_delta1}. \lemabovedelta* \begin{proof} If $\tilde{a}^{(t)}_k = \Omega(\lambda)$ throughout Phase 1, according to Lemma~\ref{lem:norm_individual}, we know $\ns{v^{(t)}}$ will never decrease for any $v^{(t)} \in S^{(t)}_k.$ So, we have $S^{(t_1)}_k\neq \varnothing$ and $\hat{a}_k^{(t_1)}\geq \delta_1^2.$ If $\tilde{a}^{(t)}_k = O(\lambda)$ at some time in Phase 1, according to Lemma~\ref{lemma: d hattk, lower bound}, it is not hard to show that at the end of Phase 1 we still have $a_k-\hat{a}_k^{(t_1)} = O(\lambda).$ This then implies $\hat{a}_k^{(t_1)} = \Omega(\frac{\epsilon}{\sqrt{d}}).$ Note that we only re-initialize the components that have norm less than $\delta_1.$ As long as $\delta_1^2 = O(\frac{\epsilon}{m\sqrt{d}}),$ we ensure that after the re-initialization, we still have $\hat{a}_k^{(t_1)} = \Omega(\frac{\epsilon}{\sqrt{d}}),$ which in particular means $S^{(t_1)}_k\neq \varnothing$. 
\end{proof} \subsubsection{Technical Lemma} \begin{lemma}\label{lem-phase1-calculation} In the setting of Lemma~\ref{lem-phase1-normthreshold}, suppose $\hat{a}^{(t)}_k\le \alpha$. We have for $i\neq k$ \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}[v^{(t)}_i]^2 =&4[v^{(t)}_i]^2 \left(2\tilde{a}^{(t)}_i [\bar{v}^{(t)}_i]^2 - \sum_{i\in[d]}\tilde{a}^{(t)}_i [\bar{v}^{(t)}_i]^4 \pm O(\alpha+m\delta_1^2)\right.\\ &\pm \left.O\left(\frac{(\alpha^2 + \alpha (1-[\bar{v}^{(t)}_k]^2)^{1.5}+m\delta_1^2)\n{v^{(t)}}}{|v^{(t)}_i|}\right)\right). \end{align*} \end{lemma} \begin{proof} In order to prove this lemma, we need a more careful analysis of $\frac{d}{dt}[v^{(t)}_i]^2$. Recall that we can decompose $T^{(t)}$ as $\sum_{i\in[d]}T^{(t)}_i + T^{(t)}_\varnothing$ and further write each $T^{(t)}_i$ as $\hat{a}^{(t)}_ie_i^{\otimes 4} + (T^{(t)}_i-\hat{a}^{(t)}_ie_i^{\otimes 4}).$ Note that $\fn{T^{(t)}_i-\hat{a}^{(t)}_ie_i^{\otimes 4}}=O(\hat{a}^{(t)}_i\alpha)$ and $\fn{T^{(t)}_\varnothing}\leq m\delta_1^2.$ We can write down $\frac{\mathrm{d}}{\mathrm{d} t}[v^{(t)}_i]^2$ in the following form: \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}[v^{(t)}_i]^2 =& 4[v^{(t)}_i]^2 \left(2a_i [\bar{v}^{(t)}_i]^2 - \sum_{i\in[d]}a_i [\bar{v}^{(t)}_i]^4 \right)\\ &-8v^{(t)}_i\n{v^{(t)}}\sum_{j\in [d]}\br{T^{(t)}_j([\bar{v}^{(t)}]^{\otimes 3}, I)}_i -8v^{(t)}_i\n{v^{(t)}}\br{T^{(t)}_\varnothing([\bar{v}^{(t)}]^{\otimes 3}, I)}_i\\ &+4v^{(t)}_i\sum_{j\in[d]}\br{T^{(t)}_j([\bar{v}^{(t)}]^{\otimes 4})v^{(t)} }_i +4v^{(t)}_i\br{T^{(t)}_\varnothing([\bar{v}^{(t)}]^{\otimes 4})v^{(t)} }_i \\ &=4[v^{(t)}_i]^2 \left(2a_i [\bar{v}^{(t)}_i]^2 - \sum_{i\in[d]}a_i [\bar{v}^{(t)}_i]^4 \right)\\ &-8v^{(t)}_i\n{v^{(t)}}\sum_{j\in [d]}\br{T^{(t)}_j([\bar{v}^{(t)}]^{\otimes 3}, I)}_i \pm v^{(t)}_i\n{v^{(t)}} O(m\delta_1^2)\\ &+4[v^{(t)}_i]^2\sum_{j\in[d]}T^{(t)}_j([\bar{v}^{(t)}]^{\otimes 4}) \pm [v^{(t)}_i]^2 O(m\delta_1^2)\\ &=4[v^{(t)}_i]^2 \left(2a_i [\bar{v}^{(t)}_i]^2 - \sum_{i\in[d]}(a_i-\hat{a}_i) 
[\bar{v}^{(t)}_i]^4 \pm O(\alpha+m\delta_1^2)\right)\\ &-8v^{(t)}_i\n{v^{(t)}}\sum_{j\in [d]}\br{T^{(t)}_j([\bar{v}^{(t)}]^{\otimes 3}, I)}_i \pm v^{(t)}_i\n{v^{(t)}} O(m\delta_1^2). \end{align*} We now bound the term $\br{T^{(t)}_j([\bar{v}^{(t)}]^{\otimes 3}, I)}_i$. \begin{enumerate} \item Case 1: $j=i$. If $\hat{a}^{(t)}_i= 0$, we know $T^{(t)}_i=0$. Otherwise, denote $x=\inner{\bar{w}_{-i}}{\bar{v}^{(t)}_{-i}}$; then we have \begin{align*} &\br{T^{(t)}_i([\bar{v}^{(t)}]^{\otimes 3}, I)}_i\\ &= \hat{a}^{(t)}_i \E^{(t)}_{i,w} \bar{w}_i\inner{\bar{w}}{\bar{v}^{(t)}}^3\\ &= \hat{a}^{(t)}_i \E^{(t)}_{i,w} \bar{w}_i\left((\bar{w}_i \bar{v}^{(t)}_i)^3 + 3(\bar{w}_i \bar{v}^{(t)}_i)^2x + 3(\bar{w}_i \bar{v}^{(t)}_i)x^2 + x^3\right)\\ &\le \hat{a}^{(t)}_i [\bar{v}^{(t)}_i]^3 + 3\hat{a}^{(t)}_i |\bar{v}^{(t)}_i| \E^{(t)}_{i,w}|x| + 3\hat{a}^{(t)}_i |\bar{v}^{(t)}_i| \E^{(t)}_{i,w}x^2 + \hat{a}^{(t)}_i \E^{(t)}_{i,w} |x|^3. \end{align*} Since $|x|\le\n{\bar{w}_{-i}}$ and $\E^{(t)}_{i,w}\n{\bar{w}_{-i}}\le (\E^{(t)}_{i,w}\n{\bar{w}_{-i}}^2)^{1/2}=O(\alpha)$, we have $\br{T^{(t)}_i([\bar{v}^{(t)}]^{\otimes 3}, I)}_i = \hat{a}^{(t)}_i [\bar{v}^{(t)}_i]^3 + \hat{a}^{(t)}_i |\bar{v}^{(t)}_i| O(\alpha) + \hat{a}^{(t)}_i O(\alpha^{2.5})$. \item Case 2: $j=k$. We have $\br{T^{(t)}_k([\bar{v}^{(t)}]^{\otimes 3}, I)}_i = \hat{a}^{(t)}_k \E^{(t)}_{k,w} \bar{w}_i\inner{\bar{w}}{\bar{v}^{(t)}}^3\le \hat{a}^{(t)}_k \E^{(t)}_{k,w} |\bar{w}_i| = O(\alpha^2)$, since $\hat{a}^{(t)}_k\le \alpha$ and $\E^{(t)}_{k,w} |\bar{w}_i|\le (\E^{(t)}_{k,w} |\bar{w}_i|^2)^{1/2}=O(\alpha)$. \item Case 3: $j\ne i,k$. If $\hat{a}^{(t)}_j= 0$, we know $T^{(t)}_j=0$. 
Otherwise, we can write $T^{(t)}_j$ as $\hat{a}^{(t)}_j\E^{(t)}_{j,w}\bar{w}^{\otimes 4}.$ So we just need to bound $\E^{(t)}_{j,w} \bar{w}_i\inner{\bar{w}}{\bar{v}^{(t)}}^3.$ We know $\absr{\inner{\bar{w}}{\bar{v}^{(t)}}} = \absr{\inner{\bar{w}_{-j}}{\bar{v}^{(t)}_{-j}}+\bar{w}_j\bar{v}^{(t)}_j }\leq \n{\bar{w}_{-j}}+\sqrt{1-[\bar{v}^{(t)}_k]^2}.$ So we have \begin{align*} \E^{(t)}_{j,w} \bar{w}_i\inner{\bar{w}}{\bar{v}^{(t)}}^3 =& \E^{(t)}_{j,w} \bar{w}_i O\pr{\n{\bar{w}_{-j}}^3+(1-[\bar{v}^{(t)}_k]^2)^{1.5}}\\ \leq& O\pr{\alpha^3+\alpha(1-[\bar{v}^{(t)}_k]^2)^{1.5}}, \end{align*} where in the last line we use $\E^{(t)}_{j,w} |\bar{w}_i| \le (\E^{(t)}_{j,w} \bar{w}_i^2)^{1/2}=O(\alpha)$. \end{enumerate} Recall that $\tilde{a}^{(t)}_i=a_i-\hat{a}^{(t)}_i$. We now have \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}[v^{(t)}_i]^2 =& 4[v^{(t)}_i]^2 \left(2\tilde{a}^{(t)}_i [\bar{v}^{(t)}_i]^2 - \sum_{i\in[d]}\tilde{a}^{(t)}_i [\bar{v}^{(t)}_i]^4 \pm O(\alpha+m\delta_1^2) \right.\\ &\pm \left.O\left(\frac{(\alpha^2 + \alpha (1-[\bar{v}^{(t)}_k]^2)^{1.5}+m\delta_1^2)\n{v^{(t)}}}{|v^{(t)}_i|}\right)\right). \end{align*} \end{proof} \section{Proofs for Phase 2}\label{sec:proof_phase2} The goal of this section is to show that all discovered directions can be fitted within time $t_2^{(s)} - t_1^{(s)}$ and that the reinitialized components will not move significantly. Namely, we prove the following lemma. \phasetwomain* Note that since $\delta_1^2 = {\text{poly}}(\varepsilon) / {\text{poly}}(d)$ and $\log(d/\varepsilon) = o(d / \log d)$, we have $t_2^{(s)} - t_1^{(s)} = \frac{o(d/\log d)}{\beta^{(s)}}$. \paragraph{Notations} As in Sec.~\ref{sec: appendix, induction hypothesis}, to simplify the notations, we shall drop the superscript of epoch $s$, and write $z^{(t)} := \inner{\bar{v}^{(t)}}{\bar{w}^{(t)}}$ and $\tilde{a}^{(t)}_k := a_k - \hat{a}^{(t)}_k$. Within this section, we write $T := t_2^{(s)} - t_1^{(s)}$. 
\paragraph{Proof overview} The first part is proved using the analysis in Appendix~\ref{sec: appendix, induction hypothesis}. Note that we should view the analysis in this section and the analysis in Appendix~\ref{sec: appendix, induction hypothesis} as a whole induction/continuity argument. It is easy to verify that at any time $t_1^{(s)}\leq t\leq t_2^{(s)}$, Assumption~\ref{assumption: induction, oracle} holds and Proposition~\ref{prop:main} holds. The second part is a simple corollary of Lemma~\ref{lemma: d hattk, lower bound}, which gives a lower bound on the growth rate of $\hat{a}^{(t)}_k.$ For the third part, we proceed as follows. At the beginning of Phase 2, for any reinitialized component $v^{(t)}$, we know there exists some universal constant $C > 0$ s.t.~$[\bar{v}^{(t)}_k]^2 \le C \log d / d$ for all $k \in [d]$. Let $T'$ be the minimum time needed for some $[\bar{v}^{(t)}_k]^2$ to reach $2C \log d / d$. For any $t \le T' + t_1^{(s)}$, we have $[\bar{v}^{(t)}_k]^2 \le 2C \log d /d$, and then we can derive an upper bound on the movement speed of $v^{(t)}$, with which we show the change of $[\bar{v}^{(t)}_k]^2$ is $o(\log d / d)$ within time $T$. (Also note this automatically implies that $T' > T$.) To bound the change of the norm, we proceed in a similar way but with $T'$ being the minimum time needed for some $\|v^{(t)}\|$ to reach $2\delta_0$. (Strictly speaking, the actual $T'$ is the smaller of the two.) \begin{lemma} If $S^{(s, t_1^{(s)})}_k \ne \varnothing$, then after at most $\frac{4}{a_k} \log \left(\frac{a_k}{2 \delta_1^2}\right)$ time, we have $\tilde{a}^{(t)}_k \le \lambda$. \end{lemma} \begin{proof} Recall that Lemma~\ref{lemma: d hattk, lower bound} says\footnote{$\alpha^2 = o(\lambda)$.} \[ \frac{1}{\hat{a}^{(t)}_k} \frac{\mathrm{d}}{\mathrm{d} t} \hat{a}^{(t)}_k \ge 2 \tilde{a}^{(t)}_k - \lambda - O\left(\alpha^2 \right). 
\] As a result, when $\tilde{a}^{(t)}_k \ge 3\lambda/2$, we have $\frac{\mathrm{d}}{\mathrm{d} t} \hat{a}^{(t)}_k \ge \tilde{a}^{(t)}_k \hat{a}^{(t)}_k$ or, equivalently, $\frac{\mathrm{d}}{\mathrm{d} t} \tilde{a}^{(t)}_k \le - \tilde{a}^{(t)}_k \hat{a}^{(t)}_k$. When $\hat{a}^{(t)}_k \le a_k / 2$, we have $\frac{\mathrm{d}}{\mathrm{d} t} \hat{a}^{(t)}_k \ge a_k \hat{a}^{(t)}_k / 2$, whence it takes at most $\frac{2}{a_k}\log\left( \frac{a_k}{2\delta_1^2}\right)$ time for $\hat{a}^{(t)}_k$ to grow from $\delta_1^2$ to $a_k / 2$. When $\hat{a}^{(t)}_k \ge a_k / 2$, we have $\frac{\mathrm{d}}{\mathrm{d} t} \tilde{a}^{(t)}_k \le - a_k \tilde{a}^{(t)}_k / 2$, whence it takes at most $\frac{2}{a_k}\log\left( \frac{a_k}{2 \lambda}\right)$ time for $\tilde{a}^{(t)}_k$ to decrease from $a_k/2$ to $\lambda$. Hence, the total amount of time is upper bounded by $\frac{2}{a_k}\left( \log\left(\frac{a_k}{2 \delta_1^2}\right) + \log\left(\frac{a_k}{2 \lambda}\right) \right)$. Finally, use the fact $\lambda > \delta_1^2$ to complete the proof. \end{proof} \begin{lemma} \label{lemma: phase 2, bounds for zt} For any $k \in [d]$ and $\bar{v}^{(t)}$ with $\|\bar{v}^{(t)}\|_\infty^2 \le O(\log d / d)$, we have $\E^{(t)}_{k, w} [z^{(t)}]^4 = [\bar{v}^{(t)}_k]^4 \pm O\left( \frac{\log d}{d} \alpha \right)$. Meanwhile, for each $\bar{w}^{(t)} \in S^{(t)}_k$, we have $\absr{z^{(t)}} \le O\left(\sqrt{\frac{\log d}{d}}\right)$. \end{lemma} \begin{proof} For simplicity, put $x^{(t)} = \inner{\bar{w}^{(t)}_{-k}}{\bar{v}^{(t)}_{-k}}$. Then we have \begin{align*} \E^{(t)}_{k, w} [z^{(t)}]^4 = \E^{(t)}_{k, w} \bigg\{ [\bar{w}^{(t)}_k]^4 [\bar{v}^{(t)}_k]^4 & + 4 [\bar{w}^{(t)}_k]^3 [\bar{v}^{(t)}_k]^3 x^{(t)} + 6 [\bar{w}^{(t)}_k]^2 [\bar{v}^{(t)}_k]^2 [x^{(t)}]^2 \\ & + 4 \bar{w}^{(t)}_k \bar{v}^{(t)}_k [x^{(t)}]^3 + [x^{(t)}]^4 \bigg\}. \end{align*} For the first term, we have $[\bar{v}^{(t)}_k]^4 \E^{(t)}_{k, w} [\bar{w}^{(t)}_k]^4 = [\bar{v}^{(t)}_k]^4 \left( 1 \pm O( \alpha^2 )\right)$. 
To bound the remaining terms, we compute \begin{align*} \E^{(t)}_{k, w} \left\{ [\bar{w}^{(t)}_k]^3 [\bar{v}^{(t)}_k]^3 x^{(t)} \right\} &\le O(1) \left( \frac{\log d}{d} \right)^{1.5} \E^{(t)}_{k, w} \sqrt{1 - [\bar{w}^{(t)}_k]^2} \le O(1) \left( \frac{\log d}{d} \right)^{1.5} \alpha, \\ \E^{(t)}_{k, w} \left\{ [\bar{w}^{(t)}_k]^2 [\bar{v}^{(t)}_k]^2 [x^{(t)}]^2 \right\} &\le O(1)\frac{\log d}{d} \alpha^2, \\ \E^{(t)}_{k, w} \left\{ \bar{v}^{(t)}_k [x^{(t)}]^3 \right\} &\le O(1)\sqrt{\frac{\log d}{d}} \alpha^{2.5}, \\ \E^{(t)}_{k, w} \left\{ [x^{(t)}]^4 \right\} &\le O(1)\alpha^3. \end{align*} Using the fact $\alpha \le \log d / d$, we get \begin{align*} \E^{(t)}_{k, w} [z^{(t)}]^4 = [\bar{v}^{(t)}_k]^4 \left( 1 \pm O( \alpha^2 )\right) \pm O(1)\frac{\log d}{d} \alpha = [\bar{v}^{(t)}_k]^4 \pm O\left( \frac{\log d}{d} \alpha \right). \end{align*} For the individual bound, it suffices to note that \[ \absr{z^{(t)}} \le \absr{\bar{v}^{(t)}_k} + \sqrt{1 - [\bar{w}^{(t)}_k]^2} \le O\left(\sqrt{\frac{\log d}{d}}\right) + \sqrt{\alpha} = O\left(\sqrt{\frac{\log d}{d}}\right). \] \end{proof} \begin{lemma}[Bound on the tangent movement] In Phase 2, for any reinitialized component $v^{(t)}$ and $k \in [d]$, we have $[\bar{v}^{(t_2)}_k]^2 = [\bar{v}^{(t_1)}_k]^2 + o(\log d / d)$. \end{lemma} \begin{proof} Recall the definition of $G_1$, $G_2$ and $G_3$ from Lemma~\ref{lemma: d vtk2}. 
By Lemma~\ref{lemma: phase 2, bounds for zt}, we have \begin{align*} G_1 &\le 8 \tilde{a}^{(t)}_k \left( 1 - [\bar{v}^{(t)}_k]^2 \right) [\bar{v}^{(t)}_k]^4 + O(1) a_k \frac{\log d}{d} \alpha + 8 \hat{a}^{(t)}_k \E^{(t)}_{k, w}\left\{ [z^{(t)}]^3 \inner{\bar{w}_{-k}}{\bar{v}_{-k}} \right\} \\ &\le 8 \tilde{a}^{(t)}_k \left( 1 - [\bar{v}^{(t)}_k]^2 \right) [\bar{v}^{(t)}_k]^4 + O\left( a_k \frac{\log d}{d} \alpha \right), \end{align*} where the second line comes from \begin{align*} \E^{(t)}_{k, w}\left\{ [z^{(t)}]^3 \inner{\bar{w}_{-k}}{\bar{v}_{-k}} \right\} \le O(1) \frac{\log d}{d} \E^{(t)}_{k, w} \sqrt{1 - [\bar{w}^{(t)}_k]^2} \le O\left( \frac{\log d}{d} \alpha \right). \end{align*} Similarly, we have $|G_2| \le O(1) \sum_{i \ne k} a_i \frac{\log d}{d}\alpha$. For $G_3$, by Lemma~\ref{lemma: phase 2, bounds for zt}, we have \begin{align*} a_i [\bar{v}^{(t)}_i]^4 - \hat{a}^{(t)}_i \E^{(t)}_{i, w} \left\{ [z^{(t)}]^4 \right\} = \tilde{a}^{(t)}_i [\bar{v}^{(t)}_i]^4 \pm O\left( a_i \frac{\log d}{d} \alpha \right). \end{align*} Therefore \begin{align*} |G_3| & \le 8 [\bar{v}^{(t)}_k]^2 \sum_{i \ne k} \left( \tilde{a}^{(t)}_i [\bar{v}^{(t)}_i]^4 \pm O\left( a_i \frac{\log d}{d} \alpha \right) \right) \\ &\le 8 [\bar{v}^{(t)}_k]^2 \left( \left(\max_{i \ne k} \tilde{a}^{(t)}_i \right) O\left(\frac{\log d}{d}\right) + O\left(\frac{\log d}{d} \alpha \right) \right) \\ &\le O\left( \beta^{(s)} \frac{\log^2 d}{d^2} \right). \end{align*} Thus\footnote{$\alpha \le O(\beta^{(s)} \log d / d)$}, \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t} [\bar{v}^{(t)}_k]^2 &\le 8 \tilde{a}^{(t)}_k [\bar{v}^{(t)}_k]^4 + O\left(\frac{\log d}{d} \alpha \right) + O\left( \beta^{(s)} \frac{\log^2 d}{d^2} \right) \\ &\le O\left( \beta^{(s)} \frac{\log^2 d}{d^2} \right). \end{align*} Integrate both sides and recall that $T = \frac{o(d/\log d)}{\beta^{(s)}}$. Thus, the change of $[\bar{v}^{(t)}_k]^2$ is $o(\log d / d)$. 
\end{proof} \begin{lemma}[Bound on the norm growth] In Phase 2, for any reinitialized component $v^{(t)}$, we have $\left|\ns{v^{(t_2)}} - \ns{v^{(t_1)}}\right| = o(\delta_0^2)$. \end{lemma} \begin{proof} By Lemma~\ref{lemma: d |v|2} and Lemma~\ref{lemma: phase 2, bounds for zt}, we have \begin{align*} \frac{1}{2 \ns{v^{(t)}}} \frac{\mathrm{d}}{\mathrm{d} t} \ns{v^{(t)}} &\le \sum_{i=1}^d \left( a_i [\bar{v}^{(t)}_i]^4 - \hat{a}^{(t)}_i \E^{(t)}_{i, w} [z^{(t)}]^4 \right) \\ &\le \sum_{i=1}^d \left( \tilde{a}^{(t)}_i [\bar{v}^{(t)}_i]^4 + a_i O\left(\frac{\log d}{d} \alpha \right)\right) \\ &\le \left( \max_{i \in [d]} \tilde{a}^{(t)}_i \right) O\left( \frac{\log d}{d} \right) + O\left(\frac{\log d}{d} \alpha \right) \\ &= \left( \max_{i \in [d]} \tilde{a}^{(t)}_i \right) O\left( \frac{\log d}{d} \right). \end{align*} Recall that $\max_{i \in [d]} \tilde{a}^{(t)}_i \le O(\beta^{(s)})$ and $\|v^{(t)}\| \le O(\delta_0)$. Hence, \[ \frac{\mathrm{d}}{\mathrm{d} t} \ns{v^{(t)}} \le O\left( \beta^{(s)} \frac{\log d}{d} \right) \delta_0^2. \] Integrate both sides, use the fact $T = \frac{o(d/\log d)}{\beta^{(s)}}$, and then we complete the proof. \end{proof} \begin{proofof}{Lemma~\ref{lem:phase2}} Lemma~\ref{lem:phase2} follows by combining the above lemmas with the analysis in Appendix~\ref{sec: appendix, induction hypothesis}. \end{proofof} \section{Proof for Theorem~\ref{thm:main}}\label{sec:proof_main_theorem} In this section, we give the proof of Theorem~\ref{thm:main}. \maintheorem* Note that Proposition~\ref{prop:main} guarantees that any ground truth component with $a_i\geq \beta^{(s)}/(1-\gamma)$ must have been fitted before epoch $s$ starts. When $\beta^{(s)}$ decreases below $O(\epsilon/\sqrt{d})$, all the ground truth components larger than $O(\epsilon/\sqrt{d})$ have been fitted and the residual $\fn{T-T^*}$ must be less than $\epsilon$. Since $\beta^{(s)}$ decreases at a constant rate, the algorithm must terminate in $O(\log(d/\epsilon))$ epochs. 
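To make the epoch count explicit, write the constant-factor decrease as $\beta^{(s)} = (1-\gamma)^{s}\beta^{(0)}$ with $\beta^{(0)}=O(1)$ and $\gamma=\Theta(1)$ (the exact decrease factor is an assumption of this sketch; any fixed constant factor gives the same bound). Then $\beta^{(s)}$ drops below $O(\epsilon/\sqrt{d})$ once
\[
(1-\gamma)^{s}\,\beta^{(0)} \le \frac{\epsilon}{\sqrt{d}}
\quad\Longleftrightarrow\quad
s \ge \frac{\log\left(\beta^{(0)}\sqrt{d}/\epsilon\right)}{\log\frac{1}{1-\gamma}} = O\left(\log\frac{d}{\epsilon}\right),
\]
where the last step uses $\log\frac{1}{1-\gamma} = \Theta(1)$.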
\begin{proof} According to Lemma~\ref{lem:phase1} and Lemma~\ref{lem:phase2}, we know Proposition~\ref{prop:main} holds throughout the algorithm. We first show that $\beta^{(s)}$ is always lower bounded by $\Omega(\epsilon/\sqrt{d})$ before the algorithm ends. For the sake of contradiction, assume $\beta^{(s)}\leq O(\frac{\epsilon}{\sqrt{d}})$. We show that $\fn{T^{(s,0)}-T^*}<\epsilon,$ which is a contradiction because our algorithm should have terminated before this epoch. For simplicity, we drop the superscript on epoch $s$ in the proof. We can upper bound $\fn{T^* - T^{(t)}}$ by splitting $T^*$ into $\sum_{i\in [d]}T^*_i$ and splitting $T^{(t)}$ into $\sum_{i\in [d]}T^{(t)}_i + T^{(t)}_\varnothing.$ Then, we have \begin{align*} \fn{T^*-T^{(t)}} \leq& \fn{\sum_{i\in [d]}(a_i-\hat{a}^{(t)}_i)e_i^{\otimes 4}} + \sum_{i\in [d]}\fn{T^{(t)}_i - \hat{a}^{(t)}_i e_i^{\otimes 4}} + \fn{T^{(t)}_\varnothing}\\ \leq& O\pr{\sqrt{d}\max\pr{\beta^{(s)},\lambda}} + O(\alpha + m\delta_1^2), \end{align*} where the second inequality holds because $(a_i-\hat{a}^{(t)}_i)\leq O(\max\pr{\beta^{(s)},\lambda})$, $\fn{T^{(t)}_i - \hat{a}^{(t)}_i e_i^{\otimes 4}}\leq O(\hat{a}^{(t)}_i \alpha)$ and $\fn{T^{(t)}_\varnothing}\leq m\delta_1^2.$ Choosing $\lambda,\alpha = O(\frac{\epsilon}{\sqrt{d}})$ and $\delta_1^2 = O(\frac{\epsilon}{m\sqrt{d}})$, we have \[\fn{T^*-T^{(t)}}<\epsilon.\] Since $\beta^{(s)}$ starts from $O(1)$ and decreases by a constant factor at each epoch, it will decrease below $O(\frac{\epsilon}{\sqrt{d}})$ after $O(\log(d/\epsilon))$ epochs. This means our algorithm terminates in $O(\log(d/\epsilon))$ epochs. \end{proof} \section*{Overview of Supplementary Materials} In the supplementary material we give a detailed proof of Theorem~\ref{thm:main}. We first highlight a few technical ideas that go into the proof, and then give details for each part of the proof. 
\paragraph{Continuity Argument} The continuity argument is the main tool we use to prove Proposition~\ref{prop:main}. Intuitively, the continuity argument says that if, whenever a property is about to be violated, there exists a positive speed that pulls it back, then that property will never be violated. In some sense, this is the continuous version of mathematical induction or, equivalently, the minimal counterexample method. See Section 1.3 of \cite{tao_nonlinear_2006} for a short discussion of this method. However, since our algorithm is not just gradient flow, and in particular involves reinitialization steps that are not continuous, we need to generalize the continuity argument to handle impulses. We give detailed lemmas in Section~\ref{sec: continuity argument}, as the continuity argument is mostly used to prove Proposition~\ref{prop:main}. \paragraph{Approximating residual} In many parts of the proof, we approximate the residual $T^* - T$ as: \[ T^* - T = \sum_{i=1}^d \tilde{a}_i e_i^{\otimes 4} + \Delta, \] where $\tilde{a}_i = a_i -\hat{a}_i.$ That is, we think of $T^* - T$ as an orthogonal tensor with some perturbations. The norm of the perturbation $\|\Delta\|_F$ is going to be bounded by $O(\alpha + m\delta_1^2)$, which is sufficient in several parts of the proof that only require crude estimates. However, in several key steps of our proof (including conditions (a) and (b) of Proposition~\ref{prop:main} and the analysis of the first phase), it is important to use extra properties of $\Delta$. In particular, we will expand $\Delta$ to show that for a basis vector $e_i$ we always have $\Delta(e_i^{\otimes 4}) = o(\alpha)$, which gives us tighter bounds when we need them. \paragraph{Radial and tangent movement} Throughout the proof, we often need to track the movement of a particular component $w$ (a column in $W$). 
It is beneficial to separate the movement of $w$ into radial and tangent movement, where the radial movement is defined as $\inner{\frac{dw}{dt}}{w}$ and the tangent movement is defined as $P_{w^\perp} \frac{dw}{dt}$ (where $P_{w^\perp}$ is the projection onto the orthogonal subspace of $w$). Intuitively, the radial movement controls the norm of the component $w$, and the tangent movement controls the direction of $w$. When the component $w$ has small norm, it will not significantly change the residual $T^*-T$, so we mostly focus on the tangent movement; on the other hand, when the norm of $w$ becomes large, we show that it must already be correlated with one of the ground truth components, which allows us to better control its norm growth. \paragraph{Overall structure of the proof} The entire proof is a large induction/continuity argument which maintains Proposition~\ref{prop:main} as well as properties of the two phases (summarized later in Assumption~\ref{assumption: induction, oracle}). In each part of the proof, we show that if these conditions hold at all previous times, then they will continue to hold during the phase/after reinitialization. In Section~\ref{sec: appendix, induction hypothesis} we prove Proposition~\ref{prop:main} assuming Assumption~\ref{assumption: induction, oracle} holds before. In Section~\ref{sec:proof_phase1} we prove guarantees for Phase 1 and reinitialization assuming Proposition~\ref{prop:main}. In Section~\ref{sec:proof_phase2} we prove guarantees for Phase 2 assuming Proposition~\ref{prop:main}. Finally, in Section~\ref{sec:proof_main_theorem} we give the proof of the main theorem. \paragraph{Experiments} Finally, in Section~\ref{sec:exp_detail} we give details about experiments that illustrate the deflation process, and show why such a process may not happen for non-orthogonal tensors. 
\section{Main theorem and proof sketch} \label{sec:sketch} In this section we discuss the ideas used to prove the following main theorem.\footnote{In the theorem statement, we have a parameter $\alpha$ that is not used in our algorithm but is very useful in the analysis (see for example Definition~\ref{def:sst}). Basically, $\alpha$ measures the closeness between a component and its corresponding ground truth direction (see more in Section~\ref{sec:induction_sketch}).} \begin{restatable}{theorem}{maintheorem}\label{thm:main} For any $\epsilon$ satisfying $\log (1/\epsilon)=o( d/\log d),$ there exists $\gamma = \Theta(1)$, $m={\text{poly}}(d)$, $\lambda=\min\{O(\log d/d),O(\epsilon/d^{1/2})\}$, $\alpha=\min\{O(\lambda /d^{3/2}), O(\lambda^2), O(\epsilon^2/d^4)\}$, $\delta_1=O(\alpha^{3/2}/m^{1/2})$, $\delta_0=\Theta(\delta_1\alpha/\log^{1/2} (d))$ such that with probability $1-1/{\text{poly}}(d)$ in the (re)-initializations, Algorithm~\ref{algo:main} terminates in $O(\log(d/\epsilon))$ epochs and returns a tensor $T$ such that \[\fn{T-T^*}\leq \epsilon.\] \end{restatable} Intuitively, epoch $s$ of Algorithm~\ref{algo:main} tries to discover all ground truth components with $a_i$ at least as large as $\beta^{(s)}$. The algorithm does this in two phases. In Phase 1, the small components $w$ evolve according to tensor power dynamics. For each ground truth component with large enough $a_i$ that has not been fitted yet, we hope there will be at least one component in $W$ that becomes large and correlated with $e_i$. We call such ground truth components ``discovered''. Phase 1 ends with a check that reinitializes all components with small norm. Phase 2 is relatively short, and in Phase 2 we guarantee that every ground truth component that has been discovered becomes ``fitted'', which means the residual $T-T^*$ becomes small in this direction. However, there are still many difficulties in analyzing each of the steps. 
In particular, why would ground truth components that were fitted in previous epochs remain fitted? How can we guarantee that only components correlated with a ground truth component grow to a large norm? Why wouldn't the gradient flow in Phase 2 interfere with the initialization we require in Phase 1? We discuss the high-level ideas to solve these issues. In particular, in Section~\ref{sec:induction_sketch} we first give an induction hypothesis that is preserved throughout the algorithm, which guarantees that every ground truth component that is fitted remains fitted. In Section~\ref{sec:phase1_sketch} we discuss the properties of Phase 1, and in Section~\ref{sec:phase2_sketch} we discuss the properties of Phase 2. \subsection{Induction hypothesis and local stability}\label{sec:induction_sketch} In order to formally define what it means for a ground truth component to be ``discovered'' or ``fitted'', we need some more definitions and notations. \begin{definition}\label{def:sst} Define $S^{(s,t)}_i\subseteq [m]$ as the subset of components that satisfy the following conditions: the $k$-th component is in $S^{(s,t)}_i$ if and only if there exists some time $(s',t')$ that is no later than $(s,t)$ and no earlier than the latest re-initialization of $W[:,k]$ such that \[\n{W^{(s',t')}[:,k]}=\delta_1 \text{ and } [\overline{W^{(s',t')}[:,k]}_i]^2\geq 1-\alpha.\] We say that ground truth component $i$ is {\em discovered} in epoch $s$ at time $t$ if $S^{(s,t)}_i$ is not empty. \end{definition} Intuitively, $S^{(s,t)}_i$ is a subset of components in $W$ that have large enough norm and good correlation with the $i$-th ground truth component. Although such components may not have a large enough norm to fit $a_i$ yet, their norm will eventually grow. Therefore we say ground truth component $i$ is discovered when such components exist. 
For convenience, we shorthand $w^{(s,t)} \in \{W^{(s,t)}[:, j] | j\in S^{(s,t)}_i\}$ by $w^{(s,t)} \in S^{(s,t)}_i.$ Now we discuss when a ground truth component is fitted; to that end, let \[\hat{a}_i^{(s,t)} = \sum_{w^{(s,t)}\in S^{(s,t)}_i}\ns{w^{(s,t)}}.\] Here $\hat{a}_i^{(s,t)}$ is the total squared norm of all the components in $S^{(s,t)}_i$. We say a ground truth component is {\em fitted} if $a_i - \hat{a}_i^{(s,t)} \le 2\lambda$. Note that one can partition the columns in $W$ using the sets $S^{(s,t)}_i$, giving $d$ groups and one extra group that contains everything else. We define the extra group as $S^{(s,t)}_\varnothing := [m]\setminus \bigcup_{k\in [d]}S^{(s,t)}_k$. For each non-empty $S^{(s,t)}_i$, we can take the average of its components (weighted by $\ns{w^{(s,t)}}$): \[\E^{(s,t)}_{i,w} f(w^{(s,t)}):=\frac{1}{\hat{a}^{(s,t)}_i}\sum_{w^{(s,t)}\in S^{(s,t)}_i}\ns{w^{(s,t)}} f(w^{(s,t)}).\] If $S^{(s,t)}_i = \varnothing,$ we define $\E^{(s,t)}_{i,w} f(w^{(s,t)})$ as zero. Now we are ready to state the induction hypothesis: \begin{restatable}[Induction hypothesis]{prop}{proposition} \label{prop:main} In the setting of Theorem~\ref{thm:main}, for any epoch $s$ and time $t$ and every $k \in [d]$, the following hold. \begin{enumerate}[(a)] \item \label{itm: Ist, individual} For any $w^{(s,t)} \in S^{(s,t)}_k$, we have $\br{\bar{w}^{(s,t)}_k}^2 \ge 1 - \alpha$. \item \label{itm: Ist, average} If $S^{(s,t)}_k$ is nonempty, $\E^{(s,t)}_{k, w} \br{\bar{w}^{(s,t)}_k}^2 \ge 1 - \alpha^2 - 4sm\delta_1^2$. \item \label{itm: Ist, residual} We always have $a_k - \hat{a}^{(s, t)}_k \ge \lambda/6 - s m\delta_1^2$; if $a_k \geq \frac{\beta^{(s)}}{1-\gamma}$, we further know $a_k - \hat{a}_k^{(s,t)} \le \lambda+ s m\delta_1^2$. \item If $w^{(s,t)} \in S^{(s,t)}_\varnothing$, then $\|w^{(s,t)}\| \le \delta_1$.
\end{enumerate} \end{restatable} We choose $\delta_1^2$ small enough so that $sm\delta_1^2$ is negligible compared with $\alpha^2$ and $\lambda.$ Note that if Proposition~\ref{prop:main} is maintained throughout the algorithm, all the large components will be fitted, which directly implies Theorem~\ref{thm:main}. The detailed proof is deferred to Appendix~\ref{sec:proof_main_theorem}. Condition (c) shows that a ground truth component $k$ with large enough $a_k$ will always be fitted after the corresponding epoch. Condition (d) shows that components that did not discover any ground truth component will always have small norm (hence they are negligible in most parts of the analysis). Conditions (a) and (b) show that as long as a ground truth component $k$ has been discovered, all components that are in $S^{(s,t)}_k$ will have good correlation, while the {\em average} of all such components will have even better correlation. The separation between individual correlation and average correlation is important in the proof. With only the individual bound, we cannot maintain the correlation no matter how small $\alpha$ is, as the following example shows: \begin{restatable}{claim}{example}\label{clm:example} Suppose $T^* = e_k^{\otimes 4}$ and $T=v^{\otimes 4}/\ns{v} + w^{\otimes 4}/\ns{w}$ with $\ns{w}+\ns{v}\in[2/3,1].$ Suppose $\bar{v}_k^2 = 1-\alpha$ and $\bar{v}_k = \bar{w}_k, \bar{v}_{-k}=-\bar{w}_{-k}.$ Assuming $\ns{v}\leq c_1$ and $\alpha\leq c_2$ for small enough constants $c_1,c_2,$ we have $\frac{\mathrm{d}}{\mathrm{d} t}\bar{v}_k^2<0.$ \end{restatable} In the above example, both $\bar{v}$ and $\bar{w}$ are close to $e_k$ but they are opposite in the other directions ($\bar{v}_{-k}=-\bar{w}_{-k}$). The norm of $v$ is very small compared with that of $w$. Intuitively, we can increase $v_{-k}$ so that the average of $v$ and $w$ is more aligned with $e_k$. See the rigorous analysis in Appendix~\ref{sec: induction, counterexample}.
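As a concrete illustration of the bookkeeping in this subsection, the fitted mass $\hat{a}_i$ and the norm-weighted average correlation $\E_{i,w}[\bar{w}_i^2]$ can be computed as in the following minimal NumPy sketch (the matrix and index set below are toy data, not the algorithm's actual state):

```python
import numpy as np

def fitted_mass_and_avg_corr(W, S_i, i):
    """For the columns of W indexed by S_i (a toy stand-in for the set
    S_i^{(s,t)}), return the fitted mass a_hat_i = sum of squared column
    norms, and the norm-weighted average correlation E_{i,w}[w_bar_i^2]."""
    cols = W[:, S_i]                    # shape (d, |S_i|)
    sq_norms = np.sum(cols**2, axis=0)  # ||w||^2 for each column
    a_hat = sq_norms.sum()
    corr = cols[i, :]**2 / sq_norms     # [w_bar_i]^2 = w_i^2 / ||w||^2
    avg_corr = (sq_norms * corr).sum() / a_hat
    return a_hat, avg_corr

# toy check: three columns that are well aligned with e_0
rng = np.random.default_rng(0)
d, m = 8, 3
W = 0.05 * rng.standard_normal((d, m))
W[0, :] = 1.0                           # dominant coordinate 0
a_hat, avg = fitted_mass_and_avg_corr(W, [0, 1, 2], 0)
```

Here the weighting by $\ns{w}$ means large components dominate the average, matching the definition of $\E^{(s,t)}_{i,w}$.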
The induction hypothesis will be carefully maintained throughout the analysis. The following lemma guarantees that in the gradient flow steps the individual and average correlations are maintained. \begin{lemma}\label{lem:individual_average} In the setting of Theorem~\ref{thm:main}, suppose Proposition~\ref{prop:main} holds in epoch $s$ at time $t$. Then \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t} [\bar{v}^{(s,t)}_k]^2 &\ge 8 \left( a_k - \hat{a}^{(s, t)}_k \right) \left( 1 - [\bar{v}^{(s,t)}_k]^2 \right) - O\left( \alpha^{1.5} \right), \\ \frac{\mathrm{d}}{\mathrm{d} t} \E^{(s,t)}_{k, v}[\bar{v}^{(s,t)}_k]^2 &\ge 8 \left( a_k - \hat{a}^{(s, t)}_k \right) \left( 1-\E^{(s,t)}_{k,v}[\bar{v}^{(s,t)}_k]^2 \right) - O(\alpha^3). \end{align*} In particular, when $a_k-\hat{a}^{(s, t)}_k\geq \Omega(\lambda) = \Omega(\sqrt{\alpha}),$ we have $\frac{\mathrm{d}}{\mathrm{d} t} [\bar{v}^{(s,t)}_k]^2 >0$ when $[\bar{v}^{(s,t)}_k]^2=1-\alpha$ and $\frac{\mathrm{d}}{\mathrm{d} t} \E^{(s,t)}_{k, v}[\bar{v}^{(s,t)}_k]^2 >0$ when $\E^{(s,t)}_{k, v}[\bar{v}^{(s,t)}_k]^2=1-\alpha^2.$ \end{lemma} The detailed proof of this local stability result can be found in Appendix~\ref{sec: appendix, induction hypothesis}. Of course, to fully prove the induction hypothesis one also needs to discuss what happens when a component enters $S^{(s,t)}_i$, and what happens at the reinitialization steps. We discuss these details in later subsections. \vspace{-1mm} \subsection{Analysis of Phase 1}\label{sec:phase1_sketch} In Phase 1 our main goal is to discover all the ground truth components that are large enough. We also need to maintain Proposition~\ref{prop:main}.
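As a quick sanity check on the first bound of Lemma~\ref{lem:individual_average}, one can integrate its worst case as an ODE and see that the correlation never drops below $1-\alpha$; the constants below are illustrative placeholders, not the paper's:

```python
import numpy as np

# Illustrative scales (hypothetical, chosen only so that gap = Omega(sqrt(alpha)))
alpha = 1e-4
gap = np.sqrt(alpha)     # a_k - a_hat_k in the Omega(lambda) = Omega(sqrt(alpha)) regime
C = 1.0                  # stand-in constant for the O(alpha^1.5) error term
x = 1 - alpha            # correlation [v_bar_k]^2, started at the boundary
dt = 1e-3
for _ in range(10000):
    # worst case of the lemma's lower bound, treated as an equality
    x += dt * (8 * gap * (1 - x) - C * alpha**1.5)
# at x = 1 - alpha the drift is 8*sqrt(alpha)*alpha - alpha^1.5 > 0,
# so x is pushed up toward the fixed point 1 - alpha/8 and stays above 1 - alpha
```

This mirrors the argument in the text: the positive drift at the boundary $[\bar{v}_k]^2=1-\alpha$ is what preserves condition (a) of the induction hypothesis.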
Formally, we prove the following: \begin{restatable}[Main Lemma for Phase 1]{lemma}{lemphaseone}\label{lem:phase1} In the setting of Theorem~\ref{thm:main}, suppose Proposition~\ref{prop:main} holds at $(s,0).$ For $t_1^{(s)}:=t_1^{(s)\prime}+t_1^{(s){\prime\prime}}+t_1^{(s){\prime\prime\prime}}$ with $t_1^{(s)\prime} = \Theta(d/(\beta^{(s)} \log d))$, $t_1^{(s){\prime\prime}}=\Theta(d/(\beta^{(s)} \log^3 d))$, $t_1^{(s){\prime\prime\prime}}=\Theta(\log(d/\alpha)/\beta^{(s)})$, with probability $1-1/{\text{poly}}(d)$ we have \begin{enumerate} \item Proposition \ref{prop:main} holds at $(s,t)$ for any $0\leq t < t_1^{(s)}$, and also for $t = t_1^{(s)}$ after reinitialization. \item If $a_k\geq \beta^{(s)}$ and $S^{(s,0)}_k=\varnothing$, we have $S_k^{(s,t_1^{(s)})}\neq \varnothing$ and $\hat{a}_k^{(s,t_1^{(s)})}\geq \delta_1^2.$ \item If $S^{(s,0)}_k=\varnothing$ and $S^{(s, t_1^{(s)})}_k\neq \varnothing,$ we have $a_k\geq C\beta^{(s)}$ for a universal constant $0<C<1$. \end{enumerate} \end{restatable} Property 2 shows that large enough ground truth components are always discovered, while Property 3 guarantees that no small ground truth components can be discovered. Our proof relies on initial components being ``lucky'' and having higher-than-usual correlation with one of the large ground truth components. To make this clear we separate the components $v$ into different sets: \begin{definition}[Partition of (re-)initialized components]\label{def-phase1-partition} For each direction $i\in[d]$, define the set of good components $S^{(s)}_{i,good}$ and the set of potential components $S^{(s)}_{i,pot}$ as follows, where $\Gamma^{(s)}_i:=1/(8a_i t_1^{(s)\prime})$ if $S_i^{(s,0)}=\varnothing$, and $\Gamma^{(s)}_i := 1/(8\lambda t_1^{(s)\prime})$ otherwise. Here $\rho^{(s)}_i:=c_\rho \Gamma^{(s)}_i$ and $c_\rho$ is a small enough absolute constant.
\begin{align*} S^{(s)}_{i,good} &:= \{k \mid [\bar{v}^{(s,0)}_i]^2\ge \Gamma^{(s)}_i+\rho^{(s)}_i,\ [\bar{v}^{(s,0)}_j]^2 \le \Gamma^{(s)}_j-\rho^{(s)}_j,\forall j\ne i \text{ and } v^{(s,0)} = W^{(s,0)}[:,k]\},\\ S^{(s)}_{i,pot} &:= \{k\mid [\bar{v}^{(s,0)}_i]^2\ge \Gamma^{(s)}_i-\rho^{(s)}_i\text{ and } v^{(s,0)} = W^{(s,0)}[:,k]\}. \end{align*} Let $S^{(s)}_{good}:= \cup_i S^{(s)}_{i,good}$ and $S^{(s)}_{pot}:= \cup_i S^{(s)}_{i,pot}$. We also define the set of bad components $S^{(s)}_{bad}$. \begin{align*} S^{(s)}_{bad} &:= \{k\mid \exists i\ne j \text{ s.t. } [\bar{v}^{(s,0)}_i]^2\ge\Gamma^{(s)}_i-\rho^{(s)}_i,\ [\bar{v}^{(s,0)}_j]^2\ge\Gamma^{(s)}_j-\rho^{(s)}_j\text{ and } v^{(s,0)} = W^{(s,0)}[:,k]\}. \end{align*} \end{definition} For convenience, we shorthand $v^{(s,t)} \in \{W^{(s,t)}[:, j] | j\in S_{i,good}\}$ by $v^{(s,t)} \in S_{i,good}$ (same for $S_{i,pot}$ and $S_{bad}$). Intuitively, the good components will grow very quickly and eventually pass the norm threshold. Since both good and potential components only have one large coordinate, they will become correlated with that ground truth component when their norm is large. The bad components are correlated with two ground truth components, so they can potentially have a large norm while not having a very good correlation with either one of them. In the proof we will guarantee, with probability at least $1-1/{\text{poly}}(d)$, that good components exist for all large enough ground truth components and that there are no bad components. The following lemma characterizes the trajectories of the different types of components: \begin{restatable}{lemma}{lemphaseonesummarytrajectory}\label{lem-phase1-summary-trajectory} In the setting of Lemma~\ref{lem:phase1}, for every $i \in[d]$ \begin{enumerate} \item ({\em Only good/potential components can become large}) If $v^{(s,t)} \not\in S^{(s)}_{pot}$, $\n{v^{(s,t)}}=O(\delta_0)$ and $[\bar{v}^{(s,t)}_i]^2=O(\log(d)/d)$ for all $i\in[d]$ and $t\le t_1^{(s)}$.
\item ({\em Good components discover ground truth components}) If $S^{(s)}_{i,good}\neq \varnothing$, there exists $v^{(s,t_1^{(s)})}$ such that $\n{v^{(s,t_1^{(s)})}}\ge \delta_1$ and $S_i^{(s,t_1^{(s)})}\neq\varnothing$. \item ({\em Large components are correlated with ground truth components}) If $\n{v^{(s,t)}}\geq \delta_1$ for some $t\leq t_1^{(s)}$, there exists $i\in [d]$ such that $v^{(s,t)}\in S^{(s,t)}_i$. \end{enumerate} \end{restatable} The proof of Lemma~\ref{lem-phase1-summary-trajectory} is difficult because one cannot guarantee that all the ground truth components that we are hoping to fit in the epoch will be fitted simultaneously. However, we are able to show that $T-T^*$ remains near-orthogonal and to control the effect of changing $T-T^*$ within this epoch. The details are in Appendix~\ref{sec:proof_init_phase1}. \vspace{-1mm} \subsection{Analysis of Phase 2}\label{sec:phase2_sketch} In Phase 2 we will show that every ground truth component that is discovered in Phase 1 will become fitted, and that the reinitialized components will preserve the desired initialization conditions. \begin{restatable}[Main Lemma for Phase 2]{lemma}{phasetwomain}\label{lem:phase2} In the setting of Theorem~\ref{thm:main}, suppose Proposition~\ref{prop:main} holds at $(s,t_1^{(s)})$. Then for $t_2^{(s)}-t_1^{(s)}:=O(\frac{\log(1/\delta_1)+\log(1/\lambda)}{\beta^{(s)}})$ we have \begin{enumerate} \item Proposition~\ref{prop:main} holds at $(s,t)$ for any $t_1^{(s)}\leq t\leq t_2^{(s)}.$ \item If $S_k^{(s, t_1^{(s)})}\neq \varnothing,$ we have $a_k-\hat{a}^{(s,t_2^{(s)})}_k\leq 2\lambda.$ \item For any component that was reinitialized at $t_1^{(s)}$, we have $\ns{v^{(s,t_2^{(s)})}} = \Theta(\delta_0^2)$ and $\br{\bar{v}_i^{(s,t_2^{(s)})}}^2 = \br{\bar{v}_i^{(s,t_1^{(s)})}}^2 \pm o\pr{\frac{\log d }{d}}$ for every $i\in [d].$ \end{enumerate} \end{restatable} The main idea is that as long as a direction has been discovered, the norm of the corresponding components will increase very fast.
This rate is characterized by the following lemma. \begin{lemma}[informal]In the setting of Lemma~\ref{lem:phase2}, for any $t_1^{(s)}\leq t\leq t_2^{(s)},$ \[ \frac{\mathrm{d}}{\mathrm{d} t} \hat{a}^{(s,t)}_k \ge \pr{2 (a_k-\hat{a}^{(s,t)}_k) - \lambda - O\left(\alpha^2 \right)}\hat{a}^{(s,t)}_k. \] In particular, after $O(\frac{\log(1/\delta_1)+\log(1/\lambda)}{a_k})$ time, we have $a_k-\hat{a}^{(s,t)}_k\leq \lambda.$ \end{lemma} By the choice of $\delta_1$ and $\lambda$, the length of Phase 2 is much smaller than the amount of time needed for the reinitialized components to move far, which allows us to prove the third property in Lemma~\ref{lem:phase2}. The detailed analysis is deferred to Appendix~\ref{sec:proof_phase2}.
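To see the claimed $O(\frac{\log(1/\delta_1)}{a_k})$ time scale concretely, one can integrate the lower bound of the informal lemma (dropping the $O(\alpha^2)$ term) with toy values; the scales below are illustrative, not the paper's:

```python
# Hypothetical scales, for illustration only
a_k, lam, delta1 = 0.5, 1e-2, 1e-3
a_hat = delta1**2            # a discovered component starts with tiny fitted mass
dt = 1e-3
t = 0.0
while a_k - a_hat > lam and t < 200.0:
    # lower bound of the informal lemma, treated as an equality
    a_hat += dt * (2 * (a_k - a_hat) - lam) * a_hat
    t += dt
# a_hat grows exponentially at rate ~2*a_k while small, then saturates
# logistically, reaching a_k - lam in roughly log(a_k/delta1^2)/(2*a_k) time
```

The growth is exponential while $\hat{a}_k \ll a_k$ and logistic near saturation, which is exactly why Phase 2 only needs logarithmically many time units.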
\section{Introduction} \label{intro} During the next few years we might expect some dramatic new information from B-mode experiments either detecting primordial gravity waves or establishing a new upper bound on $r$, and from LHC discovery/non-discovery of low scale supersymmetry. A theoretical framework to discuss both of these important factors in cosmology and particle physics has been proposed recently. It is based on the construction of new models of chaotic inflation \cite{Linde:1983gd} in supergravity compatible with the current cosmological data \cite{Planck:2015xua} as well as involving a controllable supersymmetry breaking at the minimum of the potential \cite{Kallosh:2014via,Dall'Agata:2014oka,Kallosh:2014hxa,Lahanas:2015jwa}. In this paper we will develop supergravity models of inflation motivated by either string theory or extended supergravity considerations, known as cosmological $\alpha$-attractors \cite{Kallosh:2013hoa,Ferrara:2013rsa,Kallosh:2013yoa,Cecotti:2014ipa,Kallosh:2013tua,Galante:2014ifa,Kallosh:2015lwa,Kallosh:2015zsa,Carrasco:2015uma}. {\it Here we will enhance them with a controllable supersymmetry breaking and cosmological constant at the minimum}. We find this to be a compelling framework for the discussion of the crucial new data on cosmology and particle physics expected during the next few years. Some models of this type were already discussed in \cite{Kallosh:2015lwa}. The paper is organized as follows. We begin in Section~\ref{sect:Review} with a brief review of key vocabulary and features of these and related models with references to more in-depth treatments. In Section~\ref{sect:Killing} we present the $\alpha$-attractor supergravity models that make manifest an inflaton shift-symmetry by virtue of having the K{\"a}hler\ potential inflaton {\em independent} -- which we will refer to as Killing-adapted form.
Section~\ref{sect:Reconstruction} presents a universal rule: given a bosonic inflationary potential of the form ${\cal F}^2 (\varphi)$ one can reconstruct the superpotential $W=\Big ( S +{1\over b} \Big) f(\Phi)$ for the K{\"a}hler\, potentials described in Section~\ref{sect:Killing}. The resulting models with $f'(\varphi)={\cal F}(\varphi)$ have a cosmological constant $\Lambda$ and an arbitrary SUSY breaking $M$ at the minimum. In Section~\ref{sect:genModels} we study a more general class of models with $W= g(\varphi) + S f(\varphi)$ and the same K{\"a}hler\, potential. For these models it is also possible to get agreement with the Planck data as well as dark energy and SUSY breaking. Moreover, these models have nice properties with regard to initial conditions for inflation, analogous to the ones studied in \cite{Carrasco:2015rva} for models without SUSY breaking and dark energy. We close in Section~\ref{sect:Discussion} with a summary of what we have accomplished. \section{Review} \label{sect:Review} \subsection{$\alpha$, and attraction} There is a key parameter $\alpha$ in these models, for which the K{\"a}hler\, potential is $K= -3\alpha \ln (T+\bar T)$. It describes the moduli space curvature \cite{Ferrara:2013rsa} given by ${\cal R}_K= - {2\over 3 \alpha}$. Another, also geometric, interpretation of this parameter is in terms of the Poincar\'e disk model of a hyperbolic geometry with the radius $\sqrt{3\alpha}$, illustrated by Escher's picture Circle Limit IV~\cite{Kallosh:2015zsa,Carrasco:2015uma}. As clarified in these references, from the fundamental point of view, there are particularly interesting values of $\alpha$ depending on the original theory. From the maximal ${\cal N}=4$ superconformal theory, \cite{Cremmer:1977tt}, one would expect $\alpha=1/3$ with $r \approx10^{-3}$.
This corresponds to the unit radius Escher disk \cite{Kallosh:2015zsa}, as well as a target of the future space mission for B-mode detection, as specified in CORE (Cosmic ORigins Explorer). Some interesting simplifications occur for $\alpha = 1/9$, which corresponds to the GL model \cite{Goncharov:1983mw,Linde:2014hfa}. From ${\cal N}=1$ superconformal theory \cite{Kallosh:2013hoa}, one would expect $\alpha=1$ with $r \approx 3 \times 10^{-3}$. Generic ${\cal N}=1$ supergravity allows any positive $\alpha$ and, therefore, an arbitrary $r$, which has to be smaller than $0.11$ to agree with the current data. \subsection{T and E model attractors, and observables} A simple class of $\alpha$-attractor models, T-models, have a potential $V=\tanh^{2n} {\varphi\over \sqrt{6\alpha}}$ for the canonical inflaton field $\varphi$. These models have the following values of the cosmological observables \cite{Kallosh:2013hoa,Ferrara:2013rsa,Kallosh:2013yoa,Cecotti:2014ipa} for $\alpha\lesssim O(10)$, where there is an attractor behavior and many models have the same $n$-independent predictions \begin{equation} n_s = 1-\frac{2}{N}\,, \qquad r = \alpha \frac{12 }{N^2} \, , \qquad r\approx 3 \, \alpha \times 10^{-3} \ . \label{aattr} \end{equation} Once we increase $\alpha$ beyond $O(10)$, expressions for $n_s$ and $r$ become somewhat different, see eqs. (5.2-5.4) in \cite{Kallosh:2013yoa}. In particular, the value of $r$ can be increased significantly, all the way to the predictions of the $\varphi^{2n}$ models. \begin{figure}[htb] \begin{center} \includegraphics[width=9.3cm]{T.pdf} \vspace*{-0.2cm} \caption{\footnotesize Examples of supergravity T-models, shown on a logarithmic scale in $r$. For potentials $V=\tanh^{2n} {\varphi\over \sqrt{6\alpha}}$, the predictions of these models interpolate between the predictions of various polynomial models $\varphi^{2n}$ at very large $\alpha$ and the vertical attractor line for $\alpha\leq O(10)$.
When $\alpha \rightarrow \infty$ the models approach the ones with $\varphi^{2n}$ potentials. This attractor line beginning with the red star corresponds to the predictions of the simplest models $V=\tanh^{2n} {\varphi\over \sqrt{6\alpha}}$ with $n=1$. } \label{f2} \end{center} \vspace{-0.6cm} \end{figure} Even the simplest of these T-models are interesting phenomenologically for cosmology. For these models the parameter $\alpha$ can take any non-zero value; it describes the inverse curvature of the K{\"a}hler\ manifold \cite{Ferrara:2013rsa,Cecotti:2014ipa}. The cosmological predictions of these models, for various values of $\alpha$, are shown in Fig. 1. As one can see, the line with $n=1$ begins at a point corresponding to the predictions of the simplest quadratic model ${m^{2}\over 2}\phi^{2}$ for $\alpha > 10^{3}$, and then, for smaller $\alpha$, it rapidly cuts through the region most favored by the Planck data, towards the predictions of the Higgs inflation model and conformal attractors $r \approx 0.003$ for $\alpha= 1$, continues further down towards the prediction $r \approx 0.0003$ of the GL model \cite{Goncharov:1983mw,Linde:2014hfa} corresponding to $\alpha = 1/9$, and then the line goes even further, all the way down to $r \to 0$ in the limit $\alpha \to 0$. This fact by itself is quite striking. \newpage The simple E-model attractors have a potential of the form $V_{0} \Bigl(1- e^{-\sqrt {2\over 3\alpha} \varphi}\Bigr)^{2n}$. For $n = 1$, $\alpha = 1$ it gives the potential of the Starobinsky model, with the prediction $r \approx 0.003$. We will generalize both T-models as well as E-models, which both fit the data from Planck very well, to describe SUSY breaking and dark energy, at the minimum of the generalized potential. \begin{figure}[h!] 
\vspace*{3mm} \centering \includegraphics[width=9cm]{E.pdf} \vspace*{-1mm} \caption{\footnotesize The cosmological observables $(n_s,r)$, in a logarithmic scale in $r$, for simple examples of E-models, with $V= (1- e^{-{ \sqrt {2\over 3 \alpha} \varphi}})^{2n} $ with $n = (1/2, 3/4, 7/8, 1, 3/2, 2, 3)$ starting from the right, increasing to the left, with the vertical line for $n=1$ in the middle. When $\alpha \rightarrow \infty$ the models approach the ones with $\varphi^{2n}$ potentials. The attractor line, common for all $n$, starts below $r\approx 10^{-3}$ and goes down, unlimited.} \label{fig:simpleObservables} \vspace{-.3cm} \end{figure} \ \subsection{Stabilizers} In supergravity models of inflation, the task of SUSY breaking after inflation is often delegated to the so-called {\it hidden SUSY breaking sector}, requiring the addition of new superfields constrained to not participate in inflation. The scalars from such superfields have to be strongly stabilized, so as to not affect the inflation driven by the inflaton sector of the model. In this paper we describe models of chaotic inflation with the inflaton chiral superfield, and with a nilpotent superfield stabilizer.\footnote{The nilpotent multiplet describes the Volkov-Akulov fermionic goldstino multiplet with non-linearly realized spontaneously broken supersymmetry \cite{Volkov:1973ix}. The relation to chiral nilpotent multiplets was studied in \cite{rocek}. In cosmology we use the recent implementation of nilpotent multiplets suggested in \cite{Komargodski:2009rz}. These nilpotent multiplets are deeply related to the physics of the D-branes \cite{Ferrara:2014kva,Kallosh:2014wsa}.} This new approach to generic SUSY breaking was suggested recently in \cite{Kallosh:2014via} using generic supergravity models including the inflaton multiplet as well as a nilpotent multiplet \cite{Ferrara:2014kva}. 
Note that the non-inflaton goldstino multiplet plays an important role for the consistency of inflation, including stabilization of the second scalar belonging to the inflaton multiplet. This was explained in \cite{Kallosh:2010ug,Kallosh:2010xz}, building on the pioneering work \cite{Kawasaki:2000yn}. In these models the goldstino multiplet was a `stabilizer' superfield and was a standard chiral superfield. \subsection{Shift Symmetry and Z, T, and $\Phi$ variables} The inflationary models made with a shift-symmetric canonical K{\"a}hler\, potential and controllable supersymmetry breaking have been studied in \cite{Kallosh:2014via,Dall'Agata:2014oka,Kallosh:2014hxa}. The basic feature of all such models is as follows. At the potential's minimum supersymmetry is spontaneously broken. With the simplest choice of the K{\"a}hler\, potential, the models are given by $ K= {1\over 2} (\Phi-\bar \Phi)^2 + S\bar S$, $ W= g (\Phi) + S f(\Phi)$, $ S^2(x, \theta)=0 $, where the superpotential depends on two functions of the inflaton field $\Phi$. The difference from earlier models \cite{Kallosh:2010ug,Kallosh:2010xz,Kawasaki:2000yn} is the presence of an $S$-independent function $g (\Phi)$ in $W$ and the requirement that $S$ is nilpotent. The mass of the gravitino at the minimum of the potential, $W=m_{3/2} =g(0)$, is non-vanishing in these new models, and SUSY is broken in the goldstino direction with $D_SW =M \neq 0$. In \cite{Kallosh:2010ug,Kallosh:2010xz,Kawasaki:2000yn} the mass of the gravitino was vanishing. Typically the minimum of the potential in these models had unbroken supersymmetry in Minkowski minima. But in the new models in \cite{Kallosh:2014via,Dall'Agata:2014oka,Kallosh:2014hxa} with $g(\Phi)\neq 0$ we find instead either de Sitter or Minkowski minima with spontaneously broken SUSY.
From the point of view of string theory and ${\cal N} \geq 2$ spontaneously broken supergravity, another class of K{\"a}hler\, potentials, such as $K= -3\alpha \ln (T+\bar T)$, is more interesting due to its geometric nature and symmetries. The same models in Poincar\'e disk variables are given by $K= -3\alpha \ln (1-Z\bar Z)$. It is particularly important that these models have a boundary of the moduli space at \begin{equation} Z\bar Z \rightarrow 1 \, , \qquad Z\rightarrow \pm 1 \, , \qquad T \rightarrow 0\, , \qquad T^{-1} \rightarrow 0 \end{equation} where $T= {1+Z\over 1-Z}$, \, $T^{-1}= {1-Z\over 1+Z}$ \cite{Kallosh:2013hoa,Cecotti:2014ipa,Kallosh:2015zsa}. Inflation takes place near the boundary, which leads to an attractor behavior where many models lead to the same inflationary predictions. A simple way to explain this is to refer to the geometric nature of the kinetic terms of the form \begin{equation} 3 \alpha {\partial T \partial \bar T \over (T+\bar T)^2 }|_{T=\bar T=t}= {3 \alpha \over 4} \left ({\partial t \over t }\right)^2= {3 \alpha \over 4} \left ({\partial (t ^{-1}) \over t ^{-1}}\right)^2 \label{pole} \end{equation} The kinetic term has a pole behavior near $t^{-1}\rightarrow 0$, near the boundary of the moduli space $T^{-1} \rightarrow 0$. This explains why the potentials can be changed without a change in the cosmological observables, and why $r$ depends on the residue of the pole, i.e. on $\alpha$ \cite{Galante:2014ifa}. We may therefore change our potentials by small terms depending on $t^{-1}$ without changing the observables during inflation. We study these models here. They can use either the Poincar\'e disk variables $Z\bar Z < 1$ or the half-plane variables $T+\bar T>0$. We will also use the set of variables discussed in \cite{Carrasco:2015rva}, where \begin{equation} T= e^{\sqrt{2\over 3\alpha} \Phi} \, , \qquad Z= \tanh {\Phi\over \sqrt{6 \alpha}} \ .
\label{Phi} \end{equation} In the context of our moduli space geometry the variables $\Phi$ represent the Killing-adapted frame where the metric is inflaton independent. We will therefore call them Killing variables. Our purpose here is to generalize the models in \cite{Kallosh:2013hoa,Ferrara:2013rsa,Kallosh:2013yoa,Cecotti:2014ipa} to break ${\cal N} = 1$ SUSY spontaneously. The new models with $S^2(x, \theta)=0$, which are compatible with established cosmological data and designed to be compatible with the future data on $r$ and $m_{3/2}$, will depend on four parameters: $\alpha$, describing the K{\"a}hler\, geometry, $M$, defining the scale of SUSY breaking by the goldstino $D_SW= M$, $\mu$, related to the scale of the inflationary energy, and $b$. The role of $b$ is the following: at the minimum \begin{equation} V= \Big (b^2-3 \Big ) {M^2\over b^2} \, , \qquad \Rightarrow \qquad b^2=3\, , \qquad V=0 \ . \label{V1}\end{equation} It shows that in ${\cal N}=1$ d=4 supergravity with a nilpotent goldstino multiplet {\it generic de Sitter minima require a universal condition that the goldstino energy $M^2$ exceeds the negative gravitino contribution to energy} where $ m_{3/2}^2= {M^2\over b^2}$. \begin{equation} V= M^2- 3 m_{3/2}^2 >0 \ . \label{V2}\end{equation} We keep here generic values of the parameter $b^2 > 3$ which allow generic de Sitter vacua of the string landscape type, including the case \begin{equation} \Lambda =M^2- 3 m_{3/2}^2= \Big (1-{3\over b^2} \Big ) M^2\sim 10^{-120} \ . \label{V3}\end{equation} \section{ Killing-adapted $\alpha$-attractor supergravity models.} \label{sect:Killing} We study here the following ${\cal N} = 1$ supergravity models, which can be described in disk geometry coordinates of the moduli space $Z$, \begin{equation} K= -3 \alpha \log \Big (1- Z\bar Z \Big ) +S\bar S\, , \qquad S^2(x, \theta)=0\, , \qquad W= \tilde A(Z) + S \tilde B(Z)\, \ .
\label{Kdisk}\end{equation} The geometry has the $SU(1,1)$ symmetry \begin{equation} ds^2= K_{Z\bar Z} dZ d\bar Z= 3\alpha {dZ d\bar Z\over (1-Z\bar Z)^2} \ . \label{Dgeom}\end{equation} Alternatively, we can use the half-plane coordinates $T$ \begin{equation} K= -3\, \alpha \log \left(T + \bar T \right) + S\bar S\, , \qquad S^2(x, \theta)=0\, , \qquad W= \tilde G(T)+ S \tilde F(T)\ . \label{Khalf}\end{equation} The geometry has an $SL(2, \mathbb{R})$ symmetry \begin{equation} ds^2= K_{T\bar T} dT d\bar T= 3\alpha {dT d\bar T\over (T+\bar T)^2} \ . \label{HPgeom}\end{equation} In both cases, at $S=0$ the geometry is associated with the Poincar\'e disk or half-plane geometry where $3\alpha= R_{E}^2$ corresponds to the radius squared of the Escher disk \cite{Kallosh:2015zsa}. We will now perform a K{\"a}hler\, transformation \cite{Carrasco:2015uma,Carrasco:2015rva} so that our new K{\"a}hler\, potential is inflaton shift-symmetric. First we use the original disk and half-plane variables and redefine the K{\"a}hler\, and superpotentials as follows \begin{equation} K= -{3\over 2} \alpha \log \left[{(1- Z\bar Z)^2\over (1-Z^2) (1-\overline Z^2)} \right] +S\bar S\, , \qquad S^2(x, \theta)=0\, , \qquad W= A(Z) + S B(Z)\, \ . \label{KdiskNew}\end{equation} where \begin{equation} A(Z) + S B(Z) =(1-Z^2)^{-3\alpha /2} ( \tilde A(Z) + S \tilde B(Z)) \ . \end{equation} In the half-plane case \begin{equation} K= -{3\over 2}\, \alpha \log \left[ {(T + \bar T )^2 \over 4 T \bar T} \right] + S\bar S\, , \qquad S^2(x, \theta)=0\, , \qquad W= G(T)+ S F(T)\ . \label{KhalfNew}\end{equation} where \begin{equation} G(T) + S F(T) = T^{-3\alpha /2} \bigl(\tilde G(T)+ S \tilde F(T)\bigr) \ . \end{equation} Since we have performed a K{\"a}hler\, transform of the type \begin{equation} K\rightarrow K + {3\alpha \over 2} \log [(1-Z^2) (1-\bar Z^2)], \qquad W\rightarrow (1-Z^2)^{-3\alpha /2} W\, \qquad \overline W\rightarrow (1-\bar Z^2) ^{-3\alpha /2} \overline W \ .
\end{equation} \begin{equation} K\rightarrow K + {3\alpha \over 2} \log [4 T \bar T], \qquad W\rightarrow T^{-3\alpha /2} W\, \qquad \overline W\rightarrow \bar T^{-3\alpha /2} \overline W \ . \end{equation} the geometry has not changed; it is still given by \rf{Dgeom} and \rf{HPgeom}, respectively. Our next step is to switch to the moduli space coordinates \rf{Phi} where the metric is manifestly inflaton-independent. {\it The choice of coordinates $Z= \tanh {\Phi\over \sqrt{6 \alpha}}$ and $ T= e^{\sqrt{2\over 3\alpha} \Phi} $ in the disk/half-plane geometry corresponds to a Killing-adapted choice of coordinates where the metric does not depend on $\varphi = {\rm Re} \, \Phi$}. We find that in these coordinates with Killing variables $\Phi= \varphi +i \vartheta$ \begin{equation} K= -3\alpha \log \Big [\cosh {\Phi-\bar \Phi \over \sqrt{6\alpha}} \Big] + S \bar{S} \ . \end{equation} and \begin{equation} ds^2= 3\alpha {dZ d\bar Z\over (1-Z\bar Z)^2}= 3\alpha {dT d\bar T\over (T+\bar T)^2}= {\partial \Phi \partial \bar \Phi\over 2 \cos^2\Big (\sqrt{2\over 3\alpha}\, {\rm Im} \Phi\Big )} . \end{equation} The superpotential is now \begin{equation} W= A \Big (\tanh {\Phi \over \sqrt {6\alpha}} \Big) + S\, B \Big (\tanh {\Phi \over \sqrt {6\alpha}}\Big ) = G\Big (e^{ \sqrt{2\over 3\alpha} \Phi }\Big) + S F\Big ( e^{ \sqrt{2\over 3\alpha} \Phi }\Big) \, \ . \end{equation} Note that in our models $\vartheta=0$ during inflation and therefore the new holomorphic variable $\Phi$ during inflation becomes a real canonical variable $\varphi$. This is also easy to see from the kinetic terms in these variables, which are conformal to flat, \begin{equation} ds^2= {d\varphi^2 + d\vartheta^2 \over 2\cos^{2}\sqrt{2 \over 3\alpha} \vartheta } \ . \label{JJ}\end{equation} At $\vartheta=0$ they are both canonical $ ds^2|_{\vartheta=0} = {d\varphi^2 + d\vartheta^2 \over 2 } $.
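The identity between the disk metric and its Killing-variable form can be cross-checked numerically; a pure-Python sketch at an arbitrary test point, using the positive-definite form $3\alpha\, dZ d\bar Z/(1-Z\bar Z)^2$ of the metric:

```python
import cmath, math

# Check: 3*alpha*|dZ/dPhi|^2 / (1 - Z*conj(Z))^2
#        == 1 / (2*cos(sqrt(2/(3*alpha)) * theta)^2)
# for Z = tanh(Phi/sqrt(6*alpha)), Phi = phi + i*theta.
alpha = 0.7
phi, theta = 0.3, 0.2                     # arbitrary test point
r = math.sqrt(6 * alpha)
Phi = complex(phi, theta)
Z = cmath.tanh(Phi / r)
dZ = 1 / (r * cmath.cosh(Phi / r) ** 2)   # dZ/dPhi
lhs = 3 * alpha * abs(dZ) ** 2 / (1 - (Z * Z.conjugate()).real) ** 2
rhs = 1 / (2 * math.cos(math.sqrt(2 / (3 * alpha)) * theta) ** 2)
```

The agreement of `lhs` and `rhs` reflects the hyperbolic identity $1-Z\bar Z=\cosh\frac{\Phi-\bar\Phi}{\sqrt{6\alpha}}/(\cosh\frac{\Phi}{\sqrt{6\alpha}}\cosh\frac{\bar\Phi}{\sqrt{6\alpha}})$ used in passing to Killing variables.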
Thus, we will work with $\alpha$-attractor models \rf{Kdisk}, \rf{Khalf} in the form \begin{equation} K= -3\alpha \log \Big [\cosh {\Phi-\bar \Phi \over \sqrt{6\alpha}} \Big] + S \bar{S}\, , \quad W= G\Big (e^{ \sqrt{2\over 3\alpha} \Phi }\Big) + S F\Big ( e^{ \sqrt{2\over 3\alpha} \Phi }\Big). \label{new} \end{equation} Here one should keep in mind that our original half-plane variable $T$ is related to $\Phi$ as follows, $T= e^{ \sqrt{2\over 3\alpha} \Phi }$. We will use the following notation \begin{equation} G\Big (e^{ \sqrt{2\over 3\alpha} \Phi }\Big)\equiv g(\Phi)\, , \qquad F\Big ( e^{ \sqrt{2\over 3\alpha} \Phi }\Big)\equiv f(\Phi) \ . \end{equation} \noindent { \it To summarize, in Killing variables the $\alpha$-attractor supergravity models are} \begin{equation} K= -3\alpha \log \Big [\cosh {\Phi-\bar \Phi \over \sqrt{6\alpha}} \Big] + S \bar{S}\, , \quad W= g( \Phi ) + S f( \Phi ) \ . \label{newR} \end{equation} We find that the potential at $\Phi=\bar \Phi$ and at $S=0$ is given by \begin{equation} V_{\rm total}= 2 |g'(\varphi)|^2 - 3 |g(\varphi)|^2 + |f(\varphi)|^2 \ , \end{equation} since the K{\"a}hler\, covariant derivatives are the same as simple derivatives \begin{equation} D_\Phi W = \partial _\Phi W=g'(\Phi) \, , \qquad D_S W= \partial_S W= f(\Phi) \ , \end{equation} and at $\Phi=\bar \Phi$, $S=0$, $K=0$ and the inverse kinetic terms $K^{S\bar S}=1$ and $K^{\Phi\bar \Phi}=2$. \section{ Reconstruction models of inflation with SUSY breaking and de Sitter exit } \label{sect:Reconstruction} In the form \rf{newR} our $\alpha$-attractor models can be used to provide a de Sitter exit from inflation as well as supersymmetry breaking at the minimum of the potential, without changing any of the advantages in describing inflation. 
One of the simplest possibilities for such models is to require that \begin{equation} g(\Phi) ={1\over b} f(\Phi) \ , \label{W}\end{equation} so that \begin{equation} K= -3\alpha \log \Big [\cosh {\Phi-\bar \Phi \over \sqrt{6\alpha}} \Big] + S \bar{S}\, , \quad W= \Big (S+{1\over b} \Big) f( \Phi ) \ . \label{new1} \end{equation} In Killing variables we find that at $\Phi=\bar \Phi$ and at $S=0$ \begin{equation} D_\Phi W = \partial _\Phi W={1\over b} f'(\Phi) \, , \qquad D_S W= \partial_S W= f(\Phi) \ . \end{equation} The expression for the potential at $\Phi-\bar \Phi=S=0$ is now very simple and is given by \begin{equation} V= \Big (1-{3\over b^2} \Big ) |f (\varphi) |^2 + {2\over b^2} | f'(\varphi) |^2 \ . \label {V}\end{equation} Assume that at the minimum of the potential at $\Phi=0$ \begin{equation} f (0)= D_S W= M \neq 0\, , \qquad f'(0) = b \, D_\Phi W= 0 \ . \end{equation} This means that at the minimum supersymmetry is broken only in the direction of the nilpotent superfield $S$ and unbroken in the inflaton direction, since $b\neq 0$. We take $b^2> 3$. This provides an opportunity to have de Sitter vacua with positive cosmological constant $\Lambda$ in our inflationary models so that \begin{equation} V|_{\Phi=0} = \Lambda\, , \qquad \Lambda\equiv \Big (1-{3\over b^2} \Big ) M^2\, , \qquad b^2 = {3\over 1-{\Lambda\over M^2}} \ . \end{equation} The cosmological constant is extremely small, $\Lambda \sim 10^{{-120}}$, so we would like to make a choice of $f$ in \rf{W} such that the inflationary potential is represented by the second term in \rf{V}. In this case, taking the condition $\vartheta=0$ into account, we can use a reconstruction method analogous to the one in \cite{Dall'Agata:2014oka}, where it was applied to canonical shift-symmetric K{\"a}hler\, potentials with Minkowski vacua. We will show here how to generalize it for a de Sitter exit from inflation and our logarithmic K{\"a}hler\, potentials.
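As an illustrative numerical sanity check (with arbitrary toy values of $b$, $M$ and $f$, not taken from the text), one can verify that any $f$ with $f(0)=M$ and $f'(0)=0$ gives $V(0)=\Lambda=(1-3/b^2)M^2$ in the potential above, and that the quoted relation $b^2=3/(1-\Lambda/M^2)$ inverts it:

```python
def V(phi, f, fprime, b):
    # V = (1 - 3/b^2)|f|^2 + (2/b^2)|f'|^2, evaluated at theta = 0, S = 0
    return (1 - 3 / b**2) * f(phi) ** 2 + (2 / b**2) * fprime(phi) ** 2

# toy example: f(phi) = M + c*phi^2 satisfies f(0) = M and f'(0) = 0
M, b, c = 0.1, 2.0, 0.7            # illustrative values only; b^2 > 3
f = lambda phi: M + c * phi * phi
fprime = lambda phi: 2 * c * phi

Lam = (1 - 3 / b**2) * M * M       # the cosmological constant at the minimum
```

The check works for any toy $f$ with the stated boundary conditions; only the values $f(0)$ and $f'(0)$ enter $V(0)$.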
If the potential during inflation is expected to be given by the function \begin{equation} V(\varphi) = {\cal F}^2(\varphi) \, , \label{potential}\end{equation} we have to take \begin{equation} \partial _\varphi f( \varphi )= {b\over \sqrt{2}}\, {\cal F}(\varphi) \ , \end{equation} and \begin{equation} f( \varphi ) = {b\over \sqrt{2}}\, \int {\cal F}(\varphi)\, d\varphi \, , \qquad f( \varphi )|_{\varphi=0} =M \ . \end{equation} In these models the value of the superpotential at the minimum defines the mass of the gravitino as follows \begin{equation} W_{\rm min}= {f\over b}|_{\Phi=0} = {M\over b}= {M\over \sqrt 3} \Big (1-{\Lambda\over M^2}\Big )^{1/2}= m_{3/2} \ , \end{equation} where $ \Lambda= M^2-3 m_{3/2}^2 $. The total potential at $\vartheta=0$ is therefore given by \begin{equation} V^{\rm total}= \Lambda {|f (\varphi)|^2 \over M^2} +|{\cal F}(\varphi)|^2 \ , \label{total}\end{equation} with \begin{equation} V^{\rm total}_{\rm min}= \Lambda = M^2-3 m_{3/2}^2 \ . \label{L}\end{equation} Getting from the supergravity model \rf{new1} to the Planck, LHC, dark energy potential \rf{total} requires stabilization of the field $\vartheta$ at $\vartheta = 0$. We have checked that for all values of $\alpha$ during inflation, up to slow roll parameters, the main contribution to the mass-to-Hubble ratio is of the form \begin{equation} {m^2_\vartheta \over H^2} \approx 6 {|f |^2\over |f'|^2} \gg 1 . \label{6a}\end{equation} Here the mass of $\vartheta$ is defined with a proper account taken of the non-trivial kinetic term. Equation \rf{6a} implies that $\vartheta$ quickly reaches its minimum at $\vartheta=0$ at the bottom of the de Sitter valley, and inflation proceeds due to a slow evolution of $\varphi$. However, near the minimum of the potential, where the slow roll parameters are not small, a more careful evaluation of the mass of $\vartheta$ has to be performed. We will do this in the examples below.
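The reconstruction recipe can be illustrated end-to-end with a toy choice ${\cal F}(\varphi)=\mu\varphi$, giving a quadratic inflationary potential (all numerical values below are illustrative, not from the text): integrate ${\cal F}$ to get $f$, then check the gravitino-mass and cosmological-constant relations.

```python
import math

mu, M, Lam = 1e-3, 0.05, 1e-6          # toy inflaton scale, SUSY-breaking scale, c.c.
b = math.sqrt(3 / (1 - Lam / M**2))    # b^2 = 3/(1 - Lambda/M^2)

def F(phi):                            # target: V_infl = F^2 = mu^2 phi^2
    return mu * phi

def f(phi):                            # f = (b/sqrt2) * integral of F, with f(0) = M
    return b / math.sqrt(2) * mu * phi**2 / 2 + M

m32 = M / b                            # gravitino mass: W_min = f(0)/b = M/b

def V_total(phi):                      # Lambda*|f|^2/M^2 + F^2, as in the text
    return Lam * f(phi) ** 2 / M**2 + F(phi) ** 2
```

One can then verify that $\Lambda=M^2-3m_{3/2}^2$, that $f'=(b/\sqrt2){\cal F}$, and that $V^{\rm total}(0)=\Lambda$, all to rounding error.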
\subsection{ The simplest T-model with broken SUSY and dS exit} We would like to have the inflationary part of the potential to be \begin{equation}\label{nobreakFanned} V_{\rm infl}(\varphi) = \alpha \,\mu^{2}\tanh^{2} {\varphi\over \sqrt {6\alpha}} \ . \end{equation} This means that \begin{equation} {\cal F}= \sqrt \alpha \mu \, \tanh {\varphi\over \sqrt {6\alpha}} \end{equation} and \begin{equation} f(\varphi)= \sqrt 3\, \alpha\, \mu \, b\, \log \Big [\cosh {\varphi\over \sqrt {6\alpha}}\Big] +M \ . \end{equation} At $\varphi=0$ one has $f(\varphi)=M$. A complete supergravity version of the model is \begin{equation} K= -3\alpha\, \log \Big [\cosh {\Phi-\bar \Phi \over \sqrt{6\alpha}} \Big] + S \bar{S}\, , \qquad W= \Big (S+{1\over b}\Big ) \Big [\sqrt 3 \, \alpha \, \mu \, b\, \log \Big[ \cosh {\Phi\over \sqrt {6\alpha}}\Big] +M \Big ] \, . \label{Tsugra}\end{equation} The total potential has a part proportional to the cosmological constant $\Lambda$ as well as a second part describing inflation: \begin{equation} V_{\rm total}= \Lambda {|f (\varphi)|^2 \over M^2} + \alpha \,\mu^{2}\tanh^{2} {\varphi\over \sqrt {6\alpha}} \ . \label{Tpot}\end{equation} The issue of the $\vartheta$ field stabilization which is required to get from \rf{Tsugra} to \rf{Tpot} illustrates the general case. We find that during inflation ${m^2_\vartheta \over H^2}$ is positive and large, so that $\vartheta$ quickly reaches 0. However, near the minimum of the potential, the evaluation of the mass of $\vartheta$ shows that it is positive provided that $\alpha \gtrsim 0.2$. Thus for $r\gtrsim 10^{-3}$ the model is safe without any stabilization terms even at the de Sitter minimum. For smaller $\alpha$ a bisectional curvature term has to be added to the K{\"a}hler\, potential, to stabilize $\vartheta$. It is given by an expression in disk variables of the form $A(Z, \bar Z) S\bar S (Z-\bar Z)^2$.
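For the T-model one can check numerically (with illustrative parameter values, not from the text) that the reconstructed $f$ indeed satisfies $f'(\varphi)=(b/\sqrt{2})\,{\cal F}(\varphi)$ with ${\cal F}=\sqrt{\alpha}\,\mu\tanh(\varphi/\sqrt{6\alpha})$, and that $V_{\rm total}(0)=\Lambda$:

```python
import math

alpha, mu, b, M = 1.0, 1e-2, 2.0, 0.05   # illustrative values only
Lam = (1 - 3 / b**2) * M**2

def F(phi):
    return math.sqrt(alpha) * mu * math.tanh(phi / math.sqrt(6 * alpha))

def f(phi):
    # f = sqrt(3)*alpha*mu*b*log(cosh(phi/sqrt(6 alpha))) + M
    return math.sqrt(3) * alpha * mu * b * math.log(math.cosh(phi / math.sqrt(6 * alpha))) + M

def V_total(phi):
    # eq. (Tpot): Lambda*|f|^2/M^2 + alpha*mu^2*tanh^2(phi/sqrt(6 alpha))
    return Lam * f(phi) ** 2 / M**2 + alpha * mu**2 * math.tanh(phi / math.sqrt(6 * alpha)) ** 2
```

The derivative identity holds because $\sqrt3\,\alpha\mu b/\sqrt{6\alpha} = (b/\sqrt2)\sqrt\alpha\,\mu$.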
The cosmological predictions of this model are represented by the straight vertical line in Fig. 1. A more direct comparison with the Planck results is provided by a figure presented in \cite{Kallosh:2015lwa}, which we reproduce here as Fig. \ref{f2a}. \begin{figure}[htb] \begin{center} \includegraphics[width=8cm]{AlphaPlanck60.pdf} \vspace*{-0.2cm} \caption{\footnotesize Cosmological predictions of the simplest T-model \rf{Tpot} with SUSY breaking and a non-vanishing cosmological constant $ \Lambda \sim 10^{{-120}}$. } \label{f2a} \end{center} \vspace{-0.6cm} \end{figure} Note that this model in disk variables and in a different K{\"a}hler\, frame was already presented in eqs. (3.20) and (3.21) in \cite{Kallosh:2015lwa}. An interesting property of the model \rf{Tpot} is that the amplitude of scalar perturbations does not depend on $\alpha$ and is determined only by $\mu \approx 10^{-5}$. \subsection{ The simplest E-model with broken SUSY and dS exit} We are looking at the inflationary $\alpha$-attractor model with \begin{equation} V_{\rm infl} = m^2 \Big(1- e^{-\sqrt {2\over 3\alpha} \varphi}\Big)^2 \ . \end{equation} This means that \begin{equation} {\cal F}= m \Big(1- e^{-\sqrt {2\over 3\alpha} \varphi}\Big) \end{equation} and \begin{equation} f(\varphi)= {m b\over \sqrt 2} \Big ( \varphi + \sqrt {3\alpha \over 2} \Big (e^{-\sqrt {2\over 3\alpha} \varphi} -1 \Big ) \Big )+M \ . \end{equation} At $\varphi=0$ one has $f(\varphi)=M$. Thus our complete model is \begin{equation} K= -3\alpha \log \Big [\cosh {\Phi-\bar \Phi \over \sqrt{6\alpha}} \Big] + S \bar{S}\, , \qquad W= \Big (S+{1\over b}\Big ) \Big [{mb\over \sqrt 2} \Big ( \Phi + \sqrt {3\alpha \over 2} \big (e^{-\sqrt {2\over 3\alpha} \Phi} -1\big ) \Big )+M \Big ] \, .
\label{Esugra}\end{equation} The total potential has a part proportional to the cosmological constant $\Lambda$ as well as a second part describing inflation: \begin{equation} V_{\rm total}= \Lambda {|f(\varphi)|^2\over M^2} +m^2 \Big(1- e^{-\sqrt {2\over 3\alpha} \varphi}\Big)^2 \, . \label{Epot}\end{equation} The issue of the $\vartheta$ field stabilization which is required to get from \rf{Esugra} to \rf{Epot} has been studied separately and again confirms the general behavior discussed below eq. \rf{L} for the inflationary part. Again, near the minimum of the potential, the evaluation of the mass of $\vartheta$ shows that it is positive provided that $\alpha >0.2$. For smaller values of $\alpha$, a bisectional curvature term has to be added to the K{\"a}hler\, potential, to stabilize $\vartheta$. It is of the form $A(Z, \bar Z) S\bar S (Z-\bar Z)^2$ in disk variables. This model for $\alpha=1$ in half-plane variables in the case $\Lambda=0$ was proposed in \cite{Lahanas:2015jwa} in eqs. (28), (37). For the generic case of $\alpha\neq 1$ a related model was given in eqs. (4.23), (4.24) in \cite{Kallosh:2015lwa}. More general models can be constructed following the rules for this class of models proposed above in eqs. \rf{potential} - \rf{total}. \section{General models of inflation with SUSY breaking and dark energy} \label{sect:genModels} We have learned above how to build supergravity models by reconstructing superpotentials to produce a given choice of the bosonic inflationary potential $V(\varphi) = {\cal F}^2(\varphi)$ with our logarithmic K{\"a}hler\, potential $K= -3\alpha\, \log \Big [\cosh {\Phi-\bar \Phi \over \sqrt{6\alpha}} \Big] + S \bar{S}$ in Killing variables. The exact answer for $ W= g( \Phi ) + S f( \Phi )$ can be obtained under the condition $g(\Phi) ={1\over b} f(\Phi)$ and requires only a simple integration: $f( \varphi )= {b\over \sqrt{2}}\, \int {\cal F}(\varphi)\, d\varphi$.
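The same consistency checks can be run for the E-model (illustrative parameters; the integration constant in $f$ is fixed so that $f(0)=M$, as stated in the text):

```python
import math

alpha, m, b, M = 1.0, 1e-2, 2.0, 0.05     # illustrative values only
c = math.sqrt(2 / (3 * alpha))

def F(phi):
    # target: V_infl = m^2 (1 - e^{-c phi})^2
    return m * (1 - math.exp(-c * phi))

def f(phi):
    # f = (m b/sqrt2) [phi + sqrt(3 alpha/2)(e^{-c phi} - 1)] + M, so that f(0) = M;
    # note sqrt(3 alpha/2) * c = 1, which makes f' = (b/sqrt2) F
    return m * b / math.sqrt(2) * (phi + math.sqrt(3 * alpha / 2) * (math.exp(-c * phi) - 1)) + M
```

Differentiating kills the constant piece, so $f'=(b/\sqrt2){\cal F}$ holds for any choice of the integration constant; only $f(0)=M$ fixes it.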
Obviously this can be carried out in any variables as long as one takes care of the K{\"a}hler\ measure relating the variables used to the functional form of the canonical variables, but it is particularly transparent in Killing-adapted variables as the measure is unity. Instead of the reconstruction strategy we may start with our models in \rf{newR} with superpotentials of the form \begin{equation} W= g(\Phi) + S f(\Phi) \end{equation} without the constraint that $g(\Phi) ={1\over b} f(\Phi)$. In this case the potentials are given by $ V_{\rm total}= 2 |g'(\varphi)|^2 - 3 |g(\varphi)|^2 + |f(\varphi)|^2 $. Near the minimum of the potential one has to check that we still satisfy the requirements that $D_S W= M \neq 0$ and $ D_\Phi W= 0$ to preserve the nice de Sitter exit properties with SUSY breaking as described in eq. \rf{V1}. In these models we end up with more complicated bosonic potentials describing some combination of our $\alpha$-attractor models. However, these models are still capable of fitting the cosmological observables as well as providing the level of SUSY breaking in dS vacua with a controllable gravitino mass. Some examples of these models were given in \cite{Kallosh:2015lwa}, in eqs. (2.4), (3.15) and (2.7), (3.17). \begin{figure}[htb] \vspace*{-2mm} \begin{center} \includegraphics[width=11cm]{Planck.pdf} \vspace*{-0.2cm} \caption{\footnotesize The potential for the supergravity model in eq. \rf{newRexample} as a function of $ \varphi$ and $\vartheta$. It has a de Sitter minimum at $ \varphi=\vartheta=0$ where $V_{\rm min} =\Lambda$. Supersymmetry is broken at this minimum with $D_S W=M$; the mass of the gravitino is $m^2_{3/2}= {M^2\over 3} (1-{\Lambda\over M^2})$. The inflationary de Sitter valleys have a nice feature known for models with a Minkowski minimum with unbroken SUSY, studied in \cite{Carrasco:2015rva}.
These valleys provide nice initial conditions for inflation to start in these models.} \label{Planck} \end{center} \vspace{-0.5cm} \end{figure} Here we will present an example where in disk variables the superpotential is relatively simple whereas the potential is not simple, but satisfactory for our purpose. We take the inflaton shift-symmetric K{\"a}hler\, potential and the superpotential of the form \begin{equation} K= -{3\over 2} \alpha \log \left[{(1- Z\bar Z)^2\over (1-Z^2) (1-\overline Z^2)} \right] +S\bar S\, , \quad S^2(x, \theta)=0\, , \quad W= \Big (S+ {1-Z^2\over b}\Big ) ( \sqrt {3\alpha } \, m^2 \, Z^2 + M)\, \ . \label{KdiskExample}\end{equation} The same model in Killing variables $\Phi$, where $Z= \tanh {\Phi\over \sqrt{6 \alpha}}$, is \begin{equation} K= -3\alpha \log \Big [\cosh {\Phi-\bar \Phi \over \sqrt{6\alpha}} \Big] + S \bar{S} , \quad W= \Big( {1\over b}{ \cosh ^{-2} \Big ({\Phi\over \sqrt {6\alpha}}\Big ) } + S\Big ) \Big ( \sqrt { 3 \alpha} \, m^2 \tanh^2 \Big ({\Phi\over \sqrt {6\alpha}}\Big ) +M \Big) . \label{newRexample} \end{equation} The potential at $S=0$ and $\vartheta=0$ has the form $ V_{\rm total}= 2 |g'(\varphi)|^2 - 3 |g(\varphi)|^2 + |f(\varphi)|^2 $, where in our case \begin{equation} g(\varphi)={1\over b}{ \cosh ^{-2} \Big ({\varphi\over \sqrt {6\alpha}}\Big ) } \Big ( \sqrt { 3 \alpha} \, m^2 \tanh^2 \Big ({\varphi\over \sqrt {6\alpha}}\Big ) +M \Big) , \quad f (\varphi) = \sqrt { 3 \alpha} \, m^2 \tanh^2 \Big ({\varphi\over \sqrt {6\alpha}}\Big ) +M \ . \end{equation} We have checked that the mass of the field $\vartheta$ is positive everywhere for all $\alpha >0.02$ and that during inflation the ratio ${m_{\vartheta}^2\over H^2}=6$. This can also be seen from Fig. \ref{Planck}, where we plotted our potential. The inflationary de Sitter valleys retain the same width even for arbitrarily large values of $\varphi$.
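For this example model one can evaluate $V_{\rm total}=2|g'|^2-3|g|^2+|f|^2$ numerically (again with illustrative parameter values) and confirm both the de Sitter minimum $V(0)=\Lambda$ and the inflationary plateau at large $\varphi$:

```python
import math

alpha, m2, b, M = 1.0, 1.0, 2.0, 0.1      # illustrative; m2 stands for m^2
Lam = (1 - 3 / b**2) * M**2
r6a = math.sqrt(6 * alpha)

def f(phi):
    return math.sqrt(3 * alpha) * m2 * math.tanh(phi / r6a) ** 2 + M

def g(phi):
    # g = f / (b cosh^2(phi/sqrt(6 alpha)))
    return f(phi) / (b * math.cosh(phi / r6a) ** 2)

def V_total(phi, h=1e-5):
    gp = (g(phi + h) - g(phi - h)) / (2 * h)   # g' by central differences
    return 2 * gp**2 - 3 * g(phi) ** 2 + f(phi) ** 2
```

At large $\varphi$, $g\to0$ while $f\to\sqrt{3\alpha}\,m^2+M$, so the potential approaches the plateau value $(\sqrt{3\alpha}\,m^2+M)^2$.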
The predictions of this class of models for $n_{s}$ and $r$ practically coincide with the predictions of the models discussed in Sections 4.1 and 4.2 for $\alpha = O(1)$. However, at $\alpha \gg 1$ the predictions are somewhat different. We show these predictions in Fig. \ref{flast} by a thin green line for $20> \alpha > 1/3$ and for the number of e-foldings $N = 60$. The top of the line, indicated by the dark red star, corresponds to $\alpha = 20$. The line ends at the pink star corresponding to $\alpha = 1/3$. We see that the predictions of this model in the large interval $20> \alpha > 1/3$ belong to the dark blue region favored by the Planck data. \begin{figure}[htb] \begin{center} \includegraphics[width=8cm]{twenty.pdf} \vspace*{-0.2cm} \caption{\footnotesize Predictions of the model \rf{KdiskExample} for $20> \alpha > 1/3$ are shown by the thin green line. The top of the line, indicated by the dark red star, corresponds to $\alpha = 20$. The line ends at the pink star corresponding to $\alpha = 1/3$. } \label{flast} \end{center} \end{figure} Thus in the last two sections we have presented several supergravity models where ${\delta\rho\over \rho}$, $n_s$ and $\Lambda $ take their known observable values, whereas the gravitino mass $m_{3/2}$ and the tensor-to-scalar ratio $r$ are free parameters which can take a broad range of values. \section{Discussion} \label{sect:Discussion} In this paper we have pursued a program of describing the main features of the evolution of the universe, early-universe inflation and the current acceleration, compatible with the data, as well as providing an explanation of the possible origin of the supersymmetry breaking and the mass of the gravitino, compatible with future data from particle physics. Certain features of our four-parameter `primordial' supergravity models are motivated by non-perturbative string theory.
The origin of the nilpotent superfield $S^2(x, \theta)=0$ in these constructions is related to D-brane physics, where one finds the fermionic Volkov-Akulov goldstino multiplet \cite{Volkov:1973ix,rocek,Komargodski:2009rz} on the world-volume of the D-branes \cite{Kallosh:2014wsa}. Our new cosmological models in string-theory-inspired supergravity suggest possible bottom-up ${\cal N}=1$ supergravity models of inflation, which might lead to a successful phenomenology of the early universe as well as of the universe which is accelerating now. These models also address the supersymmetry breaking issues. They differ from more traditional string cosmology models which were developed during the last decade, the latest models being discussed in \cite{Flauger:2014ana,Buchmuller:2015oma,Dudas:2015lga} and other papers. Our models have fundamental connections to string theory via the nilpotent superfield associated with the fermions on the D-branes. Another connection is via logarithmic K{\"a}hler\, potentials which are required for ${\cal N}\geq 2$ supergravity and are present in string theory motivated supergravity. And finally, the value of the positive cosmological constant in our models can only be explained with reference to the string landscape. The mass of the gravitino, $m_{3/2}$, and the level of gravity waves, $r$, are free parameters in our new cosmological models, to be determined by future experiments. The progress in this direction was based on a better understanding of moduli stabilization and on the use of supergravity models with the universal spontaneous supersymmetry breaking via a fermionic goldstino multiplet. The reason for such universality is the following: the nilpotency condition $S^2(x, \theta)=0$ for $S=s+ \sqrt{2}\, \theta\, \psi_s + \, \theta^2 F_s$ can be satisfied only if $F_s \neq 0$.
In this case the sgoldstino is not a fundamental scalar anymore but is given by a bilinear combination of fermionic goldstinos divided by the value of the auxiliary field $F_s$ \begin{equation} s= {\psi_s \psi_s\over 2 F_s} \ . \end{equation} There is no non-trivial solution if SUSY is unbroken and $F_s=0$, i.e. only $s=\psi_s=0$ solves the equation $S^2(x, \theta)=0$. Thus, by requiring a fermionic Volkov-Akulov goldstino nilpotent multiplet in the supergravity theory, we end up with the universal value of the supergravity potential at its minimum, with $e^K |F_S |^2=M^2$ \begin{equation} V=e^K( |F_S |^2 - 3 |W|^2)= M^2- 3 m_{3/2}^2= \Lambda >0 \ . \end{equation} {\it The new always positive goldstino contribution originates in an updated version of the KKLT uplifting via the $\overline D$3 brane, with manifest spontaneously broken supersymmetry} \cite{Kallosh:2014wsa}. Our minimal supergravity models depend on two superfields, one of them typically represented using either a Poincar\'e-disk variable $Z$, or a half-plane variable $T$. A new variable $\Phi$, which we used extensively in this paper, describes the same geometry but in a Killing-adapted frame where the metric does not depend on the inflaton direction. We call $\Phi$ a Killing variable. We explained the relation between these three holomorphic variables in Section~\ref{sect:Killing}. The canonically normalized inflaton in our models is $\varphi = {\rm Re} \, \Phi$. The inflaton partner scalar $\vartheta = {\rm Im} \, \Phi$ is supposed to vanish, which happens automatically during inflation in the models considered in this paper. In all our models in $\Phi$-variables the inflaton shift-symmetric K{\"a}hler\, potential is $ K= -3\alpha\, \ln \Big [\cosh {\Phi-\bar \Phi \over \sqrt{6\alpha}} \Big] + S \bar{S}$ and the superpotential is $W= g(\Phi) + S f(\Phi)$. The nilpotent multiplet $S$ does not have fundamental scalars; it has only a fermionic goldstino.
In models with a canonical K{\"a}hler\, potential for the nilpotent multiplet, $ K= S \bar{S}$, stabilization of $\vartheta$ in all models presented in this paper does not require any additional stabilization terms, as long as $\alpha > 0.2$. For smaller $\alpha$ one can stabilize $\vartheta$ by adding a bisectional curvature term to the K{\"a}hler\, potential of the form (in disk variables) $A(Z, \bar Z) S\bar S (Z-\bar Z)^2$. Thus, in the presence of the nilpotent superfield $S$ the problem of stabilization of the direction orthogonal to the inflaton is solved during inflation as well as at the minimum of the potential. An unexpected benefit from the new tools for moduli stabilization during inflation was realized very recently. Many examples of previously known supergravity models, compatible with current and future cosmological observations, can now easily describe dark energy via tiny de Sitter vacua, and spontaneous breaking of supersymmetry. In this paper we provide examples of such generalizations of $\alpha$-attractor models \cite{Kallosh:2013hoa,Ferrara:2013rsa,Kallosh:2013yoa,Cecotti:2014ipa,Kallosh:2013tua,Galante:2014ifa,Kallosh:2015lwa,Kallosh:2015zsa,Carrasco:2015uma}. These models interpolate between various polynomial models $\varphi^{2m}$ at very large $\alpha$ and the attractor line for $\alpha\leq 1$, see Figs. 1, 2. Therefore they are flexible with regard to data on B-modes, $r$. They provide a seamless natural fit to Planck data. For these kinds of cosmological models we have shown that it is possible to break supersymmetry without an additional hidden sector, with a controllable parameter of supersymmetry breaking. With an inflationary scale $\sim 10^{-5} M_{p}$ the scale of supersymmetry breaking can be $M\sim (10 ^{-13}- 10^{-14}) M_{p} $, compatible with the discovery of supersymmetry at LHC. With $ M \gg 100- 1000$ TeV we will have equally good inflationary models, compatible with an absence of observed supersymmetry at LHC.
In fact, such inflationary models are even easier to construct. In this paper we developed two methods of constructing inflationary models with supersymmetry breaking and a de Sitter minimum. One is the reconstruction method in Section~\ref{sect:Reconstruction}, which allows one to take any desirable inflationary model, in particular any of our $\alpha$-attractor models, and enhance it by SUSY breaking and a small cosmological constant. An advantage of this method, following \cite{Dall'Agata:2014oka,Kallosh:2014hxa,Lahanas:2015jwa,Kallosh:2015lwa}, is that it is powerful and easy. It requires only a simple integration of a given function. Thus one can obtain nearly arbitrary inflationary potentials, just as was done in \cite{Kallosh:2010ug,Kallosh:2010xz}, so one can fit any set of observational data in the context of supergravity-based models of inflation. Moreover, in all of these models one can introduce SUSY breaking of any magnitude without introducing extra scalars such as the Polonyi field. It can be done while preserving all desirable inflationary predictions. Thus from the purely phenomenological point of view, the reconstruction method is a great tool offering us enormous flexibility. On the other hand, this method does not use specific advantages of the cosmological attractors, including their geometric origin and the stability of their predictions with respect to the change of the inflationary potential. In this sense, the method used for deriving the model described in Section~\ref{sect:genModels}, as well as some other similar models found earlier in \cite{Kallosh:2015lwa}, preserves the attractor features of the theory by construction, for all values of the SUSY breaking parameters and an arbitrary cosmological constant. Some of the features of these models (the existence of a dS valley of a constant width and depth shown in Fig. 3) play an important role in solving the initial conditions problem for inflation in these models.
The details of this analysis can be found in \cite{Carrasco:2015rva} for $\alpha$-attractor models with a Minkowski minimum and unbroken SUSY. Here we see that in generic models with a de Sitter exit and controllable SUSY breaking, the initial conditions problem for inflation is solved just as in the simpler case studied in \cite{Carrasco:2015rva}. \section*{Acknowledgments} We are grateful to S. Dimopoulos, M. Dine, S. Kachru, J. March-Russell, D. Roest, M. Scalisi, E. Silverstein, F. Quevedo and F. Zwirner for a discussion of cosmology and particle physics related issues. This work was supported by the SITP and by the NSF Grant PHY-1316699. RK is also supported by the Templeton foundation grant `Quantum Gravity Frontiers,' and AL is also supported by the Templeton foundation grant `Inflation, the Multiverse, and Holography.' JJMC received support from the Templeton foundation grant `Quantum Gravity Frontiers,' and is supported by the European Research Council under ERC-STG-639729, `Predictive Quantum Field Theory'.
\section{Introduction}\label{intro} In pulsar wind nebulae (PWNe), gamma-ray bursts (GRBs), and jets from active galactic nuclei (AGNs), signatures of non-thermal processes are revealed by power-law radiation spectra spanning an extremely wide range of wavelengths, from radio to X-rays, and beyond. Yet, it is still a mystery how the emitting particles can be accelerated up to ultra-\RecentA{relativistic} energies \RecentA{and how the strong magnetic fields are generated, as} required in order to explain the observations. In most models, non-thermal particles and near-equipartition fields are thought to be produced at relativistic shock fronts, but the details of the mechanisms of particle acceleration and magnetic field generation are still not well understood. Particle acceleration in shocks is usually attributed to the Fermi process, where particles are energized by bouncing back and forth across the shock. Despite its importance, the Fermi process is still not understood from first principles. The highly nonlinear coupling between accelerated particles and magnetic turbulence -- which is generated by the particles, and at the same time governs their acceleration -- is extremely hard to incorporate in analytic models. Only in recent years, thanks to major breakthroughs on analytical and numerical grounds, has our understanding of the Fermi process in relativistic shocks \RecentA{significantly advanced}. This is the subject of the present review. \RecentA{Relativistic shocks pose some unique challenges with respect to their non-relativistic counterparts. For example, the distribution of accelerated particles can no longer be approximated as isotropic if the shock is relativistic. In a relativistic shock, the electric and magnetic fields significantly mix as one switches between upstream and downstream frames of reference. 
And unlike non-relativistic shocks, where some aspects of the theory can be tested by direct spacecraft measurements, relativistic shocks are only constrained by remote observations. For recent reviews of relativistic shocks, see \cite{BykovTreumann11,2012SSRv..173..309B}. } This chapter is organized as follows. First, we review recent analytical advances on the theory of particle acceleration in relativistic shocks, arguing that the accelerated particle spectrum and its power-law slope in the ultra-relativistic limit, $s_\gamma\equiv -d\log N/d\log \gamma\simeq2.2$ (where $\gamma$ is the particle Lorentz factor), are fairly robust (Section \ref{particles}). Here, we assume {\it a priori} that some magnetic turbulence exists on both sides of the shock, such that the Fermi process can operate. Next, we describe the plasma instabilities that are most relevant for generating this turbulence (Section \ref{waves}), stressing the parameter regime where the so-called Weibel (or ``filamentation'') instability -- which is often thought to mediate the Fermi process in weakly magnetized relativistic shocks -- can grow. Then, we summarize recent findings from particle-in-cell (PIC) simulations of relativistic shocks, where the non-linear coupling between particles and magnetic waves can be captured from first principles (Section \ref{PIC}). Finally, we describe the astrophysical implications of these results for the acceleration of ultra high energy cosmic rays (UHECRs) and for the radiative signatures of PWNe and GRB afterglows (Section \ref{rad}; for a review of PWNe, see Kargaltsev et al. (2015) in the present volume; for a review of GRBs, see Racusin et al. (2015) in the present volume). We briefly conclude in Section \ref{conc}. 
\section{Particle Acceleration in Relativistic Shocks}\label{particles} Diffusive (Fermi) acceleration of charged particles in collisionless shocks is believed to be responsible for the production of non-thermal distributions of energetic particles in many astronomical systems \citep[][but see, \emph{e.g.} \cite{AronsTavani94} for a discussion of alternative shock acceleration processes]{blandford_eichler_87, MalkovDrury01}. The Fermi acceleration process in shocks is still not understood from first principles: particle scattering in collisionless shocks is due to electromagnetic waves formed around the shock, but no present analytical formalism self-consistently calculates the generation of these waves, the scattering and acceleration of particles, and the backreaction of these particles on the waves and on the shock itself. The theory of particle acceleration was first developed mainly by evolving the particle distribution under some Ansatz for the scattering mechanism (\emph{e.g.} diffusion in pitch angle), within the ``test particle'' approximation, where modifications of wave and shock properties due to the high energy particles are neglected. This phenomenological approach proved successful in explaining the spectrum of relativistic particle distributions inferred from observations, although a more careful approach is needed to account for the energy fraction deposited in each particle species (electrons, positrons, protons, and possibly heavier ions), and to test the Ansatz of the scattering prescription. For \emph{non-relativistic} shocks, the linear theory of diffusive particle acceleration, first developed in 1977 \citep{Krymskii77, AxfordEtAl78, bell_78, blandford_ostriker_78}, yields a power-law distribution $d^3N/d^3p\propto p^{-{s_p}}$ of particle momenta $p$, with a spectral index \begin{equation} {s_p} = s_\gamma+2 = 3\beta_u / (\beta_u-\beta_d) \, . 
\label{eq:SIsoNR} \end{equation} Here, $\beta$ is the fluid velocity normalized to the speed of light $c$ in the frame of the shock, which is assumed planar and infinite, and subscripts $u$ ($d$) denote the upstream (downstream) plasma. For strong shocks in an ideal gas of adiabatic index $\Gamma=5/3$, this implies ${s_p}=4$ (\emph{i.e.} $s_\gamma=2$; constant energy per logarithmic energy interval, since $p^2d^3N/d^3p\propto p^{-2}$), in agreement with observations. The lack of a characteristic momentum scale, under the above assumptions, implies that the spectrum remains a power-law in the relativistic case, as verified numerically \citep{ostrowski_bednarz_98, achterberg_01}. The particle drift downstream of the shock implies that more particles are moving downstream than upstream; this anisotropy is of order of $\beta_u$ when measured in the downstream frame \citep{keshet_waxman_05}. Thus, while particle anisotropy is negligible for non-relativistic shocks, the distribution becomes highly anisotropic in the relativistic case, even when measured in the more isotropic downstream frame. Consequently, one must simultaneously determine the spectrum and the angular distribution of the particles, which is the main difficulty underlying the analysis of test particle acceleration when the shock is relativistic. Observations of GRB afterglows led to the conclusion that highly relativistic collisionless shocks produce a power-law distribution of high energy particles with ${s_p}=4.2\pm0.2$ \citep{Waxman97spectrum, FreedmanWaxman01, BergerEtAl03}. This triggered a numerical investigation of particle acceleration in such shocks, showing that ${s_p}$ indeed approaches the value of $4.2$ for large shock Lorentz factors ($\gamma_{u}\equiv(1-\beta_u^2)^{-1/2}\gg1$), in agreement with GRB observations, provided that particle scattering is sufficiently isotropic. 
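Eq.~(\ref{eq:SIsoNR}) is easy to check numerically: for a strong non-relativistic shock with compression ratio $r=\beta_u/\beta_d=4$ it gives the standard ${s_p}=4$, \emph{i.e.} $s_\gamma=2$ (the velocity values below are arbitrary illustrations):

```python
def s_p_nonrel(beta_u, beta_d):
    # Eq. (SIsoNR): s_p = 3*beta_u / (beta_u - beta_d)
    return 3 * beta_u / (beta_u - beta_d)

# strong non-relativistic shock: compression ratio r = beta_u/beta_d = 4
beta_u = 0.01
beta_d = beta_u / 4
```

Writing $\beta_d=\beta_u/r$ shows that ${s_p}=3r/(r-1)$ depends only on the compression ratio $r$, not on the shock speed itself.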
The spectral index ${s_p}$ was calculated under the test particle approximation for a wide range of shock velocities, various equations of state, and different scattering prescriptions. This was achieved by approximately matching numerical eigenfunctions of the transport equation between upstream and downstream \citep{KirkSchneider87, HeavensDrury88, kirk_00}, by Monte Carlo simulations \citep{ostrowski_bednarz_98, achterberg_01,ellison_double_02,2003ApJ...589L..73L,niemiec_ostrowski_04,lemoine_revenu_06,EllisonEtAl13}, by expanding the distribution parallel to the shock front \citep{keshet_waxman_05}, and by solving for moments of the angular distribution \citep{Keshet06}. These studies have assumed rest frame diffusion in pitch angle or in the angle between particle velocity and shock normal. These two assumptions yield similar spectra in the limit of ultra-relativistic shocks \citep{ostrowski_bednarz_02}. {As discussed later in this review, one expects these assumptions to hold at relativistic shocks. 
However, some scenarios involve the conversion of the accelerated species into a neutral state and then back -- \emph{e.g.} proton to neutron and then back to proton via photo-hadronic interactions \citep{2003PhRvD..68d3003D} or electron to photon and then back to electron through Compton and pair production interactions \citep{stern_08} -- in which case the particle may have time to suffer a large angle deflection upstream of the shock, leading to large energy gains and generically hard spectra \citep{ostrowski_bednarz_98,MeliQuenby03, BlasiVietri05}.} For isotropic, small-angle scattering in the fluid frame, expanding the particle distribution about the shock grazing angle \citep{keshet_waxman_05} leads to a generalization of the non-relativistic Eq.~(\ref{eq:SIsoNR}) that reads \begin{equation} {s_p} = (3\beta_u - 2\beta_u \beta_d^2 + \beta_d^3) / (\beta_u - \beta_d) \, , \label{eq:SIso} \end{equation} in agreement with numerical studies \citep{kirk_00, achterberg_01} over the entire range of $\beta_u$ and $\beta_d$. In particular, in the ultra-relativistic shock limit, the spectral index becomes \begin{equation} {s_p}(\beta_u\rightarrow 1, \beta_d \rightarrow 1/3) = 38/9 = 4.222\ldots \end{equation} The spectrum is shown in \fig{SIso} for different equations of state, as a function of the shock four-velocity $\gamma_u\beta_u$. \begin{figure}[h] \begin{center} \includegraphics[width=0.75\textwidth]{TestPartSpect.png} \caption{\label{fig:SIso} Spectral index according to Eq.~(\ref{eq:SIso}) \citep[][curves]{keshet_waxman_05} and to a numerical eigenfunction method \citep[][symbols]{kirk_00}, as a function of $\gamma_u \beta_u$, for three different types of shocks \citep{KirkDuffy99}: a strong shock with the J\"{u}ttner/Synge equation of state (solid curve and crosses), a strong shock with fixed adiabatic index $\Gamma=4/3$ (dashed curve and x-marks), and for a relativistic gas where $\beta_u \beta_d=1/3$ (dash-dotted curve and circles). 
} \label{fig:sh} \end{center} \end{figure} The above analyses assumed that the waves scattering the particles move, on average, with the bulk fluid velocity. More accurately, one should replace $\beta$ by the mean velocity of the waves that are scattering the particles. In the shock precursor (see \S\ref{subsec:precursor}), the scattering waves are expected to be slower than the incoming flow, leading to a softer spectrum (smaller $\beta_u$ in Eq.~(\ref{eq:SIso})). Small-angle scattering can be parameterized by the angular diffusion function $\mathcal{D}\equiv \langle(\Delta\theta)^2/\Delta t\rangle$, where $\theta$ is the angle of the particle velocity, taken here with respect to the shock normal, and angular brackets denote an ensemble average. The function $\mathcal{D}=\mathcal{D}(\theta,p,z)$ should be specified on both sides of the shock, and in general depends on $\theta$, on the particle momentum $p$, and on its distance $z$ from the shock front. For scattering off waves with a small coherence length $\lambda\ll r_L$, where $r_L=pc/(eB)$ is the Larmor radius, roughly $(r_L/\lambda)^2$ uncorrelated scattering events are needed in order to produce an appreciable deflection, so the mean deflection time scales as $\mathcal{D}^{-1}\sim r_L^2/(\lambda c) \propto p^2$ \citep{achterberg_01,2009MNRAS.393..587P}. Here, $B$ is the magnetic field, and $e$ is the electron's charge. Simulations \citep{sironi_13} confirm this scaling at early times; some implications are discussed in Section \ref{PIC}. The precise dependence of $\mathcal{D}$ upon $z$ is not well known. It is thought that $\mathcal{D}$ slowly and monotonically declines away from the shock, as the energy in self-generated fields decreases. However, the extents of the upstream precursor and downstream magnetized region are not well constrained observationally, and in general are numerically inaccessible in the foreseeable future.
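The random-walk estimate above can be made concrete with a short numerical sketch (the Larmor radius and coherence length below are arbitrary illustrative values, not tied to any specific source):

```python
C = 3.0e10  # speed of light [cm/s]; all numbers below are purely illustrative

def deflection_time(r_L, lam):
    """Random-walk time to deflect by ~1 rad off turbulence of coherence
    length lam << r_L: each cell gives a kick ~lam/r_L and takes ~lam/c,
    so ~(r_L/lam)**2 uncorrelated kicks are needed."""
    n_kicks = (r_L / lam) ** 2
    return n_kicks * lam / C

# Since r_L = p*c/(e*B), doubling the momentum quadruples the deflection time:
t1 = deflection_time(1.0e6, 1.0e3)   # hypothetical r_L and lam, in cm
t2 = deflection_time(2.0e6, 1.0e3)
print(t2 / t1)  # -> 4.0
```

This reproduces the $p^2$ scaling quoted in the text, which underlies the slow acceleration off small-scale turbulence discussed in Section \ref{PIC}.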
For an evolved magnetic configuration, it is natural to assume that the diffusion function is approximately separable in the form $\mathcal{D}=D(\theta)D_2(p,z)$. Here, $D_2$ \citep[which may be approximately separable as well, but see][]{2007ApJ...655..375K} can be eliminated from the transport equation by rescaling $z$, such that the spectrum depends only on the angular part $D(\theta)$. The spectrum is typically more sensitive to the downstream diffusion function $D_d$ than it is to the upstream $D_u$. In general, an enhanced $D_d$ along (opposite to) the flow yields a softer (harder) spectrum; the trend is roughly reversed for $D_u$ \citep{Keshet06}. Thus, the spectrum may deviate significantly from that of isotropic diffusion, in particular in the ultra-relativistic limit \citep{kirk_00, Keshet06}. However, the spectral slope ${s_p}$ is not sensitive to localized changes in $D$ at angles perpendicular to the flow \citep{Keshet06}. For roughly forward-backward symmetric scattering in the downstream frame, as suggested by PIC simulations, ${s_p}$ is approximately given by its isotropic diffusion value in Eq.~(\ref{eq:SIso}) (Keshet et al., in preparation). Particle acceleration is thought to be efficient, at least in weakly magnetized or quasi-parallel shocks, as discussed below. Thus, the relativistic particles are expected not only to generate waves, but also to slow down and heat the bulk plasma \citep{blandford_eichler_87}. As particles with higher energies are expected to diffuse farther upstream and slow the plasma, lower-energy particles are effectively accelerated by a slower upstream. Consequently, if the scattering waves are assumed to move with the bulk plasma, the spectrum would no longer be a power-law. However, this effect may be significant only for mildly relativistic shocks, with Lorentz factors below $\gamma_u\sim 3$ \citep{ellison_double_02, EllisonEtAl13}.
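The limiting values of the spectral index quoted in this section follow directly from Eq.~(\ref{eq:SIso}); a minimal numerical check, in exact rational arithmetic (the non-relativistic strong-shock case assumes the standard compression ratio of 4, \emph{i.e.} $\beta_d=\beta_u/4$):

```python
from fractions import Fraction

def s_p(bu, bd):
    """Spectral index for isotropic small-angle scattering, Eq. (eq:SIso):
    s_p = (3*bu - 2*bu*bd**2 + bd**3) / (bu - bd)."""
    return (3*bu - 2*bu*bd**2 + bd**3) / (bu - bd)

# Ultra-relativistic limit: beta_u -> 1, beta_d -> 1/3 gives exactly 38/9.
ultra = s_p(Fraction(1), Fraction(1, 3))
assert ultra == Fraction(38, 9)       # = 4.222...

# Non-relativistic strong shock (Gamma = 5/3): beta_d = beta_u/4 recovers the
# standard diffusive-acceleration result s_p -> 4 as beta_u -> 0.
bu = Fraction(1, 1000)
print(float(s_p(bu, bu / 4)))         # very close to 4
```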
To understand the energy, composition, and additional features of the accelerated particles, such as the acceleration time and energy cutoffs, one must not only analyze the scattering of these particles (for example, by deriving $\mathcal{D}$), but also address the injection problem, namely the process by which a small fraction of particles becomes subject to significant acceleration. Such effects were investigated using Monte Carlo techniques \citep{ostrowski_bednarz_98, EllisonEtAl13}, in the so-called ``thermal leakage'' model, where fast particles belonging to the downstream Maxwellian are assumed to be able to cross the shock into the upstream. More self-consistent results on particle injection based on PIC simulations are presented in Section \ref{PIC}. To uncover the physics behind the injection and acceleration processes, we next review the generation of electromagnetic waves in relativistic shocks. \section{Plasma Instabilities in Relativistic Shocks}\label{waves} \subsection{The Shock Precursor}\label{subsec:precursor} The collisionless shock transition is associated with the build-up of some electromagnetic barrier, which is able to slow down and nearly isotropize the incoming unshocked plasma. In media of substantial magnetization\footnote{The magnetization is defined as $\sigma\,=\,B^2/\left[4\pi\gamma_{u}(\gamma_{u}-1)n'mc^2\right]$ in terms of $B$, the large-scale background magnetic field measured in the shock front rest frame, and $n'$, the proper upstream particle density. The mass $m$ is $m_p$ for an electron-proton shock, and $m_e$ for an electron-positron shock, \emph{i.e.} it corresponds to the mass of the particles which carry the bulk of the inertia.
For a perpendicular shock, in which the background magnetic field in the shock frame is perpendicular to the flow, the magnetization can also be written as $\sigma\,=\,(u_{\rm A}/c)^2$, with $u_{\rm A}$ the Alfv\'en four-velocity of the upstream plasma.}, $\sigma\,\gtrsim\,10^{-2}$, this barrier can result from the compression of the background magnetic field (as a result of the Lorentz transformation to the frame of a relativistic shock, the most generic configuration is that of a quasi-perpendicular field), while at lower magnetizations, it is understood to arise from the generation of intense micro-turbulence in the shock ``precursor'', as explained hereafter and illustrated in Fig.~\ref{martin}. \begin{figure}[!htb] \begin{center} \includegraphics[width=0.9\textwidth]{martin.png} \caption{\footnotesize{Phase diagram of relativistic collisionless shocks in the plane $(\gamma_u,\sigma)$; this figure assumes $\gamma_u>10$ and $\xi_{\rm cr}=0.1$, where the parameter $\xi_{\rm cr}\,=\,e_{\rm cr}/\left[\gamma_{u}(\gamma_{u}-1)n'mc^2\right]$ characterizes the energy density of supra-thermal particles ($e_{\rm cr}$) relative to the incoming energy flux, as measured in the shock rest frame. In region 1, the shock transition is initiated by magnetic reflection in the compressed background field, while in regions $2-5$, the magnetic barrier is associated with the growth of micro-instabilities, as indicated. The solid diagonal line indicates values of $\sigma$ and $\gamma_u$ above which the filamentation instability would not have time to grow, in the absence of deceleration resulting from the compensation of the perpendicular current of the supra-thermal particles gyrating in the background field.
See Section~3.1 and ~\cite{2014EL....10655001L} for a detailed discussion.}} \label{martin} \end{center} \end{figure} At high magnetization, the gyration of the ambient particles in the background compressed magnetic field can trigger a synchrotron maser instability, which sends precursor electromagnetic waves into the upstream~\citep{langdon_88,hoshino_91,hoshino_92,gallant_92}. As incoming electrons and positrons interact with these waves, they undergo heating~\citep{hoshino_08,sironi_spitkovsky_11a}, but acceleration seemingly remains inefficient (Section~\ref{sec:mag}). At magnetizations $\sigma\,\lesssim\,10^{-2}$, the interpenetration of the incoming background plasma and the supra-thermal particles, which have been reflected on the shock front or which are undergoing Fermi cycles around the shock, leads to anisotropic micro-instabilities over an extended region in front of the shock, called the ``precursor'' here. These instabilities then build up a magnetic barrier, up to a level\footnote{The parameter $\epsilon_B$ denotes the magnetization of the turbulence, $\epsilon_B\,=\,\delta B^2/\left[4\pi\gamma_{ u}(\gamma_{u}-1)n'mc^2\right]$, where $\delta B$ is the fluctuating magnetic field.} $\epsilon_B\,\sim\,10^{-2}-10^{-1}$, sufficient to deflect strongly the incoming particles and thus mediate the shock transition. This picture, first envisioned by \citet{1963JNuE....5...43M}, has been recently demonstrated in {\it ab initio} PIC simulations~\citep{spitkovsky_05,2007ApJ...668..974K,spitkovsky_08}. The generation of micro-turbulence in the shock precursor is thus a key ingredient in the formation of the shock and in the development of the Fermi process, as anticipated analytically~\citep{2006ApJ...645L.129L} and from Monte Carlo simulations~\citep{2006ApJ...650.1020N}, and demonstrated by PIC simulations~\citep{spitkovsky_08b,sironi_spitkovsky_09,sironi_spitkovsky_11a}, see hereafter. 
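The magnetization defined in the footnote above is easily evaluated numerically. A minimal helper, using the perpendicular-shock form $\sigma=(u_{\rm A}/c)^2$, which requires only the upstream-frame field and the proper density (Gaussian cgs units; the input values are illustrative, ISM-like numbers, not taken from any specific source):

```python
import math

M_P, C = 1.6726e-24, 2.9979e10    # proton mass [g], speed of light [cm/s]

def sigma_perp(B_up, n_prime, m=M_P):
    """sigma = (u_A/c)^2 = B^2/(4 pi n' m c^2) for a perpendicular shock,
    with B_up the upstream-frame field and n' the proper upstream density."""
    return B_up**2 / (4.0 * math.pi * n_prime * m * C**2)

# Illustration: B = 3 microgauss and n' = 1 proton cm^-3 give sigma ~ 5e-10,
# far below the ~1e-2 boundary, i.e. deep in the micro-instability-mediated
# regions of the phase diagram of Fig. (martin):
print(sigma_perp(3.0e-6, 1.0))    # -> ~4.8e-10
```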
As seen in the background plasma (upstream) rest frame, the supra-thermal particles form a highly focused beam, with an opening angle $\sim\,1/\gamma_{u}$ and a mean Lorentz factor $\overline\gamma_{\vert u}\,\sim\,\gamma_{u}^2$. In contrast, boosting back to the shock frame, this supra-thermal particle distribution is now open over $\sim\pi/2$, with a mean Lorentz factor $\overline\gamma_{\vert\rm sh}\,\gtrsim\,\gamma_{u}$, while the incoming plasma is highly focused, with a mean Lorentz factor $\gamma_{u}$. A host of micro-instabilities can in principle develop in such anisotropic configurations, see the general discussion by \citet{bret_09}. However, in the deep relativistic regime, the restricted length scale of the precursor imposes a strong selection of potential instabilities, since a background plasma volume element remains subject to the growth of instabilities only while it crosses the precursor. In the shock rest frame, this time scale is $t_{\times,B}\,\simeq\,\omega_{\rm c}^{-1}$ in the presence of a quasi-perpendicular background field\footnote{$\omega_{\rm c}\,\equiv\,e B_{\vert u}/m c$ represents the upstream frame cyclotron frequency (and $B_{\vert u}$ is the magnetic field in the upstream frame) while $\omega_{\rm p}\,\equiv\,\left(4\pi n'e^2/m\right)^{1/2}$ denotes the plasma frequency.} (a common field geometry in relativistic flows), or $t_{\times,\delta B}\,\simeq\,\gamma_u\epsilon_B^{-1}\omega_{\rm p}^{-1}\left(\omega_{\rm p}\lambda_{\delta B}/c\right)^{-1}$ if the scattering is dominated by short scale turbulence of magnetization $\epsilon_B$ and coherence length $\lambda_{\delta B}$ (assuming that the waves are purely magnetic in the rest frame of the background plasma), see \emph{e.g.} \citet{2006ApJ...651..979M}, \citet{pelletier_10} and \citet{plotnikov_12}. This small length scale implies that only the fastest modes can grow, which limits the discussion to a few salient instabilities. 
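The two crossing timescales above can be put on a common footing: for a perpendicular shock with a cold upstream, the definitions in the footnotes give $\omega_{\rm c}/\omega_{\rm p}=u_{\rm A}/c=\sqrt{\sigma}$, so both times can be expressed in units of $\omega_{\rm p}^{-1}$. A hedged sketch (illustrative values only; $\lambda_{\delta B}$ in units of $c/\omega_{\rm p}$):

```python
def t_cross_background(sigma):
    """t_{x,B} ~ 1/omega_c, in units of 1/omega_p, using the identity
    omega_c/omega_p = sqrt(sigma) (perpendicular shock, cold upstream)."""
    return sigma ** -0.5

def t_cross_microturb(gamma_u, eps_B, lam_dB):
    """t_{x,dB} ~ gamma_u * eps_B**-1 * (omega_p * lam_dB / c)**-1, in units
    of 1/omega_p, for scattering in short-scale turbulence."""
    return gamma_u / (eps_B * lam_dB)

# Illustration: at sigma = 1e-2 the background field turns the flow around
# within ~10/omega_p, while skin-depth-scale turbulence with eps_B = 0.01
# would allow a gamma_u = 100 shock a precursor ~1e4/omega_p long:
print(t_cross_background(1e-2))           # -> ~10
print(t_cross_microturb(100, 0.01, 1.0))  # -> ~1e4
```

The contrast between the two numbers illustrates why only the fastest instabilities can grow in a magnetized precursor.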
Before proceeding further, one should stress that the above estimates for $t_{\times}$ do not account for the influence of particles accelerated to higher energies, which can propagate farther into the upstream plasma and thus seed instabilities with smaller growth rate and on larger spatial scales. While such particles do not carry the bulk of the energy if the spectral index $s_\gamma>2$, it is anticipated that they should nevertheless influence the structure of the precursor, see in particular \citet{2006ApJ...651..979M}, \citet{2007ApJ...655..375K}, \citet{2009ApJ...696.2269M}, \citet{2009MNRAS.393..587P} and \citet{2014MNRAS.439.2050R} for general analytical discussions, as well as \citet{keshet_09} for an explicit numerical demonstration of their potential influence. Similarly, the above estimates do not make a distinction between electron-positron and electron-ion shocks; in particular, it is understood that $\omega_{\rm c}$ and $\omega_{\rm p}$ refer to the species which carries the bulk of the energy (\emph{i.e.} ions for electron-ion shocks). PIC simulations have demonstrated that in electron-ion shocks, electrons are heated in the precursor to nearly equipartition with the ions, meaning that in the shock transition their relativistic inertia becomes comparable to that of ions~\citep[\emph{e.g.} ][]{sironi_13}; hence one does not expect a strong difference between the physics of electron-positron and electron-ion shocks from the point of view of micro-instabilities, and unless otherwise noted, this difference will be omitted in the following. The microphysics of electron heating in the precursor nevertheless remains an important open question, see \citet{gedalin_08}, \citet{gedalin_12}, \citet{plotnikov_12} and \citet{2015arXiv150105466K} for recent discussions of this issue; indeed, the average Lorentz factor of electrons at the shock transition directly impacts the peak frequency of the synchrotron radiation of relativistic blast waves. 
In the context of relativistic weakly magnetized shocks, the most celebrated instability is the Weibel-like filamentation mode, which develops through a charge separation in the background plasma, triggered by magnetic fluctuations which segregate particles of opposite charges into current filaments of alternate polarity~\citep[\emph{e.g.} ][]{gruzinov_waxman_99,medvedev_loeb_99,2004A&A...428..365W,2006ApJ...647.1250L,2007A&A...475....1A,2007A&A...475...19A,pelletier_10,2010PhRvE..81c6402B,2011ApJ...736..157R,pelletier_11,2011ApJ...738...93N,2012ApJ...744..182S}. The current carried by the particles then positively feeds the magnetic fluctuations, leading to fast growth, even in the absence of a net large-scale magnetic field. In the rest frame of the background plasma, this instability grows in the linear regime as fast as\footnote{The maximal growth rate of the Weibel instability is related to the plasma frequency of the beam of supra-thermal particles, $\omega_{\rm pb}$, through $\Im\omega \,\simeq\,\omega_{\rm pb}$, with $\omega_{\rm pb}\,\simeq\,\xi_{\rm cr}^{1/2}\omega_{\rm p}$.} $\Im\omega\,\simeq\,\xi_{\rm cr}^{1/2}\omega_{\rm p}$, with maximum growth on scales of the order of $c/\omega_{\rm p}$; in the filament rest frame, this instability is mostly of magnetic nature, \emph{i.e.} $\Re\omega\,\sim\,0$. Several branches of this instability have been discussed in the literature, in particular the ``oblique mode'', which involves a resonance with electrostatic modes. Even though this latter mode grows slightly faster than the fully transverse filamentation mode, it suffers from Landau damping once the electrons are heated to relativistic temperatures, while the transverse filamentation mode appears relatively insensitive to temperature effects. Thus, to a first approximation, the transverse filamentation mode indeed appears to dominate the precursor at very low magnetizations.
Its non-linear evolution, however, remains an open question; analytical estimates suggest that it should saturate at values $\epsilon_B\,\ll\,10^{-2}$ via trapping of the particles~\citep{2004A&A...428..365W,2006ApJ...647.1250L,2007A&A...475....1A,2007A&A...475...19A}, while PIC simulations see a continuous growth of magnetic energy density even when the non-linear filamentary structures have been formed~\citep[\emph{e.g.} ][]{keshet_09,sironi_13}. Whether additional instabilities such as a kinking of the filaments contribute in the non-linear phase thus remains debated, see for instance~\citet{2006ApJ...641..978M}. At moderate magnetization levels, another fast instability can be triggered by the perpendicular current (transverse to both the magnetic field and the shock normal) seeded in the precursor by the supra-thermal particles during their gyration around the background field~\citep{2014EL....10655001L,2014MNRAS.440.1365L}. The compensation of this current by the background plasma on its entry into the precursor leads to a deceleration of the flow, which modifies somewhat the effective timescale available for the growth of plasma instabilities, and destabilizes the modes of the background plasma. The growth rate for this instability can be as large as $\Im\omega\,\sim\,\omega_{\rm p}$, indicating that it can compete with the Weibel filamentation mode at moderate magnetizations. If the supra-thermal particle beam carries a net charge (in the shock rest frame), or a net transverse current, other similar instabilities are to be expected~\citep[\emph{e.g.} ][]{2009MNRAS.393..587P,2013MNRAS.433..940C,2014MNRAS.439.2050R}. 
The phase space study of \citet{2014EL....10655001L} concludes that the filamentation mode likely dominates at magnetization levels $\sigma\,\lesssim\,10^{-7}$, while this perpendicular current-driven instability dominates at $10^{-3}\,\lesssim\,\sigma\,\lesssim\,10^{-2}$; in between, both instabilities combine to form a complex precursor structure. Interestingly, these results do not seem to depend on the shock Lorentz factor, in good agreement with PIC simulations~\citep{sironi_spitkovsky_09,sironi_spitkovsky_11a,sironi_13}. Finally, one should mention the particular case of quasi-parallel (subluminal) configurations: there, a fraction of the particles can in principle escape to infinity along the magnetic field and seed other, larger scale, instabilities. One prime candidate is the relativistic generalization of the Bell streaming instability~\citep[\emph{e.g.} ][]{2006ApJ...651..979M,reville_06}, which is triggered by a net longitudinal current of supra-thermal particles; this instability has indeed been observed in PIC simulations~\citep{sironi_spitkovsky_11a}. Of course, such a parallel configuration remains a special case in the deep relativistic regime. In mildly relativistic shock waves, with $\gamma_{ u}\beta_{u}\,\sim\,1$, locally parallel configurations become more frequent, hence one could expect such instabilities to play a key role in seeding large scale turbulence. \subsection{Downstream Magnetized Turbulence} \label{sec:PIC_mag} How the magnetized turbulence evolves downstream of the shock is an important question, with direct connections to observations. The previous discussion suggests that the coherence length of the fields generated in Weibel-like instabilities should be comparable to the plasma skin-depth, $c/\omega_{\rm p}$. 
However, magnetic power on such scales is expected to decay rapidly through collisionless phase mixing~\citep{Gruzinov01}, while modeling of GRB afterglow observations rather indicates that magnetic fields persist over scales $\sim\,10^7-10^9\,c/\omega_{\rm p}$ downstream~\citep{gruzinov_waxman_99}. In a relativistic plasma, small-scale turbulence is dissipated at a damping rate\footnote{The shock crossing conditions imply that the relativistic plasma frequency of the shocked downstream plasma is roughly the same as the plasma frequency of the upstream plasma; no distinction will be made here between these quantities.} $\Im\omega\,\simeq\,-k^3c^3/\omega_{\rm p}^2$~\citep{chang_08,2015JPlPh..8145101L} as a function of the wavenumber $k$, indicating that small scales are erased early on. Larger modes can survive longer; power on scales exceeding the Larmor radius of the bulk plasma decays on longer timescales, at an MHD-like rate $\Im\omega\,\propto\,-k^2$ \citep{keshet_09}. It is not clear at present whether the small-scale turbulence manages to evolve to larger scales through inverse cascade effects~\citep[\emph{e.g.} ][]{MedvedevEtAl05,2007ApJ...655..375K,2014ApJ...794L..26Z}, whether it is dissipated, but at a rate slow enough to match the observations~\citep{lemoine_12,2013MNRAS.435.3009L}, or whether a large-scale field is seeded in the downstream plasma by some external instabilities~\citep[\emph{e.g.} ][]{sironi_goodman_07,couch_08,2009ApJ...705L.213L}. \begin{figure}[h] \begin{center} \includegraphics[width=0.75\textwidth]{PICmag.png} \caption{ \footnotesize{Pair plasma evolution within $1000\,c/\omega_{\rm p}$ of the shock.
{The simulation is performed in the downstream frame, and the upstream flow moves with a Lorentz factor $\gamma_r=15$ (so, $\gamma_r$ is the relative Lorentz factor between the upstream and downstream regions).} The normalized transverse magnetic field $\mbox{sign}(B)\,\epsilon_B$ (color scale stretched in proportion to $\epsilon_B^{1/4}$ to highlight weak features) is shown at (a) early ($t_1=2250\,\omega_{\rm p}^{-1}$), and (b) late ($t_2=11925\,\omega_{\rm p}^{-1}$) times. Here $\Delta x\equiv x-x_{\rm sh}$ is the distance from the shock, with $x_{\rm sh}$ (dashed vertical line) defined as the location of median density between far upstream and far downstream. Also shown are the transverse averages (at $t_1$, dashed blue, and $t_2$, solid red) of (c) the electromagnetic energy $\epsilon_{EM} \equiv [(B^2+E^2)/8\pi]/[(\gamma_r-1)\gamma_rn'mc^2]$ (with $E$ the electric field amplitude in the downstream frame, included in the definition of $\epsilon_{EM}$ because in the simulation frame the induced electric field in the upstream medium is $E\sim B$) normalized to the upstream kinetic energy, (d) density normalized to the far upstream density $n_u=\gamma_rn'$, and (e) particle momentum $\gamma\beta$ (with $\beta$ the velocity in $c$ units) in the x-direction averaged over all particles (higher $\ave{\gamma \beta_x}$) and over downstream-headed particles only.}} \label{fig:PICMag} \end{center} \end{figure} PIC simulations have quantified the generation of upstream current filaments by pinching instabilities \citep[\emph{e.g.} ][]{silva_03, frederiksen_04, JaroschekEtAl05, spitkovsky_05, spitkovsky_08, chang_08}, and resolved the formation of shocks in two- and three-dimensional (2D and 3D) pair plasma \citep{spitkovsky_05, 2007ApJ...668..974K, chang_08, keshet_09,sironi_spitkovsky_09,haugbolle_10,sironi_spitkovsky_11a,sironi_13} and ion-electron plasma \citep{spitkovsky_08,martins_09,sironi_13}. 
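The damping rate quoted above implies a minimum surviving wavelength that grows with distance behind the shock. A hedged order-of-magnitude sketch, assuming downstream advection at $\sim c/3$ (the ultra-relativistic strong-shock value) and lengths in units of $c/\omega_{\rm p}$:

```python
import math

def lam_min(dx):
    """Smallest wavelength (in c/omega_p) surviving at a distance dx (same
    units) behind the shock: a mode of wavenumber k is erased once
    |Im omega| * t_age ~ (k c/omega_p)**3 * (3*dx) exceeds unity."""
    t_age = 3.0 * dx                  # advection age at speed ~c/3, in 1/omega_p
    k_max = t_age ** (-1.0 / 3.0)     # k c/omega_p where |Im omega|*t_age = 1
    return 2.0 * math.pi / k_max

# Skin-depth-scale fields are erased within a few skin depths of the shock;
# at dx ~ 1e8 c/omega_p, comparable to the persistence scales inferred from
# GRB afterglow modeling, only modes with lambda >~ 4e3 c/omega_p survive:
print(lam_min(1e8))   # -> ~4.2e3
```

This is only the phase-mixing estimate for skin-depth-scale turbulence; as noted above, modes larger than the bulk Larmor radius decay more slowly still.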
These simulations revealed a rapid decay of the magnetic field downstream at early times \citep{Gruzinov01, chang_08}. Yet, a slow evolution of the plasma configuration takes place on $>10^3/\omega_{\rm p}$ timescales, involving a gradual increase in the scale of the magnetic structures, and consequently their slower dissipation downstream \citep{keshet_09}. This long-term evolution is driven entirely by the high-energy particles accelerated in the shock; it is seen both upstream (\emph{e.g.} in the precursor) and downstream, both of which become magnetized at increasingly large distances from the shock, and with an increasingly flat magnetic power-spectrum downstream \citep{keshet_09}. A flatter magnetic power spectrum at the shock implies a larger fraction of the magnetic energy stored in long-wavelength modes, which may survive farther from the shock. Indeed, the index of a power-law spectrum of magnetic fluctuations directly controls how fast the magnetic energy density, integrated over wavenumbers, decays behind the shock~\citep{chang_08,2015JPlPh..8145101L}; the scale-free limit corresponds to a flat magnetic power spectrum \citep{2007ApJ...655..375K}. Properly capturing the backreaction of high energy particles requires large simulation boxes and large particle numbers, to guarantee that the largest scale fields and the highest energy particles are included. The largest available simulations at present, with length $L$ and time $T$ scales of $(L\omega_{\rm p}/c)^2\, (T\omega_{\rm p}) \lesssim 10^{11}$, show no sign of convergence at $T\gtrsim 10^4/\omega_{\rm p}$ \citep{keshet_09,sironi_13}. This is illustrated in \fig{PICMag} for a pair-plasma shock in 2D. For magnetized shocks, the situation is different, as we describe below \citep{sironi_13}. At strong magnetizations, and for the quasi-perpendicular field geometry most relevant for relativistic flows, particle acceleration is suppressed, and the shock quickly reaches a steady state.
At low (but nonzero) quasi-perpendicular magnetization, the shock evolves at early times similarly to the case of unmagnetized shocks (\emph{i.e.} $\sigma=0$). Particle acceleration proceeds to higher and higher energies, and modes of longer and longer wavelength appear. However, the maximum particle energy stops evolving once it reaches a threshold $\gamma_{sat}\propto \sigma^{-1/4}$ \citep{sironi_13}, and at that point the overall shock structure approaches a steady state.\footnote{This conclusion regarding the saturation of the maximum particle Lorentz factor at $\gamma_{sat}$ has been tested in electron-positron shocks having $\sigma=\ex{4}-\ex{3}$ by \citet{sironi_13}, with the largest PIC study available to date. We caution that further nonlinear evolution, beyond the timespan covered by current PIC simulations, might be present in shocks with lower magnetization.} \section{PIC Simulations of Relativistic Shocks}\label{PIC} Only in the last few years, thanks to important advances in numerical algorithms and computer capabilities, have plasma simulations been able to tackle the problem of particle acceleration in relativistic shocks from first principles. In the following, we describe the major advances driven by large-scale PIC simulations in our understanding of particle acceleration in relativistic shocks. PIC codes can model astrophysical plasmas in the most fundamental way \citep{birdsall,buneman_93,spitkovsky_05}, as a collection of charged macro-particles that are moved by the Lorentz force. The currents deposited by the macro-particles on the computational grid are then used to solve for the electromagnetic fields via Maxwell's equations. The loop is closed self-consistently by extrapolating the fields to the macro-particle locations, where the Lorentz force is computed. Full PIC simulations can capture, from first principles, the acceleration physics of both electrons and ions.
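The PIC cycle just described (deposit charges and currents on a grid, update the fields, interpolate back and push the particles) can be illustrated with a deliberately minimal toy: a 1D electrostatic code in normalized units ($\omega_{\rm p}=1$), following a cold Langmuir oscillation. This is only a sketch of the loop's structure; the relativistic-shock codes cited here are fully electromagnetic and two- or three-dimensional.

```python
import numpy as np

# Normalized units: omega_p = 1, epsilon_0 = 1, electron q/m = -1.
L, Ng, Np, dt = 2*np.pi, 64, 20000, 0.05
dx = L / Ng
q = -L / Np                # macro-electron charge chosen so that omega_p = 1
m = -q                     # macro-electron mass
rho_ion = -q * Np / L      # uniform neutralizing ion background
k = 2*np.pi * np.fft.rfftfreq(Ng, d=dx)

x = (np.arange(Np) + 0.5) * L / Np   # cold, uniformly loaded plasma...
v = 0.01 * np.sin(x)                 # ...with a small Langmuir perturbation

def solve_fields(x):
    """Deposit charge on the grid (cloud-in-cell) and solve Poisson by FFT."""
    g = x / dx
    i0 = np.floor(g).astype(int) % Ng
    w = g - np.floor(g)
    rho = (np.bincount(i0, (1 - w)*q/dx, Ng)
           + np.bincount((i0 + 1) % Ng, w*q/dx, Ng) + rho_ion)
    E_k = np.zeros(Ng//2 + 1, dtype=complex)
    E_k[1:] = -1j * np.fft.rfft(rho)[1:] / k[1:]   # i*k*E_k = rho_k
    return np.fft.irfft(E_k, Ng), i0, w

kin, fld = [], []
for _ in range(int(2*np.pi/dt)):                    # about one plasma period
    E, i0, w = solve_fields(x)
    E_part = (1 - w)*E[i0] + w*E[(i0 + 1) % Ng]     # gather field at particles
    v += (q/m) * E_part * dt                        # leapfrog kick
    x = (x + v*dt) % L                              # drift, periodic wrap
    kin.append(0.5*m*np.sum(v**2))
    fld.append(0.5*dx*np.sum(E**2))
```

Energy sloshes between particles and field at the plasma frequency, while the total drifts only at $O(\Delta t^2)$; such conservation checks are a standard sanity test for this class of codes.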
However, such simulations must resolve the electron plasma skin depth $c/{\omega_{\rm pe}}$, which is typically much smaller than astrophysical scales. Hence, most simulations can only cover limited time and length scales, and usually with low dimensionality (1D or 2D instead of 3D) and small ion-to-electron mass ratios (the ion skin depth $c/\omega_{\rm pi}$ is a factor of $\sqrt{m_i/m_e}$ larger than the electron skin depth $c/{\omega_{\rm pe}}$). The results discussed below pertain to simulation durations of order $10^3-10^4\,\omega_{\rm pe}^{-1}$ in electron-positron shocks and $10^3\,\omega_{\rm pi}^{-1}$ in electron-ion shocks (but with reduced mass ratios), so a careful extrapolation is needed to bridge these microscopic scales with the macroscopic scales of astrophysical interest. Yet, as we review below, PIC simulations provide invaluable insight into the physics of particle injection and acceleration in astrophysical sources. The structure of relativistic shocks and the efficiency of particle acceleration depend on the conditions of the upstream flow, such as bulk velocity, magnetic field strength and field orientation. PIC simulations have shown that the shock physics and the efficiency of particle acceleration are insensitive to the shock Lorentz factor (modulo an overall shift in the energy scale), in the regime $\gamma_r\gg1$ of ultra-relativistic flows \citep[\emph{e.g.} ][]{sironi_13}. Below, we only discuss results for shocks where the upstream Lorentz factor with respect to the downstream frame is $\gamma_r\gtrsim 5$, neglecting the trans- and non-relativistic regimes that are outside the scope of this review. We discuss the physics of both electron-positron shocks and electron-ion shocks (up to realistic mass ratios), neglecting the case of electron-positron-ion shocks presented by \emph{e.g.} \citet{hoshino_92,amato_arons_06,stockem_12}, which might be relevant for PWNe.
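The cost penalty of a realistic mass ratio can be sketched from the skin-depth scaling alone. In the function below, only the $\sqrt{m_i/m_e}$ factor comes from the text; the box size and resolution choices are hypothetical round numbers:

```python
def grid_cells_2d(mass_ratio, ion_skins_per_side=100, cells_per_electron_skin=10):
    """Cells needed for a square 2D box spanning a fixed number of ion skin
    depths while resolving the electron skin depth. The side length scales as
    sqrt(m_i/m_e), so the cell count scales linearly with the mass ratio."""
    side = ion_skins_per_side * mass_ratio**0.5 * cells_per_electron_skin
    return side**2

# Going from a reduced mass ratio of 25 to the realistic 1836 costs ~73x more
# cells per snapshot (and more again in time steps and macro-particles):
print(grid_cells_2d(1836) / grid_cells_2d(25))   # -> ~73
```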
As found by \citet{sironi_spitkovsky_09,sironi_spitkovsky_11a,sironi_13}, for highly relativistic flows, the main parameter that controls the shock physics is the magnetization $\sigma$. Below, we distinguish between shocks propagating into strongly magnetized media ($\sigma\gtrsim \ex{3}$) and weakly magnetized or unmagnetized shocks ($\sigma\lesssim \ex{3}$). \subsection{Particle Acceleration in Strongly Magnetized Shocks}\label{sec:mag} For high magnetizations ($\sigma\gtrsim10^{-3}$ in electron-positron flows, or $\sigma\gtrsim3\times10^{-5}$ in electron-ion flows), the shock structure and acceleration properties depend critically on the inclination angle $\theta$ between the upstream field and the shock direction of propagation \citep{sironi_spitkovsky_09,sironi_spitkovsky_11a}. If the magnetic obliquity is larger than a critical angle $\theta_{\rm crit}$, charged particles would need to move along the field faster than the speed of light in order to outrun the shock (``superluminal'' configurations). In \fig{super}, we show how the critical angle $\theta_{\rm crit}$ (as measured in the downstream frame) depends on the flow velocity and magnetization. In the limit of $\sigma\ll1$ and $\gamma_r\gg1$, the critical obliquity approaches the value $\theta_{\rm crit}\simeq34^\circ$. \begin{figure}[!htb] \begin{center} \includegraphics[width=0.8\textwidth]{thetacrit.png} \caption{\footnotesize{Critical obliquity angle $\theta_{\rm crit}$ (measured in the downstream frame) that separates subluminal and superluminal configurations \citep{sironi_spitkovsky_09}, as a function of the flow Lorentz factor $\gamma_r$ and the magnetization $\sigma$, as indicated in the label. 
The filled black circle indicates our reference case with $\gamma_r=15$ and $\sigma=0.1$.}} \label{fig:super} \end{center} \end{figure} Only ``subluminal'' shocks ($\theta\lesssim\theta_{\rm crit}$) are efficient particle accelerators \citep{sironi_spitkovsky_09,sironi_spitkovsky_11a,sironi_13}, in agreement with the analytical findings of \citet{begelman_kirk_90}. As illustrated in Fig.~\ref{fig:shock1}, a stream of shock-accelerated particles propagates ahead of the shock (panel (c)), and their counter-streaming with the incoming flow generates magnetic turbulence in the upstream region (panel (b)). In turn, such waves govern the acceleration process, by providing the turbulence required for the Fermi mechanism. In the particular case of \fig{shock1} --- a relativistic shock with $\gamma_r=15$, $\sigma=0.1$ and $\theta=15^\circ$ propagating into an electron-ion plasma --- the upstream turbulence is dominated by Bell-like modes \citep{reville_06,pelletier_10,pelletier_11}. The downstream particle spectrum in subluminal shocks shows a pronounced non-thermal tail of shock-accelerated particles with a power-law index $2\lesssim s_\gamma\lesssim 3$ (panel (d)). The tail contains $\sim5\%$ of the particles and $\sim20\%$ of the flow energy at time $2250\,\omega_{\rm pi}^{-1}$; both values appear to be time-converged, within the timespan covered by our simulations. \begin{figure}[!tbp] \begin{center} \PNGfigure{\includegraphics[width=1\textwidth]{fluidsig.png}} \caption{\footnotesize{Structure of an electron-ion subluminal shock with $\gamma_r=15$, $\sigma=0.1$ and $\theta=15^\circ$, from \citet{sironi_spitkovsky_11a}. The simulation is performed in the downstream frame. The shock front is located at $x\sim725\,c/\omega_{\rm pi}$ (vertical dotted red line in panel (a)), and it separates the upstream region (to its right) from the compressed downstream region (to its left). 
A stream of shock-accelerated ions propagates ahead of the shock (see the diffuse cloud in the momentum space $x-p_{xi}$ of panel (c) to the right of the shock, at $x\gtrsim725\,c/\omega_{\rm pi}$). Their interaction with the upstream flow (narrow beam to the right of the shock in panel (c)) generates magnetic turbulence ahead of the shock (see the transverse waves in panel (b), to the right of the shock). In turn, such waves govern the process of particle acceleration. In fact, the particle spectrum behind the shock (solid lines in panel (d); red for ions, blue for electrons) is not compatible with a simple thermal distribution (dashed lines), showing a clear non-thermal tail of high-energy particles, most notably for ions.} } \label{fig:shock1} \end{center} \end{figure} In contrast, superluminal shocks ($\theta\gtrsim\theta_{\rm crit}$) show negligible particle acceleration \citep{gallant_92,hoshino_08,sironi_spitkovsky_09,sironi_spitkovsky_11a,sironi_13}. Here, due to the lack of significant self-generated turbulence, charged particles are forced to slide along the background field lines, whose orientation prohibits repeated crossings of the shock. This inhibits the Fermi process, and in fact the particle distribution behind superluminal shocks is purely thermal. The same conclusion holds for both electron-positron and electron-ion flows. In electron-ion shocks, the incoming electrons are heated up to the ion energy, due to powerful electromagnetic waves emitted by the shock into the upstream medium, as a result of the synchrotron maser instability (studied analytically by \citet{lyubarsky_06}, and with 1D PIC simulations by \emph{e.g.} \citet{langdon_88,gallant_92,hoshino_92,hoshino_08}). Yet, such heating is not powerful enough to permit an efficient injection of electrons into the Fermi acceleration process at superluminal electron-ion shocks. 
If magnetized superluminal shocks are responsible for producing the radiating particles in astrophysical relativistic sources, the strong electron heating observed in electron-ion shocks implies that the putative power-law tail in the electron spectrum should start from energies higher than the ion bulk kinetic energy. For models of GRBs and AGN jets that require a power-law distribution extending down to lower energies, the presence of such shocks would suggest that electron-positron pairs may be a major component of the flow. \begin{figure}[!htb] \begin{center} \includegraphics[width=1.3\textwidth,angle=0]{visit3d_high.png} \caption{\footnotesize{Shock structure from the 3D PIC simulation of a $\sigma=10^{-3}$ electron-positron shock with $\gamma_r=15$, from \citet{sironi_13}. The simulation is performed in the downstream frame and the shock propagates along $+\hat{x}$. We show the $xy$ slice of the particle number density (normalized to the upstream density), and the $xz$ and $yz$ slices of the magnetic energy fraction $\epsilon_B$. A stream of shock-accelerated particles propagates ahead of the shock, and their counter-streaming motion with respect to the incoming flow generates magnetic turbulence in the upstream via electromagnetic micro-instabilities. In turn, such waves provide the scattering required for particle acceleration.}} \label{fig:shock} \end{center} \end{figure} \subsection{Particle Acceleration in Weakly Magnetized and Unmagnetized Shocks} Weakly magnetized shocks ($\sigma\lesssim10^{-3}$ in electron-positron flows, $\sigma\lesssim3\times10^{-5}$ in electron-ion flows) are governed by electromagnetic plasma instabilities (see \S\ref{subsec:precursor}), which generate magnetic fields stronger than the background field. Such shocks do accelerate particles self-consistently, regardless of the magnetic obliquity angle \citep[][]{spitkovsky_08,spitkovsky_08b,martins_09,haugbolle_10,sironi_13}. 
The stream of shock-accelerated particles propagates ahead of the shock, triggering the Weibel instability. The instability generates filamentary magnetic structures in the upstream region, as shown in Fig.~\ref{fig:shock}, which in turn scatter the particles back and forth across the shock, mediating Fermi acceleration. \begin{figure}[!hbt] \begin{center} \includegraphics[width=0.75\textwidth]{fig11b.png} \caption{\footnotesize{Temporal evolution of the downstream particle spectrum, from the 2D simulation of a $\gamma_r=15$ electron-ion ($m_i/m_e=25$) shock propagating into a flow with magnetization $\sigma=10^{-5}$, from \citet{sironi_13}. The evolution of the shock is followed from its birth (black curve) up to $\omega_{\rm pi}t=2500$ (red curve). In the top panel we show the ion spectrum and in the bottom panel the electron spectrum. The non-thermal tails approach at late times a power law with a slope $s_\gamma=3.0$ for ions and $s_\gamma=2.5$ for electrons (black dashed lines in the two panels). In the bottom panel, we overplot the ion spectrum at $\omega_{\rm pi}t=2500$ with a red dotted line, showing that ions and electrons are nearly in equipartition. Inset of the top panel: mean downstream ion (red) and electron (blue) energy, in units of the bulk energy of an upstream particle. The dashed blue line shows the electron energy at injection. Inset of the bottom panel: temporal evolution of the maximum Lorentz factor of ions (red) and electrons (blue), scaling as $\propto (\omega_{\rm pi} t)^{1/2}$ at late times (black dashed line).}} \label{fig:accel1} \end{center} \end{figure} The accelerated particles in weakly magnetized shocks populate the downstream region with a power-law tail $dN/d\gamma\propto \gamma^{-s_\gamma}$ of slope $s_\gamma\sim2.5$, which contains $\sim3\%$ of the particles and $\sim10\%$ of the flow energy.\footnote{These values are nearly independent of the flow composition and magnetization, in the regime of weakly magnetized shocks. 
Also, they are measured at time $\sim 10^4\,\omega_{\rm pe}^{-1}$ in electron-positron shocks and at $\sim 10^3\,\omega_{\rm pi}^{-1}$ in electron-ion shocks, but they appear remarkably constant over time, within the timespan covered by our simulations.} In electron-ion shocks, the acceleration process proceeds similarly for the two species, since the electrons enter the shock nearly in equipartition with the ions, as a result of strong pre-heating in the self-generated Weibel turbulence \citep{spitkovsky_08,martins_09,sironi_13}. In both electron-positron and electron-ion shocks, the maximum energy of the accelerated particles scales in time as $\gamma_{max}\propto t^{1/2}$ \citep{sironi_13}, as shown in Fig.~\ref{fig:accel1}. More precisely, the maximum particle Lorentz factor in the downstream frame scales as \begin{eqnarray}\label{eq:gmax4a} &\gamma_{max}&\simeq0.5\,\gamma_r\,(\ompt)^{1/2}\\ \gamma_{max,i}&\sim\frac{\gamma_{max,e} m_e}{m_i}&\simeq0.25\,\gamma_r\,(\omega_{\rm pi}t)^{1/2}\label{eq:gmax4b} \end{eqnarray} in electron-positron and in electron-ion shocks, respectively \citep{sironi_13}. This scaling is shallower than the so-called (and commonly assumed) Bohm limit $\gamma_{max}\propto t$, and it naturally results from the small-scale nature of the Weibel turbulence generated in the shock layer (see Fig.~\ref{fig:shock}). The increase of the maximum particle energy over time proceeds up to a saturation Lorentz factor (once again, measured in the downstream frame) that is constrained by the magnetization $\sigma$ of the upstream flow \begin{eqnarray}\label{eq:gsat4a} &\gamma_{sat}&\simeq4\,\gamma_r\,\sigma^{-1/4}\\ \gamma_{sat,i}&\sim\frac{\gamma_{sat,e} m_e}{m_i}&\simeq2\,\gamma_r\,\sigma^{-1/4}\label{eq:gsat4b} \end{eqnarray} in electron-positron and electron-ion shocks, respectively. The saturation of the maximum particle energy is shown in Fig.~\ref{fig:accel2} for a shock with $\sigma=\ex{3}$. 
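As a quick numerical illustration (ours, not from the simulations themselves), the growth law of Eq.~(\ref{eq:gmax4a}) and the saturation value of Eq.~(\ref{eq:gsat4a}) can be combined by simply taking their minimum; the sketch below uses the electron-positron scalings with the fiducial $\gamma_r=15$:

```python
import math

# Illustrative sketch (assumed form, not a simulation output): electron-positron
# scalings gamma_max ~ 0.5*gamma_r*(omega_pe*t)^(1/2), capped at the
# magnetization-limited value gamma_sat ~ 4*gamma_r*sigma^(-1/4).
def gamma_max(t_wpe, gamma_r=15.0, sigma=1e-3):
    growth = 0.5 * gamma_r * math.sqrt(t_wpe)       # Weibel-mediated t^(1/2) growth
    saturation = 4.0 * gamma_r * sigma ** (-0.25)   # set by the upstream magnetization
    return min(growth, saturation)
```

The two branches cross at $\omega_{\rm pe}t\simeq64\,\sigma^{-1/2}$, i.e. $\sim2\times10^3$ for $\sigma=10^{-3}$, consistent with the near-saturation visible by $\omega_{\rm pe}t=3000$ in Fig.~\ref{fig:accel2}.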
Further energization is prevented by the fact that the self-generated turbulence is confined within a region of thickness $L_{B,sat}\propto \sigma^{-1/2} $ around the shock \citep{sironi_13}. \begin{figure}[!htb] \begin{center} \includegraphics[width=0.75\textwidth]{spectime_sig1e-3b.png} \caption{\footnotesize{Time evolution of the downstream particle spectrum from the 3D PIC simulation of a $\sigma=10^{-3}$ electron-positron shock with $\gamma_r=15$, from \citet{sironi_13}. The evolution of the shock is followed from its birth (black curve) up to $\omega_{\rm pe}t=3000$ (red curve). We overplot the spectrum at $\omega_{\rm pe}t=3000$ from a 2D simulation with the same parameters (red dotted line), showing excellent agreement at high energies. The inset shows that the maximum particle Lorentz factor grows as $\gamma_{max}\propto t^{1/2}$, before saturating at $\gamma_{sat}\propto \sigma^{-1/4}$. The results are consistent between 2D (dotted) and 3D (solid).}} \label{fig:accel2} \end{center} \end{figure} \section{Astrophysical Implications}\label{rad} \subsection{Acceleration of Ultra-High Energy Cosmic Rays} Relativistic shock waves have long been considered prime candidates for the acceleration of cosmic rays to the highest energies observed, $E\,\sim\,10^{20}\,$eV. Indeed, a naive extrapolation of the acceleration time scale in the sub-relativistic regime ($t_{\rm acc}\,\sim\,t_{\rm scatt}/\beta_{u}^2$, with $t_{\rm scatt}$ the scattering timescale) suggests that relativistic shocks (\emph{i.e.} $\beta_u\sim 1$) accelerate particles on shorter time scales than non-relativistic shocks (\emph{i.e.} $\beta_u\ll 1$), at a given $t_{\rm scatt}$. For given radiative loss and escape time scales, this implies that relativistic shocks should accelerate particles to much higher energies than non-relativistic shocks. 
However, the situation is more complex than it appears; in particular, in relativistic shock waves, $t_{\rm scatt}$ may be much larger than usually assumed. As mentioned repeatedly in the previous paragraphs, particle acceleration in the relativistic regime $\gamma_{u}\beta_{u}\,\gg\,1$ around a steady planar shock wave operates only if intense micro-turbulence has been excited in the shock precursor, as demonstrated analytically~\citep{2006ApJ...645L.129L}, by Monte Carlo simulations~\citep{2006ApJ...650.1020N} and by PIC simulations~\citep{sironi_13}; consequences for the acceleration of particles to ultra-high energies have been discussed in several papers, \emph{e.g.} by \citet{2008AIPC.1085...61P}, \citet{2009JCAP...11..009L}, \citet{2011AIPC.1367...70L}, \citet{2011ApJ...738L..21E}, \citet{2012SSRv..173..309B,sironi_13} or more recently by \citet{2014MNRAS.439.2050R}. Scattering in small-scale turbulence leads to a downstream residence time $t_{\rm scatt}\,\sim\,r_{L}^2/(\lambda_{\delta B}c)$, with $r_{L}$ the Larmor radius of the particle and $\lambda_{\delta B}$ the coherence length scale of the turbulence. This implies that the (shock frame) acceleration timescale $t_{\rm acc}$ grows quadratically with the energy, in agreement with the result seen in PIC simulations that the maximum energy grows as the square root of time. In other words, as the particle energy grows, the acceleration timescale departs more and more from the Bohm estimate, which is generally used to compute the maximum energy. 
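The link between a quadratic $t_{\rm acc}(E)$ and the $t^{1/2}$ growth can be checked with a toy integration (an illustration under the stated scaling, not a simulation result): writing $t_{\rm acc}\propto\gamma^2$, the gain rate ${\rm d}\gamma/{\rm d}t\sim\gamma/t_{\rm acc}\propto1/\gamma$ yields $\gamma\propto t^{1/2}$.

```python
import math

def evolve_gamma(t_end, k=1.0, gamma0=1.0, n_steps=200_000):
    """Toy Euler integration of dgamma/dt = gamma/t_acc with t_acc = gamma**2/k,
    i.e. dgamma/dt = k/gamma; the analytic solution is gamma = sqrt(gamma0**2 + 2*k*t).
    Units and the normalization k are arbitrary illustrative choices."""
    dt = t_end / n_steps
    gamma = gamma0
    for _ in range(n_steps):
        gamma += (k / gamma) * dt   # energy gain per unit time ~ gamma / t_acc
    return gamma

# Doubling the elapsed time raises the maximum Lorentz factor by ~sqrt(2),
# not by 2 as a Bohm scaling (t_acc proportional to gamma) would give.
```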
Comparing for instance the acceleration timescale, which is at least equal to the above downstream residence time, with the dynamical timescale $r/\gamma_{u}$ in the shock rest frame ($r$ denoting the radius of the shock front in the upstream rest frame), one finds a maximum energy $E_{\rm max}\,\lesssim \, e\, \delta B\, r\left(\gamma_{u}\lambda_{\delta B}/r\right)^{1/2}$, with $\delta B$ the strength of the turbulent field expressed in the shock frame; the above maximal energy has been written in the upstream (observer) frame. The factor in the brackets generally takes very small values, because $\lambda_{\delta B}\,\sim\,c/\omega_{\rm p}$ while $r$ is a macroscopic length scale; this maximal energy is thus far below the so-called Hillas estimate $e\,\delta B r$, which corresponds to a Bohm estimate for $t_{\rm scatt}$. Another way to phrase the problem is as follows (see \citealt{2009JCAP...11..009L} for a discussion): assume that the acceleration timescale is written $t_{\rm acc}\,=\,{\cal A}\,r_{L}/c$, and derive the maximum energy by comparing $t_{\rm acc}$ with $t_{\rm dyn}\,=\,r/(\gamma\beta c)$ as above for a jet moving at velocity $\beta$ towards the observer. Then one finds that acceleration of particles of charge $Z$ to $10^{20}E_{20}\,$eV requires that the isotropic equivalent magnetic luminosity of the object exceeds: $L_B\,\gtrsim\,10^{45}\,\,Z^{-2}E_{20}^2{\cal A}^2\gamma^2\,$erg/s, a very large number indeed, all the more so if ${\cal A}\,\gg\,1$. For acceleration at ultra-relativistic shock waves, ${\cal A}$ is much larger than unity (while the Bohm estimate corresponds to ${\cal A}\,\sim\,1$), with typical values ${\cal A}\,\sim\,E/\left(\gamma_u m_p c^2\right)$. In summary, particle acceleration at ultra-relativistic shock waves does not appear fast enough to produce particles of ultra-high energies. 
In particular, when the above arguments are applied to the case of the external shock of a GRB, the maximal energy is found to be of the order of $10^{16}\,$eV~\citep{plotnikov_12,sironi_13,2014MNRAS.439.2050R}. It is important however to note three caveats in the above arguments. One is that as $\gamma_{u}\beta_{u}\,\rightarrow\,1$, \emph{i.e.} for mildly relativistic shock waves, the nature of the turbulence remains unknown and one cannot exclude that scattering would be closer to a Bohm estimate. Two facts support such a speculation: (1) the precursor increases in size as $\gamma_{u}$ diminishes, which suggests that MHD-scale instabilities could arise and excite large scale turbulence; and (2) the obliquity becomes less of a problem for mildly relativistic shock waves, suggesting that large scale turbulence could possibly lead to acceleration in this regime. A second caveat is the fact that PWNe are very efficient particle accelerators, even though one would expect the opposite in the absence of reconnection or other dissipative processes, due to the large magnetization of the flow (Section~\ref{sect:pwn}). More precisely, synchrotron photons are observed with energies as high as $100\,$MeV, which means that pairs are accelerated up to the radiation-reaction limit, \emph{i.e.} with an acceleration time scale close to the theoretical Bohm scaling. Such empirical evidence suggests that ions could also be accelerated to very high energies, if ions are indeed injected along with pairs in the wind. In the Crab, such a maximal energy would be limited by the confinement in the nebular turbulence to values of the order of $10^{17}\,$eV (for $Z=1$); more powerful nebulae, associated with young pulsars born with a few millisecond periods, could however confine (and potentially accelerate) protons up to the highest energies \citep{2014arXiv1409.0159L}. 
Finally, as the nonlinear evolution of weakly magnetized or parallel shocks over long timescales is not yet understood, some of the above estimates, pertaining \emph{e.g.} to the diffusive properties and extent of the magnetic field, may be altered on macroscopic times. \subsection{Radiative Signatures of Relativistic Blast Waves} In line with the previous discussion, one can compute the maximal energy for electrons and derive the maximal synchrotron photon energy. Using an acceleration time scale $t_{\rm acc}\,\simeq\,r_{L}^2/(\lambda_{\delta B}c)$ and comparing to synchrotron losses in the self-generated micro-turbulence, characterized by its magnetization $\epsilon_B$, one derives a maximum synchrotron photon energy of the order of a few GeV in the early phase of GRB afterglows, \emph{i.e.} during the first hundreds of seconds~\citep{kirk_reville_10,plotnikov_12,lemoine_12,2013ApJ...771L..33W,sironi_13}. Let us stress that in the latter study, this estimate has been derived from PIC simulations in which the acceleration time scale is measured directly in the self-generated magnetic field. The synchrotron radiation of electrons accelerated at the external ultra-relativistic shock of GRBs can thus produce the bulk of the long-lasting $>100\,{\rm MeV}$ emission detected by the Fermi satellite \citep[\emph{e.g.} ][]{barniol_09,ackermann_10,depasquale_10,ghisellini_10}. The photons observed with energies $\gtrsim10\,$GeV probably result from inverse Compton interactions~\citep{2013ApJ...771L..33W}. Interestingly, the recent GRB 130427A has revealed a long-lasting emission with a possible break in the spectrum at an energy of a GeV, characteristic of a turn-over between the synchrotron and the synchrotron self-Compton components~\citep{2013ApJ...771L..13T}, in good qualitative agreement with the above arguments. 
Other potential radiative signatures of the shock microphysics come from the small-scale nature of the turbulence and its long-term evolution in the blast. As discussed in Section~\ref{sec:PIC_mag}, one notably expects this turbulence to relax through collisionless damping on hundreds of $c/\omega_{\rm p}$~\citep{chang_08,keshet_09,2015JPlPh..8145101L}, while the electrons typically cool on much longer length scales. In GRB external blast waves, the shocked region is typically 7--9 orders of magnitude larger than $c/\omega_{\rm p}$ in size, which leaves room for a substantial evolution of $\epsilon_B$, even if it decreases as a mild power law in distance from the shock, as suggested by the above studies. Since the electron cooling length depends on the inverse of the electron Lorentz factor, particles of different initial Lorentz factors emit their energy in regions of different magnetic field strength, leading to a non-standard synchrotron spectrum~\citep{rossi_rees_03,2007Ap&SS.309..157D,lemoine_12}, which could in principle be used as a tomograph of the evolution of the micro-turbulence downstream of the shock. Interestingly, in this picture the decay index of the turbulence is related to the long-wavelength content of the power spectrum of magnetic fluctuations at the shock front, which is so far unknown, though it is expected to be modified by the acceleration of higher energy particles~\citep{keshet_09}. Finally, it is interesting to note that the recent broad-band analysis of GRB afterglows seen from the radio up to GeV energies has indeed revealed spectral signatures of a decaying magnetic field~\citep{2013MNRAS.435.3009L}, with a decay law scaling with distance from the shock roughly as $\Delta x^{-0.5}$ ($\Delta x$ being the proper distance to the shock in the downstream frame). 
As discussed in Section~\ref{sec:PIC_mag}, there are, however, alternative possibilities; it has been suggested for instance that the turbulence could evolve in a self-similar way as a function of distance to the shock, maintaining a uniform $\epsilon_B$ thanks to an inverse cascade process~\citep{2007ApJ...655..375K}. It is also possible that external sources seed the blast with a large scale long-lived turbulence, \emph{e.g.} through a Rayleigh-Taylor instability at the contact discontinuity~\citep{2009ApJ...705L.213L} or through small scale dynamos following the interaction of the shock front with external inhomogeneities~\citep{sironi_goodman_07,couch_08}. Hopefully, future high accuracy observational data will provide diagnostics which can be confronted with numerical simulations. The possibility that the small scale nature of the turbulence gives rise to diffusive (or jitter) synchrotron radiation rather than conventional synchrotron radiation has also attracted attention~\citep[\emph{e.g.} ][]{medvedev_00,medvedev_06,fleishman_06a,2011ApJ...731...26M,2011ApJ...737...55M,2013ApJ...774...61K}. In particular, jitter radiation has been proposed as a solution for the fact that GRB prompt spectra below the peak frequency are not always compatible with the predictions of synchrotron emission (the so-called ``line of death'' puzzle, see \citet{preece_98}). In the jitter regime, particles are deflected by less than $1/\gamma$ (where $\gamma$ is the electron Lorentz factor) as they cross a wavelength $\lambda_{\delta B}$, implying that coherence of the emission is maintained over several coherence cells of the turbulence. This regime thus takes place whenever the wiggler parameter $a\,\equiv\, e\delta B\lambda_{\delta B}/mc^2\,\ll\,1$, while the standard synchrotron approximation becomes valid in the opposite limit. 
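The distinction can be made quantitative with the wiggler parameter; the sketch below (Gaussian-cgs units; the sharp threshold at $a=1$ and the sample field values are illustrative assumptions) classifies the radiation regime:

```python
E_ESU = 4.803e-10      # electron charge (esu)
ME_C2 = 8.187e-7       # electron rest energy m_e c^2 (erg)

def wiggler_parameter(delta_B, lambda_dB):
    """a = e * deltaB * lambda_dB / (m_e c^2), with deltaB in Gauss
    and lambda_dB in cm."""
    return E_ESU * delta_B * lambda_dB / ME_C2

def radiation_regime(delta_B, lambda_dB):
    """a < 1: jitter (deflection across one coherence cell is < 1/gamma);
    a > 1: standard synchrotron."""
    return "jitter" if wiggler_parameter(delta_B, lambda_dB) < 1.0 else "synchrotron"
```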
However, it is easy to verify that in the vicinity of the shock $a\,\sim\,\overline\gamma_{\vert \rm sh}$, with $\overline\gamma_{\vert\rm sh}$ the average Lorentz factor of the supra-thermal electrons in the shock rest frame, suggesting that jitter signatures must be weak. The absence of jitter radiation in relativistic shocks has been demonstrated from first principles by computing the radiation from particles in PIC simulations~\citep{sironi_spitkovsky_09b}, which produce spectra entirely consistent with synchrotron radiation in the fields generated by the Weibel instability (Fig.~\ref{fig:radiation1}). The so-called ``jitter'' regime is recovered only by artificially reducing the strength of the fields, such that the parameter $a$ becomes much smaller than unity. So, if the GRB prompt emission results from relativistic unmagnetized shocks, it seems that resorting to the jitter regime is not a viable solution for the ``line of death'' puzzle. At frequencies above the peak, the synthetic spectra from PIC simulations show, somewhat unexpectedly, that the contribution of the upstream medium to the total emission is not negligible (Fig.~\ref{fig:radiation1}), yet it is omitted in most models. This causes the radiation spectrum to be flatter than the corresponding downstream spectrum, thus partly masking the contribution of downstream thermal particles. \begin{figure}[!tbp] \begin{center} \PNGfigure{\includegraphics[width=0.9\textwidth]{fig2.png}} \caption{\footnotesize{\textit{Ab initio} photon spectrum (thick solid lines) from the 2D PIC simulation of an unmagnetized (\emph{i.e.} $\sigma=0$) pair shock. Red lines are for head-on emission ($\hat{n}=\hat{x}$, along the shock direction of propagation), blue lines for edge-on emission ($\hat{n}=\hat{y}$, along the shock front). 
The slope at low frequencies is $2/3$ (black long-dashed lines), proving that the spectra are consistent with synchrotron radiation from a 2D particle distribution (in 3D, the predicted slope of 1/3 is obtained). By separating the relative contribution of downstream ($x\leq x_{\rm sh}$; thin solid lines) and upstream ($x\geq x_{\rm sh}$; dotted lines) particles, one sees that upstream particles contribute significantly to the total emission (thick solid lines), especially at high frequencies. Frequencies are in units of the plasma frequency $\omega_{\rm p}$.}} \label{fig:radiation1} \end{center} \end{figure} \subsection{Radiative Signatures of Pulsar Wind Nebulae}\label{sect:pwn} The spectrum of PWNe consists of two components, where the low energy component, most likely dominated by synchrotron, shows a cutoff at a few tens of MeV. The fact that synchrotron emission reaches these energies, despite the rapid synchrotron cooling, implies that particle acceleration in the nebula is an extremely fast process \citep{dejager_harding_92}, which challenges our understanding of particle acceleration in relativistic shocks. Around the equatorial plane of obliquely-rotating pulsars, the wind consists of toroidal stripes of opposite magnetic polarity, separated by current sheets of hot plasma. It is still a subject of active research whether the alternating stripes will dissipate their energy into particle heat ahead of the termination shock, or whether the wind remains dominated by Poynting flux till the termination shock \citep[][]{lyubarsky_kirk_01,kirk_sk_03,sironi_spitkovsky_11b}. 
If the stripes are dissipated far ahead of the termination shock, the upstream flow is weakly magnetized and the pulsar wind reaches a terminal Lorentz factor (in the frame of the nebula) $ \gamma_r\sim L_{sd}/(m_e c^2 \dot{N})\simeq3.7\times 10^{4} L_{sd,38.5}\dot{N}_{40}^{-1}~, $ where $L_{sd}\equiv 3 \times 10^{38}L_{sd,38.5}\unit{erg\,s^{-1}}$ is the spin-down luminosity of the Crab (the Crab Nebula is the prototype of PWNe), and $\dot{N}=10^{40}\dot{N}_{40}\unit{s^{-1}}$ is the particle flux entering the nebula, including the radio-emitting electrons \citep{bucciantini_11}. For electron-positron flows, as appropriate for pulsar winds, the maximum particle Lorentz factor in the downstream frame increases with time as $\gamma_{max}\sim 0.5 \,\gamma_r\, (\omega_{\rm p} t)^{1/2}$ (see Section \ref{PIC}). The plasma frequency $\omega_{\rm p}$ can be computed from the number density ahead of the termination shock, which is $n_{{\rm TS}}=\dot{N}/(4 \pi R_{\rm TS}^2 c)$, assuming an isotropic particle flux. Here, $R_{\rm TS}\equiv3\times10^{17}R_{\rm TS,17.5}\unit{cm}$ is the termination shock radius. Balancing the acceleration rate with the synchrotron cooling rate in the self-generated Weibel fields, the maximum electron Lorentz factor is \begin{eqnarray} \gamma_{sync,e}\simeq3.5\times10^{8}L_{sd,38.5}^{1/6}\dot{N}_{40}^{-1/3} \epsilon_{B,-2.5}^{-1/3}R_{\rm TS,17.5}^{1/3}~. \end{eqnarray} A stronger constraint comes from the requirement that the diffusion length of the highest energy electrons be smaller than the termination shock radius (\emph{i.e.} a confinement constraint). Equivalently, the acceleration time should be shorter than $R_{\rm TS}/c$, which yields the critical limit \begin{eqnarray} \gamma_{\mathit{conf,e}}\simeq1.9\times10^{7}L_{sd,38.5}^{3/4}\dot{N}_{40}^{-1/2}~, \end{eqnarray} which is generally more constraining than the cooling-limited Lorentz factor $\gamma_{sync,e}$. 
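For reference, these normalizations can be evaluated directly; the short sketch below (cgs units; the function names are ours, with the fiducial values taken from the text) reproduces the wind Lorentz factor and the confinement limit:

```python
ME_C2 = 8.187e-7   # electron rest energy m_e c^2 (erg)

def wind_lorentz_factor(L_sd=10**38.5, Ndot=1e40):
    """Terminal wind Lorentz factor gamma_r ~ L_sd / (m_e c^2 * Ndot),
    for a weakly magnetized (dissipated-stripe) wind."""
    return L_sd / (ME_C2 * Ndot)

def gamma_conf_e(L_sd385=1.0, Ndot40=1.0):
    """Confinement-limited electron Lorentz factor at the termination shock,
    gamma_conf,e ~ 1.9e7 * L_sd385^(3/4) * Ndot40^(-1/2)."""
    return 1.9e7 * L_sd385 ** 0.75 * Ndot40 ** -0.5
```

With the Crab-like fiducial values one recovers $\gamma_r\simeq3.9\times10^4$ (the quoted $3.7\times10^4$ corresponds to $L_{sd}=3\times10^{38}\unit{erg\,s^{-1}}$) and $\gamma_{conf,e}\simeq1.9\times10^7$.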
The corresponding synchrotron photons will have energies \begin{eqnarray} \!\!\!h \nu_{\mathit{conf,e}}&\simeq&0.17\,L_{sd,38.5}^{2}\dot{N}_{40}^{-1}\epsilon_{B,-2.5}^{1/2}R_{\rm TS,17.5}^{-1}\unit{keV} \end{eqnarray} which are apparently too small to explain the X-ray spectrum of the Crab, whose synchrotron component extends to energies beyond a few tens of MeV. We conclude that Fermi acceleration at the termination shock of PWNe is not a likely candidate for producing X-ray photons via the synchrotron process, and valid alternatives should be investigated. One possibility -- magnetic dissipation of the striped pulsar wind in and around the shock front itself -- has been extensively studied, with the conclusion that particle acceleration along extended X-lines formed by tearing of the current sheets may contribute to the flat particle distribution (with spectral index $s_\gamma\simeq1.5$) required to explain the far infrared and radio spectra of PWNe \citep[\emph{e.g.},][]{lyubarsky_03,sironi_spitkovsky_11b,sironi_spitkovsky_12}. Indeed, hard particle spectra are found to be a generic by-product of magnetic reconnection in the relativistic regime appropriate for pulsar winds \citep[][see also Kagan et al. (2015) in the present volume]{sironi_spitkovsky_14,sironi_15}. However, further acceleration to gamma-ray emitting energies by the Fermi process cannot occur in the shock that terminates the pulsar wind, if particle scattering depends only on the self-generated turbulence. Yet, the steady-state hard X-ray and gamma-ray spectra of PWNe do look like the consequences of Fermi acceleration -- particle distributions with $s_\gamma \simeq 2.4$ are implied by the observations. In this regard, we argue that the wind termination shock might form in a macroscopically turbulent medium, with the outer scale of the turbulence driven by the large-scale shear flows in the nebula \citep{komissarov_04,delzanna_04,camus_09}. 
If the large-scale motions drive a turbulent cascade to shorter wavelengths, back-scattering of the particles in this downstream turbulence, along with upstream reflection by the transverse magnetic field of the wind, might sustain Fermi acceleration to higher energies. Another ``external'' influence of reconnection on the shock structure, that might lead to particle acceleration to higher energies, may be connected to the accelerator behind the recently discovered gamma-ray flares in the Crab Nebula \citep{abdo_11}. Runaway acceleration of electrons and positrons at reconnection X-lines, a linear accelerator, may inject energetic beams into the shock, with the mean energy per particle approaching the whole open field line voltage, $\gtrsim 10^{16}\unit{V}$ in the Crab \citep{arons_12}, as required to explain the Crab GeV flares. This high-energy population can drive cyclotron turbulence when gyrating in the shock-compressed fields, and resonant absorption of the cyclotron harmonics can accelerate the electron-positron pairs in a broad spectrum, with maximum energy again comparable to the whole open field line voltage \citep{hoshino_92,amato_arons_06}. \section{Conclusions}\label{conc} There has been significant progress in our understanding of relativistic shocks in recent years, thanks to both analytical work and numerical simulations. The highly nonlinear problem of particle acceleration and magnetic field generation in shocks --- with the accelerated particles generating the turbulence that in turn mediates their acceleration --- is being tackled from first principles, assessing the parameter regime where particle acceleration in relativistic shocks is efficient. 
In this chapter, we have described the basic analytical formalism of test particle acceleration in relativistic shocks, leading to the ``universal'' energy slope $s_\gamma\simeq 2.2$ in the ultra-relativistic limit; we have unveiled the most relevant plasma instabilities that mediate injection and acceleration in relativistic shocks; and we have summarized recent results of large-scale PIC simulations concerning the efficiency and rate of particle acceleration in relativistic shocks, and the long-term evolution of the self-generated magnetic turbulence. Our novel understanding of particle acceleration and magnetic field generation in relativistic shocks has profound implications for the modeling of relativistic astrophysical sources, most importantly PWNe, GRBs, and AGN jets. \vspace{0.3in} {\bf Acknowledgments:} We gratefully thank Boaz Katz, Guy Pelletier, Anatoly Spitkovsky and Eli Waxman for their collaboration on many of the issues discussed here. U.K. is supported by the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n\textordmasculine ~293975, by an IAEC-UPBC joint research foundation grant, and by an ISF-UGC grant. M.L. acknowledges support by the ANR-14-CE33-0019 MACH project. \bibliographystyle{aps-nameyear}
\section{Introduction} General relativity predicts the existence of black holes, i.e. objects from which nothing can escape. Hawking found that incorporating quantum mechanics changes this understanding of black holes \cite{Hawking1,Hawking2}. He showed that black holes are not completely black: they radiate a thermal spectrum due to quantum effects. Furthermore, Hawking found a connection between the surface gravity of a black hole and its temperature. There are several ways to understand the mechanism by which black holes radiate. One of the latest approaches was given by Parikh and Wilczek \cite{Wilczek,Parikh}, where the radiation of black holes is described as a quantum tunneling of particles through the event horizon. This method is found to be simpler and more intuitive. In the Parikh-Wilczek (PW) method, sometimes called the radial null geodesic method, one first computes the across-horizon tunneling amplitude as the exponential of the imaginary part of the corresponding particle's action in the outgoing mode. Then the principle of detailed balance is used to connect the tunneling amplitude with the Boltzmann factor, from which the temperature can be obtained. In fact, the radial null geodesic method is not the only way to describe Hawking radiation as a tunneling process. There is an alternative, known as the Hamilton-Jacobi method \cite{Paddy}, where one solves the semiclassical equation of motion for the tunneled particles by using the Hamilton-Jacobi ansatz for the corresponding particle's wave function. Discussions of the Hamilton-Jacobi method applied to the Hawking temperature and black hole thermodynamics of various black holes can be found in \cite{Vanzo,Kim,Jiang,Banerjee-Majhi,Bibhas,RB}. However, most of them are confined to the tunneling of scalar fields, where one starts with the Klein-Gordon equation in curved spacetime and solves it by using the Hamilton-Jacobi ansatz. 
For static spacetimes, the tunneling mechanism for Hawking radiation has been extended to the case of spin $\tfrac{1}{2}$ fermions \cite{Mann-spin}, as well as photons and gravitinos \cite{Majhispin}. These works motivate us to study the Hawking radiation of time dependent black holes as the tunneling process of massless higher-spin particles. The work presented in this paper can also be considered an extension of our previous work \cite{Siahaan:2009qv}, where we studied the scalar tunneling that gives rise to the Hawking temperature of Vaidya black holes. Since we keep the spherically symmetric and time dependent spacetime metric in this paper quite general, i.e. a spacetime whose line element can be written as eq. (\ref{metric-gen-timedep}), the results presented here should be relevant to this whole class of spacetimes\footnote{For example those discussed in \cite{dynamicalBH}.}. The organization of the paper is as follows. In the next section, we review the radial null geodesic method for a general time dependent metric background. In section \ref{sec:Dirac}, we discuss the tunneling of massless Dirac particles across the time dependent black hole's horizon. By using the solutions of the massless Dirac fields together with the detailed balance principle, we obtain the Hawking temperature of the black holes under consideration. Interestingly, the Hawking temperature derived in this section is identical to the one obtained in the case of tunneled scalar fields \cite{Siahaan:2009qv}. The same prescription is repeated in section \ref{sec:photon}, where the starting point is the source-free Maxwell equation in curved space. Again, the Hamilton-Jacobi ansatz is used to solve the corresponding equation. After imposing the Lorentz gauge condition, we obtain the solution for the vector fields.
The same result appears in section \ref{sec:photon}: the obtained Hawking temperature does not differ from that derived in the scalar tunneling case \cite{Siahaan:2009qv}. In section \ref{sec:gravitino}, the tunneled object is the massless gravitino, i.e. a particle with spin $\tfrac{3}{2}$. We start from the Rarita-Schwinger equation in curved space, and the Hamilton-Jacobi ansatz helps us to get the solutions. As we would expect after performing the analysis in sections \ref{sec:Dirac} and \ref{sec:photon}, the Hawking radiation due to massless gravitino tunneling yields the same temperature as the one in the scalar case \cite{Siahaan:2009qv}. The conclusions are given in the last section. \section{Radial Null Geodesic Method}\label{sec:radial-null} In \cite{Wilczek}, Parikh and Wilczek presented a direct and short derivation of Hawking radiation as a tunneling process. To remove the coordinate singularity of the Schwarzschild coordinates, they employ the Painleve transformation, under which the Schwarzschild spacetime transforms to \begin{equation} ds^2 = - \left( {1 - \frac{{2M}}{r}} \right)dt^2 + 2\sqrt {\frac{{2M}}{r}} dtdr + dr^2 + r^2 d\Omega _2^2 \,, \end{equation} where $d\Omega_2 ^2=d\theta^2 + \sin^2\theta d\phi^2$ is the metric of the 2-sphere with unit radius. In getting the last line element from the Schwarzschild metric, the Schwarzschild time $t_s$ is transformed as \cite{Wilczek} \begin{equation}\label{Painleve} t_s = t - 2\sqrt {2Mr} - 2M\ln \left( {\frac{{\sqrt r - \sqrt {2M} }}{{\sqrt r + \sqrt {2M} }}} \right)\,. \end{equation} For a general spherically symmetric and static spacetime \begin{equation} \label{static-gen} ds^2 = - X\left( r \right)dt^2 + Y\left( r \right)^{ - 1} dr^2 + r^2 d\Omega _2^2 \,, \end{equation} the Painleve transformation (\ref{Painleve}) can be written as \begin{eqnarray} dt \to dt - \sqrt {\frac{{1 - Y}}{{XY}}} dr\label{eq:3}\,.
\end{eqnarray} In this paper we work out the tunneling prescription to explain the Hawking radiation of time dependent and spherically symmetric black holes. In general, such black hole solutions can be written as \cite{Weinberg} \begin{eqnarray}\label{metric-gen-timedep} ds^2 = - X\left( {t,r} \right)dt^2 + Y\left( {t,r} \right)^{ - 1} dr^2 + Z\left( {t,r} \right) d\Omega_2 ^2\label{eq:2}\,. \end{eqnarray} We study the class of time dependent spacetimes which have a coordinate singularity at the radius $r$ where $Y(t,r)=0$ and $X(t,r)=0$. In order to get rid of this coordinate singularity, we employ the ``generalized'' Painleve transformation (\ref{eq:3}), which in differential form reads \begin{equation} \label{Painleve-diff} dt_P = \frac{{\partial t_P}}{{\partial t }}dt + \frac{{\partial t_P}}{{\partial r}}dr\,. \end{equation} For the general spacetime metric (\ref{metric-gen-timedep}), both $\partial t_P / \partial t$ and $\partial t_P / \partial r$ would in general be functions of $t$ and $r$. However, by employing the relation that applies in the static case, i.e. $dt_P = dt - \sqrt {\frac{{1 - Y}}{{XY}}} dr$, we find that the mapping \begin{equation} \label{dtpdt} \frac{{\partial t_P }}{{\partial t}} = 1\, \end{equation} and \begin{equation} \label{dtpdr} \frac{{\partial t_P }}{{\partial r}} = - \sqrt {\frac{{1 - Y\left( {t,r} \right)}}{{X\left( {t,r} \right)Y\left( {t,r} \right)}}} \end{equation} removes the coordinate singularity in the general metric (\ref{metric-gen-timedep}). As in the static case, the right hand side of eq. (\ref{dtpdr}) is singular for vanishing $X(t,r)$ or $Y(t,r)$. This is a normal consequence of a transformation that removes the coordinate singularity.
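As a quick consistency check (ours, not part of the original derivation), one can verify that for the static Schwarzschild functions $X = Y = 1 - 2M/r$ the mapping (\ref{dtpdr}) reproduces the explicit Painleve relation (\ref{Painleve}). The sympy sketch below differentiates the explicit relation and compares it numerically with $-\sqrt{(1-Y)/(XY)}$ at sample points outside the horizon:

```python
import sympy as sp

# Check: for Schwarzschild, X = Y = 1 - 2M/r, the r-derivative of the
# explicit Painleve relation t_s = t_P - 2*sqrt(2Mr) - 2M*ln(...)
# should match dt_s/dr|_{t_P} = -sqrt((1 - Y)/(X*Y)).
t_P, r, M = sp.symbols('t_P r M', positive=True)

X = 1 - 2*M/r
Y = 1 - 2*M/r

t_s = t_P - 2*sp.sqrt(2*M*r) \
      - 2*M*sp.log((sp.sqrt(r) - sp.sqrt(2*M))/(sp.sqrt(r) + sp.sqrt(2*M)))

lhs = sp.diff(t_s, r)            # dt_s/dr at fixed Painleve time
rhs = -sp.sqrt((1 - Y)/(X*Y))    # the generalized mapping (dtpdr)

# numerical agreement outside the horizon (r > 2M)
for Mv, rv in [(1, 3), (2, 5), (1, 100)]:
    diff_val = (lhs - rhs).subs({M: Mv, r: rv})
    assert abs(float(diff_val.evalf())) < 1e-12
```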
Furthermore, to obtain an integrable $t_P$, the relation ${\textstyle{{\partial ^2 t_P } \over {\partial r\partial t}}} = 0$ implied by eq. (\ref{dtpdt}) requires the right hand side of eq. (\ref{dtpdr}) to be independent of the time $t$, i.e.\footnote{In the case of static spacetime, eq. (\ref{int.cond}) is automatically satisfied.} \begin{equation}\label{int.cond} 1 - Y\left( {t,r} \right) = X\left( {t,r} \right)Y\left( {t,r} \right)C\left( r \right)\,, \end{equation} where $C(r)$ is an arbitrary function of $r$. Hence, the set of Painleve transformations (\ref{dtpdt}) and (\ref{dtpdr}) works only for the class of time dependent spherically symmetric spacetimes whose metric functions satisfy the condition\footnote{In \ref{app.Vaidya}, we show a constraint for the Vaidya black hole mass function $m(t,r)$ which comes from this condition.} (\ref{int.cond}). Consequently, after employing the transformations (\ref{dtpdt}) and (\ref{dtpdr}), the metric (\ref{metric-gen-timedep}) can be written as \[ds^2 = - X(t,r)dt^2 + 2X(t,r)\sqrt{\frac{{1 - Y(t,r)}}{{X(t,r)Y(t,r)}}} dtdr \] \begin{eqnarray}\label{P1} ~~~~~~~~~~~~~~~~~~~~~~~+ dr^2 + Z\left( {t,r} \right) d\Omega_2 ^2\label{eq:4}\,. \end{eqnarray} It is understood that the coordinate singularity at $X(t,r)=Y(t,r)=0$ has been removed in (\ref{eq:4}): instead of being singular, the $g_{tr}$ component of the line element above is now indeterminate\footnote{We consider the case where, after employing L'Hospital's rule, one gets a non-singular form of $g_{tr}$ in the metric (\ref{metric-gen-timedep}). Otherwise, the method developed in this paper might not work, since $dr/dt$ in (\ref{eq:5}) could be singular.}.
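The integrability condition (\ref{int.cond}) is easy to test on concrete metric functions. Below is an illustrative sympy sketch (our own check, not from the derivation above): the static Schwarzschild functions satisfy it trivially, while an assumed toy mass function $m(t) = M + qt$ violates it:

```python
import sympy as sp

t, r, M, q = sp.symbols('t r M q', positive=True)

def C_of(X, Y):
    # the candidate C = (1 - Y)/(X*Y) from condition (int.cond)
    return sp.simplify((1 - Y)/(X*Y))

# Static Schwarzschild: C depends on r only, as required.
Xs = 1 - 2*M/r
C_static = C_of(Xs, Xs)
assert sp.diff(C_static, t) == 0

# Toy time dependent mass m(t) = M + q*t (an assumed example):
# C acquires t-dependence, so the condition fails.
m = M + q*t
Xd = 1 - 2*m/r
C_dynamic = C_of(Xd, Xd)
assert sp.simplify(sp.diff(C_dynamic, t)) != 0
```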
Accordingly, the radial null geodesics of the ``Painleve transformed'' metric (\ref{P1}) can be written as \begin{eqnarray} \frac{{dr}}{{dt}} = \sqrt {\frac{X(t,r)}{Y(t,r)}} \left( { \pm 1 - \sqrt {1 - Y(t,r)} } \right),\label{eq:5} \end{eqnarray} where the $ + ( - )$ sign denotes the outgoing (ingoing) geodesics. Moreover, for practical purposes one can Taylor expand the coefficients $X$ and $Y$ near the horizon, i.e. \begin{equation} \left. {X(t,r)} \right|_t \simeq \left. {X'(t,r_h)} \right|_t \left( {r - r_h } \right) + \left. {O\left( {\left( {r - r_h } \right)^2 } \right)} \right|_t \label{eq:6}\,, \end{equation} and \begin{equation} \left. {Y(t,r)} \right|_t \simeq \left. {Y'(t,r_h)} \right|_t \left( {r - r_h } \right) + \left. {O\left( {\left( {r - r_h } \right)^2 } \right)} \right|_t \label{eq:7}\,, \end{equation} where $r_h$ is the radius of the event horizon. By using the Taylor expansions (\ref{eq:6}) and (\ref{eq:7}), the outgoing null radial geodesic (\ref{eq:5}) can be approximated as \begin{eqnarray} \frac{{dr}}{{dt}} \simeq \frac{1}{2}\sqrt {X'\left( {r_h ,t} \right)Y'\left( {r_h ,t} \right)} \left( {r - r_h } \right).\label{eq:8} \end{eqnarray} Now we use the prescription of Parikh and Wilczek for obtaining the Hawking temperature in the picture of a particle tunneling through the event horizon, which is sometimes called the radial null geodesic or Parikh-Wilczek (PW) method. However, it was found by Chowdhury in \cite{Chowdhury:2006sk} that the expression ${\rm Im} S = \int p_r dr$ which appears in the original PW method is not canonically invariant. Therefore, we use the prescription of Akhmedov et al. \cite{Akhmedov:2008ru} for computing the tunneling rate, which reads \begin{equation}\label{rate-tunn} \Gamma \sim \exp \left[ { - \frac{{{\mathop{\rm Im}\nolimits} \oint {p_r dr} }}{\hbar }} \right]\,.
\end{equation} The term inside the square bracket above can then be computed as \begin{eqnarray}\label{eq:9} {\mathop{\rm Im}\nolimits} \oint {p_r dr} = {\mathop{\rm Im}\nolimits} \oint {\int\limits_0^{p_r } {dp_r '} dr} = {\mathop{\rm Im}\nolimits} \oint {\int\limits_0^H {\frac{{dH'}}{{{\textstyle{{dr} \over {dt}}}}}} dr}\,, \end{eqnarray} where we have made use of the Hamilton equation $dr/dt = dH/dp_r |_r $ relating the canonical variables $r$ and $p_r$ (in this case, the radius and its conjugate radial momentum). Differently from the discussions for a static black hole mass by several authors, e.g. Refs. \cite{Wilczek} and \cite{22}, the outgoing particle's energy must be time dependent for black holes with varying mass. Thus, the $dH'$ integration in (\ref{eq:9}) runs over all values of the outgoing particle's energy, say from zero to $ + E\left( t \right)$. By using the approximation (\ref{eq:8}), we can perform the integration (\ref{eq:9}). For the $dr$ integration, we perform a contour integration in the upper half complex plane to avoid the coordinate singularity at $r_h$. The result is \begin{eqnarray} {\mathop{\rm Im}\nolimits} \oint {p_r dr} = \frac{{4\pi E\left( t \right)}}{{\sqrt {X'\left( {r_h ,t} \right)Y'\left( {r_h ,t} \right)} }}.\label{eq:10} \end{eqnarray} Equating the tunneling rate (\ref{rate-tunn}) with the Boltzmann factor $\exp \left[ { - \beta E\left( t \right)} \right]$ for a system with time dependent energy, we obtain \begin{eqnarray}\label{Hawking-temp} T_H = \frac{{\hbar \sqrt {X'\left( {r_h ,t} \right)Y'\left( {r_h ,t} \right)} }}{{4\pi }}.\label{eq:11} \end{eqnarray} This temperature was also derived by Nielsen and Yeom in \cite{Nielsen:2008kd} using a slightly different variant of the PW method for a general time dependent background. At this point we see that the temperature (\ref{Hawking-temp}) does not depend on the spin of the tunneled particles.
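As a sanity check (ours, not in the original text), the general formula (\ref{Hawking-temp}) can be evaluated for the static Schwarzschild case $X = Y = 1 - 2M/r$, where it must reduce to the standard result $T_H = \hbar/8\pi M$. A short sympy sketch:

```python
import sympy as sp

r, M, hbar = sp.symbols('r M hbar', positive=True)

# Schwarzschild metric functions and horizon radius
X = 1 - 2*M/r
Y = 1 - 2*M/r
r_h = 2*M

# horizon derivatives X'(r_h), Y'(r_h)
Xp = sp.diff(X, r).subs(r, r_h)   # = 1/(2M)
Yp = sp.diff(Y, r).subs(r, r_h)

# eq. (Hawking-temp): T_H = hbar*sqrt(X'Y')/(4*pi)
T_H = hbar*sp.sqrt(Xp*Yp)/(4*sp.pi)
assert sp.simplify(T_H - hbar/(8*sp.pi*M)) == 0
```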
As long as the tunneled particle follows the radial null geodesic, which we know to be the path of massless particles, it will give the same contribution to the temperature measured by a detector at infinity, independent of its spin. However, the spins that we consider here are $0,1/2,1$ and $3/2$ only, since to the authors' knowledge there are no works on Hawking radiation in the tunneling picture which use particles of spin $\ge 2$ as the tunneled objects. In the next sections, we will reproduce the temperature (\ref{Hawking-temp}) by considering the tunneling of Dirac fermions, photons, and gravitinos from a spherically symmetric and time dependent black hole using the Hamilton-Jacobi method. \section{Massless Dirac Particle Tunneling}\label{sec:Dirac} In this section we study the Hawking radiation of time dependent black holes where the tunneled particle has spin $\tfrac{1}{2}$. We start by writing the action describing massless Dirac fields in curved spacetime \cite{Nakahara} \begin{equation}\label{Dirac-action} S_\psi = \int {d^4 x\sqrt { - g} \bar \Psi i\tilde \gamma ^\mu \left( {\partial _\mu + \frac{i}{2}g^{\gamma \nu } \Gamma _{\mu \nu }^\beta \Sigma _{\beta \gamma } } \right)\Psi } \,. \end{equation} The corresponding equation of motion can be written as \begin{equation} \label{Diraceqtn} {\tilde{\gamma}} ^\mu \nabla _\mu \Psi = 0\,, \end{equation} where \begin{equation} \nabla _\mu = {\partial _\mu + \frac{i}{2}g^{\gamma \nu } \Gamma _{\mu \nu }^\beta \Sigma _{\beta \gamma } }\,, \end{equation} and $\Sigma _{\alpha \beta } = \frac{i}{4}\left[ {\gamma _\alpha ,\gamma _\beta } \right]$.
We use ${\rm{diag}}(-,+,+,+)$ as the Minkowski metric tensor, and the flat spacetime Dirac matrices $\gamma^\alpha$ are \[\gamma ^0 = \left( {\begin{array}{*{20}c} i & 0 \\ 0 & { - i} \\ \end{array}} \right) \,\,,\,\, \gamma ^1 = \left( {\begin{array}{*{20}c} 0 & {\sigma ^3 } \\ {\sigma ^3 } & 0 \\ \end{array}} \right)\,,\] \begin{equation} \gamma ^2 = \left( {\begin{array}{*{20}c} 0 & {\sigma ^1 } \\ {\sigma ^1 } & 0 \\ \end{array}} \right) \,\,,\,\, \gamma ^3 = \left( {\begin{array}{*{20}c} 0 & {\sigma ^2 } \\ {\sigma ^2 } & 0 \\ \end{array}} \right)\,, \end{equation} where each entry denotes a $2\times 2$ block and $\sigma^i$ are the Pauli matrices. The flat Dirac matrices $\gamma^\alpha$ and the ``curved'' ones $\tilde{\gamma}^\mu$ are related by $\tilde{\gamma}^\mu = e^\mu_\alpha \gamma^\alpha$. For the general time dependent metric (\ref{metric-gen-timedep}) with $Z(t,r)=r^2$, the tetrads $e_\mu ^a$ can be expressed as \begin{equation} e_\mu ^a = \left( {\begin{array}{*{20}c} {\sqrt X } & 0 & 0 & 0 \\ 0 & {1/{\sqrt Y }} & 0 & 0 \\ 0 & 0 & r & 0 \\ 0 & 0 & 0 & {r\sin \theta } \\ \end{array}} \right)\label{tetrad} \end{equation} where $g_{\mu \nu } = e_\mu ^a e_\nu ^b \eta _{ab} $. Clearly $e^\mu_a$ is just the inverse of (\ref{tetrad}).
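It is straightforward to verify numerically that the flat-space Dirac matrices quoted above close the Clifford algebra $\{\gamma^a, \gamma^b\} = 2\eta^{ab}$ with $\eta = {\rm diag}(-,+,+,+)$. The following numpy sketch is our own check, not part of the paper:

```python
import numpy as np

# Pauli matrices and 2x2 blocks
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# the flat Dirac matrices as given in the text
g0 = np.block([[1j*I2, Z2], [Z2, -1j*I2]])
g1 = np.block([[Z2, s3], [s3, Z2]])
g2 = np.block([[Z2, s1], [s1, Z2]])
g3 = np.block([[Z2, s2], [s2, Z2]])

gammas = [g0, g1, g2, g3]
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# Clifford algebra: {gamma^a, gamma^b} = 2*eta^{ab} * identity
for a in range(4):
    for b in range(4):
        anti = gammas[a] @ gammas[b] + gammas[b] @ gammas[a]
        assert np.allclose(anti, 2*eta[a, b]*np.eye(4))
```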
Therefore, the Dirac matrices $\tilde \gamma ^\mu$ constructed from the tetrads (\ref{tetrad}) can be written as \begin{equation} \tilde \gamma ^t = \frac{i}{{\sqrt X }}\left( {\begin{array}{*{20}c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & { - 1} & 0 \\ 0 & 0 & 0 & { - 1} \\ \end{array}} \right)\,, \end{equation} \begin{equation} \tilde \gamma ^r = {\sqrt Y }\left( {\begin{array}{*{20}c} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & { - 1} \\ 1 & 0 & 0 & 0 \\ 0 & { - 1} & 0 & 0 \\ \end{array}} \right)\,, \end{equation} \begin{equation} \tilde \gamma ^\theta = \frac{1}{r}\left( {\begin{array}{*{20}c} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ \end{array}} \right)\,, \end{equation} \begin{equation} \tilde \gamma ^\phi = \frac{i}{{r\sin \theta }}\left( {\begin{array}{*{20}c} 0 & 0 & 0 & { - 1} \\ 0 & 0 & 1 & 0 \\ 0 & { - 1} & 0 & 0 \\ 1 & 0 & 0 & 0 \\ \end{array}} \right)\,. \end{equation} Now we employ the Hamilton-Jacobi ansatz for the spinor wave function describing the massless Dirac particles. To simplify the computation, we perform the tunneling analysis for the spin up particle $\Psi_u$ and the spin down one $\Psi_d$ separately\footnote{The reader should be familiar with the Dirac spinor as a direct sum $\Psi = \Psi _u \oplus \Psi _d $.}. Explicitly, the Hamilton-Jacobi ansatz for our spinors reads \begin{equation} \label{ansatzPsi-up} \Psi _u = \left( {\begin{array}{*{20}c} {\cal{A}} \\ 0 \\ {\cal{B}} \\ 0 \\ \end{array}} \right)\exp \left( {\frac{i}{\hbar }S_u } \right) \end{equation}\begin{equation}\label{ansatzPsi-down} \Psi _d = \left( {\begin{array}{*{20}c} 0 \\ {\cal{C}} \\ 0 \\ {\cal{D}} \\ \end{array}} \right)\exp \left( {\frac{i}{\hbar }S_d } \right)\,, \end{equation} where the functions $S_u$ and $S_d$ are expanded in powers of $\hbar$ as \begin{equation} \label{expandS-Dirac} S_k = S_{0k} + \hbar S_{1k} + \hbar ^2 S_{2k} + \hbar ^3 S_{3k} + \dots \end{equation} and the index $k$ labels the spin state of the Dirac particle under consideration, i.e.
$k=u$ or $k=d$. The coefficients ${\cal A}$, ${\cal B}$, ${\cal C}$, and ${\cal D}$ are in general functions of $t,r,\theta$ and $\phi$, as are the corresponding actions $S_u$ and $S_d$ for spin up and spin down respectively. As we understand from the Dirac formalism, (\ref{ansatzPsi-up}) is the wave function for the spin up fermion, and (\ref{ansatzPsi-down}) is for the spin down one. In the next step, we work out the analysis for spin up only, since an analogous procedure can be performed for spin down and produces the same Hawking temperature. For radial null geodesics, the corresponding particle action does not vary with respect to the $\theta$ and $\phi$ coordinates. Therefore, by inserting the spin up wave function (\ref{ansatzPsi-up}) into the equation (\ref{Diraceqtn}), we have a set of equations \begin{equation}\label{d1} \frac{{i{\cal{A}}}}{{\sqrt X }}\partial _t S_{0u} + {\sqrt Y }{\cal{B}}\partial _r S_{0u} + {\cal{O}}\left( \hbar \right) = 0\,, \end{equation} \begin{equation}\label{d3} {\sqrt Y }{\cal{A}}\partial _r S_{0u} - \frac{{i{\cal{B}}}}{{\sqrt X }}\partial _t S_{0u} + {\cal{O}}\left( \hbar \right) = 0\,. \end{equation} We focus only on the leading order terms in these equations, hence ${\cal{O}}(\hbar)$ can be neglected. Moreover, the equations (\ref{d1}) and (\ref{d3}) can be written in matrix form as \begin{equation} \left( {\begin{array}{*{20}c} {iX^{ - 1/2} \partial _t S_{0u} } & {Y^{1/2} \partial _r S_{0u} } \\ {Y^{1/2} \partial _r S_{0u} } & { - iX^{ - 1/2} \partial _t S_{0u} } \\ \end{array}} \right)\left( {\begin{array}{*{20}c} {\cal{A}} \\ {\cal{B}} \\ \end{array}} \right) \equiv {\tilde D}\left( {\begin{array}{*{20}c} {\cal{A}} \\ {\cal{B}} \\ \end{array}} \right) = 0\,. \end{equation} A nontrivial solution of the last equation exists only if the determinant of $\tilde{D}$ vanishes, which leads to \begin{equation} \label{actionSu-eq1} \left( {\partial _t S_{0u} } \right)^2 = XY\left( {\partial _r S_{0u} } \right)^2 \,.
\end{equation} We notice that equation (\ref{actionSu-eq1}) is just the equation for a scalar particle's action in curved space after employing the Hamilton-Jacobi ansatz and keeping only the leading terms \cite{Paddy,Siahaan:2009qv}. Moreover, the last equation can be rewritten as \begin{equation}\label{eqfn} \partial _r S_{0u} = \pm {\frac{1}{\sqrt{XY}}} \partial _t S_{0u} \end{equation} where the $-$ $(+)$ sign corresponds to the outgoing (ingoing) mode. A discussion of these modes can be found in Appendix 2.A of \cite{Majhi:thesis}. In \cite{Siahaan:2009qv}, we derived the solution of an equation like (\ref{eqfn}) for the tunneling of scalar particles from a time dependent black hole. Therefore, the techniques presented in \cite{Siahaan:2009qv} can be adopted to get an expression for $S_u$. We look for a general form of the solution for the action\footnote{A discussion of the Schrodinger equation with a time dependent Hamiltonian which supports this general form of the action is given in \ref{app.timedepSchrodinger}.} \begin{equation} \label{Su-gen} S_{0u} \left( {t,r} \right) = - \int\limits_0^t {E\left( {t'} \right)dt'} + \tilde S_{0u} \left( {t,r} \right)\,, \end{equation} where $E(t')$ stands for the time dependent energy of the Dirac particle which tunnels across the event horizon. The time dependence of the energy is understood, since the mass of the black hole decreases as time passes. Taking the derivative with respect to time on both sides of the last equation gives \begin{equation} \partial _t S_{0u} \left( {t,r} \right) = - E\left( t \right) + \partial _t \tilde S_{0u} \left( {t,r} \right)\,, \end{equation} and from the differentiation with respect to the radius $r$ we have \begin{equation} \label{drS-dtS} \partial _r S_{0u} \left( {t,r} \right) = \partial _r \tilde S_{0u} \left( {t,r} \right)\,.
\end{equation} The chain rule allows us to write \begin{equation} \label{eqtn-Su-afterchain} \frac{{d\tilde S_{0u} \left( {t,r} \right)}}{{dr}} = \frac{{\partial \tilde S_{0u} \left( {t,r} \right)}}{{\partial r}} + \frac{{\partial \tilde S_{0u} \left( {t,r} \right)}}{{\partial t}}\frac{{dt}}{{dr}}\,. \end{equation} In this section we do not use the Painleve transformation employed in the previous section. Therefore, the corresponding radial null geodesic in the background (\ref{metric-gen-timedep}) is \begin{equation} \label{radial-null} \frac{{dr}}{{dt}} = \pm \sqrt {XY}\,. \end{equation} The $+$ $(-)$ sign on the right hand side of (\ref{radial-null}) refers to the geodesic of outgoing (ingoing) null particles respectively. Combining equations (\ref{eqtn-Su-afterchain}) and (\ref{radial-null}) gives us \begin{equation} \frac{{\partial \tilde S_{0u} \left( {t,r} \right)}}{{\partial r}} = \frac{{d\tilde S_{0u} \left( {t,r} \right)}}{{dr}} \mp \frac{1}{\sqrt {XY}} \frac{{\partial \tilde S_{0u} \left( {t,r} \right)}}{{\partial t}} \end{equation} with the $-$ $(+)$ sign referring to the outgoing (ingoing) spin up Dirac particle. Recall that for the dynamics of a particle with Hamiltonian $H$ and action $S$, one can show the relation \cite{Goldstein} \begin{equation}\label{S-H} \frac{{\partial S}}{{\partial t}} + H = 0\,. \end{equation} The last equation also emerges in the semiclassical treatment, for example in the WKB approximation for the one dimensional Schrodinger equation \begin{equation}\label{Schro-wkb} i\hbar \frac{{\partial \Psi }}{{\partial t}} = - \frac{{\hbar ^2 }}{{2m}}\frac{{\partial ^2 \Psi }}{{\partial x^2 }} + V\left( x \right)\Psi \,, \end{equation} where we use the ansatz $\Psi = e^{iS/\hbar } $ and $S$ is the classical action of the particle associated with the wave function $\Psi$. From eq.
(\ref{Schro-wkb}), we may observe that the partial derivative of the action with respect to time is a negative quantity for a particle with positive energy, since the eigenvalue of $H$ must be positive. Therefore, the $(-)$ sign in (\ref{eqfn}) belongs to the outgoing particle, \begin{equation}\label{dtS0-minXYdrS0} \partial _t S_{0u} = - \sqrt {XY} \partial _r S_{0u} \,, \end{equation} since the momentum $p_r = \partial_r S_0$ is positive. Correspondingly, the one with the $(+)$ sign refers to the ingoing particle, \begin{equation}\label{dtS0-plusXYdrS0} \partial _t S_{0u} = \sqrt {XY} \partial _r S_{0u} \,. \end{equation} Then we use equations (\ref{drS-dtS}), (\ref{eqtn-Su-afterchain}), (\ref{dtS0-minXYdrS0}), and (\ref{dtS0-plusXYdrS0}) to get \begin{equation} \frac{{d\tilde S_{0u} \left( {t,r} \right)}}{{dr}} = \pm {\frac{E\left( t \right)}{\sqrt{XY}}} \end{equation} whose solution reads \begin{equation} \tilde S_{0u} \left( {t,r} \right) = \pm E\left( t \right)\int { {\frac{dr}{\sqrt{XY}}}} \,. \end{equation} The $+$ and $-$ signs in the last equation belong to the outgoing and ingoing particle respectively. Accordingly, a solution for the action (\ref{Su-gen}) reads \begin{equation}\label{S0usol-Dirac} S_{0u} \left( {t,r} \right) = - \int\limits_0^t {E\left( {t'} \right)dt'} \pm \frac{{i\pi E\left( t \right)}}{{\sqrt {X'Y'} }}\,.
\end{equation} Plugging the solution (\ref{S0usol-Dirac}) into (\ref{ansatzPsi-up}) gives us \begin{equation} \Psi _{u,in} = \left( {\begin{array}{*{20}c} {\cal{A}} \\ 0 \\ {\cal{B}} \\ 0 \\ \end{array}} \right)\exp \left( { \frac{i}{\hbar }\left( { - \int\limits_0^t {E\left( {t'} \right)dt'} - \frac{{i\pi E\left( t \right)}}{{\sqrt {X'Y'} }}} \right)} \right)\,, \end{equation} and \begin{equation} \Psi _{u,out} = \left( {\begin{array}{*{20}c} {\cal{A}} \\ 0 \\ {\cal{B}} \\ 0 \\ \end{array}} \right)\exp \left( { \frac{i}{\hbar }\left( { - \int\limits_0^t {E\left( {t'} \right)dt'} + \frac{{i\pi E\left( t \right)}}{{\sqrt {X'Y'} }}} \right)} \right). \end{equation} Setting the ingoing probability $P_{in} = \left| {\Psi _{u,in} } \right|^2 $ to unity, i.e. demanding that all fields that come close to the black hole are absorbed, yields \begin{equation} \int\limits_0^t {E\left( {t'} \right)dt'} = - \frac{{i\pi E\left( t \right)}}{{\sqrt {X'Y'} }}\,. \end{equation} Therefore the outgoing probability can be written as \begin{equation} P_{out} = \left| {\Psi _{u,out} } \right|^2 = \exp \left( { - \frac{{4\pi E\left( t \right)}}{{\hbar \sqrt {X'Y'} }}} \right)\,. \end{equation} The ``detailed balance'' principle tells us that \[P_{out} = e^{ - \beta E} P_{in} \] which then allows us to write the Hawking temperature of the general time dependent black hole (\ref{metric-gen-timedep}) as \begin{equation}\label{Hawking-temp-Dirac} T_H = \frac{{\hbar \sqrt {X'\left( {t,r_h } \right)Y'\left( {t,r_h } \right)} }}{{4\pi }}\,. \end{equation} The Hawking temperature (\ref{Hawking-temp-Dirac}) is interpreted as the temperature measured by a detector at infinity, where the radiation consists of massless quantum particles with spin $\tfrac{1}{2}$ moving radially outward from the black hole. \section{Photon Tunneling}\label{sec:photon} In \cite{Majhispin}, Majhi and Samanta discuss the tunneling of photons and gravitinos which yields the Hawking radiation from a static black hole.
One of the conclusions of their work is that Hawking radiation in the form of photon and gravitino tunneling yields a Hawking temperature identical to the one computed in the case of scalar tunneling. In this section and the next one, we show that the same conclusion holds for a time dependent black hole. We start from the action for Maxwell fields in curved spacetime, \begin{equation} S = - \frac{1}{4}\int {\sqrt { - g} F_{\mu \nu } F^{\mu \nu } } d^4 x\,. \end{equation} Varying the action above with respect to $A_\mu$, we obtain \begin{equation}\label{Maxwelleqtn} \nabla _\mu F^{\mu \nu } = 0\,, \end{equation} which is the Maxwell equation in the absence of the source $J^\nu$. Following Majhi and Samanta \cite{Majhispin}, we use the Hamilton-Jacobi ansatz for the vector field \begin{equation} \label{Amu} A^\mu \sim k^\mu e^{ \frac{i}{\hbar }S\left( {t,r,\theta ,\phi } \right)} \,, \end{equation} where $k^\mu$ is the polarization vector, which is independent of the spacetime coordinates. As usual, the action is expanded as \begin{equation} \label{S-expand} S\left( {t,r,\theta ,\phi } \right) = \sum\limits_{i = 0}^\infty {\hbar ^i S_i \left( {t,r,\theta ,\phi } \right)} \end{equation} just as we did in the spinor case (\ref{expandS-Dirac}). In \cite{Majhispin}, the polarization vector $k^\mu$ is also expanded in $\hbar$, since the authors discuss the quantum corrections which come from the higher order terms in $\hbar$ of $S(t,r,\theta,\phi)$ and $k^\mu$. However, since we are not interested in pursuing such quantum corrections here, the polarization vector $k^\mu$ in (\ref{Amu}) is kept at its semiclassical value only.
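It may help to recall where the imaginary part $\pm i\pi E(t)/\sqrt{X'Y'}$ of the action, already encountered in section \ref{sec:Dirac}, comes from: near the horizon $X \simeq X'(r-r_h)$ and $Y \simeq Y'(r-r_h)$, so the integrand $E/\sqrt{XY}$ develops a simple pole whose half-residue supplies $i\pi E/\sqrt{X'Y'}$. A minimal sympy sketch of this pole computation (our illustration; `Xp`, `Yp` stand in for the horizon derivatives $X'$, $Y'$):

```python
import sympy as sp

r, r_h, E, Xp, Yp = sp.symbols('r r_h E Xp Yp', positive=True)

# near-horizon integrand E/sqrt(X*Y) with X ~ Xp*(r - r_h) and
# Y ~ Yp*(r - r_h): a simple pole at r = r_h (taking r > r_h)
near = E/(sp.sqrt(Xp*Yp)*(r - r_h))

# residue at the horizon
res = sp.residue(near, r, r_h)
assert sp.simplify(res - E/sp.sqrt(Xp*Yp)) == 0

# a semicircle above the pole contributes i*pi*residue, giving the
# imaginary part pi*E/sqrt(Xp*Yp) of the action
im_S = sp.pi*res
assert sp.simplify(im_S - sp.pi*E/sp.sqrt(Xp*Yp)) == 0
```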
Plugging the ansatz (\ref{Amu}) for the gauge fields into the equation (\ref{Maxwelleqtn}), which alternatively can be expressed as \begin{equation} \label{Maxwelleqtn2} \partial _\mu F^{\mu \nu } + \Gamma _{\tau \mu }^\mu F^{\tau \nu } + \Gamma _{\tau \mu }^\nu F^{\mu \tau } = 0\,, \end{equation} one gets \begin{equation} \label{eqtnnofix} \left( {k^\nu \partial ^\mu S_0 - k^\mu \partial ^\nu S_0 } \right)\partial _\mu S_0 = 0\,. \end{equation} In getting the last equation, we have taken the limit $\hbar \to 0$ in equation (\ref{Maxwelleqtn2}). We choose to work in the Lorentz gauge, \begin{equation} \label{Lorentz} \nabla _\mu A^\mu = 0\,, \end{equation} which after plugging in the gauge fields (\ref{Amu}) gives \begin{equation} \label{Lorentz2} k^\mu \partial _\mu S_0 = 0\,. \end{equation} Again we have employed the limit $\hbar \to 0$ in obtaining equation (\ref{Lorentz2}). In this Lorentz gauge, equation (\ref{eqtnnofix}) reduces to \begin{equation} \label{eqtnfixed} k^\nu \left( {\partial ^\mu S_0 } \right)\left( {\partial _\mu S_0 } \right) = 0\,. \end{equation} Working in the $t$ and $r$ sectors only of the spacetime under consideration allows us to write (\ref{eqtnfixed}) as \begin{equation} g^{tt} \left( {\partial _t S_0 } \right)^2 + g^{rr} \left( {\partial _r S_0 } \right)^2 = 0 \,. \end{equation} We find that the last equation is equivalent to (\ref{eqfn}) once we replace $S_0$ with $S_{0u}$. This is clear, since the background spacetime in which the vector probes fall into the black hole is the same, i.e. the metric (\ref{metric-gen-timedep}). Therefore, the solution of $S(t,r,\theta,\phi)$ for the vector fields $A_\mu$ reads \begin{equation} S_{0} \left( {t,r} \right) = - \int\limits_0^t {E\left( {t'} \right)dt'} \pm \frac{{i\pi E\left( t \right)}}{{\sqrt {X'Y'} }}\,.
\end{equation} Accordingly, the ingoing and outgoing solutions for the vector fields read \begin{equation} A_{{\rm{in}}}^\mu \sim k^\mu \exp \left( { \frac{i}{\hbar }\left( {-\int\limits_0^t {E\left( {t'} \right)dt'} - i\pi \frac{{E\left( t \right)}}{{\sqrt {X'Y'} }}} \right)} \right) \end{equation} and \begin{equation} A_{{\rm{out}}}^\mu \sim k^\mu \exp \left( {\frac{i}{\hbar }\left( {-\int\limits_0^t {E\left( {t'} \right)dt'} + i\pi \frac{{E\left( t \right)}}{{\sqrt {X'Y'} }}} \right)} \right) \end{equation} respectively. The unit incoming probability $P_{{\rm{in}}} = |A^\mu_{\rm{in}}|^2$, the relation between the incoming and outgoing probabilities \begin{equation} P_{{\rm{out}}} = P_{{\rm{in}}} \exp \left( { - \frac{{4\pi E\left( t \right)}}{{\hbar \sqrt {X'Y'} }}} \right)\,, \end{equation} and the ``detailed balance'' principle $P_{{\rm{out}}} = P_{{\rm{in}}} \exp{(-\beta E(t))}$ yield the Hawking temperature \begin{equation} T_H = \frac{{\hbar \sqrt {X'\left( {t,r_h } \right)Y'\left( {t,r_h } \right)} }}{{4\pi }}\,. \end{equation} One observes that the temperature in the last equation equals the one computed in the Dirac particle case (\ref{Hawking-temp-Dirac}) and in the scalar case \cite{Siahaan:2009qv}. \section{Gravitino tunneling}\label{sec:gravitino} We start with the action of the massless Rarita-Schwinger fields $\Psi_\alpha$ in curved spacetime, \begin{equation}\label{RS-action} S_\psi = \int {d^4 x\sqrt { - g} \bar \Psi^\alpha i\tilde \gamma ^\mu \left( {\partial _\mu + \frac{i}{2}g^{\gamma \nu } \Gamma _{\mu \nu }^\beta \Sigma _{\beta \gamma } } \right)\Psi_\alpha } \,, \end{equation} where $\Sigma_{\beta\gamma}$ and the Dirac matrices $\tilde{\gamma}^\mu$ in the action above are those used in the Dirac action (\ref{Dirac-action}).
Accordingly, the action (\ref{RS-action}) tells us that the corresponding equation of motion for $\Psi_\alpha$ reads \begin{equation} \label{RSeqtn} {\tilde{\gamma}} ^\mu \nabla _\mu \Psi_\alpha = 0\,, \end{equation} which is the massless Rarita-Schwinger equation in curved space. It looks like the Dirac equation, with the Dirac spinor $\Psi$ replaced by the vector-spinor $\Psi_\mu$. The vector-spinor $\Psi_\mu$ has sixteen degrees of freedom, eight of which are removed by the two additional constraints $\tilde{\gamma}^\mu\Psi_\mu = 0$ and $\nabla^\mu\Psi_\mu = 0$. The Hamilton-Jacobi ansatz for the vector-spinor $\Psi_\mu$ reads \begin{equation} \Psi _{\left( u \right)\mu } = \left( {\begin{array}{*{20}c} {{\cal{A}}_\mu } \\ 0 \\ {{\cal{B}}_\mu } \\ 0 \\ \end{array}} \right)\exp \left( { \frac{i}{\hbar }S_{\left( u \right)} } \right)\,, \end{equation} and \begin{equation} \Psi _{\left( d \right)\mu } = \left( {\begin{array}{*{20}c} 0 \\ {{\cal{C}}_\mu } \\ 0 \\ {{\cal{D}}_\mu } \\ \end{array}} \right)\exp \left( { \frac{i}{\hbar }S_{\left( d \right)} } \right)\,, \end{equation} where $\Psi _{\left( u \right)\mu }$ and $\Psi _{\left( d \right)\mu }$ are the Rarita-Schwinger fields with spin projections $+3/2$ and $-3/2$ respectively. In the background (\ref{metric-gen-timedep}), equation (\ref{RSeqtn}) for radial geodesics reads \begin{equation}\label{dRS1} \frac{{i{\cal{A}}_\mu}}{{\sqrt X }}\partial _t S_{0(u)} + {\sqrt Y }{\cal{B}}_\mu\partial _r S_{0(u)} + {\cal{O}}\left( \hbar \right) = 0\,, \end{equation} \begin{equation}\label{dRS3} {\sqrt Y }{\cal{A}}_\mu\partial _r S_{0(u)} - \frac{{i{\cal{B}}_\mu}}{{\sqrt X }}\partial _t S_{0(u)} + {\cal{O}}\left( \hbar \right) = 0\,. \end{equation} The action $S_{0(u)}$ is understood as the zeroth order term in the action expansion $S_{\left( u \right)} = \sum\limits_{i = 0}^\infty {\hbar ^i S_{i\left( u \right)} } $.
Moreover, the last two equations are very close to (\ref{d1}) and (\ref{d3}), since the operator acting on the massless Rarita-Schwinger fields in (\ref{RSeqtn}) is the same as the one acting on the massless Dirac field in (\ref{Diraceqtn}). Analogous to the technique applied to the Dirac fermion in the previous section, we rewrite the equations (\ref{dRS1}) and (\ref{dRS3}) in the form \[ \left( {\begin{array}{*{20}c} {iX^{ - 1/2} \partial _t S_{0(u)} } & {Y^{ - 1/2} \partial _r S_{0(u)} } \\ {Y^{ - 1/2} \partial _r S_{0(u)} } & { - iX^{ - 1/2} \partial _t S_{0(u)} } \\ \end{array}} \right)\left( {\begin{array}{*{20}c} {\cal{A}}_\mu \\ {\cal{B}}_\mu \\ \end{array}} \right) \equiv {\tilde D}\left( {\begin{array}{*{20}c} {\cal{A}}_\mu \\ {\cal{B}}_\mu \\ \end{array}} \right) = 0\,, \] whose solution for the action $S_{0(u)}$ can finally be found as \begin{equation}\label{S0usol-RS} S_{0(u)} \left( {t,r} \right) = - \int\limits_0^t {E\left( {t'} \right)dt'} \pm \frac{{i\pi E\left( t \right)}}{{\sqrt {X'Y'} }}\,. \end{equation} The procedure needed to obtain the solution (\ref{S0usol-RS}) is obvious, since we deal with the same equations as in section \ref{sec:Dirac}. In discussing the gravitino, we simply replace the complex valued functions ${\cal{A}}$ and ${\cal{B}}$ with the vectors ${\cal{A}}_\mu$ and ${\cal{B}}_\mu$, which are also complex valued.
The solutions for the spin-$\tfrac{3}{2}$ fields can then be written as \begin{equation} \Psi _{\mu (u,in)} = \left( {\begin{array}{*{20}c} {\cal{A}}_\mu \\ 0 \\ {\cal{B}}_\mu \\ 0 \\ \end{array}} \right)\exp \left( { \frac{i}{\hbar }\left( { - \int\limits_0^t {E\left( {t'} \right)dt'} - \frac{{i\pi E\left( t \right)}}{{\sqrt {X'Y'} }}} \right)} \right)\,, \end{equation} and \begin{equation} \Psi _{\mu (u,out)} = \left( {\begin{array}{*{20}c} {\cal{A}}_\mu \\ 0 \\ {\cal{B}}_\mu \\ 0 \\ \end{array}} \right)\exp \left( { \frac{i}{\hbar }\left( { - \int\limits_0^t {E\left( {t'} \right)dt'} + \frac{{i\pi E\left( t \right)}}{{\sqrt {X'Y'} }}} \right)} \right)\,. \end{equation} Again, using the ``detailed balance'' principle for the relation between the outgoing and ingoing gravitino probabilities, one gets \begin{equation} T_H = \frac{{\hbar \sqrt {X'\left( {t,r_h } \right)Y'\left( {t,r_h } \right)} }}{{4\pi }}\,, \end{equation} as the Hawking temperature of the dynamical black holes (\ref{metric-gen-timedep}), here obtained from massless gravitino tunneling. We observe that the Hawking temperature due to the massless gravitino tunneling is equal to the temperatures computed in the last two sections. \section{Conclusion and Discussions}\label{sec:conclusion} We have analyzed the Hawking radiation in the form of Dirac fermion, photon, and gravitino tunneling across the event horizon of time dependent black holes. The resulting Hawking temperatures are identical to the one obtained in the scalar tunneling case. The results lead to the conclusion that the Hawking temperature obtained in the tunneling method is independent of the spin of the tunneled particles. We confirm that for a time dependent and spherically symmetric black hole whose metric has the form (\ref{metric-gen-timedep}), the Hawking temperature is independent of the spin of the tunneled particle, as in the static case.
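The detailed-balance step used in each of the tunneling computations above can be sketched symbolically. The snippet below is a check of the algebra only (with $XY$ standing for the product $X'Y'$ evaluated at the horizon), not part of the original derivation: equating the exponent of $P_{\rm out}/P_{\rm in}$ with $-\beta E$ and inverting $\beta$ reproduces the Hawking temperature.

```python
import sympy as sp

# Symbolic check of the detailed-balance step: matching the exponent of
# P_out/P_in = exp(-4*pi*E/(hbar*sqrt(X'Y'))) with -beta*E and inverting
# beta gives T_H = hbar*sqrt(X'Y')/(4*pi).
E, hbar, XY = sp.symbols("E hbar XY", positive=True)  # XY stands for X'Y' at r_h
beta = sp.symbols("beta", positive=True)

beta_sol = sp.solve(sp.Eq(4 * sp.pi * E / (hbar * sp.sqrt(XY)), beta * E), beta)[0]
T_H = sp.simplify(1 / beta_sol)

assert sp.simplify(T_H - hbar * sp.sqrt(XY) / (4 * sp.pi)) == 0
```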
It is interesting to note that the PW method presented in section \ref{sec:radial-null} works only for a limited class of cases, i.e. spacetime metrics whose metric functions satisfy the condition (\ref{int.cond}). The Hawking temperature obtained in this method is confirmed by the result derived via the Hamilton-Jacobi method using higher-spin tunneling in the succeeding sections, as well as by scalar tunneling \cite{Siahaan:2009qv}. In the Hamilton-Jacobi method we do not make use of the Painleve transformation, hence there is no integrability condition (\ref{int.cond}) to be fulfilled. Such a condition also does not appear in \cite{Nielsen:2008kd}, and yet the same result for the Hawking temperature (\ref{Hawking-temp}) is achieved by the authors. Hence, presumably there are other transformations which render the line element (\ref{metric-gen-timedep}) regular at the event horizon, so that the PW method can still be performed. Clearly, each of these transformations will have a set of integrability conditions which single out a class of spacetimes for which the computation of the Hawking temperature using the PW method applies. Finding an alternative to the generalized Painleve transformation (\ref{dtpdt}) and (\ref{dtpdr}) which renders the metric (\ref{metric-gen-timedep}) regular at the event horizon would be challenging, and we address this issue in our future work. In their seminal paper \cite{Wilczek}, Parikh and Wilczek included the back reaction effect in their analysis of Hawking radiation in the tunneling picture. Hence, discussing a correction to the entropy of time dependent black holes coming from the back reaction effect should also be possible. For the case of the Vaidya black hole entropy, Zhang et al \cite{Zhang:2007ar} carried out an analysis which takes the back reaction effect into account, where the Vaidya spacetime is written in Eddington-Finkelstein coordinates rather than Schwarzschild-like ones.
We note that Zhang et al consider the particle energy to be time independent, unlike the time dependent case presented in this paper. We leave the analysis of the entropy of time dependent black holes with the back reaction effect taken into account, in Schwarzschild-like coordinates, for a future project. \section*{Acknowledgments} I thank Profs. Paulus Tjiang and Triyanta for useful discussions.
\section{Introduction} An important task in science and technology is to find the highest achievable precision in measuring and estimating parameters of interest with given resources, and to design schemes that reach it. Quantum metrology, which exploits quantum mechanical effects to achieve high precision, has gained increasing attention in recent years\cite{Giovannetti2011, wineland,cavesprd,rosetta,VBRAU92-1,GIOV04,Fujiwara2008,Escher2011,Tsang2013,Rafal2012,durkin,Knysh2014,Jan2013,Rafal2014,Alipour2014,Chin2012,Tsang2011,Berry2013,Berry2015}, where a typical situation is to estimate the value of a continuous parameter $x$ encoded in some quantum state $\rho_x$ of the system. To estimate the value, one needs to first perform measurements on the system; in the most general form these are described by Positive Operator Valued Measures (POVMs), $\{E_y\}$, which provide a distribution for the measurement results, $p(y|x)=Tr(E_y\rho_x)$. According to the Cram\'{e}r-Rao bound in statistical theory\cite{HELS67,HOLE82,CRAM46,Rao}, the standard deviation of any unbiased estimator of $x$, based on the measurement results $y$, is bounded below in terms of the Fisher information: $\delta \hat{x}\geq \frac{1}{\sqrt{I(x)}},$ where $\delta \hat{x}$ is the standard deviation of the estimator of $x$, and $I(x)$ is the Fisher information of the measurement results, $I(x)=\sum_y p(y|x)(\frac{\partial \ln p(y|x)}{\partial x})^2$\cite{Fisher}. The Fisher information can be further optimized over all POVMs, which gives \begin{equation} \label{eq:J} \delta\hat{x}\geq\frac{1}{\sqrt{\max_{E_y}I(x)}}=\frac{1}{\sqrt{J(\rho_x)}}, \end{equation} where the optimized value $J(\rho_x)$ is called the quantum Fisher information\cite{HELS67, HOLE82,BRAU94,BRAU96}.
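As a minimal numerical illustration of the Fisher information entering the Cram\'{e}r-Rao bound (an illustrative measurement model assumed for this sketch, not taken from the text): for a qubit phase $x$ read out with outcome probabilities $p(0|x)=\cos^2(x/2)$ and $p(1|x)=\sin^2(x/2)$, the Fisher information evaluates to $I(x)=1$ for every $x$.

```python
import numpy as np

# Classical Fisher information I(x) = sum_y p(y|x) (d ln p(y|x)/dx)^2 for the
# binary distribution p(0|x)=cos^2(x/2), p(1|x)=sin^2(x/2); an illustrative
# measurement model (assumption), for which I(x) = 1 at every x.
def fisher_information(x, dx=1e-6):
    p = lambda x: np.array([np.cos(x / 2) ** 2, np.sin(x / 2) ** 2])
    dp = (p(x + dx) - p(x)) / dx          # finite-difference derivative of p(y|x)
    return float(np.sum(dp ** 2 / p(x)))

for x in (0.3, 0.8, 2.0):
    assert abs(fisher_information(x) - 1.0) < 1e-4
```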
If the above process is repeated $n$ times, then the standard deviation of the estimator is bounded by $\delta \hat{x}\geq \frac{1}{\sqrt{nJ(\rho_x)}}.$ To achieve the best precision, we can further optimize the encoding procedure $x\rightarrow \rho_x$ so that $J(\rho_x)$ is maximized. Typically the encoding is achieved by preparing the probe in some initial state $\rho_0$ and then letting it evolve under a dynamics which contains the parameter of interest, $\rho_0\xrightarrow{\phi_x} \rho_x$. Usually $\phi_x$ is determined by a given physical dynamics and is then fixed, while the initial state is up to our choice and can be optimized. A pivotal task in quantum metrology is to find the optimal initial state $\rho_0$ and the corresponding maximum quantum Fisher information under any given evolution $\phi_x$. When $\phi_x$ is unitary, GHZ-type states are known to be optimal, which leads to the Heisenberg limit. However, when $\phi_x$ is noisy, such states are in general no longer optimal. Finding the optimal probe states and the corresponding highest precision limit under general dynamics has been the main quest of the field. Recently, using the purification approach, much progress has been made on developing systematic methods of calculating the highest precision limit\cite{Fujiwara2008,Escher2011,Tsang2013,Rafal2012,Jan2013,Rafal2014}; however, how to actually achieve the highest precision limit is still largely unknown, as these methods do not provide ways to obtain the optimal probe states. Another restriction of these methods\cite{Fujiwara2008,Escher2011} is that they usually restrict to smooth representations of the Kraus operators, which is not intrinsic to the dynamics.
In this article, we develop a general framework for quantum parameter estimation which relates the ultimate precision limit directly to the geometrical properties of the underlying dynamics; this provides systematic methods for computing the ultimate precision limit and the optimal probe states without additional assumptions. The framework also provides analytical formulas for the precision limit with arbitrary pure probe states, which spares the need for the optimization over equivalent Kraus operators required in previous studies\cite{Fujiwara2008,Escher2011}. We further demonstrate the power of the framework by deriving sufficient conditions on when ancillary systems are not useful for improving the precision limit. \section{Ultimate precision limit} The precision limit of measuring $x$ from a set of quantum states $\rho_x$ is determined by the distinguishability between $\rho_x$ and its neighboring states $\rho_{x+dx}$\cite{BRAU94,Wootters1981}. This is best seen if we expand the Bures distance between the neighboring states $\rho_x$ and $\rho_{x+dx}$ up to the second order of $dx$\cite{BRAU94}: \begin{equation} \label{eq:BJ} d^2_{Bures}(\rho_x,\rho_{x+dx})=\frac{1}{4}J(\rho_x)dx^2, \end{equation} where $d_{Bures}(\rho_1,\rho_2)=\sqrt{2-2F_B(\rho_1,\rho_2)}$; here $F_B(\rho_1,\rho_2)=Tr\sqrt{\rho_1^{\frac{1}{2}}\rho_2\rho_1^{\frac{1}{2}}}$ is the fidelity between two states. Thus maximizing the quantum Fisher information is equivalent to maximizing the Bures distance, which is in turn equivalent to minimizing the fidelity between $\rho_x$ and $\rho_{x+dx}$. If the evolution is given by $\phi_x$, $\rho_x=\phi_x(\rho)$ and $\rho_{x+dx}=\phi_{x+dx}(\rho)$, the problem is then equivalent to finding $\min_{\rho}F_B[\phi_x(\rho),\phi_{x+dx}(\rho)]$ and the optimal $\rho$ that achieves the minimum. We now develop tools to solve this problem for both unitary and non-unitary dynamics.
Given two general evolutions $\phi_1$ and $\phi_2$ of the same dimension, we define the Bures angle between them as $B(\phi_1,\phi_2)=\max_{\rho}\cos^{-1}[F_B(\phi_1(\rho),\phi_2(\rho))]$; this generalizes the Bures angle on quantum states\cite{Bures}. From the definition of the Bures distance it is easy to see that $\max_{\rho} d^2_{Bures}[\phi_x(\rho),\phi_{x+dx}(\rho)]=2-2\cos B(\phi_x,\phi_{x+dx})$; thus from Eq.(\ref{eq:BJ}) we have \begin{eqnarray} \label{eq:maxQFIphi1} \aligned \max_{\rho} J[\phi_x(\rho)]&=\lim_{dx\rightarrow 0}\frac{8[1-\cos B(\phi_x,\phi_{x+dx})]}{dx^2}. \endaligned \end{eqnarray} The ultimate precision limit under the evolution $\phi_x$ is thus determined by the Bures angle between $\phi_x$ and the neighboring channels, \begin{equation} \label{eq:Precision1} \delta \hat{x}\geq\frac{1}{\lim_{dx\rightarrow 0} \frac{\sqrt{8[1-\cos B(\phi_x,\phi_{x+dx})]}}{\mid dx\mid}\sqrt{n}}, \end{equation} where $n$ is the number of times the procedure is repeated. If $\phi_x$ is continuous with respect to $x$, then when $dx\rightarrow 0$, $B(\phi_x,\phi_{x+dx})\rightarrow B(\phi_x,\phi_{x})=0$; in this case \begin{eqnarray} \label{eq:maxQFIphi} \aligned \max_{\rho} J[\phi_x(\rho)]&=\lim_{dx\rightarrow 0}\frac{8[1-\cos B(\phi_x,\phi_{x+dx})]}{dx^2}\\ &=\lim_{dx\rightarrow 0}\frac{16\sin^2\frac{B(\phi_x,\phi_{x+dx})}{2}}{dx^2}\\ &=\lim_{dx\rightarrow 0}\frac{4B^2(\phi_x,\phi_{x+dx})}{dx^2}, \endaligned \end{eqnarray} and the ultimate precision limit is then given by \begin{equation} \label{eq:Precision} \delta \hat{x}\geq\frac{1}{\lim_{dx\rightarrow 0} 2\frac{B(\phi_x,\phi_{x+dx})}{\mid dx\mid}\sqrt{n}}. \end{equation} The problem is thus reduced to determining the Bures angle between quantum channels. We will first show how to compute the Bures angle between unitary channels, then generalize to noisy quantum channels.
\subsection{Ultimate precision limit for unitary channels} Given two unitaries $U_1$ and $U_2$ of the same dimension, since $F_B(U_1\rho U^\dagger_1,U_2\rho U^\dagger_2)=F_B(\rho,U^\dagger_1U_2\rho U^\dagger_2U_1)$, we have $B(U_1,U_2)=B(I,U^\dagger_1U_2)$, i.e., the Bures angle between two unitaries can be reduced to the Bures angle between the identity and a unitary. For an $m\times m$ unitary matrix $U$, let $e^{-i\theta_j}$ be the eigenvalues of $U$, where $\theta_j\in(-\pi,\pi]$ for $1\leq j\leq m$; we call the $\theta_j$ the eigen-angles of $U$. If $\theta_{\max}=\theta_1\geq \theta_2\geq \cdots \geq \theta_m=\theta_{\min}$ are arranged in decreasing order, then $B(I,U)=\frac{\theta_{\max}-\theta_{\min}}{2}$ when $\theta_{\max}-\theta_{\min}\leq \pi$\cite{ChildsPR00, Acin01, Duan2007,Chau2011,Fung2,Fung3}; specifically, if $U=e^{-iHt}$, then $B(I,U)=\frac{(\lambda_{\max}-\lambda_{\min})t}{2}$ if $(\lambda_{\max}-\lambda_{\min})t\leq \pi$, where $\lambda_{\max(\min)}$ is the maximal (minimal) eigenvalue of $H$. This provides ways to compute Bures angles for unitary channels. For example, suppose the evolution takes the form $U(x)=(e^{-ixHt})^{\otimes N}$ (the tensor product of $e^{-ixHt}$ taken $N$ times, meaning the same unitary evolution $e^{-ixHt}$ acts on all $N$ probes). Then \begin{eqnarray} \aligned B[U(x),U(x+dx)]&=B[I,U^\dagger(x)U(x+dx)]\\ &=B[I,(e^{-iHtdx})^{\otimes N}].\\ \endaligned \end{eqnarray} It is easy to see that the difference between the maximal and minimal eigen-angles of $(e^{-iHtdx})^{\otimes N}$ is $\theta_{\max}-\theta_{\min}=N(\lambda_{\max}-\lambda_{\min})\mid dx\mid t$.
Thus $B(I,(e^{-iHtdx})^{\otimes N})=\frac{\theta_{\max}-\theta_{\min}}{2}=\frac{N(\lambda_{\max}-\lambda_{\min})\mid dx\mid t }{2}$, and Eq.(\ref{eq:Precision}) then recovers the Heisenberg limit $$\delta \hat{x}\geq \frac{1}{\sqrt{n}(\lambda_{\max}-\lambda_{\min})t}\frac{1}{N}.$$ \subsection{Ultimate precision limit for noisy quantum channels} For a general quantum channel which maps from an $m_1$- to an $m_2$-dimensional Hilbert space, the evolution can be represented by a Kraus operation $K(\rho^S)=\sum_{j=1}^d F_j\rho^S F^\dagger_j$; here the Kraus operators $F_j, 1\leq j\leq d$, are of size $m_2\times m_1$ and satisfy $\sum_{j=1}^d F^\dagger_jF_j=I_{m_1}$. The channel can be equivalently represented as \begin{eqnarray} \aligned K(\rho^S)=Tr_E(U_{ES}(|0_E\rangle\langle0_E|\otimes \rho^S) U^\dagger_{ES}),\\ \endaligned \end{eqnarray} where $|0_E\rangle$ denotes some standard state of the environment, and $U_{ES}$ is a unitary operator acting on both system and environment, which we will call the unitary extension of $K$. A general $U_{ES}$ can be written as \begin{align} \label{eqn-U-general} U_{ES}=(W_E \otimes I_{m_2}) \underbrace{ \begin{bmatrix} F_1 & * & * & \cdots & * \\ F_2 & * & * & \cdots & * \\ \vdots & &\vdots & & \vdots \\ F_{d} & * & * & \cdots & *\\ \textbf{0} & * & * & \cdots & *\\ \vdots & &\vdots & & \vdots\\ \textbf{0} & * & * & \cdots & *\\ \end{bmatrix} }_{\displaystyle U}. \end{align} Here only the first $m_1$ columns of $U$ are fixed, and $W_E \in U(p)$ ($p\times p$ unitaries) acts only on the environment and can be chosen arbitrarily; here $p\geq d$, as $p-d$ zero Kraus operators can be added. Given a channel, an ancillary system can be used to improve the precision limit; this can be described by the extended channel $$(K\otimes I_A) (\rho^{SA})=\sum_j (F_j\otimes I_A) \rho^{SA} (F_j\otimes I_A)^\dagger,$$ where $\rho^{SA}$ represents a state of the original and ancillary systems.
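Returning for a moment to the unitary case, the eigen-angle formula for the Bures angle and the resulting $N^2$ Heisenberg scaling above can be checked numerically. The sketch below assumes $H=\sigma_3/2$ and $N$ qubit probes (illustrative choices, not fixed by the text):

```python
import numpy as np

# Numerical check (sketch, assuming H = sigma_z/2 and N qubit probes) of
# B(I,U) = (theta_max - theta_min)/2 and of the N^2 Heisenberg scaling of
# the maximal quantum Fisher information, max J = (2 B / dx)^2.
def bures_angle_from_identity(U):
    theta = np.angle(np.linalg.eigvals(U))    # eigen-angles in (-pi, pi]
    return (theta.max() - theta.min()) / 2

lam_max, lam_min = 0.5, -0.5                   # eigenvalues of H = sigma_z/2
t, dx, N = 1.0, 1e-3, 5
U_step = np.diag(np.exp(-1j * np.array([lam_max, lam_min]) * t * dx))
U_N = U_step
for _ in range(N - 1):
    U_N = np.kron(U_N, U_step)                 # (e^{-iH t dx})^{\otimes N}

B = bures_angle_from_identity(U_N)
assert np.isclose(B, N * (lam_max - lam_min) * t * dx / 2)
max_J = (2 * B / dx) ** 2
assert np.isclose(max_J, (N * (lam_max - lam_min) * t) ** 2)  # Heisenberg N^2
```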
Without loss of generality, the ancillary system can be assumed to have the same dimension as the original system. Given two quantum channels $K_1$ and $K_2$ of the same dimension, let $U_{ES1}$ and $U_{ES2}$ be unitary extensions of $K_1$ and $K_2$ respectively; we then have\cite{Yuan2015} \begin{eqnarray} \aligned B(K_1\otimes I_A,K_2\otimes I_A)&=\min_{U_{ES1},U_{ES2}}B(U_{ES1},U_{ES2})\\ &=\min_{U_{ES1}}B(U_{ES1},U_{ES2})\\ &=\min_{U_{ES2}}B(U_{ES1},U_{ES2}). \endaligned \end{eqnarray} This extends Uhlmann's purification theorem on mixed states\cite{Uhlmann1976} to noisy quantum channels. Furthermore, we show in the appendix that $B(K_1\otimes I_A,K_2\otimes I_A)$ can be explicitly computed from the Kraus operators of $K_1$ and $K_2$\cite{supplement}: if $K_1(\rho^S)=\sum_{j=1}^d F_{1j}\rho^S F^\dagger_{1j}$ and $K_2(\rho^S)=\sum_{j=1}^d F_{2j}\rho^S F^\dagger_{2j}$, then $\cos B(K_1\otimes I_A, K_2\otimes I_A) =\max_{\|W\|\leq 1}\frac{1}{2}\lambda_{\min}(K_W+K^\dagger_W)$; here $\lambda_{\min}(K_W+K^\dagger_W)$ denotes the minimum eigenvalue of $K_W+K^\dagger_W$, where $K_W=\sum_{ij}w_{ij}F^\dagger_{1i}F_{2j}$, with $w_{ij}$ the $ij$-th entry of a $d\times d$ matrix $W$ which satisfies $\|W\|\leq 1$ ($\|\cdot\|$ denotes the operator norm, which equals the maximum singular value). If we substitute $K_1=K_x$ and $K_2=K_{x+dx}$, where $K_x(\rho^S)=\sum_{j=1}^d F_j(x)\rho^S F^\dagger_j(x)$ and $K_{x+dx}(\rho^S)=\sum_{j=1}^d F_j(x+dx)\rho^SF^\dagger_j(x+dx)$ with $x$ being the parameter of interest, then \begin{eqnarray} \aligned \label{eq:suppTE} &\cos B(K_x\otimes I_A, K_{x+dx}\otimes I_A)\\ =& \max_{\|W\|\leq 1 }\frac{1}{2}\lambda_{\min}(K_W+K^\dagger_W), \endaligned \end{eqnarray} where $K_W=\sum_{ij}w_{ij}F^\dagger_i(x)F_j(x+dx)$.
By substituting $\phi_x=K_x\otimes I_A$ and $\phi_{x+dx}=K_{x+dx}\otimes I_A$ in Eq.(\ref{eq:maxQFIphi1}), we then get the maximal quantum Fisher information for the extended channel $K_x\otimes I_A$, \begin{eqnarray} \aligned \label{eq:maxQFIopen} \max J=\lim_{dx\rightarrow 0}\frac{8[1-\max_{\|W\|\leq 1 }\frac{1}{2}\lambda_{\min}(K_W+K^\dagger_W)]}{dx^2}. \endaligned \end{eqnarray} In previous studies the operator $W_E$ in Eq.(\ref{eqn-U-general}), which can be chosen arbitrarily, was assumed to depend on $x$ smoothly\cite{Fujiwara2008,Escher2011}. As a result, the $W$ in Eq.(\ref{eq:maxQFIopen}) was restricted to unitary operators that depend smoothly on $x$, as explained in detail in appendix \ref{sec:con}. This restriction was introduced out of computational convenience in previous studies and is not intrinsic to the dynamics. The formula here does not make such an assumption and can be applied more broadly; for example, it can be applied to the discrimination of quantum channels, which is discrete in nature\cite{Yuan2015}. Also, since any $W$ that is not optimal gives a lower bound on the precision limit, the formula here provides more room for obtaining useful lower bounds. The maximization in Eq.(\ref{eq:maxQFIopen}) can be further formulated as a semi-definite program and solved efficiently: $\max_{\|W\|\leq 1}\frac{1}{2}\lambda_{\min}(K_W+K^\dagger_W)=$ \begin{eqnarray} \label{eq:sdp} \aligned &maximize \qquad \frac{1}{2}t \\ s.t.\qquad &\left(\begin{array}{cc} I & W^\dagger \\ W & I \\ \end{array}\right)\succeq 0,\\ & K_W+K^\dagger_W-tI \succeq 0. \endaligned \end{eqnarray} Another advantage of this formulation is that the dual form of this semi-definite program provides a systematic way of obtaining the optimal probe states, which we will show in the next section. \section{Optimal probe states} Developing systematic methods to obtain the optimal probe states is essential for achieving the precision limit.
So far there are only a few cases for which the optimal probe states are known, mostly for phase estimation\cite{durkin,Berry2000,Rafal2009, Nair2011,Knysh2014, Frowis2014}. A systematic way of obtaining optimal probe states for general quantum dynamics is highly desired, as it will pave the way for achieving the ultimate precision limit. We now show how to obtain the optimal probe states that achieve the ultimate precision limit for extended channels. We first provide an analytical formula for calculating the quantum Fisher information for any given pure input state, for both unextended and extended channels, and then use it to obtain the optimal probe states for extended channels. In the appendix we show that for both unextended and extended channels with pure probe states we have\cite{supplement} \begin{equation} \label{eq:fidelityS} F_B[K_x(\rho^{S}),K_{x+dx}(\rho^{S})]=\|M(\rho^S)\|_1, \end{equation} \begin{equation} \label{eq:fidelitySA} F_B[(K_x\otimes I_A)(\rho^{SA}),(K_{x+dx}\otimes I_A)(\rho^{SA})]=\|M(\rho^S)\|_1, \end{equation} where $M(\rho^S)$ is a $d\times d$ matrix with its $ij$-th entry equal to $Tr[\rho^S F^\dagger_i(x)F_j(x+dx)]$, and $\|\cdot\|_1$ represents the trace norm, which equals the sum of the singular values. For the unextended channel this formula works for the pure probe state $\rho^S=|\psi_S\rangle\langle \psi_S|$, while for the extended channel, although $\rho^{SA}=|\psi_{SA}\rangle\langle \psi_{SA}|$ is required to be a pure state, $\rho^S=Tr_A(\rho^{SA})$ can be any mixed state, which characterizes the advantage provided by ancillary systems. The above two formulas provide a straightforward way of calculating the quantum Fisher information for any pure probe state, \begin{eqnarray} \label{eq:QFIM} \aligned J[K_x(\rho^{S})]&=\lim_{dx\rightarrow 0}\frac{8(1-\|M(\rho^S)\|_1)}{dx^2},\\ J[(K_x\otimes I_A)(\rho^{SA})]&=\lim_{dx\rightarrow 0}\frac{8(1-\|M(\rho^S)\|_1)}{dx^2}.
\endaligned \end{eqnarray} In contrast to previous studies\cite{Fujiwara2008,Escher2011,Escher2012}, optimization over equivalent representations of the Kraus operators is not needed in this formulation. In fact $\|M(\rho^S)\|_1$ does not depend on any particular representation of the Kraus operators: if we use a different representation of the Kraus operators for $K_x$ and $K_{x+dx}$, for example $\tilde{F}_i(x)=\sum_r u_{ir}F_r(x)$ and $\tilde{F}_j(x+dx)=\sum_s v_{js}F_s(x+dx)$, where $u_{ir}$ and $v_{js}$ are entries of some unitary matrices $U$ and $V$ respectively, then $Tr[\rho^S \tilde{F}^\dagger_i(x)\tilde{F}_j(x+dx)]=\sum_{r,s} u^*_{ir}v_{js}Tr[\rho^S F^\dagger_r(x)F_s(x+dx)]$, thus $\tilde{M}(\rho^S)=\bar{U} M(\rho^S)V^T$, which has the same trace norm $\|\tilde{M}(\rho^S)\|_1=\|M(\rho^S)\|_1$. The optimal probe states for the extended channel can then be obtained by minimizing both sides of Eq.(\ref{eq:fidelitySA}) over input states, \begin{eqnarray} \label{eq:optimalstate} \aligned &\min_{\rho^{SA}}F_B[(K_x\otimes I_A)(\rho^{SA}),(K_{x+dx}\otimes I_A)(\rho^{SA})]\\ =&\min_{\rho^S}\|M(\rho^S)\|_1. \endaligned \end{eqnarray} This can be computed via a semi-definite programming formulation of the trace norm\cite{Fuel} as $\min_{\rho^S}\|M(\rho^S)\|_1=$ \begin{eqnarray} \label{eq:SDPrho} \aligned minimize \qquad &\frac{1}{2}Tr(P)+\frac{1}{2}Tr(Q) \\ s.t.\qquad &\left(\begin{array}{cc} P & M^\dagger(\rho^S) \\ M(\rho^S) & Q \\ \end{array}\right)\succeq 0,\\ & \rho^S\succeq 0, Tr(\rho^S)=1, \endaligned \end{eqnarray} where $P, Q$ are Hermitian matrices. One can verify that this is exactly the dual form of the semi-definite program used in Eq.(\ref{eq:sdp}). From the output $\rho^S$, we can easily obtain the optimal probe state $\rho^{SA}$, which is any purification whose reduced state equals the optimal $\rho^S$. This gives a systematic way to obtain the optimal probe states for the extended channel, which we demonstrate through some examples.
Consider phase estimation with spontaneous emission, $K_x(\rho_0)=F_1(x)\rho_0F^\dagger_1(x)+F_2(x)\rho_0F_2^\dagger(x)$, where $F_1(x)=\left(\begin{array}{cc} 1 & 0 \\ 0 & \sqrt{\eta} \\ \end{array}\right)U(x)$,$F_2(x)=\left(\begin{array}{cc} 0 & \sqrt{1-\eta} \\ 0 & 0 \\ \end{array}\right)U(x)$, $U(x)=\exp(-i\frac{\sigma_3}{2}x)$. Suppose a pure input state $\rho^{SA}$ is prepared for the extended channel $K_x\otimes I_A$; then, with $\rho^S=Tr_A(\rho^{SA})$, $M(\rho^S)=\left(\begin{array}{cc} Tr[\rho^SF_1^\dagger(x)F_1(x+dx)] & Tr[\rho^SF_1^\dagger(x)F_2(x+dx)] \\ Tr[\rho^SF_2^\dagger(x)F_1(x+dx)] & Tr[\rho^SF_2^\dagger(x)F_2(x+dx)] \\ \end{array}\right).$ In this case the problem can be solved analytically: in the appendix we show that the optimal $\rho^S$ is given by $\rho^S=\left(\begin{array}{cc} \frac{\sqrt{\eta}}{1+\sqrt{\eta}} & 0 \\ 0 & \frac{1}{1+\sqrt{\eta}} \\ \end{array}\right)$ and the corresponding maximal quantum Fisher information is $\max J=\frac{4\eta}{(1+\sqrt{\eta})^2}$\cite{supplement}. Since the optimal $\rho^S$ is mixed, an ancillary system is necessary. The optimal input state in this case is any pure state $\rho^{SA}$ with reduced state equal to $\rho^S$; the simplest choice is $\sqrt{\frac{\sqrt{\eta}}{1+\sqrt{\eta}}}|00\rangle+\sqrt{\frac{1}{1+\sqrt{\eta}}}|11\rangle$, which is not a maximally entangled state as previously suspected\cite{Jan2013,Jan2014}. We can also use the method to find the maximal quantum Fisher information without using an ancilla by imposing the condition that $\rho^S$ be pure. In that case the maximal quantum Fisher information turns out to be $\eta$ and the optimal input state has the form $(|0\rangle+\exp(i \theta)|1\rangle)/\sqrt{2}$ for some $\theta \in \mathbb{R}$\cite{supplement}. For high dimensional systems, we use the CVX package in Matlab\cite{CVX} to implement the semi-definite programming of (\ref{eq:SDPrho}) and obtain the optimal input states.
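The analytical result for the spontaneous emission example can be verified numerically from Eq.(\ref{eq:QFIM}) by evaluating $\|M(\rho^S)\|_1$ at a small but finite $dx$; the sketch below uses the Kraus operators and the optimal $\rho^S$ given above:

```python
import numpy as np

# Numerical check of the spontaneous-emission example: evaluating
# J = 8(1 - ||M(rho)||_1)/dx^2 at the optimal rho^S should approach
# the analytical value 4*eta/(1+sqrt(eta))^2.
def kraus(x, eta):
    U = np.diag([np.exp(-1j * x / 2), np.exp(1j * x / 2)])  # exp(-i*sigma_3*x/2)
    F1 = np.diag([1.0, np.sqrt(eta)]) @ U
    F2 = np.array([[0.0, np.sqrt(1 - eta)], [0.0, 0.0]]) @ U
    return [F1, F2]

def qfi(rho, x, dx, eta):
    Fx, Fxdx = kraus(x, eta), kraus(x + dx, eta)
    M = np.array([[np.trace(rho @ Fi.conj().T @ Fj) for Fj in Fxdx] for Fi in Fx])
    trace_norm = np.linalg.svd(M, compute_uv=False).sum()
    return 8 * (1 - trace_norm) / dx**2

eta, x, dx = 0.5, 0.3, 1e-4
s = np.sqrt(eta)
rho_opt = np.diag([s / (1 + s), 1 / (1 + s)])   # optimal reduced state rho^S
J = qfi(rho_opt, x, dx, eta)
assert np.isclose(J, 4 * eta / (1 + s) ** 2, rtol=1e-3)
```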
For example, consider two qubits with independent dephasing noise, which can be represented by a Kraus operation with four Kraus operators: $F_1(x)\otimes F_1(x), F_1(x)\otimes F_2(x), F_2(x)\otimes F_1(x), F_2(x)\otimes F_2(x)$ with $F_1(x)=\sqrt{\frac{1+\eta}{2}}U(x)$, $F_2(x)=\sqrt{\frac{1-\eta}{2}}\sigma_3U(x)$, where $U(x)=\exp(-i\frac{\sigma_3}{2}x).$ It turns out that $\min_{\rho^S}\|M(\rho^S)\|_1$ can always be attained with a pure $\rho^S$; ancillary systems are thus not necessary. In Fig.~\ref{fig:entropy} we plot the entanglement of the optimal states, quantified by the entropy of the reduced single qubit state, at different $\eta$. It can be seen that there exists a threshold for $\eta$: when $\eta$ exceeds the threshold, the optimal state is the GHZ state, which is maximally entangled; when $\eta$ is below the threshold, the GHZ state ceases to be optimal, and with decreasing $\eta$ the optimal state gradually changes from the maximally entangled state to a separable state. Fig.~\ref{fisher} shows the quantum Fisher information with the optimal state and with the separable input state $|++\rangle$, where $|+\rangle=\frac{|0\rangle+|1\rangle}{\sqrt{2}}$. It can be seen that the gain from entanglement is only significant in the region of high $\eta$, i.e., low noise. Similar behaviour is found for more qubits: there exists a threshold for $\eta$, above which the optimal state is the GHZ state, while with decreasing $\eta$ the optimal state gradually changes from the GHZ state to a separable state; this threshold increases with the number of qubits.
\begin{figure} \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[scale=.4]{DephasingEntropy.eps} \caption{Entropy of reduced single qubit state, which is used to quantify the entanglement of the optimal state, at different $\eta$.} \label{fig:entropy} \end{minipage}% \qquad \begin{minipage}{.5\textwidth} \centering \includegraphics[scale=.4]{DephasingFisher.eps} \caption{Quantum Fisher information with the optimal input state and separable input state $|++\rangle$ for 2 qubits with independent dephasing noises.} \label{fisher} \end{minipage} \end{figure} In Fig.\ref{fig:dep5} the optimal state for 5 qubits with independent dephasing noises is shown, and in Fig.\ref{fig:Dep5Fisher} the quantum Fisher information for the optimal state, GHZ state and the separable state are plotted. We also calculated the optimal state for 10 qubits with 5 of them under independent spontaneous emission and 5 of them as ancillary qubits, in Fig.\ref{fig:Sop10Fisher} we plotted the Fisher information for the optimal state, GHZ state and separable state. \begin{figure} \centering \includegraphics[scale=.4]{d5qubit.eps} \caption{Optimal probe state for 5 qubits under independent dephasing noises. The optimal state has the form $|\psi\rangle=a_0|\psi_0\rangle+a_1|\psi_1\rangle+a_2|\psi_2\rangle$, where $|\psi_i\rangle$ denotes the summation of all basis states with $i$ zeros or $i$ ones, for example $|\psi_0\rangle=|00000\rangle+|11111\rangle$. } \label{fig:dep5} \end{figure} \begin{figure} \centering \includegraphics[scale=.4]{Dep5Fisher.eps} \caption{Quantum Fisher information for optimal probe states, GHZ state and separable state for 5 qubits under independent dephasing noises. 
} \label{fig:Dep5Fisher} \end{figure} \begin{figure} \centering \includegraphics[scale=.4]{Sop10Fisher.eps} \caption{Quantum Fisher information for optimal probe states, GHZ state and separable state for 10 qubits, with 5 of them under independent spontaneous emission and 5 of them as an ancillary system. } \label{fig:Sop10Fisher} \end{figure} \section{When ancillary systems are not helpful} The formulas developed here not only provide systematic methods to compute the ultimate precision limit and the optimal probe states, but also have wide implications, which we demonstrate by deriving a sufficient condition on when ancillary systems are not useful for improving the precision limit. We have seen that in the spontaneous emission case an ancillary system helps improve the precision limit, while in some other cases---known examples include unitary channels, classical channels, and phase estimation with dephasing and lossy channels\cite{Jan2013,Jan2014}---ancillary systems do not help. For a general channel it is usually difficult to tell whether ancillary systems can help improve the precision limit or not. Previously this problem was usually studied case by case by comparing the maximum quantum Fisher information for the unextended and extended channels. The formulas developed here provide a more direct way: from Eq.(\ref{eq:QFIM}) it is obvious that ancillary systems do not help improve the precision limit if and only if $\min_{\rho^S}\|M(\rho^S)\|_1$ can be reached at a pure state $\rho^S$, which can be checked by using the semi-definite programming to obtain the optimal $\rho^S$ and seeing whether $(\rho^S)^2=\rho^S$. A more easily verifiable sufficient condition is the following: given a channel $K_x(\rho)=\sum_{i=1}^d F_i(x)\rho F^\dagger_i(x)$, if the $F_i^\dagger(x)F_j(x+dx)$, $1\leq i,j\leq d$, can be simultaneously diagonalized, then ancillary systems do not help improve the precision limit.
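As a quick numerical illustration of this condition, the sketch below restricts the dephasing example above to a single qubit and verifies that all products $F_i^\dagger(x)F_j(x+dx)$ are diagonal in the computational basis, so the sufficient condition holds for that channel:

```python
import numpy as np

# Check that for the single-qubit dephasing channel (Kraus operators as in the
# two-qubit example above) all products F_i(x)^dag F_j(x+dx) are diagonal in
# one common basis, so the sufficient condition holds and no ancilla is needed.
def kraus(x, eta):
    U = np.diag([np.exp(-1j * x / 2), np.exp(1j * x / 2)])  # exp(-i*sigma_3*x/2)
    s3 = np.diag([1.0 + 0j, -1.0])
    return [np.sqrt((1 + eta) / 2) * U, np.sqrt((1 - eta) / 2) * s3 @ U]

eta, x, dx = 0.7, 0.4, 0.05
for Fi in kraus(x, eta):
    for Fj in kraus(x + dx, eta):
        P = Fi.conj().T @ Fj
        assert np.allclose(P - np.diag(np.diag(P)), 0)  # off-diagonals vanish
```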
Indeed, if there exists a basis in which the $F_i^\dagger(x)F_j(x+dx)$ are all diagonal, then in that basis only the diagonal entries of $\rho^S$ enter into $M(\rho^S)$ (the entries of $M(\rho^S)$ are of the form $Tr[\rho^S F_i^\dagger(x)F_j(x+dx)]$, which depends only on the diagonal terms of $\rho^S$ when $F_i^\dagger(x)F_j(x+dx)$ is diagonal), while the other entries can be chosen freely. This means that $\min_{\rho^S}\|M(\rho^S)\|_1$ can be achieved by any $\rho^S$ with the optimal diagonal entries $\rho^S_{ii}=a_i$; this always includes a pure state $|\psi_S\rangle=\sum_i \sqrt{a_i}|i\rangle$, hence an ancillary system is not necessary to achieve $\min_{\rho^S}\|M(\rho^S)\|_1$ for such channels. This sufficient condition is satisfied by unitary channels, classical channels, and phase estimation with dephasing, as well as by many other channels that have not been categorized before. For example, phase estimation with noise along the $X$ and $Y$ directions satisfies this condition, as it can be represented with the following two Kraus operators: $F_1(x)=\sqrt{\frac{1+\eta}{2}}\sigma_1\exp(-i\frac{\sigma_3}{2}x)$ and $F_2(x)=\sqrt{\frac{1-\eta}{2}}\sigma_2\exp(-i\frac{\sigma_3}{2}x)$; one can easily check that they satisfy the condition, thus an ancillary system does not help improve the ultimate precision limit for this channel. \section{Conclusion} In conclusion, we have presented a general framework for quantum metrology which provides systematic ways of obtaining the ultimate precision limit and the optimal input states. This framework relates the ultimate precision limit directly to the underlying dynamics, which opens the possibility of utilizing quantum control methods to alter the underlying dynamics for a better precision limit\cite{YuanTime}. The tools developed here, such as the generalized Bures angle on quantum channels, which can be efficiently computed using semi-definite programming, are expected to find wide applications in various fields of quantum information science.
\section{Introduction}\label{Sec:Intro} Higher curvature gravitational interactions have been investigated in a great many physical contexts. Among such models, the special class of Lovelock gravity theories \cite{Lovelock:1971yv} is distinguished via having field equations that depend only on the Riemann tensor, and not on its derivatives, and hence include at most second derivatives of the metric tensor. This has a number of important consequences at both the classical and quantum levels. For example, it leads to a Hamiltonian formulation in terms of the standard canonical gravitational degrees of freedom \cite{Teitelboim:1987zz} and to the absence of the ghost degrees of freedom that are typical of higher curvature theories \cite{Zwiebach:1985uq,Zumino:1985dp}. Lovelock theories include a single interaction term at each higher curvature order, so that the Lagrangian in $n$ spacetime dimensions is given by $\mc{L}=\sum_{k=0}^{p} c_k\mc{L}_k$ with\footnote{The antisymmetrized Kronecker symbol used here has overall strength $k!$ and is defined by \begin{equation}\nonumber \delta^{\alpha_1\dots \alpha_k}_{\beta_1\dots \beta_k} = k!\delta^{[\alpha_1}_{\beta_1}\dots\delta^{\alpha_k]}_{\beta_k} = k!\delta^{\alpha_1}_{[\beta_1}\dots\delta^{\alpha_k}_{\beta_k]}. \end{equation}} \begin{equation}\label{Love} \mc{L}_k=\frac{1}{2^k}\,\delta^{\alpha_1\beta_1\dots\alpha_k\beta_k}_{\mu_1\nu_1\dots\mu_k\nu_k} R_{\alpha_1\beta_1}^{\mu_1\nu_1}\dots R_{\alpha_k\beta_k}^{\mu_k\nu_k}, \end{equation} with the upper limit of the sum given by $p\equiv[(n-1)/2]$. The term $\mc{L}_0$ gives the cosmological constant term in the action, while $\mc{L}_1$ gives the Einstein-Hilbert term and the $\mc{L}_k$ with $k\ge 2$ are higher curvature terms. 
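On a maximally symmetric background, where $R^{\mu\nu}_{\alpha\beta}=K\delta^{\mu\nu}_{\alpha\beta}$, the Kronecker symbols in $\mc{L}_k$ can be contracted completely, giving $\mc{L}_k = K^k\, n!/(n-2k)!$. The following sketch (illustrative values of $n$ and $K$, not part of the original analysis) checks this for the Gauss-Bonnet term $k=2$, using the standard expanded form $\mc{L}_2=R^2-4R^\mu_\nu R^\nu_\mu+R^{\mu\nu}_{\alpha\beta}R^{\alpha\beta}_{\mu\nu}$:

```python
import numpy as np

n, K = 5, 0.3
I = np.eye(n)

# Constant-curvature Riemann tensor in mixed form:
# R^{mu nu}_{ab} = K (delta^mu_a delta^nu_b - delta^mu_b delta^nu_a)
Riem = K * (np.einsum('ma,nb->mnab', I, I) - np.einsum('mb,na->mnab', I, I))

Ric = np.einsum('mnan->ma', Riem)   # Ricci tensor R^m_a = R^{mn}_{an}
R = np.einsum('mnmn->', Riem)       # Ricci scalar

# Gauss-Bonnet density L_2 = R^2 - 4 R^m_n R^n_m + R^{mn}_{ab} R^{ab}_{mn}
L2 = R**2 - 4 * np.einsum('mn,nm->', Ric, Ric) \
     + np.einsum('mnab,abmn->', Riem, Riem)

# Closed form from contracting the deltas: K^k n!/(n-2k)!, here n!/(n-4)! = 120
assert np.isclose(L2, K**2 * 5 * 4 * 3 * 2)
```

For $k=1$ the same contraction reproduces the familiar $R=n(n-1)K$.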
The Lagrangian truncates because the interactions $\mc{L}_k$ vanish identically for $k>n/2$, while for $n$ even the variation of $\mc{L}_{n/2}$ gives a total divergence and hence does not contribute to the equations of motion\footnote{In spacetime dimension $n=2k$, the term $\mc{L}_k$ is the Euler density, whose integral over a compact manifold without boundary is a topological invariant.}. This truncation distinguishes between even and odd dimensions. Moving up from an even dimension to the next higher odd dimension introduces a new Lovelock interaction, while moving up from an odd dimension to the next higher even dimension introduces no new term. The coefficients $c_k$ are the couplings of the theory, and we will be interested in how the space of possible vacua of Lovelock gravity varies as a function of these couplings. The simplest vacua of Lovelock theories are maximally symmetric ones, and depending on the values of the couplings $c_k$, Lovelock gravity in $n$ dimensions may have up to $p$ such vacua with distinct curvatures \cite{Boulware:1985wk,Wheeler:1985nh,Wheeler:1985qd}. Assuming that $n>1$, the curvature of a maximally symmetric spacetime has the form \begin{equation}\label{Riem} R^{\mu\nu}_{\alpha\beta}=K\delta^{\mu\nu}_{\alpha\beta}, \end{equation} where the constant $K$ is related to the scalar curvature according to $K= {1\over n(n-1)}R$. The maximally symmetric spacetime is then either the Minkowski (M), de Sitter (dS), or anti-de Sitter (AdS) spacetime\footnote{If instead we considered Euclidean metrics then we would have respectively Euclidean ($\mathds{E}$), spherical (S), or hyperbolic (H) spaces corresponding to these ranges of the curvature constant.}, depending on whether $K=0$, $K>0$, or $K<0$. When the Riemann tensor has the constant curvature form (\ref{Riem}), the Lovelock equations of motion reduce to a $p$th-order polynomial equation for $K$. Only real roots of this equation correspond to physical vacua.
Therefore, while there exists a region of coupling space with $p$ distinct maximally symmetric vacua, there are also regions with fewer such vacua. For $p$ odd there will be at least one maximally symmetric vacuum. However, for $p$ even there is a range of couplings such that no maximally symmetric vacuum exists. We can think of the surfaces in coupling space that divide such regions, with different numbers of maximally symmetric vacua, as critical surfaces of the theory. Consider, for example, Gauss-Bonnet gravity in dimensions $n\ge 5$ where only the couplings $c_k$ with $k=0,1,2$ are taken to be nonzero\footnote{This includes the full set of allowable couplings for dimensions $n=5,6$, while for $n>6$ the higher order couplings $c_k$ with $k=3,\dots,p$ are taken to be zero in Gauss-Bonnet gravity.}. In this case, the equations of motion yield a quadratic equation for the curvature constant $K$. For fixed values of $c_0$ and $c_1$, one finds that there is always either a maximum or minimum value of $c_2$, depending on the sign of $c_0$, beyond which maximally symmetric vacua no longer exist. It is then natural to ask what vacua exist beyond this critical value of the coupling $c_2$. Such vacua will necessarily have less than the maximal symmetry allowed in a given spacetime dimension. In this paper, we will investigate a simple class of alternative vacua with reduced amounts of symmetry and examine how the number and existence of such vacua vary over the space of Lovelock couplings. The vacua we consider are products of maximally symmetric space(times). In $n$ spacetime dimensions, we will write this product in the form $\mc{K}_1^d\times\mc{K}_2^{n-d}$. Here, the first factor $\mc{K}_1^d$ is a Lorentzian maximally symmetric spacetime of dimension $d$ with curvature constant $K_1$, while the second factor $\mc{K}_2^{n-d}$ is an $(n-d)$-dimensional maximally symmetric Euclidean space with curvature constant $K_2$.
The existence of such reduced symmetry vacua introduces a new set of regions and critical surfaces in coupling space. We will see how these regions either extend, or fail to extend, the region in which maximally symmetric vacua exist. Some special cases of such product vacua have already been discussed in the literature. The product vacua $M^4\times S^{3}$ and more generally $\mc{K}_1^4\times S^{3}$ in third order Lovelock gravity were studied in \cite{MuellerHoissen:1985mm} and \cite{Canfora:2008iu} respectively. The Nariai and (anti)-Nariai \cite{Nar,Dadhich:2000zg} type factorizations $dS_2\times S^{n-2}$ and $AdS_2\times H^{n-2}$ were investigated in Einstein gravity \cite{Cardoso:2004uz} and in Gauss-Bonnet gravity \cite{LorenzPetzold:1987ih}, while the Bertotti-Robinson \cite{Bertotti:1959pf,Robinson:1959ev} type factorization AdS$_2\times S^{n-2}$ was investigated in Einstein gravity \cite{Cardoso:2004uz}, in generic Lovelock gravity \cite{Maeda:2011ii}, in pure Lovelock gravity theories \cite{Dadhich:2012zq}, and in $5$-dimensional quadratic gravity \cite{Clement:2013twa}. The product $\mc{K}_1^2\times\mc{K}_2^{3}$ in $5$-dimensional Gauss-Bonnet gravity was considered by \cite{Canfora:2007ux,Izaurieta:2012fi}. Related work has also appeared in references \cite{Canfora:2013xsa,Canfora:2014iga}, which study, in the broken symmetry regime of Gauss-Bonnet gravity, dynamical compactification onto products of a $4D$ FRW spacetime with a compact space having a time-dependent scale factor. The dynamics of possible compactifications has also been studied in \cite{Chirkov:2014nua,Chirkov:2015kja,Pavluchenko:2015daa}. This paper is organized as follows. In Section (\ref{Sec:3rdLove}) we give some more details of Lovelock gravity. In order to keep our analysis of product vacua tractable, we will restrict it to interactions of at most cubic order in the curvature.
In Section (\ref{Einsteinsection}), in order to orient the subsequent discussion, we recall the maximally symmetric and product vacua of Einstein gravity. In Section (\ref{GBsection}) we look at the maximally symmetric and product vacua of Gauss-Bonnet gravity in $n=5$ and $n=6$ dimensions. In Section (\ref{3rdordersection}), we study product vacua in third order Lovelock theory, making a further restriction of the couplings to keep the problem tractable. Finally, we offer some concluding remarks in Section (\ref{conclude}). \section{Low order Lovelock theory}\label{Sec:3rdLove} In practice, in order to keep our analysis of product vacua tractable, we will restrict our attention to Lovelock theories including only the first few, relatively low order interaction terms. Accordingly, we will work with the theory described by the action \begin{equation}\label{3rdLove} I=\frac{1}{16\pi G_n}\int d^{n}x\sqrt{-g}\left(-2\Lambda_0+R+\alpha_2\mc{L}_2+\alpha _3\mc{L}_3\right), \end{equation} where we have written the cosmological and Einstein-Hilbert terms in the action in their conventional forms, while leaving the $2$nd and $3$rd order Lovelock terms in the compact form (\ref{Love}). The explicit form of the second order Gauss-Bonnet term, which is dynamically relevant in dimensions $n\ge 5$, is given by \begin{equation} \mc{L}_2=R^2-4R_\alpha^\mu R_\mu^\alpha+R_{\alpha\beta}^{\mu\nu}R_{\mu\nu}^{\alpha\beta}. \end{equation} The explicit form for the third order term, which is relevant in dimensions $n\ge 7$, is unwieldy. The equations of motion of a general Lovelock theory are given by $\sum_{k=0}^{p} c_k\mc{G}^{(k)}{}^\mu_{\nu}=0$, where \begin{equation} \mc{G}^{(k)}{}^\mu_\nu = -{1\over 2^{k+1}}\, \delta^{\mu\alpha_1\beta_1\dots\alpha_k\beta_k}_{\nu\sigma_1\kappa_1\dots\sigma_k\kappa_k}R_{\alpha_1\beta_1}^{\sigma_1\kappa_1}\dots R_{\alpha_k\beta_k}^{\sigma_k\kappa_k}. 
\end{equation} and the expression for ${\mc{G}^{(1)}}^\mu_\nu$ reproduces the ordinary Einstein tensor $G^\mu_\nu$. For our theory (\ref{3rdLove}) the equations of motion are then given by \begin{equation}\label{3rdLoveEqn} \Lambda_0\delta^\mu_\nu+G^\mu_\nu+\alpha_2{\mc{G}^{(2)}}^\mu_\nu+\alpha_3{\mc{G}^{(3)}}^\mu_\nu=0. \end{equation} In order to orient the discussion below, we will first set the couplings $\alpha_2=\alpha_3=0$ and look at product vacua of Einstein gravity. We will then take $\alpha_2$ to be nonvanishing and study the problem for Gauss-Bonnet gravity\footnote{In the context of the low energy effective action from string theory, it is noted in \cite{Boulware:1985wk} that $\alpha_2>0$, but we will consider all possible values here.}. Finally, we will allow $\alpha_3$ to be nonzero, although we will restrict our attention to a subclass of theories with a definite relation between the couplings $\alpha_2$ and $\alpha_3$, in order to make the analysis manageable. \section{Product vacua in Einstein gravity}\label{Einsteinsection} In order to orient the discussion of product vacua in Lovelock theories, we first present the analysis for Einstein gravity, setting $\alpha_2=\alpha_3=0$ in our theory (\ref{3rdLove}). Our theory is then parameterized by the cosmological constant $\Lambda_0$. For each value of the cosmological constant, the theory has a maximally symmetric vacuum, with curvature constant $K$ related to the cosmological constant by \begin{equation}\label{constant} K=\frac{2\Lambda_0}{(n-1)(n-2)}. \end{equation} The different possible ranges for the cosmological constant $\Lambda_0<0$, $\Lambda_0=0$ and $\Lambda_0>0$ correspond to AdS, Minkowski and dS vacua respectively. We illustrate this situation in Figure (\ref{Fig1}), which serves as the prototype for subsequent, more complicated figures which we will use to display our results.
In this case, the figure shows the different types of maximally symmetric vacua corresponding to different values of the cosmological constant. There is no evidence of critical behavior in this case. As noted above, for Einstein gravity there is a unique maximally symmetric vacuum associated with each value of the cosmological constant. \begin{figure}[t] \centering \includegraphics[width=1.6in,angle=270]{Fig1-MS-GR.eps} \caption{\sl Maximally symmetric vacua of Einstein gravity in $n$ dimensions.} \label{Fig1} \end{figure} We now consider certain reduced symmetry vacua of Einstein gravity, taking the $n$-dimensional spacetime manifold to be a direct product of two maximally symmetric submanifolds $\mc{K}_1^d$ and $\mc{K}_2^{n-d}$ of dimensions $d$ and $n-d$ respectively. We will assume for the moment that $n\ge 4$ and that $n-1> d >1$ and consider the case of $1$-dimensional submanifolds separately. The metric is then taken to have the form \begin{equation} ds^2=g_{\mu\nu}(x)dx^\mu dx^\nu=g_{ab}(u)du^adu^b+g_{ij}(v)dv^idv^j, \end{equation} where $g_{ab}(u)$ is the metric on $\mc{K}_1^d$, which is assumed to have Lorentzian signature, and $g_{ij}(v)$ is the metric on the manifold $\mc{K}_2^{n-d}$, which is taken to have Euclidean signature. Coordinate indices along $\mc{K}_1^d$ have been denoted by $a,b,c,\ldots$ and coordinate indices for $\mc{K}_2^{n-d}$ by $i,j,k,\ldots$.
This construction enables us to decompose the Riemann tensor $R^{\mu\nu}_{\alpha\beta}$ of the full spacetime as \begin{figure}[t] \centering \includegraphics[width=1.6in,angle=270]{Fig5-DP-GR-d2.eps} \caption{\sl Direct product vacua of Einstein gravity in $n$ dimensions.} \label{Fig5} \end{figure} \begin{equation} R^{ab}_{cd}=R^{\quad ab}_{(1)cd},~~R^{ij}_{kl}={R}{}^{\quad ij}_{(2)kl},~~R^{ai}_{bj}=0, \end{equation} where ${R}{}^{\quad ab}_{(1)cd}$ and ${R}{}^{\quad ij}_{(2)kl}$ are the Riemann tensors of $\mc{K}_1^d$ and $\mc{K}_2^{n-d}$, respectively, which are each assumed to have the constant curvature form \begin{equation}\label{dRiem} R^{\quad ab}_{(1)cd}=K_1\delta^{ab}_{cd},\qquad R^{\quad ij}_{(2)kl}=K_2\delta^{ij}_{kl} \end{equation} with curvature constants $K_1$ and $K_2$. Plugging into the Einstein equation then yields for these constants \begin{equation}\label{prodcurv} K_1 = {2\Lambda_0\over (d-1)(n-2)},\qquad K_2 = {2\Lambda_0\over (n-d-1)(n-2)} \end{equation} We see in particular that $K_1$ and $K_2$ necessarily each have the same sign as the cosmological constant $\Lambda_0$. The product vacua we consider, therefore, are always of the form $AdS_d\times H^{n-d}$ for $\Lambda_0$ negative, $dS_d\times S^{n-d}$ for $\Lambda_0$ positive, and $M_d\times E^{n-d}$ for vanishing cosmological constant. This result is illustrated in Figure (\ref{Fig5}). As with the maximally symmetric vacua, there is no evidence of critical behavior for the product vacua. Such vacua exist for all values of the cosmological constant. The cases $d=1$ and $d=n-1$, where one of the submanifolds $\mc{K}_1^d$ or $\mc{K}_2^{n-d}$ has dimension $1$ and is therefore flat, require special handling. In these cases we see that one or the other of the terms in the formal solution in (\ref{prodcurv}) diverges. 
This can be traced back to an inconsistency between the different components of the Einstein equations, which implies that there is no product vacuum unless $\Lambda_0=0$, as illustrated in Figure (\ref{Fig4}). \begin{figure}[t] \centering \includegraphics[width=1.6in,angle=270]{Fig4-DP-GR-d1.eps} \caption{\sl Direct product vacua of Einstein gravity in the case of one-dimensional submanifolds in $n$ dimensions, which exist only for $\Lambda_0=0$.} \label{Fig4} \end{figure} \section{Product vacua in Gauss-Bonnet gravity}\label{GBsection} Now equipped with an understanding of product vacua in Einstein gravity, we move on to consider such vacua in Gauss-Bonnet gravity, which is obtained by setting the coupling $\alpha_3=0$ in the action (\ref{3rdLove}). \subsection{Maximally symmetric vacua} Let us first consider the maximally symmetric vacua, having constant curvature (\ref{Riem}). We will characterize these solutions by an effective cosmological constant $\Lambda$ related to the curvature constant $K$ by \begin{equation} \Lambda = {(n-1)(n-2)\over 2} K, \end{equation} which is the relation that holds between the curvature and cosmological constants for constant curvature solutions in Einstein gravity (\ref{constant}). It is well known that the equations of motion for constant curvature solutions in Gauss-Bonnet gravity reduce to a quadratic equation, which is given in terms of the effective cosmological constant by \begin{equation}\label{VacEqnGB} \tilde{\alpha}_2\Lambda^2+\Lambda-\Lambda_0=0, \end{equation} where $\tilde\alpha_2= {2(n-3)(n-4)\over (n-1)(n-2)}\alpha_2$. The maximally symmetric solutions are then characterized by the effective cosmological constants \begin{equation}\label{VacGB} \Lambda_{\pm}=-\frac{1}{2\tilde{\alpha}_2}\left(1\pm\sqrt{1+4\tilde{\alpha}_2\Lambda_0}\right). 
\end{equation} Since only real values of $\Lambda$ correspond to physical vacua, the number of such solutions will depend on the cosmological constant $\Lambda_0$ and the coupling strength $\alpha_2$ of the Gauss-Bonnet term. For sufficiently small values of $\tilde\alpha_2$ there will always be two physical maximally symmetric vacua. In the limit of small Gauss-Bonnet coupling, such that $|4\tilde{\alpha}_2\Lambda_0|\ll 1$, these are given approximately by \begin{equation} \Lambda_- \simeq \Lambda_0,\qquad \Lambda_+\simeq -{1\over \tilde\alpha_2} \end{equation} One sees that $\Lambda_-$ matches on to the vacuum of Einstein gravity in this limit, while the $\Lambda_+$ branch goes off to infinite curvature. For this reason, the corresponding branches of solutions in (\ref{VacGB}) are known as the Einstein and Gauss-Bonnet branches respectively. The set of maximally symmetric vacua of Gauss-Bonnet gravity is displayed in Figure (\ref{Fig2}). To read the figure, envision fixing a value of the Gauss-Bonnet coupling $\alpha_2$, or equivalently $\tilde\alpha_2$, and asking how the number and character of the maximally symmetric vacua vary with the value of the cosmological constant $\Lambda_0$. The behavior is qualitatively the same for all values of $\tilde\alpha_2>0$, which are displayed in the top half of the diagram, and also for all values of $\tilde\alpha_2<0$, which are displayed on the bottom half. For $\tilde\alpha_2>0$, it follows from (\ref{VacGB}) that two distinct vacua exist for all values of $\Lambda_0>\Lambda_{0}^{CS}$, while at the critical value \begin{equation}\label{CS} \Lambda_{0}^{CS} = -{1\over 4\tilde\alpha_2}, \end{equation} the two branches of solutions become degenerate and a single unique maximally symmetric vacuum exists. 
We call this critical point the CS point because in $n=5$ dimensions the theory can be re-expressed as a Chern-Simons theory when the Gauss-Bonnet coupling and cosmological constant are related in this way (see reference \cite{Crisostomo:2000bb}). It also follows from (\ref{VacGB}) that no physical vacua exist for $\Lambda_0<\Lambda_{0}^{CS}$. This implies that whatever vacuum solutions exist in this region of coupling space will necessarily be symmetry breaking ones. \begin{figure}[t] \centering \includegraphics[width=4.35in,angle=270]{Fig2-MS-GB.eps} \caption{\sl Maximally symmetric vacua of Gauss-Bonnet gravity in $n$ dimensions. Here, the plus and minus signs refer to the branches in the solution (\ref{VacGB}), and $\Lambda_{0>}^{CS}$ and $\Lambda_{0<}^{CS}$ represent the CS points when $\tilde{\alpha}_2>0$ and $\tilde{\alpha}_2<0$, respectively.} \label{Fig2} \end{figure} We can also examine the character of the maximally symmetric vacua on the two branches for $\Lambda_0>\Lambda_{0}^{CS}$. One finds that the sign of the effective cosmological constant $\Lambda_-$ for the Einstein branch is precisely correlated with the cosmological constant $\Lambda_0$, so that the solutions along this branch are dS, Minkowski, or AdS depending on whether the value of $\Lambda_0$ is positive, zero, or negative. These are respectively the $dS_n^{(-)}$, $M^n_{(-)}$ and $AdS_n^{(-)}$ solutions shown in the top half of Figure (\ref{Fig2}). Note that $\Lambda_{0}^{CS}\rightarrow -\infty$ in the limit of vanishing Gauss-Bonnet coupling, and that the Einstein branch correctly reduces to the result in Figure (\ref{Fig1}). On the Gauss-Bonnet branch, however, the effective cosmological constant is always negative, independent of the sign of $\Lambda_0$, giving the $AdS_n^{(+)}$ branch of vacua. Finally, this entire structure is mirrored in a straightforward way for $\tilde\alpha_2<0$ on the bottom of Figure (\ref{Fig2}). 
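A few lines of arithmetic confirm this structure. The sketch below (the values of $\tilde\alpha_2$ and $\Lambda_0$ are illustrative, not part of the original analysis) checks that $\Lambda_\pm$ solve (\ref{VacEqnGB}), that the Einstein branch reduces to $\Lambda_0$ at small coupling, and that the two branches merge at the CS point and disappear below it:

```python
import numpy as np

def vacua(a2t, L0):
    """Roots Lambda_{+-} of a2t*L^2 + L - L0 = 0 for the effective
    cosmological constant of maximally symmetric Gauss-Bonnet vacua."""
    disc = 1 + 4 * a2t * L0
    if disc < 0:
        return []                      # broken symmetry regime: no real vacua
    s = np.sqrt(disc)
    return [-(1 + s) / (2 * a2t),      # Lambda_+ (Gauss-Bonnet branch)
            -(1 - s) / (2 * a2t)]      # Lambda_- (Einstein branch)

a2t, L0 = 0.1, -1.5                    # illustrative values, a2t > 0
for L in vacua(a2t, L0):
    assert np.isclose(a2t * L**2 + L - L0, 0)

# Small-coupling limit: the Einstein branch approaches Lambda_0
_, Lm = vacua(1e-8, L0)
assert np.isclose(Lm, L0, atol=1e-4)

# At the CS point Lambda_0 = -1/(4 a2t) the two branches degenerate
Lcs = -1 / (4 * a2t)
Lp, Lm = vacua(a2t, Lcs)
assert np.isclose(Lp, Lm)

# Below the CS point no maximally symmetric vacuum exists
assert vacua(a2t, Lcs - 1) == []
```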
\subsection{Product vacua} We now turn to vacua which, as above, are products $\mc{K}_1^d\times \mc{K}_2^{n-d}$ of maximally symmetric submanifolds. The Gauss-Bonnet equations of motion for such product vacua reduce to a coupled set of quadratic equations for the curvature constants $K_1$ and $K_2$ given by \begin{align}\label{gb1} \alpha_2\left( {(d-1)!\over (d-5)!}K_1^2 + {2(d-1)!D!\over (d-3)!(D-2)!} K_1K_2\right.&\left. + {D!\over (D-4)!}K_2^2\right)\\ & +{(d-1)!\over (d-3)!}K_1+ {D!\over (D-2)!}K_2-2\Lambda_0 =0\nonumber \end{align} \begin{align}\label{gb2} \alpha_2\left( {(D-1)!\over (D-5)!}K_2^2 + {2(D-1)!d!\over (D-3)!(d-2)!}K_1K_2\right.&\left.+ {d!\over (d-4)!}K_1^2\right)\\ &+{(D-1)!\over (D-3)!}K_2+ {d!\over (d-2)!}K_1-2\Lambda_0 =0\nonumber \end{align} where $D=n-d$. The linear equations that result from setting the Gauss-Bonnet coupling $\alpha_2$ to zero were used above to obtain the product solutions in Einstein gravity (\ref{prodcurv}). However, for $\alpha_2\neq 0$ we cannot write down a general analytic solution to the equations. For sufficiently small values of $d$ or $D$, however, the equations simplify and yield interesting results, and we will focus on such cases. In particular, we will focus on product vacua for Gauss-Bonnet gravity in $n=5$ dimensions, which is the lowest dimension in which the Gauss-Bonnet term is relevant, and in $n=6$ dimensions, where it is also the highest order Lovelock term. The subsequent term $\mc{L}_3$ which is cubic in the curvature becomes relevant in $n=7$ dimensions. In $n=5$ dimensions we consider the cases of $3+2$ and $2+3$ dimensional splits, which differ only in which factor of the product $\mc{K}_1^d\times \mc{K}_2^{n-d}$ is Lorentzian and which is Euclidean. 
Taking the $d=3$, $D=2$ case first, the equations of motion simplify to \begin{align}\label{GBeqns} 8\alpha_2 K_1K_2 +2(K_1+K_2)-2\Lambda_0&=0\\ 6K_1-2\Lambda_0&=0\nonumber \end{align} The resulting curvature constants can then be written in the form \begin{equation}\label{GBprods} K_1 = {\Lambda_0\over 3},\qquad K_2 = {1\over \left(1-{\Lambda_0\over\,\,\,\Lambda_0^{CS}}\right)}\cdot{2\over 3}\Lambda_0, \end{equation} where $\Lambda_0^{CS}=-{3\over 4\alpha_2}$ is the CS value of the cosmological constant (\ref{CS}) in $n=5$ dimensions. These values of the curvature constants $K_1$ and $K_2$ approach those for Einstein gravity (\ref{prodcurv}) in the limit of small Gauss-Bonnet coupling. We display these results in Figure (\ref{Fig8}). Again, to read this diagram, envision fixing a non-zero value of the Gauss-Bonnet coupling $\alpha_2$ and then consider the full range of values for the cosmological constant $\Lambda_0$. Let us focus on the top half of the diagram where $\alpha_2>0$. As in the case of Einstein gravity, displayed above in Figure (\ref{Fig5}), there is again a single product vacuum across the full range of values for the cosmological constant. However, the curvatures of the two factor spaces are no longer precisely correlated with the sign of $\Lambda_0$ across the full range of values, as they are in the Einstein case. For $\Lambda_0>\Lambda_0^{CS}$ there is such a precise correlation, with both $K_1$ and $K_2$ being either positive, zero or negative in correspondence with the value of $\Lambda_0$. This produces, in succession for smaller values of $\Lambda_0$, product solutions of the form $dS_3\times S^2$, $M^3\times E^2$ and $AdS_3\times H^2$. However, the solutions (\ref{GBprods}) have a critical point at $\Lambda_0=\Lambda_0^{CS}$, where the curvature $K_2$ diverges.
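The solutions (\ref{GBprods}) can be substituted back into the system (\ref{GBeqns}) as a consistency check; a minimal sketch with an illustrative coupling value:

```python
import numpy as np

a2 = 0.2                                  # illustrative Gauss-Bonnet coupling
L0cs = -3 / (4 * a2)                      # CS value of Lambda_0 in n = 5

for L0 in [-2.0, -0.5, 0.0, 1.3]:         # sample cosmological constants
    K1 = L0 / 3
    K2 = (2 * L0 / 3) / (1 - L0 / L0cs)   # the 3+2 product solution
    # both equations of motion for the 3+2 split are satisfied
    assert np.isclose(8 * a2 * K1 * K2 + 2 * (K1 + K2) - 2 * L0, 0)
    assert np.isclose(6 * K1 - 2 * L0, 0)
```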
For $\Lambda_0<\Lambda_0^{CS}$ the curvature of the Euclidean space $\mc{K}_2^{2}$ becomes positive, opposite to the sign of $\Lambda_0$, giving a product of the form $AdS_3\times S^2$. This behavior is mirrored for $\alpha_2<0$ on the bottom half of the diagram. \begin{figure}[t] \centering \includegraphics[width=4.0in,angle=270]{Fig8-DP-GB-n5d3.eps} \caption{\sl Direct product vacua of Gauss-Bonnet gravity when $n=5$ and $d=3$.} \label{Fig8} \end{figure} The case in $n=5$ dimensions for a product of a maximally symmetric $d=2$ dimensional Lorentzian spacetime with a maximally symmetric $D=3$ dimensional Euclidean space is very similar. The equations of motion (\ref{gb1}) and (\ref{gb2}) again reduce to the drastically simplified form (\ref{GBeqns}), but with the curvature constants swapped, so that the solutions are now given by \begin{equation}\label{GBprods2} K_1 = {1\over \left(1-{\Lambda_0\over\,\,\,\,\,\Lambda_0^{CS}}\right)}\cdot{2\over 3}\Lambda_0,\qquad K_2 = {\Lambda_0\over 3}. \end{equation} These results are displayed in Figure (\ref{Fig7}), which is very similar to Figure (\ref{Fig8}), the key difference being that moving through the CS value of the cosmological constant changes the sign of the curvature of the Lorentzian, rather than the Euclidean, part of the product in this case. \begin{figure}[t] \centering \includegraphics[width=3.75in,angle=270]{Fig7-DP-GB-n5d2.eps} \caption{\sl Direct product vacua of Gauss-Bonnet gravity when $n=5$ and $d=2$.} \label{Fig7} \end{figure} It is intriguing that the critical point occurs at $\Lambda_0=\Lambda_0^{CS}$ for both these product vacua, this being the same as the critical point that separates different regimes for maximally symmetric vacua, as displayed in Figure (\ref{Fig2}). If, via some mechanism, the cosmological constant were dynamical, then various possibilities for vacuum transitions exist. Assume for concreteness that the Gauss-Bonnet coupling is positive.
If $\Lambda_0$ then evolved through the CS point from above, a maximally symmetric $AdS_5$ vacuum of Figure (\ref{Fig2}) might transition into the $AdS_3\times S^2$ vacuum of Figure (\ref{Fig8}), a process of spontaneous compactification. We now move on to discuss product vacua for Gauss-Bonnet gravity in $n=6$ spacetime dimensions, which we will see display different types of critical behavior. We will consider $3+3$, $4+2$ and $2+4$ splits into Lorentzian and Euclidean factors in turn. In $n=5$ dimensions we saw that the product vacua existed for all values of the cosmological constant, providing possible vacuum states in the broken symmetry regime. We will see that this is no longer the case in $n=6$ dimensions. \begin{figure}[t] \centering \includegraphics[width=4.0in,angle=270]{Fig10-DP-GB-n6d3.eps} \caption{\sl Direct product vacua of Gauss-Bonnet gravity when $n=6$ and $d=3$.} \label{Fig10} \end{figure} For a $3+3$ split one finds that the equations of motion (\ref{gb1}) and (\ref{gb2}) simplify to \begin{align} 24\alpha_2 K_1K_2 + 2K_1 +6K_2-2\Lambda_0 &=0\\ 24\alpha_2 K_1K_2 + 2K_2 +6K_1-2\Lambda_0 &=0 \end{align} which have the two solutions $K_1=K_2=K_\pm$, where \begin{equation} K_\pm = {\Lambda_0^c\over 2}\left(1\pm\sqrt{1-{\Lambda_0\over\Lambda_0^c}}\right) \end{equation} and $\Lambda_0^c = -{1\over 3\alpha_2}$ is a new critical point that arises in this system. Focusing on $\alpha_2>0$, which is illustrated in the top half of Figure (\ref{Fig10}), there will be two physical $3+3$ product solutions for $\Lambda_0>\Lambda_0^c$ and none in the regime $\Lambda_0<\Lambda_0^c$, with a unique solution at the critical point. In $n=6$ dimensions, one finds from (\ref{CS}) that $\Lambda_0^{CS}=-{5\over 12\alpha_2}$, so that with $\alpha_2>0$ one has the ordering $\Lambda_0^{CS}<\Lambda_0^c$. It follows that in the symmetry breaking regime $\Lambda_0<\Lambda_0^{CS}$, where no maximally symmetric vacua exist, there are also no $3+3$ split product vacua.
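The branches $K_\pm$ and the disappearance of real solutions below $\Lambda_0^c$ are easy to verify numerically; a short sketch with an illustrative coupling value:

```python
import numpy as np

a2 = 0.2                                   # illustrative coupling, alpha_2 > 0
L0c = -1 / (3 * a2)                        # critical point for the 3+3 split

def K_branches(L0):
    disc = 1 - L0 / L0c
    if disc < 0:
        return []                          # no real 3+3 product vacua
    s = np.sqrt(disc)
    return [L0c / 2 * (1 + s), L0c / 2 * (1 - s)]   # K_+, K_-

for L0 in [-1.0, 0.0, 0.8]:                # all satisfy L0 > L0c = -5/3
    for K in K_branches(L0):
        # both 3+3 equations of motion, with K1 = K2 = K
        assert np.isclose(24 * a2 * K**2 + 8 * K - 2 * L0, 0)

assert K_branches(L0c - 0.1) == []         # below the critical point: none
```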
One also finds that, as for the maximally symmetric solutions, there is an ``Einstein'' branch of product solutions with $K_1=K_2=K_-$ which approaches the analogous product vacua of Einstein gravity in the limit of small Gauss-Bonnet coupling, and a ``Gauss-Bonnet'' branch with $K_1=K_2=K_+$ where the curvatures of both factors diverge in this limit. On the Einstein branch, the curvatures of both factors are precisely correlated with the sign of $\Lambda_0$, while on the Gauss-Bonnet branch both factors are always negatively curved. Finally, this whole structure is mirrored on the bottom half of the diagram for $\alpha_2<0$. \begin{figure}[t] \centering \includegraphics[width=4.0in,angle=270]{Fig11-DP-GB-n6d4.eps} \caption{\sl Direct product vacua of Gauss-Bonnet gravity when $n=6$ and $d=4$.} \label{Fig11} \end{figure} \begin{figure}[t] \centering \includegraphics[width=4.0in,angle=270]{Fig9-DP-GB-n6d2.eps} \caption{\sl Direct product vacua of Gauss-Bonnet gravity when $n=6$ and $d=2$.} \label{Fig9} \end{figure} The situation is similar in most respects for the $d=4$, $D=2$ product vacua. In this case the equations (\ref{gb1}) and (\ref{gb2}) for the curvature constants $K_1$ and $K_2$ reduce to \begin{align}\label{42split1} 24 \alpha_2 K_1K_2 +6K_1+2K_2-2\Lambda_0 &=0\\ 24\alpha_2 K_1^2 +12 K_1-2\Lambda_0 &=0\label{42split2} \end{align} which have the solutions \begin{equation}\label{4plus2} K_1^\pm = {\Lambda_0^c\over 3}B^\pm,\qquad K_2^\pm =\Lambda_0^c{B^\pm(B^\pm-1)\over 3B^\pm-1} \end{equation} where in this case $\Lambda_0^c = -{3\over 4\alpha_2}$ and $B^\pm = \left(1\pm\sqrt{1-{\Lambda_0\over \Lambda_0^c}}\right) $. These solutions are displayed in Figure (\ref{Fig11}). Focusing on the top half of the diagram with $\alpha_2>0$, there are again two branches of solutions for all values of the cosmological constant $\Lambda_0>\Lambda_0^c$.
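The same kind of consistency check works for the $4+2$ split, substituting (\ref{4plus2}) back into (\ref{42split1}) and (\ref{42split2}); the coupling value in the sketch is illustrative:

```python
import numpy as np

a2 = 0.2                                    # illustrative coupling
L0c = -3 / (4 * a2)                         # critical point for the 4+2 split

for L0 in [-1.2, 0.0, 0.9]:                 # all with L0 > L0c = -3.75
    for sign in (+1, -1):
        B = 1 + sign * np.sqrt(1 - L0 / L0c)
        K1 = L0c / 3 * B                    # the two branches of solutions
        K2 = L0c * B * (B - 1) / (3 * B - 1)
        # both 4+2 equations of motion are satisfied
        assert np.isclose(24 * a2 * K1 * K2 + 6 * K1 + 2 * K2 - 2 * L0, 0)
        assert np.isclose(24 * a2 * K1**2 + 12 * K1 - 2 * L0, 0)
```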
However, in this case we have $\Lambda_0^{CS}>\Lambda_0^c$, so that the product vacua extend some distance into the broken symmetry regime. The curvatures $K_1^+$ and $K_2^+$ are always negative along the Gauss-Bonnet branch, so these solutions are $AdS_4\times H^2$. The Einstein branch is more intricate. For $\Lambda_0>\Lambda_0^{CS}$, the signs of the curvatures $K_1^-$ and $K_2^-$ are both correlated with the sign of $\Lambda_0$ as they were in the $3+3$ split product described above. However, the denominator of the expression for $K_2^-$ vanishes linearly\footnote{One finds that the factor in the denominator of $K_2^-$ in (\ref{4plus2}) can be written as \begin{equation} 3B^--1 = -5 \left({1-{\Lambda_0\over\,\,\,\,\, \Lambda_0^{CS}}\over 2+3\sqrt{1-{\Lambda_0\over \Lambda_0^c}}}\right).\end{equation}} at $\Lambda_0=\Lambda_0^{CS}$, leading to a transition to $AdS_4\times S^2$ in the range $\Lambda_0^c<\Lambda_0<\Lambda_0^{CS}$. Precisely at the critical point $\Lambda_0=\Lambda_0^c$ there is a single $AdS_4\times E^2$ solution. The case of $2+4$ split product vacua in $n=6$ dimensions again reduces to equations (\ref{42split1}) and (\ref{42split2}), but now with the curvatures $K_1$ and $K_2$ swapped. The resulting configurations are shown in Figure (\ref{Fig9}). These vacua also exist a finite distance into the broken symmetry regime. \begin{figure}[t] \centering \includegraphics[width=3.75in,angle=270]{Fig6-DP-GB-d1.eps} \caption{\sl Direct product vacua of Gauss-Bonnet gravity in the case of one-dimensional submanifolds in $n$ dimensions.} \label{Fig6} \end{figure} Finally, as in Einstein gravity, the cases with $1$-dimensional factors, $d=1$ and $d=n-1$, require special handling, although in this case we are able to do the analysis for general spacetime dimension $n$.
Taking $d=1$, the equations of motion (\ref{gb1}) and (\ref{gb2}) reduce to \begin{align} \alpha_2(n-1)(n-2)(n-3)(n-4)K_2^2 +(n-1)(n-2)K_2-2\Lambda_0 &=0 \\ \alpha_2(n-2)(n-3)(n-4)(n-5)K_2^2 +(n-2)(n-3)K_2-2\Lambda_0 &=0 \end{align} These equations are inconsistent, except for the two special cases \begin{align} \Lambda_0 &=0,\qquad K_2=0\\ \Lambda_0 &=\Lambda_0^{CS},\qquad K_2={4\over (n-1)(n-2)}\Lambda_0^{CS} \end{align} which are displayed along with the corresponding results for $d=n-1$ in Figure (\ref{Fig6}). This is a curious result. Recall that in Einstein gravity, as shown in Figure (\ref{Fig4}), there are no similar $1$-dimensional product vacua with a non-zero value of the cosmological constant. However, any value of the cosmological constant in Einstein gravity can be thought of as being a CS value in the sense considered here. It would be interesting to look at the CS limits of higher order Lovelock theories to see if any pattern emerges with respect to product vacua with flat directions\footnote{Note that warped products with one dimensional factors were found for Lovelock theories with CS couplings in \cite{Kastor:2006vw}.}. \section{Vacua in third order Lovelock gravity}\label{3rdordersection} We now consider the full `low order' Lovelock theory introduced in Section (\ref{Sec:3rdLove}), which includes the third order Lovelock term as well. This will be relevant in $n=7$ dimensions and beyond. The equation determining the maximally symmetric vacua is now given by \begin{equation}\label{cubic} \tilde{\alpha}_3\Lambda^3+\tilde{\alpha}_2\Lambda^2+\Lambda-\Lambda_0=0, \end{equation} where $\tilde\alpha_2$ is as given above and $\tilde{\alpha}_3=\frac{4(n-3)(n-4)(n-5)(n-6)}{(n-1)^2(n-2)^2}\,\alpha_3$. Since this is a cubic equation, there will always be at least one real root and, hence, at least one physical maximally symmetric vacuum state. Therefore, third order Lovelock theory has no broken symmetry regime in coupling space.
The roots of equation (\ref{cubic}) are given by \begin{eqnarray} \Lambda_{1}&=&\frac{1}{3\tilde{\alpha}_3}\left[-\tilde{\alpha}_2+A+\frac{\Delta_0}{A}\right],\label{Rroot}\\ \Lambda_{2\pm}&=&-\frac{1}{6\tilde{\alpha}_3}\left[2\tilde{\alpha}_2+A+\frac{\Delta_0}{A}\pm i\sqrt{3}\left(A-\frac{\Delta_0}{A}\right)\right],\label{Croots} \end{eqnarray} where $A^3={1\over2}\left[-\Delta_1+\sqrt{\Delta_1^2-4\Delta_0^3}\right]$ with $\Delta_0=\tilde{\alpha}_2^2-3\tilde{\alpha}_3$ and $\Delta_1=2\tilde{\alpha}_2^3-9\tilde{\alpha}_2\tilde{\alpha}_3-27\tilde{\alpha}_3^2\Lambda_0$. As noted in \cite{Amirabi:2013bqa}, when one takes the Gauss-Bonnet limit, $\alpha_3\rightarrow 0$, the real root $\Lambda_{1}$ diverges, while the complex conjugate pair $\Lambda_{2\pm}$ become the solutions (\ref{VacGB}). \begin{figure}[t] \centering \includegraphics[width=3.0in,angle=270]{Fig3-MS-3rdLG.eps} \caption{\sl Maximally symmetric vacua of third-order Lovelock gravity in the special case $\tilde{\alpha}_3=\tilde{\alpha}_2^2/3$ in $n$ dimensions. The superscript \textquotedblleft$(r)$\textquotedblright\ refers to the real solution (\ref{Rroot1}).} \label{Fig3} \end{figure} \begin{figure}[t] \centering \includegraphics[width=3.75in,angle=270]{Fig16-DP-3rdLG-n7d5.eps} \caption{\sl Direct product vacua of third-order Lovelock gravity when $n=7$ and $d=5$.} \label{Fig16} \end{figure} The nature of the three roots (\ref{Rroot}) and (\ref{Croots}) depends on the sign of the quantity $\Delta_1^2-4\Delta_0^3$ in the following way\footnote{The first case is obvious, for it is the generic case. The second case can be understood as follows. When $\Delta_1^2-4\Delta_0^3=0$, there are three possibilities, which can be studied by considering $\Delta_1=0$, $\Delta_1>0$, and $\Delta_1<0$: If $\Delta_1=0$, then $\Delta_0=0$ and $A=0$.
So in this case, (\ref{Rroot}) and (\ref{Croots}) seem indeterminate, but rewriting the equation (\ref{cubic}) as $$\left(\Lambda+\frac{\tilde{\alpha}_2}{3\tilde{\alpha}_3}\right)^3-\frac{\Delta_0}{3\tilde{\alpha}_3^2}\left(\Lambda+\frac{\tilde{\alpha}_2}{3\tilde{\alpha}_3}\right)+\frac{\Delta_1}{27\tilde{\alpha}_3^3}=0,$$ we see that there is one triple real root. This is just the CS point. On the other hand, if $\Delta_1>0$ or $\Delta_1<0$, then $\Delta_1=2\Delta_0^{3/2}$ and $A=-\Delta_0^{1/2}$ or $\Delta_1=-2\Delta_0^{3/2}$ and $A=\Delta_0^{1/2}$, respectively. In either case, the complex part inside the square bracket in (\ref{Croots}) drops out and we get three real roots, two of which are always equal. Finally, in the last case, $\Delta_1^2-4\Delta_0^3<0$, $A$ becomes complex and so $|A|^2=\Delta_0$. By using simple complex algebra, one can easily show that the roots (\ref{Rroot}) and (\ref{Croots}) are indeed real.} \begin{equation}\label{cases} \begin{array}{ll} 1.&\tx{$\Delta_1^2-4\Delta_0^3>0$}\Rightarrow\tx{one real and two complex roots}, \\ 2.&\tx{$\Delta_1^2-4\Delta_0^3=0$}\Rightarrow\tx{multiple real roots}, \\ 3.&\tx{$\Delta_1^2-4\Delta_0^3<0$}\Rightarrow\tx{three distinct real roots}. \end{array} \end{equation} The CS point for the third order theory, {\it i.e.} the point at which all three roots coincide, occurs when the quantities $\Delta_0= \Delta_1=0$. The couplings at the CS points are then related according to \begin{equation}\label{3rdCScon} \tilde{\alpha}_3=\frac{\tilde{\alpha}_2^2}{3},\qquad \Lambda_0=\Lambda_0^{CS}\equiv-\frac{1}{3\tilde{\alpha}_2}. \end{equation} The effective cosmological constant at the CS point is $\Lambda=-\frac{1}{\tilde{\alpha}_2}$, corresponding to a $dS_n$ vacuum for $\tilde{\alpha}_2<0$ or an $AdS_n$ vacuum for $\tilde{\alpha}_2>0$. It is difficult to investigate the full parameter space of third order Lovelock theory.
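The three cases in (\ref{cases}) are easy to verify numerically. The following Python sketch (ours; the coupling values are illustrative, not taken from the text) compares the sign of $\Delta_1^2-4\Delta_0^3$ with a direct root count of the cubic (\ref{cubic}):

```python
import numpy as np

def classify_vacua(a2t, a3t, L0, tol=1e-3):
    """Return (Delta_1^2 - 4 Delta_0^3, number of real roots of the cubic)."""
    D0 = a2t**2 - 3*a3t
    D1 = 2*a2t**3 - 9*a2t*a3t - 27*a3t**2*L0
    disc = D1**2 - 4*D0**3
    # Roots of  a3t*L^3 + a2t*L^2 + L - L0 = 0
    roots = np.roots([a3t, a2t, 1.0, -L0])
    n_real = int(np.sum(np.abs(roots.imag) < tol))
    return disc, n_real

# Generic sample couplings: one real vacuum (case 1).
disc, n_real = classify_vacua(1.0, 2.0, 0.5)
print(disc > 0, n_real)            # -> True 1

# CS point of Eq. (3rdCScon): a3t = a2t^2/3, L0 = -1/(3 a2t) -> triple root (case 2).
disc, n_real = classify_vacua(1.0, 1.0/3, -1.0/3)
print(abs(disc) < 1e-12, n_real)   # -> True 3
```

The loose tolerance on the imaginary parts absorbs the well-known ill-conditioning of a numerically computed triple root.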
In order to make progress, we will restrict our attention to the two-parameter family of theories satisfying $\Delta_0=0$, which will greatly simplify our analysis of product vacua, while still yielding interesting results. We can now regard the third order coupling $\alpha_3$ as fixed in terms of $\alpha_2$ by the first condition in (\ref{3rdCScon}). From (\ref{cases}) we see that with $\Delta_0=0$ there will generically be one real root, which is found to be \begin{equation}\label{Rroot1} \Lambda_{1}=-\frac{1}{\tilde{\alpha}_2}\left[1-(1+3\tilde{\alpha}_2\Lambda_0)^{1/3}\right]. \end{equation} This can be zero for $\Lambda_0=0$, positive for $\Lambda_0>0$, or negative for $\Lambda_0<0$, yielding respectively an $M^n$, $dS_n$, or $AdS_n$ vacuum. If in addition $\Delta_1=0$, which yields the second condition in (\ref{3rdCScon}), this is then the CS case with three coinciding real roots. The corresponding maximally symmetric vacua are represented in \fig{Fig3}. The vacua of Einstein gravity are recovered by taking the limit $\alpha_2\rightarrow 0$, which, because $\Delta_0=0$, also takes the third order coupling to zero. \begin{figure}[t] \centering \includegraphics[width=4.0in,angle=270]{Fig13-DP-3rdLG-n7d2.eps} \caption{\sl Direct product vacua of third-order Lovelock gravity when $n=7$ and $d=2$.} \label{Fig13} \end{figure} We now turn to product vacua for third order Lovelock gravity. The equations satisfied by the curvatures $K_1$ and $K_2$ are now given by \begin{align}\label{third1}\nonumber \alpha_3\left({(d-1)!\over (d-7)!}K_1^3 + {3(d-1)!D!\over (d-5)!(D-2)!} K_1^2K_2 \right.&\left.+ {3(d-1)!D!\over (d-3)!(D-4)!} K_1K_2^2 +{D!\over (D-6)!}K_2^3\right)\\ +\alpha_2\left( {(d-1)!\over (d-5)!}K_1^2 \right.&\left.
+ {2(d-1)!D!\over (d-3)!(D-2)!} K_1K_2 + {D!\over (D-4)!}K_2^2\right)\\ & +{(d-1)!\over (d-3)!}K_1+ {D!\over (D-2)!}K_2-2\Lambda_0 =0\nonumber \end{align} \begin{align}\label{third2} \nonumber \alpha_3\left({(D-1)!\over (D-7)!}K_2^3 + {3(D-1)!d!\over (D-5)!(d-2)!} K_2^2K_1 \right.&\left.+ {3(D-1)!d!\over (D-3)!(d-4)!} K_2K_1^2 +{d!\over (d-6)!}K_1^3\right)\\ +\alpha_2\left( {(D-1)!\over (D-5)!}K_2^2 \right.&\left. + {2(D-1)!d!\over (D-3)!(d-2)!} K_2K_1 + {d!\over (d-4)!}K_1^2\right)\\ & +{(D-1)!\over (D-3)!}K_2+ {d!\over (d-2)!}K_1-2\Lambda_0 =0\nonumber \end{align} We will also restrict our analysis to $n=7$ dimensions, which is the lowest dimension in which the third order Lovelock term is relevant. We will consider in turn product vacua with $5+2$, $2+5$, $4+3$ and $3+4$ splits. \begin{figure}[t] \centering \includegraphics[width=3.0in,angle=270]{Fig15-DP-3rdLG-n7d4.eps} \caption{\sl Direct product vacua of third-order Lovelock gravity when $n=7$ and $d=4$.} \label{Fig15} \end{figure} \begin{figure}[t] \centering \includegraphics[width=3.0in,angle=270]{Fig14-DP-3rdLG-n7d3.eps} \caption{\sl Direct product vacua of third-order Lovelock gravity when $n=7$ and $d=3$.} \label{Fig14} \end{figure} Beginning with the $d=5$, $D=2$ split, we find that the equations of motion (\ref{third1}) and (\ref{third2}) reduce to \begin{align} 144\alpha_3 K_1^2K_2 +24\alpha_2 K_1^2 +48\alpha_2K_1K_2+12K_1+2K_2-2\Lambda_0 &=0\\ 120\alpha_2 K_1^2 +20K_1-2\Lambda_0 &=0 \end{align} We see that the second equation can be solved for $K_1$, with the first equation then determining $K_2$, giving \begin{equation} K_1^\pm = {\Lambda_0^{CS}\over 5}C^\pm,\qquad K_2^\pm = -{4\Lambda_0^{CS}\over 5}{C^\pm\over C^\pm-1} \end{equation} where $C^\pm=1\pm\sqrt{1-{\Lambda_0\over\,\,\,\,\,\Lambda_0^{CS}}}$, $\Lambda_0^{CS}=-{5\over 12\alpha_2}$ is the value of the cosmological constant at the CS point for the third order Lovelock theory in $n=7$ dimensions, and we have set $\alpha_3=2\alpha_2^2$ in
accordance with our assumption that $\Delta_0=0$. The character of these solutions and the range of $\Lambda_0$ covered is shown in Figure (\ref{Fig16}). The case of a $2+5$ split reduces to the same set of equations, with the roles of the two curvatures $K_1$ and $K_2$ swapped. The resulting solutions are displayed in Figure (\ref{Fig13}). The structure of product vacua with a $d=4$, $D=3$ split is considerably more intricate. In this case the equations of motion reduce to \begin{align} 72\alpha_2 K_1K_2+6K_1 +6K_2 -2\Lambda_0& =0\\ 144\alpha_3 K_2K_1^2 +48\alpha_2K_1K_2+24\alpha_2K_1^2+2K_2+12K_1-2\Lambda_0 &=0. \end{align} After including the relation $\alpha_3=2\alpha_2^2$ this yields the curvatures \begin{equation}\label{3rdLGK1K2:n7d4} K_1=\frac{2\Lambda_0}{15(1-{\Lambda_0\over\,\Lambda_0^2})},\qquad K_2=\frac{(1-{\Lambda_0\over\,\Lambda_0^1})\Lambda_0}{5(1-{\Lambda_0\over\,\,\,\,\,\Lambda_0^{CS}})}, \end{equation} where the critical values of the cosmological constant at which one or the other of $K_1$ and $K_2$ changes sign are given by $\Lambda_0^1=-{3\over 4\alpha_2}$, $\Lambda_0^2=-{5\over 4\alpha_2}$ and $\Lambda_0^{CS}=-{5\over 12\alpha_2}$. These results are displayed in Figure (\ref{Fig15}). The case of a $3+4$ split is simply obtained by swapping the curvatures $K_1$ and $K_2$ and is displayed in Figure (\ref{Fig14}). Finally, we consider products with one dimensional factors, $d=n-1$ and $d=1$. Taking $d=1$, we find that as in the Gauss-Bonnet case solutions exist only for $\Lambda_0=0$ with $K_2=0$ and for $\Lambda_0=-{1\over 3\tilde\alpha_2}\equiv\Lambda_0^{CS}$ with $K_2={6\over (n-1)(n-2)}\Lambda_0^{CS}$. This result\footnote{The solutions with $\Lambda_0=0$ are actually present for general values of $\alpha_2$ and $\alpha_3$.} along with the corresponding result for $d=n-1$ is displayed in Figure (\ref{Fig12}). It is intriguing that the CS value of the couplings shows up again here.
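The closed-form curvatures quoted above for the $5+2$ and $4+3$ splits can be checked by direct substitution back into the reduced equations of motion. A numerical sketch (ours; the values $\alpha_2=1$, $\Lambda_0=-0.2$ are illustrative assumptions):

```python
import numpy as np

a2, L0 = 1.0, -0.2        # sample couplings (illustrative assumption)
a3 = 2*a2**2              # the Delta_0 = 0 restriction in n = 7
L0cs = -5/(12*a2)         # CS value of Lambda_0

# 5+2 split: K_1^{pm}, K_2^{pm} in terms of C^{pm} = 1 pm sqrt(1 - L0/L0cs).
for sign in (+1, -1):
    C = 1 + sign*np.sqrt(1 - L0/L0cs)
    K1 = L0cs/5*C
    K2 = -4*L0cs/5*C/(C - 1)
    eq1 = 144*a3*K1**2*K2 + 24*a2*K1**2 + 48*a2*K1*K2 + 12*K1 + 2*K2 - 2*L0
    eq2 = 120*a2*K1**2 + 20*K1 - 2*L0
    assert abs(eq1) < 1e-12 and abs(eq2) < 1e-12

# 4+3 split: Eq. (3rdLGK1K2:n7d4) with Lambda_0^1 and Lambda_0^2 as quoted.
L01, L02 = -3/(4*a2), -5/(4*a2)
K1 = 2*L0/(15*(1 - L0/L02))
K2 = (1 - L0/L01)*L0/(5*(1 - L0/L0cs))
eq1 = 72*a2*K1*K2 + 6*K1 + 6*K2 - 2*L0
eq2 = 144*a3*K2*K1**2 + 48*a2*K1*K2 + 24*a2*K1**2 + 2*K2 + 12*K1 - 2*L0
assert abs(eq1) < 1e-12 and abs(eq2) < 1e-12
print("reduced equations satisfied for both splits")
```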
It would be interesting to understand the case of product vacua with one dimensional factors more generally. \begin{figure}[t] \centering \includegraphics[width=3.0in,angle=270]{Fig12-DP-3rdLG-d1.eps} \caption{\sl Direct product vacua of third-order Lovelock gravity in the case of one-dimensional submanifolds in $n$ dimensions.} \label{Fig12} \end{figure} \section{Conclusion}\label{conclude} Einstein gravity has maximally symmetric vacua for all values of the cosmological constant and in all spacetime dimensions. However, Lovelock theories can have symmetry breaking regions of coupling space, in which no maximally symmetric vacua exist. We have carried out a partial survey of alternative, reduced symmetry vacua in Lovelock theories that are products of lower dimensional maximally symmetric space(times), with particular interest in whether such vacua cover the symmetry breaking regions of coupling space. Our findings on this question show indications of interesting structure. Gauss-Bonnet gravity in any dimension has such a symmetry breaking region of coupling space. We looked at product vacua in $n=5$ and $n=6$ dimensions, finding sharply different results. While product vacua cover the entire symmetry breaking region of coupling space in $n=5$ dimensions, in $n=6$ dimensions such vacua only cover a small portion of the symmetry breaking region. In $n=7$ dimensions, the third order Lovelock interaction becomes physically relevant, and so long as its coupling is nonzero at least one maximally symmetric vacuum will exist. We have looked at product vacua in this theory, restricting our focus to a tractable region of coupling space. We found that $5+2$ and $2+5$ dimensional products again exist throughout this region. However, $4+3$ and $3+4$ dimensional products exist only over a portion of coupling space. It would be interesting to extend this study further.
For example, one could look at product vacua in Gauss-Bonnet gravity beyond $n=6$ dimensions, {\it i.e.} setting the couplings of the relevant higher order Lovelock terms to zero. Our results in $n=5,6$ dimensions would be consistent with a number of possible patterns. For example, it might turn out that product vacua cover all of the symmetry breaking region of coupling space only in $n=5$ dimensions. Alternatively, it could be that there is an alternation between even and odd dimensions, with product vacua covering the symmetry breaking region fully in odd dimensions. As the dimension gets higher, it might also be natural to consider product vacua with more than two factors. It would be particularly interesting to map out the product vacua of $4$th order Lovelock theory, which will also have a symmetry breaking regime, in $n=9,10$ dimensions, where it includes all the relevant Lovelock terms. However, even if one considers only a particular subspace of the full set of couplings, the equations for the curvatures will be quartic and difficult to analyze. It is also important to note that not all maximally symmetric vacua in Lovelock gravity are stable. For example, the Gauss-Bonnet branch of vacua in Gauss-Bonnet gravity suffers from a ghost instability \cite{Boulware:1985wk}. As noted in \cite{Canfora:2008iu}, it will be important to study the stability of product solutions such as those found here, in order to determine the true vacua of the theory. Finally, it would be interesting to consider the potential physical relevance of transitions across critical surfaces in coupling space in which the number of maximally symmetric vacua changes. This could happen, for example, if the cosmological constant were dynamical\footnote{There has recently been a good deal of interest in considering the cosmological constant as a thermodynamic variable in the context of black hole physics (see {\it e.g.} \cite{Kastor:2009wy,Cvetic:2010jb,Kubiznak:2012wp} and references therein).
The thermodynamics of varying Lovelock couplings has also been considered in \cite{Kastor:2010gq}.}. If the cosmological constant crossed into the symmetry breaking region of coupling space in $n=5$ dimensional Gauss-Bonnet gravity, it might be possible to transition from an $AdS_5$ vacuum to an $AdS_3\times S^2$ vacuum as in the top half of Figure (\ref{Fig8}). \subsection*{Acknowledgements} \c{C}. \c{S}. wishes to thank ACFI for hospitality and also thanks the Scientific and Technological Research Council of Turkey (T{\"U}B\.{I}TAK) for financial support under the Programme BIDEB-2219.
\section{I. Introduction} The magneto-optical trap (MOT), first demonstrated in 1987~\cite{Pritchard1987}, is widely used nowadays. It has triggered a broad field of research in the past decades. Following its initial demonstration, early work explored the role of sub-Doppler mechanisms~\cite{Cohen1989} and multiple scattering of light~\cite{Walker1990}. In addition to representing essential experimental technology to reach quantum degeneracy in ultracold gases, MOTs can, in the large atom number limit, harbor a variety of interesting nonlinear phenomena that are not yet fully understood and bear intriguing ties to plasma~\cite{Mendonca2008,Rodrigues2016,Barre2019, Hansen1975,Evrard1979} and stellar physics~\cite{Labeyrie2006, Cox1980}. Descriptions borrowed from the fields of nonlinear dynamics~\cite{Wilkowski2000, DiStephano2004} and fluid physics~\cite{Mendonca2012} have been employed to investigate these phenomena. In a MOT with a large number of trapped atoms $N$, the atoms are subjected to three forces. A trapping and cooling force is exerted by laser beams in the presence of a magnetic field gradient. This force depends on laser and magnetic field parameters, but not on $N$. Two additional, \textit{collective} forces appear when $N$ is large enough. First, the laser beams are attenuated inside the atom cloud due to photon scattering. This attenuation yields a compressive correction to the trapping force~\cite{Dalibard1988}. Second, the scattered photons can be re-scattered by other atoms, which gives rise to a Coulomb-like repulsive force~\cite{Walker1990}. It is the interplay between these three forces that can generate unstable dynamics in large MOTs. During the last two decades, instabilities in MOTs have been studied in various configurations~\cite{Wilkowski2000, DiStephano2003, DiStephano2004, Romain2016, Labeyrie2006}.
In Ref.~\cite{Labeyrie2006} we reported an unstable behavior for a large balanced MOT (see section II) and presented a preliminary study of its instability threshold. A simple unidimensional analytical model allowed us to provide a rough instability criterion. In the present work, we provide a detailed analysis of the instability threshold using an improved experimental scheme where the number of trapped atoms is controlled. Some of our observations deviate from the scaling predicted by the analytical model of Ref.~\cite{Labeyrie2006}. We thus developed a three-dimensional numerical simulation based on microscopic theoretical ingredients, whose results are in qualitative agreement with the experiment. The article is organized as follows. In section II, we describe our experimental setup and measurement procedure. We then review previous theoretical models of MOT instabilities in section III.a, and describe our numerical approach in section III.b. In section IV, we present our experimental results and discuss the comparison to our numerical simulations. The implications of our findings and ensuing perspectives for future work are outlined in sections V and VI. \section{II. Experimental Setup} \label{experiment} \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{Fig1.pdf} \caption{Experimental procedure. \textbf{A} Details of the arrangement for one pair of MOT beams (1). The other two pairs ((2) and (3)) are identical and orthogonal to the first one. The beams are delivered by polarization-maintaining single-mode optical fibers (OF) coupled to a collimator (C). The beam intensities are balanced using a half-wave plate ($\lambda/2$) and polarizing beam splitter (PBS) assembly placed on one arm. The PBS also allows us to collect part of the counter-propagating beam after its passage through the cloud, for optical density measurement using a photodiode (PD). The beams are expanded to a waist of 3.4 cm using afocal telescopes (L$_1$ + L$_2$).
Their polarization is adjusted using quarter-wave plates ($\lambda/4$). The magnetic field gradient is provided by a pair of anti-Helmholtz coils (AHC). \textbf{B} Timing of the experiment. The MOT is loaded for 2 s with a detuning $\delta_{load}$, adjusted to maintain the number of atoms fixed during the measurement. The detuning is then changed to $\delta$ for 100 ms (instability phase). An image is finally acquired with a fixed detuning $\delta_{im.} = -8\Gamma$.} \label{setup} \end{center} \end{figure} Our large MOT and its characteristics have been thoroughly described in Ref.~\cite{Camara2014}. Here, we briefly reiterate the aspects that are most relevant to the present work. We use six large (3.4 cm waist) trapping laser beams of the same intensity tuned close (detuning $\delta$) to the $F = 2 \rightarrow F^{\prime} = 3$ transition of the $^{87}$Rb D2 line to trap and cool atoms from an ambient vapor (see Fig.~\ref{setup}). These six beams, originating from the same source coupled into single-mode optical fibers, form three counter-propagating pairs crossing at 90$^\circ$. Because of this balanced arrangement, the nature of the instabilities is different from that of Refs.~\cite{Wilkowski2000,Romain2016} where the beams were retro-reflected, such that the center-of-mass motion played a dominant role in the nonlinear dynamics of the trapped atoms. In our setup, the intensities of the beams in each pair are balanced, yielding in principle a centro-symmetric situation. However, a source of asymmetry is the magnetic field gradient $\nabla$B, which is generated by a pair of anti-Helmholtz coils and is thus twice as strong along the coils' axis as in the transverse plane. Small defects in the spatial profiles of the beams also create a local intensity imbalance, which is another source of symmetry breaking. The peak intensity for each beam is typically $I = 5$ mW/cm$^2$.
The corresponding saturation parameter per beam is $s = \frac{I/I_{sat}}{1+4(\delta/\Gamma)^2} \approx 0.08$ for $\delta = -3\Gamma$, assuming atoms are pumped into the stretched Zeeman sub-states ($I_{sat} = 1.67$ mW/cm$^2$). $\Gamma$ is the natural line width of the transition ($\frac{\Gamma}{2 \pi} = 6.06$ MHz). In addition to the trapping light, all beams also contain a small amount of ``repumping'' light tuned close to the $F = 1 \rightarrow F^{\prime} = 2$ transition to maintain the atoms in interaction with the trapping lasers. Three pairs of Helmholtz coils are used to compensate for stray magnetic fields at the position of the MOT center. Because of this, the position of the cloud in the stable regime does not vary in the course of the experiment when the magnetic field gradient is adjusted. In section IV, we will present the measured threshold detunings as the magnetic field gradient is varied, while keeping the number of trapped atoms $N$ fixed. Since we employ a vapor-loaded MOT where the steady-state value of $N$ is determined by ($\delta$, $\nabla$B), this requires the use of the temporal sequence of Fig.~\ref{setup}\textbf{B}. It is composed of three successive phases that are continuously cycled. In the first phase, we load the MOT for a duration of 2 s with a detuning $\delta_{load}$. We set the value of $N$ by adjusting $\delta_{load}$. We then, in the second phase, rapidly change the detuning for 100 ms to a value $\delta$, which determines the dynamical regime of the MOT that we wish to probe. Finally, in the third phase, the detuning is adjusted to $\delta_{im.} = -8~\Gamma$ for 1 ms to perform a fluorescence image acquisition. We image the cloud from two directions at 90$^\circ$, giving access to projections of the atomic distribution along the three spatial dimensions. Note that the assumption that the detected fluorescence is proportional to the column density is safe because of the large detuning chosen for the imaging~\cite{Camara2014}.
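The saturation parameter quoted above follows from a one-line evaluation; a quick arithmetic sketch (ours):

```python
I = 5.0          # peak intensity per beam (mW/cm^2)
I_sat = 1.67     # saturation intensity (mW/cm^2)
delta = -3.0     # detuning in units of Gamma

# s = (I/I_sat) / (1 + 4 (delta/Gamma)^2)
s = (I / I_sat) / (1 + 4 * delta**2)
print(round(s, 2))   # -> 0.08
```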
An absorption imaging scheme, not represented in Fig.~\ref{setup}, allows us to determine the value of $N$. The measurement sequence of Fig.~\ref{setup}B enables us to maintain a fixed atom number determined by $\delta_{load}$. The 100 ms delay between detuning step and image acquisition is necessary to decorrelate the observed dynamics from the ``kick'' applied to the atoms as the detuning is abruptly changed. Note however that this delay is small compared to the loading time constant of the MOT ($\approx 2$ s), which ensures that the atom number is determined by $\delta_{load}$ and remains approximately independent of $\delta_{inst.}$. Since only one image is recorded for each sequence, by cycling over many runs (typically a hundred) we randomly probe the dynamics. \section{III. Previous theoretical approaches and three-dimensional numerical simulations} \subsection{III.a. Previous theoretical approaches} We briefly recall here the evolution of the theoretical approaches that were employed in the past to describe the physics of large balanced MOTs. This evolution ultimately led to the development of the three-dimensional numerical approach described in Section III.b. The model introduced in the 90s by C. Wieman and co-workers~\cite{Walker1990} was the first to describe the operation of the MOT in the stable, multiple-scattering regime. It relies on a Doppler description of the trapping forces which seems appropriate for very large MOTs, although it is known that sub-Doppler mechanisms can play an important role for alkali MOTs at low and moderate atom numbers~\cite{Cohen1989, Kim2004}. Contrary to the case of small atom numbers, where the dynamics is governed by single-atom physics and the MOT size is determined by the temperature of the gas, in the regime of large atom numbers the radiation pressure forces acting on the atoms depend on the atomic density distribution and can therefore lead to collective behavior.
This occurs when the optical density of the cloud at the trapping laser frequency becomes non-negligible. On one hand, the trapping beams are then attenuated while propagating through the cloud, which produces a density dependent compression force~\cite{Dalibard1988}. On the other hand, the scattered photons can be re-scattered by other atoms, resulting in a Coulomb-like repulsion force~\cite{Walker1990}. Because the absorption cross-section for scattered photons $\sigma_R$ is different from and larger than that for laser photons $\sigma_L$, the repulsion force is larger than the compression force and the cloud expands when $N$ is increased~\cite{Walker1990, Steane1992}. At equilibrium, the atomic density inside the cloud is constant, determined only by the ratio $\frac{\sigma_R}{\sigma_L}$, and does not depend on $N$. This results in a characteristic $R \propto N^{1/3}$ scaling law~\cite{Walker1990, Camara2014}, providing a clear signature of this regime defined by a spatially linear trapping force and weak attenuation of the trapping beams. The simplified treatment of~\cite{Walker1990} was extended in~\cite{Labeyrie2006} in order to account for larger optical densities and to analyze MOT instabilities that can occur for larger atom numbers. This approach allowed us to derive a criterion for the threshold of MOT instabilities that were observed when $N$ exceeded a certain critical value. In this model, which assumed a constant density, the nonlinear dependence of the attenuated trapping force on both position and velocity was retained. An unstable regime was found to occur when the cloud's radius was larger than a critical value $R_c$ given by: \begin{equation} \mu~\nabla B~R_{c} \approx \left|\delta\right| \label{threshold} \end{equation} where $\mu = 2\pi \times 1.4 \times 10^6$ s$^{-1}$G$^{-1}$ for the considered rubidium transition. The MOT thus becomes unstable when the Zeeman detuning at the cloud's edge exceeds the absolute value of the laser detuning.
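The critical radius implied by Eq.~\ref{threshold} is straightforward to evaluate for typical parameters; a small numerical sketch (ours):

```python
import numpy as np

mu = 2*np.pi*1.4e6      # Zeeman shift rate (s^-1 G^-1)
Gamma = 2*np.pi*6.06e6  # natural linewidth (s^-1)

delta = -2*Gamma        # laser detuning
gradB = 10.0            # magnetic field gradient (G/cm)

# R_c = |delta| / (mu * gradB), in cm
R_c = abs(delta) / (mu * gradB)
print(round(10*R_c, 1))   # critical radius in mm -> 8.7
```

This is consistent with the $R_c \approx 9$ mm estimate quoted in the text.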
In this situation, the total force at the cloud's edge reverses its sign, and the atomic motion becomes driven instead of damped. For $\delta = -2 \Gamma$ and $\nabla$B = 10 G/cm, the criterion given by Eq.~\ref{threshold} yields $R_{c} \approx 9$ mm. Such a large MOT size typically requires a large atom number $N > 10^{10}$. While this simplified model provided an intuitive picture for the emergence of the instability and the existence of an instability threshold, it did not make quantitative predictions and was not able to describe the dynamics of the unstable cloud. The assumption of a constant density was relaxed in~\cite{Pohl2006, Gattobigio2010}, based on a kinetic theory that described the phase space density of the atoms using a spatial radial symmetry hypothesis. The numerical test-particle simulations of the derived kinetic equations were able to confirm the simple instability criterion given by Eq.~\ref{threshold}. They also yielded a generic shape for the atomic density distribution of stable clouds in the form of truncated Gaussians. Finally, these simulations provided insight into the mechanism of the instability, and gave access to the dynamics of the cloud in the unstable regime. However, a limitation of this approach was the assumed spherical symmetry, effectively reducing the dimension of the problem to 1D and preventing e.g. center-of-mass motion in the dynamics of the unstable cloud. \subsection{III.b. Three-dimensional numerical simulations} To overcome the limitations of the previous models, we have developed a 3D numerical approach based on a microscopic description of the light-atom interaction. The detailed description of this model will be published elsewhere; here we simply outline its main features. Since modeling $10^{11}$ atoms is out of reach with present computers, we used $N_s = 7 \times 10^3$ ``super-particles'', each representing $\alpha = \frac{N}{N_s}$ real atoms.
The mass and scattering cross-section of these super-particles are $\alpha$ times larger than those of individual atoms. We checked the validity of this approach by verifying that the outcome of the simulation becomes independent of $N_s$ for $N_s > 5 \times 10^3$. What we simulate is the dynamics of these $N_s$ particles submitted to the three MOT forces mentioned earlier: the trapping force, the compressive attenuation force and the repulsive re-scattering force. The finite temperature of the cloud is accounted for by including a velocity diffusion term in the dynamics, which depends on the photon scattering rate for each particle. We use a leap-frog algorithm~\cite{Kaya1998} to compute the particle dynamics. In the following, we describe the essential steps employed in the simulations. We use a Doppler model for the various forces, which are based on radiation pressure. To simplify the expressions given below, we assume here that our particles are two-level atoms. However, in the simulation we use a more realistic 0 $\rightarrow$ 1 transition model. It is obviously much simpler than the actual 2 $\rightarrow$ 3 transition of the D2 line of $^{87}$Rb used in the experiment, but allows for a correct description of the three-dimensional trapping by the MOT~\cite{Brickman2007}. To calculate the forces acting on a particle located at position $r$, we first compute the local intensity $I(r)$ of each of the six laser beams. This is achieved by calculating their attenuation due to the rest of the cloud. By summing independently the radiation pressure forces $F_{rp}$ due to each beam, we obtain the trapping plus attenuation force. We thus neglect effects of cross-saturations between laser beams~\cite{Romain2011}, which is strictly valid only in the low saturation limit.
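The leap-frog update mentioned above can be sketched as follows (our own minimal kick-drift-kick version; the linear restoring force is a placeholder standing in for the full MOT force, and all parameter values are illustrative):

```python
import numpy as np

def leapfrog_step(x, v, force, m, dt):
    """Advance positions/velocities by one kick-drift-kick step."""
    v_half = v + 0.5*dt*force(x)/m
    x_new = x + dt*v_half
    v_new = v_half + 0.5*dt*force(x_new)/m
    return x_new, v_new

# Example: a harmonic trap; leap-frog keeps the energy error bounded.
k, m, dt = 1.0, 1.0, 1e-3
force = lambda x: -k*x
x, v = np.array([1.0, 0.0, 0.0]), np.zeros(3)
E0 = 0.5*m*v@v + 0.5*k*x@x
for _ in range(10000):
    x, v = leapfrog_step(x, v, force, m, dt)
E = 0.5*m*v@v + 0.5*k*x@x
print(abs(E - E0)/E0 < 1e-4)   # -> True
```

In the actual simulation the velocity diffusion term would be added to the kick stages as a stochastic increment.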
The radiation pressure exerted by one laser beam of intensity $I(r)$ on an atom located at position $\textbf{r}$ is of the form: \begin{equation} F_{rp}(r,v)=\frac{\sigma_L(r,v) I(r)}{c} \label{Frp} \end{equation} where $c$ is the speed of light. The scattering cross-section for a laser photon $\sigma_L$ is: \begin{equation} \sigma_L(r,v)=\frac{3\lambda^2}{2\pi}\frac{1}{1+I_{tot}(r)/I_s+(2 \delta_{eff}(r,v)/\Gamma)^2} \label{sigma_L} \end{equation} where $\lambda$ is the wavelength of the atomic transition. The presence of the other laser beams is taken into account by the $I_{tot}(r)$ term in the denominator of expression~\ref{sigma_L}, which is the total local laser intensity obtained by summing the intensities $I_{i}(r)$ of each laser beam. Considering for simplicity a beam propagating in the positive $x$ direction, the Doppler- and Zeeman-shifted detuning $\delta_{eff}$ is: \begin{equation} \delta_{eff}(x,v)=\delta - \textbf{k}\cdot\textbf{v} - \mu \nabla B x \label{delta eff} \end{equation} The re-scattering force acting on the particle at position $r$ is obtained by summing all binary interactions with the other particles in the cloud. For a second particle located at position $r^{\prime}$, the binary interaction $F_{bin}(r,r^{\prime})$ is of the form: \begin{equation} F_{bin}(r,r^{\prime})=\frac{I_{tot}(r^{\prime}) \sigma_L(r^{\prime}) \sigma_R(r,r^{\prime})}{4\pi c (r^{\prime} - r)^2} \label{binary} \end{equation} The computation of the re-absorption cross-section $\sigma_R$ is not straightforward. It involves the convolution of two quantities: the absorption cross-section for a scattered photon with a certain frequency by the atom illuminated by the total laser field of intensity $I_{tot}(r)$, and the spectrum of the light scattered by the atom at position $r^{\prime}$~\cite{Walker1990, Pruvost2000}. Indeed, the atom is a nonlinear scatterer for light, and the scattering process is inelastic if the saturation parameter is not negligible compared to unity.
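Eqs.~\ref{Frp}--\ref{delta eff} translate directly into code. The sketch below (ours, in SI units with two-level values for the Rb D2 line; the intensity values are illustrative assumptions) is a toy version, not the production implementation:

```python
import numpy as np

lam = 780.24e-9           # D2 transition wavelength (m)
Gamma = 2*np.pi*6.06e6    # natural linewidth (s^-1)
c = 299792458.0           # speed of light (m/s)
I_sat = 16.7              # saturation intensity (W/m^2)

def sigma_L(delta_eff, I_tot):
    """Scattering cross-section for a laser photon, Eq. (sigma_L)."""
    return (3*lam**2/(2*np.pi)) / (1 + I_tot/I_sat + (2*delta_eff/Gamma)**2)

def F_rp(delta_eff, I_beam, I_tot):
    """Radiation pressure from a single beam, Eq. (Frp), in newtons."""
    return sigma_L(delta_eff, I_tot) * I_beam / c

def delta_eff(delta, k_dot_v, mu_gradB_x):
    """Doppler- and Zeeman-shifted detuning, Eq. (delta eff)."""
    return delta - k_dot_v - mu_gradB_x

# On resonance and at low intensity, sigma_L approaches 3*lam^2/(2*pi):
print(np.isclose(sigma_L(0.0, 0.0), 3*lam**2/(2*np.pi)))       # -> True
# Detuning the beam by -3*Gamma weakens the push on an atom at rest:
print(F_rp(delta_eff(-3*Gamma, 0.0, 0.0), 50.0, 300.0)
      < F_rp(delta_eff(0.0, 0.0, 0.0), 50.0, 300.0))           # -> True
```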
We compute both the scattered light spectrum and the absorption cross-section for scattered light using the approach developed by Mollow~\cite{Mollow1969,Mollow1972}. Note that since we take into account the inhomogeneous laser intensity distribution inside the cloud, $\sigma_R$ depends on both spatial coordinates $r$ and $r^{\prime}$, and thus plays the role of a nonlocal effective charge in the Coulomb-like binary interaction described by Eq.~\ref{binary}. \begin{figure} \begin{center} \includegraphics[width=1\columnwidth]{Fig2.pdf} \caption{Simulated MOT size versus $N$. The parameters are: $\delta = -5.5\Gamma$, $\nabla$B = 7.2 G/cm and I = 5 mW/cm$^2$. We plot the cloud's \textit{rms} radius versus $N$ (dots). The dashed line corresponds to the $N^{1/3}$ scaling. The insets show examples of cloud density profiles which have been spatially rescaled. The solid curves are the density profiles while the dashed curves correspond to Gaussian fits.} \label{sizesSim} \end{center} \end{figure} An illustration of the result of this simulation is provided in Fig.~\ref{sizesSim}, where we plot the cloud's \textit{rms} radius versus the number of simulated atoms, in the stable regime. As can be observed, below $10^7$ atoms the cloud size is $N$-independent and its profile is Gaussian as expected in the temperature-limited regime. For $N > 10^8$, the cloud's radius increases as $N^{1/3}$ (dashed line), the scaling predicted by Ref.~\cite{Walker1990}. The increase of cloud size with $N$ is a clear signature of the multiple-scattering regime. Within this regime, we observe variations of the cloud's density profile. Around $10^9$ atoms, the profile displays a rather flat top. At even higher atom numbers, the top of the profiles rounds off and gets closer to a Gaussian. In all instances, however, the wings of the profiles in the multiple-scattering regime are decaying faster than a Gaussian.
Overall, the observed evolution is consistent with our previous observations~\cite{Camara2014}, and also with the theoretical predictions of Ref.~\cite{Pohl2006}. It should be noted that the simulated clouds are systematically larger than those observed in the experiment, roughly by a factor of 2. We discuss the implications of this observation in section V. The numerical simulations not only reproduce the behavior of a stable MOT in the multiple scattering regime, but, most importantly, also yield an unstable behavior for parameters close to those used in the experiments. The onset of the instability as observed in the simulations is illustrated in Fig.~\ref{InstabSim}. We observe a sharp transition between stable and unstable behaviors when the control parameter, here the laser detuning, is varied. We plot the temporal evolution of the \textit{rms} radius of the cloud both below ($\delta = -3\Gamma$) and above ($\delta = -2.8\Gamma$) threshold. The initial transient (first 50 ms) is due to the slight mismatch in size and shape between the Gaussian atomic distribution used as a starting point for the simulation, and the final distribution. In the unstable regime, we generally observe another transient before the onset of oscillations, whose duration depends on the distance from threshold. In Fig.~\ref{InstabSim}, this duration is roughly 0.4 s. \begin{figure} \begin{center} \includegraphics[width=1\columnwidth]{Fig3.pdf} \caption{Instability of simulated MOT. We plot the cloud's \textit{rms} radius versus time, in the stable regime ($\delta = -3.0 \Gamma$, thin curve) and in the unstable regime ($\delta = -2.8 \Gamma$, bold curve). The parameters are: $N = 1.5 \times 10^{10}$, $\nabla$B = 3 G/cm, $I = 5$ mW/cm$^2$.} \label{InstabSim} \end{center} \end{figure} \section{IV. Experimental results and comparison with simulations} \label{results} \subsection{IV.a. 
Experimental determination of instability threshold} To determine the instability threshold, we monitor the evolution of the cloud's spatial density distribution during the dynamics. As discussed in Section II, only one image is recorded during each experimental cycle described in Fig.~\ref{setup}B, which corresponds to a random probing of the dynamics of the cloud. We thus record a given set of typically 50 images, and then compare the images two by two. This is done by subtracting the two images, and spatially integrating the squared difference image. After normalization, this operation yields the ``cloud fluctuation'' of Fig.~\ref{ThreshDet}, a number whose value is zero if the two images are identical, and one if there is no overlap between the two density distributions (corresponding to a maximal deformation). The operation is repeated for many pairs of images and the corresponding fluctuations are averaged. Fig.~\ref{ThreshDet} illustrates the behavior of the cloud fluctuation as $\delta$ is varied over the whole experimental range $-4\Gamma \leq \delta \leq -0.8\Gamma$, for the three values of $\nabla$B. As can be seen, crossing the threshold results in an abrupt increase of the fluctuation. The position of the threshold can be estimated by fitting the initial growth by a linear function and extrapolating to the value below threshold. \begin{figure} \begin{center} \includegraphics[width=1\columnwidth]{Fig4.pdf} \caption{Determination of instability threshold. We plot the cloud fluctuation (see text) versus detuning for three values of magnetic field gradient: 1) $\nabla$B = 12 G/cm (open circles), 2) $\nabla$B = 4.8 G/cm (dots), and 3) $\nabla$B = 1.2 G/cm (stars). An extrapolation of the observed growth rate (dotted lines) allows one to determine the threshold detuning (arrows). The insets show examples of fluorescence images for a stable and an unstable cloud.} \label{ThreshDet} \end{center} \end{figure} \subsection{IV.b. 
Threshold detuning versus atom number} In this section we study the impact of $N$ on the instability threshold. To this end, we vary the number of trapped atoms by adjusting the diameter of the MOT beams using diaphragms. This provides an efficient way of tuning $N$ without affecting the other MOT parameters, as demonstrated in~\cite{Camara2014}. The experimental variation of $\delta_{th}$ with $N$ is reported in Fig.~\ref{deltavsN} (dots), in log-log scale. Very roughly, we observe that $\left|\delta_{th}\right|$ increases by $\Gamma$ when $N$ increases by one order of magnitude. A linear fit of the data in the log-log plot yields a slope of 0.17 (dotted line). The result of the numerical simulations described in section III.b is reported in circles. It shows a similar scaling, but with an offset of approximately $\Gamma$. We also show the prediction of the model of Ref.~\cite{Labeyrie2006} (squares), as given by Eq.~\ref{threshold}. This equation establishes a link between $\nabla B, \delta$ and the critical radius $R_c$, but does not provide an expression for the cloud size as a function of $N$. Thus, we use the cloud sizes measured in the experiment to compute $\delta_{th}$ using Eq.~\ref{threshold}. However, the model of~\cite{Labeyrie2006} assumes a constant density while the experimental profiles are closer to Gaussians. We thus have to choose a definition of $R_c$ to use in Eq.~\ref{threshold}. In Fig.~\ref{deltavsN}, we used $R_c = 2 \sigma$, where $\sigma$ is the measured \textit{rms} cloud size in the plane of the magnetic field gradient coils. Since we find a scaling $\sigma \propto N^{0.36}$, the prediction Eq.~\ref{threshold} is definitely different from both the experimental observation and the result of the numerical simulations. This is a clear indication that the model used in~\cite{Labeyrie2006} cannot quantitatively describe the dependence of the instability threshold on the atom number. 
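The ``cloud fluctuation'' measure introduced in Sec.~IV.a (integrated squared difference of two images, normalized to lie between 0 for identical images and 1 for non-overlapping density distributions) can be sketched as follows. The exact normalization used in the experiment is not spelled out above, so the choice below (dividing by the sum of the two integrated squared images) is an assumption that reproduces the stated limiting values.

```python
import numpy as np

def cloud_fluctuation(img_a, img_b):
    """Normalized, spatially integrated squared difference of two
    fluorescence images: 0 for identical images, 1 when the two density
    distributions do not overlap (assumed normalization)."""
    a = np.asarray(img_a, dtype=float)
    b = np.asarray(img_b, dtype=float)
    num = np.sum((a - b) ** 2)
    den = np.sum(a ** 2) + np.sum(b ** 2)
    return num / den

# Identical images give 0; spatially disjoint images give 1.
a = np.zeros((64, 64)); a[10:20, 10:20] = 1.0
b = np.zeros((64, 64)); b[40:50, 40:50] = 1.0
print(cloud_fluctuation(a, a), cloud_fluctuation(a, b))  # 0.0 1.0
```

Averaging this quantity over many randomly timed image pairs, as described above, then yields the fluctuation curves of Fig.~\ref{ThreshDet}.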
\begin{figure} \begin{center} \includegraphics[width=1\columnwidth]{Fig5.pdf} \caption{Threshold detuning (absolute value) versus atom number. The dots are experimental data for a magnetic field gradient $\nabla$B = 7.2 G/cm and a beam intensity $I = 5$ mW/cm$^2$. The circles are the simulation result for the same parameters. The squares correspond to the prediction of the model of Ref.~\cite{Labeyrie2006} using experimentally measured cloud sizes (see text). The dotted lines correspond to linear fits of the log-log data.} \label{deltavsN} \end{center} \end{figure} \subsection{IV.c. Threshold detuning versus magnetic field gradient at fixed atom number} We now present measurements of the threshold detuning $\delta_{th}$ as $\nabla$B is varied, using the procedure described in section II to maintain a fixed number of trapped atoms $N = 1.5 \times 10^{10}$. The results of different experimental runs are shown as dots in Fig.~\ref{compNfixed}, together with a linear fit taking into account all the data (solid line). The threshold detuning is seen to increase (in absolute value) linearly with $\nabla$B, with a slope $\approx~0.14~\Gamma/$(G/cm). We compare the experimental data to the result of the numerical simulations (circles) using the experimental parameters. Once again, we observe that the slopes of the experimental and simulated curves are very similar. This agreement is a good indication that our numerical model efficiently captures the main ingredients involved in the determination of the instability threshold. We note again that the simulated thresholds are systematically larger (in absolute value) than the experimental ones by approximately $\Gamma$. \begin{figure} \begin{center} \includegraphics[width=1\columnwidth]{Fig6.pdf} \caption{Threshold detuning versus magnetic field gradient at fixed atom number. The experimental data (including several runs) are shown as dots, with a linear fit (solid line). 
The parameters are: $N = 1.5 \times 10^{10}$, $I = 5$ mW/cm$^2$. The circles correspond to the numerical simulation result.} \label{compNfixed} \end{center} \end{figure} \section{V. Discussion} \label{discussion} The new experimental data reported in this work need to be compared to our earlier results of~\cite{Labeyrie2006}. The present data were obtained under conditions that are both better controlled (threshold at constant $N$) and more extended (threshold as a function of $N$), and thus put more stringent constraints on the models they are compared to. In particular, the simple analytical model of~\cite{Labeyrie2006}, which seemed in reasonable agreement with the early experimental data, is now clearly unable to reproduce the scaling observed when the atom number is varied. Although we have no reason to question the physical picture of the instability mechanism that it conveys, it is too simplified to reproduce even qualitatively the experimentally observed behavior. This is not the case for the numerical simulations, which are in good qualitative agreement with both experiments at fixed $N$ and as a function of $N$. This indicates that our improved model captures the important ingredients determining the behavior of the MOT instability threshold. We plan in the future to investigate which of these ingredients are decisive in describing the correct MOT behavior. The constant offset of about one $\Gamma$ in the threshold detuning between experiment and numerics is probably linked to the larger cloud sizes found in the simulations. The origin of this mismatch is not at present clearly identified, but it is not surprising considering the large number of simplifications still included in the model. The most prominent is the simplified atomic structure, possibly yielding a different effective Zeeman shift in the MOT. \section{VI. 
Conclusion} We presented in this paper a detailed experimental and numerical study of the instability threshold for a balanced magneto-optical trap with six independent beams, containing large numbers of cold atoms. Using an improved experimental scheme, we were able to study the impact of the atom number on the threshold. We also measured the ($\delta$, $\nabla$B) unstable boundary while keeping this atom number fixed. These experimental results were compared to a three-dimensional numerical simulation of the MOT based on a microscopic description. We obtain a good qualitative agreement, despite some unavoidable simplifications in the description of the MOT physics. The scaling of the threshold with atom number, for both experiment and simulations, is clearly different from that given by the analytical approach of~\cite{Labeyrie2006}. Our numerical model also allows us to go beyond the approach of~\cite{Pohl2006}, which assumed central symmetry. In particular, we can now observe center-of-mass as well as radial oscillations. This approach will be useful in the future to investigate the atom cloud dynamics in the unstable regime. \section{Acknowledgements} Part of this work was performed in the framework of the European Training Network ColOpt, which is funded by the European Union (EU) Horizon 2020 programme under the Marie Sklodowska-Curie action, grant agreement No. 721465.
\section{Introduction} The theory of evolutionary computation aims at explaining the behavior of evolutionary algorithms, for example by giving detailed run time analyses of such algorithms on certain test functions, defined on some search space (for this paper we will focus on $\{0,1\}^n$). The first general method for conducting such analyses is the \emph{fitness level method (FLM)} \cite{Wegener01,Wegener02}. The idea of this method is as follows. We partition the search space into a number $m$ of sections (``levels'') in a linear fashion, so that all elements of later levels have better fitness than all elements of earlier levels. For the algorithm to be analyzed we regard the best-so-far individual and the level it is in. Since the best-so-far individual can never move to lower levels, it will visit each level at most once (possibly staying there for some time). Suppose we can show that, for any level $i < m$ which the algorithm is currently in, the probability to leave this level is at least $p_i$. Then, bounding the expected waiting time for leaving a level $i$ by $1/p_i$, we can derive an upper bound for the run time of $\sum_{i=1}^{m-1} 1/p_i$ by pessimistically assuming that we visit (and thus have to leave) each level $i < m$ before reaching the target level $m$. The fitness level method allows for simple and intuitive proofs and has therefore frequently been applied. Variations of it come with tail bounds~\cite{Witt14}, work for parallel EAs~\cite{LassigS14}, or admit non-elitist EAs~\cite{Lehre11,DangL16algo,CorusDEL18,DoerrK19}. While very effective for proving upper bounds, it seems much harder to use fitness level arguments to prove lower bounds (see Theorem~\ref{thm:FLMlow} for an early attempt). The first (and so far only) to devise a fitness level-based lower bound method that gives competitive bounds was Sudholt \cite{Sudholt13}. 
His approach uses viscosity parameters $\gamma_{i,j}$, $1 \le i < j \le m$, which control the probability of the algorithm to jump from one level $i$ to a higher level $j$ (see Section~\ref{sec:flmWithViscosities} for details). While this allows for deriving strong results, the application is rather technical due to the many parameters and the restrictions they have to fulfill. In this paper, we propose a new variant of the FLM for lower bounds, which is easier to use and which appears more intuitive. For each level $i$, we regard the \emph{visit probability} $v_i$, that is, the probability that level $i$ is visited at all during a run of the algorithm. This way we can directly characterize the run time of the algorithm as $\sum_{i=1}^{m-1} v_i/p_i$ when $p_i$ is the precise probability to leave level $i$, independent of where on level $i$ the algorithm is. When only estimates for these quantities are known, e.g., because the level leaving probability is not independent of the current state, then we obtain the corresponding upper or lower bounds on the expected run time (see Section~\ref{sec:flmWithVisitProbabilities} for details). We first use this method to give the precise expected run time of the \oea on \leadingones in Section~\ref{sec:leadingones}. While this run time was already well-understood before, it serves as a simple demonstration of the ease with which our method can be applied. Next, in Section~\ref{sec:onemax}, we give a bound on the expected run time of the \oea on \onemax, precise apart from terms of order $\Theta(n)$. Such bounds have also been known before, but needed much deeper methods (see Section~\ref{ssec:onemaxlit} for a detailed discussion). Sudholt's lower bound method has also been applied to this problem, but gave a slightly weaker bound deviating from the truth by an $O(n \log\log n)$ term. In addition to the precise result, we feel that our FLM with visit probabilities gives a clearer structure of the proof than the previous works. 
In Section~\ref{sec:jump}, we prove tighter lower bounds for the run time of the \oea on jump functions. We do so by determining (asymptotically precisely) the probability that in a run of the \oea on a jump function the algorithm does not reach a non-optimal search point outside the fitness valley (and thus does not have to jump over this valley). Interestingly, this probability is only $O(2^{-n})$ regardless of the jump size (width of the valley). Finally, in Section~\ref{sec:longKPaths}, we consider the \oea on so-called long $k$-paths. We show how the FLM with visit probabilities can give results comparable to those of the FLM with viscosities while again being much simpler to apply. \section{The {\oea}} \label{sec:algo} In this paper we consider exactly one randomized search heuristic, the \oea. It maintains a single individual, the best it has seen so far. In each iteration, it uses standard bit mutation with mutation rate $p \in (0,1)$ (flipping each bit of the bit string independently with probability $p$) and keeps the result if and only if it is at least as good as the current individual under a given fitness function $f$. We give a more formal definition in Algorithm~\ref{alg:oea}. \begin{algorithm2e} Let $x$ be a uniformly random bit string from $\{0,1\}^n$\; \While{optimum not reached}{ $y \assign \mbox{mutate}_p(x)$\; \lIf{$f(y) \geq f(x)$}{$x \assign y$} } \caption{The \oea to maximize $f : \{0,1\}^n \to \R$.} \label{alg:oea} \end{algorithm2e} \section{The Fitness Level Methods} The fitness level method is typically phrased in terms of a \emph{fitness-based partition}, that is, a partition of the search space into sets $A_1,\ldots,A_m$ such that elements of later sets have higher fitness. We first introduce this concept and abstract away from it to ease the notation. After this, in Section~\ref{sec:originalFLM}, we state the original FLM. 
In Section~\ref{sec:flmWithViscosities} we describe the lower bound based on the FLM from Sudholt \cite{Sudholt13}, before presenting our own variant, the FLM with visit probabilities, in Section~\ref{sec:flmWithVisitProbabilities}. \subsection{Level Processes} \begin{definition}[Fitness-Based Partition {\cite{Wegener02}}] Let $f: \{0,1\}^n \rightarrow \R$ be a fitness function. A partition $A_1,\ldots,A_m$ of $\{0,1\}^n$ is called a \emph{fitness-based partition} if for all $i,j \leq m$ with $i < j$ and $x \in A_i$, $y \in A_j$, we have $f(x) < f(y)$. \end{definition} We will use the shorthands $A_{\geq i} = \bigcup_{j=i}^m A_j$ and $A_{\leq i} = \bigcup_{j=1}^i A_j$. In order to simplify our notation, we focus on processes on $[1..m]$ (the levels) with underlying Markov chain as follows. \begin{definition}[Non-decreasing Level Process]\label{def:levelProcess} A stochastic process $(X_t)_t$ on $[1..m]$ is called a \emph{non-decreasing level process} if and only if (i)~there exists a Markov process $(Y_t)_t$ over a state space $S$ such that there is an $\ell: S \rightarrow [1..m]$ with $\ell(Y_t) = X_t$ for all $t$, and (ii)~the process $(X_t)_t$ is non-decreasing, that is, we have $X_{t+1} \ge X_t$ with probability one for all $t$. \end{definition} We later want to analyze algorithms in terms of non-decreasing level processes, making the transition as follows. Suppose we have an algorithm with state space $\{0,1\}^n$. Denoting by $Y_t$ the best among the first $t$ search points generated by the algorithm, this defines a Markov Chain $(Y_t)_t$ in the state space $S = \{0,1\}^n$, the run of the algorithm. Further, suppose the algorithm optimizes a fitness function $f$ such that the state of the algorithm is non-decreasing in terms of fitness. 
In order to get a non-decreasing level process, we can now define any fitness-based partition and get a corresponding \emph{level function} $\ell: S \rightarrow [1..m]$ by mapping any $x \in S$ to the unique $i$ with $x \in A_i$. Then the process $(\ell(Y_t))_t$ is a non-decreasing level process. The main reason for us to use the formal notion of a level process is the property formalized in the following lemma. Essentially, if a level process makes progress with probability at least $p$ in each iteration (regardless of the precise current state), then the expected number of iterations until the process progresses is at most $1/p$. This situation resembles a geometric distribution, but does not assume independence of the different iterations (one could show that the time to progress is stochastically dominated by a geometric distribution with success rate $p$, but we do not need this level of detail). \begin{lemma}\label{lem:geometricDistribution} Let $(X_t)_t$ be a non-decreasing level process with underlying Markov chain $(Y_t)_t$ and level function $\ell$. Assume $X_t$ starts on some particular level. Let $p \in (0,1]$ be a lower bound on the probability for the level process to leave this level, regardless of the state of the underlying Markov chain. Then the expected first time $t$ such that $X_t$ changes is at most $1/p$. Analogously, if $p$ is an upper bound, the expected first time $t$ such that $X_t$ changes is at least $1/p$. \end{lemma} \begin{proof} We let $(Z_t)_t$ be the stochastic process on $\{0,1\}$ such that $Z_t$ is $1$ if and only if $X_t > X_0$. According to our assumptions, we have, for all $t$ before the first time that $Z_t=1$, that $E[Z_{t+1} - Z_t \mid Z_t] \geq p$. From the additive drift theorem \cite{HeY01,Lengler20bookchapter} we obtain that the expected first time such that $Z_t = 1$ is bounded by $1/p$ as desired. The ``analogously'' clause follows analogously. 
\end{proof} \subsection{Original Fitness Level Method} \label{sec:originalFLM} The following theorem contains the original Fitness Level Method and makes the basic principle formal. \begin{theorem}[Fitness Level Method, upper bound {\cite{Wegener02}}]\label{thm:classic} Let $(X_t)_t$ be a non-decreasing level process (as detailed in Definition~\ref{def:levelProcess}). For all $i \in [1..m-1]$, let $p_i$ be a lower bound on the probability of a state change of $(X_t)_t$, conditional on being in state~$i$. Then the expected time for $(X_t)_t$ to reach the state $m$ is \[ E[T] \leq \sum_{i=1}^{m-1} \frac{1}{p_i}. \] \end{theorem} This bound is very simple, yet strong. It is based on the idea that, in the worst case, all levels have to be visited sequentially. Note that one can improve this bound (slightly) by considering only those levels which come after the (random) start level $X_0$ (by changing the start of the sum to $X_0$ instead of $1$). Intuitively, low levels that are never visited do not need to be left. There is a lower bound based on the observation that at least the initial level has to be left (if it was not the last level). \begin{theorem}[Fitness Level Method, lower bound {\cite{Wegener02}}]\label{thm:FLMlow} Let $(X_t)_t$ be a non-decreasing level process (as detailed in Definition~\ref{def:levelProcess}). For all $i \in [1..m-1]$, let $p_i$ be an upper bound on the probability of a state change, conditional on being in state $i$. Then the expected time for $(X_t)_t$ to reach the state $m$ is $$ E[T] \geq \sum_{i=1}^{m-1} \Pr[X_0 = i] \frac{1}{p_i}. $$ \end{theorem} This bound is very weak since it assumes that the first improvement on the initial search point already finds the optimum. We note, very briefly, that a second main analysis method, \emph{drift analysis}, also has additional difficulties with lower bounds. 
Additive drift~\cite{HeY01}, multiplicative drift~\cite{DoerrJW12algo}, and variable drift~\cite{MitavskiyRC09,Johannsen10} all easily give upper bounds for run times, however, only the additive drift theorem yields lower bounds with the same ease. The existing multiplicative~\cite{Witt13,DoerrDK18,DoerrKLL20} and variable~\cite{DoerrFW11,FeldmannK13,GiessenW18,DoerrDY20} drift theorems for lower bounds all need significantly stronger assumptions than their counterparts for upper bounds. \subsection{Fitness Level Method with Viscosity} \label{sec:flmWithViscosities} While the upper bound above is strong and useful, the lower bound is typically not strong enough to give more than a trivial bound. Sudholt~\cite{Sudholt13} gave a refinement of the method by considering bounds on the transition probabilities from one level to another. \begin{theorem}[Fitness Level Method with Viscosity, lower bound {\cite{Sudholt13}}] Let $(X_t)_t$ be a non-decreasing level process (as detailed in Definition~\ref{def:levelProcess}). Let $\chi,\gamma_{i,j} \in [0,1]$ and $p_i \in (0,1]$ be such that \begin{itemize} \item for all $t$, if $X_t = i$, the probability that $X_{t+1} = j$ is at most $p_i \cdot \gamma_{i,j}$; \item $\sum_{j=i+1}^m \gamma_{i,j} = 1$; and \item for all $j>i$, we have $\gamma_{i,j} \geq \chi \sum_{k=j}^m \gamma_{i,k}$. \end{itemize} Then the expected time for $(X_t)_t$ to reach the state $m$ is $$ E[T] \geq \sum_{i=1}^{m-1} \Pr[X_0 = i] \; \chi \sum_{j=i}^{m-1} \frac{1}{p_j}. $$ \end{theorem} This result is much stronger than the original lower bound from the Fitness Level Method, since now the leaving probabilities of all levels are part of the bound, at least with a fractional impact prescribed by $\chi$. The weakness of the method is that $\chi$ has to be defined globally, the same for all levels~$i$. There is also a corresponding upper bound as follows. 
\begin{theorem}[Fitness Level Method with Viscosity, upper bound {\cite{Sudholt13}}] Let $(X_t)_t$ be a non-decreasing level process (as detailed in Definition~\ref{def:levelProcess}). Let $\chi,\gamma_{i,j} \in [0,1]$ and $p_i \in (0,1]$ be such that \begin{itemize} \item for all $t$, if $X_t = i$, the probability that $X_{t+1} = j$ is at least $p_i \cdot \gamma_{i,j}$; \item $\sum_{j=i+1}^m \gamma_{i,j} = 1$; \item for all $j>i$, we have $\gamma_{i,j} \leq \chi \sum_{k=j}^m \gamma_{i,k}$; and \item for all $j \leq m-2$, we have $(1-\chi)p_j \leq p_{j+1}$. \end{itemize} Then the expected time for $(X_t)_t$ to reach the state $m$ is $$ E[T] \leq \sum_{i=1}^{m-1} \Pr[X_0 = i] \; \left(\frac{1}{p_i}+\chi \sum_{j=i+1}^{m-1} \frac{1}{p_j}\right). $$ \end{theorem} \subsection{Fitness Level Method with Visit Probabilities} \label{sec:flmWithVisitProbabilities} In this paper, we give a new FLM theorem for proving lower bounds. The idea is that exactly all those levels that have ever been visited need to be left; thus, we can use the expected waiting time for leaving a specific level multiplied with the probability of visiting that level at all. The following theorem makes this idea precise for lower bounds; Theorem~\ref{thm:fbp_visit_upper} gives the corresponding upper bound. We note that for the particular case of the optimization of the \leadingones problem via $(1+1)$-type elitist algorithms, our bounds are special cases of~\cite[Lemma~5]{DoerrJWZ13} and~\cite[Theorem~3]{Doerr19tcs}. \begin{theorem}[Fitness Level Method with visit probabilities, lower bound]\label{thm:fbp_visit_lower} Let $(X_t)_t$ be a non-decreasing level process (as detailed in Definition~\ref{def:levelProcess}). For all $i \in [1..m-1]$, let $p_i$ be an upper bound on the probability of a state change of $(X_t)_t$, conditional on being in state~$i$. Furthermore, let $v_i$ be a lower bound on the probability of there being a $t$ such that $X_t = i$. 
Then the expected time for $(X_t)_t$ to reach the state $m$ is $$ E[T] \geq \sum_{i=1}^{m-1} \frac{v_i}{p_i}. $$ \end{theorem} \begin{proof} For each $i < m$, let $T_i$ be the (random) time spent in level~$i$. Thus, $$ T = \sum_{i=1}^{m-1} T_i. $$ Let now $i < m$. We want to show that $E[T_i] \geq v_i/p_i$. We let $E$ be the event that the process ever visits level $i$ and compute $$ E[T_i] = E[T_i \mid E]\Pr[E] + E[T_i \mid \overline{E}]\Pr[\overline{E}] \geq E[T_i \mid E]v_i. $$ For all $t$ with $X_t = i$, with probability at most $p_i$, we have $X_{t+1} > i$. Thus, using Lemma~\ref{lem:geometricDistribution}, the expected time until a state with $X_t > i$ is reached is at least $1/p_i$, giving $E[T_i \mid E] \geq 1/p_i$ as desired. \end{proof} A strength of this formulation is that skipping levels due to a higher initialization does not need to be taken into account separately (as in the two previous lower bounds), it is part of the visit probabilities. A corresponding upper bound follows with analogous arguments. \begin{theorem}[Fitness Level Method with visit probabilities, upper bound]\label{thm:fbp_visit_upper} Let $(X_t)_t$ be a non-decreasing level process (as detailed in Definition~\ref{def:levelProcess}). For all $i \in [1..m-1]$, let $p_i$ be a lower bound on the probability of a state change of $(X_t)_t$, conditional on being in state $i$. Furthermore, let $v_i$ be an upper bound on the probability of there being a $t$ such that $X_t = i$. Then the expected time for $(X_t)_t$ to reach the state $m$ is $$ E[T] \leq \sum_{i=1}^{m-1} \frac{v_i}{p_i}. $$ \end{theorem} In a typical application of the method of the FLM, finding good estimates for the leaving probabilities is easy. It is more complicated to estimate the visit probabilities accurately, so we propose one possible approach in the following lemma. \begin{lemma}\label{lem:visitprob} Let $(Y_t)_t$ be a Markov process over state space $S$ and $\ell: S \rightarrow [1..m]$ a level function. 
For all $t$, let $X_t = \ell(Y_t)$ and suppose that $(X_t)_t$ is non-decreasing. Further, suppose that $(X_t)_t$ reaches state $m$ after a finite time with probability $1$. Let $i < m$ be given. For any $x \in S$ and any set $M \subseteq S$, let $x \rightarrow M$ denote the event that the Markov chain with current state $x$ transitions to a state in $M$. For all $j$ let $A_j = \{s \in S \mid \ell(s) = j\}$. Suppose there is $v_i$ such that, for all $x \in A_{\leq i-1}$ with $\Pr[x \rightarrow A_{\geq i}] > 0$, $$ \Pr[x \rightarrow A_i \mid x \rightarrow A_{\geq i}] \geq v_i, $$ and $$ \Pr[Y_0 \in A_i \mid Y_0 \in A_{\geq i}] \geq v_i. $$ Then $v_i$ is a lower bound for visiting level $i$ as required by Theorem~\ref{thm:fbp_visit_lower}. \end{lemma} \begin{proof} Let $T$ be minimal such that $Y_T \in A_{\geq i}$. Then the probability that level $i$ is being visited is $\Pr[Y_T \in A_i]$, since $(X_t)_t$ is non-decreasing. By the law of total probability we can show the claim by showing it first conditional on $T=0$ and then conditional on $T\neq 0$. We have that $T = 0$ is equivalent to $Y_0 \in A_{\geq i}$, thus we have $\Pr[Y_T \in A_i \mid T = 0] \geq v_i$ from the second condition in the statement of the lemma. Otherwise, let $x = Y_{T-1}$. Since $Y_T \in A_{\geq i}$, \begin{eqnarray*} \Pr[Y_T \in A_i \mid T \neq 0] & = & \Pr[Y_T \in A_i \mid Y_T \in A_{\geq i}, T \neq 0]\\ & = & \Pr[x \rightarrow A_i \mid x \rightarrow A_{\geq i}, T \neq 0]\\ & = & \Pr[x \rightarrow A_i \mid x \rightarrow A_{\geq i}]. \end{eqnarray*} As $T$ was chosen minimally, we have $x \not\in A_{\geq i}$ and thus get the desired bound from the first condition in the statement of the lemma. \end{proof} Implicitly, the lemma suggests to take the minimum of all these conditional probabilities over the different choices for $x$. Note that this estimate might be somewhat imprecise since worst-case $x$ might not be encountered frequently. 
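Theorems~\ref{thm:fbp_visit_lower} and~\ref{thm:fbp_visit_upper} together imply that, when the $p_i$ and $v_i$ are exact (independent of the underlying state within a level), the two bounds coincide and $E[T] = \sum_{i=1}^{m-1} v_i/p_i$ holds with equality. The following sketch verifies this identity on a small hypothetical level process (our own toy example, not from the analysis above), in which the leaving probability of each level and the jump distribution upon leaving are specified directly.

```python
# Levels 1..m, level m absorbing.  From level i the process leaves with
# probability p[i]; conditional on leaving, it jumps to level j > i with
# probability q[i][j].  (Hypothetical toy chain for illustration.)
m = 4
p = {1: 0.5, 2: 0.25, 3: 0.1}
q = {1: {2: 0.4, 3: 0.1, 4: 0.5}, 2: {3: 0.7, 4: 0.3}, 3: {4: 1.0}}

# Exact expected hitting time of level m, by backwards recursion:
# E[i] = 1/p[i] + sum_j q[i][j] * E[j].
E = {m: 0.0}
for i in range(m - 1, 0, -1):
    E[i] = 1.0 / p[i] + sum(q[i][j] * E[j] for j in q[i])

# Visit probabilities, by forward recursion (process starts on level 1):
# v[j] = sum_{i<j} v[i] * q[i][j].
v = {1: 1.0}
for j in range(2, m):
    v[j] = sum(v[i] * q[i].get(j, 0.0) for i in v)

flm_bound = sum(v[i] / p[i] for i in range(1, m))
print(abs(flm_bound - E[1]) < 1e-12)  # True
```

For this chain one obtains $E[T] = 7.4$ both ways; with only one-sided estimates of $p_i$ and $v_i$, the same sums give the upper and lower bounds of the two theorems.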
Also note that a corresponding upper bound for Theorem~\ref{thm:fbp_visit_upper} follows analogously. \section{The Precise Run Time for \leadingones} \label{sec:leadingones} One of the classic fitness functions used for analyzing the optimization behavior of randomized search heuristics is the \leadingones function. Given a bit string $x$ of length $n$, the \leadingones value of $x$ is defined as the number of $1$s in the bit string before the first $0$ (if any). In parallel independent work, the precise expected run time of the \oea on the \leadingones benchmark function was determined in \cite{BottcherDN10,Sudholt13}. Even more, the distribution of the run time was determined with variants of the FLM in \cite{DoerrJWZ13,Doerr19tcs}. As a first simple application of our methods, we now determine the precise run time of the \oea on \leadingones via Theorems~\ref{thm:fbp_visit_lower} and~\ref{thm:fbp_visit_upper}. \begin{theorem}\label{thm:LO} Consider the \oea optimizing \leadingones with mutation rate $p$. Let $T$ be the (random) time for the \oea to find the optimum. Then $$ E[T] = \frac{1}{2} \sum_{i=0}^{n-1} \frac{1}{(1-p)^i p}. $$ \end{theorem} \begin{proof} We want to apply Theorems~\ref{thm:fbp_visit_lower} and~\ref{thm:fbp_visit_upper} simultaneously. We partition the search space in the canonical way such that, for all $i \leq n$, $A_i$ contains the set of all search points with fitness $i$. Now we need a precise result for the probability to leave a level and for the probability to visit a level. First, we consider the probability $p_i$ to leave a given level $i < n$. Suppose the algorithm has a current search point in $A_i$, so it has $i$ leading $1$s and then a $0$. The algorithm leaves level $A_i$ now if and only if it flips the first $0$ of the bit string (probability of $p$) and no previous bits (probability $(1-p)^i$). Hence, $p_i = p(1-p)^i$. Next we consider the probability $v_i$ to visit a level $i$. 
We claim that it is exactly $1/2$, following reasoning given in several places before~\cite{DrosteJW02,Sudholt13}. We want to use Lemma~\ref{lem:visitprob} and its analog for upper bounds. Let $i$ be given. For the initial search point, if it is at least on level $i$ (the condition considered by the lemma), the individual is on level $i$ if and only if the $i+1$st bit is a $0$, so exactly with probability $1/2$ as desired for both bounds. Before an individual with at least $i$ leading $1$s is created, the bit at position $i+1$ remains uniformly random (this can be seen by induction: it is uniform at the beginning and does not experience any bias in any iteration while no individual with at least $i$ leading $1$s is created). Once such an individual is created, if the bit at position $i+1$ is $1$, the level $i$ is skipped, otherwise it is visited. Thus, the algorithm skips level $i$ with probability exactly $1/2$, giving $v_i = 1/2$. With these exact values for the $p_i$ and $v_i$, Theorems~\ref{thm:fbp_visit_lower} and~\ref{thm:fbp_visit_upper} immediately yield the claim. \end{proof} By computing the geometric series in Theorem~\ref{thm:LO}, we obtain as a (well-known) corollary that the \oea with the classic mutation rate $p = 1/n$ optimizes \leadingones in an expected run time of $n^2\frac{e-1}{2}(1\pm o(1))$. \section{A Tight Lower Bound for \onemax} \label{sec:onemax} In this section, as a first real example of the usefulness of our general method, we prove a lower bound for the run time of the \oea with standard mutation rate $p=\frac 1n$ on \onemax, which is only by an additive term of order $O(n)$ below the upper bound following from the classic fitness level method. This is tighter than the best gap of order $O(n \log\log n)$ proven previously with fitness level arguments. Moreover, our lower bound is the tightest lower bound apart from the significantly more complicated works that determine the run time precisely apart from $o(n)$ terms.
We defer a detailed account of the literature together with a comparison of the methods to Section~\ref{ssec:onemaxlit}. We recall that the fitness levels of the \onemax function are given by \[A_i \coloneqq \{x \in \{0,1\}^n \mid \OM(x) = i\}, i \in [0..n].\] We use the notation $A_{\ge i} \coloneqq \bigcup_{j = i}^n A_j$ and $A_{\le i} \coloneqq \bigcup_{j = 0}^i A_j$ for all $i \in [0..n]$ as defined above for fitness-based partitions, but with the appropriate bounds $0$ and $n$ instead of $1$ and $m$. We denote by $T_{k,\ell}$ the expected number of iterations the $\oea$, started with a search point in $A_k$, takes to generate a search point in $A_{\ge \ell}$. We further denote by $T_{\rand,\ell}$ the expected number of iterations the \oea started with a random search point takes to generate a solution in $A_{\ge \ell}$. These notions extend previously proposed fine-grained run time notions: $T_{\rand,\ell}$ is the fixed target run time first proposed in~\cite{DoerrJWZ13} as a technical tool and advocated more broadly in~\cite{BuzdalovDDV20}. The time $T_{k,n}$ until the optimum is found when starting with fitness $k$ was investigated in~\cite{AntipovBD20ppsn} when $k > n/2$, that is, when starting with a better-than-average solution. We omit the details and only note that such fine-grained complexity notions (which also include the fixed-budget complexity proposed in~\cite{JansenZ14}) have given a much better picture of how to use EAs effectively than the classic run time $T_{\rand,n}$ alone. In particular, it was observed that different parameters or algorithms are preferable when not optimizing until the optimum or when starting with a good solution. For all $k, \ell \in [0..n]$, we denote by $p_{k,\ell}$ the probability that standard bit mutation with mutation rate $p = \frac 1n$ creates an offspring in $A_\ell$ from a parent in $A_k$.
We also write $p_{k,\ge \ell} := \sum_{j = \ell}^n p_{k,j}$ to denote the probability to generate an individual in $A_{\ge \ell}$ from a parent in $A_k$. Then $p_i := p_{i, \ge i+1}$ is the probability that the \oea optimizing \onemax leaves the $i$-th fitness level. \subsection{Upper and Lower Bounds Via Fitness Levels} Using the notation just introduced, the classic fitness level method (see Theorem~\ref{thm:classic} and note that the fitness of the parent individuals describes a non-decreasing level process with state change probabilities $p_i$) shows that \[ T_{k,\ell} \le \sum_{i = k}^{\ell-1} \frac 1 {p_{i}} =: \tilde T_{k,\ell}. \] To prove a nearly matching lower bound employing our new methods, we first analyze the probability that the \oea optimizing \onemax skips a particular fitness level. Note that if $q_i$ is the probability to skip the $i$-th fitness level, then $v_i := 1 - q_i$ is the probability to visit the $i$-th level as used in Theorem~\ref{thm:fbp_visit_lower}. \begin{lemma}\label{lmiss} Let $i \in [0..n]$. Consider a run of the \oea with mutation rate $p = \frac 1n$ on the \onemax function started with a (possibly random) individual $x$ with $\onemax(x) < i$. Then the probability $q_i$ that during the run the parent individual never has fitness $i$ satisfies \[q_i \le \frac{n-i}{n(1-\frac 1n)^{i-1}}.\] \end{lemma} \begin{proof} Since we assume that we start below fitness level $i$, by Lemma~\ref{lem:visitprob} (and using the notation from that lemma for a moment) we have \begin{align*} q_i &\le \max\{\Pr[x \to A_{\ge i+1} \mid x \to A_{\ge i}] \mid \onemax(x) < i\}\\ & \le \max_{k \in [0..i-1]} \frac{p_{k,\ge i+1}}{p_{k, \ge i}}. \end{align*} Hence it suffices to show that $\frac{p_{k,\ge i+1}}{p_{k, \ge i}} \le \frac{n-i}{n(1-\frac 1n)^{i-1}}$ for all $k \in [0..i-1]$, and this is what we will do in the remainder of this proof. 
Let us, slightly abusing the common notation, write $\Bin(m,p)$ to denote a random variable following a binomial law with parameters $m$ and~$p$. Let $k, \ell \in \N$ with $k \le \ell$. Noting that the only way to generate a search point in $A_\ell$ from some $x \in A_k$ is to flip, for some $j \in [\ell-k..\min\{n-k,\ell\}]$, exactly $j$ of the $n-k$ zero-bits of $x$ and exactly $j - (\ell-k)$ of the $k$ one-bits, we easily obtain the well-known fact that \begin{align*} p_{k,\ell} &= \sum_{j = \ell-k}^{\min\{n-k,\ell\}} \Pr[\Bin(n-k,p) = j] \Pr[\Bin(k,p) = j - (\ell-k)]\\ &= \sum_{j = \ell-k}^{\min\{n-k,\ell\}} \binom{n-k}{j} \binom{k}{j - (\ell-k)} p^{2j - \ell + k} (1-p)^{n - 2j + \ell - k}. \end{align*} Since $p = \frac 1n$, the mode of $\Bin(n-k,p)$ is at most $1$. Since the binomial distribution is unimodal, we conclude that $\Pr[\Bin(n-k,p) = j] \le \Pr[\Bin(n-k,p) = \ell-k]$ for all $j \ge \ell-k$. Consequently, the first line of the above set of equations gives \begin{align*} p_{k,\ell} &\le \Pr[\Bin(n-k,p) = \ell-k] \Pr[\Bin(k,p) \in [0..\min\{n-\ell,k\}]] \\ &\le \Pr[\Bin(n-k,p) = \ell-k] \end{align*} and thus \begin{equation} p_{k, \ge\ell} \le \Pr[\Bin(n-k,p) \ge \ell-k].\label{eq:pbin} \end{equation} We recall that our target is to estimate $\frac{p_{k,\ge i+1}}{p_{k, \ge i}}$ for all $k \in [0..i-1]$. By~\eqref{eq:pbin}, we have \begin{align*} p_{k, \ge i+1} &\le \Pr[\Bin(n-k,p) \ge i+1 - k] \\ &\le \frac{(i+1-k)(1-p)}{i+1-k-(n-k)p} \Pr[\Bin(n-k,p) = i+1 - k], \end{align*} where the last estimate is~\cite[equation following Lemma~1.10.38]{Doerr20bookchapter}. We also have $p_{k,\ge i} \ge p_{k,i} \ge (1-p)^k \Pr[\Bin(n-k,p)=i-k]$. 
Hence from \begin{align*} \frac{\Pr[\Bin(n-k,p) = i+1 - k]}{\Pr[\Bin(n-k,p)=i-k]} &= \frac{\binom{n-k}{i+1-k} p^{i+1-k} (1-p)^{n-k-(i+1-k)}}{\binom{n-k}{i-k} p^{i-k} (1-p)^{n-k-(i-k)}} \\ &= \frac{(n-i)p}{(i+1-k)(1-p)} \end{align*} we conclude \begin{align*} \frac{p_{k,\ge i+1}}{p_{k, \ge i}} &\le \frac{(i+1-k)(1-p)}{i+1-k-(n-k)p} \frac{(n-i)p}{(i+1-k)(1-p)^{k+1}}\\ & \le \frac{n-i}{n(i-k)(1-\frac 1n)^k}, \end{align*} using again that $p = \frac 1n$. For $k \in [0..i-1]$, this expression is maximal for $k = i-1$, giving that $q_i \le \frac{n-i}{n(1-\frac 1n)^{i-1}}$ as claimed. \end{proof} With this estimate, we can now easily give a very tight lower bound on the run time of the \oea on \onemax. \begin{theorem}\label{thm:onemax1} Let $k , \ell \in [0..n]$ with $k < \ell$. Then the expected number $T_{k,\ell}$ of iterations the \oea optimizing \onemax and initialized with any search point $x$ with $\onemax(x) = k$ takes to generate a search point $z$ with fitness $\onemax(z) \ge \ell$ is at least \[ T_{k,\ell} \ge \tilde T_{k,\ell} - (\ell-k-1) e (e-1) \exp\left(\frac{k}{n-1}\right), \] where $\tilde T_{k,\ell}$ is the upper bound stemming from the fitness level method as defined at the beginning of this section. This lower bound holds also for $T_{k',\ell}$ with $k' \le k$, that is, when starting with a search point $x$ with $\onemax(x) \le k$. \end{theorem} \begin{proof} We use our main result Theorem~\ref{thm:fbp_visit_lower}. We note first that when assuming that the level process regarded in Theorem~\ref{thm:fbp_visit_lower} starts on level~$k'$, then the expected time for it to reach level $\ell$ or higher is at least $\sum_{i=k'}^{\ell-1} \frac{v_i}{p_i}$. This follows immediately from the proof of the theorem or by applying the theorem to the level process $(X_t')$ defined by $X'_t = \min\{\ell, X_t\} - k'$ for all~$t$. Consider now a run of the \oea on the \onemax function started with an initial search point $x_0$ such that $k' = \onemax(x_0) \le k$. 
Denote by $x_t$ the individual selected in iteration $t$ as future parent. Then $X_t = \onemax(x_t)$ defines a level process. As before, we denote the probabilities to visit level $i$ by $v_i$, to not visit it by $q_i = 1 - v_i$, and to leave it to a higher level by $p_i$. Using our main result and the elementary argument above, we obtain an expected run time of \begin{align*} E[T_{k',\ell}] \ge \sum_{i = k'}^{\ell-1} \frac{v_i}{p_{i}} \ge \sum_{i = k}^{\ell-1} \frac{v_i}{p_{i}} \ge \sum_{i = k}^{\ell-1} \frac{1}{p_{i}} - \sum_{i=k+1}^{\ell-1} \frac{q_i}{p_{i}}. \end{align*} We note that the first expression is exactly the upper bound $\tilde T_{k,\ell}$ stemming from the classic fitness level method. We estimate the second expression. We have \begin{equation} p_i = p_{i,\ge i+1} \ge p_{i,i+1} \ge (1 - \tfrac 1n)^{n-1} \tfrac{n-i}{n}, \label{eq:lbpi} \end{equation} where the last estimate stems from regarding only the event that exactly one missing bit is flipped. Together with the estimate $q_i \le \frac{n-i}{n(1-\frac 1n)^{i-1}}$ from Lemma~\ref{lmiss}, we compute \begin{align} \sum_{i=k+1}^{\ell-1} \frac{q_i}{p_{i}} &\le \sum_{i=k+1}^{\ell-1} \frac{n-i}{n(1-\frac 1n)^{i-1}} \frac{n}{(n-i) (1 - \frac 1n)^{n-1}} \nonumber\\ &= \sum_{i=k+1}^{\ell-1} \left(1 + \frac 1 {n-1}\right)^{n+i-2} \nonumber\\ &= \left(1 + \frac 1 {n-1}\right)^{n+k-1} \,\, \sum_{j=0}^{\ell-k-2} \left(1 + \frac 1 {n-1}\right)^j\nonumber\\ &= \left(1 + \frac 1 {n-1}\right)^{n+k-1} \frac{\left(1 + \frac 1 {n-1}\right)^{\ell-k-1} - 1}{\left(1 + \frac 1 {n-1}\right) - 1}\nonumber\\ &= \left(1 + \frac 1 {n-1}\right)^{n+k-1} (n-1) \left(\left(1 + \frac 1 {n-1}\right)^{\ell-k-1} - 1\right)\nonumber\\ &\le (n-1) \exp\left(\frac{n+k-1}{n-1}\right) \left(\exp\left(\frac{\ell-k-1}{n-1}\right) - 1\right)\label{eq:onemax1sharp}\\ & = (n-1) e \exp\left(\frac{k}{n-1}\right) \left(\exp\left(\frac{\ell-k-1}{n-1}\right) - 1\right)\nonumber\\ &\le (n-1) (e-1) e \exp\left(\frac{k}{n-1}\right) 
\frac{\ell-k-1}{n-1}, \nonumber \end{align} where the estimate in \eqref{eq:onemax1sharp} uses the well-known inequality $1+r \le e^r$ valid for all $r \in \R$ and the last estimate exploits the convexity of the exponential function in the interval $[0,1]$, that is, that $\exp(\alpha) \le 1 + \alpha(\exp(1)-\exp(0))$ for all $\alpha \in [0,1]$. \end{proof} The result above shows that the classic fitness level method and our new lower bound method can give very tight run time results. We note that the difference $\delta_{k,\ell} = (\ell-k-1) e (e-1) \exp\left(\frac{k}{n-1}\right)$ between the two fitness level estimates is only of order $O(\ell - k)$, in particular, only of order $O(n)$ for the classic run time $T_{\rand,n}$, which itself is of order $\Theta(n \log n)$. Hence here the gap is only a term of lower order. \subsection{Estimating the Fitness Level Estimate $\tilde T_{k,\ell}$} To make our results above meaningful, it remains to analyze the quantity $\tilde T_{k,\ell}= \sum_{i=k}^{\ell-1} 1/p_{i}$, which is the estimate from the classic fitness level method. Here, again, it turns out that upper bounds tend to be easier to obtain since they require a lower bound for the $p_{i}$, for which the estimate $p_{i} \ge (1-\tfrac 1n)^{n-1} \frac{n-i}{n}$ from~\eqref{eq:lbpi} usually is sufficient. To ease the presentation, let us use the notation $e_n = (1 - \tfrac 1n)^{-(n-1)}$ and note that $e (1-\frac 1n) \le e_n \le e$, see, e.g., \cite[Corollary 1.4.6]{Doerr20bookchapter}. With this notation, the lower bound~\eqref{eq:lbpi} gives the upper bound \begin{equation} \tilde T_{k,\ell} \le e_n n \sum_{i = k}^{\ell-1} \frac{1}{n-i} =: \tilde T_{k,\ell}^+.\label{eq:levub} \end{equation} To prove a lower bound, we observe that \[ p_{i} = \sum_{d = 1}^{n-i} \Pr[\Bin(n-i,p) = d] \Pr[\Bin(i,p) < d]. 
\] We can thus estimate \begin{align} p_{i} &\le \Pr[\Bin(n-i,p) = 1] \Pr[\Bin(i,p) =0] + \Pr[\Bin(n-i,p) \ge 2] \nonumber\\ &\le \left(1-\frac 1n\right)^{n-1} \frac{n-i}{n} + \frac{(n-i)(n-i-1)}{2n^2}, \label{eq:pub} \end{align} where the last inequality follows from the estimate $\Pr[\Bin(n,p) \ge k] \le \binom{n}{k} p^k$, see, e.g.,~\cite[Lemma~3]{GiessenW17} or \cite[Lemma~1.10.37]{Doerr20bookchapter}. We note that the first summand in~\eqref{eq:pub} is exactly our lower bound~\eqref{eq:lbpi} for $p_{i}$, so it is the second term that determines the slack of our estimates. We estimate coarsely \begin{align*} \frac 1 {p_{i}} & \ge \left(\left(1-\frac 1n\right)^{n-1} \frac{n-i}{n} + \frac{(n-i)(n-i-1)}{2n^2}\right)^{-1} \\ & = \frac{2 e_n n^2}{2n(n-i) + e_n(n-i)(n-i-1)}\\ & = \frac{e_n n}{n-i} - \frac{e_n^2 n (n-i-1)}{(n-i)(2n + e_n(n-i-1))} \ge \frac{e_n n}{n-i} - \frac 12 e_n^2. \end{align*} Summing over the fitness levels, we obtain \begin{align} \tilde T_{k,\ell} & = \sum_{i=k}^{\ell-1} \frac 1 {p_{i}}\nonumber\\ & \ge \sum_{i=k}^{\ell-1} \left(\frac{e_n n}{n-i} - \frac 12 e_n^2 \right)\nonumber\\ &= \tilde T_{k,\ell}^+ - \tfrac 12 e_n^2 (\ell-k) =: \tilde T_{k,\ell}^-.\label{eq:tildetlb} \end{align} We note that our upper and lower bounds on $\tilde T_{k,\ell}$ deviate only by $\tilde T_{k,\ell}^+ - \tilde T_{k,\ell}^- = \frac 12 e_n^2 (\ell-k)$. Together with Theorem~\ref{thm:onemax1}, we have proven the following estimates for $T_{k,\ell}$, which are tight apart from a term of order $O(\ell-k)$. 
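As a numerical sanity check, the exact expression for $p_{i}$ above can be evaluated directly and compared with the lower bound~\eqref{eq:lbpi} and the upper bound~\eqref{eq:pub}. The following Python sketch does this for one arbitrarily chosen pair $(n,i)$; it is an illustration only, not part of the analysis.

```python
import math

def binom_pmf(m, p, j):
    # probability that a Bin(m, p) random variable equals j
    return math.comb(m, j) * p**j * (1 - p) ** (m - j)

def p_leave(n, i):
    """Exact probability p_i that standard bit mutation with rate 1/n creates
    a strictly better OneMax offspring from a parent with i one-bits:
    d zero-bits flip up while fewer than d one-bits flip down."""
    p = 1.0 / n
    return sum(
        binom_pmf(n - i, p, d) * sum(binom_pmf(i, p, j) for j in range(d))
        for d in range(1, n - i + 1)
    )

n, i = 100, 60
p = 1.0 / n
lower = (1 - p) ** (n - 1) * (n - i) / n              # lower bound (lbpi)
upper = lower + (n - i) * (n - i - 1) / (2 * n * n)   # upper bound (pub)
assert lower < p_leave(n, i) < upper
```

The same check can be run for other values of $n$ and $i < n$; the lower bound is exactly the single-bit-flip term of the exact sum, so the inequality is strict whenever $n - i \ge 2$.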
\begin{theorem}\label{thm:onemax2} The expected number of iterations the \oea optimizing \onemax, started with a search point of fitness $k$, takes to find a search point with fitness $\ell$ or larger, satisfies \begin{align*} & e_n n \sum_{i = n-\ell+1}^{n-k} \frac{1}{i} \,-\, (\ell-k-1) e (e-1) \exp\left(\frac{k}{n-1}\right) - \frac 12 e_n^2 (\ell-k) \\ & \le T_{k,\ell} \le \\ &e_n n \sum_{i = n-\ell+1}^{n-k} \frac{1}{i}\,, \end{align*} where $e_n := (1-\frac 1n)^{-(n-1)}$. \end{theorem} We recall from above that $e (1-\frac 1n) \le e_n \le e$. We add that for $\ell<n$, the sum $\sum_{i = n-\ell+1}^{n-k} \frac{1}{i}$ is well-approximated by $\ln(\frac{n-k}{n-\ell})$, e.g., $\ln(\frac{n-k}{n-\ell}) -1 < \sum_{i = n-\ell+1}^{n-k} \frac{1}{i} < \ln(\frac{n-k}{n-\ell})$ or $\sum_{i = n-\ell+1}^{n-k} \frac{1}{i} = \ln(\frac{n-k}{n-\ell}) - O(\frac{1}{n-\ell})$, see, e.g.,~\cite[Section 1.4.2]{Doerr20bookchapter}. For $\ell = n$, we have $\ln(n-k) < \sum_{i = n-\ell+1}^{n-k} \frac{1}{i} \le \ln(n-k)+1$ and $\sum_{i = n-\ell+1}^{n-k} \frac{1}{i} = \ln(n-k)+O(\frac{1}{n-k})$. When starting the \oea with a random initial search point, the following bounds apply. \begin{theorem} There is an absolute constant $K$ such that the expected run time $T = T_{\rand,n}$ of the \oea with random initialization on \onemax satisfies \[ e_n n \sum_{i = 1}^{\lceil n/2 \rceil} \frac{1}{i} - 4.755 n - K \le T \le e_n n \sum_{i = 1}^{\lceil n/2 \rceil} \frac{1}{i} + K. \] In particular, \[ en \ln(n) - 4.871n - O(\log n) \le T \le e n \ln(n) - 0.115 n + O(1). \] \end{theorem} \begin{proof} By~\cite[Theorem~2]{DoerrD16}, the expected run time of the \oea with random initialization on \onemax differs from the run time when starting with a search point on level $A_M$, $M \coloneqq \lfloor n/2 \rfloor$, by at most a constant. Hence we have $T \le T_{M,n} + O(1) \le \tilde T^+_{M,n} + O(1) = e_n n \sum_{i = 1}^{\lceil n/2 \rceil} \frac{1}{i} + O(1)$ by Theorem~\ref{thm:onemax2}. 
For the lower bound, we use Equation~\eqref{eq:onemax1sharp} in the proof of Theorem~\ref{thm:onemax1}, which is slightly tighter than the result stated in the theorem itself. Together with~\eqref{eq:tildetlb}, we estimate \begin{align*} T &\ge T_{M,n} -O(1)\\ &\ge \tilde T_{M,n} - (n-1) \exp(\tfrac{n + M - 1}{n-1}) (\exp(\tfrac{n - M - 1}{n-1}) - 1) - O(1) \\ &\ge \tilde T_{M,n}^+ - \tfrac 12 e_n^2 (n - M) - n e^{1.5} (e^{0.5} - 1) - O(1)\\ &= \tilde T_{M,n}^+ - \tfrac 14 e^2 n - n (e^2 - e^{1.5}) - O(1) \\ &= \tilde T_{M,n}^+ - n (\tfrac 54 e^2 - e^{1.5}) - O(1) \ge \tilde T_{M,n}^+ - 4.755 n - O(1). \end{align*} The second set of estimates stems from noting that $\tilde T_{M,n}^+ = e_n n \sum_{i = 1}^{\lceil n/2 \rceil} \frac{1}{i} = e_n n (\ln(\lceil n/2 \rceil) + \gamma \pm O(\frac 1n)) = e (1 - O(\frac 1n)) n (\ln n - \ln 2 + \gamma \pm O(\frac 1n))$, where $\gamma = 0.5772156649\dots$ is the Euler-Mascheroni constant. \end{proof} Let us comment a little on the tightness of our result. Due to the symmetries in the \onemax process, the probability to leave the $i$-th fitness level does not depend on which particular search point $x \in A_i$ the current parent is. Consequently, in principle, Theorems~\ref{thm:fbp_visit_upper} and~\ref{thm:fbp_visit_lower} give the exact bound \[E[T] = \sum_{k = 0}^{n-1} 2^{-n} \binom{n}{k} \sum_{i = k}^{n-1} \frac{v_{i|k}}{p_i},\] where $v_{i|k}$ denotes the probability that the process started on level $k$ visits level $i$. The reason why we cannot avoid a gap of order $\Theta(n)$ in our bounds is that computing the $v_{i|k}$ and $p_i$ precisely is very difficult. Let us regard the $v_{i|k}$ first. It is easy to see that levels $i$ with $k < i \le (1-\eps) n$, $\eps$ a positive constant, have a positive chance of not being visited: By Lemma~\ref{lmiss}, with probability $\Omega(1)$ level $i-1$ is visited and from there, again with probability $\Omega(1)$, a two-bit flip occurs that leads to level $i+1$.
Since with constant probability the last level visited below level $i$ is not $i-1$, and since skipping level $i$ conditional on the last level below $i$ being at most $i-2$ is, by a positive constant, less likely than skipping level $i$ when on level $i-1$ before (that is, $\frac{p_{i-2,\ge i+1}}{p_{i-2,\ge i}} \le \frac{p_{i-1,\ge i+1}}{p_{i-1,\ge i}} - \Omega(1)$, we omit a formal proof of this statement), our estimate $q_{i|k} \le \max_{j \in [k..i-1]} \frac{p_{j,\ge i+1}}{p_{j,\ge i}}$ already leads to a constant factor loss in the estimate of the $q_i$, which translates into a $\Theta(n)$ contribution to the gap of our lower bound from the truth. To overcome this, one would need to compute $q_{i|k} = \sum_{j = k}^{i-1} Q_{j|k} \frac{p_{j,\ge i+1}}{p_{j,\ge i}}$ precisely, where $Q_{j|k}$ is the probability that level $j$ is the highest level visited below $i$ in a process started on level~$k$. This appears very complicated. The second contribution to our $\Theta(n)$ gap is the estimate of $p_i$. We need a lower bound on $p_i$ both in the estimate of the run time advantage due to not visiting all levels (see Equation~\eqref{eq:onemax1sharp}) and in the run time estimate stemming from the fitness level method~\eqref{eq:levub}. Since the $q_i$ are $\Omega(1)$ when $i \le (1-\eps)n$, a constant-factor misestimation of the $p_i$ leads to a $\Theta(n)$ contribution to the gap. Unfortunately, it is hard to avoid a constant-factor misestimation of the $p_i$, $i \le (1-\eps)n$. Our estimate $p_i \ge (1-\frac 1n)^{n-1} \frac{n-i}{n}$ only regards the event that the $i$-th level is left (to level $i+1$) by flipping exactly one zero-bit into a one-bit. However, for each constant $j$ the event that level $i+1$ is reached by flipping $j+1$ zero-bits and $j$ one-bits has a constant probability of appearing. Moreover, for each constant $j$ the event that level $i$ is left to level $i+j$ also has a constant probability.
For these reasons, a precise estimate of the $p_i$ appears rather tedious. In summary, we feel that our method quite easily gave a run time estimate precise apart from terms of order $O(n)$, but for more precise results drift analysis~\cite{Lengler20bookchapter} might be the better tool (though still the relatively precise estimate of the expected progress from a level $i \le (1-\eps)n$, which will necessarily be required for such an analysis, will be difficult to obtain). \subsection{Comparison with the Literature}\label{ssec:onemaxlit} We end this section by giving an overview on the previous works analyzing the run time of the \oea on \onemax and comparing them to our result. Some of the results described in the following, in particular, Sudholt's lower bound~\cite{Sudholt13}, were also proven for general mutation rates $p$ instead of only $p = \frac 1n$. To ease the comparison with our result, we only state the results for the case that $p = \frac 1n$. We note that with our method we could also have analysed broader ranges of mutation rates. The resulting computations, however, would have been more complicated and would have obscured the basic application of our method. To the best of our knowledge, the first to state and rigorously prove a run time bound for this problem was Rudolph in his dissertation~\cite[p.~95]{Rudolph97}, who showed that $T = T_{\rand,n}$ satisfies $E[T] \le (1-\frac 1n)^{n-1} n \sum_{i=1}^{n} \frac{1}{i}$, which is exactly the upper bound $\tilde T^+_{0,n}$ from the fitness level method and from only regarding the events that levels are left via one-bit flips. A lower bound of $n \ln(n) - O(n \log\log n)$ was shown in~\cite{DrosteJW98ecj} for the optimization of a general separable function with positive weights when starting in the search point $(0, \dots, 0)$. From the proof of this result, it is clear that it holds for any pseudo-Boolean function with unique global optimum $(1, \dots, 1)$. 
This lower bound builds on the argument that each bit needs to be flipped at least once in some mutation step. It is not difficult to see that the expected time until this event happens is indeed $(1 \pm o(1)) n \ln n$, so this argument is too weak to make the leading constant of $E[T]$ precise. Only a very short time after these results and thus quite early in the young history of run time analysis of evolutionary algorithms, Garnier, Kallel, and Schoenauer~\cite{GarnierKS99} showed that $E[T] = en\ln(n) + c_1 n + o(n)$ for a constant $c_1 \approx -1.9$, however, the completeness of their proof has been doubted in~\cite{HwangPRTC18}. Since at that early time precise run time analyses were not very popular, it took a while until Doerr, Fouz, and Witt~\cite{DoerrFW10} revisited this problem and showed with $E[T] \ge (1-o(1)) e n \ln(n)$ the first lower bound that made the leading constant precise. Their proof used a variant of additive drift from~\cite{Jagerskupper07} together with the potential function $\ln(Z_t)$, where $Z_t$ denotes the number of zeroes in the parent individual at time $t$. Shortly later, Sudholt~\cite{Sudholt10} (journal version~\cite{Sudholt13}) used his fitness level method for lower bounds to show $E[T] \ge en \ln(n) - 2n\log\log n - 16n$. That the run time was $E[T] = en\ln(n) - \Theta(n)$ was proven first in~\cite{DoerrFW11}, where an upper bound of $en\ln(n) - 0.1369n + O(1)$\footnote{The constant $0.1369$ was wrongly stated as $0.369$ as pointed out in~\cite{LehreW14}} was shown via variable drift for upper bounds~\cite{MitavskiyRC09,Johannsen10} and a lower bound of $E[T] \ge en\ln(n) - O(n)$ was shown via a new variable drift theorem for lower bounds on hitting times. An explicit version of the lower bound of $en\ln(n) - 7.81791n - O(\log n)$ and an alternative proof of the upper bound $en\ln(n) - 0.1369n + O(1)$ was given in~\cite{LehreW14} via a very general drift theorem. 
The final answer to this problem was given in an incredibly difficult work by Hwang, Panholzer, Rolin, Tsai, and Chen~\cite{HwangPRTC18} (see~\cite{HwangW19} for a simplified version), who showed $E[T] = en\ln(n) + c_1 n + \frac 12 e \ln(n) + c_2 + O(n^{-1} \log n)$ with explicit constants $c_1 \approx -1.9$ and $c_2 \approx 0.6$. In the light of these results, we feel that our proof of an $en\ln(n) \pm O(n)$ bound is the first simple proof of a run time estimate of this precision for this problem. Interestingly, our explicit lower bound $en\ln(n) - 4.871n - O(\log n)$ is even a little stronger than the bound $en\ln(n) - 7.81791n - O(\log n)$ proven with drift methods in~\cite{LehreW14}. \section{Jump Functions}\label{sec:jump} In this section, we regard jump functions, which comprise the most intensively studied benchmark in the theory of randomized search heuristics that is not unimodal and which has greatly aided our understanding of how different heuristics cope with local optima~\cite{DrosteJW02,JansenW02,Lehre10,DoerrLMN17,CorusOY17,CorusOY18fast,DangFKKLOSS16,DangFKKLOSS18,WhitleyVHM18,LissovoiOW19,RoweA19,Doerr20gecco,AntipovDK20,AntipovD20ppsn,AntipovBD21gecco,RajabiW20,RajabiW21evocop,RajabiW21gecco,DoerrZ21aaai,BenbakiBD21}. For all representation lengths $n$ and all $k \in [1..n]$, the \emph{jump function} with jump size $k$ is defined by \[ \jump_{n,k}(x)=\left\{\begin{array}{ll} \|x\|_{1}+k & \text{ if } \|x\|_{1} \in [0..n-k] \cup \{n\}, \\ n-\|x\|_{1} & \text{ if } \|x\|_{1} \in [n-k+1..n-1], \end{array}\right. \] for all $x \in \{0,1\}^n$. Jump functions have a fitness landscape isomorphic to $\onemax$, except on the fitness valley or gap \[ G_{n,k}:=\left\{x \in\{0,1\}^{n} \mid n-k<\|x\|_{1}<n\right\}, \] where the fitness is low and deceptive (pointing away from the optimum). For simple elitist heuristics, not surprisingly, the time to find the optimum is strongly related to the time to cross the valley of low fitness.
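For concreteness, the definition of $\jump_{n,k}$ translates directly into code. The following Python sketch also runs a minimal \oea (standard bit mutation with rate $1/n$, elitist acceptance) on a small instance; the concrete parameters, the iteration budget, and the random seed are arbitrary choices for this illustration.

```python
import random

def jump(x, k):
    """Jump function with jump size k for a bit string x given as a 0/1 list."""
    n = len(x)
    ones = sum(x)
    if ones <= n - k or ones == n:
        return ones + k        # OneMax-like region, plus the optimum
    return n - ones            # gap region: low, deceptive fitness

def one_plus_one_ea(n, k, budget=10**6, seed=1):
    """Minimal (1+1) EA; returns the iteration in which the optimum is found."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for t in range(1, budget + 1):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]  # standard bit mutation
        if jump(y, k) >= jump(x, k):                   # elitist acceptance
            x = y
        if sum(x) == n:
            return t
    return None  # budget exhausted (extremely unlikely for these parameters)

print(one_plus_one_ea(20, 2))
```

In such a run, most of the time is spent waiting for the $k$-bit jump from the local optimum, in line with the $\frac 1{p_k}$ term discussed below.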
For the \oea with mutation rate $\frac 1n$, the probability to generate the optimum from a search point on the local optimum $L = \{x \in \{0,1\}^n \mid \|x\|_1 = n-k\}$ is $p_k = (1-\frac 1n)^{n-k} n^{-k}$, and hence the expected time to cross the valley of low fitness is $\frac 1 {p_k}$. The true expected run time deviates slightly from this value, both because some time is spent to reach the local optimum and because the algorithm may be lucky and not need to cross the valley or not in its full width. The first aspect, making additive terms of order at most $O(n \log n)$ more precise, can be treated with arguments very similar to the ones of the previous section, so we do not discuss this here. More interesting is the second aspect. In particular for larger values of $k$, the algorithm has a decent chance to start in the fitness valley. It is clear that even when starting in the valley, the deceptive nature of the valley will lead the algorithm rather towards the local optimum. We show now how our argumentation via omitted fitness levels allows us to prove very precise bounds with elementary arguments. In principle, we could also use our fitness level theorem, but since we shall regard only the single level $N_{n,k} = \{x \in \{0,1\}^n \mid \|x\|_1 \in [0..n-k]\}$, we shall not make this explicit and simply use the classic typical-run argument (that except with some probability $q$, a state is reached from which the expected run time is at least some $t$, and that this gives a lower bound of $(1-q)t$ for the expected run time). The two previous analyses of the run time of the \oea on jump functions deal with the problem of starting in the valley in a different manner. In~\cite{DrosteJW02}, it is argued that with probability at least $\frac 12$, the initial search point has at most $\frac n2$ ones.
If the initial search point is nevertheless in the gap region (because $k > \frac n2$), then a \onemax-style optimization process will with high probability reach the local optimum in time $O(n^2)$, except when the optimum is generated in this period. Since all parent individuals in this period have Hamming distance at least $\frac n2$ from the optimum, the probability for this exceptional event is exponentially small. This argument proves an $\Omega(\frac 1 {p_k})$ bound for the expected run time, and this for all values of $k \ge 2$. In~\cite{DoerrLMN17}, only the case $k \le \frac n2$ was regarded and it was exploited that in this case, the probability for the initial search point to be in the gap (or the optimum) is only $2^{-n} \binom{n}{\le k-1}$. This gives a lower bound of $\big(1 - 2^{-n} \binom{n}{\le k-1}\big) \frac 1 {p_k}$, which is tight including the leading constant for $k \in [2..\frac n2 - \omega(\sqrt n)]$. We now show that estimating the probability of never reaching a search point $x$ with $\|x\|_1 \le n-k$ is not difficult with arguments similar to the ones used in the previous section. We need a slightly different approach since now the probability to skip a fitness level is not maximal when closest to this fitness level (the probability to skip $N_{n,k}$ is maximal when in the lowest fitness level, which is in Hamming distance $k-1$ from $N_{n,k}$). Interestingly, we obtain very tight bounds which could be of some general interest, namely that the probability to never reach a point $x$ with $\|x\|_1 \le n-k$ is $O(\frac 1n)$ when allowing an arbitrary initialization (different from the global optimum), and is only $O(2^{-n})$ when using the usual random initialization. \begin{theorem} Let $n \in \N$ and $k \in [2..n]$. Consider a run of the \oea with mutation rate $p = \frac 1n$ on the jump function $\jump_{n,k}$.
Denote by $N := N_{n,k} = \{x \in \{0,1\}^n \mid \|x\|_1 \in [0..n-k]\}$ the set of non-optimal solutions that do not lie in the gap region of the jump function and by $p_k = (1-\frac 1n)^{n-k} n^{-k}$ the probability to generate the optimum from a solution on the local optimum. \begin{enumerate} \item Assume that the \oea starts with an arbitrary solution different from the global optimum. Then with probability $1 - O(\frac 1n)$, the algorithm reaches a search point in $N$. Consequently, the expected run time is at least $(1 - O(\frac 1n)) p_k^{-1}$. \item Assume that the \oea starts with a random initial solution. Then with probability $1 - O(2^{-n})$, the algorithm reaches a search point in $N$. Consequently, the expected run time is at least $(1 - O(2^{-n})) p_k^{-1}$. \end{enumerate} \end{theorem} \begin{proof} Denote by $f$ the jump function $\jump_{n,k}$. We consider the partition of the search space into the fitness levels of the gap as well as $N$ and the optimum. Hence let \begin{align*} A_j &:= \{x \in \{0,1\}^n \mid f(x) = j\} \mbox{ for } j \in [1..k-1],\\ A_{k} &:= \{x \in \{0,1\}^n \mid f(x) \in [k..n]\} = N,\\ A_{k+1} &:= \{(1, \dots, 1)\}. \end{align*} Our first claim is that, regardless of the initialization, as long as it is different from the optimum, the probability $q_k$ that the algorithm never has the parent individual in $A_k$ is $O(\frac 1n)$. Since we start the algorithm with a non-optimal search point, the only way the algorithm can avoid $A_k$ is by generating from a parent in $A_j$, $j \in [1..k-1]$, the global optimum. Denote by $r_j$ the probability that the algorithm, if the current search point is in $A_j$, generates the optimum from a search point in $A_j$ in the remaining run. Then, as just discussed, by the union bound, \begin{equation} q_k \le \sum_{j=1}^{k-1} r_j.
\label{eq:jumpr} \end{equation} The probability $r_j$ is exactly the probability that in the iteration in which a better individual is generated from a search point in $A_j$, this individual is actually the global optimum. Hence $r_j = \Pr[y = (1, \dots, 1) \mid f(y) > j]$, where $y$ is a mutation offspring generated from a search point in $A_j$. We compute \begin{align} r_j &= \Pr[y = (1, \dots, 1) \mid f(y) > j] = \frac{\Pr[y = (1, \dots, 1)]}{\Pr[f(y) > j]} \nonumber\\ &\le \frac{n^{-j}}{(1-\frac 1n)^{n-1} \frac {n-j}n} \le \frac{e}{n^{j-1} (n-j)},\nonumber \end{align} where we estimated the probability to generate a search point with fitness better than $j$ by the probability of the event that a single one is flipped into a zero. Consequently, $q_k \le \sum_{j=1}^{k-1} r_j \le \sum_{j=1}^{n-1} \frac{e}{n^{j-1} (n-j)} = O(\frac 1n)$. Once a search point in $A_k$ is reached, the remaining run time dominates a geometric distribution with success probability $p_k = (1-\frac 1n)^{n-k} n^{-k}$, simply because each of the following iterations (before the optimum is found) has at most this probability of generating the optimum; hence the expected remaining run time is at least $\frac 1 {p_k}$. This shows that the expected run time of the \oea started with any non-optimal search point is at least $(1-q_k) \frac 1 {p_k}$. For the case of a random initialization, we proceed in a similar manner, but also use the trivial observation that to skip the fitness range $N$ by jumping from $A_j$, $j \in [1..k-1]$, right into the optimum, it is necessary that the algorithm visits $A_j$. To visit $A_j$, it is necessary that the initial search point lies in $A_1 \cup \dots \cup A_j$, which happens with probability $2^{-n} \sum_{i=1}^{j} \binom{n}{i}$ only. This, together with the observation that the only other way to avoid $A_k$ is that the initial individual is already the optimum, gives \begin{align*} q_k &\le \sum_{j=1}^{k-1} \left(2^{-n} \sum_{i=1}^{j} \binom{n}{i}\right) r_j + 2^{-n}.
\end{align*} Using a tail estimate for binomial distributions (equation (VI.3.4) in~\cite{Feller68}, also to be found as (1.10.62) in~\cite{Doerr20bookchapter}), we bound $\sum_{i=1}^{j} \binom{n}{i} \le 1.5 \binom{n}{j}$ for all $j \le \frac 14 n$. We also note from the estimate of $r_j$ derived above that $r_j \le 4 n^{-j}$ in this case. For $j \ge \frac 14 n$, we trivially have $\sum_{i=1}^{j} 2^{-n} \binom{n}{i} r_j \le r_j \le e n^{-(j-1)}$. Consequently, \begin{align*} q_k &\le \sum_{j=1}^{n-1} \left(2^{-n} \sum_{i=1}^{j} \binom{n}{i}\right) r_j + 2^{-n}\\ &\le 2^{-n} 1.5 \sum_{j = 1}^{\lfloor n/4 \rfloor} \binom{n}{j} r_j + \sum_{j = \lceil n/4 \rceil}^{n-1} e n^{-(j-1)} + 2^{-n}\\ &\le 2^{-n} 1.5 \sum_{j = 0}^{\lfloor n/4 \rfloor} \frac{n^j}{j!} 4 n^{-j} + O(n^{-(n/4)+1}) \le 2^{-n} 6 e + O(n^{-(n/4)+1}). \end{align*} Hence, as above, the expected run time is at least $(1-q_k) \frac 1 {p_k} = (1 - O(2^{-n})) \frac 1 {p_k}$. \end{proof} We note that the $O(\frac 1n)$ term in the bound for arbitrary initialization cannot be avoided in general, simply because when starting with a search point that is a neighbor of the optimum, the first iteration generates the optimum with probability at least $\frac 1{en}$. The $O(2^{-n})$ term in the bound for random initialization is necessary because with probability $2^{-n}$ the random initial search point is already the optimum. We also note that we did not optimize the implicit constants in the $O(\frac 1n)$ and $O(2^{-n})$ terms. With more care, these could be replaced by $(1 + o(1)) \frac{1}{e-1} \frac 1n$ and $(1+o(1)) \frac{e}{e-1} 2^{-n}$, respectively. \section{A Bound for Long $k$-Paths} \label{sec:longKPaths} Long $k$-paths, introduced in \cite{Rudolph96}, have been studied in various places; we point the reader to \cite{Sudholt09} for a discussion, which also contains the formalization that we use. A lower bound for long $k$-paths using the FLM with viscosities was given in~\cite{Sudholt13}.
We use \cite[Lemma~3]{Sudholt09} (phrased as a definition below) and need to know no further details about what a long $k$-path is. In fact, our proof uses all the ideas of the proof of \cite{Sudholt13}, but cast in terms of our FLM with visit probabilities, which, we believe, makes the proof simpler and the core ideas of \cite{Sudholt13} more prominent. Note that \cite{Sudholt13} first needs to extend the FLM with viscosities by introducing an additional parameter before it is applicable in this case. \begin{definition} Let $k, n$ be given such that $k$ divides $n$. A \emph{long $k$-path} is a function $f: \{0,1\}^n \rightarrow \R$ such that \begin{itemize} \item The $0$-bit string has a fitness of $0$; there are $m = k2^{n/k} - k$ bit strings of positive fitness, and all these values are distinct; all other bit strings have negative fitness. We say that the bit strings with non-negative fitness are \emph{on the path} and consider them ordered by fitness (this way we can talk about the ``next'' element on the path and similar). \item For each bit string with non-negative fitness and each $i < k$, the bit string with the $i$-th next higher fitness is at Hamming distance exactly $i$. \item For each bit string with non-negative fitness and each $i\geq k$, the bit string with the $i$-th next higher fitness is at Hamming distance at least $k$. \end{itemize} \end{definition} For an explicit construction of a long $k$-path, see~\cite{DrosteJW02,Sudholt09}. The long $k$-paths are designed such that optimization proceeds by following the (long) path, and true shortcuts are unlikely, since they require jumping a Hamming distance of at least $k$. The following lower bound for optimizing long $k$-paths with the \oea is given in \cite{Sudholt13}. Note that $n$ is the length of the bit strings, $m$ is the length of the path, and $p$ is the mutation rate. \begin{equation} m \; \frac{1-2p}{p(1-p)^{n}} \; \frac{1-2p}{1-p} \; \left(1- \left( \frac{p}{1-p}\right)^{k}\right)^m.
\label{eq:sud} \end{equation} We want to show here that we can derive essentially the same bound with the same ideas but fewer technical details. Note that the lower bound given in \cite{Sudholt13} is only meaningful for $k \geq \sqrt{n/\log(1/p)}$, as the last term of the bound would otherwise be close to~$0$: \begin{align*} \left(1- \left(\frac{p}{1-p}\right)^k\right)^m &\leq \left(1- p^k\right)^m \leq \exp(-mp^k) \\ &\leq \exp(-2^{n/k}p^k) = \exp(-2^{n/k-k\log(1/p)}). \end{align*} We have that $n/k-k\log(1/p)$ is positive if and only if $n/\log(1/p) \geq k^2$. In fact, if $k = \omega\left(\sqrt{n/\log(1/p)}\right)$, we have \begin{align*} \left(1- \left(\frac{p}{1-p}\right)^k\right)^m & \geq \left(1- \left(2p\right)^k\right)^m\\ & \geq 1 - m(2p)^k\\ & \geq 1-2^{3n/k-k\log(1/p)}\\ & = 1-2^{\sqrt{n}o(\log(1/p))-\sqrt{n}\omega(\log(1/p))}\\ & = 1-2^{-\sqrt{n}\omega(\log(1/p))}\\ & \geq 1 - 2^{-\sqrt{n}}. \end{align*} This also entails $$ p \leq \exp(-n/k^2). $$ With our fitness level method, we obtain the following lower bound. It differs from Sudholt's bound~\eqref{eq:sud} by an additional factor of $m$ in the last term, which reduces the lower bound. Analyzing why this factor does not appear in Sudholt's analysis, we note that the $\gamma_{i,j}$ chosen in \cite{Sudholt13} underestimate the true probability to jump to elements of the path that are more than $k$ steps (on the path) away. When this is corrected, as confirmed to us by the author, Sudholt's proof would also only show our bound below. Consequently, there is currently no proof for~\eqref{eq:sud}. \begin{theorem} Consider the \oea on a long $k$-path of length $m$ with mutation rate $p \leq 1/2$ starting at the all-$0$ bit string (the start of the path).\footnote{This simplifying assumption about the start point was also made in \cite{Sudholt13}.} Let $T$ be the (random) time for the \oea to find the optimum.
Then $$ E[T] \geq m \; \frac{1-2p}{p(1-p)^{n}} \; \frac{1-2p}{1-p} \; \left(1- m \left( \frac{p}{1-p}\right)^{k-1}\right)^m. $$ \end{theorem} \begin{proof} We set up an application of Theorem~\ref{thm:fbp_visit_lower}. We partition the search space in the canonical way such that, for all $i \leq m$ with $i > 0$, $A_i$ contains the $i$-th point of the path and nothing else, and $A_0$ contains all points not on the path. In order to simplify the analysis, we first change the behavior of the algorithm such that it discards any offspring which differs from its parent in at least $k$ bits. This allows us to apply Theorem~\ref{thm:fbp_visit_lower} quickly and cleanly; afterwards we show that the progress of this modified algorithm is very close to the progress of the original algorithm. In this modified process, we first consider the probability $p_i$ to leave a given level $i < m$. For this, the algorithm has to jump up exactly $j$ fitness levels for some $j < k$, which is achieved by flipping a specific set of $j$ bits; the probability for this is \begin{align*} p_i = \sum_{j=1}^{k-1} p^j(1-p)^{n-j} & \leq (1-p)^{n} \sum_{j=1}^{\infty} \left(\frac{p}{1-p}\right)^j\\ & = (1-p)^{n} \frac{p/(1-p)}{1-p/(1-p)}\\ & = p(1-p)^{n} \frac{1}{1-2p}. \end{align*} Next we consider the probability $v_i$ to visit a level $i$. We want to apply Lemma~\ref{lem:visitprob}, so let some $x \in A_{< i}$ be given, on level $\ell(x)$. Let $d = i-\ell(x)$. Note that $d$ is the Hamming distance between $x$ and the unique point in $A_i$. Thus, in case of $d \geq k$, we have $\Pr[x \rightarrow A_i] = 0$, so suppose $d < k$.
Then we have \begin{align*} \Pr\bigg[x \rightarrow A_i \,\bigg|\, x \rightarrow \bigcup_{j=i}^m A_j\bigg] & = \frac{\Pr[x \rightarrow A_i]}{\Pr[x \rightarrow \bigcup_{j=i}^m A_j]}\\ & = \frac{p^{d}(1-p)^{n-d}}{\sum_{j=d}^k p^{j}(1-p)^{n-j}}\\ & = \frac{1}{\sum_{j=d}^k p^{j-d}(1-p)^{d-j}}\\ & = \frac{1}{\sum_{j=0}^{k-d} p^{j}(1-p)^{-j}}\\ & \geq \frac{1}{\sum_{j=0}^{\infty} p^{j}(1-p)^{-j}}\\ & = 1 - \frac{p}{1-p}\\ & = \frac{1-2p}{1-p}. \end{align*} By Lemma~\ref{lem:visitprob}, we can use this last term as $v_i$ in Theorem~\ref{thm:fbp_visit_lower} (it also fulfills the second condition of Lemma~\ref{lem:visitprob}, since the process starts deterministically in the $0$ string). Note that neither $p_i$ nor $v_i$ depends on $i$. Using Theorem~\ref{thm:fbp_visit_lower} and recalling that we have $m$ levels, we get a lower bound of $$ m \frac{1-2p}{p(1-p)^{n}} \frac{1-2p}{1-p}. $$ Note that this is exactly the term derived in~\cite{Sudholt13} except for a factor accounting for the possibility of jumps of at least $k$ bits, which we still need to add. We now show that the probability of making a successful jump of distance at least $k$ is small. To that end we show that it is very unlikely to leave a fitness level with a large jump rather than just move to the next level. Suppose the algorithm is currently at $x \in A_i$. Leaving $x$ with a jump of at least $k$ to a specific element on the path is less likely the longer the jump is (since $p \leq 1/2$). Thus, the probability of jumping to a specific element of the path which is at least $k$ away is at most $p^k(1-p)^{n-k}$; a union bound over the at most $m$ such elements gives the factor $m$ below. Thus, conditional on leaving the fitness level, the probability of leaving it with a jump of at least $k$ is \begin{align*} \Pr[x \rightarrow A_{\geq i+k} \mid x \rightarrow A_{>i}] & = \frac{\Pr[x \rightarrow A_{\geq i+k}]}{\Pr[x \rightarrow A_{>i}]}\\ & \leq \frac{mp^k(1-p)^{n-k}}{p(1-p)^{n-1}}\\ & = m \left( \frac{p}{1-p}\right)^{k-1}.
\end{align*} Thus, the probability of never making an accepted jump of at least $k$ is bounded from below by the probability to, independently once for each of the $m$ fitness levels, leave the fitness level with a jump of fewer than $k$ steps rather than a jump of at least $k$: $$ \left(1- m \left( \frac{p}{1-p}\right)^{k-1}\right)^m. $$ By pessimistically assuming that the process takes a time of $0$ in case it ever makes an accepted jump of at least $k$, we can lower-bound the expected time of the original process to reach the optimum by the product of the expected time of the modified process and the probability to never make progress of $k$ or more. \end{proof} \section{Conclusion} In this work, we proposed a simple and natural way to prove lower bounds via fitness level arguments. The key to our approach is that the true run time can be expressed as the sum of the waiting times to leave a fitness level, weighted with the probability that this level is visited at all. When applying this idea, usually the most difficult part is estimating the probabilities to visit the levels, but as our examples \leadingones, \onemax, jump functions, and long paths show, this is not overly difficult and clearly easier than correctly setting the viscosity parameters of the previous fitness level method for lower bounds. For this reason, we are optimistic that our method will be an effective way to prove other lower bounds in the future, most easily, of course, for problems where upper bounds were proven via fitness level arguments as well. Our method makes most sense for elitist evolutionary algorithms, even though by regarding the best-so-far individual any evolutionary algorithm gives rise to a non-decreasing level process (at the price that the estimates for the level leaving probabilities become weaker). We are optimistic that our method can be extended to non-elitist algorithms, though. We note that the level visit probability $v_i$ for an elitist algorithm is equal to the expected number of separate visits to this level (simply because each level is visited exactly once or never). When defining the $v_i$ as the expected number of times the $i$-th level is visited, our upper and lower bounds of Theorems~\ref{thm:fbp_visit_lower} and~\ref{thm:fbp_visit_upper} remain valid (the proof would use Wald's equation).
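The interplay between the waiting time $1/p_k$ on the local optimum and the total run time of the \oea is easy to probe empirically. The following Python sketch is a toy experiment of ours, not part of the formal analysis; the fitness formulation is one standard variant of $\jump_{n,k}$ and all names are our own.

```python
import random

def jump_fitness(x, k):
    """One standard formulation of jump_{n,k}: OneMax-like outside the gap,
    low (decreasing) values inside the gap of width k-1 below the optimum."""
    n, m = len(x), sum(x)
    if m == n or m <= n - k:
        return k + m      # path towards the local optimum, and the optimum itself
    return n - m          # gap region: fitness decreases towards the optimum

def one_plus_one_ea(n, k, max_iters=10**6, rng=None):
    """(1+1) EA with standard bit mutation at rate 1/n; returns iterations used."""
    rng = rng or random.Random(0)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = jump_fitness(x, k)
    for t in range(1, max_iters + 1):
        y = [b ^ 1 if rng.random() < 1.0 / n else b for b in x]
        fy = jump_fitness(y, k)
        if fy >= fx:          # elitist acceptance
            x, fx = y, fy
        if sum(x) == n:
            return t
    return max_iters

n, k = 12, 2
p_k = (1 - 1 / n) ** (n - k) * n ** (-k)   # probability of the final jump
runs = [one_plus_one_ea(n, k, rng=random.Random(s)) for s in range(30)]
print(f"1/p_k = {1 / p_k:.0f}, mean run time = {sum(runs) / len(runs):.0f}")
```

For these parameters the mean run time over the seeded trials is of the same order as $1/p_k$, in line with the lower bound above.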
We did not detail this in our work since our main focus was on the elitist examples regarded in~\cite{Sudholt13}, but we are optimistic that this direction could be interesting for proving lower bounds also for non-elitist algorithms.
\section{Introduction} Traffic monitoring cameras and smart roadside units with vision-based sensors are becoming increasingly popular for traffic management purposes. Local Departments of Transportation (DOTs) use the videos to investigate driving safety, study traffic congestion, and sometimes issue tickets for rule violations. This type of equipment is also an essential component of the intelligent road infrastructure for automated vehicles in the future. However, there are three unsolved problems related to these cameras. First, transmitting and archiving the videos costs a significant amount of network bandwidth and storage space. Second, these videos are difficult to index, search, and analyze automatically, since they contain mainly unstructured information. In particular, it is difficult to obtain the 3D states of vehicles from 2D images. For example, local DOTs often require traffic management officers to monitor and interpret the videos to evaluate driving safety based on safety-critical events such as accidents. These events do not happen very frequently. The analysis can also involve subjective bias, e.g., errors in determining the velocity of vehicles or the distance between vehicles due to the restriction of camera perspective angles. Third, privacy concerns haunt the civil usage of these videos, which restricts access to them by non-authority organizations. For example, it is usually not acceptable for a local DOT to send the raw videos to third-party companies or scholars for data analysis unless they are contracted by the DOT. Also, insurance companies cannot access them for improving the process of traffic incident claims.
\begin{figure}[] \begin{center} \includegraphics[width=3.4in]{figures/overview.jpg} \caption{An overview of CAROM: (a) original traffic monitoring video, (b) detected vehicles, (c) replay on a 2D map, (d) replay on a 3D map.} \label{fig:overview} \end{center} \vspace{-0.25in} \end{figure} Automated vehicles (AVs) can also benefit from these cameras. For example, it is a tough question for both the DOTs and the manufacturers to answer how safe the AVs currently being tested on the road are. Traffic monitoring cameras are commonly mounted on road infrastructure and have the advantage of covering a large area. Hence, they can be used to objectively assess operational safety by calculating a set of safety metrics \cite{wishart2020driving} directly from the vehicle movements captured on the videos. Meanwhile, the information about the surrounding traffic scene obtained by these cameras can complement the perception of AVs, because the in-vehicle sensors can only reach places in the line of sight. To address these issues, we propose CAROM, a framework that can extract 3D information from the videos, generate a series of structured data records of vehicle states, and reconstruct traffic scenes on a 2D map or a 3D map, as shown in Fig.~\ref{fig:overview}. This work is part of research being conducted by the Institute of Automated Mobility (IAM) \cite{IAM} to develop an operational safety assessment methodology and an intelligent automated infrastructure for vehicles. CAROM facilitates a series of applications including road safety evaluation, roadside information services for AVs, and traffic data archiving, sharing, and further automated analysis. The generated data records can be saved in a database or sent over the network with significantly less storage and bandwidth cost than the raw videos. Moreover, a reconstructed traffic scene can be replayed to offer an objective vision of the traffic situation in a bird's-eye view to interested organizations besides the video owner.
Last but not least, the generated data can be easily anonymized by removing any Personally Identifiable Information (PII). In summary, our contributions are as follows: \begin{enumerate} \item We constructed a vehicle tracking, localization, and velocity measurement pipeline using videos taken by monocular road traffic monitoring cameras. \item We built a reconstruction system for vehicle shapes and traffic scenes using the tracking results. Additionally, we created two visualizers to replay the reconstructed traffic scene on both 2D and 3D maps. \item We evaluated the vehicle localization and velocity measurement performance using both differential GPS and drone videos, which shows promising results. \end{enumerate} \section{Related Work} With the advancement of effective neural network object detectors \cite{ren2015faster}\cite{he2017mask}, tracking algorithms \cite{wojke2017simple}, and large scale datasets \cite{geiger2013vision}\cite{naphade20204th}, research on video-based road traffic analysis has made great progress in the past decade \cite{gupte2002detection}\cite{sivaraman2013looking}\cite{liu2013survey}\cite{kumaran2019anomaly}\cite{tian2011video}\cite{datondji2016survey}. Commercial video analysis software platforms as well as roadside smart cameras are also emerging \cite{Transoft}\cite{NoTraffic}. However, localization of vehicles, speed measurement, and reconstruction of traffic scenes in 3D space are still challenging due to two core problems. First, accurate calibration of the cameras is necessary to convert 2D pixels to 3D locations. This can be done manually using labeled point correspondences or automatically using vanishing points calculated from geometric primitives \cite{dubska2014automatic}\cite{corral2014automatic}\cite{lee2011robust} and objects with known shapes \cite{sochor2017traffic}. Typically, the automated calibration algorithm also sets up a 3D world reference frame (not related to any predefined map).
Second, in addition to accurate vehicle detection and tracking on the 2D images, robust estimation of the vehicle 3D pose and dimensions is required. The 3D representation of a vehicle can be a point with an orientation vector \cite{juranek2015real}, a 3D bounding box \cite{dubska2014automatic}\cite{sochor2016boxcars}\cite{mousavian20173d}, a few key points \cite{zhang2020vehicle}, a wireframe model \cite{ding2018vehicle}\cite{ansari2018earth}, or a parametric 3D shape model \cite{leotta2010vehicle}. The location and speed of a vehicle are typically determined from three pieces of information: (1) the 2D locations on the images, (2) the 3D pose of the vehicle, and (3) the transformation between image coordinates and ground coordinates obtained from the camera calibration results. Usually, the vehicle states are also estimated jointly through a filtering process that takes the vehicle kinematics or dynamics into account \cite{chen2011kalman}\cite{li2018generic}. The vehicle shape can be reconstructed using stereo cameras \cite{engelmann2017samp} or monocular cameras \cite{ansari2018earth}\cite{chhaya2016monocular}\cite{prisacariu2012simultaneous} through a sequence of algorithms for depth estimation, model fitting, and shape optimization. Our paper builds on existing work for several individual computer vision tasks and integrates it into a unified framework that extracts the location, speed, and shape of vehicles in 3D space. Further, our tracking results allow the vehicle movements to be replayed on a 2D map or a 3D map so as to support traffic analysis tasks. The use of simulators with a single vehicle or a collection of vehicles has been studied intensively to visualize vehicle motion, study vehicle dynamics, understand traffic patterns, and train the driving behaviors of AVs \cite{dosovitskiy2017carla}\cite{lopez2018microscopic}. Unlike these works, we aim to ``re-simulate'' the traffic scenes using the reconstruction results from the videos.
\section{The CAROM Framework Architecture} The CAROM framework consists of three subsystems, as illustrated in Fig.~\ref{fig:arch}. The first one is the tracking system, which runs a pipeline to generate data structures of vehicle states from videos. This pipeline contains an offline calibration stage (detailed in Section III.A) and a few online video processing stages, including vehicle detection, tracking, localization, type recognition, and 3D state estimation (detailed in Section III.B). The generated data structures can be stored in files or a database for future usage, such as road safety assessment. The second one is the reconstruction system for vehicle shapes (detailed in Section III.C) and the map (detailed in Section III.D). The third subsystem is the replay engine that animates the traffic scene on the reconstructed map using the tracking results (detailed in Section III.E). \begin{figure}[ht] \begin{center} \includegraphics[width=3.2in]{figures/arch.jpg} \caption{The CAROM Framework Architecture} \label{fig:arch} \end{center} \vspace{-0.2in} \end{figure} \subsection{Camera and Map Calibration} CAROM uses a pinhole camera model and assumes the camera distortion is negligible, as illustrated in Fig. \ref{fig:calibration}. The ground is modeled either as a flat surface corresponding to a 2D satellite image map or a 3D surface with a high-resolution 3D mesh map, as shown in Fig. \ref{fig:map_3d}. There are three reference frames: (1) the camera frame in image pixel coordinates, (2) the world frame in metric coordinates, and (3) the map frame in map coordinates. For a 2D map, the origin is the top-left corner, the axes follow east-south directions, and the unit is a map pixel (as in Fig. \ref{fig:calibration}). For a 3D map, the origin can be any point on the ground surface, the axes follow east-north-up directions, and the coordinates use the metric unit (as in Fig. \ref{fig:map_3d}).
The calibration procedure constructs two sets of parameters: (1) a camera projection matrix from the world frame to the camera frame, and (2) a transformation between the map frame and the world frame. Since the traffic monitoring cameras do not move, we only need to run the calibration procedure once for each camera, in the following steps. First, we label a set of at least six point correspondences on the map and the image, typically using lane markers and features on the ground. Second, we create the world frame and compute the transformation between the world frame and the map frame. Usually, the XOY plane of the world frame is the ground plane in the 2D map and the x-axis follows the traffic moving direction. Third, we transform the labeled points on the map to the world frame and compute the camera projection matrix from the point correspondences \cite{hartley2003multiple}. Optionally, the calibration of the camera can be automated \cite{dubska2014automatic}, but the transformation between the world frame and the map still needs to be determined using labeled point correspondences. The transformation from any image coordinates to the world frame on the ground is crucial for vehicle localization, and we denote it as $T$. If a 2D map is used, $T$ is the planar homography between the camera frame and the ground plane, which is derived from the camera projection matrix. If a 3D map is used, we back-project each pixel on the image to the 3D ground surface to obtain its corresponding point in the world frame, and we construct $T$ as a look-up table. Additionally, we also compute the horizon line on the image from the camera projection matrix. \subsection{Online Vehicle Tracking Pipeline} The tracking pipeline consists of a set of online algorithms that independently process every image in a video. It uses information from the previous images for vehicle speed measurement. With enough processing resources, it may be able to run in real time.
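For the 2D-map case, the planar homography underlying $T$ can be estimated from labeled point correspondences with a standard direct linear transform (DLT). The sketch below is illustrative only; the function names and the four synthetic correspondences are our own, not taken from the CAROM implementation, which uses at least six correspondences and a full projection matrix.

```python
import numpy as np

def estimate_homography(img_pts, world_pts):
    """Estimate the 3x3 planar homography H with world ~ H * image (DLT)."""
    A = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    # The homography is the null vector of A (smallest singular value).
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)

def to_ground(H, u, v):
    """Back-project an image pixel to metric ground-plane coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

# Synthetic example: four lane-marker correspondences (pixels -> meters).
img_pts = [(100, 400), (500, 410), (120, 200), (480, 205)]
world_pts = [(0.0, 0.0), (3.6, 0.0), (0.0, 30.0), (3.6, 30.0)]
H = estimate_homography(img_pts, world_pts)
print(to_ground(H, 300, 300))  # a pixel between the lane markers
```

With exact correspondences the recovered $H$ maps each labeled pixel back to its ground coordinates up to numerical precision; in practice one would use more correspondences and a least-squares or robust fit.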
It has the following stages. \textbf{(1) Vehicle Detection}: For each video image, the system runs an object detection and instance segmentation network. In our implementation, we fine-tuned a Mask RCNN \cite{he2017mask} on a custom dataset created from traffic monitoring videos for this step. The quality of the masks is crucial since the later localization stage relies on the contour of the mask. \textbf{(2) Vehicle Tracking}: For each detected object instance, its 2D bounding box on the current image is enlarged four times as a region-of-interest (ROI). The sparse optical flow vectors \cite{lucas1981iterative} from the previous image to the current image are calculated within this ROI and on the masks. The detected instances on the two images are associated in linked lists using these vectors and the mask overlap percentages. \textbf{(3) Vehicle Type Recognition}: For each detected object instance, the system crops a square patch from the image just large enough to contain its 2D bounding box, resizes the cropped patch, and runs a classifier to predict its type. The following types are used: $\{$pedestrian, two-wheelers, bus, mini-truck, semi-truck, pickup-truck, convertible, coupe, sedan, all-terrain vehicle, minivan, van, SUV, trailer$\}$. In our implementation, we trained a ResNet-18 \cite{he2016deep} on a custom dataset for this step. The decoupling of the detector and the vehicle type classifier is intentional, as it makes both neural networks easier to train. Since this recognizer can learn a different set of features dedicated to its task regardless of the detector, it may also perform better. Moreover, we plan to build a fine-grained vehicle make and model classifier to replace this vehicle type recognizer in the future.
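The association idea in stage (2) can be illustrated with a deliberately simplified sketch: it matches detections across two frames by bounding-box overlap only, ignoring the optical flow vectors and segmentation masks that the actual pipeline relies on; all names below are our own.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def associate(prev_boxes, curr_boxes, thresh=0.3):
    """Greedily match current detections to previous ones by descending IoU."""
    pairs = sorted(((iou(p, c), i, j)
                    for i, p in enumerate(prev_boxes)
                    for j, c in enumerate(curr_boxes)), reverse=True)
    used_p, used_c, matches = set(), set(), {}
    for score, i, j in pairs:
        if score < thresh:
            break
        if i not in used_p and j not in used_c:
            matches[j] = i          # current detection j continues track i
            used_p.add(i)
            used_c.add(j)
    return matches

prev_boxes = [(10, 10, 50, 40), (200, 80, 260, 130)]
curr_boxes = [(205, 82, 263, 131), (12, 12, 53, 42)]
print(associate(prev_boxes, curr_boxes))  # -> {0: 1, 1: 0}
```

Chaining such per-frame matches yields exactly the kind of linked lists of instances that the later velocity measurement stage traverses.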
\begin{figure}[] \vspace{0.07in} \begin{center} \includegraphics[width=3.0in]{figures/calibration.jpg} \caption{An illustration of reference frames and point correspondences.} \label{fig:calibration} \end{center} \vspace{-0.2in} \end{figure} \begin{figure}[] \begin{center} \includegraphics[width=3.0in]{figures/map_3d.jpg} \caption{An example of the 3D map and camera coverage.} \label{fig:map_3d} \end{center} \vspace{-0.3in} \end{figure} \textbf{(4) Vehicle Localization}: The system runs RANSAC \cite{fischler1981random} on the optical flow vectors obtained in the previous vehicle tracking stage to select those vectors that meet at the same vanishing point on the horizon line (which is computed from the camera calibration results), as shown in Fig.~\ref{fig:bb3d}. Because a vehicle rarely moves backward on the road, the vehicle heading is determined by the line from the center of its 2D bounding box to this vanishing point. The vehicle has its own XYZ coordinate reference frame, where the x-axis points in the vehicle's forward direction and the y-axis points to its left. We assume that the vehicle is always on the ground, \textit{i.e.}, its z-axis points up relative to the ground surface. With the center of its 2D bounding box as its temporary location on the image, the transformation $T$, and its heading, the system computes the other two vanishing points corresponding to the y-axis and z-axis of the vehicle. Finally, using all three vanishing points, the 3D bounding box of a vehicle is computed from the contour of its segmentation mask using the tangent line method \cite{dubska2014automatic} (illustrated in Fig.~\ref{fig:bb3d}). We made several improvements in implementation details to handle a few particular viewing angles not considered in \cite{dubska2014automatic}, and we also made adjustments to the computed 3D bounding box using empirical results to accommodate vehicles without ``boxy'' shapes.
In addition, we use the recognized vehicle type and prior knowledge of the vehicle dimensions for different vehicle types to adjust the 3D bounding box dimensions. Also, the 3D bounding box is not calculated if certain occlusion conditions are detected using the 2D bounding box overlap and the size of the mask. Finally, the center of this 3D bounding box's bottom surface is the vehicle's location on the image. Again, with the transformation $T$, the location of the vehicle in the world frame is obtained. The heading calculation may fail in a few cases: (a) when the vehicle stops, (b) when the vehicle is far away with only small motion on the image, or (c) when the RANSAC fails. In these cases, the heading is inferred from the accumulated motion over several previous video images by assuming the vehicle travels in a straight line within a short amount of time. Moreover, we use a neural network similar to \cite{juranek2015real} to predict the heading angle of a vehicle on the image as a backup. It is trained on a dataset of image patches with correct headings calculated by the optical flow based method. However, this neural network method is generally slower, less accurate, and less robust than the optical flow method. \begin{figure}[] \vspace{0.1in} \begin{center} \includegraphics[width=3.1in]{figures/bb3d.jpg} \caption{An illustration of the 3D bounding box. The optical flow vector inliers are shown as the thin blue lines (on the vehicle body) and the outliers are shown as the thin red lines (on the wheels).} \label{fig:bb3d} \end{center} \vspace{-0.28in} \end{figure} \textbf{(5) Vehicle Velocity Measurement}: The system first averages the lengths of the inlier vectors of the RANSAC results obtained in the previous step and uses this average as the distance the vehicle moved between the previous image and the current image. It then obtains the corresponding distance in the world frame using the vehicle's location, heading, and the transformation $T$.
After that, the system uses the linked list of associated instances to aggregate the distances calculated from previous image pairs in sequence until the total distance exceeds a threshold or a certain number of steps is reached (5 m or 30 steps in our implementation). Finally, the velocity is calculated from this aggregated distance, the number of frame pairs, and the frame interval time. The direction of the velocity is the same as the heading. \textbf{(6) Vehicle State Estimation}: Given the location and the velocity of a vehicle, the system runs a Kalman filter with states $\mathbf{x} = (x, y, z, \dot{x}, \dot{y}, \dot{z})$ and linear 6DOF rigid body kinematics to estimate the vehicle states in the 3D world. The process and observation noise covariance matrices are empirically determined. For a 2D map, $z$ and $\dot{z}$ are always zero. The prediction step of the state estimation keeps running for a few iterations when the detection of this vehicle fails. Once the vehicle is detected again, the detected instance can be re-associated with it. The heading and 3D bounding box dimensions are also smoothed using a running average over previous video images. Finally, a record of the location, velocity, heading, 3D bounding box points, and vehicle type is created as the output of this pipeline.
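Stage (6) is a standard linear Kalman filter. A minimal sketch with a constant-velocity model and full-state observations is given below; the noise covariances in the paper are empirically determined, so the values here are placeholders.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal linear Kalman filter with state (x, y, z, vx, vy, vz)."""

    def __init__(self, dt, q=1.0, r=1.0):
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)   # kinematics: position += velocity * dt
        self.H = np.eye(6)                # both location and velocity are observed
        self.Q = q * np.eye(6)            # process noise (placeholder value)
        self.R = r * np.eye(6)            # observation noise (placeholder value)
        self.x = np.zeros(6)
        self.P = np.eye(6)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x
```

When a detection is missed, only `predict` is called, which matches the coasting behavior described above.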
\subsection{Vehicle Shape Reconstruction} \begin{table*}[] \vspace{0.07in} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline Videos & MOTA & MODA & MME & FP & FN & \#Objects & \#Images & \#Vehicles & \#IDE & \#TO & \#PO & MT & ML & Resolution \\ \hline Track 1A & 96.2\% & 98.1\% & 220 & 20 & 3,802 & 95,227 & 17,891 & 286 & 22 & 7 & 112 & 271 & 5 & 720p \\ \hline Track 1B & 90.9\% & 92.4\% & 1,218 & 330 & 5,670 & 79,210 & 17,912 & 225 & 40 & 10 & 109 & 207 & 6 & 720p \\ \hline Track 2 & 95.5\% & 96.3\% & 65 & 0 & 496 & 12,346 & 25,458 & 80 & 1 & 0 & 2 & 75 & 2 & 1080p \\ \hline \end{tabular} \caption{Tracking results.} \label{tb:tracking} \vspace{-0.2in} \end{table*} We applied the tracking pipeline to the traffic monitoring videos obtained from four cameras pointing in the four directions of an intersection, as shown in Fig. \ref{fig:reconstruction} (right). These videos allow us to observe the same vehicle traveling through the intersection from multiple viewing angles. Given a sequence of vehicle locations, 3D bounding boxes, and segmentation masks from multiple images, we compute the vehicle's visual hull using the shape-from-silhouette method \cite{laurentini1994visual}. Specifically, we first initialize a rectangular cuboid of voxels using the 3D bounding box. Then we carve away those voxels that cannot be projected onto any mask in any view. After that, we further process the remaining voxels using the symmetry of the vehicle along its x-axis. The voxels can be converted to a mesh using the marching cubes algorithm \cite{lorensen1987marching} for visualization, as in Fig. \ref{fig:reconstruction} (left). The voxels can also be converted to a 2D histogram by ignoring the details on the bottom side. Specifically, for each histogram bin at $(x, y)$ in the vehicle's own XYZ coordinate frame, the histogram value is the maximum of the $z$ coordinates of all remaining voxels with these $(x, y)$ coordinates.
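The voxel-to-histogram collapse described above, and its approximate inverse that ignores details on the bottom side, can be sketched as follows; using $-1$ to mark empty columns is an assumption of this illustration.

```python
import numpy as np

def voxels_to_height_histogram(voxels):
    """Collapse a boolean (X, Y, Z) voxel grid into a 2D height map where each
    (x, y) bin stores the max z index of any occupied voxel (-1 if empty)."""
    occupied = voxels.any(axis=2)
    # argmax over the reversed z axis finds the highest occupied voxel
    top = voxels.shape[2] - 1 - np.argmax(voxels[:, :, ::-1], axis=2)
    return np.where(occupied, top, -1)

def height_histogram_to_voxels(hist, depth):
    """Approximate inverse: fill every voxel from z = 0 up to the stored height
    (the details on the bottom side are ignored, as in the text)."""
    z = np.arange(depth)
    return z[None, None, :] <= hist[:, :, None]
```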
Similarly, a 2D histogram can be converted back to voxels. We further resample the 2D histogram to a fixed size of n-by-m bins using bilinear interpolation. The resulting histogram is denoted as an n-by-m matrix $H$, or flattened as an n*m dimensional vector $\mathbf{h}$. In our implementation, $n=m=50$. \begin{figure}[] \vspace{0.07in} \begin{center} \includegraphics[width=3.3in]{figures/reconstruction.jpg} \caption{The vehicle shape reconstruction pipeline with the reconstructed 3D shape of an example vehicle (left), three images of the vehicle (middle), and the intersection with four cameras providing the videos (right).} \label{fig:reconstruction} \end{center} \vspace{-0.3in} \end{figure} Our objective is to reconstruct the vehicle shape and represent it in a fixed-size data structure. Here $H$ is a candidate. However, it usually differs from the actual vehicle shape due to the limited viewing angles in the voxel carving process and errors in the localization results. To solve this problem, we construct a shape prior model from 80 different 3D CAD vehicle models and fit the model to the reconstructed histogram $H$ using the following procedure. \textbf{Step (1)}: For each 3D CAD model, we converted it to a histogram using an algorithm similar to the one that converts voxels to a histogram. The generated histogram was resampled to n-by-m and flattened to an n*m vector (denoted as the model vector $\{\mathbf{u}_i\}$, $1 \le i \le 80$). \textbf{Step (2)}: With all 80 model vectors, we ran Principal Component Analysis (PCA) to reduce their dimension from n*m to 20. After this step, we obtained 20 principal component vectors (denoted as an n*m-by-20 matrix $S$). The vector set $\{\mathbf{u}_i\}$ and the matrix $S$ are called the \textbf{vehicle shape prior}, similar to the shape prior models in shape analysis and multiple view reconstruction \cite{cootes1995active}\cite{engelmann2017samp}\cite{chhaya2016monocular}.
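Steps (1)--(2) amount to stacking the flattened model histograms and running PCA. A minimal sketch via SVD is given below; centering on the mean vector is a common convention that we assume here.

```python
import numpy as np

def build_shape_prior(model_vectors, n_components=20):
    """Return the mean model vector and the top principal directions (the
    columns of S) of a stack of flattened model histograms, via SVD."""
    U = np.asarray(model_vectors, dtype=float)   # shape: (num_models, n*m)
    mean = U.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal axes
    _, _, Vt = np.linalg.svd(U - mean, full_matrices=False)
    S = Vt[:n_components].T                      # shape: (n*m, n_components)
    return mean, S
```

Any model vector that lies in the span of the retained components is reproduced exactly by projecting onto $S$ and mapping back.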
\textbf{Step (3)}: We projected the reconstructed histogram $\mathbf{h}$ onto the column space spanned by $S$ by solving the following least-squares problem: $$\underset{\mathbf{v}}{\arg\min} || \mathbf{h} - S\mathbf{v} || + \lambda || \mathbf{v} - \mathbf{t}||.$$ Here the last term is a regularizer, and $\mathbf{t}$ is a template vector for the type of the reconstructed vehicle, computed by averaging the subset of $\{\mathbf{u}_i\}$ with the same type. For example, if $\mathbf{h}$ is reconstructed from a vehicle that is recognized as a ``sedan'', $\mathbf{t}$ is the template vector of ``sedan'', which is computed by averaging those $\mathbf{u}_i$ derived from 3D models of sedans. Moreover, $\mathbf{t}$ is also used to represent those vehicles whose shapes cannot be reconstructed due to occlusion. Finally, the vector $\mathbf{v}$ is the output of this pipeline as the shape representation of the reconstructed vehicle. Given $\mathbf{v}$ and $S$, an approximate histogram representation of the vehicle shape can be recovered by $\mathbf{\hat{h}} = S\mathbf{v}$. This histogram can be further converted to voxels or a mesh. The texture of the vehicle 3D model is not reconstructed, to preserve anonymity. \subsection{Map Reconstruction} We constructed the 2D map using a satellite image of the place where the camera is mounted. Many online map services (such as Google Maps) offer satellite images. The rows and columns of the image are usually already aligned to the east and the south. We also calculate the scale factor between map pixels and the metric unit using two points with a known actual distance in the metric unit (which can be measured using the online map service tools or on site). For the 3D map, we flew a survey-grade drone (DJI Phantom Pro with RTK) on the site, ran 3D reconstruction software (Pix4D mapper) to obtain a point cloud from the drone images, and then processed the point cloud into a 3D mesh map.
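The projection in Step (3) has a closed-form solution if the two norms are squared (the usual ridge-regression convention, which we assume here): $\mathbf{v} = (S^{T}S + \lambda I)^{-1}(S^{T}\mathbf{h} + \lambda\mathbf{t})$. A minimal sketch:

```python
import numpy as np

def fit_shape_vector(S, h, t, lam=0.1):
    """Solve min_v ||h - S v||^2 + lam * ||v - t||^2 in closed form:
    v = (S^T S + lam I)^{-1} (S^T h + lam t)."""
    k = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(k), S.T @ h + lam * t)
```

Larger $\lambda$ pulls the result toward the type template $\mathbf{t}$, which is what makes the fit robust for poorly carved histograms.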
We also calibrated this mesh map to align its axes to east-north-up and recover the actual scale. Additionally, we chose a reference point for both types of maps and obtained its longitude, latitude, and height above the reference ellipsoid using the online map service or a hand-held GPS receiver on site. Then, we set up the transformation from the map reference frame to the WGS84 reference frame so that we can compare our localization results with GPS measurements. \subsection{Traffic Scene Visualization and Replay} To replay a traffic scene captured by the cameras, we built two visualizers, one using the 2D map and the other using the 3D map, as shown in Fig. \ref{fig:overview}. Here, a traffic scene is defined as the collection of the road environment (\textit{i.e.}, the map) and the vehicles captured by a specific camera within a certain period (\textit{i.e.}, the tracking results). Both visualizers transform the vehicle states to the map frame using the calibration results and animate the vehicle movement. We use a template 3D mesh model for each vehicle type or the reconstructed vehicle models for the 3D animation. The size of the 3D model is scaled to fit the vehicle's 3D bounding box. In addition, during the replay, the user can modify the speed of one specific vehicle, and the visualizer can ``re-simulate'' this vehicle from the modified states following the recorded trajectory while continuing to replay the other vehicles. \section{Empirical Evaluation} We obtained traffic monitoring videos from two sites. The first one is an intersection with four cameras pointing in its four directions (the same intersection as in \cite{wishart2020driving}), which is the one shown in Fig. \ref{fig:reconstruction} in Section III.C. The second site is a local road segment with one camera, shown in Fig. \ref{fig:drone}.
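The map-frame-to-WGS84 conversion described above can be approximated with a flat-earth model around the chosen reference point, which is adequate at the scale of an intersection. This sketch assumes map axes already aligned east-north; the constant and function names are illustrative.

```python
import math

EARTH_RADIUS = 6378137.0  # WGS84 equatorial radius in meters

def enu_to_geodetic(east, north, ref_lat_deg, ref_lon_deg):
    """Convert small east/north offsets (meters) from a reference point into
    latitude/longitude with a flat-earth (equirectangular) approximation."""
    lat = ref_lat_deg + math.degrees(north / EARTH_RADIUS)
    lon = ref_lon_deg + math.degrees(
        east / (EARTH_RADIUS * math.cos(math.radians(ref_lat_deg))))
    return lat, lon
```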
\begin{figure}[] \begin{center} \includegraphics[width=3.35in]{figures/drone.jpg} \caption{Example images taken by the ground camera on the road infrastructure (left) and the drone at a height of 80 meters (right) at the second site. The coverage of both cameras is shown on the map (middle).} \label{fig:drone} \end{center} \vspace{-0.15in} \end{figure} First, we evaluate the vehicle type recognition performance. The recognizer is trained with 10,200 images and tested with 883 images. All images are cropped from the videos recorded by the four cameras at the first site. The overall accuracy is 84\%. The majority of the wrong predictions are among the following type pairs: (SUV, sedan), (SUV, minivan), (sedan, coupe). Prediction errors are more frequent when only the front side or the rear side of the vehicle is visible, \textit{i.e.}, when the vehicle is driving directly towards the camera or away from the camera. \begin{figure*}[] \vspace{0.07in} \begin{center} \includegraphics[width=6.8in]{figures/scene.jpg} \caption{Examples of reconstructed traffic scenes. The first row shows the original videos with vehicle 3D bounding boxes. The second row shows the map with each vehicle's location and uncertainty range (the rectangles and the circles on them) and speed in km/h (the numbers adjacent to the rectangles).} \label{fig:scene} \end{center} \vspace{-0.1in} \end{figure*} \begin{figure*}[] \begin{center} \includegraphics[width=6.9in]{figures/shapes.jpg} \caption{Examples of reconstructed vehicles with voxel representations and histogram shapes.} \label{fig:shapes} \end{center} \vspace{-0.3in} \end{figure*} Second, we evaluate the tracking performance on the 2D images with two video tracks from the first site (the eastbound and the southbound) and one video track from the second site. The results are shown in TABLE \ref{tb:tracking}, mostly following the metrics in \cite{stiefelhagen2006clear}.
Here ``\#Objects'', ``\#Images'' and ``\#Vehicles'' denote the total numbers of objects, images, and vehicles in the video track. ``\#IDE'' is the number of vehicles that have tracking ID errors. ``\#TO'' means ``total occlusions'', \textit{i.e.}, the number of vehicles that are occluded by other vehicles in at least one image such that more than 80\% of the vehicle is not visible. Generally, the detector fails to detect these vehicles and the tracker needs to re-associate them later. If a vehicle is partially occluded but still detected, it is counted in ``\#PO'', which means ``partial occlusions''. Typically, traffic monitoring cameras are mounted at strategically chosen places to minimize occlusion. ``MT'' is the number of vehicles that are tracked for more than 80\% of their life spans (\textit{i.e.}, ``mostly tracked''). ``ML'' is the number of vehicles that are tracked for less than 20\% of their life spans (\textit{i.e.}, ``mostly lost''). Our system is able to track most detected vehicles that are not completely occluded. \begin{table}[] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline Video & \begin{tabular}[c]{@{}c@{}}L-Diff \\ (m)\end{tabular} & \begin{tabular}[c]{@{}c@{}}V-Diff \\ (m/s)\end{tabular} & \begin{tabular}[c]{@{}c@{}}\#Vehicles \\ (w/ Ref) \end{tabular} & \begin{tabular}[c]{@{}c@{}}Coverage\\ (m)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Ref \\ Device\end{tabular} \\ \hline Track 1A & 2.05 & 1.01 & 1 & 25 $\sim$120 & GPS \\ \hline Track 1B & 1.57 & 0.69 & 1 & 25 $\sim$120 & GPS \\ \hline Track 2 & 1.68 & 1.47 & 69 & 15 $\sim$110 & Drone \\ \hline \end{tabular} \caption{Localization and speed measurement results.} \label{tb:location} \vspace{-0.25in} \end{table} Third, we quantitatively evaluate the vehicle localization and velocity measurement performance in the 3D world frame using two different types of references. For the first site, we drove a vehicle with a differential GPS receiver through the intersection.
For the second site, we flew a drone and captured videos from 80 meters above the road. We processed the drone videos to obtain the vehicle locations and velocities using a method similar to \cite{zhan2019interaction}. We also compared the location measurements between the GPS and the drone using another drone video track that captures the movement of a vehicle with a differential GPS receiver. The location and velocity measurement accuracies of both types of reference devices are within 1 m and 1 m/s, respectively. Our results are shown in TABLE \ref{tb:location}, where ``L-Diff'' and ``V-Diff'' denote the differences in location and velocity between our results and the reference measurements. Only the vehicles that are correctly tracked and have corresponding reference measurements are evaluated, as shown in the ``\#Vehicles'' column. The ``Coverage'' column shows the minimum and maximum distances between the measured vehicles and the camera. Note that ``L-Diff'' varies with the distance between the vehicle and the camera; typically, it is smaller when the vehicle is close to the camera. For example, the average value of ``L-Diff'' is 0.79 m within 50 m of the camera in track 2, which is less than the average ``L-Diff'' value over the whole range. Also, ``L-Diff'' is not the same in the longitudinal direction (\textit{i.e.}, the camera pointing direction) and the lateral direction (\textit{i.e.}, perpendicular to the camera pointing direction). For example, in track 2, the average lateral location difference is just 0.22 m but the average longitudinal location difference is 1.52 m. At certain viewing angles, \textit{e.g.}, when the vehicle is driving directly towards the camera, the 3D bounding box is inaccurate. Moreover, the road surface is not perfectly flat, which can cause errors in the conversion of image coordinates to world coordinates when the 2D map is used. Converting world coordinates to GPS coordinates may also introduce small errors.
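The longitudinal/lateral split of ``L-Diff'' reported above is a simple projection of the 2D error vector onto the camera pointing direction and its perpendicular; a sketch with hypothetical argument names:

```python
import numpy as np

def split_location_error(est_xy, ref_xy, camera_dir_xy):
    """Split a 2D localization error into its longitudinal component (along
    the camera pointing direction) and lateral component (perpendicular)."""
    d = np.asarray(camera_dir_xy, dtype=float)
    d /= np.linalg.norm(d)
    err = np.asarray(est_xy, dtype=float) - np.asarray(ref_xy, dtype=float)
    longitudinal = abs(float(np.dot(err, d)))          # along the view ray
    lateral = abs(float(err[0] * d[1] - err[1] * d[0]))  # perpendicular part
    return longitudinal, lateral
```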
Fourth, we show the qualitative results in Fig. \ref{fig:scene}, Fig. \ref{fig:shapes}, and the supplemental videos\footnote{https://github.com/duolu/CAROM}. We also obtained positive feedback from the Maricopa County DOT and the Institute of Automated Mobility \cite{IAM} in Arizona, which confirmed that these results are useful to some extent for tasks such as traffic counting and driving safety analysis. Finally, we discuss the limitations of the current system and possible future improvements. Currently, both the 3D bounding box calculation and the shape-from-silhouette algorithm require a complete segmentation mask, so inaccurate results are generated for vehicles under partial occlusion. We plan to collect a large-scale dataset and train a neural network pose estimator that works robustly under partial occlusion. We also aim to train a neural network to directly predict the vehicle's shape vector and pose jointly from a single image, using the current vehicle reconstruction results as training data, so that the 3D shape of a vehicle can be obtained directly at every frame. This will enable us to develop a model-based tracking algorithm to increase the processing speed and improve the robustness. In addition, pedestrians, cyclists, and other types of traffic participants are not reconstructed in our current implementation, and we want to calculate ``bounding cylinders'' for these moving objects with non-boxy shapes. We are also working on calculating safety metrics \cite{wishart2020driving} from our tracking results as an application. \section{Conclusions} In this paper, we present CAROM, a vehicle localization and traffic scene reconstruction framework using videos taken by monocular cameras mounted on road infrastructure, which achieves promising results for vehicle localization and velocity measurement. Still, CAROM is in its early stage, with limitations in robustness and efficiency.
With further development, we hope it can be deployed together with traffic monitoring cameras on roadside infrastructure in the future, allowing jurisdictional authorities and AVs on the road to gain better awareness of the traffic situation. \bibliographystyle{IEEEtran}
\section{Introduction} Two-dimensional (2D) materials have been among the most extensively studied structures due to their wide range of applications in nanoscience and nanotechnology \cite{xie2015two,fiori2014electronics,wu2018thermo,zhang2015strain}. Recently, atomically thin layers of metal monochalcogenides, a new class of this family, have attracted much attention \cite{demirci2017structural,zhou2017multiband,cao2015tunable,shi2015anisotropic}. These 2D materials have special electronic and structural properties which make them promising candidates for different applications such as field-effect transistors (FETs), electronic sensors, and solar energy and photoelectric devices \cite{cai2019synthesis,feng2015performance,wang2015high,wang2019first,budweg2019control}. The general chemical formula of these layered materials is MX, where M belongs to Group III and X refers to Group VI of the periodic table; GaS, GaSe and InSe are some examples. In their bulk form, there is a strong covalent chemical bond between the metal and chalcogenide atoms, and each layer is coupled to its neighboring layers by van der Waals forces \cite{hu2015gese,bejani2019lattice,ariapour2020strain}. When the thickness of this group of materials is reduced to a few monolayers, their valence band looks like a ``Mexican hat'' \cite{seixas2016multiferroic}. A Mexican-hat dispersion forms ring-shaped valence band edges, at which van Hove singularities appear with a $1/\sqrt{E}$ divergence in the 2D density of states (DOS) \cite{wickramaratne2015electronic,stauber2007fermi}. Exploring this novel class of 2D semiconductors, with a large DOS near the Fermi surface, tunable magnetism, superior flexibility and good ambient stability, has been an important research topic in recent years.
In addition, the successful synthesis of monolayer and few-layer MXs, including GaS, GaSe and InSe, presents intriguing opportunities for future semiconductor technology \cite{hu2012synthesis,zhou2018inse,chang2018synthesis,lei2013synthesis}. Over the course of the past few decades, a great deal of attention has been focused on double-layer 2D structures because of their interesting many-body and transport features, which arise from the inter-layer Coulomb interaction between two parallel electron or hole systems coupled in close proximity \cite{vazifehshenas2012thickness,tanatar2001dynamic,gumbs2018effect,perali2013high,vazifehshenas2015geometrical}. The Coulomb drag phenomenon provides an opportunity to measure the effects of electron-electron interactions directly through a transport measurement, where momentum is transferred from one layer to the other due to the inter-layer Coulomb coupling \cite{hwang2011coulomb,narozhny2016coulomb,carrega2012theory,vazifehshenas2007thickness}. A driving current$\ (I_{drive})$ in one layer (``the active layer'') induces a voltage$\ (V_{drag})$ in the other layer (``the passive layer''). This phenomenon is called Coulomb drag. The transresistivity or drag coefficient$\ (\rho_{D})$ is a measure of the inter-layer interaction and can be determined as the ratio of$\ V_{drag}$ to$\ I_{drive}$ \cite{narozhny2012coulomb,sivan1992coupled}. This phenomenon has previously been studied in nanostructures such as n-doped and p-doped double quantum wells \cite{flensberg1994coulomb,yurtsever2003many,hwang2003frictional,pillarisetty2005coulomb}, double quantum wires \cite{tanatar1998disorder,tanatar1996coulomb,tanatar2000effects}, mismatched subsystems \cite{badalyan2020coulomb}, double layers of topological materials \cite{liu2019coulomb}, double-layer and bilayer graphene \cite{tse2007theory,narozhny2012coulomb,hwang2011coulomb}, and double-layer phosphorene \cite{saberi2016coulomb}.
For a double quantum well system with 2D electron density$\ n$ and layer separation$\ d$, the drag transresistivity scales as$\ T^{2}/n^{2}d^{4}$ ($\ 1/Tn^{3(4)}d^{3}$) at low (high) temperature$\ (T)$. In the case of double-layer graphene with linear energy band dispersion, it has been found that$\ \rho_{D}$ has a $\ T^{2}/n^{2}d^{2}$ ($\ T^{2}/n^{4}d^{6}$) dependence at low (high) carrier density, while $\ \rho_{D}$ for a system of double graphene bilayers with quadratic dispersion shows a$\ T^{2}/n^{3}d^{4}$ ($\ T^{2}/n^{3}ln(d)$) behavior in the large (small) layer separation case \cite{hwang2011coulomb}. Also, at low (high) temperature, a system of double quantum wires exhibits a$\ T^{2}$ ($\ T^{-3/2}$) dependence within the Fermi liquid approach \cite{glazman2006coulomb}. However, the Coulomb drag effect has not been studied for materials with a Mexican-hat dispersion, and such double-layer systems remain open for investigation. In MX monolayers, the Mexican-hat dispersion results in a high density of states and a van Hove singularity near the valence band maximum, which can affect their electronic \cite{demirci2017structural,zhao2019magnetism}, optoelectronic \cite{magorrian2017spin,lei2016surface}, thermoelectric \cite{nurhuda2020thermoelectric,wickramaratne2015electronic,wang2019strain} and many-body properties. This motivates us to theoretically investigate the many-body Coulomb drag effect in such 2D materials with a Mexican-hat band structure. Among the monolayer MXs mentioned above, GaS has a larger Mexican hat, which can be attributed to the charge transfer caused by the difference in the elements' electronegativities (Se$\ <$ S, In$\ <$ Ga); the transferred charge occupies the $p$ orbitals of S or Se and dominates the top valence bands \cite{wang2019first}. In this paper, we theoretically investigate the Coulomb drag effect between two p-type doped identical parallel monolayers of a few III-VI compounds whose valence bands have a Mexican-hat shape.
Special attention is paid to GaS, with a lattice constant of$\ a = 3.46$ {\AA}, which is known to have promising electronic and optical characteristics \cite{demirci2017structural,yagmurcukardes2016mechanical,ho2006optical,budweg2019control}. We will start off with the expression for the drag resistivity based upon the semiclassical Boltzmann transport equation and the energy-independent scattering time approximation. Then, we will use a general formalism for calculating the drag resistivity in our system, and the effects of various parameters such as temperature ($T$), hole density ($p$) and layer separation ($d$) will be investigated. In order to better understand the drag resistivity behavior, we also extract the double-layer plasmon modes as functions of the studied parameters from the dynamical dielectric function. We will finally present a comparison between the drag resistivity of the GaS monolayer and that of some other family members, such as GaSe and InSe monolayers. We have ignored the virtual phonon exchange effects on the drag transresistivity \cite{gramila1993evidence,tso1992direct} in our calculations. This mechanism is expected to be relevant at very low temperatures (where the contribution of plasmons to the drag is negligible) and for large inter-layer separations (where the Coulomb interaction between the layers is weak) \cite{amorim2012coulomb,zarenia2019coulomb,flensberg1994coulomb}. In this study, the distance between the two layers is chosen to be small (15--30 {\AA}). Therefore, the Coulomb interaction between the layers is strong enough that one can safely neglect the effect of virtual phonon exchange.
Also, the coupling between the plasmons and the surface optical phonons of the substrate is not taken into account because, for most parameters used here, the Fermi energy, as a result of the large density of states at the band edge (van Hove singularity), is very small and far below the surface optical phonon energy, so that the interaction between them is almost negligible. The rest of this paper is structured as follows: in Sec. II, we describe the model and theoretical formalism. In Sec. III, we present the results together with a detailed discussion, and finally, the conclusion is given in Sec. IV. \section{Model and Formalism} The structure is modeled as two p-type doped identical parallel monolayers with Mexican-hat valence band dispersion which are coupled by the Coulomb interaction at a short distance. The separation is still large enough to prohibit any electron tunneling. Figure \ref{Fig-1:sil}(a) shows a schematic model of this system and Figure \ref{Fig-1:sil}(b) demonstrates the top view of the crystal structure of III-VI compounds. \begin{figure}[t!] \includegraphics[width=4cm]{Fig-1b}\includegraphics[width=4cm]{Fig-1a} \caption{(a) Side view of a double-layer structure composed of III-VI compound monolayers in a drag setup. (b) Top view of the general atomic structure of III-VI compounds.}\label{Fig-1:sil} \end{figure} The valence band energy dispersion relation of each layer is given by \cite{das2019charged}: \begin{equation}\label{eq:E} E(k)=E_{0} -\lambda_{1}k^{2} +\lambda_{2}k^{4} \end{equation} where $E_{0} $ is the height of the hat at$\ k=0 $ (see Figure \ref{Fig-2:mx}), $\lambda_{1}=\hbar^2/2m^{*}$, $\lambda_{2}=\hbar^4/16E_{0} {m^{*}}^{2}$ and$\ m^*$ is the hole effective mass at$\ k=0 $. $\ E_{0}$ and$\ m^{*}$ are, respectively, set to 111.2 meV and 0.409$\ m_{0}$ for the GaS monolayer, with $\ m_{0}$ being the free electron mass \cite{wickramaratne2015electronic}. As shown in Figure \ref{Fig-2:mx}, the hole kinetic energy is assumed to be positive.
According to the dispersion relation given above, the valence band edge is located at$\ E = 0$ and negative energies represent energies in the band gap. There are two Fermi wave vectors,$\ {{k}_{F}}_{1}$ and$\ {{k}_{F}}_{2}$, for positive Fermi energies smaller than$\ E_{0}$ in the Mexican-hat dispersion. These two Fermi wave vectors originate from the two branches of the dispersion, with concentric ring radii of$\ {k_{F}}_{1}=\sqrt{(4m^{*}E_{0}/\hbar^{2})(1-\sqrt{E/E_{0}})}$ and$\ {k_{F}}_{2}=\sqrt{(4m^{*}E_{0}/\hbar^{2})(1+\sqrt{E/E_{0}})}$ corresponding to the Fermi surface. The density of states of a 2D Mexican-hat structure is given by \cite{das2019charged}: \begin{figure}[ht!] \includegraphics[width=6cm]{Fig-2a} \includegraphics[width=4cm]{Fig-2b} \caption{(a) Mexican-hat dispersion for GaS monolayer with$\ p=5\times10^{13} $cm$^{-2}$ and$\ E_{F}=0.37E_{0}$. The two concentric rings show the two Fermi circles with radii $\ {k_{F}}_{1}$ and$\ {k_{F}}_{2}$ that exist at Fermi energies below$\ E_{0}$. (b) DOS and van Hove singularities.}\label{Fig-2:mx} \end{figure} \begin{equation}\ \label{eq:DS} DOS(E) = \left\{ \begin{array}{rl} \frac{2m^*}{\pi \hbar^2} \sqrt{\frac{E_{0}}{E}} & \ \ \ E<E_{0} \\ \frac{m^*}{\pi \hbar^2} \sqrt{\frac{E_{0}}{E}} & \ \ \ E>E_{0} \end{array}\right. \end{equation} with the Fermi energy $\ E_F=p^{2}\pi^{2}\hbar^{4}/16E_{0} {m^{*}}^{2} $, where $p$ is the 2D hole density. The Mexican-hat electronic band structure leads to divergences in the density of states, the so-called van Hove singularities: the first diverges with a$\ 1/\sqrt{E} $ behavior at$\ E = 0$, and the second is a Heaviside step-function discontinuity at$\ E = E_{0}$. The existence of the van Hove singularities promises new electronic properties when the Fermi energy is in their close vicinity \cite{zhou2017multiband,rybkovskiy2014transition}.
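As a numerical consistency check, the two Fermi wave vectors can be substituted back into the dispersion of Eq. (\ref{eq:E}); with $\lambda_{2}=\lambda_{1}^{2}/4E_{0}$ (the value that places the ring-shaped band edge at $E=0$), both $k_{F1}$ and $k_{F2}$ return the chosen energy. The sketch below uses the GaS parameters quoted above.

```python
import math

HBAR = 1.0545718e-34   # reduced Planck constant, J s
M0 = 9.10938356e-31    # free electron mass, kg
MEV = 1.602176634e-22  # 1 meV in J

def mexican_hat_check(E0_meV=111.2, m_rel=0.409, E_over_E0=0.37):
    """Substitute k_F1 and k_F2 back into E(k) = E0 - lam1 k^2 + lam2 k^4,
    with lam1 = hbar^2/(2 m*) and lam2 = lam1^2/(4 E0), so that the ring-shaped
    band edge sits at E = 0. Returns (E(k_F1), E(k_F2), E); all should coincide."""
    E0 = E0_meV * MEV
    m = m_rel * M0
    lam1 = HBAR**2 / (2.0 * m)
    lam2 = lam1**2 / (4.0 * E0)
    E = E_over_E0 * E0
    s = math.sqrt(E / E0)
    kf1 = math.sqrt(4.0 * m * E0 / HBAR**2 * (1.0 - s))
    kf2 = math.sqrt(4.0 * m * E0 / HBAR**2 * (1.0 + s))
    band = lambda k: E0 - lam1 * k**2 + lam2 * k**4
    return band(kf1), band(kf2), E
```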
The drag conductivity is defined by: \begin{equation}\label{eq:sigma} \sigma_D=\frac{J^\alpha_1}{E^\alpha_2} \end{equation} where$\ \alpha$ is the direction along $\ x$ or $\ y$ in which the current $\ J_1$ flows. Indices 1 and 2 denote the active and passive layers, respectively. The drag resistivity relates to the layers' conductivities in isotropic systems as follows: \begin{equation}\label{eq:rh} \rho_{D}\simeq-\frac{\sigma_D}{\sigma_{11}\sigma_{22}} \end{equation} where $\sigma_{11}$ and $\sigma_{22}$ are the intra-layer conductivities of the active and passive layers, respectively. The drag resistivity can be obtained through several methods, such as the Kubo formula based on the leading-order diagrammatic perturbation theory \cite{flensberg1995linear,kamenev1995coulomb}, the memory function formalism \cite{zheng1993coulomb} and the linear response Boltzmann transport equation \cite{jauho1993coulomb}. Within the third approach, the drag resistivity is given by \cite{flensberg1994coulomb}: \begin{equation}\label{eq:rhoD} \begin{split} \rho_{D}=-\frac{{m^{*}}_{1}{m^{*}}_{2}}{4\pi k_{B}T p_{1}p_{2} e^{4}\tau_{1}\tau_{2}} \qquad\qquad\\ \times\sum_{\textbf{q}}\int d\omega \frac{Im[\Gamma^{\alpha}_{1}(\textbf{q},\omega)]Im[\Gamma^{\alpha}_{2}(\textbf{q},\omega)] {\left|W_{12}(q,\omega)\right|}^2}{sinh^2(\hbar\omega/2k_{B}T)}. \end{split} \end{equation} Here $\omega$ and $\ \textbf{q}$ are the transferred energy and momentum from layer 1 to layer 2 at temperature $T$, $\ k_{B}$ is the Boltzmann constant,$\ W(\textbf{q},\omega)$ is the screened inter-layer Coulomb interaction, and$\ \tau_{1(2)}$ is the transport scattering time of layer 1 (layer 2). We assume that the relaxation time is not energy dependent and that layers 1 and 2 are identical with equal hole densities.
$\Gamma^{\alpha}_{i}(\textbf{q},\omega)$ is the non-linear susceptibility along the $\alpha$ direction, given by \cite{zheng1993coulomb}: \begin{equation}\label{eq:Fi} \vspace{2mm} \Gamma^{\alpha}_{i}(\textbf{q},\omega)=g\sum_{\textbf{k}}\frac{e(f_{i}(\textbf{k})-f_{i}(\textbf{k}^{'}))(\tau_{i}v^{\alpha}(\textbf{k})-\tau_{i}v^{\alpha}(\textbf{k}^{'}))}{E(\textbf{k})-E(\textbf{k}^{'})+\omega+i\eta^{+}}. \end{equation} In this equation $\textbf{k}^{'}=\textbf{k}+\textbf{q}$, $g$ is the spin degeneracy, $v^{\alpha}(\textbf{k})$ is the $\alpha$ component of the group velocity, $e$ is the electron charge and $f(\textbf{k})=\{\exp[(E(\textbf{k})-\mu)/k_{B}T]+1\}^{-1}$ is the equilibrium Fermi distribution function with $\mu$ being the chemical potential. $E(\textbf{k})$ refers to the Mexican-hat dispersion given in Eq. (\ref{eq:E}). The 2D non-linear susceptibility along a given direction, say $x$, can be obtained as: \begin{equation}\label{eq:Gama} \Gamma^{x}_{i}(\textbf{q},\omega)=\sum_{\textbf{k}}\dfrac{e\tau [f_{i}(\textbf{k})-f_{i}(\textbf{k}^{'})]{\Delta v^{x}_{\textbf{k},\textbf{k}^{'}}}}{\Delta E_{\textbf{k},\textbf{k}^{'}}+\omega+i\eta^{+}} \end{equation} where ${\Delta v^{x}_{\textbf{k},\textbf{k}^{'}}}$ and $\Delta E_{\textbf{k},\textbf{k}^{'}}$ are given by the following relations: \begin{equation}\label{eq:deltaV} \begin{split} {\Delta v^{x} _{\textbf{k},\textbf{k}^{'}}}=\frac{2\lambda_{1}q_{x}}{\hbar}+\frac{4\lambda_{2}}{\hbar} [{k_{x}}^3-{(k_x+q_x)}^3\\+k_{x}{k_{y}}^2-(k_{x}+q_{x}){(k_{y}+q_{y})}^2].
\end{split} \end{equation} \vspace{0.2cm} and \begin{equation}\label{eq:deltaE} \Delta E_{\textbf{k},\textbf{k}^{'}}=A(k,q)\cos^{2}\theta+B(k,q)\cos\theta+C(k,q) \end{equation} with $A$, $B$ and $C$ defined as \begin{equation}\label{eq:A} A(k,q)=4\lambda_{2} k^2 q^2 \end{equation} \begin{equation}\label{eq:B} B(k,q)=2 \lambda_{1}kq-4\lambda_{2}k q^3-4\lambda_{2}k^3 q \end{equation} \begin{equation}\label{eq:C} C(k,q)=2 \lambda_{2} k^2 q^2 +\lambda_{2}q^4-\lambda_{1}q^2 \end{equation} where $\theta$ is the angle between $\textbf{k}$ and $\textbf{q}$.\\ \section{Results and discussion} \begin{figure*}[h] \centering \includegraphics[width=7cm]{Fig-3a} \hspace{0.5cm} \includegraphics[width=7cm]{Fig-3b} \caption{The loss function for a double-layer structure of GaS monolayers for $d=15$ {\AA} and $p=4\times10^{13}$ cm$^{-2}$ at two temperatures: (a) $T=0$ and (b) $T=0.5 T_{F}$.}\label{Fig-3:loss3} \end{figure*} \begin{figure*}[h] \centering \includegraphics[width=5.6cm]{Fig-4a} \includegraphics[width=5.6cm]{Fig-4b} \includegraphics[width=5.6cm]{Fig-4c} \caption{The loss function for the double-layer structure of GaS monolayers at zero temperature and $d=15$ {\AA} for various densities: (a) $p=2\times10^{13}$ cm$^{-2}$, (b) $p=4\times10^{13}$ cm$^{-2}$ and (c) $p=6\times10^{13}$ cm$^{-2}$.}\label{Fig-4:loss4} \end{figure*} \begin{figure*}[h] \centering \includegraphics[width=5.6cm]{Fig-5a} \includegraphics[width=5.6cm]{Fig-5b} \includegraphics[width=5.6cm]{Fig-5c} \caption{The loss function for the double-layer structure of GaS monolayers at zero temperature and $p=3\times10^{13}$ cm$^{-2}$ for various layer separations: (a) $d=15$ {\AA}, (b) $d=30$ {\AA} and (c) $d=45$ {\AA}.}\label{Fig-5:loss5} \end{figure*} \begin{figure*}[ht!]
\centering \includegraphics[width=7cm]{Fig-6a} \hspace{0.5cm} \includegraphics[width=7cm]{Fig-6b} \caption{Acoustic and optical plasmon mode behavior at zero temperature for (a) various densities $p=2, 4$ and $6\times10^{13}$ cm$^{-2}$ at $d=15$ {\AA} and (b) various separations $d=15, 30$ and $45$ {\AA} at $p=3\times10^{13}$ cm$^{-2}$.}\label{Fig-6:pl} \end{figure*} The many-body interaction is taken into account through the dynamically screened Coulomb potential \cite{hwang2007dielectric}: \begin{equation}\label{eq:W12} W_{12}(\textbf{q},\omega)=\frac{2\pi e^2\exp({-qd})}{\kappa q \ \varepsilon(q,\omega)} \end{equation} where $d$ is the distance between the two layers, $\kappa$ refers to the relative background permittivity and $\varepsilon(q,\omega)$ is the dynamical dielectric function. Within the random phase approximation (RPA), which has been successfully employed for calculating the dielectric function in a double-layer system with identical background permittivity, the dielectric function is given by \cite{tanatar2001dynamic}: \begin{equation}\label{eq:epsilon} \begin{split} \varepsilon(q,\omega)=\left(1-\frac{2\pi e^2}{\kappa q}\Pi_{1}(q,\omega)\right)\left(1-\frac{2\pi e^2}{\kappa q}\Pi_{2}(q,\omega)\right)\\ -\left(\frac{2\pi e^2\exp({-qd})}{\kappa q}\right)^{2}\Pi_{1}(q,\omega)\Pi_{2}(q,\omega) \end{split} \end{equation} with $\Pi_{i}(q,\omega)$ being the 2D non-interacting polarizability of layer $i$ at finite temperature \cite{stern1967polarizability,maldague1978many}: \begin{equation}\label{eq:Pi} \Pi_{i}(q,\omega)=g\sum_{\textbf{k}}\dfrac{f_{i}(\textbf{k})-f_{i}(\textbf{k}^{'})}{\Delta E_{\textbf{k},\textbf{k}^{'}}+\omega+i\eta^{+}}. \end{equation} In 2D double-layer structures, the collective density fluctuations (plasmons) play an important role in determining the many-body properties of the system, such as screening and the drag effect \cite{liu2008plasmon,van2013plasmon}.
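For identical layers ($\Pi_1=\Pi_2=\Pi$), Eq. (\ref{eq:epsilon}) factorizes into in-phase (optical) and out-of-phase (acoustic) channels, which is the origin of the two plasmon branches discussed below. The sketch verifies this factorization numerically; the polarizability value used is a toy placeholder, not the actual Lindhard function of the Mexican-hat band.

```python
import numpy as np

# Arbitrary units; all numerical values below are illustrative assumptions.
e2  = 1.0                         # e^2 / kappa
q   = 0.8
d   = 1.5
v_q = 2 * np.pi * e2 / q          # bare intra-layer Coulomb interaction
v12 = v_q * np.exp(-q * d)        # inter-layer interaction

Pi = -0.05 + 0.01j                # toy polarizability (placeholder)

# Eq. (eq:epsilon) with Pi_1 = Pi_2 = Pi
eps = (1 - v_q * Pi)**2 - (v12 * Pi)**2

# Factorized form: symmetric (optical) x antisymmetric (acoustic) channel
eps_factored = (1 - (v_q + v12) * Pi) * (1 - (v_q - v12) * Pi)

# Loss function -Im[1/eps] used to trace the plasmon dispersion
loss = -np.imag(1.0 / eps)
```

As $d\to\infty$ the inter-layer term $v_{12}$ vanishes and the two channels become degenerate, consistent with the convergence of the branches seen in Figure \ref{Fig-5:loss5}.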
The plasmon modes are given by the poles of the density-density response function, or equivalently by the zeros of the dynamical dielectric function, Eq. (\ref{eq:epsilon}). The loss function, given by $-Im[\varepsilon(q,\omega)^{-1}]$, can be used to study the plasmon dispersion $\omega_{p}(q)$. A plasmon mode appears when both $Re[\varepsilon(q,\omega)]$ and $Im[\varepsilon(q,\omega)]$ become zero; a situation where $-Im[\varepsilon(q,\omega)^{-1}]$ is a $\delta$-function with strength $W(q) =\pi[\partial Re[\varepsilon(q,\omega)]/ \partial\omega|_{\omega=\omega_p(q)}]^{-1}$. We start presenting our results with Figure \ref{Fig-3:loss3}, where the loss function has been calculated for a p-type doped GaS double-layer structure. We have used $m^{*}=0.409\,m_{0}$, $\kappa=3.1$ and $E_{0}=111.2$ meV for GaS \cite{wickramaratne2015electronic,das2019charged}. In Figure \ref{Fig-3:loss3}, we illustrate the loss function in the $(q,\omega)$ space at two temperatures, $T=0$ and $0.5T_{F}$, for a hole density $p= 4 \times10^{13}$ cm$^{-2}$ and an inter-layer separation of $d=15$ {\AA}. The color scale represents the mode spectral strength. As can be seen from this figure, the single-particle excitation (SPE) continuum has a gap in its low-energy part, similar to that obtained for a 1D electron gas system (quantum wire). It seems the van Hove singularity in the density of states at the band edge, which diverges as $1/\sqrt{E}$, is responsible for this newly emerged gap in the SPE region of 2D materials with Mexican-hat dispersion. As shown in Figure \ref{Fig-2:mx}(a), there are two Fermi wave vectors ${{k}_{F}}_{1}$ and ${{k}_{F}}_{2}$ for positive Fermi energies smaller than $E_{0}$. They cause the appearance of a narrow SPE band located just below the main dome of the SPE continuum (see Figures \ref{Fig-3:loss3}(a) and (b)).
The curves in Figure \ref{Fig-3:loss3} indicate the optical and acoustic plasmonic branches; notably, the optical branch appears at higher energies. In the acoustic (optical) mode the carriers residing in the two layers oscillate collectively out of phase (in phase). A comparison between Figures \ref{Fig-3:loss3}(a) and (b) makes it clear that the effect of finite temperature is to intensify the plasmon damping process. Since at finite temperature hole carriers with larger kinetic energies are excited at negligible energy cost, they enter the SPE region more easily. In Figure \ref{Fig-4:loss4}, we show that increasing the carrier density shifts the damped optical and acoustic plasmon modes up to higher energies, where they eventually enter the SPE region. Damped plasmons correspond to the broadened peaks in the loss function. Our results show that the density dependence of the plasmon modes can be approximated by $p^{0.5}$, which happens to be the same behavior as a conventional 2D system with parabolic energy dispersion \cite{narozhny2016coulomb}. In Figure \ref{Fig-5:loss5}, we show the effect of increasing the distance $d$ between the layers on the plasmon mode behavior. We have plotted the loss function for several separations ($d=15$, $30$ and $45$ {\AA}) at zero temperature and fixed density ($p=3\times10^{13}$ cm$^{-2}$). Calculations indicate that by moving the layers away from each other, the optical and acoustic plasmon branches converge and the mode damping occurs at smaller energies (see Figures \ref{Fig-5:loss5}(a)-(c)). This observation can be attributed to the fact that the inter-layer interaction weakens with increasing inter-layer separation, so that the system can eventually be considered as two separate layers, for which the plasmon branches are degenerate.
For a better comparison, the variations of both the acoustic and optical plasmon modes with carrier density and layer separation at zero temperature are shown in Figures \ref{Fig-6:pl}(a) and (b), respectively. According to the plasmon branches given in Figure \ref{Fig-6:pl}, we learn that in the long-wavelength limit the acoustic (optical) plasmon modes show a $q$ ($\sqrt{q}$) dependence in this system, which is quite similar to other double-layer structures consisting of 2D systems such as the 2D electron gas, graphene, bilayer graphene, etc. \cite{hwang2011coulomb,hwang2007dielectric,flensberg1995plasmon}. At larger wave vectors, however, the acoustic (optical) plasmon branches follow $q^{1.3}$ ($q$) dispersions. The calculations suggest a $d^{0.2}$ and a $d^{0.1}$ dependence for the acoustic and optical plasmon energies, respectively. \begin{figure}[!ht] \includegraphics[width=8cm]{Fig-7} \caption{Drag resistivity as a function of temperature for various densities $p=2$, $2.5$ and $3 \times10^{13}$ cm$^{-2}$ with $d=15$ {\AA}.} \label{Fig-7:d1} \end{figure} Now that the effects of $T$, $p$ and $d$ on the plasmon modes of the double-layer system of GaS (as a synthesized 2D material with the Mexican-hat dispersion) are known, we may investigate the Coulomb drag resistivity in such a double-layer structure. The calculated drag resistivity as a function of temperature for various densities ($p=2, 2.5$ and $3\times10^{13}$ cm$^{-2}$) at a fixed distance ($d=15$ {\AA}) is shown in Figure \ref{Fig-7:d1}. It can be observed that the drag resistivity decreases with increasing carrier density at any temperature. To understand this behavior, one may note that the plasmon modes take higher energies at higher densities; as a result, they enter the SPE region more easily and get damped faster (see Figures \ref{Fig-4:loss4}(a)-(c)). Therefore, their contribution to the drag resistivity weakens and consequently the drag resistivity decreases.
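Power-law exponents such as the quoted $p^{0.5}$ density dependence are typically extracted by a linear fit of $\log\omega_{p}$ versus $\log p$. A minimal sketch of the procedure (the data here are synthetic, generated with the expected exponent purely to illustrate the fit, and the prefactor is an arbitrary assumption):

```python
import numpy as np

# Synthetic plasmon energies obeying omega_p ~ p^0.5 (illustrative only)
p = np.array([2.0, 3.0, 4.0, 5.0, 6.0]) * 1e13   # densities, cm^-2
omega_p = 7.3 * np.sqrt(p)                        # arbitrary prefactor

# Linear fit in log-log space; the slope is the power-law exponent
slope, intercept = np.polyfit(np.log(p), np.log(omega_p), 1)
```

The same fitting procedure applies to the $d^{0.2}$/$d^{0.1}$ separation dependencies and to the $q^{1.3}$ large-wave-vector dispersion quoted above.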
One can also learn from Figure \ref{Fig-7:d1} that the drag resistivity rises when the temperature increases at constant density. Eq. (\ref{eq:rhoD}) can explain this observation: there are two important contributions to the Coulomb drag resistivity, $Im[\Gamma_{i}(\textbf{q},\omega)]$ and $W_{12}(\textbf{q},\omega)$. At zero temperature, the well-defined plasmon modes always lie outside the SPE region and there is no coupling between the SPE region and the plasmon modes ($Im[\Gamma_{i}(\textbf{q},\omega)]=0$), which results in $\rho_{D}=0$. It is obvious from Figures \ref{Fig-3:loss3}(a) and (b) that with increasing temperature the SPE continuum and the plasmon peaks broaden and partially overlap due to thermally activated holes. In this situation $Im[\Gamma_{i}(\textbf{q},\omega)]$ is non-zero, which enhances the plasmon contribution (described by the zeros of the dielectric function $\varepsilon(q,\omega)$) to $\rho_{D}$. On the other hand, our calculations, performed for several hole densities and inter-layer separations, show that the temperature dependence of the drag resistivity can be approximated as $T^{2}$ at low temperature for $k_{F}d>1$. This behavior has been reported for other double-layer Fermi systems, such as the double quantum well with parabolic energy dispersion \cite{jauho1993coulomb}. At intermediate temperatures, a $T^{2.8}$ dependence has been obtained for GaS, which is due to the plasmon enhancement effect. This effect can be clearly observed in Figure \ref{Fig-8:sd}, where both the statically and dynamically screened results for the drag resistivity (scaled by $T^{2}$) are shown as functions of temperature.
The calculations for a hole density of $2.5 \times10^{13}$ cm$^{-2}$ suggest that the plasmon contribution to the drag resistivity becomes important as $T$ increases above an intermediate temperature $\sim 0.45T_{F}$ and exhibits a peak around $T= 0.9T_{F}$. \begin{figure}[!ht] \includegraphics[width=8cm]{Fig-8} \caption{Drag resistivity scaled by $T^{2}$ as a function of temperature at $p=2.5 \times10^{13}$ cm$^{-2}$ and $d=15$ {\AA}. The solid (dashed) curve shows the corresponding dynamic (static) screening results.}\label{Fig-8:sd} \end{figure} \begin{figure}[!h] \includegraphics[width=8cm]{Fig-9} \caption{Coulomb drag resistivity as a function of temperature for various layer separations $d=15$ {\AA}, $20$ {\AA} and $25$ {\AA} at $p=3\times10^{13}$ cm$^{-2}$.} \label{Fig-9:d9} \end{figure} \begin{figure}[!h] \includegraphics[width=8cm]{Fig-10} \caption{Inter-layer separation dependence of the Coulomb drag resistivity in a double-layer GaS at $T=50$ K and for $p=2, 2.5$ and $3\times10^{13}$ cm$^{-2}$.} \label{Fig-10:d10} \end{figure} \begin{figure}[!h] \includegraphics[width=8cm]{Fig-11} \caption{Density dependence of the Coulomb drag resistivity in a double-layer GaS system at $T=10,40,60$ and $100$ K with $d=15$ {\AA}.} \label{Fig-11:d11} \end{figure} \begin{figure*}[ht!] \centering \includegraphics[width=7cm]{Fig-12a} \hspace{0.5cm} \includegraphics[width=7cm]{Fig-12b} \caption{Coulomb drag resistivity as a function of temperature in (a) the double-layer GaS, GaSe and InSe systems at $p=3\times10^{13}$ cm$^{-2}$ and $d=15$ {\AA} with $k_{F}d\sim 2-3$ and (b) a double-layer GaAs-based 2D electron gas with $k_{F}d= 2.44$, compared to those given in (a).}\label{Fig-12:d12} \end{figure*} As we mentioned before, the effect of the inter-layer spacing, $d$, on the drag transresistivity is also of interest.
In Figure \ref{Fig-9:d9}, we present calculations of the drag resistivity as a function of temperature for three layer separations ($d=15$, $20$ and $25$ {\AA}) at a fixed density ($p=3\times10^{13}$ cm$^{-2}$). Our results suggest that $\rho_{D}$ decreases with increasing $d$. This is not surprising, because increasing $d$ weakens the Coulomb interaction between the layers and consequently the inter-layer coupling. Obviously, no Coulomb drag effect appears when the two layers are sufficiently far apart. In addition, Figure \ref{Fig-10:d10} shows the drag resistivity as a function of the distance between the layer centers at $T=50$ K for three different hole densities, $p=2, 2.5$ and $3\times10^{13}$ cm$^{-2}$. As can be observed, $\rho_{D}$ decreases exponentially with increasing layer separation for all hole densities. Interestingly, this behavior has been observed experimentally for a double-quantum-wire system \cite{debray2001experimental}. To illustrate the behavior of the drag transresistivity more clearly, we display the variation of $\rho_{D}$ with hole density at four different temperatures, $T=10,40,60$ and $100$ K, in Figure \ref{Fig-11:d11}. Calculations show that the density dependence of the drag resistivity varies with temperature and can approximately be given as $p^{-4}$ ($p^{-4.5}$) at low (intermediate) temperatures. We are now set to look into other important materials in the same family as GaS. In Figure \ref{Fig-12:d12}(a), we compare the temperature dependence of the drag resistivity of double-layer GaS with those obtained for double-layer GaSe and double-layer InSe systems at fixed density and layer separation. It should be pointed out that each of these materials is defined by a different set of parameters, including the effective mass, relative permittivity and Mexican-hat height.
Here, the corresponding parameters for GaSe are $m^{*}=0.6 m_{0}$, $\kappa=3.55$ and $E_{0}=58.7$ meV, and for InSe the parameters are $m^{*}=0.926 m_{0}$, $\kappa=3.38$ and $E_{0}=34.9$ meV \cite{wickramaratne2015electronic,das2019charged}. The obtained results suggest that the drag transresistivity decreases with increasing Mexican-hat height, so that GaS, with the largest Mexican hat, takes the smallest drag resistivity at any temperature, while InSe provides the highest drag resistivity among the materials studied here. In addition, while $\rho_{D}$ shows a $T^{2}$ dependence at low temperatures, the drag resistivities of all double-layer systems studied here grow faster with $T$ (i.e., as $T^{2.8}$) at intermediate temperatures. This occurs because of the enhanced contribution of the plasmon modes to the drag resistivity. In Figure \ref{Fig-12:d12}(b), we compare our results shown in Figure \ref{Fig-12:d12}(a) with a system consisting of two parallel layers of GaAs-based 2D electron gas (quantum well) with parabolic energy dispersion. We set $k_{F}d=2.44$ for this 2D electron gas system, close to the values used in Figure \ref{Fig-12:d12}(a), $k_{F}d \sim 2-3$, to ensure all systems are in the same coupling regime. It should be noted that the carrier densities and Fermi energies do not match in this comparison. The results suggest that the drag resistivity of the parabolic-dispersion system takes higher values than those of our Mexican-hat dispersion systems. It seems the differences in the SPE regions of the two systems could account for this observation; the opening of a gap in the SPE continuum reduces the contribution of the region with $Im\Pi\neq0$ to the drag resistivity in our system compared to the conventional 2D electron gas.
\\ \section{Conclusion} To summarize, we first investigated the behavior of the plasmons and the SPE region in a double-layer system with Mexican-hat band structure, consisting of two p-type doped GaS monolayers in close proximity with no tunneling. Our numerical results show that the damped optical and acoustic plasmon branches shift to higher energies and eventually enter the SPE region as the density is increased. In addition, the density dependence of the plasmon modes is approximately $p^{0.5}$. Moreover, at fixed density and finite temperature, plasmon damping accelerates in comparison with zero temperature. We also found that the dependence of the acoustic and optical modes on the inter-layer spacing can be approximated as $d^{0.2}$ and $d^{0.1}$, respectively. The acoustic (optical) plasmon branch follows a $q$ ($\sqrt{q}$) dispersion at long wavelengths and shows a $q^{1.3}$ ($q$) behavior at larger wave vectors, before entering the SPE damping region. According to our results, the drag resistivity has a $T^{2}$ ($T^{2.8}$) temperature dependence at low (intermediate) temperatures. Our calculations also show that the drag resistivity decreases exponentially with increasing layer separation, similar to the case of a double-quantum-wire system. Furthermore, although the variation of the transresistivity with hole density can be approximated as $p^{-4}$ at low temperatures, it exhibits a faster reduction at higher temperatures. We found that the variation of $\rho_{D}$ with carrier density (layer separation) follows the same behavior as in a double-quantum-well (double-quantum-wire) system. Finally, we compared the temperature dependence of the drag resistivity for three materials with Mexican-hat valence band dispersion (GaS, GaSe and InSe) and showed that the drag resistivity of GaS is the smallest while its Mexican hat is the largest.
\section{introduction} Sobolev and Moser-Trudinger type inequalities play an important role in both PDE and geometry. On one hand, these inequalities are widely used in the study of existence and regularity of solutions to partial differential equations. On the other hand, it was also discovered that they are equivalent to isoperimetric and isocapacitary inequalities \cite{Ma}. Beyond the classical Sobolev and Moser-Trudinger inequalities, analogous inequalities for a series of fully nonlinear equations with variational structure have been developed, including both real and complex Hessian equations \cite{W1, TrW, TiW, AC20}. In particular, the Moser-Trudinger type inequality for the complex Monge-Amp\`ere equation has been established \cite{BB, C19, GKY, AC19, WWZ1}. The Moser-Trudinger type inequality can also be related to the Skoda integrability of plurisubharmonic functions \cite{DNS, DN, Ka, DMV}. In this paper, we study the relations between trace inequalities (of Sobolev and Moser-Trudinger type), isocapacitary inequalities and the regularity of the complex Hessian and Monge-Amp\`ere equations with respect to a general nonnegative Borel measure $\mu$. Our results generalize the classical trace inequalities \cite{AH}. Here `trace' refers to the situation where $\mu$ lives on a domain $\Omega$, for instance as the surface measure of a smooth submanifold of $\Omega$. The trace and isocapacitary inequalities for the real Hessian equations were obtained in \cite{XZ}. Let $\Omega\subset {\mathbb C}^{n}$ be a pseudoconvex domain with smooth boundary $\partial \Omega$ and $\omega$ be the K\"ahler form associated to the standard Euclidean metric. Let $\mathcal{PSH}_k(\Omega)$ be the set of $k$-plurisubharmonic functions on $\Omega$ and $\mathcal{PSH}_{k,0}(\Omega)$ be the set of functions in $\mathcal{PSH}_{k}(\Omega)$ with vanishing boundary value.
Let $\mathcal F_k(\Omega)$ be the set of $k$-plurisubharmonic functions which can be decreasingly approximated by functions in $\mathcal{PSH}_{k,0}(\Omega)\cap C(\overline{\Omega})\cap C^2(\Omega)$ \cite{C06}. For $u\in\mathcal F_k(\Omega)$, denote the $k$-Hessian energy by $$\mathcal E_k(u)=\int_{\Omega}(-u)(dd^cu)^k\wedge\omega^{n-k}$$ and the norm by $$\|u\|_{\mathcal{PSH}_{k,0}(\Omega)}:=\left(\int_{\Omega}(-u)(dd^cu)^k\wedge\omega^{n-k}\right)^{\frac{1}{k+1}}.$$ In particular, when $k=n$, we write $\|u\|_{\mathcal{PSH}_0(\Omega)}=\|u\|_{\mathcal{PSH}_{n,0}(\Omega)}$ for simplicity. Let $\mu$ be a nonnegative Borel measure with finite mass on $\Omega \subset {\mathbb C}^n$. We will consider the trace inequalities and isocapacitary inequalities with respect to $\mu$ as in the classical case \cite{AH}. We recall the capacity for plurisubharmonic functions. The relative capacity for plurisubharmonic functions was introduced by Bedford-Taylor \cite{BT, B}. For a Borel subset $E\subset \Omega$, the {\it $k$-capacity} is defined as $$\text{Cap}_k(E,\Omega)=\sup\left\{\left.\int_{E}(dd^cv)^k\wedge\omega^{n-k}\,\right|\, v\in\mathcal{PSH}_k(\Omega),-1\le v\le 0\right\}.$$ Throughout the paper, we will use $|\cdot|$ to denote the Lebesgue measure of a Borel subset. We follow \cite{Ma} to define the {\it capacity minimizing function} with respect to $\mu$: $$\nu_k(s,\Omega,\mu):=\inf\left\{\left.\text{Cap}_k(K,\Omega)\,\right|\,\mu(K)\ge s, K\Subset\Omega\right\},\ \ \ 0<s<\mu(\Omega).$$ Then we denote \begin{align}\label{cond-sobolev0} I_{k, p}(\Omega,\mu):=\begin{cases} \int_0^{\mu(\Omega)}\left(\frac{s}{\nu_k(s,\Omega,\mu)}\right)^{\frac{p}{k+1-p}}\,ds, & 0<p<k+1,\\[5pt] \displaystyle\sup_{t>0}\left\{\frac{t}{\nu_k(t,\Omega,\mu)^{\frac{p}{k+1}}}\right\}, & p\geq k+1, \end{cases} \end{align} and \begin{align}\label{cond-MT} I_{n}(\beta,\Omega,\mu):=\sup\left\{s\exp\left(\frac{\beta}{\nu_n(s,\Omega,\mu)^{\frac{q}{n+1}}}\right): \ 0<s<\mu(\Omega)\right\}.
\end{align} Note that if $\mu(P)>0$ for a $k$-pluripolar subset $P\subset \Omega$, then $I_{k, p}(\Omega,\mu)$, $I_{n}(\beta,\Omega,\mu)=+\infty$. Therefore, we only need to consider those measures which charge no mass on pluripolar subsets. The main result of this paper is as follows. \begin{theo}\label{main3} Suppose $\Omega\subset{\mathbb C}^n$ is a smooth, $k$-pseudoconvex domain, where $1\le k\le n$. \begin{enumerate} \item[(i)] The Sobolev type trace inequality \begin{align}\label{sobolev1} \sup\left\{\frac{\|u\|_{L^p(\Omega,\mu)}}{\|u\|_{\mathcal{PSH}_{k,0}(\Omega)}}: u\in \mathcal F_k(\Omega), 0<\|u\|_{\mathcal{PSH}_{k,0}(\Omega)}<\infty\right\}<+\infty \end{align} holds if and only if $I_{k, p}(\Omega,\mu)<+\infty$. Moreover, the Sobolev trace map $$Id:\,{\mathcal F}_k(\Omega)\hookrightarrow L^p(\Omega,\mu), \ 1<p<\infty$$ is a compact embedding if and only if \begin{align}\label{cond-compact} \lim_{s\to 0}\frac{s}{\nu_k(s,\Omega,\mu)^{\frac{p}{k+1}}}\to 0. \end{align} \item[(ii)] When $k=n$, for $q\in [1,\frac{n+1}{n}]$ and $\beta>0$, the Moser-Trudinger type trace inequality \begin{align}\label{MT} \sup\left\{\int_{\Omega}\exp\left(\beta\left(\frac{-u}{\|u\|_{\mathcal{PSH}_0(\Omega)}}\right)^{q}\right)\,d\mu: u\in \mathcal F_n(\Omega), 0<\|u\|_{\mathcal{PSH}_0(\Omega)}<\infty\right\} \end{align} holds if and only if $I_{n}(\beta,\Omega,\mu)<+\infty$. \end{enumerate} \end{theo} \begin{rem}\label{rem} (a) For $p\geq k+1$, $\beta>0$, the conditions $I_{k, p}(\Omega,\mu), I_{n}(\beta,\Omega,\mu)<+\infty$ are equivalent to the following isocapacitary type inequalities \begin{eqnarray} &&\mu(K)\le I_{k, p}(\Omega,\mu)\cdot\text{Cap}_k(K,\Omega)^{\frac{p}{k+1}}, \label{cond-sobolev1}\\ &&\mu(K)\leq I_{n}(\beta,\Omega,\mu)\cdot\exp\left(-\frac{\beta}{\text{Cap}_n(K,\Omega)^{\frac{q}{n+1}}}\right) \label{cond-MT1} \end{eqnarray} for $K\Subset \Omega$. 
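For $p\ge k+1$, the equivalence between the finiteness of $I_{k,p}(\Omega,\mu)$ and the isocapacitary inequality \eqref{cond-sobolev1} follows directly from the definitions; the elementary argument can be sketched as follows (this sketch is ours, not part of the original text).

```latex
% (=>) Given a compact K \Subset \Omega, put s = \mu(K). Since K itself
% competes in the infimum defining \nu_k, we have
% \nu_k(s,\Omega,\mu) \le \text{Cap}_k(K,\Omega), hence
\mu(K) = s
  \le I_{k,p}(\Omega,\mu)\,\nu_k(s,\Omega,\mu)^{\frac{p}{k+1}}
  \le I_{k,p}(\Omega,\mu)\,\text{Cap}_k(K,\Omega)^{\frac{p}{k+1}}.
% (<=) Conversely, if \mu(K) \le C\,\text{Cap}_k(K,\Omega)^{p/(k+1)} for all
% K \Subset \Omega, then every compact K with \mu(K) \ge s satisfies
% \text{Cap}_k(K,\Omega) \ge (s/C)^{(k+1)/p}, so
% \nu_k(s,\Omega,\mu) \ge (s/C)^{(k+1)/p} and therefore
\sup_{s>0}\,\frac{s}{\nu_k(s,\Omega,\mu)^{\frac{p}{k+1}}} \le C.
```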
By \cite{K96, DK}, it is known that when $\mu$ is the Lebesgue measure, \begin{eqnarray} |E|&\leq& C_{\lambda,\Omega}\cdot\text{Cap}_k(E,\Omega)^{\lambda},\ \lambda<\frac{n}{n-k},\ 1\le k\le n-1,\label{isocap1}\\[3pt] |E|&\leq& C_{\beta,n,\Omega}\cdot \exp\left(-\frac{\beta}{\text{Cap}_n(E,\Omega)^{\frac{1}{n}}}\right),\ 0<\beta<2n. \label{isocap} \end{eqnarray} It is still open whether $\beta$ can attain $2n$. By a result in \cite{BB}, the conclusion is true for subsets $E\Subset\Omega$ with ${\mathbb S}^1$-symmetry (invariant under the rotation $z\mapsto e^{{\sqrt{-1}}\theta}z$ for all $\theta\in {\mathbb R}^1$), when $\Omega\subset {\mathbb C}^n$ is a ball centered at the origin. See Remark \ref{rem1} for more explanations. (b) By the arguments of \cite[Section 5]{BB}, \eqref{MT} is equivalent to \begin{align*} \sup\left\{\int_{\Omega}\exp\left(k(-u)-\frac{n^nk^{1+n}{\mathcal E}_n(u)}{(n+1)^{1+n}\beta^n}\right)^{q}\,d\mu: \forall k>0, u\in \mathcal F_n(\Omega)\right\}<+\infty. \end{align*} (c) We can also prove that the quasi Moser-Trudinger type trace inequality is equivalent to the quasi Brezis-Merle type trace inequality (see Theorem \ref{remB}). Note that the proof for the case where $\mu$ is the Lebesgue measure in \cite{BB} used the thermodynamical formalism and a dimension induction argument. Our proof here uses the isocapacitary inequality \eqref{cond-MT1}. \end{rem} \begin{ex} Let $\mu$ be the measure with singularities of Poincar\'e type on the unit polydisk $\mathbb D^n\subset \mathbb C^n$, i.e., $$d\mu=\frac{1}{\prod_{j=1}^d|z^j|^2\left(1-\log |z^j|\right)^{1+\alpha}}\,dz^1\wedge d\bar z^1\wedge\cdots\wedge dz^n\wedge d\bar z^n, \ \ \ \text{where }d\le n, \ \alpha>0.$$ According to \cite[Lemma 4.1]{DL}, we have $$\mu(K)\le C\cdot {\text{Cap}_n(K,\mathbb D^n)}^{\alpha}.$$ By Theorem \ref{main3}, a Sobolev type inequality with respect to $d\mu$ holds for plurisubharmonic functions. \end{ex} Now we turn to the corresponding equations.
Consider the Dirichlet problem \begin{equation}\label{cMA} \begin{cases} (dd^cu)^k\wedge\omega^{n-k}=d\mu \ \ &\text{in\ $\Omega$,} \\ u=\varphi, &\text{on }\partial\Omega \end{cases} \end{equation} for a nonnegative Borel measure $\mu$. In a seminal work \cite{K98}, Ko\l{}odziej obtained the $L^\infty$-estimate and existence of continuous solutions for the complex Monge-Amp\`ere equation when $d\mu$ is dominated by a suitable function of capacity, especially for $d\mu\in L^p(\Omega)$. Furthermore, the solution is shown to be H\"older continuous under certain assumptions on $\Omega$ and $\varphi$ \cite{GKZ}. Generalizations to the complex Hessian equations were made by \cite{DK, Ngu}. These results were established by pluripotential theory. In \cite{WWZ2}, the authors present a new PDE proof for the complex Monge-Amp\`ere equation with $d\mu\in L^p(\Omega)$ based on the Moser-Trudinger type inequality. By Theorem \ref{main3}, we have \begin{theo}\label{main5} Let $\mu$ be a non-pluripolar, nonnegative Radon measure with finite mass. Then the following statements are equivalent: \begin{enumerate} \item [(i)] There exist $0<\delta<\frac{1}{k}$ and a constant $C>0$ depending on $\mu$ and $\Omega$ such that for any Borel subset $E\subset \Omega$, the Dirichlet problem \begin{align}\label{k-Dir} \begin{cases} (dd^cu)^k\wedge\omega^{n-k}=\chi_E\,d\mu,\ \ \ &\text{in }\Omega, \\ u=0, &\text{on }\partial \Omega, \end{cases} \end{align} admits a continuous solution $u_E\in \mathcal{PSH}_{k,0}(\Omega)$ such that \begin{align}\label{infty} \|u_E\|_{L^{\infty}(\Omega,\mu)}\le C\mu(E)^{\delta}. \end{align} Here $\chi_E$ is the characteristic function of $E$. \item [(ii)] There exists $p\ge k+1$ such that $I_{k,p}(\Omega,\mu)<+\infty$. \end{enumerate} More precisely, $\delta$ and $p$ can be determined mutually by $p=\frac{k+1}{1-k\delta}$. \end{theo} \begin{rem} (1) The conclusion from (ii) to (i) in the above theorem also holds with general continuous boundary value. 
(2) When $d\mu$ is an integrable function, once we have the $L^{\infty}$-estimate for the complex Monge-Amp\`ere equation, the $L^{\infty}$-estimates for many other equations, including the complex Hessian equations and the $p$-Monge-Amp\`ere operators \cite{HL09}, can be derived by a simple comparison. In the real case, this is indicated by \cite{W2}. For example, we consider the complex $k$-Hessian equation \begin{equation}\label{eqhe} \begin{cases} (dd^cu)^k\wedge \omega^{n-k}=f\,\omega^n,\ \ \ &\text{in }\Omega, \\ u=\varphi, &\text{on }\partial \Omega. \end{cases} \end{equation} Suppose $d\mu=f\,\omega^n$ with $f\in L^{\frac{n}{k}}(\log L^1)^{n+\varepsilon}$. Let $v$ be the solution to the complex Monge-Amp\`ere equation $$ \begin{cases} (dd^cv)^n=f^{\frac{n}{k}}\,\omega^n,\ \ \ &\text{in }\Omega, \\ v=\varphi, &\text{on }\partial \Omega. \end{cases} $$ By elementary inequalities, $v$ is a subsolution to \eqref{eqhe}. Then $\|u\|_{L^\infty(\Omega)}\leq \|v\|_{L^\infty(\Omega)}\leq C$. However, when $\mu$ is a general measure, this comparison does not work. \end{rem} It is also interesting to ask when the solution is H\"older continuous. In \cite{DKN}, Dinh, Ko\l{}odziej and Nguyen introduced a new condition on $d\mu$ and proved that it is equivalent to the H\"older continuity of the solution to the complex Monge-Amp\`ere equation. We give a pure PDE proof, which also applies to the complex Hessian equations, based on the Sobolev type inequality for complex Hessian operators and the arguments in \cite{WWZ2}. As in \cite{DKN}, we denote by $W^*(\Omega)$ the set of functions $f\in W^{1,2}(\Omega)$ such that $$df\wedge d^cf\le T$$ for some closed positive $(1,1)$-current $T$ of finite mass on $\Omega$. Define a Banach norm by $$\|f\|_*:=\|f\|_{L^1(\Omega)}+\min\left\{\|T\|_{\Omega}^{\frac{1}{2}}\,\big|\,T\text{ as above}\right\},$$ where the mass of $T$ is defined by $\|T\|_{\Omega}:=\int_\Omega T\wedge\omega^{n-1}$.
\begin{theo}\label{main6} Suppose $1\le k\le n$ and $\Omega$ is a $k$-pseudoconvex domain with smooth boundary. Let $\mu$ be a Radon measure with finite mass, and let $\gamma\in\left(\frac{(n-k)(k+1)}{2nk+n+k}, k+1\right]$. The following statements are equivalent: \begin{enumerate} \item [(i)] The Dirichlet problem \eqref{cMA} admits a solution $u\in C^{0,\gamma'}(\Omega)$ with $$0<\gamma'<\frac{(2nk+n+k)\gamma-(n-k)(k+1)}{(n+1)k\gamma+(n+1)k^2+nk+k}.$$ \item [(ii)] There exists $C>0$ such that for every smooth function $f\in W^*(\Omega)$ with $\|f\|_*\le 1$, \begin{align}\label{cond-DKN} \mu(f):=\int_\Omega f\,d\mu\le C\|f\|_{L^1(\Omega)}^{\gamma}. \end{align} \end{enumerate} \end{theo} The structure of the paper is as follows: Section \ref{pre} is devoted to a review of the relative capacity for $k$-plurisubharmonic functions; in particular, we obtain several equivalent definitions of the capacity. In Section \ref{capest}, we establish the capacitary estimates for level sets of $k$-plurisubharmonic functions, which are the main tool in the proof of Theorem \ref{main3}. Theorem \ref{main3} will be proved case by case in Section \ref{mainpf}. In the last section, we apply Theorem \ref{main3} to the Dirichlet problem for the complex Hessian equations to prove Theorems \ref{main5} and \ref{main6}. \vskip 20pt \section{On relative capacities and relative extremal functions}\label{pre} In this section, we recall the relative capacity for plurisubharmonic functions \cite{BT, B}.
For a Borel subset $E\subset \Omega$, the {\it $k$-capacity} is defined as $$\text{Cap}_k(E,\Omega)=\sup\left\{\left.\int_{E}(dd^cv)^k\wedge\omega^{n-k}\,\right|\, v\in\mathcal{PSH}_k(\Omega),-1\le v\le 0\right\}.$$ It is well known that the $k$-capacity can be characterized by the {\it relatively $k$-extremal function} $u_{k,E,\Omega}^*$, which is the upper regularization of $$u_{k,E,\Omega}:=\sup\left\{v\,\left|v\in \mathcal{PSH}_k(\Omega),v\le -1\text{ on }E, v<0\text{ on }\Omega\right.\right\}.$$ We will usually write $u_{k,E}^*$ for simplicity if there is no confusion. When $E=K$ is a compact subset, we have $-1\le u_{k,K}^*\le 0$, and the complex Hessian measure satisfies $(dd^cu_{k,K}^*)^k\wedge\omega^{n-k}=0$ on $\Omega\setminus K$. Moreover, $\text{Cap}_k(K,\Omega)=0$ if $u_{k,K}^*>-1$ on $K$. The following well-known fact shows $u_{k,K}^*\in\mathcal{PSH}_{k,0}(\Omega)$. \begin{lem}\label{exb} If $K\subset \Omega$ is a compact subset, we have $u_{k,K}^*\big|_{\partial \Omega}=0$. \end{lem} \begin{proof} By \cite[Proposition 1.2]{KR}, there exists an exhaustion function $\psi\in C^{\infty}(\Omega)\cap \mathcal{PSH}_{k,0}(\Omega)$. By the maximum principle we have $-a:=\sup_K\psi<0$. Let $\hat \psi:=\frac{\psi}{a}$; then $\hat\psi\le -\chi_K$, and hence $\hat\psi\le u_{k,K}^*$ on $\Omega$. Since $\hat\psi(\xi)\to 0$ as $\xi\to z\in \partial\Omega$, we get the result. \end{proof} Suppose $\Omega$ is $k$-hyperconvex, so that $\mathcal{PSH}_{k,0}(\Omega)$ is non-empty. Inspired by \cite{XZ} for the real Hessian equations, we consider several capacities defined as follows. \begin{defi}\label{cap-2} (i) Let $K$ be a compact subset of $\Omega$.
Define \begin{align} &\widetilde{\text{Cap}}_{k,1}(K,\Omega):=\sup\left\{\left.\int_K (-v)(dd^cv)^k\wedge\omega^{n-k}\,\right|\, v\in \mathcal{PSH}_{k,0}(\Omega), -1\le v\le 0\right\} , \\ &\widetilde{\text{Cap}}_{k,2}(K,\Omega):=\inf\left\{\left.\int_{\Omega}(dd^cv)^k\wedge\omega^{n-k}\,\right|\,v\in \mathcal{PSH}_{k,0}(\Omega), v\big|_{K}\le -1\right\},\\ &\widetilde{\text{Cap}}_{k,3}(K,\Omega):=\inf\left\{\left.\int_{\Omega}(-v)(dd^cv)^k\wedge\omega^{n-k}\,\right|\,v\in \mathcal{PSH}_{k,0}(\Omega), v\big|_{K}\le -1\right\}. \end{align} (ii) For an open subset $O\subset \Omega$ and $j=1$, $2$, $3$, let $$\widetilde{\text{Cap}}_{k,j}(O,\Omega):=\sup\left\{\left.\widetilde{\text{Cap}}_{k,j}(K,\Omega)\,\right|\,\text{compact }K\subset O\right\}.$$ (iii) For a Borel subset $E\subset \Omega$ and $j=1$, $2$, $3$, let $$\widetilde{\text{Cap}}_{k,j}(E,\Omega):=\inf\left\{\left.\widetilde{\text{Cap}}_{k,j}(O,\Omega)\,\right|\,\text{open $O$ with }E\subset O\subset \Omega\right\}.$$ \end{defi} In order to show the equivalence of the capacities defined above, we need the following well-known comparison principle. \begin{lem}\label{compar} Suppose $\Omega$ is a $k$-hyperconvex domain with $C^1$-boundary. Let $u$, $v\in {\mathcal F}_k(\Omega)$. If $\liminf_{z\to \xi\in \partial \Omega}(u-v)\ge 0$ and $u\le v$ in $\Omega$, then $$\int_{\Omega}(dd^cu)^k\wedge\omega^{n-k}\ge \int_{\Omega}(dd^cv)^k\wedge\omega^{n-k},\ \ \ \int_{\Omega}(-u)(dd^cu)^k\wedge\omega^{n-k}\ge \int_{\Omega}(-v)(dd^cv)^k\wedge\omega^{n-k}.$$ \end{lem} \begin{proof} This is a direct consequence of integration by parts and the smooth approximation of functions in ${\mathcal F}_k(\Omega)$. \end{proof} \begin{lem}\label{cap-extre} Suppose $1\le k\le n$, and $K\subset \Omega$ is a compact subset. Then \begin{align} \widetilde{\text{Cap}}_{k,1}(K,\Omega)=\int_K(-u_{k,K}^*)(dd^cu_{k,K}^*)^k\wedge\omega^{n-k}.
\end{align} \end{lem} \begin{proof} First, by definition we have $$\widetilde{\text{Cap}}_{k,1}(K,\Omega)\ge \int_K(-u_{k,K}^*)(dd^cu_{k,K}^*)^k\wedge\omega^{n-k}.$$ For the reverse inequality, we choose $\{K_j\}$ to be a sequence of compact subsets of $\Omega$ with smooth boundaries $\partial K_j$ such that $$K_{j+1}\subset K_j,\ \ \ \bigcap_{j=1}^{\infty}K_j=K.$$ Using the smoothness of $\partial K_j$, the relatively extremal function $u_j:=u_{k,K_j}^*=u_{k,K_j}\in C(\overline \Omega)$. Note that $u_j$ increases to some function $v$ with $v^*=u_{k,K}^*$. Then for $u\in \mathcal{PSH}_{k,0}(\Omega)$ such that $-1\le u\le 0$, we have $u\ge u_j$ on $K_j$. Therefore, by Lemma \ref{compar}, \begin{eqnarray*} \int_{K}(-u)(dd^cu)^k\wedge\omega^{n-k} &\le& \int_{\{u_j\le u\}}(-u)(dd^cu)^k\wedge\omega^{n-k}\\ &\le& \int_{\{u_j\le u\}}(-u_j)(dd^cu_j)^k\wedge\omega^{n-k} \\ &\le& \int_{\Omega}(-u_j)(dd^cu_j)^k\wedge\omega^{n-k}= \int_{K_j}(-u_j)(dd^cu_j)^k\wedge\omega^{n-k}. \end{eqnarray*} Letting $j\to \infty$, we obtain $$\int_{K}(-u)(dd^cu)^k\wedge\omega^{n-k}\le \int_{\Omega}(-u_{k,K}^*)(dd^cu_{k,K}^*)^k\wedge\omega^{n-k}=\int_{K}(-u_{k,K}^*)(dd^cu_{k,K}^*)^k\wedge\omega^{n-k}.$$ This yields $$\widetilde{\text{Cap}}_{k,1}(K,\Omega)\le \int_{K}(-u_{k,K}^*)(dd^cu_{k,K}^*)^k\wedge\omega^{n-k},$$ thereby completing the proof. \end{proof} \begin{lem}\label{equal} For any Borel set $E\subset \Omega$, we have $$\widetilde{\text{Cap}}_{k,j}(E,\Omega)=\text{Cap}_k(E,\Omega),\ \ \ j=1,2,3.$$ \end{lem} \begin{proof} By definition, it suffices to prove the equalities when $E=K$ is a compact subset of $\Omega$.
Note that for $u$, $v\in \mathcal{PSH}_{k,0}(\Omega)$ such that $-1\le u\le 0$, $v\big|_{K}\le -1$, we have $$\int_K(-u)(dd^cu)^k\wedge\omega^{n-k}\le \int_K(dd^cu)^k\wedge\omega^{n-k}\le \int_K(dd^cu_{k,K}^*)^k\wedge\omega^{n-k}\le \int_{\Omega}(dd^cv)^k\wedge\omega^{n-k}.$$ Hence we obtain $$\widetilde{\text{Cap}}_{k,1}(K,\Omega)\le \text{Cap}_k(K,\Omega)\le \widetilde{\text{Cap}}_{k,2}(K,\Omega).$$ It suffices to prove \begin{equation}\label{capeq} \widetilde{\text{Cap}}_{k,2}(K,\Omega)\le \widetilde{\text{Cap}}_{k,3}(K,\Omega)\le \widetilde{\text{Cap}}_{k,1}(K,\Omega). \end{equation} As before, we choose $\{K_j\}$ to be a sequence of compact subsets in $\Omega$ with smooth boundaries $\partial K_j$ such that $$K_{j+1}\subset K_j,\ \ \ \bigcap_{j=1}^{\infty}K_j=K.$$ Then $u_j:=u_{k,K_j}^*=u_{k,K_j}\in C(\overline{\Omega})$, $u_j\uparrow v$ with $v^*=u_{k,K}^*$, and $u_j\big|_{K_j}\equiv-1$. For any $j$, we have \begin{eqnarray*} \widetilde{\text{Cap}}_{k,2}(K,\Omega)&\le& \int_{\Omega}(dd^cu_j)^k\wedge\omega^{n-k}\\ &=&\int_{\Omega}(-u_j)(dd^cu_j)^k\wedge\omega^{n-k}\le \int_{\Omega}(-v_j)(dd^cv_j)^k\wedge\omega^{n-k}, \end{eqnarray*} where $v_j$ is an arbitrary function in $\mathcal{PSH}_{k,0}(\Omega)$ such that $v_j\big|_{K_j}\le -1$. This implies $\widetilde{\text{Cap}}_{k,2}(K,\Omega)\le \widetilde{\text{Cap}}_{k,3}(K_j,\Omega)$. Then by Lemma \ref{compar}, we have $$\widetilde{\text{Cap}}_{k,3}(K_{j+1},\Omega)\le \int_{\Omega}(-u_{j})(dd^cu_{j})^k\wedge\omega^{n-k}=\widetilde{\text{Cap}}_{k,1}(K_{j},\Omega)=\text{Cap}_{k}(K_j,\Omega).$$ Letting $j\to\infty$, \eqref{capeq} holds by the convergence of $\text{Cap}_{k}(K_j,\Omega)$ and the weak continuity of $(-u_j)(dd^cu_j)^k\wedge\omega^{n-k}$.
\end{proof} \vskip 20pt \section{Capacitary estimates for level sets of plurisubharmonic functions}\label{capest} In this section, we establish a capacitary estimate for level sets of $k$-plurisubharmonic functions, which will play an important role in the proof of Theorem \ref{main3}. The analogous estimate for the real Hessian equation was obtained in \cite{XZ}. This estimate is a generalization of the capacitary estimates for the Wiener capacity \cite[Chapter 7]{AH}. First, we recall the {\it capacitary weak type inequality} \begin{align}\label{w-ineq} t^{k+1}\text{Cap}_k(\left\{z\in \Omega\,\left|\,u(z)\le -t\right.\right\},\Omega)\le \|u\|_{\mathcal{PSH}_{k,0}(\Omega)}^{k+1},\ \ \ \forall t>0. \end{align} This inequality was proved by \cite{ACKZ} for $k=n$, and by \cite{Lu} for general $k$. We prove the {\it capacitary strong type inequality} as follows. \begin{theo} Suppose $u\in \mathcal{PSH}_{k,0}(\Omega)\cap C^2(\Omega)\cap C(\overline{\Omega})$. For any $A>1$, we have \begin{align}\label{s-ineq} \int_0^{\infty}t^k\text{Cap}_k(\left\{z\in\Omega\,\left|\,u<-t\right.\right\},\Omega)\,dt\le \left(\frac{A}{A-1}\right)^{k+1}\log A \cdot\|u\|_{\mathcal{PSH}_{k,0}(\Omega)}^{k+1}. \end{align} \end{theo} \begin{proof} We use arguments similar to those of \cite{Ma} for the Laplacian and \cite{XZ} for the real Hessian case. For $t>0$, denote $$K_t:=\left\{z\in \Omega\,\left|\,u(z)\le -t\right.\right\}, \ \Omega_t:=\left\{z\in \Omega\,\left|\,u(z)< -t\right.\right\}$$ and $v_t:=u_{k,K_t}^*$.
For a Borel subset $E\subset \Omega$, we denote $$\phi(E):=\frac{\int_E(-u)(dd^cu)^k\wedge\omega^{n-k}}{\int_{\Omega}(-u)(dd^cu)^k\wedge\omega^{n-k}}.$$ For $A>1$, \begin{align} \int_0^{\infty}\phi(\Omega_t\setminus K_{At})\,\frac{dt}{t} \le & \int_0^{\infty}\phi(K_t\setminus K_{At})\,\frac{dt}{t} \nonumber \\ = & -\int_0^{\infty}\left(\int_t^{At}\frac{d\phi(K_s)}{ds}\,ds\right)\,\frac{dt}{t} \nonumber\\ = & -\int_0^{\infty}\left(\int_{\frac{s}{A}}^{s}\frac{dt}{t}\right)\frac{d\phi(K_s)}{ds}\,ds \nonumber \\ = & -\log A\int_0^{\infty}\frac{d\phi(K_s)}{ds}\,ds \nonumber\\ = & \lim_{t\to 0^+}\phi(K_t)\cdot\log A \nonumber \\ \le & \log A. \nonumber \end{align} This implies \begin{eqnarray}\label{key-1} \int_0^{\infty}\left\|u\cdot \chi_{\Omega_t\setminus K_{At}}\right\|_{\mathcal{PSH}_{k,0}(\Omega)}^{k+1}\, \frac{dt}{t}&=& \int_{0}^{\infty}\left(\int_{\Omega_t\setminus K_{At}}(-u)(dd^cu)^k\wedge\omega^{n-k}\right)\,\frac{dt}{t} \nonumber\\[4pt] &\le& \|u\|_{\mathcal{PSH}_{k,0}(\Omega)}^{k+1}\log A. \end{eqnarray} Then, for any $t>0$, we consider $$u^t:=\frac{u+t}{(A-1)t},\ \ \ \tilde{u}^t:=\max\{u^t, -1\}.$$ It is clear that $u^t,\tilde u^t\in \mathcal{PSH}_{k,0}(\Omega_t)\cap C^{0,1}(\overline\Omega_t)$ and $\tilde u^t= -1$ on $K_{At}$. We have \begin{align*} \|\tilde u^t\|_{\mathcal{PSH}_{k,0}(\Omega_t)}^{k+1}= & \int_{\Omega_t}(-\tilde{u}^t)(dd^c\tilde{u}^t)^k\wedge\omega^{n-k} \\ = & \int_{\Omega_t}d\tilde{u}^t\wedge d^c\tilde{u}^t\wedge (dd^c\tilde{u}^t)^{k-1}\wedge\omega^{n-k} \\ = & \int_{\Omega_t\setminus K_{At}}d\tilde{u}^t\wedge d^c\tilde{u}^t\wedge (dd^c\tilde{u}^t)^{k-1}\wedge\omega^{n-k} \\ \le & \int_{\Omega_t\setminus K_{At}}\left(-\frac{u}{(A-1)t}\right)(dd^c\tilde{u}^t)^{k}\wedge\omega^{n-k} \\ = & (A-1)^{-k-1}t^{-k-1}\int_{\Omega_t\setminus K_{At}}\left(-u\right)(dd^cu)^k\wedge\omega^{n-k}, \end{align*} where the inequality in the fourth line uses $\frac{\partial \tilde{u}^t}{\partial \nu}\ge 0$ almost everywhere on $\partial K_{At}$ for $t>0$.
That is, \begin{eqnarray*} \int_{\Omega_t}(-\tilde{u}^t)(dd^c\tilde{u}^t)^k\wedge\omega^{n-k}\le (A-1)^{-k-1}t^{-k-1}\phi(\Omega_t\setminus K_{At})\cdot \|u\|_{\mathcal{PSH}_{k,0}(\Omega)}^{k+1}. \end{eqnarray*} Now let $\hat v_t:=u_{k,K_{At},\Omega_t}^*$ be the relatively extremal function of $K_{At}$ with respect to $\Omega_t$. Note that $\hat v_t\ge \tilde{u}^t$ in $\Omega_t$, and $\hat v_t=\tilde{u}^t=0$ on $\partial \Omega_t$. By the comparison principle, we have \begin{align}\label{key-2} \text{Cap}_k(K_{At},\Omega_t)= & \int_{K_{At}}(-\hat v_t)(dd^c\hat v_t)^k\wedge\omega^{n-k} \le \int_{\Omega_t}(-\tilde{u}^t)(dd^c\tilde{u}^t)^k\wedge\omega^{n-k} \nonumber\\ \le & (A-1)^{-k-1}t^{-k-1}\phi(\Omega_t\setminus K_{At})\cdot \|u\|_{\mathcal{PSH}_{k,0}(\Omega)}^{k+1}. \end{align} Finally, by \eqref{key-1} and \eqref{key-2}, together with the substitution $\lambda=At$, we obtain \begin{eqnarray*} \int_0^{\infty}\lambda^k \text{Cap}_k(K_\lambda,\Omega)\,d\lambda &\le & A^{k+1}\int_0^{\infty}t^k \text{Cap}_k(K_{At},\Omega_t)\,dt \\ &\le & A^{k+1}(A-1)^{-k-1}\log A \cdot\|u\|_{\mathcal{PSH}_{k,0}(\Omega)}^{k+1}. \end{eqnarray*} \end{proof} \vskip 10pt \section{The Trace Inequalities}\label{mainpf} In this section, we are going to prove the trace inequalities in Theorem \ref{main3}. \subsection{The Sobolev type trace inequality (the case $0<p<k+1$).} First, we show $I_{k,p}(\Omega,\mu)<+\infty$ implies the Sobolev type inequality.
For $u\in \mathcal F_k(\Omega)$, denote $$S_{k,p}(\mu,u):=\sum_{j=-\infty}^{\infty}\frac{[\mu(K_{2^j}^u)-\mu(K_{2^{j+1}}^u)]^{\frac{k+1}{k+1-p}}}{\text{Cap}_k(K_{2^j}^u,\Omega)^{\frac{p}{k+1-p}}},\ \ \ K_s^u:=\{u\le -s\}.$$ By the elementary inequality $a^c+b^c\le (a+b)^c$ for $a$, $b\ge 0$ and $c\ge 1$, we get \begin{align*} S_{k,p}(\mu,u) \le & \sum_{j=-\infty}^{\infty}\frac{[\mu(K_{2^j}^u)-\mu(K_{2^{j+1}}^u)]^{\frac{k+1}{k+1-p}}}{[\nu_k(2^j,\Omega,\mu)]^{\frac{p}{k+1-p}}} \\ \le & \sum_{j=-\infty}^{\infty}\frac{[\mu(K_{2^j}^u)]^{\frac{k+1}{k+1-p}}-[\mu(K_{2^{j+1}}^u)]^{\frac{k+1}{k+1-p}}}{[\nu_k(2^j,\Omega,\mu)]^{\frac{p}{k+1-p}}} \\ \le & \int_0^{\infty}\frac{1}{[\nu_k(s,\Omega,\mu)]^{\frac{p}{k+1-p}}}\,d\left(s^{\frac{k+1}{k+1-p}}\right) = \frac{k+1}{k+1-p}I_{k,p}(\Omega,\mu). \end{align*} Then by the strong capacitary inequality \eqref{s-ineq} with $A=n$ and by integration by parts, \begin{align*} \int_{\Omega}(-u)^p\,d\mu = & \int_{\Omega}\int_0^{\infty}\chi_{\{s<-u\}}\,d\left(s^p\right)\,d\mu \\ = & \int_0^{\infty}\mu(K_s^u)d\left(s^p\right) \\ = & -\int_0^{\infty} s^p\,d\mu(K_s^u) \\ \le & \sum_{j=-\infty}^{\infty}2^{(j+1)p}\cdot [\mu(K_{2^j}^u)-\mu(K_{2^{j+1}}^u)] \\ \le & {S_{k,p}(\mu,u)}^{\frac{k+1-p}{k+1}}\cdot\left(\sum_{j=-\infty}^{\infty}2^{j(k+1)}\text{Cap}_k(K_{2^{j}}^u,\Omega)\right)^{\frac{p}{k+1}} \\ \le & (k+1){S_{k,p}(\mu,u)}^{\frac{k+1-p}{k+1}}\cdot \left(\int_0^{\infty}s^k\text{Cap}_k(K_s^u,\Omega)\,ds\right)^{\frac{p}{k+1}} \\ \le & C(n,k,\mu,p)\cdot\|u\|_{\mathcal{PSH}_{k,0}(\Omega)}^p, \end{align*} thereby completing the proof.\qed On the other hand, suppose there exists $p\in(0,k+1)$ such that the inequality \eqref{sobolev1} holds.
That is, for any $u\in{\mathcal F}_k(\Omega)$, $$\sup_{t>0}\{t\mu(K_t^u)^{\frac{1}{p}}\}\le \|u\|_{L^p(\Omega,\mu)}\le C\|u\|_{\mathcal{PSH}_{k,0}(\Omega)}.$$ By the definition of $\nu_k$, for any integer $j<\frac{\log \mu(\Omega)}{\log 2}$, we can choose a compact subset $K_j\subset \Omega$ such that \begin{eqnarray*} \mu(K_j)\ge 2^j, \ \text{Cap}_k(K_j,\Omega)\le 2\nu_k(2^j,\Omega,\mu). \end{eqnarray*} Then by Lemma \ref{equal}, we can choose $u_j\in{\mathcal F}_k(\Omega)$ such that \begin{eqnarray*} u_j\big|_{K_j}\le -1,\ {\mathcal E}_k(u_j)\le 2\text{Cap}_k(K_j,\Omega). \end{eqnarray*} Then for integers $i$, $m$ with $-\infty< i< m$ and $\frac{\log \mu(\Omega)}{\log 2}\in (m,m+1]$, let $$\gamma_j:=\left(\frac{2^j}{\nu_k(2^j,\Omega,\mu)}\right)^{\frac{1}{k+1-p}}, \ \ \forall j\in [i,m] \text{\ and \ } u_{i,m}:=\sup_{i\le j\le m}\{\gamma_ju_j\}.$$ Since $u_{i,m}\in {\mathcal F}_k(\Omega)\cap C(\overline{\Omega})$, we have $${\mathcal E}_k(u_{i,m})\le C_{n,k}\sum_{j=i}^m\gamma_j^{k+1}{\mathcal E}_k(u_j)\le C_{n,k}\sum_{j=i}^m\gamma_j^{k+1}\nu_k(2^j,\Omega,\mu).$$ Note that for $i\le j\le m$ we have $$2^j\le\mu(K_j)\le \mu(K_{\gamma_j}^{u_{i,m}}).$$ This implies \begin{eqnarray*} \|u_{i,m}\|^p_{\mathcal{PSH}_{k,0}(\Omega)}&\ge & C^{-p}C_{n,k,p}\int_{\Omega}|u_{i,m}|^p\,d\mu \\ &\ge & C\int_0^\infty\left(\inf\left\{t\,\left|\,\mu(K_t^{u_{i,m}})\le s\right.\right\}\right)^p\,ds \\ &\ge & C\sum_{j=i}^m \left(\inf\left\{t\,\left|\,\mu(K_t^{u_{i,m}})\le 2^j\right.\right\}\right)^p \cdot 2^j \\ &\ge & C\sum_{j=i}^m\gamma_j^p2^j \\ &\ge & C\frac{\sum_{j=i}^m\gamma_j^p2^j}{\left(\sum_{j=i}^m\gamma_j^{k+1}\nu_k(2^j,\Omega,\mu)\right)^{\frac{p}{k+1}}} \|u_{i,m}\|^p_{\mathcal{PSH}_{k,0}(\Omega)} \\ &\ge & C\frac{\sum_{j=i}^m2^{\frac{j(k+1)}{k+1-p}}\nu_k(2^j,\Omega,\mu)^{-\frac{p}{k+1-p}}}{\left(\sum_{j=i}^m2^{\frac{j(k+1)}{k+1-p}}\nu_k(2^j,\Omega,\mu)^{-\frac{p}{k+1-p}}\right)^{\frac{p}{k+1}}} \|u_{i,m}\|^p_{\mathcal{PSH}_{k,0}(\Omega)} \\ &= &
C\left(\sum_{j=i}^m2^{\frac{j(k+1)}{k+1-p}}\nu_k(2^j,\Omega,\mu)^{-\frac{p}{k+1-p}}\right)^{\frac{k+1-p}{k+1}} \|u_{i,m}\|^p_{\mathcal{PSH}_{k,0}(\Omega)} . \end{eqnarray*} Consequently, \begin{eqnarray*} I_{k,p}(\Omega,\mu)&\le & \lim_{i\to -\infty}\sum_{j=i}^m 2^{\frac{(j+1)(k+1)}{k+1-p}}\nu_k(2^j,\Omega,\mu)^{-\frac{p}{k+1-p}}+\int_{2^m}^{\mu(\Omega)} \left(\frac{s}{\nu_k(s,\Omega,\mu)}\right)^{\frac{p}{k+1-p}}\,ds\\ &\le & (1+\mu(\Omega))\lim_{i\to -\infty}\sum_{j=i}^m 2^{\frac{(j+1)(k+1)}{k+1-p}}\nu_k(2^j,\Omega,\mu)^{-\frac{p}{k+1-p}}<+\infty.\qed \end{eqnarray*} \subsection{The Sobolev type trace inequality (the case $p\ge k+1$).} First, we assume $I_{k,p}(\Omega,\mu)<+\infty$ and prove the Sobolev type inequality. For $u\in \mathcal F_k(\Omega)$, we have \begin{align*} \int_{\Omega}(-u)^p\,d\mu= & p\int_0^{\infty}\mu(K_{t})t^{p-1}\,dt \\ \le & pI_{k,p}(\Omega,\mu)\int_0^{\infty}{\text{Cap}_k(K_t, \Omega)}^{\frac{p}{k+1}}t^{p-1}\,dt \\ = & pI_{k,p}(\Omega, \mu)\int_0^{\infty}t^{p-1-k}t^k{\text{Cap}_k(K_t, \Omega)}^{\frac{p-1-k}{k+1}+1}\,dt \\ \le & pI_{k,p}(\Omega,\mu)\cdot\|u\|_{\mathcal{PSH}_{k,0}(\Omega)}^{p-1-k}\int_0^{\infty}t^k\text{Cap}_k(K_t,\Omega)\,dt \\ \le & \left[p I_{k,p}(\Omega,\mu)\left(\frac{A}{A-1}\right)^{k+1}\log A\right]\cdot\|u\|_{\mathcal{PSH}_{k,0}(\Omega)}^{p}. \end{align*} The sufficient part is proved. \vskip 10pt Conversely, let $u=u_{k,K}^*$ be the relative extremal function of a compact set $K\subset \Omega$; then $$\mu(K)^{\frac{1}{p}}\le \|u_{k,K}^*\|_{L^p(\Omega,\mu)}\le C\cdot\text{Cap}_k(K,\Omega)^{\frac{1}{k+1}}.$$ By Remark \ref{rem} (c), $I_{k,p}(\Omega,\mu)<+\infty$. \subsection{Compactness} In this section, we are going to consider the compactness of the embedding induced by inequality \eqref{sobolev1}. First, we recall the Poincar\'e type inequality for complex Hessian operators.
\begin{theo}\label{quotient1}\cite{Hou, AC20} Suppose $\Omega$ is a pseudoconvex domain with smooth boundary, and $1\le l< k\le n$. Then there exists a uniform constant $C>0$ depending on $k$, $l$ and $\Omega$ such that \begin{align} \|u\|_{\mathcal{PSH}_{l,0}(\Omega)}\le C\|u\|_{\mathcal{PSH}_{k,0}(\Omega)},\ \ \ \forall u\in \mathcal{PSH}_{k,0}(\Omega). \end{align} \end{theo} \begin{cor}\label{ccor} Suppose $K$ is a compact subset of $\Omega$ with smooth boundary $\partial K$, and $1\le l < k\le n$. Then we have \begin{align}\label{P} \text{Cap}_l(K,\Omega)\le C\cdot {\text{Cap}_k(K,\Omega)}^{\frac{l+1}{k+1}}. \end{align} \end{cor} \begin{proof} Let $u\in \mathcal{PSH}_{k,0}(\Omega)$ such that $u\le -\chi_K$. By $\mathcal{PSH}_{k,0}(\Omega)\subset \mathcal{PSH}_{l,0}(\Omega)$, we have \begin{eqnarray*} \text{Cap}_l(K,\Omega)=\widetilde{\text{Cap}}_{l,3}(K,\Omega)&\le& \int_\Omega(-u)(dd^cu)^l\wedge\omega^{n-l}\\ &\le& C\left[\int_{\Omega}(-u)(dd^cu)^{k}\wedge\omega^{n-k}\right]^{\frac{l+1}{k+1}}. \end{eqnarray*} Then \eqref{P} follows by taking the infimum. \end{proof} Now, we are in a position to show the compactness of the Sobolev trace inequality. First we need the following compactness theorem for the classical Sobolev embedding. \begin{theo}\label{Maz}\cite{Ma} Suppose $p\ge 2$. The embedding $$Id:\,W^{1,2}_0(\Omega)\to L^{p}(\Omega,\mu)$$ is compact if and only if \begin{align}\label{cond-com-1} \lim_{s\to 0}\frac{s}{\nu_1(s,\mu)^{\frac{p}{2}}}= 0. \end{align} \end{theo} By Corollary \ref{ccor}, condition \eqref{cond-compact} implies \eqref{cond-com-1}. \noindent{\it Proof of compactness in Theorem \ref{main3}(i).} First, we consider the ``if'' part. For a sequence $u_j\in \mathcal F_k(\Omega)$ with bounded $\|u_j\|_{\mathcal{PSH}_{k,0}(\Omega)}$, we can obtain the boundedness of $\|u_j\|_{W^{1,2}_0(\Omega)}$ by Theorem \ref{quotient1}.
Then by Theorem \ref{Maz}, we can find a subsequence $\{u_{j_l}\}$ such that $u_{j_l}$ converges to $u$ in $L^{\frac{p+2}{2}}(\Omega,\mu)$. Hence, we have \begin{align} \int_{\Omega}|u_{j_l}-u|^p\,d\mu\le & \left(\int_{\Omega}|u_{j_l}-u|^{\frac{p+2}{2}}\,d\mu\right)^{\frac{1}{2}}\cdot \left(\int_{\Omega}|u_{j_l}-u|^{\frac{3p-2}{2}}\,d\mu\right)^{\frac{1}{2}} \nonumber\\ \le & C\|u_{j_l}-u\|_{L^{\frac{p+2}{2}}(\Omega,\mu)}\cdot (\|u_{j_l}\|_{\mathcal{PSH}_{k,0}(\Omega)}^{\frac{3p-2}{4}}+\|u\|^{\frac{3p-2}{4}}_{\mathcal{PSH}_{k,0}(\Omega)})\to 0, \end{align} thereby completing the proof. For the ``only if'' part, we assume $\mathcal{PSH}_{k,0}(\Omega)$ is compactly embedded into $L^p(\Omega,\mu)$. Similar to the linear case \cite{Ma}, we will show that for any $s>0$, there exists $\varepsilon(s)$ satisfying $\varepsilon(s)\to 0$ as $s\to 0$, such that for any compact subset $K\subset\Omega$ with $\mu(K)<s$, the inequality \begin{align}\label{equi-norm} \left(\int_K(-u)^p\,d\mu\right)^{\frac{1}{p}}\le \varepsilon(s)\|u\|_{\mathcal{PSH}_{k,0}(\Omega)} \end{align} holds for all $u\in \mathcal{PSH}_{k,0}(\Omega)$. We show this by contradiction. Assume that there exist a sequence of compact subsets $K_j\subset\Omega$ with $\mu(K_j)\to 0$, a sequence $\{u_j\}_{j=1}^{\infty}\subset \mathcal{PSH}_{k,0}(\Omega)$, and a uniform constant $c>0$ such that $$\int_{K_j}(-u_j)^p\,d\mu\ge c\|u_j\|_{\mathcal{PSH}_{k,0}(\Omega)}^p.$$ By scaling, we may assume $\|u_j\|_{\mathcal{PSH}_{k,0}(\Omega)}=1$. Then by the compactness of the embedding, there is a subsequence, still denoted by $\{u_j\}$, which converges to some $u\in L^p(\Omega,\mu)$ in the $L^p$ sense. Now we consider a sequence of cut-off functions $\eta_j\in C^{\infty}_0(\Omega)$ such that $\text{supp}(\eta_j)\subset K_j$. Then by $\mu(K_j)\to 0$, $\eta_j^{\frac{1}{p}}u_j$ converges to $0$ in the $L^p(\Omega,\mu)$ sense. Hence, $\|u_j\|_{L^p(K_j,\mu)}\to 0$ as $j\to \infty$, which is a contradiction.
Finally, for any $K\Subset\Omega $ with $\mu(K)<s$, by letting $u=u_{k,K}^*$ in \eqref{equi-norm}, we have $$\frac{\mu(K)}{\text{Cap}_k(K,\Omega)^{\frac{p}{k+1}}}\le \varepsilon(s)^p\to 0.\qed$$ \subsection{The Moser-Trudinger type trace inequality (Proof of Theorem \ref{main3}(ii))} Let $\mu$ be a positive Radon measure. Then \begin{eqnarray*} &&\int_{\Omega}\exp\left(\beta\left(\frac{-u}{\|u\|_{\mathcal{PSH}_0(\Omega)}}\right)^{q}\right)\,d\mu \\ &=& \sum_{i=0}^{\infty}\frac{\beta^i}{i!}\int_{\Omega}\left(\frac{-u}{\|u\|_{\mathcal{PSH}_0(\Omega)}}\right)^{q i}\,d\mu \\ &= & \sum_{i<\frac{n+1}{q}}\frac{\beta^i}{i!}\int_{\Omega}\left(\frac{-u}{\|u\|_{\mathcal{PSH}_0(\Omega)}}\right)^{q i}\,d\mu +\sum_{i\ge \frac{n+1}{q}}\frac{\beta^i}{i!}\int_{\Omega}\left(\frac{-u}{\|u\|_{\mathcal{PSH}_0(\Omega)}}\right)^{q i}\,d\mu \\ &=:& I+II. \end{eqnarray*} Since $I_{n}(\beta,\Omega,\mu)<+\infty$ implies $I_{n,p}(\Omega,\mu)<+\infty$ for $0<p<n+1$, we have $I\leq C$. It suffices to estimate $II$. We have \begin{align}\label{key-3} II= & \sum_{i\ge \frac{n+1}{q}}\frac{\beta^i}{i!}\|u\|_{\mathcal{PSH}_0(\Omega)}^{-q i}\int_{\Omega}(-u)^{q i}\,d\mu \nonumber\\ = & -\sum_{i\ge \frac{n+1}{q}}\frac{\beta^i}{i!}\|u\|_{\mathcal{PSH}_0(\Omega)}^{-q i}\int_{0}^{\infty}t^{q i}\frac{d\mu(K_t)}{dt}\,dt \nonumber\\ = & \sum_{i\ge \frac{n+1}{q}}\frac{\beta^i}{i!}\|u\|_{\mathcal{PSH}_0(\Omega)}^{-q i}\int_0^{\infty}\mu(K_t)\,d(t^{q i}) \nonumber \\ \le & \sum_{i\ge \frac{n+1}{q}}\frac{\beta^i}{i!}\,q i\int_0^{\infty}\frac{\text{Cap}(K_t,\Omega)}{t^{-n}\|u\|^{n+1}}\left(\frac{\mu(K_t)}{\text{Cap}(K_t,\Omega)^{\frac{q i}{n+1}}}\right)\,dt \nonumber\\ \le & \beta q \|u\|^{-n-1}\int_0^{\infty}\sum_{i=0}^{\infty}\frac{\beta^i}{i!}\left(\frac{\mu(K_t)}{\text{Cap}(K_t,\Omega)^{\frac{q i}{n+1}}}\right)\,t^n \text{Cap}(K_t,\Omega)\,dt \nonumber \\ = & \beta q \|u\|^{-n-1}\int_0^{\infty}\mu(K_t)\exp\left(\frac{\beta}{\text{Cap}(K_t,\Omega)^{\frac{q}{n+1}}}\right)t^n \text{Cap}(K_t,\Omega)\, dt, \nonumber \\ \le & \beta q
I_{n}(\beta,\Omega,\mu)\left(\frac{A}{A-1}\right)^{n+1}\log A,\nonumber \end{align} where we have used \eqref{w-ineq} at the fourth line and \eqref{s-ineq} at the last line. \qed \begin{rem}\label{rem1} When the measure $d\mu$ is the Lebesgue measure and $\Omega=B_1$ is the unit ball centered at the origin, as proved in \cite{K96}, there holds \begin{equation}\label{isomoser} |K|\le C_{\lambda,\Omega,n}\exp\left\{-\frac{\lambda}{\text{Cap}_n(K,\Omega)^{\frac{1}{n}}}\right\} \end{equation} for any $0<\lambda<2n$. In particular, when $K=B_r$ with $r<1$, by standard computations, $$|B_r|e^{\frac{\lambda}{\text{Cap}_n(B_r,\Omega)^{\frac{1}{n}}}}=C_nr^{2n-\lambda}.$$ Hence \eqref{isomoser} does not hold when $\lambda>2n$. It is natural to ask if $\lambda$ can attain the optimal constant $2n$. As shown in \cite{BB}, for those $u\in \mathcal{PSH}_0(\Omega)$ with ${\mathbb S}^1$-symmetry, i.e.\ $u(z)=u(e^{{\sqrt{-1}} \theta}z)$ for all $\theta\in{\mathbb R}$, \eqref{MT} holds with $\beta=2n$. Then by the proof of Theorem \ref{main3} (ii) with ${\mathbb S}^1$-symmetry, there holds $$|E|\le C\exp\left\{-\frac{2n}{\text{Cap}_n(E,\Omega)^{\frac{1}{n}}}\right\}$$ for any ${\mathbb S}^1$-invariant subset $E$. Moreover, by \cite{BB14}, the Schwarz symmetrization $\hat u$ of $u_{E}^*$ is plurisubharmonic and has smaller $\|\cdot\|_{\mathcal{PSH}_0( \Omega)}$-norm. This leads to $$|E|e^{\frac{2n}{\text{Cap}_n(E,\Omega)^{\frac{1}{n}}}}\le |B_r|e^{\frac{2n}{\text{Cap}_n(B_r,\Omega)^{\frac{1}{n}}}}\equiv C_n|B_1|.$$ \end{rem} \subsection{The Brezis-Merle type trace inequality} Similar to \cite{BB}, we can obtain a relationship between the Brezis-Merle type trace inequality and the Moser-Trudinger type inequality.
\begin{theo}\label{remB} Let $\beta>0$ and $q\in [1,\frac{n+1}{n}]$. The Moser-Trudinger type trace inequality \eqref{MT} holds for any $0<\lambda<\beta$ if and only if, for any $0<\lambda<\beta$, the Brezis-Merle type trace inequality \begin{align}\label{BM} \sup\left\{\int_{\Omega}\exp\left(\lambda\left(\frac{-u}{{\mathcal M}[u]^{\frac{1}{n}}}\right)^{\frac{nq}{n+1}}\right)\,d\mu: u\in \mathcal F_n(\Omega), 0<\|u\|_{\mathcal{PSH}_0(\Omega)}<\infty\right\}<+\infty \end{align} holds. Here ${\mathcal M}[u]:=\int_{\Omega}(dd^cu)^n$. \end{theo} \begin{proof} First, we prove the ``if'' part. By Theorem \ref{main3}(iv), there is $C>0$ such that for any compact subset $K\subset \Omega$, $$\mu(K)e^{\lambda\frac{1}{\text{Cap}_n(K,\Omega)^{\frac{q}{n+1}}}}\le C.$$ For $u\in{\mathcal F}_n(\Omega)$, we denote $K_t:=\{u\le -t\}$, $t>0$. By the comparison principle, we have $${\mathcal M}[u]=\int_{\Omega}(dd^cu)^n\ge t^n\int_{\Omega}(dd^cu_{n,K_t}^*)^n=t^n\text{Cap}_n(K_t,\Omega), \ \forall t>0.$$ Therefore, $$\mu(K_t)\le Ce^{-\lambda\frac{1}{\text{Cap}_n(K_t,\Omega)^{\frac{q}{n+1}}}}\le Ce^{-\lambda\frac{t^{\frac{nq}{n+1}}}{{\mathcal M}[u]^{\frac{q}{n+1}}}}.$$ Then for every $0<\varepsilon<\lambda$, \begin{eqnarray*} \int_{\Omega}e^{(\lambda-\varepsilon)\frac{(-u)^{\frac{nq}{n+1}}}{{\mathcal M}[u]^{\frac{q}{n+1}}}}\,d\mu &=& \sum_{j=0}^{\infty}\frac{(\lambda-\varepsilon)^j}{j!}\int_{\Omega}\frac{(-u)^{\frac{nqj}{n+1}}}{{\mathcal M}[u]^{\frac{qj}{n+1}}}\,d\mu \\ &= & \sum_{j=0}^{\infty}\frac{(\lambda-\varepsilon)^j}{j!}\int_0^{\infty}\frac{\mu(K_t)}{{\mathcal M}[u]^{\frac{qj}{n+1}}}\,d\left(t^{\frac{nqj}{n+1}}\right) \\ &\le & C\sum_{j=1}^{\infty}\frac{(\lambda-\varepsilon)^j}{j!}\int_0^{\infty}e^{-\lambda\frac{t^{\frac{nq}{n+1}}}{{\mathcal M}[u]^{\frac{q}{n+1}}}}d\left(\frac{t^{\frac{nqj}{n+1}}}{{\mathcal M}[u]^{\frac{qj}{n+1}}}\right) \\ &\le & C\sum_{j=1}^{\infty}\frac{(\lambda-\varepsilon)^j}{\lambda^j}=C\left(\frac{\lambda}{\varepsilon}-1\right). \end{eqnarray*} Next, we show the ``only if'' part.
For a Borel subset $E\subset \Omega$, we can apply the Brezis-Merle trace inequality to $u_E^*$ to obtain the condition \eqref{cond-MT1}, which implies the Moser-Trudinger type inequality. \end{proof} \vskip 30pt \section{The Dirichlet problem} \subsection{The continuous solution (Proof of Theorem \ref{main5})} First, we prove (ii) $\Rightarrow$ (i). Suppose $E\subset \Omega$ is a Borel subset, and $u$ solves \begin{align}\label{diri1} \begin{cases} (dd^cu)^k\wedge\omega^{n-k}=\chi_E\,d\mu,\ \ \ &\text{in }\Omega, \\ u=0, &\text{on }\partial \Omega, \end{cases} \end{align} where $\mu$ is a positive Radon measure. \begin{theo}\label{GPT-infty} Assume there exists $p>k+1$ such that $I_{k,p}(\Omega,\mu)<+\infty$. Let $u$ be a solution to \eqref{diri1}. Then there exists $C_1>0$ depending on $k$, $p$ and $I_{k,p}(\Omega,\mu)$ such that \begin{equation}\label{inftyest} \|u\|_{L^{\infty}(\Omega,\mu)}\le C_1\mu(E)^{\delta} \end{equation} where $\delta=\frac{p-k-1}{kp}$. \end{theo} \begin{proof} The proof is similar to that of \cite{WWZ2}. We denote $\hat \mu:=\chi_{E}\cdot\mu$. For any $s>0$, let $K_s:=\{u\le -s\}$ and $u_s=u+s$. Note that $p=\frac{k+1}{1-k\delta}$. By $I_{k,p}(\Omega,\mu)<+\infty$, we can apply Theorem \ref{main3} to get $$\left(\int_{K_s}\left(\frac{-u_s}{\|u_s\|_{\mathcal{PSH}_{k,0}(K_s)}}\right)^pd\hat\mu \right)^{\frac{1}{p}}\leq C.$$ Then by the equation, \begin{eqnarray*} t\int_{K_{s+t}}(dd^cu_s)^k\wedge\omega^{n-k} & \le & \int_{K_s}(-u_s)\,d\hat\mu \\ &\le& \left(\int_{K_s}(-u_s)^pd\hat\mu \right)^{\frac{1}{p}}\left(\int_{K_s}d\hat\mu \right)^{1-\frac{1}{p}} \\ &=&\left(\int_{K_s}\left(\frac{-u_s}{\|u_s\|_{\mathcal{PSH}_{k,0}(K_s)}}\right)^pd\hat\mu \right)^{\frac{1}{p}}\left(\int_{K_s}d\hat\mu \right)^{1-\frac{1}{p}}\cdot \|u_s\|_{\mathcal{PSH}_{k,0}(K_s)}, \end{eqnarray*} which implies \begin{align}\label{ite} t\hat\mu (K_{s+t})\le C\hat\mu (K_s)^{(1-\frac{1}{p})\frac{k+1}{k}}=C\hat\mu (K_s)^{1+\delta}.
\end{align} In particular, $$\hat\mu (K_s)\le \frac{C}{s}\hat\mu (\Omega)^{1+\delta}$$ for some $C>1$. Choosing $s_0=2^{1+\frac{1}{\delta }}C^{1+\frac{1}{\delta }}\hat\mu (\Omega)^{\delta }$, we get $\hat\mu (K_{s_0})\leq \frac{1}{2}\hat\mu (\Omega)$. For any $l\in \mathbb{Z}_+$, define \begin{align}\label{iteration} s_l=s_0+\sum_{j=1}^l2^{-\delta j}\hat\mu (\Omega)^{\delta },\ \ u^l=u+s_l, \ \ K_l=K_{s_l}. \end{align} Then $$2^{-\delta (l+1)}\hat\mu (\Omega)^{\delta }\hat\mu (K_{l+1})=(s_{l+1}-s_l)\hat\mu (K_{l+1})\le C\hat\mu (K_l)^{1+\delta}.$$ We claim that $\hat\mu (K_{l+1})\leq \frac{1}{2}\hat\mu (K_l)$ for any $l$. By induction, we assume the inequality holds for $l\leq m$. Then \begin{align*} \hat\mu (K_{m+1})\leq & C\hat\mu (K_m)^{1+\delta }\frac{2^{\delta (m+1)}}{\hat\mu (\Omega)^{\delta }} \\ \leq & C\left[\left(\frac{\hat\mu (K_0)}{2^m}\right)^{\delta }\frac{2^{\delta (m+1)}}{\hat\mu (\Omega)^{\delta }}\right]\cdot \hat\mu (K_m) \\ \leq & \left[C^{1+\delta }\frac{2^{\delta }}{s_0^{\delta }}\hat\mu (\Omega)^{\delta ^2}\right]\cdot \hat\mu (K_m) \leq \frac{1}{2}\hat\mu (K_m). \end{align*} This implies $$\hat\mu \left(\left\{x\in \Omega\big|u<-s_0-\sum_{j=1}^{\infty}\left(\frac{1}{2^{\delta}}\right)^j\hat\mu (\Omega)^{\delta }\right\}\right)=0.$$ Hence, \begin{eqnarray*} \|u\|_{L^{\infty}(\Omega, \mu)}&\leq& s_0+\sum_{j=1}^{\infty}\left(\frac{1}{2^{\delta }}\right)^j\hat\mu (\Omega)^{\delta }\\ &=&2^{1+\frac{1}{\delta }}C^{1+\frac{1}{\delta }}\hat\mu (\Omega)^{\delta }+\frac{1}{2^{\delta }-1}\hat\mu (\Omega)^{\delta }\\ &\leq& C\hat\mu (\Omega)^{\delta }=C\mu(E)^{\delta}. \end{eqnarray*} \vskip -30pt \end{proof} Next, by arguments similar to those in \cite{WWZ2}, we can get a stability result as well as the existence of the unique continuous solution $u_E$ for \eqref{k-Dir}. We first need a stability lemma. \begin{lem}\label{stab-holder-mu} Let $u$, $v$ be bounded $k$-plurisubharmonic functions in $\Omega$ satisfying $u\geq v$ on $\partial \Omega$.
Assume $(dd^cu)^k\wedge\omega^{n-k}=d\mu$ and $\mu$ satisfies the condition in Theorem \ref{main5} (ii). Then for any $\varepsilon>0$, there exists $C>0$ depending on $k$, $p$ and $I_{k,p}(\Omega,\mu)$ such that \begin{equation}\label{vu2-mu} \sup\limits_{\Omega}(v-u)\leq \varepsilon+C\mu(\{v-u>\varepsilon\})^{\delta}. \end{equation} \end{lem} \begin{proof} We may assume $d\mu$ has a bounded density for this estimate, since we will consider the approximation of $\mu$ later in the proof of existence and continuity. Denote $u_{\varepsilon}:=u+\varepsilon$ and $\Omega_{\varepsilon}:=\{v-u_{\varepsilon}>0\}$. It suffices to estimate $\sup_{\Omega_{\varepsilon}}|u_{\varepsilon}-v|$. Note that $\Omega_{\varepsilon}\Subset\Omega$ and $u_{\varepsilon}$ solves $$ \begin{cases} (dd^cu_{\varepsilon})^k\wedge\omega^{n-k}=d\mu \ \ &\text{\ in $\Omega_{\varepsilon}$,} \\[-3pt] u_{\varepsilon}=v & \text{\ on ${\p\Om}_{\varepsilon}$.} \end{cases} $$ Let $u_0$ be the solution to the Dirichlet problem $$ \begin{cases} (dd^cu_{0})^k\wedge\omega^{n-k}=\chi_{\Omega_{\varepsilon}}d\mu \ \ &\text{\ in $\Omega$,} \\[-5pt] \ u_{0}=0 \hskip55pt \ \ &\text{\ on ${\p\Om}$.} \end{cases} $$ By the comparison principle we have $$u_0\leq u_{\varepsilon}-v\leq 0 \ \ \text{in\ $\Omega_{\varepsilon}$}.$$ Hence we obtain $$\sup_{\Omega_{\varepsilon}}|u_{\varepsilon}-v| \leq \sup_{\Omega_{\varepsilon}}|u_0|\leq C\mu(\Omega_\varepsilon)^{\delta}.$$ \end{proof} \begin{prop}\label{prep-holder-mu} Let $u$, $v$ be bounded $k$-plurisubharmonic functions in $\Omega$ satisfying $u\geq v$ on $\partial \Omega$. Assume that $(dd^cu)^k\wedge\omega^{n-k}=d\mu$ and $\mu$ satisfies the condition in Theorem \ref{main5} (ii). Then for $r\geq 1$ and $0\leq \gamma'<\frac{\delta r}{1+\delta r}$, it holds that \begin{equation}\label{vu-mu} \sup\limits_{\Omega}(v-u)\leq C\|\max(v-u,0)\|_{L^r(\Omega,d\mu)}^{\gamma'} \end{equation} for a uniform constant $C=C(\gamma',\|v\|_{L^{\infty}(\Omega,d\mu)})>0$.
\end{prop} \begin{proof} Note that for any $\varepsilon>0$, $$\mu(\{v-u>\varepsilon\}) \leq \varepsilon^{-r}\int_{\{v-u>\varepsilon\}}|v-u|^{r} \,d\mu \leq \varepsilon^{-r}\int_\Omega[\max(v-u,0)]^r\,d\mu.$$ Let $\varepsilon:=\|\max(v-u,0)\|_{L^r(\Omega,d\mu)}^{\gamma'}$, where $\gamma'$ is to be determined. By Lemma \ref{stab-holder-mu}, we have \begin{eqnarray} \sup\limits_{\Omega}(v-u) &\leq & \varepsilon+C\mu(\{v-u>\varepsilon\})^{\delta} \label{vu3-mu} \\ &\leq & \|\max(v-u,0)\|_{L^r(\Omega,d\mu)}^{\gamma'}+C\|\max(v-u,0)\|_{L^r(\Omega,d\mu)}^{\delta(r-\gamma' r)}. \nonumber \end{eqnarray} Choosing $\gamma'\leq \frac{\delta r}{1+\delta r}$, where $0<\delta<\frac{1}{k}$, \eqref{vu-mu} follows from \eqref{vu3-mu}. \end{proof} Now we can show the existence of the unique continuous solution to \eqref{k-Dir} when $\mu$ satisfies the condition in Theorem \ref{main5} (ii). We consider $\tilde\mu_{\varepsilon}:=\rho_{\varepsilon} *d\mu$ defined on $\Omega_{\varepsilon}:=\{x\in \Omega\,|\,\text{dist}(x,\partial \Omega)> \varepsilon\}$, where $\varepsilon>0$ and $\rho_{\varepsilon}$ is a cut-off function such that $$\rho_{\varepsilon}\in C^{\infty}_0({\mathbb R}^n),\ \rho_{\varepsilon}=1\ \ \text{in }B_{\varepsilon}(O),\ \text{and }\rho_{\varepsilon}=0\text{ on }{\mathbb R}^n\setminus \overline{B_{2\varepsilon}(O)}.$$ Then for every $E\subset \Omega$, we define $$\mu_{\varepsilon}(E):=\tilde{\mu}_{\varepsilon}(E\cap \Omega_{\varepsilon}).$$ By classical measure theory, $\mu_{\varepsilon}$ is absolutely continuous with respect to the Lebesgue measure $\omega^n$, with bounded density function $f_{\varepsilon}$. We now check that $\mu_{\varepsilon}$ satisfies the condition in Theorem \ref{main5} (ii) uniformly. Denote $K_y:=\{x\in\Omega:x-y\in K\cap \Omega_{\varepsilon}\}$.
By definition, for every compact subset $K\subset \Omega$, we have \begin{align}\label{11} \mu_{\varepsilon}(K)=\int_{{\mathbb C}^n}\int_{K_y}\rho_{\varepsilon}(x-y)\,d\mu(x)\,\omega^n(y) \le C\sup_{|y|\le 2\varepsilon}\mu(K_y) \le C\sup_{|y|\le 2\varepsilon}\left(\text{Cap}_{k}(K_y,\Omega)\right)^{p}. \end{align} Let $u_y(x):=u_{k,K_y}^*(x+y)$ and $u_{\varepsilon}(x):=u_{k,\Omega_{\varepsilon}}^*(x)$ be the relative $k$-extremal functions of $K_y$ and $\Omega_{\varepsilon}$ with respect to $\Omega$. For any $0<c<\frac{1}{2}$, denote $\Omega_{c}=\{u_{\varepsilon}\le -c\}$. Let $$ g(x):=\begin{cases} \max\left\{u_y(x)-c, (1+2c)u_{\varepsilon}\right\},\ \ \ &x\in \Omega_{\frac{c}{2}}; \\ (1+2c)u_{\varepsilon}, &x\in \Omega\setminus\Omega_{\frac{c}{2}}. \end{cases} $$ Note that $g$ is a well-defined $k$-plurisubharmonic function in $\Omega$. Since $K_y\subset \Omega_{\varepsilon}$ for every $|y|\le \varepsilon$, we have $u_y-c\ge (1+2c)u_{\varepsilon}=-1-2c$ on $\overline{\Omega_{\varepsilon}}$. Hence for any compact subset $K\subset \Omega$, we obtain \begin{eqnarray}\label{22} \text{Cap}_k(K,\Omega)&\ge& (1+2c)^{-k}\int_{K\cap \Omega_{\varepsilon}}(dd^cg)^k\wedge\omega^{n-k}=(1+2c)^{-k}\int_{K\cap \Omega_{\varepsilon}}(dd^cu_y)^k\wedge\omega^{n-k} \nonumber\\ &=&(1+2c)^{-k}\int_{K_y}(dd^cu_{k,K_y}^*)^k\wedge\omega^{n-k}=(1+2c)^{-k}\text{Cap}_k(K_y,\Omega). \end{eqnarray} By \eqref{11} and \eqref{22}, we have shown that $\mu_{\varepsilon}$ satisfies the condition in Theorem \ref{main5} (ii) uniformly. Hence the solutions $\{u_{\varepsilon}\}$ to \eqref{k-Dir} with densities $\{f_\varepsilon\}$ are uniformly bounded and continuous. Furthermore, there exists a sequence $\varepsilon_j\to 0$ such that $\lim_{j\to \infty}\|\mu_{\varepsilon_j}-\mu\|=0$, where $\|\cdot\|$ denotes the total variation of a signed measure. The proof is then finished by the following well-known result.
\begin{prop} Let $\mu_j=(dd^c\varphi_j)^k\wedge\omega^{n-k}$, $\mu=(dd^c\varphi)^k\wedge\omega^{n-k}$ be non-pluripolar nonnegative Radon measures with finite mass, where $\varphi_j$, $\varphi\in\mathcal{PSH}_{k,0}(\Omega)$ and $\|\varphi_j\|_{L^\infty(\Omega,d\mu)}$, $\|\varphi\|_{L^\infty(\Omega, d\mu)}\le C$. If $\|\mu_j-\mu\|\to 0$, then $$\varphi_j\to\varphi\text{ in }L^1(\Omega,d\mu).$$ \end{prop} \begin{proof} When $k=n$, this is Proposition 12.17 in \cite{GZ}. The proof also applies to $k<n$; we sketch it here. Note that since $\|\mu_j-\mu\|\to 0$, the measure $$\nu:=2^{-1}\mu+\sum_{j\ge 2}2^{-j}\mu_j$$ is a well-defined non-pluripolar nonnegative Radon measure, and $\mu$, $\mu_j$ are absolutely continuous with respect to $\nu$. We may suppose that $\mu_j=f_j\nu$, $\mu=f\nu$ and $f_j\to f$ in $L^1(\Omega,\nu)$. Then by weak compactness, there exist a subsequence (still denoted $\varphi_j$) and $\psi\in\mathcal{PSH}_{k,0}(\Omega)$ such that $\varphi_j\to \psi$ in $L^1(\Omega,d\mu)$. Denote $\psi_j=\left(\sup_{l\ge j}\varphi_l\right)^*$, which decreases to $\psi$ almost everywhere with respect to $\mu$. Then, by the comparison principle, we have $$(dd^c\psi_j)^k\wedge\omega^{n-k}\le (dd^c\varphi_j)^k\wedge\omega^{n-k}=d\mu_j,$$ which implies $(dd^c\psi)^k\wedge\omega^{n-k}\le d\mu$. To get the equality, we use the absolute continuity to obtain $$(dd^c\psi_j)^k\wedge\omega^{n-k}\ge \inf_{l\ge j}f_l\,d\nu,$$ thereby completing the proof. \end{proof} The proof of (i) $\Rightarrow$ (ii) is simple. Let $\varphi_E$ be the solution to \eqref{k-Dir}. Then $\hat \varphi:=\frac{\varphi_E}{C\mu(E)^{\delta}}\in\mathcal{PSH}_k(\Omega)$, and $-1\le \hat \varphi\le 0$. By definition we have $$C^{-k}\mu(E)^{1-k\delta}=\frac{1}{C^k\mu(E)^{k\delta}}\int_E(dd^c\varphi_E)^k\wedge\omega^{n-k}=\int_{E}(dd^c\hat \varphi)^k\wedge\omega^{n-k}\le \text{Cap}_k(E,\Omega).$$ In view of \eqref{cond-sobolev1}, we have completed the proof.
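The geometric-decay mechanism behind the iteration in the $L^\infty$ estimate above can also be checked numerically. The following sketch (a sanity check outside the proof, with illustrative values $C=2$, $\delta=1/2$ and $\hat\mu(\Omega)$ normalized to $1$) iterates the bound $2^{-\delta(l+1)}\hat\mu(K_{l+1})\le C\hat\mu(K_l)^{1+\delta}$ and verifies the halving claim $\hat\mu(K_{l+1})\le\frac12\hat\mu(K_l)$ for the stated choice of $s_0$:

```python
# Numerical sanity check (not part of the proof) of the De Giorgi-type
# iteration in the L^infinity estimate.  With mu_hat(Omega) = 1, the
# displayed inequality gives the bound sequence
#   b_{l+1} = C * b_l^(1+delta) * 2^(delta*(l+1)),
# and s_0 = 2^(1+1/delta) * C^(1+1/delta) guarantees b_0 <= C/s_0.
C, delta = 2.0, 0.5              # illustrative values with 0 < delta < 1/k
s0 = 2 ** (1 + 1 / delta) * C ** (1 + 1 / delta)
b = C / s0                       # bound on mu_hat(K_0) from mu_hat(K_s) <= C/s
for l in range(30):
    b_next = C * b ** (1 + delta) * 2 ** (delta * (l + 1))
    # the halving claim mu_hat(K_{l+1}) <= mu_hat(K_l)/2
    assert b_next <= b / 2 + 1e-15, (l, b, b_next)
    b = b_next
# The shifts s_l stay bounded: s_l <= s_0 + sum_j 2^(-delta*j) < infinity.
s_tail = sum(2 ** (-delta * j) for j in range(1, 200))
print(round(s0 + s_tail, 6))
```

The printed value is the (finite) limit $s_0+\frac{1}{2^{\delta}-1}$ for these parameters, matching the final display of the estimate.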
\subsection{H\"older continuity (proof of Theorem \ref{main6})} In this section, we consider the H\"older continuity of the solution. As mentioned in the introduction, we give a purely PDE-based proof of Theorem \ref{main6}, as in \cite{WWZ2}. For the direction (i) $\Rightarrow$ (ii), we just follow Proposition 2.4 in \cite{DKN}, which is a PDE approach. It suffices to consider the other direction. Suppose \eqref{cond-DKN} holds. In order to obtain the H\"older estimate, we need an $L^{\infty}$-estimate like \eqref{inftyest} with the measure replaced by the Lebesgue measure. We suppose the solution $u$ and the measure $d\mu$ in \eqref{cMA} are smooth; the general result follows by approximation. First, we show there is an upper bound on $\|u\|_*$ under condition \eqref{cond-DKN}. Note that by the classical Sobolev inequality and Theorem \ref{quotient1}, we have \begin{align}\label{*1} \|u\|_*\le C\|u\|_{L^1(\Omega,\omega^n)}+C\left(\int_{\Omega}du\wedge d^cu\wedge\omega^{n-1}\right)^{\frac{1}{2}}\le C{\mathcal E}_k(u)^{\frac{1}{k+1}}<+\infty. \end{align} By condition \eqref{cond-DKN}, we get \begin{align} \frac{{\mathcal E}_k(u)}{\|u\|_*}=\int_{\Omega}\left(-\frac{u}{\|u\|_*}\right)\,d\mu\le C\frac{\|u\|_{L^1(\Omega,\omega^n)}^{\gamma}}{\|u\|_{*}^{\gamma}}. \end{align} This implies \begin{align}\label{*2} {\mathcal E}_k(u)^{1-\frac{\gamma}{k+1}}\le C\|u\|_*^{1-\gamma}. \end{align} Then by \eqref{*1} we get $\|u\|_*\le c_0$. Choose $f=\hat u:=\frac{u}{c_0}$ in condition \eqref{cond-DKN}. Then we get $$\int_{\Omega}(-\hat u)\,d\mu\le C\left(\int_{\Omega}(-\hat u)\,\omega^n\right)^{\gamma}.$$ In the proof of the $L^{\infty}$-estimate (Theorem \ref{GPT-infty}), we then use a different iteration. We denote $\Omega_s:=\{\hat u\le -s\}$, $p< \frac{n(k+1)}{n-k}$ (when $k=n$, we can choose any $p>0$) and $\hat u_s:=\hat u+s$.
Note that \begin{align}\label{energy*} {\mathcal E}_{k,\Omega_s}(\hat u_s):= & \int_{\Omega_s}(-\hat u_s)\,d\mu \le C\left(\int_{\Omega_s}(-\hat u_s)\,\omega^n\right)^{\gamma} \nonumber\\ \le & C\left(\int_{\Omega_s}\left(\frac{-\hat u_s}{{\mathcal E}_{k,\Omega_s}(\hat u_s)^{\frac{1}{k+1}}}\right)^p\,\omega^n\right)^{\frac{\gamma}{p}}{\mathcal E}_{k,\Omega_s}(\hat u_s)^{\frac{\gamma}{k+1}}|\Omega_s|^{(1-\frac{1}{p})\gamma}. \end{align} By the Sobolev inequality for the complex Hessian equation with respect to the Lebesgue measure \cite{AC20}, we get \begin{eqnarray*} t|\Omega_{s+t}|\le \int_{\Omega_s}(-\hat u_s)\,\omega^n&\le& C\left(\int_{\Omega_s}(-\hat u_s)^p\,\omega^n\right)^{\frac{1}{p}}|\Omega_s|^{1-\frac{1}{p}}\\ &\le& C{\mathcal E}_{k,\Omega_s}(\hat u_s)^{\frac{1}{k+1}} |\Omega_s|^{1-\frac{1}{p}}\le C|\Omega_s|^{1+\frac{\gamma p-k-1}{(k+1-\gamma)p}}. \end{eqnarray*} Hence, $$\|\hat u\|_{L^{\infty}(\Omega,\omega^n)}\le C|\Omega|^{\frac{\gamma p-k-1}{(k+1-\gamma)p}}.$$ By \eqref{*1} and \eqref{energy*}, we have $\|u\|_*\le C|\Omega|^{\frac{(p-1)\gamma}{(k+1-\gamma)p}}$. Therefore, for $1<p\le \frac{n(k+1)}{n-k}$ when $1\le k<n$ and $p>1$ when $k=n$, \begin{equation}\label{maes} \|u\|_{L^{\infty}(\Omega,\omega^n)}\le C|\Omega|^{\frac{\gamma p-k-1}{(k+1-\gamma)p}}\|u\|_*\le C|\Omega|^{\frac{2\gamma p-\gamma-k-1}{(k+1-\gamma)p}}, \end{equation} where \begin{equation}\label{del} \delta:=\frac{2\gamma p-\gamma-k-1}{(k+1-\gamma)p}. \end{equation} Then by the same arguments as in \cite{WWZ2}, we can obtain the H\"older continuity. For the reader's convenience and to keep this paper self-contained, we include a sketch here. First, since we now have the $L^{\infty}$-estimate \eqref{maes}, we can obtain stability results similar to Lemma \ref{stab-holder-mu} and Proposition \ref{prep-holder-mu}, with the measure $d\mu$ replaced by the standard Lebesgue measure $\omega^n$.
\begin{prop}\label{prep-holder} Let $u$, $v$ be bounded $k$-plurisubharmonic functions in $\Omega$ satisfying $u\geq v$ on $\partial \Omega$. Assume that $(dd^cu)^k\wedge\omega^{n-k}=d\mu$ and $d\mu$ satisfies \eqref{cond-DKN}. Then for $r\geq 1$ and $0\leq \gamma'<\frac{\delta r}{1+\delta r}$, it holds \begin{equation}\label{vu} \sup\limits_{\Omega}(v-u)\leq C\|\max(v-u,0)\|_{L^r(\Omega,\omega^n)}^{\gamma'} \end{equation} for a uniform constant $C=C(\gamma',\|v\|_{L^{\infty}(\Omega,\omega^n)})>0$. Here $\delta$ is given by \eqref{del}. \end{prop} \vskip 10pt For any $\varepsilon>0$, we denote $\Omega_\varepsilon:=\{x\in \Omega\,|\, \mathrm{dist}(x,{\p\Om})>\varepsilon\}$. Let \begin{eqnarray*} u_{\varepsilon}(x)&:=&\sup\limits_{|\zeta|\leq \varepsilon}u(x+\zeta),\ x\in\Omega_{\varepsilon},\\ \hat{u}_{\varepsilon}(x)&:=&\bbint_{|\zeta-x|\leq \varepsilon}u(\zeta)\,\omega^n,\ x\in\Omega_{\varepsilon}. \end{eqnarray*} Since $u$ is $k$-plurisubharmonic in $\Omega$, $u_\varepsilon$ is $k$-plurisubharmonic in $\Omega_\varepsilon$. For the H\"older estimate, it suffices to show there is a uniform constant $C>0$ such that $u_{\varepsilon}-u\leq C\varepsilon^{\alpha'}$ for some $\alpha'>0$. The link between $u_{\varepsilon}$ and $\hat{u}_{\varepsilon}$ is made by the following lemma. \begin{lem}(Lemma 4.2 in \cite{Ngu})\label{interchange} Given $\alpha\in (0,1)$, the following two conditions are equivalent. (1) There exist $\varepsilon_0$, $A>0$ such that for any $0<\varepsilon\leq \varepsilon_0$, $$u_{\varepsilon}-u\leq A\varepsilon^\alpha\ \ \text{on\ $\Omega_\varepsilon$}.$$ (2) There exist $\varepsilon_1$, $B>0$ such that for any $0<\varepsilon\leq \varepsilon_1$, $$\hat{u}_{\varepsilon}-u\leq B\varepsilon^\alpha\ \ \text{on\ $\Omega_\varepsilon$}.$$ \end{lem} The following estimate is a generalization of Lemma 4.3 in \cite{Ngu}. \begin{lem}\label{lapalace-control} Assume $u\in W^{2, r}(\Omega)$ with $r\geq 1$.
Then for $\varepsilon>0$ small enough, we have \begin{equation} \label{L54} \left[\int_{\Omega_{\varepsilon}}|\hat{u}_{\varepsilon}-u|^r\,\omega^n\right]^{\frac{1}{r}}\leq C(n,r)\|\triangle u\|_{L^r(\Omega,\omega^n)}\varepsilon^2, \end{equation} where $C(n,r)>0$ is a uniform constant. \end{lem} Note that the function $u_\varepsilon$ is not globally defined on $\Omega$. However, since $\varphi\in C^{2\alpha}(\partial\Omega)$, there exist $k$-plurisubharmonic functions $\{\tilde u_\varepsilon\}$ which decrease to $u$ as $\varepsilon\to 0$ and satisfy \cite{GKZ} \begin{equation}\label{barrier} \begin{cases} \tilde u_\varepsilon=u+C\varepsilon^\alpha & \text{in}\ \Omega\setminus \Omega_\varepsilon,\\[4pt] \hat u_\varepsilon\leq\tilde u_\varepsilon\leq \hat u_\varepsilon+C\varepsilon^\alpha& \text{in}\ \Omega_\varepsilon , \end{cases} \end{equation} where the constant $C$ is independent of $\varepsilon$. Then if $u\in W^{2, r}(\Omega)$, by choosing $v=\tilde{u}_{\varepsilon}$, $\gamma'< \frac{\delta r}{1+\delta r}$ in Proposition \ref{prep-holder}, and using Lemma \ref{lapalace-control}, we have \begin{eqnarray}\label{HR} \sup_{\Omega_\varepsilon} (\hat u_\varepsilon-u)&\leq& \sup_\Omega(\tilde u_\varepsilon-u)+C\varepsilon^\alpha \nonumber\\ &\leq& C\|\tilde u_\varepsilon-u\|_{L^r}^{\gamma'}+C\varepsilon^\alpha\\ &\leq & C\|\triangle u\|_{L^r(\Omega,\omega^n)}^{\gamma'}\varepsilon^{2\gamma'}+C\varepsilon^\alpha\nonumber. \end{eqnarray} Hence, once we have $u\in W^{2, r}$ for some $r\geq 1$, it holds $u\in C^{\gamma'}$ for $\gamma'<\frac{\delta r}{1+\delta r}$, where $\delta=\frac{2\gamma p-\gamma-k-1}{(k+1-\gamma)p}$ and $p\le \frac{n(k+1)}{n-k}$ ($p>1$ when $k=n$). Finally, we show that under the assumption of Theorem \ref{main6}, it holds $u\in W^{2,1}(\Omega)$, i.e., $\triangle u$ has finite mass, and hence $u\in C^{\gamma'}$ for $\gamma'<\frac{\delta}{1+\delta}$.
Let $\rho$ be the smooth solution to \begin{align*} \begin{cases} (dd^c\rho)^k\wedge\omega^{n-k}=\omega^n,\ \ &\text{in}\ \Omega, \\[4pt] \rho=0,\ \ &\text{on}\ \partial \Omega. \end{cases} \end{align*} We can choose $A\gg 1$ such that $dd^c(A\rho)\ge \omega$. Then by the generalized Cauchy--Schwarz inequality, \begin{eqnarray*} \int_{\Omega}dd^cu\wedge \omega^{n-1} &\le& \int_{\Omega}dd^cu\wedge [dd^c(A\rho)]^{n-1}\\ &\le& A^{k-1}\left(\int_{\Omega}(dd^c\rho)^k\wedge\omega^{n-k}\right)\cdot\left(\int_{\Omega}(dd^cu)^k\wedge\omega^{n-k}\right)\leq C. \end{eqnarray*} This finishes the proof. \begin{ex} As an example, we consider the Dirichlet problem \eqref{k-Dir} with the measure $\mu_S$ defined by $\mu_S(E)={\mathcal H}^{2n-1}(E\cap S)$ for a real hypersurface $S\subset {\mathbb C}^{n}$ and $E\subset \Omega$, where ${\mathcal H}^{2n-1}(\cdot)$ is the $(2n-1)$-dimensional Hausdorff measure. By the Stein--Tomas restriction theorem, for all $f\in C^{\infty}_0({\mathbb C}^n)$, there holds \cite{BS} $$\|f|_S\|_{L^2(S, \mu_S)}\le C\|f\|_{L^p(\Omega)},\ \ \ 1\le p\le \frac{4n+2}{2n+3}.$$ Therefore, we can apply Theorem \ref{main6} with $\gamma=1$. This leads to $u\in C^{\gamma'}(\Omega)$ for $$\gamma'<\frac{n+k+2}{(n+1)(k+2)}.$$ \end{ex} \vskip 20pt
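The exponent in the example above can be cross-checked by exact rational arithmetic. The sketch below (an illustrative check, assuming the exponent arises from $r=1$, $\gamma=1$ and the endpoint choice $p=\frac{n(k+1)}{n-k}$ in \eqref{del}; the helper name `holder_cap` is ours) verifies that the cap $\frac{\delta}{1+\delta}$ simplifies to $\frac{n+k+2}{(n+1)(k+2)}$:

```python
# Illustrative cross-check (outside the proof): with gamma = 1, r = 1 and
# the endpoint Sobolev exponent p = n(k+1)/(n-k), the Holder cap
# delta/(1+delta) from (del) reduces to (n+k+2)/((n+1)(k+2)).
from fractions import Fraction as F

def holder_cap(n, k, gamma=F(1)):
    p = F(n * (k + 1), n - k)                       # assumed endpoint exponent
    delta = (2 * gamma * p - gamma - k - 1) / ((k + 1 - gamma) * p)
    return delta / (1 + delta)                      # cap for r = 1 (u in W^{2,1})

for n in range(2, 12):
    for k in range(1, n):                           # 1 <= k < n
        assert holder_cap(n, k) == F(n + k + 2, (n + 1) * (k + 2))
print("exponent identity verified")
```

The exact `Fraction` arithmetic avoids any floating-point ambiguity in the comparison.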
\section{Introduction} In the quest to find New Physics (NP), there is another possible route besides the traditional high energy and precision frontiers, which is the low energy regime. There has been a consistent anomaly in the measured angular distribution of $e^+ e^-$ pairs in the 18.15 MeV decay of the excited beryllium state $^8$Be$^*$, observed by the Atomki collaboration over the last few years \cite{Krasznahorkay:2017gwn,Krasznahorkay:2017bwh,Krasznahorkay:2017qfd,Krasznahorkay:2018snd}. In the Standard Model (SM), at such low transition energies, this process is mediated by a photon with a Branching Ratio (BR) $\approx 1$, and, due to this massless mediator, one expects few events at large angles between the electron and positron. However, what is seen by the Atomki collaboration is an excess of events at large angles $\sim 140^{\circ}$. The simplest explanation is to introduce a new boson with mass just under the transition energy, such that it is produced close to on-shell, which then decays to the $e^{+}e^{-}$ pair detected by the apparatus. This boson can be either of vector (spin 1) or of pseudoscalar (spin 0) nature \cite{DelleRose:2018pgm}. The Atomki collaboration initially determined the best-fit mass and BR of such a boson to be \begin{equation} M_{Z'} = 16.7 \pm 0.35 \textrm{(stat)} \pm 0.5 \textrm{(sys)} \textrm{ MeV}, \end{equation} \begin{align} \label{eq:Br} R &\equiv {\frac{{\rm BR}(^8{{\rm Be}^*} \to Z' + {^8{\rm Be}})}{{\rm BR}(^8{{\rm Be}^*} \to \gamma + {^8{\rm Be}})} \times {\rm BR}(Z'\to e^+ e^-)} =5.8 \times 10 ^{-6} . \end{align} Subsequently, additional mass and BR combinations were suggested in a private communication to Feng {\sl et al.} \cite{Feng:2016ysn}, which we list in Tab. \ref{tab:Br}, and we eventually consider all possibilities in our results.
\begin{table}[!t] \centering \begin{tabular}{c | c} $M_{Z'} ~(\textrm{MeV}) $ & $R$ \\ \hline \vspace{-1em} &\\ 16.7 & $5.8 \times 10^{-6}$ \\ 17.3 & $2.3 \times 10^{-6}$ \\ 17.6 & $5.0 \times 10^{-7}$ \end{tabular} \caption{Solutions to the Atomki anomaly, with best fit mass value (16.7 MeV), and subsequent alternative masses (17.3 MeV and 17.6 MeV), along with the corresponding ratio of BRs, $R$, as defined in Eq. (\ref{eq:Br}).} \label{tab:Br} \end{table} In this work, we focus on a vector boson explanation matching these conditions for the anomaly and construct a minimal scenario which may evade all other experimental constraints \cite{DelleRose:2018eic}. We first extend the SM by a new $U(1)'$ group. After rotating away the new mixed term in the kinetic Lagrangian, one finds a covariant derivative \begin{equation} {\cal D}_\mu = \partial_\mu + \ldots + i g_1 Y B_\mu + i (\tilde{g} Y + g' z) B'_\mu, \end{equation} where $z$ defines the charge of the field content under the new $U(1)'$ group, with associated gauge boson $B'$, and $g'$, $\tilde{g}$ are the new gauge coupling and gauge-kinetic mixing, respectively. The gauge boson interacts with the fermions through the gauge current \begin{equation} J^\mu_{Z'} = \sum_f \bar \psi_f \gamma^\mu \left( C_{f, L} P_L + C_{f, R} P_R \right) \psi_f , \end{equation} and, assuming the limit of small gauge coupling and mixing, we find the vector and axial couplings \begin{eqnarray} C_{f, V} &=& \frac{C_{f,R} + C_{f,L}}{2}\nonumber\\ &\simeq &\tilde g c_W^2 \, Q_f + g' \left[ z_H (T^3_f - 2 s_W^2 Q_f) + z_{f,V} \right], \\ C_{f, A} &=& \frac{C_{f,R} - C_{f,L}}{2} \simeq g' \left[ - z_H \, T^3_f + z_{f,A} \right], \label{eq:Axial} \end{eqnarray} where we introduce the notation $z_{f,V/A}=(z_{f_R} \pm z_{f_L})/2$, $s_W = \sin \theta _W$ and $c_W = \cos \theta _W$.
Assuming a family non-universal scenario, one has 15 free parameters: the 12 SM fermion charges under the new group, the SM Higgs charge, the new gauge coupling $g'$, and the gauge-kinetic mixing parameter $\tilde{g}$. In the next sections we detail motivations to fix the field charges from some reasonable requirements of the theory. \section{Charge Assignment} We first require that the Atomki condition itself be satisfied. Though we do not detail the nuclear physics here, this requires that up and down quarks have non-zero axial couplings to the gauge boson, $C_{u/d,A} \neq 0$. We also demand that the theory be anomaly free without requiring extra vector-like states, i.e., that the six possible triangle anomalies cancel with only the SM content plus Right-Handed (RH) neutrinos \footnote{We see in our particular set-up that at least two RH neutrinos are required to be charged under the new $U(1)'$.}. Next, we subject our scenario to constraints from experimental data. Even with small gauge couplings, introducing a new $Z'$ is tightly constrained by certain experiments. Firstly, neutrino experiments are very constraining, such as meson decays like $K^{\pm}\rightarrow \pi ^{\pm} \nu \bar\nu$ and electron-neutrino scattering from the TEXONO experiment. Since the $Z'$ must couple to electrons, any neutrino coupling would produce clear deviations in such precise experiments. To this end, we require no vector or axial couplings to the neutrinos, $C_{\nu_i,V/A}=0$. Another potential issue is to ensure the precise $(g-2)_{e,\mu}$ measurements are sufficiently unaffected. One must have electron couplings, but these measurements are more sensitive to the axial couplings, so we enforce no axial couplings to the electrons or muons, $C_{e/\mu,A}=0$. Finally, we do not enforce canonical gauge-invariant Yukawa couplings in the first two generations but, motivated by more natural Yukawa couplings in the third generation, we require that the third-generation Yukawa couplings are gauge invariant, i.e.
for the top quark Yukawa term $(\bar{Q} Y_u \tilde{H} u_R)$, the total $U(1)'$ charge must vanish, $-z_{Q_3} - z_H + z_{u_{R_3}} = 0$. We make no further comment on the generation of mass in the first two generations, but appeal to higher scale physics, such as radiative mass generation or horizontal symmetries. However, our final constraint is that the first two generations be family universal, as we expect whatever mechanism generates the masses to apply in a similar fashion to both of the first two generations. These constraints together entirely fix the 15 fermion charges and the SM Higgs charge up to a scale factor, for which we fix the SM Higgs charge to unity. This is detailed in Tab. \ref{tab:charges}. \begin{table} \hspace{-0cm} \begin{minipage}[b]{.6\textwidth} \centering \includegraphics[width=1.0\linewidth]{Final_Region_NA64_allowed_highlighted.pdf} \captionof{figure}{Allowed parameter space mapped on the $(g',\tilde{g})$ plane explaining the anomalous $ \textrm{Be}^*$ decay for $Z'$ solutions with mass 16.7 (red), 17.3 (purple) and 17.6 (green) MeV. The white regions are excluded by the non-observation of the same anomaly in the $ \textrm{Be}^{*'}$ transition. Also shown are the constraints from $(g-2)_\mu$, to be within the two dashed lines; $(g-2)_e$, to be inside the two dotted lines (shaded in blue); and the electron beam dump experiment, NA64, to be in the shaded blue region outside the two solid lines.
The surviving parameter space lies inside the two red parallelograms at small positive and negative $\tilde{g}$ (though not at $\tilde{g}=0$), inside the dark shaded blue region which overlaps with the coloured bands representing the different possible Atomki anomaly solutions.} \label{fig:1HDM_ZPHI_0.5_ZQ3_-1Region} \end{minipage}\qquad \begin{minipage}[b]{.35\textwidth} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c|c|c|} \hline & \multirow{2}{*}{$SU(3)$} & \multirow{2}{*}{$SU(2)$} & \multirow{2}{*}{$U(1)_Y$} & \multirow{2}{*}{$U(1)'$} \\ &&&&\\ \hline \vspace{-1em} &&&&\\ $Q_{1}$ & 3 & 2 & 1/6 & $1/3$ \\ $Q_{2}$ & 3 & 2 & 1/6 & $1/3$ \\ $Q_{{3}}$ & 3 & 2 & 1/6 & $1/3$ \\ $u_{R_{1}}$ & 3 & 1 & 2/3 & $-2/3$ \\ $u_{R_{2}}$ & 3 & 1 & 2/3 & $-2/3$ \\ $u_{R_{3}}$ & 3 & 1 & 2/3 & $4/3$ \\ $d_{R_{1}}$ & 3 & 1 & -1/3 & $4/3$ \\ $d_{R_{2}}$ & 3 & 1 & -1/3 & $4/3$\\ $d_{R_{3}}$ & 3 & 1 & -1/3 & $-2/3$ \\ $L_{1}$ & 1 & 2 & -1/2 & $-1$ \\ $L_{2}$ & 1 & 2 & -1/2 & $-1$ \\ $L_{3}$ & 1 & 2 & -1/2 & $-1$ \\ $e_{R_{1}}$ & 1 & 1 & -1 & $0$ \\ $e_{R_{2}}$ & 1 & 1 & -1 & $0$ \\ $e_{R_{3}}$ & 1 & 1 & -1 & $-2$ \\ $H$ & 1 & 2 & 1/2 & $1$ \\ \hline \end{tabular} } \caption{Charge assignment of the SM particles under the family-dependent (non-universal) $U(1)'$. This numerical charge assignment satisfies the discussed anomaly cancellation conditions, enforces a gauge invariant Yukawa sector for the third generation and family universality in the first two fermion generations, as well as no coupling of the $Z'$ to all neutrino generations.} \label{tab:charges} \end{minipage} \end{table} \section{Results} Many experiments constrain parts of the parameter space but, for the sake of brevity, we discuss only those that most tightly constrain the region of parameter space which may satisfy the Atomki anomaly.
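The charge assignment in Tab. \ref{tab:charges} can be checked numerically. The following sketch (an illustrative check, not from the paper; the anomaly coefficients are written in the standard left-handed Weyl convention) verifies the mixed anomaly conditions that involve only the listed SM fermions, together with the $U(1)'$ invariance of the third-generation Yukawa terms. The $[U(1)']^3$ and gravitational anomalies also involve the RH neutrino charges, which are not listed in the table, so they are omitted here:

```python
# Illustrative sanity check of the U(1)' charges in Tab. 2 (not part of
# the paper): mixed anomalies involving only SM fermions, plus the
# gauge invariance of the third-generation Yukawa terms.
from fractions import Fraction as F

zQ = [F(1, 3)] * 3                      # quark doublets
zu = [F(-2, 3), F(-2, 3), F(4, 3)]      # up-type singlets
zd = [F(4, 3), F(4, 3), F(-2, 3)]       # down-type singlets
zL = [F(-1)] * 3                        # lepton doublets
ze = [F(0), F(0), F(-2)]                # charged-lepton singlets
zH = F(1)                               # SM Higgs doublet

g = range(3)
# [SU(3)]^2 U(1)', [SU(2)]^2 U(1)', [U(1)_Y]^2 U(1)', U(1)_Y [U(1)']^2
assert sum(2 * zQ[i] - zu[i] - zd[i] for i in g) == 0
assert sum(3 * zQ[i] + zL[i] for i in g) == 0
assert sum(zQ[i] / 6 - 4 * zu[i] / 3 - zd[i] / 3 + zL[i] / 2 - ze[i] for i in g) == 0
assert sum(zQ[i]**2 - 2 * zu[i]**2 + zd[i]**2 - zL[i]**2 + ze[i]**2 for i in g) == 0
# Third-generation Yukawa terms are U(1)' invariant
assert -zQ[2] - zH + zu[2] == 0         # (Q-bar Htilde u_R)
assert -zQ[2] + zH + zd[2] == 0         # (Q-bar H d_R)
assert -zL[2] + zH + ze[2] == 0         # (L-bar H e_R)
print("all charge conditions satisfied")
```

Exact rational arithmetic makes each cancellation an equality rather than a numerical approximation.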
Since we must have vector couplings to the electrons, and require universal couplings in the first two generations (hence also to the muons), there is a contribution to their anomalous magnetic moments, $(g-2)_{e,\mu}$. The other experiment which bounds our entire parameter space is the electron beam dump experiment NA64. This experiment looks for dark photons produced via bremsstrahlung from electrons scattering off target nuclei struck by an incoming electron beam. We present our allowed parameter space in the gauge coupling, $g'$, and gauge-kinetic mixing, $\tilde{g}$, plane. We colour the bands corresponding to the three mass solutions reported by the Atomki collaboration. The $(g-2)_\mu$ constraint requires the allowed parameter space to lie inside the dashed blue lines, and similarly for $(g-2)_e$ with the dotted lines. The NA64 experiment requires one to lie outside the solid blue lines. Regions of parameter space which can explain the anomaly as well as evade all experimental constraints lie inside the two red parallelograms, in the dark blue shaded region, overlapping the other coloured bands, at small positive and negative (but non-zero) $\tilde{g}$, and at small $g'$. \section{Conclusion} In these proceedings, we have presented a viable model to explain the current Atomki anomaly, through a $U(1)'$ extension of the SM, with a new $Z'$ with mass around 17 MeV, gauge coupling $g' \sim 10^{-5}$ and gauge-kinetic mixing $\tilde{g} \sim 10^{-4}$. Driven by a simple set of requirements, the free SM field charges under the $U(1)'$ are entirely fixed, and we find viable parameter space which avoids all constraints. \section*{Acknowledgements} The work of LDR and SM is supported in part by the NExT Institute. SM also acknowledges partial financial contributions from the STFC Consolidated Grant ST/L000296/1. Furthermore, the work of LDR has been supported by the STFC/COFUND Rutherford International Fellowship Programme (RIFP).
SJDK and SK have received support under the H2020-MSCA grant agreements InvisiblesPlus (RISE) No. 690575 and Elusives (ITN) No. 674896. In addition SK was partially supported by the STDF project 13858. All authors acknowledge support under the H2020-MSCA grant agreement NonMinimalHiggs (RISE) No. 64572. \section*{References}
\section{Introduction} The last decade has seen a significant rise in the hype regarding autonomous vehicles and intelligent transportation systems (ITS). The prime motivation behind these concepts is to handle the ever-increasing number of road accidents (World Health Organization reported 1.25 million fatalities in 2013, and millions of serious injuries \cite{WHO_traffic_deaths}), and decrease the traffic congestion, to ensure more efficient and safer mobility. Other advantages of ITS include the reduction of carbon dioxide emissions and a decrease in fuel usage which consequently helps in the conservation of non-renewable fossil fuels. The major driving technologies behind ITS include artificial intelligence (AI), sensor networks, control systems, and communication networks. The latter component arguably enjoys the utmost importance since it enables the coordination of the vehicle with all systems, either on-board, in the environment, or located at a central controlling entity. The generic term given to vehicular communication is vehicle-to-everything (V2X), which incorporates the vehicle's communication with the network (V2N), other vehicles (V2V), infrastructure (V2I), and even pedestrians (V2P). A V2X system is expected to communicate with other vehicles and infrastructure to ensure road safety and optimize the traffic flow. This necessitates a suitable communication system that takes into consideration all the possible components that need to communicate, the operating environment, possible use cases, and individual user/application requirements \cite{v2xlte}. The traditional approach to vehicular communication uses a technology known as dedicated short-range communications (DSRC), based on IEEE 802.11p standard \cite{v2xlte}. DSRC supports communication between transmitters aboard vehicles and infrastructural road-side units (RSUs). 
While it works efficiently for V2V and limited V2I scenarios, DSRC lacks the scalability for large scale V2I or V2N communications due to the lack of centralized control, causing severe service degradation in congested scenarios \cite{8300313}. On the other hand, cellular-based V2X (C-V2X) boasts the ability to support larger coverage with a well-developed infrastructure, but the current (up to LTE-Advanced) C-V2X solutions are primarily hindered by latency. However, the fifth generation (5G) of wireless communications promises to provide ubiquitous connectivity to all kinds of users, be it humans or machines, with stringent reliability and latency requirements. With the supported latency of 1 ms for ultra-reliable low-latency communication (uRLLC) services, this drawback of C-V2X can be addressed. This, combined with the limitations of DSRC, has led to a significant amount of research presenting 5G-V2X as a promising candidate for future vehicular communications. While the primary objective of C-V2X (or any other vehicular communication standard) is to provide seamless connectivity between different vehicular and infrastructural nodes, the broadcast nature of communication renders it susceptible to different attacks. Given the industrial trend in ITS regarding the increased use of AI, the need for secure communication is even more pronounced. V2X systems need to ensure security against malicious attacks targeting system performance, user, and data privacy. These attacks range from stealing user information to manipulating communication in a destructive manner. User identification and authentication are particularly sensitive since they determine the legitimate access of users to data/services \cite{8300313}.
The prevalent wireless communication security techniques can be categorized into \textit{cryptographic} and \textit{physical layer security (PLS)} methods. The former includes key-based approaches, usually applied at higher network layers, whose key management has become more challenging with the heterogeneous network deployments of 5G. This gave rise to PLS techniques, which utilize the unique properties of wireless communication, i.e., channel, interference, and noise, to provide security to the users \cite{5751298}. The applicability of PLS techniques in the context of V2X communication has recently been discussed in \cite{elhalawany2019physical} and \cite{luo2020physical}. However, these works primarily consider eavesdropping attacks, where the illegitimate node is only interested in listening to the communication between the legitimate ones. Furthermore, integration of PLS with non-orthogonal multiple access is considered in the former work, owing to the potentially large number of interacting nodes. Luo \textit{et al.} \cite{luo2020physical} consider radio resource management methods, cooperative jamming, multi-antenna schemes, and key-based PLS methods to protect communication against eavesdropping. The authors also point out the inability of any single PLS scheme to provide appropriate security for different applications and scenarios, suggesting the cooperative use of different PLS methods. This work, driven by the above-mentioned motivation, envisions providing robust security solutions against eavesdropping, jamming, and spoofing attacks using an intelligent engine. This engine utilizes information about the radio environment and application requirements to provide the best possible PLS solution in a proactive manner, satisfying the user's security needs.
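As a toy illustration of the PLS principle discussed above (not taken from the cited works), the achievable secrecy rate of a Gaussian wiretap channel is the positive part of the difference between the legitimate and eavesdropper channel capacities; the function name `secrecy_rate` and the SNR values below are hypothetical, for illustration only:

```python
import math

def secrecy_rate(snr_legit_db: float, snr_eve_db: float) -> float:
    """Secrecy rate [bit/s/Hz] of a Gaussian wiretap channel:
    [log2(1 + SNR_B) - log2(1 + SNR_E)]^+ ."""
    c_b = math.log2(1 + 10 ** (snr_legit_db / 10))   # legitimate (Bob) capacity
    c_e = math.log2(1 + 10 ** (snr_eve_db / 10))     # eavesdropper (Eve) capacity
    return max(c_b - c_e, 0.0)

# Hypothetical link budgets: a V2I link at 15 dB and an eavesdropper at 5 dB
print(round(secrecy_rate(15, 5), 3))   # positive secrecy rate
print(secrecy_rate(5, 15))             # prints 0.0: no secrecy when Eve is stronger
```

This is the basic quantity most PLS schemes (beamforming, artificial noise, cooperative jamming) try to keep positive by degrading the eavesdropper's effective SNR.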
The contributions of this work are listed below: \begin{itemize} \item An adaptive, proactive, and intelligent security framework in V2X, referred to as intelligent V2X security (IV2XS), focusing on PLS, is proposed. \item The factors and conditions affecting the IV2XS framework, such as environment, speed, and application, are also elaborated. \item An illustrative example of the concept is provided along with the challenges and open issues related to the proposed framework. \end{itemize} An overview of the components of V2X communication is provided in Section \ref{sec:V2X_channel}, followed by a description of security threats and possible PLS solutions in Section \ref{sec:threats}. The IV2XS design is discussed in Section \ref{sec:intelligent_security}, while an illustrative example for the proposed framework is provided in Section \ref{sec:illustration}. Section \ref{sec:open_issues} sheds light on the open issues of IV2XS before the conclusion in Section \ref{sec:conclusion}. \section{V2X Characteristics} \label{sec:V2X_channel} V2X is an umbrella term for vehicular communications, and it involves a vehicle's communication with different classes of components, all of which have their own corresponding applications. Figure \ref{fig:V2x_overview} gives an overview of a V2X system involving V2I links for traffic management, V2N for internet access to the users, V2V for collision avoidance, and V2P for providing safety alerts to pedestrians and cyclists. Multiple communication standards have been developed to ensure interoperability in information exchange between vehicles \cite{v2xlte}; however, in this work we focus on C-V2X because it supports both direct communication and communication over a cellular network. The former is carried over the PC5 interface and includes V2V, V2I, and V2P operating in ITS bands. It is particularly suitable for latency-critical applications concerned with safety and reliability.
The latter caters to V2N, which uses the traditional mobile broadband licensed spectrum. V2N is concerned with latency-tolerant use cases, such as telematics or infotainment. \begin{figure}[t] \centering \includegraphics[scale=0.3]{figure1.pdf} \caption{An overview of a V2X system and possible security threats. } \label{fig:V2x_overview} \end{figure} In addition to its various components, V2X also features a channel unlike that of any other communication application. The fundamental difference is the extremely low temporal correlation of the channel due to a rapidly changing environment, which is a consequence of continuous vehicular mobility and the resulting high Doppler spread. The mobility also causes a continuously changing network topology. \section{Security Threats in V2X and Solutions} \label{sec:threats} \begin{figure*}[t] \centering \includegraphics[scale=0.62]{IV2X_Engine3.pdf} \caption{Conceptual system model for the IV2XS approach. Information from the REM and different network layers is exploited by the AI-based engine to allocate proper security resources and algorithms using the SDR platform.} \label{fig:system_model} \end{figure*} \subsection{Security Threats in V2X} The broadcast nature of wireless V2X communication makes it vulnerable to \textit{eavesdropping}, \textit{spoofing}, and \textit{jamming} attacks. These attacks can cause serious problems for V2X communication, especially for autonomous or remote driving and other critical cases in terms of safety, privacy, and efficiency. In \textit{eavesdropping}, an illegitimate receiver tries to intercept the communication between legitimate parties, thus violating confidentiality and privacy. Figure \ref{fig:V2x_overview} shows the example of an eavesdropper trying to access parking-related information of a user. In the case of \textit{jamming}, the illegitimate node generates intentional interference to disrupt the communication between the legitimate nodes. As shown in Fig.
\ref{fig:V2x_overview}, a jammer might try to interrupt the communication between vehicles, forcing them to collide. Finally, in the \textit{spoofing} attack, control of the communication channel between the legitimate parties is taken over by the spoofer. The spoofer can replace, modify, and intercept the messages that are being transmitted between the two legitimate parties \cite{oursur1}. A spoofing attack on V2I communication might result in vehicles moving on direct collision paths toward each other, as shown in Fig. \ref{fig:V2x_overview}. There are two popular security approaches to tackle these attacks in V2X communication: cryptography-based solutions and PLS-based solutions. The former, based on key sharing by a trustable third party, are effective in providing secure data communication in current 5G V2X and other wireless systems \cite{hamida2015security}. However, cryptography-based solutions may not be suitable for future V2X wireless communication because the management and maintenance of keys are very challenging tasks in a decentralized and heterogeneous environment such as V2X communication. This is further compounded by intermittent connectivity and the varying speed of the V2X entities. Moreover, the security of the key-sharing process is critical; if the key is intercepted during this process, all subsequent transmissions are liable to illegitimate access. Additionally, the sensors, actuators, and transceivers utilized for instant control in autonomous or remote-controlled driving are processing-restricted, power-limited, and delay-sensitive. These limitations render them incapable of supporting the sophisticated encryption/decryption techniques needed for cryptographic security solutions. To handle these issues, PLS techniques have emerged as an effective security solution for future V2X communication that can complement and even replace the cryptography-based approaches \cite{oursur1}.
PLS exploits the dynamic features of wireless communication to secure the link between legitimate nodes. It offers the following advantages as a solution for future communication security. Firstly, these approaches can extract keys from the time-varying wireless channel, avoiding key management and maintenance issues in decentralized V2X wireless networks. Secondly, they are also suitable for power-restricted and delay-sensitive applications since many of them can be implemented with relatively simple signal processing algorithms \cite{5751298}. \subsection{PLS Solutions in V2X} \subsubsection{Anti-eavesdropping solutions for V2X} There are several PLS techniques in the literature against eavesdropping applicable to V2X, such as channel-based adaptation and key extraction, channel-coding design-based methods, and injection of artificial noise. The basic idea in channel-based adaptation is to modify the transmission parameters based on the requirements, location, and wireless fading channel conditions of the legitimate receiver \cite{oursur1}. Examples of this approach include beamforming, adaptive modulation and coding, directional modulation, and adaptive power allocation \cite{oursur1}. Channel-based key extraction techniques generate high-rate keys from the dynamic vehicular wireless channel \cite{7876781}. The methods based on channel coding design use special channel codes to ensure secure communication. In the case of artificial noise-based approaches, an interfering signal (noise/jamming) is added by exploiting the null space of the legitimate V2X node's channel to degrade the performance of eavesdroppers.
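The channel-based key extraction idea mentioned above can be sketched in a few lines. This is a minimal illustration under an idealized reciprocity assumption, not a scheme from the cited works; the RSS traces and function names are hypothetical:

```python
# Minimal sketch of channel-based key extraction: two legitimate nodes
# quantize their (assumed reciprocal) RSS measurements into matching key
# bits without exchanging any key material. All values are hypothetical.
from statistics import median

def extract_key_bits(rss_samples):
    """Quantize RSS samples (dBm) against their median: 1 above, 0 at/below."""
    threshold = median(rss_samples)
    return [1 if s > threshold else 0 for s in rss_samples]

# Idealized channel reciprocity: both ends observe near-identical RSS traces.
rss_vehicle = [-62.1, -70.4, -55.3, -68.0, -58.9, -73.2, -60.5, -66.7]
rss_rsu     = [-62.0, -70.6, -55.1, -68.2, -59.0, -73.0, -60.4, -66.9]

key_a = extract_key_bits(rss_vehicle)
key_b = extract_key_bits(rss_rsu)
assert key_a == key_b  # both sides derive the same bit sequence
```

In practice, imperfect reciprocity would require information reconciliation and privacy amplification on top of the raw quantization, but the sketch captures why no key ever travels over the air.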
\begin{table*}[t]\centering\renewcommand{\arraystretch}{1.35} \caption{Summary of security threats, risks, and IV2XS framework examples.} \label{table:PLS_Solutions} \centering\resizebox{1.97\columnwidth}{!}{ \begin{tabular}{|c|l|l|} \hline \textbf{Threats} & \multicolumn{1}{c|}{\textbf{Risks}} & \multicolumn{1}{c|}{\textbf{IV2XS Framework Examples}} \\ \hline \textbf{Eavesdropping} & \begin{tabular}[c]{@{}l@{}}Stealing of personal, financial,\\ and location information.\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textbf{High security}: Adaptive artificial noise based, interference-based, pre-coding, and hybrid techniques.\\ \textbf{Medium security}: Beamforming, directional modulation and pre-coding techniques.\\ \textbf{Low security}: Interleaving, adaptive modulation and coding.\\\end{tabular} \\ \hline \textbf{Spoofing} & \begin{tabular}[c]{@{}l@{}}Car hijacking and stealing, \\ traffic disturbance, accidents,\\ fake messages.\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textbf{High security}: Joint feature extraction from CSI, RSS, and AFE imperfections for authentication.\\ \textbf{Medium security}: More than one feature extracted from CSI/RSS/AFE imperfections.\\ \textbf{Low security}: Single feature extracted from CSI/RSS/AFE imperfections.\end{tabular} \\ \hline \textbf{Jamming} & \begin{tabular}[c]{@{}l@{}}Traffic disturbance, accidents, \\ communication disturbance, \\ loss of control.\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textbf{High security}: Multi-antenna approaches, SS with higher processing gain.\\ \textbf{Medium security}: SS with medium processing gain, relays.\\ \textbf{Low security}: Relays, SS with lower processing gain.\end{tabular} \\ \hline \end{tabular}} \end{table*} \subsubsection{Anti-spoofing solutions for V2X} Authentication based on conventional cryptography is a significant challenge for V2X communication networks.
This is due to the unwanted latency caused by complex backhaul processing and multiple handshakes between users, base stations (BSs), and roadside units (RSUs) for pairwise key or information exchange. However, message or entity authentication in a highly dynamic vehicular communication system can be done in a faster and more robust way using PLS approaches \cite{7498103}. For example, reciprocal channel properties such as the channel state information (CSI) and received signal strength (RSS) between two communicating entities of a V2X system, or the analog front-end (AFE) imperfections of wireless V2X transceivers, such as carrier frequency offset (CFO) and in-phase/quadrature imbalance (IQI), can be exploited to authenticate the communicating nodes. \subsubsection{Anti-jamming solutions for V2X} Jamming attacks can cause serious problems for V2X scenarios by interrupting legitimate communication between different nodes, leading to traffic disruption or accidents. There are three broad categories of PLS-based solutions against jamming attacks: multi-antenna approaches, which are effective because of their ability to avoid interference from unwanted sources \cite{kosmanos2016mimo}; cooperative relaying schemes, where V2X entities acting as relays have the ability to re-route the traffic \cite{8336901}; and spread spectrum (SS) techniques (e.g., direct sequence spread spectrum (DSSS) and frequency-hopping spread spectrum (FHSS)), which can be used by V2X entities against jamming attacks by spreading the signal or by rapid frequency switching \cite{5751298}. \section{Intelligent Security Design for V2X Communication} \label{sec:intelligent_security} The security needs vary dramatically depending on the environment/medium, scenario/use-case, and application/service associated with each legitimate transmission. This necessitates the use of an intelligent security design.
\subsection{Intelligent Security Design} The different components of V2X may have different uses, ranging from collision avoidance (V2V) to monetary transactions such as toll payment (V2I) to onboard entertainment (V2N). It seems intuitive that the criticality of security is in descending order for the above-mentioned tasks. This is just a simple example illustrating the varying requirements of a few different use cases concerning various V2X components. If the entire V2X communication were limited to this (i.e., different components having their own level of security requirements), the provision of adaptive security related to particular components would be relatively easy. This, unfortunately, is not the case. The required security levels depend not only on the components that are communicating but also on the particular application, location of the user, utility, environment, etc. To cater to this, V2X communication security needs an intelligent framework. \begin{figure*}[t] \centering \subfigure[]{ \includegraphics[scale = 0.27]{figures1.pdf} \label{fig:conditions} } \subfigure[]{ \includegraphics[scale = 0.48]{figure3_v3.pdf} \label{fig:examples} } \caption{(a) An illustrative example of different conditions and security requirements; the vehicles being considered are the ambulance and the yellow car. (b) Security resource allocation corresponding to the scenario and vehicle locations (A-B-C for the ambulance, A'-B'-C' for the yellow car) in (a).} \label{fig:illus2} \end{figure*} IV2XS is driven by the idea of providing proactive, adaptive, and efficient security to V2X entities. Figure \ref{fig:system_model} illustrates the conceptual IV2XS framework. It is powered by an AI engine with input from the radio environment map (REM) and from the physical and higher network layers. The REM is a cognitive enabler that provides information such as user distribution, traffic levels, and power maps, allowing smart network planning.
The application layer provides information regarding the application requirements, while the physical layer provides instantaneous CSI that can be utilized by different PLS schemes. The AI-powered engine uses the provided information to extract conditions such as location, utility, application, environment, situation, and vehicle specifications. Different conditions necessitate varying security requirements (elaborated in Section \ref{subsec:conditions}). Depending on these requirements, appropriate resources and methods from a pre-defined set are allocated and then provided using a software-defined radio to ensure secure communication. Table \ref{table:PLS_Solutions} presents the different types of potential attacks on V2X communication and their possible solutions by the IV2XS engine based on the security requirements. The security requirement varies with the criticality of the conditions. For the purpose of this paper and the illustrations within, we consider three security levels: low, medium, and high. Here, low may correspond to illegitimate access to user information, such as the entertainment content being accessed, while a high security level might refer to the security of an emergency vehicle at a busy junction. \subsection{Conditions Affecting the IV2XS Security Framework} \label{subsec:conditions} \subsubsection{Location} Security threats and V2X entity locations have a high degree of correlation. In essence, a location where a security breach can affect more users requires stronger protection. Consider the case where two V2X entities are following each other on an otherwise deserted road; they only need to know the other vehicle's speed and distance to keep a safe cushion and avoid a collision.
Now take the case of two vehicles at an intersection: an increased number of factors affects the vehicles' decisions about when to move and in which direction, resulting in higher security requirements. In addition to monitoring each other, these vehicles also need to take into account the traffic movement in other directions, which is dictated by the traffic signals. A spoofing attack, in this case, can wreak havoc by making vehicles from different directions move at the same time. To cater to such situations, IV2XS can be used to raise the level of security once it detects that the location is an intersection, ensuring nothing untoward happens. Figure \ref{fig:illus2} shows the variation in security requirements as a function of the location. The positions of the two vehicles, the ambulance and the yellow car, are labeled as points A-B-C and A'-B'-C', respectively. Here, B and B' represent the vehicles in the middle of the intersection, where the security requirement as well as the resources allocated for it increase, as shown in Fig. \ref{fig:examples}, to account for the higher risk. Security is also critical in the case of mountains, bridges, or generally any location that can act as a bottleneck for traffic. On the other hand, the required security level inside gated communities or university campuses is relatively low. \subsubsection{Utility} Another factor that determines the necessary level of security is the type of V2X entity and its usage. A vehicle may be private (belonging to individuals or companies), public (public transportation, municipality vehicles), or belong to emergency services like the police, ambulances, or fire brigade. It stands to reason that in the case of an emergency the latter category of V2X entities should be given priority on the road and in the communication.
If we consider a successful breach, it is logical that a breach targeting an emergency vehicle can cause more damage than one targeting an individual's vehicle, which means these vehicles need higher security. This is also illustrated in Fig. \ref{fig:examples}, where the security level of the ambulance and the corresponding resources allocated for it are higher than those of a regular car (shown in yellow in Fig. \ref{fig:conditions}). \begin{figure}[t] \centering \subfigure[]{ \includegraphics[width=0.83\columnwidth]{figure4v22.pdf} \label{fig:illus1} } \subfigure[]{ \includegraphics[width=0.83\columnwidth]{PER3.eps} \label{fig:illus1sim} } \caption{(a) Illustration for comparison of the proposed framework versus conventional approaches, (b) Simulation results in terms of eavesdropper packet error rate (PER) vs noise power for an adaptive artificial noise based solution.} \label{fig:illus22} \end{figure} \subsubsection{Application} From the security perspective, the application is an important parameter for the IV2XS concept. There are different types of applications in V2X communication, which may be safety-related (e.g., collision avoidance, cooperative driving, and queue warning), traffic-related (e.g., optimizing the traffic flow), infotainment (e.g., internet access and video streaming), payment applications (e.g., toll collection), or location-based applications (e.g., finding the closest fuel station). The above-mentioned applications require different levels of communication security. Safety-related applications would require more resources for security, while infotainment applications employ more resources to achieve high data rates. \subsubsection{Environment} The social environment is primarily categorized into rural, suburban, and urban. These environments differ from each other on the basis of population, V2X entity density, and available infrastructure. The environment is a significant factor in deciding the security levels of V2X entities.
The traffic density is higher in urban areas, and it is plausible to assume these areas would also be attacked more often, which consequently dictates the level of security needed. The illustration in Fig. \ref{fig:illus2} shows an urban environment with relatively high security requirements, potentially using a significant amount of the available resources to ensure the privacy of the communication. \subsubsection{Situation/Time} Existing security threats differ based on the time and situation in which communication takes place. The density of vehicles differs at different times; for example, vehicle density is high during the day, particularly during office hours, and generally low at night. In general, the probability of threats increases with density. The IV2XS engine will adjust the security resources accordingly. Additionally, weather conditions would also affect V2X communication requirements. For instance, the required latency of V2V messages would be different for a normal road as compared to a road that is slippery due to snow, which would affect the security requirements. \subsubsection{Vehicle Specifications} Vehicle specifications such as size, engine capacity, power, and mileage also need to be considered for security requirements. A larger vehicle can cause more harm to the vehicles or infrastructure around it and needs to have higher security than smaller ones. This is another parameter that is reflected in Fig. \ref{fig:illus2}, where the larger size (and speed) of the ambulance also contributes to higher security requirements. Similarly, the degree of autonomy of the V2X entity has to be taken into account as well; it follows basic intuition that a fully autonomous vehicle is more dependent on V2X communications and can therefore be affected more severely by a breach in security.
\section{IV2XS: An Illustrative Example} \label{sec:illustration} Given that this article proposes a generic framework for intelligent security provision in V2X communication, an implementation or simulation covering the whole scope of the framework is tedious and impractical. Therefore, we explain the concept with, first, a generic comparison between the conventional and proposed approaches for security provision in terms of resource allocation for data and security. Second, we consider a specific artificial noise-based adaptive technique against eavesdropping and look at the resource distribution for different levels of security. Fig. \ref{fig:illus1} shows how the proposed approach would stack up against conventional methods in terms of resource allocation for security. In a conventional approach, it is possible to tune the network parameters either to the worst-case scenario (see the dashed lines) or to an average/general case (see the dotted lines). In the former case, resources are unnecessarily allocated for security since the worst-case scenario rarely occurs, leading to a scarcity of resources for the data. In the latter case, the assigned security resources are too little to thwart any strong attacks, leaving the communication vulnerable to being listened to, interrupted, or manipulated. The solid lines represent the IV2XS resource division. Since it is able to adapt to the usage scenario and attack types, it provides more efficient resource utilization by only sacrificing system capacity when absolutely needed. To illustrate the concept further, we give the example of an adaptive technique capable of providing security against eavesdropping for different application requirements by adjusting the power level of the artificial noise \cite{hamamreh2018joint}. Figure \ref{fig:illus1sim} shows the security level in terms of the packet error rate (PER) at the illegitimate eavesdropper as the noise power is changed.
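The power split behind this adaptive artificial-noise technique can be sketched as follows. The mapping from security level to noise fraction is a hypothetical illustration, not the values used in the cited simulation:

```python
# Sketch of adaptive artificial-noise allocation: security level -> fraction
# of a constant total power budget spent on noise, the rest on the data
# signal. The fractions below are hypothetical illustration values.

NOISE_FRACTION = {"low": 0.2, "medium": 0.5, "high": 0.8}

def allocate_power(security_level, total_power=1.0):
    """Split a fixed power budget between the data signal and artificial noise."""
    noise = NOISE_FRACTION[security_level] * total_power
    signal = total_power - noise
    return signal, noise

for level in ("low", "medium", "high"):
    signal, noise = allocate_power(level)
    assert abs(signal + noise - 1.0) < 1e-9  # total transmit power stays constant
```

Raising the security level trades signal power (and thus legitimate-link throughput) for noise power, which is exactly the capacity-versus-security trade-off the figure illustrates.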
The horizontal axis shows different security levels, where low security refers to 10$\%$ PER, medium refers to 50$\%$ PER, and high refers to 90$\%$ PER at the eavesdropper. In this particular simulation setup, the total power (signal + noise) is kept constant, and the normalized power levels for both are shown in the figure on the vertical axes, where the blue and red lines (and axes) represent the noise and signal power levels, respectively. An alternative approach is to add noise without reducing the signal power; this would improve the legitimate user's performance as compared to the previous case at the cost of increased total transmit power. It should be kept in mind that an increased security requirement means the availability of fewer resources for data transmission, leading to degraded throughput even for the legitimate users. This necessitates the adaptability of the security approaches, the lack of which would adversely affect the network performance in terms of either capacity or security. Here it is important to note that the proposed framework is \emph{not} limited to a single adaptive technique, such as the one described above. Rather, it is capable of switching between different algorithms, potentially even using multiple of them simultaneously to avoid eavesdropping, jamming, and/or spoofing attacks. \section{IV2X Challenges and Potential Solutions}\label{sec:open_issues} \subsection{Security for High Mobility} Acquiring CSI is central to several security algorithms under the PLS umbrella. However, owing to the high mobility in the V2X scenario and the consequent channel variation, estimating and tracking the channel becomes very challenging. One approach to ease this burden is to exploit the channel sparsity, which means there are fewer parameters to estimate \cite{7131541}. Alternatively, PLS techniques that require partial or no CSI can be utilized, such as directional modulation and interference-based algorithms \cite{oursur1}.
Yet another alternative is to use the time-invariant characteristics of the communication to provide security. For instance, radio frequency (RF) fingerprinting is an approach that leverages the uniqueness of the AFE imperfections of devices for their authentication, thwarting spoofing attacks in the process \cite{wang2016wireless}. \subsection{Condition Detection} \vspace{-3pt} The detection of the condition information at the IV2XS engine is the first step of the IV2XS security approach. As explained earlier, in this work we consider six conditions, i.e., location, utility, application, time/situation, environment, and vehicle specifications. The information from the REM and different layers of the communication system is leveraged to identify the condition. In addition to the user and traffic densities, the REM can provide information about the expected RSS levels, which can be used to localize the vehicles. Furthermore, application-specific requirements are provided to the IV2XS engine by the corresponding network layer. \vspace{-3pt} \subsection{Security Level Identification} \vspace{-3pt} Once a user's condition is determined, the next step is to define the corresponding level of security threat. One possible approach is to define three security levels: low, medium, and high, where low refers to data/information theft, medium to possible damage to property, and high to life-threatening situations. We need to reiterate that this is not a thorough categorization of the security levels, and depending on the available resources, the security levels might be changed. While having more security levels would ensure more efficient resource utilization, it would come at the cost of increased complexity. Balancing this trade-off, or proposing an improved approach, is an open research area.
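One naive realization of the level-identification step just described is a rule table that scores the detected condition attributes and maps the total to a level. The conditions, point values, and thresholds below are hypothetical illustrations, not part of the proposed framework's specification:

```python
# Hypothetical sketch of security-level identification: detected condition
# attributes are scored and the total mapped to low/medium/high.

RISK_POINTS = {
    "location": {"deserted_road": 0, "campus": 0, "intersection": 2, "bridge": 2},
    "utility": {"private": 0, "public": 1, "emergency": 2},
    "environment": {"rural": 0, "suburban": 1, "urban": 2},
}

def identify_level(condition):
    """Sum per-attribute risk points and map the total to a security level."""
    total = sum(RISK_POINTS[attr][value] for attr, value in condition.items())
    if total >= 4:
        return "high"
    if total >= 2:
        return "medium"
    return "low"

ambulance = {"location": "intersection", "utility": "emergency", "environment": "urban"}
car = {"location": "deserted_road", "utility": "private", "environment": "rural"}

assert identify_level(ambulance) == "high"  # 2 + 2 + 2 = 6 points
assert identify_level(car) == "low"         # 0 + 0 + 0 = 0 points
```

An AI-based engine would learn such a mapping rather than hard-code it, but the sketch makes the input/output contract of this step concrete.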
\vspace{-3pt} \subsection{Security Mechanism and Resource Allocation} \vspace{-3pt} After identifying the condition and its corresponding security level, the next step is to select suitable algorithms and allocate appropriate resources to achieve that goal. Continuing with the example of Fig. \ref{fig:illus2}, the car at the intersection, with a high security-risk level where any attack can cause a major problem, will be allocated more resources and stronger algorithms by the IV2XS engine. More specifically, the IV2XS engine can select artificial noise injection with multi-antenna based approaches at the transmitter to protect information from eavesdropping while simultaneously using interference alignment techniques at the receiver to combat jamming \cite{oursur1}. The design of the artificial noise can be a function of the security threat level. On the other hand, for a low security threat level, simple security techniques such as adaptive resource allocation based algorithms can be selected proactively by the IV2XS engine. It is also possible to use multiple security algorithms in conjunction with each other when the application requires a higher level of security. In this case, it is advisable to use an AI-based approach \cite{AI} to decide upon the most suitable algorithms and their respective resources. \vspace{-3pt} \subsection{Challenges Related to PLS} \vspace{-3pt} In general, PLS techniques are sensitive to channel reciprocity and estimation mismatch errors. These errors should be considered while designing future PLS techniques, and novel and effective channel estimation algorithms need to be proposed. Similarly, more efficient designs for noise-based techniques, like the one described above, are needed. \section{Conclusion} \label{sec:conclusion} Security is pivotal in wireless communication, particularly in a use case like V2X that depends heavily on seamless and reliable connectivity between its different components.
In this paper, we have introduced an intelligent proactive framework for PLS that detects the condition of a user and considers the channel and upper-layer information before deciding on the best-suited security level and allocating resources accordingly to ensure secure communication. It is important to reiterate that the focus of this work is to provide an initial framework for IV2XS and highlight the factors affecting it, rather than to focus on specific PLS techniques. We believe the proposed intelligent framework would prove to be a stepping stone towards secure V2X communications.
\section{Introduction } The right to control one's personal information has gained significant importance lately \cite{lee2019information}. Indeed, 58\% of the countries have data protection and privacy legislation, while another 10\% have drafted legislation about it \cite{unctd}. This broad interest is related to the massive amount of personal data collected by information systems and the risk that such information could be wrongly distributed online \cite{lee2019information}. The study of information privacy has advanced our understanding of individuals' concerns regarding organizational practices associated with collecting and using their personal information \cite{smith1996information}. \nuevo{However, a literature review revealed a strong bias towards USA-centered studies across privacy concerns literature and warned about the limitations to generalizability this entails} \cite{belanger2011privacy,OKAZAKI2020458}. The review's authors hypothesized that individuals from different world regions have diverse cultures, values, and laws, which can, in turn, result in different conceptualizations of information privacy and its impacts \cite{belanger2011privacy,MOHAMMED2017254}. To study these differences, privacy research has often relied on survey-based studies \cite{cockcroft2016relationship}. For example, a questionnaire was applied to explore differences in privacy perceptions between Facebook users from Germany and the USA \cite{krasnova2010privacy}, and a cross-national survey was conducted to evaluate information attitudes of consumers in the USA and Brazil \cite{markos2017information}. These multi-country privacy studies have had limited sample sizes, which makes the results difficult to generalize \cite{lee2019information,huang2016privacy}. They also tend to be focused on one or two cultures, usually including the USA \cite{cockcroft2016relationship}. 
Hence, multi-country information privacy research is still needed to extend our understanding of this increasingly relevant topic around the globe \cite{adu2019individuals,va_zou2018ve}. We propose an alternative approach to study information privacy concerns over a large geographical scope. This work combines word embeddings, open coding, and content analysis to examine tweets related to a large data breach scandal. We seek to characterize similarities and differences in privacy terms across people who tweet about this issue in different languages and from different world regions. Inspired by \cite{rho2018fostering}, where text analysis was used to analyze answers about individuals' privacy concerns, we analyze the semantic context in which privacy-related terms were used in tweets written by different groups of people. We focus on the Facebook–Cambridge Analytica data scandal. \nuevo{In 2018, the firm Cambridge Analytica was accused of collecting and using the personal information of more than 87 million Facebook users without their authorization} \cite{venturini2019api,isaak2018user,lapaire2018content}. The scandal sparked multiple conversations over technology's societal impact and risks to citizens' privacy and well-being worldwide. Opinions, facts, and stories related to it circulated \nuevo{on} different social media platforms such as Twitter, where the hashtag \#DeleteFacebook became a trending topic for several days \cite{lin2018deletefacebook,mirchandani2018delete}. \hlreview{We analyze more than a million public tweets in Spanish or English that use hashtags or keywords related to the scandal. We divide the dataset by language (Spanish and English) and region (Latin America, Europe, North America, and Asia) and create word embeddings for each subset. Then, we systematically analyze and compare the semantic context of four keywords, namely \textit{data}, \textit{privacy}, \textit{user}, and \textit{company}, across the embeddings.
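The keyword-context comparison step can be illustrated with a toy cosine-similarity routine. The three-dimensional vectors and vocabulary below are hypothetical stand-ins for the learned embeddings, not data from the study:

```python
# Toy illustration of comparing a keyword's semantic context in an
# embedding: rank the keyword's nearest neighbours by cosine similarity.
# The 3-d vectors and vocabulary are hypothetical stand-ins for learned
# word embeddings.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

embedding_en = {
    "privacy": (0.9, 0.1, 0.0),
    "regulation": (0.8, 0.3, 0.1),
    "meme": (0.0, 0.2, 0.9),
}

def nearest_neighbours(embedding, keyword, k=2):
    others = [(word, cosine(embedding[keyword], vec))
              for word, vec in embedding.items() if word != keyword]
    return [word for word, _ in sorted(others, key=lambda p: -p[1])][:k]

assert nearest_neighbours(embedding_en, "privacy")[0] == "regulation"
```

Repeating the neighbour ranking for the same keyword in each language- or region-specific embedding, and comparing the resulting term lists, is the essence of the cross-group analysis described above.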
We contrast our results with one of the most widely used information privacy concerns frameworks to find terms and tweets matching different concerns. Then, we test the null hypothesis that there is no difference in emphasis on information privacy terms across languages and world regions. In this process, we discover the presence of related concepts that could be integrated into information privacy frameworks, such as regulations. We also observe statistically significant language differences in emphasis on data collection and significant regional differences in emphasis on awareness. Finally, we discuss the implications of our results.} We summarize prior work on information privacy concerns in Section \ref{secinformationprivacydifferences}. Section \ref{sec:research_question} introduces our research question \hlreview{and hypothesis}. Section \ref{sec:data_and_methods} details our research method, while Section \ref{sec:results} reports on our findings. Section \ref{sec:discussion} offers a discussion of our results, limitations, and future work. Finally, Section \ref{sec:conclusion} provides our conclusions. \section{Information privacy concerns}\label{secinformationprivacydifferences} \nuevo{Information privacy concerns emerge when an individual ``feels threatened by a perceived unfair loss of control over their privacy by an information-collecting body''}\cite{lee2015compensation}. \nuevo{Previous research argues that information privacy concerns are a multidimensional construct}\cite{JOZANI2020106260,correia2017information,HERAVI2018441}. \hlreview{A multidimensional approach allows identifying to what extent users are concerned about different aspects of information privacy} \cite{YUN2019570,10.2307/43825946}. \hlreview{Different authors have proposed alternative conceptualizations to measure information privacy concerns. We briefly review the most adopted ones in the following subsection.
Then, we summarize prior work on differences in privacy concerns across countries, regions, and other characteristics.} \celeste{Prior research has examined privacy concerns from other perspectives as well }\cite{HERAVI2018441,gerber2018explaining,YUN2019570}. \celeste{A vast portion of privacy research on social networking sites has focused on examining users' privacy behaviors, such as the intention to provide personal information or transact online }\cite{WISNIEWSKI201795,kokolakis2017privacy,gerber2018explaining,OGHAZI2020531,su12198286,HERAVI2018441,markos2017information}. \nuevo{In a similar direction, several studies have investigated the use of privacy setting configurations} \cite{va_10.1145/2675133.2675256,IJoC3208,WISNIEWSKI201795,doi:10.1080/0144929X.2020.1831608}. \nuevo{Rather than centering on behavior or behavioral intention, we focus our review on information privacy concerns that characterize general personal dispositions }\cite{c2013empirically}. \nuevo{We think that this part of the literature aligns better with what people can say, in a declarative way, about data privacy on social media.} \subsection{Assessing information privacy concerns} Two questionnaires have been widely used to evaluate individuals' information privacy concerns \cite{belanger2011privacy,cockcroft2016relationship,morton2014desperately}: Concerns for Information Privacy (CFIP) and Internet Users' Information Privacy Concerns (IUIPC). \subsubsection{CFIP: Concerns for Information Privacy} The CFIP framework \cite{smith1996information} focuses on individuals' perceptions of how organizations use and protect personal information \cite{van2006concern}.
CFIP identifies four dimensions: \begin{itemize} \item \textit{Collection:} concerns about personal data that is collected over time; \item \textit{Unauthorized secondary use:} concerns about organizations using personal data for another purpose without the individual's authorization; \item \textit{Improper access:} concerns about unauthorized people having access to personal data; \item \textit{Errors:} concerns about adequate protections from deliberate and accidental errors in personal data. \end{itemize} To measure them, \inlinecite{smith1996information} proposed and validated a 15-item questionnaire. The CFIP questionnaire was validated by surveying 355 consumers from the USA and applying confirmatory factor analysis (CFA) \cite{stewart2002empirical}. This questionnaire has been considered one of the most established methods to quantitatively measure information privacy concerns \cite{harborth2018german} and has been widely used in the literature \cite{harborth2017privacy,stewart2002empirical}. However, the CFIP and its measurement instrument were originally defined for users in an offline context \cite{palos2017behavioral}. As the Internet enabled new ways to collect and process data, it was expected that new concerns about information privacy might emerge \cite{malhotra2004internet}, and a new framework was proposed: the IUIPC. \subsubsection{IUIPC: Internet Users' Information Privacy Concerns} \label{iuipc} \inlinecite{malhotra2004internet} introduced the IUIPC framework and conceptualized Internet users' concerns about information privacy from a perspective of fairness. Drawing from social contract theory \cite{c2013empirically}, \inlinecite{malhotra2004internet} argue that personal data collection is perceived to be fair when a user has control over their personal data and is informed about how organizations intend to use it.
The IUIPC includes three constructs: \begin{itemize} \item \textit{Collection:} concerns about the amount of personal data owned by others compared to the perceived benefits \cite{malhotra2004internet}. It is related to the perceived fairness of the outcomes one receives. Users provide information if they expect to obtain something of value after a cost-benefit analysis of a transaction. \item \textit{Control:} concerns about control over personal information, including approval, modification of collected data, and opportunity to opt-in or opt-out from data collection \cite{malhotra2004internet}. It is related to the perceived fairness of the procedures that maintain personal data. \item \textit{Awareness:} concerns about personal awareness of organizational information practices \cite{malhotra2004internet}. It relates to issues of transparency of the procedures and specificity of information to be used. \end{itemize} A 10-item questionnaire to assess these constructs was validated in \cite{malhotra2004internet}. The questionnaire has been widely used to this day \cite{YUN2019570,raber2018privacy} because it considers the Internet context, and it can explain more variance in a person's willingness to transact than CFIP \cite{rowan2014observed}. Recent work has explored text mining as an alternative research method to identify IUIPC dimensions. \inlinecite{raber2018privacy} found that IUIPC dimensions can be derived from written text. They observed a correlation between IUIPC concerns, as measured by the questionnaire, and LIWC language features of social media posts from a sample of 100 users. \subsubsection{Other instruments of assessment} The Westin-Harris Privacy Segmentation Index measures individuals' attitudes and concerns about privacy and how they vary over time \cite{kumaraguru2005privacy} based on answers to three questions \cite{egelman2015predicting,woodruff2014would}. 
It categorizes individuals into three groups \cite{kumaraguru2005privacy,da2018information,motiwalla2014privacy}: \textit{Fundamentalists} are highly concerned about sharing their data, protect their personal information, prefer privacy controls over consumer-service benefits, and are in favor of new privacy regulations; \textit{Pragmatists} tend to seek a balance between the advantages and disadvantages of sharing personal information before arriving at a decision; \textit{Unconcerned} users believe there is a greater benefit to be derived from sharing their personal information, trust organizations that collect their personal data, and are the least protective of their privacy. The Westin-Harris index was introduced as a way to meaningfully classify internet users based on their attitude toward privacy and their motivations to disclose personal information \cite{torabi2016towards}. It has been used for several decades. However, recent studies have raised questions about its validity \cite{egelman2015predicting}. Prior work has failed to establish a significant correlation between the Westin-Harris segmentation and context-specific, privacy-related actual or intended behaviors \cite{consolvo2005location,woodruff2014would,egelman2015scaling}. The existence of a mismatch between privacy concerns and privacy behaviors, known as the ``privacy paradox'' \cite{kokolakis2017privacy,dienlin2015privacy}, motivated the creation of a new measurement instrument. Buchanan's Privacy Concern scale aims to capture different aspects of the paradox. \inlinecite{buchanan2007development} developed three privacy scales: two of them assess privacy behavior, and the third one measures information privacy concerns. However, some limitations have been identified.
\nuevo{Their scales are not able to identify different privacy dimensions, but only one, which appears to map onto the general concept of privacy concern.} Thus, a more fine-grained examination is desirable to improve the design of this scale \cite{buchanan2007development}. Because our study focuses on people's comments about a specific information privacy scandal (and not their privacy behavior), our work will mostly build upon the information privacy concerns frameworks, particularly the IUIPC. \subsection{Differences in information privacy concerns} Information privacy concerns can vary across individuals based on people's perceptions and values \cite{buchanan2007development}. People may have different concerns even if they experience the same situation \cite{lee2015compensation}. It has been argued that information privacy concerns can be influenced by different factors \cite{smith2011information}, \hlreview{such as} national culture \cite{cho2009multinational,huang2016privacy,cao2008user,krasnova2010privacy}, and individuals' demographics (e.g., age, gender) \cite{zukowski2007examining,lee2019information,jai2016privacy,rowan2014observed,cho2009multinational,markos2017information}. We review these factors below. \subsubsection{National Culture} While there are similarities in what privacy means across cultures \cite{cockcroft2016relationship}, there is no universal consensus on its definition \cite{cannataci2009privacy}. According to \inlinecite{newell1995perspectives}, several cultures do not possess an equivalent term for the English word ``privacy'' in their own language, e.g., Arabic, Dutch, Japanese, and Russian. Nevertheless, this does not mean that these cultures lack a sense of privacy \cite{newell1995perspectives}. Every society appreciates privacy in some way, but the expression of it varies \cite{cho2009multinational}.
The concept of national culture has been studied as one of the factors related to information privacy concerns \cite{nov2009social,malhotra2004international,bellman2004international}. National culture can be defined as ``the collective mindset distinguishing the member of one nation from another'' \cite{cho2009multinational}. Hofstede's cultural dimensions theory \cite{hofstede1983national} has been the most used conceptual model to study cultural differences in this context. This trend is expected since Hofstede's theory has been widely used to study the relationship between culture and technology \cite{leidner2006review}, even though there are a number of criticisms of this theory \cite{terlutter2006globe}. The latest version of this theory proposes six cultural dimensions \cite{hofstede2011dimensionalizing}. Among them, the \textit{individualism/collectivism} dimension has been found relevant to information privacy concerns. \textit{Individualism/collectivism} refers to the extent to which individuals are part of groups beyond their immediate families. Differences \celeste{in} information privacy concerns have been explained using some cultural dimensions at a country and regional level (see Table \ref{tab:summary_work_related_culture}). Participants from \hlreview{individualistic} countries (Australia and the United States) exhibited a higher level of online privacy concerns than individuals from collectivist countries \cite{cho2009multinational}. The authors' rationale is that high individualism is associated with an emphasis on private life and independence from the collective; thus, people from individualist countries are more worried about privacy intrusions. In the same direction, \inlinecite{bellman2004international} found that controlling for internet experience and privacy regulations, people from countries with high individualism show \celeste{deeper} concern about two CFIP dimensions: \textit{unauthorized secondary use} and \textit{improper access}.
On the other hand, no regional differences in privacy concerns were found through online surveys with 226 English-fluent crowd workers from six regions (Africa, Asia, Western Europe, Eastern Europe, North America, and Latin America). The authors argued that it is unclear if their finding is due to true similarities or \hlreview{insufficient} power in measuring privacy concerns \cite{huang2016privacy}. \begin{table}[ht] \caption{Culture and information privacy concerns} \label{tab:summary_work_related_culture} \begin{tabular}{p{0.12\linewidth}p{0.35\linewidth}p{0.17\linewidth}p{0.25\linewidth}} \hline Independent variables & Method & \# participants and origin & Key findings \\ \hline National culture & 5-item questionnaire \cite{cho2009multinational}, based on a unidimensional conceptualization of online privacy concerns. Items were comprehensive enough to measure general concerns about online privacy. & 1,261 from Seoul, Singapore, Bangalore, Sydney, New York & Participants from \hlreview{individualistic} countries exhibited higher concern about online privacy \cite{cho2009multinational} \\ National culture & 15-item questionnaire (CFIP) \cite{smith1996information}, based on a multidimensional constructive model of privacy concerns (collection, unauthorized secondary use, improper access, errors). & 534 from 38 countries & Participants from \hlreview{individualistic} countries showed higher concern about improper access and secondary use \cite{bellman2004international} \\ Regional culture & 4-item questionnaire \cite{dinev2006extended}, based on a unidimensional conceptualization of privacy concerns, which is defined as apprehension about how online personal information is used by others.
& 226 from Africa, Asia, Western and Eastern Europe, North and Latin America & No regional differences in privacy concerns were found \cite{huang2016privacy}\\ \hline \end{tabular} \end{table} \subsubsection{Language} Relatedly, the Sapir-Whorf hypothesis suggests that the structure of anyone's native language influences the world-views they will acquire \cite{kay1984sapir}. \hlreview{Depending on the language, a message is coded and decoded differently based on standardized language norms and culture \mbox{\cite{zarifis2019exploring}}}. Thus, individuals who speak different native languages could think, perceive reality, and organize the world around them in different ways \cite{hussein2012sapir}. Previous work has explored how user-generated content can reveal different views about the same issues among people who write in different languages. \inlinecite{jiang2017mapping} conducted a semantic network analysis to examine the semantic differences that emerge from the Wikipedia articles about China. Results suggest that Chinese-speaking and English-speaking contributors framed articles about China in different and even opposite ways, which were aligned with their national cultures and values. The Chinese version framed them from a perspective of respect for authority, emphasizing harmony and patriotism. Articles in English were written from \celeste{the} point of view that is distinctive of many Western societies: the core value of democracy. A potential role of the spoken language in the information privacy context has also been studied. \inlinecite{li2017cross} created a cross-cultural privacy prediction model. The model applies supervised machine learning to predict users' decisions on the collection of their personal data.
Using answers from an online survey of 9,625 individuals from 8 countries on four continents (Canada, China, Germany, the United States, the United Kingdom, Sweden, Australia, and India), they found that the model's prediction accuracy improved \nuevo{when adding individuals' language (English, Chinese, French, Swedish, and German) or Hofstede's cultural dimensions}. Our work will build upon this line of reasoning to \hlreview{deepen} our understanding of information privacy concerns across the globe. \subsubsection{Other individual characteristics} Even though our work will not address the relationship between demographics and information privacy concerns, we will briefly review the literature about this topic. Prior studies suggest that older Internet users are more concerned about online information privacy than younger ones \cite{cho2009multinational}. Older participants were more sensitive to privacy issues and exhibited a greater desire to control the amount of information collected about them \cite{zukowski2007examining}. In contrast, younger users declared themselves to be more willing to share their personal information with third parties \cite{jai2016privacy}. The relation between privacy concerns and gender has also been studied \cite{cho2009multinational}. \inlinecite{jai2016privacy} found that women were less willing than men to permit third parties to share their personal information. Similarly, \inlinecite{rowan2014observed} observed that women reported greater information privacy concerns than their male counterparts. Both studies considered gender as binary. Another relevant factor is participants' internet experience. As users grow in internet experience, concerns for online information privacy may decrease \cite{zukowski2007examining}.
\inlinecite{bellman2004international} concluded that participants with more internet experience were less concerned about online privacy overall, and in particular, were less worried about \textit{improper access} and \textit{secondary use}. This could be explained by increased familiarity with online privacy practices \cite{zukowski2007examining}. \section{Research Questions} \label{sec:research_question} \hlreview{Overall, while concepts around information privacy concerns have been extensively investigated, some limitations are shared among the studies that assess differences in these concerns worldwide.} Most research has been conducted through surveys and has focused only on a few geographic regions, with the notable exception of \cite{li2017cross}. Many studies have had a limited sample size \cite{vitkauskaite2010overview,ur2013cross,ebert2020does,su12198286,doi:10.1080/0144929X.2020.1831608,OGHAZI2020531,krasnova2010privacy}\hlreview{; thus, their findings' generalizability} has been questioned \cite{lee2019information}. Moreover, when information privacy concerns questionnaires are delivered in English to speakers of other languages, key differences among countries may be obscured, as has happened with other cross-national research \cite{harzing2002interaction,harzing2006response}. Unfortunately, conducting larger-scale, multi-country, and multi-language surveys can be quite expensive \cite{harzing2005does,doi:10.1080/0144929X.2020.1831608}. Yet, large-scale research to deepen our understanding of information privacy concerns worldwide is still needed \cite{vitkauskaite2010overview,su12198286,doi:10.1080/0144929X.2020.1831608,va_zou2018ve,OGHAZI2020531,OKAZAKI2020458}.
\hlreview{We seek to assess the feasibility of using social media data to identify information privacy concerns and characterize language and regional differences.} Twitter is a popular micro-blogging service where individuals from different world regions who speak diverse languages share opinions, information, and experiences \cite{yaqub2017analysis,shen2015analysis}. Mining text from this platform has been used as a fast and inexpensive method to gather opinions from individuals \cite{o2010tweets}, which can complement findings obtained \hlreview{from traditional polls or other research methods}. Prior research has found a significant correlation between tweets and public opinion in diverse domains \cite{o2010tweets,tumasjan2010predicting,10.1145/3396956.3396973,doi:10.1080/21645515.2020.1714311}. \final{Following this trend of research, we aim to investigate whether Twitter data can reveal people's information privacy concerns. Thus, our first research question is as follows:} \begin{itemize} \item \hlreview{\textit{RQ1: Which information privacy concerns are present in social media content about a data-breach scandal?}} \end{itemize} As we have reviewed in the prior section, there are arguments and evidence to support that information privacy concerns can vary across culture, language, and demographics \cite{su12198286,doi:10.1080/0144929X.2020.1831608,OGHAZI2020531,gonzalez2019information}. \final{If information privacy concerns are present in a Twitter dataset, we could explore how they differ across people who live in different parts of the world and those who speak different languages. As we do not expect any specific trend of differences, we propose to test the following null hypotheses:} \begin{itemize} \item \textit{H0a. There are no differences in information privacy concerns by language} \item \textit{H0b.
There are no differences in information privacy concerns by world region.} \end{itemize} \section{Data \& Methods} \label{sec:data_and_methods} To answer our research question \hlreview{and test the hypotheses,} we implemented a four-step methodology (see Fig.~\ref{fig:methodology}). We retrieved tweets associated with data privacy during a specific period (\textit{\ref{collection}. data collection}). We filtered the data, removing retweets and \hlreview{excluding} tweets likely to be generated by bots (\textit{\ref{preprocessing}. data pre-processing}). \final{We created word-embeddings (a multi-dimensional representation of a corpus) for the remaining tweets according to their language and world region} (\textit{\ref{mining}. text mining}). Finally, we conducted an analysis to identify similarities and differences in the semantic contexts of privacy keywords \hlreview{in the word embeddings (\textit{\ref{analysis}. coding and analysis}).} Details about each of these steps are presented below. \begin{figure}[!h] \centering \includegraphics[width=\linewidth]{method_diagram_v3.jpg} \caption{Methodology flow chart} \label{fig:methodology} \end{figure} \subsection{Data collection} \label{collection} We retrieved tweets related to the Facebook and Cambridge Analytica scandal between April 1st and July 10th, 2018. We focused on tweets in Spanish and English. On March 17, 2018, it was revealed that the data firm Cambridge Analytica used personal data of 87 million Facebook users for political advertising purposes without their consent \cite{schneble2018cambridge,OGHAZI2020531}. This scandal caused the closure of Cambridge Analytica \cite{solon2018cambridge} and numerous lawsuits against Facebook in the USA and the European Union. On Twitter, a \#DeleteFacebook campaign started as a response to this scandal \cite{lin2018deletefacebook}.
\final{As the Cambridge Analytica scandal triggered Twitter users from different world regions (who speak diverse languages) to spontaneously share their opinions, experiences, and perspectives about data privacy, we decided to use a sample of these tweets to answer our research question and test our hypotheses. We used Tweepy\footnote{http://www.tweepy.org/} to collect relevant tweets. Tweepy is a Python library for accessing the standard real-time streaming Twitter API,\footnote{https://developer.twitter.com/en/docs/tweets/filter-realtime/guides/basic-stream-parameters.html} which allows one to freely retrieve tweets that match a given query. If the query is so broad that it includes over 1\% of the total number of tweets posted at that time worldwide, the query's response is sampled \cite{aghababaei2017activity,morstatter2014biased}. The way in which Twitter samples the data is unpublished. Nevertheless, studies have shown that as more data from the API is retrieved, a more representative sample of the Twitter stream is obtained \cite{leetaru_is_nodate,morstatter2013sample}. \hlreview{To obtain relevant tweets, we used Tweepy's language filter to retrieve tweets in Spanish or English. We manually crafted a list of hashtags and keywords related to the Cambridge Analytica scandal. We collected tweets that had at least one of these terms. Examples of these terms are: ``\#DeleteFacebook'', ``\#CambridgeAnalytica'', ``\#Mark Zuckerberg'', ``Facebook'', ``Facebook Cambridge'', and ``Facebook data breach''. Additionally, when appropriate, we added Spanish translations of these terms to build the Spanish dataset.\footnote{The authors are fairly confident of the quality of these translations because some of them are native Spanish speakers while others are native English speakers.} In this way, if a tweet in Spanish had a hashtag in English, the tweet was collected and added to the Spanish dataset.
A full list of the terms used to retrieve our data is available online\footnote{https://github.com/gonzalezf/Regional-Differences-on-Information-Privacy-Concerns}.} Following this procedure, we retrieved more than $470,000$ tweets in Spanish and more than 7.4 million tweets written in English (see Table \ref{Table:NumberOfTweets}). The tweets in Spanish were produced by approximately 220,000 users while tweets in English were generated by about 1.8 million unique Twitter accounts. \begin{table}[!h] \caption{Datasets before and after data cleaning} \label{Table:NumberOfTweets} \begin{tabular}{@{\extracolsep{6pt}}lrrrr} \toprule Dataset & \multicolumn{2}{c}{Spanish} & \multicolumn{2}{c}{English} \\ \cmidrule{2-3} \cmidrule{4-5} & \#Tweets & \#Accounts & \#Tweets & \#Accounts \\ \midrule Total & 472,363 & 222,352 & 7,476,988 & 1,846,542 \\ Original & 106,656 & 47,951 & 1,572,371 & 574,452 \\ With Botometer score & 100,606 & 44,182 & 1,442,112 & 504,214\\ Human-owned & 74,644 & 36,056 & 975,678 & 410,180\\ \bottomrule \end{tabular} \end{table} \subsection{Data pre-processing} \label{preprocessing} As we meant to analyze people's opinions about information privacy, we decided to pre-process our data in three ways. We removed all retweets to avoid analyzing exact duplicates. Afterwards, we sought to identify and filter out tweets that were generated by bots. Our last step was to associate tweets with different world regions. \hlreview{We further explain each of these steps below.} \celeste{ First, we excluded retweets to avoid analyzing exact duplicates of content. This methodology step is suggested by several authors} \cite{HAJJEM2017761,aguero2021discovering,8963749}. \celeste{ We kept tweets, quoted tweets, and replies to tweets. Exclusion of retweets reduced our datasets' size by 80\%. 
We refer to the resulting datasets as \textit{original} tweets (see Table {\ref{Table:NumberOfTweets}}).} We used Botometer \cite{davis2016botornot} to detect and remove tweets created by bots. Botometer uses machine learning to analyze more than one thousand features \cite{badawy2018analyzing}, including tweets' content and sentiment, accounts' and friends' metadata, retweet/mention network structure, and time series of activity \cite{varol2017online,yang2019arming}, to generate a score that ranges from 0 to 1. A higher value indicates a higher likelihood that an inspected account is a bot \cite{badawy2018analyzing}. This tool has reached high accuracy (94\%) in predicting both simple and sophisticated bots \cite{varol2017online,badawy2018analyzing}. Botometer is free and has been widely used\footnote{Since its release in May 2014, Botometer has served over one million requests \cite{davis2016botornot} via its website (https://botometer.iuni.iu.edu) and its Python API (https://github.com/IUNetSci/botometer-python)} \cite{varol2017online,yang2019arming}. \hlreview{Botometer processed all of the Twitter accounts that wrote original tweets. It} returned a score for 44,182 (92.14\%) and 504,214 (87.77\%) \hlreview{accounts} of the Spanish and English datasets, respectively. \hlreview{Botometer cannot generate scores for suspended accounts or those that have their tweets protected. We decided to remove the tweets from these accounts from our datasets because we cannot confidently claim that they come from human accounts.} We applied the Ckmeans \cite{wang2011ckmeans} algorithm to define a threshold to distinguish between human and bot accounts. For each language, we clustered the Botometer scores into five groups, where the first group included the accounts with the lowest scores (more human-like) and the fifth group \hlreview{comprised those} with the highest scores (more bot-like).
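The grouping step just described can be sketched with a minimal dynamic-programming implementation of optimal one-dimensional $k$-means, the idea behind Ckmeans \cite{wang2011ckmeans}. This is an illustrative re-implementation with made-up scores, not the published package, and the function name is our own:

```python
def ckmeans_1d(xs, k):
    """Optimal 1-D k-means via dynamic programming (O(k * n^2) sketch)."""
    xs = sorted(xs)
    n = len(xs)
    # Prefix sums of x and x^2 allow O(1) within-cluster cost queries.
    s1 = [0.0] * (n + 1)
    s2 = [0.0] * (n + 1)
    for i, x in enumerate(xs):
        s1[i + 1] = s1[i] + x
        s2[i + 1] = s2[i] + x * x

    def cost(i, j):
        # Sum of squared deviations of xs[i..j] (inclusive).
        m = j - i + 1
        s = s1[j + 1] - s1[i]
        return (s2[j + 1] - s2[i]) - s * s / m

    INF = float("inf")
    best = [[INF] * n for _ in range(k + 1)]   # best[c][j]: cost of xs[0..j] in c groups
    split = [[0] * n for _ in range(k + 1)]    # start index of the last group
    for j in range(n):
        best[1][j] = cost(0, j)
    for c in range(2, k + 1):
        for j in range(c - 1, n):
            for i in range(c - 1, j + 1):
                cand = best[c - 1][i - 1] + cost(i, j)
                if cand < best[c][j]:
                    best[c][j] = cand
                    split[c][j] = i
    # Walk the split table backwards to recover the k groups.
    clusters, j = [], n - 1
    for c in range(k, 0, -1):
        i = split[c][j] if c > 1 else 0
        clusters.append(xs[i:j + 1])
        j = i - 1
    clusters.reverse()
    return clusters
```

In our pipeline the input would be the list of Botometer scores per language with $k=5$; the lowest value of the fourth group then serves as the human/bot threshold.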
After manually inspecting the accounts around the thresholds of each group, we concluded that the fourth and fifth groups in each dataset were unlikely to contain human accounts. Therefore, we used the fourth group's lowest threshold to discriminate human and bot accounts. Accounts with a score lower than 0.4745 and 0.4947 in Spanish and English, respectively, were considered as human-owned. These thresholds are similar to those used in related work, where scores lower than 0.5 had been considered as humans \cite{varol2017online,badawy2018analyzing}. As a result, our datasets contain 36,056 human-owned accounts that created 74,644 tweets in Spanish and 410,180 accounts that created 975,678 tweets in English. Finally, we used the GeoNames API\footnote{http://www.geonames.org/} to identify the country of residence of Twitter users in our datasets. On Twitter, users can self-report their city or country of provenance. Nevertheless, textual references to geographic locations can be ambiguous. For example, over 60 different places around the world are named ``Paris'' \cite{jackoway2011identification}. To deal with this challenge, we employed the GeoNames API, which is a collaborative gazetteer project that contains more than 11 million entries and alternate names for locations around the world in a variety of different languages \cite{bergsma2013broadly}. Given a text, its algorithm performs operations to recognize potential locations, followed by a disambiguation process. This last step checks hierarchical relations and picks a location by its proximity to other locations mentioned in the text \cite{lambdaalphagammaovarsigma2017comparative}. This tool has yielded results with an accuracy above 80\% \cite{jackoway2011identification}. We found that 80.68\% of users in our Spanish dataset and 78.68\% of users in our English dataset had filled the city or country fields in their profiles.
However, the GeoNames API could not detect the users' location in several cases, for example when inaccurate information was provided (e.g., ``Planet Earth.. where everyone else is from'', ``Mars''). Nonetheless, the tool was able to identify the location of users who created 58.7\% of the Spanish tweets and 59.9\% of the English ones. In the Spanish dataset, most tweets came from Spain (16.5\%) and Latin American countries, such as Mexico (11.9\%), Argentina (6.2\%), and Venezuela (4.7\%). In the English dataset, the majority of tweets came from the United States (32.4\%), followed by United Kingdom (6.9\%), India (3.2\%), and Canada (2.7\%) (see Table \ref{tab:geolocated_number_tweets}). \begin{table}[!h] \caption{Top-10 most frequent user locations in the Spanish and English datasets} \label{tab:geolocated_number_tweets} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{@{\extracolsep{3pt}}lrrrrlrrrr@{}} \toprule \multicolumn{5}{c}{Spanish} & \multicolumn{5}{c}{English} \\ \cmidrule{1-5} \cmidrule{6-10} & \multicolumn{2}{c}{Tweets} & \multicolumn{2}{c}{Users} & & \multicolumn{2}{c}{Tweets} & \multicolumn{2}{c}{Users} \\ \cmidrule{2-3} \cmidrule{4-5} \cmidrule{7-8} \cmidrule{9-10} Location & \# & \% & \# & \% & Location & \# & \% & \# & \% \\ \midrule Spain & 12,342 & 16.5 & 5,483 & 15.2 & U.S & 315,913 & 32.4 & 132,155 & 32.2 \\ Mexico & 8,852 & 11.9 & 4,720 & 13.1 & U.K & 66,901 & 6.9 & 29,656 & 7.2 \\ Argentina & 4,648 & 6.2 & 2,505 & 6.9 & India & 30,781 & 3.2 & 12,424 & 3.0 \\ Venezuela & 3,518 & 4.7 & 1,094 & 3.0 & Canada & 26,487 & 2.7 & 11,564 & 2.8 \\ Colombia & 2,447 & 3.3 & 1,348 & 3.7 & Australia & 13,375 & 1.4 & 6,501 & 1.6 \\ U.S & 1,823 & 2.4 & 948 & 2.6 & Germany & 9,260 & 0.9 & 3,493 & 0.9 \\ Chile & 1,806 & 2.4 & 1,073 & 3.0 & France & 8,605 & 0.9 & 3,006 & 0.7 \\ Peru & 1,116 & 1.5 & 619 & 1.7 & Nigeria & 5,156 & 0.5 & 2,787 & 0.7 \\ Ecuador & 893 & 1.2 & 455 & 1.3 & U.A.E & 5,120 & 0.5 & 1,504 & 0.4 \\ Brazil & 587 & 0.8 & 172 & 0.5 & 
South Africa & 4,962 & 0.5 & 2,912 & 0.7 \\ Other & 5,815 & 7.8 & 3,100 & 8.6 & Other & 98,100 & 10.1 & 42,658 & 10.4 \\ Unknown & 30,797 & 41.3 & 14,574 & 40.4 & Unknown & 391,018 & 40.1 & 161,767 & 39.4 \\ \bottomrule \end{tabular} \end{adjustbox} \end{table} To compare information privacy concerns by geographical region, we divided the Spanish Twitter dataset into two sets: tweets written by users from (1) Latin America and (2) Europe. Similarly, we categorized the English dataset into three groups: tweets written by users from (1) North America, (2) Europe, and (3) Asia. As a result, five different language-regional datasets were generated. The Spanish-Latin America dataset includes 27,839 tweets written by 13,937 users. The Spanish-Europe dataset comprises 12,799 tweets created by 5,774 accounts. Regarding the English data, the North America dataset includes 342,400 tweets generated by 142,719 users, the English-Europe one has 111,745 tweets of 46,927 users, and the English-Asia dataset contains 42,208 tweets produced by 17,929 accounts (Table \ref{tab:regional_number}). \celeste{We did not consider other subsets because of their small size. In Spanish, we only collected 1,929 tweets from North America and 217 tweets from Asia. In English, we only collected 3,851 tweets from Latin America.
} \begin{table}[!h] \caption{Tweets and users in each dataset} \label{tab:regional_number} \begin{tabular}{@{}clrr@{}} \toprule \multicolumn{1}{l}{Language} & Region & \# of tweets & \# of users \\ \midrule \multirow{2}{*}{Spanish} & Latin America & 27,839 & 13,937 \\ & Europe & 12,799 & 5,774 \\ \midrule \multirow{3}{*}{English} & North America & 342,400 & 143,719 \\ & Europe & 111,745 & 46,927 \\ & Asia & 42,208 & 17,929 \\ \bottomrule \end{tabular} \end{table} \subsection{Text mining: Word embeddings to identify semantic contexts} \label{mining} We employed word embeddings \cite{mikolov2013distributed} to characterize the semantic context in which privacy-related keywords are framed. Based on co-occurrence of terms, word embeddings create a reduced multi-dimensional representation of a corpus. Such a representation can be used to analyze the semantic proximity among the corpus' terms. Analyzing the closest terms of a given term can reveal the semantic context in which it is used \cite{rho2018fostering,gonzalez2019information}. We created a set of word embeddings to enable cross-language and cross-regional comparisons. First, we built word embeddings for the Spanish and English datasets (containing both geolocated and non-geolocated tweets). Then, we generated word embeddings for each of our five language-regional datasets. Before creating the word embeddings, we transformed the text to lowercase. We also removed stop-words and digits from the tweets. We customized our stop-words to ensure that symbols like ``\#'' were removed but not the words that follow them. Links and usernames were removed. Words with a total frequency lower than three were ignored. These steps downsized the vocabulary by approximately 67\% (details in Table \ref{tab:vocabulary_size_datasets}).
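For concreteness, the cleaning steps above can be sketched as follows. This is an illustrative Python sketch rather than our exact pipeline; the stop-word list shown is a tiny stand-in for the customized per-language lists, and the minimum-frequency cutoff can be delegated to the embedding trainer (e.g., a min-count of 3).

```python
import re

# Illustrative stop-word subset; the actual lists were customized per language.
STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "at"}

def preprocess(tweet: str) -> list[str]:
    """Lowercase, strip links/usernames/digits, keep words after '#', drop stop-words."""
    text = tweet.lower()
    text = re.sub(r"https?://\S+", " ", text)  # remove links
    text = re.sub(r"@\w+", " ", text)          # remove usernames
    text = text.replace("#", " ")              # drop '#' but keep the word that follows it
    text = re.sub(r"\d+", " ", text)           # remove digits
    tokens = re.findall(r"[^\W\d_]+", text)    # word characters only
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("Check #privacy settings at https://example.com @user123 now!"))
# ['check', 'privacy', 'settings', 'now']
```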
\begin{table}[!h] \caption{Initial and final vocabulary size in each dataset} \label{tab:vocabulary_size_datasets} \begin{tabular}{@{}clrr@{}} \toprule \multicolumn{1}{l}{Language} & Region & Initial vocabulary size & Final vocabulary size\\ \midrule \multirow{3}{*}{Spanish} & All & 65,036 & 21,736\\ \cmidrule{2-4} & Latin America & 35,149 & 11,359 \\ & Europe & 21,630 & 6,696 \\ \midrule \multirow{4}{*}{English} & All & 244,371 & 76,128\\ \cmidrule{2-4} & North America & 115,710 & 41,109 \\ & Europe & 66,042 & 23,514 \\ & Asia & 39,120 & 13,896 \\ \bottomrule \end{tabular} \end{table} We considered eight word embedding architecture combinations involving \textit{Word2Vec/FastText}, \textit{CBOW/Skipgram} and different numbers of dimensions and epochs. As there is still no consensus about which word embedding evaluation method is most adequate \cite{bakarov2018survey}, we evaluated each word embedding architecture for the English dataset over 18 intrinsic conscious evaluation methods \cite{bakarov2018survey} using a word embedding benchmark library.\footnote{https://github.com/kudkudak/word-embeddings-benchmarks} \inlinecite{bakarov2018survey} categorizes the evaluation methods into three groups: \begin{itemize} \item Word semantic similarity (WSS): RW, MEN, Mturk287, WS353R, WS353S, WS353, SimLex999, RG65 and TR9856 \item Word Analogy (WA): Google Analogy Test set, MSR and SemEval 2012-2 \item Concept categorization (CC): AP, BLESS, BM, ESSLI 1A, ESSLI 2B, and ESSLI 2C \end{itemize} To choose the best architecture, we designed a point system to reflect the embeddings' performance. For each evaluation method, the word embedding with the highest accuracy received a score of 8 points, the embedding with the second-highest accuracy was assigned 7 points, and so on. After running all evaluation methods, we summed the points obtained for each architecture.
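The point system can be sketched as follows (a minimal Python sketch with made-up accuracy values; with the eight architectures of our study, the per-method maximum is 8 points):

```python
# Sketch of the ranking-based point system: for each evaluation method, the
# best-scoring architecture earns as many points as there are architectures,
# the runner-up one fewer, and so on; points are then summed across methods.
# Accuracy values below are invented for illustration only.

def point_scores(accuracies_by_method: dict[str, dict[str, float]]) -> dict[str, int]:
    totals: dict[str, int] = {}
    for method, accs in accuracies_by_method.items():
        ranked = sorted(accs, key=accs.get, reverse=True)
        for rank, arch in enumerate(ranked):
            totals[arch] = totals.get(arch, 0) + (len(ranked) - rank)
    return totals

demo = {
    "WS353": {"w2v_cbow_300_50": 0.61, "w2v_sg_100_10": 0.58, "ft_cbow_100_10": 0.44},
    "RG65":  {"w2v_cbow_300_50": 0.70, "w2v_sg_100_10": 0.66, "ft_cbow_100_10": 0.52},
}
print(point_scores(demo))
# {'w2v_cbow_300_50': 6, 'w2v_sg_100_10': 4, 'ft_cbow_100_10': 2}
```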
With the negative sampling and window size parameters both set to 5, a Word2Vec CBOW architecture with 300 dimensions trained for 50 epochs achieved the highest total score (see Table \ref{tab:WordEmbeddingsEvaluation}). The same architecture had the best performance for all English regional datasets. Given that these evaluation methods are not available for a Spanish corpus, the same architecture was used to create all the Spanish word embeddings. \begin{table}[!h] \caption{Word embedding architectures and their evaluation scores. Best performance is indicated in bold.} \label{tab:WordEmbeddingsEvaluation} \begin{tabular}{@{\extracolsep{6pt}}llrrrrrr@{}} \toprule \multicolumn{4}{c}{Architecture} & \multicolumn{4}{c}{Evaluation word embedding score} \\ \cmidrule{1-4} \cmidrule{5-8} Type & Model & Dim. & Epochs & WSS & WA & CC & Total \\\midrule FastText & CBOW & 100 & 10 & 22 & 19 & 25 & 66 \\ Word2Vec & Skipgram & 100 & 10 & 40 & 4 & 35 & 79 \\ Word2Vec & CBOW & 100 & 10 & 35 & 11 & 33 & 79 \\ Word2Vec & CBOW & 100 & 50 & 34 & 16 & 41 & 91 \\ Word2Vec & CBOW & 100 & 300 & 33 & 9 & 31 & 73 \\ Word2Vec & CBOW & 300 & 10 & 40 & 18 & 32 & 90 \\ \textbf{Word2Vec} & \textbf{CBOW} & \textbf{300} & \textbf{50} & \textbf{53} & \textbf{21} & \textbf{36} & \textbf{110} \\ Word2Vec & CBOW & 300 & 300 & 31 & 10 & 29 & 70 \\ \bottomrule \end{tabular} \end{table} Previous work has reported that word embeddings can reflect gender bias as a result of social constructs embedded in the data \cite{zhao2018learning,jha2017does}. To reduce gender bias while preserving the embeddings' useful properties, such as the ability to cluster related concepts, we followed the approach of \inlinecite{bolukbasi2016man}. This is a post-processing method that projects gender-neutral words to a subspace which is perpendicular to a gender dimension, defined by a set of terms associated with gender such as \textit{girl}, \textit{boy}, \textit{mother} and \textit{father} \cite{zhao2018learning}.
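The core projection step of this post-processing, removing a word vector's component along the gender direction, can be sketched as follows (a toy two-dimensional numpy sketch; the full method additionally includes an equalization step over word pairs):

```python
import numpy as np

def neutralize(word_vec: np.ndarray, gender_dir: np.ndarray) -> np.ndarray:
    """Remove the component of a word vector along the gender direction."""
    g = gender_dir / np.linalg.norm(gender_dir)
    return word_vec - np.dot(word_vec, g) * g

# Toy 2-d example: gender direction from a definitional pair ("woman" - "man").
woman, man = np.array([1.0, 1.0]), np.array([-1.0, 1.0])
gender_dir = woman - man                      # points along the first axis
nurse = np.array([0.4, 0.8])                  # hypothetical gender-neutral word
print(neutralize(nurse, gender_dir))          # component along the gender axis removed
```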
We applied the following procedure to our English embeddings: (1) we identified a gender subspace by selecting pairs of English words that can reflect a gender direction in each word embedding, such as \textit{woman-man}, \textit{daughter-son} and \textit{female-male}, (2) we ensured that gender-neutral words have zero projection in the gender subspace, and (3) we made neutral words equidistant to all pairs of terms contained in a collection of equality sets. An equality set is composed of a pair of words that should differ only in the gender component, such as \textit{\{grandmother, grandfather\}} and \textit{\{guy, gal\}}. During this process, we used the English terms suggested by \inlinecite{bolukbasi2016man}. For the Spanish word embeddings, we used the Google Translate API\footnote{https://cloud.google.com/translate/} to translate the same terms. \subsection{Manual coding \& analysis} \label{analysis} We \hlreview{conducted a systematic qualitative examination of the semantic contexts in which information privacy terms appear according to the word embeddings. First, we conducted open coding of the semantic neighborhoods of privacy-related keywords. After several iterations, we developed a set of categories to characterize them. To assess if information privacy concerns were present (RQ1), we contrasted these categories to a widely accepted framework to describe internet users' information privacy concerns.} We focused our investigation on four keywords in English: \textit{information}, \textit{privacy}, \textit{users} and \textit{company}. We used their corresponding translations in Spanish: \textit{informaci\'{o}n}, \textit{privacidad}, \textit{usuarios} and \textit{empresa}. \hlreview{We chose to include \textit{information} and \textit{privacy} because they are the main concepts under study. We could have added \textit{data}; however, its semantic context is almost identical to that of \textit{information}. Thus, adding it would have resulted in a mere duplication of terms.
To increase the size of our dataset, we decided to add \textit{users} and \textit{company} because of their key roles with respect to controlling and safeguarding personal information. We also considered these terms more specific to the vocabulary of the data privacy domain than alternative ones (e.g., people, organizations)}. For each embedding, we retrieved the closest terms to the four keywords. Closeness between each term and a keyword was measured using cosine similarity. For instance, the closest terms to the keyword \textit{information} in the English word embedding were \textit{info}, \textit{data}, \textit{details}, and \textit{personal}, in that order. \hlreview{We chose to study the 40 closest terms after careful examination of the lists of close terms according to our different embeddings. After the 40th position in these lists, we rarely found terms that were even slightly related to information privacy. We reason that the value of this threshold is dataset-dependent. It is likely to be related to the vocabulary sizes (ours range from 6,696 to 41,109). In our case, we opted for using 40 as the threshold to study the semantic context of each keyword. Hence, we qualitatively analyzed 160 terms for each embedding. Overall, our dataset for qualitative analysis included 1,120 terms.} Two of the authors conducted open coding of the 320 terms retrieved from the Spanish and English word embeddings. Open coding is a process to identify, define and develop categories based on properties and dimensions of raw data \cite{williams2019art}. We used this technique to identify distinct concepts and themes from the extracted terms \cite{williams2019art}. After inspecting the retrieved terms during several iterations, the coders developed a coding guideline with multiple concept categories and their corresponding explanations to classify the retrieved terms.
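The retrieval of closest terms can be sketched as follows (a plain-numpy cosine ranking over toy vectors; in practice the vectors come from the trained embeddings and topn is 40):

```python
import numpy as np

def closest_terms(keyword: str, vectors: dict[str, np.ndarray], topn: int = 40) -> list[str]:
    """Rank vocabulary terms by cosine similarity to a keyword's vector."""
    kv = vectors[keyword]
    kv = kv / np.linalg.norm(kv)
    sims = {
        term: float(np.dot(v / np.linalg.norm(v), kv))
        for term, v in vectors.items() if term != keyword
    }
    return sorted(sims, key=sims.get, reverse=True)[:topn]

# Toy vectors for illustration (real ones come from the trained embeddings).
toy = {
    "information": np.array([1.0, 0.1]),
    "info":        np.array([0.9, 0.2]),
    "data":        np.array([0.8, 0.4]),
    "banana":      np.array([-0.2, 1.0]),
}
print(closest_terms("information", toy, topn=2))  # ['info', 'data']
```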
For example, the term \textit{info} extracted from the keyword \textit{information} was categorized as \textit{synonymous}, given that it carries the same meaning. The terms \textit{data} and \textit{details} were classified as \textit{data} \& \textit{information}, and \textit{personal} was labeled as \textit{attribute or characteristic}. During a series of meetings, both coders compared their categorization process and refined a common coding guideline, establishing rules that would increase the categorization's reliability. The goal during this process was to segregate, group, regroup and re-link the terms to consolidate the meaning and explanation of the categories \cite{williams2019art}. At the end of this process, 15 categories emerged from the data (see Table \ref{tab:coding_guideline}). Considering the four keywords, inter-coder reliability measures (Cohen's kappa) of 0.685 and 0.754 were obtained for the Spanish and English datasets, respectively. These scores indicate substantial agreement \cite{viera2005understanding} during the process. We repeated the procedure for the regional datasets. The coders categorized the 40 closest terms to the keywords according to the coding guideline. Through an iterative process, a total of 800 words were manually coded. No new categories emerged from the data. The average Cohen's kappa was above 0.722 in all the regional datasets. \begin{table}[!h] \caption{Inter-rater reliability (Cohen's kappa) score by dataset} \label{tab:kappa_score} \resizebox{\textwidth}{!}{ \begin{tabular}{@{}clrrrrr@{}} \toprule \multicolumn{1}{c}{Language} & Region & Information & Privacy & Company & Users & \textbf{Avg.
by dataset}\\ \midrule \multirow{3}{*}{Spanish} & All & 0.864 & 0.630 & 0.604 & 0.642 & \textbf{0.685}\\ & Latin America & 0.749 & 0.673 & 0.687 & 0.778 & \textbf{0.722} \\ & Europe & 0.820 & 0.710 & 0.774 & 0.827 & \textbf{0.783}\\ \midrule \multirow{4}{*}{English} & All & 0.768 & 0.747 & 0.672 & 0.829 & \textbf{0.754}\\ & North America & 0.831 & 0.721 & 0.912 & 0.971 & \textbf{0.859} \\ & Europe & 0.777 & 0.743 & 0.805 & 0.807 & \textbf{0.783} \\ & Asia & 0.832 & 0.685 & 0.736 & 0.833 & \textbf{0.771} \\ \midrule \multicolumn{2}{c}{\textbf{Average of all embeddings}} & \textbf{0.806} & \textbf{0.701} & \textbf{0.741} & \textbf{0.812} & \textbf{0.765} \\ \bottomrule \end{tabular} } \end{table} \hlreview{To assess if information privacy concerns were present in a Twitter dataset about a data-breach scandal (RQ1), we compared the resulting categories with the IUIPC's dimensions: \textit{collection}, \textit{control}, and \textit{awareness}. IUIPC is a theory-based model that has been widely used to study information privacy concerns on the internet} (see Section \ref{iuipc}). \celeste{Then, we tested the null hypotheses about differences in information privacy concerns across language and world regions (H0a and H0b). To do so, we used a Chi-squared test to assess if the proportions of terms in the semantic contexts were significantly different across word embeddings.
In all of these tests, we accounted for multiple comparisons by applying alpha adjustment according to Šidák} \cite{doi:10.1080/01621459.1967.10482935,article_sidak}. \celeste{This method allowed us to control the probability of making false discoveries when performing multiple hypothesis tests.} \section{Results} \label{sec:results} \hlreview{In this section, we address our research question and test the null hypotheses about differences in information privacy concerns by language and world regions.} As explained \hlreview{above, we create word embeddings for our Spanish and English datasets of tweets about the Cambridge Analytica scandal. Then, we examine how the semantic context of four keywords varies across language and world regions. The semantic context is operationalized as the 40 closest terms to each keyword: \textit{information}, \textit{privacy}, \textit{company}, and \textit{users}.} As an example, Table \ref{tab:most_similar_terms_company_users} and Table \ref{tab:most_similar_terms_information_privacy} show the 20 closest terms to the keywords, according to the Spanish and English word embeddings.\footnote{\celeste{Terms in Spanish were translated to English by the authors.}} Full results are available online.\footnote{https://github.com/gonzalezf/Regional-Differences-on-Information-Privacy-Concerns} \begin{table}[!h] \caption{Top 20 closest terms to \textit{information} and \textit{privacy} in the Spanish and English word embeddings} \label{tab:most_similar_terms_company_users} \begin{tabular}{@{\extracolsep{6pt}}llll} \toprule \multicolumn{2}{c}{Information} & \multicolumn{2}{c}{Privacy} \\ \cmidrule{1-2} \cmidrule{3-4} Spanish & English & Spanish & English \\ \midrule data & info & intimacy & data privacy \\ info & data & data & gdpr \\ third parties & details & confidentiality & protection \\ fact & personal & scams & users \\ third & users & personal data & user \\ interviewer & profiles & privacy policy & consumers \\
facebook & identifiers & digital security & data \\ users & personal data & data protection & transparency \\ privacy & private & identity & facebook \\ consent & records & minor & personal \\ authorization & user & facebook & consent \\ purposes & consent & third parties & security \\ private & advertisers & information & sharing \\ personal & permission & cybersecurity & data protection \\ location & metadata & sensitive & tos \\ ecomlancer & datas & emails & consumer \\ serve & companies & cookies & collection \\ viatec & individuals & protect yourself & opt \\ intimate & freely & suppose & trust \\ profiles & informations & take care of your data & privacyrights \\ \bottomrule \end{tabular} \end{table} \begin{table}[!h] \caption{Top 20 closest terms to \textit{company} and \textit{users} in the Spanish and English word embeddings} \label{tab:most_similar_terms_information_privacy} \begin{tabular}{@{\extracolsep{6pt}}llll} \toprule \multicolumn{2}{c}{Company} & \multicolumn{2}{c}{Users} \\ \cmidrule{1-2} \cmidrule{3-4} Spanish & English & Spanish & English \\ \midrule company & companies & third parties & user \\ consultant & firm & sensitive & consumers \\ firm & companys & citizens & personal \\ organization & platform & illegally & peoples \\ obtain & firms & users & subscribers \\ relation & organization & authorization & customers \\ researcher & data & profiles & people \\ deliver & entity & private & facebook \\ plot & giant & used & data \\ finance & user & people & fb \\ ltd & facebook & illegal & apps \\ way & corporation & clients & individuals \\ facebook & fb & user & advertisers \\ illegally & organisation & improperly & privacy \\ companies & business & obtained & information \\ ca & users & nametests & app \\ own & businesses & information & citizens \\ brand & service & facebook & profiles \\ decide & ca & data & companies \\ scl & site & voters & collected \\ creole & personal & cambridgeanalytics & private \\ relations & organizations &
infringement & consent \\ cambridge & employees & use & accounts \\ data & agency & purposes & permissions \\ laboratories & co & authorized & use \\ \bottomrule \end{tabular} \end{table} \subsection{\hlreview{Information privacy concerns present in a Twitter dataset}} As a result of the coding process, we define 15 categories to \hlreview{analyze the closest terms} (see Table \ref{tab:coding_guideline}). To answer our first research question, we compare our categories with IUIPC, a framework widely used to measure information privacy concerns in the context of the Internet \cite{liu2018impact}. We find relationships between some of our categories and the three IUIPC concepts, as well as our initial keywords, as shown in Figure \ref{fig:iuipc_dimensions_per_language_details}. \begin{table}[!ht] \setlength{\tabcolsep}{2pt} \renewcommand{\arraystretch}{1} \scriptsize \centering \caption{\hlreview{Coding guideline to classify the semantic contexts of our keywords}. \celeste{Yellow background is for categories that match our initial keywords. Light blue background is for categories related to IUIPC.
Gray background is for other categories.}} \label{tab:coding_guideline} \begin{tabular}{p{0.16\textwidth}p{0.26\textwidth}p{0.27\textwidth}p{0.27\textwidth}} \toprule Categories & Description & Spanish Examples & English Examples\\ \midrule \rowcolor{Yellow} Data \& Information & Direct references to these concepts and examples of user data and its meaning & records, location, data, emails, profiles, contacts & profile, messages, documents, location, accounts, metadata \\ \rowcolor{Yellow} Companies & Entities that manipulate user data for their own purposes & facebook, cambridgeanalytica, scl, grindr, advertiser & facebook, cambridgeanalytica, google, emerdata, apple \\ \rowcolor{Yellow} Users & Data owners & users, consumers, population, citizens, people & users, consumers, subscribers, citizens, people \\ \rowcolor{LightCyan} Data collection, handling and/or storage & Mechanisms and verbs associated with obtaining, collecting and handling data & use, obtained, log in, collecting, apps, mechanisms & collected, store, access, databases, usage, analyzed\\ \rowcolor{LightCyan} Ownership agency & User control over personal information & authorization, agree, consent, protect & consent, opt, shield, autonomy \\ \rowcolor{LightCyan} Privacy \& security terms & Words associated with data privacy and security & cybersecurity, confidentiality, intimacy, safe, secure & confidenciality, transparency, privately, dataprivacy, security \\ \rowcolor{LightCyan} Security mechanisms & Tools and techniques that implement security services & credentials, password, biometry, key & encrypted, password, biometric, privacybydesign \\ \rowcolor{LightCyan} Privacy \& security risks & Entities or bad practices that can compromise sensitive data & trojan, cybercriminal, illegally, scams, stealing & grooming, databreach, misuse, illegally, violated \\ \rowcolor{LightCyan} Regulation & Law, rule or regulation that controls the use of user data & rgpd, right to be forgotten, privacypolicy, iso,
habeasdata & gdpr, tos, hippa, privacyrights, regulation \\ \rowcolor{Gray} Synonymous & Same meaning as the keyword & info, company, firm, user, private & info, companies, firm, privacy, user \\ \rowcolor{Gray} Attribute or characteristic & A characteristic of the keyword & false, private, external, specialized, britain & sensitive, giant, holistic, strategic, affected \\ \rowcolor{Gray} Action & Action or activity linked to the keyword & define, explode, promote, attend, move & order, solve, reveal, update, confirming \\ \rowcolor{Gray} Third party & Cannot be categorized as User or Company; there is not sufficient contextual information to do so & rrhh, medicians, interviewer, ex employee, philippine & government, agency, indians, europeans, developers \\ \rowcolor{Gray} Reaction or attitude & Way of feeling or acting toward an entity & guilt, honest, overfall, suffers, handle & willingly, freely, tighter, restricting, forced \\ \rowcolor{Gray} Undetermined & Relationship between keyword and term is unknown & approximately, v.i.p, ground, higher, depth & psychological, millions, new, group, image \\ \bottomrule \end{tabular} \end{table} \begin{figure}[!h] \centering \includegraphics[width=0.93\linewidth]{proposed_mapping_journal_july.png} \caption{Relationships between our categories and IUIPC dimensions} \label{fig:iuipc_dimensions_per_language_details} \end{figure} Three categories match our initial keywords (Table \ref{tab:coding_guideline}, yellow background): (1) \textbf{data \& information} is associated with the \textit{information} keyword, including direct references to this concept and examples of user data and its meaning (e.g., ``messages'' and ``metadata''), (2) \textbf{companies} includes terms about organizations that use personal data for their own purposes, such as ``Facebook'' and ``Apple'', and (3) \textbf{users} contains references to this keyword (e.g., ``customers'', ``people'').
Five categories are related to IUIPC (Table \ref{tab:coding_guideline}, light blue background). We identify a \textbf{data collection, handling and/or storage} category that contains words associated with technology or techniques useful to obtain, collect or handle data (e.g., ``databases'', ``services'', ``app'', ``website''). This matches the IUIPC's \textit{collection} dimension, which refers to the ``degree to which a person is concerned about the amount of individual-specific data possessed by others relative to the value of benefits received'' \cite{malhotra2004internet}. \celeste{Examples of tweets that include terms that fit this category are:} \begin{quotation} `\textsf{$@$hidden\_username $@$hidden\_username This is bigger than facebook because all social media outlets collect and store this data on every user. If you haven't looked, check and see what twitter has collected on you. Free apps are not free, neither are paid ones}' \end{quotation} \begin{quotation} `\textsf{Facebook collects and sells PII data. Google and others maintain behavioral data anonymously and serve ads against it, but don't connect that data to identities that are sold to advertisers. I was not aware Facebook was such an anomaly.}' \end{quotation} The IUIPC's \textit{control} dimension denotes concerns about control over personal information. This is often exercised through approval, modification and the opportunity to opt in or opt out \cite{malhotra2004internet}. Terms related to this dimension appeared in the coding phase (e.g., ``consent'', ``opt'', ``permission'') and were categorized as \textbf{ownership agency}. This category also includes advice directed to users and terms about good privacy practices (e.g., ``prevent'', ``protect''). \hlreview{Examples of tweets related to this category are:} \begin{quotation} `\textsf{If anything we should learn from the \#Facebook data breach.
Don't volunteer information and prevent that secondary data collection by using \#adblocker and \#VPN}' \end{quotation} \begin{quotation} `\textsf{Cambridge Analytica whistleblower Christopher Wylie urges U.S. senators to focus less on data consent and more on the idea that it's almost impossible to opt out of, for example, Google.}' \end{quotation} The third IUIPC dimension is \textit{awareness}, which refers to an individual's concerns about her/his awareness of organizational information privacy practices \cite{malhotra2004internet}. Three of our categories are associated with this dimension: (1) \textbf{privacy and security terms}, which includes words associated with data privacy and security such as ``confidentiality'', ``transparency'' and ``safety''; (2) \textbf{security mechanisms}, which refers to tools and techniques that implement security services (e.g., ``password'', ``encryption''); and (3) \textbf{privacy \& security risks}, which denotes entities or bad practices that can compromise sensitive data, for example: \textit{``troyano''} (trojan), ``databreach'', ``grooming'' and \textit{``ciberdelincuente''} (cybercriminal). \hlreview{Tweets that use these terms are:} \begin{quotation} `\textsf{Hmm- what do you think? I forsee a wave of new social network startups- will any be able to rise? Besides privacy and transparency what else would you want from a social network? \#swtech}' \end{quotation} \begin{quotation} `\textsf{WhatsApp Co-Founder To Leave Company Amid Disagreements With Facebook. Facebook's desire to weaken WhatsApp's encryption and collect more personal data reportedly fueled the decision}' \end{quotation} \begin{quotation} `\textsf{Canadian federal privacy officials warned that third-party developers' access to Facebook users' personal information raises serious privacy risks back in 2009. $@$hidden\_link}' \end{quotation} Another privacy-related category emerges from our coding but cannot be \hlreview{easily} associated with an IUIPC dimension.
This is the \textbf{regulation} category, which includes terms associated with laws and rules that control the use of personal data such as ``gdpr'' in reference to the European General Data Protection Regulation or ``tos'' in reference to Terms of Services. \hlreview{Examples of tweets with these terms are:} \begin{quotation} `\textsf{New regulation in Europe called gdpr makes companies liable for data breaches with penalties which include fines of a percentage of global turnover. It feels like all Zuckerberg is liable for is a slap on the wrist and having to apologise in public}' \end{quotation} \begin{quotation} `\textsf{\#Today we are confirming that multiple snippets of data from CI that was lifted from facebook are in Russia. If you are an EU citizen this means you have a right to sue both companies for gdpr based infringements. We will be leading this cause should no one else step up....}' \end{quotation} \begin{quotation} `\textsf{Senator to \#Zuckerberg: Your terms of services are only a few pages long. People complain when online contracts are too long and filled with legalese. Now lawmakers are complaining they're too short. What's the threshold for length and detail, and how do we decide?}' \end{quotation} Other categories are identified as well (Table \ref{tab:coding_guideline}, gray background). The \textbf{attribute or characteristic} category contains modifiers of a specific keyword. For example, the term ``sensitive'' emerges from the closest terms to \textit{information}, and the term \textit{``britanica''} (British) appears among the nearest terms to \textit{company}. The \textbf{action} category includes words related to an act. For instance, the verbs \textit{``obtener''} (obtain) and \textit{``entregar''} (deliver) come out among the closest terms to \textit{company}. 
The \textbf{third party} category contains terms related to entities that cannot be categorized as user or company because there is not sufficient contextual information to do so, such as ``indians'', ``third'', ``individuals'' and ``americans''. Additionally, the \textbf{reaction or attitude} category comprises terms that represent a way of feeling or acting toward a person, thing or situation. For example, the terms ``deny'' and ``admitted'' are present in the closest terms to \textit{company}. A \textbf{synonymous} category emerged during the process as well. It contains terms equivalent to each keyword. For example, the terms ``info'' and ``informations'' are close to \textit{information}, and the terms ``companys'', ``corporation'', ``companies'' and ``firm'' appear among the closest terms to \textit{company}. Terms with no clear relation to the keywords were classified as \textbf{undetermined}. \begin{figure}[!h] \centering \includegraphics[width=0.85\linewidth]{categories_and_terms_closest_to_privacy.png} \caption{This force-directed graph represents the open coding categories related to the keyword \textit{privacy} and provides examples of the terms that were coded as each category. Categories with higher frequency are larger and closer to the keyword. Yellow nodes represent keywords and light-blue nodes denote privacy-related categories.} \label{fig:closest_terms_to_privacy} \end{figure} We used force-directed graphs \cite{kobourov2004force} to represent all the categories that emerged from the analysis of the semantic context of each keyword. Figure \ref{fig:closest_terms_to_privacy} shows the categories related to the keyword \textit{privacy}. In this graph, distance represents closeness in the semantic context. For example, terms that were categorized as regulation are closer to privacy than terms that were categorized as security mechanisms.
Additionally, the visualization shows examples of terms in each category in Spanish or English.\footnote{Terms in Spanish were translated to English by the authors. These terms are shown in italics.} Force-directed graphs of the categories and terms associated with the other keywords (\textit{information}, \textit{company}, and \textit{users}) are available online.\footnote{\url{https://andreafigue.github.io/word_embeddings/visualization.html}} \hlreview{Overall, we observe that the semantic contexts of four privacy-related keywords include terms corresponding to information privacy concerns. We illustrate this presence in Figure} \ref{fig:iuipc_dimensions_per_language_details}. \hlreview{We positioned each IUIPC dimension at the intersection between two of our keywords.} \textit{Companies} carry out collection, handling and/or storage activities regarding \textit{data} \& \textit{information}. \textit{Users} exercise (some) agency over the control of their \textit{data} \& \textit{information}. The awareness dimension arises from the \textit{users}' perception of the \textit{companies}' practices. \hlreview{Our results suggest that the awareness dimension might be further categorized into sub-topics, such as awareness of} privacy and security terms, security mechanisms, and privacy and security risks. Beyond what the IUIPC model proposes, we find that \textit{regulations} are relevant to Twitter users who talk about Cambridge Analytica.
We position this concept close to \textit{awareness}, as it is considered an environmental factor that relates to information privacy concerns \cite{lee2019information,va_zou2018ve,MOHAMMED2017254} \hlreview{but is not integrated into the IUIPC.} \subsection{Emphasis on information privacy concerns across languages and world regions} \hlreview{As we are able to observe the presence of information privacy concerns in the Twitter datasets, we can now turn to test the null hypotheses regarding differences across language and world regions.} We compare the \hlreview{emphasis on} \hlreview{information privacy concerns (IPC)} in the semantic contexts that emerge from the different word embeddings. Figure \ref{fig:global_categories_by_language_region} reports the distribution of terms that relate to the initial keywords (Table \ref{tab:coding_guideline}, yellow background) and IPC (Table \ref{tab:coding_guideline}, light blue background) in each language and world region under study. \textit{Others} includes all remaining categories. \celeste{To test our hypotheses, we performed Chi-square goodness-of-fit tests. Because we ran multiple tests, we applied the Šidák correction to counteract the problem of multiple comparisons, thus controlling the family-wise error rate. According to the Šidák adjustment, to maintain an overall alpha of 0.05 for the collection of 10 tests, null hypotheses can be rejected when $p < 0.0102$.} \begin{figure}[!h]% \centering \subfloat[\centering By Language]{{\includegraphics[width=0.49\linewidth]{IPC_language.jpg} }}% \hfill \subfloat[\centering By world-region]{{\includegraphics[width=0.49\linewidth]{IPC_region.jpg} }}% \caption{Proportion of categories by language (a) and world region (b)}% \label{fig:global_categories_by_language_region} \end{figure} \hlreview{We find no significant differences in the emphasis on information privacy concerns across languages or regions} (see Table \ref{tab:null_hypothesis_IPC_tokens}).
\hlreview{Thus, we cannot reject the null hypotheses. We conclude that IPC are present at similar rates in Spanish and English.} \hlreview{They cover a considerable proportion of the semantic contexts, with more than 30\% of terms in both languages.} \hlreview{Considering the regional datasets, IPC describe between 20\% and 40\% of the terms.} \celeste{While we observe some variation in emphasis on IPC across regions, with the largest proportion in the Latin American dataset and the smallest fraction in the Asian data, the differences across regions are not large enough to be statistically significant.} \hlreview{The rest of the terms are better described by our initial categories, such as \textit{company}, \textit{information} and \textit{users}. Compared to the IPC category, all of them cover smaller fractions of the semantic contexts under study.} It should \hlreview{also} be noted that irrelevant categories (grouped as \textit{others}) add up to \hlreview{large proportions} in all datasets, ranging from 30\% to more than 60\%. \hlreview{Together, these results reveal that while a social media dataset about a data breach scandal does bring relevant content about information privacy concerns, it comes with a fair amount of noisy content.} \begin{table}[!h] \caption{Results of Chi-squared tests to compare proportions of terms by language and world regions. The null hypothesis was rejected if $p<.0102$.} \label{tab:null_hypothesis_IPC_tokens} \begin{tabular}{@{}p{0.67\textwidth}rrrr} \toprule \multicolumn{1}{c}{Null hypothesis} & \multicolumn{1}{c}{$\chi^2$} & \multicolumn{1}{c}{N} & \multicolumn{1}{c}{DF} & \multicolumn{1}{c}{$p$ value} \\ \midrule There is no difference in \% of \textit{IPC} terms between languages & 0.15 & 110 & 1 & .70 \\ There is no difference in \% of \textit{IPC} terms among world regions & 8.04 & 237 & 4 & .09 \\ \midrule There is no difference in \% of \textit{collection} terms between languages & 11.65 & 31 & 1 & \textbf{\textless{}.001} \\ There is no difference in \% of \textit{collection} terms among world regions & 10.97 & 68 & 4 & .03 \\ There is no difference in \% of \textit{control} terms between languages & 0.00 & 24 & 1 & 1.00 \\ There is no difference in \% of \textit{control} terms among world regions & 7.15 & 33 & 4 & .13 \\ There is no difference in \% of \textit{awareness} terms between languages & 4.12 & 41 & 1 & .04 \\ There is no difference in \% of \textit{awareness} terms among world regions & 13.58 & 95 & 4 & \textbf{.009} \\ \midrule There is no difference in \% of \textit{regulation} terms between languages & 0.69 & 13 & 1 & .41 \\ There is no difference in \% of \textit{regulation} terms among world regions & 11.69 & 26 & 4 & .02 \\ \bottomrule \end{tabular} \end{table} \subsubsection{IUIPC dimensions} Digging deeper \hlreview{into the terms related to information privacy concerns, we analyze the proportions of terms that match each IUIPC dimension across languages and world regions (see Figure {\ref{fig:representation_iuipc_dimensions_by_language_region}} and Table {\ref{tab:null_hypothesis_IPC_tokens}}).} \hlreview{We observe a broader emphasis on \textit{collection} in English} ($\chi^2(1, 31)=11.65, p<.001$) \celeste{than in Spanish.} \nuevo{Cohen's effect size value ($w = .61$) suggests that this is of high practical significance} \cite{cohen1988statistical}. \hlreview{Even though this pattern seems to be influenced by a higher proportion of \textit{collection} in the English content from North America than in any other region (Figure \ref{fig:representation_iuipc_dimensions_by_language_region}), regional differences are not statistically significant after multiple comparisons correction ($\chi^2(4, 68)=10.97, p=.03$).} \hlreview{In turn, while we cannot reject the null hypothesis regarding differences on \textit{awareness} by language after corrections ($\chi^2(1, 41)=4.12, p=.04$), we find a significant difference across world regions ($\chi^2(4, 95)=13.58, p=.009$).
}\nuevo{Cohen's effect size value ($w = .38$) suggests a moderate to high practical significance.} \celeste{Here, we calculated the standardized residuals to determine which world regions make the greatest contribution to this chi-square test result. We find that, compared with other world regions, data in English from North America have a smaller ratio of awareness terms (chi-square standardized residual = $-2.56$). The opposite is found in data in Spanish from Latin America (chi-square standardized residual = $2.05$).} \hlreview{Finally, we find no evidence to reject the null hypothesis regarding control.} Control is equally present in both languages and the regions under study. \begin{figure}[!h]% \centering \subfloat[\centering By Language]{{\includegraphics[width=0.49\linewidth]{IUIPC_language.jpg} }}% \hfill \subfloat[\centering By world-region]{{\includegraphics[width=0.49\linewidth]{IUIPC_region.jpg} }}% \caption{Proportion of terms related to the IUIPC dimensions by language (a) and region (b)}% \label{fig:representation_iuipc_dimensions_by_language_region} \end{figure} \subsubsection{Regulation} Even though the concept of regulation is not part of the IUIPC dimensions, prior literature \cite{cockcroft2016relationship,lee2019information,da2018information} has suggested that it is related to people's concerns about information privacy. We find terms associated with this category in all our word embeddings (see Figure \ref{fig:regulation_representation_by_language_region}). \hlreview{However, the difference in proportions between Spanish and English data is not statistically significant ($\chi^2(1, 13)=0.69, p=.41$). Likewise, we do not find enough evidence to reject the null hypothesis regarding differences across world regions after multiple comparison correction ($\chi^2(4, 26)=11.69, p=.02$) (see Table {\ref{tab:null_hypothesis_IPC_tokens}}).}
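The standardized residuals and Cohen's $w$ used in the analysis above can be computed directly from the observed and expected counts. A minimal sketch with hypothetical counts (not the study's data):

```python
import math

def standardized_residuals(observed, expected):
    """(O - E) / sqrt(E); cells with |residual| greater than roughly 2
    are the ones driving a significant chi-square result."""
    return [(o - e) / math.sqrt(e) for o, e in zip(observed, expected)]

def cohens_w(chi2: float, n: int) -> float:
    """Cohen's effect size w for a chi-square test over n observations."""
    return math.sqrt(chi2 / n)

# Hypothetical awareness-term counts for five regions (illustrative only):
observed = [10, 30, 20, 18, 17]
expected = [sum(observed) / len(observed)] * len(observed)  # 19 each
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print([round(r, 2) for r in standardized_residuals(observed, expected)])
print(round(cohens_w(chi2, sum(observed)), 2))
```

In this toy example the first region's large negative residual and the second region's large positive residual would be the cells singled out as driving the test result, mirroring the reasoning applied to the North American and Latin American datasets.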
\begin{figure}[!h]% \centering \subfloat[\centering By Language]{{\includegraphics[width=0.49\linewidth]{regulation_language.jpg} }}% \hfill \subfloat[\centering By world-region]{{\includegraphics[width=0.49\linewidth]{regulation_region.jpg} }}% \caption{Proportion of terms associated with regulations by language (a) and region (b)}% \label{fig:regulation_representation_by_language_region} \end{figure} \section{Discussion} \label{sec:discussion} \hlreview{This work explores the potential of social media as a data source to study cross-language and cross-regional differences in information privacy concerns.} We conduct an analysis of \hlreview{Twitter} data related to \hlreview{news about} a particular data breach to deepen our understanding of how \hlreview{people} from different world regions who speak different languages frame privacy concerns. We chose to focus on the Cambridge Analytica scandal because it triggered a wide-ranging exchange on social media about user \hlreview{information and companies' data} practices. \hlreview{We build upon the potential of word embeddings to derive a semantic context for each term in a corpus.
The contexts are built from terms that are commonly used in the same phrases.} By characterizing a keyword's nearby terms, we \hlreview{seek} to reveal the context in which a keyword was discussed \cite{rho2018fostering}. \hlreview{Based on more than a million non-duplicated, human-generated tweets, we generate word embeddings for data in Spanish and English and for data from Latin America, North America, Asia, and Europe.} For each embedding, we conduct a qualitative analysis of the semantic contexts of four privacy-related keywords: \textit{information}, \textit{privacy}, \textit{company}, and \textit{users}. \hlreview{Collecting and analyzing the semantic contexts of these privacy-related keywords} allows us to observe \hlreview{the presence of terms related to} information privacy concerns in the collected tweets. Through iterative manual coding, we \hlreview{characterize the semantic contexts using 15 categories. Several of these categories are easily mapped to the three dimensions of the Internet User Information Privacy Concerns (IUIPC): \textit{collection}, \textit{awareness}, and \textit{control}} (see Figure \ref{fig:iuipc_dimensions_per_language_details}). In this way, we find evidence that social media content can reveal information about privacy concerns. Our approach \hlreview{takes} into consideration a vast amount of online \hlreview{content} posted freely and spontaneously on Twitter to create the semantic context of each keyword. Thus, it \hlreview{gives} a sense of a collective perspective on information privacy concerns by language and world region, which can become a complementary approach to current survey-based methods. \final{Our method aims to discover knowledge from a large-scale social media dataset on a topic for which a ground truth does not exist.
Unfortunately, such ground truth is unlikely to exist because large-scale, multi-country, and multi-language surveys are too expensive to conduct} \cite{doi:10.1080/0144929X.2020.1831608}. \final{As an alternative approach, we used word embeddings to find the semantic contexts of relevant keywords and followed a qualitative approach to validate the results. We carefully analyzed more than a thousand terms of the semantic contexts, conducted open coding to formulate a data-grounded categorization, and then contrasted our categorization with IUIPC} \cite{malhotra2004internet}\nuevo{, one of the well-accepted theoretical conceptualizations of information privacy concerns}. \final{While this is not the common ground truth of other natural language processing tasks such as classification, our process draws from qualitative approaches to validate the results of an automated text analysis. We discuss below how our findings extend our current understanding of privacy concerns and open new lines of inquiry.} \hlreview{Beyond matching content to current conceptualizations of information privacy concerns, our results suggest a more granular categorization of one of their dimensions. Our results hint that \textit{awareness} might include more specific sub-topics that users can be aware of, such as \textit{privacy and security terms} (e.g., cybersecurity, confidentiality), \textit{security mechanisms} (e.g., credentials, encrypted), and \textit{privacy and security risks} (e.g., scams, grooming). The presence of terms that fit these categories reveals that they are already part of public online conversations around privacy. A distinction among broad privacy and security terms, mechanisms to protect data, and potential data risks might be useful to further describe the kinds of knowledge people have. Additionally, awareness about some of these subtopics might be more influential than others.
For example, knowing about risks and mechanisms might be a sign of higher privacy concerns, while knowing broad privacy and security terms might not. The distinction between sub-topics could also guide users', educators', and practitioners' efforts to enhance information privacy literacy. Future work can explore the relevance of this distinction and its implications for information privacy practices.} \hlreview{In addition, the presence of the \textit{regulation} category highlights its importance in relation to information privacy concerns. Regulation refers to laws or rules that aim to govern the use of personal data. Regulations have often been considered a factor influencing information privacy concerns} \cite{ebert2020does,benamati2021information}\hlreview{. The emergence of this category from our open coding confirms its relevance through its frequent appearance in public posts about a data breach scandal. Such relevance might be related to the elaboration of laws and public policies about data usage worldwide. These regulations are not only a topic for data and law experts but also seem to be part of the public discourse around data privacy online. The frequent presence of a specific regulation, the GDPR, in our datasets is particularly noticeable.} \celeste{GDPR is a privacy regulation that has been in effect since late May 2018 in the European Union \mbox{\cite{gellman2019fair}}. Our data collection period covered the early months of its implementation. This regulation prohibits processing and exploiting personal data such as health status, political orientation, sexual preferences, religious beliefs, and ethnic origin. Thus, it aims to decrease the privacy risks that may derive from malicious use of such information, including cases like the Cambridge Analytica scandal \mbox{\cite{cabanas2018facebook}}. GDPR seeks to convert individuals into empowered citizens involved in the decision-making process related to their personal information \mbox{\cite{karampela2019exploring}}.
As an example, with this regulation in effect, companies are required to inform individuals about their rights (e.g., restriction of processing, erasure of data), the storage period of data, and additional sources that have been used to acquire personal data \mbox{\cite{ebert2020does}}}. \hlreview{The explicit presence of GDPR in our data might be evidence of its influence on shaping people's arguments about privacy concerns and of its importance not only in Europe but worldwide. Further work can focus on exploring how to better integrate the role of regulations into the current conceptualizations of information privacy concerns, which were proposed long before data privacy regulations were as common as they are now around the globe. Moreover, future work could also explore the interaction between regulations and specific information privacy concerns dimensions.} \hlreview{While we find similar rates of terms related to information privacy concerns across languages and regions, we observe significant differences in emphasis on collection and awareness.} These results indicate that different groups view the Cambridge Analytica scandal from a particular standpoint. It is important to note, though, that while information privacy terms appear through our method, they come along with a considerable amount of other terms that we consider noisy data. Nevertheless, our findings show the potential \hlreview{of using social media data} for cross-language and cross-regional comparisons to identify similarities and nuanced differences in privacy-related perspectives worldwide. Our analysis reveals that the semantic contexts generated by tweets \hlreview{written in English have significantly} more terms related to \textit{collection} than those \hlreview{written in Spanish}. This is a novel finding. When freely expressing themselves online about privacy keywords, English speakers give significantly more emphasis to data collection than Spanish speakers.
This difference can lead researchers and practitioners to explore the effectiveness of data privacy campaigns tailored to specific populations. For example, populations that are more concerned about collection might need more information about the benefits of sharing their information to be able to make a decision about it. A high emphasis on collection in English is also congruent with prior literature observing that college students from the USA are more worried about the collection of personal information than control over it \cite{yang2013young}. Exploring whether this trend is shared by people from other English-speaking countries can help clarify which of these patterns are better explained by location or language. Future work can explore why we observe a significant language difference in emphasis on collection. A feasible explanation might be related to the users' country of residence. \hlreview{Note that our tweets in English come mainly from the USA and UK.} \hlreview{Both were the countries most closely connected to the Cambridge Analytica scandal due to the misuse of data for political campaigns in the USA's 2016 presidential election and Brexit \mbox{\cite{cadwalladr2018revealed}}. It is possible that this shared experience resulted in a larger emphasis on collection in the English than in the Spanish data. An alternative hypothesis is associated with differences in regulations.} Information privacy concerns \hlreview{might be} a reflection of customer privacy regulations in their respective countries \cite{markos2017information,kumar2018customer}. In contrast to European countries, \hlreview{which have} adopted a data protection directive from a \textit{government-imposed} perspective, the USA has followed an \textit{industry self-regulation} approach \mbox{\cite{kumar2018customer}}.
Considering that companies have more freedom to collect and process personal data in \hlreview{North America}, it would be reasonable that data collection practices are of deeper concern to individuals from North America than to those in Europe. \hlreview{This could also be supported by our data, as North America has the highest proportion of terms related to \textit{company} (see Figure }\ref{fig:global_categories_by_language_region}), which we also found in our prior work using a different text mining method and a smaller dataset \cite{gonzalez2019global}. \hlreview{However, our data analysis does not support the hypothesis of regional differences. It is possible that our data does not have enough statistical power given the multiple comparisons we conducted. Future research is needed to explore alternative hypotheses that can explain the broader emphasis on collection among English speakers compared to Spanish speakers.} \hlreview{We also observe significant regional differences in \textit{awareness}. In particular, data from North America shows the smallest emphasis on \textit{awareness}, while Latin America has the highest. Given that most studies on information privacy concerns are centered on the USA, this finding is particularly important. It warns us against the (sometimes implicit) assumption that North American data about privacy concerns can be generalized to other regions. At least regarding emphasis on awareness, we find evidence that data from the USA is not similar to other regions. Thus, this result provides observational evidence to argue that it is necessary to include more diverse populations to obtain a more accurate understanding of the phenomena around data privacy. This finding also invites practitioners to address other regions, such as Latin America, using different approaches in their terms of service and privacy policies.
Populations that are more concerned about awareness might be more receptive to companies that use more transparent communications about their use of personal data, for example.} \hlreview{It is worth noting that Latin America shows the largest emphasis on \textit{awareness}.} \hlreview{Our results provide evidence of a disconnection between Latin America and North America regarding this aspect. It is possible that this broad interest in awareness reflects a connection of Latin America to the European perspective on data privacy.} Latin America presents a \hlreview{particular} scenario. It lies between two different approaches to personal data regulation: the principles contained in the European GDPR and the fragmented framework of the USA, where data protection is divided by sector \cite{aguerre2019digital}. Privacy regulations are considered an essential concern for many Latin American countries, and after data privacy breaches such as the Cambridge Analytica one, this issue has received increased attention in the public opinion and policy spheres in the region \cite{aguerre2019digital}. Previously, researchers have argued that GDPR could be one of the most influential pieces of data protection legislation ever enacted, with influence beyond Europe \cite{kuner2017gdpr}. Indeed, in Brazil, a new GDPR-like law (\textit{Lei Geral de Proteção de Dados Pessoais}, LGPD, in Portuguese) \hlreview{became} effective in August 2020 \cite{dias2020perceptions}. \hlreview{Future studies can explore connections among data privacy regulations worldwide, how they relate to public opinion on the issues of privacy, and how they are influenced by national and international data breaches.} \revisado{As we found regional but not language differences in emphasis on privacy concerns, we conducted a follow-up analysis to assess whether there is a language difference within a single region.
Europe was the only region where we had enough data in both languages to conduct such a comparison. We did not find significant differences in the emphasis of any IUIPC dimension between data in English and Spanish from Europe ($\chi^2(3, 92)=0.15, p=.98$). There was no evidence of significant differences in emphasis on regulations either. Thus, this additional analysis provides further evidence to support that information privacy concerns are more related to the region of residence than to the spoken language. Nevertheless, further research is required to better understand the role of regulatory regimes, consumer practices, and economic development factors on these differences \mbox{\cite{OKAZAKI2020458}}. As the Spanish-English balance of tweets in our dataset does not lend itself to intra-region comparison for Asia and North and Latin America, future work could seek to explore whether this pattern repeats in those regions as well.} As with any study, our research has limitations. We collected data through the free standard streaming Twitter API using specific hashtags and keywords. Thus, we only had access to a limited sample of all the tweets about the scandal. \hlreview{We used Botometer to detect and remove tweets likely to be created by bots. This tool can only analyze public Twitter accounts; therefore, it could not be used on suspended accounts or those with their tweets protected when running our analysis. We decided to remove these accounts' tweets from our datasets because we cannot confidently claim that humans generated them. Indeed, previous research suggests that it is likely that social bots were present in this cohort \mbox{\cite{8537833}}.} \hlreview{Moreover, we focused our investigation on four keywords in English: \textit{information}, \textit{privacy}, \textit{users}, and \textit{company}, and their corresponding translations to Spanish.
While using synonyms would have brought similar semantic contexts, adding more concepts could strengthen the results. Future work can explore other keywords such as \textit{intimacy} and \textit{consumers}.} \revisado{Similarly, we did not use the terms \textit{user} and \textit{companies} as keywords. While word embeddings capture syntactic regularities such as singular/plural forms \mbox{\cite{mikolov2013distributed,9259855}}, we reason that this methodological decision should not have considerably affected our results. Nevertheless, future work could include plural and singular versions of the same term to confirm this hypothesis.} \hlreview{The sample size of our manual coding process (40 words per keyword in each embedding) could have impacted the results. We chose the number of retrieved terms after manually inspecting the list of nearest words for each keyword in all our embeddings. We picked a threshold that allowed us to obtain a high number of meaningful words in most embeddings. Higher thresholds make it more likely to include terms with no apparent relation to the keywords (e.g., v.i.p; ground; approximately). In word embeddings with reduced vocabularies like ours, the number of relevant terms available for a specific keyword is limited. This characteristic explains why the number of irrelevant terms (\textit{Other} in Figure 3) is high in datasets with small vocabularies, such as the Spanish and English-Asia datasets. Future work could evaluate how sensitive our approach is to changes in vocabulary size and in the threshold for the nearest terms. This decision may introduce a bias in the results, and it is one of the limitations of our approach to social media textual data.} \section{Conclusion} \label{sec:conclusion} We \hlreview{conducted} a cross-language and cross-regional study on social media content about a major data privacy leak: the Cambridge Analytica scandal.
We categorized our Twitter data into two different languages and four geographical regions. Our results shed light on \hlreview{language and} regional differences in information privacy concerns by 1) creating word embeddings by language and world region to leverage social media data about a data breach scandal, 2) conducting open coding and content analysis of the semantic contexts (generated by the embeddings) of privacy-related keywords, 3) mapping the results to a well-known information privacy framework, and 4) conducting a comparative analysis across two languages and four world regions. We found that data \hlreview{in English} shows a broader emphasis on data collection, \hlreview{while data from North America shows the smallest emphasis on awareness. In turn, data from Latin America has the broadest emphasis on awareness.} We discuss how our findings extend current conceptualizations of information privacy concerns and how they might relate to regulations about personal data usage in the regions we analyzed. Future work can dig deeper into the differences we observed and explore further the potential causes we discussed. Future studies might build upon our work to examine privacy concerns considering more languages, more geographical locations, \hlreview{or different information privacy frameworks.} Using our methodology to compare datasets across longer periods of time could be useful to determine whether the semantic contexts of the privacy keywords change over time. \section{Acknowledgments} \hlreview{The authors want to thank Francisco Tobar, MSc.\ Computer Science student at Universidad Técnica Federico Santa María, for helping us strengthen our findings through statistical analysis.} \nuevo{Moreover, we acknowledge the anonymous reviewers for insightful comments that helped us revise and refine the paper.
} \section{Funding and conflicts of interests} This collaboration was possible thanks to the support of the Fulbright Program, under a 2017-18 Fulbright Fellowship award. This work was also partially funded by CONICYT Chile, under grant Conicyt-Fondecyt Iniciaci\'on 11161026. The first author acknowledges the support of the PIIC program from Universidad T\'{e}cnica Federico Santa Mar\'{i}a and CONICYT-PFCHA/Mag\'{i}sterNacional/2019-22190332. The authors declare that there is no conflict of interest regarding the publication of this paper. \bibliographystyle{cscwjournal}
\section{Introduction} As the movement of cells plays a significant role in many biological systems and processes, analyzing the underlying mechanisms can prove useful in understanding these systems and processes themselves. One such process, which is naturally of extensive interest, is the invasive movement of tumor cells into healthy tissue along gradients of tissue density during the progression of certain types of cancer, which is governed by a mechanism generally called haptotaxis (cf.\ \cite{CarterHaptotaxisMechanismCell1967}). Similarly to the efforts made to understand the related process of chemotaxis (cf.\ \cite{BellomoMathematicalTheoryKeller2015}), which models movement along gradients of a diffusive chemical as opposed to non-diffusive tissue, mathematical modeling of haptotaxis has proven to be a fruitful area of study. In both cases, by far the most attention at this point has been paid to approaches employing a Fickian diffusive movement model for the organisms in question, which assumes some homogeneity of the underlying medium. But bolstered by experiments regarding cell aggregation near interfaces between grey and white matter in mouse brains (cf.\ \cite{Burden-GulleyNovelCryoimagingGlioma2011a}), it has recently been suggested that especially in more heterogeneous environments, such as brain tissue, cell movement might be better described by non-Fickian diffusion (cf.\ \cite{Belmonte-BeitiaModellingBiologicalInvasions2013}), which is far less mathematically studied in these taxis settings. \\[0.5em] In an effort to add to the base of knowledge in this area, we will focus our efforts here on a haptotaxis model of cancer invasion featuring such non-Fickian \emph{myopic diffusion}, which was introduced in \cite{EngwerEffectiveEquationsAnisotropic2016}. 
More specifically, we consider the system \begin{equation}\label{proto_problem} \left\{ \begin{aligned} u_t &= \nabla \cdot (\mathbb{D} \nabla u + u \nabla \cdot \mathbb{D}) - \chi \nabla \cdot (u\mathbb{D}\nabla w) + \logc u(1-u^{r- 1}), \\ w_t &= - uw \end{aligned} \right. \end{equation} in a smooth bounded domain $\Omega \subseteq \R^n$, $n \in \{2,3\}$, with a no-flux boundary condition and appropriate parameters $\chi > 0$, $\mu > 0$, $r \geq 2$ and $\mathbb{D}: \overline{\Omega} \rightarrow \mathbb{R}^{n\times n}$, $\D$ positive semidefinite on $\overline{\Omega}$. The first equation models the invading cancer cells moving according to the aforementioned myopic diffusion, which is represented by the term $\div(\mathbb{D} \nabla u + u \nabla \cdot \mathbb{D})$, as well as according to haptotaxis, which is represented by the term $- \chi \nabla \cdot (u\mathbb{D}\nabla w)$. Apart from this, the equation further incorporates a logistic source. The second equation models the remaining healthy tissue cells and only features a consumption term. \\[0.5em] The key feature of interest in the above system from both an application as well as a mathematical perspective is of course the parameter matrix $\D$, which represents a space dependent coupled diffusion and taxis tensor. In practice, this tensor can be derived from the underlying tissue structure by employing direct imaging methods (cf.\ \cite{EngwerEffectiveEquationsAnisotropic2016}) and represents the influence of said underlying structure on the movement of cells through it. To account for situations of both locally very dense as well as locally very sparse tissue, which both occur in concrete applications and hinder cell movement significantly, we allow $\D$ to be potentially degenerate. 
Notably, in one dimension, solutions to a closely related system with degenerate diffusion have already been shown to reflect the aggregation behavior in interface regions seen in experiments (cf.\ \cite{Burden-GulleyNovelCryoimagingGlioma2011a}, \cite{WinklerSingularStructureFormation2018}), while, to our knowledge, long-time behavior results for systems with non-degenerate diffusion seem to be generally restricted to homogenization (cf.\ e.g.\ \cite{WangLargeTimeBehavior2016}). This seems to indicate that models of this kind featuring degenerate diffusion could potentially be a better representation of real-world behavior. As such, the development of methods to cope with the challenges of this model, namely the reduced regularizing effect of the degenerate diffusion operator and the destabilizing effects of taxis, while still allowing for a sufficiently large class of matrices to enable the modeling of real-world scenarios, seems to be a worthwhile endeavor, which to our knowledge has thus far only been addressed in one dimension. Thus, the aim of this paper is the investigation of the apparently still open question of whether solutions exist in two and three dimensions even if the diffusion operator is degenerate. \paragraph{Results.}Our main results regarding the haptotaxis model described above are twofold: First, we establish the existence of global classical solutions given a uniform positivity condition for $\D$, which allows us to basically treat it as we would any other elliptic diffusion operator, as well as a condition ensuring sufficient regularizing influence of the logistic source term. Second, we establish that it is still possible to construct fairly standard weak solutions under much more relaxed conditions for $\D$.
More specifically, we drop the assumption that $\D$ must be globally positive in $\overline{\Omega}$ and replace it with a set of assumptions much more tailored to our methods for constructing said weak solutions, which are strictly weaker than the prior positivity assumption in allowing for matrices that are in some (small) parts of $\Omega$ only positive semidefinite. \\[0.5em] Given that the definitions necessary to properly formulate the above results take up significant space, we will not go into more detail here but instead refer the reader to the very next section for the pertinent details regarding said results. As an addition to stating precise versions of our results in the next section, we will further discuss the prototypical examples of a matrix with a single point degeneracy as well as a matrix with degeneracies on a manifold of higher dimension and derive some conditions, under which they still allow for the construction of weak solutions. We do this to help build some additional intuition in parallel to the rather abstract regularity properties introduced in said section as well as to illustrate that our results can work for some scenarios with real world relevance such as e.g.\ a domain divided by an impenetrable membrane. \paragraph{Approach.}Let us now give a brief sketch of the methods employed to achieve our two main results. \\[0.5em] For our classical existence result, we begin by using standard contraction mapping methods to gain local solutions with an associated blow-up criterion as the operator in the first equation is strictly elliptic for globally positive matrices $\D$. We then immediately transition to analyzing the function $a \defs u e^{-\chi w}$, which together with $w$ solves the closely related problem (\ref{problem_a}). We do this because, in a sense, this transformation eliminates the problematic cross-diffusive term from the first equation by integrating it into the function $a$ and its diffusion operator. 
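To make this cancellation explicit, at least at a formal level and assuming sufficient smoothness of all quantities involved: writing $u = a e^{\chi w}$, we have $\grad u = e^{\chi w} \left( \grad a + \chi a \grad w \right)$ and therefore
\[
\D \grad u - \chi u \D \grad w = e^{\chi w} \left( \D \grad a + \chi a \D \grad w \right) - \chi a e^{\chi w} \D \grad w = e^{\chi w} \D \grad a,
\]
meaning that the diffusive and cross-diffusive fluxes of $u$ combine into a single weighted diffusion term acting on $a$.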
Using a fairly classical Moser-type iteration argument, we then establish an $L^\infty(\Omega)$ bound for $a$, which translates back to $u$. Using this bound combined with two testing procedures then yields a further $W^{1,4}(\Omega)$ bound for $w$, which together with the already established bound for $u$ is sufficient to ensure that finite-time blow-up is in fact impossible in two and three dimensions and thus completes the proof of our first result. \\[0.5em] Regarding our second result, we begin by approximating the initial data, the matrix $\D$ as well as the logistic source term in such a way as to make the already established global classical existence result applicable to the thus approximated versions of (\ref{proto_problem}). For the family of solutions $(\ue, \we)$, $\eps \in (0,1)$, gained in this fashion, we then establish a bound of the form \[ \int_\Omega \ue\ln(\ue) + \int_\Omega \frac{\grad \we\cdot \De \grad \we}{\we} + \int_0^t \int_\Omega \frac{\grad \ue \cdot \De \grad \ue}{\ue} + \int_0^t\int_\Omega \ue^{r+\eps} \ln(\ue) \leq C \] by way of an energy-type inequality, which already proved useful in the one-dimensional case discussed in \cite{WinklerSingularStructureFormation2018}. Using this as a baseline, we then derive the bounds necessary for applications of the Aubin--Lions compact embedding lemma to gain our desired weak solutions as limits of the approximate ones. \paragraph{Prior work.} As haptotaxis models (cf.\ \cite{WangReviewQualitativeBehavior2020} for a general survey) as well as the closely related chemotaxis models (cf.\ \cite{BellomoMathematicalTheoryKeller2015} for a general survey) have been extensively studied in many possible variations since the introduction of their progenitor in the seminal 1970 paper by Keller and Segel (cf.\ \cite{keller1970initiation}), there is of course a lot of prior art available regarding global existence theory for said models.
While it is certainly out of scope for this paper to cover prior results in their entirety, we will nonetheless give an overview of some notable ones. \\[0.5em] Let us first note that for the one-dimensional case, where $\D$ simplifies to a real-valued function, there are already some results available for a variant of our scenario without a logistic source term (including potential spatial degeneracy) dealing with existence theory as well as long-time behavior (cf.\ \cite{WinklerSingularStructureFormation2018}, \cite{WinklerRefinedRegularityStabilization2020} and \cite{WinklerGlobalWeakSolutions2017}). Weak solutions have also been constructed in very similar haptotaxis systems featuring porous-medium type and signal-dependent degeneracies as opposed to spatial ones (cf.\ \cite{ZhigunGlobalExistenceDegenerate2016}). \\[0.5em] Regarding haptotaxis systems with non-degenerate diffusion operators, e.g.\ $\D \equiv 1$ in our system, global existence and sometimes boundedness theory has been studied in various closely related settings (cf.\ \cite{CaoBoundednessThreedimensionalChemotaxishaptotaxis2016}, \cite{LiBoundednessChemotaxishaptotaxisModel2016}, \cite{LitcanuAsymptoticBehaviorGlobal2010}, \cite{TaoBoundednessStabilizationMultidimensional2014}, \cite{WalkerGlobalExistenceClassical2006}, \cite{WangBoundednessHigherdimensionalChemotaxishaptotaxis2016}, \cite{XiangNewResult2D2019}). Notably, these systems often feature an additional equation modeling a diffusive (potentially attractive) chemical and the fixed parameter choice $r = 2$ for the logistic term in addition to the more regular diffusion.
In many of these scenarios, it has further been established that solutions converge to their constant steady states (cf.\ \cite{LiBoundednessAsymptoticBehavior2015}, \cite{LitcanuAsymptoticBehaviorGlobal2010}, \cite{PangAsymptoticBehaviorSolutions2019}, \cite{TaoBoundednessStabilizationMultidimensional2014}, \cite{WangLargeTimeBehavior2016}, \cite{ZhengLargeTimeBehavior2019}) under varied but sometimes restrictive assumptions. There has also been some analysis of haptotaxis with tissue remodeling, which is represented in the model by some additional source terms in the equation for $w$ (cf.\ \cite{PangGlobalExistenceTwodimensional2017}, \cite{TaoGlobalExistenceHaptotaxis2011}, \cite{TaoEnergytypeEstimatesGlobal2014}). \\[0.5em] Apart from haptotaxis models, there has also been significant analysis of chemotaxis models featuring degenerate diffusion (cf.\ \cite{EberlAnalysisDegenerateBiofilm2014}, \cite{LaurencotChemotaxisModelThreshold2005}, \cite{XuChemotaxisModelDegenerate2020} including degeneracies depending on the cell density itself). \\[0.5em] Lastly, let us just briefly mention that the regularizing effects of logistic source terms we rely on in this paper have already been very well-documented in various chemotaxis systems (cf.\ \cite{LankeitEventualSmoothnessAsymptotics2015}, \cite{WinklerBoundednessHigherdimensionalParabolicparabolic2010} among many others) as well as haptotaxis systems (cf.\ \cite{TaoGlobalClassicalSolutions2020}). 
\section{Main Results and Related Definitions} \label{section:main_results} As already alluded to in the introduction, we will focus our attention in this paper on the system \begin{equation}\label{problem} \left\{ \begin{aligned} u_t &= \div (\D \grad u + u \div \D) - \chi \div (u\D \grad w) + \logc u(1-u^{\loge - 1}) \;\;\;\; &&\text{ on } \Omega\times(0,\infty), \\ w_t &= - uw \;\;\;\; &&\text{ on } \Omega\times(0,\infty), \\ (\D \grad u) \cdot \nu &= \chi (u\D \grad w) \cdot \nu - u (\div \D) \cdot \nu \;\;\;\; &&\text{ on } \partial\Omega\times(0,\infty),\\ u(\cdot, 0) &= u_0, \;\; w(\cdot, 0) = w_0 \;\;\;\; &&\text{ on } \Omega \end{aligned} \right. \end{equation} in a smooth bounded domain $\Omega \subseteq \R^n$, $n \in \{2,3\}$, with parameters $\chi > 0$, $\mu > 0$, $r \geq 2$, $\D: \overline{\Omega} \rightarrow \R^{n\times n}$, $\D$ positive semidefinite on $\overline{\Omega}$, and some initial data $u_0, w_0 : \Omega \rightarrow [0,\infty)$. \\[0.5em] Our results concerning this system are twofold. We will first derive the following existence result concerning global classical solutions in two and three dimensions under the assumptions that $\D$ and the initial data are sufficiently regular, $\D$ is positive definite on $\overline{\Omega}$ and the logistic source term is sufficiently strong. \begin{theorem} \label{theorem:classical_solution} Let $\Omega \subseteq \R^n$, $n\in\{2,3\}$, be a bounded domain with a smooth boundary, $\chi \in (0,\infty)$, $\mu \in (0,\infty)$, $r\in[2,\infty)$ and $\D\in C^2(\overline{\Omega}; \R^{n\times n})$. We further assume that $\D$ is positive definite on $\overline{\Omega}$ and satisfies $(\div \D) \cdot \nu = 0$ on $\partial \Omega$. Let $u_0, w_0 \in C^{2+\vartheta}(\overline{\Omega})$, $\vartheta \in (0,1)$, be some initial data with $u_0, w_0 > 0$ on $\overline{\Omega}$ and $(\D \grad u_0)\cdot\nu = (\D \grad w_0)\cdot \nu= 0$ on $\partial \Omega$. 
\\[0.5em] If either $r > 2$ or $\logc \geq \chi \|w_0\|_\L{\infty}$, then there exist positive functions $u, w \in C^{2,1}(\overline{\Omega}\times[0,\infty))$ such that $(u,w)$ is a global classical solution to (\ref{problem}) with initial data $(u_0, w_0)$. \end{theorem}\noindent This result, while of course also of independent interest, will then serve as a building block for the construction of weak solutions to the same system under much more relaxed restrictions on $\D$ and the initial data. Chiefly, global positivity of the matrix $\D$ is not necessarily needed anymore and is instead replaced by a set of much weaker but more specific regularity assumptions. \\[0.5em] The first such regularity property concerns the divergence of $\D$ (applied column-wise) and how it can be estimated by the (potentially degenerate) scalar product induced by $\D$. \begin{definition}\label{definition:div_regularity} Let $\Omega \subseteq \R^n$, $n\in\N$, be a bounded domain with a smooth boundary. We then say a positive semidefinite $\D = (\D_1 \,\dots\, \D_n) \in L^1(\Omega; \R^{n \times n})$ with $\div \D \defs (\div \D_1 \,, \dots, \div \D_n)\in L^1(\Omega; \R^n)$ allows for a \emph{divergence estimate} with exponent $\divrege \in [\frac{1}{2},1)$ if there exists $\divregc \geq 0$ such that \begin{equation} \label{regularity_div} \int_\Omega \left|(\div \D) \cdot \Phi\right| \leq \divregc \left( \int_\Omega \left| \Phi \cdot \D \Phi \right|^\divrege + 1 \right) \end{equation} for all $\Phi \in C^0(\overline{\Omega};\R^n)$. \end{definition} \begin{remark}\label{remark:div_regularity_consequence} Note that if $\D \in C^0(\overline{\Omega};\R^{n\times n})$, $\D$ allowing for a divergence estimate with exponent $\beta \in (\frac{1}{2}, 1)$ implies that $\div \D \in L^\frac{2\beta}{2\beta - 1}(\Omega; \R^n) \subseteq L^2(\Omega; \R^n) $. 
This stems from the fact that the estimate (\ref{regularity_div}) essentially means that the functional $\Phi \mapsto \int_\Omega (\div \D) \cdot \Phi$ is an element of $(L^{2\beta}(\Omega; \R^n))^*$, which is isomorphic to $L^{\frac{2\beta}{2\beta - 1}}(\Omega;\R^n)$. \end{remark} \begin{remark}\label{remark:div} It is fairly easy to verify that any smooth, positive definite $\D$ allows for such an estimate with the optimal exponent $\beta = \frac{1}{2}$. Let us therefore now briefly illustrate that the above property is also achievable for less regular $\D$, which are e.g.\ at some points in $\Omega$ only positive semidefinite, by giving some examples. While we will not necessarily fully explore these examples and leave out some of the more cumbersome corner cases for ease of presentation, they will accompany us throughout this section as a tool to give some intuition for later introduced definitions as well as to give concrete examples for degenerate cases in which weak solutions can still be constructed. \\[0.5em] We will first take a look at the prototypical case of a matrix-valued function $\D_1$ on a ball with a single degenerate point in the origin, or more precisely we will consider $\D_1(x) \defs |x|^\prote I$ on $\Omega \defs B_1(0) \subseteq \R^n$, $n \in \N$, $I$ being the identity matrix and $\prote$ being some positive real number. 
\\[0.5em] As $\div \D_1(x) = \grad (|x|^\prote) = \prote |x|^{\prote - 2}x$ almost everywhere, we can estimate \begin{align*} \int_\Omega \left|(\div \D_1) \cdot \Phi\right| &\leq \prote\int_\Omega |x|^{\prote - 1}\left|\Phi\right| = \prote \left\| |x|^{\frac{\prote}{2} - 1} (\Phi \cdot |x|^{\prote} I \Phi)^\frac{1}{2} \right\|_\L{1} \\ &\leq s \left\| |x|^{\frac{\prote}{2} - 1} \right\|_\L{\frac{2\beta}{2\beta - 1}} \left\|(\Phi \cdot |x|^{\prote} I \Phi)^\frac{1}{2} \right\|_\L{2\beta}\\ &\leq s \left\| |x|^{\frac{s}{2} - 1} \right\|_\L{\frac{2\beta}{2\beta - 1}} \left( \int_\Omega \left( \Phi \cdot \D_1 \Phi \right)^\beta + 1 \right) \end{align*} for all $\beta \in (\frac{1}{2}, 1)$ and $\Phi \in C^0(\overline{\Omega};\R^n)$ using the Hölder inequality as well as Young's inequality. As $|x|^{\frac{\prote}{2} - 1} \in \L{\frac{2\beta}{2\beta - 1}}$ if and only if $\frac{2\beta}{2\beta - 1} (\frac{s}{2} - 1) > - n$, the prototypical case discussed above fulfills the divergence estimate for all $\beta \in (\frac{n}{s - 2 + 2n},1) \cap (\frac{1}{2}, 1)$. Note that for $s > 2 - n$, which in two or more dimensions is always ensured, the set $(\frac{n}{s - 2 + 2n},1) \cap (\frac{1}{2}, 1)$ is never empty and therefore our prototypical example always has the discussed property for all positive $s$ and some appropriate $\beta$. \\[0.5em] To illustrate that our framework also supports analysis of singularities occurring on higher dimensional manifolds, let us further consider the similar prototypical example $\D_2(x_1, \dots, x_n) \defs |x_1|^s I$ on the same set $\Omega$ with $s$ now being a real number greater than 1. As here $\div \D_2(x_1, \dots, x_n) = (s |x_1|^{s-2}x_1, 0, \dots, 0)$ almost everywhere, we gain that $\D_2$ has the property laid out in \Cref{definition:div_regularity} for all $\beta \in (\frac{1}{s}, 1) \cap (\frac{1}{2}, 1)$ by a similar argument as for the previous example. 
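\\[0.5em] For a purely illustrative sense of scale: for $\D_1$ in the case $n = 3$ and $s = 1$, we have $\frac{n}{s - 2 + 2n} = \frac{3}{5}$, so the divergence estimate holds precisely for the exponents $\beta \in (\frac{3}{5}, 1)$, whereas for $\D_2$ with $s = 2$ it holds for all $\beta \in (\frac{1}{2}, 1)$.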
\\[0.5em] As is to be expected, in both cases smaller values of $s$ result in the divergence estimate only holding for ever larger exponents $\beta$. As we will see in our theorem regarding the existence of weak solutions at the end of this section, these larger values of $\beta$ will necessitate stronger regularizing influence from the logistic source term to compensate. \end{remark} \noindent Before we can now approach the second regularity property of this section as well as properly define what we in fact mean by weak solutions in this paper, we need to first introduce a set of function spaces. Said spaces are generally fairly straightforward generalizations of standard Sobolev and Lebesgue spaces incorporating $\D$ as well as some spaces derived from them, which are more specific to our setting. For a more thorough discussion of e.g.\ the degenerate Sobolev spaces introduced below, we refer the reader to \cite{SawyerDegenerateSobolevSpaces2010}. \\[0.5em] We will further take the introduction of said spaces as an opportunity to present some of their most important properties for our purposes immediately after defining them. \begin{definition}\label{definition:spaces} Let $\Omega\subseteq\R^n$, $n\in\N$, be a bounded domain with a smooth boundary and $p\in [1,\infty)$. \\[0.5em] We then define the Sobolev-type space \[ W^{1,p}_\mathrm{div}(\Omega;\R^{n\times n}) \defs \left\{ M \in L^p(\Omega; \R^{n\times n}) \; \middle| \; \div M \in L^p(\Omega; \R^n) \right\} \] with the norm \[ \|M\|_{W^{1,p}_\mathrm{div}(\Omega;\R^{n\times n})} \defs \|M\|_{L^p(\Omega; \R^{n\times n})} + \|\div M\|_{L^p(\Omega; \R^n)}. \] Herein, the divergence of a square matrix $M = (M_1\, \dots \, M_n)$ is defined as $\div M \defs (\div M_1,\, \dots, \div M_n)$. \\[0.5em] Let now $\D \in C^0(\overline{\Omega};\R^{n\times n})$ be positive semidefinite everywhere.
We then define the Lebesgue-type space $\LD{p}$ as the set of all measurable $\R^n$-valued functions $\Phi$ on $\Omega$ with finite seminorm \[ \| \Phi \|_{\LD{p}} \defs \left(\int_\Omega \left( \Phi \cdot \D \Phi \right)^\frac{p}{2} \right)^\frac{1}{p} \] modulo all of those functions with $\| \Phi \|_{\LD{p}} = 0$ in the same vein as the standard Lebesgue spaces. \\[0.5em] Furthermore, we define the Sobolev-type spaces $\WD{p}$ as the completion of $C^\infty(\overline{\Omega})$ in the norm \[ \| \phi \|_\WD{p} \defs \| \phi \|_\L{p} + \| \grad \phi \|_{\LD{p}} \] in the same vein as the standard Sobolev spaces. It is straightforward to see that each space $\WD{p}$ can be interpreted as a subspace of $\L{p} \times \LD{p}$ in a natural way and thus elements of these spaces can be written as tuples $(\phi, \Phi)$. As such, there exist the natural continuous projections \[P_1 : \WD{p} \rightarrow L^p(\Omega) \stext{ and } P_2: \WD{p} \rightarrow \LD{p}\] associated with this representation. \end{definition} \begin{remark} For a more comprehensive exploration of these spaces and their properties see e.g.\ \cite{SawyerDegenerateSobolevSpaces2010}. \\[0.5em] We will now give a brief overview of the properties the above spaces retain from the standard Sobolev and Lebesgue spaces as well as some of the differences. As most of the proofs translate directly from standard Sobolev theory or are laid out in \cite{SawyerDegenerateSobolevSpaces2010}, we will only list the properties we are interested in without extensive argument. \\[0.5em] First of all, by construction, $W_\mathrm{div}^{1,p}(\Omega;\R^{n \times n})$, $\LD{p}$ and $\WD{p}$ are Banach spaces, which are reflexive if $p \in (1,\infty)$, by essentially the same arguments as for the standard Sobolev and Lebesgue spaces and, for $p = 2$, they are in fact Hilbert spaces with the natural inner products.
It is further easy to see that, if $(\phi, \Phi)$ is a strong or weak limit of a sequence $(\phi_n, \Phi_n)_{n\in\N} \subseteq \WD{p}$, the function $\phi\in\L{p}$ coincides with the pointwise almost everywhere limit of the sequence $(\phi_n)_{n\in\N}$, if the latter exists, due to $P_1$ being continuous with respect to both topologies and well-known results about strong and weak convergence in $\L{p}$. \\[0.5em] As opposed to the classical Sobolev spaces, the spaces $\WD{p}$ cannot necessarily be understood as subspaces of the spaces $\L{p}$ because their equivalents to the weak gradients in the classical Sobolev spaces are not necessarily unique here, meaning essentially that $P_1$ is not always injective (for an example of this, see \cite[p.\ 1877]{SawyerDegenerateSobolevSpaces2010}). Given that this can be problematic when deriving analogues to the (compact) embedding properties of Sobolev spaces for our weaker variants, let us now briefly note that, under sufficient regularity assumptions for $\D$, the spaces $\WD{p}$ do in fact embed into the spaces $\L{p}$ again. In particular if $p = 2$, which is the parameter choice we are most interested in here, this is the case if $\sqrt{\D} \in W_{\mathrm{div}}^{1,2}(\Omega; \R^{n \times n})$ according to Lemma 8 from \cite{SawyerDegenerateSobolevSpaces2010}. \\[0.5em] While it presents a slight abuse of notation, we will in a similar fashion to \cite{SawyerDegenerateSobolevSpaces2010} use $\phi$ to mean $P_1(\phi) \in \L{p}$ for elements $\phi \in \WD{p}$ when unambiguous and generally use the convention $\grad \phi = P_2(\phi)$ even if $\grad \phi$ is not necessarily the actual weak derivative. We will further often write \[ \|\phi\|_\WD{p} \stext{ for } \|(\phi,\grad \phi)\|_\WD{p} \] to simplify the notation in later arguments. If $\phi$ is additionally an element of $C^1(\overline{\Omega})$, we will always assume $\grad \phi$ to be equal to the classical derivative, of course.
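\\[0.5em] Finally, to tie these abstract spaces back to the prototypical examples from \Cref{remark:div}: for $\D_1(x) = |x|^s I$, the seminorm underlying $\LD{p}$ takes the explicit weighted form \[ \left( \int_\Omega |x|^\frac{sp}{2} |\Phi|^p \right)^\frac{1}{p}, \] so the degeneracy at the origin enters merely as a vanishing weight and does not constrain the values of $\Phi$ there.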
\end{remark} \noindent Having established these function spaces, we can now clearly state the second and last regularity property for $\D$ we are interested in. It is a simple compact embedding property, which is mainly used in this paper to facilitate application of the well-known Aubin--Lions lemma. \begin{definition}\label{definition:comp_regularity} Let $\Omega \subseteq \R^n$, $n\in\N$, be a bounded domain with a smooth boundary. We say a positive semidefinite $\D \in C^0(\overline{\Omega};\R^{n \times n})$ allows for a \emph{compact $L^1(\Omega)$ embedding} if $\WD{2}$ embeds compactly into $\L{1}$. \end{definition} \begin{remark} Let us briefly note that any $\D$, which is equal to zero on any open subset $U$ of $\Omega$, cannot fulfill the property laid out in \Cref{definition:comp_regularity} as it is well documented that $L^2(U)$, which is equal to $W^{1,2}_\D(U)$ in this case, does not embed compactly into $L^1(U)$. \end{remark} \noindent We will now give some additional criteria for the above compact embedding property to not only make our results easier to use in application but also to help us prove that both of the examples discussed in \Cref{remark:div} in fact fulfill it. \begin{lemma}\label{lemma:comp_regularity_criteria} Let $\Omega \subseteq \R^n$, $n\in\N$, be a bounded domain with a smooth boundary and $N \subseteq \Omega$ be a relatively closed set in $\Omega$ with measure zero. Let then \[ \Omega_{N,\eps} \defs \left\{ x \in \Omega \; \middle| \; \mathrm{dist}(x, N\cup\partial\Omega) > \eps \right\} \] and let $\D \in C^0(\overline{\Omega};\R^{n \times n})$ be positive semidefinite and fulfill $\sqrt{\D} \in W^{1,2}_{\mathrm{div}}(\Omega;\R^{n \times n})$. 
If \begin{enumerate} \item $\WD{2}$ embeds compactly into $L^1_\loc(\Omega\setminus N)$ or \item $\D$ is positive definite on $\Omega\setminus N$ and there exists $\eps_0 > 0$ such that $W^{1,2}(\Omega_{N,\eps})$ embeds compactly into $L^1(\Omega_{N,\eps})$ for all $\eps \in (0,\eps_0)$ or \item $\D$ is positive definite on $\Omega\setminus N$ and there exists $\eps_0 > 0$ such that $\Omega_{N,\eps}$ has a Lipschitz boundary for all $\eps \in (0,\eps_0)$, \end{enumerate} then $\D$ allows for a compact $L^1(\Omega)$ embedding. \end{lemma} \begin{proof} Due to our assumption that $\sqrt{\D} \in W^{1,2}_\mathrm{div}(\Omega;\R^{n\times n})$, Lemma 8 from \cite{SawyerDegenerateSobolevSpaces2010} immediately yields that the projection $P_1 : \WD{2}\rightarrow L^2(\Omega) \subseteq L^1(\Omega)$ from \Cref{definition:spaces} is injective and thus provides us with a continuous embedding of $\WD{2}$ into $L^1(\Omega)$. It thus only remains to show that this embedding is in fact compact given the various criteria outlined above. \\[0.5em] To do this, we first fix a bounded sequence $(\varphi_k)_{k\in\N} \subseteq \WD{2}$. We then only need to construct a subsequence of $(\varphi_k)_{k\in\N}$ that converges in $L^1(\Omega)$ to some function $\varphi$ to prove our desired outcome. As it is further possible to find another sequence $(\psi_k)_{k\in\N} \subseteq C^\infty(\overline{\Omega})$ such that $\|\varphi_k - \psi_k\|_\L{1} \leq \sqrt{|\Omega|} \|\varphi_k - \psi_k\|_\WD{2} \leq \frac{1}{k}$ by definition of $\WD{2}$, we can further assume that $\varphi_k \in C^\infty(\overline{\Omega})$ for all $k$ without loss of generality. \\[0.5em] If $\WD{2}$ now embeds compactly into $L^1_\loc(\Omega\setminus N)$, we can choose a subsequence $(\varphi_{k_j})_{j\in\N}$ and function $\varphi: \Omega \rightarrow \R$ such that $\varphi_{k_j} \rightarrow \varphi$ in all $L^1(\Omega_{N,\eps})$, $\eps > 0$, as $j \rightarrow \infty$. 
As by our assumptions $N\cup\partial\Omega$ is closed and thus $\Omega\setminus N = \bigcup_{k\in\N} \Omega_{N, 1/k}$, we can then employ a standard diagonal sequence argument to gain yet another subsequence, which we will again call $(\varphi_{k_j})_{j\in\N}$ for convenience, with the property that $\varphi_{k_j} \rightarrow \varphi$ almost everywhere in $\Omega\setminus N$ and thus almost everywhere in all of $\Omega$ as $j \rightarrow \infty$ because $N$ is a null set. Given that the thus constructed subsequence is further bounded in $L^2(\Omega)$ due to it being bounded in $\WD{2}$, we can use Vitali's theorem and the de la Vallée Poussin criterion for uniform integrability (cf.\ \cite[pp.\ 23-24]{DellacherieProbabilitiesPotential1978}) to conclude that $\varphi_{k_j} \rightarrow \varphi$ in $L^1(\Omega)$ as well, yielding the first part of our result. \\[0.5em] If $\D$ is positive definite on $\Omega\setminus N$, then for every $\eps > 0$ there exists $K(\eps) > 0$ such that $\D > K(\eps)$ on $\Omega_{N,\eps}$ due to the continuity of $\D$ and the fact that $\overline{\Omega_{N,\eps}} \subseteq \Omega\setminus N$ is compact. Thus, the norms of the spaces $W^{1,2}(\Omega_{N,\eps})$ and $W_\D^{1,2}(\Omega_{N,\eps})$ are equivalent. As such, the sequence $(\varphi_k)_{k\in\N}$ is bounded in all of the spaces $W^{1,2}(\Omega_{N,\eps})$, $\eps > 0$. Due to our further assumption that there exists $\eps_0 > 0$ such that $W^{1,2}(\Omega_{N,\eps})$ embeds compactly into $L^1(\Omega_{N,\eps})$ for all $\eps \in (0,\eps_0)$ and the fact that any compact set $K\subseteq \Omega\setminus N$ is a subset of some $\Omega_{N,\eps}$ as another consequence of $\Omega\setminus N$ being equal to $\bigcup_{\eps > 0} \Omega_{N, \eps}$, a standard diagonal sequence argument yields a subsequence along which the functions $\varphi_{k}$ converge to some $\varphi$ in $L^1_\loc(\Omega\setminus N)$.
Combining this with the arguments from the previous paragraph then yields the second part of our result. \\[0.5em] To now complete the proof, we first note that \cite[Theorem 6.3]{AdamsSobolevSpaces2003} states that a Lipschitz boundary condition for the sets $\Omega_{N,\eps}$ ensures the Sobolev embedding necessary for our second result and thus the third result follows directly from the second. \end{proof} \begin{remark}\label{remark:comp} Going briefly back to the examples introduced in \Cref{remark:div}, we see that $\div \sqrt{\D_1(x)} = \frac{s}{2}|x|^{\frac{s}{2} - 2}x$, $s > 0$, and $\div \sqrt{\D_2(x_1, \dots, x_n)} = (\frac{s}{2}|x_1|^{\frac{s}{2} - 2}x_1, 0, \dots, 0)$, $s > 1$, are both elements of $L^2(\Omega; \R^n)$ and thus $\sqrt{\D_1}, \sqrt{\D_2} \in W^{1,2}_\mathrm{div}(\Omega; \R^{n\times n})$ in dimensions two or higher. Furthermore due to the fairly straightforward geometry of the degeneracy set $N$ in both cases, it is easy to verify that both examples also fulfill the third criterion in \Cref{lemma:comp_regularity_criteria} and thus both $\D_1$ and $\D_2$ allow for a compact $L^1(\Omega)$ embedding in accordance with \Cref{definition:comp_regularity}. \end{remark} \noindent While we have now invested some effort into formalizing the restrictions on $\D$ necessary for our later construction of weak solutions, we have yet to clarify what we in fact mean by a weak solution to (\ref{problem}). Let us now rectify this in the following definition. \begin{definition}\label{definition:weak_solution} Let $\Omega \subseteq \R^n$, $n\in\N$, be a bounded domain with a smooth boundary and let $\chi > 0$, $\mu > 0$ and $r\in[1,\infty)$. Let $\D \in W_\mathrm{div}^{1,q}(\Omega; \R^{n\times n}) \cap C^0(\overline{\Omega}; \R^{n \times n})$, $q \in (1,\infty)$ and $p \defs \max(2,r,\frac{q}{q-1})$. Let further $u_0, w_0 \in L^1(\Omega)$ be some initial data. 
\\[0.5em] We then call a tuple of functions \begin{align*} u &\in L_\loc^1([0,\infty);\WD{1}) \cap L_\loc^p(\overline{\Omega}\times[0,\infty)), \\ w &\in L_\loc^2([0,\infty);\WD{2}) \end{align*} a weak solution of (\ref{problem}) with initial data $u_0$, $w_0$ and the above parameters if \begin{align*} \int_0^\infty \int_\Omega u \phi_t - \int_\Omega u_0 \phi(\cdot, 0) &= \int_0^\infty \int_\Omega \grad u \cdot \D \grad \phi + \int_0^\infty \int_\Omega u (\div \D) \cdot \grad \phi \\ &- \chi \int_0^\infty \int_\Omega u \grad w \cdot \D \grad \phi - \mu \int_0^\infty \int_\Omega u(1-u^{r-1})\phi \end{align*} and \[ \int_0^\infty \int_\Omega w \phi_t - \int_\Omega w_0 \phi(\cdot, 0) = \int_0^\infty\int_\Omega uw \phi \] hold for all $\phi \in C_c^\infty(\overline{\Omega}\times[0,\infty))$. \end{definition} \noindent As we have at this point clearly defined the target and some of the preconditions, let us now outright state the second main theorem we endeavor to prove in this paper. \begin{theorem}\label{theorem:weak_solutions} Let $\Omega \subseteq \R^n$, $n\in\{2,3\}$, be a bounded domain with a smooth boundary, $\chi \in (0,\infty)$, $\mu \in (0,\infty)$, $\beta \in [\frac{1}{2}, 1)$, $r\in[2,\infty)$ with $\frac{\beta}{1-\beta} \leq r$ and $\D \in W^{1,2}_\mathrm{div}(\Omega;\R^{n\times n})\cap C^0(\overline{\Omega}; \R^{n\times n})$ be positive semidefinite everywhere. Let further $\D$ allow for a divergence estimate with exponent $\divrege$ (cf.\ \Cref{definition:div_regularity}) and let $\D$ allow for a compact $L^1(\Omega)$ embedding (cf.\ \Cref{definition:comp_regularity}). Finally, let $u_0 \in \L{z[\ln(z)]_+}$ and $w_0 \in C^0(\overline{\Omega})$ be some initial data with $\sqrt{w_0} \in W^{1,2}(\Omega)$ and $u_0 \geq 0$, $w_0 \geq 0$ almost everywhere. Here, $\L{z[\ln(z)]_+}$ is the standard Orlicz space associated with the function $z \mapsto z [\ln(z)]_+$. 
\\[0.5em] Then there exist a.e.\ non-negative functions \begin{align*} u &\in L_\loc^\frac{2r}{r+1}([0,\infty);\WD{\frac{2r}{r+1}}) \cap L_\loc^r(\overline{\Omega}\times[0,\infty)), \numberthis \label{eq:weak_solution_regularity_1} \\ w &\in L_\loc^2([0,\infty);\WD{2})\cap L^\infty(\Omega\times(0,\infty)) \numberthis \label{eq:weak_solution_regularity_2} \end{align*} that are a weak solution to (\ref{problem}) in the sense of \Cref{definition:weak_solution}. \end{theorem} \begin{remark} In light of the above theorem, we take another look at our prototypical examples $\D_1(x) \defs |x|^s I$, $s > 0$, and $\D_2(x_1, \dots, x_n) \defs |x_1|^s I$, $s > 1$, on $\Omega \defs B_1(0)\subseteq \R^n$, $n\in\{2,3\}$. Given the discussion in \Cref{remark:div}, we know that, if we assume \begin{equation}\label{eq:remark_div_consequence_1} r\geq \frac{\beta}{1-\beta} > \frac{\frac{n}{s-2+2n}}{1 - \frac{n}{s-2+2n}} = \frac{n}{s-2+n}, \end{equation} then $\D_1$ allows for the necessary divergence estimate with an exponent $\beta$ fulfilling $\frac{\beta}{1-\beta} \leq r$. Similarly, if we assume that \begin{equation}\label{eq:remark_div_consequence_2} r \geq \frac{1}{s-1}, \end{equation} the same holds true for $\D_2$. Further, due to the arguments presented in \Cref{remark:comp}, both $\D_1$ and $\D_2$ allow for a compact $L^1(\Omega)$ embedding. Therefore, the above theorem means that, for sufficiently regular initial data $u_0$, $w_0$ and if either $\D = \D_1$ and $r$ and $s$ satisfy (\ref{eq:remark_div_consequence_1}) or $\D = \D_2$ and $r$ and $s$ satisfy (\ref{eq:remark_div_consequence_2}), weak solutions to (\ref{problem}) in fact exist in two and three dimensions. \end{remark} \section{Existence of Classical Solutions} As the existence of classical solutions to (\ref{problem}), apart from being an interesting result on its own merits, plays an important role in our construction of their weak counterparts, we will in this section first focus on their derivation.
As such, our ultimate goal for this section will be the proof of our first main result, namely \Cref{theorem:classical_solution}. The methods presented here will in many ways mirror those for similar systems with a standard Laplacian as the diffusion operator. We mainly verify that the differing elements in our system do not impede said methods. \\[0.5em] To this end, we now fix a smooth bounded domain $\Omega\subseteq\R^n$, $n\in\{2,3\}$, and system parameters $\chi \in (0,\infty)$, $\logc \in (0,\infty)$, $\loge \in [2,\infty)$ and $\D \in C^2(\overline{\Omega};\R^{n\times n})$. We further assume that $\D$ is in fact positive definite everywhere and has the property $(\div \D) \cdot \nu = 0$ on $\partial \Omega$. Given these assumptions, we can fix $M \geq 1$ such that \begin{equation}\label{eq:classical_D_assumption} \frac{1}{M} \leq \D \leq M, \;\;\;\; \|\div \D\|_\L{\infty} \leq M \stext{ and } \|\div (\div \D)\|_\L{\infty} \leq M. \end{equation} We also fix some initial data $u_0, w_0 \in C^{2+\vartheta}(\overline{\Omega})$, $\vartheta \in (0,1)$, with $(\D \grad u_0) \cdot \nu = (\D \grad w_0) \cdot \nu = 0$ on $\partial \Omega$ and $u_0 > 0$, $w_0 > 0$ on $\overline{\Omega}$. \\[0.5em] Comparing the very strong regularity assumptions for $\D$ in this section to the much weaker ones in the following section devoted to the construction of weak solutions, the question naturally arises as to why the gap in assumed regularity between these sections is as large as it is. Let us therefore briefly address this issue. It is certainly possible to derive most of the a priori estimates, which are used in this section to argue that blow-up of local solutions is impossible, under similarly specific regularity assumptions as seen in \Cref{definition:div_regularity} or \Cref{definition:comp_regularity} (albeit with some additions).
But generalizing the theory employed by us to first gain said local solutions with less regular $\D$ would necessitate Schauder and semigroup theory for potentially very degenerate operators, which is out of scope for this paper. Furthermore, we think that this result is already of interest in and of itself. \subsection{Existence of Local Solutions} After this introductory paragraph giving our rationale for the assumptions about $\D$ in this section, we will now focus on the construction of local solutions to the system (\ref{problem}) as a first step in constructing global ones. Since, for a positive definite matrix $\D$, the diffusion operator in the first equation is strictly elliptic and therefore accessible to most of the same existence and regularity theory as the Laplacian, we will not go into detail concerning the construction of local solutions but rather refer the reader to a local existence result for a similar haptotaxis system with our operator replaced by the Laplacian in \cite{TaoGlobalClassicalSolutions2020}. \begin{lemma}\label{lemma:local_solution} There exist $\tmax \in (0,\infty]$ and positive functions $ u,w \in C^{2,1}(\overline{\Omega}\times[0,\tmax)) $ such that $(u,w)$ is a classical solution to (\ref{problem}) on $\overline{\Omega}\times(0,\tmax)$ with initial data $(u_0, w_0)$ and satisfies the following blow-up criterion: \begin{equation} \label{eq:blowup} \text{ If } \tmax < \infty, \text{ then } \limsup_{t\nearrow \tmax} \left( \|u(\cdot, t)\|_\L{\infty} + \|w(\cdot, t)\|_{W^{1,n+1}(\Omega)} \right) = \infty. \end{equation} \end{lemma} \noindent For ease of further discussion, we now fix such a maximal local solution $(u,w)$ on $(0,\tmax)$ with initial data $(u_0, w_0)$ and the parameters as stated in the above introductory paragraphs.
\\[0.5em] Before diving into the derivation of more substantial bounds for the above solution, we derive a straightforward mass bound for the first solution component as well as an $L^\infty(\Omega)$ bound for the second solution component. These bounds will not only prove useful when ruling out blow-up in this section but also serve as a baseline for bounds derived in our later efforts focused on the construction of weak solutions. \begin{lemma}\label{lemma:absolute_baseline} The inequalities \[ \int_\Omega u(\cdot, t) \leq \mu |\Omega| t + \int_\Omega u_0 \stext{ and } \|w(\cdot, t)\|_\L{\infty} \leq \|w_0\|_\L{\infty} \] hold for all $t \in (0,\tmax)$. \end{lemma} \begin{proof} Integrating the first equation in (\ref{problem}) and applying partial integration yields \[ \frac{\d}{\d t}\int_\Omega u = \mu \int_\Omega u(1-u^{r-1}) \leq \mu |\Omega| \] for all $t \in (0,\tmax)$ because $z(1-z^{r-1}) \leq 1$ for all $z \geq 0$, which immediately gives us the first half of our result by time integration. Given that further $w_t \leq 0$ due to the second equation in (\ref{problem}), the second half of our result follows directly as well. \end{proof}\noindent \subsection{A Priori Estimates} The next natural step after establishing local solutions with an associated blow-up criterion is of course arguing that finite-time blow-up is impossible and the maximal local solutions were in fact global all along. To do this, we will devote this section to a set of a priori estimates, which increase in strength as the section goes on until they rule out blow-up of both $u$ and $w$. \\[0.5em] As is not uncommon in the analysis of these kinds of haptotaxis systems (cf.\ \cite{TaoGlobalClassicalSolutions2020}), we will from now on consider the function $a \defs u e^{-\chi w}$ defined on $\overline{\Omega}\times[0,\tmax)$ and its associated initial data $a_0 \defs u_0 e^{-\chi w_0}$ defined on $\overline{\Omega}$ in addition to the actual solution components $u$ and $w$ themselves.
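\\[0.5em] The core of this substitution is an elementary chain-rule identity, which we record here as a brief sketch for the reader's convenience: using the second equation in (\ref{problem}), i.e.\ $w_t = -uw$, we compute \[ a_t = \left(u e^{-\chi w}\right)_t = e^{-\chi w} u_t - \chi u e^{-\chi w} w_t = e^{-\chi w} u_t + \chi u^2 w e^{-\chi w} = e^{-\chi w} u_t + \chi a^2 w e^{\chi w}, \] which already accounts for the zeroth-order term $\chi a^2 w e^{\chi w}$ in the transformed system stated below; the remaining terms stem from substituting $u = a e^{\chi w}$ into the diffusion, taxis and source terms of the first equation in (\ref{problem}).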
A simple computation then shows that $(a,w)$ is a classical solution of the following related system: \begin{equation}\label{problem_a} \left\{ \begin{aligned} a_t &=e^{-\chi w} \div ( e^{\chi w}\D\grad a) + e^{-\chi w} \div (a e^{\chi w} (\div \D)) \\ &\;\;\;\;+ \mu a (1-a^{r-1}e^{\chi(r-1)w}) +\chi a^2 w e^{\chi w} &&\text{ on } \Omega\times(0,\tmax),\\ w_t &= - a e^{\chi w}w \;\;\;\; &&\text{ on } \Omega\times(0,\tmax), \\ (\D \grad a) \cdot \nu &= -a (\div \D) \cdot \nu = 0 \;\;\;\; &&\text{ on } \partial\Omega\times(0,\tmax),\\ a(\cdot, 0) &= a_0 > 0, \;\; w(\cdot, 0) = w_0 > 0 \;\;\;\; &&\text{ on } \Omega. \end{aligned} \right. \end{equation} The key property of the above system, which makes it so useful for our purposes, is that it essentially eliminates the taxis term, or at least the explicit gradient of $w$, from the first equation (by integrating it into $a$ and its diffusion operator). This alleviates many of the normal problems associated with the taxis term in testing or semigroup-based approaches used to derive a priori estimates. A second useful property of this transformation is that, by definition, bounds that do not involve derivatives are easily translated back from $a$ to $u$ as we will see later. Note however that, as soon as we want to propagate bounds on the gradient of $a$ back to $u$, the complications introduced by the taxis term come back into play, making this transformation much less useful for endeavors of this kind. \\[0.5em] We now begin by translating the baseline estimates given in \Cref{lemma:absolute_baseline} to our newly defined function $a$ as we will henceforth focus on $(a,w)$ as our central object of analysis for quite some time. We will further work under the assumption that $\tmax < \infty$ for the foreseeable future, as this is exactly the case we want to rule out by leading this assumption to a contradiction with the blow-up criterion.
\begin{corollary}\label{corollary:absolute_baseline_a} If $\tmax < \infty$, there exists $C > 0$ such that \[ \int_\Omega a \leq C \] for all $t\in(0,\tmax)$. \end{corollary} \begin{proof} As $\int_\Omega a = \int_\Omega u e^{-\chi w} \leq \int_\Omega u$ because $w \geq 0$, this is a direct consequence of \Cref{lemma:absolute_baseline} if $\tmax < \infty$. \end{proof}\noindent In preparation for a later Moser-type iteration argument for the first solution component $a$ (cf.\ \cite{AlikakosBoundsSolutionsReactiondiffusion1979} and \cite{MoserNewProofGiorgi1960} for some early as well as \cite{FuestBlowupProfilesQuasilinear2020} and \cite{TaoBoundednessQuasilinearParabolicparabolic2012} for some more contemporary examples of this technique), which will then be used to rule out its finite-time blow-up, we will now derive a recursive inequality for terms of the form $\int_\Omega a^p$. This recursion will in fact allow us to estimate each term of the form $\int_\Omega a^p$ by terms of the form $(\int_\Omega a^\frac{p}{2})^2$ with constants growing at most polynomially in $p$, which will prove sufficient to later gain an $L^\infty(\Omega)$ bound for $a$. The method employed to gain said recursion is testing the first equation in (\ref{problem_a}) with $e^{\chi w}a^{p-1}$ followed by some estimates based on the Gagliardo--Nirenberg inequality. \\[0.5em] To facilitate the derivation of said recursion, we will from now on assume that the regularizing influence of the logistic source term in the first equation of (\ref{problem}) is sufficiently strong, or, more precisely, we assume that either $r > 2$ or $\mu$ is sufficiently large in comparison to $\chi$ and the $L^\infty(\Omega)$ norm of $w_0$. However, at this point and therefore for the whole of the Moser-type iteration argument, we will not use our assumed restriction to two or three dimensions just yet.
\begin{lemma} \label{lemma:linfty_recursion} If $\tmax < \infty$ and further $r > 2$ or $\logc \geq \chi \|w_0\|_\L{\infty}$, then there exists a constant $C > 0$ such that \[ \sup_{t\in(0,\tmax)}\int_\Omega a^p \leq C\max\left( \; \int_\Omega a_0^p, \; C^{p+1}, \; p^C \left( \sup_{t\in(0,\tmax)}\int_\Omega a^\frac{p}{2} \right)^2 \; \right) \] for all $p \geq 2$. \end{lemma} \begin{proof} We test the first equation in (\ref{problem_a}) with $e^{\chi w} a^{p-1}$ and apply partial integration to see that \begin{align*} \frac{1}{p}\frac{\d}{\d t}\int_\Omega e^{\chi w} \a^p &= \int_\Omega e^{\chi w} \a^{p-1} {\a}_t + \frac{\chi}{p} \int_\Omega {w}_t e^{\chi w} \a^p = \int_\Omega e^{\chi w} \a^{p-1} {\a}_t - \frac{\chi}{p} \int_\Omega w e^{2 \chi w} \a^{p + 1}\\ &= \int_\Omega \a^{p-1} \div ( e^{\chi w}\D \grad \a) + \int_\Omega \a^{p-1} \div (a e^{\chi w} (\div \D)) \\ &\hphantom{=\,}+ \logc \int_\Omega e^{\chi w} \a^{p} - \logc \int_\Omega e^{\loge \chi w} a^{p-1+\loge} + \chi \frac{p-1}{p} \int_\Omega w e^{2 \chi w} \a^{p + 1} \\ &= - (p-1) \int_\Omega e^{\chi w} \a^{p-2} (\grad \a \cdot \D \grad \a) - (p-1)\int_\Omega e^{\chi w}\a^{p-1} ((\div \D) \cdot \grad \a) \\&\hphantom{=\,}+ \logc \int_\Omega e^{\chi w} \a^{p} - \logc \int_\Omega e^{\loge \chi w} a^{p-1+\loge} + \chi \frac{p-1}{p} \int_\Omega w e^{2 \chi w} \a^{p + 1} \numberthis \label{eq:ae_test_1} \end{align*} for all $t \in (0,\tmax)$ and $p \geq 2$. 
Given our assumptions for $\D$ in (\ref{eq:classical_D_assumption}), we can use Young's inequality to further estimate that \begin{align*} &- (p-1) \int_\Omega e^{\chi w} \a^{p-2} (\grad \a \cdot \D \grad \a) - (p-1)\int_\Omega e^{\chi w}\a^{p-1} ((\div \D) \cdot \grad \a) \\ &\leq -\frac{p-1}{M} \int_\Omega e^{\chi w} a^{p-2} |\grad a|^2 + M(p-1) \int_\Omega e^{\chi w} a^{p-1} |\grad a| \\ &\leq -\frac{p-1}{2M} \int_\Omega e^{\chi w} a^{p-2} |\grad a|^2 + 2 M^3(p-1) \int_\Omega e^{\chi w} a^p \\ &\leq- \frac{p-1}{p^2} \frac{2}{M} \int_\Omega e^{\chi w} |\grad a^\frac{p}{2}|^2 + 2 M^3\, p \int_\Omega e^{\chi w} a^p \\ &\leq-\frac{1}{p}\frac{1}{M} \int_\Omega e^{\chi w} |\grad a^\frac{p}{2}|^2 + 2 M^3\, p \int_\Omega e^{\chi w} a^p \end{align*} as well as more elementary that \[ \chi \frac{p-1}{p} \int_\Omega w e^{2\chi w}a^{p+1} \leq \chi \|w_0\|_\L{\infty} \int_\Omega e^{2\chi w}a^{p+1} \] for all $t \in (0,\tmax)$ and $p \geq 2$, which when applied to (\ref{eq:ae_test_1}) results in \begin{align*} &\frac{1}{p}\frac{\d}{\d t}\int_\Omega e^{\chi w} \a^p + \frac{1}{p} \frac{1}{M} \int_\Omega e^{\chi w} |\grad a^\frac{p}{2}|^2 \\ \leq& (\logc + 2M^3\, p) \int_\Omega e^{\chi w} \a^{p} - \logc \int_\Omega e^{\loge \chi w} a^{p-1+\loge} + \chi \|w_0\|_\L{\infty} \int_\Omega e^{2 \chi w} \a^{p + 1} \numberthis \label{eq:ae_test_2} \end{align*} for all $t \in (0,\tmax)$ and $p \geq 2$. 
If $r > 2$, we can now further estimate that \begin{align*} &\hphantom{\leq\;\,} - \logc \int_\Omega e^{\loge \chi w} a^{p-1+\loge} + \chi \|w_0\|_\L{\infty} \int_\Omega e^{2 \chi w} \a^{p + 1} \\ &\leq - \logc \int_\Omega e^{\loge \chi w} a^{p-1+\loge} + \chi \|w_0\|_\L{\infty} \int_\Omega e^{\loge \chi w} \a^{p + 1}\\ &\leq \chi \|w_0\|_\L{\infty} \left( \frac{\chi \|w_0\|_\L{\infty} }{\mu}\right)^\frac{p + 1}{r-2} e^{\loge \chi \|w_0\|_\L{\infty}} |\Omega| \leq K_1^{p+1} \end{align*} with \[ K_1 \defs \left(\chi \|w_0\|_\L{\infty} e^{r\chi \|w_0\|_\L{\infty}} |\Omega| + 1 \right)\left( \frac{\chi \|w_0\|_\L{\infty} }{\mu}\right)^\frac{1}{r-2} \] for all $t \in (0,\tmax)$ and $p \geq 2$ by Young's inequality. If, however, $r = 2$ and $\logc \geq \chi \|w_0\|_\L{\infty}$, it is immediately obvious that \[ \hphantom{\leq\;\,} - \logc \int_\Omega e^{\loge \chi w} a^{p-1+\loge} + \chi \|w_0\|_\L{\infty} \int_\Omega e^{2 \chi w} \a^{p + 1} \leq 0 \leq K_1^{p+1} \] with $K_1 \defs 1$ for all $t \in (0,\tmax)$ and $p \geq 2$. As such, we can in both cases conclude from (\ref{eq:ae_test_2}) that \begin{equation}\label{eq:ae_test_3} \frac{1}{p}\frac{\d}{\d t}\int_\Omega e^{\chi w} \a^p + \frac{1}{p} \frac{1}{M} \int_\Omega e^{\chi w} |\grad a^\frac{p}{2}|^2 \leq (\logc + 2 M^3 \,p) \int_\Omega e^{\chi w} \a^{p} + K_1^{p+1} \leq p\, K_2 \int_\Omega \a^{p} + K_1^{p+1} \end{equation} with $K_2 \defs (\mu + 2 M^3)e^{\chi \|w_0\|_\L{\infty}}$ for all $t \in (0,\tmax)$ and $p \geq 2$. 
\\[0.5em] We can now use the Gagliardo--Nirenberg inequality to fix a constant $K_3 > 0$ such that \begin{align*} \int_\Omega a^p &= \| a^\frac{p}{2} \|^2_\L{2} \leq K_3 \|\grad a^\frac{p}{2}\|_\L{2}^{2\alpha} \| a^\frac{p}{2}\|^{2(1-\alpha)}_\L{1} + K_3\| a^\frac{p}{2} \|^2_\L{1} \\ &\leq \frac{1}{p^2} \frac{1}{M} \frac{1}{K_2} \int_\Omega |\grad a^\frac{p}{2}|^2 + ( (p^2 M K_2)^\frac{\alpha}{1-\alpha}K_3^\frac{1}{1-\alpha} + K_3) \left( \int_\Omega a^\frac{p}{2} \right)^2 \\ &\leq \frac{1}{p^2} \frac{1}{M} \frac{1}{K_2} \int_\Omega e^{\chi w}|\grad a^\frac{p}{2}|^2 + K_4 p^{K_4} \left( \int_\Omega a^\frac{p}{2} \right)^2 \end{align*} for all $t \in (0,\tmax)$ and $p \geq 2$ with \[ \alpha \defs \frac{1}{1+\frac{2}{n}} \in (0,1) \] and $K_4 \defs \max(\frac{2\alpha}{1-\alpha}, (M K_2)^\frac{\alpha}{1-\alpha}K_3^\frac{1}{1-\alpha} + K_3)$. Applying this to (\ref{eq:ae_test_3}) then implies \[ \frac{\d}{\d t}\int_\Omega e^{\chi w} \a^p \leq K_2 K_4 p^{K_4 + 2} \left(\int_\Omega a^\frac{p}{2}\right)^2 + pK_1^{p+1} \leq K_2 K_4 p^{K_4 + 2} \left(\int_\Omega a^\frac{p}{2}\right)^2 + (2K_1)^{p+1} \] for all $t \in (0,\tmax)$ and $p \geq 2$. Time integration then yields \[ \int_\Omega \a^p(\cdot, t) \leq \int_\Omega e^{\chi w} \a^p(\cdot,t) \leq \tmax K_2 K_4 p^{K_4 + 2} \left( \sup_{s\in(0,\tmax)}\int_\Omega a^\frac{p}{2}(\cdot, s)\right)^2 + \tmax(2K_1)^{p+1} + e^{\chi \|w_0\|_\L{\infty}} \int_\Omega a_0^p \] for all $t\in(0,\tmax)$ and $p \geq 2$ as $\tmax < \infty$, which after estimating the sum on the right-hand side by thrice the maximum of its summands completes the proof. \end{proof}\noindent We will now proceed to give the actual iteration argument yielding an $L^\infty(\Omega)$-type bound for $a$ and therefore $u$, which is sufficient to rule out finite-time blow-up for the first solution component $u$. 
\begin{lemma}\label{lemma:classical_recursion} If $\tmax < \infty$ and further $r > 2$ or $\logc \geq \chi \|w_0\|_\L{\infty}$, then there exists a constant $C > 0$ such that \[ \|a(\cdot, t)\|_\L{\infty} \leq C \stext{ and therefore } \|u(\cdot, t)\|_\L{\infty} \leq C \] for all $t\in(0,\tmax)$. \end{lemma} \begin{proof} \newcommand{\iter}{J} Let $p_i \defs 2^{i}$, $i \in \N_0$, and $\iter_i \defs \sup_{t\in(0,\tmax)} \left(\int_\Omega a^{p_i}(\cdot,t) \right)^\frac{1}{p_i}$. Then $\iter_0$ is finite because of \Cref{corollary:absolute_baseline_a} and the fact that $p_0 = 1$. We further know that \[ \|a_0\|_\L{p_i} \leq (1+|\Omega|)\|a_0\|_\L{\infty} \sfed K_1. \] Due to \Cref{lemma:linfty_recursion}, we can conclude that there exists a constant $K_2 \geq 1$ such that the numbers $\iter_i$ conform to the following recursion: \[ \iter_i \leq K_2^\frac{1}{p_i} \max\left(\; \|a_0\|_\L{p_i},\; K_2^{\frac{p_i + 1}{p_i}}, \; p_i^\frac{K_2}{p_i} \iter_{i-1} \right) \;\;\;\; \text{ for all } i \in \N. \] Iterating this recursion finitely many times ensures that all $\iter_i$ are finite. \\[0.5em] If there exists an increasing sequence of indices $i\in\N$ along which $\iter_i \leq \max(K_1 K_2, K_2^3)$, we immediately gain our desired result by taking the limit of $\iter_i$ along said sequence. As such, we can now assume that there exists $i_0 \in \N$ with \[ \iter_i \geq \max(K_1 K_2, K_2^3)> \left\{ \begin{aligned} &K_2^\frac{1}{p_i} \|a_0\|_\L{p_i} \\ &K_2^\frac{1}{p_i} K_2^\frac{p_i+1}{p_i} \end{aligned} \right. \;\;\;\; \text{ for all } i \geq i_0 \] to cover the remaining case. Given these assumptions, the above recursion simplifies to \[ \iter_i \leq (p_i K_2)^\frac{K_2}{p_i}\iter_{i-1} \leq K_3^{\frac{1}{\sqrt{p_i}}} \iter_{i-1} \] for all $i \geq i_0$ with some $K_3 > 0$ (only depending on $K_2$) as the function $z \mapsto (zK_2)^\frac{ K_2}{\sqrt{z}}$ is bounded on $[1,\infty)$.
By again iterating this simplified recursion finitely many times, we gain that \begin{equation}\label{eq:recusion_consequence} \iter_i \leq K_3^{\sum_{j = i_0}^{i}\frac{1}{\sqrt{p_j}}} \iter_{i_0 - 1} \end{equation} for all $i \geq i_0$. As \[ \sum_{j=i_0}^i \frac{1}{\sqrt{p_j}} = \sum_{j=i_0}^i \left(\frac{1}{\sqrt{2}} \right)^{j} \leq \sum_{j=0}^\infty \left(\frac{1}{\sqrt{2}} \right)^j < \infty \] for all $i \geq i_0$ due to the series on the right side being of geometric type, we can conclude from (\ref{eq:recusion_consequence}) that the sequence $J_i$ is uniformly bounded. Therefore, taking the limit $i \rightarrow \infty$ gives us our desired bound for $a$. As $u = a e^{\chi w}$, the corresponding bound for $u$ follows directly from this and \Cref{lemma:absolute_baseline}. \end{proof} \noindent To now establish that finite-time blow-up of the second solution component $w$ is equally impossible, we will begin by testing the first equation in (\ref{problem_a}) with $-\div (\D \grad a)$ and combining the result with the differential equation associated with $\frac{\d}{\d t}\int_\Omega |\grad w|^4$. The key to extracting a sufficiently strong bound for $w$ is to then use the strength of the absorptive terms originating from the fully elliptic operator $-\div (\D \grad \cdot )$ to counteract the influence of potentially destabilizing terms due to the haptotaxis interaction. Note that the ellipticity of the operator is ensured because we assume that $\D$ is positive definite everywhere in $\overline{\Omega}$. \begin{lemma}\label{lemma:grad_w_bound} If $\tmax < \infty$ and further $r > 2$ or $\logc \geq \chi \|w_0\|_\L{\infty}$, then there exists a constant $C > 0$ such that \[ \| \grad w(\cdot, t) \|_\L{4} \leq C \] for all $t \in (0,\tmax)$.
\end{lemma} \begin{proof} Given \Cref{lemma:classical_recursion}, we can fix a constant $K_1 \geq 1$ such that \begin{equation} \|a(\cdot, t)\|_\L{\infty} \leq K_1 \stext{ and } \int_\Omega \left( a^2(\cdot, t) + a^{2\loge}(\cdot, t) + a^4(\cdot, t) \right) \leq K_1 \;\;\;\; \text{ for all } t \in (0,\tmax). \label{eq:combined_ae_lp_estimates} \end{equation} Using the Gagliardo--Nirenberg inequality and standard regularity estimates (cf.\ \cite[Theorem 19.1]{FriedmanPartialDifferentialEquations1969} or \cite[Theorem 3.1.1]{LunardiAnalyticSemigroupsOptimal1995}) for the elliptic operator $-\div(\D \grad \,\cdot \,)$ (with Neumann-type boundary conditions), we can fix a constant $K_2 \geq 1$ such that \[ \int_\Omega |\grad \phi|^4 \leq K_2 \left( \int_\Omega |\div (\D \grad \phi)|^2 + \int_\Omega |\phi|^2 \right) \|\phi\|^2_\L{\infty} \;\;\;\; \text{ for all } \phi \in C^2(\overline{\Omega}) \text{ with }(\D\grad \phi) \cdot \nu = 0 \text{ on } \partial \Omega. \] This in turn implies that \begin{equation} \int_\Omega |\grad a|^4 \leq K_3 \left( \int_\Omega |\div (\D \grad a)|^2 + 1\right) \label{eq:ae_adapted_gni} \end{equation} for all $t \in (0,\tmax)$ with $K_3 \defs K_1^3 K_2$. \\[0.5em] After establishing these preliminaries, we now note that the first equation in (\ref{problem_a}) can also be written as \[ a_t = \div (\D \grad a) + \chi\grad w \cdot \D \grad a + \div (a (\div \D)) + \chi a (\grad w \cdot (\div \D)) + \mu a (1-a^{r-1}e^{\chi(r-1)w}) +\chi a^2 w e^{\chi w}.
\] We then test this variant of said equation with $-\div (\D \grad \a)$ and employ partial integration (using the fact that $(\div \D) \cdot \nu = 0$ on $\partial \Omega$) as well as Young's inequality to conclude that \begin{align*} \frac{1}{2}\frac{\d}{\d t} \int_\Omega (\grad a \cdot \D \grad a) &= \int_\Omega (\grad {a}_t \cdot \D \grad a) \\ &= \int_\Omega \grad ( \div (\D \grad a)) \cdot \D \grad a + \chi \int_\Omega \grad ( \grad w \cdot \D \grad a) \cdot \D \grad a\\ &\hphantom{=\;}+ \int_\Omega \grad (\div(a \div \D)) \cdot \D \grad a + \chi \int_\Omega \grad (a \grad w \cdot (\div \D)) \cdot \D \grad a \\ &\hphantom{=\;}+ \int_\Omega \grad \left( \logc a(1-a^{\loge - 1}e^{\chi(r-1)w} ) + \chi a^2 w e^{\chi w} \right) \cdot \D \grad a \\ &\leq -\frac{1}{2}\int_\Omega |\div (\D \grad a) |^2 + 2\chi^2 \int_\Omega |\grad w \cdot \D \grad w| |\grad a \cdot \D \grad a| \\ &\hphantom{=\;}+ 2\int_\Omega |\div (a \div \D)|^2 + 2\chi^2 \int_\Omega a^2 |\grad w|^2 |\div \D|^2 \\ &\hphantom{=\;}+ K_4 \int_\Omega \left( a^2 + a^{2\loge} + a^4 \right) \numberthis \label{eq:div_ae_test} \end{align*} for all $t \in (0,\tmax)$ with $K_4 \defs 8 \max\left( \logc, \logc e^{\chi (\loge - 1)\|w_{ 0}\|_\L{\infty}}, \chi \|w_{0}\|_\L{\infty}e^{\chi \|w_{0}\|_\L{\infty}}\right)^2$. 
Using the bounds outlined in (\ref{eq:classical_D_assumption}) and (\ref{eq:combined_ae_lp_estimates}), we can now further derive that \begin{align*} 2\chi^2\int_\Omega |\grad w \cdot \D \grad w| |\grad a \cdot \D \grad a| \leq 2\chi^2 M^2\int_\Omega |\grad w|^2 |\grad a|^2 \leq 8 \chi^4 M^4 K_3 \int_\Omega |\grad w|^4 + \frac{1}{8K_3}\int_\Omega |\grad a|^4 \end{align*} and \begin{align*} 2\int_\Omega |\div (a \div \D)|^2 &\leq 4\int_\Omega |\grad a|^2 |\div \D|^2 + 4 \int_\Omega a^2|\div (\div \D)|^2 \\ &\leq 4 M^2 \left(\int_\Omega |\grad a|^2 + \int_\Omega a^2 \right) \\ &\leq 4 M^2 \left(\int_\Omega |\grad a|^2 + K_1 \right) \\ &\leq \frac{1}{8K_3}\int_\Omega |\grad a|^4 + 32 M^4K_3|\Omega| + 4 M^2 K_1 \end{align*} and \begin{align*} 2 \chi^2 \int_\Omega a^2 |\grad w|^2 |\div \D|^2 \leq \chi^2 M^2 \left(\int_\Omega a^4 + \int_\Omega |\grad w|^4 \right) \leq \chi^2 M^2 K_1 \left(\int_\Omega |\grad w|^4 + 1 \right) \end{align*} for all $t \in (0,\tmax)$. Applying these three estimates combined with the second bound in (\ref{eq:combined_ae_lp_estimates}) to (\ref{eq:div_ae_test}) then yields \begin{equation} \label{eq:div_ae_test_2} \frac{1}{2}\frac{\d}{\d t} \int_\Omega (\grad a \cdot \D \grad a) \leq -\frac{1}{2}\int_\Omega |\div (\D \grad a) |^2 + \frac{1}{4K_3} \int_\Omega |\grad a|^4 + K_5 \int_\Omega |\grad w|^4 + K_6 \end{equation} for all $t \in (0,\tmax)$ with $K_5 \defs 8 \chi^4 M^4 K_3 + \chi^2 M^2 K_1$ and $K_6 \defs 32 M^4K_3|\Omega| + 4 M^2 K_1 + \chi^2 M^2 K_1 + K_1K_4$.
\\[0.5em] As our second step, we now obtain the following estimate for the time derivative of certain gradient terms of the second solution component $w$ as follows: \begin{align*} \frac{1}{4} \frac{\d}{\d t}\int_\Omega |\grad w|^4 &= \int_\Omega |\grad w|^2 \grad w \cdot \grad {w}_t = -\int_\Omega |\grad w|^2 \grad w \cdot \grad (a e^{\chi w}w) \\ &=-\int_\Omega |\grad w|^4 a e^{\chi w}(\chi w + 1) - \int_\Omega |\grad w|^2 (\grad w \cdot \grad a) e^{\chi w}w \\ &\leq K_7 \int_\Omega |\grad w|^3 |\grad a| \leq K_7 \int_\Omega |\grad w|^4 + K_7 \int_\Omega |\grad a|^4 \end{align*} for all $t \in (0,\tmax)$ with $K_7 \defs \|w_{0}\|_\L{\infty}e^{\chi \|w_{0}\|_\L{\infty}}$. \\[0.5em] Now combining this with (\ref{eq:div_ae_test_2}) (using an appropriate scaling factor) we gain \begin{align*} &\frac{1}{2}\frac{\d}{\d t} \int_\Omega (\grad a \cdot \D \grad a) + \frac{1}{16 K_3 K_7 } \frac{\d}{\d t}\int_\Omega |\grad w|^4 \leq -\frac{1}{2}\int_\Omega |\div (\D \grad a) |^2 + \frac{1}{2K_3} \int_\Omega |\grad a|^4 + K_8 \int_\Omega |\grad w|^4 + K_6 \end{align*} for all $t \in (0,\tmax)$ with $K_8 \defs K_5 + \frac{1}{4K_3}$. The application of (\ref{eq:ae_adapted_gni}) to the inequality above then yields \begin{align*} \frac{1}{2}\frac{\d}{\d t} \int_\Omega (\grad a \cdot \D \grad a) + \frac{1}{16 K_3 K_7} \frac{\d}{\d t}\int_\Omega |\grad w|^4 &\leq K_8 \int_\Omega |\grad w|^4 + K_6 + \frac{1}{2} \\ &\leq K_9 \left( \frac{1}{2}\int_\Omega (\grad a \cdot \D \grad a) + \frac{1}{16 K_3 K_7} \int_\Omega |\grad w|^4 \right) + K_6 + \frac{1}{2} \end{align*} with $K_9 \defs 16 K_3 K_7 K_8$ for all $t\in(0,\tmax)$, which, by a standard comparison argument and the assumption that $\tmax$ is finite, directly gives us our desired result. \end{proof} \begin{remark} The result of the above lemma only ensures that finite-time blow-up of the second solution component is impossible in two and three dimensions according to our blow-up criterion (\ref{eq:blowup}). 
As such, it is at this point, and only at this point in this section, that our restriction to two or three dimensions becomes necessary. This, of course, in turn means that any extension of the results of this section to a higher dimensional setting would only need to extend the above argument to one providing better bounds for the gradient of $w$. \end{remark} \noindent Given that \Cref{lemma:classical_recursion} and \Cref{lemma:grad_w_bound} rule out any kind of finite-time blow-up for our local solutions, the proof of the first central result of this paper can now be stated quite succinctly. \begin{proof}[Proof of \Cref{theorem:classical_solution}] If we assume $\tmax < \infty$, \Cref{lemma:classical_recursion} and \Cref{lemma:grad_w_bound} in combination contradict the consequence of the blow-up criterion (\ref{eq:blowup}) in this case. Therefore, $\tmax = \infty$ and thus the local solutions constructed in \Cref{lemma:local_solution} must in fact be global. This is sufficient to prove \Cref{theorem:classical_solution} as the fixed assumptions of this section were in fact identical to those of said theorem. \end{proof} \begin{remark} It is also possible to construct classical solutions in the two dimensional case without relying on logistic influences by using some methods that have previously been employed when dealing with, for example, standard diffusion, combined with some slightly modified versions of our arguments (cf.\ \cite{BellomoMathematicalTheoryKeller2015}). \\[0.5em] Essentially, the argument boils down to using an estimate of the form \[ \|u\|^3_\L{3} \leq \eps \|u\|^2_{W^{1,2}(\Omega)} \|u\ln(u) \|_\L{1} + C(\eps) \|u\|_\L{1} \] with $\eps$ being potentially arbitrarily small (cf.\ \cite[p.1199]{BilerDebyeSystemExistence1994}) in combination with an additional baseline $\int_\Omega u\ln(u)$ estimate based on an energy-type inequality (cf.\ \Cref{lemma:energy_baseline}) to establish an $L^2(\Omega)$ estimate.
From there, the arguments are very similar to the Moser-type iteration argument presented above, only with some slight complications added, which are easily surmountable. \Cref{lemma:grad_w_bound} translates basically verbatim. \\[0.5em] We decided not to present this result here as it will not be needed for our later construction of weak solutions and is not appreciably different from what we have done here or from what has already been done in the classical diffusion case. \section{Existence of Weak Solutions} We have at this point established all the classical existence theory we want to address in this paper and therefore will now transition to our construction of weak solutions, which is in part based on said classical theory. \subsection{Approximate Solutions} As is fairly common, our construction of weak solutions will centrally rely on approximation of said solutions by classical solutions, which solve a suitably regularized version of the original problem. As we already derived global existence of classical solutions for the system (\ref{problem}) with very strong assumptions on $\D$, we of course want to construct our weak solutions under much weaker assumptions on $\D$ because there would be almost nothing gained otherwise. As such, the central regularization employed by us will be concerned with approximating a potentially quite irregular $\D$ by matrices $\De$ that are sufficiently regular to ensure classical existence of solutions. Apart from this, we will use approximated initial data. We will also slightly modify the logistic source term to ensure $r > 2$ in our approximated system because we can then further eliminate the assumption concerning the parameters $\chi$ and $\mu$ needed for the classical theory when $r = 2$.
One central advantage of this approach is that our approximate systems are very close to the system we actually want to construct solutions for and thus our regularizations only minimally interfere with the structures present in the system, which we want to exploit, e.g., to gain a priori information. \\[0.5em] To now make all of this more explicit, we begin by fixing a smooth bounded domain $\Omega\subseteq\R^n$, $n\in\{2,3\}$, and system parameters $\chi \in (0,\infty)$, $\logc \in (0,\infty)$, $\loge \in [2,\infty)$. We also fix some a.e.\ non-negative initial data $u_0 \in L^{z[\ln(z)]_+}(\Omega)$ and $w_0 \in C^0(\overline{\Omega})$ with $\sqrt{w_0} \in W^{1,2}(\Omega)$, where $L^{z[\ln(z)]_+}(\Omega)$ is the standard Orlicz space associated with the function $z \mapsto z \left[\ln(z)\right]_+$. We further fix $\D \in W^{1,2}_\mathrm{div}(\Omega;\R^{n\times n}) \cap C^0(\overline{\Omega}; \R^{n\times n})$ with the following properties: \begin{itemize} \item $\D$ is positive semidefinite everywhere. \item $\D$ allows for a divergence estimate with exponent $\beta \in [\frac{1}{2}, 1)$ and constant $A > 0$ such that $\frac{\beta}{1-\beta}\leq r$ (cf.\ \Cref{definition:div_regularity}). \item $\D$ allows for a compact $L^1(\Omega)$ embedding (cf.\ \Cref{definition:comp_regularity}). \end{itemize} As for any $\beta \in [\frac{1}{2}, \frac{2}{3}]$ the condition $\frac{\beta}{1-\beta} \leq r$ is always fulfilled independent of our choice of $r \in [2,\infty)$ and as it is easy to see that, if $\D$ allows for a divergence estimate in accordance with \Cref{definition:div_regularity}, it also allows for a divergence estimate with any larger exponent, we can assume that the parameter $\beta$ seen in the second of the above properties is in fact an element of $[\frac{2}{3}, 1) \subseteq (\frac{1}{2}, 1)$ without loss of generality.
Then according to \Cref{remark:div_regularity_consequence}, the aforementioned divergence estimate directly implies that \[ \D \in W^{1,q}_\mathrm{div}(\Omega;\R^{n\times n}) \subseteq W^{1,2}_\mathrm{div}(\Omega;\R^{n\times n})\subseteq W^{1,\frac{r}{r-1}}_\mathrm{div}(\Omega;\R^{n\times n}) \] with $q \defs \frac{2\beta}{2\beta - 1}$. \\[0.5em] Given these assumptions, we now choose an approximate family $(\De)_{\eps \in (0,1)} \subseteq C^2(\overline{\Omega}; \R^{n \times n})$ with $\De$ positive definite on $\overline{\Omega}$, $(\div \De) \cdot \nu = 0$ on $\partial \Omega$ for all $\eps \in (0,1)$ and \begin{equation}\label{eq:De_convergence} \De \rightarrow \D \stext{in} W^{1,q}_\mathrm{div}(\Omega;\R^{n\times n}) \cap C^0(\overline{\Omega}; \R^{n\times n}) \;\;\;\; \text{ as } \eps \searrow 0. \end{equation} We can further choose this family in such a way as to ensure that \begin{equation}\label{eq:De_div_estimate} \int_\Omega |(\div \De) \cdot \Phi| \leq B \left(\int_\Omega (\Phi \cdot \De \Phi)^\beta + 1\right) \end{equation} with $B \defs A + 1$ and \begin{equation}\label{eq:De_estimate} \D + \eps \leq \De \leq \D + 3\eps \end{equation} for all $\Phi \in C^0(\overline{\Omega}; \R^n)$ and $\eps \in (0,1)$. These additional properties for the approximation $\De$ essentially mean that the regularity properties assumed for $\D$ are also valid for said approximation in an $\eps$ independent fashion. \begin{remark} Let us briefly illustrate how such an approximation of $\D = (d_{i,j})_{i,j \in \{1,\dots, n\}}$ can be achieved. This will be a two-step process. We first approximate $\D$ in our desired function space with the appropriate boundary conditions and then, as a second step, we show that, with only slight modification, we can gain the remaining properties from that approximation. \\[0.5em] For the initial approximation, we assume without loss of generality that $\D$ is smooth. 
We can do this as it is well-known that a standard convolution argument would give us a smooth approximation of $\D$ in our desired space, which we can then approximate again to gain all additional desired properties. In our case, the key property not covered by such a convolution based method is that we want all our approximate matrices to have very specific boundary values. As such, we will now demonstrate how an approximation of a smooth $\D$ by matrices with exactly this property can be achieved using the continuity properties of semigroups associated with carefully chosen sectorial operators (cf.\ \cite{FeffermanSimultaneousApproximationLebesgue2021}). \\[0.5em] To this end, we fix functions $d'_{i,j}$ such that \begin{equation}\label{eq:matrix_modification} d_{i,j} = \begin{cases} d'_{i,i} + \sum^n_{l,k = 1} d'_{l,k}, &\text{ if } i = j, \\ d'_{i,j}, &\text{ if } i\neq j \end{cases} \end{equation} for all $i,j \in \{1,\dots,n\}$. As can be easily seen, the functions $d'_{i,j}$ are linear combinations of the components of $\D$ and therefore smooth as well. We then set $d'_{i,j,\eps} = e^{\eps L_{i,j}} d'_{i,j}$, $\eps \in (0,1)$, where $L_{i,j}$ is the negative Laplacian on $\Omega$ with boundary conditions $\grad \phi \cdot \nu + \frac{1}{2}(\partial_{x_i} \phi) \nu_j + \frac{1}{2}(\partial_{x_j} \phi) \nu_i = 0$ and $(e^{t L_{i,j}})_{t\geq 0}$ is the associated semigroup. Due to the well-known continuity properties of said semigroup (cf.\ \cite{HenryGeometricTheorySemilinear1981}, \cite{LunardiAnalyticSemigroupsOptimal1995}, \cite{TriebelInterpolationTheoryFunction1978}), we know that $d'_{i,j,\eps} \rightarrow d'_{i,j}$ and therefore $d_{i,j,\eps} \rightarrow d_{i,j}$ in $W^{1,q}(\Omega)\cap C^0(\overline{\Omega})$ as $\eps \searrow 0$ with $d_{i,j,\eps}$ defined in an analogous fashion to (\ref{eq:matrix_modification}). Thus, $\De \defs (d_{i,j,\eps})_{i,j \in \{1,\dots, n\}} \rightarrow \D$ in our desired way. 
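\\[0.5em] To make the above linear-combination claim for the functions $d'_{i,j}$ fully explicit: summing (\ref{eq:matrix_modification}) over all $i,j \in \{1,\dots,n\}$ gives $\sum^n_{l,k=1} d_{l,k} = (n+1)\sum^n_{l,k=1} d'_{l,k}$, and therefore the system (\ref{eq:matrix_modification}) is uniquely solved by
\[
d'_{i,j} = \begin{cases} d_{i,i} - \frac{1}{n+1}\sum^n_{l,k=1} d_{l,k}, &\text{ if } i = j, \\ d_{i,j}, &\text{ if } i \neq j \end{cases}
\]
for all $i,j \in \{1,\dots,n\}$.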
Further, \begin{align*} (\div \De) \cdot \nu =& \sum^n_{i,j=1} (\partial_{x_j} d_{i,j,\eps}) \nu_i = \sum _{i,j=1, i\neq j}^n (\partial_{x_j} d_{i,j,\eps}) \nu_i + \sum^n_{i=1} (\partial_{x_i} d_{i,i,\eps}) \nu_i \\ =&\sum _{i,j=1, i\neq j}^n (\partial_{x_j} d'_{i,j,\eps}) \nu_i + \sum^n_{i=1} \left( \partial_{x_i} d'_{i,i,\eps} + \sum^n_{l,k = 1} \partial_{x_i}d'_{l,k,\eps} \right) \nu_i \\ =& \sum_{i,j=1}^n \left(\tfrac{1}{2}(\partial_{x_j} d'_{i,j,\eps}) \nu_i + \tfrac{1}{2}(\partial_{x_i} d'_{i,j,\eps}) \nu_j \right) + \sum^n_{l,k =1} \grad d'_{l,k,\eps} \cdot \nu \\ =& \sum_{i,j=1}^n \left( \grad d'_{i,j,\eps} \cdot \nu + \tfrac{1}{2}(\partial_{x_j} d'_{i,j,\eps}) \nu_i + \tfrac{1}{2}(\partial_{x_i} d'_{i,j,\eps}) \nu_j \right) = 0 \numberthis \label{eq:De_boundary_condition_calculation} \end{align*} on $\partial \Omega$ for all $\eps \in (0,1)$ due to the prescribed boundary conditions of the operators $L_{i,j}$. Thus, we have constructed a suitable approximate family for $\D$ with the correct boundary conditions. \\[0.5em] Having now presented the full argument used to achieve the boundary condition (\ref{eq:De_boundary_condition_calculation}), let us briefly note that we introduced the functions $d'_{i,j}$ to ensure that the operators $L_{i,j}$ have sufficiently non-tangential boundary conditions and are therefore sectorial (cf.\ \cite{LunardiAnalyticSemigroupsOptimal1995}, \cite{TriebelInterpolationTheoryFunction1978}), which is of course necessary for our semigroup based arguments. \\[0.5em] As our second step, we will now fix one such family of approximations of $\D$ and call it $\De'$, $\eps \in (0,1)$, as we still want to slightly modify it. We can assume that \[ \|\D'_\eps - \D\|_\L{\infty} \leq \eps \stext{ and } \| \div \De' - \div \D\|_\L{\frac{2\beta}{2\beta-1}} \leq \eps^\frac{1}{2} \] for all $\eps \in (0,1)$ without loss of generality. 
If we then set $\De \defs \De' + \| \De' - \D \|_\L{\infty} + \eps$, we can ensure that \[ \D + \eps = \De - \De + \D + \eps = \De - \De' + \D - \|\De' - \D\|_\L{\infty} \leq \De + \| \De' - \D\|_\L{\infty} - \|\De' - \D\|_\L{\infty} = \De \] and \[ \De = \D - \D + \De = \D + \De' - \D + \|\De' - \D\|_\L{\infty} + \eps \leq \D + 2\|\De' - \D\|_\L{\infty} + \eps \leq \D + 3\eps \] for all $\eps \in (0,1)$ without affecting any of the desired properties that we already derived as we only modify $\D'_\eps$ by adding constants that converge to zero as $\eps \searrow 0$. This gives us (\ref{eq:De_estimate}). \\[0.5em] To derive the divergence estimate, we first observe that \begin{align*} \left|\int_\Omega |(\div \De) \cdot \Phi| - \int_\Omega |(\div \D) \cdot \Phi| \right| \leq& \int_\Omega |\div \De - \div \D||\Phi| \leq \|\div \De - \div \D\|_\L{\frac{2\beta}{2\beta - 1}} \|\Phi\|_\L{2\beta} \\ \leq& \| \eps^\frac{1}{2} \Phi\|_\L{2\beta} \leq \left(\int_\Omega (\Phi \cdot \eps \Phi)^\beta + 1\right) \leq \left(\int_\Omega (\Phi \cdot \De \Phi)^\beta + 1\right) \end{align*} for all $\eps \in (0,1)$ and $\Phi \in C^0(\overline{\Omega};\R^n)$. We can then further estimate \begin{align*} \int_\Omega |(\div \De) \cdot \Phi| \leq& \int_\Omega |(\div \D) \cdot \Phi| + \left|\int_\Omega |(\div \De) \cdot \Phi| - \int_\Omega |(\div \D) \cdot \Phi| \right| \\ \leq& A \left( \int_\Omega (\Phi \cdot \D \Phi)^\beta + 1 \right) + \left( \int_\Omega (\Phi \cdot \De \Phi)^\beta + 1 \right) \\ \leq& (A + 1)\left( \int_\Omega (\Phi \cdot \De \Phi)^\beta + 1 \right) \end{align*} for all $\eps \in (0,1)$ and $\Phi \in C^0(\overline{\Omega};\R^n)$ using our assumed divergence estimate for $\D$ and (\ref{eq:De_estimate}). This gives us (\ref{eq:De_div_estimate}) and thus completes the discussion of our construction. \end{remark}\noindent We will now proceed to construct our approximate initial data. 
To do this, we first fix families $(u_{0,\eps})_{\eps \in (0,1)}$, $(w'_{0,\eps})_{\eps \in (0,1)} \subseteq C^3(\overline{\Omega})$ of positive functions with $(\De \grad u_{0, \eps})\cdot\nu = (\De \grad w'_{0, \eps})\cdot\nu = 0$ on $\partial \Omega$ and \begin{equation*} \begin{aligned} u_{0,\eps} &\rightarrow u_0 \;\;\;\;&&\text{in } \L{z[\ln(z)]_+},\\ w'_{0,\eps} &\rightarrow \sqrt{w_0} &&\text{in } W^{1,2}(\Omega) \cap C^0(\overline{\Omega}) \end{aligned} \end{equation*} as $\eps \searrow 0$. These families can again be constructed by using convolutions or by a similar semigroup based method as seen before in the much more challenging case of the family $(\De)_{\eps \in(0,1)}$. Positivity of both families can further be achieved by first approximating the function in a non-negative way, which is a property of both convolution and semigroup based methods, and then adding $\eps$ to the resulting approximation as a secondary step. \\[0.5em] We then let $w_{0,\eps} \defs (w'_{0,\eps})^2 \in C^3(\overline{\Omega})$ for all $\eps \in (0,1)$ and, because of the properties already established for the family $(w'_{0,\eps})_{\eps \in (0,1)}$, it is straightforward to derive that $w_{0,\eps} > 0$ on $\overline{\Omega}$, $(\De \grad w_{0, \eps})\cdot\nu = 0$ on $\partial \Omega$ and \begin{equation*} \begin{aligned} w_{0,\eps} &\rightarrow w_0 \;\;\;\; &&\text{in } C^0(\overline{\Omega}),\\ \sqrt{w_{0,\eps}} &\rightarrow \sqrt{w_0} &&\text{in } W^{1,2}(\Omega) \cap C^0(\overline{\Omega}) \end{aligned} \end{equation*} as $\eps \searrow 0$. 
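\\[0.5em] The straightforward derivation mentioned here essentially only relies on the chain rule: as $w'_{0,\eps} > 0$, we have $\sqrt{w_{0,\eps}} = w'_{0,\eps}$ and $\grad w_{0,\eps} = 2 w'_{0,\eps} \grad w'_{0,\eps}$, whence in particular
\[
(\De \grad w_{0,\eps}) \cdot \nu = 2 w'_{0,\eps} \, (\De \grad w'_{0,\eps}) \cdot \nu = 0 \;\;\;\; \text{ on } \partial\Omega
\]
for all $\eps \in (0,1)$, while the stated convergence properties carry over directly from the family $(w'_{0,\eps})_{\eps \in (0,1)}$.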
\\[0.5em] One important consequence of the above approximations is that we can fix a uniform constant $M > 0$ such that \begin{equation}\label{eq:weak_De_bounds} \|\div \De\|_\L{2} \leq M, \;\;\;\; \|\De\|_\L{\infty} \leq M \end{equation} and \begin{equation}\label{eq:weak_initial_data_bounds} \int_\Omega u_{0,\eps} \leq M, \;\;\;\; \int_\Omega u_{0,\eps} \ln(u_{0,\eps}) \leq M,\;\;\;\; \|w_{0,\eps}\|_\L{\infty} \leq M, \;\;\;\; \int_\Omega \frac{\grad {w_{0, \eps}} \cdot \De \grad {w_{0, \eps}}}{w_{0,\eps}} \leq M \end{equation} for all $\eps \in (0,1)$. \\[0.5em] We then consider the approximate systems \begin{equation}\label{approx_problem} \left\{ \begin{aligned} u_{\eps t} &= \div (\De \grad \ue + \ue \div \De) - \chi \div (\ue\De \grad \we) + \logc \ue(1-\ue^{\loge + \eps - 1}) \;\;\;\; &&\text{ on } \Omega\times(0,\infty), \\ w_{\eps t} &= - \ue \we \;\;\;\; &&\text{ on } \Omega\times(0,\infty), \\ (\De \grad \ue) \cdot \nu &= \chi (\ue\De \grad \we) \cdot \nu - \ue (\div \De) \cdot \nu\;\;\;\; &&\text{ on } \partial\Omega\times(0,\infty), \\ \ue(\cdot, 0) &= u_{0,\eps} > 0, \;\; \we(\cdot, 0) = w_{0,\eps} > 0 \;\;\;\; &&\text{ on } \Omega \end{aligned} \right. \end{equation} and use our already established classical existence theory from \Cref{theorem:classical_solution} to now fix positive, global classical solutions $(\ue, \we)$ to the above system for each $\eps \in (0,1)$. Note that as $r + \eps > 2$, we do not need to make additional assumptions on the parameters $\chi$ and $\logc$ to ensure that said existence theory is applicable. \subsection{Uniform A Priori Estimates} We will now derive the bounds necessary to ensure compactness of our families of approximate classical solutions in function spaces conducive to the construction of our desired weak solutions to (\ref{problem}) as limits of said approximate solutions along a suitable sequence of $\eps \in (0,1)$.
\\[0.5em] Apart from the baseline established in \Cref{lemma:absolute_baseline} for the classical existence theory, which can be easily translated to our approximate solutions in an $\eps$-independent fashion, we will now derive some extended bounds based on an energy-type inequality as an additional baseline for later arguments in this section. This type of energy inequality was already used in the one-dimensional case in \cite{WinklerSingularStructureFormation2018}. \begin{lemma} \label{lemma:energy_baseline} For each $T > 0$, there exists a constant $C \equiv C(T) > 0$ such that \begin{align*} &\int_\Omega \ue \ln(\ue) + \int_\Omega \frac{\grad \we \cdot \De \grad \we }{\we} + \int_0^t \int_\Omega \frac{\grad \ue \cdot \De \grad \ue}{\ue} + \int_0^t\int_\Omega \ue^{r+\eps} \ln(\ue) \leq C \end{align*} holds for all $t \in (0,T)$ and all $\eps \in (0,1)$. \end{lemma} \begin{proof} Fix $T > 0$. \\[0.5em] By then testing the first equation in (\ref{approx_problem}) with $\ln(\ue)$ we gain that \begin{align*} &\hphantom{=\;}\frac{\d}{\d t} \int_\Omega \ue \ln(\ue) - \frac{\d}{\d t}\int_\Omega \ue = \int_\Omega {\ue}_t \ln(\ue) \\ &= \int_\Omega \ln(\ue) \div \left( \De \grad \ue + \ue \div \De \right) - \chi \int_\Omega \ln(\ue) \div (\ue\De \grad \we) + \logc\int_\Omega \ue(1 - \ue^{\loge + \eps - 1}) \ln(\ue) \\ &= - \int_\Omega \frac{\grad \ue \cdot \De \grad \ue}{\ue} - \int_\Omega (\div \De) \cdot \grad \ue + \chi \int_\Omega \grad \ue \cdot \De \grad \we + \logc\int_\Omega \ue(1 - \ue^{\loge + \eps - 1}) \ln(\ue) \\ &\leq - \int_\Omega \frac{\grad \ue \cdot \De \grad \ue}{\ue} + B\int_\Omega (\grad \ue \cdot \De\grad \ue)^\beta + B + \chi \int_\Omega \grad \ue \cdot \De \grad \we + \logc\int_\Omega \ue(1 - \ue^{\loge + \eps - 1}) \ln(\ue) \\ &\leq - \frac{1}{2}\int_\Omega \frac{\grad \ue \cdot \De \grad \ue}{\ue} + 2^\frac{\beta}{1-\beta}B^\frac{1}{1-\beta} \int_\Omega \ue^\frac{\beta}{1-\beta} + B + \chi \int_\Omega \grad \ue \cdot \De \grad \we + 
\logc\int_\Omega \ue(1 - \ue^{\loge + \eps - 1}) \ln(\ue) \label{eq:ue_lnue_test} \numberthis \end{align*} for all $t\in(0,T)$ and $\eps \in (0,1)$ by partial integration, use of the no-flux boundary conditions and the divergence estimate (\ref{eq:De_div_estimate}) combined with Young's inequality. We can then further gain from the second equation in (\ref{approx_problem}) that \begin{align*} \frac{1}{2}\frac{\d}{\d t}\int_\Omega \frac{\grad \we \cdot \De \grad \we }{\we} &= \int_\Omega \frac{\grad {\we}_t \cdot \De \grad \we}{\we} - \frac{1}{2}\int_\Omega \frac{ {\we}_t (\grad \we \cdot \De \grad \we)}{\we^2} \\ &= - \int_\Omega \frac{\ue(\grad \we \cdot \De \grad \we)}{\we} - \int_\Omega \grad \ue \cdot \De \grad \we + \frac{1}{2}\int_\Omega \frac{\ue (\grad \we \cdot \De \grad \we)}{\we} \\ &= -\frac{1}{2}\int_\Omega \frac{\ue(\grad \we \cdot \De \grad \we)}{\we} - \int_\Omega \grad \ue \cdot \De \grad \we \leq - \int_\Omega \grad \ue \cdot \De \grad \we \numberthis \label{eq:grad_we_test} \end{align*} for all $t\in(0,T)$ and $\eps \in (0,1)$. Combining (\ref{eq:ue_lnue_test}) and (\ref{eq:grad_we_test}) now allows us to further estimate as follows due to the critical $\int_\Omega \grad \ue \cdot \De \grad \we$ terms in both equations neutralizing each other given the correct coefficients: \begin{align*} &\frac{\d}{\d t}\left\{ \int_\Omega \ue \ln(\ue) - \int_\Omega \ue + \frac{\chi}{2} \int_\Omega \frac{\grad \we \cdot \De \grad \we }{\we} \right\} + \frac{1}{2}\int_\Omega \frac{\grad \ue \cdot \De \grad \ue}{\ue} \\ &\leq 2^\frac{\divrege}{1-\divrege}B^\frac{1}{1-\beta}\int_\Omega \ue^\frac{\divrege}{1-\divrege} + B + \logc \int_\Omega \ue \ln(\ue) - \logc \int_\Omega \ue^{\loge+\eps} \ln( \ue ) \numberthis \label{eq:energy_estimate_derivation} \end{align*} for all $t \in (0,T)$ and $\eps \in (0,1)$. 
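\\[0.5em] For the sake of completeness, let us briefly spell out the Young's inequality step used to obtain (\ref{eq:ue_lnue_test}) above (a routine verification): applying Young's inequality with the exponents $\frac{1}{\beta}$ and $\frac{1}{1-\beta}$, we see that
\[
B (\grad \ue \cdot \De\grad \ue)^\beta = \left( \frac{1}{2}\cdot\frac{\grad \ue \cdot \De \grad \ue}{\ue} \right)^\beta \cdot 2^\beta B \ue^\beta \leq \frac{1}{2}\frac{\grad \ue \cdot \De \grad \ue}{\ue} + 2^\frac{\beta}{1-\beta}B^\frac{1}{1-\beta} \ue^\frac{\beta}{1-\beta}
\]
because the factors $\beta$ and $1-\beta$ appearing in Young's inequality are both at most $1$.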
\\[0.5em] As $\frac{\divrege}{1-\divrege} \leq \loge$ by assumption, there exists a constant $K > 0$ (independent of $\eps$) such that \[ 2^\frac{\divrege}{1-\divrege}B^\frac{1}{1-\divrege} z^{\frac{\divrege}{1-\divrege}} - \frac{\logc}{2} z^{\loge+\eps} \ln(z) \leq 2^\frac{\divrege}{1-\divrege}B^\frac{1}{1-\divrege} z^{\frac{\divrege}{1-\divrege}} - \frac{\logc}{2} z^{\loge} \ln(z) \leq K \] for all $z \geq 0$ and $\eps \in (0,1)$. Given this, we can then further estimate in (\ref{eq:energy_estimate_derivation}) to see that \begin{align*} &\frac{\d}{\d t}\left\{ \int_\Omega \ue \ln(\ue) + \frac{\chi}{2} \int_\Omega \frac{\grad \we \cdot \De \grad \we }{\we} \right\} + \frac{1}{2} \int_\Omega \frac{\grad \ue \cdot \De \grad \ue}{\ue} + \frac{\mu}{2}\int_\Omega \ue^{r+\eps} \ln(\ue) \\ &\leq \logc \int_\Omega \ue \ln(\ue) + \frac{\d}{\d t}\int_\Omega \ue + K|\Omega| + B \end{align*} for all $t \in (0,T)$ and $\eps \in (0,1)$. Time integration in combination with Gronwall's inequality and the uniform $L^1(\Omega)$ bound for $\ue$ due to \Cref{lemma:absolute_baseline} as well as the uniform initial data bounds from (\ref{eq:weak_initial_data_bounds}) then yields our desired result as the above differential inequality essentially means that the growth of the considered terms can be at most exponential. \end{proof}\noindent We now further extract some relevant but straightforward additional bounds for our approximate solutions from the previous lemma.
\begin{corollary}\label{lemma:basic_bounds_weak} For each $T > 0$, there exists $C \equiv C(T) > 0$ such that \begin{equation} \int_0^T \int_\Omega u_\eps^{r+\eps} \ln(\ue^{r+\eps}) \leq C, \;\;\;\; \int_0^T \int_\Omega u_\eps^r \ln(\ue) \leq C, \label{eq:basic_bounds_weak_u} \end{equation} \begin{equation} \int_0^T \|\ue^\frac{1}{2}(\cdot, s)\|_{\WD{2}}^2 \d s \leq C, \;\;\;\; \int_0^T \|\ue(\cdot, s)\|^\frac{2r}{r+1}_\WD{\frac{2r}{r+1}} \d s \leq \int_0^T \|\ue(\cdot, s)\|^\frac{2r}{r+1}_{W^{1,\frac{2r}{r+1}}_{\De}(\Omega)} \d s \leq C \label{eq:basic_bounds_weak_grad_u} \end{equation} and \begin{equation} \int_0^T \|\we(\cdot, s)\|_\WD{2}^2 \d s \leq \int_0^T \|\we(\cdot, s)\|_{W^{1,2}_{\De}(\Omega)}^2 \d s \leq C \label{eq:basic_bounds_weak_w} \end{equation} for all $\eps \in (0,1)$. \end{corollary} \begin{proof} Fix $T > 0$. \\[0.5em] Then given that \[ \int_\Omega \grad \we \cdot \De \grad \we \leq \|\we\|_\L{\infty} \int_\Omega \frac{\grad \we \cdot \De \grad \we}{\we} \] for all $t \in(0,T)$ and $\eps \in (0,1)$ as well as knowing that $\D \leq \De$ according to (\ref{eq:De_estimate}) for all $\eps \in (0,1)$, \Cref{lemma:energy_baseline} combined with \Cref{lemma:absolute_baseline} and (\ref{eq:weak_initial_data_bounds}) yields (\ref{eq:basic_bounds_weak_w}). For the definition of the relevant spaces see \Cref{definition:spaces}. \\[0.5em] As \[ z^{r} \ln(z) \leq z^{r+\eps}\ln(z) \stext{ and } z^{r+\eps} \ln(z^{r+\eps}) = (r+\eps) z^{r+\eps}\ln(z) \leq (r+1)z^{r+\eps}\ln(z) + 1 \] for all $z\geq 0$ and $\eps \in (0,1)$, the result (\ref{eq:basic_bounds_weak_u}) follows directly from \Cref{lemma:energy_baseline}. 
\\[0.5em] To address the last remaining result (\ref{eq:basic_bounds_weak_grad_u}), we now note that \[ \int_\Omega \frac{\grad \ue \cdot \De \grad \ue}{\ue} = 4\int_\Omega \grad \ue^\frac{1}{2} \cdot \De \grad \ue^\frac{1}{2} \] and \begin{align*} \int_\Omega \left( \grad \ue \cdot \De \grad \ue \right)^\frac{r}{r+1} &= \int_\Omega \ue^\frac{r}{r+1} \left( \frac{\grad \ue \cdot \De \grad \ue}{\ue} \right)^\frac{r}{r+1} \\ &\leq \int_\Omega \ue^r + \int_\Omega \frac{\grad \ue \cdot \De \grad \ue}{\ue} \leq r\int_\Omega \ue^r \ln(\ue) + |\Omega| + \int_\Omega \frac{\grad \ue \cdot \De \grad \ue}{\ue} \end{align*} for all $t \in(0,T)$ and $\eps \in (0,1)$ due to Young's inequality. Given this, the result (\ref{eq:basic_bounds_weak_grad_u}) also follows directly from \Cref{lemma:energy_baseline} and the fact that $\D \leq \De$ for all $\eps \in (0,1)$ according to (\ref{eq:De_estimate}). \end{proof}\noindent By another testing procedure for the first equation in (\ref{approx_problem}), which is very similar to the one already used by us in the proof of \Cref{lemma:energy_baseline}, we will now derive our final preliminary set of bounds for this section. \begin{lemma}\label{lemma:additional_baseline} For each $T > 0$, there exists a constant $C\equiv C(T) > 0$ such that \[ \int_0^T \int_\Omega \ue^{-\frac{1}{2}} |(\div \De) \cdot \grad \ue| \leq C \stext{ and } \int_0^T \int_\Omega \ue^{-\frac{3}{2}}\left(\grad \ue \cdot \De \grad \ue\right) \leq C \] for all $\eps \in (0,1)$. \end{lemma} \begin{proof} Fix $T > 0$.
\\[0.5em] We first note that \begin{align*} \int_\Omega \ue^{-\frac{1}{2}} |(\div \De) \cdot \grad \ue| &= 2\int_\Omega |(\div \De) \cdot \grad \ue^\frac{1}{2}| \leq 2B \int_\Omega \left(\grad \ue^\frac{1}{2} \cdot \De \grad \ue^\frac{1}{2}\right)^\beta + 2B \\ &\leq 2B \int_\Omega \left(\frac{\grad \ue \cdot \De \grad \ue}{\ue}\right)^\beta + 2B\leq 2B\int_\Omega \frac{\grad \ue \cdot \De \grad \ue}{\ue} + 2B(1+|\Omega|)\numberthis \label{eq:u_-12+grad_estimate} \end{align*} for all $t \in (0,T)$ and $\eps \in (0,1)$ due to (\ref{eq:De_div_estimate}) and the fact that $\grad \ue^\frac{1}{2} \cdot \De \grad \ue^\frac{1}{2} = \frac{\grad \ue \cdot \De \grad \ue}{4\ue} \leq \frac{\grad \ue \cdot \De \grad \ue}{\ue}$. Given the bounds established in \Cref{lemma:energy_baseline}, this directly gives us the first half of our desired result. \\[0.5em] We then further test the first equation in (\ref{approx_problem}) with $-\ue^{-\frac{1}{2}}$ to derive that \begin{align*} -2 \frac{\d}{\d t}\int_\Omega \ue^\frac{1}{2} &= -\int_\Omega \ue^{-\frac{1}{2}} u_{\eps t} \\ &= -\int_\Omega \ue^{-\frac{1}{2}} \div \left( \De \grad \ue + \ue \div \De \right) + \chi \int_\Omega \ue^{-\frac{1}{2}} \div (\ue\De \grad \we) - \logc\int_\Omega \ue^\frac{1}{2}(1 - \ue^{\loge + \eps - 1}) \\ &= -\frac{1}{2}\int_\Omega \ue^{-\frac{3}{2}} (\grad \ue \cdot \De \grad \ue) -\frac{1}{2} \int_\Omega \ue^{-\frac{1}{2}}((\div \De) \cdot \grad \ue) \\ &\hphantom{=\;}+ \frac{\chi}{2} \int_\Omega \ue^{-\frac{1}{2}}(\grad\ue \cdot \De \grad \we) - \logc\int_\Omega \ue^\frac{1}{2}(1 - \ue^{\loge + \eps - 1}) \\ & \leq -\frac{1}{2}\int_\Omega \ue^{-\frac{3}{2}} (\grad \ue \cdot \De \grad \ue) + \frac{1}{2}\int_\Omega \ue^{-\frac{1}{2}} |(\div \De) \cdot \grad \ue| \\ &\hphantom{=\;}+ \frac{\chi}{4} \int_\Omega \frac{\grad \ue \cdot \De \grad \ue}{\ue} + \frac{\chi}{4} \int_\Omega \grad \we \cdot \De \grad \we + \logc\int_\Omega \ue^{r+\eps-\frac{1}{2}} \end{align*} for all $t\in(0,T)$ and $\eps \in (0,1)$ by partial integration, use of the no-flux boundary conditions and the Cauchy--Schwarz inequality combined with Young's inequality.
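\\[0.5em] As an aside, the Cauchy--Schwarz and Young steps used for the taxis term in the above computation amount to the pointwise estimate
\[
\frac{\chi}{2} \ue^{-\frac{1}{2}} (\grad\ue \cdot \De \grad \we) \leq \frac{\chi}{2} \left( \frac{\grad \ue \cdot \De \grad \ue}{\ue} \right)^\frac{1}{2} \left( \grad \we \cdot \De \grad \we \right)^\frac{1}{2} \leq \frac{\chi}{4} \frac{\grad \ue \cdot \De \grad \ue}{\ue} + \frac{\chi}{4} \grad \we \cdot \De \grad \we,
\]
wherein the first inequality is the Cauchy--Schwarz inequality for the bilinear form induced by the positive definite matrix $\De$. \\[0.5em]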
This then immediately implies \begin{align*} \frac{1}{2}\int_\Omega \ue^{-\frac{3}{2}} (\grad \ue \cdot \De \grad \ue) &\leq 2 \frac{\d}{\d t}\int_\Omega \ue^\frac{1}{2} + \frac{1}{2}\int_\Omega \ue^{-\frac{1}{2}} |(\div \De) \cdot \grad \ue| \\ &\hphantom{=\;} + \frac{\chi}{4} \int_\Omega \frac{\grad \ue \cdot \De \grad \ue}{\ue} + \frac{\chi}{4} \int_\Omega \grad \we \cdot \De \grad \we + \logc \int_\Omega \ue^{r+\eps-\frac{1}{2}} \numberthis \label{eq:secondary_ue_gradient_estimate} \end{align*} for all $t\in(0,T)$ and $\eps \in (0,1)$. As further \[ \int_\Omega \ue^{\loge + \eps - \frac{1}{2}} \leq \int_\Omega \ue^{r+\eps} + |\Omega| \leq \int_\Omega \ue^{r+\eps}\ln(\ue^{r+\eps}) + 2|\Omega| \] for all $t\in(0,T)$ and $\eps \in (0,1)$, the inequality (\ref{eq:secondary_ue_gradient_estimate}) combined with the already established bounds from \Cref{lemma:absolute_baseline}, \Cref{lemma:energy_baseline} and (\ref{eq:weak_initial_data_bounds}) as well as (\ref{eq:u_-12+grad_estimate}) gives us our desired estimate after an integration in time. \end{proof} \subsection{Construction of Weak Solutions} As the final preparation for the upcoming compactness argument (based on the Aubin--Lions lemma), which we will use to construct the candidates for our weak solutions, we now prepare uniform integrability estimates for the time derivatives of $\ue^{1/2}$ and $\we$. \\[0.5em] Note that the construction of a solution candidate for the second solution component $w$ could likely be achieved by less powerful means. But as we will already need to employ fairly extensive compact embedding arguments to handle the first solution components $\ue$ anyway and deriving the necessary additional uniform bounds for $\we$ is trivial, we will use the same compactness argument for the second solution component as well for the sake of uniformity of presentation.
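\\[0.5em] For the reader's orientation, let us also briefly recall the variant of the Aubin--Lions lemma that we have in mind here: given Banach spaces
\[
X \hookrightarrow B \hookrightarrow Y
\]
such that the first embedding is compact and the second one is continuous, every family of functions that is bounded in $L^p([0,T]; X)$ for some $p \in [1,\infty)$ and whose time derivatives are bounded in $L^1([0,T]; Y)$ is relatively compact in $L^p([0,T]; B)$.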
\begin{lemma}\label{lemma:dual_bounds_weak} For each $T > 0$, there exists $C \equiv C(T) > 0$ such that \[ \int_0^T \|(\ue^\frac{1}{2})_t(\cdot, t)\|_{(W^{n+1,2}(\Omega))^*} \d t \leq C \stext{ and } \int_0^T \|w_{\eps t}(\cdot, t)\|_{(W^{n+1,2}(\Omega))^*} \d t \leq C \] for all $\eps \in (0,1)$. \end{lemma} \begin{proof} Fix $T > 0$. \\[0.5em] We then begin by noting that the bound for ${\we}_t$ is an immediate and straightforward consequence of \Cref{lemma:absolute_baseline} combined with (\ref{eq:weak_initial_data_bounds}) and the second equation in (\ref{approx_problem}) as well as the fact that $W^{n+1, 2}(\Omega)$ embeds continuously into $L^\infty(\Omega)$. \\[0.5em] As such, we now focus our attention on deriving the $(\ue^\frac{1}{2})_t$ bound. To this end, we test the first equation in (\ref{approx_problem}) with $u_\eps^{-\frac{1}{2}} \phi$, $\phi \in C^\infty(\overline{\Omega})$, and apply partial integration and (\ref{eq:weak_De_bounds}) as well as Hölder's and Young's inequalities to see that \begin{align*} &2\left|\int_\Omega (u^\frac{1}{2}_{\eps})_t \phi\right| = \left|\int_\Omega \ue^{-\frac{1}{2}} u_{\eps t} \phi \right|\\ &\leq \left|\int_\Omega \ue^{-\frac{1}{2}}\grad \ue \cdot \De \grad \phi \right| + \frac{1}{2}\left|\int_\Omega \ue^{-\frac{3}{2}}(\grad \ue \cdot \De \grad \ue) \phi \right| \\ &\hphantom{=\;}+ \left|\int_\Omega \ue^\frac{1}{2} ((\div \De) \cdot \grad \phi) \right| + \frac{1}{2}\left|\int_\Omega \ue^{-\frac{1}{2}} ((\div \De) \cdot \grad \ue) \phi \right| \\ &\hphantom{=\;}+ \chi\left|\int_\Omega\ue^\frac{1}{2} \grad \we \cdot \De \grad \phi \right| + \frac{\chi}{2} \left|\int_\Omega\ue^{-\frac{1}{2}} ( \grad \we \cdot \De \grad \ue) \phi \right| + \logc \left| \int_\Omega \ue^\frac{1}{2} ( 1- \ue^{\loge + \eps - 1})\phi \right|\\ &\leq \left(\int_\Omega \grad \phi \cdot \De \grad \phi \right)^\frac{1}{2}\left(\int_\Omega \frac{\grad \ue \cdot \De \grad \ue}{\ue}\right)^\frac{1}{2} + \frac{\|\phi\|_\L{\infty}}{2}
\int_\Omega \ue^{-\frac{3}{2}} (\grad \ue \cdot \De \grad \ue) \\ &\hphantom{=\;}+ \|\grad \phi\|_\L{\infty} \left( \int_\Omega \ue + \int_\Omega |\div \De|^2 \right) + \frac{\|\phi\|_\L{\infty}}{2} \int_\Omega \ue^{-\frac{1}{2}}|(\div \De) \cdot \grad \ue| \\ &\hphantom{=\;}+ \chi\left( \int_\Omega \ue\grad \phi \cdot \De \grad \phi \right)^\frac{1}{2}\left(\int_\Omega \grad \we \cdot \De \grad \we\right)^\frac{1}{2} \\ &\hphantom{=\;}+ \frac{\chi \|\phi\|_\L{\infty}}{2} \left(\int_\Omega \frac{\grad \ue \cdot \De \grad \ue}{\ue}\right)^\frac{1}{2}\left(\int_\Omega \grad \we \cdot \De \grad \we \right)^\frac{1}{2} \\ &\hphantom{=\;}+\mu \|\phi\|_\L{\infty}\int_\Omega \ue^\frac{1}{2} + \mu \|\phi\|_\L{\infty} \int_\Omega \ue^{r+\eps - \frac{1}{2}} \\ &\leq K\left( \|\phi\|_{L^\infty(\Omega)} + \|\grad \phi\|_{L^\infty(\Omega)}\right) \left( \int_\Omega \frac{\grad \ue \cdot \De \grad \ue}{\ue} + \int_\Omega \ue^{-\frac{3}{2}} (\grad \ue \cdot \De \grad \ue) + \int_\Omega \ue^{-\frac{1}{2}}|(\div \De) \cdot \grad \ue|\right.\\ & \qquad\qquad\qquad\qquad\qquad\qquad \left.\hphantom{=\;\;\;\;}+ \int_\Omega \grad \we \cdot \De \grad \we + \int_\Omega \ue^{r+\eps}\ln(\ue^{r+\eps})+ 1 \right) \end{align*} for all $t\in(0,T)$ and $\eps \in (0,1)$ with some appropriate constant $K > 0$ only dependent on $\Omega$, $\mu$, $r$, $\chi$ and $M$. Given the above inequality, the remainder of our desired result follows from \Cref{lemma:energy_baseline}, \Cref{lemma:basic_bounds_weak} and \Cref{lemma:additional_baseline} as well as the continuous embedding of $W^{n+1,2}(\Omega)$ into $W^{1,\infty}(\Omega)$ and density of $C^\infty(\overline{\Omega})$ in $W^{n+1,2}(\Omega)$. \end{proof} \noindent Having prepared all the necessary bounds, we will now construct the solution candidates by using various compact embedding arguments to gain them as the limit of our approximate solutions. 
\begin{lemma}\label{lemma:convergence_properties} There exist a null sequence $(\eps_j)_{j\in\N}\subseteq (0,1)$ and a.e.\ non-negative functions \begin{align*} u &\in L_\loc^\frac{2r}{r+1}([0,\infty);\WD{\frac{2r}{r+1}}) \cap L_\loc^r(\overline{\Omega}\times[0,\infty)), \\ w &\in L_\loc^2([0,\infty);\WD{2})\cap L^\infty(\Omega\times(0,\infty)), \end{align*} such that \begin{align} \ue &\rightarrow u &&\text{ in } L_\loc^r(\overline{\Omega}\times[0,\infty)) \text{ and a.e.\ in } \Omega\times[0,\infty), \label{eq:ue_convergence}\\ \ue^{r+\eps} &\rightarrow u^r &&\text{ in }L^1_\loc(\overline{\Omega} \times [0,\infty)) \text{ and a.e.\ in } \Omega\times[0,\infty),\label{eq:ue_higher_r_convergence}\\ \ue &\rightharpoonup u && \text{ in } L^\frac{2r}{r+1}_\loc([0,\infty); W^{1,\frac{2r}{r+1}}_\D(\Omega)),\label{eq:ue_weak_convergence} \\ \we &\rightarrow w &&\text{ in } L_\loc^p(\overline{\Omega}\times[0,\infty)) \text{ for all } p \in [1,\infty) \text{ and a.e.\ in } \Omega\times[0,\infty),\label{eq:we_convergence} \\ \we &\rightharpoonup w && \text{ in } L^2_\loc([0,\infty); W^{1,2}_\D(\Omega))\label{eq:we_weak_convergence} \end{align} as $\eps = \eps_j \searrow 0$. \end{lemma} \begin{proof} Given that both the families $(\ue^\frac{1}{2})_{\eps\in(0,1)}$ and $(\we)_{\eps\in(0,1)}$ are bounded in $L_\loc^2([0,\infty); \WD{2})$ according to \Cref{lemma:basic_bounds_weak} and the families $(({\ue^\frac{1}{2}})_t)_{\eps\in(0,1)}$ and $({\we}_t)_{\eps\in(0,1)}$ are bounded in $L_\loc^1([0,\infty); (W^{n+1,2}(\Omega))^*)$ according to \Cref{lemma:dual_bounds_weak}, we can apply the Aubin--Lions lemma (cf.\ \cite{TemamNavierStokesEquationsTheory1977}) to the above families using the triple of embedded spaces $\WD{2} \subseteq L^1(\Omega) \subseteq (W^{n+1,2}(\Omega))^*$. Note that this is only possible as the first embedding is in fact compact by our assumptions (cf.\ \Cref{definition:comp_regularity}). 
Therefore, there exist a null sequence $(\eps_j)_{j\in\N} \subseteq (0,1)$ and functions $\tilde{u},w: \overline{\Omega} \times [0,\infty) \rightarrow \R$ such that \[ \ue^\frac{1}{2} \rightarrow \tilde{u} \stext{ and } \we \rightarrow w \;\;\;\; \text{ in } L_\loc^2([0,\infty);L^1(\Omega)) \text{ and therefore in } L_\loc^1(\overline{\Omega}\times[0,\infty)) \] as $\eps = \eps_j \searrow 0$. This sequence is constructed by applying the Aubin--Lions lemma countably infinitely many times on time intervals of the form $[0,T]$, $T\in\N$, combined with a straightforward extension and diagonal sequence argument. We can further choose the above sequence in such a way as to ensure that $\ue^\frac{1}{2} \rightarrow \tilde{u}$ and $\we \rightarrow w$ pointwise almost everywhere as $\eps = \eps_j \searrow 0$ by potentially switching to another subsequence. Due to the family $(\we)_{\eps \in (0,1)}$ furthermore being uniformly bounded in $L^\infty(\Omega\times(0,\infty))$ (cf.\ (\ref{eq:weak_initial_data_bounds}) and \Cref{lemma:absolute_baseline}), the above convergence properties directly imply (\ref{eq:we_convergence}) as well as the fact that $w$ is non-negative almost everywhere and $w \in L^\infty(\Omega\times(0,\infty))$. \\[0.5em] We now set $u \defs \tilde{u}^2$ and observe that the above almost everywhere pointwise convergence for the already constructed sequences then ensures that \[ \ue \rightarrow u \stext{ and } \ue^{r+\eps} \rightarrow u^r \;\;\;\; \text{ a.e.\ pointwise} \] as $\eps = \eps_j \searrow 0$. This immediately gives us non-negativity of $u$ as well.
Further, as for every $T > 0$ there exists $K \equiv K(T) > 0$ such that \[ \int_0^T \int_\Omega \ue^r|\ln(\ue)| \leq K \stext{ and } \int_0^T \int_\Omega \ue^{r+\eps} |\ln(\ue^{r+\eps})| \leq K \] for all $\eps \in (0,1)$ according to \Cref{lemma:basic_bounds_weak}, we can use Vitali's theorem and the de la Vallée Poussin criterion for uniform integrability (cf.\ \cite[pp.\ 23--24]{DellacherieProbabilitiesPotential1978}) to gain the convergence properties (\ref{eq:ue_convergence}) and (\ref{eq:ue_higher_r_convergence}). \\[0.5em] The remaining weak convergence properties (\ref{eq:ue_weak_convergence}) and (\ref{eq:we_weak_convergence}) then follow immediately by another similar but fairly standard subsequence extraction argument as the respective families of functions are bounded in the relevant spaces according to \Cref{lemma:basic_bounds_weak}. \\[0.5em] As all not yet explicitly established regularity properties for $u$ and $w$ directly follow from the convergence properties and we have at this point proven all said properties, this completes the proof. \end{proof} \noindent For the remainder of this section, we will now fix the functions $u$, $w$ as well as the sequence $(\eps_j)_{j\in\N}$ constructed in the preceding lemma. While the convergence properties derived in \Cref{lemma:convergence_properties} are in fact already sufficient to allow us to translate the weak solution property from our approximate solutions to our now established solution candidates, we will, as a final step before the proof of \Cref{theorem:weak_solutions}, derive some more specifically tailored convergence properties to handle some of the more complex terms in the weak solution definition. \\[0.5em] Note that the following lemma is the critical point in this section where the assumption $r \geq 2$ becomes important.
The only other points in this section where this assumption was used are the argument ensuring the existence of the approximate solutions, where it could likely be dropped by introducing yet more regularizations to (\ref{approx_problem}), and our assumption that $\beta$ is an element of $[\frac{2}{3}, 1)$ without loss of generality, which was done mostly for convenience. \begin{lemma} \label{lemma:additional_convergence_properties} The convergence properties \begin{equation}\label{eq:weakish_convergence_1} \int_0^\infty\int_\Omega \grad \ue \cdot \De \grad \phi \rightarrow \int_0^\infty\int_\Omega \grad u \cdot \D \grad \phi \;\;\;\; \text{ as } \eps = \eps_j \searrow 0 \end{equation} and \begin{equation}\label{eq:weakish_convergence_2} \int_0^\infty\int_\Omega \ue \grad \we \cdot \De \grad \phi \rightarrow \int_0^\infty\int_\Omega u \grad w \cdot \D \grad \phi \;\;\;\; \text{ as } \eps = \eps_j \searrow 0 \end{equation} hold for all $\phi \in C_c^\infty(\overline{\Omega}\times[0,\infty))$. \end{lemma} \begin{proof} Fix $\phi \in C_c^\infty(\overline{\Omega}\times[0,\infty))$ and $T > 0$ such that $\supp \phi \subseteq \overline{\Omega}\times[0,T)$. We can then fix a constant $K_1 \geq 1$ such that \[ \int_0^T \int_\Omega \left(\grad \ue \cdot \De \grad \ue\right)^\frac{r}{r+1} \leq K_1 \stext{ and } \int_0^T \int_\Omega \grad \we \cdot \De \grad \we \leq K_1 \] for all $\eps \in (0,1)$ according to \Cref{lemma:basic_bounds_weak}. 
This implies that \begin{align*} \|\grad \ue\|_{L^1(\Omega\times(0,T))} &\leq (|\Omega| + 1)\left(\int_0^T\int_\Omega \left(\grad \ue \cdot \grad \ue\right)^\frac{r}{r+1} \right)^\frac{r+1}{2r} \\ &\leq (|\Omega| + 1) \left(\frac{1}{\eps^\frac{r}{r+1}}\int_0^T\int_\Omega \left(\grad\ue \cdot \De \grad \ue \right)^\frac{r}{r+1} \right)^\frac{r+1}{2r} \leq \frac{K_2}{\eps^\frac{1}{2}} \numberthis\label{eq:grad_ue_eps_estimate} \end{align*} and \[ \|\grad \we\|_{L^2(\Omega\times(0,T))} =\left(\int_0^T\int_\Omega \grad \we \cdot \grad \we \right)^\frac{1}{2} \leq (|\Omega| + 1) \left(\frac{1}{\eps}\int_0^T\int_\Omega \grad \we \cdot \De \grad \we \right)^\frac{1}{2} \leq \frac{K_2}{\eps^\frac{1}{2}}\numberthis\label{eq:grad_we_eps_estimate} \] for all $\eps \in (0,1)$ with $K_2 \defs K_1(|\Omega| + 1)$ due to the Hölder inequality and the estimate (\ref{eq:De_estimate}). We then observe that \begin{align*} &\left| \int_0^T\int_\Omega \grad \ue \cdot \De \grad \phi - \int_0^T\int_\Omega \grad u \cdot \D \grad \phi \right| \\ &\leq \left| \int_0^T\int_\Omega \grad \ue \cdot \De \grad \phi - \int_0^T\int_\Omega \grad \ue \cdot \D \grad \phi \right| + \left| \int_0^T\int_\Omega \grad \ue \cdot \D \grad \phi - \int_0^T\int_\Omega \grad u \cdot \D \grad \phi \right| \\ &\leq \|\grad \ue\|_{L^1(\Omega\times(0,T))}\|\De - \D\|_\L{\infty}\|\grad \phi\|_{L^\infty(\Omega\times(0,T))} + \left|\int_0^T\int_\Omega \grad \ue \cdot \D \grad \phi - \int_0^T\int_\Omega \grad u \cdot \D \grad \phi \right|\\ &\leq 3 K_2 \eps^\frac{1}{2} \|\grad \phi\|_{L^\infty(\Omega\times(0,T))} + \left| \int_0^T\int_\Omega \grad \ue \cdot \D \grad \phi - \int_0^T\int_\Omega \grad u \cdot \D \grad \phi \right| \end{align*} for all $\eps \in (0,1)$ because of (\ref{eq:De_estimate}) and (\ref{eq:grad_ue_eps_estimate}). This inequality immediately implies (\ref{eq:weakish_convergence_1}) due to the weak convergence property (\ref{eq:ue_weak_convergence}) presented in \Cref{lemma:convergence_properties}. 
\\[0.5em] We now similarly estimate that \begin{align*} &\left| \int_0^T\int_\Omega \ue \grad \we \cdot \De \grad \phi - \int_0^T\int_\Omega u \grad w \cdot \D \grad \phi \right| \\ &\leq \left| \int_0^T\int_\Omega \ue \grad \we \cdot \De \grad \phi - \int_0^T\int_\Omega u \grad \we \cdot \De \grad \phi \right| + \left| \int_0^T\int_\Omega u \grad \we \cdot \De \grad \phi - \int_0^T\int_\Omega u \grad \we \cdot \D \grad \phi \right|\\ &\hphantom{=\;}+ \left| \int_0^T\int_\Omega u \grad \we \cdot \D \grad \phi - \int_0^T\int_\Omega u \grad w \cdot \D \grad \phi \right| \\ &\leq \left(\int_0^T \int_\Omega \grad \we \cdot \De \grad \we\right)^\frac{1}{2}\left(\int_0^T \int_\Omega (u-\ue)^2 (\grad \phi \cdot \De \grad \phi)\right)^\frac{1}{2} \\ &\hphantom{=\;}+ \|u\|_{L^2(\Omega\times(0,T))} \|\grad \we\|_{L^2(\Omega\times(0,T))} \|\De - \D\|_\L{\infty} \|\grad \phi\|_{L^\infty(\Omega\times(0,T))} \\ &\hphantom{=\;}+ \left| \int_0^T\int_\Omega u \grad \we \cdot \D \grad \phi - \int_0^T\int_\Omega u \grad w \cdot \D \grad \phi \right| \\ &\leq K_1^\frac{1}{2} (\|\D\|_\L{\infty} + 3)^\frac{1}{2} \|\grad \phi\|_\L{\infty} \|u - \ue\|_{L^2(\Omega\times(0,T))} + 3 K_2 \eps^\frac{1}{2} \|u\|_{L^2(\Omega\times(0,T))} \|\grad \phi\|_{L^\infty(\Omega\times(0,T))} \\ &\hphantom{=\;}+ \left| \int_0^T\int_\Omega u \grad \we \cdot \D \grad \phi - \int_0^T\int_\Omega u \grad w \cdot \D \grad \phi \right| \end{align*} for all $\eps \in (0,1)$ because of (\ref{eq:De_estimate}) and (\ref{eq:grad_we_eps_estimate}). Due to the convergence properties (\ref{eq:ue_convergence}) and (\ref{eq:we_weak_convergence}) as well as the fact that $r \geq 2$ and therefore $u \in L_\loc^2(\overline{\Omega}\times[0,\infty))$, the above estimate implies (\ref{eq:weakish_convergence_2}) and thus completes the proof. 
\end{proof} \noindent As all convergence properties necessary to argue that $u$ and $w$ are in fact our desired weak solution have been established, we can now present the by now fairly short proof of our second main existence result, namely \Cref{theorem:weak_solutions}. \begin{proof}[Proof of \Cref{theorem:weak_solutions}] We first note that $u$, $w$ are already sufficiently regular to ensure that (\ref{eq:weak_solution_regularity_1}) and (\ref{eq:weak_solution_regularity_2}) hold due to \Cref{lemma:convergence_properties}. \\[0.5em] It is further straightforward to verify that $\ue, \we$ are weak solutions in the sense of \Cref{definition:weak_solution} for all $\eps \in (0,1)$ with only slightly different parameters. As such, we now only need to confirm that all terms in the weak solution definition converge to their counterparts without $\eps$. For all terms that are structurally identical in the approximate case and in the weak solution definition we aim to establish for $u$ and $w$, this is covered by \Cref{lemma:convergence_properties} as well as the convergence properties of the initial data laid out at the beginning of this section. The terms that differ because $\D$ was replaced by $\De$ are covered by either \Cref{lemma:additional_convergence_properties} or (\ref{eq:De_convergence}) combined with (\ref{eq:ue_convergence}) from \Cref{lemma:convergence_properties}. Finally, the logistic terms $\int_0^\infty\int_\Omega \ue(1-\ue^{r-1+\eps})\phi$ occurring in the weak solution definition for our approximate solutions converge to their proper counterpart $\int_0^\infty\int_\Omega u (1-u^{r-1})\phi$ due to (\ref{eq:ue_convergence}) and (\ref{eq:ue_higher_r_convergence}) from \Cref{lemma:convergence_properties}. We have thus shown that all terms occurring in the weak solution definition of the approximate solutions converge to the correct terms for our solution candidates.
Therefore $(u,w)$ is a weak solution of the type described in \Cref{definition:weak_solution}. \end{proof} \section*{Acknowledgment} The author acknowledges the support of the \emph{Deutsche Forschungsgemeinschaft} in the context of the project \emph{Emergence of structures and advantages in cross-diffusion systems}, project number 411007140.
\section{Introduction} Image style transfer has recently received significant attention in the computer vision and machine learning communities~\cite{jing2019neural}. A central problem in this domain is the task of transferring the style of an arbitrary image onto a \emph{photorealistic} target. The seminal work of Gatys et al.~\cite{gatys2016image} formulates this general artistic style transfer problem as an optimization that minimizes both style and content losses, but results often contain spatial distortion artifacts. Luan et al.~\cite{luan2017deep} seek to reduce these artifacts by adding a \emph{photorealism constraint}, which encourages the transformation between input and output to be \emph{locally affine}. However, because the method formulates the problem as a large optimization whereby the loss over a deep network must be minimized for every new image pair, performance is limited. The recent work of Yoo et al.~\cite{yoo2019photorealistic} proposes a wavelet-corrected transfer-based method that provides stable stylization but is not fast enough to run at practical resolutions. Another line of recent work seeks to pretrain a feed-forward deep model~\cite{dumoulin2017learned,huang2017arbitrary,johnson2016perceptual,li2019learning,li2017diversified,li2018closed,ulyanov2016texture,ulyanov2017improved} that, once trained, can produce a stylized result with a single forward pass at test time. While these ``universal''~\cite{jing2019neural} techniques are significantly faster than those based on optimization, they may not generalize well to unseen images, may produce non-photorealistic results, and are still too slow to run in real time on a mobile device. In this work, we introduce a fast end-to-end method for photorealistic style transfer.
Our model is a single feed-forward deep neural network that, once trained on a suitable dataset, runs in real-time on a mobile phone at full camera resolution (i.e., 12 megapixels or ``4K'')---significantly faster than the state of the art. Our key observation is that we can guarantee photorealistic results by strictly enforcing Luan et al.'s photorealism constraint~\cite{luan2017deep}---locally, regions of similar color in the input must map to a similarly colored region in the output while respecting edges. Therefore, we design a deep learning algorithm in \emph{bilateral space}, where these local affine transforms can be compactly represented. We contribute: \begin{enumerate} \item A photorealistic style transfer network that learns local affine transforms. Our model is robust and degrades gracefully when confronted with unseen or adversarial inputs. \item An inference implementation that runs in real-time at 4K on a mobile phone. \item A bilateral-space Laplacian regularizer that eliminates spatial grid artifacts. \end{enumerate} \begin{figure}[ht] \hspace{-0.3cm} \begin{tabular}{c@{\hspace{0.002\linewidth}}c@{\hspace{0.002\linewidth}}c@{\hspace{0.002\linewidth}}c@{\hspace{0.01\linewidth}}c@{\hspace{0.002\linewidth}}c@{\hspace{0.002\linewidth}}c@{\hspace{0.002\linewidth}}c@{\hspace{0.002\linewidth}}c@{\hspace{0.002\linewidth}}c} \includegraphics[width = .08\linewidth,height=.12\linewidth]{figures/inspiration/5_inputs.jpg} & \includegraphics[width = .14\linewidth,height=.12\linewidth]{figures/inspiration/5_adain.jpg} & \includegraphics[width = .14\linewidth,height=.12\linewidth]{figures/inspiration/5_hdrnet.jpg} & \includegraphics[width = .14\linewidth,height=.12\linewidth]{figures/inspiration/5_ours.jpg} & \includegraphics[width = .08\linewidth,height=.12\linewidth]{figures/inspiration/17_inputs.jpg} & \includegraphics[width = .14\linewidth,height=.12\linewidth]{figures/inspiration/17_adain.jpg} & \includegraphics[width = .14\linewidth,height=.12\linewidth]{figures/inspiration/17_hdrnet.jpg} & \includegraphics[width = .14\linewidth,height=.12\linewidth]{figures/inspiration/17_ours.jpg} &\\ \includegraphics[width = .08\linewidth,height=.12\linewidth]{figures/inspiration/11_inputs.jpg} & \includegraphics[width = .14\linewidth,height=.12\linewidth]{figures/inspiration/11_adain.jpg} & \includegraphics[width = .14\linewidth,height=.12\linewidth]{figures/inspiration/11_hdrnet.jpg} & \includegraphics[width = .14\linewidth,height=.12\linewidth]{figures/inspiration/11_ours.jpg} & \includegraphics[width = .08\linewidth,height=.12\linewidth]{figures/inspiration/13_inputs.jpg} & \includegraphics[width = .14\linewidth,height=.12\linewidth]{figures/inspiration/13_adain.jpg} & \includegraphics[width = .14\linewidth,height=.12\linewidth]{figures/inspiration/13_hdrnet.jpg} & \includegraphics[width =
.14\linewidth,height=.12\linewidth]{figures/inspiration/13_ours.jpg} &\\ {Inputs}& {AdaIN}& {HDRnet}& {Ours}& {Inputs}& {AdaIN}& {HDRnet}& {Ours} \\ \end{tabular} \vspace{-0.3cm} \caption{\textbf{Inspiration.} Artistic style transfer methods such as AdaIN generalize well to diverse content/style inputs but exhibit distortions on photographic content. HDRnet, designed to reproduce arbitrary imaging operators, learns the transform representation we want but fails to capture universal style transfer. Our work combines ideas from both approaches.} \label{fig:inspiration} \vspace{-1.em} \end{figure} \subsection{Related Work} Early work in image style transfer operated by transferring global image statistics~\cite{reinhard2001color} or histograms of filter responses~\cite{pitie2005n}. As they rely on low-level statistics, they fail to capture semantics. However, we highlight that these techniques do produce photorealistic results, albeit not always faithful to the style or well exposed. Recently, Gatys et al.~\cite{gatys2016image} showed that style can be effectively captured by the statistics of layer activations within deep neural networks trained for discriminative image classification. However, due to its generality, the technique and its successors often contain non-photorealistic painterly spatial distortions. To remove such distortions, He et al. ~\cite{he2019progressive} propose to achieve a more accurate color transfer by leveraging semantically-meaningful dense correspondence between images. One line of work ameliorates this problem by imposing additional constraints on the loss function. Luan et al.~\cite{luan2017deep} observe that constraining the transformation to be locally affine in color space pushes the result towards photorealism. 
PhotoWCT~\cite{li2018closed} imposes a similar constraint as a postprocessing step, while LST~\cite{li2019learning} appends a spatial propagation network~\cite{liu2017learning} after the main style transfer network to learn to preserve the desired affinity. Similarly, Puy et al.~\cite{puy2019flexible} propose a flexible network for artistic style transfer and apply postprocessing after each learned update for photorealistic content. Compared to these ad hoc approaches, where the photorealism constraint is a soft penalty, our model directly predicts local affine transforms, guaranteeing that the constraint is satisfied. Another line of recent work shows that matching the statistics of auto-encoders is an effective way to parameterize style transfer~\cite{huang2017arbitrary,li2017universal,li2018closed,li2019learning,yoo2019photorealistic}. Moreover, they show that distortions can be reduced by preserving high frequencies using unpooling~\cite{li2018closed} or wavelet transform residuals~\cite{yoo2019photorealistic}. Our work unifies these two lines of research. Our network architecture builds upon HDRnet~\cite{gharbi2017deep}, which was first employed in the context of learning image enhancement and tone manipulation. Given a large dataset of input/output pairs, it learns the local affine transforms that best reproduce the operator. The network is small, and the learned transforms are intentionally constrained to be incapable of introducing artifacts such as noise or false edges. These are exactly the properties we want and indeed, Gharbi et al. demonstrated style transfer in their original paper. However, when we applied HDRnet to our more diverse dataset, we found a number of artifacts (Figure~\ref{fig:inspiration}). This is because HDRnet does not explicitly model style transfer and instead learns by memorizing what it sees during training and projecting the function onto local affine transforms.
Therefore, it will require a lot of training data and generalize poorly. Since HDRnet learns local affine transforms from low-level image features, our strategy is to start with statistical feature matching using Adaptive Instance Normalization~\cite{huang2017arbitrary} to build a joint distribution. By explicitly modeling the style transformation as a distribution matching process, our network is capable of generalizing to unseen or adversarial inputs (Figure~\ref{fig:inspiration}). \section{Method} Our method is based on a single feed-forward deep neural network. It takes as input two images, a \emph{content photo} $I_c$, and an arbitrary \emph{style image} $I_s$, producing a photorealistic output $O$ with the former's content but the latter's style. Our network is ``universal''---after training on a diverse dataset of content/style pairs, it can generalize to novel input combinations. Its architecture is centered around the core idea of learning local affine transformations, which inherently enforce the photorealism constraint. \subsection{Background} For completeness, we first summarize the key ideas of recent work. \paragraph{Content and Style.} The Neural Style Transfer~\cite{gatys2016image} algorithm is based on an optimization that minimizes a loss balancing the output image's fidelity to the input images' content and style: \begin{align} \centering \mathcal{L}_{g} &= \alpha \mathcal{L}_{c} + \beta \mathcal{L}_{s}~~~~~~~~~~~~~\mathrm{\ \ with\ \ } \label{eq:loss_gatys} \\ \mathcal{L}_{c} &= \sum_{i=1}^{N_c} \left\| F_{i}[O]-F_{i}[I_{c}] \right\|_{2}^{2} \mathrm{\ \ and\ \ } \mathcal{L}_{s} = \sum_{i=1}^{N_s} \left\| G_{i}[O]-G_{i}[I_{s}] \right\|_{F}^{2}, \label{eq:loss_content_and_style} \end{align} where $N_c$ and $N_s$ denote the number of intermediate layers selected from a pretrained VGG-19 network~\cite{simonyan2014very} to represent image content and style, respectively. 
Scene content is captured by the feature maps $F_i$ of intermediate layers of the VGG network, and style is captured by their Gram matrices $G_i[\cdot] = F_i[\cdot]F_i[\cdot]^T$. $||\cdot||_F$ denotes the Frobenius norm. \paragraph{Statistical Feature Matching.} Instead of directly minimizing the loss in Equation~\ref{eq:loss_gatys}, followup work shows that it is more effective to match the statistics of feature maps at the bottleneck of an auto-encoder. Variants of the whitening and coloring transform~\cite{li2017universal,li2018closed,yoo2019photorealistic} normalize the singular values of each channel, while Adaptive Instance Normalization (AdaIN)~\cite{huang2017arbitrary} proposes a simple scheme using the mean $\mu(\cdot)$ and the standard deviation $\sigma(\cdot)$ of each channel: \begin{equation} \label{eq:adain} \mathrm{AdaIN}(x, y) = \sigma(y)\left(\frac{x-\mu(x)}{\sigma(x)}\right) + \mu(y), \end{equation} where $x$ and $y$ are content and style feature channels, respectively. Due to its simplicity and reduced cost, we also adopt AdaIN layers in our network architecture as well as its induced style loss~\cite{huang2017arbitrary,li2017demystifying}: \begin{equation} \label{eq:loss_style_adain} \begin{split} \mathcal{L}_{sa} = \sum_{i=1}^{N_S} \left\| \mu(F_i[O]) - \mu(F_i[I_s]) \right\|_{2}^{2} + \sum_{i=1}^{N_S} \left\| \sigma(F_i[O]) - \sigma(F_i[I_s]) \right\|_{2}^{2}. \end{split} \end{equation} \begin{figure}[t] \vspace{-0.2cm} \hspace{-0.5cm} \includegraphics[width=1.05\linewidth]{figures/model_architecture.pdf} \vspace{-0.2cm} \caption{\textbf{Model architecture.} Our model starts with a low-resolution coefficient prediction stream that uses style-based splatting blocks \textbf{\textit{S}} to build a joint distribution between the low-level features of the input content/style pair. This distribution is fed to bilateral learning blocks \textbf{\textit{L}} and \textbf{\textit{G}} to predict an affine bilateral grid $\Gamma$. 
Rendering, which runs at full-resolution, performs the minimal per-pixel work of sampling from $\Gamma$ a $3 \times 4$ matrix and then multiplying. } \label{fig:architecture} \vspace{-0.4cm} \end{figure} \paragraph{Bilateral Space.} Bilateral space was first introduced by Paris and Durand~\cite{paris2006fastapprox} in the context of fast edge-aware image filtering. A 2D grayscale image $I(x,y)$ can be ``lifted'' into bilateral space as a sparse collection of 3D points $\{x_j, y_j, I_j\}$ in the augmented 3D space. In this space, linear operations are inherently edge-aware because Euclidean distances preserve edges. They prove that bilateral filtering is equivalent to \emph{splatting} the input onto a regular 3D \emph{bilateral grid}, blurring, and \emph{slicing} out the result using trilinear interpolation at the input coordinates $\{x_j, y_j, I_j\}$. Since blurring and slicing are low-frequency operations, the grid can be low-resolution, dramatically accelerating the filter. Bilateral Guided Upsampling (BGU)~\cite{chen2016bilateral} extends the bilateral grid to represent transformations between images. By storing at each cell an affine transformation, an \emph{affine bilateral grid} can encode any image-to-image transformation given sufficient resolution. The pipeline is similar: \emph{splat} both input and output images onto a bilateral grid, blur, and perform a per-pixel least squares fit. To apply the transform, \emph{slice} out a per-pixel affine matrix and multiply by the input color. BGU shows that this representation can accelerate a variety of imaging operators and that the approximation degrades gracefully with resolution when suitably regularized. Affine bilateral grids are constrained to produce an output that is a smoothly varying, edge-aware, and locally affine transformation of the input. Therefore, they fundamentally cannot produce false edges or amplify noise, and they inherently obey the photorealism constraint.
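To make the affine-grid representation concrete, the following is a minimal NumPy sketch (not the paper's implementation) of applying an affine bilateral grid to an image. For brevity it uses nearest-cell sampling and plain mean-of-channels luma as the guidance channel, whereas BGU and HDRnet use trilinear interpolation and a learned per-pixel lookup.

```python
import numpy as np

def apply_affine_bilateral_grid(grid, image):
    """Apply a low-resolution affine bilateral grid to a full-resolution image.

    grid:  (gh, gw, gd, 3, 4) array of per-cell affine color transforms.
    image: (h, w, 3) float array with values in [0, 1].
    Returns the transformed (h, w, 3) image.
    """
    h, w, _ = image.shape
    gh, gw, gd = grid.shape[:3]
    # Guidance channel: plain luma stands in for the learned lookup g(r, g, b).
    z = image.mean(axis=2)
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            # Nearest-cell sampling for brevity; the real pipeline
            # uses trilinear interpolation at these grid coordinates.
            gy = min(int(y / h * gh), gh - 1)
            gx = min(int(x / w * gw), gw - 1)
            gz = min(int(z[y, x] * gd), gd - 1)
            A = grid[gy, gx, gz]               # 3x4 affine matrix
            rgb1 = np.append(image[y, x], 1.0)  # homogeneous color
            out[y, x] = A @ rgb1
    return out
```

A grid whose every cell holds the identity transform $[I\,|\,0]$ reproduces the input exactly; smoothing the grid then yields outputs that are locally affine and edge-aware by construction.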
Gharbi et al.~\cite{gharbi2017deep} showed that slicing and applying an affine bilateral grid are sub-differentiable and therefore can be incorporated as a layer in a deep neural network and learned using gradient descent. They demonstrated that their HDRnet architecture can effectively learn to reproduce many photographic tone mapping and detail manipulation tasks, regardless of whether they are algorithmic or artist-driven. \subsection{Network Architecture} \label{subsec:network_architecture} Our end-to-end differentiable network consists of two streams. The \emph{coefficient prediction} stream takes as input reduced resolution content $\widetilde{I_c}$ and style $\widetilde{I_s}$ images, learns the joint distribution between their low-level features, and predicts an affine bilateral grid~$\Gamma$. The \emph{rendering} stream, unmodified from HDRnet, operates at full-resolution. At each pixel $(x, y, r, g, b)$, it uses a learned lookup table to compute a ``luma'' value $z = g(r,g,b)$, slices out $A = \Gamma(x/w,y/h,z/d)$ (using trilinear interpolation), and outputs $O = A * (r, g, b, 1)^T$. By decoupling coefficient prediction resolution from that of rendering, our architecture offers a tradeoff between stylization quality and performance. Figure~\ref{fig:architecture} summarizes the entire network and we describe each block below. 
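As background for the splatting blocks described next, the AdaIN operation defined earlier is simple to state in code. The following is a minimal NumPy sketch assuming channel-first feature maps; the small $\epsilon$ for numerical stability is our addition, not from the paper.

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive Instance Normalization: shift and scale each content
    channel to match the corresponding style channel's mean and std.

    content_feat, style_feat: (channels, height, width) feature maps.
    """
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True)
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True)
    # Normalize content statistics, then re-inject the style statistics.
    return s_std * (content_feat - c_mean) / (c_std + eps) + s_mean
```

By construction, each output channel has (up to $\epsilon$) the mean and standard deviation of the style channel, which is exactly the statistic matching that the induced style loss measures.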
\begin{table}[t] \centering \begin{tabular}{c | cccccccc | cc | cccccc | cc } & $S_1^1$ & $S_1^2$ & $S_2^1$& $S_2^2$ & $S_3^1$ & $S_3^2$ & $C^7$ & $C^8$ & $L^1$ & $L^2$ & $G^1$ & $G^2$ & $G^3$ & $G^4$ & $G^5$& $G^6$& $F$ & $\Gamma$ \\ \hline type & $c$ & $c$ & $c$ & $c$ & $c$ & $c$ & $c$ & $c$ & $c$ & $c$ & $c$ & $c$ & $f_{c}$ & $f_{c}$ & $f_{c}$ & $f_{c}$ & $c$ & $c$ \\ stride & 2 & 1 & 2 & 1 & 2 & 1 & 2 & 1 & 1 & 1 & 2 & 2 & - & - & - & - & 1 & 1 \\ size & 128 & 128 & 64 & 64 & 32 & 32 & 16 & 16 & 16 & 16 & 8 & 4 & - & - & - & - & 16 & 16 \\ channels & 8 & 8 & 16 & 16 & 32 & 32 & 64 & 64 & 64 & 64 & 64 & 64 & 256 & 128 & 64 & 64 & 64 & 96 \\ \hline \end{tabular} \caption{Details of our network architecture. $S_j^i$ denotes the $i$-$th$ layer in the $j$-$th$ splatting block. We apply AdaIN after each $S_j^1$. $L^i$, $G^i$, $F$, and $\Gamma$ refer to local features, global features, fusion, and learned bilateral grid respectively. Local and global features are concatenated before fusion $F$. $c$ and $f_c$ denote convolutional and fully-connected layers, respectively. Convolutions are all $3 \times 3$ except $F$, where it is $1 \times 1$. } \label{tbl:layers_specifics} \vspace{-0.2cm} \end{table} \subsubsection{Style-based Splatting.} We aim to first learn a multi-scale model of the joint distribution between content and style features, and from this distribution, predict an affine bilateral grid. Rather than using strided convolutional layers to directly learn from pixel data, we follow recent work~\cite{johnson2016perceptual,li2018closed,huang2017arbitrary} and use a pretrained VGG-19 network to extract low-level features from both images at four scales ($conv1\_1$, $conv2\_1$, $conv3\_1$, and $conv4\_1$). We process these multi-resolution feature maps with a sequence of \emph{splatting blocks} inspired by the StyleGAN architecture~\cite{karras2019style} (Figure~\ref{fig:architecture}). 
Starting from the finest level, each splatting block applies a stride-2 \emph{weight-sharing} convolutional layer to both content and style features, halving spatial resolution while doubling the number of channels (see Table~\ref{tbl:layers_specifics}). The shared-weight constraint crucially allows the following AdaIN layer to learn the joint content/style distribution without correspondence supervision. Once the content feature map is rescaled, we append it to the similarly AdaIN-aligned feature maps from the pretrained VGG-19 layer of the same resolution. Since the content feature map now contains more channels, we use a stride-1 convolutional layer to select the relevant channels between learned-and-normalized vs. pretrained-and-normalized features. We use three splatting blocks in our architecture, corresponding to the finest-resolution layers of the selected VGG features. While using additional splatting blocks is possible, they are too coarse and replacing them with standard stride-2 convolutions makes little difference in our experiments. Since this component of the network effectively learns the relevant bilateral-space content features based on its corresponding style, it can be thought of as \emph{learned style-based splatting}. \subsubsection{Joint Bilateral Learning.} With aligned-to-style content features in bilateral space, we seek to learn an affine bilateral grid that encodes a transformation that locally captures style and is aware of scene semantics. Like HDRnet, we split the network into two asymmetric paths: a fully-convolutional \emph{local path} that learns local color transforms and thereby sets the grid resolution, and a \emph{global path}, consisting of both convolutional and fully-connected layers, that learns a summary of the scene and helps spatially regularize the transforms. The local path consists of two stride $1$ convolutional layers, keeping the spatial resolution and number of features constant. 
This provides enough depth to learn local affine transforms without letting its receptive field grow too large (and thereby discarding any notion of spatial position). As we aim to perform universal style transfer without any explicit notion of semantics (e.g., per-pixel masks provided by an external pretrained network), we use a small network to learn a global notion of scene category. Our global path consists of two stride $2$ convolutional layers to further reduce resolution, followed by four fully-connected layers to produce a $64$-element vector ``summary''. We append the summary at each $x, y$ spatial location output from the local path and use a $1 \times 1$ convolutional layer to reduce the final output to $96$ channels. These $96$ channels can be reshaped into $8$ ``luma bins'' that separate edges, each storing a $3 \times 4$ affine transform. We use the ReLU activation after all but the final $1 \times 1$ fusion layer and zero-padding for all convolutional layers. \subsection{Losses} \label{subsec:losses} Since our architecture is fully differentiable, we can simply define our loss function on the generated output. We augment the content and style fidelity losses of Huang et al.~\cite{huang2017arbitrary} with a novel \textbf{bilateral-space Laplacian regularizer}, similar to the one in~\cite{gupta2016monotonic}: \begin{equation} \label{eq:loss_total} \mathcal{L} = \lambda_c \mathcal{L}_c + \lambda_{sa} \mathcal{L}_{sa} + \lambda_r \mathcal{L}_r, \end{equation} where $\mathcal{L}_c$ and $\mathcal{L}_{sa}$ are the content and style losses defined in Equations~\ref{eq:loss_content_and_style}~and~\ref{eq:loss_style_adain}, and \begin{equation} \label{eq:laplacian_reg} \mathcal{L}_r(\Gamma) = \sum_s \sum_{t \in N(s)} ||\Gamma[s] - \Gamma[t]||_F^2, \end{equation} where $\Gamma[s]$ is one cell of the estimated bilateral grid, and $\Gamma[t]$ is one of its neighbors.
The Laplacian regularizer penalizes differences between adjacent cells of the bilateral grid (indexed by $s$, with finite differences computed over the six-connected neighbors $N(s)$) and encourages the learned affine transforms to be smooth in both space and intensity~\cite{chen2016bilateral,gupta2016monotonic}. As we show in our ablation study (Sec.~\ref{subsec:ablation}), the Laplacian regularizer is necessary to prevent visible grid artifacts. We set $\lambda_{c}=0.5$, $\lambda_{sa}=1$, and $\lambda_{r}=0.15$ in all experiments. \subsection{Training} \label{subsec:training} We trained our model on high-quality photos using TensorFlow~\cite{abadi2016tensorflow}, without any explicit notion of semantics. We use the Adam optimizer~\cite{kingma2015adam} with hyperparameters $\alpha=10^{-4}, \beta_1=0.9, \beta_2=0.999, \epsilon=10^{-8}$, and a batch size of $12$ content/style pairs. For each epoch, we randomly form 50,000 content/style pairs from the data. The training resolution is $256 \times 256$ and we train for a fixed $25$ epochs, taking two days on a single NVIDIA Tesla V100 GPU with 16 GB of RAM. Once the model is trained, inference can be performed at arbitrary resolution (since the bilateral grid can be scaled). To significantly reduce training time, we train the network at a fairly low resolution. As shown in Figure~\ref{fig:hd_4k}, the trained network still performs well even with 12 megapixel inputs. We attribute this to the fact that our losses are derived from pretrained VGG features, which are relatively invariant with respect to resolution. \section{Results} \label{sec:results} For evaluation, we collected a test set of 400 high-quality images from websites. We compared our algorithm to the state of the art in photorealistic style transfer, and conducted a user study. Furthermore, we performed a set of ablation studies to better understand the contribution of various components.
Detailed comparisons with high-resolution images are included in the supplement. \subsection{Ablation Studies} \label{subsec:ablation} \begin{figure}[ht] \centering \begin{tabular}{c@{\hspace{0.002\linewidth}}c@{\hspace{0.005\linewidth}}c@{\hspace{0.002\linewidth}}c@{\hspace{0.002\linewidth}}c@{\hspace{0.002\linewidth}}} \includegraphics[width = .12\linewidth,height=.14\linewidth]{figures/artistic_w_grids/1_inputs.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{figures/artistic_w_grids/1_adain_grid.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{figures/artistic_w_grids/1_wct_grid.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{figures/artistic_w_grids/1_adain_bgu.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{figures/artistic_w_grids/1_ours.jpg} \\ \includegraphics[width = .12\linewidth,height=.14\linewidth]{figures/artistic_w_grids/2_inputs.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{figures/artistic_w_grids/2_adain_grid.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{figures/artistic_w_grids/2_wct_grid.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{figures/artistic_w_grids/2_adain_bgu.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{figures/artistic_w_grids/2_ours.jpg} \\ { (a) Inputs } & { (b) AdaIN $\rightarrow$ grid } & { (c) WCT $\rightarrow$ grid } & { (d) AdaIN+BGU } & { (e) Ours } \\ \includegraphics[width = .12\linewidth,height=.14\linewidth]{splat_viz/1_inputs.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{splat_viz/1_splat1.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{splat_viz/1_splat2.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{splat_viz/1_splat3.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{splat_viz/1_final.jpg}\\ \includegraphics[width = .12\linewidth,height=.14\linewidth]{splat_viz/3_inputs.jpg} & 
\includegraphics[width = .2\linewidth,height=.14\linewidth]{splat_viz/3_splat1.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{splat_viz/3_splat2.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{splat_viz/3_splat3.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{splat_viz/3_final.jpg}\\ { (f) Inputs } & { (g) Block1 } & { (h) Block2 } & { (i) Block3 } & { (j) Full results } \\ \end{tabular} \caption{\textbf{Ablation studies on splatting blocks.} (a)--(e): We demonstrate the importance of our splatting architecture by replacing it with baseline networks. (f)--(j): Visualization of the contribution of each splatting block by disabling statistical feature matching on the others.} \label{fig:splat_viz} \vspace{-0.1cm} \end{figure} \subsubsection{Style-based Splatting Design.} We conduct multiple ablations to show the importance of our style-based splatting blocks \textbf{\textit{S}}. First, we consider replacing \textbf{\textit{S}} with two baseline networks: AdaIN~\cite{huang2017arbitrary} or WCT~\cite{li2017universal}. Starting with the same features extracted from VGG-19, we perform feature matching using AdaIN or WCT. The rest of the network is unchanged: that is, we attempt to learn local and global features directly from the baseline encoders and predict affine bilateral grids. The results in Figure~\ref{fig:splat_viz} (b) and (c) show that while content is preserved, there are both an overall color cast and inconsistent blotches. The low-resolution features simply lack the information density to learn even global color correction. Second, to illustrate the contribution of each splatting block, we visualize our network's output when all but one block is disabled (including the top path inputs).
As shown in Figure~\ref{fig:splat_viz}(f--j), earlier, finer-resolution blocks learn texture and local contrast, while later blocks capture more global information such as the style input's dominant color tone, which is consistent with our intuition. By combining all splatting blocks at three different resolutions, our model merges these features at multiple scales into a joint distribution. \subsubsection{Network Component Ablations.} To demonstrate the importance of other blocks of our network, in Figure~\ref{fig:ablation_components}, we further compare our network with three variants: one trained without the bilateral-space Laplacian regularization loss (Equation~\ref{eq:laplacian_reg}), one without the global scene summary (Figure~\ref{fig:architecture}, yellow block), and one without ``top path'' inputs (Figure~\ref{fig:architecture}, dark green block). Figure~\ref{fig:ablation_components}(b) shows distinctive dark halos when bilateral-space Laplacian regularization is absent. This is due to the fact that the network can learn to set regions of the bilateral grid to zero where it does not encounter image data (because images occupy a sparse 2D manifold in the grid's 3D domain). When sliced, the result is a smooth transition between black and the proper transform. Figure~\ref{fig:ablation_components}(c) shows that the global summary helps with spatial consistency. For example, in the \emph{mountain} photo, the left part of the sky is saturated and the right part of the mountain is slightly washed out, whereas the output of our full network in Figure~\ref{fig:ablation_components}(e) has more spatially consistent color. This is consistent with the observation in Gharbi et al.~\cite{gharbi2017deep}. Figure~\ref{fig:ablation_components}(d) demonstrates that selecting between learned-and-normalized vs.
pretrained-and-normalized features (Figure~\ref{fig:architecture}, ``top path'') is also necessary. The results show distinctive patches of incorrect color characteristic of the network locally overfitting to the style input. Adaptively selecting between learned and pretrained features at multiple resolutions eliminates this inconsistency. Finally, we also show that our network learns stylization parameterized as local affine transforms and not a simple edge-aware interpolation. We run the full AdaIN network~\cite{huang2017arbitrary} on our $256 \times 256$ content and style images to produce a low-resolution stylized result. We then use BGU~\cite{chen2016bilateral} to fit a $16 \times 16 \times 8$ affine bilateral grid (the same resolution as our network) and slice it with the full-resolution input to produce a full-resolution output. Figure~\ref{fig:splat_viz} (d) shows that this strategy works quite poorly: since AdaIN's output exhibits spatial distortions even at $256 \times 256$, there is no affine bilateral grid for BGU to find that can fix them. 
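The slicing operation that applies such an affine bilateral grid at full resolution can be sketched as follows (a simplified NumPy version using nearest-neighbor lookup instead of hardware-accelerated trilinear interpolation; the grayscale luma guide and array layout are illustrative assumptions, not the exact implementation):

```python
import numpy as np

def slice_grid(grid, image):
    """Apply an affine bilateral grid to a full-resolution image.
    grid:  (gw, gh, gl, 3, 4) array of per-cell affine color transforms.
    image: (H, W, 3) RGB input with values in [0, 1].
    Each output pixel looks up the grid cell indexed by its spatial
    position and luma, then applies that cell's 3x4 affine transform."""
    gw, gh, gl = grid.shape[:3]
    H, W, _ = image.shape
    luma = image.mean(axis=2)  # simple grayscale guide
    out = np.empty_like(image)
    for y in range(H):
        for x in range(W):
            i = min(int(x / W * gw), gw - 1)
            j = min(int(y / H * gh), gh - 1)
            k = min(int(luma[y, x] * gl), gl - 1)
            A = grid[i, j, k]                   # 3x4 affine transform
            rgb1 = np.append(image[y, x], 1.0)  # homogeneous color vector
            out[y, x] = A @ rgb1
    return out
```

Because the per-pixel work is just a lookup and a small matrix multiply, this step is cheap even at high resolution, which is what makes the grid representation practical for full-resolution inference.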
\begin{figure}[t] \centering \begin{tabular}{c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c} \includegraphics[width = .12\linewidth,height=.14\linewidth]{figures/ablation_adain_vgg/8_s44_inputs.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{figures/ablation_adain_vgg/8_s44_no_reg.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{figures/ablation_adain_vgg/8_s44_no_global.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{figures/ablation_adain_vgg/8_s44_no_top.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{figures/ablation_adain_vgg/8_s44_ours.jpg} & \\ \includegraphics[width = .12\linewidth,height=.14\linewidth]{figures/ablation_adain_vgg/inputs_15_s5.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{figures/ablation_adain_vgg/no_reg_15_s5.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{figures/ablation_adain_vgg/15_s5_no_global.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{figures/ablation_adain_vgg/15_s5_no_top.jpg} & \includegraphics[width = .2\linewidth,height=.14\linewidth]{figures/ablation_adain_vgg/512size_15_s5_comb.jpg} &\\ (a) Inputs & (b) No $ \mathcal{L}_r$ & (c) No summary & (d) No top path & (e) Full results \end{tabular} \caption{Network component ablations.} \label{fig:ablation_components} \end{figure} \begin{figure}[t!] 
\begin{minipage}[t]{0.46\textwidth} \begin{tabular}{c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c} \includegraphics[width = .235\linewidth,height=.18\linewidth]{figures/grid_res/content_6_s5.jpeg} & \includegraphics[width = .235\linewidth,height=.18\linewidth]{figures/grid_res/1_8_6_s5.jpg} & \includegraphics[width = .235\linewidth,height=.18\linewidth]{figures/grid_res/2_8_6_s5.jpg} & \includegraphics[width = .235\linewidth,height=.18\linewidth]{figures/grid_res/8_8_6_s5.jpg} & \\ Content & 1x1x8 & 2x2x8 & 8x8x8 & \\ \includegraphics[width = .235\linewidth,height=.18\linewidth]{figures/grid_res/style_6_s5.jpeg} & \includegraphics[width = .235\linewidth,height=.18\linewidth]{figures/grid_res/16_1_6_s5.jpg} & \includegraphics[width = .235\linewidth,height=.18\linewidth]{figures/grid_res/16_2_6_s5.jpg} & \includegraphics[width = .235\linewidth,height=.18\linewidth]{figures/grid_res/16_8_6_s5.jpg} & \\ Style & 16x16x1 & 16x16x2 & 16x16x8 & \\ \end{tabular} \vspace{-0.24em} \centering\caption{Output using grids with different spatial (top) or luma resolutions (bottom) ($\mathrm{w} \times \mathrm{h} \times \mathrm{luma\ bins}$).} \label{fig:grid_res} \end{minipage} \hfill \begin{minipage}[t]{0.52\textwidth} \begin{tabular}{c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c@{\hspace{0.005\linewidth}}c} \includegraphics[width = .12\linewidth,height=.18\linewidth]{figures/extreme/1_inputs.jpg} & \includegraphics[width = .27\linewidth,height=.18\linewidth]{figures/extreme/1_photowct.jpg} & \includegraphics[width = .27\linewidth,height=.18\linewidth]{figures/extreme/1_wct2.jpg} & \includegraphics[width = .27\linewidth,height=.18\linewidth]{figures/extreme/1_ours.jpg} &\\ \includegraphics[width = .12\linewidth,height=.18\linewidth]{figures/extreme/3_inputs.jpg} & \includegraphics[width = .27\linewidth,height=.18\linewidth]{figures/extreme/3_photowct.jpg} & 
\includegraphics[width = .27\linewidth,height=.18\linewidth]{figures/extreme/3_wct2.jpg} & \includegraphics[width = .27\linewidth,height=.18\linewidth]{figures/extreme/3_ours.jpg} &\\ Inputs & PhotoWCT & WCT\textsuperscript{2} & Ours\\ \end{tabular} \vspace{-0.1em} \centering\caption{Our method is robust to adversarial inputs such as when the content image is a portrait (an unseen category) or even a monochromatic ``style''.} \label{fig:extreme} \end{minipage} \end{figure} \subsubsection{Grid Spatial Resolution.} Figure~\ref{fig:grid_res} (top) shows how the spatial resolution of the grid affects stylization quality. With the number of luma bins fixed at $8$, the $1 \times 1$ case reduces to a single global curve, where the network learns an incorrectly colored compromise. Going up to $2 \times 2$, the network attempts to spatially vary the transformation, with slightly different colors applied to different regions, but the result is still an unsatisfying tradeoff. At $8 \times 8$, there is sufficient spatial resolution to yield a satisfying stylization result. \subsubsection{Grid Luma Resolution.} Figure~\ref{fig:grid_res} (bottom) also shows how the ``luma'' resolution affects stylization quality, with the spatial resolution fixed at $16 \times 16$. With $1$ luma bin, the network is restricted to predicting a single affine transform per tile. Interpolating between $2$ luma bins yields a quadratic spline per tile, which is still insufficient for this image. In our experiments, $8$ luma bins are sufficient for most images in our test set. \subsection{Qualitative Results} \begin{figure}[t!]
\vspace{-0.1cm} \begin{minipage}[b]{0.51\textwidth} \includegraphics[width = 0.97\columnwidth]{figures/hd_4k/hd4k.pdf} \centering \vspace{-0.1cm} \captionsetup{labelformat=empty} \caption{(a) Output at 12 megapixels.} \label{fig:hd_4k} \end{minipage} \hfill \begin{minipage}[b]{0.49\textwidth} \begin{minipage}[b]{1\textwidth} \scalebox{0.8}{\begin{tabular}{c | c | c | c | c} Image Size & PhotoWCT & LST & WCT\textsuperscript{2} & \textbf{Ours} \\ \hline 512 $\times$ 512 & 0.68s & 0.25s & 3.85s & $\pmb{<}$ \textbf{5 ms} \\ 1024 $\times$ 1024 & 1.51s & 0.84s & 6.13s & $\pmb{<}$ \textbf{5 ms} \\ 1000 $\times$ 2000 & 2.75s & OOM & 10.94s & $\pmb{<}$ \textbf{5 ms} \\ 2000 $\times$ 2000 & OOM & OOM & OOM & $\pmb{<}$ \textbf{5 ms} \\ 3000 $\times$ 4000 & OOM & OOM & OOM & $\pmb{<}$ \textbf{5 ms} \\ \hline \end{tabular}} \centering \captionsetup{labelformat=empty} \vspace{-1em} \caption{(b) Runtime.} \end{minipage} \vspace{0.2cm} \begin{minipage}[b]{1\textwidth} \scalebox{0.9}{\begin{tabular}{l |c | c | c | c} Mean Score & PhotoWCT & LST & WCT\textsuperscript{2} & \textbf{Ours} \\ \hline Photorealism & 2.02 & 2.89 & \textbf{4.21} & 4.14 \\ Stylization & 3.10 & 3.19 & 3.24 & \textbf{3.49} \\ Overall quality & 2.23 & 2.84 & 3.60 & \textbf{3.79} \\ \hline \end{tabular}} \vspace{-1em} \centering \captionsetup{labelformat=empty} \caption{(c) User study results (higher is better).} \end{minipage} \end{minipage} \vspace{-1em} \setcounter{figure}{7} \caption{(a) The output of our method running at 12 megapixels, a typical smartphone camera resolution. Despite being trained at a fixed low resolution, our method produces sharp results while faithfully transferring the style from a significantly different scene. See the supplement for full-resolution images. (b) Performance benchmarks on a NVIDIA Tesla V100 GPU with 16 GB of RAM. OOM indicates out of memory. Note that photorealistic postprocessing adds significant overhead to LST performance. 
Due to GPU loading and startup time, we were unable to get a precise measurement below 5 ms. (c) Mean user study scores from 1200 responses. Raters scored the four output images in each sextet on a scale of 1--5 (higher is better).} \label{fig:hd_runtime_user} \vspace{-0.2cm} \end{figure} \subsubsection{Visual Comparison.} We compare our technique against three state-of-the-art photorealistic style transfer algorithms: PhotoWCT~\cite{li2018closed}, LST~\cite{li2019learning}, and WCT\textsuperscript{2}~\cite{yoo2019photorealistic}, using default settings. Note that for PhotoWCT, we use NVIDIA's latest FastPhotoStyle library. Comparisons with other algorithms are included in the supplementary material. Figure~\ref{fig:qualitative} features a small sampling of the test set with some challenging examples. Owing to its reliance on unpooling and postprocessing, PhotoWCT results contain noticeable artifacts on nearly all scenes. LST mainly focuses on artistic style transfer, and to generate photorealistic results, it uses a compute-intensive spatial propagation network as a postprocessing step to reduce distortion artifacts. Figure~\ref{fig:qualitative} shows that there are still noticeable distortions in several instances, even after postprocessing. WCT\textsuperscript{2} performs quite well when content and style are semantically similar, but when the scene content is significantly different from the landscapes on which it was trained, the results appear ``hazy''. Our method performs well even on these challenging cases. Thanks to its restricted output space, our method always produces sharp images that degrade gracefully towards the input (e.g., face, leaves) when given inputs outside the training set. Our primary artifact, a noticeable reduction in contrast along strong edges, is a known limitation of the local affine transform model~\cite{chen2016bilateral}.
\subsubsection{Robustness.} Thanks to its restricted transform model, our method is significantly more robust than the baselines when confronted with adversarial inputs, as shown in Figure~\ref{fig:extreme}. Although our model was trained exclusively on landscapes, it degrades gracefully on portraits, which it has never encountered, and even on a monochromatic ``style''. \subsection{Quantitative Results} \subsubsection{Runtime and Resolution.} As shown in Figure~\ref{fig:hd_runtime_user}(b), our method significantly outperforms the baselines in runtime on a workstation GPU, and its runtime is essentially invariant to input size at practical resolutions. This is due to the fact that coefficient prediction, the ``deep'' part of the network, runs at a constant low resolution of $256 \times 256$. Meanwhile, our full-resolution stream does minimal work and benefits from hardware-accelerated trilinear interpolation. On a modern smartphone GPU, inference runs comfortably above 30 Hz at full 12 megapixel camera resolution when quantized to 16-bit floating point. Figure~\ref{fig:hd_4k} shows one such example. More images and a detailed performance benchmark are included in the supplement. \subsubsection{User Study.} The question of whether an image is a faithful rendition of the style of another is inherently a matter of subjective taste. As such, we conducted a user study to judge whether our method delivers subjectively better results compared to the baselines. We recruited 20 users unconnected with the project. Each user was shown 20 sextets of images consisting of the input content, reference style, and four randomly shuffled outputs (PhotoWCT~\cite{li2018closed}, WCT\textsuperscript{2}~\cite{yoo2019photorealistic}, LST~\cite{li2019learning}, and ours). For each output, they were asked to rate the following questions on a scale of 1--5: \vspace{-0.5em} \begin{itemize} \item How noticeable are the artifacts in the image (i.e., how much do they reduce photorealism)?
\item How similar is the output in style to the reference? \item How would you rate the overall quality of the generated image? \end{itemize} In total, we collected 1200 responses (400 images $\times$ 3 questions). As the results shown in Figure~\ref{fig:hd_runtime_user}(c) indicate, WCT\textsuperscript{2} achieves similar average scores to our results in terms of photorealism, and both results are significantly better than PhotoWCT. However, in terms of both stylization and overall quality, our technique outperforms all the other related work: PhotoWCT, LST, and WCT\textsuperscript{2}. \subsubsection{Video Stylization.} Although our network is trained exclusively on images, it generalizes well to video content. Figure~\ref{fig:video} shows an example where we transfer the style of a single photo to a video sequence that varies dramatically in appearance. The resulting video has a consistent style and is temporally coherent without any additional regularization or data augmentation. \begin{figure}[ht] \centering \includegraphics[width = 0.95\columnwidth,height=.16\linewidth]{figures/video_long.pdf} \caption{Transferring the style of a still photo to a video sequence. Although the content frames undergo substantial changes in appearance, our method produces a temporally coherent result consistent with the reference style. Please refer to the supplementary material for the full videos.} \label{fig:video} \end{figure} \section{Conclusion} We presented a feed-forward neural network for universal photorealistic style transfer. The key to our approach is using deep learning to predict affine bilateral grids, which are compact image-to-image transformations that implicitly enforce the photorealism constraint. We showed that our technique is significantly faster than state of the art, runs in real-time on a smartphone, and degrades gracefully even in extreme cases. We believe its robustness and fast runtime will lead to practical applications in mobile photography. 
As future work, we hope to further improve performance by reducing network size, and investigate how to relax the photorealism constraint to generate a continuum between photorealistic and abstract art. \begin{figure}[ht] \centering \begin{tabular}{c@{\hspace{0.002\linewidth}}c@{\hspace{0.002\linewidth}}c@{\hspace{0.002\linewidth}}c@{\hspace{0.002\linewidth}}c@{\hspace{0.002\linewidth}}c} \includegraphics[width = .13\linewidth,height=.14\linewidth]{result_comparison/6_inputs.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/photowct6.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/lst6.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/wct2_6.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/ours6.jpg} &\\ \vspace{-0.1em} \includegraphics[width = .13\linewidth,height=.14\linewidth]{result_comparison/19_inputs.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/photowct19.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/lst19.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/wct2_19.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/ours19.jpg} &\\ \vspace{-0.1em} \includegraphics[width = .13\linewidth,height=.14\linewidth]{result_comparison/7_inputs.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/photowct7.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/lst7.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/wct2_7.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/ours7.jpg} &\\ \vspace{-0.1em} \includegraphics[width = .13\linewidth,height=.14\linewidth]{result_comparison/8_inputs.jpg} & \includegraphics[width = 
.22\linewidth,height=.14\linewidth]{result_comparison/photowct8.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/lst8.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/wct2_8.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/ours8.jpg} &\\ \vspace{-0.1em} \includegraphics[width = .13\linewidth,height=.14\linewidth]{result_comparison/10_inputs.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/photowct10.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/lst10.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/wct2_10.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/ours10.jpg} &\\ \vspace{-0.1em} \includegraphics[width = .13\linewidth,height=.14\linewidth]{result_comparison/12_inputs.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/photowct12.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/lst12.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/wct2_12.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/ours12.jpg} &\\ \vspace{-0.1em} \includegraphics[width = .13\linewidth,height=.14\linewidth]{result_comparison/14_inputs.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/photowct14.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/lst14.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/wct2_14.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/ours14.jpg} &\\ \vspace{-0.1em} \includegraphics[width = .13\linewidth,height=.14\linewidth]{result_comparison/16_inputs.jpg} & \includegraphics[width = 
.22\linewidth,height=.14\linewidth]{result_comparison/photowct16.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/lst16.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/wct2_16.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/ours16.jpg} &\\ \vspace{-0.1em} \includegraphics[width = .13\linewidth,height=.14\linewidth]{result_comparison/3_inputs.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/photowct3.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/lst3.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/wct2_3.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/ours3.jpg} &\\ \vspace{-0.1em} \includegraphics[width = .13\linewidth,height=.14\linewidth]{result_comparison/20_inputs.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/photowct20.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/lst20.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/wct2_20.jpg} & \includegraphics[width = .22\linewidth,height=.14\linewidth]{result_comparison/ours20.jpg} &\\ \vspace{-1em} Inputs & {\small PhotoWCT~\cite{li2018closed}} & LST~\cite{li2019learning} & WCT\textsuperscript{2}~\cite{yoo2019photorealistic} & Ours \\ \end{tabular} \vspace{-0.1em} \caption{\textbf{Qualitative comparison} of our method against three state of the art baselines on some challenging examples.} \label{fig:qualitative} \end{figure} \clearpage \bibliographystyle{splncs04}
\section{Introduction} \subsection{Challenging the paradigm for optical sensor design} The current revolution in sensor technologies is opening up a wide range of new applications where optical components are required to be \textit{small, lightweight and cheap}, without compromise on optical quality. Relevant application areas include in-vivo medical imaging, drone-based imaging systems, mobile phones and wearables. A significant drawback for optical sensor technology in this context, however, is the fact that optical systems are generally \textit{big, heavy, and expensive}. Recent developments within nanopatterning techniques and simulation tools have led to the research field known as \emph{metasurfaces}, which may challenge this paradigm. For instance, the first proofs of concept have been published showing that metasurfaces can be used to move powerful microscopy techniques (which often require large table-mounted equipment) into the body. The authors of \cite{arbabi2018two} demonstrate how dual-wavelength metasurface lenses can help to miniaturize two-photon imaging for e.g. in-vivo brain imaging, achieving resolution comparable to that of a conventional objective for a tabletop microscope. The authors of \cite{pahlevaninezhad2018nano} demonstrate superior resolution for their in-vivo optical coherence tomography relying on a metasurface lens. It is easy to imagine several other application areas where metasurfaces can make a significant difference, e.g. miniaturizing hyperspectral or 3D imaging systems (which can also be large) so that they may be placed onto drones. Or how about spectrometers in cellular phones, or a holographic display on your watch? Metasurfaces are able to overcome the size, weight and cost constraints facing current optical sensor systems by allowing optics to be fabricated using the same standard silicon (Si) processing technology used to fabricate electronics.
In contrast to optical sensor systems, electronic sensor systems are generally small, lightweight and cheap. Although lithographic methods already exist to make e.g. high-quality curved microlenses, metasurfaces offer the advantage of being able to integrate a multitude of optical functions (e.g. lens, filter, polarizer) into a single surface. In this respect, metasurfaces have many similarities with diffractive optical elements. However, by utilizing optical resonances in nanostructures such as pillars, bricks or discs (rather than e.g. stepped gratings), metasurfaces offer unprecedented control over all degrees of freedom of the propagating field: the phase, intensity, polarization and dispersion. Furthermore, metasurfaces can potentially be integrated into the same Si process lines that are already used for making e.g. detectors. This is a development with significant potential to save costs and reduce sizes, as microlenses and detectors generally rely on separate manufacturing lines today. \subsection{Towards high throughput, large area patterning} \label{sec:TowardsHighThroughput} As the research field has until now been primarily interested in demonstrating the potential of metasurfaces, most dielectric metasurface lenses (or \emph{meta}lenses) are fabricated using the highest-resolution nanopatterning methods, even though these tend to be slow and costly. To be more specific, virtually every paper on state-of-the-art dielectric metalenses to date has relied on Electron Beam Lithography (EBL) \cite{kamali2016highly,arbabi2016miniature,arbabi2018two,arbabi2018mems, pahlevaninezhad2018nano,devlin2016high,khorasaninejad2016metalenses, chen2018broadband, chen2019broadband}.
Here EBL is typically used in one of two ways: (i) EBL is used to pattern resist for a metal lift-off, thereby attaining a hard mask for subsequent etching of the (typically silicon) metasurface structures (typically for operation in NIR, but also VIS) \cite{kamali2016highly,arbabi2016miniature,arbabi2018two,arbabi2018mems,pahlevaninezhad2018nano}, or (ii) EBL is used to pattern high-aspect-ratio (as much as 15:1) resist holes for subsequent ALD deposition of TiO$_2$ which, after lift-off, yield the metasurface structures (typically for operation in VIS) \cite{devlin2016high,khorasaninejad2016metalenses, chen2018broadband, chen2019broadband}. The latter technique is typically used when extreme structural requirements apply, such as minimum gaps between metasurface pillars of less than 20 nm. Moving on towards applications, it is therefore necessary to develop low-cost, high-throughput, large-area patterning methods (as agreed upon in \cite{su2018advances, urbas2016roadmap, zhang2016printed, hsiao2017fundamentals, bishop2018metamaterials, barcelo2016nanoimprint, park2019all}) which at the same time offer reproducibility and resolution comparable to those of EBL.
Several fabrication methods relevant to large area patterning have been proposed and partially applied to metasurfaces \cite{bishop2018metamaterials, hsiao2017fundamentals}, including nanoimprint lithography \cite{makarov2017multifold, wang2017nanoimprinted,yao2016nanoimprint, chen2015large,fafarman2012chemically,ibbotson2015optical, wu2018moire, gao2014nanoimprinting, sharp2014negative, bergmair2011single,franklin2015polarization,lucas2008nanoimprint,rinnerbauer2015nanoimprinted,zhang2017printed,lee2018metasurface, briere2019semiconductors}, interference lithography \cite{zhang2015large}, plasmonic lithography \cite{luo2015fabrication,su2018advances}, immersion lithography \cite{hu2018demonstration}, deep-ultraviolet projection lithography \cite{park2019all}, pattern transfer \cite{kim2019facile, checcucci2019multifunctional}, additive manufacturing \cite{wu2019perspective}, self-assembly \cite{cai2019solution, fafarman2012chemically, kim2018chemically}, and associated high-throughput roll-to-roll and stepping processes \cite{she2018large}.
Examples of NIL applied to non-lensing metasurface applications include Mie-resonant holes and line structures for photoluminescence enhancement control \cite{makarov2017multifold, wang2017nanoimprinted}, line structures for unidirectional transmission \cite{yao2016nanoimprint}, colloidal Au nanocrystals acting as quarter-wave plates \cite{chen2015large} and chemically tailored dielectric-to-metal transition surfaces \cite{fafarman2012chemically}, metallic nano-woodpiles (Moir\'e patterns) for photonic crystal bandgaps \cite{ibbotson2015optical, wu2018moire}, metal-dielectric stacked fishnet structures for negative index metamaterials \cite{gao2014nanoimprinting, sharp2014negative, bergmair2011single}, plasmonic structures for active colour tuning \cite{franklin2015polarization} and for localized surface plasmon resonance control \cite{lucas2008nanoimprint}, a plasmonic photonic crystal lattice acting as a plasmonic absorber \cite{rinnerbauer2015nanoimprinted}, and line structures acting as cylindrical beam generators \cite{zhang2017printed}. Despite this wide variety of publications on nanoimprint lithography applied to metasurfaces, there are to our knowledge only a few examples in which dielectric metalenses have been made using nanoimprint \cite{lee2018metasurface, briere2019semiconductors}. This is possibly explained by the challenges involved in etching quality structures with vertical sidewalls and aspect ratios ranging from 2:1 to 15:1. Also, as mentioned above, for demonstrations and "proof of principle" prototypes, the time required by direct writing methods such as EBL is not critical. Nevertheless, in the transition to technological applications this challenge must be addressed.
The authors of \cite{briere2019semiconductors} found that using a classical parallel-plate Reactive Ion Etch (RIE) with a metallic mask yielded slanted sidewalls in the metasurface structures, which in turn seem likely to have reduced the optical quality of their lens. An alternative approach based on selective area sublimation was used to overcome this issue (though it is only applicable to crystalline materials). In \cite{lee2018metasurface} metalenses of good optical quality are reported, fabricated by evaporating a stack of SiO$_2$, Cr and Au onto a polymer stamp, after which the stack is transferred to a Si film on a quartz substrate by imprinting. The detached SiO$_2$-Cr-Au stack is then used as an etch mask for the Si film. This method has the advantage of avoiding the need to pattern the hard mask through etching, but it seems likely that the polymer stamp must be cleaned or re-created for every imprint. \subsection{The Bosch process in comparison to competing etching techniques} The selection of the most appropriate plasma etch type for industrial metalens fabrication is not clear-cut. One group of process alternatives is continuous reactive ion etching (RIE), be it classical parallel-plate RIE, the more advanced and better controlled inductively coupled plasma (ICP) based RIE, or capacitively coupled plasma (CCP) RIE. The most advanced etchers are ICP-based. Another dry etch type is the so-called cryogenic (cryo) etch, which runs at temperatures below $-100\,^{\circ}$C, also in a continuous fashion. The pulsed Bosch-type process (with two pulses, or in the extended Bosch three pulses, per etch cycle) is the third of the main candidates. Bosch deep reactive ion etching (DRIE) produces sidewalls that are not formally straight, but indented with "scallops", which is the main feature distinguishing Bosch from the others in terms of wall appearance.
The "envelope" wall can be made very close to vertical, though, and the scallops could be made as small as 10 nm (depending on mask thickness and selectivity). Cryo etch has experienced a certain popularity in R\&D - in particular due to its smooth and mirror-like sidewalls and capability of high aspect ratio (HAR) etching. Cryo etch has, however, been little used by industry, owing to its rather serious drawbacks - all stemming from a very high demand on accurate temperature control of the wafer and its etched structures. This translates into a lack of process controllability, uniformity, and repeatability, as well as the need for a continuously running line for substrate/wafer cooling by liquid nitrogen. For HAR, Bosch as well as Cryo can go much further than non-cryo continuous RIE alternatives, and Bosch is the HAR dry etch process of choice in industry. Some recent indications exist that Cryo is gaining increased interest also in industry \cite{Cryo2020} due to its specific merits compared with Bosch. One merit of interest for this paper is the entirely smooth walls, which are preferable in masters for nanoimprint lithography. Even a wall angle slightly lower than 90 degrees is preferred, and easier made by cryo than Bosch. For dielectric metalens structures, published papers show requirements on etch aspect ratios (ARs) ranging all the way from 2:1 to above 30:1 (see e.g. \cite{arbabi2018two,khorasaninejad2014silicon,khorasaninejad2016metalenses,chen2018broadband,kamali2016highly}, although \cite{khorasaninejad2014silicon} uses the structures for beam-splitting rather than lensing). Furthermore, the minimum gaps between neighboring pillars range from less than 20 nm to several hundreds of nm. These widely differing ranges stem from a combination of the wavelength of the application, other parts of the specification, and the applied technology and materials. 
Roughly speaking, dry etched silicon metastructures operating in NIR tend towards lower ARs \cite{arbabi2018two,khorasaninejad2014silicon}, whereas ALD-deposited TiO$_2$ metastructures operating in VIS tend towards high aspect ratios (HAR) \cite{khorasaninejad2016metalenses, chen2018broadband, chen2019broadband} (if they instead were to be made by etching). In terms of selecting the most appropriate etch type, one should, perhaps a bit simplistically, distinguish between a low-to-medium AR range and a HAR range. No strictly defined border exists between the two, and where it lies depends on several parameters and on one's final target, but it could arguably be placed very roughly at 10:1, or in some cases quite a bit higher. For low-to-medium ARs it is not always evident that a classical RIE or ICP-RIE (or CCP-RIE) must yield to a Bosch or a cryo etch, despite the latter two being clearly better at HARs. Indeed, ref. \cite{kamali2016highly} achieves ARs of 9:1 by ICP-RIE. Still, continuous RIE can find it more challenging than a Bosch or cryo process to obtain straight (vertical) sidewalls (see e.g. \cite{khorasaninejad2014silicon}). Furthermore, a Bosch process stops more abruptly on a buried oxide layer (BOX), as provided by Silicon-On-Insulator (SOI) wafers, a convenient feature for precise height control. However, with the extremely small lateral dimensions of such metalens pillars (e.g., a pillar width of 55 nm in \cite{khorasaninejad2014silicon} and even 40 nm in the TiO$_2$ case of \cite{khorasaninejad2016metalenses}), very strict control of scallop size as well as of sideways "notching" (a badly controlled lateral etch that can appear due to charging of the oxide) is required. The undesired notching effect can be mitigated by a time-based stop of the Bosch DRIE just before the BOX is reached, followed by a well-tuned continuous RIE step.
Another possible argument against the Bosch process is that it will always result in a pillar wall shape defined by scallops. However, this paper will show that this effect by itself does not seriously deteriorate metalens performance when it is corrected for in the NIL master, a key finding of our paper. As noted above, for some metalens designs the distance between neighboring pillars may become very small; thus there is a limit to how far one can compensate for scallops by making the master pillars wider. However, as scallops can, at least under some circumstances, be made as small as 10 nm, very little master correction may often be required. A published example \cite{khorasaninejad2016metalenses}, though, shows gaps smaller than 20 nm. This not only gives an AR of over 30:1 in their design, but also strains the viability of scallop correction. In such an extreme situation, a cryo etch may be the best process option, if it is available. In terms of access, Bosch process equipment is currently much more available than cryo, in R\&D as well as industrial facilities: Almost all labs that do any serious silicon etching have Bosch processes at hand. However, the same basic plasma tool can be used for cryo as well as Bosch, with relatively limited alterations needed to enable cryo. It is thus more likely that an R\&D metalens development project would use a Bosch rather than a cryo process, while for an industrial enterprise one would expect comparative performance to be the decisive factor. All in all, there is in our opinion no clear and obvious "winner" in the etch type competition for metalenses. However, as long as the scallops of the Bosch-etched walls are not a serious hindrance performance-wise, and the pillar gaps are not extremely small combined with very high ARs, Bosch DRIE at the very least seems like a strong contender.
\subsection{Our contribution} In this paper we present the utilization of standard industrial high-throughput silicon processing techniques for the fabrication of diffraction-limited dielectric metasurface lenses for NIR: We have used UV Nanoimprint Lithography (UV-NIL) patterning of a resist mask with subsequent continuous and Bosch Deep Reactive Ion Etching (DRIE) to fabricate quality high-aspect-ratio metastructures with vertical sidewalls. To our knowledge this is the first demonstration of this combination of techniques, which is highly relevant to the growing demand for high throughput, large area patterning techniques for dielectric metasurfaces. Furthermore, we present a detailed account of the processing steps and the challenges involved, in order to contribute to the advancement of UV-NIL and DRIE as a route to this goal. Employing UV-NIL still requires the fabrication of a master wafer, typically using EBL, but the cost of this can be reduced by fabricating masters with one (or a few) dies, which are replicated to pattern a full master wafer using stepper nanoimprint lithography (stepper NIL) and reactive ion etching. However, full-wafer patterning by stepper NIL is not addressed in this paper. \section{Design of metalens} \label{sec:Design} \subsection{Physical principle} \label{sec:PhysPrinc} The optical design of the metasurfaces relies on dielectric rectangular pillar arrays (Fig. \ref{fig:SimStructures}) and the widely used geometric phase principle \cite{kang2012wave, khorasaninejad2016metalenses,wang2017broadband, chen2019broadband}. The phase function $\phi(r)$ of a lens (which focuses normally incident light to a focal point a distance $f$ from the center of the lens) is given by \begin{eqnarray} \label{eq:LensPhase} \phi(r) = \frac{2\pi}{\lambda}\bigg (\sqrt{r^2 + f^2}-f \bigg ), \end{eqnarray} where $\lambda$ is the wavelength of interest and $r$ is the radial distance from the center.
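As a quick numerical sketch of Eq. \eqref{eq:LensPhase}, the hyperbolic phase profile can be evaluated and wrapped to $[0,2\pi)$. Note that the focal length $f=1$ mm used below is an arbitrary value chosen for illustration, not one taken from the paper:

```python
import math

def lens_phase(r, f, lam):
    """Hyperbolic lens phase of Eq. (1) [rad], before wrapping to [0, 2*pi)."""
    return 2 * math.pi / lam * (math.sqrt(r * r + f * f) - f)

# Assumed illustration values: lambda = 1.55 um (from the text), f = 1 mm (assumed)
lam, f = 1.55e-6, 1.0e-3
for r_mm in (0.0, 0.25, 0.5):
    phi = lens_phase(r_mm * 1e-3, f, lam) % (2 * math.pi)  # phase modulo 2*pi
    print(f"r = {r_mm:.2f} mm  ->  phi mod 2pi = {phi:.3f} rad")
```

The unwrapped phase grows monotonically with $r$, and only its value modulo $2\pi$ matters for the metasurface layout.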
The job of the metalens is to add the phase $\phi(r)$ to the incoming field at each point $r$ on the metasurface. If the incoming field is circularly polarized, this phase can be imparted by transmitting the field through rectangular pillars on the metasurface, rotated by an angle \begin{eqnarray} \label{eq:Rotation} \alpha(r) = \phi(r)/2, \end{eqnarray} as sketched in Fig. \ref{fig:UnitCellPic}. This is known as the \emph{geometric phase principle}, in which the transmitted field $|E_\text{out}\rangle$ may be expressed as \begin{figure} \centering \begin{subfigure}[b]{0.40\textwidth} \centering \includegraphics[width=\textwidth]{Figures/Sketch.png} \caption{} \label{fig:UnitCellPic} \end{subfigure} \hfill \begin{subfigure}[b]{0.55\textwidth} \centering \includegraphics[width=\textwidth]{Figures/StructuralTolerances_Simulation.pdf} \caption{} \label{fig:StructuralTolerances} \end{subfigure} \caption{(a) Sketch of Si rectangular pillars rotated by an angle $\alpha$ relative to the unit cell axes. When the incident field is circularly polarized, the rotation angle imposes a phase shift of $2\alpha$ on the transmitted cross-polarized field. (b) Simulated cross-polarized intensity for a left-circular field passing through an array of the sketched pillars, computed with the Rigorous Coupled-Wave Analysis (RCWA) method. The target dimensions of height $h=1200$ nm, width $w=230$ nm, length $l=354$ nm and periodicity $p=835$ nm give the solid curve.
Simulations for structures with reduced or increased lateral dimensions (by $-50$ nm and $+40$ nm) are also displayed: These demonstrate that fabrication tolerances of at least $\pm 40$ nm in the lateral dimensions should give functioning metasurfaces at either of two common telecom wavelengths, $\lambda = 1.55\mu$m or $\lambda = 1.31\mu$m.} \label{fig:my_label} \end{figure} \begin{eqnarray} \label{eq:crossPol} |E_\text{out}\rangle = \frac{t_x + t_y}{2}|L\rangle + \frac{t_x - t_y}{2} \exp(i\phi)|R\rangle, \end{eqnarray} where we have assumed that the incoming field is left-handed circularly polarized, $|L\rangle$, and $|R\rangle$ is then the cross-polarized, right-handed circularly polarized state. $t_x$ and $t_y$ are the complex transmission coefficients for linear polarizations orthogonal to the surface normal (along the coordinate $x$ and $y$ axes, respectively). Observing the transmitted field, it is clear that the phase function \eqref{eq:LensPhase} is applied to the cross-polarized field $|R\rangle$ through the term $\exp(i\phi)$: That is, the cross-polarized field will be focused to the focal point at $f$, while the field remaining in the original polarization state will not. By appropriately designing the dielectric pillar periodicity $p$, height $h$, width $w$ and length $l$, one can tune $t_x$ and $t_y$ to increase the proportion of the transmitted field which is focused: By tuning the parameters such that $t_y=-t_x\equiv -t$, the metasurface also acts as a half-wave plate in which all of the transmitted field is cross-polarized, giving \begin{eqnarray} \label{eq:crossPol-QWP} |E_\text{out}\rangle = t\exp(i\phi)|R\rangle, \end{eqnarray} so that all the transmitted field is focused at the focal point. Since the phase $\phi(r)$ is imposed through the rotation \eqref{eq:Rotation} alone, the simulation task is limited to finding the dimensions $p,h,w,l$ of the rectangular pillar array which maximize the degree of cross-polarization of the transmitted field.
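The geometric phase relation can be verified with a minimal Jones-calculus sketch. The circular-basis sign convention below is an assumption of this sketch (the result $\arg = 2\alpha$ is convention-independent up to sign), and the ideal half-wave condition $t_y=-t_x$ is taken from the text:

```python
import cmath, math

def element_jones(alpha, tx=1.0, ty=-1.0):
    """Jones matrix J = R(-alpha) diag(tx, ty) R(alpha) of a rotated pillar.
    ty = -tx is the ideal half-wave condition discussed in the text."""
    c, s = math.cos(alpha), math.sin(alpha)
    return [[tx * c * c + ty * s * s, (tx - ty) * c * s],
            [(tx - ty) * c * s, tx * s * s + ty * c * c]]

def apply(J, v):
    return [J[0][0] * v[0] + J[0][1] * v[1], J[1][0] * v[0] + J[1][1] * v[1]]

def inner(a, b):  # <a|b>, first argument conjugated
    return a[0].conjugate() * b[0] + a[1].conjugate() * b[1]

# Circular basis vectors (sign convention assumed for this sketch)
L = [1 / math.sqrt(2), 1j / math.sqrt(2)]    # left-circular input
R = [1 / math.sqrt(2), -1j / math.sqrt(2)]   # right-circular (cross-polarized)

alpha = 0.3                                   # pillar rotation angle [rad]
E_out = apply(element_jones(alpha), L)
print(cmath.phase(inner(R, E_out)))           # geometric phase: 2*alpha = 0.6
print(abs(inner(L, E_out)))                   # ~0: full conversion when ty = -tx
```

The cross-polarized amplitude picks up exactly $\exp(i2\alpha)$, while the co-polarized component vanishes, reproducing Eqs. \eqref{eq:crossPol} and \eqref{eq:crossPol-QWP} for the ideal half-wave case.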
By fixing a common height $h$ for all of the pillars, the metasurface can be flat and well suited for fabrication using lateral patterning techniques. Furthermore, as is common in the literature (e.g. \cite{kang2012wave,khorasaninejad2016metalenses}), we also apply the same values of $w$ and $l$ to all rotated pillars, thereby disregarding the changes incurred in $t_x$ and $t_y$ when rotating the rectangular pillars by the angle $\alpha$. This simplification allows us to use a continuous range of angles $\alpha \in [0,\pi)$, and using identical (although rotated) pillars yields a constant filling factor over the UV-nanoimprint lithography stamp, which is an advantage for process optimization (Sec. \ref{sec:NIL}). Preliminary simulations for the Si rectangular pillars on a quartz substrate indicate that the phase discrepancies incurred by this simplification are at most on the order of 0.03 rad (varying with the angle $\alpha$). The transmittance discrepancies due to rotation appear to be negligible. \subsection{Sweep simulations to find array dimensions} \label{sec:Simulations} We performed sweep simulations to find array dimensions that maximize transmission of the cross-polarized field, using Rigorous Coupled-Wave Analysis (RCWA) in the GD-Calc implementation and the Finite-Difference Time-Domain (FDTD) method in the OptiFDTD implementation. We find that a height $h=1200$ nm, width $w=230$ nm, length $l=354$ nm and periodicity $p=835$ nm give full cross-polarization at the target wavelength of $\lambda = 1.55\mu$m. The simulations assume the source is placed within the silicon (Si) substrate: That is, reflections at the wafer backside are neglected, because they can be effectively eliminated by an anti-reflection coating and are not intrinsic to the metasurface design. The ratio of transmitted cross-polarized intensity to the intensity of the light incident on the metasurface is shown in Fig. \ref{fig:StructuralTolerances}.
The lower-than-unity ratio may be largely attributed to reflection at the boundary between the Si substrate ($n_\text{Si}=3.5$) and air ($n_\text{air}=1$): The Fresnel equations at normal incidence give roughly 31\% reflectance at a Si-air interface over the relevant wavelength band. The efficiency of the metalens can be increased by e.g. placing the Si metasurface pillars on a quartz substrate ($n_\text{SiO2}=1.5$) instead, which would reduce the corresponding reflectance to around 4\%. The structures on the interface may of course also reduce the expected efficiencies somewhat: Some scattering into diffraction orders within the Si substrate is expected, since $\lambda/(n_\text{Si}p)=0.53$ for $\lambda=1.55\mu$m. Development of a UV-NIL and Bosch DRIE patterning process for metalens fabrication involves many parameters that must be taken into account in order to end up with the desired lateral dimensions found from simulations. It is therefore useful to know, when planning the fabrication, what tolerances are permitted in the lateral dimensions of the structure. Figure \ref{fig:StructuralTolerances} shows two additional simulations where the lateral dimensions $w$ and $l$ of the pillars are varied to determine the permitted lateral fabrication tolerances. Increasing the lateral dimensions by 40 nm shows that the metasurface continues to have a high cross-polarization transmission at $\lambda=1.55\mu$m. While reducing the lateral dimensions by 40 nm gives low cross-polarization transmission at $\lambda=1.55\mu$m, a high transmission is achieved at another common telecom wavelength of $\lambda=1.31\mu$m. Therefore, given the freedom to use either $\lambda=1.55\mu$m or $\lambda=1.31\mu$m, the fabrication tolerance in the lateral dimensions is expected to be on the order of $\pm 40$ nm.
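The reflectance figures quoted above follow directly from the normal-incidence Fresnel formula; a quick check with the refractive indices from the text:

```python
def fresnel_reflectance(n1, n2):
    """Normal-incidence Fresnel power reflectance at an n1/n2 interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

print(fresnel_reflectance(3.5, 1.0))  # Si-air: ~0.31, as quoted
print(fresnel_reflectance(1.5, 1.0))  # quartz-air: ~0.04, as quoted

# Diffraction criterion inside the substrate: orders propagate when lambda/(n*p) < 1
print(1.55e-6 / (3.5 * 835e-9))       # ~0.53, so some scattering into Si orders
```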
It is important to note that discrepancies in the lateral dimensions primarily affect the \emph{efficiency} of the lens and \emph{not} the focal spot size, owing to the geometric phase effect (phase is imposed by the rotation of the structure rather than by its particular dimensions). This explains why the lenses fabricated in Sec. \ref{sec:results} attain diffraction-limited focusing despite slightly missing the target dimensions. Since high precision in reaching the target dimensions is challenging during process development, we have made three design variants to account for scenarios in which we might over- or under-estimate the resulting dimensions. The variants of dimensions used for the fabrication of the NIL master are outlined in Table \ref{tab:LateralDimensions}. \begin{table}[] \centering \caption{Lateral dimensions and filling factors of rectangular pillars for mask fabrication} \label{tab:LateralDimensions} \begin{tabular}{c|ccc} Metasurface & $w$ [nm] & $l$ [nm] & Filling factor (F) \\ \hline A & 292 & 416 & 0.17\\ B & 237 & 361 & 0.12\\ C & 351 & 475 & 0.24 \end{tabular} \end{table} It turned out that the smallest variant (i.e. variant B) was the best suited, due to broadening at the base of the resist pillars (as discussed in Sec. \ref{sec:NIL}). \subsection{Compensation for Bosch sidewall surface roughness} \label{sec:ScallopCompensation} The center picture in Fig. \ref{fig:SimStructures} shows an SEM image of a Si rectangular pillar fabricated by Bosch-type 3-step Deep Reactive Ion Etching (DRIE). As can be seen, the alternation of isotropic etching, passivation and de-passivation in the Bosch process leads to washboard surface patterns in the form of "scallops", which for simplicity we characterize in terms of a scallop radius $R$.
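The filling factors of Table \ref{tab:LateralDimensions} and the scallop compensation of Sec. \ref{sec:ScallopCompensation} follow from simple geometry; a minimal check, using the dimensions and the empirical $1.0382$ scaling factor exactly as stated in the text:

```python
import math

P = 835.0  # unit-cell periodicity [nm]

def filling_factor(w, l, p=P):
    """Fraction of the unit cell covered by a w x l rectangular pillar."""
    return w * l / p**2

# Reproduces the filling factors listed in the table (dimensions in nm)
for name, w, l in [("A", 292, 416), ("B", 237, 361), ("C", 351, 475)]:
    print(name, round(filling_factor(w, l), 2))

def scallop_compensated(d, R, extra=1.0382):
    """Widen a lateral dimension d [nm] to compensate for scallops of radius R [nm],
    per the scaling given in the text (pi*R/2 volume-loss term plus 3.82% extra)."""
    return extra * (d + math.pi * R / 2)

print(round(scallop_compensated(230, 50)))  # w' -> 320 nm
print(round(scallop_compensated(354, 50)))  # l' -> 449 nm
```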
In the research field it is sometimes pointed out that surface roughness poses a problem for achieving high optical quality \cite{khorasaninejad2016metalenses}; however, since the roughness in this case is regular and occurs on length scales much smaller than the wavelength, it can be treated as giving rise to effectively reduced dimensions, which can be compensated for. The structure displayed on the right-hand side of Fig. \ref{fig:SimStructures} shows a simulation model mimicking a Bosch-processed pillar using scallops with radius $R=50$ nm (as the one seen in the center of Fig. \ref{fig:SimStructures}). The simulation labeled \textit{FDTD Scallopy} in Fig. \ref{fig:SimPlot} shows that scallops can be compensated for by increasing the lateral dimensions to make up for the volume lost to the scallops: Essentially the same transmitted cross-polarization is achieved by scaling the width and length according to \begin{eqnarray} w' &=& 1.0382 \bigg (w + \frac{\pi R}{2} \bigg ) = 320 \ \text{nm}, \\ l' &=& 1.0382 \bigg (l + \frac{\pi R}{2} \bigg ) = 449 \ \text{nm}, \end{eqnarray} which corresponds to a scaling $\sim 3.82\%$ larger than that required to compensate for the direct volume loss alone. \begin{figure} \centering \begin{subfigure}[b]{0.65\textwidth} \centering \includegraphics[width=\textwidth]{Figures/FDTD_ScallopsVsSmooth.pdf} \caption{} \label{fig:SimPlot} \end{subfigure} \hfill \begin{subfigure}[b]{0.65\textwidth} \centering \includegraphics[width=\textwidth]{Figures/opticalCharactarization/ScallopySimulation2.png} \caption{} \label{fig:SimStructures} \end{subfigure} \caption{(a) Simulated cross-polarized intensity for a left-circular field passing through a metasurface consisting of rectangular silicon pillars ($n=3.5$), computed using the Finite-Difference Time-Domain (FDTD) method.
An FDTD simulation is also shown for a metasurface structure with washboard ("scallopy") sidewall roughness, resembling the appearance after Bosch Deep Reactive Ion Etching (DRIE), in which the lateral dimensions of the rectangular pillar have been increased to compensate for the volume loss. The scallopy structure gives qualitatively identical results to the smooth-walled FDTD simulation. (b) \emph{Left}: The structure with smooth sidewalls for the FDTD simulations, with height $h=1200$ nm, width $w=230$ nm, length $l=354$ nm and periodicity $p=835$ nm. \emph{Center}: An SEM image of a rectangular pillar etched into Si using Bosch DRIE. \emph{Right}: The refractive index cross-sectional profile of the FDTD simulation structure imitating a rectangular Si pillar with "scallopy" sidewalls (scallop radii of $R=50$ nm), where the lateral dimensions have been scaled up to $w'=320$ nm and $l'=449$ nm in order to compensate for the volume loss to the scallops.} \label{fig:Simulations} \end{figure} \section{Results} \label{sec:results} This section describes the results of the UV-nanoimprint lithography (UV-NIL) and etching steps, as well as some of the challenges encountered. Proposed strategies for process optimization are discussed in Sec. \ref{sec:Discussion}. \subsection{Imprint results} \label{sec:NIL} UV-NIL was done using the Micro Resist Technology mr-NIL210 series resist and a soft stamp (Solvay Fomblin MD-40). The stamp is an inverted copy of a silicon master wafer with nominally 500 nm tall pillars forming the metasurface (see Fig. \ref{fig:Sketch_Softstamp}). In order to transfer this pattern to the silicon process wafers, two conditions must be fulfilled: 1) the resist needs to be thick enough to function as an etch mask for more than 1.2 $\mu$m of silicon DRIE, and 2) the residual layer thickness (RLT) of the remaining resist between the wafer surface and the imprinted pattern must be minimized (see Fig. \ref{fig:Sketch_Imprint}).
In order to completely fill the inverted metasurface structures in the stamp with resist, only a thin initial resist layer of less than 100 nm was needed for our design. However, in some cases it could be beneficial to have a thicker resist layer outside the patterned area, in order to prevent this area from being etched. Thus, resist films of different initial thicknesses, from 500 nm and below, were explored. The mr-NIL210-200nm formulation, spun at 3000 rpm, gave low enough RLT values and acceptable variation over the metasurface. The resist thickness before imprint was measured by ellipsometry to be approximately 150 nm. The RLT obtained after imprint varied between 48 nm and 74 nm over the metasurface. While these RLT values were acceptable for further processing, the results obtained from the thicker mr-NIL210-500nm formulation were not viable for the metasurface patterning by DRIE: The RLT varied considerably over the lens, at its largest being close to the pre-imprint resist film thickness. This finding turned out to be crucial for the ensuing fabrication. \begin{figure} \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{Figures/Sketch_SoftStamp2.png} \caption{} \label{fig:Sketch_Softstamp} \end{subfigure} \ \ \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{Figures/Sketch_Imprint2.png} \caption{} \label{fig:Sketch_Imprint} \end{subfigure} \hfill \begin{subfigure}[b]{0.8\textwidth} \centering \includegraphics[width=\textwidth]{Figures/ImprintedResist_mrNIL210-200nm_3000rpm_img8_cropped2} \caption{} \label{fig:ImprintmrNIL210} \end{subfigure} \caption{(a) \textit{Sketch of stamp fabrication}: A soft stamp is made by spinning Solvay Fomblin MD-40 onto a Si master wafer covered with nominally 500 nm tall rectangular pillars, rolling on a carrier foil, UV exposure, and finally detaching the stamp.
The stamp is therefore an inverted copy of the silicon master, consisting of rectangular holes which are nominally 500 nm deep. (b) \textit{Sketch of resist mask after imprint}: The soft stamp has been rolled onto a film of Micro Resist Technology mr-NIL210 series resist covering a bulk Si wafer substrate primed with mr-APS1. After exposure to UV illumination, the stamp is removed. What remains is patterned resist (a copy of the original master). Between the patterned resist and the silicon substrate there remains a film of residual resist, characterized by its Residual Layer Thickness (RLT). (c) \textit{Cross-sectional SEM image of imprinted and exposed resist on a silicon substrate}: The bright white line is probably caused by delaminated resist at the edge. Between the resist pillars and the silicon substrate one can observe the residual layer. One also observes a broadening at the base of the resist pillar (in the shape of a "top-hat").} \label{fig:three graphs} \end{figure} In general, metasurfaces consist of structures of varying geometry, which means that the filling factor $F$ varies over the surface. Optimizing the RLT therefore becomes challenging, as the amount of resist needed to fill the structures varies over the metasurface. In this respect, our optical design based on the geometric phase method (Sec. \ref{sec:PhysPrinc}) has the advantage of providing identical structures (although rotated) with identical filling factors over the metasurface. This makes process optimization of the residual layer thickness easier. As a side remark, we also attempted fabrication of another optical design based on cylinders of varying radii, for which issues with delamination of resist upon stamp removal seemed to depend on the filling factor of the cylinders (see Fig. \ref{fig:RLTIssues}). An issue with the imprinted structures is broadening close to the base of the resist pillars (a resist "foot"), as seen in Fig. \ref{fig:ImprintmrNIL210}.
Such broadening is also frequently observed in SEM images in the literature \cite{hamdana2018nanoindentation, si2017consecutive, plachetka2013tailored}. This resist foot adds to the lateral dimensions of the rectangular pillars, an addition which is transferred to the final pillar dimensions in the patterned silicon (Sec. \ref{sec:BoschEtch}). We believe this broadening effect likely originates from the master wafer (from which the soft stamp is made), since the UV-cured resist generally follows the pattern of the master. Section \ref{sec:BoschEtch} outlines how we resolved the issue by adjusting the etch parameters for the resist pillars. \begin{figure} \centering \includegraphics[width=.65\linewidth]{Figures/FillFactorCh4_NILTMasterCut_2.jpg} \caption{Imprint challenges when the filling factor varies over the metasurface. Here a cropped microscope image of imprinted resist (red border) is placed on top of the plotted fill factor (gray scale plot) of a metalens consisting of cylindrical pillars of varying radii. As can be seen, the structural fidelity of the imprint varies dramatically with filling factor (F): Areas with large F seem to turn out well, whereas areas with low F seem to detach with the stamp (apart from the center area).} \label{fig:RLTIssues} \end{figure} Attaining sufficient adhesion between the resist and the substrate remains an important issue: Adhesion is necessary to avoid delamination of the metastructure when the stamp is withdrawn from the surface after exposure. Such adhesion issues must be overcome if NIL is to become a high-throughput metasurface fabrication technique. To facilitate adhesion, RCA-cleaned substrates were plasma activated (600 W for 10 min) before spinning on the adhesion promoter (mr-APS1) immediately afterwards.
Three different dimensions of resist pillars, corresponding to filling factors $F_1=0.12$, $F_2 = 0.17$ and $F_3 = 0.24$, were used for the metastructure (corresponding to the dimensions in Table \ref{tab:LateralDimensions}). However, only the smallest gave reliable imprinting: The imprints with larger filling factors more or less consistently delaminated when the stamp was withdrawn after exposure. \subsection{Etch Methodology and Results} \label{sec:BoschEtch} In order to transfer the imprint patterns (as shown in Fig. \ref{fig:ImprintmrNIL210}) to the silicon wafer, we first utilized a continuous (un-pulsed) RIE step to etch through the residual layer of resist, before commencing with Bosch 3-step DRIE, i.e. pulsed etching consisting of the three steps passivation, de-passivation and isotropic SF$_6$-based silicon etch. Fig.~\ref{fig:RIEOverview} shows that high pattern fidelity is achieved in the silicon: We observe vertical sidewalls (indented with scallops, discussed below) for pillars of around 1.2 $\mu$m height. In a separate run we observed the same pattern fidelity to at least $1.6 \ \mu$m etch depth. The cyclically pulsed etching of the Bosch process leaves a washboard-like surface roughness characterized by a scallop depth which depends on the parameters of the Bosch process. For Fig. \ref{fig:RIEOverview} the scallop depths are $\sim 14$ nm. In making these structures we used 6'' bulk Si wafers on which only a small area was patterned: four metalenses (rectangular pillars) of area 1.5 mm $\times$ 1.5 mm, and one metalens (cylindrical pillars) of 0.75 mm $\times$ 0.75 mm. During the first RIE dry-etch step (for residual layer removal) the resist is completely removed from the surrounding wafer surface, resulting in an etch loading close to 100\% for the following Bosch DRIE step (which etches the Si pillars). As discussed in Sec. \ref{sec:NIL}, the broadening of the resist pillars seen in Fig.
\ref{fig:ImprintmrNIL210} leads to added dimensions in the etched structures. The pillars shown in Fig. \ref{fig:RIEOverview} have lateral dimensions of around 420 nm $\times$ 530 nm, i.e. roughly 180 nm too large in both directions in comparison to the simulation designs in Sec. \ref{sec:Simulations}. As a result the optical properties of this metasurface lens are poor. To solve this issue without redesigning the mask, three approaches were tested. First, we increased the length of the continuous dry-etch step in an attempt to completely remove the resist "foot" at the base of the imprinted resist pillars. Although this somewhat deteriorated the quality of the imprinted pattern (turning the resist pillars into pyramids), it did not seem detrimental to the patterning of the Si pillars. We expect that further development of the process parameters of the continuous dry-etch step will likely remove the unwanted broadening (as e.g. seen in \cite{hamdana2018nanoindentation}, where the resist "feet" are removed completely while keeping vertical sidewalls in the resist), but the aforementioned run did not reduce it sufficiently. Our second approach was to dramatically increase the scallop depths to $\sim 86$ nm (see Fig. \ref{fig:ComparisonDRIE} ii), which gave lateral dimensions on the order of 307 nm $\times$ 460 nm (measured between the tops of the washboard pattern). A third approach consisted of realizing less extreme scallop depths of $\sim 44$ nm and thereafter oxidizing the structures to grow around 100 nm of oxide. After stripping this oxide away (which on a planar silicon surface would have reduced the silicon thickness by 44 nm on each surface) the scallop depths were reduced to $\sim 29$ nm (see Fig. \ref{fig:ComparisonDRIE} iii) and the lateral dimensions were on the order of 210 nm $\times$ 320 nm.
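The silicon loss in the oxidize-and-strip step follows the standard rule of thumb that thermally grown SiO$_2$ consumes roughly 44\% of the final oxide thickness in silicon, consistent with the ${\sim}100$ nm oxide and 44 nm per surface quoted above; a small sketch (planar-surface approximation, helper names our own):

```python
# Sketch of the oxidize-and-strip thinning step, assuming the standard rule
# of thumb that thermally grown SiO2 consumes roughly 44% of the final oxide
# thickness in silicon (planar approximation; helper names are our own).

def si_consumed_nm(t_oxide_nm):
    """Silicon thickness consumed per exposed surface for a grown oxide."""
    return 0.44 * t_oxide_nm

def pillar_width_after_strip_nm(width_nm, t_oxide_nm):
    """Pillar lateral dimension after oxidation and oxide strip:
    both sidewalls lose ~0.44 * t_ox of silicon."""
    return width_nm - 2.0 * si_consumed_nm(t_oxide_nm)

# ~100 nm of grown oxide, as in the third approach above:
# ~44 nm of Si removed per surface, so each lateral dimension shrinks by ~88 nm
loss = si_consumed_nm(100.0)  # 44.0 nm
```

This also makes explicit why the oxidation step reduces both the scallop depth and the lateral pillar dimensions at once.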
\begin{figure} \centering \begin{subfigure}[b]{0.44\textwidth} \centering \includegraphics[width=\textwidth]{Figures/Overview_RIE9.png} \caption{} \label{fig:RIEOverview} \end{subfigure} \hfill \begin{subfigure}[b]{0.52\textwidth} \centering \includegraphics[width=\textwidth]{Figures/Comparison_DRIE.png} \caption{} \label{fig:ComparisonDRIE} \end{subfigure} \caption{(a) Patterned silicon after Bosch Deep Reactive Ion Etching (DRIE) of the silicon wafer with the imprinted resist pictured in Fig. \ref{fig:ImprintmrNIL210}. The pulsed etching of the three-step Bosch process leads to washboard sidewall surface roughness. (b) Three different scallop depths achieved using DRIE: (i) A close-up of the metasurface pictured in (a) with scallop depths of $\sim 14$ nm. (ii) In order to reduce the effective dimensions of the pillars, closer to the target dimensions, the scallop depths were increased to $\sim 86$ nm. Note that the resist has not been stripped in this image (although the resist is not clearly seen in the image). (iii) Dimensions close to the target were achieved by first performing a Bosch DRIE leading to scallop depths of $\sim 44$ nm, and thereafter performing an oxidation step and oxide strip which in the end led to scallop depths of $\sim 29$ nm.} \label{fig:RIEResults} \end{figure} \subsection{Optical characterization} \label{sec:OpticalCharacterization} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Figures/opticalCharactarization/opticalSetupIllustration.pdf} \caption{ Optical setup used to characterize the metalenses. A collimated laser beam passes through a right-handed circular polarizer (CPR), before going through an aperture with diameter 0.9~mm and then the metalens (ML). The beam is converted to left-handed circularly polarized light and focused by the metalens. The resulting focal spot is imaged onto an IR camera using a $20\times$ infinity-corrected microscope objective and a planoconvex lens.
A left-handed circular polarizer (CPL) is placed in reverse between the microscope objective and the planoconvex lens, such that only the light which is converted from right- to left-handed circular polarization by the metalens is let through. When measuring the focal spot for the aspherical lens, the right-handed circular polarizer is moved in front of one of the alignment mirrors, such that the handedness is changed by the mirror and the light can pass through the left-handed circular polarizer. The aperture is used to ensure the lenses have the same effective numerical aperture, and a power meter is used to ensure the same amount of light is transmitted through the aperture for all measurements.} \label{fig:optical_setup} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/opticalCharactarization/focal_spot_1310nm.png} \caption{} \label{fig:focalspot_1310} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/opticalCharactarization/focal_spot_1550nm.png} \caption{} \label{fig:focalspot_1550} \end{subfigure} \caption{Focal spot profiles measured for two metalenses and an anti-reflection coated aspherical lens when focusing a fully polarized and collimated laser beam of wavelength 1310 nm (a) and 1550 nm (b). For all lenses and for both wavelengths an aperture with diameter 0.9 mm has been placed in front of the lens.
} \label{fig:focalspot} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{Figures/opticalCharactarization/aspherical_resolution_target_1550nm_2.jpg} \caption{} \label{fig:aspherical_resolution_target_1550} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{Figures/opticalCharactarization/RIE14_resolution_target_1550nm_0.jpg} \caption{} \label{fig:RIE14_resolution_target_1550} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{Figures/opticalCharactarization/RIE15_resolution_target_1550nm_2.jpg} \caption{} \label{fig:RIE15_resolution_target_1550} \end{subfigure} \caption{Images of a resolution target using an aspherical lens (a), metalens MLII (b), and metalens MLIII (c). The groups of lines have widths 15.6 $\mu$m, 13.9 $\mu$m and 12.4 $\mu$m. The target is illuminated by a 1550 nm laser beam, and to enable comparison of the lenses an aperture with diameter 0.9 mm has been placed in front of the lens. The contrast is significantly better for the aspherical lens, while the resolution is only slightly better: the 13.9 $\mu$m wide lines in (b) are resolved similarly to the 12.4 $\mu$m wide lines in (a).} \label{fig:resolution_target} \end{figure} The two metalenses from the fabrication steps discussed in Sec. \ref{sec:BoschEtch} were tested optically using the measurement setup shown in Fig.~\ref{fig:optical_setup}. One of the metalenses had comparatively large scallop depths of $\sim86$ nm and lateral dimensions on the order of 307 nm $\times$ 460 nm as shown in Fig.~\ref{fig:ComparisonDRIE}~(ii) (hereafter called MLII). The other metalens had shallower scallops of $\sim29$ nm and lateral dimensions on the order of 210 nm $\times$ 320 nm, shown in Fig.~\ref{fig:ComparisonDRIE}~(iii) (hereafter referred to as MLIII).
Note that while the metasurfaces were designed for left circular polarization, the transfer of the pattern to the nanoimprint master led to an opposite rotation of the cylinders compared to the simulations, and consequently a change in the handedness of the circularly polarized light for operation. Hence for the optical characterization, the metasurfaces are illuminated with right-handed circularly polarized light, and the metasurfaces focus the cross-polarized left-handed circularly polarized light. The focal spots of the metasurfaces are shown in Fig.~\ref{fig:focalspot} for two wavelengths, $1.31\,\mu$m and $1.55\,\mu$m, together with the focal spot of an anti-reflection coated aspherical lens with the same focal length (10~mm) and the same aperture (0.9~mm diameter) for comparison. We see that both metalenses achieve diffraction limited focusing, having the same spot size as the aspherical lens and matching the theoretical diffraction limit. As discussed in Sec. \ref{sec:PhysPrinc}, discrepancies from the target dimensions in the fabricated structures do not primarily affect the focal spot, but rather the lens efficiencies. MLII and MLIII ended up with effective dimensions that are smaller than the target dimensions of the optical design. At $1.31\,\mu$m both metalenses have a measured efficiency of 17\% compared to the aspherical lens, while at $1.55\,\mu$m MLII has a measured efficiency of 30\% and MLIII has a measured efficiency of 8\%. The measurements were made by comparing the peak values in Fig.~\ref{fig:focalspot}, and by correcting for reflection at the substrate back side (dividing the measured intensity by $0.7$) for comparison with the simulations in Sec. \ref{sec:Simulations}. We expect the efficiency values to increase as further process optimization (as discussed in Sec. \ref{sec:Discussion}) leads to better precision in reaching the target dimensions of the optical design.
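As a paraxial cross-check of the diffraction limited spot size, the first-zero Airy radius $r = 1.22\,\lambda f/D$ can be evaluated for the stated geometry (focal length 10~mm, aperture diameter 0.9~mm); this is our own illustrative estimate, not part of the measurement:

```python
# Paraxial estimate (our own cross-check, not part of the measurement):
# first-zero Airy radius r = 1.22 * lambda * f / D for the quoted geometry,
# focal length f = 10 mm and aperture diameter D = 0.9 mm.

def airy_radius_um(wavelength_um, focal_mm=10.0, aperture_mm=0.9):
    return 1.22 * wavelength_um * focal_mm / aperture_mm

r_1310 = airy_radius_um(1.31)   # ~17.8 um
r_1550 = airy_radius_um(1.55)   # ~21.0 um
```

The low numerical aperture (${\sim}0.045$) implied by the aperture explains why the focal spots are tens of microns wide at these wavelengths.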
For incident light with left-handed circular polarization, the metalenses are divergent, with a focal length of $-10$~mm. This was confirmed by switching the polarizers and observing the virtual focal spot visible when bringing the metalens 10~mm inside the working distance of the microscope objective. The same measurement setup was also used to take images of a resolution target using the metalenses. For these measurements the resolution target was placed 2~cm in front of the metalens, and the image plane 2~cm behind the metalens was imaged onto the camera using the microscope objective and planoconvex lens. Fig.~\ref{fig:resolution_target} shows the resulting images for the two metalenses and the aspherical lens using the 1550~nm laser. The aspherical lens has clearly better contrast owing to its higher efficiency, while its resolution is only slightly better, since 13.9~$\mu$m wide lines are resolved similarly by the metalens as 12.4~$\mu$m wide lines are resolved by the aspherical lens. Since all three lenses are observed to have the same diffraction limited focal spot size when the incoming light is collimated parallel to the optical axis, the slight difference in resolution when imaging the resolution target is likely due to coma~\cite{arbabi2016miniature}. \section{Discussion} \label{sec:Discussion} Our results have demonstrated the feasibility of using UV-NIL with subsequent continuous RIE and Bosch DRIE to fabricate diffraction limited metalenses. Further optimization towards high-throughput, production-relevant processing should aim at improving resist adhesion upon stamp detachment and reducing resist broadening at the base of the resist pillars, in order to obtain greater precision in reaching the target dimensions (and thereby raise the efficiency of the lenses). This section discusses these challenges in turn.
A significant challenge in our UV-NIL patterning process was to avoid delamination of the resist upon detachment of the soft stamp. To some extent, our experience seems to indicate a degree of trade-off between achieving a low residual layer thickness (RLT) in the resist (Sec. \ref{sec:NIL}) and its adhesion to the substrate. Despite the use of plasma activation of cleaned substrates and quick subsequent application of an adhesion promoter (Sec. \ref{sec:NIL}), the soft stamp typically needed replacement after 3-5 imprints due to delamination of resist into the holes of the stamp (see Fig. \ref{fig:Sketch_Softstamp} for a sketch of the soft stamp). This was the case for the patterns of variant B in Table \ref{tab:LateralDimensions}, which have the lowest filling factor $F=0.12$. For variants A and C (which have larger filling factors of $F=0.17$ and $F=0.24$), the resist patterns more or less consistently delaminated on the first imprint. However, the issues with delamination seemed only to occur after having switched to the less viscous resist formulation (mr-NIL210-200nm), for which the desired low residual layer thickness (RLT) values were attained. While using the more viscous resist formulation (mr-NIL210-500nm), the imprint patterns of all filling factors more or less consistently turned out well. Unfortunately, as discussed in Sec. \ref{sec:NIL}, the resulting RLT values of the more viscous resist formulation were too large for the subsequent etching steps. Further process development of the UV-NIL patterning steps should therefore consider varying the RLT further by dilution of mr-NIL210-500nm, to see whether there exists a lower threshold of the RLT at which the adhesion issue ceases. The silicon pillars in our master wafer had slightly angled sidewalls ($>80^\circ$), which are known to facilitate the release of the imprint stamp \cite{schift2010nanoimprint}.
Another strategy could be to test even larger angles: Tuning the etch properties of the master fabrication may allow for controllable sidewall angles, and for a systematic analysis of these with respect to soft stamp release. Although slight sidewall angles may be beneficial in this respect, the transfer of such sidewalls to the resist pillars in the mask may add uncertainty in reaching the desired lateral target dimensions. The occurrence of broadening at the base of the resist pillars (like "top-hats", see Fig. \ref{fig:ImprintmrNIL210}) led to a broadening of the etched Si pillars in comparison to the mask dimensions of variant B in Table \ref{tab:LateralDimensions} (becoming roughly 180 nm too large). While we demonstrated that this could be compensated for both by increasing the lateral etch depth (i.e. scallop depth) of the Bosch pulsed DRIE (shown in Fig. \ref{fig:ComparisonDRIE}(ii)) and through oxidizing and stripping (shown in Fig. \ref{fig:ComparisonDRIE}(iii)), as discussed in Sec. \ref{sec:BoschEtch}, the addition of these processing steps also adds uncertainties in predicting the resulting dimensions of the Si pillars: As was noted in Secs. \ref{sec:BoschEtch} and \ref{sec:OpticalCharacterization}, the resulting effective dimensions of the Si pillars with large scallop depths became slightly too small in comparison to the target values, which in turn may explain why the lens efficiencies are lower than their theoretical limits. Resist broadening similar to what we have observed seems to be commonly encountered in the literature \cite{hamdana2018nanoindentation, si2017consecutive, plachetka2013tailored}. The authors of \cite{hamdana2018nanoindentation} demonstrate that the unwanted broadening at the base of the resist pillars can be successfully removed along with the RLT layer by use of an O$_2$ plasma, leaving the resist pillars with almost vertical walls.
However, achieving similar results for rotated rectangular pillars, where the minimum distance between pillars varies over the lens, will likely require significant process development. Both strategies discussed so far for processing away the resist broadening in the imprinted resist may require significant process development in order to reduce uncertainty in the resulting Si pillar dimensions. It would be preferable, therefore, to avoid the broadening in the first place: Avoiding the need to remove or correct for the resist "feet" at the base of the pillars is expected to lead to better precision in reaching the target dimensions. This in turn should make it possible to develop more robust processes towards achieving diffraction limited metalenses of high efficiency. We believe the resist broadening likely originates from equivalent broadening already present in the master Si pillars and/or in the NIL stamp holes, since the UV-cured resist generally follows the pattern of the stamp. It may be worth considering whether process development of the NIL master fabrication can lead to Si pillar patterns without curvature at the base. \section{Conclusion} Diffraction limited dielectric metalenses have been fabricated using UV Nanoimprint Lithography (UV-NIL) and a combination of continuous Reactive Ion Etching (RIE) and pulsed Bosch Deep Reactive Ion Etching (DRIE). These are standard silicon (Si) processing techniques that are relevant as the metasurface research field turns towards applications. In particular, UV-NIL has been proposed as a strong candidate to replace Electron Beam Lithography where a high-throughput, large-area patterning technique is sought.
Simulations show that the "washboard-type" sidewall surface roughness characteristic of the Bosch DRIE process can be compensated for by increasing the lateral dimensions of the Si pillars, and the fabricated structures have demonstrated diffraction-limited imaging despite their metastructures containing relatively large scallop depths. As such, the characteristic sidewall surface roughness of Bosch DRIE does not prevent the technique from being a strong candidate for industrial metalens fabrication. It may, however, face some fundamental challenges in compensating for its sidewall roughness when fabricating nano-structures separated by high-aspect-ratio gaps. The main challenges in the fabrication of the metalenses have been delamination of the resist mask upon stamp removal and resist broadening at the base of the resist pillars. The latter led to the lateral dimensions of the resulting Si pillars after etching being too large. This was compensated for by increasing the lateral etch depths in the pulsed Bosch Deep Reactive Ion Etching, i.e. the effective dimensions were reduced by increasing the scallop sizes. This resulted in well functioning diffraction limited lenses with measured efficiencies of 30\% and 17\% at wavelengths $\lambda=1.55\,\mu$m and $\lambda=1.31\,\mu$m, respectively. Process optimization strategies are discussed to improve resist adhesion and resolve the issue of resist broadening. These strategies should lead to improved precision in reaching the desired Si pillar dimensions, which in turn is expected to raise the efficiency of the lenses. \section*{Funding} The research leading to these results has received funding from EEA Grants 2014-2021, under Project contract no. 5/2019. \section*{Disclosures} The authors declare no conflicts of interest.
\section{Introduction} Chaotic dynamics exists in many natural systems, such as heartbeat irregularities, weather and climate \cite{skinner1990, slingo2011}. Such dynamics can be studied through the analysis of proper mathematical models which generate nonlinear dynamics and deterministic chaos. Chaotic and regular dynamics can co-exist in the phase space of low-dimensional systems \cite{ott2002chaos}. To distinguish chaotic from regular dynamics, the tangent dynamics is used to compute Lyapunov exponents $\lambda$. In practice one integrates the tangent dynamics along a given trajectory and averages a finite time Lyapunov exponent $\lambda(t)$. The averaging time $T$ needed to reliably tell regular ($\lambda=0$) and chaotic ($\lambda \neq0$) trajectories apart is usually orders of magnitude larger than the Lyapunov time $T_{\lambda} \equiv 1/\lambda$. Here, we introduce a machine learning approach that alleviates the problems of calculating Lyapunov exponents and can be used as a new chaos indicator. Machine learning has shown tremendous performance, e.g. in pattern recognition \cite{dodge2017study, al2017review}. Machine learning approaches have proved useful for solving partial differential equations and identifying hidden physics models from experimental data~\cite{rudy2017,han2018, raissi2018}. Machine learning was used recently to predict future chaotic dynamics details from time series data without knowledge of the generating equations~\cite{agrawal2019, pathak2018}. In this paper, we introduce a machine learning way to use short time series data for telling chaos and regularity apart. We train a neural network using chaotic and regular trajectories from the Chirikov standard map. Our method has a success rate of 98\% using trajectories of length $10T_{\lambda}$, while conventional methods need up to $10^4 T_{\lambda}$ to reach the same accuracy. The main reason for the small but finite failure rate of our machine learning method is sticky orbits.
These orbits are chaotic, yet can mimic regular ones for long times due to trapping in fractal boundary phase space regions separating chaotic and regular dynamics. Our method is also surprisingly successful when trained with standard map data but tested on maps with different dimensions, such as the logistic map ($d=1$) and the Lorenz system ($d=3$). \section{The Chirikov Standard Map} The Chirikov standard map is an area-preserving map in dimension $d=2$ \cite{lichtenberg2013regular}, also known as the kicked rotor \cite{ott2002chaos}: \begin{equation} \begin{aligned} \label{standard_map} p_{n+1}=p_n+\frac{K}{2\pi}\sin(2\pi x_n) \qquad mod\ 1 \;,\\ x_{n+1}=x_n+p_{n+1} \qquad mod\ 1 \;. \\ \end{aligned} \end{equation} The kick strength $K$ controls the degree of nonintegrability and chaos appearing in the dynamics generated by the map. \begin{figure} [hbt!] \centering \includegraphics[width=\columnwidth]{chirikovmap.pdf} \caption{\label{standardmap_poincare}Examples of Poincar\'e sections of the standard map. (a) K=0.5, (b) K=1.0, (c) K=2.0, (d) K=2.5. } \end{figure} Consider the case $K=0$. Eq.~\ref{standard_map} reduces to $p_{n+1}=p_n \quad (mod\ 1)$ and $x_{n+1}=x_n+p_{n+1} \quad (mod\ 1)$, which is integrable, and every orbit resides on an invariant torus. The orbit can exhibit periodic or quasi-periodic behavior depending on the initial conditions ($p_0, x_0$). For small values of $K$, e.g. $K=0.5$ (Fig.\ref{standardmap_poincare}(a)), most of these orbits persist, with tiny regions of chaotic dynamics appearing which are not visible on the presented plotting scales. At $K=K_c\approx 0.97$ the last invariant KAM tori are destroyed and a simply connected chaotic sea is formed which allows for unbounded momentum diffusion. For larger values of $K$ the chaotic fraction grows, confining regular dynamics to regular islands embedded in a chaotic sea (Fig.~\ref{standardmap_poincare}). Further increase of $K$ leads to a flooding of the regular islands by the chaotic sea.
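The map iteration of Eq.~(\ref{standard_map}) can be sketched in a few lines; the initial condition below is an illustrative choice, not taken from the paper:

```python
# Illustrative sketch (not from the paper): iterate the Chirikov standard
# map of Eq. (standard_map) for kick strength K and initial condition (p0, x0).
import math

def standard_map_orbit(p0, x0, K, n_steps):
    """Return the orbit [(p_1, x_1), ..., (p_n, x_n)] of the standard map (mod 1)."""
    p, x = p0, x0
    orbit = []
    for _ in range(n_steps):
        p = (p + K / (2.0 * math.pi) * math.sin(2.0 * math.pi * x)) % 1.0
        x = (x + p) % 1.0
        orbit.append((p, x))
    return orbit

# e.g. a trajectory of length 100 starting at (p0, x0) = (0.3, 0.2) for K = 2.5
orbit = standard_map_orbit(0.3, 0.2, 2.5, 100)
```

Note that momentum is updated first and the updated $p_{n+1}$ enters the position update, exactly as in Eq.~(\ref{standard_map}).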
\section{Lyapunov exponents and predictions} The Lyapunov exponent (LE) characterizes the exponential rate of separation of a trajectory $\{p_n,x_n\}$ and its infinitesimal perturbation $\{\delta_n,\zeta_n\}$: \begin{equation} \begin{aligned} \label{nearby_map} p_{n+1}+\delta _{n+1}&=(p_n+\delta _{n})+\frac{K}{2\pi}\sin(2\pi (x_n+\zeta _{n})) \\ x_{n+1}+\zeta _{n+1}&=(x_n+\zeta _{n})+(p_{n+1}+\delta _{n+1}) \\ \end{aligned} \end{equation} Linearizing (\ref{nearby_map}) in the perturbation yields the tangent dynamics generated by the variational equations \begin{equation} \begin{aligned} \label{tan_map} \delta _{n+1}&=\delta _n + K \zeta _{n} \cos(2\pi x_{n}) \\ \zeta _{n+1}&=\zeta _n + \delta _{n+1} \\ \end{aligned} \end{equation} For computational purposes $\delta$ and $\zeta$ can be rescaled after any time step without loss of generality, while keeping track of the rescaling factor. The LE $\lambda$ for each trajectory is obtained from the time dependence of $\lambda_N$: \begin{equation} \label{lyapunov_exp} \lambda_{N}=\frac{1}{N}\sum_{n=2}^{N}\ln (\frac{\sqrt{\delta_{n}^{2}+\zeta_{n}^{2}}}{\sqrt{\delta_{n-1}^{2}+\zeta_{n-1}^{2}}}) \; , \; \lambda = \lim_{N\rightarrow \infty} \lambda_N \;. \end{equation} The Lyapunov time is then defined as $T_{\lambda} \equiv 1/\lambda$. For the main chaotic sea it is a function of the control parameter $K$. A suitable fitting function yields $\lambda \approx \ln(0.7+0.42K)$ \cite{harsoula2019characteristic}. For a regular trajectory $\lambda_N \sim 1/N$ and $\lambda=0$, at variance with a chaotic trajectory for which $\lambda_N$ saturates at $\lambda$ at a time $N\approx T_{\lambda}$. In practice, this saturation, and hence the value of $\lambda$, can be safely confirmed and read off only on time scales $N \approx 10^2 - 10^3\, T_{\lambda}$; on shorter time scales $\lambda_N$ does not reliably distinguish the two types of trajectories, see Fig.\ref{K1_0}.
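The finite-time exponent $\lambda_N$ of Eq.~(\ref{lyapunov_exp}) can be computed by iterating the map together with the variational equations (\ref{tan_map}), rescaling the tangent vector at every step; a minimal sketch with an illustrative initial condition:

```python
# Sketch of the finite-time Lyapunov exponent of Eq. (lyapunov_exp):
# iterate the map together with the variational (tangent) dynamics,
# rescaling the tangent vector each step and accumulating the log of the
# rescaling factors. The initial condition here is illustrative.
import math

def finite_time_lyapunov(p0, x0, K, N):
    p, x = p0, x0
    d, z = 1.0, 0.0                      # tangent vector (delta, zeta)
    log_sum = 0.0
    for _ in range(N):
        # variational equations, evaluated at the current point x_n
        d = d + K * z * math.cos(2.0 * math.pi * x)
        z = z + d
        # map dynamics
        p = (p + K / (2.0 * math.pi) * math.sin(2.0 * math.pi * x)) % 1.0
        x = (x + p) % 1.0
        # rescale to unit length, keeping the growth factor
        norm = math.hypot(d, z)
        log_sum += math.log(norm)
        d, z = d / norm, z / norm
    return log_sum / N

# for a chaotic orbit, lambda_N should approach lambda ~ ln(0.7 + 0.42 K)
lam = finite_time_lyapunov(0.3, 0.2, 2.5, 20000)
```

The rescaling keeps the tangent vector numerically bounded while preserving the accumulated growth, as described in the text.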
\begin{figure} [ht] \centering \includegraphics[width= 0.99 \columnwidth]{K1_0} \caption{\label{K1_0} $\lambda_N$ versus $N$ for a chaotic (triangles) respectively regular (squares) trajectory with $K=1.0$. The dashed horizontal line indicates the value of $\lambda$ for the chaotic trajectory, and the dashed vertical one the corresponding value of $T_{\lambda}$. } \end{figure} To quantify our statements, we run the standard map at $K=2.5$ (Fig.\ref{standardmap_poincare}(d)). We use a grid of $51 \times 51$ points which partitions the phase space $\{p,x\}$ into a square lattice. We use the corresponding 2601 initial conditions and generate trajectories. Each trajectory returns a function $\lambda_N$. We plot the resulting histograms for $N=20$ and $N=3\cdot10^5$ in Fig.\ref{histogram} (a) and (b) respectively. For $N\rightarrow \infty$ the histogram should show two bars only - one at $\lambda_N=0$ (all regular trajectories) and one at $\lambda_N=\lambda$ (all chaotic trajectories). For finite $N$ the distributions smoothen. Note that even negative values of $\lambda_N$ are generated due to fluctuations and finite averaging times. To tell chaotic and regular dynamics apart, we use the following protocol. We identify the two largest peaks in each histogram, and identify the threshold dividing dynamics into regular and chaotic as the deepest minimum between them (in case of a degeneracy, the one with the smallest value of $\lambda_N$). The location of the threshold is shown for $N=20$ and $N=3\cdot10^5$ in Fig.\ref{histogram} (a) and (b) respectively. We then assign a chaotic respectively regular label to each trajectory. This label can fluctuate as a function of time for any given trajectory. We use the division for the largest simulation time $N=3 \cdot 10^5$ as a reference ('true') label for all trajectories.
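The peak-and-minimum thresholding protocol described above can be sketched as follows; the bimodal toy data stand in for the actual $\lambda_N$ histograms and are our own illustrative choice:

```python
# Sketch of the thresholding protocol described above: histogram the
# finite-time exponents, find the two largest local-maximum peaks, and place
# the chaos/regular threshold at the deepest minimum between them
# (ties resolved towards the smallest lambda, as in the text).

def chaos_threshold(lambdas, bins=50):
    lo_v, hi_v = min(lambdas), max(lambdas)
    width = (hi_v - lo_v) / bins
    counts = [0] * bins
    for lam in lambdas:
        i = min(int((lam - lo_v) / width), bins - 1)  # clamp right edge
        counts[i] += 1
    # local maxima of the histogram are candidate peaks
    peaks = [i for i in range(bins)
             if (i == 0 or counts[i] >= counts[i - 1])
             and (i == bins - 1 or counts[i] >= counts[i + 1])]
    peaks.sort(key=lambda i: counts[i], reverse=True)
    a, b = sorted(peaks[:2])
    # deepest minimum between the two peaks; min() takes the first (smallest lambda)
    j = min(range(a, b + 1), key=lambda i: counts[i])
    return lo_v + (j + 0.5) * width

# deterministic bimodal toy data: "regular" orbits near 0, "chaotic" near 0.56
lam_values = [i * 0.00002 - 0.01 for i in range(1000)] \
           + [0.54 + i * 0.00004 for i in range(1000)]
thr = chaos_threshold(lam_values)
```

For the toy data the threshold lands in the gap between the two clusters, on the small-$\lambda$ side, matching the degeneracy rule quoted in the text.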
The success rate in predicting the correct regular $P_R$ or chaotic $P_C$ label is defined by the ratio of the correctly predicted labels within each subgroup of identical true labels. Likewise the success rate of predicting any label correctly is denoted by $P_{tot}$. The results are plotted versus time $N$ in Fig.\ref{histogram} (c). While regular labels are predicted with high accuracy, chaotic ones reach 98$\%$ only at $N \approx 10^3 T_{\lambda}$. The low success rate $P_C$ is therefore also lowering the total success rate $P_{tot}$. \begin{figure} [ht] \centering \includegraphics[width= 1.05 \columnwidth]{histogram.pdf} \caption{\label{histogram} Performance comparison of a Lyapunov exponent based method and a deep learning method to distinguish chaotic and regular trajectories for $K=2.5$ and $\lambda \approx 0.56$. (a) Histogram of $\lambda_{N=20}$. The dashed vertical line indicates the location of the threshold (see text for details). (b) Same as (a) but for $N=3\times 10^5$. (c) The success rates $P_R$, $P_C$ and $P_{tot}$ as a function of $N$ for the Lyapunov exponent based method (see text for details). (d) Same as in (c) but for the deep learning based method. The network was trained for $K = 2.5$ and 2081 trajectories. The remaining 520 trajectories are used for testing. $N$ in (d) represents the trajectory length used for network training and testing. $K_{min}=K_{max}=2.5$, $M_{tr}=2081$, $M_{tt}=520$, $N_{K}\equiv N$} \end{figure} \section{Neural networks and predictions} The input data of an artificial neural network consisting of only fully connected layers are limited to a one-dimensional (array) form~\cite{ramsundar2018tensorflow}. Fully connected layers connect all the inputs from one layer to every activation unit of the next layer. The standard map generates sequences embedded in two dimensions. In order to learn data embedded in dimensions two or larger, the data must be flattened, and spatial information can get lost.
A Convolutional Neural Network (CNN) is known to learn while maintaining the spatial information of images \cite{LeCun}. A CNN is usually configured with convolution and pooling layers. The former employ convolutions of the input data with filters to produce output feature maps. An additional activation function makes the network non-linear. At the end of the convolution layers a pooling layer is added which performs value extraction in a given pooling region. Through multiple convolution layers and pooling layers, the network can improve its prediction features. Finally, a fully connected layer generates the classified output data. For binary classification, the last layer consists of one node. Its output value is either zero or one. We refer the reader to Appendix \ref{app1} for further technical details of the CNN we use. \subsection{The standard map} The input of the neural network is a time series $(p_{n},x_{n})$ from Eq.~\ref{standard_map}. The trajectory $(p_{n},x_{n})$ shows regular or chaotic behavior depending on the initial values $(p_{0},x_{0})$. Each trajectory is assigned a class label based on the Lyapunov exponent method: Class $R$ corresponds to non-chaotic trajectories while $C$ corresponds to chaotic trajectories. We remind the reader that the phase space is discretized into $51\times 51 =2601$ grid points. The training and testing is quantified with a set of parameters: i) $K_{min}$ and $K_{max}$ denote the range of training values of $K$ on an equidistant grid with $M_K$ values; ii) $M_{tr}$ is the number of training trajectories per $K$ value; iii) $N_K$ is the training trajectory length; iv) $M_{tt}$ is the number of test trajectories per $K$ value. To quantify the CNN performance, we assign a discrete label to each of the initial phase space points - $C$ respectively $R$ - based on the Lyapunov exponent method with trajectory length $N=3 \cdot 10^5$.
This way we separate all phase space points into two sets - $C$ and $R$ - containing $A_C$ and $A_R$ points respectively. We then run the CNN prediction on trajectories of length $N=20$ which start from each of the gridded phase space points. We compute the accuracy quantifying probabilities \begin{equation}\label{accuracy} P_{C} = \frac{B_{C}}{A_{C}}, ~ P_{R} = \frac{B_{R}}{A_{R}}, ~ P_{tot}= \frac{B_{C}+B_{R}}{A_{C}+A_{R}} \end{equation} where $B_{C}$ and $B_R$ are the numbers of trajectories predicted by the CNN to be chaotic respectively regular within each of the true sets $A_C$ and $A_R$. Thus strictly $B_C \leq A_C$ and $B_R \leq A_R$. Fig. \ref{histogram}(d) compares the CNN performance to the standard Lyapunov-based one. Accuracies of 98\% and more are reached by the CNN for trajectory lengths $N_K \geq 30$. Similar accuracies need trajectory lengths $N \approx 10^4$ and more when using standard Lyapunov testing. Fig.\ref{standardmap_lyapunov} shows the CNN performance with $N_K=10$ in the phase space of the standard map. We observe that most of the failures correspond to chaotic trajectories starting in the fractal border region close to regular islands. These trajectories can be trapped for long times in the border region, with trapping time distributions exhibiting power law tails \cite{zaslavsky1998physics}. \begin{figure} [hbt!] \centering \includegraphics[width=\columnwidth]{standardmap_lyapunov.pdf} \caption{\label{standardmap_lyapunov} Chaos classification in the standard map. The Lyapunov exponent classification with trajectory length $N=3 \cdot 10^5$ is used as a reference classifier for $K=1$ (a) and $K=2$ (b). The CNN test results are shown for $K=1$ (c) and $K=2$ (d). Open circles - regular, gray circles - chaotic. Black circles show the error locations of the CNN prediction. The CNN parameters are $K_{min}=1.0$, $K_{max}=2.0$, $M_K=11$, $M_{tr}=2081$, $M_{tt}=520$, $N_K=10$.
} \end{figure} To quantify the performance of the CNN, we first vary $N_K$ from 1 to 20 (Table~\ref{accuracy_table}). The network is trained with chaotic and regular trajectories for $K_{min}=1.0$, $K_{max}=2.0$, $M_{K}=11$, and $1 \leq N_K \leq 20$, and the network performance is evaluated for $3 \leq K \leq 3.5$ and $M_{K}=6$. The CNN requires that the length of the test trajectories is always kept equal to the length of the training trajectories. Note that the Lyapunov time $T_{\lambda} \approx 2$ for the test values of $K$. The CNN shows improvement of the accuracy with increasing $N_K$. While the performance fluctuates with varying $K$, it shows excellent results already for moderate $N_K$ values and clearly outperforms the Lyapunov exponent based method. \begin{table*}[t] \centering \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \diagbox[width=11em]{K}{$N_K$} & $20$ & $18$ & $16$ & $14$ & $12$ & $10$ & $2$ & $1$ \\\hline ~ & $P_{C}$/$P_{R}$ & $P_{C}$/$P_{R}$ & $P_{C}$/$P_{R}$ & $P_{C}$/$P_{R}$ & $P_{C}$/$P_{R}$ & $P_{C}$/$P_{R}$ & $P_{C}$/$P_{R}$ & $P_{C}$/$P_{R}$\\ 3.0 & 0.99/0.99 & 0.93/0.98 & 0.95/0.98 & 0.92/0.98 & 0.97/0.96 &0.83/0.95&0.89/0.97&0.78/1.0 \\ 3.1 & 0.90/0.98 & 0.94/0.96 & 0.96/0.96 & 0.93/0.96 & 0.90/0.96 &0.83/0.91&0.90/0.93&0.79/1.0 \\ 3.2 & 0.93/0.95 & 0.94/0.97 & 0.96/0.97 & 0.93/0.97 & 0.97/0.94 &0.85/0.91&0.90/0.92&0.79/1.0 \\ 3.3 & 0.97/0.99 & 0.93/0.99 & 0.95/0.99 & 0.93/0.99 & 0.94/0.96 &0.85/0.93&0.89/0.98&0.77/1.0 \\ 3.4 & 0.94/0.99 & 0.89/0.97 & 0.94/0.96 & 0.92/0.98 & 0.93/0.97 &0.82/0.93&0.88/0.98&0.76/1.0 \\ 3.5 & 0.93/0.94 & 0.93/0.93 & 0.96/0.88 & 0.92/0.99 & 0.92/0.91 &0.83/0.92&0.87/0.94&0.76/1.0 \\ \hline \end{tabular} \caption{CNN performance. For each $K$ value, 2601 different initial values ($p_{0,i}, x_{0,j}$) were selected as $p_{0,i}=(i-1)\frac{1}{50}$, $x_{0,j}=(j-1)\frac{1}{50}$, with $i,j\in \mathbb{Z}$, $1 \leq i,j \leq 51$. Other parameters are listed in the main text.
} \label{accuracy_table} \end{table*} \begin{figure} [ht] \centering \includegraphics[width=\columnwidth]{Knum_rev.pdf} \caption{\label{Knum} Network performance versus $K$ for different numbers and ranges of trained $K$ values. (a), (b) Varying the number of $K$ values used for network training in a fixed interval with equidistant spacing ($K_{min}=0.1,\;K_{max}=3.1,\;M_{tr}=2081,\;M_{tt}=2601,\;N_K=20$). (black square) $M_K=4$. (red circle) $M_K=7$. (blue triangle) $M_K=16$. (magenta inverted triangle) $M_K=31$. (c), (d) Varying the interval of trained $K$ values. The ranges of $K$ values used in network learning are (black square) $K_{min}=1.0,\;K_{max}=3.7,\;M_K=28$, (red circle) $K_{min}=1.0,\;K_{max}=3.0,\;M_K=21$, (blue triangle) $K_{min}=1.0,\;K_{max}=2.5,\;M_K=16$, (magenta inverted triangle) $K_{min}=1.0,\;K_{max}=2.0,\;M_K=11$. The length of the input trajectories is $N_K=20$.} \end{figure} We then further test the CNN performance for untrained $K$ values by varying the training $K$ range and other relevant training parameters in Fig.~\ref{Knum}. The network shows better performance on untrained $K$ values when trained with a set of different $K$ values. As expected, smaller numbers of training $K$ values yield poorer accuracy due to overfitting. With an increasing number and range of training $K$ values, the network improves its chaos region predictions for untrained $K$ values. \subsection{Training with the standard map, testing the logistic map} We proceed with testing how the CNN trained with standard map data performs in predicting chaos for other maps. We choose the logistic map as a simple one-dimensional chaotic test bed. The logistic map is written as $x_{n+1}=rx_{n}(1-x_{n})$. The parameter $r$ controls the crossover from regular to chaotic dynamics, which happens at $r_c \approx 3.56995$. We use two training methods. The first one trains the network only with the $p_n$ data sequence from the standard map in Eq.~\ref{standard_map}. We refer to this trained network as 1D. 
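As a minimal sketch (not part of the original pipeline), the logistic map just defined can be iterated directly, and each trajectory can be labeled by the exact one-dimensional Lyapunov exponent $\lambda = \langle \ln|r(1-2x_n)|\rangle$, which is positive for chaotic $r$. Function names and parameters here are illustrative:

```python
import numpy as np

def logistic_trajectory(r, x0, n):
    """Iterate x_{n+1} = r x_n (1 - x_n) and return the length-n sequence."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        xs[i] = x
        x = r * x * (1.0 - x)
    return xs

def logistic_lyapunov(r, x0=0.3, n=10_000, burn=1_000):
    """Average of ln|f'(x)| = ln|r (1 - 2x)| along the orbit (chaos if > 0)."""
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += np.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n
```

For example, $r=3.9$ (beyond $r_c$) gives $\lambda>0$, while $r=3.2$ (a stable 2-cycle) gives $\lambda<0$.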
The second one is the original CNN discussed above, referred to here as 2D. As shown in Fig.~\ref{logistic}, the network mainly generates errors at the boundary of the chaotic region, similar to the standard map case. For $2.5 \leq r \leq 4.0$ the accuracy is 84\% for the 2D network and 90\% for the 1D network. \begin{figure} [ht] \includegraphics[width=200 pt]{logistic.pdf} \caption{\label{logistic} The result of predictions for the logistic map with a network trained on the standard map. The blue and red dots are the cases where the network correctly predicts chaotic and regular trajectories, respectively. The black dots show where the prediction fails. The network is trained with $K_{min}=1.0,\; K_{max}=2.0,\; M_K=11,\; M_{tr}=2081,\; M_{tt}=520,\;$ and $N_K=20$. (a) Test results for the 2D training (see text for details). (b) Test results for the 1D training (see text for details).} \end{figure} \subsection{Training with the standard map, testing the Lorenz system} Next we test the Lorenz system, which is three-dimensional, with a CNN trained on the two-dimensional standard map. The Lorenz system is discretized into the following map equations: \begin{equation} \begin{aligned} \label{lorenz_system} X_{n+1}=X_{n}+\sigma \Delta(Y_n-X_n),\\ Y_{n+1}=Y_{n}+\rho \Delta X_{n}-\Delta X_{n}Z_{n}-\Delta Y_{n},\\ Z_{n+1}=Z_{n}+\Delta X_{n}Y_{n}-\beta \Delta Z_n. \end{aligned} \end{equation} The parameters are $\sigma=10$, $\beta=\frac{8}{3}$, and time step $\Delta=0.001$. The chaos parameter $\rho$ was varied over $0 \leq \rho \leq 39.8$ in steps of 0.2. Because the network is trained with 2D data (standard map), the prediction is performed by selecting only two dimensions of the 3D Lorenz system ($(X_n,Y_n), (X_n,Z_n), (Y_n,Z_n)$). As Fig.~\ref{lorenz} (a) shows, using trajectories obtained from Eq.~\ref{lorenz_system} directly as a network input leads to most of them being classified as chaotic. 
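For reference, the standard Lorenz equations ($\dot X=\sigma(Y-X)$, $\dot Y=X(\rho-Z)-Y$, $\dot Z=XY-\beta Z$) discretized with an Euler step of this kind can be sketched as follows; the function name and initial condition are illustrative, with $\Delta=10^{-3}$ as quoted in the text:

```python
import numpy as np

def lorenz_map(n, rho, x0=(1.0, 1.0, 1.0), sigma=10.0, beta=8.0 / 3.0, dt=1e-3):
    """Euler-discretized Lorenz system; returns an (n, 3) trajectory array."""
    traj = np.empty((n, 3))
    X, Y, Z = x0
    for i in range(n):
        traj[i] = (X, Y, Z)
        # simultaneous update of all three components
        X, Y, Z = (X + sigma * dt * (Y - X),
                   Y + dt * (rho * X - X * Z - Y),
                   Z + dt * (X * Y - beta * Z))
    return traj
```

With this small step the iteration stays on the familiar bounded attractor for chaotic $\rho$ (e.g. $\rho = 28$), which is the raw, unnormalized input discussed next.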
We think this happens because the trajectory data of the standard map used for training are bounded between 0 and 1, but the trajectories from the Lorenz system are not. Input values that exceed these boundaries cause nodes in the network to activate regardless of the input characteristics. Therefore we normalize the input data from the Lorenz system. This leads to a drastic increase of accuracy, as shown in Fig.~\ref{lorenz} (b). We also tested the outcome when selecting only one dimension of the Lorenz system for the input vector. We find a strong reduction of the accuracy. We therefore conclude that the best performance is obtained when, for both training and testing data, the minimum of the two dimensions (training map, testing map) is chosen. \begin{figure} [ht] \includegraphics[width=\columnwidth]{lorenz_system.pdf} \caption{\label{lorenz} The result of predictions for the Lorenz system with a network trained on the standard map. The XY, XZ, YZ bars represent the dimensions of the Lorenz system used as input to the network trained with $(p, x)$ data from the standard map. The training conditions are $N_K = 20$, $K_{min}=1.0$, $K_{max}=2.0$, and $M_{K}=11$. The X, Y, Z bars represent the single dimensions of the Lorenz system used as input to the network trained with $p$ data only from the standard map. (a) Accuracy without normalizing the trajectories of the Lorenz system. (b) Accuracy when normalizing the trajectories of the Lorenz system. } \end{figure} \section{Conclusion} We trained convolutional neural networks with time series data from the two-dimensional standard map. As a result, the network can classify unknown short trajectory sequences as chaotic or regular with high accuracy. To reach accuracies of up to 98\% we need trajectory segments with length less than 5-10 Lyapunov times. Similar accuracies need 100-1000 times longer segments when using traditional classifiers based on measuring Lyapunov exponents. 
The main cause of errors is the fractal phase space structure at the boundaries between chaotic and regular dynamics. Trajectories launched in these regions are sticky and can mimic regular ones for long times, only to escape at even larger times into the chaotic sea. We also used a network trained with two-dimensional standard map data to classify chaotic and regular dynamics in one- and three-dimensional maps. Surprisingly high accuracy is reached when the training data are projected into one dimension for predictions on the one-dimensional logistic map, and when to-be-predicted data from the three-dimensional Lorenz system are projected onto two dimensions. We conclude that accuracy is optimized when the minimum of the two dimensions (training map, testing map) is chosen for both training and testing. \begin{acknowledgments} This work was supported by the Institute for Basic Science, Project Code IBS-R024-D1. SF thanks Konstantin Kladko for discussions during a visit to IBS, which led to the main idea of machine learning based chaos testing, and Natalia Khotkevych for early attempts to figure out a realization pathway. \end{acknowledgments}
\section{Introduction} Force and acceleration measurements find numerous applications in science and technology. In recent times, traditional sensors based on mechanical springs have been complemented by more elaborate systems based on non-contact electromagnetic effects. These include atomic force microscopes~\cite{PhysRevLett.56.930}, optical~\cite{RepProgPhys.72.076901,ClassQuantumGrav.32.074001,ClassQuantumGrav.32.024001,NatAstro.3.2397} and atom interferometers~\cite{nature.400.849,Metrologia.38.25,PhysRevA.91.033629,PhysRevA.88.043610}, and falling corner-cube-gravimeters~\cite{Metrologia.32.159}, each emphasizing a different aspect of the measurement and generally achieving excellent sensitivity and accuracy. Since the pioneering work of Ashkin~\cite{PhysRevLett.24.156,ApplPhysLett.19.283,ApplPhysLett.30.202,OptLett.11.288}, optical tweezers have also been used as force sensors, with initial applications to particles in liquid suspensions, primarily in biology~\cite{NatPhot.5.318, MethEnzymology.475.377, RevSciInstrum.75.2787} and polymer science~\cite{Science.264.819}. In the last decade, the path to trapping dielectric microspheres (MSs) in high vacuum was established~\cite{NatPhys.7.527}, and various groups have used the technique to obtain highly sensitive force sensors that are well isolated from the environment~\cite{PhysRevA.93.053801,ApplPhysLett.111.133111,PhysRevA.99.023816,PhysRevLett.109.103603,IntJModPhys.B27.1330018,NatPhys.12.806,ApplPhysLett.111.133111,2001.1093,NatNanotechnol.2.89,Science.367.6480}. Force sensors using optically trapped MSs have the ability to carry out measurements at distances of sub-millimeter scale. This can be achieved by inducing a force between the MSs and specially designed sources of the desired interaction, placed in close proximity to the trapped MS. 
This has been demonstrated in a few cases~\cite{PhysRevA.99.023816,PhysRevLett.117.101101,PhysRevA.98.053831,PhysRevA.98.013852}, where separations between MSs and attractors have reached the scale of several micrometers. In this paper, a multipurpose optical tweezer system, evolved from the apparatus described in Ref.~\cite{PhysRevA.97.013842} and optimized to search for new fundamental interactions at the micrometer scale, is described. The system is currently used to trap silica MSs of diameters 4.7 and 7.6~$\upmu$m and has achieved separations of 1.6~$\upmu$m between the surfaces of a MS and a nearby device with nominal noise conditions. The trap uses a single beam of 1064~nm wavelength with interferometric readout on all three degrees of freedom, as demonstrated in Ref.~\cite{PhysRevA.97.013842}. A number of features, most notably a final focusing and recollimation employing off-axis parabolic mirrors, have been introduced to minimize beam halos at the focus, enabling closer access to the trapping region with minimal distortion of the optical field. A large vacuum chamber allows for the introduction of several motorized actuators, important for the manipulation of devices near the trapped MSs under vacuum. Three-dimensional metrology of these devices around the trap region is provided by two orthogonal microscopes. The readout of the polarization state of the trapping light after its interaction with the MS is used to measure the rotation of trapped MSs, owing to their residual birefringence. Rotation of the MS can be induced by producing a rotating electric field that couples with the permanent electric dipole moment generally present in the silica MSs used~\cite{PhysRevA.99.041802}. Trap stabilization against long term drifts of the interferometric platform affords a noise spectrum that is flat down to ${\sim}1$~Hz. 
Along with its primary motivation to search for new interactions at the micrometer scale~\cite{PhysRevLett.105.101101}, the system described may be used for the investigation of Casimir forces~\cite{Casimir,PhysRevLett.78.5,PhysRevLett.81.4549,PhysRevA.62.062104,PhysRevLett.88.041804,PhysRevLett.91.050402,EPL.112.44001} and other applications~\cite{PhysRevLett.110.071105,ClassQuantumGrav.37.075002,PhysRevD.99.023005} requiring extreme force sensitivity. \section{Optics Setup} \label{OpticsSetup} The 1064~nm trapping light is produced using a distributed Bragg reflector laser (Innolume LD-1064-DBR-150) to seed a ytterbium-doped fiber amplifier (Thorlabs YDFA100P), resulting in a maximum power of 100~mW. The production of the trapping beam and the reference beams for the heterodyne detection system makes use of fiber optic components based on single-mode PM980 fiber or equivalent, as shown in Fig.~\ref{LaserSystem}. Light from the fiber amplifier first goes through a 50:50 fiber-coupled polarization maintaining (PM) beam splitter. The two output channels of this splitter are independently frequency shifted by ${\sim}150$~MHz to a final frequency difference of 125~kHz, using two fiber-coupled acousto-optic modulators (AOMs, Gooch and Housego T-M150-0.4C2G-3-F2P). One channel, the trapping beam, is then launched to free space for further manipulation, while the second channel is subsequently split into two halves to produce reference beams for the detection of the vertical ($z$) and horizontal ($x-y$) MS positions. In order to passively stabilize its temperature, the entire fiber optics system is heat sunk to the 4,000~kg granite table on which the trap is located, and embedded in foam to decouple it from the ambient air, which helps to reduce long-term drift of the interferometric readout. Continuous temperature measurements show that the laboratory air conditioning system maintains the air temperature at $23.0~^{\circ}$C$\pm0.5~^{\circ}$C. 
\begin{figure}[!tb] \includegraphics[width=1\columnwidth, bb=0 0 802 210]{LaserSystemRev5.pdf} \caption{Schematic depiction of the laser system: all of the components shown are fiber-coupled with PM fibers, and arrows indicate the fiber outcouplers to free space, shown in Fig.~\ref{OpticsSystem}. DBR: distributed Bragg reflector laser and AOM: fiber-coupled acousto-optic modulator.} \label{LaserSystem} \end{figure} \begin{figure*}[!tb] \includegraphics[width=1.9\columnwidth, bb=0 0 802 325]{ExperimentSetupRev8.pdf} \caption{Simplified free-space optics system: only the essential components are drawn, and some mirrors are omitted. Each reference beam has a pair of mirrors (not shown) between the fiber outcouplers and the NBSs to align their wavefronts to that of the trapping beam. The parabolic mirrors and MS are shown from a side view, while the remainder of the components are shown from a top view. The two auxiliary microscopes described in Section~\ref{SectionCamera} operate independently from the trap and are not shown here. BS: beam sampler, PBS: polarizing beam splitter, NBS: nonpolarizing beam splitter (50:50), HWP: half waveplate, QPD: quadrant photodiode, PD1-3: photodiodes, and XYM: $x-y$ microscope. } \label{OpticsSystem} \end{figure*} Both the input and output free-space optical systems are each mounted on a $60\times30$~cm$^2$ breadboard. The breadboards themselves are the actuated elements of six-axis stages so that, once internal alignment between components on the breadboards is achieved, they can be collectively adjusted relative to the trap, which is directly mounted on the vacuum chamber on the granite table (see Section~\ref{Vacuum}). The trap and the surrounding optics are schematically illustrated in Fig.~\ref{OpticsSystem}. The trapping beam is first launched to free space and collimated by a single aspheric lens (Thorlabs PAF2A-A15C), resulting in a 1/e$^2$ beam radius of $w=1.35$ mm. 
A part of the beam (${\sim}9\%$) is sent to a photodiode (PD, Thorlabs DET100A2) by a beam sampler (CVI W1-IF-1012-UV-1064-0) in order to monitor the power of the trapping beam. The rest of the beam passes through an optical isolator consisting of two polarizing beam splitters (PBSs, CVI PBS-1064-100), a Faraday rotator (Electro-Optics Technology 110-10299-0001-ROT), and a half waveplate (Newport 10RP02-34). The light is horizontally polarized after the optical isolator. The trapping beam is then sent to a piezoelectric deflector that steers the beam along two orthogonal axes and is used to provide feedback to the $x-y$ translational degrees of freedom. The deflector, placed in a Fourier plane of the trap, employs a 7~mm diameter mirror (Edmund Optics \#34-370) glued to a high bandwidth actuator (Thorlabs ASM003). The mechanical resonant frequency of this assembly is ${\sim}2$~kHz and feedback is applied predominantly at frequencies below ${\sim}1$~kHz. Finally, the beam is expanded with a 1:4 telescope using $d=5$~cm and $d=20$~cm lenses (Thorlabs AL2550H-B and Newport PAC32AR.16), with $d$ being the focal length, yielding $w=4.84$~mm, slightly smaller than the expected 5.4~mm. All vacuum chamber viewports are made from 5~cm diameter windows (CVI W2-IF-2037-UV-1064-0 for the trapping beam and CVI W2-IF-2037-UV-633-1064-0 for auxiliary imaging, see Section~\ref{SectionCamera}) custom-mounted with a tilt of 5$^{\circ}$ and sealed with O-rings. The trapping beam is focused inside the vacuum chamber by a $d=50.8$ mm off-axis parabolic mirror (Edmund Optics \#35-507). The mirror sits on a five-axis stage (Newport 9082-V), where rotation around the $x$ axis is the non-adjustable degree of freedom. Reflective optics are preferred here, owing to the absence of spurious reflections from optical interfaces in refractive optics. 
The 1/e$^2$ beam radius at the trap focus $w_0$ is estimated with a knife-edge method, making use of a nanofabricated device mounted on the main nanopositioning stage (see Section~\ref{NanoposStagePorts}). The measured beam size is $w_0=3.4$~$\upmu$m (3.1 $\upmu$m) in the $x$ ($y$) direction, as shown in Fig.~\ref{BeamProfileAtFocus}, averaged to 3.2 $\upmu$m corresponding to a Rayleigh range of 31 $\upmu$m. This is in reasonable agreement with the theoretical prediction of 3.5~$\upmu$m from the measured numerical aperture of 0.095. A careful choice of the optical components, the use of 5~cm aperture optics after expanding the beam radius, and a careful alignment are important to achieve close to nominal performance. It is estimated that the residual imperfections and astigmatism ($x$ and $y$ foci displaced by ${\sim} 10~\upmu$m in $z$) are dominated by nonideal alignment. The diverging beam emerging from the trapping region is recollimated by an identical parabolic mirror, also mounted on the same type of five-axis stage as the first parabolic mirror. The trapping beam exits the vacuum chamber horizontally polarized, while vertically polarized light is separated with a PBS (Edmund Optics \#65-606) and projected onto a PD (Thorlabs DET100A2) in order to monitor the MS rotation~\cite{PhysRevA.99.041802}. After being extracted from the vacuum chamber, the trapping beam passes through a 2:1 telescope composed of a $d=20$~cm and a $d=10$~cm lens (Newport PAC32AR.16 and Thorlabs LA1509-C), and is combined in a 50:50 non-polarizing beam splitter (NBS, Newport 10BC17MB.2) with a reference beam. The resulting superposition of light is projected onto a quadrant photodiode (QPD, Hamamatsu S5980). The 2:1 telescope coarsely matches the mode of light transmitted through the MS to that of the reference beam. 
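As a consistency check (not part of the original analysis), the focal waist and Rayleigh range quoted above follow from the standard Gaussian-beam relations $w_0 \approx \lambda/(\pi\,\mathrm{NA})$ and $z_R = \pi w_0^2/\lambda$:

```python
import math

# Gaussian-beam relations for the numbers quoted in the text (lambda = 1064 nm)
wavelength = 1.064e-6          # m
NA = 0.095                     # measured numerical aperture

# Expected 1/e^2 focal waist for an ideal Gaussian beam
w0_expected = wavelength / (math.pi * NA)          # ~3.6 um, cf. 3.5 um quoted

# Rayleigh range for the measured average waist of 3.2 um
w0_measured = 3.2e-6
z_R = math.pi * w0_measured**2 / wavelength        # ~30 um, cf. 31 um quoted
```

Both values agree with the text at the ten-percent level, consistent with the stated residual astigmatism and alignment imperfections.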
The counter-propagating beam, retroreflected by the surface of the MS, is extracted from the rejection port of the optical isolator on the input optics side, combined with the second reference beam using a 50:50 NBS (Newport 10BC17MB.2), and projected onto a PD (Thorlabs DET100A2). Both reference beams are launched to free space by outcouplers with adjustable beam size and focal length (Thorlabs ZC618APC-C), which allow further optimization of the mode-matching to the transmitted and reflected components of the trapping beam. Prior to the NBSs, each reference beam undergoes two reflections so that the position and angle of incidence on the PD or the QPD can be adjusted. As mentioned, the output optics are also mounted on a breadboard that allows for the overall alignment with respect to the trap. Both input and output free-space optics are enclosed in lens tubes to reduce air currents and microphonic effects (except for the two motorized mirrors in the reference beam path for the $x-y$ position detection). Each of the six-axis stages holding the breadboards is entirely enclosed in an acrylic box to further suppress noise. \section{Data Acquisition and Feedback} On the PD dedicated to the $z$ position of the MS (PD2 in Fig.~\ref{OpticsSystem}), and the QPD dedicated to the $x-y$ position of the MS, the incident optical power is modulated by the $\Delta f = 125$~kHz frequency shift between the trapping and reference beams. The five photocurrents corresponding to PD2 and each quadrant of the QPD are individually amplified and then digitized at a sampling frequency $f_s = 500$~kS/s, exactly four times $\Delta f$. By phase-locking the radio frequency (RF) synthesizers driving the AOMs to the master clock driving the analog-to-digital converters (ADCs), real-time estimates of the amplitude and phase of the oscillating photocurrent are obtained. 
Every sample $A_i$ is spaced by a quarter period of the beat note, and thus, \begin{align} A_i = G R_{\rm t} I_{\rm photo} \begin{cases} \text{sin}(\phi) & i=1,5,... \\ -\text{cos}(\phi) & i = 2,6,... \\ -\text{sin}(\phi) & i=3,7,... \\ \text{cos}(\phi) & i=4,8,... \end{cases}, \label{eq:demod} \end{align} \noindent where $G$ is the unitless voltage-gain of the amplifier circuit, $R_{\rm t}$ is the transimpedance resistance used to convert the photocurrent $I_{\rm photo}$ to a voltage, and $\phi$ is an arbitrary, but fixed, phase offset between the photocurrent and the digitizer's master clock. The phase and amplitude of this signal can be different for each quadrant and for the $z$-position PD. The amplitude $G R_{\rm t} I_{\rm photo}$ is estimated as $G R_{\rm t} I_{\rm photo} = (A_i^2 + A_{i-1}^2)^{1/2}$, while the phase is estimated as $\phi = \tan^{-1} \left[ A_1 / (-A_2) \right]$, or as $\phi = \tan^{-1} \left[ -A_3 / (-A_2) \right]$ etc., where the appropriate negation and ratio repeats every fourth sample, as seen in Eq.~(\ref{eq:demod}). This procedure is often referred to as digital demodulation. Since displacements of the MS in the horizontal ($x-y$) plane at the trap produce displacements of the transmitted trapping beam in the plane of the QPD, while the reference beam is fixed, the $x-y$ degrees of freedom are read out as imbalances of the interference photocurrent amplitude across the QPD quadrants, normalized by the total photocurrent amplitude. Vertical ($z$) displacements of the MS change the optical path length of the retroreflected light and are read out directly from changes in the phase of the interference photocurrent from PD2. The aforementioned demodulation and construction of the $x$, $y$, and $z$ signals takes place within a field programmable gate array (FPGA), embedded with ADCs (National Instruments PXIe-7858R). 
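The quarter-period demodulation of Eq.~(\ref{eq:demod}) can be sketched offline as follows; this is a minimal illustration, not the FPGA implementation, and the synthetic beat note is generated with the sign convention of Eq.~(\ref{eq:demod}):

```python
import numpy as np

def demodulate(samples):
    """Recover amplitude and phase of a tone sampled at exactly 4x its
    frequency; consecutive samples follow Eq. (demod):
    A_1 = A sin(phi), A_2 = -A cos(phi), A_3 = -A sin(phi), A_4 = A cos(phi)."""
    A1, A2 = samples[0], samples[1]
    amplitude = np.hypot(A1, A2)        # sqrt(A1^2 + A2^2) = A
    phase = np.arctan2(A1, -A2)         # tan(phi) = A1 / (-A2)
    return amplitude, phase

# Synthetic beat note at delta_f = 125 kHz sampled at fs = 4 * delta_f;
# the minus sign in the argument reproduces the sample ordering above.
delta_f, fs = 125e3, 500e3
amp_true, phi_true = 0.7, 0.4
n = np.arange(8)
samples = amp_true * np.sin(phi_true - 2 * np.pi * delta_f * n / fs)
```

Running `demodulate(samples)` returns the injected amplitude and phase, since each adjacent sample pair forms an exact quadrature pair.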
This FPGA also computes the stabilizing feedback and generates it on similarly embedded digital-to-analog converters; the feedback signals are sent to the two orthogonal axes of the piezoelectric deflector for $x$ and $y$ and to the amplitude modulation port of the RF synthesizer driving the trapping beam AOM for $z$. The FPGA is connected to a host computer to which it transfers the demodulated amplitudes and phases of PD2 and each QPD-quadrant, together with the position estimates and the generated feedback for offline analysis. While in the $x$ and $y$ directions interferometry is used only to suppress stray light from sources displaced from the trap center, for the $z$ position it is an essential feature that allows the trap to operate with a single beam, affording unimpeded access in the horizontal plane~\cite{PhysRevA.97.013842,PhysRevA.99.023816}. Temperature drifts affect the optical path length of all light used for interferometry. In the $x$ and $y$ degrees of freedom, the amplitude of interference photocurrent in each quadrant of the QPD is, at first order, independent of the phase $\phi$. Fluctuations in the optical power are also suppressed by normalizing the $x$ and $y$ estimates by the total photocurrent amplitude. In the $z$ degree of freedom, optical path length fluctuations propagate directly into the estimate of the $z$ position. These fluctuations are attributed to both air currents and residual temperature drifts and have, roughly, a $1/f$ spectrum. They are suppressed passively by the heat sinking, insulation, and enclosures mentioned in Sec.~\ref{OpticsSetup}, and actively by using the image reconstructed by the $y-z$ microscope, described in Section~\ref{SectionCamera}. This latter technique addresses temperature fluctuations with periods of minutes. 
\begin{figure}[!tb] \includegraphics[width=1\columnwidth,bb=0 0 567 567]{BeamProfile0446XYForPaperRev_20190327.pdf} \caption{Beam profile at the trap for the $x$ and $y$ directions: the black lines are the data, and the orange curves are fits to Gaussian functions. The 1/e$^2$ beam radii are 3.4 $\upmu$m in the $x$ direction and 3.1 $\upmu$m in the $y$ direction. During the measurement, the nanofabricated device approaches the beam from the negative side. Coordinates of the $x$ and $y$ axes are determined by strain gauges in the piezoelectrically driven flexures (see Section~\ref{NanoposStagePorts}). } \label{BeamProfileAtFocus} \end{figure} \section{Vacuum system and trap mechanics}\label{Vacuum} The optical trap is located within a $42.5\times42.5\times34.26$~cm$^3$ rectangular aluminum vacuum chamber sealed on all six sides with International Organization for Standardization (ISO) flanges and Viton gaskets, which simplify access to the trap. High vacuum is achieved primarily with a 250~l/s turbomolecular pump (TMP, Pfeiffer HiPace 300), roughed by a scroll pump (Edwards XDS35i) located in a separate room to reduce the acoustic noise. Although the use of Viton gaskets limits the attainable vacuum level to ${\gtrsim} 10^{-8}$ hPa, the actual base pressure of $2.4\times10^{-7}$~hPa achieved after a few days of pumping is thought to be limited by the outgassing of the motorized stages, some of which have stated vacuum compatibility of $10^{-6}$ hPa. A system bakeout is impractical with the current setup. A high conductance port is available directly at the top of the chamber for the future installation of a large getter pump (SAES CAPACITORR HV1600) to improve the vacuum level. The current vacuum level does not limit the performance of the system, and data presented here are collected when the pressure reaches ${\lesssim} 10^{-6}$~hPa. The bottom ISO-320 flange rests on an adapter to the granite table and contains a zero-length reducer from ISO-320 to ISO-100. 
The pumping system is connected to this flange and extends to the space below the granite table through a hole. A ceramic break (MDC 9632010) and a 10 cm long bellow are located between the pumping system and the main chamber for electrical and vibration isolation, respectively. The TMP can be isolated from the main chamber by a gate valve (MDC GV-4000M-P-01), while a custom tee between the chamber and gate valve connects a low conductance bypass for slow pumping at low vacuum with only the scroll pump. The bypass is throttled by a manual leak valve (Dunway VLVE-1000) and can be turned off by a pneumatically driven on-off valve (VAT 28324-GE41-0002/0068). Leaking N$_2$ gas into the chamber is accomplished with an electrically controlled leak valve (MKS 148JA53CR1M) together with a pneumatically driven on-off valve (US Solid PSV00032). The vacuum system is monitored by a full range vacuum gauge (Pfeiffer PKR 251), as well as a residual gas analyzer (MKS eVision+). However, the use of these devices is found to affect the charge state of the trapped MSs, and therefore, during charge-sensitive measurements, a capacitance manometer (MKS 627FU2TLE1B), with a minimum measurable pressure of $2 \times 10^{-5}$~hPa, is used to monitor the vacuum level. All gauges are mounted on conflat (CF) flange ports on an adapter nipple at the top of the main chamber. The rotational dynamics of the MS can also be used to measure the vacuum level, as has been demonstrated with the predecessor apparatus~\cite{JVSTB.38.024201}. \begin{figure}[!tb] \includegraphics[width=1\columnwidth, bb=0 0 4032 3024]{IMG_4518Rev.jpg} \caption{A photo of the components inside the vacuum chamber: the gold-coated cube in the middle (A) is the exterior of the six pyramidal electrodes surrounding the trapping region. The black diagonally-cut cylinder above the electrodes (B) is the recollimating parabolic mirror, which sends the collimated trapping beam to the PBS on the right (C). 
A small part of the optical surface of the focusing parabolic mirror is barely visible (D), under the cube. The motorized stage in the foreground of the PBS (E) supports the dropper (see Section~\ref{TrapOperation}), not installed in this image. } \label{InsidePhoto} \end{figure} Components inside the vacuum chamber are mounted on a 25.4 mm lattice of 1/4-20 screw holes on the interior of the bottom ISO-320 flange. A view of the components inside the chamber is shown in Fig.~\ref{InsidePhoto}. The trapping region is surrounded by six identical electrodes shaped as truncated pyramids. The electrode faces are 4.3~mm away from the trap center, forming a cubical cavity of 8.6~mm side, with narrow gaps between electrodes. Each electrode is hollowed out and, on the trap end, terminates with a 5.3~mm diameter aperture providing optical and mechanical access to the center. A cross section in the $x-y$ plane and at the nominal $z$ position of the trap is shown in Fig.~\ref{TrapRegionFig}. In addition to the shielding against stray electric fields, each electrode is electrically isolated and can be independently biased in order to apply electrical forces and torques to the MSs. Torque can be applied with a constant-magnitude rotating electric field, generated by four phased sinusoids on four coplanar electrodes, which couples to the residual electric dipole moment generally present in the silica MSs used here~\cite{PhysRevA.99.041802,JVSTB.38.024201}. The electric field within the trapping region is calculated using finite element analysis for any configuration of electrode biases. The entire electrode assembly and mounting structure are composed of 6061 aluminum alloy and gold coated by electrodeposition. Surfaces facing the trap region, as well as the conical cavities of the top and bottom electrodes through which the trapping beam propagates, are further coated with colloidal graphite (Electron Microscopy Sciences 12660) to reduce the scattering of stray light. 
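The rotating-field drive just described can be illustrated with a hedged sketch: four 90$^{\circ}$-stepped sinusoids on the two coplanar electrode pairs give a field of constant magnitude. The drive amplitude, frequency, and the geometry factor `g` relating voltages to field are placeholders, not values from the text (the actual voltage-to-field map is obtained from finite element analysis):

```python
import numpy as np

def rotating_field(t, V0=1.0, omega=2 * np.pi * 1e3, g=1.0):
    """Four coplanar electrodes (+x, -x, +y, -y) driven with phased sinusoids;
    the differential pair voltages set the in-plane field components.
    g is a hypothetical geometry factor (V -> field)."""
    v = {"+x": V0 * np.cos(omega * t), "-x": -V0 * np.cos(omega * t),
         "+y": V0 * np.sin(omega * t), "-y": -V0 * np.sin(omega * t)}
    Ex = g * (v["+x"] - v["-x"]) / 2.0     # proportional to cos(omega t)
    Ey = g * (v["+y"] - v["-y"]) / 2.0     # proportional to sin(omega t)
    return Ex, Ey
```

Since $(E_x, E_y) \propto (\cos\omega t, \sin\omega t)$, the field vector rotates at $\omega$ with fixed magnitude, applying a constant torque to a permanent dipole.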
\section{Nanopositioning Stage Ports} \label{NanoposStagePorts} The system is designed to provide stable and reproducible access to the trapping region for devices with dimensions in the $\upmu$m to mm range. Mechanical access is realized through the holes in two of the four electrodes in the horizontal plane, shown at the bottom and right of Fig.~\ref{TrapRegionFig}. In the current configuration, these house the main nanopositioning stage and an auxiliary nanopositioning stage, respectively. \begin{figure}[!tb] \includegraphics[width=1\columnwidth,bb=0 0 604 487]{TrapRegionRev6.pdf} \caption{Components surrounding the trap in the $x-y$ plane: the main NP stage carries the nanofabricated device described in Section~\ref{NanoposStagePorts} mounted on its end. The y-z microscope objective is described in Section~\ref{SectionCamera}, and the fiber for UV light is described in Section~\ref{TrapOperation}. (Aux) NP stage: (auxiliary) nanopositioning stage.} \label{TrapRegionFig} \end{figure} The main nanopositioning stage consists of a stack of piezoelectrically driven flexures (Aerotech QNP40Z-100 for $z$ mounted on top of Aerotech QNP60XY-500 for $x-y$), which has a full range of $500~\upmu$m in $x$ and $y$, and $100~\upmu$m in $z$, with a resolution better than 1~nm and reproducibility of ${\sim}2$~nm. The actual positions along each axis are measured by strain gauges within the actuator and subsequently recorded together with the MS position. This actuator is mounted on a custom six-axis stage for coarse alignment to the trap center, with a long travel along the $x$ axis, required for insertion of the primary device into the trapping region. Four degrees of freedom are manually operated, and one degree of freedom is actuated with a piezoelectric motor (Newport 8301-UHV), while the translation along the $x$ axis is accomplished with a DC servo motor with 12~mm range (Thorlabs Z812V) for high repeatability. 
On top of the stack of piezoelectrically driven flexures, a gold-coated aluminum conical cantilever is mounted. To reduce mechanical load and optimize the bandwidth of the flexure's motion, the conical cantilever is hollowed out with a final wall thickness of 0.66 mm, resulting in a mass of 3.13 g. The entire assembly is shown in Fig.~\ref{AttractorMount}. With the cantilever installed, the piezoelectrically driven flexure has a measured bandwidth of ${\sim}80~(100)$ Hz in the $x$ ($y$) direction, limited by mechanical resonances, and an estimated bandwidth of ${\sim}500$~Hz in the $z$ direction. The auxiliary nanopositioning stage consists of a three-axis piezoelectrically driven flexure (Newport NPXYZ100SGV6) with $100~\upmu$m range in each of the three orthogonal directions. For insertion into the electrode structure, a motorized stage with 12~mm range (Newport AG-LS25V6) is mounted on top of the three-axis actuator. A second, identical conical cantilever is mounted on this motorized stage for a secondary device to be inserted into the trap orthogonally to the main device. Further coarse alignment and angular adjustment are provided by another manually adjusted platform (Newport 9071-V) onto which this assembly is mounted. The secondary system is intended for static use. \begin{figure}[!tb] \includegraphics[width=1\columnwidth, bb= 0 0 1615 1104]{AttractorMountCADRev3.pdf} \caption{CAD model of the conical cantilever (A) onto which the primary silicon device is mounted, the piezoelectrically driven flexures (B), and the custom six-axis stage (C) for coarse alignment and device insertion.} \label{AttractorMount} \end{figure} Nanofabricated devices, e.g., Ref.~\cite{AttractorPaper}, are mounted at the ends of the two cantilevers with a conductive epoxy (Epo-Tek H21D). 
Typical devices have a high aspect ratio, being wide and long in the two horizontal dimensions (${\sim}1~$mm) and thin (10$~\upmu$m to 25$~\upmu$m) in the vertical dimension, in order to minimize their effect on the converging and diverging trapping beam. Devices are typically gold-coated to optimally define their electrical potentials (although charge patches are still present~\cite{PhysRevA.99.023816}). Both devices can be independently biased, allowing the production of electric fields with large gradients in the immediate vicinity of a trapped MS~\cite{PhysRevA.99.023816,PhysRevLett.117.101101}. Each can also be used as a knife edge to scan across the beam and characterize its radius, center position, and focal point, as noted in Section~\ref{OpticsSetup} (see Fig.~\ref{BeamProfileAtFocus}) and demonstrated with a previous system~\cite{PhysRevA.97.013842}. Typically with this technique, the center position of the trapping beam relative to the nanofabricated device is determined with a precision of ${\sim}0.1~\upmu$m, where the device's position is measured with strain gauges within the piezoelectrically driven flexure on which it is mounted. The coordinate system of the device is then registered to the position at which the edge of the device crosses the beam focus (see Fig.~\ref{BeamProfileAtFocus}). Additionally, the size of these particular MSs is known with a precision of 0.04~$\upmu$m [49]. Thus, the distance between the surface of the sphere and the surface of the nanofabricated device is determined by the relative positions of the trap and the nearby device, and the size of the MS, assuming the MS is trapped at the beam focus. The closest separation achieved between the surface of a MS and a nearby device is $1.6~\upmu$m, at which position the device can be translated in front of a stably trapped MS. 
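The knife-edge technique can be illustrated with a short analysis sketch (hypothetical simulated data, not the apparatus code): as the edge of a device scans across a Gaussian beam, the transmitted power traces out an error-function profile, from which the beam radius and center are extracted by a fit.

```python
# Illustrative knife-edge analysis (simulated data; not apparatus code).
# An edge at position x cutting a Gaussian beam of 1/e^2 radius w centered
# at x0 transmits P(x) = (P0/2) * erfc(sqrt(2) * (x - x0) / w).
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def knife_edge(x, p0, x0, w):
    return 0.5 * p0 * erfc(np.sqrt(2.0) * (x - x0) / w)

# Simulated scan in micrometers with "true" x0 = 1.0 um and w = 3.2 um.
rng = np.random.default_rng(0)
x = np.linspace(-10.0, 10.0, 81)
power = knife_edge(x, 1.0, 1.0, 3.2) + 0.005 * rng.standard_normal(x.size)

popt, pcov = curve_fit(knife_edge, x, power, p0=[1.0, 0.0, 2.0])
p0_fit, x0_fit, w_fit = popt
# Repeating the scan at several z positions and fitting w(z) to the Gaussian
# expansion w(z) = w0 * sqrt(1 + (z / zR)^2) locates the beam focus.
```

Because the stage position entering such a fit is read out with nanometer-scale resolution, the fitted center can be localized well below the beam radius, consistent with the ${\sim}0.1~\upmu$m precision quoted above.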
The orientation and position of the devices on the two nanopositioning stages relative to the trap are determined by a combination of their interactions with the trapping beam and the auxiliary $y-z$ and $x-y$ microscopes, as discussed in Sec.~\ref{SectionCamera}. The relative position between the two devices can be registered by bringing them into contact, which is clearly visible through their elastic deformation. This has been found to be reproducible to within ${\lesssim}1~\upmu$m. \section{Auxiliary Imaging Systems} \label{SectionCamera} The setup is equipped with two separate systems for auxiliary imaging and metrology near the trapping region. The primary system images the $y-z$ plane of the trap through one of the electrodes in the $x-y$ plane, as shown in Fig.~\ref{TrapRegionFig}. An infinity-corrected microscope objective (Nikon N10X-PF) is mounted, in vacuum, on a motorized linear stage with 12~mm range (Newport AG-LS25V6) to adjust the focal plane. The image from the objective is extracted through a viewport and focused by a $d=20$~cm lens onto a CMOS camera (Allied Vision MAKO U-030B). A trapped MS is visible in Fig.~\ref{SideMicroscopeImage} due to scattered light from the 1064~nm trapping beam. In the same image, the nanofabricated devices near the trap are illuminated with an 870~nm wavelength light-emitting diode (LED) (Thorlabs LED870E), injected into the imaging system through a 50:50 NBS (Newport 10BC17MB.2) located in the path between the microscope objective within the vacuum chamber and the $d=20$~cm lens. Additionally, filters can be inserted in the optical path to selectively attenuate different wavelengths and optimize the visibility of various components. The combination of the illumination and a filter blocking 1064~nm light provides an additional way of imaging the MS, which casts a shadow on the device surfaces behind it. 
While one pixel of the camera spans $0.5~\upmu$m in the image plane, the resolution of the imaging system is close to the diffraction limit ($1.5~\upmu$m for 870~nm light), verified using a USAF1951 resolution test chart. The entire imaging system can be cross-calibrated into the coordinate system of the piezoelectrically driven flexures by imaging objects of known size (e.g. the nanofabricated devices) within the trapping region. This $y-z$ microscope is also used in a slow loop of the feedback maintaining a constant vertical position of the MS. The stability of this slow feedback is better than $1~\upmu$m, using a single image with an exposure time of $44~\upmu$s taken every 10~s. The second imaging system provides a view of the $x-y$ plane at the $z$ position of the trap through the path for vertically polarized light, as shown in Fig.~\ref{OpticsSystem}. Half of the vertically polarized light from the output parabolic mirror is focused through a $d=7.5$~cm lens into an infinity-corrected objective, mounted on a manual linear translation stage to adjust the focal plane. The system is designed such that the objective can be switched to any model with a diameter of less than 3~cm, though a $\times 10$ magnification objective (Reichert Neoplan 1754) with a tube length of 16~cm is currently in use. Therefore, a broad range of magnifications is achievable. The image from the objective is focused by a second $d = 3$~cm lens onto another CMOS camera (Allied Vision MAKO U-029B). The field of view is illuminated by 870~nm light produced by an LED (Thorlabs LED870E) and injected in the system through a NBS. The resolution in the center of the image is ${\sim}1.5~\upmu$m, near the diffraction limit. However, substantial aberrations are present away from the center of the image, owing to the lack of correction for the parabolic element and possible imperfections in the system alignment. 
For this reason, the $x-y$ imaging is used only for qualitative assessment and rough alignment. An improved microscope, using custom correction optics, may be installed at a later time. \begin{figure}[!tb] \includegraphics[width=\columnwidth, bb=0 0 960 769]{y-z-microscope_collageRev9.pdf} \caption{Composite image of two nanofabricated devices in the vicinity of the trap, together with a trapped MS, captured by the $y-z$ microscope. Device~1 is a silicon and gold beam~\cite{AttractorPaper}, mounted on the main nanopositioning stage. Device~2, mounted on the auxiliary nanopositioning stage, is a $1000~\upmu$m long all-silicon device with an L-shaped cross-section when viewed in the $x-z$ plane (not visible here), which usually houses device~1, shielding the trapped MS from background forces associated with device~1. In this image, device~1 is translated vertically from its nominal position and is out of focus, the latter reducing the apparent vertical extent. The main frame of the image shows the scattering of the trapping laser by the MS (A), which leads to saturation of several pixels of the camera. The inset is taken with a notch filter that has an optical depth of 6 for 1064~nm light (Thorlabs NF1064-44), in order to demonstrate the shadow of the MS (B) blocking the 870~nm illumination of device~2. The bright and blurry region (C) behind the device is caused by a reflection of the illumination from another part of device~1.} \label{SideMicroscopeImage} \end{figure} \section{Trap Operation}\label{TrapOperation} Dielectric MSs are prepared by rubbing a fused silica cover slip (dropper) on a powder of MSs lying on a sheet of glass. The dropper is glued to a piezoelectric transducer (Thorlabs PA4DG) and inserted between the recollimating parabolic mirror and the electrodes, with the face coated with MSs oriented down. The dropper is then vibrated by driving the piezoelectric transducer with an oscillating voltage, chirped between 150~kHz and 400~kHz. 
The MSs, held on the surface of the dropper by van der Waals forces, are released by the vibration, which is expected to generate kilo-$g$ or larger accelerations \cite{TLiThesis}. One dropper can be used to refill the trap many times, provided that the drive amplitude is gradually increased, suggesting that MSs are bound to the dropper with varying force. Between the initial preparation of the dropper and its depletion, the RF power driving the transducer typically has to increase from 1 mW to 1 W. \begin{figure}[!tb] \includegraphics[width=1\columnwidth]{DischargeProcessPaper4_20190724.pdf} \caption{Typical discharging process: each data point corresponds to a 10-second measurement of the charge state (i.e. the amplitude of a MS response to a driving field), between which a certain number of flashes of the UV lamp occur. Quantization of the charge state in units of $e$ is observed. When near the neutral charge state, the resolution of the measurement can be increased by increasing the driving voltage.} \label{Discharge} \end{figure} The loading process has a small but finite efficiency as most MSs fall without being captured in the trap. Typically, ${\sim}6$~hPa pressure of N$_2$ is used to slow the falling MSs and make trapping possible, together with an increased laser power to increase the depth of the optical trap. For example, $4.7~\upmu$m diameter silica MSs~\cite{bangs_laboratories}, with mass ${\simeq}84$~pg~\cite{PhysRevApplied.12.024037} are caught with ${\sim}9$~mW of power in the trapping beam, while they are maintained at the focal point of the trap with ${\sim}1.7$~mW. Additionally, 7.6~$\upmu$m diameter, 420~pg mass silica MSs~\cite{German_sphere} are stably trapped with ${\sim}15$~mW of power and ${\sim}18$~hPa of N$_2$ pressure. It is expected that even larger MSs could be trapped with increased laser power. 
In order to minimize accumulation of MSs on the focusing parabolic mirror, a second, larger silica plate (catcher) is inserted between the electrode and the focusing parabolic mirror during loading. The dropper and catcher are inserted and removed independently using motorized stages (Newport AG-LS25-27V6). Without any special effort, the trapped MS remains stable in the trap during the removal of the catcher, which is positioned 27~mm below the trap's focus. Once a MS is trapped, the low conductance bypass system is used to slowly pump down the vacuum chamber to 0.5~hPa (typically over 25~min), the pressure at which the feedback is initialized, typically with a reduced power of 1.8~mW (for $4.7~\upmu$m diameter MSs). Feedback from the position measurements of the three degrees of freedom of the MS is then applied to the laser power ($z$) and the piezoelectric deflector ($x-y$). The feedback for the vertical ($z$) degree of freedom includes proportional, integral, and derivative terms, and is applied in addition to an independent laser power stabilization, which is primarily an integral term together with a proportional term. The slow, $z$ feedback mentioned in Section~\ref{SectionCamera} is only applied during long-term measurements that typically last hours or days. The feedback for the horizontal ($x-y$) degrees of freedom is applied after the $z$ direction is stable and includes only derivative terms in order to damp the MS's resonant motion in the horizontal plane. \begin{figure*}[!tb] \begin{center} \includegraphics[width=2\columnwidth, bb=0 0 1440 540]{20200113_tf_horizontalRev.pdf} \caption{Transfer functions of a typical MS. Shown are the magnitude (left) and phase (right) of the complex, frequency-dependent matrix $\mathbf{H}_{ij}(f)$ discussed in Section~\ref{ForceCalibration}. 
The orange solid curves for $(i,j)=(x,x), (y,y)$ are fits of the measured response to that of a damped harmonic oscillator, whereas the dashed line for $(i,j) = (z,z)$ is a quadratic spline interpolation. Off-diagonal interpolations are not shown. Resonant frequencies $f_{ii}$ and damping coefficients $\Gamma_{ii}$ obtained from the fit are $f_{xx}=(301\pm1)$~Hz, $\Gamma_{xx}=(50\pm2)$~Hz, $f_{yy}=(292\pm1)$~Hz, and $\Gamma_{yy}=(27\pm2)$~Hz.} \label{TransFunc} \end{center} \end{figure*} The $z$ direction stabilization also allows the $z$ position of the MS to be varied with respect to the trap focus and other mechanical devices in its proximity. At the focus, where the trapping beam intensity is greatest, the optical spring constant confining the MS in the $x$ and the $y$ degrees of freedom has a maximum. The stochastic force on the MS from the 0.5~hPa of residual gas impacting the MS has a white frequency spectrum that drives these degrees of freedom, resulting in MS motion with a frequency spectrum well modeled with a driven, damped harmonic oscillator with a clearly observable resonant frequency. For a given $z$ position, the slight astigmatism produces distinct values of $f_x$ and $f_y$, the resonant frequencies for the $x$ and $y$ directions, and the final setpoint is determined empirically to minimize the difference between $f_x$ and $f_y$. For the $4.7~\upmu$m diameter silica MSs, typical values are $f_x \simeq f_y \simeq 300$~Hz and $|f_x - f_y| \simeq 10$~Hz. At this location, the harmonic trap for $z$ is generated by the feedback, with $f_z \simeq 300$~Hz. With $x$, $y$, and $z$ feedbacks on, the vacuum chamber is further pumped down, first with the low conductance bypass and then by opening the gate valve and starting the TMP at $<0.1$~hPa. The system reaches a pressure of ${\sim}10^{-6}$~hPa in a few minutes, at which point the noise floor of the MS position is no longer dominated by the Brownian motion of the MS. 
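As a quick numerical cross-check (illustrative arithmetic only, using the mass and resonant frequency stated in the text), the optical spring constant implied by $f_x \simeq 300$~Hz for the 84-pg spheres can be computed directly:

```python
import math

m_ms = 84e-15              # mass of a 4.7 um silica MS [kg], from the text
f_res = 300.0              # typical horizontal resonant frequency [Hz]

# Optical spring constant k = m * omega^2 with omega = 2*pi*f
k = m_ms * (2.0 * math.pi * f_res) ** 2
print(f"k = {k:.2e} N/m")  # ~3e-7 N/m, as quoted in the Force Calibration section
```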
MSs trapped in this fashion have been observed to be stable in the trap indefinitely in the absence of external disturbances. The charge state of the trapped MS is measured by applying a sinusoidally oscillating voltage difference to a pair of opposing electrodes, and measuring the amplitude of the MS response to this driving field. For sufficiently long integration times, charge quantization is observed with a signal-to-noise ratio of ${\sim}20$. By increasing the amplitude of the driving field, the neutral state can be characterized exceedingly well, as demonstrated in a simpler version of the trap in Ref.~\cite{PhysRevLett.113.251801}. As they are loaded in the trap, MSs can have either positive or negative overall charge. The charge state of MSs can be changed by flashing ultraviolet (UV) light from a xenon flash lamp (Hamamatsu L9455-13) into the trapping region. The light is brought into the trap with a multimode solarization-resistant UV fiber (Thorlabs UM22-600) and coupled to free space with a 4~mm fused silica ball lens (Edmund Optics \#67-385), as shown in the left of Fig.~\ref{TrapRegionFig}. The fiber is brought into the vacuum chamber using the feedthrough discussed in Ref.~\cite{ApplOpt.37.1762}. This system is capable of both increasing and decreasing the MS charge state. Pulsing the UV lamp with nothing in close proximity to a MS tends to eject electrons from the MS, yielding a more positive charge state. If the sputtered gold surface of a nanofabricated device is placed behind the MS, more electrons appear to be ejected from the gold surface, changing the charge of the MS in the opposite direction. MS charge states upwards of $\pm 500~e$, with $e$ the fundamental charge, can be obtained in this way or, if desired, net neutrality can be achieved. A typical discharging cycle is shown in Fig.~\ref{Discharge}. 
Since conductive structures close to the MS distort the electric field, absolute charge calibration is only performed as the charge increases by removing electrons from the MS, when the nanofabricated device is fully retracted. \section{Force Calibration} \label{ForceCalibration} With charge quantization, it is possible to empirically calibrate the response of the MS without assumptions. This is achieved by applying a sinusoidally oscillating electric field $E=E_0 \sin (2\pi f t)$ oriented along the degree of freedom to be calibrated, with $E_0$ being the amplitude of the electric field and $f$ being the frequency of the oscillation. The electric field amplitude $E_0$ at the MS location is calculated by finite element analysis, starting from the electrode geometry and the applied voltage. The force applied is then $F=q_{\rm MS} E$, where $q_{\rm MS} = ke$, with $e$ the fundamental charge, is deduced by counting the number of charge quanta $k$. Simultaneously, the response of the MS, $R$, as determined by the imaging and demodulation procedure described previously, is also known. Thus, for a given frequency of an oscillating electric field, the ratio $R/F$ between the amplitude of the MS response in arbitrary units and the amplitude of the applied force in physical units is derived. The same procedure also extracts any phase shift of the response relative to the drive. This can be done independently for each degree of freedom $x$, $y$, and $z$. To measure the frequency dependence of this calibration, the electric field is applied in the form of a frequency comb \begin{equation} \label{FreqComb} E(t) = E_0 \sum^{N}_{n=1} \sin \left(2\pi n f t + \phi_n \right) \end{equation} \noindent with $\phi_n = 2\pi n^2/N$ being a phase shift, implemented in order to avoid large spikes of the electric field (consider the Fourier series of a delta function). Typically, $f=7~$Hz, $N=100$, and $E_0 = 100$~V/m are used for the initial characterization of the system. 
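The effect of the quadratic phase schedule $\phi_n = 2\pi n^2/N$ in Eq.~\eqref{FreqComb} can be checked numerically; the sketch below (illustrative, using the parameters quoted above) compares the peak field of the comb with and without the phase shifts:

```python
import numpy as np

f, N, E0 = 7.0, 100, 100.0                    # comb parameters from the text
t = np.linspace(0.0, 1.0 / f, 20000, endpoint=False)   # one comb period
n = np.arange(1, N + 1)[:, None]

phi = 2.0 * np.pi * n**2 / N                  # quadratic phase schedule
E_quad = E0 * np.sin(2.0 * np.pi * n * f * t + phi).sum(axis=0)
E_zero = E0 * np.sin(2.0 * np.pi * n * f * t).sum(axis=0)   # all phi_n = 0

# The in-phase comb piles up into large spikes; the quadratic phases spread
# the harmonics out in time, keeping the peak field several times smaller.
print(np.abs(E_quad).max(), np.abs(E_zero).max())
```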
This electric field drive and the corresponding MS response can be continued indefinitely and averaged until a desired precision is achieved. This method also provides a means to measure any cross talk inherent to the imaging, as well as the trap itself. With the symmetric electrode geometry employed here, forces are induced along specific coordinate axes, while the MS response along all three degrees of freedom is measured. Writing the applied force as $F_j = q_{\rm MS} E_j$ where $j \in \{x, y, z\}$, and $E_j$ of the form of Eq.~(\ref{FreqComb}), and writing the MS response as $R_i$ for $i \in \{ x, y, z \}$, each has a Fourier transform $\widetilde{F}_j(f)$ and $\widetilde{R}_i(f)$, respectively. The frequency dependent transfer function $\mathbf{H}_{ij}(f)$ is calculated as $\mathbf{H}_{ij} (f) = \widetilde{R}_i (f) / \widetilde{F}_j (f)$. The complex-valued $\mathbf{H}_{ij}(f)$ naturally includes amplitude ratios as well as phase shifts for each drive frequency. For a given $ij$, the amplitude and phase of $\mathbf{H}_{ij}(f)$ are smoothly interpolated (and extrapolated under certain assumptions) for frequencies not explicitly included in Eq.~(\ref{FreqComb}). Then, for any given measurement, the force applied to the MS is first constructed in the frequency domain from the measured MS response as $\widetilde{F}_j (f) = \mathbf{H}^{-1}_{ij} (f) \widetilde{R}_i (f)$. With sufficient optical alignment, the off-diagonal components of $\mathbf{H}_{ij}$ are typically five times smaller than their diagonal components. An estimate of the physical displacement of the MS at low frequencies is then obtained as $\widetilde{x}_j (f) = (1 / m_{\rm MS} \omega_j^2) \widetilde{F}_j(f)$, with $\omega_j$ being the angular resonant frequency for a particular coordinate axis, expressed in rad/s. An example of $\mathbf{H}_{ij}$ is shown in Fig.~\ref{TransFunc}. 
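The calibration logic can be summarized in a short numerical sketch (illustrative values taken from the text; this is not the apparatus code): a damped-harmonic-oscillator transfer function is applied to a comb drive in the frequency domain, and the applied force is then recovered by inverting the transfer function.

```python
import numpy as np

m_ms, f0, gamma = 84e-15, 300.0, 50.0      # mass [kg], resonance [Hz], damping [Hz]

def H(f):
    """Damped-harmonic-oscillator transfer function x/F in SI units."""
    w, w0 = 2.0 * np.pi * f, 2.0 * np.pi * f0
    return 1.0 / (m_ms * (w0**2 - w**2 + 1j * (2.0 * np.pi * gamma) * w))

fs, T = 5000.0, 10.0                        # sampling rate [Hz], record length [s]
t = np.arange(0.0, T, 1.0 / fs)
fcomb, N, F0 = 7.0, 100, 1e-15              # comb drive, amplitude in newtons
n = np.arange(1, N + 1)[:, None]
phi = 2.0 * np.pi * n**2 / N
force = F0 * np.sin(2.0 * np.pi * n * fcomb * t + phi).sum(axis=0)

freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
Hf = H(freqs)
Rf = Hf * np.fft.rfft(force)                # synthetic response R~(f) = H(f) F~(f)

# Calibration step: recover the applied force from the "measured" response
force_rec = np.fft.irfft(Rf / Hf, n=t.size)
```

In the experiment, $\mathbf{H}_{ij}$ is measured rather than assumed, interpolated between the comb frequencies, and inverted in the same way; the off-diagonal terms quantify cross-talk.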
In both the $x$ and the $y$ (horizontal) directions, the optical spring constant $k_j = m_{\rm MS} \omega_j^2$ is ${\sim}3 \times 10^{-7}$~N/m, while the translational damping constant $\Gamma$ is ${\sim}50$~Hz and can be adjusted over a wide range by the active feedback. The resonant frequency and optical spring constant are in agreement with estimates from the Optical Tweezers Computational Toolbox~\cite{ott}, which assumes perfectly spherical MSs with uniform density, as well as an ideal Gaussian beam and diffraction-limited focus. In the $z$ (vertical) direction, relatively large proportional and integral gain terms are required to stabilize the MS's position, and a traditional damped harmonic oscillator model is insufficient to describe the frequency spectrum of the MS motion. Between successive 10-s integrations, the response of the MS is stable to within 5\%, and reproducible on the timescale of days. The response of a charged MS to an oscillating electric field is linear up to ${\sim}2 \times 10^{-13}$~N of applied force, over a wide range of charge states $|q_{\rm MS}| \leq 500~e$. The linearity of the system described here matches or exceeds that of the previous version of the apparatus \cite{PhysRevA.99.023816}. \section{Sensitivity} \label{SensitivitySection} \begin{figure}[!tb] \includegraphics[width=1\columnwidth, bb=0 0 567 567]{ForceSensitivityInterpolate20XYZPaper_20200113Rev4.pdf} \caption{Force (left axis) and acceleration (right axis) sensitivity of the system using 4.7~$\upmu$m diameter MSs in the $x$ (upper panel), $y$ (middle panel), and $z$ (lower panel) directions, obtained by the procedure described in Section~\ref{SensitivitySection}. The power spectral density $S_{Fj}$ is estimated from the complex square of the Fourier transformation: $ \widetilde{F}_j^{\ast} \widetilde{F}_j $. 
} \label{ForceSensitivity} \end{figure} Utilizing the transfer function measurement described above, the noise spectrum of a neutral, 4.7~$\upmu$m diameter MS without any external driving field is converted into physical units (${\rm N}/\sqrt{\rm Hz}$), as shown in Fig.~\ref{ForceSensitivity}. This illustrates the typical force sensitivity of the system, reaching a level of $1\times10^{-17}~{\rm N}/\sqrt{\rm Hz}$. This force sensitivity is translated to an acceleration sensitivity of $12~\upmu g/\sqrt{\rm Hz}$, with $g=9.8~{\rm m/s}^2$. These values are comparable to the sensitivity previously reported \cite{PhysRevA.97.013842}. The observed noise floor is of additive, white Gaussian nature, and thus the sensitivity for any signal would improve with $\sqrt{t}$, with $t$ being the integration time, in the absence of background forces. The expected Brownian motion of the MS at the vacuum level achieved is estimated to be $2 \times10^{-19}~{\rm N}/\sqrt{\rm Hz}$. In addition, it is demonstrated that the noise floor is the same at different pressures, below $\sim10^{-5}$~hPa. The shot noise is subdominant as well and estimated to be of the order of $3 \times10^{-19}~{\rm N}/\sqrt{\rm Hz}$ \cite{PhysRevA.97.013842}. Hence the observed noise floor is likely technical in nature, dominated by pointing fluctuations of the trapping and reference beams or due to electronics noise of the detection system. The improvement in low frequency $1/f$ noise as compared to Ref.~\cite{PhysRevA.97.013842} is mainly due to the enclosure of the free-space optics, resulting in reduced air currents and microphonics. \section{Conclusion} A system using a single optical tweezer to trap silica microspheres with diameters (masses) of 4.7 and 7.6~$\upmu$m (84 and 420~pg), intended to measure the force between the microsphere and another device in close proximity, is described. 
The small radial extent of the beam at the trap focus and the lack of significant non-Gaussian tails open the possibility of bringing mechanical devices, such as thin silicon cantilevers, as close as 1.6~$\upmu$m from the surface of the microsphere. Electrodes surrounding the trap provide shielding and allow control over the translational degrees of freedom of charged microspheres, as well as the rotational degrees of freedom of neutral microspheres through coupling to their electric dipole moment. The same electrode system allows for the calibration of the system's response to forces applied to the microsphere. The observed force sensitivity of $1\times10^{-17}$ ${\rm N}/\sqrt{\rm Hz}$ is limited by non-fundamental noise sources, so that further improvements are expected in the future. Auxiliary imaging of the optical trap and devices in its immediate vicinity from both the $y-z$ plane and the $x-y$ plane allows precise alignment of devices relative to the trapped microsphere, making use of a number of nanopositioning stages. The system can be utilized for force measurements at separations in the range of $1~\upmu$m to $10~\upmu$m, and for other applications requiring high precision, and can easily be adjusted to trap larger or smaller microspheres. \begin{acknowledgments} This work was supported by NSF grant PHY1802952, ONR grant N00014-18-1-2409, and the Heising-Simons Foundation. A.K. acknowledges the partial support of a William~M. and Jane~D. Fairbank Postdoctoral Fellowship of Stanford University. N.P. acknowledges the partial support of the Koret Foundation. We acknowledge regular discussions on the physics of trapped microspheres with the group of Prof.~D.C.~Moore at Yale. We also thank A.O.~Ames, A.D.~Rider, B.~Sandoval, J.~Singh, and T.~Yu, who contributed to early developments of the experimental apparatus and the Physics Machine Shop at Stanford for their technical support. The data that support the findings of this study are available from the corresponding author upon reasonable request. 
\end{acknowledgments} \bibliographystyle{apsrev4-1}
\section{Introduction} By the compliance of an elastic structure we understand the elastic energy stored in the structure subjected to a given static load $F$. In the present paper we consider optimum design of the field of the constitutive tensor within a prescribed class of anisotropy. The aim is to find, within a domain $\Omega \subset \Rd$, a distribution of the constitutive tensor that minimizes the compliance. Our attention is focused on materials with elastic potential $j=j(\hk, \xi)$ whose arguments are: the 4th-order positive semi-definite constitutive tensor $\hk$ of suitable symmetries, henceforth called a \textit{Hooke tensor} for short, and the 2nd-order strain tensor $\xi$, defined as the symmetric part of the gradient of the displacement vector function $u$. The Hooke tensor field, point-wise restricted to a closed convex cone $\Hs$ of our choosing, is the design variable, subject to a bound $\Totc$ on its total cost, defined as the integral of a norm $\cost = \cost(\hk)$. This problem will be referred to as the \textit{Free Material Design} problem (FMD) in general (also known in the literature under the name \textit{Free Material Optimization}), and as the \textit{Anisotropic Material Design} problem (AMD) if the anisotropy is not subject to any constraints, namely when $\Hs$ is the whole set of Hooke tensors. In the context of the linear theory of elasticity, in which $j(\hk,\xi)=\frac{1}{2} \pairing{\hk\,\xi , \xi}$, and with the unit cost function $\cost(\hk)=\tr \, \hk$, the above problem in the AMD setting was first put forward in \cite{ringertz1993}, where a direct numerical method of solving it was also proposed. Soon afterwards, in \cite{bendsoe1994}, the AMD problem was formally reformulated so that only one scalar variable is involved: $ m :=\tr\, \hk$. 
It was also shown there that the optimal tensor assumes the singular form $\check\hk = m\, \tilde{\xi} \otimes \tilde{\xi}$, where point-wise $\tilde{\xi}$ is the normalized strain tensor. Consequently, one eigenvalue of the optimal $\check\hk$ is positive, while the other five eigenvalues vanish. Owing to the reduction of the number of scalar design variables from 21 (in three dimensions) to 1, an efficient numerical method could be developed, cf. Section 5 in \cite{bendsoe1994}. The analytical method of \cite{bendsoe1994} was applied in \cite{bendsoe1996} to the minimum compliance of softening structures. There the analytical work was carried one step further by showing, at a formal level, how to eliminate the design variable $m$; however, since tools for smooth optimization problems had to be employed, this reduction was not exploited in the subsequent steps, e.g. within the numerical treatment, at the cost of an increased number of design variables. Thus, the papers \cite{bendsoe1994} and \cite{bendsoe1996} did not make use of the possibility of eliminating all the design variables in the AMD problem. Such elimination leads to the problem of minimizing a functional of linear growth with respect to the stress tensor field, running over the set of all stress fields satisfying the equilibrium equations. In the present paper this problem is a particular case of the more general problem $\dProb$ if one takes $\dro$ to be the Euclidean norm on the space of matrices, cf. \eqref{eq:dProb_intro} below. To the present authors' knowledge, one of the first contributions that puts the AMD problem in a rigorous mathematical framework is \cite{werner2000}, where a variant of an existence result is given. The author used a variational formula for the compliance, thus rewriting the original problem as a min-max problem in terms of the Hooke tensor field $\hk$ and the vector displacement function $u$. 
In order to gain compactness in some functional space of Hooke tensor fields, a uniform upper bound $\tr\, \hk(x) \leq m_{max}$ was additionally enforced in \cite{werner2000}, which made it possible to establish existence of a solution in the form of a tensor-valued $L^\infty$ function $x \mapsto \check\hk(x)$. Based on a saddle-point result, the author also proved that there exists a displacement function $u \in W^{1,2}(\Omega;\Rd)$ solving the linear elasticity problem in the optimally designed body characterized by $\check{\hk} \in L^\infty(\Omega;\Hs)$. Bounding the trace of the Hooke tensor point-wise has the advantage of preventing the material density from blowing up in the vicinity of singularity sets (e.g. the re-entrant corners of $\Omega$), which should potentially render the optimal design more practical. On the other hand, the extra constraint deprives us of some vital mathematical features: it is no longer possible to reduce the original AMD formulation to the problem $\dProb$ of minimizing a functional of linear growth. It is also notable that combining the local constraint $\tr\, \hk(x) \leq m_{\max}$ with the global one $\int_\Omega \tr\, \hk(x) \, dx \leq \Totc$ must produce results that depend on the ratio $(m_{\max} \, \abs{\Omega})/\Totc$; in particular, once this ratio is below one, the bound on the global cost is never sharp and thus may be disposed of. Furthermore, due to the uniform bound on the optimal Hooke tensor field $\check{\hk}$, we should not \textit{a priori} expect the regularity of the fields solving the corresponding elasticity problem to be higher than in the classical case: the displacement $u$ will in general lie in the Sobolev space $W^{1,2}(\Omega;\Rd)$ (discontinuities are possible) and the stress tensor function $\sigma\in L^2(\Omega;\Sdd)$ may blow up to infinity. The local upper bound on the trace of the Hooke tensor is also kept throughout the papers \cite{zowe1997} and \cite{kocvara2008}, which concentrate rather on the numerical treatment. 
Another work that offers an existence result for an FMD-adjacent problem is \cite{haslinger2010}, where the authors put special emphasis on controlling the displacement function $u$ in the optimally designed body: therein a more general design problem is considered that includes additional constraints on both the displacement $u$ and the stress $\sigma$. In order to arrive at a well-posed problem, a relaxation is proposed where, apart from the initially considered uniform upper bound $\tr\, \hk(x) \leq m_{\max}$, a lower bound $\eps\, \mathrm{Id} \leq \hk(x)$ is imposed as well for some small $\eps>0$; the inequality ought to be understood in the sense of comparing the induced quadratic forms, whilst $\mathrm{Id}:\Sdd \to \Sdd$ is the identity operator. As outlined above, the reformulation of the AMD problem to the problem $\dProb$ of minimizing a functional of linear growth, proposed first in \cite{bendsoe1996}, was not utilized in the subsequent works in the years 1996--2010, which kept the uniform upper bound $\tr\, \hk(x) \leq m_{max}$ (guaranteeing compactness in $L^\infty$) and applied more direct numerical approaches. This matter was revisited in \cite{czarnecki2012}, where the passage from AMD to $\dProb$ played a central role: a detailed, yet still formal, derivation of $\dProb$ via the optimality conditions is given therein. In the same work the problem $\dProb$ was treated numerically. Next, the paper \cite{czarnecki2014} formally put forward a problem dual to $\dProb$, in which the virtual work of the load is maximized over displacement functions $u$ that produce a strain $e(u)$ point-wise contained in the unit Euclidean ball. In the present work this dual problem may be recovered as a particular case of the problem $\Prob$ by choosing $\rho$ to be again the Euclidean norm, cf. \eqref{eq:Prob_intro} below. 
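Anticipating the precise definitions given later in this introduction, the pair of mutually dual problems referred to throughout takes, schematically, the form
\begin{align*}
\Prob:\quad &\sup \left\{ \text{work of the load } F \text{ on } u \; : \; u \ \text{kinematically admissible}, \ \rho\bigl(e(u)\bigr) \leq 1 \ \text{point-wise in } \Omega \right\},\\
\dProb:\quad &\inf \left\{ \int_\Omega \dro(\sigma)\, dx \; : \; \sigma \ \text{statically admissible for the load } F \right\},
\end{align*}
where for the AMD problem both $\rho$ and $\dro$ are the Euclidean norm on $\Sdd$, while the other settings of the FMD family correspond to different pairs of mutually dual functions $\rho$, $\dro$.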
The idea of reformulating an optimal design problem as a pair of mutually dual problems $\Prob$ and $\dProb$ was inspired by the theory of Michell structures, where a pair of this form can be employed to obtain both analytical and highly accurate numerical solutions, cf. \cite{Lewinski2019} or \cite{bouchitte2008}. As stressed above, the solution to the AMD problem is highly singular: only one eigenvalue of the elastic moduli tensor turns out to be positive, the other five vanish. A way of remedying this is to impose some additional local condition on the type of the material's anisotropy: in \cite{czarnecki2015a}, \cite{czarnecki2015b} and later in \cite{czarnecki2017b} the \textit{Isotropic Material Design} problem (IMD) was proposed as another setting of the family of FMD problems. Essentially, the IMD problem boils down to seeking two scalar fields $K$ and $G$, being, respectively, the bulk and shear moduli, which fully determine the field of isotropic Hooke tensors $\hk$, for which (in the 3D setting) $\tr \,\hk = 3 K + 10 G$. Analogously to the AMD setting, the IMD problem was reformulated as a pair of mutually dual problems of the form $\Prob$, $\dProb$, only this time the functions $\rho, \dro$ are not the Euclidean norms but a certain pair of mutually dual norms on the space of symmetric matrices. A similar effort was made in \cite{czubacki2015}, where the \textit{Cubic Material Design} problem (CMD) was approached: the cubic symmetry was imposed on the unknown Hooke tensor field $\hk$, reducing the CMD problem to minimizing over three moduli fields as well as the directions of anisotropy. Once again the reformulation as a pair $\Prob$, $\dProb$ proved to be feasible, with $\rho, \dro$ chosen specifically for the CMD problem. Finally, along with the isotropy symmetry, the Poisson ratio $\nu$ may be fixed as well, leading to the \textit{Young Modulus Design} problem (YMD), where a single field of the Young modulus $E$ is the design variable, see \cite{czarnecki2017a}.
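To keep the exposition self-contained we record a short check of the trace formula above (an elementary computation added here for the reader's convenience): writing an isotropic Hooke tensor as $\hk = d K \bigl( \frac{1}{d}\, \mathrm{I} \otimes \mathrm{I} \bigr) + 2 G \bigl( \mathrm{Id}- \frac{1}{d}\, \mathrm{I} \otimes \mathrm{I} \bigr)$, with $\mathrm{I} \in \Sdd$ the identity matrix and $\mathrm{Id}$ the identity operator (cf. \eqref{eq:iso_K_G} in Section \ref{sec:elasticity_problem}), the hydrostatic projector $\frac{1}{d}\, \mathrm{I} \otimes \mathrm{I}$ has trace $1$, while the deviatoric projector $\mathrm{Id} - \frac{1}{d}\, \mathrm{I} \otimes \mathrm{I}$ has trace $\dim \Sdd - 1 = \frac{d(d+1)}{2} - 1$, equal to $5$ for $d=3$, whence
\begin{equation*}
\tr\,\hk = 3K \cdot 1 + 2G \cdot 5 = 3K + 10G.
\end{equation*}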
In summary, throughout the years 2012-2017 a family of Free Material Design problems (AMD, CMD, IMD, YMD) has been proposed and rewritten as pairs of mutually dual problems $\Prob$ and $\dProb$, specified for each problem via different functions $\rho, \dro$. The present contribution is aimed at a mathematically rigorous unification of the theory of the FMD family, including existence results as well as the equivalence with the pair $\Prob$, $\dProb$. The latter issue excludes the possibility of imposing the uniform bound $\tr\,\hk(x) \leq m_{\max}$, hence compactness of the set of admissible Hooke tensor fields must be established in a topology other than that of $L^\infty(\Omega;\Hs)$. The global constraint $\int_\Omega \tr\,\hk\, dx \leq \Totc$ yields merely boundedness of $\hk$ in $L^1(\Omega;\Hs)$. Naturally, due to the lack of reflexivity of $L^1(\Omega;\Hs)$, compactness in this space is impossible to obtain. Almost in parallel to the mathematical work \cite{werner2000} on the Free Material Design problem, the so-called \textit{Mass Optimization Problem} (MOP) was developed in \cite{bouchitte2001}. In MOP we seek a mass distribution, being a non-negative scalar field $m$, that minimizes the compliance. Roughly speaking, MOP is equivalent to a particular case of the FMD problem with the set of admissible Hooke tensors chosen as $\Hs = \bigl\{m\, \hk_0 \, : \, m \geq 0 \bigr \}$, where $\hk_0$ is a fixed strictly positive definite Hooke tensor (once $\hk_0$ is isotropic, MOP is equivalent to the YMD problem). Consequently the only constraint is the global one and it reads $\int_\Omega m \,dx \leq M_0$ for some $M_0>0$. Just as in the FMD problem, compactness of the set of admissible mass fields in any Lebesgue space $L^q(\Omega)$ is beyond reach.
Therefore the authors of \cite{bouchitte2001} start from the relaxed MOP from the outset, where the design variable is a positive Radon measure supported in the closure of the design domain: $\mu \in \Mes_+(\Ob)$ represents the mass distribution, whereas the constraint is simply $\int d\mu \leq M_0$. According to the Lebesgue decomposition theorem $\mu = \mu_{ac} +\mu_s$, where $\mu_{ac} = m \,\Leb^d \mres \Omega$ with $m \in L^1(\Omega)$ and $\mu_s$ is the singular part of $\mu$. In contrast to the FMD problem with the uniform bound $\tr \, \hk(x) \leq m_{\max}$ assumed in works like \cite{werner2000} or \cite{haslinger2010}, it may happen in MOP that the optimal $\check{m}$ blows up to infinity, which is debatable in terms of manufacturability. On the other hand, however, the optimal singular part $\check{\mu}_s$ could concentrate e.g. on some curve $C \subset \Ob$, namely $\check{\mu}_s = m_C \,\Ha^1\mres C$, which would mean that via MOP we recognize a need for a one-dimensional reinforcement of a $d$-dimensional body/structure. This feature of an optimal design problem is well-established in the theory of Michell structures. In the paper \cite{bouchitte2001} we find a rigorous reformulation of MOP as the pair of mutually dual problems $\Prob$, $\dProb$ mentioned above, which now, by virtue of the measure-theoretic setting, may be readily written down: \begin{alignat}{2} \label{eq:Prob_intro} &\Prob \qquad \qquad Z &&= \sup\biggl\{ \int \pairing{u,F} \ : \ u \in C^1(\Ob;\Rd), \ \rho\bigl( e(u) \bigr) \leq 1 \ \text{ in } \Omega \biggr\} \qquad \qquad\\ \label{eq:dProb_intro} &\dProb\qquad \qquad Z &&=\min \biggl\{ \int \dro(\TAU) \ : \ \TAU \in \Mes\bigl( \Ob;\Sdd \bigr), \ -\DIV\, \TAU = F \biggr\}. \qquad \qquad \end{alignat} It must be noted that the applied load is a vector-valued measure $F \in \MesF$, which accounts for e.g. point loads, whilst the equilibrium equation $-\DIV\, \TAU = F$ must be understood in the sense of distributions.
The variable $\tau$ in $\dProb$, being a tensor-valued measure, seems to play the role of the stress field, yet this is not the case. In the present work $\tau$ will be referred to as the \textit{force flux}: indeed $\dProb$ resembles the problem of optimally transporting parts of the vector source $F$ to its other parts; an optimal $\TAU$ may be diffused over some subdomain of non-zero Lebesgue measure or may rather concentrate on some curve. Once $F$ is balanced, the problem $\dProb$ attains a solution $\hat{\TAU}$, while there exists a continuous displacement function $\hat{u} \in C(\Ob;\Rd)$ that solves a version $\relProb$ of the problem $\Prob$ in which the differentiability condition is relaxed. One of the main theorems in \cite{bouchitte2001} allows one to recover a solution of the original MOP based on the solutions $\hat{u}, \hat{\tau}$: the optimal mass reads $\check{\mu} = \frac{M_0}{Z} \dro(\hat{\TAU})$, while the displacement function $\check{u} = \frac{Z}{M_0} \hat{u}$ and the stress function $\check{\sig} = \frac{d \hat{\TAU}}{d\check{\mu}}$ solve the underlying elasticity problem of the body given by the mass $\check{\mu}$ and subject to the load $F$. It is remarkable that $\check{u}$ and $\check{\sig}$ gain extra regularity in comparison to classical elasticity: the function $\check{u}$ is differentiable $\Leb^d$-a.e. in $\Omega$ with $e(\check{u}) \in L^\infty(\Omega;\Sdd)$ (see Lemma 2.1 in \cite{bouchitte2008}), while $\check{\sig} \in L_{\check{\mu}}^\infty(\Ob;\Sdd)$ or, more precisely, the stress $\check{\sig}$ is uniform in the optimal body in the sense that $\dro(\check{\sig}) \equiv \frac{Z}{M_0}\ $ $\check{\mu}$-a.e. One may think of $\check\sig$ as a "micro-stress" referred to the elastic medium described by $\check{\mu}$; then the resulting force flux $\hat\TAU = \check{\sig}\, \check{\mu}$ could be thought of as a "macro-stress".
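To fix ideas, let us record an elementary scalar illustration of this machinery (a toy example added here, not taken from \cite{bouchitte2001}): on $\Omega = (0,L)$ take $\rho = \dro = \abs{\argu}$ and the balanced load of two opposite point forces $F = P\,\delta_L - P\,\delta_0$ with $P>0$. In $\Prob$ the constraint $\abs{u'} \leq 1$ yields the optimal $\hat{u}(x) = x$ and $Z = \int \hat{u}\, dF = P L$, while in $\dProb$ the equation $-\hat{\tau}' = F$ forces $\hat{\tau} = P\, \Leb^1 \mres (0,L)$, whence $\int \abs{\hat{\tau}} = P L = Z$, as expected by duality. The recovery formulas then give the uniform optimal mass $\check{\mu} = \frac{M_0}{Z}\, \abs{\hat{\tau}} = \frac{M_0}{L}\, \Leb^1 \mres (0,L)$ (of total mass $M_0$) and the uniform stress $\check{\sigma} = \frac{d\hat{\tau}}{d\check{\mu}} \equiv \frac{Z}{M_0}$, in agreement with the uniformity property $\dro(\check{\sigma}) \equiv \frac{Z}{M_0}$.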
Thus, although $\check{\sig}$ is bounded and uniform, $\hat{\TAU}$ is in some sense "point-wise unbounded", which is difficult to put in a precise way with $\hat{\TAU} \in \Mes(\Ob;\Sdd)$ being a tensor-valued measure. A similar distinction between such a "micro-stress" and a "macro-stress" (called therein \textit{Hemp's forces}) occurs in the theory of Michell structures, see \cite{Lewinski2019}. The theory of MOP was further developed and generalized in \cite{bouchitte2007}. The present work essentially adapts the methods of \cite{bouchitte2007} to provide a rigorous mathematical framework for the family of Free Material Design problems in the setting of the papers by Czarnecki et al. (2012-2017). The choice of the class of anisotropy we are designing, i.e. whether we are in the setting of AMD, IMD etc., shall be determined by a set $\Hs$ being any closed convex cone contained in the set of Hooke tensors. The elastic potential that furnishes the constitutive law of elasticity and, at the same time, describes the dependence on the design variable shall be a two-argument real non-negative function $j:\Hs \times \Sdd \rightarrow \R$. The unit cost of the Hooke tensor at a point, $\cost:\Hs \rightarrow \R$, may be picked as the restriction to the set $\Hs$ of any norm on the space of 4th-order tensors; the standard cost $c = \tr$ is a particular choice. The family of the FMD problems is thus parameterized by $\Hs$, $j$, $c$, which, considering the assumptions (H1)-(H5) given in Section \ref{sec:elasticity_problem}, offers a wide variety of optimal design problems. In analogy to \cite{bouchitte2001} or \cite{bouchitte2007}, the design variable shall be the tensor-valued measure $\lambda \in \Mes(\Ob;\Hs)$ representing the Hooke tensor field; the constraint on the total cost shall read $\int c(\lambda) \leq \Totc$.
The measure $\lambda$ may be decomposed as $\lambda = \hf \mu$, where $\hf \in L^\infty_\mu(\Ob;\Hs)$ with $c(\hf) = 1\ $ $\mu$-a.e.; the positive Radon measure $\mu$ again plays the role of the "mass" of the body. The objective is to minimize the elastic compliance $\Comp = \Comp(\lambda)$ under a balanced load $F \in \Mes(\Ob;\Rd)$; the compliance is expressed variationally: $\Comp(\lambda) = \sup_{u\in C^1(\Ob;\Rd)}\bigl\{ \int \pairing{u,F} -\int j\bigl(\lambda,e(u)\bigr) \bigr\}$. The hereby proposed FMD problem falls into the class of \textit{structural topology optimization} problems, for it determines: \begin{enumerate}[(i)] \item the topology and shape of the optimal body via the closed set $\mathrm{spt} \, \check{\mu}$, which, in general, can be strictly contained in the design domain $\Ob$ (the cutting-out property of FMD); \item the variation of the dimension of the optimal structure from point to point in $\Ob$ (solids, shells or bars may appear alongside each other), and this information is encoded in the geometric properties of $\check{\mu}$, see the paper \cite{bouchitte1997} on the space tangent to a measure at a point; \item the anisotropy $\check\hf(x)$ at $\check{\mu}$-a.e. $x$, which may imply certain singularities of the material: e.g. in the AMD setting the solution $\check\hf$ degenerates $\check{\mu}$-a.e. to have a single positive eigenvalue, whilst for the IMD problem it appears typical for the optimal material to be auxetic, see \cite{czarnecki2015b}. \end{enumerate} After checking in Section \ref{sec:elasticity_problem} the well-posedness of the FMD problem by showing the weak-* upper semi-continuity of the functional $\lambda \mapsto \int j\bigl(\lambda,e(u) \bigr)$, in Section \ref{sec:from_FMD_to_LCP} we move on to reduce the original problem to the pair of problems $\Prob$, $\dProb$ of identical form as in the work \cite{bouchitte2007} on MOP.
One of the paramount differences between the two design problems is the following: for MOP the functions $\rho,\dro$ are data that can be inferred from the given constitutive law, whereas here the gauge function $\rho$ is to be computed so that $\frac{1}{p}\bigl(\rho(\xi)\bigr)^p = \max_{\hk \in \Hs, \ \cost(\hk) \leq 1} j(\hk,\xi)$, where $p$ is the exponent of homogeneity of $j(\hk,\argu)$; then $\dro$ is defined as the polar of $\rho$. We stress that $\rho$ depends on all the parameters $\Hs, j, c$ of the family of FMD problems, hence the pair $\rho,\dro$ in $\Prob,\dProb$ encodes the actual setting of the FMD problem (e.g. AMD, IMD etc.). The finite dimensional programming problem that gives the pair $\rho, \dro$ is thoroughly studied in Section \ref{sec:anisotropy_at_point}, where in particular we learn that $\frac{1}{p'}\bigl(\dro(\sig)\bigr)^{p'} = \min_{\hk \in \Hs, \ \cost(\hk) \leq 1} j^*(\hk,\sig)$. The final method of solving the FMD problem follows from Theorem \ref{thm:FMD_LCP}: we learn that for any solutions $\hat{u}$ and $\hat{\TAU}$ of, respectively, $\relProb$ and $\dProb$, the measure $\check{\mu} = \frac{\Totc}{Z}\,\dro(\hat{\TAU})$ is an optimal mass distribution in the FMD problem, while the displacement $\check{u} = \bigl(\frac{Z}{\Totc}\bigr)^{p'/p} \hat{u}$ and the stress $\check{\sig} = \frac{d\hat{\TAU}}{d\check{\mu}}$ solve the elasticity problem in the optimal structure. Finally, the optimal function of the Hooke tensor $\check{\hf}$ is the one that point-wise solves the problem $j^*\bigl(\check{\hf}(x),\check{\sig}(x)\bigr)=\min_{\hk \in \Hs, \ \cost(\hk) \leq 1} j^*\bigl(\hk,\check\sig(x)\bigr)$; Lemma \ref{lem:measurable_selection} guarantees that such a $\check{\mu}$-measurable function $\check{\hf}$ exists. In Section \ref{sec:optimality_conditions} we again build upon \cite{bouchitte2007} to arrive at the optimality conditions for a quadruple $(u,\mu,\sig,\hf)$ in Theorem \ref{thm:optimality_conditions}.
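The point-wise maximization defining $\rho$ is easy to probe numerically. Below is a minimal sketch (an illustration added here, not the paper's algorithm), assuming the classical potential $j(\hk,\xi)=\frac{1}{2}\pairing{\hk\,\xi,\xi}$ with $p=2$ and the standard cost $c = \tr$, and identifying $\Sdd$ with $\R^n$: the maximum of $2\,j(\hk,\xi)$ over positive semi-definite $\hk$ with $\tr\,\hk \leq 1$ is attained at the rank-one projector onto $\xi$ and equals $\abs{\xi}^2$, so that $\rho$ is the Euclidean norm (the AMD case).

```python
import numpy as np

def rho_amd(xi, n_samples=2000, seed=0):
    """Estimate rho(xi) for j(H, xi) = 0.5 * <H xi, xi> with cost c = trace,
    by sampling random PSD matrices H normalized to trace(H) = 1.
    This is a crude Monte Carlo sketch, not the paper's method."""
    rng = np.random.default_rng(seed)
    d = xi.size
    best = 0.0
    for _ in range(n_samples):
        A = rng.standard_normal((d, d))
        H = A @ A.T                       # random positive semi-definite matrix
        H /= np.trace(H)                  # normalize the cost: trace(H) = 1
        best = max(best, xi @ H @ xi)     # value of 2*j(H, xi)
    return np.sqrt(best)                  # rho(xi) = sqrt(max of 2*j)

xi = np.array([3.0, 4.0])
exact = np.linalg.norm(xi)                # the rank-one projector H = w w^T,
approx = rho_amd(xi)                      # w = xi/|xi|, gives rho(xi) = |xi| = 5
print(exact, approx)                      # approx never exceeds exact
```

Random trace-normalized tensors never exceed the rank-one value, in agreement with the maximum being attained on an extreme ray of the admissible set.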
The optimality conditions are then employed in Section \nolinebreak \ref{sec:examples} to give analytical solutions of a simple example of the FMD problem in its different settings. This includes the settings of AMD and IMD, but we also propose the new \textit{Fibrous Material Design} setting (FibMD), where the set $\Hs$ is chosen as the convex hull of the (non-convex) cone of uni-axial Hooke tensors $\hk = a \ \eta \otimes \eta \otimes \eta \otimes \eta$ with $a \geq 0$ and $\eta$ being a unit vector. Remarkably, in the FibMD setting the pair $\Prob$, $\dProb$ represents exactly the Michell problem, as $\rho$ turns out to be the spectral norm, see \cite{strang1983} and \cite{bouchitte2008}. The assumptions (H1)-(H5) allow us to take potentials $j$ beyond the classical $j(\hk,\xi)= \frac{1}{2} \pairing{\hk\,\xi , \xi}$, and in Example \ref{ex:dissymetru_tension_compresion} we demonstrate this possibility by proposing a potential $j_\pm$ that is dissymmetric for tension and compression, while the dependence on $\hk$ is non-linear. The goal of the concluding Section \ref{sec:outlook} is to show that the theory developed in this work finds applications outside elasticity. The framework of the paper \cite{bouchitte2007} is very general and it applies e.g. to Kirchhoff plates. Section \ref{sec:FMD_for_plates} offers a sketch of the adaptation of the FMD theory to Kirchhoff plates, cf. the work \cite{weldeyesus2016} on numerical methods for FMD in plates and shells. Treating the FMD problem in elasticity as a vectorial one (the function $u$ is vector-valued), in Section \ref{sec:FMD_for_heat_cond} we outline the theory of the scalar FMD problem which, in turn, furnishes an optimal field of the conductivity tensor. In analogy to \cite{bouchitte2001} we recognize a connection between the scalar FMD problem and the \textit{Optimal Transport Problem} (cf. \cite{villani2003topics}), which allows us to characterize the optimal conductivity field by means of the optimal transportation plan.
\section{Elasticity framework} \label{sec:elasticity_problem} \subsection{Hooke tensor fields and constitutive law. Strain formulation of elasticity theory} By $\Omega$ we shall understand a bounded open set with Lipschitz boundary, contained in a $d$-dimensional space $\Rd$ (in this work $d = 2$ or $d =3$). The space $\Sdd$ will consist of symmetric 2nd-order tensors representing either the stress or the strain at a point of $\Ob$, the latter being the domain of an elastic body: a plate in case of $d=2$ and a solid for $d=3$. Naturally $\Sdd$ is isomorphic to the space of symmetric $d \times d$ matrices. We will use the symbol $\pairing{\argu,\argu}$ to denote the Euclidean scalar product in any finite dimensional space. In classical elasticity, point-wise in $\Omega$, the anisotropy of the body is characterized by a \textit{Hooke tensor}: a 4th-order tensor that enjoys certain symmetries and is positive semi-definite. In fact the set of Hooke tensors is isomorphic to the subset $\Hf = \bigl\{ \hk \in \LSdd : \hk \text{ is positive semi-definite} \bigr\}$, with $\LSdd$ standing for the space of symmetric operators from $\Sdd$ to $\Sdd$. We thus agree that henceforward we shall (slightly abusing the terminology) speak of Hooke tensors as elements of $\Hf$, which is a closed convex cone in $\LSdd$. For a Hooke tensor $\hk \in \Hf$ the notions of the eigenvalues $\lambda_i(\hk)$ or the trace $\tr\,\hk$ are thus well defined; similarly we may speak of the identity element $\mathrm{Id} \in \Hf$ or a tensor product $A \otimes A \in \Hf$ for $A \in \Sdd$. In the sequel we will restrict the admissible class of anisotropy by admitting Hooke tensors in a chosen subcone of $\Hf$, i.e. \begin{equation*} \Hs \text{ is an arbitrary non-trivial closed convex cone contained in } \Hf.
\end{equation*} We now display some cases of the cones $\Hs$ that will be of interest to us: \begin{example} \label{ex:Hs_symmetry} The subcone $\Hs$ may be chosen so that the condition $\hk \in \Hs$ implies a certain type of anisotropy symmetry; for instance $\Hs = \Hs_{iso}$ will denote the set of isotropic Hooke tensors, for which we have the characterization \begin{equation} \label{eq:iso_K_G} \Hs_{iso} = \left\{\hk \in \Hf \, : \, \hk = d K \biggl( \frac{1}{d}\, \mathrm{I} \otimes \mathrm{I} \biggr) + 2 G\, \biggl( \mathrm{Id}- \frac{1}{d}\, \mathrm{I} \otimes \mathrm{I} \biggr), \ K,G\geq 0 \right\} \end{equation} where by $\mathrm{I} \in \Sdd$ we denote the identity matrix, while $\mathrm{Id}$ is the identity operator in $\mathscr{L}(\Sdd)$. The non-negative numbers $K, G$ are the so-called bulk and shear moduli, respectively. In the case of plane elasticity, i.e. $d=2$, for later purposes we give the relation between these moduli and the pair consisting of the Young modulus $E$ and the Poisson ratio $\nu$: \begin{equation} \label{eq:Young_and_Poisson} E =2 \left(\frac{1}{2K}+\frac{1}{2G} \right)^{-1} = \frac{4 K G}{K+G}, \qquad \nu = \frac{K-G}{K+G}. \end{equation} It must be stressed, however, that some symmetry classes of the Hooke tensor generate cones that are non-convex. This is the case with classes that distinguish directions, e.g. orthotropy or cubic symmetry. \end{example} \begin{example} \label{ex:Hs_Michell} Let us denote by $\Hs_{axial}$ the set of uni-axial Hooke tensors, i.e. \begin{equation*} \Hs_{axial} = \bigl\{ \hk \in \Hf : \hk = a \ \eta \otimes \eta \otimes \eta \otimes \eta, \ a \geq 0, \, \eta \in S^{d-1} \bigr\}, \end{equation*} where by $S^{d-1}$ we mean the unit sphere in $\Rd$. The set $\Hs_{axial}$ is clearly a cone, yet it is non-convex for $d>1$, and thus a natural step is to consider the smallest closed convex cone containing $\Hs_{axial}$, i.e.
its closed convex hull: \begin{equation*} \Hs = \overline{\mathrm{conv}(\Hs_{axial})} = \mathrm{conv}(\Hs_{axial}), \end{equation*} where we have used the fact that in a finite dimensional space the convex hull of a closed cone spanned by a compact set not containing the origin is closed. This family of Hooke tensors relates to materials that are made of one-dimensional fibres. Although $\mathrm{conv}(\Hs_{axial})$ is properly contained in $\Hf$, it contains non-trivial isotropic Hooke tensors. \begin{comment} Indeed, for $d=2$ one may consider a tensor \begin{equation} \hk_{\circledast} = A \sum_{i=1}^3 \eta_i \otimes \eta_i \otimes \eta_i \otimes \eta_i \in \mathrm{conv}(\Hs_{axial}) \end{equation} where the vector $\eta_1$ is an arbitrary unit vector while $\eta_2$ and $\eta_3$ are its rotation about $2\pi/3$ and $-2\pi/3$ respectively; $A$ is any positive number. It is elementary to verify that \begin{equation} \hk_\circledast = \frac{3 A}{4} \bigl(\mathrm{I}\otimes\mathrm{I} + \mathrm{Id} \bigr ) \end{equation} and by comparing with \eqref{eq:iso_2D} we see that $\hk_{\circledast} \in \Hs_{iso}$, namely it is an isotropic Hooke tensor with Poisson ratio $\nu =1/2$ and Young modulus $E = 9\,A/16$. We therefore see that although we have started from uni-axial stiffness tensor set $\Hs_{axial}$ the convexification procedure forces us to work with tensors that essentially lose the one-dimensional features. The situation slightly changes once the vectors $\eta_i$ are required to be mutually orthogonal, i.e. for arbitrary $d$ one may consider $\hk_\oplus = \sum_{i=1}^d A_i\, \eta_i \otimes \eta_i \otimes \eta_i \otimes \eta_i$ with such vectors $\eta_i$ and any non-negative numbers $A_i$. The Hooke tensor $\hk_\oplus$ is orthotropic with zero shear moduli and Poisson ratios. Stiffness tensors of the form $\hk_\oplus$ may therefore be associated with materials made of one-dimensional bars crossing orthogonally.
It will turn out that in a certain scenario of the FMD problem the tensors $\hk_\oplus$ will prove the most efficient amongst the family $\HM$. This scenario shall coincide with the problem of Michell structures (see Section \textcolor{red}{???}). The convexification procedure may be obviously applied to other non-convex families of Hooke tensors, in particular to the set $\Hs_{cubic}$ of tensors of cubic symmetry -- this will be covered in a separate Section \textcolor{red}{??? (must be checked whether it makes sense)}. \end{comment} \end{example} We proceed to narrow down the class of constitutive laws of elasticity that shall be herein considered, i.e. the point-wise relation between the stress tensor $\sig \in \Sdd$ and the strain tensor $\xi \in \Sdd$. We will deal with a family of constitutive relations parametrized by a Hooke tensor $\hk \in \Hs$; therefore the elastic energy potential will depend on two variables: \begin{equation*} j: \Hs \times \Sdd \rightarrow \R; \end{equation*} note that we assume that $j$ cannot admit infinite values. It is important that there is no explicit dependence on the spatial variable $x$. Below we state our assumptions on the function $j$. Throughout the rest of the paper we fix an exponent $p \in (1,\infty)$, while $p' = \frac{p}{p-1}$ will stand for its H\"{o}lder conjugate. We assume that for each $\hk \in \Hs$ the following hold: \begin{enumerate}[(H1)] \item \label{as:convex} the function $j(\hk,\argu)$ is real-valued, non-negative and convex on $\Sdd$; \item \label{as:p-hom} the function $j(\hk,\argu)$ is positively $p$-homogeneous on $\Sdd$; \end{enumerate} whilst for each $\xi \in \Sdd$ the following hold: \begin{enumerate}[(H1)] \setcounter{enumi}{2} \item \label{as:concave} the function $j(\argu,\xi)$ is concave and upper semi-continuous on the closed convex cone $\Hs$; \item \label{as:1-hom} the function $j(\argu,\xi)$ is one-homogeneous on $\Hs$; \item \label{as:elip} there exists $\hk \in \Hs$ such that $j(\hk,\xi) >0$.
\end{enumerate} It is worth stressing that the condition (H\ref{as:elip}), which provides a kind of ellipticity, is weak, as it allows \textit{degenerate} tensors $\hk \in \Hs$ for which there exists a non-zero $\xi \in \Sdd$ such that $j(\hk,\xi) =0$. We shall say that a stress tensor $\sigma \in \Sdd$ and a strain tensor $\xi \in \Sdd$ satisfy the constitutive law of elasticity with respect to a Hooke tensor $\hk \in \Hs$ whenever \begin{equation*} \sigma \in \partial j(\hk,\xi), \end{equation*} where we agree that henceforward the subdifferential $\partial j(\hk,\xi)$ will be taken with respect to the second variable; the Fenchel transform $j^*$ shall later be understood similarly. This way the constitutive law above may be rewritten as the equality $\pairing{\xi,\sigma} = j(\hk,\xi) + j^*(\hk,\sigma)$. \begin{example} The simplest case of a function $j$, for $p=2$, is the one from linear elasticity: \begin{equation*} j(\hk,\xi) = \frac{1}{2} \pairing{\hk \,\xi,\xi}. \end{equation*} It is trivial to see that the assumptions (H\ref{as:convex})-(H\ref{as:1-hom}) are satisfied by the function above. The assumption (H\ref{as:elip}) is virtually a condition on the set $\Hs$, as it has to contain "enough" Hooke tensors. \end{example} Next we state several results that will be useful when dealing with integral functionals with $j$ as the integrand. \begin{proposition} \label{prop:Carath} For a given Radon measure $\mu \in \Mes_+(\Ob)$ let $\hf: \Ob \rightarrow \Hs$ be a $\mu$-measurable tensor-valued function. Then the function $j\bigl(\hf(\argu),\argu\bigr) : \Ob \times \Sdd \rightarrow \R$ is a Carath\'{e}odory function, i.e. \begin{enumerate}[(i)] \item for $\mu$-a.e. $x\in \Ob$ the function $j\bigl(\hf(x),\argu\bigr) $ is continuous; \item for every $\xi \in \Sdd$ the function $j\bigl(\hf(\argu),\xi\bigr)$ is $\mu$-measurable.
\end{enumerate} \end{proposition} \begin{proof} The statement (i) follows easily from the assumption (H\ref{as:convex}), since every convex function that is finite on the whole finite dimensional space (here $\Sdd$) is automatically continuous. We fix $\xi \in \Sdd$; to see that (ii) holds it is enough to show that for arbitrary $\alpha \in \R$ the set $\{x\in \nolinebreak \Ob \,:\, j\bigl(\hf(x),\xi\bigr) <\alpha \} $ is $\mu$-measurable. Due to the upper semi-continuity assumption (H\ref{as:concave}), the set $A =\{\hk \in \Hs \,: \, j\bigl(\hk,\xi\bigr) <\alpha \}$ is open in the topology of $\mathscr{L}(\Sdd)$ relative to $\Hs$. Since $\hf$ is $\mu$-measurable, we obtain that $\hf^{-1}(A)$ is $\mu$-measurable and the thesis follows. \end{proof} In compliance with convex analysis, a convex function restricted to a convex subset of a linear space can be equivalently treated as a function defined on the whole space if extended by $+\infty$. Since the real function $j:\Hs \times \Sdd \rightarrow \R$ is concave with respect to the first variable $\hk$, we can by analogy speak of an extended real function $j:\mathscr{L}(\Sdd) \times \Sdd \rightarrow \Rb = [-\infty,\infty]$ such that $j(\hk,\xi) = -\infty$ for any $\xi \in \Sdd$ and any $\hk \in \mathscr{L}(\Sdd)\backslash \Hs$. \begin{proposition} \label{prop:usc_j} The function $j$ is upper semi-continuous on the product $\LSdd \times \Sdd$, i.e. jointly in the variables $\hk$ and $\xi$. \begin{proof} We fix a pair $(\breve{\hk},\breve{\xi}) \in \LSdd \times \Sdd$. We may assume that $\breve{\hk} \in \Hs$, since otherwise $j(\breve{\hk},\breve{\xi}) = -\infty$ and the thesis follows trivially. Let us take any ball $U \subset \LSdd$ centred at $\breve{\hk}$ and introduce the compact set $K = \overline{U} \cap \Hs$. We observe that for any fixed $\xi \in \Sdd$ the set $\{j(\hk,\xi) : \hk \in \nolinebreak K \}$ is bounded in $\R$.
The zero lower bound follows from the non-negativity of $j \vert_\Hs$, whereas, since $j(\argu,\xi)$ is real-valued, concave and upper semi-continuous on $\Hs$, it achieves its finite maximum on $K$. According to Theorem 10.6 in \cite{rockafellar1970}, the shown point-wise boundedness combined with the convexity of every $j(\hk,\argu)$ implies that the family of functions $\{j(\hk,\argu) : \hk \in K \}$ is equi-continuous on any bounded subset of $\Sdd$. Upon fixing $\eps>0$ we may thus choose $\delta_1>0$ such that \begin{equation*} \abs{j(\hk,\xi) - j(\hk,\breve{\xi})} <\frac{\eps}{2} \qquad \forall\, \xi \in B(\breve{\xi},\delta_1) \subset \Sdd, \quad \forall\, \hk \in K \subset \Hs, \end{equation*} where it must be stressed that $K$ does not depend on $\eps$. Due to the upper semi-continuity of $j(\argu,\breve{\xi})$ we can also choose $\delta_2>0$ for which $B(\breve{\hk},\delta_2) \subset U$ and \begin{equation*} j(\hk,\breve{\xi}) < j(\breve{\hk},\breve{\xi}) + \frac{\eps}{2} \qquad \forall\, \hk \in B(\breve{\hk},\delta_2). \end{equation*} For any pair $(\hk,\xi) \in \bigl(B(\breve{\hk},\delta_2)\cap\Hs \bigr) \times B(\breve{\xi},\delta_1)$ we therefore obtain \begin{equation*} j(\hk,\xi) = j(\hk,\breve{\xi}) + \bigl( j(\hk,\xi) - j(\hk,\breve{\xi}) \bigr) < j(\breve{\hk},\breve{\xi}) +\frac{\eps}{2} + \frac{\eps}{2}, \end{equation*} which proves that $j$ is upper semi-continuous on $\Hs \times \Sdd$, which is a closed convex subset of $\LSdd \times \Sdd$. Extending $j$ by $-\infty$ guarantees its upper semi-continuity on $\LSdd \times \Sdd$. \end{proof} \end{proposition} The elastic properties of a $d$-dimensional body contained in $\Ob$ and, in fact, the shape of the body itself shall be fully determined by a constitutive field, or a \textit{Hooke tensor field}, represented by a $\LSdd$-valued measure $\lambda \in \Mes\bigl(\Ob;\LSdd \bigr)$; we note that $\lambda$ is compactly supported in $\Rd$.
Let $f$ be any norm on the space $\LSdd$ (chosen in the sequel as the cost function); then, according to the Radon-Nikodym theorem, $\lambda$ can be decomposed as follows \begin{equation} \label{eq:lambda_decomp} \lambda = \hf \, \mu, \qquad \mu \in \Mes_+(\Ob), \quad \hf \in L^\infty_\mu\bigl(\Ob;\LSdd \bigr), \quad f(\hf) = 1 \ \ \mu\text{-a.e.}, \end{equation} that is, $\mu$ can be computed as the variation measure $\abs{\lambda}$ with respect to the norm $f$, while $\hf$ is the Radon-Nikodym derivative $\frac{d \lambda}{ d \abs{\lambda}}$. Unless confusion may arise, henceforward $\hf, \mu$ shall always denote the unique decomposition of $\lambda$ as above. This way the information $\lambda$ about the Hooke tensor field has been split into two parts: i) the information on the distribution of the elastic material, i.e. the measure $\mu$, which after \cite{bouchitte2001} shall be called the \textit{mass distribution}; ii) the information on the anisotropy $\hf$. The displacement of the body shall be expressed by a vector-valued function, and although the body is essentially contained in the support of $\mu$, it is convenient to start with displacement fields $u$ defined in the whole of $\Rd$; more precisely $u \in \Dd$, where $\D(\Rd)$ stands for the standard test space of smooth functions, while $\Dd = \D(\Rd;\Rd)$ denotes its $d$ copies. Next, the strain $\eps$ will be a tensor-valued function, the symmetric part of the gradient of $u$: \begin{equation*} \eps = e(u) := \frac{1}{2} \left( \nabla u + (\nabla u)^\top \right) \quad \in \quad \D(\Rd;\Sdd) \ \text{ for } \ u\in \Dd.
\end{equation*} With a Hooke field $\lambda$ fixed, the total strain energy $\Jlam{\lambda}$ of an elastic body is a convex functional on a space of strain functions; more accurately, $\Jlam{\lambda}: L^p_\mu\bigl(\Ob;\Sdd \bigr) \rightarrow \Rb$ and it is defined as follows \begin{equation} \label{eq:J_lambda} \Jlam{\lambda}(\eps) := \int j\bigl(\lambda,\eps\bigr) = \int j\bigl(\hf(x),\eps(x) \bigr) \mu(dx), \end{equation} where we have utilised the one-homogeneity of $j$ with respect to the first argument. We note that $J_\lambda$ is proper (it does not admit $-\infty$ anywhere and is not constantly $\infty$), and in fact non-negative, if and only if $\hf(x) \in \Hs$ for $\mu$-a.e. $x \in \Ob$. Indeed, for any $\hk \notin\Hs$ and arbitrary $\xi \in \Sdd$ one obtains $j(\hk,\xi) = - \infty$. Therefore, although formally $\lambda \in \Mes\bigl(\Ob;\LSdd \bigr)$, the condition that $\Jlam{\lambda}$ be finite will virtually force the Hooke function $\hf$ to lie point-wise in $\Hs$, as desired. It is elementary that the convex functional $J_\lambda$ is weakly lower semi-continuous on $L^p_\mu\bigl(\Ob;\Sdd \bigr)$. In the process of optimization the Hooke field $\lambda$ will play the role of the design variable, and hence we must examine the weak-* upper semi-continuity of the concave functional $J_{(\argu)}(\eps): \MesH \rightarrow \Rb$ for a fixed continuous function $\eps$. It is natural to consider the convex functional $-J_{(\argu)}(\eps)$ instead, yet the issue with utilizing the classical theorems (see e.g. \cite{reshetnyak1968}) to show its lower semi-continuity is that $-J_{(\argu)}(\eps)$ admits negative values. \begin{proposition} \label{prop:usc_J} Let us take any $\eps \in C\bigl(\Ob;\Sdd \bigr)$; then the functional $J_{(\argu)}(\eps)$ is weakly-* upper semi-continuous in the space $\MesH$.
\begin{proof} The idea is to show that there exists a continuous function $G: \Sdd \rightarrow \left(\LSdd\right)^* \equiv \LSdd$ such that for every $\xi \in \Sdd$ we obtain a majorization $\pairing{G(\xi),\argu} \geq j(\argu,\xi)$ on $\LSdd$. Once this is established we define $g:\Ob \times \LSdd \rightarrow \Rb$ by \begin{equation*} g(x,\hk) := \pairing{G\bigl(\eps(x) \bigr),\hk} - j(\hk,\eps(x)). \end{equation*} Since $G$ is continuous and $j$ is upper semi-continuous jointly on $\LSdd \times \Sdd$ by Proposition \ref{prop:usc_j}, we see by the uniform continuity of $\eps$ that the function $g$ is lower semi-continuous jointly on $\Ob \times \nolinebreak \LSdd$. Then, due to the non-negativity of $g$ and its convexity together with positive one-homogeneity with respect to the second variable, it is a classical result (see e.g. \cite{reshetnyak1968} or \cite{bouchitte1988}) that the functional $\lambda \mapsto \int g\bigl(x,\lambda(dx)\bigr)$ is weakly-* lower semi-continuous on $\MesH$. We observe that for any $\eps \in C(\Ob;\Sdd)$ \begin{equation*} J_\lambda(\eps) = \int \pairing{G\bigl(\eps(x) \bigr),\lambda(dx)} - \int g\bigl(x,\lambda(dx)\bigr), \end{equation*} hence the functional $J_{(\argu)}(\eps)$ is a difference of a continuous linear functional (the function $G \circ \nolinebreak \eps:\Ob \rightarrow \LSdd$ is uniformly continuous) and a weakly-* lower semi-continuous functional on $\MesH$, which furnishes the thesis. To conclude the proof we must therefore show the existence of the function $G$. We will work with the function $j^- := - j$ (convex and proper in the first variable) instead, while the function $G:\Sdd \rightarrow \LSdd$ must be its linear minorant in the sense displayed above for the majorant (we keep the symbol $G$ nevertheless). First we show that for every $\xi \in \Sdd$ the proper, convex and l.s.c. function $j^-(\argu,\xi):\LSdd \rightarrow \Rb$ is subdifferentiable at the origin, i.e.
$\partial_1 j^-(0,\xi) \neq \varnothing$ where in this proof by $\partial_1 j^-$ we shall understand the subdifferential with respect to the first argument. By Theorem 23.3 in \cite{rockafellar1970} the scenario $\partial_1 j^-(0,\xi) = \varnothing$ can occur only if there exists a direction $\Delta\hk \in \LSdd$ such that the directional derivative with respect to the first argument $j^-_{\Delta\hk}(0,\xi)$ equals $-\infty$. Since $j^-(\argu,\xi)$ is positively homogeneous our argument for subdifferentiability at the origin amounts to verifying that $j^-(\hk,\xi)> -\infty$ for every $\hk$ on the unit sphere in $\LSdd$. But this is trivial since we know that $j^-(\argu,\xi)$ is proper for each $\xi$. We have thus arrived at a multifunction $\Gamma: \Sdd \ni \xi \mapsto \partial_1 j^-(0,\xi) \in \bigl(2^{\LSdd} \backslash \varnothing\bigr)$ that is convex- and closed-valued. According to Theorem 3.2'' in \cite{michael1956}, in order to show that there exists a continuous selection $G$ of $\Gamma$ it suffices to show that $\Gamma$ is l.s.c. (in the sense of the theory of multifunctions). In turn, Lemma A2 in the appendix of \cite{bouchitte1988} states that $\Gamma$ is l.s.c. if and only if the function $(\Delta \hk,\xi) \mapsto \delta^*\bigl( \Delta \hk \,\vert\, \partial_1 j^-(0,\xi) \bigr)$ is l.s.c. on $\LSdd \times \Sdd$ where $\delta^*$ denotes the support function. Theorem 23.2 in \cite{rockafellar1970} says that $\delta^*\left( \Delta \hk \,\vert\, \partial_1 j^-(0,\xi) \right)$ is exactly the directional derivative of $j^-(\argu ,\xi)$ at $\hk=0$ in the direction $\Delta \hk$, but due to homogeneity of $j^-(\argu ,\xi)$ this derivative is precisely $j^-(\Delta \hk,\xi)$ and everything boils down to showing lower semi-continuity of $j^- = -j$, which is guaranteed by Proposition \ref{prop:usc_j} in this work. \end{proof} \end{proposition} A load that may be applied to an elastic body shall be modelled by a vector-valued measure $\Fl \in \Mes(\Ob;\Rd)$.
We will assume that our body is not kinematically supported (fixed), e.g. on the boundary of $\Omega$, so in order to have equilibrium the load $\Fl$ must be balanced (see the next subsection for details). We give a definition of \textit{elastic compliance} of an elastic body represented by a Hooke field $\lambda \in \MesH$ or, as we shall henceforward write, $\lambda \in \MesHH$: \begin{equation} \label{eq:compliance_def} \Comp(\lambda) := \sup \left\{ \int \pairing{u,F} - \int j\bigl(\lambda,e(u)\bigr) \ : \ u \in \Dd \right\}. \end{equation} The maximization problem in \eqref{eq:compliance_def} may be viewed as a strain formulation of the elasticity problem. The compliance $\Comp(\lambda)$ is always non-negative and it can obviously equal $\infty$ when $\lambda$ is not suitably adjusted to $\Fl$. Naturally, even if $\Comp(\lambda)<\infty$, the maximization problem does not, in general, have a solution in the space of smooth functions. Once $j$ satisfies a suitable ellipticity condition, the relaxed solution may be found in a Sobolev space with respect to the measure $\mu = \abs{\lambda}$, denoted by $W^{1,p}_\mu$, which was proposed in \cite{bouchitte1997} and then developed in e.g. \cite{bouchitte2007}. In this paper it is crucial that $j$ may be degenerate in the sense that, in particular, $j\bigl(\hf(x),\eps(x)\bigr)$ may vanish for some non-zero $\eps \in L^p_\mu(\Ob;\Sdd)$ on a set of non-zero measure $\mu$. The discussed theory cannot therefore be applied to every pair of measures $\Fl$ and $\lambda$. However, it will turn out in Section \ref{sec:FMD_LCP} that the situation is better if $\lambda$ is optimally chosen for $\Fl$. \subsection{Stress formulation of elasticity theory} We begin this subsection with a definition of a field that we shall call a \textit{force flux}.
By a force flux that equilibrates a load $\Fl \in \Mes\bigl(\Ob;\Rd \bigr)$ in a closed domain $\Ob$ we shall understand a tensor-valued measure $\TAU \in \MesT$ that satisfies \begin{equation} \label{eq:eqeq} \DIV \,\tau + F = 0 \end{equation} in the sense of distributions on the whole space $\Rd$. Naturally, the above equation can be equivalently written in the form of the \textit{virtual work principle}: \begin{equation*} \int \pairing{e(\varphi),\TAU} = \int \pairing{\varphi,\Fl} \qquad \forall\, \varphi \in \Dd, \end{equation*} which is almost the definition, up to using the fact that $\pairing{\nabla \varphi(x), \sigma} = \pairing{ e(\varphi)(x), \sigma}$ for all $\sigma \in \Sdd$. It is important to note that $\varphi$ above need not vanish on the boundary $\partial\Omega$ and therefore a Neumann boundary condition is accounted for in \eqref{eq:eqeq}, possibly non-homogeneous once $\Fl$ charges $\partial\Omega$. For existence of a force flux $\TAU$ that equilibrates a load $\Fl$, an assumption on this load is needed: we say that $\Fl$ is \textit{balanced} when one of the two equivalent conditions is satisfied: \begin{enumerate}[(i)] \item the virtual work of $\Fl$ is zero on \textit{the space of rigid body displacement functions} $\mathcal{U}_0$: \begin{equation*} \int \pairing{u_0,F} = 0 \qquad \forall\, u_0 \in \mathcal{U}_0 := \left \{ u \in C^1\bigl(\Ob;\Rd\bigr) \ : \ e(u) = 0 \right\}; \end{equation*} \item $\Fl$ has zero resultant force and moment: \begin{equation*} \int \Fl = 0 \quad \text{in} \quad \Rd \qquad \text{and} \qquad \int \bigl( x_i\, F_j - x_j \, F_i\bigr) = 0 \quad \forall \, i,j \in \{1,\ldots,d\}.
\end{equation*} \end{enumerate} A proof that a solution $\TAU$ of \eqref{eq:eqeq} exists if and only if $\Fl$ is balanced may be found in \cite{bouchitte2008}.\textit{ Henceforward we shall assume that the load $\Fl$ is indeed balanced.} For an elastic body represented by a Hooke field $\lambda \in \MesHH$ and subjected to a balanced load $\Fl \in \MesF$ we derive the dual problem to \eqref{eq:compliance_def} with one of the Fenchel transformations performed in the duality pairing $\bigl(L^p_\mu(\Ob;\Sdd), L^{p'}_\mu(\Ob;\Sdd)\bigr)$ where $\mu = \abs{\lambda}$ and $\hf = \frac{d\lambda}{d\mu}$: \begin{equation} \label{eq:dual_comp} \Comp(\lambda) = \min \left\{ \int j^*\bigl(\hf(x),\sigma(x) \bigr) \, d\mu \ : \ \sigma \in L^{p'}_\mu(\Ob;\Sdd), \ -\DIV ( \sigma \mu ) =\Fl \right\} \end{equation} where $j^*$ denotes the Fenchel conjugate with respect to the second variable. Upon acknowledging Proposition \ref{prop:Carath} and the fact that the functional $\int j(\lambda,\argu)\, d\mu : L^p_\mu(\Ob;\Sdd) \rightarrow \R$ is continuous, we find that the duality argument is a standard application of the scheme from Chapter III in \cite{ekeland1999}, hence we do not display the details. We note that as a part of \eqref{eq:dual_comp} we claim that $\Comp(\lambda) < \infty $ and that the minimizer exists, which is true for balanced $\Fl$. We observe that \eqref{eq:dual_comp} may be considered a dual definition of compliance while the minimization problem itself is a stress-based formulation of the elasticity problem. \begin{comment} \begin{remark} \label{rem:abs_cont_tau} In the duality process we could opt for another duality pairing, namely for $\pairing{C(\Ob;\Sdd),\mathcal{M}(\Ob;\Sdd)}$. Let us for $J_\lambda:C(\Ob;\Sdd) \rightarrow \Rb $ defined in \eqref{eq:J_lambda} denote by $J^*_\lambda:\MesT \rightarrow \Rb$ its Fenchel transform in this very pairing.
We arrive at another dual problem of the form: \begin{equation} \label{eq:dual_comp_alt} \Comp(\lambda) = \min \biggl\{ J^*_\lambda(\tau) \ : \ \tau \in \MesT, \ \DIV\,\tau +\Fl =0 \biggr\}. \end{equation} Since in the duality pairing $\pairing{C(\Ob;\Sdd),\mathcal{M}(\Ob;\Sdd)}$ we do not account for the elastic body given by $\lambda$ we end up with problem \eqref{eq:dual_comp_alt} where the body is not present in the constraints and thus we look for an abstract force flux $\tau$ with no explicit link to the elastic body. The situation in \eqref{eq:dual_comp} is seemingly different as we limit our search to force fluxes $\sigma \mu$, i.e. force fluxes having density with respect to the charge of the elastic body $\mu = \abs{\lambda}$. An elementary argument nevertheless shows that $J^*_\lambda(\tau) <\infty$ only if $\tau \ll \mu$ and for those force fluxes $J^*_\lambda(\tau)$ reduces to the integral in \eqref{eq:dual_comp}. Hence, in a way the two problems share the effective domain. This reasoning shows that $\mu = \abs{\lambda}$ is the right choice for the charge of the body spoken of above, while $\sigma$ in \eqref{eq:dual_comp} is an objective stress in the elastic body. \end{remark} \end{comment} \section{The Free Material Design problem} \label{sec:FMD_problem} \subsection{Formulation of the optimal design problem} In the optimization problem herein considered the Hooke field $\lambda \in \MesHH$ will play the role of the design variable. The natural constraint on $\lambda$ will be the bound on the total cost, therefore we must choose a cost integrand $\cost:\Hs \rightarrow \R_+$ that satisfies essential properties: convexity, positive homogeneity, lower semi-continuity on $\Hs$ and $\cost(\hk) = 0 \Leftrightarrow \hk=0$. Since $\Hs$ is a closed convex cone consisting of positive semi-definite tensors, for every non-zero $\hk \in \nolinebreak \Hs$ necessarily $-\hk \notin \Hs$.
Then it is easily seen that every such function $\cost$ extends to a norm on the whole space $\LSdd$. It is thus suitable that \begin{equation*} \text{we choose the cost function } \cost\text{ as the restriction of any norm on } \LSdd \text{ to } \Hs. \end{equation*} \begin{example} In the pioneering work on the Free Material Design problem \cite{ringertz1993} the cost function $\cost$ was proposed as the trace function, i.e. \begin{equation*} \cost(\hk) = \tr \, \hk = \sum_{i=1}^{N(d)} \lambda_i(\hk) \qquad \forall\, \hk \in \Hs \end{equation*} where $\lambda_i(\hk)$ denotes the $i$-th eigenvalue of the tensor (in fact a symmetric operator) $\hk$; $N(d) =\frac{1}{2}\, d\, (1 + d)$ is the dimension of the space of symmetric tensors $\Sdd$ (e.g. $N(2)=3$ and $N(3)=6$). Note that $\tr:\Hs \rightarrow \R_+$ may be extended to the whole space $\LSdd$ by $ \sum_{i=1}^{N(d)} \abs{\lambda_i(\hk)}$, which is the norm dual to the spectral one. This is an exceptional example of a cost function $\cost$ for it is linear on $\Hs$. \end{example} Our problem of designing in a domain $\Ob$ an optimal elastic body which equilibrates a balanced load $\Fl$ can be readily posed as a compliance minimization problem: \vspace{5mm} \begin{equation} \label{eq:FMD_problem} \FMD \qquad \qquad \qquad \Cmin = \min \biggl\{ \Comp(\lambda) \ : \ \lambda \in \MesHH, \ \int \cost(\lambda) \leq \Totc \biggr\} \qquad \qquad \qquad \end{equation} \vspace{0mm} \noindent which, due to the point-wise free choice of anisotropy $\hf(x) = \frac{d\lambda}{d\abs{\lambda}}(x) \in \Hs$, receives the name \textit{Free Material Design problem} (FMD). The positive number $\Totc$ is the maximal cost of an elastic body. In the decomposition \eqref{eq:lambda_decomp} the function $f$ could be any norm on $\LSdd$, therefore at this point it is convenient to assume $f=c$ and henceforward by a pair $\hf$, $\mu$ we shall always understand the decomposition $\lambda = \hf \mu$ with $\cost(\hf) = 1\ $ $\mu$-a.e.
This way the constraint can be rewritten as $\int \cost(\lambda) = \int \cost(\hf) \, d\mu = \int d\mu \leq \Totc$ which is a constraint on the total mass of the elastic body, cf. \cite{bouchitte2001}. By recalling the definition \eqref{eq:compliance_def} of the compliance we find that, as a point-wise supremum of a family of convex and weakly-* lower semi-continuous functionals on $\MesH$ (see Proposition \ref{prop:usc_J}), $\Comp$ \nolinebreak is itself convex and weakly-* l.s.c. Since $c$ is a norm, in $\FMD$ we are actually performing minimization over a bounded set that is also weakly-* closed (by lower semi-continuity of $\lambda \mapsto \int \cost(\lambda)$), thus weakly-* compact in $\MesH$. We infer that our problem has a solution $\check\lambda$ whenever $\Cmin$ is finite, which is indeed the case for a balanced load $\Fl$. \subsection{Reduction of the Free Material Design problem to a Linear Constrained Problem} \label{sec:from_FMD_to_LCP} With the definition \eqref{eq:compliance_def} of $\Comp(\lambda)$ plugged explicitly into the $\FMD$ problem we arrive at a min-max problem: \begin{equation} \label{eq:min-max} \Cmin = \inf\limits_{\substack{\lambda \in \Mes(\Ob;\Hs), \\ \int \cost(\lambda) \leq \Totc^{}} } \sup\limits_{\ u\in \mathcal{D}(\Rd;\Rd)} \ \biggl\{ \int \pairing{u,F} - \int j\bigl(\lambda,e(u)\bigr) \biggr\}. \end{equation} By acknowledging Proposition \ref{prop:usc_J} from this work we easily verify the assumptions of Proposition 1 in \cite{bouchitte2007}, which allows us to interchange $\inf$ and $\sup$ above. We may thus formulate a variant of Theorem 1 from \cite{bouchitte2007}, but first we introduce some additional notions. The function $\jh :\Sdd \rightarrow \R$ shall represent a strain energy that is maximal with respect to admissible anisotropy represented by a Hooke tensor $\hk \in \Hs$ of unit $\cost$-cost: \begin{equation} \label{eq:jh_def} \jh(\xi) := \sup\limits_{\hk \in \Hc} j(\hk,\xi), \qquad \Hc := \bigl\{\hk \in \Hs \ : \ \cost(\hk) \leq 1 \bigr\}.
\end{equation} As a point-wise supremum of a family of convex functions $\left\{j(\hk,\argu) : \hk \in \Hc \right\}$ the function $\jh$ is convex as well. Furthermore, since each $j(\hk,\argu)$ is positively $p$-homogeneous by assumption (H\ref{as:p-hom}), the function $\jh$ inherits this property. Next, due to concavity and upper semi-continuity of $j(\argu,\xi)$ together with compactness of $\Hc$, we see that $\jh(\xi) = \max\limits_{\hk \in \Hc} j(\hk,\xi) = j(\bar{\hk}_\xi,\xi)$ for some $\bar{\hk}_\xi \in \Hc$ and in particular $\jh$ is finite on $\Sdd$. It is natural to define \begin{equation*} \Hch(\xi) := \bigl\{ \hk \in \Hc : \jh(\xi) = j(\hk,\xi) \bigr\} \end{equation*} being a non-empty, convex and compact subset of $\Hc$ for every $\xi\in \Sdd$. The short over-bar $\bar{\argu}$ will be consistently used in the sequel to stress maximization with respect to a Hooke tensor or field and should not be confused with the long over-bar $\overline{\argu}$ denoting e.g. the closure of a set. We have just shown that $\jh$ is a convex, continuous and positively $p$-homogeneous function and it is well-known (see e.g. Corollary 15.3.1 in \cite{rockafellar1970}) that it can be written as \begin{equation} \label{eq:jh_rho} \jh(\xi) = \frac{1}{p} \bigl( \ro(\xi) \bigr)^p, \end{equation} where $\ro:\Sdd \rightarrow \R_+$ is a positively one-homogeneous function. From the ellipticity assumption (H5) it follows that $\jh(\xi) = 0$ if and only if $\xi = 0$ and the same holds for $\rho$. It is thus straightforward that: \begin{proposition} \label{prop:rho} The function $\ro: \Sdd \rightarrow \R_+$ is a finite, continuous, convex positively one-homogeneous function that satisfies for some positive constants $C_1,C_2$ \begin{equation*} C_1 \abs{\xi} \leq \ro(\xi) \leq C_2 \abs{\xi} \qquad \forall\, \xi\in \Sdd.
\end{equation*} \begin{comment} In other words, $\ro$ is a norm on $\Sdd$ if and only if it is symmetric with respect to the origin (is one-homogeneous instead of just positively one-homogeneous). \end{comment} \end{proposition} Our theorem can be readily stated: \begin{theorem} \label{thm:problem_P} For a balanced load $\Fl \in \MesF$ the minimum value of compliance in the $\FMD$ problem \eqref{eq:FMD_problem} equals \begin{equation} \label{eq:Cmin} \Cmin = \frac{1}{p' \, \Totc^{p'-1}}\ Z^{\,p'} \end{equation} where we introduce an auxiliary variational problem with a linear objective \vspace{5mm} \begin{equation} \label{eq:Prob} \Prob \qquad \qquad Z := \sup \biggl\{ \int \pairing{u,F} \ : \ u\in \Dd, \ \ro\bigl( e(u) \bigr) \leq 1 \text{ point-wise in } \Omega \biggr\}. \qquad \qquad \end{equation} \vspace{0mm} \begin{proof} Upon swapping $\inf$ and $\sup$ in \eqref{eq:min-max} the latter may be rewritten as \begin{equation*} \Cmin = \sup\limits_{\ u\in \mathcal{D}(\Rd;\Rd)} \ \biggl\{ \int \pairing{u,F} - \bar{J}\bigl(e(u) \bigr) \biggr\} \end{equation*} where for any continuous strain field $\eps \in C(\Ob;\Sdd)$ we have \begin{equation*} \bar{J}(\eps) = \sup\limits_{\lambda \in \Mes(\Ob;\Hs), \ \int \cost(\lambda) \leq \Totc^{} \ } J_\lambda(\eps) = \sup\limits_{\substack{\mu \in \Mes_+(\Ob), \ \int d\mu \,\leq \Totc^{} \\ \hf \in L^1_\mu(\Ob;\Hs), \ \cost(\hf) = 1} } \int j(\hf,\eps)\, d\mu, \end{equation*} where we decomposed $\lambda$ as $\hf \mu$ with $c(\hf) = 1\ $ $\mu$-a.e. (the symbol $\bar{J}$ is not to be confused with the l.s.c. regularization of some functional $J$). Further we fix the strain field $\eps$. For any pair $\hf,\mu$ admissible above we easily find the estimate \begin{equation*} \int j(\hf,\eps)\, d\mu \leq \int \jh(\eps) \, d\mu \leq \, \norm{\jh(\eps)}_{L^\infty(\Ob)} \int d\mu\, \leq \Totc \,\norm{\jh(\eps)}_{L^\infty(\Ob)} \end{equation*} which yields $\bar{J}(\eps) \leq \Totc \,\norm{\jh(\eps)}_{L^\infty(\Ob)}$.
We shall show that the RHS of this inequality is attained by a certain competitor $\bar{\lambda}_\eps$. By Proposition \ref{prop:rho} we see that due to continuity of $\eps$ on $\Ob$ the function $\jh\bigl(\eps(\argu) \bigr)$ is continuous on the compact set $\Ob$ as well and thus there exists $\bar{x} \in \Ob$ such that $\norm{\jh(\eps)}_{L^\infty(\Ob)} = \jh\bigl(\eps(\bar{x}) \bigr)$. With a strain function $\eps$ fixed we propose $\bar{\lambda}_\eps = \Totc\, \bar{\hk}_{\eps(\bar{x})} \, \delta_{\bar{x}}$ where $\bar{\hk}_{\eps(\bar{x})}$ is any Hooke tensor from the non-empty set $\Hch\bigl(\eps(\bar{x}) \bigr)$ and $\delta_{\bar{x}}$ is a Dirac delta measure at $\bar{x}$. It is trivial to check that $\int \cost(\bar{\lambda}_\eps) =\Totc$ whilst \begin{equation*} J_{\bar{\lambda}_\eps}(\eps) = \Totc \int j\bigl(\bar{\hk}_{\eps(\bar{x})}, \eps(x) \bigr) \, \delta_{\bar{x}}(dx) = \Totc\, j\bigl(\bar{\hk}_{\eps(\bar{x})}, \eps(\bar{x}) \bigr) = \Totc \, \jh\bigl(\eps(\bar{x}) \bigr) = \Totc \, \norm{\jh(\eps)}_{L^\infty(\Ob)}, \end{equation*} which proves that indeed $\bar{J}(\eps) =\Totc \, \norm{\jh(\eps)}_{L^\infty(\Ob)}$ and further that also $\bar{J}(\eps) =\frac{\Totc}{p} \, \left(\norm{\ro(\eps)}_{L^\infty(\Ob)} \right)^p$.
\begin{comment} &= \sup\limits_{\substack{ u_1\in \mathcal{D}(\Rd;\Rd),\\ t\geq 0}} \biggl\{ \int \pairing{t\,u_1,F} - \frac{\Totc}{p} \, \left(\norm{\ro\bigl(e(t\,u_1)\bigr)}_{L^\infty(\Ob)} \right)^p \ : \ \norm{\ro\bigl(e(u_1)\bigr)}_{L^\infty(\Ob)} = 1\biggr\}\\ \end{comment} Next we use a technique that was already applied in \cite{golay2001}: by the substitution $u = t \,u_1$ we obtain \begin{alignat*}{1} \Cmin &= \sup\limits_{\ u\in \mathcal{D}(\Rd;\Rd)} \ \biggl\{ \int \pairing{u,F} - \frac{\Totc}{p} \, \left(\norm{\ro\bigl(e(u)\bigr)}_{L^\infty(\Ob)} \right)^p \biggr\}\\ & = \sup\limits_{\substack{ u_1\in \mathcal{D}(\Rd;\Rd),\, t\geq 0}} \biggl\{ \biggl(\int \pairing{u_1,F}\biggr) \,t - \frac{\Totc}{p} \, t^p \ : \ \norm{\ro\bigl(e(u_1)\bigr)}_{L^\infty(\Ob)} = 1\biggr\}\\ &= \sup\limits_{u_1\in \mathcal{D}(\Rd;\Rd)} \biggl\{\frac{1}{p' \, \Totc^{p'-1}}\ \biggl(\int \pairing{u_1,F}\biggr)^{\,p'} \ : \ \norm{\ro\bigl(e(u_1)\bigr)}_{L^\infty(\Ob)} \leq 1\biggr\}, \end{alignat*} where, under the assumption that $\int \pairing{u_1,F}$ is non-negative, in the last step we have computed the maximum with respect to $t$, which was attained at $\bar{t} = \bigl(\frac{\int \pairing{u_1,F}}{\Totc}\bigr)^{p'-1}$. Since the power function $(\argu)^{p'}$ is increasing for non-negative arguments the claim follows. \end{proof} \end{theorem} Following the contribution \cite{bouchitte2007} we move on by deriving the problem dual to $\Prob$; in contrast to the duality applied in \eqref{eq:dual_comp} the natural duality pairing here is $\pairing{C(\Ob;\Sdd),\mathcal{M}(\Ob;\Sdd)}$. Again the duality argument is standard up to noting that for any $\TAU \in \nolinebreak \MesT$ \begin{equation} \label{eq:duality_of_integral} \int \dro(\TAU) = \sup \biggl\{ \int \pairing{\eps,\TAU} \ : \ \eps \in C(\Ob;\Sdd),\ \ro(\eps) \leq 1 \text{ in } \Ob \biggr\}; \end{equation} the reader is referred to e.g. \cite{bouchitte1988} for the proof.
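The one-dimensional maximization over $t$ carried out in the proof above can be confirmed numerically. A minimal sketch in Python (an illustration outside the paper's formalism; the values of $Z$, $\Totc$ and $p$ are arbitrary), comparing a dense grid search against the closed-form maximizer $\bar{t}$ and the resulting value $\frac{1}{p'\,\Totc^{p'-1}}Z^{p'}$:

```python
import numpy as np

# Verify: sup_{t>=0} { Z*t - (C/p)*t**p } = Z**pp / (pp * C**(pp-1)),
# attained at t = (Z/C)**(pp-1), where pp = p' = p/(p-1).
Z, C, p = 2.0, 3.0, 3.0                    # arbitrary illustrative values
pp = p / (p - 1.0)                         # conjugate exponent p'

t = np.linspace(0.0, 10.0, 2_000_001)      # dense grid search on [0, 10]
f = Z * t - (C / p) * t**p
t_grid, val_grid = t[np.argmax(f)], f.max()

t_star = (Z / C) ** (pp - 1.0)             # closed-form maximizer from the proof
val_star = Z**pp / (pp * C ** (pp - 1.0))  # closed-form maximum value

assert abs(t_grid - t_star) < 1e-4
assert abs(val_grid - val_star) < 1e-8
```

The same one-variable trick reappears in the finite-dimensional chain \eqref{eq:chain_sup} below.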
The function $\dro:\Sdd \rightarrow \R_+$ represents the function polar to $\ro$, namely, for a stress tensor $\sig \in \Sdd$ \begin{equation*} \dro(\sig) = \sup\limits_{\xi \in \Sdd}\biggl\{ \pairing{\xi,\sigma} \ : \ \ro(\xi) \leq 1 \biggr\} \end{equation*} where we recall that $\pairing{\argu,\argu}$ stands for the Euclidean scalar product in $\Sdd$. With the use of \eqref{eq:duality_of_integral} a standard duality argument (cf. Chapter III in \cite{ekeland1999}) readily produces the dual to the problem $\Prob$: \vspace{5mm} \begin{equation} \label{eq:dProb} \dProb \qquad \qquad \qquad Z = \min \biggl\{ \int \dro(\TAU) \ : \ \TAU \in \MesT, \ -\DIV \, \TAU = \Fl \biggr\} \qquad \qquad \end{equation} \vspace{0mm} \noindent and we emphasize that existence of a solution $\hat{\TAU}$ is part of the duality result ($\Fl$ is assumed to be balanced). After \cite{bouchitte2007} the pair of mutually dual problems $\Prob$ in \eqref{eq:Prob} and $\dProb$ in \eqref{eq:dProb} will be named a \textit{Linear Constrained Problem} (LCP). \begin{comment} The weak-* lower semi-continuity of the functional $\int \dro(\argu)$ stems from \eqref{eq:duality_of_integral}, while its coercivity is guaranteed by the opening statement in Theorem \ref{thm:rho_drho}; it is also elementary to show that the affine subspace $\left\{\TAU \in \MesT \ : \ \DIV \, \TAU + \Fl =0 \right\}$ is closed in weak-* topology -- existence of the solution follows independently of the duality argument. \end{comment} \subsection{Designing the anisotropy at a point} \label{sec:anisotropy_at_point} The function $\jh$ and therefore also the function $\ro$ (see \eqref{eq:jh_rho}) are expressed via the finite-dimensional programming problem \eqref{eq:jh_def}, in which the function $j$ enters. It is thus a natural step to examine how the polar $\dro$ depends on $j$ or, as it will turn out, on $j^*$.
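The polar function can be probed numerically for a concrete gauge. A minimal sketch in Python, identifying $\Sdd$ with $\R^3$ and taking the illustrative choice $\ro(\xi) = \abs{A\xi}$ for a symmetric positive-definite matrix $A$ (this particular gauge is an assumption for the demonstration, not the paper's setting), for which the polar is known to be $\dro(\sig) = \abs{A^{-1}\sig}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric positive-definite weight defining the gauge rho(xi) = |A xi|
A = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.2],
              [0.0, 0.2, 1.0]])

def rho_polar_sampled(sigma, n=200_000):
    """Monte-Carlo lower estimate of sup{ <xi,sigma> : rho(xi) <= 1 }."""
    d = rng.standard_normal((n, 3))
    xi = d / np.linalg.norm(A @ d.T, axis=0)[:, None]   # points with rho(xi) = 1
    return (xi @ sigma).max()

sigma = np.array([1.0, -2.0, 0.5])
exact = np.linalg.norm(np.linalg.solve(A, sigma))       # |A^{-1} sigma| (A symmetric)
approx = rho_polar_sampled(sigma)

assert approx <= exact + 1e-9            # sampled value never exceeds the polar
assert abs(approx - exact) < 0.01 * exact
```

The sampled supremum approaches the closed-form polar from below, as it must, since every sample is admissible for the defining maximization.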
By definition of the polar $\dro$, for any pair $(\xi,\sig) \in \Sdd\times\Sdd$ there always holds $\pairing{\xi,\sig} \leq \ro(\xi) \, \dro(\sig)$; we shall say that such a pair satisfies the \textit{extremality condition} for $\rho$ and its polar whenever this inequality holds as an equality. One of the main results of this subsection will state that this extremality condition is equivalent to satisfying the constitutive law $\sig/\dro(\sig) \in \partial j(\breve{\hk},\xi)$ for some $\breve{\hk} \in \Hc$ optimally chosen for $\sigma$. This link will be utilized while formulating the general optimality conditions for the $\FMD$ problem in Section \ref{sec:optimality_conditions}. Beforehand we investigate the properties of the Fenchel conjugate $j^*$; by its definition, for a fixed $\hk \in \Hs$ we get a function $j^*(\hk,\argu) : \Sdd \rightarrow \Rb$ expressed by the formula \begin{equation} \label{eq:Fenchel_conj_def} j^*(\hk,\sig) = \sup\limits_{\zeta \in \Sdd} \bigl\{ \pairing{\zeta,\sig} - j(\hk,\zeta) \bigr\}. \end{equation} It is well-established that $j^*(\hk,\argu)$ is convex and l.s.c. on $\Sdd$; it is also proper and non-negative for each $\hk \in \Hs$ since $j(\hk,\argu)$ is real-valued and equals zero at the origin. For a given $\hk \in \Hs$, however, $j^*(\hk,\argu)$ may admit infinite values: take for instance $\hk\in \Hs$ that is a singular tensor and $j(\hk,\xi) = \frac{1}{2} \pairing{\hk\,\xi,\xi}$, then $j^*(\hk,\sig) = \infty$ for any $\sig \not\perp \Ker\, \hk$. Furthermore it is well-established that $j^*(\hk,\argu)$ is positively $p'$-homogeneous. Next, again for a fixed $\hk \in \Hs$, we look at the subdifferential $\partial j(\hk,\argu) : \Sdd \rightarrow 2^\Sdd$. Almost by definition, for $\xi,\sig \in \Sdd$ \begin{equation} \label{eq:subdiff_Fenchel} \sig \in \partial j(\hk,\xi) \qquad \Leftrightarrow \qquad \pairing{\xi,\sig} \geq j(\hk,\xi) + j^*(\hk,\sig), \end{equation} while the opposite inequality on the RHS, known as Fenchel's inequality, always holds.
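The formula \eqref{eq:Fenchel_conj_def}, the blow-up of the conjugate for a singular Hooke tensor, and the characterization \eqref{eq:subdiff_Fenchel} can all be illustrated numerically in the quadratic case $p=2$, $j(\hk,\xi) = \frac{1}{2}\pairing{\hk\,\xi,\xi}$, with matrices on $\R^3$ standing in for tensors (a sketch under these assumptions, not the paper's general setting):

```python
import numpy as np

rng = np.random.default_rng(1)

# An SPD matrix standing in for a non-degenerate Hooke tensor (p = 2)
B = rng.standard_normal((3, 3))
H = B @ B.T + 3.0 * np.eye(3)

def j(H, zeta):
    return 0.5 * zeta @ H @ zeta              # j(H, .) = (1/2)<H zeta, zeta>

sigma = np.array([1.0, 2.0, -1.0])
j_star = 0.5 * sigma @ np.linalg.solve(H, sigma)   # closed form (1/2)<H^{-1}sigma,sigma>

# j*(H,sigma) majorizes <zeta,sigma> - j(H,zeta) everywhere ...
zetas = 5.0 * rng.standard_normal((100_000, 3))
vals = zetas @ sigma - 0.5 * np.einsum('ni,ij,nj->n', zetas, H, zetas)
assert vals.max() <= j_star + 1e-9

# ... with equality (the subdifferential relation) at zeta* = H^{-1} sigma:
zeta_star = np.linalg.solve(H, sigma)
assert np.isclose(zeta_star @ sigma - j(H, zeta_star), j_star)

# For a singular H the conjugate blows up off the range of H: along a
# kernel direction the objective of the sup grows without bound.
e = np.array([1.0, 0.0, 0.0])
H_sing = np.outer(e, e)                        # rank-one Hooke tensor
v = np.array([0.0, 1.0, 0.0])                  # v in Ker(H_sing), <v,sigma> != 0
t = np.array([1e2, 1e4, 1e6])
print(t * (v @ sigma) - 0.5 * t**2 * (v @ H_sing @ v))  # grows without bound
```

The last two lines mirror the remark in the text: for $\sig \not\perp \Ker\,\hk$ the supremum in \eqref{eq:Fenchel_conj_def} is $+\infty$.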
By recalling positive $p$-homogeneity of $j(\hk,\argu)$ it is well established that the following repartition of energy holds (see e.g. \cite{rockafellar1970}) \begin{equation} \label{eq:repartition} \sig \in \partial j(\hk,\xi) \qquad \Leftrightarrow \qquad \left\{ \begin{array}{l} \pairing{\xi,\sig} = p \ \ j(\hk,\xi),\\ \pairing{\xi,\sig} = p'\, j^*(\hk,\sig). \end{array} \right. \end{equation} \begin{comment} Indeed, the implication $\Leftarrow$ above easily comes from implication $\Leftarrow$ in \ref{eq:subdiff_Fenchel} and the fact that $1/p+1/p'=1$. Conversely, assuming that $\sig \in \partial j(\hk,\xi)$ again by \eqref{eq:subdiff_Fenchel} we get $\pairing{\xi,\sig} - j(\hk,\xi) \geq \sup_{\zeta \in \Sdd} \bigl\{ \pairing{\zeta,\sig} - \nolinebreak j(\hk,\zeta) \bigr\} \geq \sup_{t\geq 0} \bigl\{ \pairing{t\,\xi,\sig} - j(\hk,t\,\xi) \bigr\} = \sup_{t\geq 0} \bigl\{ t\pairing{\xi,\sig} - t^p j(\hk,\xi) \bigr\}$ and the extremality condition for $t$ gives $\pairing{\xi,\sig} = p\, t^{p-1} j(\hk,\xi)$, yet we know that $t = 1$ is the solution by the chain of inequalities and this gives $\pairing{\xi,\sig} = p\, j(\hk,\xi)$, while the second equality is by RHS of \eqref{eq:subdiff_Fenchel} and Fenchel inequality. \end{comment} We can infer more about the function $j^*$. Since $j(\argu,\zeta)$ is concave and u.s.c. for every $\zeta\in \Sdd$ the mapping $(\hk,\sig) \mapsto \pairing{\zeta,\sig} - j(\hk,\zeta)$ is for each $\zeta$ convex and l.s.c. jointly in $\hk$ and $\sig$. As a result the function $j^*:\LSdd \times \Sdd \rightarrow \Rb$ is also jointly convex and l.s.c. as a point-wise supremum with respect to $\zeta$. It is, however, not so clear at this point whether the function $j^*(\argu,\sig)$ is proper for arbitrary $\sig \in \Sdd$, i.e. we question the strength of the assumption (H\ref{as:elip}). A positive answer to this question shall be a part of the theorem that we will state below.
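The repartition of energy \eqref{eq:repartition} is easy to verify numerically for a positively $p$-homogeneous energy. Below, $j(\zeta) = \frac{1}{p}\pairing{H\zeta,\zeta}^{p/2}$ on $\R^3$ with $p=3$ serves as an illustrative stand-in (the particular $H$, $\xi$ and $p$ are arbitrary assumptions for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 3.0
pp = p / (p - 1.0)                  # the conjugate exponent p'

B = rng.standard_normal((3, 3))
H = B @ B.T + np.eye(3)             # SPD matrix standing in for a Hooke tensor

def j(zeta):                        # positively p-homogeneous convex energy
    return (1.0 / p) * (zeta @ H @ zeta) ** (p / 2.0)

xi = np.array([0.5, -1.0, 2.0])
# gradient of j at xi -- the unique element of the subdifferential:
sigma = (xi @ H @ xi) ** (p / 2.0 - 1.0) * (H @ xi)

# first repartition identity: <xi, sigma> = p * j(xi)
assert np.isclose(xi @ sigma, p * j(xi))

# Fenchel-Young holds with equality at xi, so j*(sigma) = <xi,sigma> - j(xi);
# the second identity <xi, sigma> = p' * j*(sigma) then follows:
j_star = xi @ sigma - j(xi)
assert np.isclose(xi @ sigma, pp * j_star)

# sanity check that j_star indeed majorizes <zeta,sigma> - j(zeta):
zetas = 3.0 * rng.standard_normal((20_000, 3))
q = np.einsum('ni,ij,nj->n', zetas, H, zetas)
vals = zetas @ sigma - (1.0 / p) * q ** (p / 2.0)
assert vals.max() <= j_star + 1e-9
```

Both lines of \eqref{eq:repartition} come out exactly, reflecting that $p$-homogeneity pins down the split of $\pairing{\xi,\sig}$ between $j$ and $j^*$.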
Beforehand we give another property of the functions $j$, $j^*$ that we shall utilize later: for any $\hk_1,\hk_2 \in \Hs$ and any $\xi,\sig \in\Sdd$ we have \begin{alignat}{1} \label{eq:super_additiviy_j} j(\hk_1+\hk_2,\xi) &\geq j(\hk_1,\xi) + j(\hk_2,\xi), \\ \label{eq:sub_additivity_j_star} j^*(\hk_1+\hk_2,\sig) &\leq \min\bigl\{ j^*(\hk_1,\sig) , j^*(\hk_2,\sig) \bigr\}. \end{alignat} The first inequality can be obtained by combining concavity and 1-homogeneity of $j(\argu,\xi)$. Next we see that $j^*(\hk_1 + \hk_2,\sig) = \sup_{\zeta \in \Sdd} \bigl\{ \pairing{\zeta,\sig} - j(\hk_1 + \hk_2,\zeta) \bigr\}$, which, with the use of \eqref{eq:super_additiviy_j} and non-negativity of $j$, furnishes \eqref{eq:sub_additivity_j_star}. \begin{theorem} \label{thm:rho_drho} Let $\ro:\Sdd \rightarrow \R_+$ be the real gauge function defined by \eqref{eq:jh_def} and \eqref{eq:jh_rho} and by $\dro$ denote its polar function. Then the polar function is another real gauge function satisfying $\tilde{C}_1 \abs{\sig} \leq \dro(\sig) \leq \tilde{C}_2 \abs{\sigma}$ for some positive $\tilde{C}_1,\tilde{C}_2$ and the following statements hold: \begin{enumerate}[(i)] \item for every stress tensor $\sigma \in \Sdd$ \begin{equation} \label{eq:conjugate_of_jhat} \min\limits_{\hk \in \Hc} j^*(\hk,\sig) = \jc(\sig) = \frac{1}{p'} \bigl(\dro(\sig) \bigr)^{p'} \end{equation} where the continuous function $\jc:\Sdd \rightarrow \R_+$ is the Fenchel conjugate of $\jh = \max_{\hk \in \Hc}j(\hk,\argu)$; \item for a strain tensor $\xi \in \Sdd$ satisfying $\ro(\xi) \leq 1$, an arbitrary non-zero stress tensor $\sig \in \Sdd$ and a Hooke tensor $\breve{\hk} \in \Hc$ the following conditions are equivalent: \begin{enumerate}[(1)] \item the following extremality conditions hold: \begin{equation*} \pairing{\xi,\sig} = \dro(\sig) \qquad \text{and} \qquad \breve{\hk} \in \Hcc(\sig) \end{equation*} where we introduce a non-empty convex compact set of Hooke tensors optimally chosen for $\sig$
\begin{equation*} \Hcc(\sig) := \biggl\{ \hk \in \Hc : j^*(\hk,\sig)= \jc(\sig) = \min\limits_{\tilde\hk \in \Hc} j^*\bigl(\tilde\hk,\sig \bigr) \biggr\}; \end{equation*} \item the constitutive law is satisfied: \begin{equation} \label{eq:extreme_const_law} \frac{1}{\dro(\sig)} \, \sig \in \partial j\bigl(\breve{\hk},\xi \bigr). \end{equation} \end{enumerate} Moreover, for either of the two conditions (1), (2) to hold it is necessary that $\ro(\xi)=1$; \item the following implication is true for every non-zero $\xi,\sig \in \Sdd$ \begin{equation*} \pairing{\xi,\sig} = \ro(\xi) \, \dro(\sig) \qquad \Rightarrow \qquad \Hcc(\sig) \subset \Hch(\xi), \end{equation*} while in general $\Hcc(\sig)$ may be a proper subset of $\Hch(\xi)$. \end{enumerate} \begin{proof} The lower and upper bounds on $\dro$ are a straightforward corollary of the analogous property for $\ro$ stated in Proposition \ref{prop:rho}. For the proof of statement (i) we fix a non-zero tensor $\sig \in \Sdd$; then directly by definition of the Fenchel transform \eqref{eq:Fenchel_conj_def} we obtain \begin{equation*} \inf\limits_{\hk \in \Hc} j^*(\hk,\sig) = \inf\limits_{\hk \in \Hc} \sup\limits_{\zeta \in \Sdd} \bigl\{ \pairing{\zeta,\sig} - j(\hk,\zeta) \bigr\} \end{equation*} thus arriving at a min-max problem of a form very analogous (yet finite-dimensional) to \eqref{eq:min-max}.
Again by Proposition 1 from \cite{bouchitte2007} we may swap the order of infimum and supremum and we arrive at \begin{equation*} \inf\limits_{\hk \in \Hc} j^*(\hk,\sig) = \sup\limits_{\zeta \in \Sdd} \left\{ \pairing{\zeta,\sig} - \sup\limits_{\hk \in \Hc} j(\hk,\zeta) \right\} = \sup\limits_{\zeta \in \Sdd} \biggl\{ \pairing{\zeta,\sig} - \jh(\zeta) \biggr\} = \jc(\sig) \end{equation*} or, by acknowledging that $\jh(\zeta) = \frac{1}{p} \bigl(\ro(\zeta)\bigr)^p$, we recover the well known result: \begin{alignat}{1} \label{eq:chain_sup} \jc(\sig)=\inf\limits_{\hk \in \Hc} j^*(\hk,\sig) &= \sup\limits_{\zeta \in \Sdd} \biggl\{ \pairing{\zeta,\sig} - \frac{1}{p} \bigl(\ro(\zeta)\bigr)^p \biggr\} = \sup\limits_{\substack{\zeta_1 \in \Sdd\\ t\geq 0}} \biggl\{ t \pairing{\zeta_1,\sig} - \frac{t^p}{p} \ : \ \ro(\zeta_1) = 1 \biggr\}\nonumber \\ & = \sup\limits_{\zeta_1 \in \Sdd} \biggl\{ \frac{1}{p'}\abs{\pairing{\zeta_1,\sig}}^{p'} \ : \ \ro(\zeta_1) \leq 1 \biggr\} = \frac{1}{p'} \bigl(\dro(\sig) \bigr)^{p'}, \end{alignat} where the maximization problem with respect to $t$ was solved by $\bar{t}_{\zeta_1} = \abs{\pairing{\zeta_1,\sig}}^{p'-1}$. Since the function $\dro$ is real-valued we have actually shown that $\inf_{\hk \in \Hc} j^*(\hk,\sig)$ is finite, proving that the function $j^*(\argu,\sig)$ is proper for any $\sig$ and thus, due to its convexity and l.s.c., we know that it attains its minimum on the compact set $\Hc$, hence point (i) is proved. We move on to prove statement (ii); we fix $\xi,\sig \in \Sdd$ with $\rho(\xi) \leq 1$ and $\breve{\hk} \in \Hc$; it is not restrictive to assume that $\dro(\sig) = 1$. Let us first assume that (1) holds, i.e. that $\pairing{\xi,\sig} = \dro(\sig) = 1$ and $\breve{\hk}$ is an element of the non-empty set $\Hcc(\sig)$, so that $j^*(\breve{\hk},\sig) = \jc(\sig) = \frac{1}{p'} \bigl(\dro(\sig) \bigr)^{p'}$.
Since $\pairing{\xi,\sig} \leq \ro(\xi) \dro(\sig)$ we necessarily have $\rho(\xi) = 1$ together with \begin{equation*} \pairing{\xi,\sig} = \sup\limits_{\zeta_1 \in \Sdd} \biggl\{ \pairing{\zeta_1,\sig} \ : \ \ro(\zeta_1) \leq 1 \biggr\} \end{equation*} and, since $\bar{t}_\xi= \abs{\pairing{\xi,\sig}}^{p'-1}=1$, we infer that $\bar{t}_\xi \, \xi = \xi $ solves all the maximization problems with respect to $\zeta$ or $\zeta_1$ in the chain \eqref{eq:chain_sup}, the first one in particular. Together with $\breve{\hk} \in \Hcc(\sig)$ this means that $(\breve{\hk},\xi)$ is a saddle point for the functional $(\hk,\zeta) \mapsto \pairing{\zeta,\sig} - j(\hk,\zeta)$, i.e. \begin{alignat*}{2} \pairing{\xi,\sig} - j(\breve{\hk},\xi) &= \max\limits_{\zeta \in \Sdd} \min\limits_{\hk \in \Hc} \biggl\{ \pairing{\zeta,\sig} - j(\hk,\zeta) \biggr\} = \max\limits_{\zeta \in \Sdd} \biggl\{ \pairing{\zeta,\sig} - j(\breve{\hk},\zeta) \biggr\} = j^*(\breve{\hk},\sig)\\ &=\min\limits_{\hk \in \Hc} \max\limits_{\zeta \in \Sdd} \biggl\{ \pairing{\zeta,\sig} - j(\hk,\zeta) \biggr\}= \min\limits_{\hk \in \Hc} \biggl\{ \pairing{\xi,\sig} - j(\hk,\xi) \biggr\} = \pairing{\xi,\sig} - \jh(\xi), \end{alignat*} which furnishes two equalities: $\pairing{\xi,\sig} - j(\breve{\hk},\xi) = j^*(\breve{\hk},\sig)$ and $\pairing{\xi,\sig} - j(\breve{\hk},\xi) = \pairing{\xi,\sig} - \jh(\xi)$ from which we infer that $\sig \in \partial j(\breve{\hk},\xi)$ and, respectively, $\breve{\hk} \in \Hch(\xi)$. The former conclusion gives the implication (1) $\Rightarrow$ (2) while the latter, since $\breve{\hk}$ was an arbitrary element of $\Hcc(\sig)$ and $\pairing{\xi,\sig} = \dro(\sig)$, establishes point (iii) for the case when $\rho(\xi)=1$; then the general setting of (iii) follows by the fact that $\Hch(\xi) = \Hch(t\, \xi)$ for any $t > 0$.
For the implication (2) $\Rightarrow$ (1) we assume that for the triple $\xi,\sig,\breve{\hk}$ with $\rho(\xi)\leq 1$, $\dro(\sig) = 1$, $\breve{\hk} \in \Hc$ the constitutive law \eqref{eq:extreme_const_law} is satisfied. Then, by \eqref{eq:repartition} there holds the repartition of energy: $\pairing{\xi,\sig} = p \, j(\breve{\hk},\xi)$ and $\pairing{\xi,\sig} = p' \, j^*(\breve{\hk},\sig)$. The following chain can be written down: \begin{equation*} 1 = \dro(\sig) = p' \biggl( \frac{1}{p'} \bigl(\dro(\sig)\bigr)^{p'} \biggr) \leq p'\, j^*(\breve{\hk},\sig) = \pairing{\xi,\sig} = p \, j(\breve{\hk},\xi) \leq p\, \biggl( \frac{1}{p} \bigl(\ro(\xi)\bigr)^{p} \biggr) \leq 1 \end{equation*} and therefore all the inequalities above are in fact equalities; in particular we have \begin{equation*} \pairing{\xi,\sig} = \dro(\sig) = \rho(\xi) = 1, \qquad \jc(\sig) = \frac{1}{p'} \bigl(\dro(\sig)\bigr)^{p'} = j^*(\breve{\hk},\sig) \quad \Rightarrow \quad \breve{\hk} \in \Hcc(\sig), \end{equation*} which proves the implication (2) $\Rightarrow$ (1) and the "moreover part" of point (ii), concluding the proof. \end{proof} \end{theorem} \begin{example}[\textbf{Anisotropic Material Design problem}] \label{ex:rho_drho_AMD} We shall compute the functions $\ro$, $\dro$ together with the sets $\Hch(\xi)$, $\Hcc(\sig)$ in the $\FMD$ setting most often discussed in the literature: the \textit{Anisotropic Material Design} (AMD) setting for a linearly elastic body; more precisely we choose \begin{equation} \label{eq:AMD_setting} \Hs = \Hf, \qquad j(\hk,\xi) = \frac{1}{2} \pairing{\hk \,\xi , \xi} \quad (p=2), \qquad \cost(\hk) = \tr\, \hk, \end{equation} i.e. $\Hs$ contains all possible Hooke tensors.
Upon recalling that here $\Hc = \left\{\hk \in \Hf \ : \ \tr \, \hk \leq 1 \right\}$ for each $\hk \in \Hc$ we may write down an estimate \begin{equation} \label{eq:est_upper_AMD} j(\hk,\xi) = \frac{1}{2} \pairing{\hk \,\xi,\xi} \leq \frac{1}{2} \left(\max\limits_{i \in \{1,\ldots,N(d)\}} \lambda_i(\hk) \right) \abs{\xi}^2 \leq \frac{1}{2}\, \bigl(\tr\,\hk \bigr)\, \abs{\xi}^2 \leq \frac{1}{2} \, \abs{\xi}^2 \end{equation} ($\abs{\xi}= \pairing{\xi,\xi}^{1/2}$ denotes the Euclidean norm of $\xi$) and therefore $\frac{1}{2}\bigl(\ro(\xi) \bigr)^2 = \jh(\xi) \leq \frac{1}{2} \, \abs{\xi}^2$. On the other hand, we may define for a fixed non-zero $\xi\in \Sdd$ \begin{equation} \label{eq:H_xi_AMD} \bar{\hk}_\xi = \frac{\xi}{\abs{\xi}} \otimes \frac{\xi}{\abs{\xi}} \end{equation} that is a tensor with only one non-zero eigenvalue being equal to 1 and the corresponding unit eigenvector $\xi / \abs{\xi}$ (in fact a symmetric tensor); obviously we have $\tr \, \bar{\hk}_\xi = 1$ and $j(\bar{\hk}_\xi,\xi) = \frac{1}{2} \pairing{\bar{\hk}_\xi \,\xi,\xi} = \frac{1}{2} \, \abs{\xi}^2$, which shows that in fact $\jh(\xi) = j(\bar{\hk}_\xi,\xi) = \frac{1}{2} \, \abs{\xi}^2$ and hence \begin{equation*} \ro(\xi) = \abs{\xi}, \qquad \frac{\xi}{\abs{\xi}} \otimes \frac{\xi}{\abs{\xi}} \in \Hch(\xi). \end{equation*} It is easy to observe that for a non-zero $\xi$ the tensor $\bar{\hk}_\xi$ is the unique element of the set $\Hch(\xi)$. Indeed, if $\xi /\abs{\xi}$ is kept as one of the eigenvectors of a chosen $\hk \in \Hc$ and $\lambda_1(\hk)$ is the corresponding eigenvalue, then $\hk \neq \bar{\hk}_\xi$ means that $\lambda_1(\hk) <1$ yielding $j(\hk,\xi) = \frac{1}{2} \langle\hk,\bar{\hk}_\xi \rangle \abs{\xi}^2< \frac{1}{2}\abs{\xi}^2$. One may easily verify that we obtain a similar result whenever $\xi /\abs{\xi}$ is not one of the eigenvectors of $\hk$. It is well known that the polar $\dro$ to the Euclidean norm $\ro = \abs{\argu}$ is again this very norm. 
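The rank-one optimality \eqref{eq:H_xi_AMD} lends itself to a quick numerical check in a simplified matrix model: identify $\Sdd$ with $\R^N$, $N = N(d) = d(d+1)/2$, and Hooke tensors with symmetric positive semi-definite $N\times N$ matrices of unit trace. The sketch below is illustrative only and not part of the argument; it verifies that $\bar{\hk}_\xi$ attains $\frac{1}{2}\abs{\xi}^2$ while no other unit-trace PSD tensor does better, in accordance with \eqref{eq:est_upper_AMD}.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
N = d * (d + 1) // 2          # N(d): dimension of the space of symmetric tensors
xi = rng.standard_normal(N)   # a strain tensor, flattened to a vector of R^N

# Rank-one optimal Hooke tensor: H_xi = (xi/|xi|) (x) (xi/|xi|)
u = xi / np.linalg.norm(xi)
H_xi = np.outer(u, u)

assert abs(np.trace(H_xi) - 1.0) < 1e-12                 # unit cost: tr H = 1
j_opt = 0.5 * xi @ H_xi @ xi
assert abs(j_opt - 0.5 * np.linalg.norm(xi)**2) < 1e-10  # attains |xi|^2 / 2

# Any other unit-trace PSD Hooke tensor does no better: j(H, xi) <= |xi|^2 / 2
for _ in range(100):
    A = rng.standard_normal((N, N))
    H = A @ A.T                                          # PSD by construction
    H /= np.trace(H)                                     # normalize cost to 1
    assert 0.5 * xi @ H @ xi <= 0.5 * np.linalg.norm(xi)**2 + 1e-10
```

The bound holds because a PSD matrix of unit trace has maximal eigenvalue at most $1$, exactly the mechanism of estimate \eqref{eq:est_upper_AMD}.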
Furthermore it is obvious that \begin{equation*} \pairing{\xi,\sig} = \ro(\xi)\, \dro(\sig) = \abs{\xi} \, \abs{\sig} \qquad \Leftrightarrow \qquad t_1 \xi = t_2 \sig \ \text{ for some }\ t_1,t_2 \geq 0. \end{equation*} Next, since $\Hch(\xi)$ is a singleton for non-zero $\xi$, for non-zero $\sig$ the point (iii) of Theorem \ref{thm:rho_drho} furnishes: \begin{equation*} \xi_\sig = t \,\sigma \ \text{ for some } \ t>0 \qquad \Rightarrow \qquad \Hcc(\sig) = \Hch(\xi_\sig) = \left\{ \frac{\xi_\sig}{\abs{\xi_\sig}} \otimes \frac{\xi_\sig}{\abs{\xi_\sig}}\right\} = \left\{ \frac{\sig}{\abs{\sig}} \otimes \frac{\sig}{\abs{\sig}}\right\}. \end{equation*} Therefore we have obtained \begin{equation} \label{eq:Hcc_char_AMD} \dro(\sig) = \abs{\sig}, \qquad \Hcc(\sig) = \left\{ \frac{\sig}{\abs{\sig}} \otimes \frac{\sig}{\abs{\sig}}\right\}. \end{equation} The latter results were given in \cite{czarnecki2012}, where they were obtained by solving the problem $\min_{\hk \in \Hc} j^*(\hk,\sig)$ directly. \end{example} \begin{remark} It is worth noting that in general neither the set $\Hch(\xi)$ nor even $\Hcc(\sig)$ is a singleton, see Examples \ref{ex:rho_drho_FibMD} and \ref{ex:rho_drho_IMD} in Section \ref{sec:example_material_settings}. \end{remark} \section{The link between solutions of the Free Material Design problem and\\ the Linear Constrained Problem} \label{sec:FMD_LCP} In Section \ref{sec:from_FMD_to_LCP} we expressed the minimum compliance $\Cmin$ in terms of the value $Z$, which is the supremum in problem $\Prob$ and the infimum in problem $\dProb$. Next we show how a solution of the original Free Material Design problem, the optimal Hooke field $\lambda$ in particular, may be recovered from a solution of the much simpler Linear Constrained Problem, i.e. precisely the mutually dual pair $\Prob$, $\dProb$.
Since the form of (LCP) is identical to the one discussed in the paper \cite{bouchitte2007} the present section may be recognized as a variant of the argument in the former work: herein we must additionally retrieve the optimal Hooke function $\hf$. Since, in contrast to $\dProb$, the problem $\Prob$ in general does not admit a solution, the condition on smoothness of $u$ in $\Prob$ must be relaxed. This was already done in \cite{bouchitte2001} or \cite{bouchitte2007}, therefore we recall the result, only sketching the proof of compactness: \begin{proposition} Let $\Omega$ be a bounded domain with Lipschitz boundary. Then the set \begin{equation} \label{eq:U1_def} \U_1 := \left\{ u \in \Dd : \ro\bigl(e(u)\bigr) \leq \nolinebreak 1 \text{ in } \Omega \right\} \end{equation} is precompact in the quotient space $C(\Ob;\Rd) / \U_0$ where $\mathcal{U}_0 = \left \{ u \in C^1\bigl(\Ob;\Rd\bigr) \, : \, e(u) = 0 \right\}$. \begin{proof}[Outline of the proof] Using Korn's inequality we infer that, up to a function in $\U_0$ (a rigid body displacement function), the set $\U_1$ is a bounded subset of $W^{1,q}(\Omega)$ for any $1 \leq q <\infty$. By taking any $q > d$ we employ Morrey's embedding theorem (which uses Lipschitz regularity of the boundary $\partial \Omega$) to conclude that the H\"{o}lder seminorm $\abs{\argu}_{C^{0,\alpha}}$ is uniformly bounded in $\U_1$ for any exponent $\alpha$ ranging in $(0,1)$, and the claim follows. \end{proof} \end{proposition} The load $\Fl$, throughout assumed to be balanced, satisfies the condition $\int\pairing{u_0,\Fl} = 0$ for any $u_0 \in \U_0$. The result above justifies the following relaxation of the problem $\Prob$: \begin{equation} \label{eq:relProb} \relProb \qquad \qquad Z = \max \biggl\{ \int \pairing{u,F} \ : \ u \in \Uc \biggr\} \qquad \qquad \end{equation} where $\Uc$ stands for the closure of $\U_1$ in the topology of uniform convergence.
The problem $\relProb$ attains a solution that is unique up to a rigid body displacement function $u_0 \in \U_0$. It is worth noting that each $u \in \Uc$ is $\Leb^d$-a.e. differentiable with $e(u) \in L^\infty(\Omega;\Sdd)$, yet still there are $u \notin \mathrm{Lip}(\Omega;\Rd)$ belonging to $\Uc$, which is possible due to the lack of Korn's inequality for $q = \infty$, cf. \cite{bouchitte2008} for details. It remains to relax the other displacement-based problem appearing in this work, i.e. the one of elasticity \eqref{eq:compliance_def}. After \cite{bouchitte2007} we shall say that for the Hooke field $\lambda \in \MesHH$ the function $\check{u} \in C(\Ob;\Rd)$ is a relaxed solution of \eqref{eq:compliance_def} when $\Comp(\lambda)$ admits a maximizing sequence $u_n \in \Dd$ with $u_n \rightrightarrows \check{u}$ in $\Ob$ (uniformly) and $\ro\bigl(e(u_n)\bigr) \leq (Z/\Totc)^{p'/p}$ in $\Ob$. A simple adaptation of the result in point (ii) of Proposition 2 in \cite{bouchitte2007} allows us to infer that such a maximizing sequence exists provided that $\lambda$ is the optimal solution for $\FMD$, which justifies the definition of the relaxed solution. For any force flux $\TAU \in \MesT$ by $\dro(\TAU)$ we understand a positive Radon measure that for any Borel set $B \subset \Rd$ gives $\dro(\TAU)\,(B) := \int_B \dro(\TAU)$, where the integral is intended in the sense of a convex functional on measures (see \eqref{eq:duality_of_integral}). Since $\TAU$ is absolutely continuous with respect to $\dro(\TAU)$ the Radon-Nikodym theorem gives $\TAU = \sig \,\mu$ where $\mu = \dro(\TAU)$ and $\sig = \frac{d\TAU}{d\mu} \in L^1_\mu(\Ob;\Sdd)$; obviously there must hold $\dro(\sig) = 1\ $ $\mu$-a.e. so in fact $\sig \in L^\infty_\mu(\Ob;\Sdd)$. In \cite{bouchitte2007} the authors defined a solution of the Linear Constrained Problem as a triple $u,\mu,\sig$ where $u$ solves $\relProb$ and $\TAU = \sig\mu$ solves $\dProb$ with $\dro(\sig)=1\ $ $\mu$-a.e. For our purpose, i.e.
in order to recover the full solution of the Free Material Design problem, we must speak of optimal quadruples $u,\mu,\sig,\hf$ where for $\mu$-a.e. $x$ the Hooke tensor $\hf(x) \in \Hc$ (of unit $\cost$-cost) is optimally chosen for $\sig(x)$, namely $\hf(x) \in \Hcc\bigl(\sig(x)\bigr)$. Beforehand we must make sure that there always exists such a function $\hf$ that is $\mu$-measurable: \begin{lemma} \label{lem:measurable_selection} For a given Radon measure $\mu \in \Mes_+(\Ob)$ let $\gamma: \Ob \rightarrow \Sdd$ be a $\mu$-measurable function. We consider a closed and convex-valued multifunction $\Gamma_\gamma: \Ob \rightarrow 2^\Hc\backslash \varnothing$ as below \begin{equation*} \Gamma_\gamma(x) := \Hcc\bigl(\gamma(x) \bigr) = \left\{ \hk \in \Hc : \jc\bigl(\gamma(x)\bigr) = j^*\bigl(\hk,\gamma(x)\bigr) \right\}. \end{equation*} Then there exists a $\mu$-measurable selection $\hf_\gamma: \Ob\rightarrow \Hc$ of the multifunction $\Gamma_\gamma$, namely \begin{equation*} \hf_\gamma (x) \in \Hcc\bigl(\gamma(x)\bigr) \qquad \text{for } \mu\text{-a.e. } x \in \Ob. \end{equation*} \begin{proof} It suffices to prove that the multifunction $\sig \mapsto \Hcc(\sig)$ is upper semi-continuous on $\Sdd$. Then it is also a measurable multifunction and thus there exists a Borel measurable selection $\bar{\hk}:\Sdd \rightarrow \Hc$, i.e. $\bar{\hk}(\sig) \in \Hcc(\sig)$ for every $\sig \in \Sdd$, see Corollary III.3 and Theorem III.6 in \cite{castaing1977}. Then $\hf_\gamma : = \bar{\hk} \circ \gamma: \Ob \rightarrow \Hc$ is $\mu$-measurable as a composition of a Borel measurable and a $\mu$-measurable function. By the definition of upper semi-continuity of multifunctions we must show that for any open set $U \subset \Hc$ (open in the relative topology of the compact set $\Hc\subset \LSdd$) the set \begin{equation*} V = \bigl\{ \sig \in \Sdd \ : \ \Hcc(\sig) \subset U \bigr\} \end{equation*} is open in $\Sdd$.
Below the set $U$ is fixed; assuming that $V$ is non-empty we choose arbitrary $\breve{\sig} \in V$. We must show that there exists $\delta >0$ such that $B(\breve{\sig},\delta) \subset V$, which may be rewritten as \begin{equation} \label{eq:lemma_proof_thesis} \text{for every } \sig \in B(\breve{\sig},\delta) \text{ there holds:} \qquad j^*(\hk,\sig) > \jc(\sig) \quad \forall\, \hk \in \Hc \backslash U. \end{equation} We start proving \eqref{eq:lemma_proof_thesis} by observing that there exists $\eps>0$ such that \begin{equation} \label{eq:3eps} \inf_{\tilde\hk \in \Hc \backslash U} j^*(\tilde\hk,\breve{\sig}) > \jc(\breve{\sig}) + 3 \eps. \end{equation} Indeed, the compact set $\Hcc(\breve{\sig})$ is a subset of the open set $U$ and therefore lower semi-continuity of $j^*(\argu,\breve{\sig})$ implies that $\inf_{\tilde\hk \in \Hc \backslash U} j^*(\tilde\hk,\breve{\sig})$ must be greater than $ \jc(\breve{\sig}) = \min_{\tilde\hk \in \Hc} j^*(\tilde\hk,\breve{\sig})$ because otherwise the minimum would be attained in the compact set $\Hc \backslash U$, which is in contradiction with $\Hcc(\breve{\sig}) \subset U$. For a fixed $\eps$ satisfying \eqref{eq:3eps} we shall choose $\delta>0$ so that for every $\sig \in B(\breve{\sig},\delta)$ there hold \begin{equation} \label{eq:cont_jc} \abs{\,\jc(\sig) - \jc(\breve{\sig})} < \eps, \end{equation} \begin{equation} \label{eq:uniform_l.s.c.} j^*(\hk,\sig) \geq \inf_{\tilde\hk \in \Hc \backslash U} j^*(\tilde\hk,\breve{\sig}) - \eps \qquad \forall\, \hk \in \Hc \backslash U. \end{equation} Possibility of choosing $\delta = \delta_1$ so that \eqref{eq:cont_jc} holds follows from continuity of $\jc$ (see point (i) of Theorem \ref{thm:rho_drho}), while estimate \eqref{eq:uniform_l.s.c.}, being uniform in $\Hc \backslash U$, is more involved. 
Since $j^*: \Hc \times \Sdd\rightarrow \Rb$ is lower semi-continuous (jointly in both arguments), for every $\tilde{\hk} \in \Hc$ we may pick $\tilde{\delta}=\tilde{\delta}(\tilde{\hk})>0$ such that \begin{equation*} j^*(\hk,\sig) \geq j^*(\tilde{\hk},\breve{\sig}) - \eps \qquad \forall\, (\hk,\sig) \in B\bigl(\tilde{\hk},\tilde{\delta}(\tilde{\hk})\bigr) \times B\bigl(\breve{\sig},\tilde{\delta}(\tilde{\hk})\bigr). \end{equation*} Since $\Hc\backslash U$ is compact one may choose its finite subset $\{\tilde{\hk}_i\}_{i=1}^m$ such that $\Hc \backslash U \subset \bigcup_{i=1}^m B\bigl(\tilde{\hk}_i,\tilde{\delta}(\tilde{\hk}_i) \bigr)$. By putting $\delta_2 = \min_{i=1}^m \tilde{\delta}(\tilde{\hk}_i)$ we find that \begin{equation*} j^*(\hk,\sig) \geq j^*(\tilde{\hk}_i,\breve{\sig}) - \eps \qquad \forall\, (\hk,\sig) \in B\bigl(\tilde{\hk}_i,\tilde{\delta}(\tilde{\hk}_i)\bigr) \times B\bigl(\breve{\sig},\delta_2\bigr) \qquad \forall\, i\in\{1,\ldots,m\}, \end{equation*} and thus, since the finite family of balls covers $\Hc \backslash U$ \begin{equation*} j^*(\hk,\sig) \geq \min\limits_{ i\in\{1,\ldots,m\}}j^*(\tilde{\hk}_i,\breve{\sig}) - \eps \qquad \forall\, (\hk,\sig) \in \bigl(\Hc\backslash U\bigr) \times B\bigl(\breve{\sig},\delta_2\bigr), \end{equation*} which, by the fact that $\tilde{\hk}_i \in \Hc\backslash U$ for all $i$, furnishes \eqref{eq:uniform_l.s.c.} for any $\sig \in B(\breve{\sig},\delta_2)$. 
We fix $\delta = \min\{\delta_1,\delta_2\}$ to have \eqref{eq:cont_jc} and \eqref{eq:uniform_l.s.c.} all together, which, combined with \eqref{eq:3eps}, give for any $\sig \in B(\breve{\sig},\delta)$ and any $\hk \in \Hc \backslash U$ \begin{equation*} j^*(\hk,\sig) \geq \inf_{\tilde\hk \in \Hc \backslash U} j^*(\tilde\hk,\breve{\sig}) - \eps > \left(\jc(\breve{\sig}) + 3 \eps \right) -\eps = \jc(\breve{\sig})+2\eps > \left(\jc(\sig) - \eps \right) + 2 \eps = \jc(\sig) +\eps, \end{equation*} which establishes \eqref{eq:lemma_proof_thesis} and thus concludes the proof. \end{proof} \end{lemma} The definition of a quadruple solving (LCP) may readily be given: \begin{definition} \label{def:LCP_solution} By a solution of (LCP) we will understand a quadruple: $\hat{u}\in C(\Ob;\Rd),\ \hat{\mu} \in \Mes_+(\Ob),\ \hat{\sig} \in L^\infty_{\hat{\mu}}(\Ob;\Sdd)$ and $\hat{\hf} \in L^\infty_{\hat{\mu}}(\Ob;\Hs)$ such that: $\hat{u}$ solves $\relProb$; $\hat{\TAU} = \hat{\sig} \hat{\mu} \in \MesT$ solves $\dProb$; $\dro(\hat{\sig}) = \nolinebreak 1\ $ $\hat{\mu}$-a.e.; $\hat{\hf}$ is any measurable selection of the multifunction $x \mapsto \Hcc\bigl( \hat{\sig}(x) \bigr)$ which exists by virtue of Lemma \ref{lem:measurable_selection}. 
\end{definition} Then we define a solution of the Free Material Design problem, yet, apart from the Hooke field $\lambda$ being the design variable, we also speak of the stress and the displacement function in the optimal body: \begin{definition} \label{def:FMD_solution} By a solution of (FMD) we will understand a quadruple: $\check{u}\in C(\Ob;\Rd),\ \check{\mu} \in \Mes_+(\Ob),\ \check{\sig} \in L^p_{\check\mu}(\Ob;\Sdd)$ and $\check{\hf} \in L^\infty_{\check{\mu}}(\Ob;\Hs)$ such that: $\check{\lambda} = \check{\hf} \check{\mu} \in \MesHH$ solves the compliance minimization problem $\Cmin$ with $\cost(\check{\hf}) = 1\ $ $\check\mu$-a.e.; $\check{\sig}$ solves the stress-based elasticity problem \eqref{eq:dual_comp} for $\lambda = \check{\lambda}$; $\check{u}$ is a relaxed solution of the displacement based elasticity problem \eqref{eq:compliance_def} for $\lambda = \check{\lambda}$. \end{definition} We give a theorem that links the two solutions defined above: \begin{theorem} \label{thm:FMD_LCP} Let us choose a quadruple $\hat{u}\in C(\Ob;\Rd), \hat{\mu} \in \Mes_+(\Ob), \hat{\sig} \in L^1_{\hat{\mu}}(\Ob;\Sdd)$ and $\hat{\hf} \in L^1_{\hat{\mu}}(\Ob;\Hs)$ and define \begin{equation} \label{eq:link_FMD_LCP} \check{\hf} = \hat{\hf}, \qquad \check{\mu} = \frac{\Totc}{Z}\, \hat{\mu}, \qquad \check{\sig} = \frac{Z}{\Totc}\, \hat{\sig}, \qquad \check{u} = \left(\frac{Z}{\Totc}\right)^{p'/p} \hat{u}. \end{equation} Then the quadruple $\hat{u},\hat{\mu},\hat{\hf},\hat{\sig}$ is a solution of (LCP) if and only if the quadruple $\check{u},\check{\mu},\check{\hf},\check{\sig}$ is a solution of (FMD) problem. 
\end{theorem} Before giving a proof we make an observation that is relevant from the mechanical perspective: \begin{corollary} The stress $\check{\sig}$ that due to the load $\Fl$ occurs in the structure of the optimal Hooke tensor distribution $\check{\lambda} =\check{\hf} \check{\mu}$ is uniform in the sense that \begin{equation} \check{\sig} \in L^\infty_{\check{\mu}}(\Ob;\Sdd), \qquad \dro(\check{\sig})=\frac{Z}{\Totc} \quad \check{\mu}\text{-a.e.} \end{equation} \end{corollary} \begin{proof}[Proof of Theorem \ref{thm:FMD_LCP}] Let us first assume that the quadruple $\hat{u},\hat{\mu},\hat{\hf},\hat{\sig}$ is a solution of (LCP) and the quadruple $\check{u},\check{\mu},\check{\hf},\check{\sig}$ is defined through \eqref{eq:link_FMD_LCP}. By definition $\hat{\TAU} = \hat{\sig} \hat{\mu}$ is a solution of the problem $\dProb$ and $\dro(\hat\TAU) = \dro(\hat\sig) \,\hat \mu = \hat\mu$. Since $\dro(\hat{\sig}) = 1\ $ $\hat\mu$-a.e. it is straightforward that $\cost(\hat{\hf}) = 1\ $ $\hat\mu$-a.e. as well: indeed, $\hk \in \Hcc(\zeta)$ for non-zero $\zeta$ only if $\cost(\hk) = 1$. Obviously the same concerns $\check{\hf}$. We verify that $\check{\lambda} = \check{\hf} \check{\mu}$ is a feasible Hooke tensor field by computing the total cost: \begin{equation} \label{eq:feas_lambda} \int \cost(\check{\lambda}) = \int \cost(\check{\hf})\, d \check{\mu} = \int d \check{\mu} = \frac{\Totc}{Z} \int d \hat{\mu} = \frac{\Totc}{Z} \int \dro(\hat{\TAU}) = \Totc, \end{equation} where we have used that $\hat{\TAU}$ is a minimizer for $\dProb$. In order to prove that $\check{\lambda}$ is a solution for $\Cmin$ it suffices to show that $\Comp(\check{\lambda}) \leq \Cmin$ where $\Cmin = \frac{1}{p' \, \Totc^{p'-1}}\ Z^{\,p'}$ by Theorem \ref{thm:problem_P}. We observe that $\hat\mu$-a.e. $\dro(\check{\sig}) = \frac{Z}{\Totc} \dro(\hat{\sig}) = \frac{Z}{\Totc}$. 
Since there holds $ \check{\sig} \check{\mu} = \bigl( \frac{Z}{\Totc} \hat{\sig}\bigr) \bigl(\frac{\Totc}{Z} \hat{\mu} \bigr) = \hat{\sig} \hat{\mu} = \hat{\TAU}$, obviously the equilibrium equation $-\DIV (\check{\sig} \check{\mu}) =\Fl$ is satisfied. Due to the assumption of $p$-homogeneity (H\ref{as:p-hom}) the field $\check{\hf} = \hat\hf$ is a measurable selection for both $x \mapsto \Hcc\bigl( \hat{\sig}(x) \bigr)$ and $x \mapsto \Hcc\bigl( \check{\sig}(x) \bigr)$. Then, by the dual stress-based version of the elasticity problem \eqref{eq:dual_comp} \begin{equation} \label{eq:min_lambda} \Comp(\check{\lambda}) \leq \int j^* \bigl(\check{\hf},\check{\sig} \bigr)\, d\check{\mu} = \int \jc\bigl(\check{\sig} \bigr) \, d\check{\mu} = \int \frac{1}{p'}\biggl(\dro\bigl(\check{\sig}\bigr)\biggr)^{p'}\, d\check{\mu} = \int \frac{1}{p'}\biggl(\frac{Z}{\Totc}\biggr)^{p'}\, d\check{\mu} =\Cmin, \end{equation} where in the first equality we have used the fact that $\check{\hf}(x) \in \Hcc\bigl( \check{\sig}(x) \bigr)$ for $\hat\mu$-a.e. $x$; in the last equality we acknowledged that $\int d\check{\mu} = \Totc$, see \eqref{eq:feas_lambda}. This proves minimality of $\check{\lambda}$ and we have only equalities in the chain above, which shows that $\check{\sig}$ solves the dual elasticity problem \eqref{eq:dual_comp} for $\lambda = \check{\lambda}$. In order to complete the proof of the first implication we must show that $\check{u}$ is a relaxed solution for \eqref{eq:compliance_def}. Since $\hat{u}$ is a solution for $\relProb$ there exists a sequence $\hat{u}_n \in \U_1$ such that $\norm{\hat{u}_n - \hat{u}}_\infty \rightarrow 0$. By definition of $\U_1$ we have $\ro\bigl(e(\hat{u}_n) \bigr) \leq 1$ and therefore by setting $\check{u}_n = \left(Z/\Totc\right)^{p'/p} \hat{u}_n$ we obtain $\ro\bigl( e(\check{u}_n) \bigr) \leq \left(Z/\Totc\right)^{p'/p}$ with $\norm{\check{u}_n - \check{u}}_\infty \rightarrow 0$.
In order to prove that $\check{u}$ is a relaxed solution it is thus left to show that $\check{u}_n$ is a maximizing sequence for \eqref{eq:compliance_def}. We see that \begin{alignat*}{1} &\liminf\limits_{n \rightarrow \infty} \left\{ \int \pairing{\check{u}_n,\Fl} - \int j\bigl(\check{\hf},e(\check{u}_n)\bigr) \, d\check{\mu} \right\} \geq \liminf\limits_{n \rightarrow \infty} \left\{ \int \pairing{\check{u}_n,\Fl} - \int \frac{1}{p} \biggl(\ro\bigl(e(\check{u}_n) \bigr) \biggr)^p d\check{\mu} \right\} \\ \geq &\liminf\limits_{n \rightarrow \infty} \biggl\{ \int \pairing{\check{u}_n,\Fl} - \int \frac{1}{p} \biggl(\frac{Z}{\Totc} \biggr)^{p'}\! d\check{\mu} \biggr\} = \lim\limits_{n \rightarrow \infty} \biggl\{ \int \pairing{\check{u}_n,\Fl} \biggr\} - \frac{Z^{\,p'}}{p \, \Totc^{p'-1}}= \Cmin= \Comp(\check{\lambda}), \end{alignat*} where we have used the fact that $\lim_{n\rightarrow \infty} \int\pairing{\check{u}_n,\Fl} = \left(Z/\Totc\right)^{p'/p} \lim_{n\rightarrow \infty} \int\pairing{\hat{u}_n,\Fl} = \left(Z/\Totc\right)^{p'/p} Z$. This shows that $\check{u}_n$ is a maximizing sequence for \eqref{eq:compliance_def} thus finishing the proof of the first implication. Conversely we assume that the quadruple $\check{u},\check{\mu},\check{\hf},\check{\sig}$ is a solution of the (FMD) problem (by definition we have $\cost(\check{\hf}) = \nolinebreak 1\ $ $\check{\mu}$-a.e.) and the quadruple $\hat{u},\hat{\mu},\hat{\hf},\hat{\sig}$ is defined via \eqref{eq:link_FMD_LCP}. The H\"{o}lder inequality furnishes \begin{equation} \label{eq:Holder} \int \dro(\check{\sig}) \, d\check{\mu} \leq \biggl(\int d\check{\mu}\biggr)^{1/p} \biggl( \int \bigl( \dro(\check{\sig}) \bigr)^{p'} \, d\check{\mu} \biggr)^{1/p'} \leq \Totc^{1/p} \biggl( \int \bigl( \dro(\check{\sig}) \bigr)^{p'} \, d\check{\mu} \biggr)^{1/p'} \end{equation} and the equalities hold only if $\dro(\check{\sig})$ is $\check{\mu}$-a.e. 
constant and only if either $\int d\check{\mu} = \Totc$ or $\check\sig$ is zero. Based on the fact that $\check{\lambda} =\check{\hf} \check{\mu}$ is a solution of $\FMD$ and $\check{\sig}$ is a minimizer in \eqref{eq:dual_comp} we may write a chain \begin{alignat*}{1} \Cmin = \Comp(\check{\lambda}) = \int j^*(\check{\hf},\check{\sig}) \,d\check{\mu} \geq \int \jc(\check{\sig}) \,d\check{\mu} = \int \frac{1}{p'} \bigl(\dro(\check{\sig}) \bigr)^{p'} \,d\check{\mu} &\geq \frac{1}{p' \Totc ^{p'/p}}\biggl( \int \dro(\check{\sig}) \, d\check{\mu} \biggr)^{p'} \nonumber\\ &\geq \frac{Z^{p'}}{p' \Totc ^{p'/p}} = \Cmin, \end{alignat*} where in the last inequality we use the fact that $\check{\TAU} = \check{\sig} \check{\mu}$ is a feasible force flux in $\dProb$. We see that above we have equalities everywhere, which, assuming that $\Cmin >0$ (otherwise the theorem becomes trivial), implies several facts. First, we have $Z= \int \dro(\check{\TAU})$, which shows that $\check{\TAU}$ is a solution for $\dProb$. Then, by the H\"{o}lder inequality \eqref{eq:Holder} and the comment below it, we obtain that $\int d\check{\mu} = \Totc$ and $\dro(\check{\sig}) = t = \mathrm{const} $ $\check{\mu}$-a.e. Combining those three facts we have $\dro(\check{\sig}) = \frac{Z}{\Totc}$ since $Z = \int \dro(\check{\TAU}) = \int \dro(\check{\sig}) \, d\check{\mu} = t\, \Totc$. From this it follows that $\hat{\sig} = \frac{\Totc}{Z} \check{\sig}$ and $\hat\mu = \frac{Z}{\Totc} \check{\mu}$ are solutions for (LCP). As the last information from the chain of equalities we take the point-wise equality $j^*(\check{\hf},\check{\sig}) = \jc(\check{\sig})\ $ $\check{\mu}$-a.e. implying that $\check{\hf}(x) \in \Hcc\bigl(\check{\sig}(x)\bigr)= \Hcc\bigl(\hat{\sig}(x)\bigr)$ for $\hat{\mu}$-a.e. $x$; thus $\hat\hf = \check{\hf}$ together with the pair $\hat{\sig}, \hat{\mu}$ solve (LCP).
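As a side illustration (not part of the proof), the H\"{o}lder estimate \eqref{eq:Holder} and its equality case can be observed on a discrete stand-in for the measure $\check{\mu}$; the weights and values below are arbitrary, and $p = p' = 2$ corresponds to linear elasticity.

```python
import numpy as np

p, pp = 2.0, 2.0               # exponents p and p' = p/(p-1); here p = 2
w = np.array([0.2, 0.5, 0.3])  # atom weights: discrete stand-in for mu-check
C = w.sum()                    # total cost, plays the role of C (int d mu-check)

def holder_sides(a):
    """Left and right sides of the Hoelder estimate for values a = rho°(sigma)."""
    lhs = (w * a).sum()                            # int rho°(sigma) d mu
    rhs = C**(1 / p) * ((w * a**pp).sum())**(1 / pp)
    return lhs, rhs

lhs, rhs = holder_sides(np.array([1.3, 0.4, 2.0]))  # non-constant rho°(sigma)
assert lhs < rhs - 1e-9                             # strict inequality

lhs, rhs = holder_sides(np.full(3, 0.7))            # constant rho°(sigma)
assert abs(lhs - rhs) < 1e-12                       # equality, as claimed
```

This matches the comment above: equality forces $\dro(\check\sig)$ to be constant $\check\mu$-a.e.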
To finish the proof we have to show that $\hat{u} = \bigl(\frac{\Totc}{Z}\bigr)^{p'/p} \check{u}$ is a solution for $\relProb$. It is straightforward to show that $\hat{u} \in \Uc$ based on our definition of the relaxed solution for \eqref{eq:compliance_def} and thus we only have to verify whether $\int \pairing{\hat u,\Fl} = Z$. One can easily show that for $\check{u}$ being a relaxed solution for $\Comp(\check{\lambda})$ there holds the repartition of energy $\int\pairing{\check{u},\Fl} = p' \, \Comp(\check{\lambda})$ (see Proposition 3 in \cite{bouchitte2007}). Since $\Comp(\check{\lambda}) = \Cmin = \frac{Z^{p'}}{p' \Totc^{p'/p}}$ we indeed obtain $\int \pairing{\hat u,\Fl} = \bigl(\frac{\Totc}{Z}\bigr)^{p'/p} \int\pairing{\check{u},\Fl} = Z$, which finishes the proof. \end{proof} \section{Optimality conditions for the Free Material Design problem} \label{sec:optimality_conditions} In order to efficiently verify whether a given quadruple $u,\mu,\sig,\hf$ is optimal for the (FMD) problem we shall state the optimality conditions. Due to the much simpler structure of problem (LCP) and the link between the two problems in Theorem \ref{thm:FMD_LCP}, it is more natural to pose the optimality conditions for (LCP). Since the form of the latter problem is similar to the one from the paper \cite{bouchitte2007} we will build upon the concepts and results given therein; in addition we must involve the Hooke tensor function $\hf$. We start by quickly reviewing elements of the theory of the space $T_\mu$ tangent to a measure and its implications; for details the reader is referred to the pioneering work \cite{bouchitte1997} and further developments in \cite{bouchitte2003} or \cite{bouchitte2007}. This theory makes it possible to $\mu$-a.e.
compute the tangent strain $e_\mu(u)$ for functions $u \in \overline{\U}_1$ that are not differentiable in the classical sense -- this will be essential when formulating the point-wise relation between the stress $\sig(x)$ and the strain $e_\mu(u)(x)$. For a given $\mu \in \Mes_+(\Ob)$ we define $j_\mu:\Hs \times \Sdd \times \Ob \rightarrow \R$ such that for $\mu$-a.e. $x$ \begin{equation*} j_\mu(\hk,\xi,x) := \inf \biggl\{ j(\hk,\xi + \zeta) \ : \ \zeta \in \mathcal{S}^\perp_\mu(x) \biggr\} \end{equation*} where $\mathcal{S}^\perp_\mu(x)$ is the space of symmetric tensors orthogonal to the measure $\mu$ at $x$. The characterization follows: $ \mathcal{S}^\perp_\mu(x) = \bigl(\mathcal{S}_\mu(x)\bigr)^\perp$ with $ \mathcal{S}_\mu(x) = T_\mu(x) \otimes T_\mu(x)$ where $T_\mu(x) \subset \Rd$ is the space tangent to the measure. We also introduce $\jh_\mu: \Sdd \times \Ob \rightarrow \R$ for $\mu$-a.e. $x$ \begin{equation} \label{eq:jh_mu} \jh_\mu(\xi,x) := \inf \biggl\{ \jh(\xi + \zeta) \ : \ \zeta \in \mathcal{S}^\perp_\mu(x) \biggr\} \end{equation} and, again by employing Proposition 1 in \cite{bouchitte2007} on interchanging $\inf$ and $\sup$, we observe that \begin{equation*} \jh_\mu(\xi,x) = \inf\limits_{\zeta \in \mathcal{S}^\perp_\mu(x)} \sup_{\hk\in \Hc} \biggl\{ j(\hk,\xi + \zeta) \biggr\} = \sup_{\hk\in \Hc} \inf\limits_{\zeta \in \mathcal{S}^\perp_\mu(x)} \biggl\{ j(\hk,\xi + \zeta) \biggr\} = \sup_{\hk\in \Hc} \biggl\{ j_\mu(\hk,\xi,x) \biggr\}, \end{equation*} namely the operations $\hat{(\argu)}$ and $(\argu)_\mu$ commute; thus the symbol $\jh_\mu$ is justified. It is straightforward to show that for each $H \in \Hs$ and $\mu$-a.e.
$x$ the function $j_\mu(H,\argu,x)$ inherits the properties of convexity and positive $p$-homogeneity enjoyed by the function $j(H,\argu)$; therefore its convex conjugate $j^*_{\mu}(H,\argu,x)$ (with respect to the second argument) is meaningful and, moreover, the repartition of energy analogous to \eqref{eq:repartition} holds whenever $\sig \in \partial\, j_\mu(H,\xi,x)$. On top of that one easily checks that $j^*_{\mu}(H,\sig,x) = j^*(H,\sig)$ whenever $\sig \in \mathcal{S}_\mu(x)$ and $j^*_{\mu}(H,\sig,x) = \infty$ if $\sig \notin \mathcal{S}_\mu(x)$. By $P_\mu(x)$ for $\mu$-a.e. $x$ we will understand the orthogonal projection onto $T_\mu(x)$. Next we introduce an operator $e_\mu : \Uc \rightarrow L^\infty_\mu(\Ob;\Sdd)$ such that for $u \in \Uc$ \begin{equation*} e_\mu(u) := P^\top_\mu \, \xi \, P_\mu \quad \text{for any } \xi \in L^\infty_\mu(\Ob;\Sdd) \text{ such that } \exists u_n \in \U_1 \text{ with } u_n \rightrightarrows u, \ e(u_n) \stackrel{\ast}{\rightharpoonup} \xi \text{ in } L^\infty_\mu(\Ob). \end{equation*} The function $\xi$ always exists since the set $e(\U_1)$ is weakly-* precompact in $L^\infty_\mu(\Ob;\Sdd)$ and, although $\xi$ may be non-unique, the field $P^\top_\mu \, \xi \, P_\mu$ is unique, see \cite{bouchitte2007}. The following lemma inscribes point (ii) of Theorem \ref{thm:rho_drho} into the framework of the theory of the space tangent to the measure $\mu$: \begin{lemma} \label{lem:tengantial_constiutive_law} Let us take any $u \in \Uc$, $\mu \in \Mes_+(\Ob)$. Then, for $\mu$-a.e.
$x \in \Ob$, any non-zero $\sig \in \mathcal{S}_\mu(x)$ ($\sig$ \nolinebreak is a tensor, not a tensor function) and $\breve{\hk} \in \Hc$ the following conditions are equivalent: \begin{enumerate}[(i)] \item there hold extremality conditions: \begin{equation*} \pairing{\,e_\mu(u)(x)\,,\,\sig\,} = \dro(\sig) \qquad \text{and} \qquad \breve{\hk} \in \Hcc(\sig); \end{equation*} \item the constitutive law is satisfied: \begin{equation} \label{eq:tangential_constiutive_law} \frac{1}{\dro(\sig)} \, \sig \in \partial j_\mu\biggl(\breve{\hk},e_\mu(u)(x),x \biggr), \end{equation} with subdifferential intended with respect to the second argument of $j_\mu$. \end{enumerate} \begin{proof} Thanks to Lemma 1 in \cite{bouchitte2007} for a function $u \in \Uc$ we have $\jh_\mu\bigl(e_\mu(u)(x),x\bigr)\leq 1/p$ for $\mu$-a.e. $x$, or in other words for every $x$ in some Borel set $A \subset \Ob$ such that $\mu(\Ob \backslash A) =0$. In the sequel of the proof we fix $x \in A$, for which $\mathcal{S}_\mu(x)$ is a well defined linear subspace of $\Sdd$. Since the minimization problem in \eqref{eq:jh_mu} always admits a solution we find $\zeta \in \mathcal{S}^\perp_\mu(x)$ such that $\jh_\mu\bigl(e_\mu(u)(x),x\bigr) = \jh\bigl(e_\mu(u)(x)+\zeta \bigr)\leq 1/p$ or equivalently $\ro\bigl(e_\mu(u)(x)+\zeta \bigr) \leq 1$ or alternatively $j\bigl(\hk,e_\mu(u)(x)+\zeta\bigr)\leq 1/p$ for each $\hk\in \Hc$. Next we notice that $j^*_{\mu}(\breve{\hk},\sig,x) = j^*(\breve{\hk},\sig)$ due to $\sig \in \mathcal{S}_\mu(x)$. Further we will assume that $\dro(\sig) =1$, which is not restrictive. First we prove the implication (i) $\Rightarrow$ (ii). We shall denote $\xi:=e_\mu(u)(x)+\zeta$ where $\zeta$ is chosen as above. Since $\zeta \in \mathcal{S}^\perp_\mu(x)$ and $\sig \in \mathcal{S}_\mu(x)$ we see that $\pairing{\xi,\sig} = \pairing{e_\mu(u)(x),\sig} = 1$.
Since in addition $\rho(\xi) \leq 1$ we see that the triple $\xi,\sig,\breve{\hk}$ satisfies the condition (1) in point (ii) of Theorem \ref{thm:rho_drho} and therefore (2) follows, i.e. $\sig \in \partial j\bigl(\breve{\hk},\xi\bigr)$ or alternatively $\pairing{\xi,\sig} = j(\breve{\hk},\xi ) + j^*\bigl(\breve{\hk},\sig\bigr)$. Due to the remarks above there also must hold $\pairing{e_\mu(u)(x),\sig} = j_\mu\bigl(\breve{\hk},e_\mu(u)(x),x \bigr) + j^*_{\mu}\bigl(\breve{\hk},\sig,x\bigr)$ furnishing \eqref{eq:tangential_constiutive_law} and thus establishing the first implication. For the second implication (ii) $\Rightarrow$ (i) we shall modify the proof of the implication (2) $\Rightarrow$ (1) in Theorem \ref{thm:rho_drho}. The constitutive law \eqref{eq:tangential_constiutive_law} implies the repartition of energy $\pairing{e_\mu(u)(x),\sig} = p \, j_\mu\bigl(\breve{\hk},e_\mu(u)(x),x\bigr)$ and $\pairing{e_\mu(u)(x),\sig} = p' \, j^*_{\mu}(\breve{\hk},\sig,x)$. In addition we observe that $j^*\bigl(\breve{\hk},\sig\bigr) \geq \jc\bigl(\sig\bigr) = \frac{1}{p'}\bigl(\dro(\sig) \bigr)^{p'} = \frac{1}{p'}$ and we may readily write down a chain \begin{equation*} 1 \leq p'\, j^*\bigl(\breve{\hk},\sig\bigr) = p'\, j^*_{\mu}\bigl(\breve{\hk},\sig,x\bigr) = \pairing{e_\mu(u)(x),\sig} = p \, j_\mu\bigl(\breve{\hk},e_\mu(u)(x),x\bigr) \leq p\, \jh_\mu\bigl(e_\mu(u)(x),x\bigr) \leq 1 \end{equation*} being in fact a chain of equalities furnishing $\pairing{e_\mu(u)(x),\sig} = 1$ and $\breve{\hk} \in \Hcc(\sig)$, which completes the proof. \end{proof} \end{lemma} The optimality conditions for the Linear Constrained Problem may readily be given: \begin{theorem} \label{thm:optimality_conditions} Let us consider a quadruple ${u}\in C(\Ob;\Rd), {\mu} \in \Mes_+(\Ob), {\sig} \in L^\infty_\mu(\Ob;\Sdd)$, ${\hf} \in L^\infty_\mu(\Ob;\Hs)$ with $\dro({\sig}) = 1 $ and $ \hf \in \Hc \ $ ${\mu}$-a.e.
The quadruple solves (LCP) if and only if the following optimality conditions are met: \begin{enumerate}[(i)] \item $-\DIV ({\sig} {\mu}) = \Fl $; \item ${u} \in \Uc$; \item $\pairing{e_{\mu}({u})(x),\sig(x)} = 1\quad $ and $ \quad\hf(x) \in \Hcc\bigl( \sig(x) \bigr) \quad$ for $\mu$-a.e. $x$. \end{enumerate} Moreover, condition (iii) may equivalently be restated as a constitutive law of elasticity: \begin{enumerate}[(i)'] \setcounter{enumi}{2} \item ${\sig}(x) \in \partial j_{{\mu}} \bigl( {\hf}(x), e_{{\mu}}({u})(x),x \bigr)\quad $ for ${\mu}$-a.e. $x$. \end{enumerate} \begin{proof} Since the form of the duality pair $\relProb$ and $\dProb$ is identical to the one from \cite{bouchitte2007} we may quote the optimality conditions given in Theorem 3 therein: for the triple $(u,\mu,\sig)$ with $u \in \Uc$, $\mu \in \Mes_+(\Ob)$ and $\dro({\sig}) =1$ the following conditions are equivalent: \begin{enumerate}[(1)] \item $u$ solves the problem $\relProb$ and $\TAU = \sig \mu$ solves the problem $\dProb$; \item conditions (i), (ii) hold and moreover $\pairing{e_{\mu}({u}),\sig} = 1\ $ $\mu$-a.e. \end{enumerate} By Definition \ref{def:LCP_solution} we see that the quadruple $(u,\mu,\sig,\hf)$ satisfying the assumptions of the theorem solves (LCP) if and only if (1) holds and moreover $\hf(x) \in \Hcc\bigl( \sig(x) \bigr)$ for $\mu$-a.e. $x$. Thus we infer that the quadruple $(u,\mu,\sig,\hf)$ solves (LCP) if and only if conditions (i), (ii), (iii) hold. The ``moreover'' part of the theorem follows directly from Lemma \ref{lem:tengantial_constiutive_law}.
\end{proof} \end{theorem} \section{Case study and examples of optimal structures} \label{sec:examples} \subsection{Other examples of Free Material Design settings} \label{sec:example_material_settings} In Example \ref{ex:rho_drho_AMD} we computed $\rho$ and $\dro$, together with the extremality conditions for $\xi,\sig$ and the sets of optimal Hooke tensors $\Hch(\xi), \Hcc(\sig)$, in the setting of the Anisotropic Material Design (AMD) problem which assumed that $\Hs = \Hs_0$ (all Hooke tensors are admissible) and $j(H,\xi) = \frac{1}{2} \pairing{H \xi, \xi}$ (linearly elastic material). The computed functions and sets effectively define the (LCP) (in the AMD setting) which, in accordance with sections above, paves the way to the solution of the original (FMD) problem. In the present section we will compute $\rho, \dro$ and $\Hch(\xi), \Hcc(\sig)$ for other settings. Although we will mostly vary the set $\Hs$ of admissible Hooke tensors, we shall also give two alternatives for the energy function $j$, showing that the fairly general assumptions (H1)-(H5) are worthwhile. We start with the first one, while the other will be presented at the end of this subsection (cf. Example \ref{ex:power_law}): \begin{example}[\textbf{Constitutive law of elastic material that is dissymmetric in tension and compression}] \label{ex:dissymetru_tension_compresion} For a chosen convex closed cone $\Hs$ let $j: \Hs \times \Sdd \rightarrow \R$ be any elastic potential that meets the assumptions (H1)-(H5). We propose two functions $j_+,j_-: \Hs \times \Sdd \rightarrow \R$ such that for any $\hk \in \Hs$ \begin{equation} \label{eq:j_plus_j_minus_defintions} j_+(\hk,\argu) := \bigl(j^*(\hk,\argu)+\mathbbm{I}_{\mathcal{S}^{d\times d}_+}\bigr)^*\qquad \text{and} \qquad j_-(\hk,\argu) := \bigl(j^*(\hk,\argu)+\mathbbm{I}_{\mathcal{S}^{d\times d}_-}\bigr)^* \end{equation} which are proposals of elastic potentials of materials that are incapable of withstanding compressive and, respectively, tensile stresses.
The sets $\mathcal{S}^{d\times d}_+$ and $\mathcal{S}^{d\times d}_-$ are the convex cones of positive and negative semi-definite symmetric tensors; $\mathbbm{I}_A$ for $A \subset X$ and any vector space $X$ denotes the indicator function, i.e. $\mathbbm{I}_A(x) = 0$ for $x\in A$ and $\mathbbm{I}_A(x) = \infty$ for $x \in X \backslash A$. For any $\xi \in \Sdd$ we obtain by introducing a Lagrange multiplier $\zeta$: \begin{alignat*}{1} j_+(\hk,\xi) &= \sup\limits_{\sig \in \Sdd} \biggl\{\pairing{\xi,\sig} - j^*(\hk,\sig) - \mathbbm{I}_{\mathcal{S}^{d\times d}_+}(\sig) \biggr\} = \sup\limits_{\sig \in \mathcal{S}^{d\times d}_+} \biggl\{\pairing{\xi,\sig} - j^*(\hk,\sig) \biggr\}\\ \nonumber &= \sup\limits_{\sig \in \Sdd} \inf\limits_{\zeta \in \mathcal{S}^{d\times d}_+} \biggl\{\pairing{\xi+\zeta,\sig} - j^*(\hk,\sig) \biggr\} = \inf\limits_{\zeta \in \mathcal{S}^{d\times d}_+} \sup\limits_{\sig \in \Sdd} \biggl\{\pairing{\xi+\zeta,\sig} - j^*(\hk,\sig) \biggr\}, \end{alignat*} where in order to swap the order of $\inf$ and $\sup$ we again used Proposition 1 in \cite{bouchitte2007} (from the beginning we may restrict $\sig$ to some ball in $\Sdd$, which is due to ellipticity $j^*(\hk,\sig) \geq C(\hk) \abs{\sig}^{p'}$ for any $\hk$). By repeating the same argument for $j_-$ we obtain formulas \begin{equation} \label{eq:j_plus_j_minus_formulas} j_+(\hk,\xi) = \inf\limits_{\zeta \in \mathcal{S}^{d\times d}_+} j(\hk,\xi +\zeta) \qquad \text{and} \qquad j_-(\hk,\xi) =\inf\limits_{\zeta \in \mathcal{S}^{d\times d}_-} j(\hk,\xi +\zeta). \end{equation} It is now easy to see that the functions $j_+, j_-$ satisfy assumptions (H1)-(H4). Conditions (H\ref{as:convex}) and (H\ref{as:p-hom}) follow directly from definitions \eqref{eq:j_plus_j_minus_defintions} and properties of Fenchel transform. Condition (H\ref{as:concave}) can be easily inferred from \eqref{eq:j_plus_j_minus_formulas}, where functions $j_+(\argu,\xi),j_-(\argu,\xi)$ are point-wise infima of concave u.s.c. 
functions $j(\argu,\xi)$; one similarly shows (H\ref{as:1-hom}). It is clear, however, that the assumption (H\ref{as:elip}) is not satisfied for either of the functions $j_+,j_-$: indeed, for instance, $j_+(H,\xi)=0$ for any $H\in \Hs$ and $\xi \in \mathcal{S}^{d\times d}_-$. In order to restore the condition (H\ref{as:elip}) we define a function $j_\pm: \Hs \times \Sdd \rightarrow \R$ that shall model a composite material that is dissymmetric in tension and compression: \begin{equation*} j_\pm(H,\xi) = (\kappa_+)^p \,j_+(H,\xi) + (\kappa_-)^p \,j_-(H,\xi) \end{equation*} where $\kappa_+,\kappa_-$ are positive reals and $p$ is the homogeneity exponent of $j(H,\argu)$. To show that the condition (H\ref{as:elip}) is met for $j_\pm$ it will suffice to show that $\bar{j}_\pm(\xi) = \max_{\hk \in \Hc} j_\pm(\hk,\xi)$ is greater than zero for any non-zero $\xi$. This amounts to verifying whether $\bar{j}^*_\pm(\sig)$ is finite for any $\sig \in \Sdd$. Using the formula \eqref{eq:conjugate_of_jhat} (its proof did not utilize property (H\ref{as:elip})) and by employing the inf-convolution formula for the convex conjugate of a sum of functions we obtain \begin{equation*} \bar{j}^*_\pm(\sig) = \inf\limits_{\hk\in \Hc} j^*_\pm(\hk,\sig) = \inf\limits_{\hk\in \Hc} \inf\limits_{\substack{\sig_+\in \mathcal{S}^{d\times d}_+ \\ \sig_-\in \mathcal{S}^{d\times d}_- } } \biggl\{ \frac{1}{(\kappa_+)^{p'}}\,j^*(\hk,\sig_+) + \frac{1}{(\kappa_-)^{p'}}\,j^*(\hk,\sig_-) \, : \, \sig_+ + \sig_- = \sig\biggr\}. \end{equation*} Next, since $\jc$ is a real function on $\Sdd$, for any $\sig_+,\sig_-\in \Sdd$ there exist $\hk_+,\hk_-\in \Hc$ such that $j^*(\hk_+,\sig_+) < \infty$, $j^*(\hk_-,\sig_-) < \infty$. We set $\hk_\pm = (\hk_+ + \hk_-)/2\in \Hc$ to discover that \eqref{eq:sub_additivity_j_star} gives $j^*(\hk_\pm,\sig_+) <\infty$ and $j^*(\hk_\pm,\sig_-) <\infty$ and therefore the RHS of the above is finite, proving (H\ref{as:elip}).
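Formula \eqref{eq:j_plus_j_minus_formulas} lends itself to a quick numerical sanity check in the simplest conceivable case: take $d=2$ and let $\hk$ act as the identity on $\mathcal{S}^{2\times 2}$ (a hypothetical Hooke tensor chosen purely for illustration), so that $j(\hk,\xi) = \frac{1}{2}\abs{\xi}^2$; the infimum defining $j_+$ is then attained at $\zeta = -\xi_-$ and equals $\frac{1}{2}\abs{\xi_+}^2$. A minimal numpy sketch, not part of the framework above:

```python
import numpy as np

rng = np.random.default_rng(0)

def pos_neg_parts(xi):
    """Spectral decomposition xi = xi_+ + xi_- into PSD and NSD parts."""
    lam, V = np.linalg.eigh(xi)
    plus = (V * np.maximum(lam, 0.0)) @ V.T
    minus = (V * np.minimum(lam, 0.0)) @ V.T
    return plus, minus

# a random symmetric 2x2 "strain" xi
A = rng.standard_normal((2, 2))
xi = (A + A.T) / 2
xi_plus, xi_minus = pos_neg_parts(xi)

j = lambda e: 0.5 * np.sum(e * e)   # j(H, .) with H = identity on S^{2x2}
candidate = j(xi_plus)              # conjectured value of j_+(H, xi)

# the claimed minimizer zeta = -xi_- is admissible (PSD) and attains the value
assert np.all(np.linalg.eigvalsh(-xi_minus) >= -1e-12)
assert abs(j(xi + (-xi_minus)) - candidate) < 1e-9

# no randomly sampled PSD perturbation zeta does better than the candidate
for _ in range(2000):
    L = rng.standard_normal((2, 2))
    zeta = L @ L.T                  # PSD by construction
    assert j(xi + zeta) >= candidate - 1e-12
```

The loop only probes random PSD perturbations, so it supports, rather than proves, minimality of the candidate.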
In summary, the function $j_\pm:\Hs \times \Sdd \rightarrow \R$ satisfies the conditions (H1)-(H5) and thus the Free Material Design problem is well posed for the material that $j_\pm$ models. In particular the function $j_\pm$ is an example of a function which in general non-trivially meets the concavity condition (H\ref{as:concave}): even in the case when $j(H,\xi) = \frac{1}{2} \pairing{\hk\,\xi,\xi}$ the function $j_\pm$ may be non-linear with respect to argument $\hk$. In fact, in the paper \cite{giaquinta1985} in Equation (3.19) the authors construct an explicit formula for energy function that happens to coincide with $j_-(H,\xi)$. The point of departure therein is 2D linear elasticity with an isotropic Hooke tensor $\hk$. We quote their result below (we use bulk and shear constants $K$ and $G$ instead of $E,\nu$, see \eqref{eq:Young_and_Poisson}): \begin{equation*} j_-(\hk,\xi) = \left\{ \begin{array}{ccl} \frac{1}{2} \pairing{\hk\,\xi,\xi} & \quad \text{if} \quad \xi \in& \Sigma_1(K,G),\\ \frac{1}{2}\, \frac{4\, K\, G}{K+G}\, \bigl(\min\{\lambda_1(\xi),\lambda_2(\xi)\} \bigr)^2 & \quad \text{if} \quad \xi \in& \Sigma_2(K,G),\\ 0 & \quad \text{if} \quad \xi \in& \mathcal{S}^{d\times d}_+ \end{array} \right. \end{equation*} where $\Sigma_1(K,G), \Sigma_2(K,G)$ are subregions of $\Sdd$, see \cite{giaquinta1985} for details; we note that the quotient $\frac{4\, K\, G}{K+G}$ above is the Young modulus $E$, see \eqref{eq:Young_and_Poisson}. If for the cone of admissible Hooke tensors $\Hs$ we choose $\Hs_{iso}$ (see \eqref{eq:iso_K_G} and Example \ref{ex:rho_drho_IMD} below) for a fixed $\xi$ we see that, provided the moduli $K,G$ vary such that $\xi \in \Sigma_2(K,G)$, the function $j_-(\argu,\xi)$ is not linear, i.e. the energy $j_-$ does not depend linearly on $K,G$ and the same will apply to $j_\pm$. 
This example justifies the need for a fairly general assumption (H\ref{as:concave}) which allows energy functions that do not vary linearly with respect to $\hk$. \end{example} We move on to present three further settings of the Free Material Design problem: \begin{example}[\textbf{Fibrous Material Design problem}] \label{ex:rho_drho_FibMD} We present the setting of the \textit{Fibrous Material Design} problem (FibMD) which differs from the AMD problem in Example \ref{ex:rho_drho_AMD} only by the choice of the admissible family of Hooke tensors: \begin{equation} \label{eq:FibMD_setting} \Hs = \HM, \qquad j(\hk,\xi) = \frac{1}{2} \pairing{\hk \,\xi,\xi}, \qquad \cost(\hk) = \tr\, \hk \end{equation} where $\Hax$ was defined in Example \ref{ex:Hs_Michell} as the closed, yet non-convex (for $d>1$) cone of uni-axial Hooke tensors $a \ \eta \otimes \eta \otimes \eta \otimes \eta $ with $a \geq 0$ and $\eta\in S^{d-1}$. We first observe that for each $\hk \in \Hax$ with $\cost(\hk) \leq 1$, i.e. with $\tr \, \hk = a \leq 1$, there holds \begin{equation} \label{eq:est_uniaxial} j(\hk,\xi) = \frac{1}{2} \pairing{\hk \,\xi,\xi} = \frac{a}{2} \, \bigl(\pairing{\xi,\eta \otimes \eta} \bigr)^2 \leq \frac{1}{2} \,\left(\max\limits_{i \in \{1,\ldots,d\}} \abs{\lambda_i(\xi)} \right)^2 \end{equation} and at the same time \begin{equation} \label{eq:max_uniaxial} j(\bar{\hk}_\xi,\xi) = \frac{1}{2} \,\left(\max\limits_{i \in \{1,\ldots,d\}} \abs{\lambda_i(\xi)} \right)^2 \qquad \text{for} \qquad \bar{\hk}_\xi = \bar{v}(\xi) \otimes\bar{v}(\xi) \otimes \bar{v}(\xi) \otimes \bar{v}(\xi) \end{equation} where $\bar{v}(\xi)$ is any unit eigenvector of $\xi$ corresponding to an eigenvalue of maximal absolute value. Let us now take any $\tilde{\hk} \in \mathrm{conv} \bigl(\Hax \bigr)$, namely, since $\Hax$ is a cone, $\tilde\hk = \sum_{i=1}^m \alpha_i \hk_i$ for some $\alpha_i \geq 0$ and $\hk_i \in \Hax$ with $\cost(\hk_i) > 0$.
Since both $\cost = \tr$ and $j(\argu,\xi)$ are linear there holds $\cost(\tilde{\hk}) = \nolinebreak \sum_{i=1}^m \alpha_i\, \cost\left(\hk_i\right)$ and thus \begin{alignat*}{1} j(\tilde{\hk},\xi) = \sum_{i=1}^m \alpha_i\, j(\hk_i,\xi) = \sum_{i=1}^m \alpha_i \, \cost(\hk_i)\ j\left(\frac{\hk_i}{\cost(\hk_i)},\xi\right) &\leq \biggl(\sup\limits_{\substack{\hk \in \Hax \\ \cost(\hk)\leq 1 }} j(\hk,\xi) \biggr) \sum_{i=1}^m \alpha_i \, \cost(\hk_i) \nonumber \\ & = \biggl(\sup\limits_{\substack{\hk \in \Hax \\ \cost(\hk)\leq 1 }} j(\hk,\xi) \biggr)\ \cost(\tilde{\hk}). \end{alignat*} By recalling \eqref{eq:est_uniaxial} and \eqref{eq:max_uniaxial} we arrive at \begin{equation} \label{eq:Hax_instead_of_HM} \jh(\xi) = \max\limits_{\substack{\hk \in \HM \\ \cost(\hk)\leq 1 }} j(\hk,\xi) = \max\limits_{\substack{\hk \in \Hax \\ \cost(\hk)\leq 1 }} j(\hk,\xi) = \frac{1}{2} \,\left(\max\limits_{i \in \{1,\ldots,d\}} \abs{\lambda_i(\xi)} \right)^2, \end{equation} where the first equality is by definition of $\jh$; moreover \begin{equation} \label{eq:Hch_Michell} \Hch(\xi) = \mathrm{conv} \biggl\{\bar{v}(\xi) \otimes\bar{v}(\xi) \otimes \bar{v}(\xi) \otimes \bar{v}(\xi) \ : \bar{v}(\xi) \text{ is an eigenvector } v_i(\xi) \text{ with maximal } \abs{\lambda_i(\xi)} \biggr\}. \end{equation} As a consequence $\ro$ becomes the spectral norm on the space of symmetric matrices $\Sdd$; we display it next to the well-established formula for its polar: \begin{equation} \label{eq:spectral_rho} \ro(\xi) = \max\limits_{i \in \{1,\ldots,d\}} \abs{\lambda_i(\xi)}, \qquad \dro(\sig) = \sum_{i =1}^d \abs{\lambda_i(\sig)}. 
\end{equation} The extremality condition for the pair $\rho,\dro$ may be characterized as follows: \begin{equation} \label{eq:ext_cond_Michell} \pairing{\xi,\sig} = \ro(\xi) \, \dro(\sig) \qquad \Leftrightarrow \qquad \left\{ \begin{array}{l} \text{every eigenvector of } \sig \text{ is an eigenvector of } \xi \text{ and }\\ \lambda_i(\sig) \neq 0 \quad \Rightarrow \quad \lambda_i(\xi) = \sign\bigl(\lambda_i(\sig)\bigr)\, \ro(\xi). \end{array} \right. \end{equation} It remains to characterize the set $\Hcc(\sig)$; note that this time we must search the whole set $\Hs = \HM$, instead of just $\Hax$, as was the case when maximizing $j(\argu,\xi)$ (see \eqref{eq:Hax_instead_of_HM}): indeed, any $\sig$ with at least two non-zero eigenvalues yields $j^*(\hk,\sig)=\infty$ for each $\hk \in \Hax$. According to point (ii) of Theorem \ref{thm:rho_drho} for a given non-zero $\sig$ the Hooke tensor $\hk \in \Hc$ is an element of $\Hcc(\sig)$ if and only if the constitutive law \eqref{eq:extreme_const_law} holds for any $\xi = \xi_\sig$ that satisfies $\rho(\xi_\sig)=1$ and the extremal relation \eqref{eq:ext_cond_Michell} with $\sig$. Since the function $j$ was chosen as a quadratic form (see \eqref{eq:FibMD_setting}) the constitutive law reads \begin{equation} \label{eq:criteria_for_optimality_of_H} \frac{\sig}{\dro(\sig)} = H \, \xi_\sig. \end{equation} With $v_i(\sig)$ denoting unit eigenvectors of $\sig$ for a non-zero stress $\sig$ we propose the Hooke tensor \begin{equation} \label{eq:optimal_H_for_Michell} \bar{\hk}_\sig = \sum_{i=1}^{d} \frac{\abs{\lambda_i(\sig)}}{\dro(\sig)} \ v_i(\sig) \otimes v_i(\sig) \otimes v_i(\sig) \otimes v_i(\sig) \end{equation} that is an element of $\Hc$, i.e. $\bar{\hk}_\sig \in \HM$ and $\tr \, \bar{\hk}_\sig = 1$.
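The polar pair \eqref{eq:spectral_rho} and the extremality condition \eqref{eq:ext_cond_Michell} can be probed numerically. The following numpy sketch (an illustration only, with $d=3$ and a randomly drawn stress) checks the inequality $\pairing{\xi,\sig} \leq \ro(\xi)\,\dro(\sig)$ on random samples and its attainment at the extremal $\xi$ built from the eigenvectors of $\sig$:

```python
import numpy as np

rng = np.random.default_rng(1)

def rho(xi):            # spectral norm on symmetric matrices
    return np.max(np.abs(np.linalg.eigvalsh(xi)))

def rho0(sig):          # its polar: the sum of absolute eigenvalues
    return np.sum(np.abs(np.linalg.eigvalsh(sig)))

A = rng.standard_normal((3, 3))
sig = (A + A.T) / 2     # random symmetric "stress", d = 3

# sampling: <xi, sig> <= rho(xi) * rho0(sig) for random symmetric xi
for _ in range(2000):
    B = rng.standard_normal((3, 3))
    xi = (B + B.T) / 2
    assert np.sum(xi * sig) <= rho(xi) * rho0(sig) + 1e-10

# attainment at the extremal xi: same eigenvectors as sig,
# eigenvalues sign(lambda_i(sig))
lam, V = np.linalg.eigh(sig)
xi_star = (V * np.sign(lam)) @ V.T
assert abs(rho(xi_star) - 1.0) < 1e-12
assert abs(np.sum(xi_star * sig) - rho0(sig)) < 1e-8
```

Random sampling of course only supports, rather than proves, the polarity of the two norms.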
Since the pair $\xi_\sig,\sig$ satisfies \eqref{eq:ext_cond_Michell} each $v_i(\sig)$ is an eigenvector of $\xi_\sig$ and moreover $\pairing{\xi_\sig, v_i(\sig) \otimes v_i(\sig) } = \sign{\bigl(\lambda_i(\sig)\bigr)}$, therefore \begin{equation*} \bar{\hk}_\sig \, \xi_\sig = \sum_{i=1}^{d} \frac{\abs{\lambda_i(\sig)}}{\dro(\sig)} \, \sign{\bigl(\lambda_i(\sig)\bigr)} \ v_i(\sig) \otimes v_i(\sig) = \frac{\sig}{\dro(\sig)}, \end{equation*} which proves that $\bar{\hk}_\sig \in \Hcc(\sig)$. The full characterization of the set $\Hcc(\sig)$ is difficult to write down for arbitrary $d$, hence in the sequel we proceed in dimension $d=2$, where three cases must be examined: \noindent\underline{Case a) the determinant of $\sig$ is negative} In this case $\sig$ has two non-zero eigenvalues of opposite sign, say $\lambda_1(\sig)<0$ and $\lambda_2(\sig)>0$. Therefore there exists a unique $\xi = \xi_\sig$ that satisfies $\rho(\xi_\sig) \leq 1$ and is in the extremal relation \eqref{eq:ext_cond_Michell} with $\sig$: there must hold $\xi_\sig = - v_1(\sig) \otimes v_1(\sig) + v_2(\sig) \otimes v_2(\sig)$ where $v_1(\sig),v_2(\sig)$ are the respective eigenvectors of $\sig$. According to point (iii) of Theorem \ref{thm:rho_drho} there must hold $\Hcc(\sig) \subset \Hch(\xi_\sig)$ and thus from \eqref{eq:Hch_Michell} we deduce that each $\hk\in \Hcc(\sig)$ satisfies $H = \sum_{i=1}^2 \alpha_i \,v_i(\sig) \otimes v_i(\sig) \otimes v_i(\sig) \otimes v_i(\sig)$ for $\alpha_1+\alpha_2 = 1$. Then the constitutive law \eqref{eq:criteria_for_optimality_of_H} enforces $\sig/\dro(\sig) = -\alpha_1\, v_1(\sig) \otimes v_1(\sig) + \alpha_2\, v_2(\sig) \otimes v_2(\sig)$ and we immediately obtain that $\alpha_i = \abs{\lambda_i(\sig)}/\dro(\sig)$ and therefore $\hk$ must coincide with $\bar{\hk}_\sig$ from \eqref{eq:optimal_H_for_Michell}.
In summary, in the case when $d = 2$ and $\mathrm{det}\,\sig <0$ the set $\Hcc(\sig)$ is a singleton: \begin{equation} \label{eq:Hcc_char_Michell_negative} \Hcc(\sig) = \biggl\{\sum_{i=1}^{2} \frac{\abs{\lambda_i(\sig)}}{\dro(\sig)} \ v_i(\sig) \otimes v_i(\sig) \otimes v_i(\sig) \otimes v_i(\sig) \biggr\}, \end{equation} while $\Hch(\xi_\sig)$ is the convex hull of $\bigl\{ v_i(\sig) \otimes v_i(\sig) \otimes v_i(\sig) \otimes v_i(\sig) \,:\, i\in \{1,2\} \bigr\}$. \noindent\underline{Case b) the determinant of $\sig$ is positive} Without loss of generality we may assume that $\lambda_1(\sig),\lambda_2(\sig) > 0$. Once again there is unique $\xi_\sig$ with $\rho(\xi_\sig) = 1$ and satisfying \eqref{eq:ext_cond_Michell}: necessarily $\xi_\sig = \mathrm{I}$. Therefore any unit vector $\eta$ is an eigenvector of $\xi_\sig$ (but not necessarily of $\sig$) with eigenvalue equal to one and thus $\Hch(\xi_\sig) = \mathrm{conv}\bigl\{ \eta \otimes \eta \otimes \eta \otimes \eta \,:\, \eta \in S^{d-1} \bigr\}$. Therefore the inclusion $\Hcc(\sig) \subset \Hch(\xi_\sig)$ merely indicates that for $H \in \Hcc(\sig)$ there must hold $H = \sum_{i=1}^m \alpha_i\, \eta_i \otimes \eta_i \otimes \eta_i \otimes \eta_i$ where $m \in \mathbbm{N}$, $\eta_i\in S^{d-1}$ and $\sum_{i=1}^m \alpha_i = 1$. By plugging this form of $H$ into \eqref{eq:criteria_for_optimality_of_H} we obtain the characterization for a positive definite $\sig$ (recall that $\pairing{\eta_i \otimes \eta_i,\xi_\sig}\!=\!1$ for each $i$) \begin{equation} \label{eq:Hcc_char_Michell_positive} \Hcc(\sig) = \biggl\{ \sum_{i=1}^m \alpha_i\, \eta_i \otimes \eta_i \otimes \eta_i \otimes \eta_i \ : \ \eta_i\in S^{d-1},\ \alpha_i\geq 0,\ \sum_{i=1}^m \alpha_i = 1, \ \frac{\sig}{\tr \,\sig}= \sum_{i=1}^m \alpha_i\, \eta_i \otimes \eta_i \biggr\}, \end{equation} where we used the fact that $\dro(\sig) = \tr\,\sig$ for any positive semi-definite $\sig$. 
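In Case b) the universal tensor $\bar{\hk}_\sig$ from \eqref{eq:optimal_H_for_Michell} must satisfy the constitutive law \eqref{eq:criteria_for_optimality_of_H} with $\xi_\sig = \mathrm{I}$. A short numpy sketch, purely illustrative, verifies this for a randomly drawn positive definite $\sig$:

```python
import numpy as np

rng = np.random.default_rng(2)

# random positive definite 2x2 stress sig
A = rng.standard_normal((2, 2))
sig = A @ A.T + 0.1 * np.eye(2)

lam, V = np.linalg.eigh(sig)        # lam > 0, so rho0(sig) = tr(sig)
rho0 = np.sum(np.abs(lam))
assert abs(rho0 - np.trace(sig)) < 1e-10

# the universal optimal tensor: H = sum_i (|lam_i| / rho0) v_i^{(x4)}
H = np.zeros((2, 2, 2, 2))
for i in range(2):
    v = V[:, i]
    H += (abs(lam[i]) / rho0) * np.einsum('a,b,c,d->abcd', v, v, v, v)

# constitutive law H xi_sig = sig / rho0(sig) with xi_sig = I
xi = np.eye(2)
Hxi = np.einsum('abcd,cd->ab', H, xi)
assert np.allclose(Hxi, sig / rho0, atol=1e-10)
```

By construction $\tr\,\bar{\hk}_\sig = 1$, which the fourth-order trace `np.einsum('abab->', H)` confirms as well.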
With the following example we show that the optimal Hooke tensor for positive definite $\sig$ is highly non-unique and the characterization above cannot be sensibly simplified. With $e_1,e_2$ denoting a Cartesian base of $\Rd$ we consider $\sig = \frac{4}{5} \,e_1\otimes e_1+\frac{1}{5}\, e_2\otimes e_2$, for which $\tr\,\sig = 1$. By \eqref{eq:Hcc_char_Michell_positive} it is clear that $H_1 =\frac{4}{5} \,e_1\otimes e_1\otimes e_1\otimes e_1+\frac{1}{5}\, e_2\otimes e_2 \otimes e_2\otimes e_2 $ is optimal for $\sig$ and it is the expected solution: it is the universally optimal tensor $\bar{\hk}_\sig$ given in \eqref{eq:optimal_H_for_Michell}. Next we choose non-orthogonal vectors $\eta_1 = \frac{2}{\sqrt{5}}\, e_1 + \frac{1}{\sqrt{5}}\, e_2$ and $\eta_2 = \frac{2}{\sqrt{5}}\, e_1 - \frac{1}{\sqrt{5}}\, e_2$ and we may check that the tensor $H_2 = \sum_{i=1}^2 \frac{1}{2} \,\eta_i\otimes \eta_i\otimes \eta_i\otimes \eta_i$ is an element of $\Hcc(\sig)$ according to \eqref{eq:Hcc_char_Michell_positive}. Since $H_1 \neq H_2$ it becomes clear that elements of $\Hcc(\sig)$ for positive definite $\sig$ may be constructed in many ways. \noindent\underline{Case c) $\sig$ is of rank one} It is not restrictive to assume that $\lambda_1(\sig) = 0$, $\lambda_2(\sig)>0$ and so $\sig = \lambda_2(\sig) \,v_2(\sig) \otimes v_2(\sig)$. In this case there are infinitely many $\xi_\sig$ such that $\rho(\xi_\sig) = 1$ and \eqref{eq:ext_cond_Michell} holds. We can, however, test \eqref{eq:criteria_for_optimality_of_H} with only one: $\xi_\sig := v_2(\sig) \otimes v_2(\sig)$ for which $\Hch(\xi_\sig) = \bigl\{ v_2(\sig) \otimes v_2(\sig) \otimes v_2(\sig) \otimes v_2(\sig) \bigr\}$, which is necessarily equal to $\Hcc(\sig)$ due to point (iii) of Theorem \ref{thm:rho_drho}.
Eventually, for a rank-one stress $\sig$ the set of optimal Hooke tensors may be written as a singleton \begin{equation} \label{eq:Hcc_char_Michell_rank_one} \Hcc(\sig) = \left\{ \frac{\sig}{\abs{\sig}} \otimes \frac{\sig}{\abs{\sig}}\right\}, \end{equation} where we used the fact that $\dro(\sig) = \abs{\sig}$ for $\sig$ of rank one. By comparing to Example \ref{ex:rho_drho_AMD} we learn that the AMD and FibMD problems furnish the same optimal Hooke tensor at points where $\sig$ is rank-one. \begin{remark} \label{rem:FibMD_Michell} The pair of variational problems $\Prob$ and $\dProb$ with $\ro$ and $\dro$ specified above are well known to constitute the \textit{Michell problem} which is the one of finding the least-weight truss-resembling structure in $d$-dimensional domain $\Ob$, cf. \cite{strang1983} and \cite{bouchitte2008}. An extensive coverage of the Michell structures may be found in \cite{Lewinski2019}. Typically one poses the Michell problem in the so-called plastic design setting, namely the structure is not a body that undergoes elastic deformation, it is merely a body made of perfectly rigid-plastic material and is being designed to work under given stress regime. Herein the Michell problem is recovered as a special case of the Free Material Design problem for elastic body: we start with the set $\Hax$ of uni-axial Hooke tensors that is supposed to mimic the truss-like behaviour of the design structure. Mathematical argument requires that $\Hax$ be convexified to $\HM$ and eventually the optimal structure is made of a fibrous-like material. Another work where a link between the Michell problem and optimal design of elastic body was made is \cite{bourdin2008} where the Michell problem was recovered as the asymptotic limit for structural topology design problem in the high-porosity regime. 
\end{remark} \end{example} \begin{example}[\textbf{Fibrous Material Design problem with dissymmetry in tension and compression}] \label{ex:rho_drho_FibMD_plus_minus} We revisit the problem of Fibrous Material Design with the linear constitutive law replaced by the constitutive law for a material that responds differently in tension and compression (the design problem will be further abbreviated by FibMD$\pm$), i.e. we take \begin{equation} \label{eq:FibMD_setting_plus_minus} \Hs = \HM, \qquad j_\pm(\hk,\xi) = (\kappa_+)^2 \, j_+(\hk,\xi) + (\kappa_-)^2 \, j_-(\hk,\xi), \qquad \cost(\hk) = \tr\, \hk \end{equation} where $j_+, j_-$ are computed for $j(\hk,\xi) = \frac{1}{2} \pairing{\hk\,\xi,\xi}$, i.e. $p=2$, see Example \ref{ex:dissymetru_tension_compresion}. In contrast to Example \ref{ex:rho_drho_FibMD} the function $j_\pm$ is no longer linear with respect to $\hk$ and therefore for a given $\xi \in \Sdd$ we must test $j_\pm(\hk,\xi)$ with tensors $\hk$ in the whole $\HM$ instead of just $\Hax$. We start with a remark: for every $\xi \in \Sdd$ there exist $\zeta_1 \in \mathcal{S}^{d\times d}_+$ and $\zeta_2 \in \mathcal{S}^{d\times d}_-$ such that $\xi +\zeta_1 = \xi_+$ and $\xi +\zeta_2 = \xi_-$ where $\xi_+ = \sum_i \max\{\lambda_i(\xi),0\}\, v_i(\xi) \otimes v_i(\xi)$ and $\xi_- = \sum_i \min\{\lambda_i(\xi),0\}\, v_i(\xi) \otimes v_i(\xi)$ are, respectively, the positive and negative parts of the tensor $\xi$. Then, for any $\hk \in \Hc$, i.e.
for $\hk = \sum_{i =1}^m \alpha_i\,\eta_i \otimes \eta_i \otimes \eta_i \otimes \eta_i$ with $\sum_{i=1}^m \alpha_i=1$, we estimate \begin{equation*} j_+(H,\xi) = \inf\limits_{\zeta \in \mathcal{S}^{d\times d}_+} \biggl\{ \sum_{i =1}^m \frac{\alpha_i}{2} \bigl(\pairing{\xi + \zeta,\eta_i \otimes \eta_i}\bigr)^2 \biggr\} \leq \sum_{i =1}^m \frac{\alpha_i}{2} \bigl(\pairing{\xi_+,\eta_i \otimes \eta_i}\bigr)^2 \end{equation*} and by repeating an analogous estimate for $j_-$ we obtain \begin{alignat*}{1} j_\pm(H,\xi) \leq \sum_{i =1}^m \frac{\alpha_i}{2} \biggl(\bigl(\kappa_+\pairing{\xi_+,\eta_i \otimes \eta_i}\bigr)^2 + \bigl( \kappa_- \pairing{\xi_-,\eta_i \otimes \eta_i}\bigr)^2 \biggr) \leq \sum_{i =1}^m \frac{\alpha_i}{2} \biggl(\pairing{\bigl(\kappa_+\xi_+ - \kappa_- \xi_-\bigr),\eta_i \otimes \eta_i} \biggr)^2 \end{alignat*} where we used the fact that $\bigl(\kappa_+\pairing{\xi_+,\eta_i \otimes \eta_i}\bigr) \, \bigl( \kappa_- \pairing{\xi_-,\eta_i \otimes \eta_i}\bigr) \leq 0$. We see that for any $\hk\in \Hc$ we have \begin{equation*} j_\pm(H,\xi) \leq \sum_{i =1}^m \frac{\alpha_i}{2} \bigl( \rho_\pm(\xi)\bigr)^2 \leq \frac{1}{2} \bigl( \rho_\pm(\xi)\bigr)^2 \end{equation*} where, upon denoting by $\rho$ the spectral norm from \eqref{eq:spectral_rho}, we introduce \begin{equation*} \rho_\pm(\xi) := \rho\bigl(\kappa_+\,\xi_+ - \kappa_- \,\xi_-\bigr) = \max\limits_{i \in \{1,\ldots,d\}} \biggl\{ \max \bigl\{ \kappa_+ \lambda_i(\xi),-\kappa_- \lambda_i(\xi) \bigr\} \biggr\}. \end{equation*} By choosing $\eta$ parallel to a suitable eigenvector of $\xi$ we easily obtain $\jh_\pm(\xi) \geq j_\pm(\eta\otimes \eta\otimes \eta\otimes \eta,\xi) = \frac{1}{2} \bigl( \rho_\pm(\xi)\bigr)^2$. The two estimates furnish $\jh_\pm(\xi) = \frac{1}{2} \bigl( \rho_\pm(\xi)\bigr)^2$ hence $\rho_\pm$ is the gauge function for the FibMD$\pm$ problem. 
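The two expressions for $\rho_\pm$ displayed above (the spectral norm of $\kappa_+\xi_+ - \kappa_-\,\xi_-$ versus the eigenvalue-wise maximum) can be cross-checked numerically; a numpy sketch with illustrative values of $\kappa_+,\kappa_-$:

```python
import numpy as np

rng = np.random.default_rng(3)
kp, km = 1.5, 0.7            # illustrative kappa_+ and kappa_-

def parts(xi):
    """Positive and negative spectral parts of a symmetric matrix."""
    lam, V = np.linalg.eigh(xi)
    return (V * np.maximum(lam, 0)) @ V.T, (V * np.minimum(lam, 0)) @ V.T

def rho(xi):                 # spectral norm
    return np.max(np.abs(np.linalg.eigvalsh(xi)))

for _ in range(500):
    B = rng.standard_normal((3, 3))
    xi = (B + B.T) / 2
    xp, xm = parts(xi)
    lhs = rho(kp * xp - km * xm)                     # rho(kappa_+ xi_+ - kappa_- xi_-)
    lam = np.linalg.eigvalsh(xi)
    rhs = np.max(np.maximum(kp * lam, -km * lam))    # eigenvalue-wise formula
    assert abs(lhs - rhs) < 1e-10
```

The agreement reflects the fact that $\kappa_+\xi_+ - \kappa_-\,\xi_-$ shares the eigenvectors of $\xi$ and has the non-negative eigenvalues $\max\{\kappa_+\lambda_i(\xi), -\kappa_-\lambda_i(\xi)\}$.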
We observe that \begin{equation*} \rho_\pm(\xi) \leq 1 \qquad \Leftrightarrow \qquad -\frac{1}{\kappa_-} \leq \lambda_i(\xi) \leq \frac{1}{\kappa_+} \quad \forall\, i \in \{1,\ldots,d\}. \end{equation*} For $\sig \in \Sdd$ the polar $\rho_\pm^0$ reads \begin{equation*} \rho_\pm^0(\sig) = \sum_{i=1}^{d} \max \left\{ \frac{1}{\kappa_+} \lambda_i(\sig),-\frac{1}{\kappa_-} \lambda_i(\sig) \right\}= \frac{1}{2} \left( \frac{1}{\kappa_+} - \frac{1}{\kappa_-}\right) \tr\,\sig + \frac{1}{2} \left( \frac{1}{\kappa_+} + \frac{1}{\kappa_-}\right) \dro(\sig) \end{equation*} where $\dro$ is the polar to the spectral norm, see \eqref{eq:spectral_rho}; it is worth noting that $\tr\, \sig$ enters the formula with its sign. The formula for $\rho_\pm^0$ was already reported in Section 3.5 in \cite{Lewinski2019}. The extremality conditions between $\xi$ and $\sig$ for $\rho_\pm$ and $\rho_\pm^0$ are very similar to those displayed for the FibMD problem (see \eqref{eq:ext_cond_Michell}), thus we do not write them down. The same goes for characterizations of the sets $\Hch(\xi)$ and $\Hcc(\sig)$; we merely show a formula for \begin{equation*} \bar{\hk}_\sig = \sum_{i=1}^{d} \frac{\max \bigl\{ \frac{1}{\kappa_+} \lambda_i(\sig),-\frac{1}{\kappa_-} \lambda_i(\sig) \bigr\}}{\rho_\pm^0(\sig)} \ v_i(\sig) \otimes v_i(\sig) \otimes v_i(\sig) \otimes v_i(\sig) \end{equation*} being a universal (but in general non-unique) element of the set $\Hcc(\sig)$. \begin{remark} In Remark \ref{rem:FibMD_Michell} the FibMD problem, characterized by the spectral norm $\rho$ and its polar $\dro$ and posed for an elastic body, was recognized as equivalent to the Michell problem of designing a truss-like plastic structure of minimum weight -- this observation was valid under the condition that the permissible stresses in the second model are equal in tension and compression, i.e. $\bar{\sigma}_+ = \bar{\sigma}_- \in \R_+$.
The theory of plastic Michell structures is developed in the case $\bar{\sigma}_+ \neq \bar{\sigma}_-$ as well, see Section 3.4 in \cite{Lewinski2019}. If one chooses $\kappa_+/\kappa_- = \bar{\sigma}_+ / \bar{\sigma}_-$ in the FibMD$\pm$ problem then again the duality pair $\Prob$, $\dProb$ with gauges $\rho_\pm$, $\rho_\pm^0$ is the very same as the one appearing in the Michell problem with permissible stresses $\bar{\sigma}_+ \neq \bar{\sigma}_-$. To the knowledge of the present authors the FibMD$\pm$ problem is the first formulation for elastic structure design known in the literature that is directly linked to the Michell problem for uneven permissible stresses in tension and compression. \end{remark} \end{example} \begin{example}[\textbf{Isotropic Material Design problem}] \label{ex:rho_drho_IMD} The following variant of (FMD) problem is known as the \textit{Isotropic Material Design} problem (IMD), see \cite{czarnecki2015a}: \begin{equation} \label{eq:IMD_setting} \Hs = \Hs_{iso}, \qquad j(\hk,\xi) = \frac{1}{2} \pairing{\hk \,\xi,\xi}, \qquad \cost(\hk) = \tr\, \hk, \end{equation} where $\Hs_{iso} = \bigl\{d K \bigl( \frac{1}{d}\, \mathrm{I} \otimes \mathrm{I} \bigr) + 2\, G\, \bigl( \mathrm{Id}- \frac{1}{d}\, \mathrm{I} \otimes \mathrm{I} \bigr)\, : \, K,G\geq 0 \bigr\}$ is a two-dimensional closed convex cone of isotropic Hooke tensors in a $d$-dimensional body, $d\in \{2,3\}$. For any $\hk \in \Hs_{iso}$ and $\xi \in \Sdd$ we have $j(H,\xi) = \frac{1}{2}\,\bigl( K \abs{\tr\,\xi}^2 + 2G\, \abs{\mathrm{dev} \,\xi}^2 \bigr)$ where $\mathrm{dev} \,\xi = \xi - \frac{1}{d} \, (\tr\, \xi)\, \mathrm{I} = \bigl( \mathrm{Id}- \frac{1}{d}\, \mathrm{I} \otimes \mathrm{I} \bigr) \,\xi$ and $\abs{\mathrm{dev}\,\xi}$ denotes the Euclidean norm. It is well established that $\hk$ has a single eigenvalue $d K$ and $N(d)-1$ eigenvalues $2G$ (we recall that $N(d)=d\,(d+1)/2$) therefore $\tr\,\hk = d K + \bigl(N(d)-1\bigr)\,2G$. 
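The stated spectrum of an isotropic Hooke tensor can be confirmed numerically by representing $\hk$ as a matrix in an orthonormal (Mandel-type) basis of $\mathcal{S}^{2\times 2}$; the moduli below are illustrative:

```python
import numpy as np

d, N = 2, 3                  # N(d) = d(d+1)/2 = 3 for d = 2
K, G = 1.3, 0.4              # illustrative bulk and shear moduli

# orthonormal (Mandel) basis of the symmetric 2x2 matrices
E = [np.array([[1.0, 0.0], [0.0, 0.0]]),
     np.array([[0.0, 0.0], [0.0, 1.0]]),
     np.array([[0.0, 1.0], [1.0, 0.0]]) / np.sqrt(2)]

def apply_H(xi):
    """H xi = d K (tr xi / d) I + 2 G dev xi, the isotropic Hooke tensor."""
    tr = np.trace(xi)
    dev = xi - tr / d * np.eye(d)
    return K * tr * np.eye(d) + 2 * G * dev

# matrix of H in the Mandel basis: M_ij = <E_i, H E_j>
M = np.array([[np.sum(Ei * apply_H(Ej)) for Ej in E] for Ei in E])

# spectrum {d K, 2G, ..., 2G} and trace d K + (N(d)-1) 2G
eig = np.sort(np.linalg.eigvalsh(M))
assert np.allclose(eig, np.sort([d * K, 2 * G, 2 * G]))
assert abs(np.trace(M) - (d * K + (N - 1) * 2 * G)) < 1e-12
```

The $\sqrt{2}$ scaling of the off-diagonal basis element is what makes the basis orthonormal under the Frobenius inner product, so that eigenvalues of `M` coincide with those of $\hk$.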
Upon introducing auxiliary variables $A_1 = d K$ and $A_2 = \bigl(N(d)-1\bigr)\,2G$ we obtain $\tr\,\hk = A_1+A_2$ and \begin{equation*} j(\hk,\xi) = \frac{1}{2} \left( A_1 \biggl(\frac{\abs{\tr \,\xi}}{\sqrt{d}} \biggr)^2 + A_2 \biggl(\frac{\abs{\mathrm{dev} \,\xi}}{\sqrt{N(d)-1}} \biggr)^2\right). \end{equation*} Thus we have \begin{equation*} \jh(\xi) = \max_{\hk \in \Hc} j(\hk,\xi) = \max_{A_1,A_2 \geq 0} \bigl\{j(\hk,\xi) \, :\, A_1+A_2 \leq 1 \bigr\} = \frac{1}{2} \bigl( \rho(\xi)\bigr)^2 \end{equation*} where \begin{equation} \label{eq:rho_IMD} \rho(\xi) = \max\left\{ \frac{\abs{\tr \,\xi}}{\sqrt{d}}, \frac{\abs{\mathrm{dev} \,\xi}}{\sqrt{N(d)-1}} \right\}, \end{equation} while, for a non-zero $\xi \in \Sdd$ \begin{equation*} \Hch(\xi) \!= \!\biggl\{ \hk\in \Hs_{iso} : d K+ \bigl(N(d)-1\bigr)2G = 1,\, \biggl(\frac{\abs{\tr \,\xi}}{\sqrt{d}}-\rho(\xi)\biggr) K =0,\ \biggl(\frac{\abs{\mathrm{dev} \,\xi}}{\sqrt{N(d)-1}} - \rho(\xi) \biggr) G=0 \biggr\}. \end{equation*} By using the fact that $\pairing{\xi,\sig} = \frac{1}{d} (\tr\,\xi)(\tr\,\sig) + \pairing{\mathrm{dev}\,\xi,\mathrm{dev}\,\sig}$ we arrive at the polar \begin{equation} \label{eq:drho_IMD} \dro(\sig) = \frac{1}{\sqrt{d}} \, \abs{\tr\,\sig} + \sqrt{N(d)-1}\, \abs{\mathrm{dev}\,\sig} \end{equation} and the extremality conditions for non-zero $\xi,\sig$ follow: \begin{equation} \label{eq:ext_cond_IMD} \pairing{\xi,\sig} = \ro(\xi) \, \dro(\sig) \qquad \Leftrightarrow \qquad \left\{ \begin{array}{l} \tr\, \sig = \abs{\tr\,\sig}\, \frac{\tr\,\xi}{\sqrt{d} \,\rho(\xi)}, \\ \mathrm{dev}\,\sig = \abs{\mathrm{dev}\,\sig}\, \frac{\mathrm{dev}\,\xi}{\sqrt{N(d)-1} \,\rho(\xi)}. \end{array} \right. 
\end{equation} In order to characterize optimal Hooke tensors for non-zero $\sig$ we use point (ii) of Theorem \ref{thm:rho_drho}: $\hk$ is an element of $\Hcc(\sig)$ if and only if, for any $\xi = \xi_\sig$ satisfying $\rho(\xi_\sig)=1$ and the extremality conditions above, the constitutive law $\sig/\dro(\sig) = H\,\xi_\sig$ holds, which, considering \eqref{eq:ext_cond_IMD}, may be rewritten as: \begin{equation*} \frac{1}{\dro(\sig)} \left( \frac{1}{d}\,\abs{\tr\,\sig} \,\frac{\tr\,\xi_\sig}{\sqrt{d}} \, \mathrm{I} + \abs{\mathrm{dev}\,\sig} \, \frac{\mathrm{dev}\,\xi_\sig}{\sqrt{N(d)-1}} \right) = K (\tr\,\xi_\sig) \, \mathrm{I} + 2G \, \mathrm{dev}\,\xi_\sig. \end{equation*} It is easy to see that for any $\sig$ the tensor $\xi_\sig$ may be chosen so that both $\tr\,\xi_\sig \neq 0$ and $\mathrm{dev}\,\xi_\sig \neq 0$ and then comparing the left and right-hand side above yields \begin{equation} \label{eq:Hcc_char_IMD} \Hcc(\sig) = \biggl\{ \hk\in \Hs_{iso} \ : \ K = \frac{1}{d\sqrt{d}}\, \frac{\abs{\tr\,\sig}}{\dro(\sig)}, \ G = \frac{1}{2 \sqrt{N(d)-1}}\, \frac{\abs{\mathrm{dev}\,\sig}}{\dro(\sig)} \biggr\}. \end{equation} We notice that $\Hcc(\sig)$ is always a singleton for non-zero $\sig$, while $\Hch(\xi)$ may be a one dimensional affine subset of $\Hs_{iso}$, provided $\abs{\tr\,\xi}/\sqrt{d} = \abs{\mathrm{dev}\,\xi} /\sqrt{N(d)-1} = \rho(\xi) \neq 0$. \begin{example}[\textbf{Isotropic Material Design in the case of the power-law }] \label{ex:power_law} For $p\in (1,\infty)$ different than 2 one may propose a generalization of the constitutive law of linear elasticity. The conditions (H1)-(H5) can be easily satisfied if one assumes admissible Hooke tensors to be isotropic, i.e. again $\hk \in \Hs = \Hs_{iso}$. 
For instance we may choose \begin{equation*} j(\hk,\xi) = \frac{1}{p}\,\biggl( K \abs{\tr\,\xi}^p + 2G\, \abs{\mathrm{dev} \,\xi}^p \biggr) \end{equation*} where once more the moduli $K,G \geq 0$ identify an isotropic Hooke tensor $\hk \in \Hs_{iso}$ by means of \eqref{eq:iso_K_G}. A similar potential is proposed in \cite{castaneda1998} and referred to as the \textit{power-law} potential. The authors therein, however, allow choosing different exponents for the two tensor invariants $\abs{\tr\,\xi}$ and $\abs{\mathrm{dev} \,\xi}$, whilst here the assumption (H\ref{as:p-hom}) obliges us to apply a common exponent $p$. Naturally the results from Example \ref{ex:rho_drho_IMD} hold here with only slight modifications, for instance \begin{equation*} \rho(\xi) = \max\left\{ \frac{\abs{\tr \,\xi}}{d^{\,1/p}} , \frac{\abs{\mathrm{dev} \,\xi}}{\bigl(N(d)-1\bigr)^{1/p}}\right\}, \qquad \dro(\sig) = \frac{1}{d^{\,1/p'}} \, \abs{\tr\,\sig} + \bigl(N(d)-1\bigr)^{1/p}\, \abs{\mathrm{dev}\,\sig}. \end{equation*} \end{example} \end{example} \begin{remark} The \textit{Cubic Material Design} problem (CMD) considered in the paper \cite{czubacki2015} \textit{a priori} lies outside the scope of the present contribution. The set $\Hs_{cubic}$ of all the Hooke tensors of cubic symmetry is not a convex set, which is due to the distinguished directions of anisotropy. Thus $\Hs_{cubic}$ cannot be directly chosen as $\Hs$ herein. Nevertheless, in the case when $j(\hk,\xi) = \frac{1}{2} \pairing{\hk \,\xi,\xi}$ and $\cost(\hk) = \tr \hk$, it turns out that the set of solutions of the problem $\max_{\hk \in \Hs_{cubic}, \ \cost(\hk) \leq 1} j(\hk,\xi)$ is convex for any $\xi \in \Sdd$, see \cite{czubacki2015} for details. This implies that the original CMD problem can be recovered as a special case of the (FMD) problem provided we set $\Hs = \mathrm{conv} \bigl(\Hs_{cubic}\bigr)$. We shall not formulate this result rigorously herein.
\end{remark} \subsection{Examples of solutions of the Free Material Design problem in the settings AMD, FibMD, FibMD$\pm$ and IMD} For one load case $\Fl$ that simulates uni-axial tension we are to solve a family of Free Material Design problems in several settings listed in this paper. Thanks to Theorem \ref{thm:FMD_LCP} we may solve the corresponding (LCP) problem instead, for which we have at our disposal the optimality conditions from Theorem \ref{thm:optimality_conditions}. Our strategy will be to first put forward a competitor $u,\mu,\sig,\hf$ for which we shall validate the optimality conditions. While the solutions in Cases a) and b) are fairly easy to guess, it is clear that the solution (the exact coefficients) in Case c), i.e. for the IMD problem, had to be derived first. We stress that displacement solutions $u$ are given up to a rigid body displacement function $u_0\in \U_0$. It is also worth explaining that the Hooke functions $\hf$ and their underlying moduli are given without physical units as they are normalized by the condition $\hf(x) \in \Hc$: one can see that the ultimate Hooke field is $\lambda = \hf \mu$ and the suitable units are included in the "elastic mass distribution" $\mu$; an analogous comment concerns the stress function $\sigma$. \begin{example}[\textbf{Optimal material design of a plate under uni-axial tension test}] For the closed rectangle $R = A_1 A_2 B_2 B_1 \subset \Rd = \R^2$ (we set $d=2$ throughout this example) with $A_1 = (-a/2,-b/2)$, $A_2 = (-a/2,b/2)$, $B_1 = (a/2,-b/2)$ and $B_2 = (a/2,b/2)$ we consider a load \begin{equation*} \Fl = F_q +F_Q, \qquad F_q=- q\,e_1\, \Ha^1\mres[A_1,A_2] + q\,e_1\, \Ha^1\mres[B_1,B_2], \quad F_Q=- Q\, e_1 \, \delta_{A_0} + Q \, e_1\, \delta_{B_0} \end{equation*} where $A_0 = (-a/2,0)$, $B_0 = (a/2,0)$ and $q$ and $Q$ are non-negative constants that represent, respectively, loads diffused along segments and point loads, see Fig. \ref{fig:optimal_structure}.
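Although outside the paper's formal development, the balance of the load $\Fl$ can be illustrated by a short numerical check in Python; the sample values of $a$, $b$, $q$, $Q$ below are hypothetical, and the diffused tractions are replaced by their resultants at the segment midpoints (which does not alter the total force or the total moment, since the tractions are constant and horizontal on vertical segments):

```python
# Numerical sanity check (not part of the paper): for sample values of
# a, b, q, Q the load Fl = F_q + F_Q is balanced, i.e. its resultant
# force and resultant moment about the origin both vanish.
a, b, q, Q = 2.0, 1.0, 3.0, 5.0   # hypothetical dimensions and load intensities

# F_q: tractions -q*e1 on {-a/2} x [-b/2, b/2] and +q*e1 on {a/2} x [-b/2, b/2];
# F_Q: point forces -Q*e1 at A0 = (-a/2, 0) and +Q*e1 at B0 = (a/2, 0).
# The load is stored as (force, application point) pairs, with each diffused
# part replaced by its resultant acting at the segment midpoint.
loads = [
    ((-q * b, 0.0), (-a / 2, 0.0)),  # resultant of F_q on A1A2
    ((+q * b, 0.0), (+a / 2, 0.0)),  # resultant of F_q on B1B2
    ((-Q, 0.0), (-a / 2, 0.0)),      # point force at A0
    ((+Q, 0.0), (+a / 2, 0.0)),      # point force at B0
]

resultant = [sum(F[i] for F, _ in loads) for i in (0, 1)]
moment = sum(x * Fy - y * Fx for (Fx, Fy), (x, y) in loads)  # scalar 2D moment

print(resultant, moment)  # both vanish: the load is balanced
```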
It is straightforward to check that $\Fl$ is balanced. For $\Omega$ we can take any bounded domain such that $R \subset \Ob$. \begin{figure}[h] \centering \includegraphics*[width=0.65\textwidth]{optimal_mass.eps} \caption{Graphical representation of load $\Fl = F_q +F_Q$ and optimal mass distribution $\mu$.} \label{fig:optimal_structure} \end{figure} \noindent\underline{Case a) the Anisotropic Material Design} In the AMD setting where $\rho = \dro = \abs{\argu}$, see \eqref{eq:AMD_setting} in Example \ref{ex:rho_drho_AMD}, we propose the following quadruple \begin{alignat}{2} \label{eq:AMD_quadruple_1} u(x) &= x_1\,e_1, \qquad \qquad &\mu &= q \, \mathcal{L}^2 \mres R + Q \, \Ha^1 \mres [A_0,B_0],\\ \label{eq:AMD_quadruple_2} \sig &= e_1 \otimes e_1, \qquad \qquad &\hf &= e_1 \otimes e_1 \otimes e_1 \otimes e_1. \end{alignat} We see that $\dro(\sig) =\abs{\sig} = 1$ and $\tr\,\hf = 1$, which are the initial assumptions in Theorem \ref{thm:optimality_conditions}. An elementary computation shows that $-\DIV(\sig \mu) = \Fl$, which gives the optimality condition (i) in Theorem \ref{thm:optimality_conditions}. The function $u$ is smooth and thus checking the condition $u\in \overline{\U}_1$ boils down to verifying whether $\rho\bigl( e(u) \bigr) = \abs{e(u)} \leq 1$. We have $e(u) = e_1 \otimes e_1$ and clearly the optimality condition (ii) follows. Next we can choose which of the conditions (iii) or (iii)' in Theorem \ref{thm:optimality_conditions} we shall check. First we list the essential elements of the theory of the space tangent to the measure $\mu$, valid for $\mu$-a.e. $x$: \begin{equation*} \mathcal{S}_\mu(x) = \left\{ \begin{array}{cl} \mathcal{S}^{2 \times 2}& \text{for } \mathcal{L}^2\text{-a.e. } x \in R,\\ \mathrm{span}\,\{e_1 \otimes e_1\} & \text{for } \Ha^1\text{-a.e. } x \in [A_0,B_0], \end{array} \right. \ P_\mu(x) = \left\{ \begin{array}{cl} \mathrm{I}& \text{for } \mathcal{L}^2\text{-a.e. } x \in R,\\ e_1 \otimes e_1 & \text{for } \Ha^1\text{-a.e.
} x \in [A_0,B_0], \end{array} \right. \end{equation*} see e.g. \cite{bouchitte2001}. Since $u$ is smooth we simply compute $e_\mu(u)(x) = P^{\top}_\mu(x)\, e(u)(x) \,P_\mu(x)$; having $e(u) = e_1 \otimes e_1$ we clearly obtain that $e_\mu(u) = e_1 \otimes e_1$ for $\mu$-a.e. $x$ as well. We check that $\pairing{\sig(x),e_\mu(u)(x)} = 1$ for $\mu$-a.e. $x$. In addition, since $\hf = \sig \otimes \sig$, we have $\hf \in \Hcc(\sig)$ (see \eqref{eq:Hcc_char_AMD}) and the last optimality condition (iii) follows. We have thus already proved that the quadruple $u,\mu,\sig,\hf$ is an optimal solution for the (LCP) problem and Theorem \ref{thm:FMD_LCP} furnishes a solution for the original Free Material Design problem in the AMD setting. For the sake of demonstration we will check the condition (iii)' as well: for this purpose we must compute the formula for $j_\mu\bigl(\hf(x),\argu\bigr)$. For $\mathcal{L}^2$-a.e. $x\in R$ clearly $j_\mu\bigl(\hf(x),\xi \bigr) = j\bigl(\hf(x),\xi \bigr)$ since for such $x$ we have $\mathcal{S}_\mu^\perp(x) = \{0\}$. For $\Ha^1$-a.e. $x \in [A_0,B_0]$ we have $\pairing{e_1 \otimes e_1,\zeta} = 0$ whenever $\zeta \in \mathcal{S}_\mu^\perp(x)$ hence for any $\xi \in \mathcal{S}^{2 \times 2}$ \begin{equation*} j_\mu\bigl(\hf(x),\xi \bigr) = \inf\limits_{\zeta \in \mathcal{S}^\perp_\mu(x)} j\bigl(\hf(x),\xi +\zeta \bigr) = \inf\limits_{\zeta \in \mathcal{S}^\perp_\mu(x)} \frac{1}{2} \bigl(\pairing{e_1 \otimes e_1,\xi +\zeta}\bigr)^2 = \frac{1}{2} \bigl(\pairing{e_1 \otimes e_1,\xi}\bigr)^2 \end{equation*} and ultimately we obtain $j_\mu\bigl(\hf(x),\argu \bigr) = j\bigl(\hf(x),\argu \bigr)$ for $\mu$-a.e. $x$. Therefore, verifying the condition (iii)' boils down to checking whether $\sig = \hf \, e_\mu(u)$ holds $\mu$-a.e., and this is straightforward.
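The algebra behind Case a) can be verified numerically; the following Python sketch (a sanity check outside the paper's formal argument) reads $\tr\,\hf$ as the contraction $H_{ijij}$ and checks that $\abs{\sig}=\abs{e(u)}=1$, $\tr\,\hf=1$, $\pairing{\sig,e(u)}=1$ and that the constitutive law $\hf\,e(u)=\sig$ holds:

```python
import itertools

# Sanity check of Case a), the AMD setting, in d = 2:
# sig = e1 (x) e1, e(u) = e1 (x) e1, hf = sig (x) sig (a 4th-order tensor).
d = 2
e1 = (1.0, 0.0)
sig = [[e1[i] * e1[j] for j in range(d)] for i in range(d)]  # e1 ⊗ e1
eu = [row[:] for row in sig]                                 # e(u) = e1 ⊗ e1

frob = lambda A, B: sum(A[i][j] * B[i][j] for i in range(d) for j in range(d))

# rho0(sig) = |sig| and rho(e(u)) = |e(u)| (Frobenius norms), both equal to 1
norm_sig = frob(sig, sig) ** 0.5
norm_eu = frob(eu, eu) ** 0.5

# hf = sig ⊗ sig; its trace H_{ijij} equals <sig, sig> = 1
H = {(i, j, k, l): sig[i][j] * sig[k][l]
     for i, j, k, l in itertools.product(range(d), repeat=4)}
tr_H = sum(H[(i, j, i, j)] for i in range(d) for j in range(d))

# constitutive law: (hf e(u))_{ij} = H_{ijkl} e(u)_{kl} gives back sig
H_eu = [[sum(H[(i, j, k, l)] * eu[k][l] for k in range(d) for l in range(d))
         for j in range(d)] for i in range(d)]

print(norm_sig, norm_eu, tr_H, H_eu == sig, frob(sig, eu))
```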
\noindent\underline{Case b) the Fibrous Material Design} In the case of Fibrous Material Design it is enough to note briefly that the quadruple $u,\mu,\sig,\hf$ proposed in Case a) is also optimal in the setting of the FibMD problem: indeed, both $e(u)$ and $\sig$ are of rank one, thus the spectral norm $\rho\bigl(e(u) \bigr)$ and its polar $\dro(\sig)$ (see \eqref{eq:spectral_rho}) coincide with $\abs{e(u)}$ and $\abs{\sig}$ respectively. Moreover, again for a rank-one field $\sig$, the sets $\Hcc(\sig)$ are identical for AMD and FibMD, see \eqref{eq:Hcc_char_Michell_rank_one} and the comment below. An additional comment is that the field $\tilde{u} = \tilde{u}(x) = x_1\, e_1 + \beta(x_2) \, e_2$ will also be optimal for the FibMD problem provided that $\beta:\R\rightarrow \R$ is 1-Lipschitz; note that this was not the case for the AMD problem, where $u$ was uniquely determined up to a rigid body displacement function. Further, the same solution \eqref{eq:AMD_quadruple_1},\eqref{eq:AMD_quadruple_2} of (LCP) will be shared by the FibMD$\pm$ problem provided that one assumes $\kappa_+ = 1$. This is a consequence of $\sigma$ being positive semi-definite $\mu$-a.e. \noindent\underline{Case c) the Isotropic Material Design} For the IMD problem the norms $\rho$ and $\dro$ are given in \eqref{eq:rho_IMD} and \eqref{eq:drho_IMD} respectively. We put forward a quadruple that shall be checked for optimality in the IMD problem: \begin{alignat*}{2} u(x) &= \frac{2+\sqrt{2}}{2}\,x_1\,e_1 - \frac{2-\sqrt{2}}{2}\,x_2\,e_2, \qquad \qquad &\mu& = \frac{2+\sqrt{2}}{2}\ \biggl( q \, \mathcal{L}^2 \mres R + Q \, \Ha^1 \mres [A_0,B_0] \biggr),\\ \sig &= \frac{2}{2+\sqrt{2}}\ e_1 \otimes e_1, \qquad\qquad &\hf& = 2K\,\biggl( \frac{1}{2}\,\mathrm{I} \otimes \mathrm{I}\biggr) + 2G\left(\mathrm{Id} - \frac{1}{2}\mathrm{I} \otimes \mathrm{I} \right) \end{alignat*} with \begin{equation} \label{eq:optimal_K_G} K = \frac{1}{2+2\sqrt{2}}, \qquad G = \frac{1}{4+2\sqrt{2}}.
\end{equation} First we check that $\tr \,\hf = 2K+2\cdot2G = 1$, thus $\hf \in \Hc$ as assumed in Theorem \ref{thm:optimality_conditions}. Since the force flux $\tau = \sig \mu$ is identical to the one from Case a), the optimality condition (i) in Theorem \ref{thm:optimality_conditions} clearly holds. The function $u$ is again smooth so we compute $e(u) = \frac{2+\sqrt{2}}{2}\,e_1 \otimes e_1 - \frac{2-\sqrt{2}}{2}\,e_2 \otimes e_2$ and \begin{equation*} \tr\bigl(e(u)\bigr) = \sqrt{2}, \quad \abs{\mathrm{dev}\bigl(e(u) \bigr)} = \abs{e_1 \otimes e_1 - e_2 \otimes e_2} = \sqrt{2}, \quad \tr\,\sig = \frac{2}{2+\sqrt{2}}, \quad \abs{\mathrm{dev}\,\sig} = \frac{1}{1+\sqrt{2}} \end{equation*} yielding \begin{equation*} \rho\bigl( e(u) \bigr) = \max\left\{ \abs{\tr\bigl(e(u)\bigr)}/\sqrt{2}\,,\, \abs{\mathrm{dev}\bigl(e(u) \bigr)}/\sqrt{2} \right\} =1, \qquad \dro(\sig) = \frac{1}{\sqrt{2}} \, \abs{\tr\,\sig} + \sqrt{2}\, \abs{\mathrm{dev}\,\sig} = 1 \end{equation*} and therefore $u \in \overline{\U}_1$, which validates the optimality condition (ii); moreover $\dro(\sig) = 1$ as required in Theorem \ref{thm:optimality_conditions}. We move on to check the last optimality condition in version (iii). Since $\mu$ above coincides with the one from Case a) (up to a multiplicative constant), the formulas for $\mathcal{S}_\mu$ and $P_\mu$ derived therein are also correct here. Due to the smoothness of $u$ we have $\mu$-a.e. $\pairing{\sig,e_\mu(u)} = \pairing{\sig, P^{\top}_\mu\, e(u) \,P_\mu} = \pairing{\sig,e(u)}$, where we used the fact that $\sig \in \mathcal{S}_\mu\ $ $\mu$-a.e. We easily check that $\pairing{\sig,e(u)} = 1$ and the extremality condition $\pairing{\sig,e_\mu(u)} = 1$ follows. Then one may easily check that the moduli $K, G$ agree with the characterization of the set $\Hcc(\sig)$ in \eqref{eq:Hcc_char_IMD}, hence $\hf \in \Hcc(\sig)\ $ $\mu$-a.e.
and the optimality condition (iii) follows, proving that the quadruple $u,\mu,\sig,\hf$ is indeed optimal for the (LCP) problem in the IMD setting. For completeness we will show that the optimality condition (iii)' holds as well. It is clear that for $\mathcal{L}^2$-a.e. $x \in R$, where $\mathcal{S}_\mu(x) = \mathcal{S}^{2 \times 2}$, we have $j_\mu\bigl(\hf(x), \argu \bigr) =j\bigl(\hf(x), \argu \bigr)$. Meanwhile for $\Ha^1$-a.e. $x \in [A_0,B_0]$ the tensors $\zeta \in \mathcal{S}^\perp_\mu(x)$ are exactly those of the form $\zeta = e_2 \diamond \eta$ where $\eta \in \R^2$ and $\diamond$ denotes the symmetrized tensor product. Hence, for $\Ha^1$-a.e. $x \in [A_0,B_0]$, after performing the minimization (non-trivial here) we obtain \begin{equation} \label{eq:uniaxial_constitutive_law} j_\mu\bigl(\hf(x),\xi \bigr) = \inf\limits_{\eta \in \R^2} j\bigl(\hf(x),\xi + e_2 \diamond \eta \bigr) = \frac{1}{2} \frac{4 K G}{K+G} \ \pairing{e_1 \otimes e_1 \otimes e_1 \otimes e_1,\xi\otimes \xi}, \end{equation} where the constant $\frac{4 K G}{K+G}$ can be readily recognized as the Young modulus $E$, cf. \eqref{eq:Young_and_Poisson}. For a chosen $\xi$ the minimizer $\eta = \eta_\xi$ above is exactly the one for which $\hf(x)\,(\xi + e_2 \diamond \eta_\xi) = s\,e_1\otimes e_1 $ for some $s \in \R$. The potential $j_\mu$ in \eqref{eq:uniaxial_constitutive_law} induces the well-known uni-axial constitutive law in the bar $[A_0,B_0]$ that spontaneously emerges as a singular (with respect to $\mathcal{L}^2$) part of $\mu$. Upon computing: $e_\mu(u)(x) = e(u)(x)$ for $\mathcal{L}^2$-a.e. $x \in R$ and $e_\mu(u)(x) = \frac{2+\sqrt{2}}{2}\, e_1\otimes e_1$ for $\Ha^1$-a.e. $x\in [A_0,B_0]$, we see that eventually verifying condition (iii)' boils down to checking if \begin{equation*} \sig(x) = \left\{ \begin{array}{cl} \hf(x) \, e(u)(x) & \text{for } \mathcal{L}^2\text{-a.e.
} x \in R,\\ \left(\frac{4 K G}{K+G} \ e_1 \otimes e_1 \otimes e_1 \otimes e_1 \right)\bigl(\frac{2+\sqrt{2}}{2} e_1\otimes e_1 \bigr) & \text{for } \Ha^1\text{-a.e. } x \in [A_0,B_0]. \end{array} \right. \end{equation*} The equations above are verified after elementary computations; in particular using the formulas \eqref{eq:optimal_K_G} for the optimal $K,G$ gives the Young modulus and the Poisson ratio: \begin{equation} \label{eq:optimal_E_nu} E = \frac{4 K G}{K+G} = \left( \frac{2}{2+\sqrt{2}}\right)^2 = 6-4\sqrt{2}, \qquad \nu = \frac{K-G}{K+G}=3-2\sqrt{2}. \end{equation} We finish the example with an observation: the computed value of the Young modulus $E$ turns out to be maximal among all pairs $K,G\geq 0$ satisfying $\tr \,\hf = 2K+2\cdot2G \leq 1$. This is not surprising, since the plate under the tension test has minimum compliance whenever its relative elongation along the direction $e_1$, which here equals $\check\eps := \pairing{\check{u}(a/2,0)-\check{u}(-a/2,0)\,,\,e_1/a}$, is minimal. It must be carefully noted that $\check{u}$ is a solution of the (FMD) problem and not the (LCP) problem, cf. Definitions \ref{def:LCP_solution}, \ref{def:FMD_solution} and Theorem \ref{thm:FMD_LCP}. For the Hooke law $\check\sig = \check\hf\,e(\check{u})$ with isotropic $\check\hf$ and $\check\sig = \check{s}\, e_1 \otimes e_1$ (representing uni-axial tensile stress) it is well established that $\check\eps = \pairing{e(\check{u}),e_1 \otimes e_1} = \check{s} / \check{E}$. Since the stress coefficient $\check{s}$ is predetermined by the load, we see that minimizing $\check{\eps}$ (or minimizing the compliance of the plate) reduces here to maximizing the Young modulus $\check{E}$. Since the cost assumed in the IMD problem was $c = \tr$, maximizing the Young modulus is non-trivial and furnishes \eqref{eq:optimal_E_nu}, which includes the optimal Poisson ratio $\nu \cong 0.172$.
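The closed-form constants of Case c) can be cross-checked by elementary arithmetic; the following Python sketch (a numerical sanity check, not part of the proof) verifies, for $d=2$ and $N(d)-1=2$, that $\rho(e(u))=\dro(\sig)=\tr\,\hf=1$, that $K,G$ agree with the characterization of $\Hcc(\sig)$, and that $E=6-4\sqrt{2}$, $\nu=3-2\sqrt{2}$:

```python
from math import isclose, sqrt

# Numerical verification of the closed-form constants in Case c), IMD setting.
r2 = sqrt(2.0)
K = 1.0 / (2.0 + 2.0 * r2)
G = 1.0 / (4.0 + 2.0 * r2)

# invariants of e(u) = diag((2+r2)/2, -(2-r2)/2) and sig = (2/(2+r2)) e1⊗e1,
# as computed in the text
tr_eu, dev_eu = r2, r2
tr_sig = 2.0 / (2.0 + r2)
dev_sig = 1.0 / (1.0 + r2)

rho_eu = max(abs(tr_eu) / r2, dev_eu / r2)          # rho from eq. (rho_IMD), d = 2
rho0_sig = abs(tr_sig) / r2 + r2 * dev_sig          # polar from eq. (drho_IMD)

trace_H = 2.0 * K + 2.0 * 2.0 * G                   # tr hf = dK + (N(d)-1)2G
K_opt = (1.0 / (2.0 * r2)) * abs(tr_sig) / rho0_sig  # K from eq. (Hcc_char_IMD)
G_opt = (1.0 / (2.0 * r2)) * dev_sig / rho0_sig      # G from eq. (Hcc_char_IMD)

E = 4.0 * K * G / (K + G)                           # Young modulus
nu = (K - G) / (K + G)                              # Poisson ratio

print(rho_eu, rho0_sig, trace_H, E, nu)
```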
\end{example} \section{The scalar settings of the Free Material Design problem} \label{sec:outlook} On many levels the present paper builds upon the work \cite{bouchitte2007} on the optimal design of mass $\mu \in \Mes_+(\Ob)$. In some sense we have rigorously shown that the simultaneous design of the mass $\mu$ and the material's anisotropy described by Hooke tensor function $\hf \in L^\infty_\mu(\Ob;\Hs)$ consists of three steps: \begin{enumerate}[(i)] \item computing the functions $\rho=\rho(\xi)$ and $\dro = \dro(\sig)$ that are, respectively, the maximum strain energy $j(\hk,\xi)$ and the minimum stress energy $j^*(\hk,\sig)$ over Hooke tensors $\hk \in \Hs$ of unit $c$-cost; \item finding the solutions $\hat{u}$ and $\hat{\tau}$ of the problems $\relProb$ and $\dProb$, formulated with the use of $\rho$ and $\dro$ respectively, and retrieving the optimal mass $\check{\mu} = \frac{\Totc}{Z} \dro(\hat{\tau})$; \item with $\check{\sig} = \frac{d \hat{\tau}}{d\check{\mu}} \in L^\infty_{\check{\mu}}(\Ob;\Sdd)$ finding point-wise the optimal Hooke tensor $\mathscr{C}(x) \in \Hc$ that for $\check{\mu}$-a.e. $x$ minimizes the stress energy $j^*\bigl(\mathscr{C}(x),\check{\sig}(x) \bigr)$, which may be done in a $\check{\mu}$-measurable fashion. \end{enumerate} Step (ii) alone is the essence of the approach to optimal mass design presented in \cite{bouchitte2007}, where the functions $\rho$, $\dro$ are in fact data. At the same time it is the most difficult step here, since steps (i) and (iii) involve only finite dimensional programming problems. The present work concerns the problem of elasticity in two or three dimensional bodies, where the state function $u$ is vectorial and the differential operator $e=e(u)$ is the symmetric part of the gradient. The framework of the paper \cite{bouchitte2007} is, however, far more general as \textit{a priori} it allows choosing any linear operator $A$, while the function $u$ may be either scalar or vectorial.
The particular interest of the authors of \cite{bouchitte2007} is the case of $u:\Omega \rightarrow \R$ and $A = \nabla^2$ (the Hessian operator) that reflects the theory of elastic Kirchhoff plates (thin plates subject to bending). It appears that the theory of the Free Material Design problem herein developed is also easily transferable to problems other than classical elasticity, and this last section shall serve as an outline of the FMD theory in the context of two scalar problems: the aforementioned Kirchhoff plate problem and the stationary heat conductivity problem. \subsection{The Free Material Design problem for elastic Kirchhoff plates (second order scalar problem)} \label{sec:FMD_for_plates} For a plane bounded domain $\Omega\subset \R^2$ with Lipschitz boundary let there be given a first order distribution $f \in \D'(\R^2)$ with its support contained in $\Ob$. We assume that $f$ is balanced, i.e. $\pairing{u_0,f} = 0$ for any $u_0$ of the form $u_0(x) = \pairing{a,x}+b$ with $a \in \R^2$, $b\in \R$ ($u_0$ are the out-of-plane displacements of a rigid plate). With the cone of admissible Hooke tensors $\Hs$ and the energy function $j:\Hs \times \Sdd \rightarrow \R$ defined as in Section \ref{sec:elasticity_problem}, for a Hooke tensor field given by a measure $\lambda \in \Mes(\Ob;\Hs)$ (the term \textit{bending stiffness field} would be better suited) we define the compliance of an elastic Kirchhoff plate: \begin{equation} \label{eq:compliance_def_plate} \Comp(\lambda) = \sup \left\{ f(u) - \int j\bigl(\lambda,\nabla^2 u\bigr) \ : \ u \in \D(\R^2) \right\}, \end{equation} where the scalar function $u$ represents the plate deflection. With the compliance expressed as above the Free Material Design problem for Kirchhoff plates is formulated exactly as in the case of elasticity, i.e. $\Cmin = \min \bigl\{ \Comp(\lambda) \ : \ \lambda \in \MesHH, \ \int \cost(\lambda) \leq \Totc \bigr\}$.
Since the elastic potential $j$ remains unchanged with respect to classical elasticity, the energy functional $J_\lambda$ from \eqref{eq:J_lambda} is identical as well, and therefore Propositions \ref{prop:Carath}, \ref{prop:usc_j}, \ref{prop:usc_J} follow directly. Next it is straightforward to observe that in Theorem \ref{thm:problem_P} we do not utilize the structure of the operator $e$, and a counterpart of the result for the operator $\nabla^2$ yields a pair of mutually dual problems: \begin{alignat*}{2} &\relProb \qquad \qquad Z &&= \max \biggl\{ f(u) \ : \ u \in \overline\V_1 \biggr\} \qquad \qquad\\ &\dProb\qquad \qquad &&=\min \biggl\{ \int \dro(\chi) \ : \ \chi \in \MesT, \ \DIV^2 \chi = f \biggr\} \qquad \qquad \end{alignat*} where the function $\rho$ and its polar $\dro$ are defined exactly as in Section \ref{sec:FMD_problem}, see \eqref{eq:jh_rho}. Above, the maximization problem is already given in its relaxed version, where (see \cite{bouchitte2007} for details) $\overline\V_1$ is the closure of the set $\V_1 = \bigl\{ u \in \D(\R^2)\, :\, \rho(\nabla^2 u) \leq 1 \ \text{ in }\Omega \bigr\}$ in the norm topology of $C^1(\Ob)$. Proposition \nolinebreak 6 in \cite{bouchitte2007} offers a characterization \begin{equation*} \overline\V_1 = \biggl\{ u \in W^{2,\infty}(\Omega) \ : \ \rho(\nabla^2 u) \leq 1 \ \text{ a.e. in } \Omega \biggr\}, \end{equation*} which tells us that the problem $\relProb$ above admits a solution $\hat{u}$ whose first derivative is Lipschitz continuous (note that no analogous characterization was available for the elasticity case). The second order equilibrium equation $\DIV^2 \chi = f$ in $\dProb$ renders the tensor valued measure $\chi$ a \textit{bending moment field}. It is clear that Section \ref{sec:anisotropy_at_point} on the point-wise maximization and minimization of the energy functions $j$ and $j^*$ respectively remains valid here, since the definitions of $j$, $\Hs$ and $c$ did not change.
Consequently Lemma \ref{lem:measurable_selection} on the existence of an optimal measurable Hooke tensor function $\hf$ still holds true. Eventually, with the Linear Constrained Problem defined for the pair $\relProb$ and $\dProb$ above, the analogue of Theorem \ref{thm:FMD_LCP} paves the way to constructing the solution of the (FMD) problem for Kirchhoff plates based on the solution of (LCP). Thereby we have sketched how Sections \ref{sec:FMD_problem}, \ref{sec:FMD_LCP} on the (FMD) theory for elasticity can be translated to the setting of Kirchhoff plates; of course the contribution \cite{bouchitte2007} played a key role. Section \ref{sec:optimality_conditions} on the optimality conditions could be adjusted as well, yet this would be more involved as it requires more insight into the theory of the $\mu$-intrinsic counterpart of the second order operator $\nabla^2$; the reader is referred to \cite{bouchitte2003} for details. \subsection{The Free Material Design problem for heat conductor (first order scalar problem)} \label{sec:FMD_for_heat_cond} In this section $\Om \subset \Rd$ is any bounded domain in $d$-dimensional space ($d$ may equal 2 or 3) with Lipschitz boundary. The heat inflow and the heat outflow shall be given by two positive, mutually singular Radon measures $f_+\in \Mes_+(\Ob)$ and $f_- \in \Mes_+(\Ob)$ respectively; we assume the measures to be of equal mass: $f_+(\Ob) = f_-(\Ob)$. Next, let the set $\mathscr{A}$ of admissible conductivity tensors be any closed convex cone contained in the set $\mathcal{S}^{d \times d}_+$ of symmetric positive semi-definite 2nd-order tensors. The constitutive law of conductivity will be determined by the energy $j_1:\mathscr{A} \times \Rd \rightarrow \R$ that for some $p \in (1,\infty)$ meets assumptions analogous to (H1)-(H5) for the function $j:\Hs \times \Sdd \rightarrow \R$.
The compliance or the potential energy of the conductor given by a tensor-valued measure $\alpha \in \Mes(\Ob;\mathscr{A})$ may be defined as \begin{equation} \label{eq:compliance_def_cond} \Comp(\alpha) = \sup \left\{ \int u \, df - \int j_1\bigl(\alpha,\nabla u\bigr) \ : \ u \in \D(\R^d) \right\} \end{equation} where we put $f = f_+ - f_- \in \Mes(\Ob)$; the function $u$ plays the role of the temperature field. The Free Material Design problem for the heat conductor may be readily posed: \begin{equation} \label{eq:FMD_heat_cond_def} \FMD \qquad \quad \Cmin = \min \biggl\{ \Comp(\alpha) \ : \ \alpha \in \Mes(\Ob;\mathscr{A}), \ \int \cost_1(\alpha) \leq \Totc \biggr\} \qquad \quad \end{equation} where the cost function $c_1$ is the restriction to $\mathscr{A}$ of any norm on the space of symmetric tensors $\Sdd$; for instance $c_1$ may be taken as $\tr$. Below we shall also use the shorter name: \textit{the scalar $\FMD$ problem}. Upon studying Sections \ref{sec:elasticity_problem}, \ref{sec:FMD_problem}, \ref{sec:FMD_LCP} we may observe that in the main results we did not make use of the structure of the space $\LSdd$ (being isomorphic to a subspace of 4-th order tensors), nor of the fact that $\Hs$ contained positive semi-definite tensors only. In fact $\LSdd$ could be replaced by any finite dimensional linear space $V$, and $\Hs$ by any closed convex cone $K \subset V$. In other words, the well-posedness of the $(\mathrm{FMD})$ problem stemmed from assumptions (H1)-(H5) alone and the set $\Hs \subset \LSdd$ was chosen merely to stay within the natural framework of elasticity. The other choice could be precisely $V = \Sdd$ and $K = \mathscr{A}$.
The argumentation for switching from a vectorial $u$ to a scalar one and from the operator $e$ to $\nabla$ runs similarly to the one outlined for Kirchhoff plates; in addition it is again not an issue that the second argument of the function $j_1$ lies in $\Rd$ instead of $\Sdd$ as in the case of $j$: it could just as well be any other finite dimensional linear space $W$. In summary, the conductivity framework presented above is well suited to the theory developed in this paper. We are now in a position to quickly run through the main results for the scalar $(\mathrm{FMD})$ problem. We start by analogous definitions of mutually polar gauges $\rho_1,\rho_1^0 : \Rd \rightarrow \R$: for any $v, q \in \Rd$ \begin{equation*} \frac{1}{p}\bigl(\rho_1(v) \bigr)^p = \max\limits_{\substack{A\in \mathscr{A} \\ c_1(A)\leq 1 }} j_1(A,v), \qquad \frac{1}{p'}\bigl(\rho_1^0(q) \bigr)^{p'} = \min\limits_{\substack{A\in \mathscr{A} \\ c_1(A)\leq 1 }} j^*_1(A,q), \end{equation*} while by $\bar{\mathscr{A}}_1(v)$ and $\ubar{\mathscr{A}}_1(q)$ we will denote the sets of, respectively, maximizers and minimizers above. The counterpart of Theorem \ref{thm:problem_P} for the scalar case furnishes the pair of mutually dual problems: \begin{alignat*}{2} &\relProb \qquad \qquad Z &&= \max \biggl\{ \int u \, df \ : \ u \in W^{1,\infty}(\Om), \ \rho_1(\nabla u) \leq 1 \ \text{ a.e. in } \Om \biggr\} \qquad \qquad\\ &\dProb\qquad \qquad &&=\min \biggl\{ \int \rho_1^0(\vartheta) \ : \ \vartheta \in \Mes(\Ob;\Rd), \ -\DIV\, \vartheta = f \biggr\}, \qquad \qquad \end{alignat*} where $\vartheta$ plays the role of the \textit{heat flux}. The problem $\relProb$ is already in its relaxed form, i.e. the set of admissible functions $u$ is the closure of the set $\bigl\{ u \in \D(\R^d) \, : \, \rho_1(\nabla u) \leq 1 \text{ in } \Om \bigr\}$ in the topology of uniform convergence.
Recall that the respective characterization via vector-valued functions $u \in W^{1,\infty}(\Om;\Rd)$ was not available for the $(\mathrm{FMD})$ problem in elasticity, see the comment below \eqref{eq:relProb}. Theorem \ref{thm:FMD_LCP} adjusted for the scalar setting states that the conductivity tensor field $\check{\alpha} \in \nolinebreak \Mes(\Ob;\mathscr{A})$ solves the scalar $\FMD$ problem if and only if it is of the form \begin{equation} \label{eq:FMD_LCP_scalar} \check{\alpha} = \check{A} \,\check\mu, \quad \check\mu = \frac{\Totc}{Z}\, \hat{\mu}, \quad \check{A} \in L^\infty_{\check{\mu}}(\Ob;\mathscr{A}) \text{ is any } \check{\mu} \text{-meas. selection of } x \mapsto \ubar{\mathscr{A}}_1\bigl( \hat{q}(x) \bigr) \end{equation} where $\hat{\mu} = \rho_1^0(\hat{\vartheta})$ and $\hat{q} = \frac{d \hat{\vartheta}}{d \hat{\mu}}$ for some solution $\hat\vartheta$ of the problem $\dProb$ above. Existence of the measurable selection referred to above follows from an adapted version of Lemma \ref{lem:measurable_selection}. In the sequel we shall consider the AMD version of the design problem along with the Fourier constitutive law, more precisely \begin{equation} \label{eq:scalar_AMD_setting} \mathscr{A} = \mathcal{S}_+^{d \times d}, \qquad j_1(A,v)= \frac{1}{2} \pairing{A\,v , v}, \qquad c_1(A) = \tr\,A. \end{equation} Following the argument in Example \ref{ex:rho_drho_AMD} on the AMD setting in the case of elasticity, for non-zero vectors $v,q \in \Rd$ we arrive at \begin{equation*} \rho_1 = \rho_1^0 = \abs{\argu}, \qquad \bar{\mathscr{A}}_1(v) = \left\{ \frac{v}{\abs{v}} \otimes \frac{v}{\abs{v}} \right\}, \qquad \ubar{\mathscr{A}}_1(q) = \left\{ \frac{q}{\abs{q}} \otimes \frac{q}{\abs{q}} \right\} \end{equation*} with $\abs{\argu}$ being the Euclidean norm on $\Rd$.
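The scalar AMD gauges above can be probed numerically; the following Python sketch (a sanity check outside the paper's development) confirms that the maximal energy $\frac{1}{2}\pairing{A\,v,v}$ over positive semi-definite $A$ with $\tr A \leq 1$ equals $\frac{1}{2}\abs{v}^2$, attained at $A = \frac{v}{\abs{v}} \otimes \frac{v}{\abs{v}}$; only rank-one competitors $w \otimes w$ with $\abs{w}=1$ are sampled, since every feasible $A$ is a convex combination of such extreme points:

```python
import random
from math import isclose

# Sanity check of the scalar AMD setting: j1(A, v) = (1/2)<A v, v>, c1 = tr.
random.seed(0)
d = 3
v = [random.uniform(-1, 1) for _ in range(d)]
nv = sum(x * x for x in v) ** 0.5
vhat = [x / nv for x in v]

energy = lambda A: 0.5 * sum(A[i][j] * v[i] * v[j] for i in range(d) for j in range(d))

A_opt = [[vhat[i] * vhat[j] for j in range(d)] for i in range(d)]  # vhat ⊗ vhat
best = energy(A_opt)  # should equal (1/2)|v|^2, i.e. rho1 = Euclidean norm

# random rank-one competitors A = w ⊗ w with |w| = 1 (PSD, trace one)
never_better = True
for _ in range(1000):
    w = [random.gauss(0, 1) for _ in range(d)]
    nw = sum(x * x for x in w) ** 0.5
    w = [x / nw for x in w]
    A = [[w[i] * w[j] for j in range(d)] for i in range(d)]
    never_better = never_better and energy(A) <= best + 1e-12

print(isclose(best, 0.5 * nv ** 2), never_better)
```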
Therefore, owing to \eqref{eq:FMD_LCP_scalar}, the tensor valued measure $\check{\alpha} \in \Mes(\Ob;\mathcal{S}^{d \times d}_+)$ is a solution of the scalar $\FMD$ problem in the AMD setting if and only if \begin{equation} \label{eq:alpha_theta} \check{\alpha} = \frac{\Totc}{Z} \biggl( \frac{d\hat{\vartheta}}{d\lvert\hat{\vartheta}\rvert} \otimes \frac{d\hat{\vartheta}}{d\lvert\hat{\vartheta}\rvert} \biggr) \lvert\hat{\vartheta}\rvert \end{equation} for some solution $\hat\vartheta$ of the problem $\dProb$ with $\rho_1^0 = \abs{\argu}$. An important feature of the optimal conductivity field readily follows: \begin{proposition} Let $\check{\alpha}$ be any solution of the scalar $\FMD$ problem in the AMD setting \eqref{eq:scalar_AMD_setting}. Then $\check{\alpha}$ is rank-one, namely $\frac{d \check{\alpha}}{d\abs{\check{\alpha}}}$ is a rank-one matrix $\abs{\check{\alpha}}$-a.e. \end{proposition} \begin{remark} Up to a multiplicative constant, the only isotropic gauge function $\rho$ on $\Rd$ is the Euclidean norm $\abs{\argu}$. We thus find that every "isotropic scalar $(\mathrm{FMD})$ problem" reduces to the pair $\relProb$, $\dProb$ above with $\rho =b \abs{\argu}$, $b>0$ (note that no similar conclusion was true for the vectorial $(\mathrm{FMD})$ problem, see Examples \ref{ex:rho_drho_FibMD}, \ref{ex:rho_drho_FibMD_plus_minus}, \ref{ex:rho_drho_IMD} where all the gauges $\rho$ are isotropic). For instance in \eqref{eq:scalar_AMD_setting} we could instead take $\mathscr{A} = \mathscr{A}_{iso}$, i.e. the set of all isotropic conductivity tensors, where of course $A \in \mathscr{A}_{iso}$ if and only if $A = a \,\mathrm{I}$ for some $a \geq 0$.
Then the scalar $(\mathrm{FMD})$ problem is equivalent to the Mass Optimization Problem from \cite{bouchitte2001} and, since in $\Rd$ we have $c_1(\mathrm{I})=\tr \,\mathrm{I} = d$, for any $v\in \Rd$ \begin{equation*} \frac{1}{2}\bigl(\rho_1(v)\bigr)^2 = \max\limits_{\substack{A\in \mathscr{A}_{iso} \\ c_1(A)\leq 1 }} j_1(A,v) = \max\limits_{\substack{a \geq 0 \\ d \cdot a \leq 1 }} \ \frac{1}{2}\pairing{a \, \mathrm{I}, v \otimes v} = \frac{1}{d} \ \frac{1}{2} \abs{v}^2 \end{equation*} yielding $\rho_1(v) = \frac{1}{\sqrt{d}} \abs{v}$ and $\rho_1^0(q) = \sqrt{d}\, \abs{q}$. In dimension $d=2$, provided that $f$ charges the boundary $\partial\Om$ only, the problem $\dProb$ can be reformulated as the Least Gradient Problem, see \cite{gorny2017}. Upon acknowledging this equivalence, a study of $\dProb$ for anisotropic functions $\rho_1^0$ can be found in \cite{gorny2018planar}. \end{remark} In the remainder of this section we shall assume that $\Om$ is convex, which, upon recalling that $\rho_1=\abs{\argu}$, allows rewriting the condition $\rho_1(\nabla u) \leq 1$ a.e. in $\Om$ as a constraint on the Lipschitz constant: $\mathrm{lip}(u) \leq 1$. Then the Kantorovich-Rubinstein theorem combined with a duality argument allows replacing the problem $\dProb$ with the \textit{Optimal Transport Problem} $(\mathrm{OTP})$, see \cite{villani2003topics}: \begin{alignat*}{2} Z &= \max \biggl\{ \int u \, d(f_+-f_-) \ : \ u \in C(\Ob), \ \ u(x)-u(y) \leq \abs{x-y} \ \ \forall\, (x,y)\in \Ob \times \Ob \biggr\}\\ &=\min \biggl\{ \int_{\Ob \times \Ob} \abs{x-y}\, \gamma(dx dy) \ : \ \gamma \in \Mes_+(\Ob \times \Ob), \ \begin{array}{rl} \pi_{\#,1} \gamma =& f_+,\\ \pi_{\#,2} \gamma =& f_- \end{array} \biggr\}; \qquad \quad (\mathrm{OTP}) \end{alignat*} in $(\mathrm{OTP})$ we enforce the left and the right marginals of the \textit{transportation plan} $\gamma$ to be $f_+$ and $f_-$ respectively.
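The duality between the Lipschitz maximization and $(\mathrm{OTP})$ can be illustrated on a minimal discrete example; the Python sketch below (an assumption-laden toy case with unit Dirac masses, far from the paper's general measures) finds the optimal matching cost by enumeration and exhibits a 1-Lipschitz potential attaining the same value:

```python
from itertools import permutations

# Toy illustration of (OTP) duality: f+ and f- are sums of unit Dirac masses
# in the plane, so an optimal transportation plan may be sought among
# permutation matchings (Monge form, valid here since all masses are equal).
sources = [(0.0, 0.0), (1.0, 0.0)]   # support of f+
sinks = [(4.0, 0.0), (5.0, 0.0)]     # support of f-

dist = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

# primal: minimize the total transport distance over matchings
Z = min(sum(dist(s, sinks[p[i]]) for i, s in enumerate(sources))
        for p in permutations(range(len(sinks))))

# dual: u(x) = -x1 is 1-Lipschitz and attains the same value of int u d(f+ - f-)
u = lambda p: -p[0]
dual = sum(u(s) for s in sources) - sum(u(t) for t in sinks)

print(Z, dual)  # duality: both equal 8.0
```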
Whenever $\hat\gamma$ is a solution of $(\mathrm{OTP})$, the measure $\hat{\vartheta}$ defined through its action on $v \in C(\Ob;\Rd)$ by \begin{equation} \label{eq:theta_gamma} \bigl(v;\hat{\vartheta} \bigr) := \int_{\Ob \times \Ob} \left( \int_{[x,y]} \pairing{v(z),\frac{x-y}{\abs{x-y}}} \, \Ha^1(dz) \right) \hat\gamma(dx dy) \end{equation} solves the problem $\dProb$ with $\rho_1^0=\abs{\argu}$. The passage from the problem $\Prob$ to the Optimal Transport Problem along with validation of the formula \eqref{eq:theta_gamma} may be found in \cite{bouchitte2001}. Upon plugging a solution $\hat\vartheta$ of the form \eqref{eq:theta_gamma} into formula \eqref{eq:alpha_theta}, however, it is not clear whether $\check{\alpha}$ enjoys a characterization of the type \eqref{eq:theta_gamma}. We conclude the paper with a result showing that there indeed exists an optimal tensor field $\check{\alpha}$ that decomposes into segments on which $\check{\alpha}$ is uni-axial (rank-one): \begin{theorem} For a convex bounded design domain $\Om\subset \Rd$ let $\hat{\gamma} \in \Mes_+(\Ob \times \Ob)$ denote any solution of $(\mathrm{OTP})$; then the conductivity tensor field $\check{\alpha} \in \Mes(\Ob;\mathcal{S}^{d \times d}_+)$ defined as a linear functional for any $M \in C(\Ob;\Sdd)$ by \begin{equation} \label{eq:alpha_gamma} \bigl(M;\check{\alpha}\bigr) = \frac{\Totc}{Z}\, \int_{\Ob \times \Ob} \left( \int_{[x,y]} \pairing{M(z),\frac{x-y}{\abs{x-y}} \otimes \frac{x-y}{\abs{x-y}}} \, \Ha^1(dz) \right) \hat\gamma(dx dy) \end{equation} is a solution of the scalar $(\mathrm{FMD})$ problem in the AMD setting \eqref{eq:scalar_AMD_setting}. \end{theorem} \begin{proof} First we must verify that $\check{\alpha}$ is a competitor for $(\mathrm{FMD})$. The functions on the space of symmetric tensors $g,g^0:\Sdd \rightarrow \R$ given by the formulas $g(M) = \max_i \abs{\lambda_i(M)}$ and $g^0(A) = \sum_i \abs{\lambda_i(A)}$ ($\lambda_i$ are the eigenvalues) are mutually dual norms.
We note that for any positive semi-definite $A$ we have $g^0(A) = \tr\,A = c_1(A)$. Based on \cite{bouchitte1988} we obtain \begin{align*} &\int c_1(\check{\alpha}) = \int g^0(\check{\alpha}) = \sup\biggl\{\int \pairing{M,\check{\alpha}} \ : \ M \in C(\Ob;\Sdd), \ g(M)\leq 1 \text{ in } \Omega \biggr\} \\ \leq& \frac{\Totc}{Z}\, \int_{\Ob \times \Ob} \left( \int_{[x,y]} g^0 \!\left(\frac{x-y}{\abs{x-y}} \otimes \frac{x-y}{\abs{x-y}} \right) \, \Ha^1(dz) \right) \hat\gamma(dx dy) = \frac{\Totc}{Z}\,\int_{\Ob \times \Ob} \abs{x-y}\, \hat\gamma(dx dy) = \Totc, \end{align*} where, for $g(M) \leq 1$, the inequality follows from \eqref{eq:alpha_gamma} together with the duality estimate $\pairing{M(z),\tau \otimes \tau} \leq g(M)\, g^0(\tau \otimes \tau)$ with $\tau = \frac{x-y}{\abs{x-y}}$, while the last equality uses the fact that $\hat{\gamma}$ solves $(\mathrm{OTP})$; this validates the feasibility of $\check{\alpha}$ in $(\mathrm{FMD})$. Let $\hat{\vartheta} \in \Mes(\Ob;\Rd)$ be given by formula \eqref{eq:theta_gamma}; since $\hat{\vartheta}$ solves the problem $\dProb$, in particular it satisfies $-\DIV \, \hat{\vartheta} = f$, or equivalently $\int u \,df = \int \langle\nabla u, \hat{\vartheta} \rangle$ for any $u \in \D(\Rd)$.
The compliance $\Comp(\check{\alpha})$ in \eqref{eq:compliance_def_cond} can thus be rewritten and then estimated as follows: \begin{align*} &\Comp(\check{\alpha}) = \sup \left\{ \int \langle\nabla u, \hat{\vartheta}\rangle - \frac{1}{2} \int \pairing{\check{\alpha},\nabla u \otimes \nabla u} \ : \ u \in \D(\R^d) \right\} \\ \leq & \sup \left\{ \int \langle v, \hat{\vartheta}\rangle - \frac{1}{2} \int \pairing{\check{\alpha},v \otimes v} \ : \ v \in \bigl(\D(\R^d) \bigr)^d \right\}\\ = & \sup \left\{ \int_{\Ob \times \Ob} \int_{[x,y]} \left( \pairing{v(z),\frac{x-y}{\abs{x-y}}} -\frac{\Totc}{2Z}\biggl(\pairing{v(z),\frac{x-y}{\abs{x-y}}}\biggr)^2 \right)\!\Ha^1(dz) \, \hat\gamma(dx dy) : v \in \bigl(\D(\R^d) \bigr)^d \right\}\\ \leq & \int_{\Ob \times \Ob} \left( \int_{[x,y]} \frac{Z}{2\Totc} \, \Ha^1(dz) \right) \hat\gamma(dx dy) = \frac{Z}{2\Totc} \,\int_{\Ob \times \Ob} \abs{x-y}\, \hat\gamma(dx dy) = \frac{Z^2}{2\Totc}, \end{align*} where in the last inequality we substituted $t:= \langle v(z),\frac{x-y}{\abs{x-y}} \rangle$ and used the fact that $\sup_{t\in \R} \bigl\{t- \frac{\Totc}{2 Z}\, t^2\bigr\} = \frac{Z}{2\Totc}$ (the maximum being attained at $t = Z/\Totc$). The last term in the chain above, $\frac{Z^2}{2\Totc}$, is precisely the minimum value of compliance $\Cmin$ (see Theorem \ref{thm:problem_P} for the vectorial case), which proves that $\check{\alpha}$ is a solution of the scalar $(\mathrm{FMD})$ problem. \end{proof} \begin{comment} The argument that runs from \eqref{eq:linearity_c} to \eqref{eq:Hch_Michell} uses only the linear structure of $c$ and $j(\argu,\xi)$, and therefore it can be generalized as follows: \begin{proposition} Let us choose a closed (but not necessarily convex) cone $\Hnc$ that is a subcone of $\Hf \subset \LSdd$, and then let us fix $\Hs = \mathrm{conv}(K)$, i.e. we choose the set of admissible Hooke tensors as the closed convex hull of $K$. Let $\cost:\Hs \rightarrow \R_+$ and $j(\argu,\xi):\Hs \rightarrow \R_+$ be linear for each $\xi \in \Sdd$.
We assume that all the conditions (H\ref{as:convex}) -- (H\ref{as:elip}) are met. Then, for any $\xi \in \Sdd$ \begin{equation} \frac{1}{p}\bigl(\ro(\xi) \bigr)^p = \jh(\xi)= \max\limits_{\hk \in \Hc} j(\hk,\xi) = \max\limits_{\substack{\hk \in \Hnc \\ \cost(\hk) \leq 1}} j(\hk,\xi) \end{equation} and \begin{equation} \Hch(\xi) = \mathrm{conv}\bigl( \hat{K}_1(\xi) \bigr) \qquad \text{with} \qquad \hat{K}_1(\xi) = \left\{ \hk \in \Hnc \ : \cost(\hk)\leq 1,\ j(\hk,\xi)=\jh(\xi) \right\} . \end{equation} \end{proposition} The proof of this proposition is fully analogous to the argument used in the example, hence we shall not repeat it here. By acknowledging point (iii) of Theorem \ref{thm:rho_drho} one simply arrives at a corollary that allows one to examine the well-posedness of the Free Material Problem with the Hooke tensors limited to symmetry classes that generate non-convex cones $K$ (see \textcolor{red}{(hopefully...)} Example \ref{ex:CMD} on the Cubic-symmetric Material Design problem): \begin{corollary} If for a non-zero $\sig \in \Sdd$ there exists $\xi_\sig$ such that the extremality condition $\pairing{\xi_\sig,\sig} = \ro(\xi_\sig) \, \dro(\sig)$ holds with $\hat{K}_1(\xi_\sig)$ being a convex set, then \begin{equation*} \Hcc(\sig) \subset \hat{K}_1(\xi_\sig) \subset K \qquad \text{and} \qquad \frac{1}{p'}\bigl(\dro(\xi) \bigr)^{p'} = \jc(\sig)= \min\limits_{\hk \in \Hc} j^*(\hk,\sig) = \min\limits_{\substack{\hk \in \Hnc \\ \cost(\hk) \leq 1}} j^*(\hk,\sig). \end{equation*} In particular, if such $\xi_\sig$ can be found for each non-zero $\sig\in \Sdd$ then the point-wise design of the anisotropy in $\Hs$ can be reduced to the design of the anisotropy in $K$.
\end{corollary} \end{comment} \bigskip \footnotesize \noindent\textbf{Acknowledgments.} The authors would like to thank the National Science Centre (Poland) for the financial support: the first author would like to acknowledge the Research Grant no.~2015/19/N/ST8/00474 (Poland), entitled ``Topology optimization of thin elastic shells -- a method synthesizing shape and free material design''; the second author would like to acknowledge the Research Grant no.~2019/33/B/ST8/00325, entitled ``Merging the optimum design problems of structural topology and of the optimal choice of material characteristics. The theoretical foundations and numerical methods''.
\section{Introduction} \file{elsarticle.cls} is a thoroughly re-written document class for formatting \LaTeX{} submissions to Elsevier journals. The class uses the environments and commands defined in the \LaTeX{} kernel without any change in the signature so that clashes with other contributed \LaTeX{} packages such as \file{hyperref.sty}, \file{preview-latex.sty}, etc., will be minimal.
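To fix ideas, a minimal document skeleton using the class might look as follows (the title, author, affiliation and bibliography file names are of course placeholders):
\begin{vquote}
\documentclass[preprint,12pt]{elsarticle}
\begin{document}
\begin{frontmatter}
\title{A specimen title}
\author{A.~N.~Author}
\address{Some Institute, Some City}
\begin{abstract}
A short abstract.
\end{abstract}
\begin{keyword}
first keyword \sep second keyword
\end{keyword}
\end{frontmatter}
\section{Introduction}
Body text with a citation \cite{key}.
\bibliographystyle{elsarticle-num}
\bibliography{refs}
\end{document}
\end{vquote}
The individual frontmatter commands used above are described in detail in the following sections.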
\file{elsarticle.cls} is primarily built upon the default \file{article.cls}. This class depends on the following packages for its proper functioning: \begin{enumerate} \item \file{pifont.sty} for openstar in the title footnotes; \item \file{natbib.sty} for citation processing; \item \file{geometry.sty} for margin settings; \item \file{fleqn.clo} for left aligned equations; \item \file{graphicx.sty} for graphics inclusion; \item \file{txfonts.sty} optional font package, if the document is to be formatted with Times and compatible math fonts; \item \file{hyperref.sty} optional packages if hyperlinking is required in the document. \end{enumerate} All the above packages are part of any standard \LaTeX{} installation. Therefore, the users need not be bothered about downloading any extra packages. Furthermore, users are free to make use of \textsc{ams} math packages such as \file{amsmath.sty}, \file{amsthm.sty}, \file{amssymb.sty}, \file{amsfonts.sty}, etc., if they want to. All these packages work in tandem with \file{elsarticle.cls} without any problems. \section{Major Differences} Following are the major differences between \file{elsarticle.cls} and its predecessor package, \file{elsart.cls}: \begin{enumerate}[\textbullet] \item \file{elsarticle.cls} is built upon \file{article.cls} while \file{elsart.cls} is not. 
\file{elsart.cls} redefines many of the commands in the \LaTeX{} classes/kernel, which can possibly cause surprising clashes with other contributed \LaTeX{} packages; \item provides preprint document formatting by default, and optionally formats the document as per the final style of models $1+$, $3+$ and $5+$ of Elsevier journals; \item some easier ways for formatting \verb+list+ and \verb+theorem+ environments are provided, while the \file{amsthm.sty} package can still be used; \item \file{natbib.sty} is the main citation processing package which can comprehensively handle all kinds of citations and works perfectly with \file{hyperref.sty} in combination with \file{hypernat.sty}; \item long title pages are processed correctly in preprint and final formats. \end{enumerate} \section{Installation} The package is available at the author resources page at Elsevier (\url{http://www.elsevier.com/locate/latex}). It can also be found in any of the nodes of the Comprehensive \TeX{} Archive Network (\textsc{ctan}), one of the primary nodes being \url{http://www.ctan.org/tex-archive/macros/latex/contrib/elsevier/}. Please download \file{elsarticle.dtx}, which is a composite class with documentation, and \file{elsarticle.ins}, which is the \LaTeX{} installer file. Compiling \file{elsarticle.ins} with \LaTeX{} produces the class file \file{elsarticle.cls} by stripping off all the documentation from the \verb+*.dtx+ file. The class may be moved or copied to a place, usually \verb+$TEXMF/tex/latex/elsevier/+, or a folder which will be read by \LaTeX{} during document compilation. The \TeX{} file database needs to be updated after moving/copying the class file. Usually, we use commands like \verb+mktexlsr+ or \verb+texhash+ depending on the distribution and operating system.
\section{Usage}\label{sec:usage} The class should be loaded with the command: \begin{vquote} \documentclass[<options>]{elsarticle} \end{vquote} \noindent where the \verb+options+ can be the following: \begin{description} \item [{\tt\color{verbcolor} preprint}] the default option, which formats the document for submission to Elsevier journals. \item [{\tt\color{verbcolor} review}] similar to the \verb+preprint+ option, but increases the baselineskip to facilitate an easier review process. \item [{\tt\color{verbcolor} 1p}] formats the article to the look and feel of the final format of model 1+ journals. This is always single column style. \item [{\tt\color{verbcolor} 3p}] formats the article to the look and feel of the final format of model 3+ journals. If the journal is a two column model, use the \verb+twocolumn+ option in combination. \item [{\tt\color{verbcolor} 5p}] formats for model 5+ journals. This is always two column style. \item [{\tt\color{verbcolor} authoryear}] author-year citation style of \file{natbib.sty}. If you want to add extra options of \file{natbib.sty}, you may use the options as comma delimited strings as arguments to the \verb+\biboptions+ command. An example would be: \end{description} \begin{vquote} \biboptions{longnamesfirst,angle,semicolon} \end{vquote} \begin{description} \item [{\tt\color{verbcolor} number}] numbered citation style. Extra options can be loaded with\linebreak the \verb+\biboptions+ command. \item [{\tt\color{verbcolor} sort\&compress}] sorts and compresses the numbered citations. For example, citation [1,2,3] will become [1--3]. \item [{\tt\color{verbcolor} longtitle}] if the front matter is unusually long, use this option to split the title page across pages with the correct placement of title and author footnotes on the first page. \item [{\tt\color{verbcolor} times}] loads \file{txfonts.sty}, if available in the system, to use Times and compatible math fonts. \item[] All options of \file{article.cls} can be used with this document class.
\item[] The default options loaded are \verb+a4paper+, \verb+10pt+, \verb+oneside+, \verb+onecolumn+ and \verb+preprint+. \end{description} \section{Frontmatter} There are two types of frontmatter coding: \begin{enumerate}[(1)] \item each author is connected to an affiliation with a footnote marker; hence all authors are grouped together and affiliations follow; \item authors of same affiliations are grouped together and the relevant affiliation follows this group. An example coding of the first type is provided below. \end{enumerate} \begin{vquote} \title{This is a specimen title\tnoteref{t1,t2}} \tnotetext[t1]{This document is a collaborative effort.} \tnotetext[t2]{The second title footnote which is a longer longer than the first one and with an intention to fill in up more than one line while formatting.} \end{vquote} \begin{vquote} \author[rvt]{C.V.~Radhakrishnan\corref{cor1}\fnref{fn1}} \ead{cvr@river-valley.com} \author[rvt,focal]{K.~Bazargan\fnref{fn2}} \ead{kaveh@river-valley.com} \author[els]{S.~Pepping\corref{cor2}\fnref{fn1,fn3}} \ead[url]{http://www.elsevier.com} \end{vquote} \begin{vquote} \cortext[cor1]{Corresponding author} \cortext[cor2]{Principal corresponding author} \fntext[fn1]{This is the specimen author footnote.} \fntext[fn2]{Another author footnote, but a little more longer.} \fntext[fn3]{Yet another author footnote. Indeed, you can have any number of author footnotes.} \address[rvt]{River Valley Technologies, SJP Building, Cotton Hills, Trivandrum, Kerala, India 695014} \address[focal]{River Valley Technologies, 9, Browns Court, Kennford, Exeter, United Kingdom} \address[els]{Central Application Management, Elsevier, Radarweg 29, 1043 NX\\ Amsterdam, Netherlands} \end{vquote} The output of the above TeX source is given in Clips~\ref{clip1} and \ref{clip2}. The header portion or title area is given in Clip~\ref{clip1} and the footer area is given in Clip~\ref{clip2}. 
\vspace*{6pt} \deforange{blue!70} \src{Header of the title page.} \includeclip{1}{132 571 481 690}{els1.pdf} \deforange{orange} \deforange{blue!70} \src{Footer of the title page.} \includeclip{1}{122 129 481 237}{els1.pdf} \deforange{orange} \pagebreak Most of the commands such as \verb+\title+, \verb+\author+, \verb+\address+ are self explanatory. Various components are linked to each other by a label--reference mechanism; for instance, title footnote is linked to the title with a footnote mark generated by referring to the \verb+\label+ string of the \verb=\tnotetext=. We have used similar commands such as \verb=\tnoteref= (to link title note to title); \verb=\corref= (to link corresponding author text to corresponding author); \verb=\fnref= (to link footnote text to the relevant author names). \TeX{} needs two compilations to resolve the footnote marks in the preamble part. Given below are the syntax of various note marks and note texts. \begin{vquote} \tnoteref{<label(s)>} \corref{<label(s)>} \fnref{<label(s)>} \tnotetext[<label>]{<title note text>} \cortext[<label>]{<corresponding author note text>} \fntext[<label>]{<author footnote text>} \end{vquote} \noindent where \verb=<label(s)>= can be either one or more comma delimited label strings. The optional arguments to the \verb=\author= command holds the ref label(s) of the address(es) to which the author is affiliated while each \verb=\address= command can have an optional argument of a label. In the same manner, \verb=\tnotetext=, \verb=\fntext=, \verb=\cortext= will have optional arguments as their respective labels and note text as their mandatory argument. The following example code provides the markup of the second type of author-affiliation. 
\begin{vquote} \author{C.V.~Radhakrishnan\corref{cor1}\fnref{fn1}} \ead{cvr@river-valley.com} \address{River Valley Technologies, SJP Building, Cotton Hills, Trivandrum, Kerala, India 695014} \end{vquote} \begin{vquote} \author{K.~Bazargan\fnref{fn2}} \ead{kaveh@river-valley.com} \address{River Valley Technologies, 9, Browns Court, Kennford, Exeter, UK.} \end{vquote} \begin{vquote} \author{S.~Pepping\fnref{fn1,fn3}} \ead[url]{http://www.elsevier.com} \address{Central Application Management, Elsevier, Radarweg 43, 1043 NX Amsterdam, Netherlands} \end{vquote} \begin{vquote} \cortext[cor1]{Corresponding author} \fntext[fn1]{This is the first author footnote.} \fntext[fn2]{Another author footnote, this is a very long footnote and it should be a really long footnote. But this footnote is not yet sufficiently long enough to make two lines of footnote text.} \fntext[fn3]{Yet another author footnote.} \end{vquote} The output of the above TeX source is given in Clip~\ref{clip3}. \vspace*{12pt} \deforange{blue!70} \src{Header of the title page.} \includeclip{1}{132 491 481 690}{els2.pdf} \deforange{orange} The frontmatter part has further environments such as abstracts and keywords. These can be marked up in the following manner: \begin{vquote} \begin{abstract} In this work we demonstrate the formation of a new type of polariton on the interface between a .... \end{abstract} \end{vquote} \begin{vquote} \begin{keyword} quadruple exiton \sep polariton \sep WGM \PACS 71.35.-y \sep 71.35.Lk \sep 71.36.+c \end{keyword} \end{vquote} \noindent Each keyword shall be separated by a \verb+\sep+ command. \textsc{pacs} and \textsc{msc} classifications shall be provided in the keyword environment with the commands \verb+\PACS+ and \verb+\MSC+ respectively. \verb+\MSC+ accepts an optional argument to accommodate future revisions, e.g., \verb=\MSC[2008]=.
The default is 2000.\looseness=-1 \section{Floats} {Figures} may be included using the command \verb+\includegraphics+, with or without its several options, to further control the graphic. \verb+\includegraphics+ is provided by \file{graphic[s,x].sty} which is part of any standard \LaTeX{} distribution. \file{graphicx.sty} is loaded by default. \LaTeX{} accepts figures in the postscript format while pdf\LaTeX{} accepts \file{*.pdf}, \file{*.mps} (metapost), \file{*.jpg} and \file{*.png} formats. pdf\LaTeX{} does not accept graphic files in the postscript format. The \verb+table+ environment is handy for marking up tabular material. If users want to use \file{multirow.sty}, \file{array.sty}, etc., to fine control/enhance the tables, they are welcome to load any package of their choice and \file{elsarticle.cls} will work in combination with all loaded packages. \section[Theorem and ...]{Theorem and theorem-like environments} \file{elsarticle.cls} provides a few shortcuts to format theorems and theorem-like environments with ease. In all commands the options that are used with the \verb+\newtheorem+ command will work exactly in the same manner. \file{elsarticle.cls} provides three commands to format theorem or theorem-like environments: \begin{vquote} \newtheorem{thm}{Theorem} \newtheorem{lem}[thm]{Lemma} \newdefinition{rmk}{Remark} \newproof{pf}{Proof} \newproof{pot}{Proof of Theorem \ref{thm2}} \end{vquote} The \verb+\newtheorem+ command formats a theorem in \LaTeX's default style with italicized font, bold font for the theorem heading and the theorem number at the right-hand side of the theorem heading. It also optionally accepts an argument which will be printed as an extra heading in parentheses. \begin{vquote} \begin{thm} For system (8), consensus can be achieved with $\|T_{\omega z}$ ... \begin{eqnarray}\label{10} ....
\end{eqnarray} \end{thm} \end{vquote} Clip~\ref{clip4} will show you how some text enclosed between the above code looks like: \vspace*{6pt} \deforange{blue!70} \src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newtheorem}} \includeclip{2}{1 1 453 120}{jfigs.pdf} \deforange{orange} The \verb+\newdefinition+ command is the same in all respects as its\linebreak \verb+\newtheorem+ counterpart except that the font shape is roman instead of italic. Both \verb+\newdefinition+ and \verb+\newtheorem+ commands automatically define counters for the environments defined. \vspace*{12pt} \deforange{blue!70} \src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newdefinition}} \includeclip{1}{1 1 453 105}{jfigs.pdf} \deforange{orange} The \verb+\newproof+ command defines proof environments with upright font shape. No counters are defined. \vspace*{6pt} \deforange{blue!70} \src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newproof}} \includeclip{3}{1 1 453 65}{jfigs.pdf} \deforange{orange} Users can also make use of \verb+amsthm.sty+ which will override all the default definitions described above. \section[Enumerated ...]{Enumerated and Itemized Lists} \file{elsarticle.cls} provides an extended list processing macros which makes the usage a bit more user friendly than the default \LaTeX{} list macros. With an optional argument to the \verb+\begin{enumerate}+ command, you can change the list counter type and its attributes. \begin{vquote} \begin{enumerate}[1.] \item The enumerate environment starts with an optional argument `1.', so that the item counter will be suffixed by a period. \item You can use `a)' for alphabetical counter and '(i)' for roman counter. \begin{enumerate}[a)] \item Another level of list with alphabetical counter. \item One more item before we start another. \begin{enumerate}[(i)] \item This item has roman numeral counter. \item Another one before we close the third level. \end{enumerate} \item Third item in second level. 
\end{enumerate} \item All list items conclude with this step. \end{enumerate} \end{vquote} \vspace*{12pt} \deforange{blue!70} \src{List -- Enumerate} \includeclip{4}{1 1 453 185}{jfigs.pdf} \deforange{orange} Further, the enhanced list environment allows one to prefix a string like `step' to all the item numbers. Take a look at the example below: \begin{vquote} \begin{enumerate}[Step 1.] \item This is the first step of the example list. \item Obviously this is the second step. \item The final step to wind up this example. \end{enumerate} \end{vquote} \deforange{blue!70} \src{List -- enhanced} \includeclip{5}{1 1 313 83}{jfigs.pdf} \deforange{orange} \vspace*{-18pt} \section{Cross-references} In electronic publications, articles may be internally hyperlinked. Hyperlinks are generated from proper cross-references in the article. For example, the words \textcolor{black!80}{Fig.~1} will never be more than simple text, whereas the proper cross-reference \verb+\ref{tiger}+ may be turned into a hyperlink to the figure itself: \textcolor{blue}{Fig.~1}. In the same way, the words \textcolor{blue}{Ref.~[1]} will fail to turn into a hyperlink; the proper cross-reference is \verb+\cite{Knuth96}+. Cross-referencing is possible in \LaTeX{} for sections, subsections, formulae, figures, tables, and literature references. \section[Mathematical ...]{Mathematical symbols and formulae} Many physical/mathematical sciences authors require more mathematical symbols than the few that are provided in standard \LaTeX. A useful package for additional symbols is the \file{amssymb} package, developed by the American Mathematical Society. This package includes such oft-used symbols as $\lesssim$ (\verb+\lesssim+), $\gtrsim$ (\verb+\gtrsim+) or $\hbar$ (\verb+\hbar+). Note that your \TeX{} system should have the \file{msam} and \file{msbm} fonts installed. If you need only a few symbols, such as $\Box$ (\verb+\Box+), you might try the package \file{latexsym}. 
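For instance, a short document along the following lines (a sketch using only the symbol commands mentioned above) makes the extra symbols available:
\begin{vquote}
\documentclass{elsarticle}
\usepackage{amssymb}
\begin{document}
For weak coupling one finds $a \lesssim b$,
while $c \gtrsim d$ and $E = \hbar \omega$.
\end{document}
\end{vquote}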
Another point requiring authors' attention is the breaking up of long equations. When you use \file{elsarticle.cls} for formatting your submissions in the \verb+preprint+ mode, the document is formatted in single column style with a text width of 384pt or 5.3in. When this document is formatted for final print and if the journal happens to be a double column journal, the text width will be reduced to 224pt for $3+$ double column journals and correspondingly for $5+$ journals. All the nifty fine-tuning in equation breaking done by the author goes to waste in such cases. Therefore, authors are requested to check this problem by typesetting their submissions in the final format as well, just to see if their equations are broken at appropriate places, by changing appropriate options in the document class loading command, which is explained in section~\ref{sec:usage}, \nameref{sec:usage}. This allows authors to fix any equation breaking problem before submission for publication. \file{elsarticle.cls} supports formatting the author submission in different types of final format. This is further discussed in section \ref{sec:final}, \nameref{sec:final}. \section{Bibliography} Three bibliographic style files (\verb+*.bst+) are provided --- \file{elsarticle-num.bst}, \file{elsarticle-num-names.bst} and \file{elsarticle-harv.bst} --- the first one for the numbered scheme, the second for the numbered scheme with the new options of \file{natbib.sty} and the last one for the author-year scheme. In \LaTeX{} literature, references are listed in the \verb+thebibliography+ environment. Each reference is a \verb+\bibitem+ and each \verb+\bibitem+ is identified by a label, by which it can be cited in the text: \verb+\bibitem[Elson et al.(1996)]{ESG96}+ is cited as \verb+\citet{ESG96}+. \noindent In connection with cross-referencing and possible future hyperlinking it is not a good idea to collect more than one literature item in one \verb+\bibitem+.
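A schematic bibliography entry in the author-year scheme could thus be marked up as below (the entry body is a placeholder, not a real reference; only the label follows the \verb+\bibitem[Elson et al.(1996)]{ESG96}+ example quoted above):
\begin{vquote}
\begin{thebibliography}{}
\bibitem[Elson et al.(1996)]{ESG96}
Elson, R., et al., 1996. Title of the article.
Journal Name 1, 1--10.
\end{thebibliography}
\end{vquote}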
The so-called Harvard or author-year style of referencing is enabled by the \LaTeX{} package \file{natbib}. With this package the literature can be cited as follows: \begin{enumerate}[\textbullet] \item Parenthetical: \verb+\citep{WB96}+ produces (Wettig \& Brown, 1996). \item Textual: \verb+\citet{ESG96}+ produces Elson et al. (1996). \item An affix and part of a reference: \verb+\citep[e.g.][Ch. 2]{Gea97}+ produces (e.g. Governato et al., 1997, Ch. 2). \end{enumerate} In the numbered scheme of citation, \verb+\cite{<label>}+ is used, since \verb+\citep+ or \verb+\citet+ has no relevance in the numbered scheme. \file{natbib} package is loaded by \file{elsarticle} with \verb+numbers+ as default option. You can change this to author-year or harvard scheme by adding option \verb+authoryear+ in the class loading command. If you want to use more options of the \file{natbib} package, you can do so with the \verb+\biboptions+ command, which is described in the section \ref{sec:usage}, \nameref{sec:usage}. For details of various options of the \file{natbib} package, please take a look at the \file{natbib} documentation, which is part of any standard \LaTeX{} installation. \subsection*{Displayed equations and double column journals} Many Elsevier journals print their text in two columns. Since the preprint layout uses a larger line width than such columns, the formulae are too wide for the line width in print. Here is an example of an equation (see equation 6) which is perfect in a single column preprint format: \bigskip \setlength\Sep{6pt} \src{See equation (6)} \deforange{blue!70} \includeclip{4}{134 391 483 584}{els1.pdf} \deforange{orange} \noindent When this document is typeset for publication in a model 3+ journal with double columns, the equation will overlap the second column text matter if the equation is not broken at the appropriate location. 
\vspace*{6pt} \deforange{blue!70} \src{See equation (6) overprints into second column} \includeclip{3}{61 531 532 734}{els-3pd.pdf} \deforange{orange} \pagebreak \noindent The typesetter will try to break the equation which need not necessarily be to the liking of the author or as it happens, typesetter's break point may be semantically incorrect. Therefore, authors may check their submissions for the incidence of such long equations and break the equations at the correct places so that the final typeset copy will be as they wish. \section{Final print}\label{sec:final} The authors can format their submission to the page size and margins of their preferred journal. \file{elsarticle} provides four class options for the same. But it does not mean that using these options you can emulate the exact page layout of the final print copy. \lmrgn=3em \begin{description} \item [\texttt{1p}:] $1+$ journals with a text area of 384pt $\times$ 562pt or 13.5cm $\times$ 19.75cm or 5.3in $\times$ 7.78in, single column style only. \item [\texttt{3p}:] $3+$ journals with a text area of 468pt $\times$ 622pt or 16.45cm $\times$ 21.9cm or 6.5in $\times$ 8.6in, single column style. \item [\texttt{twocolumn}:] should be used along with 3p option if the journal is $3+$ with the same text area as above, but double column style. \item [\texttt{5p}:] $5+$ with text area of 522pt $\times$ 682pt or 18.35cm $\times$ 24cm or 7.22in $\times$ 9.45in, double column style only. \end{description} Following pages have the clippings of different parts of the title page of different journal models typeset in final format. Model $1+$ and $3+$ will have the same look and feel in the typeset copy when presented in this document. That is also the case with the double column $3+$ and $5+$ journal article pages. The only difference will be wider text width of higher models. 
\end{document}
\newcommand{\norm}[1]{\left\lVert#1\right\rVert} \usepackage{textgreek} \DeclareMathOperator*{\argmax}{arg\,max} \usepackage{siunitx} \sisetup{scientific-notation = true} \begin{document} \title{Uncertainty-driven ensembles of deep architectures for multiclass classification. Application to COVID-19 diagnosis in chest X-Ray images} \def\@name{ \emph{Juan E. Arco$^{1,*}$\thanks{\textsuperscript{*} Corresponding author: jearco@ugr.es}}, \emph{Andr\'es Ortiz$^{2}$}, \emph{Javier Ram\'irez$^{1}$}, \emph{Francisco J. Mart\'inez-Murcia$^{2}$}, \\ \emph{Yu-Dong Zhang$^{3}$}, \emph{Juan M. G\'orriz$^{1}$}} \address{\normalsize $^{1}$ Department of Signal Theory, Networking and Communications, Universidad de Granada\\ \normalsize $^{2}$ Department of Signal Theory, Networking and Communications, Universidad de Malaga \\ \normalsize $^{3}$ School of Informatics, University of Leicester, Leicester, LE1 7RH, Leicestershire, UK \\ } \maketitle \begin{abstract} Respiratory diseases kill millions of people each year. The diagnosis of these pathologies is a manual, time-consuming process with inter- and intra-observer variability that delays diagnosis and treatment. The recent COVID-19 pandemic has demonstrated the need to develop systems that automate the diagnosis of pneumonia, and Convolutional Neural Networks (CNNs) have proved to be an excellent option for the automatic classification of medical images. However, given the need to provide confident classifications in this context, it is crucial to quantify the reliability of the model's predictions. In this work, we propose a multi-level ensemble classification system based on a Bayesian Deep Learning approach in order to maximize performance while quantifying the uncertainty of each classification decision.
This tool combines the information extracted from different architectures by weighting their results according to the uncertainty of their predictions. The performance of the Bayesian network is evaluated in a real scenario in which four different pathologies are simultaneously differentiated: control \textit{vs} bacterial pneumonia \textit{vs} viral pneumonia \textit{vs} COVID-19 pneumonia. A three-level decision tree is employed to divide the 4-class classification into three binary classifications, yielding an accuracy of 98.06\% and surpassing the results reported in the recent literature. The reduced preprocessing needed to obtain this high performance, in addition to the information provided about the reliability of the predictions, evidences the applicability of the system as an aid for clinicians. \end{abstract} \begin{keywords} Pneumonia; COVID-19; Bayesian Deep Learning; Uncertainty; Ensemble classification. \end{keywords} \section{Introduction} \label{sec:intro} Respiratory illness is the most common cause of death and disability in the world. According to the World Health Organization (WHO), tuberculosis kills 1.4 million people each year, whereas pneumonia is a leading cause of death among children under 5 years old \citep{who1}. Although the rate of pneumonia is decreasing worldwide \citep{who2}, an annual fatality rate of approximately 4 million is still observed. This disease is a form of acute respiratory infection that affects the lungs and, depending on the infectious pathogen, it can be bacterial, viral or fungal \citep{neu1}. Doctors can identify the presence of pneumonia from a wide range of medical imaging modalities such as computed tomography (CT) \citep{ct}, chest X-ray (CXR) \citep{x-ray} or magnetic resonance imaging (MRI) \citep{mri}. The quality improvements in X-ray imaging and its low cost have popularized the use of CXR as a diagnostic tool for pneumonia.
However, this is not a straightforward task, and success in pneumonia detection depends on many factors. One of the most important is that diagnosis still depends largely on the expertise of the radiologist \citep{chandra2020}. The pathology associated with pneumonia often overlaps with other abnormal conditions of the lungs. Besides, the complex and vague anatomical structures in the lung fields can also affect the expert's opinion \citep{maduskar2016}. This leads to a manual, time-consuming process with inter- and intra-observer variability, which may delay diagnosis and treatment. The use of image processing methods along with machine learning algorithms designed to find disease-related patterns plays a decisive role in improving diagnostic accuracy. Previous works have employed machine learning (ML) algorithms for the automatic detection of a wide range of pathologies such as Parkinson's or Alzheimer's disease (\citealp{gorrizjm2020artificial,ZHANG2020149,castillo2018}), and most recently, pneumonia \citep{zhang2020_1,WANG2021208,chandra2020,elaziz2020}. In this direction, computer-aided diagnosis (CAD) systems can be an excellent tool for overcoming the weaknesses of current procedures for detecting pneumonia. In fact, they can assist radiologists by reducing their workload, serving as a B-reader in diagnosis and reducing the variability across doctors. Classification systems employed in CAD tools have the following general structure: i) delimitation of the regions of interest (ROI) to focus the analysis on them, ii) feature extraction from these regions, and iii) classification based on those features \citep{xu2006,jaeger2014,caixia2020}. Since pneumonia affects the lungs, it seems obvious that the ROI must delimit the shape and boundaries of the lungs \citep{van2001,hogeweg2015}. Several studies have provided different methods for lung segmentation \citep{akhila2017,candemir2014,munirah2015,guan2020,yang2018,vajda2018,donia2013}.
\cite{munirah2015} proposed an unsupervised approach based on Gaussian derivative filters and Fuzzy C-Means clustering. This method demonstrated not only good performance (an accuracy of 0.9) but also robustness and speed. Regardless of how features are computed, they are then classified using a specific algorithm. Previous studies have successfully employed a wide range of classifiers for the detection of pulmonary diseases, such as Decision Trees (DT, \citealp{porcel2008,zhang2020}), Na\"{i}ve Bayes (NB, \citealp{chapman2011,ma2015}), or \textit{k}-Nearest Neighbors (KNN, \citealp{ajin2017,chen2015}). However, the literature has shown that the Support Vector Machine (SVM) \citep{yahyaoui2018,pan2018} usually outperforms the other algorithms \citep{uppaluri1999,chandra2020}. Unlike classical methods based on the extraction of predefined features, deep neural networks build a specific feature space for optimal class separation by means of a learning process. The emergence of these approaches has revolutionized the automatic classification of medical images. Recently, a number of studies have demonstrated the high flexibility and performance that this approach provides \citep{wang2017,varshni2019,kermany2018,mittal2020}. \cite{rajpurkar2017} proposed a 121-layer convolutional neural network (CNN) to identify pneumonia and localize the areas most indicative of this pathology. The algorithm provided a relatively low accuracy (76.8\%), but it was able to distinguish between 14 different pathologies. Other works have utilized transfer learning on the ImageNet dataset, yielding accuracies of 82\%, 87\% and 92\% for the Xception, VGG16 and VGG19 models, respectively \citep{abiyev2018}. It is clear that deep learning models can effectively identify the presence of a certain pathology. However, there are some scenarios where they take a decision (i.e.
whether a patient suffers from pneumonia or not) even though they do not know the answer, since the classification outcome relies only on the most activated neuron of the output layer. \cite{kendall2017uncertainties} demonstrated the need to evaluate the uncertainty of a model's predictions in order to improve the decisions of the system. Bayesian deep learning models offer a practical solution for understanding the uncertainty of the decisions of a deep learning model \citep{ygal}. Specifically, they model a combination of aleatoric and epistemic uncertainty in order to increase the robustness of the loss to noisy data, which usually leads to a boost in performance \citep{kendall2017uncertainties}. Most importantly, the additional information related to the reliability of the classification results makes this alternative quite interesting in situations where the consequences of an error could be critical. The recent COVID-19 pandemic has demonstrated the need to develop systems that automate the diagnosis of pneumonia. It seems clear that a wrong diagnosis can have a dramatic effect on a patient's health. In this work, we employ an ensemble classification system based on a Bayesian Deep Learning approach in order to maximize performance while quantifying the uncertainty of each classification decision. In particular, we combine seven CNNs with the same structure but differing in the kernel sizes of their convolutional layers. This allows the classification system to extract relevant features of different sizes and shapes. The global classification is performed by combining the predictions of the different classifiers. The contribution of each individual classifier depends on the uncertainty of its predictions: the lower the uncertainty, the higher the weight, and vice versa.
The performance of the Bayesian network is evaluated in a range of real scenarios of incremental difficulty: from the simplest one, distinguishing between control and pneumonia patients, to a multiclass context in which four different pathologies are simultaneously differentiated: control \textit{vs} bacterial pneumonia \textit{vs} viral pneumonia \textit{vs} COVID-19 pneumonia. The main contributions of our work can be summarized as follows: \begin{itemize} \item{A novel and accurate tool for the automatic diagnosis of pneumonia, in addition to the identification of the cause of the pathology (bacteria, virus, COVID-19).} \item{The Bayesian nature of the Residual Network proposed in this work quantifies the reliability of the classification predictions.} \item{The combination of networks with different kernel sizes allows the identification of pneumonia patterns regardless of their shape and extension.} \item{Our approach employs the uncertainty of the predictions of each individual classifier to weigh their contribution to the ensemble global decision.} \end{itemize} \section{Material} \label{sec:materials} \subsection{Dataset} \label{subsec:dataset} We have used the dataset available in \cite{kaggle} for controls and patients who suffered from bacterial or non-COVID-19 viral pneumonia. According to the information described in \cite{kermany2018}, the CXR images were selected from retrospective cohorts of pediatric patients of one to five years old from Guangzhou Women and Children's Medical Center, Guangzhou. All CXR images were obtained as part of patients' routine clinical care. Institutional Review Board (IRB)/Ethics Committee approvals were obtained. The work was conducted in a manner compliant with the United States Health Insurance Portability and Accountability Act (HIPAA) and was adherent to the tenets of the Declaration of Helsinki.
\cite{kermany2018} collected and labeled a total of 6374 CXR images from children, including 4273 characterized as depicting pneumonia and 1583 as normal. Of the patients diagnosed with pneumonia, 2786 were labeled as bacterial pneumonia, whereas 1487 were labeled as viral pneumonia. The dataset containing COVID-19 patients is available in \cite{kaggle1} and includes 576 CXR images from adults. Figure \ref{fig:figurauno} shows CXR images from a control (CTL) and from patients suffering from bacterial (BAC), viral (VIR) and COVID-19 (CVD19) pneumonia. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{figures/control_vs_pneumo.pdf} \caption{From left to right, CXR image of a control, bacterial pneumonia, viral pneumonia and COVID-19 pneumonia. Note some clear artifacts in the COVID-19 image.} \label{fig:figurauno} \end{figure*} \subsection{Image preprocessing} \label{subsec:prepro} When working with medical images, it is crucial to apply a preprocessing step that improves the subsequent classification performance. This is especially important in CXR images, where low X-ray radiation and movement during image acquisition result in noisy and low-resolution images. However, this preprocessing must adapt the images to the needs of the neural network. Due to computational and memory requirements, we downsampled the input images to obtain a final map of size $224\times 224$. We also performed an intensity normalization procedure for each individual image based on standardization. Each image was transformed such that the resulting distribution has a mean (\begin{math} \mu \end{math}) of 0 and a standard deviation (\begin{math} \sigma \end{math}) of 1, as follows: \begin{equation} I' = \frac{I-\mu}{\sigma} \label{eq:clahe3} \end{equation} \noindent where \begin{math} I \end{math} is the original image and \begin{math} I' \end{math} is the resulting one.
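As a concrete illustration, the per-image standardization above can be sketched in a few lines of NumPy (a minimal sketch with our own variable names; the downsampling to $224\times 224$ is assumed to have been applied beforehand):

```python
import numpy as np

def standardize(image):
    """Zero-mean, unit-variance normalization of one image: I' = (I - mu) / sigma."""
    mu = image.mean()
    sigma = image.std()
    return (image - mu) / sigma

# Illustrative synthetic "CXR" already resized to 224x224
rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(224, 224))
img_norm = standardize(img)
```

After this transform each image contributes intensities on a comparable scale, regardless of the exposure of the original acquisition.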
\section{Methods} \label{sec:methods} \subsection{Deep learning} \label{subsec:deep} The use of algorithms based on deep learning has revolutionized the analysis of medical images \citep{cnn1,cnn2,cnn3,cnn4}. Since the ImageNet classification benchmark \citep{cnn5}, CNNs have been used more than any other pattern recognition algorithm in medical image classification. This architecture emerged as an attempt to replicate the behavior of neurons. Briefly, CNNs combine different steps based on convolution and pooling to allow the identification of different patterns and of both low- and high-level features \citep{cnn6,cnn1}. The main component of a CNN is the convolutional layer. This operator takes the tensor \begin{math} \mathbf{V}_{i-1} \end{math} containing the activation map of the previous layer \begin{math} i-1 \end{math}. The target layer (\begin{math} i \end{math}) then learns a set of \begin{math} N \end{math} filters \begin{math} \mathbf{W}_i \end{math} with a bias term \begin{math} \mathbf{b}_i \end{math}, as follows: \begin{equation} \label{eq:cnn1} \mathbf{V}_i = f_{a}(\mathbf{W}_i*\mathbf{V}_{i-1}+\mathbf{b}_i) \end{equation} \noindent where \begin{math} f_a(\cdot) \end{math} is the activation function \citep{cnn1}. For a three-dimensional input (\begin{math} \mathbf{V}_{i-1}\end{math}) of size \begin{math} H \times W \times D \times C \end{math} (height, width, depth and number of channels, respectively), \begin{math} \mathbf{W}_i \end{math} is of size \begin{math} P \times Q \times R \times C \times K \end{math}, where \begin{math} K \end{math} is the number of filters.
The convolution term for the \textit{k}th filter is \begin{equation} \label{eq:cnn2} \begin{split} \mathbf{W}_{ik} * \mathbf{V}_{i-1} = \sum_{u=0}^{P-1} \sum_{v=0}^{Q-1} \sum_{w=0}^{R-1} [\mathbf{W}_{ik}(P-u, Q-v, R-w) \\ \cdot\mathbf{V}_{i-1}(x+u,y+v,z+w)] \end{split} \end{equation} Once the convolution is performed, the activations of the filters in layer \begin{math} i \end{math} are stored and passed to the next layer \begin{math} i+1 \end{math}. It is of great importance to set the values of all the hyperparameters properly, striking a balance between performance and model complexity. One of these parameters is the number of filters: the higher this number, the more patterns the model is able to learn. There is no consensus in the literature about the ideal number of filters, probably because different problems need CNNs with different configurations, but powers of 2 are usually chosen. \subsection{Bayesian Deep learning} \label{subsec:bay_deep} Despite the high performance that Deep Learning models have demonstrated, recent works have claimed the need to compute the uncertainty of a model, a measure that makes it possible to identify situations where the classifier does not know the answer. To do so, it is necessary to estimate the level of uncertainty of a prediction in order to reject it in case its value is too high. Bayesian deep learning offers a framework for understanding uncertainty with deep learning models \citep{bdl2016}. There are two main types of uncertainty that can be estimated in Bayesian modeling: epistemic and aleatoric \citep{KIUREGHIAN2009105,ygal}. Epistemic uncertainty is inherent to the model, which means that it can be reduced by increasing the amount of data processed by the model. Estimating the epistemic uncertainty requires modeling distributions over the different parameters of the model. This allows the network to be optimized according to the average of all possible weights.
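To make the triple sum concrete, the following sketch reduces Eq. (\ref{eq:cnn2}) to a single-channel 2D case (our own didactic reduction with `valid' padding; production layers use optimized library implementations):

```python
import numpy as np

def conv2d_valid(V, W):
    """Direct 2D analogue of the convolution sum: the kernel W is flipped
    (indices P-u, Q-v, as in the equation) and slid over the input V."""
    P, Q = W.shape
    H, Wd = V.shape
    out = np.zeros((H - P + 1, Wd - Q + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            out[x, y] = sum(
                W[P - 1 - u, Q - 1 - v] * V[x + u, y + v]
                for u in range(P)
                for v in range(Q)
            )
    return out

V = np.arange(9, dtype=float).reshape(3, 3)
K = np.ones((2, 2))            # a flip-invariant averaging kernel
out = conv2d_valid(V, K)       # 2x2 map of local patch sums
```

Each output position is the weighted sum of one receptive field, which is exactly what the learned filters of a convolutional layer compute channel by channel.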
Let \begin{math} \mathbf{x} \end{math} be a feature vector and \begin{math} \mathbf{W} \end{math} the weights of a Bayesian Neural Network (BNN). Considering the output of the network as \begin{math} \mathbf{f}^{\mathbf{W}}(\mathbf{x}) \end{math}, the model likelihood can be defined as \begin{math}p(\mathbf{y}|\mathbf{f}^{\mathbf{W}}(\mathbf{x})) \end{math}. For a given dataset \begin{math} \mathbf{X} = \{\mathbf{x}_{1},\cdots, \mathbf{x}_{N}\} \end{math}, \begin{math} \mathbf{Y} = \{\mathbf{y}_{1},\cdots, \mathbf{y}_{N} \} \end{math}, Bayesian inference computes the posterior distribution over the weights, \begin{math} p(\mathbf{W}|\mathbf{X},\mathbf{Y}) \end{math}. \cite{cipolla_unc} demonstrated that applying dropout before every weight layer in a neural network is mathematically equivalent to an approximation to a probabilistic deep Gaussian process \citep{damianou2013}. Briefly, they showed that the dropout objective minimizes the Kullback-Leibler divergence between an approximate distribution and the posterior of a deep Gaussian process. A popular technique relies on the use of Monte Carlo dropout sampling to place a Bernoulli distribution over the network's weights. Dropout is widely used as a regularization procedure during training \citep{cnn8}. However, when it is also applied during the testing phase, this method makes it possible to obtain a distribution of output predictions \citep{jospin2020,dropout2017}. The statistics of this distribution reflect the model's epistemic uncertainty.
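The Monte Carlo dropout idea can be sketched with a toy two-layer network in NumPy (our own illustration, not the paper's ResNet-18): dropout is kept active at test time, and the spread of $T$ stochastic forward passes estimates the epistemic uncertainty:

```python
import numpy as np

rng = np.random.default_rng(42)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_dropout_predict(x, W1, W2, p=0.5, T=100):
    """T stochastic forward passes with dropout active at test time.
    Returns the mean prediction and its per-class standard deviation,
    the latter reflecting epistemic uncertainty."""
    samples = []
    for _ in range(T):
        h = np.maximum(0.0, W1 @ x)               # ReLU hidden layer
        mask = rng.random(h.shape) < (1.0 - p)    # Bernoulli dropout mask
        h = h * mask / (1.0 - p)                  # inverted-dropout scaling
        samples.append(softmax(W2 @ h))
    samples = np.array(samples)
    return samples.mean(axis=0), samples.std(axis=0)

# Toy 2-class model with random weights (illustration only)
x = np.array([1.0, -0.5, 0.25])
W1 = rng.normal(size=(8, 3))
W2 = rng.normal(size=(2, 8))
mean_pred, epistemic_std = mc_dropout_predict(x, W1, W2)
```

A large `epistemic_std` for a given input flags a prediction the model is not entitled to trust.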
\begin{figure*} \centering \includegraphics[width=0.7\textwidth]{figures/bayesian_network.pdf} \caption{Diagram of the Bayesian framework of each individual network within the ensemble.} \label{fig:bayesian_network} \end{figure*} Aleatoric uncertainty is usually referred to as the uncertainty inherent to the data, and can be divided into two sub-categories: i) homoscedastic uncertainty, which remains stable for every input of the model; and ii) heteroscedastic uncertainty, which assumes that the noise varies across the different inputs of the model \citep{nix1994,lequoc2005}. Heteroscedastic uncertainty can be modeled by modifying the loss function used by the neural network. Since this uncertainty is a function of the input data, employing a deterministic mapping from inputs to model outputs allows the estimation of the uncertainty. For a typical Euclidean loss \begin{math} L = ||y-\hat y||^2 \end{math}, the Bayesian version is given by \begin{math} L = \frac{||y-\hat y||^2} {2\sigma ^2} + \frac{1}{2} \log \sigma ^2 \end{math}. In the latter, the model predicts both \begin{math} \hat{y} \end{math} and the variance \begin{math} \sigma^2 \end{math}, so that if the model prediction is not good, the residual term is attenuated by increasing \begin{math} \sigma^2 \end{math}. The term \begin{math} \log \sigma^2\end{math} then prevents the uncertainty from growing to infinity, leading to a learned loss attenuation. The process for homoscedastic uncertainty is essentially the same, but the uncertainty is considered a free parameter instead of a model output. In this work, we have employed a Bayesian version of the ResNet-18 CNN \citep{he_zhang}. The output layer contained 2 neurons with softmax activation. Besides, dropout was used to prevent overfitting, and Batch Normalization to aid convergence.
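The learned loss attenuation described above can be written down directly; the sketch below (ours) parameterizes the network output as $\log\sigma^2$ rather than $\sigma^2$, a common choice for numerical stability:

```python
import numpy as np

def heteroscedastic_loss(y, y_hat, log_var):
    """Bayesian regression loss ||y - y_hat||^2 / (2 sigma^2) + 0.5 log sigma^2,
    with the variance expressed through log_var = log(sigma^2)."""
    residual = np.sum((y - y_hat) ** 2)
    return 0.5 * np.exp(-log_var) * residual + 0.5 * log_var

y = np.array([1.0, 0.0])
y_bad = np.array([3.0, 0.0])      # poor prediction: squared residual = 4
loss_confident = heteroscedastic_loss(y, y_bad, log_var=0.0)
loss_uncertain = heteroscedastic_loss(y, y_bad, log_var=2.0)
```

Raising the predicted variance attenuates the residual of the poor prediction (`loss_uncertain < loss_confident`), while the $\frac{1}{2}\log\sigma^2$ term stops the variance from growing without bound on well-fitted samples.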
The Bayesian nature of this network is obtained by replacing the deterministic weights along the network with a distribution over these parameters. This means that instead of optimizing the network weights, an average over all possible weights is computed. As a result, the loss function depends on two factors: the softmax values (as in the non-Bayesian modality) and the Bayesian categorical cross-entropy, which is based on the input variance (see \cite{kendall2017uncertainties} for more details). Figure \ref{fig:bayesian_network} summarizes the architecture of the Bayesian network. \subsection{Multi-level Ensemble Classification} \label{subsec:ensemble} Patterns associated with each type of pneumonia are similar among different subjects. However, there are some factors, such as the virulence of the disease and the presence of other pulmonary findings, that can affect the identification of the patterns associated with the different pathologies. One crucial aspect is to select an optimal kernel size for the convolutional operators of the neural network that can properly extract the relevant information. Moreover, this is even more important when the images used to train the network come from different sources, and when they can have different sizes and random artifacts. To overcome this issue, we employed seven neural networks, each of them with a different kernel size in the range \begin{math} [3, 15] \end{math} with increments of two. This means that the kernel size assigned to the first network was 3, that of the second network was 5, and so on, up to a size of 15 for the seventh CNN. The number of neural networks and their kernel sizes were selected in order to strike a balance between performance and computational cost. Finally, the individual classifiers were combined into a global one following an ensemble classification procedure.
\begin{figure*} \centering \includegraphics[width=0.7\textwidth]{figures/ensemble.pdf} \caption{Schema of the ensemble architecture proposed in this work based on the uncertainty in the prediction of each individual classifier.} \label{fig:ensemble} \end{figure*} Previous studies have employed majority voting to fuse the outputs of the base classifiers \citep{chandra2021,zhou2020}. Given the Bayesian nature of the networks employed in this work, we computed the weight of each classifier as a function of the uncertainty given by each of them for each test image (see Figure \ref{fig:ensemble}). If the uncertainty of a classifier in a specific prediction is high, it has a low contribution to the final ensemble, and vice versa \citep{ensemble2012}. Defining \begin{math} u_{l}^{k}(\mathbf{y}) \end{math} as the uncertainty for the test sample \begin{math} \mathbf{y} \end{math} obtained from the \textit{k}-th classifier for the \textit{l}-th class, the empirical average of the \textit{l}-th weights (inverse uncertainties) over the \textit{K} classifiers can be calculated as follows: \begin{equation} \label{eq:ensemble1} E_{l}(\mathbf{y}) = \frac{\sum_{k=1}^{K}{\frac{1}{u_{l}^{k}(\mathbf{y})}}}{K} \end{equation} The class label of the test sample \begin{math} \mathbf{y} \end{math} is then assigned to the class with the maximum average weight as: \begin{equation} \label{eq:ensemble2} Label(\mathbf{y}) = \argmax_{l} \hspace{0.5mm} E_{l}(\mathbf{y}) \end{equation} Detecting the presence of pneumonia compared to healthy subjects is an interesting initial step in the development of a CAD system. However, it is much more useful to identify exactly the type of pneumonia a patient suffers from. As described in Section \ref{subsec:dataset}, the database used in this work contains CXR images of healthy subjects (controls) and images from three types of pneumonia: bacterial, viral and COVID-19.
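The uncertainty-weighted voting defined by Eqs. (\ref{eq:ensemble1})--(\ref{eq:ensemble2}) can be sketched as follows (a minimal NumPy version; the array layout is our own convention):

```python
import numpy as np

def ensemble_label(u):
    """u[k, l]: uncertainty of the k-th classifier for the l-th class on one
    test image. Averages the inverse uncertainties over the K classifiers
    and returns the class with the maximum average weight."""
    E = np.mean(1.0 / u, axis=0)     # empirical average weight per class
    return int(np.argmax(E))

# Seven classifiers, two classes: all are more certain about class 1
u_consistent = np.array([[0.9, 0.2]] * 7)
# One very confident classifier can dominate the vote for class 0
u_dominated = np.array([[0.01, 0.5], [0.6, 0.2], [0.6, 0.2]])
```

Here `ensemble_label(u_consistent)` yields class 1, whereas the single very confident classifier in `u_dominated` pulls the decision to class 0, illustrating how inverse-uncertainty weighting differs from plain majority voting.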
In order to perform the multiclass classification, we employed a decision tree based on the One-versus-all (OVA) approach \citep{multiclass1,multiclass2,multiclass3}. This alternative divides a multiclass problem into a number of binary sub-problems. In each of them, one of the classes is considered the positive class, whereas the remaining classes form the negative class. Following this framework, we used a decision tree with three levels in order to distinguish between the different pathologies. In each level, an ensemble with different kernel sizes was employed. This led to a two-level ensemble classification: one ensemble for the combination of different kernels, and another one for combining binary classifiers to perform the multiclass classification. The decision tree relies on a process that can be summarized as follows: \begin{itemize} \item{First level: classification between normal \textit{vs} pneumonia. The second class contains subjects diagnosed with any of the three types of pneumonia (bacterial, viral and COVID-19)}. \item{Second level: classification between bacterial \textit{vs} viral pneumonia. The second class corresponds to images from subjects with pneumonia caused by a virus (no-COVID-19 or COVID-19)}. \item{Third level: classification between no-COVID-19 \textit{vs} COVID-19.} \end{itemize} \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{figures/decision_tree.pdf} \caption{Schematic representation of the decision tree employed for the multiclass classification.} \label{fig:decision_tree} \end{figure*} Figure \ref{fig:decision_tree} depicts a visual representation of how the decision tree works. Images that are labelled as pneumonia in the first level are passed to the second one. Similarly, images labelled as viral pneumonia continue to the third level in order to identify whether the virus that produced the pneumonia was COVID-19 or not.
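The cascade in Figure \ref{fig:decision_tree} amounts to three chained binary decisions, which can be sketched as below (a hypothetical helper; in the paper each \texttt{level} is itself an uncertainty-weighted ensemble of Bayesian CNNs):

```python
def classify_cxr(image, level1, level2, level3):
    """Three-level One-versus-all cascade. Each level* is a binary
    classifier returning True when its positive class is detected:
    level1 -> pneumonia, level2 -> viral, level3 -> COVID-19."""
    if not level1(image):
        return "CTL"        # first level: normal vs pneumonia
    if not level2(image):
        return "BAC"        # second level: bacterial vs viral
    if not level3(image):
        return "VIR"        # third level: non-COVID-19 viral
    return "CVD19"          # COVID-19 pneumonia

# Toy stand-in classifiers, for illustration only
pred = classify_cxr(None, lambda x: True, lambda x: True, lambda x: False)
```

Only images flagged as pneumonia reach the second level, and only viral cases reach the third, mirroring the routing described above.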
It is worth mentioning that the binary classifier employed in each level has the same ensemble structure as the one explained in Section \ref{subsec:ensemble}. \subsection{Performance evaluation} \label{subsec:performance} For all experiments, a 5-fold stratified cross-validation scheme was used to estimate the generalization ability of our method \citep{Kohavi95}. The performance of the classification frameworks was evaluated in terms of different measures derived from the confusion matrix, which can be computed as follows: \begin{align} \label{eq:metrics} \begin{split} Acc = \frac {T_{P}+T_{N}}{T_{P}+T_{N} + F_{P}+F_{N}} \hspace{0.3cm} Sens = \frac {T_{P}}{T_{P}+F_{N}} \\ Spec = \frac {T_{N}}{T_{N}+F_{P}} \hspace{0.5cm} AUC = \frac{1}{2} \Big( \frac{TP}{P} + \frac{TN}{N}\Big) \\ Prec = \frac {T_{P}}{T_{P}+F_{P}} \hspace{0.5cm} F1-score = \frac{2 \times Prec \times Sens}{Prec + Sens} \end{split} \end{align} \noindent where \begin{math} T_{P} \end{math} is the number of pneumonia patients correctly classified (true positives), \begin{math} T_{N} \end{math} is the number of control patients correctly classified (true negatives), \begin{math} F_{P} \end{math} is the number of control subjects classified as pneumonia (false positives) and \begin{math} F_{N} \end{math} is the number of pneumonia patients classified as controls (false negatives). We also employed the area under the ROC curve (AUC) as an additional measure of the classification performance \citep{auc1,auc2}. Since the classes were unbalanced (e.g. the number of pneumonia patients was higher than that of controls), we incorporated class weights into the cost function so that the majority class does not contribute more than the minority one. Given the ensemble nature of the system proposed in this work, we employed a kappa-uncertainty diagram to evaluate the level of agreement of the different classifier outputs while correcting for chance \citep{rodriguez2006,wang2019}.
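For reference, the metrics in Eq. (\ref{eq:metrics}) translate directly into code; note that the AUC expression used here is the balanced accuracy $(TP/P + TN/N)/2$, exactly as defined above:

```python
def binary_metrics(tp, tn, fp, fn):
    """Confusion-matrix metrics as defined in the text."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sens = tp / (tp + fn)                 # sensitivity (recall)
    spec = tn / (tn + fp)                 # specificity
    prec = tp / (tp + fp)                 # precision
    auc = 0.5 * (sens + spec)             # (TP/P + TN/N) / 2
    f1 = 2 * prec * sens / (prec + sens)
    return acc, sens, spec, prec, auc, f1

# Hypothetical fold with 100 pneumonia and 100 control images
acc, sens, spec, prec, auc, f1 = binary_metrics(tp=90, tn=80, fp=20, fn=10)
```

The counts above are illustrative only, not results from the paper's experiments.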
This measure is based on Cohen's kappa coefficient \citep{cohen1960}, which is widely accepted as the de facto standard for the measurement of inter-annotator agreement \citep{kappa_statistic2004}. Specifically, the kappa statistic compares an observed accuracy with the accuracy obtained by chance, providing a measure of how closely the instances classified by a classifier match the ground truth. Mathematically, Cohen's kappa can be defined as: \begin{equation} \label{eq:cohen} k = \frac{p_A-p_E}{1-p_E} \end{equation} \noindent where \begin{math} p_A \end{math} is the observed relative agreement between two annotators, and \begin{math} p_E\end{math} is the probability of agreement by chance. Although acceptable kappa values vary with the context, the closer to 1, the better the classification. Section \ref{sec:results} summarizes the kappa scores obtained by different members of the ensemble classifier, and reveals the relationship between the uncertainty of the Bayesian networks and the kappa values. As explained in Section \ref{subsec:ensemble}, a decision tree was employed for the multiclass classification. In order to build the kappa-uncertainty diagram explained above, a combination of the uncertainties of the different levels of the tree has to be computed. To do so, we employed a method known as summation in quadrature \citep{unc_combination}, described as follows: \begin{equation} \label{eq:unc_combined} u_{c}(y) = \sqrt{\sum_{i=1}^n [c_i u(x_i) ]^2} \end{equation} \noindent where \begin{math}u_{c}(y) \end{math} is the combined uncertainty, \begin{math} c_i \end{math} is the sensitivity coefficient and \begin{math} u(x_i)\end{math} is the standard uncertainty.
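Both statistics are straightforward to compute; the sketch below (ours) implements Cohen's kappa from the observed and chance agreement, and the summation in quadrature with unit sensitivity coefficients by default:

```python
import numpy as np

def cohen_kappa(p_a, p_e):
    """Chance-corrected agreement: k = (p_A - p_E) / (1 - p_E)."""
    return (p_a - p_e) / (1.0 - p_e)

def combined_uncertainty(u, c=None):
    """Summation in quadrature: u_c = sqrt(sum_i (c_i * u(x_i))^2)."""
    u = np.asarray(u, dtype=float)
    c = np.ones_like(u) if c is None else np.asarray(c, dtype=float)
    return float(np.sqrt(np.sum((c * u) ** 2)))

k = cohen_kappa(p_a=0.9, p_e=0.5)          # illustrative agreement values
u_tree = combined_uncertainty([0.3, 0.4])  # two tree levels combined
```

With these illustrative numbers, `k` evaluates to 0.8 and `u_tree` to 0.5; in the kappa-uncertainty diagrams, each ensemble member contributes one such (kappa, combined-uncertainty) pair.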
\section{Evaluation} \label{sec:eval} \begin{table*}[ht] \caption{Performance of the ensemble classification approach proposed in this work in the different contexts evaluated.} \label{table:results1} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}ccccccc} \hline Experiment & Acc (\%) & Sens (\%) & Spec (\%) & Prec (\%) & AUC (\%) & F1-score (\%) \\ \hline CTRL vs PNEU & 97.27 \textpm 3.37 & 96.41 \textpm 4.47 & 99.94 \textpm 0.13 & 99.98 \textpm 0.04 & 98.17 \textpm 2.22 & 98.11 \textpm 2.38\\ BAC vs VIR & 98.43 \textpm 0.95 & 98.16 \textpm 1.17 & 98.79 \textpm 0.73 & 99.09 \textpm 0.56 & 98.48 \textpm 0.92 & 98.62 \textpm 0.84\\ COVID-19 vs NO COVID-19 & 99.69 \textpm 0.56 & 99.83 \textpm 0.35 & 99.6 \textpm 0.8 & 99 \textpm 1.98 & 99.71 \textpm 0.4 & 99.4 \textpm 0.99\\ Multiclass & 98.06 \textpm 1.63 & 97.24 \textpm 2.67 & 99.38 \textpm 0.33 & 99.6 \textpm 0.21 & 98.31 \textpm 1.33 & 98.39 \textpm 1.38 \\ \hline \end{tabular*} \end{table*} \subsection{Experimental setup} \label{subsec:setup} In this work we propose a method to extract the relevant information from CXR images that allows the identification of pneumonia. To do so, we define two experiments: \begin{itemize}[leftmargin=*] \item{\textbf{Experiment 1: Binary Classification} between different groups under three scenarios: \textbf{CTL \textit{vs} PNEU}}, which includes all images labelled as CTL and PNEU; \textbf{BAC \textit{vs} VIR}, which divides the images from patients diagnosed with pneumonia according to whether the cause of the disease is a bacterium or a virus; and \textbf{NO-COVID-19 \textit{vs} COVID-19} for viral pneumonia, where the aim is to identify whether the virus that produced the pneumonia was COVID-19 or not. The whole Bayesian CNN was trained using the Adam optimization algorithm \citep{adam}, with a learning rate of 0.001, \begin{math} \phi = 0.9 \end{math} and a decay of 0.001.
The number of epochs employed for training the system was 15, 20 and 25 for the CTL \textit{vs} PNEU, BAC \textit{vs} VIR and NO-COVID-19 \textit{vs} COVID-19 scenarios, respectively. We used the Keras library on top of TensorFlow with some custom modules. \item{\textbf{Experiment 2: Multiclass Classification} by using a decision tree in order to distinguish between the four different pathologies contained in the database. A binary classification is employed in each of the three levels of the tree. The first level corresponds to the CTL \textit{vs} PNEU classification, the second one contains the BAC \textit{vs} VIR comparison, whereas in the third level, the distinction between NO-COVID-19 \textit{vs} COVID-19 is performed. These binary classifiers employ the same framework and configuration as in Experiment 1.} \end{itemize} \section{Results} \label{sec:results} We first explore how performance varies with the kernel size of the individual classifiers for all the binary classifications performed (see Figure \ref{fig:kernel_size}). We can see that the kappa score varies only slightly with increasing kernel size in the three classification contexts. Regarding uncertainty, only in the BAC \textit{vs} VIR scenario do uncertainty values change drastically with kernel size. Therefore, no clear trend allows us to conclude that these two variables are related. It is important to note the high levels of uncertainty in this classification context compared to the first and third ones, which reflects the extreme difficulty of this specific classification. It is not surprising that differentiating between a control and a patient who suffers from pneumonia is a considerably easier task. However, these findings point out that there is a larger difference in the spatial patterns associated with COVID-19 \textit{vs} no-COVID-19 than between bacterial and viral pneumonia.
This can be explained by the severity of the pulmonary affection that COVID-19 usually causes, whereas pneumonia derived from other viruses can show a more heterogeneous severity. \begin{figure*} \centering \includegraphics[width=0.68\textwidth]{figures/curvas_apiladas.pdf} \caption{Performance associated with the different kernel sizes for the three classification contexts under study. Scores evaluated were kappa and uncertainty.} \label{fig:kernel_size} \end{figure*} We observe that the discrimination ability of the system is very high for the three binary classifications regardless of the kernel size employed. Results in terms of different performance measures are shown in Table \ref{table:results1}, whereas Figure \ref{fig:roc_curves} depicts the ROC curves for the different classifiers. Large values are obtained, as expected, in the CTL \textit{vs} PNEU context. However, these results confirm that our system can also separate patients with the same diagnosis (pneumonia) but with a different cause (bacteria, virus, COVID-19). We also use the kappa-uncertainty diagram to evaluate the level of agreement between the classifier outputs. Figure \ref{fig:kappa_unc} shows these diagrams for the three binary classifiers and the multiclass classifier derived from the decision tree, each represented by a different colour. The point clouds represent the kappa score and uncertainty obtained in each fold of the cross-validation scheme, whereas large stars represent the centroid of the resulting distribution. From this figure, we can see that there is no great difference between individual classifiers, in consonance with the results derived from the ROC curves.
\begin{figure*} \centering \includegraphics[width=0.42\textwidth]{figures/roc_curves} \caption{ROC curves obtained by the classifiers of each level of the decision tree.} \label{fig:roc_curves} \end{figure*} Interestingly, this figure reveals that combining classifiers with good individual performance (high kappa score and low uncertainty) leads to an ensemble classifier with the same features. However, uncertainty is higher in the multiclass classifier for a similar kappa score compared to the individual ones. This means that, although the classification performance of the decision tree is high, the uncertainty of the resulting prediction is also higher than in binary classification. This evidences the great utility of this kind of diagram in Bayesian deep learning and in contexts where the reliability of predictions is of core interest. According to Table \ref{table:results1}, the multiclass classifier outperforms the CTRL \textit{vs} PNEU classifier in most of the metrics evaluated. However, the uncertainty of its predictions is also higher (the centroid of the multiclass classifier lies farther to the right than the CTRL \textit{vs} PNEU centroid). Further discussion of the results obtained and their clinical implications is provided in Section \ref{sec:discussion}. \begin{figure*} \centering \includegraphics[width=0.5\textwidth]{figures/kappa_unc} \caption{Diversity-uncertainty diagrams of the different levels of the multiclass classifier. The x-axis represents the combined uncertainty of each individual classifier and the resulting multiclass classifier. The y-axis represents the diversity of the classifiers evaluated by the kappa measure.
Each dot represents the kappa-uncertainty score obtained by a classifier in one fold, whereas large stars represent the centroid of the resulting distribution.} \label{fig:kappa_unc} \end{figure*} \section{Discussion} \label{sec:discussion} In this study, we proposed a classification method for the detection of different types of pneumonia from CXR images. This approach relies on the use of a Bayesian version of a Residual Network (ResNet), which allows the optimization of the network according to the uncertainty of its predictions. We employed networks with different kernel sizes and combined them within an ensemble classifier so that the contribution of each individual network depends on the uncertainty of its predictions. We evaluated the performance of this approach in different classification scenarios. In the first context, the two classes led to relatively large differences in the observed patterns (pneumonia \textit{vs} control), whereas in the second (bacterial \textit{vs} viral pneumonia) and third ones (COVID-19 \textit{vs} no-COVID-19) these differences were extremely small. In addition, the performance of a multiclass classifier was evaluated in order to check whether this method could simultaneously differentiate between the different pathologies. The high performance shown by the proposed method in all scenarios provides us with a new tool to detect the presence of pneumonia in CXR images, to distinguish whether the source of the pathology is viral or bacterial, and to determine whether the virus is COVID-19 or not. The features extracted by convolutional blocks of different kernel sizes contained relevant information that enhanced the separability between the different classes. The combination of convolutional blocks of different kernel sizes is especially interesting in this context, where the database contains images of people from a wide range of ages.
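To illustrate why combining kernel sizes helps when informative regions vary in size, consider the following toy numpy sketch (not the paper's implementation; the synthetic image and the mean filters are stand-ins for learned CNN kernels): a filter whose footprint matches the lesion scale responds strongly, while a much larger one dilutes the response.

```python
import numpy as np

def mean_filter_response(image, ksize):
    """Valid-mode mean filter of size ksize x ksize (a toy stand-in for a CNN kernel)."""
    h, w = image.shape
    out = np.empty((h - ksize + 1, w - ksize + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = image[i:i + ksize, j:j + ksize].mean()
    return out

# Synthetic "opacity": a bright 8x8 square on a dark 32x32 background.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0

# Kernels no larger than the blob produce a full-strength response;
# an oversized kernel averages the blob away (weaker peak response).
for k in (3, 8, 15):
    resp = mean_filter_response(img, k)
    print(k, round(resp.max(), 3))
```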
Pulmonary affections caused by the different pathologies evaluated in this work mainly depend on the severity of the disease. However, the shape and size of these manifestations also depend on the shape and size of the lungs. The ensemble method proposed in this work allows the identification of patterns associated with pneumonia without focusing on a specific size for the informative regions. Another crucial aspect of the method proposed in this work is its Bayesian nature. The aim of CAD systems, regardless of the application context, is to maximize classification performance in terms of accuracy, AUC, etc. However, in most scenarios it is also important to know the reliability of the prediction itself. Neural networks are prone to overfitting, which means that making decisions based only on the prediction can be counterproductive. In an extreme case, the classifier may not know the class a test image belongs to, yet it always has to assign a label, even when the output probability is near chance level. This is particularly problematic when developing a tool for the diagnosis of a disease. Doctors need to know not only the global accuracy obtained during the training and testing of the model, but also how reliable the prediction for each new individual sample is. This problem is addressed by the inclusion of Bayesian elements in neural networks. However, our findings reveal that this is not the only advantage that this approach provides. We have demonstrated the high performance of ensemble classification, even in situations where differences between the pulmonary patterns of the different pathologies are extremely small. The novelty of our approach lies in the way the contribution of each individual classifier to the global decision is computed. Weights are usually derived from the accuracy of each individual classifier. However, results can be biased if part of the predictions are obtained by chance, i.e.
when the output probabilities of the different classes are almost equal. We overcome this problem by weighting the contribution of each classifier according to the uncertainty of its predictions. It is worth remembering that part of the database (normal, bacterial and viral (no-COVID) pneumonia patients) contains pediatric chest radiographs, whereas the COVID-19 images correspond to adults. Detecting pneumonia from pediatric chest radiographs is more challenging than in adults for several reasons. First, the dose of X-ray radiation is considerably lower than in adults, which results in a reduced image resolution and a higher overlap between the different anatomical parts. Second, lung appearance changes dramatically across the pediatric developmental stages, both in size and shape (more similar to a triangle in infants). The dataset employed in this work contains CXR images of children across a wide range of ages, increasing the variability and complexity of the classification process. Finally, pediatric CXRs are noisier than adult ones because of movement, leg positioning, or because the children are being held by adult hands. For this reason, it is worth highlighting the high performance obtained in this work, which improves on the results obtained in previous works detecting pneumonia in children \citep{rajaraman2018,liang2020,measurement2020} and in adults \citep{pneu_adults1,pneu_adults2,pneu_adults3}, as well as identifying the presence of COVID-19 \citep{covid1,covid2,covid_multiclass,covid3}. We have developed a tool that is able to distinguish between patterns associated with different pathologies, and it is worth highlighting the high performance obtained in the multiclass classification. In this case, the accuracy and the AUC obtained were 98.06\% and 98.31\%, respectively, which is considerably higher than the results provided by similar techniques in previous studies \citep{zhang2020_1,wang2020_ct,zhou2020,hemdan2020covidxnet,apostolopoulos2020}.
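A minimal sketch of the uncertainty-driven weighting described above (the inverse-uncertainty scheme shown here is an illustrative choice, not necessarily the paper's exact formula): each member's class probabilities are down-weighted in proportion to its predictive uncertainty before averaging.

```python
import numpy as np

def uncertainty_weighted_vote(probs, uncertainties, eps=1e-8):
    """Combine per-classifier class probabilities, down-weighting uncertain members.

    probs: (n_classifiers, n_classes) predicted probabilities for one sample.
    uncertainties: (n_classifiers,) predictive uncertainty of each member.
    """
    probs = np.asarray(probs, dtype=float)
    w = 1.0 / (np.asarray(uncertainties, dtype=float) + eps)  # inverse-uncertainty weights
    w /= w.sum()
    return w @ probs  # weighted average of class probabilities

# Three members: two confident classifiers agree on class 1; one very
# uncertain member prefers class 0 but contributes little to the decision.
probs = [[0.2, 0.8],
         [0.3, 0.7],
         [0.9, 0.1]]
unc = [0.05, 0.05, 0.90]
combined = uncertainty_weighted_vote(probs, unc)
print(combined, combined.argmax())
```

A plain accuracy-weighted average would give the third member full say whenever its training accuracy is high, even on samples where its output is close to chance; weighting by per-prediction uncertainty avoids exactly that failure mode.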
Two relevant aspects of these excellent results deserve mention. First, the only preprocessing applied to the data was the rescaling of the images to a lower resolution in order to reduce the computational burden of the classification pipeline. We did not perform other complex processes such as lung segmentation; instead, the raw rescaled images were used as the inputs of the classification system. The high performance obtained by the proposed method under these conditions is therefore remarkable. Second, the results obtained in the multiclass scenario support the application of the proposed tool in a realistic context, which is closer to clinical practice than binary classifications that, in the simplest case, only detect the presence or absence of pneumonia. The results obtained by the multiclass classifier reveal the usefulness of this kind of technique. \section{Conclusion} \label{sec:conclusion} Respiratory illness is a leading cause of death and disability in the world. Pneumonia kills approximately 4 million people every year and is the leading cause of death among children under 5 years old. The pathology associated with pneumonia often overlaps with other abnormal conditions of the lung, leading to a time-consuming diagnostic process that may delay diagnosis and treatment. In this paper we proposed an uncertainty-driven ensemble of deep neural networks to identify patterns associated with different types of pneumonia. This tool combines the information extracted from different architectures according to the uncertainty of their predictions, instead of using the accuracy of the individual classifiers as most studies usually do. The information provided about the reliability of the predictions, in addition to the high performance obtained (accuracy of 98.06\% when distinguishing between four pathologies), evidences the applicability of the system as an aid for clinicians.
The combination of CNNs with different kernel sizes allows the identification of pneumonia patterns regardless of their size and shape. Moreover, the reduced preprocessing needed to obtain these results guarantees a limited computational cost. Our results pave the way for the application of Bayesian deep neural networks to other image modalities such as CT, which offers much higher resolution than X-ray images and can provide key information for the detection of pneumonia. \section*{Acknowledgments}\label{sec:Acknowledgments} This work was partly supported by the MINECO/ FEDER under the PGC2018-098813-B-C32, RTI2018-098913-B100, CV20-45250 and A-TIC-080-UGR18 projects. \vfill \pagebreak \bibliographystyle{elsarticle/elsarticle-harv}
\section{Parity switch and $4\pi$ periodicity in an explicit minimal lattice model} In this section, we demonstrate the parity-switch and $4\pi$ periodicity effects presented in the main text in an explicit minimal lattice model corresponding to the two-leg ladder illustrated in Fig.~1 of the main text, and described by Eq.~(3) thereof, with hoppings $t_\parallel$ and $t_\perp$ between nearest-neighboring sites along and across ladder legs, respectively. Explicitly, we start from a description in the standard Landau gauge where the intra-leg Hamiltonian takes the form \begin{equation} \label{eq:hpar} H_\sigma = -\frac{t_\parallel}{2} \sum_{j=0}^{N-1} \left[ e^{\frac{i}{N} \left( \Phi + \frac{1+\sigma}{2} \chi \right)} c^\dagger_{j+1,\sigma} c_{j,\sigma} + \mbox{h.c.} \right], \end{equation} with periodic boundary conditions, and the inter-leg coupling reads \begin{equation} \label{eq:hperp} H_{+-} = -t_\perp \sum_{j=0}^{N-1} \left( c^\dagger_{j,+} c_{j,-} + \mbox{h.c.} \right). \end{equation} Performing the gauge transformation $\tilde{c}_{j,\sigma} = e^{ij(\Phi + \chi/2)/N} c_{j,\sigma}$ (symmetric gauge), and moving to momentum space via the Fourier transformation $\tilde{c}_{j,\sigma} = \sum_k e^{ikj} \tilde{c}_{k,\sigma}/\sqrt{N}$ (where $k \in \{ k_n = 2\pi n/N + \Phi/N + (\chi/2)/N \}$ as in the main text), the ladder Hamiltonian $H = H_+ + H_- + H_{+-}$ takes the same form as in Eq.~(3) of the main text, with the replacements $h_\parallel[k \pm \chi/(2N)] = -t_\parallel \cos[k \pm \chi/(2N)]$ and $h_\perp(k) = -t_\perp$. The momentum-space Hamiltonian can be diagonalized via a straightforward Bogoliubov transformation $\tilde{c}_{k,\pm} = u_{k,\mp} \tilde{d}_{k,-} \pm u_{k,\pm} \tilde{d}_{k,+}$ with \begin{equation} u_{k,\pm} = \sqrt{\frac{1}{2} \left( 1 \pm \frac{\sin(k)\sin(\frac{\chi}{2N})}{\sqrt{\sin^2(k)\sin^2(\frac{\chi}{2N}) + \tau^2}} \right)}, \end{equation} where we have defined $\tau = t_\perp / t_\parallel$~\cite{narozhny05,*carr06,tai16}.
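The diagonalization above can be checked numerically; the following sketch (with arbitrarily chosen parameters) builds the $2\times 2$ momentum-space Hamiltonian in the symmetric gauge and compares its eigenvalues with the closed-form dispersion of the hybridized bands:

```python
import numpy as np

# Numerical check of the Bogoliubov diagonalization (parameter values are arbitrary).
t_par, t_perp = 1.0, 0.5          # hoppings t_parallel, t_perp
tau = t_perp / t_par
N, chi = 10, 4 * np.pi            # ladder length and transverse flux
delta = chi / (2 * N)             # momentum shift chi/(2N) between the two legs

def bloch_h(k):
    """2x2 momentum-space Hamiltonian of the two-leg ladder (symmetric gauge)."""
    return np.array([[-t_par * np.cos(k + delta), -t_perp],
                     [-t_perp, -t_par * np.cos(k - delta)]])

def bands(k):
    """Closed-form hybridized bands eps_+(k) (lower) and eps_-(k) (upper)."""
    root = np.sqrt(np.sin(k) ** 2 * np.sin(delta) ** 2 + tau ** 2)
    return (-t_par * (np.cos(k) * np.cos(delta) + root),
            -t_par * (np.cos(k) * np.cos(delta) - root))

for k in np.linspace(-np.pi, np.pi, 7):
    lo, hi = np.linalg.eigvalsh(bloch_h(k))   # eigenvalues in ascending order
    ep, em = bands(k)
    assert np.isclose(lo, ep) and np.isclose(hi, em)
print("eigenvalues match the closed-form bands")
```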
The eigenmodes $\tilde{d}_{k,\pm}$ correspond to two hybridized bands [Fig.~2 in the main text] \begin{equation} \label{eq:spec} \epsilon_{\pm}(k) = -t_\parallel \left[ \cos(k)\cos\left(\frac{\chi}{2N}\right) \pm \sqrt{\sin^2(k)\sin^2\left(\frac{\chi}{2N}\right) + \tau^2} \right], \end{equation} shown in Fig.~\ref{fig:short} for different values of $\chi$ and $\tau = t_\perp$ (setting $t_\parallel = 1$). We recall that $k \in \{ k_n = 2\pi n/N + \Phi/N + (\chi/2)/N \}$, where $n = 0, \ldots, N-1$. We focus on the situation where the Fermi energy $E_F$ lies in the ``gap'' (avoided crossing) opened by $\tau > 0$ at $k = 0$, as ensured by the condition $-\tau < E_F/t_\parallel + \cos[\chi/(2N)] < \tau$~\footnote{To ensure that the Fermi energy crosses the lower band, one must also have $E_F/t_\parallel < -(\sin^2[\chi/(2N)] + \tau^2)^{1/2}$.}. As discussed in the main text, the parity-switch and $4\pi$ periodicity effects can be observed in that case. We demonstrate this explicitly in Fig.~\ref{fig:short} for a relatively short ladder of $N = 10$ sites per leg [with Aharonov-Bohm flux $\Phi = 0$ and an integer number of transverse-flux quanta $\chi/(2\pi)$]. Figure~\ref{fig:short} illustrates how the filling of single-particle eigenstates and the corresponding ground-state degeneracy change for different values of the inter-leg coupling $\tau$. When $\tau$ is such that the Fermi level enters the upper band, the ground-state degeneracy does not change with modifications of $\chi$ by $\pm 2\pi$ anymore, and the parity-switch and $4\pi$ periodicity effects disappear. \begin{figure} \begin{center} \includegraphics[width=.5\textwidth]{figS1_1.png}\includegraphics[width=.5\textwidth]{figS1_2.png} \includegraphics[width=\textwidth]{figS1_3.png} \includegraphics[width=\textwidth]{figS1_4.png} \caption{Controlled parity switch in a short two-leg ladder of $N = 10$ sites per leg. \textbf{Top row}: Analog of Fig.~4 in the main text (where $N = 150$).
For large $\tau$ where only the lower band is occupied, the $4\pi$ periodicity of the persistent current is clearly visible in this shorter ladder. \textbf{Middle row}: Energy spectrum at $\Phi = 0$ and $\chi/(2\pi) = 2$ (even), for different values of the inter-leg coupling $\tau$. The filling of single-particle eigenstates is shown for different fermion numbers $N_f$ (corresponding to distinct parities). In a similar way as in Figs.~2 and~3 of the main text, red filled squares indicate occupied states, while unfilled squares indicate degenerate states that share a single fermion. Note that, for a fixed number of fermions, the ground-state degeneracy changes as soon as $\tau$ is reduced enough for the upper band to become occupied. \textbf{Lower row}: Same as middle row, for an odd number of transverse-flux quanta $\chi/(2\pi) = 3$. As opposed to the case with even $\chi/(2\pi)$, the ground-state degeneracy does not change when $\tau$ is decreased such that the upper band becomes occupied.} \label{fig:short} \end{center} \end{figure} We conclude this section by examining the situation where the number of fermions --- and, hence, the parity thereof --- is not controlled, which is typically the case in cold-atom experiments. For long ladders in which the behavior of the persistent current is typically described by Eq.~(4) in the main text, one readily sees that the process of averaging measurements corresponding to distinct particle numbers (with completely random parity) leads to an effective reduction by half of the periodicity of persistent currents in $\chi$ and $\Phi$, as shown in Fig.~\ref{fig:ensemble}.
\begin{figure} \begin{center} \includegraphics[height=.25\textwidth]{figS2_1.png} \put (0,60) {\Large{$\Rightarrow$}} \qquad \qquad \includegraphics[height=.25\textwidth]{figS2_2.png} \includegraphics[height=.25\textwidth]{figS2_3.png} \put (0,60) {\Large{$\Rightarrow$}} \qquad \qquad \includegraphics[height=.25\textwidth]{figS2_4.png} \caption{Average of persistent currents over measurements with different (random) fermion parity. \textbf{Top}: Plots on the left side correspond to Fig.~4 in the main text, while the plot on the right side corresponds to the average between the two, showing the apparent reduction by half of the periodicity in $\chi$ and $\Phi$. \textbf{Bottom}: Same as the top row, for weaker coupling $\tau$ where the upper band is occupied. In that case, changing the parity of the number of fermions no longer leads to a shift of $\pi$ along the $\Phi$ axis. The average of the plots for even and odd fermion-number parities no longer leads to a complete reduction by half of the periodicity in $\chi$ and $\Phi$ of the persistent current.} \label{fig:ensemble} \end{center} \end{figure} \section{Extension to multi-leg ladders with weak transverse flux --- connection to Landau levels} In the following two sections, we show how the parity-switch and $4\pi$ periodicity effects extend to multi-leg ladders, thereby providing explicit connections between the mesoscopic effects presented in the main text and more conventional quantum Hall effects. We start by focusing on scenarios where the transverse flux is weak, namely, $\chi/N \lesssim 2\pi$, for a ladder with $L \geq 2$ legs. In this regime, the parity-switch effect discussed in the main text can be interpreted as a mesoscopic manifestation, in a two-leg ladder, of changes in the number of occupied states per Landau level in larger, multi-leg ladders.
For $L \geq 2$ under the condition $\chi/N \lesssim 2\pi$, the backfolding of bands into the first Brillouin zone is irrelevant for the low-energy physics. In that case, the behavior illustrated in Fig.~2 of the main text for $L = 2$ --- where bands of individual ladder legs are shifted in momentum space by the transverse flux $\chi$ --- readily generalizes to multiple bands (see Fig.~\ref{fig:multi_small}). The situation is analogous to the one considered by Kane \emph{et al.} in Ref.~\cite{kane02}, where \textit{continuous} 1D systems with parabolic dispersion are tunnel-coupled to each other. The transverse flux $\chi$ shifts all bands by $(\chi/N)/(L-1)$, and the inter-leg coupling $h_\perp$ opens gaps at band crossings, leading to hybridized bands that can be interpreted as Landau levels~\cite{kane02}. Note that gaps decrease exponentially as one moves towards higher energies where crossings occur between bands corresponding to more distant ladder legs [as $h_\perp$ is the only (nearest-neighbor) direct coupling between legs (or bands)]. \\ In the usual Landau gauge, and in the limit of decoupled legs, the energy dispersion of individual legs with index $l$ (where $l = 0, \ldots, L-1$) reads \begin{equation} h_l(k) = h_\parallel[k + l(\chi/N)/(L-1)]. \end{equation} As in the previous section, one can think of the $h_l(k)$ as cosine bands with minima located at $k = -l(\chi/N)/(L-1)$. At low energy close to these minima, the situation is thus similar to that of free fermions in the continuum with parabolic energy dispersions centered around the same values of $k$ (see Ref.~\cite{kane02}). As in the main text, the parity-switch effect can be understood more easily by moving to the symmetric gauge defined by the gauge transformation $\tilde{c}_{j,l} = e^{ij(\chi/2)/N} c_{j,l}$.
In this gauge, the symmetry of the system under the effective time-reversal symmetry operator $\Theta = \sigma_x \mathcal{K}$ becomes apparent, where, for $L > 2$, the operator $\sigma_x$ generalizes to a mirror symmetry around the center of the ladder system [exchanging leg indices $l$ and $(L-1)-l$]. As in the main text, $\chi$ imposes the twisted boundary condition $\tilde{c}_{N,l} = e^{i\chi/2} \tilde{c}_{0,l}$ (independent of $L$). Therefore, for $L \geq 2$, the presence of states at $k = 0$ is crucially allowed or forbidden depending on the parity of $\chi/(2\pi)$. As in two-leg ladders, changing $\chi$ by $2\pi$ generically leads to parity switches, as illustrated in Fig.~\ref{fig:multi_small} for $L = 4$. Specifically, the parity of the number of single-particle eigenstates appearing below a fixed energy changes when shifting $\chi \to \chi \pm 2\pi$ if and only if the Fermi energy lies in a gap between Landau levels and the filling is such that an \textit{odd} number $\nu$ of levels is occupied. The parity-switch and $4\pi$ periodicity effects are therefore sensitive to the parity of the number of occupied Landau levels (see Fig.~\ref{fig:multi_small}). We emphasize that, in the limit where $L$ is larger than the typical correlation length of the system (controlled by the gap $\sim 2 h_\perp$), the integer $\nu$ coincides with the \emph{topological} number of chiral edge states appearing at the edges of the ladder (around $l = 0$ and $l = L-1$). These states can already be seen for small $L = 4$ in Fig.~\ref{fig:multi_small}: for $\nu = 1$, for example, two counter-propagating modes (at $k$ and $-k$) are found in the gap, exponentially localized at $l = 0$ and $l = 3$, respectively. \begin{figure}[t] \begin{center} \includegraphics[width=\textwidth]{figS3.png} \caption{Parity-switch effect for weak overall transverse flux $\chi/N \lesssim 2\pi$. The schematic band structure depicted here corresponds to the low-energy spectrum of a ladder with $L = 4$ legs.
This figure is the direct extension to a multi-leg ladder of Fig.~2 in the main text. In the symmetric gauge (see text), bands corresponding to individual ladder legs with index $l$ always cross at $k = 0$, where single-particle eigenstates are present or not depending on the parity of $\chi/(2\pi)$. In the general case $L \geq 2$, a parity-switch effect similar to the one discussed in the main text can be observed whenever an odd number $\nu$ of hybridized bands (or ``Landau levels'') is occupied.} \label{fig:multi_small} \end{center} \end{figure} We remark that the above picture holds provided that the transverse flux threads the lateral surface of the cylindrical ladder system \emph{uniformly} --- at least before inserting or removing a small number of flux quanta to observe parity switches. The small changes $\chi \to \chi \pm 2\pi \equiv \chi + \Delta \chi$ required for parity switching, in contrast, need not be made in a completely uniform way. Additional flux quanta must only be inserted in a way that preserves: (i) translation invariance in the $x$ direction along ladder legs, and (ii) the effective time-reversal symmetry $\Theta$ involving a mirror symmetry about the center of the ladder, in the $y$ direction perpendicular to ladder legs. Condition (i) is satisfied provided that $\Delta \chi$ is uniform in the $x$ direction. Condition (ii), in contrast, does not require $\Delta \chi$ to be uniform in the $y$ direction --- it only requires the flux to be symmetric about the ladder center in the $y$ direction. This has the following consequence for the observation of the parity-switch effect: in ladders with an odd number of legs $L$ and, hence, an even number $L-1$ of unit cells in the $y$ direction, the insertion of a single flux quantum must be done uniformly in the $y$ direction to preserve the symmetry $\Theta$. For $L$ even, instead, a single flux quantum can be inserted through the surface between ladder legs $L/2 - 1$ and $L/2$ without breaking $\Theta$.
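For concreteness, the parity switch can be verified numerically in the minimal two-leg case of the first section. The sketch below (with assumed parameters $t_\parallel = 1$, $\tau = 0.5$, $N = 10$, $\Phi = 0$, and $E_F$ placed at the center of the avoided crossing at $k = 0$) counts the single-particle states below $E_F$ for an even and an odd number of transverse-flux quanta:

```python
import numpy as np

def states_below_fermi(chi_quanta, N=10, t_par=1.0, tau=0.5, phi=0.0):
    """Count lower-band states below E_F for a two-leg ladder (symmetric gauge).

    chi_quanta: number of transverse-flux quanta chi/(2*pi).
    E_F is placed at the center of the avoided crossing at k = 0.
    """
    chi = 2 * np.pi * chi_quanta
    delta = chi / (2 * N)
    e_fermi = -t_par * np.cos(delta)                  # middle of the gap opened at k = 0
    n = np.arange(N)
    k = 2 * np.pi * n / N + phi / N + (chi / 2) / N   # quantized momenta k_n
    eps_lower = -t_par * (np.cos(k) * np.cos(delta)
                          + np.sqrt(np.sin(k) ** 2 * np.sin(delta) ** 2 + tau ** 2))
    return int(np.sum(eps_lower < e_fermi))

n_even = states_below_fermi(chi_quanta=2)   # even number of flux quanta: k = 0 allowed
n_odd = states_below_fermi(chi_quanta=3)    # odd number: k = 0 forbidden
print(n_even, n_odd)
assert n_even % 2 != n_odd % 2              # the occupation parity switches
```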
Finally, we remark that $\chi$ must be modified by $L-1$ quanta (one quantum per unit cell in the $y$ direction) if one wants to ensure that the transverse flux does not induce any current along ladder legs in the limit where the inter-leg coupling vanishes. In that case, parity switches can only be observed in ladders with an even number of legs, where $L-1$ is odd. \section{Extension to multi-leg ladders with large transverse flux --- connection to the Harper-Hofstadter model on a cylinder} We now consider extensions of the two-leg ladder model discussed in the main text to multi-leg ladders with the same transverse flux in the case of large $\chi/N \gtrsim 2\pi$, namely, for transverse fluxes of the order of one flux quantum per plaquette. To investigate this regime, we first notice that our model coincides, for multiple legs, with the standard Harper-Hofstadter model~\cite{harper55,hofstadter76} with transverse flux $\chi$, on a cylinder threaded by an Aharonov-Bohm flux $\Phi$. Large fluxes $\chi$ generically induce the opening of topological gaps crossed by chiral edge states~\cite{bernevig13}, in which case the parity-switch and $4\pi$ periodicity effects investigated in the main text can exhibit an enhanced robustness against disorder. The same is true in the low-flux regime examined in the previous section. Here, however, topological gaps are not only controlled by the inter-leg coupling $h_\perp$, and may be sizeable all across the energy spectrum. As we demonstrate below, all results presented in the main text are directly applicable to cases where the Fermi energy $E_F$ lies in a topological gap crossed, as in the low-flux regime, by an \emph{odd} number $\nu$ of pairs of counter-propagating edge modes (where each mode crosses $E_F$ exactly once, as generically expected).
Counter-propagating states at $E_F$ not only correspond to opposite quasimomenta $k$ and $-k$, but are also located on opposite edges of the multi-leg ladder, leading to a crucial suppression of disorder-induced scattering between them (exponential suppression with increasing number of ladder legs, or increasing ``bulk'' size). The direct extension of the two-leg model defined by Eqs.~\eqref{eq:hpar} and~\eqref{eq:hperp} to multiple legs leads, in the standard Landau gauge, to the following Harper-Hofstadter model in cylinder geometry: \begin{equation} \label{eq:hh} H_\text{H-H} = -\frac{t_\parallel}{2} \sum_{x=0}^{N-1} \sum_{y=0}^{qM-1} \Big[ e^{i y \frac{\chi}{(L-1)N}} c^\dagger_{x+1,y} c_{x,y} + \mbox{h.c.} \Big] - t_\perp \sum_{x=0}^{N-1} \sum_{y=0}^{qM-2} \Big[ c^\dagger_{x,y} c_{x,y+1} + \mbox{h.c.} \Big], \end{equation} where $\chi = 2\pi (L-1)N p/q$ is the (uniform) transverse flux threading the system (where $q$ is a prime number and $p$ can take values from $1$ to $q-1$), $x$ indexes positions along ladder legs, and $y$ indexes ladder legs for a total of $L = qM$ legs, with integer $M$. As in the main text, we consider periodic boundary conditions $c_{x+N,y} = c_{x,y}$, leading to the aforementioned cylinder geometry. \begin{figure}[t] \begin{center} \includegraphics[height=.41\textwidth]{figS4_1.png} \includegraphics[height=.4\textwidth]{figS4_2.png} \caption{\textbf{Left}: Energy spectrum of a multi-leg ladder described by Eq.~\eqref{eq:hh} (Harper-Hofstadter Hamiltonian) for $L = 48$ legs of $N = 60L$ sites each, with transverse flux $2\pi/3$ per unit cell and hopping amplitudes $t_\perp/t_\parallel = 1/2$. In this regime where a macroscopic transverse flux threads the system, the ladder exhibits topological gaps crossed by counter-propagating edge modes (thick colored lines).
For the chosen flux, the energy dispersion of the edge modes exactly coincides with the band structure of a two-leg ladder with the same flux per unit cell (and the same couplings $t_\perp, t_\parallel$). \textbf{Right}: Zoom on the edge states alone in the region delimited by the two vertical dashed lines in the left plot. The smaller black dots correspond to the gapped bands in the left panel. The lower plot is the same as the upper one, except for an additional flux $2\pi/[(L-1)N]$ per unit cell --- leading to the disappearance of states at $k = 0$ and to the parity-switch effect discussed in the main text.} \label{fig:edges} \end{center} \end{figure} The multi-leg ladder model defined by Eq.~\eqref{eq:hh} supports a variety of topological phases well suited for the observation of robust parity-switch and $4\pi$ periodicity effects. For concreteness, we focus on the special case of a transverse flux with $p/q = 1/3$, for which the multi-leg ladder features topological edge states whose energy dispersion exactly coincides with the band structure of the two-leg model examined in the main text (see Fig.~\ref{fig:edges} and discussion below). We start by demonstrating this remarkable correspondence: since $q = 3$, the magnetic unit cell (smallest cell containing an integer number of flux quanta) consists of $3$ regular unit cells, and the spectrum of the system, accordingly, consists of $3$ subbands separated by $q - 1 = 2$ gaps available for topological edge states. This shows that $q = 3$ is a necessary condition for the desired correspondence: the \emph{two} bands of the two-leg ladder can only correspond to topological edge states in the multi-leg ladder if the latter exhibits exactly \emph{two} topological gaps.
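The spectrum shown in the left panel of Fig.~\ref{fig:edges} can be reproduced with a few lines of code: after a Fourier transform along the periodic $x$ direction, the Hamiltonian of Eq.~\eqref{eq:hh} decouples into $L\times L$ tridiagonal blocks, one per crystal momentum $k$. The sketch below is ours (function name and sizes are illustrative, chosen small for speed rather than to match the figure):

```python
import numpy as np

def hofstadter_cylinder_spectrum(L=48, n_k=101, p=1, q=3,
                                 t_par=1.0, t_perp=0.5):
    """Energies eps_n(k) of the Harper-Hofstadter cylinder, Eq. (hh):
    periodic in x (Bloch momentum k), open in y (L legs).  For each k
    the Bloch Hamiltonian is an L x L tridiagonal matrix with diagonal
    -t_par*cos(k + phi*y) and off-diagonal -t_perp, where phi = 2*pi*p/q
    is the transverse flux per plaquette."""
    phi = 2 * np.pi * p / q
    y = np.arange(L)
    ks = np.linspace(-np.pi, np.pi, n_k)
    spectra = np.empty((n_k, L))
    for i, k in enumerate(ks):
        h = np.diag(-t_par * np.cos(k + phi * y))
        h += np.diag(-t_perp * np.ones(L - 1), 1)
        h += np.diag(-t_perp * np.ones(L - 1), -1)
        spectra[i] = np.linalg.eigvalsh(h)   # sorted in ascending order
    return ks, spectra
```

For $p/q = 1/3$ the sorted energies organize into $q = 3$ bulk bands, with in-gap branches (the chiral edge modes of Fig.~\ref{fig:edges}) whose eigenvectors are localized on the first or last legs.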
To establish the full correspondence explicitly, one must choose a gauge in which the crystal momentum $k$ in the $x$ direction along ladder legs is preserved as in the two-leg ladder, i.e., one must use a gauge in which the magnetic unit cell is fully oriented along the $y$ direction perpendicular to ladder legs, such that the system is invariant under usual translations (by one unit cell) in the $x$ direction and invariant under magnetic translations (by $3$ unit cells) in the $y$ direction. In that case, the Schr\"odinger equation corresponding to Eq.~\eqref{eq:hh} can be expressed as \begin{equation} \label{eq:schroedingerEq} \epsilon \psi_{x,y} = - \frac{t_\parallel}{2} e^{i y \frac{\chi}{(L-1)N}} \psi_{x+1,y} - \frac{t_\parallel}{2} e^{-i y \frac{\chi}{(L-1)N}} \psi_{x-1,y} - t_\perp \psi_{x,y+1} - t_\perp \psi_{x,y-1}, \end{equation} where $\psi_{x,y}$ denotes the single-particle wavefunction on site $(x,y)$. Equation~\eqref{eq:schroedingerEq} is valid in the bulk, with straightforward modifications at edges corresponding to open boundary conditions in the $y$ direction. Our goal is to find edge solutions that map to the modes of the two-leg ladder. Taking advantage of translation invariance in the bulk, we look for solutions of the (Bloch) form $\psi_{x,y} = e^{i k x} e^{i k_y y} u_y$, where $u_y$ is a periodic mode function satisfying $u_{y+q} = u_y$, $k \equiv k_x = 2\pi n/N$ with integer $n$ is the crystal momentum in the $x$ direction, and $k_y$ is the analog of the crystal momentum in the $y$ direction [which would take values $k_y = 2\pi m/(qM)$ with integer $m$ if the system were periodic in the $y$ direction]. Plugging this ansatz into Eq.~\eqref{eq:schroedingerEq}, we obtain \begin{equation} \label{eq:schroedingerEq2} \epsilon u_y = - t_\parallel \cos\left[ k + y \frac{\chi}{(L-1)N} \right] u_y - t_\perp e^{i k_y} u_{y+1} - t_\perp e^{-i k_y} u_{y-1}, \end{equation} which looks very similar to the Schr\"odinger equation of the two-leg ladder.
To make the similarity even more apparent, we denote $y \equiv (m, s)$, where $m = 0, \ldots, M-1$ indexes magnetic unit cells and $s = 0, \ldots, q-1$ indexes sites within the latter (or, equivalently, subbands). We then focus on the mode function $u'_y \equiv u'_{m,s} = e^{i k_y s} u_y$, for which Eq.~\eqref{eq:schroedingerEq2} reduces to \begin{equation} \label{eq:schroedingerEq3} \epsilon u'_y = - t_\parallel \cos\left[ k + y \frac{\chi}{(L-1)N} \right] u'_y - t_\perp e^{i q k_y \delta_{s,q-1}} u'_{y+1} - t_\perp e^{-i q k_y \delta_{s,0}} u'_{y-1}. \end{equation} Although the ``crystal momentum'' $k_y$ is not a good quantum number due to edges in the $y$ direction, exponentially decaying edge solutions can be found by making the replacement $k_y \to i \xi$, where $\xi^{-1}$ is the corresponding localization length. By doing so, plane-wave propagation factors $e^{\pm i q k_y}$ become exponential-decay envelope factors $e^{\pm q \xi}$, and Eq.~\eqref{eq:schroedingerEq3} reduces to the Schr\"odinger equation of the two-leg ladder: within a magnetic unit cell (i.e., for fixed $m$), the mode functions $u'_y \equiv u'_{m,s}$ satisfy the same Schr\"odinger equation as the single-particle wavefunctions of a two-leg ladder with the same flux $2\pi p/q$ per unit cell. The edge solutions $u'_y$ of the multi-leg ladder correspond to copies of the states of the two-leg ladder translated by $q$ sites in the $y$ direction, with an exponentially decaying envelope $\propto e^{-\xi y}$. More importantly, the energy dispersion $\epsilon$ of these edge modes coincides with the band structure of the two-leg ladder, as mentioned in the main text.
\begin{figure}[t] \begin{center} \includegraphics[width=.45\textwidth]{figS5_1.png} \quad \includegraphics[width=.45\textwidth]{figS5_2.png} \caption{\textbf{Left}: Ground-state energy $E_{\rm GS}$ as a function of $\Phi$ for the multi-leg ladder described by Eq.~\eqref{eq:hh} (Harper-Hofstadter Hamiltonian) with on-site disorder as described by Eq.~\eqref{eq:hdis}. The zero of energy is set as the minimum of $E_{\rm GS}$ in the absence of disorder. As in Fig.~\ref{fig:edges}, we consider a system with transverse flux $2\pi/3$ per unit cell and hopping amplitudes $t_\perp/t_\parallel = 1/2$. The number of sites per ladder leg is fixed ($N = 780$), and we examine cases corresponding to different numbers of legs $L$. In the clean case (black solid line), and for an odd number $N_f$ of fermions with Fermi energy in the lower topological gap [we choose $N_f = (L - 2)N/3 + 3N/4$], the energy is continuous, minimum at $\Phi = 0$, and the same for all $L$. When disorder is added (dashed-dotted curves), $E_{\rm GS}$ is slightly shifted in a random direction along the $\Phi$ axis and discontinuities in $\partial_\Phi E_{\rm GS}$ are smoothed out at small values of $L$ where disorder-induced scattering between edge states at the Fermi energy is not entirely suppressed. The solid-dotted lines correspond to the same disorder realization with $\chi$ shifted by $2\pi(L-1)$ (odd number of transverse-flux quanta), showing the robustness of the parity-switch effect.
\textbf{Right}: Average difference $\Delta$ between maximum and minimum of $E_{\rm GS}$ as a function of $\Phi$ for increasing number of legs $L$, demonstrating the (exponential) increase in the robustness of $\Delta$ (and, hence, the enhanced robustness of the parity switch of persistent currents) with increasing $L$.} \label{fig:single} \end{center} \end{figure} The spectrum of the multi-leg ladder with transverse flux $2\pi p/q = 2\pi/3$ per unit cell is shown in Fig.~\ref{fig:edges}: the system is in a topological phase~\cite{bernevig13} with $q = 3$ subbands and $q-1 = 2$ topological gaps induced by the macroscopic transverse flux $\chi$. Each gap is crossed by a pair of counter-propagating edge modes located on opposite edges of the cylinder. As expected, the energy dispersion of these topological modes coincides with the band structure of a two-leg ladder with the same flux per unit cell (Eqs.~\ref{eq:hpar} and~\ref{eq:hperp} with $\chi = 2\pi N/3$). As argued above and in the main text, the parity-switch effect can also be observed in that case, induced by the disappearance/appearance of states at the time-reversal invariant quasimomentum $k = 0$ as $\chi$ is varied by $\pm 2\pi$: for an arbitrary energy level $E$ set in one of the two topological gaps, the parity of the number of single-particle eigenstates below $E$ switches every time $\chi$ is modified by $\pm 2\pi$. As in the low-flux regime discussed in the previous section, transverse-flux quanta should be inserted in a uniform way or, more broadly, in a way that preserves translation invariance in the $x$ direction along ladder legs, and the effective time-reversal symmetry $\Theta$. 
To demonstrate that topology enhances the robustness of the parity-switch effect against disorder, we solve numerically the Harper-Hofstadter model defined by Eq.~\eqref{eq:hh} in the presence of local (on-site) disorder of the form \begin{equation} \label{eq:hdis} H_{\rm disorder} = \sum_{x,y} \varepsilon_{x,y} c^\dagger_{x,y} c_{x,y}, \end{equation} with on-site energies $\varepsilon_{x,y}$ uniformly distributed in the window $[-W,W]$ (where $W$ can be regarded as the disorder ``strength''). We examine the $\Phi$ dependence of the ground-state energy $E_{\rm GS}$ of the system for a fixed odd number of fermions with Fermi energy in the lower topological gap. For ``clean'' systems ($W = 0$), the contribution to $E_{\rm GS}$ of fermions in the bulk (``valence'' band) is the same irrespective of the number of ladder legs (chosen as $L = 3M + 2$ with integer $M$, such that the system consists of an integer number of magnetic unit cells in the $y$ direction). The derivative $\partial_\Phi E_{\rm GS}$ is proportional to the persistent current along the periodic $x$ direction of the cylinder, and completely filled bands do not contribute to this current. The left panel of Fig.~\ref{fig:single} shows $E_{\rm GS}$ as a function of $\Phi$ in the clean case and for individual realizations of the disorder potential, for an increasing number of legs $L$. For single realizations of the disorder, the energy generically does not exhibit a minimum at $\Phi = 0$ anymore, which corresponds to the existence of a finite persistent current in the absence of any Aharonov-Bohm flux. This can be understood by noticing that the nonzero transverse flux $\chi$ induces chiral currents (as can be seen from the existence of counter-propagating edge modes), and that disorder generically favors a specific chirality by breaking the ``mirror'' symmetry between the two edges of the cylinder (the analog of the time-reversal symmetry $\Theta$ defined in the main text for a two-leg ladder).
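The numerical experiment can be sketched as follows: the Aharonov-Bohm flux $\Phi$ enters as a uniform twist $\Phi/N$ on every hopping along the periodic $x$ direction, the on-site energies of Eq.~\eqref{eq:hdis} are drawn uniformly from $[-W,W]$, and $E_{\rm GS}$ is the sum of the $N_f$ lowest single-particle energies. This is a minimal exact-diagonalization sketch of ours (the sizes are deliberately tiny and do not match those of the figure):

```python
import numpy as np

def ground_state_energy(phi_ab, L=5, N=12, p=1, q=3, t_par=1.0,
                        t_perp=0.5, n_f=None, eps=None):
    """E_GS(Phi) for the Harper-Hofstadter cylinder, Eq. (hh), plus the
    on-site disorder of Eq. (hdis).  `eps` is an (N, L) array of on-site
    energies (None = clean case); `phi_ab` is the Aharonov-Bohm flux."""
    phi = 2 * np.pi * p / q          # transverse flux per plaquette
    n_sites = N * L
    if eps is None:
        eps = np.zeros((N, L))
    if n_f is None:
        n_f = n_sites // 3           # Fermi energy in the lower gap
    idx = lambda x, y: x * L + y
    H = np.zeros((n_sites, n_sites), dtype=complex)
    for x in range(N):
        for y in range(L):
            H[idx(x, y), idx(x, y)] = eps[x, y]
            # leg hopping (periodic in x), transverse-flux and AB phases
            t = -0.5 * t_par * np.exp(1j * (phi * y + phi_ab / N))
            H[idx((x + 1) % N, y), idx(x, y)] += t
            H[idx(x, y), idx((x + 1) % N, y)] += np.conj(t)
            if y < L - 1:            # rung hopping, open boundaries in y
                H[idx(x, y + 1), idx(x, y)] += -t_perp
                H[idx(x, y), idx(x, y + 1)] += -t_perp
    energies = np.linalg.eigvalsh(H)
    return energies[:n_f].sum()
```

Scanning `phi_ab` over $[-\pi,\pi]$ for several disorder realizations, with and without an extra transverse-flux quantum, reproduces the shift and smoothing behavior described above; by gauge invariance, $E_{\rm GS}$ is $2\pi$-periodic in $\Phi$.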
For $L = 2$, as expected, discontinuities in $\partial_\Phi E_{\rm GS}$ are generally ``smoothed out'' by disorder as a result of Anderson localization~\cite{cheung88,bouzerar94,*filippone16}. More importantly, however, the effect of disorder is clearly suppressed with increasing number of ladder legs $L$ (i.e., with increasing cylinder width). This suppression is a direct consequence of the topological nature of the two edge states at the Fermi energy: since the latter are located on opposite edges of the cylinder, disorder-induced scattering between them is strongly suppressed (exponentially with $L$). The right panel of Fig.~\ref{fig:single} shows the average difference $\Delta$ between the minimum and maximum of $E_{\rm GS}$ as a function of $\Phi$: as expected, $\Delta$ becomes more stable against disorder as the number of legs $L$ increases. We have also verified that the parity-switch effect, corresponding to an effective shift $\Phi \to \Phi \pm \pi$ induced by $\chi \to \chi \pm 2\pi$, is increasingly robust against disorder for increasing $L$ (solid-dotted lines in the left panel of Fig.~\ref{fig:single}). \bibliographystyle{apsrev4-1}
\section{Introduction}\label{sec:intro} The region of space above the Earth hosts a multitude of satellites with different purposes: Earth observation, including remote-sensing and meteorological satellites, the International Space Station (ISS), the Space Shuttle, the Hubble Space Telescope. All of them move in the so-called Low-Earth-Orbit (hereafter LEO) region, which ranges between 90 and 2\,000 $km$ of altitude above the Earth's surface. Satellites in LEO are characterized by a high orbital speed and may have different inclinations, even reaching very high values as in the case of polar orbits, among which Sun-synchronous satellites can be found. Satellites can be permanently located in LEO or they can just cross that region, as in the case of \sl highly elliptical orbits, \rm characterized by a large eccentricity that leads to big excursions, possibly across the LEO region. Being easy to reach, LEOs are convenient for building space platforms and installing instruments. The disadvantages of placing a satellite in LEO are due to the closeness to the Earth and to the air drag. Indeed, the Earth's oblateness plays a key role and must be accurately modeled by including a suitable number of coefficients of the series expansion of the geopotential (compare with \cite{FV,Liu}). On the other hand, the presence of the atmosphere produces an air drag, which acts as a dissipative force (see, e.g., \cite{BV2004,Chao,Del1991,Gaias2015}). Its strength depends on the altitude, since the air density decreases with the distance from the Earth's surface, and it may change due to the Solar activity (for density models we refer to \cite{Jacchia,Hedin0, Hedin,ISO}). The drag provokes a secular decay of the satellite's orbit on time scales which depend on the altitude of the satellite, hence on the density of the atmosphere.
Besides the gravitational attraction of the Earth, the air drag and the Earth's oblateness, a comprehensive model includes also the influence of the Moon, the attraction of the Sun and the Solar radiation pressure (see \cite{Kaula1962,CGfrontier,CGPbif,EH}). We refer to \cite{ADRRVDQM, CGmajor, CGminor, CGexternal, CEGGP2016, DRADVR15, Gedeon, GDGR2016, LDV, Rosengren2013, RS2, VDLC} (and references therein) for a description of the dynamics at distances from the Earth higher than LEO.\\ The large number of satellites in LEO unavoidably generates a huge amount of space junk, as a consequence of collisions between satellites or of the fact that the satellites' remnants are left there at the end of their operational life. The spatial density of the debris has a peak around 800 $km$, as a consequence of the collision between the satellites Iridium and Cosmos in 2009 and the breakup of Fengyun-1C in 2007. Collisions with space debris might provoke dramatic events, due to the high relative speed at impact. The U.S. Space Surveillance Network tracks in LEO about 400\,000 debris fragments between 1 $cm$ and 10 $cm$ in size, and 14\,000 debris larger than 10 $cm$. Objects of 1 $cm$ size might damage a spacecraft and even break the ISS shields; debris of 10 $cm$ size might provoke a fragmentation of a satellite.\\ More than half of the total amount of space debris is in LEO, thus increasing the interest in the dynamical behavior of objects in this region, which is the main goal of the present work. The knowledge of the dynamics of the space debris can considerably contribute to the development of mitigation measures, most notably through the design of suitable post-mission disposal orbits (see, e.g., \cite{DSBS}). Among the possible mitigation strategies, one can provoke a re-entry of the debris in the lower atmosphere or rather a transfer to an orbit with a different lifetime.
It is therefore of paramount importance to know whether an object is located in a regular, resonant or chaotic region, as well as how much time it will spend in such regions. This paper aims to contribute an answer to these questions. This work extends the research performed in \cite{CGmajor,CGminor}, where analytical and numerical methods, mostly adopting a Hamiltonian formalism, have been used to study the dynamics of objects within resonances located at large distances from the Earth (the so-called geostationary and GPS regions at distances, respectively, equal to 42\,164 $km$ and 26\,560 $km$). We also mention \cite{EH, FV, LDV, VDLC} for accurate modeling and analytical studies of space debris dynamics. With respect to \cite{CGmajor,CGminor}, the novelty of the current work is that, in LEO, the model becomes more complicated: the geopotential plays a stronger role, the Earth being very close, and moreover the dynamics is dissipative because of the air drag. The dynamics is described through a set of equations of motion which include the geopotential, the atmospheric drag and the contribution of Sun and Moon. In particular, we study four specific resonances located in LEO at different altitudes; such resonances are due to a commensurability between the orbital period of the debris and the period of rotation of the Earth. The geopotential is expanded in spherical harmonics, although only a limited number of coefficients is taken into account, precisely those which contribute to shape the dynamics, being the dominant terms in a specific region of orbital parameters. The atmospheric drag is modeled through a set of equations which are first averaged with respect to the mean anomaly and then translated in terms of the Delaunay actions. A qualitative study of the resonances is based on the construction of a \sl toy model, \rm which provides a sound analytical support to the numerical investigation of the problem.
We are thus able to draw conclusions about the role of the dissipation, the location and stability character of the equilibria, and the occurrence of temporary capture into a resonance or rather a straight passage through it. Once the results for the toy model are obtained, we pass to investigate a problem which includes the change of the local density of the atmosphere due to the effect of the solar cycle and the gravitational influence of Sun and Moon. The study leads to interesting results, which can be used in concrete cases to make a thorough analysis of the dynamics of space debris and even to design possible disposal orbits, or rather to provide practical solutions for control and maintenance of LEO satellites. Due to dissipative effects, frequent maneuvers are required to keep the orbital altitude. Our study reveals strong evidence that there exist equilibrium points in LEO that might be used in practice by parking operational satellites in their close vicinity, thus reducing the cost of maintenance.\\ This paper is organized as follows. In Section~\ref{sec:equations_of_motion} we provide the equations of motion in Delaunay action-angle coordinates derived from a Hamiltonian including the Keplerian part and the effect of the oblateness of the Earth. The geopotential is expanded in Section~\ref{sec:geopotential_Ham} using a classical development in terms of the spherical harmonic coefficients. A model for the atmospheric drag is provided in Section~\ref{sec:diss_effect_drag}. Resonances, equilibria and their stability are analyzed in Section~\ref{sec:qualitative_resonance}, while the effects of the solar cycle and of lunisolar perturbations are studied in Section~\ref{sec:results}. \section{Equations of motion}\label{sec:equations_of_motion} We consider a small body, say $S$, located in the LEO region around the Earth. We study its perturbed motion, taking into account the oblateness of the Earth, the rotation of our planet and the atmospheric drag.
To introduce the equations of motion, we use the action--angle Delaunay variables, denoted as $(L,G,H,M,\omega,\Omega)$, which are related to the orbital elements $(a,e,i,M,\omega,\Omega)$ by the expressions \begin{equation}\label{LGH_aei} L=\sqrt{\mu_E a}\,,\qquad G=L \sqrt{1-e^2}\,, \qquad H=G \cos i\,, \end{equation} where $a$ is the semimajor axis, $e$ the eccentricity, $i$ the inclination, $M$ the mean anomaly, $\omega$ the argument of perigee, $\Omega$ the longitude of the ascending node and $\mu_E={\mathcal G} m_E$ with ${\mathcal G}$ the gravitational constant and $m_E$ the mass of the Earth. We denote by $\mathcal{H}$ the geopotential Hamiltonian (see \cite{CGmajor}), which can be written as \beq{H} \mathcal{H}(L,G,H,M,\omega,\Omega,\theta)=-{\mu^2_E\over {2L^2}}+\mathcal{H}_{earth}(L,G,H,M,\omega,\Omega,\theta)\ , \end{equation} where $\theta$ is the sidereal time, $-{\mu^2_E\over {2L^2}}$ is the Keplerian part and $\mathcal{H}_{earth}$ represents the perturbing function (for which an explicit approximate expression is given in Section~\ref{sec:geopotential_Ham}). We denote by $F_{_L}$, $F_{_G}$, $F_{_H}$ the components of the dissipative effects due to the atmospheric drag, whose explicit expressions are given in Section~\ref{sec:diss_effect_drag}. Then, the dynamical equations of motion are given by \begin{equation} \label{canonical_eq} \begin{split} \dot{M}=\frac{\partial \mathcal{H}}{\partial L}\,,\qquad \quad & \qquad \dot{\omega}=\frac{\partial \mathcal{H}}{\partial G}\,, \ \quad \qquad \qquad \dot{\Omega}=\frac{\partial \mathcal{H}}{\partial H}\, ,\\ \dot{L}=-\frac{\partial \mathcal{H}}{\partial M}+F_{_L}\,, & \qquad \dot{G}= -\frac{\partial \mathcal{H}}{\partial \omega}+F_{_G}\,, \qquad \dot{H}= -\frac{\partial \mathcal{H}}{\partial \Omega}+F_{_H}\ . 
\end{split} \end{equation} \section{The geopotential Hamiltonian} \label{sec:geopotential_Ham} Following \cite{Kaula}, we expand $\mathcal{H}_{earth}$ as \beq{Rearth} \mathcal{H}_{earth}=- {{\mu_E}\over a}\ \sum_{n=2}^\infty \sum_{m=0}^n \Bigl({R_E\over a}\Bigr)^n\ \sum_{p=0}^n \overline{F}_{nmp}(i)\ \sum_{q=-\infty}^\infty G_{npq}(e)\ \overline{S}_{nmpq}(M,\omega,\Omega,\theta)\ , \end{equation} where $R_E$ is the Earth's radius, $\overline{F}_{nmp}$ the normalized inclination function defined as $$ \overline{F}_{nmp}=\sqrt{\frac{(2-\delta_{0m}) (2n+1)(n-m)!}{(n+m)!}}\, F_{nmp}\ , $$ where $\delta_{0m}$ is the Kronecker delta, the inclination and eccentricity functions $F_{nmp}$, $G_{npq}$ are computed by well--known recursive formulae (see, e.g., \cite{Kaula, Chao, CGmajor}), while $\overline{S}_{nmpq}$ is expressed as \beq{Snmpq} \overline{S}_{nmpq}=\left[% \begin{array}{c} \overline{C}_{nm} \\ -\overline{S}_{nm} \\ \end{array}% \right]_{n-m \ odd}^{n-m \ even} \cos \Psi_{nmpq}+ \left[% \begin{array}{c} \overline{S}_{nm} \\ \overline{C}_{nm} \\ \end{array}% \right]_{n-m \ odd}^{n-m \ even} \sin \Psi_{nmpq}\ , \end{equation} where $\overline{C}_{nm}$ and $\overline{S}_{nm}$ are, respectively, the cosine and sine normalized coefficients of the spherical harmonics potential terms (see Table~\ref{table:CS} for concrete values) and \beq{psi} \Psi_{nmpq}=(n-2p) \omega+(n-2p+q)M+m(\Omega-\theta)\ . \end{equation} The normalized coefficients $\overline{C}_{nm}$ and $\overline{S}_{nm}$ are related to the geopotential coefficients $C_{nm}$ and $S_{nm}$ through the expressions (see \cite{Kaula, MG}): $$ \left(% \begin{array}{c} \overline{S}_{nm} \\ \overline{C}_{nm} \\ \end{array}% \right)= \sqrt{\frac{(n+m)!}{(2-\delta_{0m}) (2n+1)(n-m)!}} \left(% \begin{array}{c} {S}_{nm} \\ {C}_{nm} \\ \end{array}% \right).
$$ As we shall see later, we consider resonant motions which involve the rate of variations of the mean anomaly and the sidereal angle through a linear combination with integer coefficients (see Definition~\ref{def:resonance} below). We shall be interested in specific resonances, which will correspond to linear combinations involving the index $m$ with $m\geq 11$ (see Table~\ref{table:res_location} below). Since we deal with harmonic terms with large order (precisely $m\geq 11$), we use the normalized coefficients, which have the advantage of being more uniform in magnitude than the unnormalized coefficients. In fact, the size of the normalized coefficients is expressed approximately by the empirical Kaula's rule (see \cite{Kaula}): $\overline{C}_{nm}, \, \overline{S}_{nm} \simeq 10^{-5}/n^2$, and therefore they decay less rapidly with $n$. This allows us to avoid some computational complications which might appear when working with very small numbers, such as $C_{nm}$, $S_{nm}$ for large $n$, or very big numbers, which are involved in the computation of $F_{nmp}$. As common in geodesy, we introduce also the quantities $\overline{J}_{nm}$ defined by $$ \overline{J}_{nm} = \sqrt{\overline{C}_{nm}^2+\overline{S}_{nm}^2} \quad \textrm{if} \ m\neq 0\ , \qquad \overline{J}_{n0} \equiv \overline{J}_n= -\overline{C}_{n0} $$ and the quantities $\lambda_{nm}$ defined through the relations \begin{equation}\label{lambda_nm} \overline{C}_{nm}=-\overline{J}_{nm} \cos(m \lambda_{nm}) \ , \qquad \overline{S}_{nm}=-\overline{J}_{nm} \sin(m \lambda_{nm}) \ . \end{equation} The coefficients $ \overline{J}_{nm}$ in units of $10^{-6}$ as well as the values of $\lambda_{nm}$, involved in the study of the resonances, are given in Table~\ref{table:CS}; they are computed according to the Earth's gravitational model EGM2008 (\cite{EGM2008}).
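Kaula's rule can be checked directly against a few EGM2008-derived values of $\overline{J}_{nm}$ from Table~\ref{table:CS}; a minimal sketch of ours (the three coefficients below are copied from the table, in units of $10^{-6}$):

```python
# Kaula's empirical rule for the normalized coefficients: Jbar_nm ~ 1e-5 / n^2
kaula = lambda n: 1e-5 / n**2

# a few normalized coefficients Jbar_nm from the table (units of 1e-6)
table_values = {(15, 12): 0.036, (16, 14): 0.0432, (20, 13): 0.0282}

for (n, m), jbar in table_values.items():
    ratio = (jbar * 1e-6) / kaula(n)
    print(f"n={n:2d}, m={m:2d}:  Jbar = {jbar*1e-6:.2e}"
          f"   Kaula estimate = {kaula(n):.2e}   ratio = {ratio:.2f}")
```

The ratios are of order unity, confirming that the normalized coefficients decay roughly as $n^{-2}$, i.e., much more slowly than the unnormalized ones.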
\begin{table}[h] \begin{tabular}{|c|c|c|c||c|c|c|c||c|c|c|c|} \hline \\ $n$ & $m$ & $\overline{J}_{nm}$ & $\lambda_{nm}$ & $n$ & $m$ & $\overline{J}_{nm}$ & $\lambda_{nm}$ & $n$ & $m$ & $\overline{J}_{nm}$ & $\lambda_{nm}$ \\ \hline 2 &0&484.1651&$0^{\circ}$ & 15& 11& 0.0186 & $-7_{\cdot}^{\circ}82$ & 19 & 11 & 0.0193 & $19_{\cdot}^{\circ}31$ \\ 3 &0& -0.9572 & $0^{\circ}$ & 15 & 12 & 0.036 & $-2_{\cdot}^{\circ}14$ & 19 & 12 & 0.0098 & $-6_{\cdot}^{\circ}29$\\ 4 & 0 & -0.54 & $0^{\circ}$ & 15 & 13 & 0.0287 & $ 0_{\cdot}^{\circ}70$ & 19 & 13 & 0.0295 & $5_{\cdot}^{\circ}78$ \\ 5 & 0& -0.0687 & $0^{\circ}$ & 15 & 14& 0.0249 & $7_{\cdot}^{\circ}29$ & 19 & 14 & 0.0137 & $4_{\cdot}^{\circ}98$ \\ 6 & 0 & 0.15 & $0^{\circ}$ & 16 & 11 & 0.0194 & $15_{\cdot}^{\circ}50$ & 20 & 11 & 0.024 & $11_{\cdot}^{\circ}55$ \\ 7 & 0& -0.0905 & $0^{\circ}$ & 16 & 12 & 0.0207 & $16_{\cdot}^{\circ}58$ & 20 & 12 & 0.0193 & $-5_{\cdot}^{\circ}86$ \\ 11 & 11 & 0.0836 & $11_{\cdot}^{\circ}23$ & 16 & 13& 0.0138 &$14_{\cdot}^{\circ}18$ & 20 & 13 & 0.0282 & $14_{\cdot}^{\circ}91$ \\ 12 & 11 & 0.013 & $13_{\cdot}^{\circ}70$ & 16 & 14& 0.0432 & $4_{\cdot}^{\circ}53$ & 20 & 14 & 0.0184 & $ 9_{\cdot}^{\circ}19 $\\ 12 & 12 & 0.0114 & $6_{\cdot}^{\circ}47$ & 17 & 11& 0.0195 & $-3_{\cdot}^{\circ}15$ & 21 & 12 & 0.0151 & $-6_{\cdot}^{\circ}44$ \\ 13 & 11 & 0.0448 & $0_{\cdot}^{\circ}56$ & 17 & 12 & 0.0353 & $17_{\cdot}^{\circ}96$ & 21 & 13 & 0.0239 & $-2_{\cdot}^{\circ}75$ \\ 13 & 12 & 0.0933 & $-5_{\cdot}^{\circ}87$ & 17 & 13 & 0.026 & $17_{\cdot}^{\circ}74$ & 21 & 14 & 0.0216 & $ 14_{\cdot}^{\circ}28 $\\ 13 & 13 & 0.0916 & $-3_{\cdot}^{\circ}70$ & 17 & 14 & 0.0184 & $-2_{\cdot}^{\circ}79 $ & 22 & 13 & 0.026 & $-3_{\cdot}^{\circ}74$ \\ 14 & 11 & 0.0421 & $10_{\cdot}^{\circ}17$ & 18 & 11 & 0.0072 & $-1_{\cdot}^{\circ}56$ & 22 & 14 & 0.0137 & $ 15_{\cdot}^{\circ}53$ \\ 14 & 12 & 0.0323 & $8_{\cdot}^{\circ}77$ & 18 & 12 & 0.034 & $2_{\cdot}^{\circ}43$ & 23 & 14 & 0.0071 & $12_{\cdot}^{\circ}01$ \\ 14 & 13 
& 0.0555 & $18_{\cdot}^{\circ}04$ & 18 & 13 & 0.0355 & $6_{\cdot}^{\circ}14$ & & & & \\ 14 & 14 & 0.0521 & $0_{\cdot}^{\circ}38$ & 18 & 14& 0.0153 & $4_{\cdot}^{\circ}08$ & & & &\\ \hline \end{tabular} \vskip.1in \caption{The values of $\overline{J}_{nm}$ (in units of $10^{-6}$) and the quantities $\lambda_{nm}$ computed from \cite{EGM2008}.} \label{table:CS} \end{table} \subsection{Approximation of the Hamiltonian}\label{sec:secres} The expansion of the disturbing function $\mathcal{H}_{earth}$ in \equ{Rearth} contains an infinite number of trigonometric terms, but the long term variation of the orbital elements is mainly governed by the secular and resonant terms. Moreover, for the gravitational resonances located in the GEO and MEO regions, we pointed out in \cite{CGmajor,CGexternal,CGminor} that just some of these terms are really relevant for the dynamics. In the present work, we perform the study of the effects of the \sl gravitational resonances \rm (also called \sl tesseral \rm resonances, see \cite{Gedeon,EH}), within the LEO region. The precise definition of resonance is given as follows. \vskip.1in \begin{definition}\label{def:resonance} A tesseral (or gravitational) resonance of order $j:k$ with $j$, $k\in{\mathbb Z}\backslash\{0\}$ occurs when the orbital period of the debris and the rotational period of the Earth are commensurable of order $j:k$. In terms of the orbital elements, a $j:k$ gravitational resonance occurs if $$ k\ \dot{M}-j\ \dot{\theta} = 0\ , \qquad j,k \in \mathbb{N}\ . 
$$ \end{definition} Following \cite{CGmajor,CGexternal,CGminor}, we approximate $\mathcal{H}_{earth}$ by $$ \mathcal{H}_{earth}=\mathcal{H}^{sec}_{earth}+\mathcal{H}_{earth}^{res}+\mathcal{H}_{earth}^{nonres}\cong \sum_{n=2}^N \sum_{m=0}^n \sum_{p=0}^n \sum_{q=-\infty}^{\infty} \mathcal{T}_{nmpq} \ , $$ where $\mathcal{H}^{sec}_{earth}$, $\mathcal{H}_{earth}^{res}$, $\mathcal{H}_{earth}^{nonres}$ denote, respectively, the secular, resonant and non--resonant contributions to the Earth's potential, the approximation index $N\in{\mathbb Z}_+$ will be given later, while the coefficients $\mathcal{T}_{nmpq}$ are defined by: \begin{equation}\label{T_nmpq_term} \mathcal{T}_{nmpq}=-\frac{\mu_E R_E^n}{a^{n+1}}\ \overline{F}_{nmp}(i)G_{npq}(e) \overline{S}_{nmpq}(M, \omega, \Omega , \theta)\ . \end{equation} In the following we describe the secular part of the expansion \equ{Rearth} by computing the average over the fast angles, say $\mathcal{H}_{earth}^{sec}$, and the resonant part associated to a given $j:k$ tesseral resonance, say $\mathcal{H}_{earth}^{resj:k}$. Since the value of the oblateness coefficient $\overline{J}_2=\overline{J}_{20}$ is much larger than the value of any other zonal coefficient (see Table~\ref{table:CS}), we consider the same secular part for all resonances; the explicit expression of the secular part will be given in Section~\ref{sec:secular}. Concerning the resonant part, say $\mathcal{H}_{earth}^{res\,j:k}$, it is essential to retain a minimum number of significant terms in practical computations. The criteria for selecting these terms are described in Section~\ref{sec:relevant}. \subsubsection{The secular part of $\mathcal{H}_{earth}$}\label{sec:secular} With reference to the expression for $\overline{S}_{nmpq}$ given in \equ{Snmpq}-\equ{psi}, the secular terms correspond to $m=0$ and $n-2p+q=0$. From Table~\ref{table:CS}, it is clear that $\overline{J}_2\gg \overline{J}_n$ for all $n \in \mathbb{N}$, $n>2$. 
Therefore, in the secular part the most important harmonic is $\overline{J}_2$. Moreover, from Table~\ref{table:CS} it follows that $|\overline{J}_3|$ and $|\overline{J}_4|$ are larger than $|\overline{J}_n|$, $n>4$. Since we are interested in orbits having small eccentricities, for our purposes it is enough to consider just a few harmonic terms in the expansion of the secular part. In practical computations, for all resonances considered in the forthcoming sections, we approximate the secular part with the following expression, computed e.g. in \cite{CGmajor}: \beqa{Rsec} \mathcal{H}_{earth}^{sec}&=&\frac{\sqrt{5} \mu_E R^2_E \overline{J}_{2}}{a^3} \Bigl(\frac{3}{4} \sin^2 i -\frac{1}{2}\Bigr) (1-e^2)^{-3/2} \nonumber\\ &+&\frac{2 \sqrt{7}\mu_E R^3_E \overline{J}_{3}}{a^4} \Bigl(\frac{15}{16} \sin^3 i -\frac{3}{4} \sin i\Bigr) e (1-e^2)^{-5/2} \sin \omega \nonumber \\ &+&\frac{3 \mu_E R^4_E \overline{J}_{4}}{a^5} \Bigl[\Bigl(-\frac{35}{32} \sin^4 i +\frac{15}{16} \sin^2 i\Bigr) \frac{3e^2}{2}(1-e^2)^{-7/2} \cos(2\omega) \nonumber \\ &+& \Bigl(\frac{105}{64} \sin^4 i -\frac{15}{8} \sin^2 i+\frac{3}{8}\Bigr) (1+\frac{3e^2}{2})(1-e^2)^{-7/2} \Bigr]\ . \end{eqnarray} It is important to stress that the numerical results, obtained by taking into account the above approximation of the secular part, may be analytically explained by considering only the influence of $\overline{J}_2$; this will lead us to consider a \sl toy model, \rm which describes the dynamics well, as will be explained in Section~\ref{sec:qualitative_resonance}. The results based on the toy model will allow us to draw conclusions about the importance of $\overline{J}_2$ with respect to the other harmonics. Clearly, in view of \eqref{LGH_aei}, $\mathcal{H}_{earth}^{sec}$ can be written as a function of $L$, $G$, $H$ and $\omega$.
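For illustration, the dominant $\overline{J}_2$ term reproduces the textbook first-order secular drifts of the node and perigee. The sketch below is ours, not part of the paper's numerics; it uses the unnormalized $J_2 = \sqrt{5}\,\overline{J}_{20}$ obtained from Table~\ref{table:CS}, and assumed standard values for $\mu_E$ and $R_E$:

```python
import numpy as np

MU_E = 398600.4418               # km^3/s^2 (assumed standard value)
R_E = 6378.137                   # km, Earth's equatorial radius (assumed)
J2 = np.sqrt(5) * 484.1651e-6    # unnormalized J2 from the normalized Jbar_20

def j2_secular_rates(a, e, i):
    """First-order J2 secular rates (rad/s) of the node Omega and of the
    argument of perigee omega, for semimajor axis a [km], eccentricity e
    and inclination i [rad] (textbook formulas, consistent with the
    Jbar_2 term of the secular Hamiltonian)."""
    n = np.sqrt(MU_E / a**3)                           # Keplerian mean motion
    fac = 1.5 * n * J2 * (R_E / a)**2 / (1.0 - e**2)**2
    Omega_dot = -fac * np.cos(i)                       # nodal drift
    omega_dot = 0.5 * fac * (5.0 * np.cos(i)**2 - 1.0)
    return Omega_dot, omega_dot
```

For an 800-$km$ Sun-synchronous orbit ($i \simeq 98.6^{\circ}$) the node drifts eastward by about $0.986^{\circ}$ per day, i.e., $360^{\circ}$ per year, as it should.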
\subsubsection{The resonant part of $\mathcal{H}_{earth}$}\label{sec:resonant} From \equ{Snmpq}-\equ{psi} we see that the terms associated to a resonance of order $j:k$ correspond to $j(n-2p+q)=k\, m$. We consider the resonant part corresponding to the following resonances located in the close vicinity of the Earth: 11:1, 12:1, 13:1 and 14:1. As we will show in Table~\ref{table:res_location} below, the resonances 11:1, 12:1, 13:1, 14:1 range from an altitude equal to 2\,146.61 $km$ down to an altitude equal to 880.55 $km$. Hence, we consider $k=1$ and, among all possible combinations, the solution for which $j=m$ and $n-2p+q=1$ is relevant for our purposes. Since the majority of the infinitesimal bodies in the LEO region move on almost circular orbits, we focus our analysis on small eccentricities with $e\in [0,0.02]$. For such orbits, only some resonant harmonic terms are significant for the dynamics; their selection will be made by using an analytical argument. In fact, we will see that the resonant part can be approximated with a high degree of accuracy by the sum of some terms, whose formal expression is: \begin{equation} \label{Resonant_part} \mathcal{H}_{earth}^{res\,m:1}= \left\{ \begin{array}{lc} \sum_{\alpha=0}^N \, A^m_\alpha(L,G,H) \cos(\sigma_{m 1} -m\, \lambda_{m+2\alpha, \, m})\,,& \textrm{if } m=11 \textrm{ or } m=13\,, \\ \sum_{\alpha=0}^N \, A^m_\alpha(L,G,H) \sin(\sigma_{m 1} -m \,\lambda_{m+2\alpha+1, \, m})\,, & \textrm{if } m=12 \textrm{ or } m=14\,, \\ \end{array}% \right .
\end{equation} where the resonant angle is defined by \begin{equation}\label{sigma_angle} \sigma_{m 1}=M-m \theta+ \omega +m \Omega\,, \end{equation} $N$ is a natural number sufficiently large so that the approximation of the resonant part includes all harmonic terms with high magnitude (in this work we take $N=4$), $A^m_\alpha(L,G,H)$ might be computed by using \eqref{Rearth} and \eqref{LGH_aei}, once $\overline{F}_{nmp}$ and $G_{npq}$ are known, while the values of the constants $\lambda_{nm}$ are given in Table~\ref{table:CS}. In a more compact notation, $\mathcal{H}_{earth}^{res\,m:1}$ is written as: \begin{equation}\label{Resonant_part_2} \mathcal{H}_{earth}^{res\,m:1} = \mathcal{A}_0^{(m)}(L,G,H) \cos (\sigma_{m 1} -\varphi_0^{(m)}(L,G,H))\,, \end{equation} where $\mathcal{A}_0^{(m)}(L,G,H)$ and $\varphi_0^{(m)}(L,G,H)$ are defined through the relations \begin{equation}\label{A_varphi_11_13} \begin{split} & \mathcal{A}_0^{(m)}(L,G,H) \cos \varphi_0^{(m)}(L,G,H)=\sum_{\alpha=0}^N A_\alpha^m(L,G,H) \cos (m \lambda_{m+2\alpha , m})\,,\\ & \mathcal{A}_0^{(m)}(L,G,H) \sin \varphi_0^{(m)}(L,G,H)=\sum_{\alpha=0}^N A_\alpha^m(L,G,H) \sin (m \lambda_{m+2\alpha , m}) \qquad \textrm{if } m=11 \textrm{ or } m=13 \end{split} \end{equation} and \begin{equation}\label{A_varphi_12_14} \begin{split} & \mathcal{A}_0^{(m)}(L,G,H) \cos \varphi_0^{(m)}(L,G,H)=-\sum_{\alpha=0}^N A_\alpha^m(L,G,H) \sin (m \lambda_{m+2\alpha+1 , m})\,,\\ & \mathcal{A}_0^{(m)}(L,G,H) \sin \varphi_0^{(m)}(L,G,H)=\sum_{\alpha=0}^N A_\alpha^m(L,G,H) \cos (m \lambda_{m+2\alpha+1 , m}) \qquad \textrm{if } m=12 \textrm{ or } m=14\ . \end{split} \end{equation} To provide the analytical explanation of how the relevant harmonic terms can be selected, we need two essential comments on the index $q$ labeling the term $\mathcal{T}_{nmpq}$ (see \eqref{T_nmpq_term}). 
First, we notice that the coefficients $G_{npq}(e)$ decay as powers of the eccentricity, precisely $G_{npq}(e)= \mathcal{O}(e^{|q|})$ (see \cite{Kaula, CGmajor}). Hence, the term $\mathcal{T}_{nmpq}$ is of order $|q|$ in the eccentricity. On the other hand, in view of \eqref{Snmpq}, \eqref{psi}, \eqref{lambda_nm} and \eqref{sigma_angle}, it follows that the argument of the resonant term $\mathcal{T}_{nmpq}$ has the form $\sigma_{m1}-q \omega+const$. Therefore, we conclude that the resonant harmonic terms can be grouped into terms of the same order in the eccentricity and having the same argument (modulo a constant). Let us denote by $\mathcal{M}^m_q$ the set of the resonant terms associated to the resonance $m:1$ and having the same index $q$, namely \beq{Mmq} {\mathcal M}_q^m\equiv \{{\mathcal T}_{nmpq}:\ n-2p+q=1\ ,\ n\in\mathbb{N}\ ,\ \ p\in\mathbb{N}\ , \ \ n \geq m\ , \ \ p\leq n \}\ . \end{equation} The sets ${\mathcal M}_q^m$ with $q=-1,0,1$ and for the resonances 11:1, 12:1, 13:1, 14:1 are given in Table~\ref{tab:resonant_terms}. The introduction of the set ${\mathcal M}_q^m$ is motivated by the fact that, from a dynamical point of view, the terms belonging to $\mathcal{M}^m_q$ combine to give rise to a single resonant island at the same altitude. Indeed, as pointed out in \cite{CGmajor} and \cite{CGminor}, each resonance splits into a multiplet of resonances; the exact location of the resonance for each component of the multiplet is obtained as the solution of the relation $\dot{\sigma}_{m1}-q \dot{\omega}=0$. However, since the elements of the set $\mathcal{M}^m_q$ have the same argument $\sigma_{m1}-q \omega$ (modulo a constant), a single resonant island is obtained when $n$ and $p$ vary, even if $\mathcal{M}^m_q$ includes terms which are all different from each other. Using \equ{Rearth}, \equ{Snmpq}, \equ{psi}, \equ{T_nmpq_term}, \equ{sigma_angle}, we have the following result.
\begin{lemma}\label{lem:Mmq} The sum of the terms of the set $\mathcal{M}^m_q$ in \equ{Mmq} can be written formally as $$ \sum_{\mathcal{T} \in \mathcal{M}_q^m} \mathcal{T}=\mathcal{A}^{(m)}_q(L,G,H) \cos(\sigma_{m 1} -q \omega -\varphi_q^{(m)} (L,G,H))\ , $$ where $\mathcal{A}^{(m)}_q(L,G,H)$ and $\varphi_q^{(m)} (L,G,H)$ can be explicitly computed for each set $\mathcal{M}^m_q$, once its elements are known. \end{lemma} Without loss of generality, we assume that $\mathcal{A}^{(m)}_q(L,G,H)$ is non-negative for every $L$, $G$, $H$, possibly shifting the argument of the trigonometric function. \vskip.2in \begin{table}[h] \begin{tabular}{|c|c|c|} \hline $m:1$ & $\mathcal{M}^m_q$ & terms \\ \hline & $\mathcal{M}^{1\!1}_0$ & $\mathcal{T}_{1\!1\,1\!1\,5\,0},\, \mathcal{T}_{1\!3\,1\!1\,6\,0},\, \mathcal{T}_{1\!5\,1\!1\,7\,0},\, \mathcal{T}_{1\!7\,1\!1\,8\,0},\, \mathcal{T}_{1\!9\,1\!1\,9\,0}$ \\ 11:1 & $\mathcal{M}^{1\!1}_{-1}$ & $\mathcal{T}_{1\!2\,1\!1\,5\,-1},\, \mathcal{T}_{1\!4\,1\!1\,6\,-1},\, \mathcal{T}_{1\!6\,1\!1\,7\,-1},\, \mathcal{T}_{1\!8\,1\!1\,8\,-1},\, \mathcal{T}_{2\!0\,1\!1\,9\,-1}$ \\ & $\mathcal{M}^{1\!1}_1$ & $\mathcal{T}_{1\!2\,1\!1\,6\,1},\, \mathcal{T}_{1\!4\,1\!1\,7\,1},\, \mathcal{T}_{1\!6\,1\!1\,8\,1},\, \mathcal{T}_{1\!8\,1\!1\,9\,1},\, \mathcal{T}_{2\!0\,1\!1\,1\!0\,1}$ \\ \hline & $\mathcal{M}^{1\!2}_0$ & $\mathcal{T}_{1\!3\,1\!2\,6\,0},\, \mathcal{T}_{1\!5\,1\!2\,7\,0},\, \mathcal{T}_{1\!7\,1\!2\,8\,0},\, \mathcal{T}_{1\!9\,1\!2\,9\,0},\, \mathcal{T}_{2\!1\,1\!2\,1\!0\,0}$ \\ 12:1 & $\mathcal{M}^{1\!2}_{-1}$ & $\mathcal{T}_{1\!2\,1\!2\,5\,-1},\, \mathcal{T}_{1\!4\,1\!2\,6\,-1},\, \mathcal{T}_{1\!6\,1\!2\,7\,-1},\, \mathcal{T}_{1\!8\,1\!2\,8\,-1},\, \mathcal{T}_{2\!0\,1\!2\,9\,-1}$ \\ & $\mathcal{M}^{1\!2}_1$ & $\mathcal{T}_{1\!2\,1\!2\,6\,1},\, \mathcal{T}_{1\!4\,1\!2\,7\,1},\, \mathcal{T}_{1\!6\,1\!2\,8\,1},\, \mathcal{T}_{1\!8\,1\!2\,9\,1},\, \mathcal{T}_{2\!0\,1\!2\,1\!0\,1}$ \\ \hline & $\mathcal{M}^{1\!3}_0$ & 
$\mathcal{T}_{1\!3\,1\!3\,6\,0},\, \mathcal{T}_{1\!5\,1\!3\,7\,0},\, \mathcal{T}_{1\!7\,1\!3\,8\,0},\, \mathcal{T}_{1\!9\,1\!3\,9\,0},\, \mathcal{T}_{2\!1\,1\!3\,1\!0\,0}$ \\ 13:1 & $\mathcal{M}^{1\!3}_{-1}$ & $ \mathcal{T}_{1\!4\,1\!3\,6\,-1},\, \mathcal{T}_{1\!6\,1\!3\,7\,-1},\, \mathcal{T}_{1\!8\,1\!3\,8\,-1},\, \mathcal{T}_{2\!0\,1\!3\,9\,-1},\, \mathcal{T}_{2\!2\,1\!3\,1\!0\,-1}$ \\ & $\mathcal{M}^{1\!3}_1$ & $ \mathcal{T}_{1\!4\,1\!3\,7\,1},\, \mathcal{T}_{1\!6\,1\!3\,8\,1},\, \mathcal{T}_{1\!8\,1\!3\,9\,1},\, \mathcal{T}_{2\!0\,1\!3\,1\!0\,1},\, \mathcal{T}_{2\!2\,1\!3\,1\!1\,1},\,$ \\ \hline & $\mathcal{M}^{1\!4}_0$ & $ \mathcal{T}_{1\!5\,1\!4\,7\,0},\, \mathcal{T}_{1\!7\,1\!4\,8\,0},\, \mathcal{T}_{1\!9\,1\!4\,9\,0},\, \mathcal{T}_{2\!1\,1\!4\,1\!0\,0},\, \mathcal{T}_{2\!3\,1\!4\,1\!1\,0}$ \\ 14:1 & $\mathcal{M}^{1\!4}_{-1}$ & $ \mathcal{T}_{1\!4\,1\!4\,6\,-1},\, \mathcal{T}_{1\!6\,1\!4\,7\,-1},\, \mathcal{T}_{1\!8\,1\!4\,8\,-1},\, \mathcal{T}_{2\!0\,1\!4\,9\,-1},\, \mathcal{T}_{2\!2\,1\!4\,1\!0\,-1}$ \\ & $\mathcal{M}^{1\!4}_1$ & $ \mathcal{T}_{1\!4\,1\!4\,7\,1},\, \mathcal{T}_{1\!6\,1\!4\,8\,1},\, \mathcal{T}_{1\!8\,1\!4\,9\,1},\, \mathcal{T}_{2\!0\,1\!4\,1\!0\,1},\, \mathcal{T}_{2\!2\,1\!4\,1\!1\,1},\,$ \\ \hline \end{tabular} \vskip.1in \caption{The sets $\mathcal{M}^{m}_{0}$, $\mathcal{M}^{m}_{-1}$, $\mathcal{M}^{m}_{1}$ for the resonances 11:1, 12:1, 13:1, 14:1.}\label{tab:resonant_terms} \end{table} \vskip.1in \begin{figure}[h] \centering \vglue0.1cm \hglue0.2cm \includegraphics[width=6truecm,height=5truecm]{whoisbig13_1.pdf} \includegraphics[width=6truecm,height=5truecm]{whoisbig14_1.pdf} \vglue0.4cm \caption{Dominant sets for the 13:1 (left) and 14:1 (right) resonances as a function of eccentricity and inclination: $\mathcal{M}^m_0$ -- black, $\mathcal{M}^m_{-1}$ -- brown, $\mathcal{M}^m_1$ -- yellow, where $m=13,\, 14$ and the sets $\mathcal{M}^m_0$, $\mathcal{M}^m_{-1}$, $\mathcal{M}^m_1$ are defined in Section~\ref{sec:resonant}.} 
\label{fig:big_terms} \end{figure} \subsection{The most relevant terms of the Hamiltonian}\label{sec:relevant} Our next task is to retain those sets $\mathcal{M}^m_q$ which are important for the dynamics, as well as to keep only the most relevant elements of each selected set. Since our analysis involves small eccentricities, one expects that $\mathcal{M}^m_0$ will play the most important role, while the influence of the other sets, precisely $\mathcal{M}^m_{-1}$, $\mathcal{M}^m_{1}$, will be weaker. Concerning the elements of the set $\mathcal{M}^m_q$, it is important to stress that the coefficients of degree $n$ decay as $(R_E/a)^n$, so the role of the harmonic terms with higher degree becomes increasingly less influential. However, since we are considering resonances which are very close to the Earth, the quantity $(R_E/a)^n$ decays slowly as $n$ increases. In conclusion, to get a reliable model, the set $\mathcal{M}^m_q$ should contain as many harmonic terms as possible. However, due to computational limitations, in this paper the maximum number of elements of $\mathcal{M}^m_q$ is 5, which is a good compromise between accuracy and complexity. It is meaningful to consider a larger number of coefficients when dealing with specific concrete cases. To give an explicit example, let us take the set $\mathcal{M}^{11}_0$. Comparing the coefficient $(R_E/a)^{1\!1}$ of the term $\mathcal{T}_{1\!1\, 1\!1\, 5\,0}$ (see \eqref{T_nmpq_term} and Table~\ref{tab:resonant_terms}) with the coefficient $(R_E/a)^{2\!1}$ of $\mathcal{T}_{2\!1\, 1\!1\, 1\!0\,0}$ (namely, the first term of $\mathcal{M}^{11}_0$ neglected in our computations), we find that for $a=8524.75$ $km$ (see Table~\ref{table:res_location} below) the term $\mathcal{T}_{2\!1\, 1\!1\, 1\!0\,0}$ is 18 times smaller than $\mathcal{T}_{1\!1\, 1\!1\, 5\,0}$, thus showing that the neglected harmonic terms are smaller in magnitude than those considered in our model.
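The factor quoted above is straightforward to reproduce; a minimal check of the truncation estimate, using the values of $R_E$ and $a$ appearing in the text:

```python
# Ratio between the decay factors (R_E/a)^11 of T_{11,11,5,0} and (R_E/a)^21 of
# T_{21,11,10,0}, the first neglected term of M^{11}_0
R_E = 6378.14    # km, Earth's radius (reference value used in the paper)
a   = 8524.75    # km, semimajor axis of the 11:1 resonance
ratio = (R_E/a)**11 / (R_E/a)**21    # equals (a/R_E)**10
print(round(ratio))                   # 18, as stated in the text
```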
Of course, the conclusion is valid for all other sets, although with different ratios. We report in Table~\ref{tab:resonant_terms} the terms of the sets $\mathcal{M}^m_0$, $\mathcal{M}^m_{-1}$ and $\mathcal{M}^m_1$ that we are going to consider for each resonance. Once the elements of $\mathcal{M}^m_q$ are selected, it remains to determine which are the most important ones. Making use of Lemma~\ref{lem:Mmq}, we introduce the following definition, which gives a hierarchy between the sets ${\mathcal M}_q^m$. \begin{definition} \label{def_dominant} Let $\mathcal{H}_{earth}^{res\,m:1}$ be the resonant part of $\mathcal{H}_{earth}$, corresponding to the resonance $m:1$. For given values of the orbital elements $(a,e,i)$, equivalently for given values of $(L,G,H)$, we say that a set $\mathcal{M}^m_q$, for some $q \in \mathbb{Z}$, is {\it dominant} with respect to the other sets $\mathcal{M}^m_{\widetilde{q}}$, where $\widetilde{q} \in \mathbb{Z}$ with $\widetilde{q} \neq q$, if $\mathcal{A}^{(m)}_q(L,G,H) \geq \mathcal{A}^{(m)}_{\widetilde{q}}(L,G,H)$ for all $\widetilde{q} \in \mathbb{Z}$. \end{definition} A plot of the dominant sets according to Definition~\ref{def_dominant} for the resonances $13:1$ and $14:1$ is provided in Figure~\ref{fig:big_terms}, for orbital elements in the intervals $e\in [0,0.02]$ and $i \in [0^o, 120^o]$. The black, brown and yellow colors are, respectively, used to show the regions where $\mathcal{M}^m_0$, $\mathcal{M}^m_{-1}$ and $\mathcal{M}^m_{1}$ dominate. Similar plots are also obtained for the 11:1 and 12:1 resonances, but in these cases the regions associated to ${\mathcal M}_{-1}^m$ are very small and those related to ${\mathcal M}_1^m$ are negligible. From the analysis of Figure~\ref{fig:big_terms}, we conclude that $\mathcal{M}^m_0$ is dominant in almost all regions of the $(e,i)$-plane, except for some small inclinations and for $i=86.18^o$ in the case of the 14:1 resonance.
Taking into account the fact that the amplitudes of the two resonant islands associated to $\mathcal{M}^m_{-1}$ and $\mathcal{M}^m_{1}$ are small (at most a few hundred meters, as will be shown in Section~\ref{sec:qualitative_resonance}), we may approximate the resonant part $\mathcal{H}_{earth}^{res\,m:1}$ by the sum of the terms of $\mathcal{M}^m_0$. Therefore, from Table~\ref{tab:resonant_terms} and collecting \eqref{Snmpq}, \eqref{psi}, \eqref{lambda_nm}, \eqref{T_nmpq_term}, \eqref{sigma_angle}, it follows that $\mathcal{H}_{earth}^{res\,m:1}$ can be written in the form \eqref{Resonant_part} (or equivalently in the form \eqref{Resonant_part_2}) for a suitable integer $N$, which counts the number of terms generated by ${\mathcal M}_0^m$. Section~\ref{sec:qualitative_resonance} will confirm that the analytical model, constructed on the basis of this approximation, leads to reliable results. In fact, the numerical investigation will be performed by taking into account the effects of all three sets $\mathcal{M}^m_0$, $\mathcal{M}^m_{-1}$ and $\mathcal{M}^m_{1}$, but we will obtain results that can be easily explained in terms of an analytical model which includes just the influence of $\mathcal{M}^m_0$. Since the normalized inclination functions $\overline{F}_{nmp}$ involve very long expressions (often more than half a page for each function), we avoid giving the explicit forms of the terms $\mathcal{T}_{nmpq}$ and of the functions $A^{m}_{\alpha}(L,G,H)$, $\mathcal{A}_0^{(m)}(L,G,H)$ and $\varphi_0^{(m)}(L,G,H)$. The reader can compute these quantities by using the recursive formulae for the functions $F_{nmp}$, $G_{npq}$ (see \cite{Kaula,CGmajor}) and by using the relations presented in Section~\ref{sec:secres}.
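Although the explicit functions are long, the reduction of the sum \eqref{Resonant_part} to the single harmonic \eqref{Resonant_part_2} through \eqref{A_varphi_11_13} is elementary and can be sketched numerically. The coefficients and phase constants below are hypothetical placeholders, not the true values (which come from $\overline{F}_{nmp}$, $G_{npq}$ and Table~\ref{table:CS}):

```python
import math

# Hypothetical A^m_alpha and phase constants m*lambda_{m+2alpha,m} (illustrative only)
A   = [1.0e-6, 4.0e-7, -2.5e-7, 1.0e-7, -3.0e-8]
lam = [0.3, 1.1, -0.7, 2.0, 0.5]                   # in radians

# Relations (A_varphi_11_13): collapse the sum of cosines into a single harmonic
C = sum(a_*math.cos(p) for a_, p in zip(A, lam))   # A_0 cos(phi_0)
S = sum(a_*math.sin(p) for a_, p in zip(A, lam))   # A_0 sin(phi_0)
A0, phi0 = math.hypot(C, S), math.atan2(S, C)      # A0 >= 0 by construction

# Check: sum_alpha A_alpha cos(sigma - lam_alpha) = A0 cos(sigma - phi0) for any sigma
sigma = 0.77
lhs = sum(a_*math.cos(sigma - p) for a_, p in zip(A, lam))
rhs = A0*math.cos(sigma - phi0)
print(A0, phi0, lhs - rhs)
```

The choice of `atan2` automatically yields a non-negative amplitude $\mathcal{A}_0^{(m)}$, consistent with the convention adopted after Lemma~\ref{lem:Mmq}.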
\section{Dissipative effects: the atmospheric drag} \label{sec:diss_effect_drag} During its motion within the Earth's atmosphere, an infinitesimal object (satellite or space debris) encounters air molecules, whose change of momentum gives rise to a dissipative force oriented opposite to the motion of the body and known as atmospheric drag. The atmospheric drag force depends on the local density of the atmosphere, the velocity of the object relative to the atmosphere and the cross--sectional area in the direction of motion. The purpose of this Section is to derive the functions $F_{_L}$, $F_{_G}$, $F_{_H}$, characterizing the atmospheric drag perturbations in the dynamical equations \eqref{canonical_eq}. To this end, we use the following averaged equations of variation of the orbital elements (see~\cite{Liu, Chao}): \beqa{dissae} \dot a&=&-{1\over {2\pi}}\int_0^{2\pi}B\, \rho\, v{a\over {1-e^2}}\ \Big[1+e^2+2e\cos f-\omega_E\cos i\sqrt{{a^3(1-e^2)^3}\over {\mu_E}}\Big]\ dM\nonumber\\ &\equiv&{\mathcal F}^{(a)}(a,e,i)\ ,\nonumber\\ \dot e&=&-{1\over {2\pi}}\int_0^{2\pi}B\, \rho\, v\ \Big[e+\cos f-{{r^2\omega_E\cos i}\over{2\sqrt{\mu_E a(1-e^2)}}}\Big(2(e+\cos f)-e\sin^2f\Big)\Big]\, dM\nonumber\\ &\equiv&{\mathcal F}^{(e)}(a,e,i)\ , \end{eqnarray} where $f$ is the true anomaly, $\omega_E$ (coinciding with $\dot{\theta}$) is the Earth's rotation rate, $\rho$ the atmospheric density, $B$ the ballistic coefficient, while the body's speed relative to the atmosphere is given by \begin{equation}\label{speed} v=\sqrt{{\mu_E\over a(1-e^2)}(1+e^2+2e\cos f)}\ \Big(1-{{(1-e^2)^{3\over 2}}\over {1+e^2+2e\cos f}}\, {\omega_E\over n^*}\cos i\Big)\ , \end{equation} where $n^*$ is the mean motion of the satellite. Notice that $r$, $f$ (hence $v$) are functions of $M$. We stress that the atmospheric drag affects just $\dot{a}$ and $\dot{e}$ and not the other variables (namely, the inclination and the angle variables).
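As a quick illustration of \equ{speed}: for a circular prograde orbit the co-rotating atmosphere lowers the relative speed below the inertial speed $\sqrt{\mu_E/a}$, while for a retrograde orbit it raises it. A minimal sketch (the numerical constants are assumed reference values):

```python
import math

mu_E    = 398600.4415    # km^3/s^2 (assumed value)
omega_E = 7.2921159e-5   # rad/s, Earth's rotation rate (assumed value)

def v_rel(a, e, i, f):
    """Speed relative to the rotating atmosphere, eq. (speed); a in km, result in km/s."""
    n_star = math.sqrt(mu_E/a**3)                    # mean motion
    u = 1 + e*e + 2*e*math.cos(f)
    return math.sqrt(mu_E*u/(a*(1 - e*e))) \
           * (1 - (1 - e*e)**1.5/u*(omega_E/n_star)*math.cos(i))

a = 7258.69                                          # 14:1 resonance
print(v_rel(a, 0.0, math.radians(50), 0.0), math.sqrt(mu_E/a))
```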
We recall that the ballistic coefficient is expressed in terms of the cross--sectional area $A$ with respect to the relative wind and in terms of the mass $m$ of the object through the formula $B=C_D\, A/m$, where $C_D$ is the drag coefficient. For a debris object, the coefficient $B$ can vary by a factor of 10 depending on its orientation (see Table 8-3 in \cite{LW} for a list of estimated ballistic coefficients associated to various LEO satellites; note that this table provides $1/B$). Although the ballistic coefficient of a satellite varies slightly in time, in all simulations we assume that $B$ is constant. This assumption is motivated by the fact that we are interested in studying the equilibrium points, and therefore in such a dynamical configuration the small variation of $B$ can be neglected in a first approximation. Moreover, in order to show the existence of the equilibrium points even for strong dissipative effects, in our simulations we shall often use large values for the ballistic coefficient, up to $2200 \, cm^2/kg$, although the value for a satellite is much smaller, typically $25 \leq B\leq 500 \, cm^2/kg$ (see \cite{ISO}). \vskip.2in \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Altitude & Atm.
scale & Minimum density & Mean density & Maximum density \\ $h_0$ ($km$) & height $H_0$ ($km$) & ($kg/m^3$) & ($kg/m^3$) & ($kg/m^3$) \\ \hline 700 & 99.3 & $5.74\cdot 10^{-15}$ & $2.72\cdot 10^{-14}$ & $1.47\cdot 10^{-13}$\\ 800 & 151 & $2.96\cdot 10^{-15}$ & $9.63\cdot 10^{-15}$ & $4.39\cdot 10^{-14}$\\ 1000 & 296 & $1.17\cdot 10^{-15}$ & $2.78\cdot 10^{-15}$ & $8.84\cdot 10^{-15}$\\ 1250 & 408 & $4.67\cdot 10^{-16}$ & $1.11\cdot 10^{-15}$ & $2.59\cdot 10^{-15}$ \\ 1500 & 516 & $2.30\cdot 10^{-16}$ & $5.21\cdot 10^{-16}$ & $1.22\cdot 10^{-15}$ \\ 2000 & 829 & $-$ & $-$ & $-$ \\ \hline \end{tabular} \vskip.1in \caption{The scaling height $H_0$ as well as the minimum, mean and maximum densities at the reference altitude $h_0$, from MSIS atmospheric model (\cite{Hedin}, see also \cite{LW}).}\label{table:rho} \end{table} To complete the discussion of equations \equ{dissae}, let us mention that the atmospheric density can be computed from density models such as that developed by Jacchia (\cite{Jacchia}), the Mass Spectrometer Incoherent Scatter - MSIS model (\cite{Hedin0, Hedin}) and other models (see \cite{ISO}). Following the dynamical density MSIS model, the local density is a function of various parameters such as the altitude of the body, the solar flux, the Earth's magnetic index, etc. (see~\cite{Hedin0, Hedin}). Of particular interest is the variation of density as effect of the solar activity, which fluctuates with an 11--year cycle. In this work we use the numbers provided by the MSIS model. 
Therefore, we assume that the local density varies with the altitude above the surface, say $h=r-R_E$, with $r$ the distance from the Earth's center, and we use the following barometric formula: \begin{equation}\label{rho_h} \rho(h)=\rho_0\ \exp \biggl(-{{h-h_0}\over {H_0}}\biggr)\ , \end{equation} where $\rho_0$ is the (minimum, mean or maximum) density, estimated for (minimum, mean or maximum) solar activity at the reference altitude $h_0$, while $H_0$ is the scaling height at $h_0$. Reference empirical values are given in Table~\ref{table:rho} (see also \cite{LW} for a more detailed list of values and further explanations). Although our investigation involves small eccentricities, say up to $e=0.02$, the difference in altitude between apogee and perigee is not negligible and amounts to about 300 $km$ (see Table~\ref{table:res_location}). In fact, comparing the altitudes reported in Tables~\ref{table:res_location} and \ref{table:rho}, it is clear that we may set $\rho=0$ for the 11:1 resonance, while for the other resonances one should use the formula \eqref{rho_h} with the corresponding values for $\rho_0$, $h_0$ and $H_0$ taken from Table~\ref{table:rho}. \vskip.2in \begin{table}[h] \begin{tabular}{|c|c|c|c|c|} \hline $m:1$ & $a$ & Altitude & Perigee altitude & Apogee altitude \\ & ($km$) & ($km$) & for $e=0.02$ ($km$) & for $e=0.02$ ($km$) \\ \hline 11:1 & 8524.75 & 2146.61 & 1976.25 & 2317.25 \\ 12:1 & 8044.32 & 1666.18 & 1505.43 & 1827.21 \\ 13:1 & 7626.31 & 1248.17 & 1095.78 & 1400.84 \\ 14:1 & 7258.69 & 880.55 & 735.52 & 1025.86 \\ \hline \end{tabular} \vskip.1in \caption{The semimajor axis and the altitude corresponding to some resonances of order $m:1$, as well as the perigee and apogee altitudes of a resonant elliptic orbit with $e=0.02$. The altitudes are computed by considering the reference value $R_E=6378.14$ $km$ for the Earth's radius.
}\label{table:res_location} \end{table} \vskip.2in Once the framework has been settled, we can approximate in the computations the true anomaly $f$ (entering \equ{dissae}, \equ{speed}) and the altitude $h$ (entering \equ{rho_h}) by the following well-known series (\cite{Roy,Alebook}): \begin{equation}\label{anomaly_rovera} \begin{split} & f = M+2e \sin M+\frac{5 e^2}{4}\sin(2M)+O(e^3)\ ,\\ & h=a(1-e \cos E)-R_E= a\Bigl\{1-e \cos M+\frac{e^2}{2}\Bigl[1- \cos(2M)\Bigr]\Bigr\}-R_E +O(e^3)\ , \end{split} \end{equation} where $O(e^3)$ denotes terms of order 3 in the eccentricity and $E$ is the eccentric anomaly. Combining the relations \eqref{dissae}, \eqref{speed}, \eqref{rho_h} and \eqref{anomaly_rovera}, by the algebraic manipulator \verb"Mathematica"$^\copyright$ we compute the integrals appearing in the right hand sides of \eqref{dissae}. In this way, we deduce that the right hand sides of \eqref{dissae}, denoted by $\mathcal{F}^{(a)}$ and $\mathcal{F}^{(e)}$, are functions of $a$, $e$, $i$, while $\rho_0$ and $B$ are parameters. As in Section~\ref{sec:resonant} we do not provide the explicit form of $\mathcal{F}^{(a)}(a,e,i)$ and $\mathcal{F}^{(e)}(a,e,i)$, since they involve long expressions. The reader can compute these functions by a straightforward implementation of the above formulae, possibly using an algebraic manipulator. Once $\mathcal{F}^{(a)}$ and $\mathcal{F}^{(e)}$ are computed as functions of the orbital elements, it is straightforward to express them in terms of the Delaunay actions: $\mathcal{F}^{(a)}=\mathcal{F}^{(a)}(L,G,H)$ and $\mathcal{F}^{(e)}=\mathcal{F}^{(e)}(L,G,H)$.
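Indeed, a direct numerical implementation is short: the average in the first of \eqref{dissae} can be evaluated by a uniform quadrature in $M$, using the series \eqref{anomaly_rovera} and the barometric law \eqref{rho_h}. A sketch under assumed physical constants, with units chosen so that the result is in $km/s$ ($B$ in $km^2/kg$, $\rho_0$ in $kg/km^3$):

```python
import math

mu_E    = 398600.4415    # km^3/s^2 (assumed value)
R_E     = 6378.14        # km
omega_E = 7.2921159e-5   # rad/s, Earth's rotation rate (assumed value)

def F_a(a, e, i, B, rho0, h0, H0, K=2000):
    """Averaged decay rate da/dt, first of eqs. (dissae), by a uniform quadrature in M."""
    n_star = math.sqrt(mu_E/a**3)
    total = 0.0
    for k in range(K):
        M = 2*math.pi*k/K
        # true anomaly and altitude from the series (anomaly_rovera), up to O(e^3)
        f = M + 2*e*math.sin(M) + 1.25*e*e*math.sin(2*M)
        h = a*(1 - e*math.cos(f)) - R_E
        rho = rho0*math.exp(-(h - h0)/H0)              # barometric law (rho_h)
        u = 1 + e*e + 2*e*math.cos(f)
        v = math.sqrt(mu_E*u/(a*(1 - e*e))) \
            * (1 - (1 - e*e)**1.5/u*(omega_E/n_star)*math.cos(i))   # eq. (speed)
        total += B*rho*v*(a/(1 - e*e)) \
                 * (u - omega_E*math.cos(i)*math.sqrt(a**3*(1 - e*e)**3/mu_E))
    return -total/K

# 14:1 resonance with mean density at h0 = 800 km (Table of densities):
# B = 220 cm^2/kg = 2.2e-8 km^2/kg, rho0 = 9.63e-15 kg/m^3 = 9.63e-6 kg/km^3
print(F_a(7258.69, 0.01, math.radians(50), 2.2e-8, 9.63e-6, 800.0, 151.0))
```

This yields a slow decay of the semimajor axis (a fraction of a kilometer per day at these altitudes and ballistic coefficients), in line with the smallness of the dissipative effects discussed below.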
Since the atmospheric drag does not affect the inclination, from \eqref{LGH_aei} we obtain \begin{eqnarray*} \dot L&=&{{1}\over {2}} \sqrt{\frac{\mu_E}{a}}\ \dot a\,,\nonumber\\ \dot G&=&{{1}\over {2}} \sqrt{\frac{\mu_E (1-e^2)}{a}}\ \dot a-e \sqrt{{{ \mu_E a}\over {1-e^2}}} \ \dot e\,,\nonumber\\ \dot H&=&\Bigl({{1}\over {2}} \sqrt{\frac{\mu_E (1-e^2)}{a}}\ \dot a-e \sqrt{{{ \mu_E a}\over {1-e^2}}} \ \dot e \Bigr) \cos i \ .\nonumber\\ \end{eqnarray*} Using the relations $a=L^2/\mu_E$, $e=\sqrt{1-G^2/L^2}$ and $\cos i=H/G$, we deduce that the functions $F_{_L}$, $F_{_G}$, $F_{_H}$, characterizing the atmospheric drag perturbations in \equ{canonical_eq}, are given by \begin{equation}\label{dissipative_functions} \begin{split} &F_{_L}={{\mu_E}\over {2 L}}\ {\mathcal{F}^{(a)}}(L,G,H)\,,\\ &F_{_G}={{\mu_E G}\over {2L^2}}\ \mathcal{F}^{(a)}(L,G,H)-{{ L^2}\over {G}} \sqrt{1-\frac{G^2}{L^2}}\ \mathcal{F}^{(e)}(L,G,H)\,,\\ &F_{_H}=\frac{H}{G}\Bigl({{\mu_E G}\over {2L^2}}\ \mathcal{F}^{(a)}(L,G,H)- {{ L^2}\over {G}} \sqrt{1-\frac{G^2}{L^2}}\ \mathcal{F}^{(e)}(L,G,H)\Bigr)\ . \end{split} \end{equation} In conclusion, to study the main dynamical features of tesseral resonances, we have introduced (Sections~\ref{sec:equations_of_motion}, \ref{sec:geopotential_Ham} and \ref{sec:diss_effect_drag}) a mathematical model characterized by the equations \eqref{canonical_eq}, where the secular part of the Hamiltonian \eqref{H} is given by \eqref{Rsec}, the resonant part of $\mathcal{H}$ is obtained as the sum of the resonant harmonic terms of Table~\ref{tab:resonant_terms}, while the dissipative part is described by the functions $F_{_L}$, $F_{_G}$, $F_{_H}$ defined by \eqref{dissipative_functions}. Hereafter, this model will be called the \emph{dissipative model of LEO resonances}, or simply DMLR. \section{A qualitative study of resonances}\label{sec:qualitative_resonance} This section presents a qualitative study of the resonances. 
Precisely, it includes an analysis of the conservative and dissipative effects, an estimate of the amplitude of the resonances, and a study of the existence, location and stability of the equilibrium points. Some analytical results based on a toy model that will be introduced in Section~\ref{sec:toy} are confirmed by numerical simulations obtained by using the DMLR. We stress that, although the degree $n$ of the resonant terms is large ($n\geq 11$), which implies that the magnitude of these terms is small, the effects of the conservative part can be quantified; in particular, for some inclinations the resonant regions have a width larger than one or two kilometers. Since the drag effect is sufficiently weak at high altitudes, even when the solar activity reaches its maximum, one can show that equilibrium points exist for such inclinations. \subsection{The toy model}\label{sec:toy} To give analytical support to the numerical experiments that will be performed on the DMLR, we construct in parallel a simplified model, to which we refer as the \sl toy model, \rm which allows us to explain the main features of the dynamics. In this model the secular part contains just the $\bar{J}_2$ term (first term of \eqref{Rsec}), the resonant part is defined by \eqref{Resonant_part_2} and the dissipative functions are given by~\eqref{dissipative_functions_circular} below. Following \cite{Chao}, for nearly circular orbits, the function $\mathcal{F}^{(a)}$ can be simplified as \begin{equation}\label{adot_circular} \mathcal{F}^{(a)}=-B \rho n^* a^2 \Bigl(1-\frac{\omega_E}{n^*} \cos i\Bigr)^2\ , \end{equation} where $\rho$ is assumed to be constant at a fixed altitude of the orbit and $n^*=\sqrt{\mu_E/a^3}$. As mentioned before, the variation of the eccentricity can be considered a small quantity; therefore, in the simplified model we take $\mathcal{F}^{(e)}=0$.
Using \eqref{LGH_aei}, \eqref{dissipative_functions} and \eqref{adot_circular} we get \begin{equation}\label{dissipative_functions_circular} \begin{split} F_{_L}=-\frac{1}{2} B \rho \mu_E\Bigl(1-\frac{\omega_E L^3 H}{\mu_E^2 G} \Bigr)^2\,,\qquad F_{_G}=\frac{G}{L} F_{_L}\ ,\qquad F_{_H}=\frac{H}{L} F_{_L}\ . \end{split} \end{equation} \begin{figure}[h] \centering \vglue0.1cm \hglue0.1cm \includegraphics[width=6truecm,height=5truecm]{amplitude11_1.pdf} \includegraphics[width=6truecm,height=5truecm]{amplitude12_1.pdf}\\ \vglue-0.6cm \includegraphics[width=6truecm,height=5truecm]{amplitude13_1.pdf} \includegraphics[width=6truecm,height=5truecm]{amplitude14_1.pdf} \vglue0.4cm \caption{The amplitude of the resonances for different values of the eccentricity (between 0 and 0.02 on the horizontal axis) and the inclination (between $0^o$ and $120^o$ on the vertical axis); the color bar provides the measure of the amplitude in kilometers. In order from top to bottom, left to right: 11:1, 12:1, 13:1, 14:1.} \label{fig:amplitude} \end{figure} In view of \eqref{LGH_aei}, \eqref{H}, \eqref{Rsec}, \eqref{Resonant_part_2}, the conservative part of the toy model is given by \begin{equation}\label{H_toy} \mathcal{H}^{m:1}_{toy}(L,G,H,\sigma_{m1})=-\frac{\mu_E^2}{2 L^2}+\frac{\alpha}{L^3 G^3} \Bigl(1-3 \frac{H^2}{G^2}\Bigr) +\mathcal{A}_0^{(m)} (L,G,H) \cos (\sigma_{m1}-\varphi_0^{(m)} (L,G,H))\,, \end{equation} where $$ \alpha=\frac{\sqrt{5}R_E^2 \overline{J}_2 \mu_E^4}{4} $$ and $\sigma_{m1}$, $\mathcal{A}_0^{(m)}$, $\varphi_0^{(m)}$ are given by \eqref{sigma_angle}, \eqref{A_varphi_11_13}, \eqref{A_varphi_12_14}.
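The expressions \eqref{adot_circular} and \eqref{dissipative_functions_circular} are mutually consistent: substituting \eqref{adot_circular} (with $\mathcal{F}^{(e)}=0$) into $F_{_L}=\mu_E\mathcal{F}^{(a)}/(2L)$ reproduces the first of \eqref{dissipative_functions_circular}, since $n^*a^2=L$ and $(\omega_E/n^*)\cos i=\omega_E L^3 H/(\mu_E^2 G)$. A numerical check with assumed constants:

```python
import math

mu_E    = 398600.4415    # km^3/s^2 (assumed value)
omega_E = 7.2921159e-5   # rad/s (assumed value)

def F_a_circular(a, i, B, rho):
    """Eq. (adot_circular); B in km^2/kg, rho in kg/km^3, result in km/s."""
    n_star = math.sqrt(mu_E/a**3)
    return -B*rho*n_star*a**2*(1 - (omega_E/n_star)*math.cos(i))**2

def F_L_circular(L, G, H, B, rho):
    """First of eqs. (dissipative_functions_circular); F_G = (G/L) F_L, F_H = (H/L) F_L."""
    return -0.5*B*rho*mu_E*(1 - omega_E*L**3*H/(mu_E**2*G))**2

# Circular orbit at the 14:1 resonance; B = 220 cm^2/kg = 2.2e-8 km^2/kg and
# rho ~ 9.2e-15 kg/m^3 = 9.2e-6 kg/km^3 near 880 km altitude (assumed values)
a, i = 7258.69, math.radians(50)
B, rho = 2.2e-8, 9.2e-6
L = math.sqrt(mu_E*a); G = L; H = G*math.cos(i)
print(F_L_circular(L, G, H, B, rho), mu_E/(2*L)*F_a_circular(a, i, B, rho))
```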
Let us now perform a canonical change of coordinates, similar to that presented in~\cite{CGmajor}, which transforms the variables $(L,G,H, M, \omega, \Omega)$ into $(\widetilde{L}, \widetilde{G}, \widetilde{H}, \sigma_{m1}, \omega, \Omega)$, where $\sigma_{m1}$ is given by \eqref{sigma_angle}, $\omega$ and $\Omega$ are kept unaltered and \begin{equation}\label{canonical_transformation} \widetilde{L}=L\,, \qquad \widetilde{G}=G-L\,, \qquad \widetilde{H}=H-m L\,. \end{equation} In terms of the new variables, the Hamiltonian \equ{H_toy} takes the form \begin{equation}\label{toy_ham_canonical} \widetilde{\mathcal{H}}^{m:1}_{toy}(\widetilde L,\widetilde G,\widetilde H,\sigma_{m1})= \widetilde{h}^{(m)}(\widetilde L,\widetilde G,\widetilde H) +\varepsilon \widetilde{\mathcal{A}}^{(m)}(\widetilde L,\widetilde G,\widetilde H) \cos (\sigma_{m1}-\widetilde{\varphi}^{(m)}(\widetilde L,\widetilde G,\widetilde H))\ , \end{equation} where \begin{equation}\label{hAphi_tilde} \begin{split} &\widetilde{h}^{(m)}(\widetilde{L}, \widetilde{G}, \widetilde{H})=-\frac{\mu_E^2}{2 \widetilde{L}^2}-m\, \omega_E \widetilde{L}+\frac{\alpha}{\widetilde{L}^3 (\widetilde{G}+\widetilde{L})^3} \Bigl(1-3 \frac{(\widetilde{H}+m \widetilde{L})^2}{(\widetilde{G}+ \widetilde{L})^2}\Bigr)\,,\\ &\varepsilon \widetilde{\mathcal{A}}^{(m)}(\widetilde{L}, \widetilde{G}, \widetilde{H})=\mathcal{A}_0^{(m)}(\widetilde{L}, \widetilde{G}+\widetilde{L}, \widetilde{H}+m \widetilde{L})\,,\\ &\widetilde{\varphi}^{(m)}(\widetilde{L}, \widetilde{G}, \widetilde{H})=\varphi_0^{(m)}(\widetilde{L}, \widetilde{G}+\widetilde{L}, \widetilde{H}+m \widetilde{L})\, \end{split} \end{equation} and $\varepsilon$ is a small coefficient introduced for convenience, so that $\widetilde{h}^{(m)}$ and $\widetilde{\mathcal{A}}^{(m)}$ have comparable sizes, when measured at the same point.
Strictly speaking, the quantity $\widetilde{\varphi}^{(m)}(\widetilde{L}, \widetilde{G}, \widetilde{H})$ depends on the variable $\widetilde{L}$ and not on $\widetilde{L}_{res}$, which is the value of $\widetilde{L}$ at the resonance. However, the numerical tests show that the error is very small, of the order of a few arcseconds, if $\widetilde{\varphi}^{(m)}(\widetilde{L}, \widetilde{G}, \widetilde{H})$ is replaced by $\widetilde{\varphi}^{(m)} (\widetilde{L}_{res},\widetilde{G},\widetilde{H})$. Since we are interested in obtaining a reduced model that allows us to explain the results provided by the DMLR, we take $\widetilde{\varphi}^{(m)}$ as constant in $\widetilde{L}$ and write $\widetilde{\varphi}^{(m)}=\widetilde{\varphi}^{(m)}(\widetilde{G}, \widetilde{H})$ in order to underline this aspect. Before analyzing the dissipative part, let us study first the conservative effects. Therefore, we disregard for the moment the influence of the drag force and we focus our attention on the Hamiltonian \eqref{toy_ham_canonical}. Since $\omega$ and $\Omega$ are cyclic variables, it follows that $\widetilde{G}$ and $\widetilde{H}$ are constants, so that the dynamics is described by a pendulum-type Hamiltonian. In particular, following the method described in \cite{CGmajor}, the width of the resonances can be easily computed for the pendulum-like model. We refer to \cite{CGminor} for the formulae necessary to compute the amplitudes of the islands associated to \equ{H_toy}. Figure~\ref{fig:amplitude} provides the amplitudes of the 11:1, 12:1, 13:1 and 14:1 resonances as the eccentricity varies between $0$ and $0.02$, while the inclination ranges between $0^o$ and $120^o$. The color bar indicates the size of the amplitude in kilometers. Figure~\ref{fig:amplitude} shows that for inclinations less than $30^o$ the amplitude is small, at most $350\, m$, while for larger inclinations, the amplitude could reach about two (or three for the 13:1 resonance) kilometers.
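For orientation, the standard pendulum width formula (a textbook estimate, not necessarily the exact procedure of \cite{CGmajor,CGminor}) already reproduces the order of magnitude of these amplitudes: for a Hamiltonian $h(\widetilde L)+\varepsilon\widetilde{\mathcal{A}}\cos(\sigma_{m1}-\widetilde\varphi)$ the separatrix half-width in $\widetilde L$ is $2\sqrt{\varepsilon\widetilde{\mathcal{A}}/|h_{,LL}|}$. A sketch with a purely hypothetical amplitude:

```python
import math

mu_E = 398600.4415   # km^3/s^2 (assumed value)

def resonance_width_km(a_res, epsA):
    """Pendulum estimate of the full resonance width in semimajor axis (km) for the
    Hamiltonian h(L) + eps*A*cos(sigma - phi), keeping only the Kepler part of h.
    epsA is the resonant amplitude eps*A (km^2/s^2); a hypothetical input here."""
    L = math.sqrt(mu_E*a_res)
    h_LL = 3*mu_E**2/L**4              # |d^2h/dL^2| of the Kepler term -mu^2/(2L^2)
    dL = 2*math.sqrt(epsA/h_LL)        # half-width in L of the pendulum separatrix
    return 2*(2*L/mu_E)*dL             # converted to a width in a, da = (2L/mu) dL

# A hypothetical amplitude eps*A = 1e-7 km^2/s^2 at the 13:1 resonance gives a width
# of the order of a kilometer, consistent with the scales discussed in the text
print(resonance_width_km(7626.31, 1.0e-7))
```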
With these results in mind, we can anticipate what happens when the dissipative effects are taken into account for the 12:1, 13:1, 14:1 resonances: we expect the equilibrium points to persist for those inclinations which lead (in the conservative case) to large amplitudes, even if the ballistic coefficient is high. On the contrary, for small inclinations -- since the amplitude is small -- one has the opposite situation: the magnitude of the drag force is large in comparison with the resonant part and, therefore, we anticipate that the equilibrium points do not exist. These statements are proved analytically in Sections~\ref{sec:existence} and \ref{sec:type}. For the moment, let us go back to the equations of motion and discuss the dissipative part. Collecting \eqref{canonical_eq}, \eqref{dissipative_functions_circular}, \eqref{canonical_transformation} and \eqref{toy_ham_canonical}, we obtain: \begin{equation} \label{toy_canonical_eq} \begin{split} \dot{\sigma}_{m1}=\frac{\partial \widetilde{\mathcal{H}}^{m:1}_{toy}}{\partial \widetilde{L}}\,,\qquad \quad & \qquad \dot{\omega}=\frac{\partial \widetilde{\mathcal{H}}^{m:1}_{toy}}{\partial \widetilde{G}}\,, \ \quad \qquad \qquad \dot{\Omega}=\frac{\partial \widetilde{\mathcal{H}}^{m:1}_{toy}}{\partial \widetilde{H}}\,,\\ \dot{\widetilde{L}}=-\frac{\partial \widetilde{\mathcal{H}}^{m:1}_{toy}}{\partial \sigma_{m1}}- \eta D^{(m)}_{_L}(\widetilde L,\widetilde G,\widetilde H)\,, & \qquad \dot{\widetilde{G}}=-\eta D^{(m)}_{_G}(\widetilde L,\widetilde G,\widetilde H)\,, \qquad \dot{\widetilde{H}}=-\eta D^{(m)}_{_H}(\widetilde L,\widetilde G,\widetilde H)\,, \end{split} \end{equation} where the dissipative effects are described by the time-dependent parameter $\eta=\rho B$ and the functions $D^{(m)}_{_L}$, $D^{(m)}_{_G}$, $D^{(m)}_{_H}$ are defined as \beqa{diss_D} D^{(m)}_{_L}(\widetilde L,\widetilde G,\widetilde H)&=&\frac{\mu_E}{2} \biggl(1-\frac{\omega_E \widetilde{L}^3 (\widetilde{H}+m \widetilde{L})}{\mu_E^2
(\widetilde{G}+\widetilde{L})} \biggr)^2\ ,\nonumber\\ D^{(m)}_{_G}(\widetilde L,\widetilde G,\widetilde H)&=&\frac{\widetilde{G}}{\widetilde{L}} D^{(m)}_{_L}(\widetilde L,\widetilde G,\widetilde H)\ ,\nonumber\\ D^{(m)}_{_H}(\widetilde L,\widetilde G,\widetilde H)&=&\frac{\widetilde{H}}{\widetilde{L}} D^{(m)}_{_L}(\widetilde L,\widetilde G,\widetilde H)\ . \end{eqnarray} Since $\eta$ is a small quantity, it follows from \eqref{toy_canonical_eq} that $\widetilde{G}$ and $\widetilde{H}$ vary only slightly in time under the effect of the dissipation. Being interested in equilibria located in the $(\sigma_{m1}, \widetilde{L})$ plane, and also in obtaining a strongly reduced model suitable for exploring the dynamics of infinitesimal bodies close to resonances, we define a \sl dissipative toy model \rm governed by the following differential equations: \beqa{toy_canonical_eq_final} \dot{\sigma}_{m1}&=&\widetilde{h}^{(m)}_{,L}(\widetilde L,\widetilde G,\widetilde H)+ \varepsilon \widetilde{\mathcal{A}}^{(m)}_{,L}(\widetilde L,\widetilde G,\widetilde H)\ \cos(\sigma_{m1}-\widetilde{\varphi}^{(m)}(\widetilde G,\widetilde H))\ ,\nonumber\\ \dot{\widetilde{L}}&=& \varepsilon \widetilde{\mathcal{A}}^{(m)}(\widetilde L,\widetilde G,\widetilde H) \sin (\sigma_{m1}-\widetilde{\varphi}^{(m)}(\widetilde G,\widetilde H))- \eta D^{(m)}_{_L}(\widetilde L,\widetilde G,\widetilde H)\ , \end{eqnarray} where $\widetilde{G}$ and $\widetilde{H}$ are considered constants; let us stress this aspect by replacing them in the following by $\widetilde{G}_0$ and $\widetilde{H}_0$. Also, we will use the customary convention that subscripts preceded by a comma denote partial differentiation with respect to the corresponding variable.
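The dissipative functions \eqref{diss_D} are straightforward to evaluate numerically. The sketch below implements them directly; the physical constants and the orbital values (semimajor axis, eccentricity, inclination) are hypothetical placeholders, chosen only to illustrate the orders of magnitude and the proportionality $D^{(m)}_{_G}=(\widetilde G/\widetilde L)\,D^{(m)}_{_L}$, $D^{(m)}_{_H}=(\widetilde H/\widetilde L)\,D^{(m)}_{_L}$.

```python
import math

# Direct evaluation of the dissipative functions D_L, D_G, D_H of the toy
# model, in the tilde variables; all orbital values are hypothetical.
mu_E = 398600.44        # Earth's gravitational parameter [km^3/s^2]
omega_E = 7.2921e-5     # Earth's rotation rate [rad/s]

def D_L(m, Lt, Gt, Ht):
    return 0.5 * mu_E * (1.0 - omega_E * Lt**3 * (Ht + m * Lt)
                         / (mu_E**2 * (Gt + Lt)))**2

def D_G(m, Lt, Gt, Ht):
    return Gt / Lt * D_L(m, Lt, Gt, Ht)

def D_H(m, Lt, Gt, Ht):
    return Ht / Lt * D_L(m, Lt, Gt, Ht)

# Hypothetical point: a = 7500 km, e = 0.005, i = 70 deg, 14:1 resonance.
m = 14
L = math.sqrt(mu_E * 7500.0)
G = -0.5 * L * 0.005**2                       # Gt = G - L ~ -L e^2/2
H = L * (math.cos(math.radians(70.0)) - m)    # Ht = H - m L, with G ~ L

dl = D_L(m, L, G, H)
dg = D_G(m, L, G, H)
dh = D_H(m, L, G, H)
print(dl, dg, dh)    # dl > 0, while dg inherits the sign of Gt (negative)
```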
Since the main goal of this section is to present a qualitative description of the interplay between the resonances and the dissipative effects (including the existence, type and location of the equilibrium points as a function of various parameters), we shall consider the parameter $\eta$ as a constant, leaving to Section~\ref{sec:results} the study of the case of a variable $\eta$, which corresponds to studying the effects of the solar cycle. In order to validate the toy model and to show numerically the existence of the equilibrium points, we present in Figure~\ref{fig:cartography} some results obtained by using the DMLR described in the previous sections, including also the air resistance effect for the 12:1, 13:1 and 14:1 resonances. Plotting the Fast Lyapunov Indicator\footnote{The Fast Lyapunov Indicator is a measure of the regular and chaotic dynamics; it was introduced in \cite{froes} and amounts, in short, to the Lyapunov exponent computed on finite times.}, hereafter denoted as FLI (see, e.g., \cite{froes,GLF2002,GL2013,CGmajor}), for some given values of the parameters (i.e., eccentricity, inclination, ballistic coefficient, etc.), we find very good agreement between the equilibria of the toy model and those of the DMLR. Indeed, the equilibrium points are clearly revealed for small dissipations (or for the non--dissipative case of the 11:1 resonance), the resonant islands have the amplitudes predicted by the conservative toy model (compare Figures~\ref{fig:amplitude} and \ref{fig:cartography}) and, as we will see in the next sections, the dissipative toy model is able to predict the existence and location of the equilibrium points. Since the upper left panel of Figure~\ref{fig:cartography} is obtained for a conservative model, more precisely a pendulum-type Hamiltonian, the stable and unstable points, as well as the separatrix, are clearly marked.
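As an aside, the FLI computation itself is simple to sketch: one propagates a tangent vector along the variational equations and records the growth of its norm. The minimal example below does this for a bare pendulum with a fixed-step RK4 integrator (not for the DMLR); an orbit close to the separatrix yields a markedly larger indicator than an orbit deep inside the island, which is precisely what makes the separatrix and the chaotic zones stand out in the FLI maps.

```python
import math

# Minimal FLI sketch for the pendulum  sigma' = x,  x' = -eps*sin(sigma):
# the state (sigma, x) is propagated together with a tangent vector (u, v)
# along the variational equations, and FLI = max over time of log||(u, v)||.
# Purely illustrative; this is a bare pendulum, not the DMLR of the paper.

def fli(sigma0, x0, eps=1.0, dt=1e-3, T=20.0):
    def f(state):
        s_, x_, u_, v_ = state
        # flow plus variational equations (Jacobian applied to (u, v))
        return (x_, -eps * math.sin(s_), v_, -eps * math.cos(s_) * u_)
    y = [sigma0, x0, 1.0, 1.0]
    best = 0.0
    for _ in range(int(T / dt)):
        k1 = f(y)
        k2 = f([y[i] + 0.5 * dt * k1[i] for i in range(4)])
        k3 = f([y[i] + 0.5 * dt * k2[i] for i in range(4)])
        k4 = f([y[i] + dt * k3[i] for i in range(4)])
        y = [y[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(4)]
        best = max(best, 0.5 * math.log(y[2] ** 2 + y[3] ** 2))
    return best

fli_center = fli(0.3, 0.0)            # librating orbit, deep in the island
fli_sep = fli(math.pi - 0.01, 0.0)    # orbit close to the separatrix
print(fli_center, fli_sep)            # the second value is markedly larger
```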
Since the other plots of Figure~\ref{fig:cartography} also take into account the dissipative effect, the separatrix of each plot is no longer a single line, as for a conservative system; the gradual decrease of the orbits' altitude due to dissipation leads the paths located above the resonant region to reach, after some time, the separatrix. This is the reason why in all other plots of Figure~\ref{fig:cartography} we notice a larger chaotic region above the resonant island than below it. The plots are obtained by integrating the equations of motion for an interval of 1500 sidereal days; if a longer time span is considered, then, due to the orbital decay process, a much larger chaotic region is obtained above the resonant zone. Once an orbit reaches the resonant region, two scenarios are possible: either it passes through the resonance, or it is captured into the resonance. Numerical simulations show that capture is a rare and temporary phenomenon, depending on various factors, including the fact that $\eta$ varies in time as an effect of the solar cycle (see Section~\ref{sec:results}). In any case, even if the object is captured temporarily by a resonance, it does not usually reach the center of the island, where the spiral point is located. Figure~\ref{fig:pass_capture}, obtained by using the DMLR, shows an example of the two different phenomena: a passage through the 14:1 resonance and a temporary capture into the 12:1 resonance.
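The competition between the drag term and the resonant term can be illustrated on a drastically simplified caricature of \equ{toy_canonical_eq_final}, with $\sigma'=-x$ playing the role of $\dot\sigma_{m1}$ (here $x$ stands for $\widetilde L-\widetilde L_{res}$) and $x'=\varepsilon\sin\sigma-\eta$ playing the role of $\dot{\widetilde L}$; all parameter values are purely illustrative. When the drag dominates ($\eta>\varepsilon$) no equilibrium exists and $x$ drifts monotonically through the resonance (passage); when the resonant term dominates, an orbit started near the shifted stable point librates around it. In this caricature the weak-drag orbit stays near the equilibrium indefinitely, whereas in the full model the capture is typically temporary.

```python
import math

# Qualitative illustration of "passage" vs "capture" on a caricature of the
# dissipative toy model:  sigma' = -x,  x' = eps*sin(sigma) - eta.
# Integrated with a midpoint (RK2) scheme; parameter values are illustrative.

def integrate(sigma0, x0, eps, eta, dt=1e-3, T=10.0):
    s, x = sigma0, x0
    xmax_dev = 0.0
    for _ in range(int(T / dt)):
        ds, dx = -x, eps * math.sin(s) - eta
        sm, xm = s + 0.5 * dt * ds, x + 0.5 * dt * dx   # midpoint state
        s += dt * -xm
        x += dt * (eps * math.sin(sm) - eta)
        xmax_dev = max(xmax_dev, abs(x - x0))
    return s, x, xmax_dev

# Strong drag (eta > eps): no equilibrium, x drifts through -> passage.
_, x_pass, _ = integrate(0.0, 1.0, eps=1.0, eta=1.5)

# Weak drag (eta < eps): start near the shifted stable point -> librates.
sigma_eq = math.asin(0.5)                     # sin(sigma_eq) = eta/eps
_, _, dev = integrate(sigma_eq + 0.1, 0.0, eps=1.0, eta=0.5, T=50.0)
print(x_pass, dev)   # x_pass is strongly negative, dev stays small
```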
\begin{figure}[h] \centering \vglue0.1cm \hglue0.2cm \includegraphics[width=5truecm,height=4truecm]{cart_res11_1_e=0_005_i=80.pdf} \includegraphics[width=5truecm,height=4truecm]{cart_res12_1_e=0_005_i=70_Am=0_01.pdf} \includegraphics[width=5truecm,height=4truecm]{cart_res13_1_e=0_005_i=75_Am=0_01_v2.pdf}\\ \vglue-0.6cm \includegraphics[width=5truecm,height=4truecm]{cart_res14_1_e=0_005_i=60_Am=0_0014.pdf} \includegraphics[width=5truecm,height=4truecm]{cart_res14_1_e=0_005_i=60_Am=0_01.pdf} \vglue0.4cm \caption{FLI (using the DMLR) for the 11:1, 12:1, 13:1, 14:1 resonances for $e=0.005$, $\omega=0^o$, $\Omega=0^o$. Top left: 11:1 resonance for $i=80^o$; top middle: 12:1 resonance for $i=70^o$, mean atmospheric density and $B=220$ $[cm^2/kg]$; top right: 13:1 resonance for $i=75^o$, mean atmospheric density and $B=220$ $[cm^2/kg]$; bottom: 14:1 resonance for $i=60^o$, mean atmospheric density and $B=30$ $[cm^2/kg]$ (left panel), respectively $B=220$ $[cm^2/kg]$ (right panel). The time span is $1500$ sidereal days (about $4$ years).} \label{fig:cartography} \end{figure} \begin{figure}[h] \centering \vglue0.1cm \hglue0.2cm \includegraphics[width=6truecm,height=5truecm]{y_pass_res14_1_e=0_005_i=70_Am=0_01.pdf} \includegraphics[width=6truecm,height=5truecm]{y_capture_res12_1_e=0_005_i=70_Am=0_01_sig=100.pdf} \vglue0.8cm \caption{Passage through the 14:1 resonance (left) and temporary capture into the 12:1 resonance (right). The plots are obtained for $e=0.005$, $i=70^o$, $\omega=0^o$, $\Omega=0^o$, $\sigma_{m1}=100^o$ and $B=220$ $[cm^2/kg]$. } \label{fig:pass_capture} \end{figure} \subsection{Existence of equilibrium points} \label{sec:existence} Using the toy model introduced in Section~\ref{sec:toy}, we can prove the following result. 
\begin{theorem} \label{Theorem:existence} For fixed values of $e\in [0,0.02]$ and $i\in [0^o,120^o]$ (or equivalently, given $\widetilde{G}_0$ and $\widetilde{H}_0$ in the corresponding intervals), let $(\sigma_{m1}^{(0)}$, $\widetilde{L}_0)$ be an equilibrium point for the model described by the Hamiltonian \equ{toy_ham_canonical}. Let $\widetilde{\mathcal{A}}^{(m)}$ be as in \equ{hAphi_tilde} and $D^{(m)}_{_L}$ as in \equ{diss_D}; assume that $\eta$, $\varepsilon$ satisfy the inequalities: \beqa{existence_condition} &&\left| \frac{\eta D^{(m)}_{_L} (\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0)}{\varepsilon \widetilde{\mathcal{A}}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0)} \right| + \frac{ 2 \varepsilon \Bigl( \widetilde{{\mathcal A}}_{,L}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0)\Bigr)^2 +2\eta \left| \widetilde{{\mathcal A}}_{,L}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0) D_{L,L}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0)\right|}{ | \widetilde{h}_{,LL}^{(m)} (\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0) | \ \widetilde{{\mathcal A}}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0)} \leq 1-\delta\ ,\nonumber\\ \nonumber\\ &&\qquad\qquad\varepsilon^2<\gamma\ \delta \end{eqnarray} for some constants $0<\delta<1$ and $\gamma>0$. Then, the dissipative toy model described by the equations \equ{toy_canonical_eq_final} admits equilibrium points. At first order in $\eta$, the point ($\sigma_{m1}^{(1)}$, $\widetilde{L}_1$) defined by \begin{equation}\label{eq_L_sigma} \sigma_{m1}^{(1)}= \sigma_{m1}^{(0)}+ \frac{D^{(m)}_{_L} (\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0)} {\varepsilon \widetilde{ \mathcal{A}}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0) \cos (\sigma_{m1}^{(0)}-\widetilde{\varphi}^{(m)}) }\eta \ , \qquad \widetilde{L}_1=\widetilde{L}_0 \end{equation} is an equilibrium point for the dissipative model.
\end{theorem} {\bf Proof.} Since $(\sigma_{m1}^{(0)}$, $\widetilde{L}_0)$ is an equilibrium point for the conservative model \equ{toy_ham_canonical}, one has \beqa{eq_points_cons_cond} \widetilde{h}_{,L}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0)+ \varepsilon \widetilde{\mathcal{A}}_{,L}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0) \cos (\sigma_{m1}^{(0)}-\widetilde{\varphi}^{(m)}( \widetilde{G}_0, \widetilde{H}_0))&=&0\ ,\nonumber\\ \qquad \sin(\sigma_{m1}^{(0)}-\widetilde{\varphi}^{(m)}( \widetilde{G}_0, \widetilde{H}_0))&=&0\ . \end{eqnarray} The relations \eqref{eq_points_cons_cond} represent an uncoupled system of two equations. The second of \eqref{eq_points_cons_cond} provides two values for $\sigma_{m1}^{(0)}$ in the interval $[0^o, 360^o)$. Once $\sigma_{m1}^{(0)}$ is known, $\widetilde{L}_0$ is found by solving the first of \eqref{eq_points_cons_cond} for fixed values of $\widetilde{G}_0$, $\widetilde{H}_0$. It is important to stress that, since $\varepsilon$ is small, $\widetilde{L}_0$ has the form $ \widetilde{L}_{0}=\widetilde{L}^{sec} + \varepsilon L_0^* +O(\varepsilon^2)$, where $ L_0^*$ is independent of $\varepsilon$ and $\widetilde{L}^{sec}$ satisfies the equation $$\widetilde{h}_{,L}^{(m)}(\widetilde{L}^{sec}, \widetilde{G}_0, \widetilde{H}_0)=0\ .$$ Inserting $ \widetilde{L}_{0}=\widetilde{L}^{sec} + \varepsilon L_0^* +O(\varepsilon^2)$ in the first of \eqref{eq_points_cons_cond} and expanding to the first order in $\varepsilon$, we find that $\widetilde{L}_0$ has the form \begin{equation}\label{Lsec} \widetilde{L}_{0}=\widetilde{L}^{sec} \pm \varepsilon \ \frac{\widetilde{\mathcal{A}}_{,L}^{(m)} (\widetilde{L}^{sec}, \widetilde{G}_0, \widetilde{H}_0)}{\widetilde{h}_{,LL}^{(m)}(\widetilde{L}^{sec}, \widetilde{G}_0, \widetilde{H}_0)} +O(\varepsilon^2)\ , \end{equation} where the signs $\pm$ correspond to the two solutions of the second of \eqref{eq_points_cons_cond}. 
In obtaining \eqref{Lsec}, we took into account that the form of $\widetilde{h}^{(m)}$ in \equ{hAphi_tilde} ensures that $\widetilde{h}_{,LL}^{(m)}$ cannot vanish for the resonances and parameter values considered in this work (notably, $J_2$ is sufficiently small). On the other hand, for the dissipative toy model we have the following coupled equations for the determination of an equilibrium point, say $(\sigma_{m1}^{(d)}$, $\widetilde{L}_d)$: \beqa{eq_points_dissipative_equations} \widetilde{h}_{,L}^{(m)}(\widetilde{L}_d, \widetilde{G}_0, \widetilde{H}_0)+ \varepsilon \widetilde{\mathcal{A}}_{,L}^{(m)}(\widetilde{L}_d, \widetilde{G}_0, \widetilde{H}_0) \cos (\sigma_{m1}^{(d)}-\widetilde{\varphi}^{(m)}( \widetilde{G}_0, \widetilde{H}_0))&=&0\ ,\nonumber\\ \qquad \varepsilon \widetilde{\mathcal{A}}^{(m)}(\widetilde{L}_d, \widetilde{G}_0, \widetilde{H}_0) \sin (\sigma_{m1}^{(d)}-\widetilde{\varphi}^{(m)}( \widetilde{G}_0, \widetilde{H}_0)) -\eta D_L^{(m)} ( \widetilde{L}_d, \widetilde{G}_0, \widetilde{H}_0)&=&0\ . \end{eqnarray} The first of \eqref{eq_points_dissipative_equations} can always be satisfied; that is, for any value of $\sigma_{m1}^{(d)}$ in the interval $[0^o, 360^o)$ we may find a value $\widetilde{L}_d$ which satisfies this equation. However, the second of \eqref{eq_points_dissipative_equations} is satisfied only if the dissipative effects do not exceed a threshold value. To show this, let us fix an arbitrary value of $\sigma_{m1}^{(d)}$ in the interval $[0^o, 360^o)$ and let $\widetilde{L}_d^\sigma$ be such that $(\sigma_{m1}^{(d)}, \widetilde{L}_d^\sigma)$ satisfies the first of \eqref{eq_points_dissipative_equations}.
Using the same argument as the one used to obtain \eqref{Lsec}, we deduce that $\widetilde{L}_d^\sigma$ has the form \begin{equation}\label{Ldis} \widetilde{L}_{d}^\sigma=\widetilde{L}^{sec} - \varepsilon \ \frac{\widetilde{\mathcal{A}}_{,L}^{(m)} (\widetilde{L}^{sec}, \widetilde{G}_0, \widetilde{H}_0) \cos (\sigma_{m1}^{(d)}-\widetilde{\varphi}^{(m)}( \widetilde{G}_0, \widetilde{H}_0))}{\widetilde{h}_{,LL}^{(m)}(\widetilde{L}^{sec}, \widetilde{G}_0, \widetilde{H}_0)} +O(\varepsilon^2)\ . \end{equation} Now, we note that if $f$ is a differentiable function of $\widetilde{L}$, then in view of the relation $ \widetilde{L}^{sec}=\widetilde{L}_{0} - \varepsilon L_0^* +O(\varepsilon^2)$, we can write $f(\widetilde{L}^{sec})=f(\widetilde{L}_{0})-\varepsilon L_0^*\ f_{,L}(\widetilde{L}_0)+O(\varepsilon^2)$. Using this argument, from \eqref{Lsec} and \eqref{Ldis} we get \begin{equation}\label{Ldis0} \widetilde{L}_{d}^\sigma=\widetilde{L}_0 - \varepsilon \ \frac{\widetilde{\mathcal{A}}_{,L}^{(m)} (\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0) \Bigl(\cos (\sigma_{m1}^{(d)}-\widetilde{\varphi}^{(m)}( \widetilde{G}_0, \widetilde{H}_0)) \mp 1\Bigr)}{\widetilde{h}_{,LL}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0)} +O(\varepsilon^2)\ . 
\end{equation} Inserting $\widetilde{L}_{d}^\sigma$ given by \eqref{Ldis0} in the second of \eqref{eq_points_dissipative_equations}, then after some computations we get the following equation for the unknown variable $\sigma_{m1}^{(d)}$ \begin{equation} \label{sincond} \sin (\sigma_{m1}^{(d)}-\widetilde{\varphi}^{(m)}) =\frac{\eta D_L^{(m)}}{\varepsilon \widetilde{{\mathcal A}}^{(m)}}+ \frac{\widetilde{{\mathcal A}}_{,L}^{(m)} \ \Bigl(\cos (\sigma_{m1}^{(d)}-\widetilde{\varphi}^{(m)}) \mp 1\Bigr)}{ \widetilde{h}_{,LL}^{(m)} \widetilde{{\mathcal A}}^{(m)}} \Bigl( \varepsilon \widetilde{{\mathcal A}}_{,L}^{(m)} \sin (\sigma_{m1}^{(d)}-\widetilde{\varphi}^{(m)})-\eta D_{L,L}^{(m)} \Bigr) +O(\varepsilon^2)\,, \end{equation} where all functions are evaluated at $\widetilde{L}_0$, $\widetilde{G}_0$, $\widetilde{H}_0$. In view of \eqref{existence_condition}, bounding the terms of second order in $\varepsilon$ by $C_0\varepsilon^2$ for a suitable constant $C_0>0$, we have $$ \Bigl| \frac{\eta D_L^{(m)}}{\varepsilon \widetilde{{\mathcal A}}^{(m)}}+ \frac{\widetilde{{\mathcal A}}_{,L}^{(m)} \ \Bigl(\cos (\sigma_{m1}^{(d)}-\widetilde{\varphi}^{(m)}) \mp 1\Bigr)}{ \widetilde{h}_{,LL}^{(m)} \widetilde{{\mathcal A}}^{(m)}} \Bigl( \varepsilon \widetilde{{\mathcal A}}_{,L}^{(m)} \sin (\sigma_{m1}^{(d)}-\widetilde{\varphi}^{(m)})-\eta D_{L,L}^{(m)} \Bigr)\Bigr| +C_0\varepsilon^2$$ $$\hspace{3cm} \leq \left| \frac{\eta D^{(m)}_{_L} }{\varepsilon \widetilde{\mathcal{A}}^{(m)}} \right| + \frac{ 2 \varepsilon \Bigl(\widetilde{{\mathcal A}}_{,L}^{(m)}\Bigr)^2 +2 \eta |\widetilde{{\mathcal A}}_{,L}^{(m)} D_{L,L}^{(m)}|}{ |\widetilde{h}_{,LL}^{(m)}| \ \widetilde{{\mathcal A}}^{(m)}} +C_0\varepsilon^2 \leq 1-\delta+C_0 \varepsilon^2 \leq 1\ , $$ for $\varepsilon$ sufficiently small with respect to $\delta$ as in the second of \eqref{existence_condition} with $\gamma \equiv 1/C_0$. 
Therefore, if the inequalities \eqref{existence_condition} are satisfied, then the right hand side of \eqref{sincond} is at most one in absolute value, which implies that the dissipative toy model admits equilibrium points. Assuming that $\eta$ is sufficiently small, so that \eqref{existence_condition} holds, at first order in $\eta$ it is natural to look for an equilibrium point of the dissipative toy model \eqref{toy_canonical_eq_final} of the type $(\sigma_{m1}^{(1)}$, $\widetilde{L}_1)$, where \begin{equation}\label{Lsig_form} \sigma_{m1}^{(1)}=\sigma_{m1}^{(0)}+ \eta \sigma_{m1}^*+O(\eta^2)\,, \qquad \widetilde{L}_1=\widetilde{L}_0+\eta L^*+O(\eta^2) \end{equation} with $L^*$ and $\sigma_{m1}^*$ independent of $\eta$. In fact, we shall suppose that $\eta$ is smaller than $\varepsilon$, thus ensuring that $\sigma_{m1}^{(1)}$ is close to $\sigma_{m1}^{(0)}$. As a consequence, if $g$ is a differentiable function of $\sigma_{m1}$, then it follows that $g(\sigma_{m1}^{(1)})=g(\sigma_{m1}^{(0)})+\eta \sigma_{m1}^*\ g_{,\sigma}(\sigma_{m1}^{(0)})+O(\eta^2)$.
Inserting \eqref{Lsig_form} in the right hand side of \eqref{toy_canonical_eq_final} and using \eqref{eq_points_cons_cond}, we obtain after some computations: \begin{equation}\label{eq_points_diss_cond} \begin{split} &\widetilde{h}_{,L}^{(m)} (\widetilde{L}_1, \widetilde{G}_0, \widetilde{H}_0)+\varepsilon \widetilde{\mathcal{A}}_{,L}^{(m)} (\widetilde{L}_1, \widetilde{G}_0, \widetilde{H}_0) \cos (\sigma_{m1}^{(1)} -\widetilde{\varphi}^{(m)}(\widetilde{G}_0, \widetilde{H}_0))\\ &= \eta [\widetilde{h}_{,LL}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0) +\varepsilon \widetilde{\mathcal{A}}_{,LL}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0) \cos(\sigma_{m1}^{(0)}-\widetilde{\varphi}^{(m)}( \widetilde{G}_0, \widetilde{H}_0))] L^*+O(\eta^2)\,,\\ & \varepsilon \widetilde{{\mathcal A}}^{(m)} (\widetilde{L}_1, \widetilde{G}_0, \widetilde{H}_0) \sin(\sigma_{m1}^{(1)}-\widetilde{\varphi}^{(m)}( \widetilde{G}_0, \widetilde{H}_0)) - \eta D_{L}^{(m)} (\widetilde{L}_1, \widetilde{G}_0, \widetilde{H}_0) \\ &=\eta [\varepsilon \widetilde{\mathcal{A}}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0) \sigma_{m1}^*\cos (\sigma_{m1}^{(0)}-\widetilde{\varphi}^{(m)}(\widetilde{G}_0, \widetilde{H}_0))- D_L^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0)] +O(\eta^2)\,.\\ \end{split} \end{equation} Taking into account that $\varepsilon$ is a small parameter, it follows that the quantity in brackets at the right hand side of the first of $\eqref{eq_points_diss_cond}$ is different from zero for $\varepsilon$ sufficiently small. 
Therefore, for $(\sigma_{m1}^{(1)}$, $\widetilde{L}_1)$ to be an equilibrium point (at first order in $\eta$) for the dissipative toy model, one should have $L^*=0$ and, consequently, \begin{equation}\label{Lsigma_star} \sigma_{m1}^*= \frac{D_L^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0)} {\varepsilon \widetilde{\mathcal{A}}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0) \cos(\sigma_{m1}^{(0)}-\widetilde{\varphi}^{(m)}(\widetilde{G}_0, \widetilde{H}_0))}\ . \end{equation} From \eqref{Lsig_form} and \eqref{Lsigma_star}, we get \eqref{eq_L_sigma}. $\square$ \vskip.1in \begin{remark} Since $\varepsilon$ and $\eta$ are small (for instance, for the 14:1 resonance the parameter $\varepsilon$ is of the order of $10^{-9}$, while $\eta$ is smaller than $\varepsilon$), the existence condition can be replaced by the following simplified inequality $$ \left| \frac{\eta D^{(m)}_{_L} (\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0)}{\varepsilon \widetilde{\mathcal{A}}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0)} \right| \leq 1-\delta\ , $$ where $\varepsilon$ and $\eta$ satisfy the relation $$ \gamma_1 \varepsilon+\gamma_2 \eta+ \gamma_3 \varepsilon^2< \delta, $$ for some positive constants $\gamma_1$, $\gamma_2$ and $\gamma_3$. \end{remark} Besides providing the existence condition \eqref{existence_condition}, Theorem~\ref{Theorem:existence} shows that a change in the magnitude of the dissipative effects leads to a shift of the equilibrium points along the $\sigma_{m1}$ axis, while $\widetilde{L}$ (or equivalently the semimajor axis $a$) remains unchanged.
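The first-order shift \eqref{eq_L_sigma} can be verified numerically on a toy instance. Below, hypothetical model functions are used, with $\widetilde{\mathcal A}^{(m)}$ constant in $\widetilde L$ and $\widetilde\varphi^{(m)}=0$, so that the conservative equilibrium sits at $\sigma_{m1}^{(0)}=0$ and only the $\dot{\widetilde L}$ equation needs to be inspected at $\widetilde L=\widetilde L_0$; the residual at the shifted point is smaller than the unshifted one by several orders of magnitude, consistent with an $O(\eta^2)$ remainder.

```python
import math

# Numerical check of the first-order equilibrium shift on a toy instance
# with hypothetical model functions (A constant in L, phi = 0, so that the
# conservative equilibrium is sigma0 = 0 with h_L(L0) = 0).
epsA, DL = 2.0e-3, 4.0          # hypothetical values of eps*A and D_L
phi, sigma0 = 0.0, 0.0

def Ldot(sigma, eta):
    # second equation of the dissipative toy model, evaluated at L = L0
    return epsA * math.sin(sigma - phi) - eta * DL

sigma_star = DL / (epsA * math.cos(sigma0 - phi))   # first-order shift

eta = 1e-6
r_shifted = abs(Ldot(sigma0 + eta * sigma_star, eta))
r_unshifted = abs(Ldot(sigma0, eta))
print(r_shifted, r_unshifted)
# the residual at the shifted point is tiny (O(eta^2) remainder; at this
# symmetric point even O(eta^3)), while the unshifted residual is O(eta)
```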
Indeed, in the bottom panels of Figure~\ref{fig:cartography}, obtained for $B=30\, [cm^2/kg]$ (left) and $B=220\, [cm^2/kg]$ (right), the centers of the islands are located at about $\sigma_{14,1}=48^o$ and $\sigma_{14,1}=60^o$, respectively, thus revealing the shift of the equilibrium points along the $\sigma_{14,1}$ axis, while confirming that the value of $\widetilde L$ at equilibrium does not change. \subsection{Type of equilibrium points} \label{sec:type} In Section~\ref{sec:existence} we investigated the existence of equilibrium points without specifying their character. Since the conservative toy model reduces to a pendulum problem, its equilibrium points are centers and saddles. Therefore, it remains to clarify the nature of the equilibria for the dissipative toy model. The link between the character of the equilibria in the conservative and dissipative frameworks is given by the following result. \begin{theorem}\label{Theorem:type} For given values of $\widetilde{G}_0$ and $\widetilde{H}_0$, let $(\sigma_{m1}^{(0)}, \widetilde{L}_0)$, $m\in \{11,12,13,14\}$, be an equilibrium point for the conservative toy model described by the Hamiltonian \equ{toy_ham_canonical}. Assume that the existence condition \eqref{existence_condition} is satisfied for some $0<\delta<1$, $0<\varepsilon<1$, and that $\eta<\varepsilon$. Then, the following statement holds true: if $(\sigma_{m1}^{(0)}, \widetilde{L}_0)$ is a center (respectively a saddle) for the conservative toy model, then the equilibrium point at first order in $\eta$, say $(\sigma_{m1}^{(1)}, \widetilde{L}_1)$, defined by \eqref{eq_L_sigma}, is an unstable spiral (respectively a saddle) for the dissipative toy model described by \equ{toy_canonical_eq_final}.
\end{theorem} {\bf Proof.} Using \equ{eq_points_cons_cond}, the Jacobian matrix associated with the conservative case has the form: $$ {J}_{C}=\left(% \begin{array}{cc} 0 & \widetilde{h}^{(m)}_{,LL} +\varepsilon \widetilde{\mathcal{A}}_{,LL}^{(m)} \cos(\sigma_{m1}^{(0)}- \widetilde{\varphi}^{(m)}) \\ \varepsilon \widetilde{\mathcal{A}}^{(m)} \cos(\sigma_{m1}^{(0)}- \widetilde{\varphi}^{(m)}) & 0 \\ \end{array}% \right)\ , $$ where all functions are computed at $(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0)$. Since $\varepsilon$ is a small parameter and $\widetilde{h}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0)$, $\widetilde{\mathcal{A}}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0)$ have the same order of magnitude, the sign of $ \det (J_C)$ is given by the expression $ - \varepsilon \widetilde{h}^{(m)}_{,LL}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0) \widetilde{\mathcal{A}}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0) \cos(\sigma_{m1}^{(0)}- \widetilde{\varphi}^{(m)}(\widetilde{G}_0, \widetilde{H}_0))$, provided $\varepsilon$ is sufficiently small. Moreover, taking into account that $\widetilde{h}^{(m)}_{,LL}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0)<0$ (provided $\alpha$ in \equ{hAphi_tilde} is sufficiently small) and (as we mentioned in Section~\ref{sec:resonant}) $\widetilde{\mathcal{A}}^{(m)}(\widetilde{L}_0, \widetilde{G}_0, \widetilde{H}_0)>0$, then for $\sigma_{m1}^{(0)}=\widetilde{\varphi}^{(m)}(\widetilde{G}_0, \widetilde{H}_0)+2 k \pi $, $k \in \mathbb{Z}$, one has $\det (J_C) >0$. As a consequence, $(\sigma_{m1}^{(0)}, \widetilde{L}_0)$ is a center, while for $\sigma_{m1}^{(0)}=\widetilde{\varphi}^{(m)}+\pi +2 k \pi $, $k \in \mathbb{Z}$, the equilibrium point $(\sigma_{m1}^{(0)}, \widetilde{L}_0)$ is a saddle.
Assuming that the existence condition \eqref{existence_condition} is satisfied, then for the dissipative case, the Jacobian matrix is \beq{Jdissipative} {J}_{D}=\left(% \begin{array}{cc} - \varepsilon \widetilde{\mathcal{A}}_{,L}^{(m)} \sin(\sigma_{m1}^{(1)}- \widetilde{\varphi}^{(m)}) & \widetilde{h}^{(m)}_{,LL} +\varepsilon \widetilde{\mathcal{A}}_{,LL}^{(m)} \cos(\sigma_{m1}^{(1)}- \widetilde{\varphi}^{(m)}) \\ \varepsilon \widetilde{\mathcal{A}}^{(m)} \cos(\sigma_{m1}^{(1)}- \widetilde{\varphi}^{(m)}) & \varepsilon \widetilde{\mathcal{A}}_{,L}^{(m)} \sin(\sigma_{m1}^{(1)}- \widetilde{\varphi}^{(m)})-\eta D_{L,L}^{(m)} \\ \end{array}% \right)\ , \end{equation} where all functions are evaluated at $\widetilde{L}_1$, $\widetilde{G}_0$, $\widetilde{H}_0$. Using that $\eta$ is smaller than $\varepsilon$ (thus ensuring that $\sigma_{m1}^{(1)}$ is close to $\sigma_{m1}^{(0)}$), then if $f$ and $g$ are two differentiable functions of $\widetilde{L}$ and $\sigma_{m1}$, respectively, in view of \eqref{Lsig_form} and the fact that $L^*=0$, we can write \begin{equation}\label{fgL0L1} \begin{split} & f(\widetilde{L}_1)= f(\widetilde{L}_0)+ f_{,L}(\widetilde{L}_0) (\widetilde{L}_1-\widetilde{L}_0)+O(\eta^2)= f(\widetilde{L}_0)+O(\eta^2)\,\\ & g(\sigma_{m1}^{(1)})= g(\sigma_{m1}^{(0)})+g_{,\sigma}(\sigma_{m1}^{(0)}) (\sigma_{m1}^{(1)}-\sigma_{m1}^{(0)})+O(\eta^2)=g(\sigma_{m1}^{(0)})+\eta g_{,\sigma}(\sigma_{m1}^{(0)}) \sigma_{m1}^{*}+O(\eta^2). 
\end{split} \end{equation} From \eqref{eq_points_cons_cond}, \eqref{Jdissipative} and \eqref{fgL0L1} it follows that (here $tr$ and $\det$ denote, respectively, the trace and the determinant of the matrix) \begin{equation}\label{JDtrace} \begin{split} &tr (J_D)= -\eta D_{L,L}^{(m)}+ O (\eta^2)\,,\\ &\det(J_D)= -\varepsilon \widetilde{h}^{(m)}_{,LL} \widetilde{\mathcal{A}}^{(m)} \cos(\sigma_{m1}^{(0)}- \widetilde{\varphi}^{(m)})+ O (\eta \varepsilon)+ O (\eta^2)+ O(\varepsilon^2)\,,\\ & \Bigl(tr (J_D)\Bigr)^2/4-\det(J_D)=\varepsilon \widetilde{h}^{(m)}_{,LL} \widetilde{\mathcal{A}}^{(m)} \cos(\sigma_{m1}^{(0)}- \widetilde{\varphi}^{(m)})+ O (\eta \varepsilon)+ O (\eta^2)+ O(\varepsilon^2)\ , \end{split} \end{equation} where all functions are evaluated at $\widetilde{L}_0$, $\widetilde{G}_0$, $\widetilde{H}_0$. In order to establish the nature of the equilibrium point $(\sigma_{m1}^{(1)}, \widetilde{L}_1)$, we must know the signs of the above quantities. Therefore, let us discuss in more detail the sign of $D_{L,L}^{(m)}$. In view of the first of \eqref{diss_D}, we get \begin{equation}\label{D_derivative} D_{L,L}^{(m)}=-\frac{\omega_E}{\mu_E} \biggl(1-\frac{\omega_E \widetilde{L}^3_0 (\widetilde{H}_0+m \widetilde{L}_0)} {\mu_E^2 (\widetilde{G}_0+\widetilde{L}_0)} \biggr) \frac{[3 \widetilde{L}^2_0 (\widetilde{H}_0+m \widetilde{L}_0)+m \widetilde{L}_0^3](\widetilde{G}_0+\widetilde{L}_0)- \widetilde{L}_0^3 (\widetilde{H}_0+m \widetilde{L}_0)}{(\widetilde{G}_0+\widetilde{L}_0)^2}\ . \end{equation} To evaluate the sign of the above expression, we take into account that the eccentricity is a small quantity, say $e=O(\epsilon)$ with $\epsilon$ small.
Therefore, from \eqref{LGH_aei} and \eqref{canonical_transformation}, it follows that $\widetilde{G}_0=O(\epsilon)$, $\widetilde{H}_0=\widetilde{L}_0(\cos i -m)+O(\epsilon)$, which leads to $$ [3 \widetilde{L}^2_0 (\widetilde{H}_0+m \widetilde{L}_0)+m \widetilde{L}_0^3](\widetilde{G}_0+\widetilde{L}_0) -\widetilde{L}_0^3 (\widetilde{H}_0+m \widetilde{L}_0) = \widetilde{L}_0^4 (2\cos i +m)+O(\epsilon) >0 $$ for $m>2$. Since the term in round brackets at the right hand side of \eqref{D_derivative} is positive for all resonances within the geostationary distance, we deduce that $D_{L,L}^{(m)}$ is negative and, as a consequence, that $tr (J_D)$ is positive, provided $\eta$ is sufficiently small. We are therefore led to the following conclusion: if $(\sigma_{m1}^{(0)}, \widetilde{L}_0)$ is a center for the conservative model, then, using that $\eta$ is smaller than $\varepsilon$ (which ensures that $\sigma_{m1}^{(1)}$ is close to $\sigma_{m1}^{(0)}$), one has $tr (J_D)>0$, $\det(J_D) > 0$, $\Bigl(tr (J_D)\Bigr)^2/4-\det(J_D) <0$ and, as a consequence, $(\sigma_{m1}^{(1)}, \widetilde{L}_1)$ is an unstable spiral for the dissipative toy model. Otherwise, if $(\sigma_{m1}^{(0)}, \widetilde{L}_0)$ is a saddle for the conservative model, then $\det(J_D) < 0$, $\Bigl(tr (J_D)\Bigr)^2/4-\det(J_D) >0$, which means that $(\sigma_{m1}^{(1)}, \widetilde{L}_1)$ is a saddle point for the dissipative toy model. $\square$ \subsection{Location of equilibrium points} \label{sec:location} \begin{figure}[h] \centering \vglue0.1cm \hglue0.2cm \includegraphics[width=6truecm,height=5truecm]{eq_ares_11_1.pdf} \includegraphics[width=6truecm,height=5truecm]{eq_ares_12_1_Am=0_01_mean.pdf} \includegraphics[width=6truecm,height=5truecm]{eq_ares_13_1_Am=0_01_mean.pdf} \includegraphics[width=6truecm,height=5truecm]{eq_ares_14_1_Am=0_01_mean.pdf} \vglue0.5cm \caption{Position of the equilibrium points obtained using \equ{eq_L_sigma} on the semi-major axis as a function of eccentricity and inclination.
The color bar provides the distance of the equilibrium points from the Earth's center. From top left to bottom right: 11:1, 12:1, 13:1, 14:1 resonances. Except for the 11:1 resonance, obtained within the conservative case, all plots are given for $B=220\, [cm^2/kg]$ and mean values of the atmospheric density. For the white zones, the existence condition \eqref{existence_condition} is not satisfied, which implies that the equilibrium points do not exist.} \label{fig:eq_a_axis} \end{figure} \begin{figure}[h] \centering \vglue0.1cm \hglue0.2cm \includegraphics[width=6truecm,height=5truecm]{eq_sigma_11_1.pdf} \includegraphics[width=6truecm,height=5truecm]{eq_sigma_drag12_1_Am=0_01_mean.pdf} \includegraphics[width=6truecm,height=5truecm]{eq_sigma_drag13_1_Am=0_01_mean.pdf} \includegraphics[width=6truecm,height=5truecm]{eq_sigma_drag14_1_Am=0_01_mean.pdf} \vglue0.5cm \caption{Location of the equilibrium points (centers for the 11:1 resonance and spirals for the other resonances) on the $\sigma_{m1}$ axis obtained using \equ{eq_L_sigma}. The color bar provides the position of the equilibria in degrees. From top left to bottom right: 11:1, 12:1, 13:1, 14:1 resonances. Except for the 11:1 resonance, obtained within the conservative setting, all plots are derived for $B=220\, [cm^2/kg]$ and mean values of the atmospheric density. For the white zones, the existence condition \eqref{existence_condition} is not satisfied, which implies that the equilibrium points do not exist.} \label{fig:eq_sigma_axis} \end{figure} \begin{figure}[h] \centering \vglue0.1cm \hglue0.2cm \includegraphics[width=6truecm,height=5truecm]{eq_sigma_drag13_1_i=75.pdf} \includegraphics[width=6truecm,height=5truecm]{eq_sigma_drag14_1_i=60.pdf} \vglue0.8cm \caption{Location of the spiral equilibrium points on the $\sigma_{m1}$ axis, expressed in degrees, as a function of the ballistic coefficient $B$.
The thinner lines are obtained for minimum values of the atmospheric density, the dotted lines correspond to mean atmospheric densities, while the thicker curves provide the results for maximum values of the atmospheric density (see Table~\ref{table:rho}). Left: the 13:1 resonance for $i=75^o$, $e=0.005$, $\omega=0^o$, $\Omega=0^o$. Right: the 14:1 resonance for $i=60^o$, $e=0.005$, $\omega=0^o$, $\Omega=0^o$.} \label{fig:shift_sigma} \end{figure} Using the toy model introduced in Section~\ref{sec:toy}, we investigate the existence and location of the equilibrium points for each resonance and for all values of eccentricity, inclination, ballistic coefficient and atmospheric density. We should stress that, although all equilibria are unstable for the dissipative model, the instability effects are small in the case of spiral points, in the sense that a body placed close to such a point will remain for a long time in its neighborhood. This will become evident in Section~\ref{sec:results} where various simulations are presented. We report in Figures~\ref{fig:eq_a_axis}, \ref{fig:eq_sigma_axis} the values of the semi-major axis and of the angles $\sigma_{m1}$ for the equilibrium points (the centers for the 11:1 resonance and the spiral points for the other resonances), as a function of eccentricity and inclination. The plots corresponding to the 11:1 resonance are obtained within the conservative framework, while all other plots are obtained by using the dissipative toy model with $B=220\, [cm^2/kg]$ and for mean values of the atmospheric density. The white color in Figures~\ref{fig:eq_a_axis}, \ref{fig:eq_sigma_axis} shows the regions for which the existence condition \eqref{existence_condition} is not satisfied. In other words, the dissipative effects are larger than the resonant ones, which implies that the equilibrium points do not exist.
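The type of each equilibrium follows from the signs of $tr (J_D)$ and $\det(J_D)$ alone, as in the proof of the previous subsection. A minimal sketch of this trace--determinant classification (the numerical values below are purely illustrative, not the actual derivatives of the toy model):

```python
def classify_equilibrium(trace, det):
    """Classify a fixed point of a planar system from tr(J) and det(J)."""
    if det < 0:
        return "saddle"
    disc = trace**2 / 4.0 - det      # sign decides real vs complex eigenvalues
    if disc < 0:
        return "unstable spiral" if trace > 0 else "stable spiral"
    return "unstable node" if trace > 0 else "stable node"

# The two cases of the theorem: tr>0, det>0, tr^2/4 - det<0 gives an unstable
# spiral; det<0 gives a saddle.
print(classify_equilibrium(0.01, 1.0))    # unstable spiral
print(classify_equilibrium(0.01, -1.0))   # saddle
```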
Some transcritical bifurcation phenomena, as described in \cite{CGmajor,CGminor}, occur for the 12:1 and 14:1 resonances at $i_0=85.99^o$ and at $i_0=86.18^o$, respectively (see the location of the unstable spiral equilibrium points on the $\sigma_{m1}$ axis close to these inclinations on the right plots of Figure~\ref{fig:eq_sigma_axis}). For example, in the case of the 14:1 resonance, the spiral point is located somewhere between $0^o$ and $30^o$ for $i \in [70^o , 80^o]$, while for $i>90^o$ the position of the spiral point is close to $200^o$. A similar remark can be made for the resonance 12:1. The reason for the occurrence of this phenomenon is the change of sign of a specific resonant term. More precisely, from the set $\mathcal{M}_0^{14}$, the resonant term with the largest magnitude at high inclinations is $\mathcal{T}_{1\!5 \, 1\!4\,7\,0}$. This term changes its sign precisely at $i_0=86.18^o$. Therefore, in the neighborhood of $i_0$ it happens that for $i<i_0$ the spiral points are located close to the solution of $14\, \lambda_{15,14} -90^o = 12^o$, while for $i>i_0$ the equilibrium points are located at about $\lambda_{15,14}=192^o$. Of course, since $\mathcal{M}_0^{14}$ contains five terms, the equilibria are not located exactly at these positions, but very close to them. In the case of the 12:1 resonance, $\mathcal{T}_{1\!5 \, 1\!2\,7\,0}$ is the resonant term with the greatest magnitude for large inclinations and it changes its sign at $i_0=85.99^o$. In view of Theorem~\ref{Theorem:existence}, it follows that for increasing values of $\eta$ (equivalently the ballistic coefficient and/or the atmospheric density) the white regions increase their area, while the {\it surviving} equilibria shift along the $\sigma_{m1}$ axis. For each inclination and eccentricity, one can compute the maximum value of $\eta$ up to which the inequality \eqref{existence_condition} is satisfied.
In the case of the resonances 12:1 and 13:1, the simulations show that the existence condition \eqref{existence_condition} is usually fulfilled for inclinations larger than about $40^o$, even if the ballistic coefficient is large. On the other hand, since the atmospheric density is much larger at the altitude of 880 $km$, with notable variations during a solar cycle, the dissipative effect has an important contribution for the 14:1 resonance. Figure~\ref{fig:shift_sigma} shows the location of spiral points on the $\sigma_{m1}$ axis as a function of the ballistic coefficient, for minimum (thin line), mean (dotted curve) and maximum (thick curve) atmospheric density in the case of the 13:1 resonance, for $i=75^o$ and $e=0.005$, as well as for the 14:1 resonance when $i=60^o$ and $e=0.005$. The equilibrium points of \eqref{toy_canonical_eq_final} have been numerically obtained via the bisection method. For the 13:1 resonance, even though $B$ varies on a large interval, all three curves are straight line segments. On the contrary, for the 14:1 resonance a curvature of the curves corresponding to the mean (dotted curve) and maximum (thick curve) atmospheric density is clearly visible for increasing values of $B$, thus pointing out the limits of the approximations \eqref{eq_L_sigma}, corresponding to the toy model. Besides, as the right panel of Figure~\ref{fig:shift_sigma} shows, the equilibrium points do not exist for $B>200$ $cm^2/kg$ and a maximum value for $\rho$, and for $B>924$ $cm^2/kg$ and a mean value of the atmospheric density, respectively. We remark that plots like those in Figure~\ref{fig:shift_sigma} can be used to analyze the shift of the equilibrium points on the $\sigma_{m 1}$ axis during a solar cycle. For instance, supposing that an infinitesimal body has the ballistic coefficient $B=150\, cm^2/kg$, then, within an interval of 11 years, the location of the spiral point varies between $48^o$ and $92^o$ for the 14:1 resonance, when $e=0.005$ and $i=60^o$.
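The bisection search used to locate the equilibria of \eqref{toy_canonical_eq_final} can be sketched as follows; the function \texttt{g} below is a stand-in for the actual equilibrium condition, whose explicit form depends on the resonant coefficients of the toy model:

```python
import math

def bisect(f, a, b, tol=1e-10):
    """Locate a zero of f in [a, b]; f(a) and f(b) must have opposite signs."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "no sign change on the bracket"
    while b - a > tol:
        c = 0.5 * (a + b)
        fc = f(c)
        if fa * fc <= 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return 0.5 * (a + b)

# Stand-in equilibrium condition: a resonant term balancing a constant
# drag-like term (sigma in radians); the true condition follows from the model.
g = lambda sigma: math.sin(sigma) - 0.3
sigma_eq = bisect(g, 0.0, math.pi / 2)
print(math.degrees(sigma_eq))   # about 17.46 degrees
```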
A satellite placed at, say, $\sigma_{1\!4 \, 1}=70^o$ will stay very close to the spiral point; otherwise, one should slightly correct its position to remain at the equilibrium point. \section{Solar cycle and third body effects}\label{sec:results} In this Section we consider a more complete model, which also takes into account the variation of the local density of the atmosphere as an effect of the solar cycle, as well as the perturbations induced by Sun and Moon. We provide numerical evidence that the analytical results obtained in the previous Sections are valid when a more complete physical model is considered. In particular, we show that an object (satellite) placed at an equilibrium point remains there for a long time (of the order of dozens of years), even if solar cycle and third body effects are taken into account. Thus, we provide strong evidence that these points can be exploited in practice by parking satellites in their close vicinity. We exemplify only the case of the 14:1 resonance. Since the dissipative effects gradually decrease in magnitude with the altitude, for the other resonances studied in this paper the results are definitely better. \begin{figure}[h] \centering \vglue0.1cm \hglue0.2cm \includegraphics[width=6truecm,height=5truecm]{cycle_rho2.pdf} \includegraphics[width=6truecm,height=5truecm]{y_e=0_005_i=70_Am=0_004545_sig=80_7230_sunmoonblack_v_nosonmoongreen.pdf} \vglue0.8cm \caption{Left: variation of density in $kg/m^3$ at the altitude of $800$ $km$, between the years 2000 and 2025, computed with the formula \eqref{rho_variation_cycle}.\\ Right: behavior of the semi-major axis for $B=100$ $[cm^2/kg]$ and the initial conditions $a=7230$ $km$, $e=0.005$, $i=70^o$, $\omega=0^o$, $\Omega=0^o$ and $\sigma_{1\!4\,1}=80^o$. The results obtained for the model that disregards the influence of Sun and Moon are represented with the green color, while the black color is used for the model that includes the attraction of Sun and Moon.
The initial epoch is J2000 (January 1, 2000, 12:00 GMT). } \label{fig:rho_solar_cycle} \end{figure} \begin{figure}[h] \centering \vglue0.1cm \hglue0.2cm \includegraphics[width=6truecm,height=5truecm]{y_res14_1_e=0_005_i=60_Am=0_00454545_v35.pdf} \includegraphics[width=6truecm,height=5truecm]{y_res14_1_e=0_005_i=60_Am=0_009090_v23.pdf} \vglue0.8cm \caption{Integration of several orbits showing the behavior of the semi-major axis inside the 14:1 resonance, for $B=100$ $[cm^2/kg]$ (left) and $B=200$ $[cm^2/kg]$ (right). The initial conditions, at the initial epoch J2000 (January 1, 2000, 12:00 GMT), are $a=7215.7$ $km$, $e=0.005$, $i=60^o$, $\omega=0^o$ and $\Omega=0^o$, while for the resonant angle we used the following values. Left: $\sigma_{1\!4\,1}=50^o$ (blue), $\sigma_{1\!4\,1}=110^o$ (green), $\sigma_{1\!4\,1}=130^o$ (red), $\sigma_{1\!4\,1}=150^o$ (black). Right: $\sigma_{1\!4\,1}=70^o$ (blue), $\sigma_{1\!4\,1}=80^o$ (black), $\sigma_{1\!4\,1}=90^o$ (red), $\sigma_{1\!4\,1}=100^o$ (purple), $\sigma_{1\!4\,1}=110^o$ (green).} \label{fig:14_1_inside} \end{figure} \begin{figure}[h] \centering \vglue0.1cm \hglue0.2cm \includegraphics[width=6truecm,height=5truecm]{y_res14_1_e=0_005_i=60_Am=0_00454545_sigma=130_inclination.pdf} \includegraphics[width=6truecm,height=5truecm]{y_res14_1_e=0_005_i=60_Am=0_004545_sig=1_2_7226.pdf} \vglue0.8cm \caption{Left: behavior of the inclination inside the 14:1 resonance, for $B=100$ $[cm^2/kg]$, $a=7215.7$ $km$, $e=0.005$, $i=60^o$, $\omega=0^o$, $\Omega=0^o$ and $\sigma_{1\!4\,1}=130^o$. For this orbit, the variation of the semi-major axis is represented in red color in the left panel of Figure~\ref{fig:14_1_inside}. \\ Right: passage through the 14:1 resonance and temporary capture into the 14:1 resonance.
The plot is obtained for $B=100$ $[cm^2/kg]$, $a=7226$ $km$, $e=0.005$, $i=60^o$, $\omega=0^o$, $\Omega=0^o$ and $\sigma_{1\!4\,1}=2^o$ for the black line (passage) and, respectively, $\sigma_{1\!4\,1}=1^o$ for the green line (capture). The initial epoch for all orbits is J2000 (January 1, 2000, 12:00 GMT).} \label{fig:14_1_outside} \end{figure} We suppose that the atmospheric density fluctuates with an 11-year cycle, as an effect of the solar activity. To mimic the solar cycle, we shall use the following simple formula, which allows the density to vary periodically between its limits, minimum and maximum, at an altitude $h$: \begin{equation}\label{rho_variation_cycle} \rho(h)=\frac{\rho_{max}(h)+\rho_{min}(h)}{2}+\frac{\rho_{max}(h)-\rho_{min}(h)}{2} \cos \Bigl(\frac{2 \pi t}{T}-\phi_0\Bigr)\,, \end{equation} where $\rho_{max}(h)$ and $\rho_{min}(h)$ are computed by using the relation \eqref{rho_h}, $T$ is the period of the solar cycle, equal to 11 years, $t$ is the time and $\phi_0$ is the phase angle. For instance, in the left panel of Figure~\ref{fig:rho_solar_cycle} we represent the variation of the density $\rho$ at the altitude of $h=800$ $km$, between the years 2000 and 2025. The solar activity depends on many factors and, of course, one could refine or propose other equations to model the variation of the density $\rho$. However, since our aim is to validate the analytical results presented in the previous Section, we shall keep the formulation as simple as possible. Besides the influence of the solar cycle, we also take into account the lunisolar perturbations. In this case, the conservative part is described by the Hamiltonian $$ \mathcal{K}=\mathcal{H} -\mathcal{R}_{Sun}-\mathcal{R}_{Moon}\ , $$ where $\mathcal{H}$ is the geopotential Hamiltonian \eqref{H}, while $\mathcal{R}_{Sun}$ and $\mathcal{R}_{Moon}$ are the solar and lunar disturbing functions.
We express these functions in terms of the orbital elements of both the perturbed and perturbing bodies by considering Kaula's expansion of the solar disturbing function (see \cite{Kaula1962}), and Lane's expansion of the lunar disturbing function (see \cite{Lane1989, CGPR2016}). More precisely, the coefficients of $R_{Sun}$, $R_{Moon}$ expanded in Fourier series are functions of $(a/a_b)^n$, $e$, $e_b$, $i$, and $i_b$, while the trigonometric arguments are linear combinations of $M$, $M_b$, $\omega$, $\omega_b$, $\Omega$, $\Omega_b$, where $n \in \mathbb{N}$, $n \geq 2$ and $a_b$, $e_b$, $i_b$, $M_b$, $\omega_b$ and $\Omega_b$ are the orbital elements of the third body (Sun or Moon). Since in computations we deal with finite expressions, we truncate the series expansions of the solar and lunar disturbing functions to a given order in the ratio of the semi-major axes, and moreover we average over the fast angles. As pointed out in various studies investigating the dynamics in the MEO region (see, e.g., \cite{CGfrontier, CGPbif, DRADVR15, CGPR2016, GDGR2016}), a reliable model is obtained by truncating the expansions to second order in the ratio of the semi-major axes and averaging over both mean anomalies of the point mass and of the third body. Because LEO is closer to the Earth than MEO, the ratio $a/a_b$ is smaller. Therefore, in LEO the lunisolar perturbations are smaller in magnitude than in MEO. In view of this argument, we shall truncate the series expansions to second order in the ratio of the semi-major axes.
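The size of the neglected terms can be checked by a back-of-the-envelope estimate; the nominal distances below are assumed round values, not the ones used in the simulations:

```python
# Nominal (assumed) semi-major axes in km.
a_leo  = 7200.0       # typical LEO value near the resonances studied here
a_moon = 384400.0     # Moon
a_sun  = 1.496e8      # Sun (one astronomical unit)

for name, ab in (("Moon", a_moon), ("Sun", a_sun)):
    r = a_leo / ab
    # a term of order n in the disturbing function scales like (a/a_b)^n
    print(f"{name}: a/a_b = {r:.2e}, (a/a_b)^2 = {r**2:.2e}, (a/a_b)^3 = {r**3:.2e}")
```

Even for the Moon, the third-order terms are already below $10^{-5}$ of the leading ones, which supports the truncation at second order.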
On the other hand, since in LEO the angles $\omega$ and $\Omega$ are much faster than in MEO, some resonances of the type (see \cite{CEGGP2016} for further details) $$ \alpha \dot\omega+ \beta \dot\Omega +\alpha_b \dot\omega_b+ \beta_b \dot\Omega_b-\gamma \dot{M}_b=0\ , \quad \alpha\,, \alpha_b \in \{\pm 2, 0\}\,, \hspace{0.3cm} \beta, \beta_b \in \{\pm 2, \pm 1, 0\}\ , \quad \gamma \in \mathbb{Z}\backslash\{0\}\ , $$ called {\it (lunar or solar) semi-secular resonances}, where the suffix is $b = S$ when the third-body perturber is the Sun and $b = M$ when it is the Moon, might influence the long--term evolution of the orbital elements. For small eccentricities and inclinations between 40 and 120 degrees, an analysis similar to that presented in \cite{CEGGP2016} shows that lunar semi-secular resonances occur at an altitude below 600 $km$, while solar semi-secular resonances could occur at any altitude in LEO. For this reason, we average the Hamiltonian over the mean anomalies of the satellite and of the Moon, but not over the mean anomaly of the Sun. In this way, we take into account the influence of some possible solar semi-secular resonances. We refer the reader to \cite{CGPR2016} for the explicit expansions of the disturbing functions $\mathcal{R}_{Sun}$ and $\mathcal{R}_{Moon}$. The numerical tests done so far show that lunisolar perturbations have a relatively small influence on the long-term evolution of the semi-major axis. In the majority of the cases, we have basically obtained the same behavior of the semi-major axis, whether we used the full model described above or integrated a model that disregards the lunisolar perturbations. However, there are some cases that show a remarkable difference. More precisely, as noticed in Section~\ref{sec:qualitative_resonance}, Figure~\ref{fig:pass_capture}, an orbit reaching the resonant region either passes through the resonance or is temporarily captured into it.
This behavior has a strong stochastic character. Indeed, a small perturbation might lead to a different scenario than expected. For instance, in Figure~\ref{fig:rho_solar_cycle} (right panel), we describe the evolution of the semi-major axis of an orbit, both under the model that disregards the lunisolar perturbations (green line) as well as under the full model that considers the attraction of Moon and Sun (black line). In the first case one gets the phenomenon of temporary capture into resonance, and in the second case the phenomenon of passage through the resonance. For other initial conditions the scenario could be the opposite. Therefore, even if the lunisolar perturbations are small in magnitude, they could be important in some cases, as the right panel of Figure~\ref{fig:rho_solar_cycle} shows. The study of lunisolar perturbations and of semi-secular resonances will be a subject of future work. In Figure~\ref{fig:14_1_inside} we report some results obtained by propagating several initial conditions for a long time, starting from January 1.5, 2000 (J2000). All these initial data are located inside the libration regions of the 14:1 resonance or, to be precise, in the basins of repulsion of the spiral equilibrium points. As the stability analysis presented in Section~\ref{sec:type} shows, the equilibria of the dissipative toy model \eqref{toy_canonical_eq_final} are repellors. From results of dynamical systems theory, the initial conditions located in the neighborhood of these points do not evolve toward but rather away from them. Thus, within the framework of the dissipative system, the libration regions of the conservative system should become a sort of {\it basins of repulsion}. However, this effect is very small even on long time scales. We will still use the terminology {\it libration regions}, even in the dissipative case, and not {\it basins of repulsion} as one should normally adopt in the framework of dissipative dynamical systems.
An important aspect, which enhances the complexity of the dynamics, is the variation of both the position of the equilibrium points and the position and width of the resonant regions, as an effect of the solar cycle. Indeed, we find that inside the libration region the initial conditions evolve (slowly) away from the spiral points, as an effect of the dissipation. Furthermore, the position of the equilibrium points, and as a consequence the position and width of the libration regions, fluctuates with an 11-year cycle along the $\sigma_{1\!4 \, 1}$ axis. The amplitude of this variation depends on the value of the ballistic coefficient. Figure~\ref{fig:14_1_inside} is better understood if the results are corroborated with the analytical study presented in Section~\ref{sec:qualitative_resonance}. In particular, the right panel of Figure~\ref{fig:shift_sigma} is relevant for our discussion, since it provides the shift of the equilibria on the $\sigma_{1\!4 \, 1}$ axis during a solar cycle. Thus, from Figure~\ref{fig:shift_sigma} it follows that for $B=100$ $cm^2/kg$, the position of the spiral point oscillates between $47^o$ and $75^o$, while for $B=200$ $cm^2/kg$ between $50^o$ and $120^o$. In the left panel of Figure~\ref{fig:14_1_inside}, obtained for $B=100$ $cm^2/kg$, we integrate four orbits, characterized by the same initial conditions with the exception of the resonant angle $\sigma_{1\!4 \, 1}$ for which we took the following initial values: $50^o$ (blue), $110^o$ (green), $130^o$ (red) and $150^o$ (black). Being sufficiently close to the spiral point, the first two initial conditions lead to {\it trapped motions} for more than $300$ years. Increasing the distance from the spiral point, one obtains {\it escape motions} with increasingly smaller escape times.
For $i=60^o$, $e=0.005$ and ballistic coefficients larger than $200$ $cm^2/kg$, the right panel of Figure~\ref{fig:shift_sigma} shows that equilibrium points do not exist when the solar activity attains its maximum. Thus, for $B=200$ $cm^2/kg$, we do not expect to obtain trapped motions for hundreds of years. Indeed, the right panel of Figure~\ref{fig:14_1_inside} shows only {\it escape motions}, but even so, the escape time is very long in some cases. It seems that the longest escape time is obtained for initial values of the resonant angle between $80^o$ (black line) and $90^o$ (red line), namely at the middle of the interval $[50^o, 120^o]$, which represents the range of variation of the position of the spiral point. Another aspect to be noted is the fact that none of the curves drawn in Figure~\ref{fig:14_1_inside} is horizontal, but rather the semi-major axis slowly decreases in time for each orbit trapped into resonance. For example, in the left plot of Figure~\ref{fig:14_1_inside}, the semi-major axis for the orbits represented by blue and green lines decreases by about 4.5 $km$ within 300 years. This is due to the resonance, which slowly decreases the inclination. Indeed, the left plot of Figure~\ref{fig:14_1_outside} shows the evolution of the inclination for the same orbit for which the variation of the semi-major axis is represented in red color in the left panel of Figure~\ref{fig:14_1_inside}. For the trapped motion inside the resonance we notice a slow decrease of inclination from $60^o$ to $58.2^o$ within about 100 years. Then, after the escape from the resonance, the inclination becomes nearly constant. Since the position of the equilibrium points on the semi-major axis depends on the inclination, see Figure~\ref{fig:eq_a_axis} and in particular the bottom right plot of Figure~\ref{fig:eq_a_axis} for the 14:1 resonance, a slow decrease of the inclination leads to a shift of the position of equilibrium points along the semi-major axis.
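The 11-year modulation driving these shifts is the density law \eqref{rho_variation_cycle}, which can be evaluated directly; a minimal sketch (the bounds $\rho_{min}$, $\rho_{max}$ are placeholders, not the values of Table~\ref{table:rho}):

```python
import math

def rho_cycle(t_years, rho_min, rho_max, T=11.0, phi0=0.0):
    """Density oscillating between rho_min and rho_max over a solar cycle."""
    mean = 0.5 * (rho_max + rho_min)
    amp  = 0.5 * (rho_max - rho_min)
    return mean + amp * math.cos(2.0 * math.pi * t_years / T - phi0)

# Placeholder bounds (kg/m^3); the actual values follow from the density model.
rho_min, rho_max = 1.0e-14, 1.0e-13
for t in (0.0, 2.75, 5.5):      # start, quarter and half of the 11-year cycle
    print(t, rho_cycle(t, rho_min, rho_max))
```

At $t=0$ the density equals $\rho_{max}$, at half a cycle it equals $\rho_{min}$, and at a quarter of a cycle it passes through the mean value, as in the left panel of Figure~\ref{fig:rho_solar_cycle}.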
Finally, the right panel of Figure~\ref{fig:14_1_outside} underlines again the stochastic behavior of the orbits reaching the resonant region. We propagate two orbits, whose initial angle $\sigma_{1\!4 \, 1}$ differs by only one degree. One orbit passes through the resonance and the other is captured temporarily into the resonance. In the light of the results presented in this work, we believe that it would be interesting to study passage or escape from resonances in specific case studies, as well as to vary parameters or initial conditions so as to foster one of the two situations, which could be conveniently exploited to design disposal orbits. \vskip.1in \bf Acknowledgements. \rm A.C. was partially supported by GNFM/INdAM. C.G. was supported by the Romanian Space Agency (ROSA) within Space Technology and Advanced Research (STAR) Program (Project no.: 114/7.11.2016). \vglue1cm \bibliographystyle{spmpsci}
\section*{Introduction} The construction of a superspace path integral formulation for maximal supersymmetry is still an open question. Obtaining a supersymmetry algebra that admits a functional representation on the fields is at the heart of the problem, and in dimensions $d\geqslant 7$ it seems inevitable that this implies a breaking of manifest Lorentz invariance.\footnote{In lower dimensions, there is still the possibility to formulate the maximally supersymmetric YM theory in terms of a subalgebra of the whole super-Poincar\'e algebra, while maintaining manifest Lorentz invariance. In $d=4$ for instance, a harmonic superspace formulation was given preserving $3/4$ of the supersymmetries \cite{N=3}, which then received a full quantum description \cite{Delduc}.} Such a functional representation was determined in \cite{10DSYM} for the ${\bar N}=1,d=10$ theory, by a supersymmetry algebra made of $9$ generators and a restriction of the ten-dimensional Lorentz group to $SO(1,1)\times Spin(7)\subset SO(1,9)$. This led us to a reduced superspace with $9$ fermionic coordinates. Covariant constraints were found, which do not imply equations of motion. They were solved in terms of the fields of the component formalism and analogous results have been obtained for the ${\bar N}=2,d=4,8$ cases \cite{twistsp}. A path integral formulation was given for ${\bar N}=2,d=4$ in terms of the connection superfields themselves, which required an implementation of the constraints directly in the path integral. On the other hand, dimensional arguments show that the introduction of a prepotential is needed in the higher dimensional cases. Moreover, we expect such higher dimensional cases to be formulated in terms of complex representations of $SU(4)\subset Spin(7)$. Using an $SU(4)$ holomorphic formulation in 8 or 10 dimensions implies a framework that is formally ``similar'' to the holomorphic formulation in four dimensions that we study in this paper.
We thus display a holomorphic superspace formulation of the simple ${\bar N}=1,d=4$ super-Yang--Mills theory in its twisted form, by applying the general procedure of \cite{twistsp}. This superspace formulation involves $3$ supercharges, a scalar and a $(1,0)$-vector. It completes the previous works for the ${\bar N}=2,\ d=4$ and ${\bar N}=1,\ d=10$ twisted superspace with $5$ and $9$ supercharges, respectively. We also provide a short discussion of the resolution of the constraints in terms of a prepotential. It must be noted that reality conditions are a delicate issue for the ${\bar N}=1,\ d=4$ superspace in holomorphic coordinates. However, this question does not arise in the 10-dimensional formulation, so we will not discuss it here. The first section defines the notations of the holomorphic ${\bar N}=1,d=4$ super-Yang--Mills theory and its formulation in components. The second gives its superspace formulation together with the coupling to matter. The third provides a discussion on the alternative formulation in superspace involving a prepotential. The fourth section is devoted to the ${\bar N}=2$ case, both in components and superspace formulations. \section { Holomorphic ${\bar N}=1,d=4$ Yang--Mills supersymmetry } \def {\bar m}{ {\bar m}} \def {\bar n}{ {\bar n}} The twist procedure for the ${\bar N}=1,d=4$ super--Yang--Mills theory has been described in \cite{Johansen,Witten1,recons} in the context of topological field theory. For a hyperK\"ahler manifold, one can use a pair of covariantly constant spinors $\zeta_{\pm}$, normalized by $\zeta_{-\dot{\alpha}}\zeta^{\dot{\alpha}}_+=1$. They can be defined by $iJ^{m \bar n}\sigma_{m\bar n}\zeta_{\pm}=\pm\zeta_{\pm}$, where $J^{m\bar n}$ is the complex structure \begin{equation} \label{J} J^{mn} = 0 \ , \quad\quad J^{ {\bar m} {\bar n}}=0 \ , \quad\quad J^{m\bar n} = i g^{m\bar n} \end{equation} It permits one to decompose forms into holomorphic and antiholomorphic components.
For the gauge connection $1$-form $A$, one has \begin{equation} A = A_{(1,0)} + A_{(0,1)} \quad \textrm{with} \quad JA_{(1,0)} = i A_{(1,0)} \ , JA_{(0,1)} = - i A_{(0,1)} \end{equation} and the decomposition of its curvature is $F=dA+AA=F_{(2,0)}+F_{(1,1)}+F_{(0,2)}$. A Dirac spinor decomposes as \begin{equation} \lambda_\alpha=\Psi_m\,\sigma^m_{\alpha\dot{\alpha}}\,\zeta_-^{\dot{\alpha}}\qquad \lambda^{\dot{\alpha}}=\eta\,\zeta_+^{\dot{\alpha}}+\chi_{\bar m\bar n}\,\sigma^{\bar m\bar n\dot{\alpha}}_{\phantom{\bar m\bar n\dot{\alpha}}\dot{\beta}}\,\zeta^{\dot{\beta}}_+ \end{equation} In the case of a flat manifold, the twist is a mere rewriting of the Euclidean supersymmetric theory, obtained by mapping all spinors onto ``holomorphic'' and ``antiholomorphic'' forms after reduction of the $Spin(4)$ covariance to $SU(2)$. Notice that the Euclidean formulation of the ${\bar N}=1$ theory is defined as the analytical continuation of the Minkowski theory. The Euclideanization procedure produces a doubling of the fermions \cite{Nicolai.1978}, so that the complex fields $\eta, \chi_{\bar m\bar n},\Psi_m$ are truly mapped onto a Dirac spinor $\lambda$. However, the twisted and untwisted actions do not depend on the complex conjugate fields and the path integral can be defined as counting only four real degrees of freedom\footnote{In the Euclideanization procedure, one also gives up hermiticity of the action, but a ``formal complex conjugation'' can be defined and extended in the twisted component formalism that restores hermiticity \cite{recons}}. The twist also maps the four ${\bar N}=1$ supersymmetry generators onto a (0,0)-scalar $\delta$, a (0,1)-vector $\delta_{\bar m}$ and a (2,0)-tensor $\delta_{mn}$ generators. For formulating the ``holomorphic superspace'', we will only retain three of the four generators, the scalar one $\delta$ and the vector one $\delta_{\bar m}$.
The invariance under $\delta$ and $\delta_{\bar m}$ has been shown to completely determine the supersymmetric action \cite{recons}. Moreover, the absence of anomaly for the tensor symmetry implies that this property can be conserved at the quantum level (at least at any given finite order in perturbation theory) \cite{TSSlong}. \subsection{Pure ${\bar N}=1$ super--Yang--Mills theory} \label{pure N1 components} The bosonic field content of the ${\bar N}=1$ pure super--Yang--Mills theory is made of the Yang--Mills field $A= A_m dz^m +A_ {\bar m} dz^ {\bar m} $, and an auxiliary scalar field $T$, while the fermionic fields are one scalar $\eta$, one $(1,0)$-form $\Psi_m$ and one $(0,2)$-form $\chi_{\bar m\bar n}$. The transformation laws of the various fields in twisted representations are \begin{equation} \label{lois N1 d4}\begin{split} \delta\,A_m &= \Psi_m\\ \delta\,A_{\bar m} &= 0\\ \delta\,\Psi_m &= 0\\ \delta\,\eta &= T\\ \delta\,T &= 0\\ \delta\,\chi_{\bar m\bar n} &= F_{\bar m\bar n} \end{split} \hspace{10mm} \begin{split} \delta_{\bar m}\,A_n &= g_{\bar m n}\eta\\ \delta_{\bar m}\,A_{\bar n} &= \chi_{\bar m\bar n}\\ \delta_{\bar m}\,\Psi_n &= F_{\bar m n}-g_{\bar m n} T\\ \delta_{\bar m}\,\eta &= 0\\ \delta_{\bar m}\,T &= D_{\bar m} \eta\\ \delta_{\bar m}\,\chi_{\bar p\bar q} &= 0 \end{split} \end{equation} The three equivariant generators $\delta$ and $\delta_{\bar m}$ satisfy the following off-shell supersymmetry algebra \begin{equation}\label{com} \delta^2=0\, ,\quad \{\delta,\delta_{\bar m}\}=\partial_{\bar m}+\delta^{\scriptscriptstyle \,gauge}(A_{\bar m})\, , \quad \{\delta_{\bar m},\delta_{\bar n}\}=0 \end{equation} The action for the pure ${\bar N}=1,d=4$ super--Yang--Mills is completely determined by the $\delta,\delta_{\bar m}$ invariance. It is given by \cite{Johansen,recons} \begin{equation} \label{action N1 d4} \mathcal{\,S}_{\,YM}^{{\bar N}=1} = \int\!
\mathrm{d}^4x\sqrt{g}\,\hbox {Tr}~\Bigl(\frac{1}{2}F^{mn}F_{mn}+T(T+iJ^{m\bar n}F_{m\bar n})-\chi^{mn}D_m\Psi_n+\eta D^m\Psi_m\Bigr) \end{equation} The Wess and Zumino matter multiplet and its coupling to pure ${\bar N}=1$ super--Yang-Mills will only be discussed in the framework of superspace. \subsection{Elimination of gauge transformations in the closure relations} \label{Monemvasia} The algebra (\ref{com}) closes on gauge transformations, due to the fact that in superspace, where supersymmetry is linearly realized, one breaks the super-gauge invariance to get the transformation laws of the component fields (\ref{lois N1 d4}). To be consistent with supersymmetry, this in turn requires modifying the supersymmetry transformations by adding field-dependent gauge transformations, resulting in nonlinear transformation laws. This super-gauge is analogous to the Wess and Zumino gauge in ordinary superspace and such an algebra is usually referred to as an algebra of the Wess and Zumino type. In this section, we show how the use of shadow fields makes it possible to remove these gauge transformations, by applying the general formalism of~\cite{TSSlong} to the ${\bar N}=1,d=4$ case. This in turn permits one to make contact with the general solution to the superspace constraints given in the next section. To introduce the shadows, one replaces the knowledge of the $\delta, \delta_{\bar m}$ generators by that of graded differential operators $Q$ and $Q_\kappa$, which represent supersymmetry in a nilpotent way. Let $\omega$ and $\kappa^{\bar m}$ be the commuting scalar and $(0,1)$-vector supersymmetry parameters, respectively.
The actions of $Q$ and $Q_\kappa$ on the (classical) fields are basically supersymmetry transformations as in (\ref{lois N1 d4}) minus a field dependent gauge transformation, that is \begin{equation} \label{def Q} Q\equiv \omega\delta -\delta^{\scriptscriptstyle \,gauge}(\omega c)\, ,\quad Q_\kappa\equiv\delta_\kappa -\delta^{\scriptscriptstyle \,gauge}(i_\kappa\gamma_1) \end{equation} with $\delta_\kappa\equiv \kappa^{\bar m}\delta_{\bar m}$ and $i_\kappa$ is the contraction operator along $\kappa^{\bar m}$. These operators obey $Q^2=0,Q_\kappa^2=0,\{Q,Q_\kappa\}=\omega\mathcal{L}_\kappa$. The scalar shadow field $c$ and the $(0,1)$-form shadow field $\gamma_1$ are a generalization of the fields introduced in \cite{shadow}. They carry a $U(1)_{\scriptstyle{R}}$ charge $+1$ and $-1$, respectively. The action of $Q$ and $Q_\kappa$ increases it by $1$ and $-1$, respectively. Let moreover $\mathcal{Q}\equiv Q+Q_\kappa$. The property $\mathcal{Q}^2=\omega\mathcal{L}_\kappa$ fixes the transformation laws of $c$ and $\gamma_1$. In fact, the action of $\mathcal{Q}$ on all fields, classical and shadow ones, is given by the following horizontality equation \begin{equation} \label{horizN1} (d+\mathcal{Q}-\omega i_\kappa)(A+\omega c+i_\kappa\gamma_1) +(A+\omega c+i_\kappa\gamma_1)^2=F+\omega\Psi_{(1,0)}+g(\kappa)\eta +i_\kappa\chi \end{equation} together with its Bianchi identity \begin{equation} (d+\mathcal{Q}-\omega i_\kappa)(F+\omega\Psi_{(1,0)}+g(\kappa)\eta +i_\kappa\chi)+[A+\omega c+i_\kappa\gamma_1,F+\omega\Psi_{(1,0)}+g(\kappa)\eta +i_\kappa\chi]=0 \end{equation} implied by $(d+\mathcal{Q}-\omega i_\kappa)^2=0$. Here and elsewhere $g(\kappa)\equiv g_{m \bar m}\kappa^{\bar m} dz^m$. The transformation laws (\ref{lois N1 d4}) can indeed be recovered from these horizontality equations by expansion over form degree and $U(1)_{\scriptstyle{R}}$ number, modulo gauge transformations with parameters $\omega c$ or $ i_\kappa\gamma_1$.
The auxiliary $T$ scalar field is introduced in order to solve the degenerate equation involving $Q \, g(\kappa) \eta +Q_\kappa \,\omega \Psi$, with $Q \eta = \omega T -[\omega c,\eta]$. Moreover, the fields in the r.h.s.\ of (\ref{horizN1}) can be interpreted as curvature components. Let us turn to the action of $\mathcal{Q}$ on the shadow fields. For the sake of notational simplicity, we will omit from now on the dependence on the scalar parameter $\omega$. To recover its dependence, it is sufficient to remember that $Q$ increases the $U(1)_{\scriptstyle{R}}$ number by one unit. The horizontality conditions imply three equations for the shadow fields \begin{equation} Qc=-c^2\, ,\quad Q(i_\kappa\gamma_1) +Q_\kappa c +[c,i_\kappa\gamma_1]= i_\kappa A\, ,\quad Q_\kappa (i_\kappa\gamma_1)=-(i_\kappa\gamma_1)^2 \end{equation} Due to the nilpotency of $i_\kappa$, the third equation determines $Q_\kappa\gamma_1$ only modulo the contraction of an even $(0,2)$-form $\gamma_2$ of $U(1)_{\scriptstyle{R}}$ number $-2$, that is $Q_\kappa\gamma_1 =i_\kappa\gamma_2+\frac{1}{2}[\gamma_1,i_\kappa\gamma_1]$. To solve the second equation, we introduce an odd $(0,1)$-form $c_1$ of $U(1)_{\scriptstyle{R}}$ number zero. This gives $Q\gamma_1 = c_1-[c,\gamma_1]$ and $Q_\kappa c =i_\kappa c_1 +i_\kappa A$.
Since we must have $\mathcal{Q}^2=\mathcal{L}_\kappa$ on all fields, we find \begin{equation} \begin{split} Q\gamma_1 &= c_1-[c,\gamma_1]\\ Q\gamma_2 &= c_2-[c,\gamma_2]-\frac{1}{2}[c_1,\gamma_1] \end{split}\hspace{10mm}\begin{split} Q c &= -c^2\\ Q c_1 &= -[c,c_1]\\ Q c_2 &= -[c,c_2]-c_1^2 \end{split}\end{equation} and \begin{equation} \begin{split} Q_\kappa\gamma_1 &=i_\kappa\gamma_2+\frac{1}{2}[\gamma_1,i_\kappa\gamma_1]\\ Q_\kappa \gamma_2 &= \frac{1}{2}[\gamma_1,i_\kappa\gamma_2]-\frac{1}{12}[\gamma_1,[\gamma_1,i_\kappa\gamma_1]] \end{split}\hspace{10mm}\begin{split} Q_\kappa c &=i_\kappa c_1 +i_\kappa A\\ Q_\kappa c_1 &=i_\kappa c_2+\mathscr{L}_\kappa\gamma_1\\ Q_\kappa c_2 &= \mathscr{L}_\kappa\gamma_2 -\frac{1}{2}[\gamma_1, \mathscr{L}_\kappa\gamma_1] \end{split}\end{equation} with $\mathscr{L}_\kappa\equiv[i_\kappa,d_A]$. \section{${\bar N}=1,\ d=4$ holomorphic superspace} \subsection{Definition of holomorphic superspace} We now define a ``twisted holomorphic'' superspace for ${\bar N}=1$ theories by extending the $z_m$, $z_{\bar{m}}$ bosonic space with three Grassmann coordinates, one scalar $\theta$ and two $(0,1)$-vector coordinates $\vartheta^{\bar{p}}$ ($m,\bar{p}=1,2$).
The supercharges are given by \begin{eqnarray} \label{susy gen} \mathbb{Q}\,\, \equiv \frac{\partial}{\partial\theta}+\vartheta^{\bar{m}}\partial_{\bar{m}}, \hspace{20mm} \mathbb{Q}_{\bar{m}} \equiv\frac{\partial}{\partial\vartheta^{\bar m}} \nonumber \\* \mathbb{Q}^2=0,\hspace{10mm} \{\mathbb{Q},\mathbb{Q}_{\bar m}\}= \partial_{\bar m}, \hspace{10mm} \{\mathbb{Q}_{\bar m}, \mathbb{Q}_{\bar n}\}=0 \end{eqnarray} The covariant superspace derivatives and their anticommutation relations are \begin{eqnarray} \nabla\,\, \equiv \frac{\partial}{\partial\theta} \hspace{20mm} \nabla_{\bar m} \equiv\frac{\partial}{\partial\vartheta^{\bar m}}-\theta\partial_{\bar m} \nonumber \\* \nabla^2=0 \hspace{10mm} \{\nabla,\nabla_{\bar m}\}= -\partial_{\bar m} \hspace{10mm} \{\nabla_{\bar m}, \nabla_{\bar n}\}=0 \end{eqnarray} They anticommute with the supersymmetry generators. They can be gauge-covariantized by the introduction of connection superfields $\mathcal{A} \equiv (\mathbb{C},\mathbf{\Upgamma}_{\bar m},\mathbb{A}_m,\mathbb{A}_{\bar m})$ valued in the adjoint of the gauge group of the theory \begin{equation} \hat{\nabla} \equiv \nabla + \mathbb{C},\quad \hat{\nabla}_{\bar m} \equiv \nabla_{\bar m}+\mathbf{\Upgamma}_{\bar m},\quad \hat{\partial}_m\equiv \partial_m+\mathbb{A}_m,\quad \hat{\partial}_{\bar m}\equiv \partial_{\bar m}+\mathbb{A}_{\bar m} \end{equation} The associated covariant superspace curvatures are defined as ($M=m,\bar{m}$) \begin{equation} \label{ricci}\begin{split} &\mathbb{F}_{MN} \equiv [\hat{\partial}_M,\hat{\partial}_N] \\ &\mathbf{\Uppsi}_M \equiv [\hat{\nabla},\hat{\partial}_M] \\ &\boldsymbol{\upchi}_{\bar{m}N} \equiv [\hat{\nabla}_{\bar m},\hat{\partial}_N] \end{split} \hspace{10mm}\begin{split} &\mathbf{\Upsigma} \equiv \hat{\nabla}^2 \\ &\mathbb{L}_{\bar m} \equiv \{\hat{\nabla}, \hat{\nabla}_{\bar m}\}+\hat{\partial}_{\bar m} \\ &\mathbf{\bar{\Upsigma}}_{\bar m \bar n} \equiv {\ \scriptstyle \frac{1}{2} } \{\hat{\nabla}_{\bar m},\hat{\nabla}_{\bar n}
\} \end{split}\end{equation} so that \begin{equation} \label{curv}\begin{split} \mathbb{F}_{M N} &= \partial_M\mathbb{A}_N -\partial_N\mathbb{A}_M +[\mathbb{A}_M,\mathbb{A}_N] \\ \mathbf{\Uppsi}_M &= \nabla \mathbb{A}_M - \partial_M\mathbb{C} - [\mathbb{A}_M,\mathbb{C}] \\ \boldsymbol{\upchi}_{\bar{m} N} &= \nabla_{\bar m}\mathbb{A}_N - \partial_N \mathbf{\Upgamma}_{\bar m} - [\mathbb{A}_N,\mathbf{\Upgamma}_{\bar m}] \end{split} \hspace{10mm}\begin{split} \mathbf{\Upsigma} &=\nabla\mathbb{C}+\mathbb{C}^2 \\ \mathbb{L}_{\bar m} &= \nabla\mathbf{\Upgamma}_{\bar m} +\nabla_{\bar m}\mathbb{C}+\{\mathbf{\Upgamma}_{\bar m}, \mathbb{C}\}+\mathbb{A}_{\bar m} \\ \mathbf{\bar{\Upsigma}}_{\bar m \bar n} &= \nabla_{\{ \bar m} \mathbf{\Upgamma}_{\bar n\}} + \mathbf{\Upgamma}_{\{\bar m} \mathbf{\Upgamma}_{\bar n\}} \end{split}\end{equation} Bianchi identities are given by $\Delta \mathcal{F} =-[ \mathcal{A} , \mathcal{F} ] $, where $\Delta$ and $\mathcal{F}$ denote collectively $(\nabla,\nabla_{\bar m},\partial_m,\partial_{\bar m})$ and the superspace curvatures. The super-gauge transformations of the super-connection $\mathcal{A}$ and super-curvature $\mathcal{F}$ are \begin{equation} \label{gauge_transf N1} \mathcal{A} \rightarrow e^{-\boldsymbol{\upalpha}}(\mathbf{\Updelta}+\mathcal{A})e^{\boldsymbol{\upalpha}}, \quad \mathcal{F} \rightarrow e^{-\boldsymbol{\upalpha}}\mathcal{F}e^{\boldsymbol{\upalpha}} \end{equation} where the gauge superparameter $\boldsymbol{\upalpha}$ can be any given general superfield valued in the Lie algebra of the gauge group. The ``infinitesimal'' gauge transformation is $\delta \mathcal{A} =\mathbf{\Updelta} \boldsymbol{\upalpha} +[\mathcal{A} ,\boldsymbol{\upalpha} ]$.
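As a quick check of the statement above that the covariant superspace derivatives anticommute with the supersymmetry generators, the only potentially non-vanishing anticommutator can be evaluated directly from the definitions:
\begin{equation*}
\{\mathbb{Q},\nabla_{\bar m}\}
=\Bigl\{\frac{\partial}{\partial\theta}\,,\,-\theta\partial_{\bar m}\Bigr\}
+\Bigl\{\vartheta^{\bar n}\partial_{\bar n}\,,\,\frac{\partial}{\partial\vartheta^{\bar m}}\Bigr\}
=-\partial_{\bar m}+\delta^{\bar n}_{\phantom{\bar n}\bar m}\,\partial_{\bar n}=0\, ,
\end{equation*}
while $\{\mathbb{Q},\nabla\}$ and $\{\mathbb{Q}_{\bar m},\nabla_{\bar n}\}$ vanish trivially, since neither operator acts on the coordinates appearing in the other.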
\subsection{Constraints and their resolution} The superfield interpretation of shadow fields is that they parametrize the general $\boldsymbol{\upalpha}$-dependence of the solution of the superspace constraints, while in components they provide differential operators with no gauge transformations in their anticommutation relations. To eliminate superfluous degrees of freedom and make contact with the component formulation, we must impose the following gauge covariant superspace constraints \begin{equation} \label{constraints N1} \mathbf{\Upsigma}= \mathbf{\bar{\Upsigma}}_{\bar m \bar n}=\mathbb{L}_{\bar m} = 0,\quad \boldsymbol{\upchi}_{\bar m n} = \frac{1}{2}g_{\bar m n}\boldsymbol{\upchi}^p_{\phantom{p}p}\equiv g_{\bar m n}\boldsymbol{\eta} \end{equation} They can be solved in terms of component fields as follows. The super-gauge symmetry (\ref{gauge_transf N1}) allows one to choose a super-gauge so that every antisymmetric as well as the first component of $\mathbf{\Upgamma}_{\bar m}$ is set to zero. We also fix the first component $\mathbb{C}|_0=0$, so that we are left with the ordinary gauge degree of freedom corresponding to $\boldsymbol{\upalpha}|_0$. The constraint $\mathbf{\bar{\Upsigma}}_{\bar m \bar n}=0$ then implies that the whole $\mathbf{\Upgamma}_{\bar m}$ super-connection is zero. The constraint $\mathbf{\Upsigma}=0 $ implies that one must have \begin{equation} \mathbb{C}=\tilde{A}-\theta\tilde{A}^2, \quad \tilde{A}|_0=0 \end{equation} where $ \tilde{A} $ is a function of the $\vartheta_ {\bar m}$. One defines $(\frac{\partial}{\partial\vartheta^{\bar m}}\tilde{A})|_0\equiv - A_{\bar m}$.
The constraint $\mathbb{L}_{\bar m} = 0$ implies \begin{equation} \mathbb{A}_{\bar m} = - \nabla_{\bar m} \mathbb{C}=-\frac{\partial}{\partial\vartheta^{\bar m}}\tilde{A}+\theta\Bigl(\partial_{\bar m}\tilde{A}-\frac{\partial}{\partial\vartheta^{\bar m}}\tilde{A}^2\Bigr) \end{equation} Then, with $(\frac{\partial}{\partial\vartheta^{\bar m}}\frac{\partial}{\partial\vartheta^{\bar n}}\tilde{A})|_0\equiv \chi_{\bar m\bar n}$, we have \begin{equation} \boldsymbol{\upchi}_{\bar m\bar n}=\nabla_{\bar m}\mathbb{A}_{\bar n}=-\chi_{\bar m\bar n}-\theta F_{\bar m\bar n} \end{equation} It follows that \begin{equation} \mathbb{C}=-\vartheta^{\bar m}A_{\bar m}-\frac{1}{2}\vartheta^{\bar m}\vartheta^{\bar n}\chi_{\bar m\bar n}-\theta\Bigl(\frac{1}{2}\vartheta^{\bar m}\vartheta^{\bar n}[A_{\bar m},A_{\bar n}]\Bigr) \end{equation} It only remains to determine the field component content of $\mathbb{A}_m$. We define $\mathbb{A}_m|_0\equiv A_m$, $(\frac{\partial}{\partial\theta}\mathbb{A}_m)|_0\equiv \Psi_m$ and $\boldsymbol{\eta}|_0\equiv \eta$. The trace constraint on $\boldsymbol{\upchi}_{\bar m n}= \nabla_{\bar m}\mathbb{A}_n$ implies \begin{equation} \mathbb{A}_m = A_m +\vartheta^{\bar p}g_{\bar p m}\eta +\theta\Bigl(\Psi_m-\vartheta^{\bar p}(\partial_{\bar p}A_m+g_{\bar p m}T)+\vartheta^{\bar p}\vartheta^{\bar q}g_{m[\bar{p}}\partial_{\bar{q}]}\eta\Bigr) \end{equation} We see that the whole physical content of the component fields stands in the $\theta$-independent part of the curvature superfield $\mathbf{\Uppsi}_m$, \begin{equation} \mathbf{\Uppsi}_m|_{\theta=0} = \Psi_m +\vartheta^{\bar p}(F_{\bar p m}-g_{\bar p m}T)+\frac{1}{2}\vartheta^{\bar p}\vartheta^{\bar q}(2 g_{m[\bar p}D_{\bar q]}\eta-D_m\chi_{\bar p\bar q}) \end{equation} The general solution to the constraints can be obtained by a super-gauge transformation, whose superfield parameter has vanishing first component. It can be parametrized in various manners.
The following one allows one to recover the transformation laws that we computed in components in Section~\ref{Monemvasia} for the full set of fields, including the scalar and vectorial shadows \begin{equation} e^{\boldsymbol{\upalpha}}=e^{\theta\vartheta^{\bar m}\partial_{\bar m}}e^{\tilde{\gamma}}e^{\theta\tilde{c}}= e^{\tilde{\gamma}} \scal{1+\theta( \tilde{c}+e^{-\tilde{\gamma}}\vartheta^{\bar m}\partial_{\bar m} e^{\tilde{\gamma}})} \end{equation} where $\tilde{\gamma}$ and $\tilde{c}$ are respectively commuting and anticommuting functions of $\vartheta^{\bar m}$ and the coordinates $z^m,z^{\bar m}$, with the condition $\tilde{\gamma}\vert_0=0$. These fields appear here as the longitudinal degrees of freedom in superspace. The transformation laws given in Eqs.~(\ref{lois N1 d4}) are recovered for $\tilde{\gamma}=\tilde{c}=0$, modulo field-dependent gauge-restoring transformations. \subsection{Pure ${\bar N}=1,d=4$ super-Yang--Mills action} To express the pure super--Yang--Mills action in the twisted superspace, we observe that the Bianchi identity $\nabla\mathbf{\Uppsi}_m +[\mathbb{C},\mathbf{\Uppsi}_m]=0$ implies that the gauge invariant function $\hbox {Tr}~\mathbf{\Uppsi}_m\mathbf{\Uppsi}_n$ is $\theta$-independent.
Its component in $\vartheta^{\bar m}\vartheta^{\bar n}$ can thus be used to write an equivariant action as an integral over the full superspace \begin{multline} \label{actionEQ} \mathcal{S}_{EQ} = \int {\rm d}\vartheta^m {\rm d}\vartheta^n\,\, \hbox {Tr}~ \Bigl(\mathbf{\Uppsi}_m\mathbf{\Uppsi}_n\Bigr)= \int {\rm d}\vartheta^m {\rm d}\vartheta^n\, {\rm d}\theta\,\hbox {Tr}~\Bigl( \mathbb{A}_m \, \mathbf{\Uppsi}_n - \mathbb{C}\partial_m\mathbb{A}_n \Bigr)\\= \int {\rm d}\vartheta^m {\rm d}\vartheta^n\,\,\mathrm{d}\theta\,\hbox {Tr}~\Bigl( \mathbb{A}_m \,\nabla \mathbb{A}_n -\mathbb{C}\mathbb{F}_{m n}\Bigr) \end{multline} Berezin integration is defined as $\int {\rm d}\vartheta^m {\rm d}\vartheta^n\, \mathbb{X}_{m n} \equiv - \frac{1}{2}\frac{\partial}{\partial\vartheta_{ m}}\frac{\partial}{\partial\vartheta_{n}}\mathbb{X}_{m n}$, where $\mathbb{X}_{m n}$ is a $(2,0)$-form superfield. By use of the identity $\hbox {Tr}~(-\frac{1}{2}F^m_{\phantom{m}n} F^n_{\phantom{m}m}+\frac{1}{2}F^m_{\phantom{m}m}F^n_{\phantom{m}n})= \hbox {Tr}~(\frac{1}{2}F_{m n}F^{m n}) + \textrm{``surface term''}$, one recovers after implementation of the constraints the twisted form of the ${\bar N}=1$ supersymmetric Yang--Mills action (\ref{action N1 d4}), up to a total derivative \cite{Galperin.1991}. Here, the constraints (\ref{constraints N1}) have been solved in terms of component fields without using a prepotential. They must be implemented directly in the path integral, which runs over the unconstrained potentials, when one quantizes the theory.
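The Berezin rule used above simply extracts the top Grassmann component of the integrand. As a purely illustrative sketch, independent of the paper's normalization (which carries an extra factor $-\frac{1}{2}$), the rule can be mimicked for polynomials in two anticommuting variables; all names here are ours, not part of the formalism.

```python
def grassmann_mul(f, g):
    """Multiply polynomials in anticommuting variables.

    A polynomial is a dict mapping an increasing tuple of variable
    indices to a numeric coefficient; a repeated variable gives zero.
    """
    out = {}
    for mono_f, cf in f.items():
        for mono_g, cg in g.items():
            if set(mono_f) & set(mono_g):
                continue  # theta_i * theta_i = 0
            arr = list(mono_f) + list(mono_g)
            sign = 1
            # bubble sort, flipping the sign for each transposition
            for _ in range(len(arr)):
                for j in range(len(arr) - 1):
                    if arr[j] > arr[j + 1]:
                        arr[j], arr[j + 1] = arr[j + 1], arr[j]
                        sign = -sign
            key = tuple(arr)
            out[key] = out.get(key, 0) + sign * cf * cg
    return {k: v for k, v in out.items() if v != 0}

def berezin(f):
    """Integral d(theta_1) d(theta_2): keep the top coefficient only."""
    return f.get((1, 2), 0)

# theta_2 * theta_1 = -theta_1 * theta_2, so the integral picks up a sign
assert berezin(grassmann_mul({(2,): 1.0}, {(1,): 1.0})) == -1.0
# an odd element such as theta_1 + theta_2 squares to zero, mirroring nabla^2 = 0
odd = {(1,): 1.0, (2,): 1.0}
assert grassmann_mul(odd, odd) == {}
```

Only the top component survives the integral, which is why in (\ref{actionEQ}) the $\vartheta^{\bar m}\vartheta^{\bar n}$ component of $\hbox {Tr}~\mathbf{\Uppsi}_m\mathbf{\Uppsi}_n$ is what contributes to the action.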
This is performed by the following superspace integral depending on Lagrange multiplier superfields \begin{equation} \label{actionC N1} \mathcal{S}_C = \int {\rm d}\vartheta^m {\rm d}\vartheta^n\, {\rm d}\theta\,\Omega_{mn}\hbox {Tr}~\Bigl(\bar{\mathbb{B}}\,\Upsigma +\bar{\mathbb{B}}^{\bar m\bar n}\,\Upsigma_{\bar m\bar n} +\bar{\mathbb{K}}^{\bar m}\,\mathbb{L}_{\bar m}+\bar{\mathbf{\Uppsi}}^{m\bar n}\,\upchi_{m\bar n}\Bigr) \end{equation} where $\bar{\mathbb{B}}^{\bar m\bar n}$ is symmetric and $\bar{\mathbf{\Uppsi}}^{m\bar n}$ is traceless. The resolution of the constraints is such that the formal integration over the above auxiliary superfields gives rise to the non-manifestly supersymmetric formulation of the theory in components, without introducing any determinant contribution in the path-integral. However, due to the Bianchi identities, $\bar{\mathbb{B}}$, $\bar{\mathbb{B}}^{\bar m\bar n}$ and $\bar{\mathbf{\Uppsi}}^{m\bar n}$ admit a large class of zero modes that must be considered in the manifestly supersymmetric superspace Feynman rules. They can be summarized by the following invariance of the action \begin{equation} \label{zero N1} \delta^{\rm \scriptscriptstyle zero}\,\bar{\mathbb{B}}= \hat{\nabla}\,\mathbf{\uplambda}\, , \quad \delta^{\rm \scriptscriptstyle zero}\, \bar{\mathbb{B}}^{\bar m\bar n} = \hat{\nabla}_{\bar p}\, \mathbf{\uplambda}^{(\bar m\bar n\bar p)} - \partial_p\, \mathbf{\uplambda}^{p\bar m\bar n}\, , \quad \delta^{\rm \scriptscriptstyle zero}\, \bar{\mathbf{\Uppsi}}^{m\bar n} = \hat{\nabla}_{\bar p}\, \mathbf{\uplambda}^{m \bar n\bar p} \end{equation} where $\mathbf{\uplambda}^{(\bar m\bar n\bar p)}$ is completely symmetric and $\mathbf{\uplambda}^{m \bar n\bar p}$ is traceless in its $m \bar n$ indices and symmetric in $\bar n\bar p$. This feature is peculiar to twisted superspace; the appearance of this infinitely degenerate gauge symmetry was already underlined in \cite{twistsp} and is detailed in \cite{TSSlong}.
We will not go into further detail in this paper, and refer the reader to \cite{TSSlong} for how this technical subtlety may be dealt with by use of suitable projectors in superspace. One needs a gauge-fixing action $\mathcal{S}_{GF}$. It is detailed for the analogous ${\bar N}=2$ twisted superspace in \cite{twistsp,TSSlong} as a superspace generalization of the Landau gauge fixing action in components. One also needs a gauge-fixing part $\mathcal{S}_{CGF}$ for the action of constraints (\ref{actionC N1}), and the total action for ${\bar N}=1,d=4$ super-Yang--Mills in holomorphic superspace reads \begin{equation} \mathcal{S}_{\tiny SYM}^{\tiny {\bar N}=1} = \mathcal{S}_{EQ}+\mathcal{S}_C +\mathcal{S}_{GF} + \mathcal{S}_{CGF} \end{equation} \subsection{Wess and Zumino model} We then turn to the matter content of the theory and consider as a first step the Wess and Zumino superfield formulation. We introduce two scalar superfields $\mathbf{\Upphi}$ and $\bar{\mathbf{\Upphi}}$, and one $(2,0)$-superfield $\bar{\boldsymbol{\upchi}}_{m n}$. These superfields correspond to the scalar chiral and anti-chiral superfields of ordinary superspace. They take their values in arbitrary representations of the gauge group.
The chirality constraints of the super-Poincar\'e superspace are replaced by the following constraints \begin{equation} \nabla\mathbf{\Upphi} = 0, \quad \nabla_{\bar m}\bar{\mathbf{\Upphi}}=0, \quad \nabla_{\bar p}\bar{\boldsymbol{\upchi}}_{m n}=2\,g_{\bar p[m}\partial_{n]}\bar{\mathbf{\Upphi}} \end{equation} We define the following component fields corresponding to the unconstrained components of the superfields as $\bar{\boldsymbol{\upchi}}_{m n}|_0\equiv \bar{\chi}_{m n}, (\frac{\partial}{\partial\theta}\bar{\boldsymbol{\upchi}}_{m n})|_0\equiv T_{m n}, \bar{\mathbf{\Upphi}}|_0 \equiv\bar{\Phi}, (\frac{\partial}{\partial\theta}\bar{\mathbf{\Upphi}})|_0\equiv\bar{\eta}, \mathbf{\Upphi}|_0 \equiv\Phi, (\frac{\partial}{\partial\vartheta^{\bar m}}\mathbf{\Upphi})|_0\equiv-\bar{\Psi}_{\bar m}, (\frac{\partial}{\partial\vartheta^{\bar m}}\frac{\partial}{\partial\vartheta^{\bar n}}\mathbf{\Upphi})|_0\equiv \bar{T}_{\bar m\bar n}$. We then deduce \begin{eqnarray} \mathbf{\Upphi}&=&\Phi-\vartheta^{\bar m}\bar{\Psi}_{\bar m}-\frac{1}{2}\vartheta^{\bar m}\vartheta^{\bar n}\bar{T}_{\bar m\bar n} \nonumber \\* \bar{\mathbf{\Upphi}}&=&\bar{\Phi}+\theta\Bigl(\bar{\eta}-\vartheta^{\bar m}\partial_{\bar m}\bar{\Phi}\Bigr)\nonumber \\* \bar{\boldsymbol{\upchi}}_{m n}&=&\bar{\chi}_{m n}+2\vartheta_{[m} \partial_{n]}\bar{\Phi}+\theta\Bigl(T_{m n}+\vartheta^{\bar m}(-\partial_{\bar m}\bar{\chi}_{m n}+2g_{\bar m [m}\partial_{n]}\bar{\eta}) +\vartheta_m\vartheta_n\partial_{\bar p}\partial^{\bar p}\bar{\Phi}\Bigr) \end{eqnarray} The free Wess and Zumino action can be written as \begin{eqnarray} \mathcal{S}_{WZ} &=& \int {\rm d}\vartheta^m {\rm d}\vartheta^n\, d\theta\, \Bigl(-\mathbf{\Upphi}\,\bar{\boldsymbol{\upchi}}_{m n}\Bigr)\nonumber \\* &=& \int\! 
\mathrm{d}^4x\sqrt{g}\,\hbox {Tr}~\Bigl(\frac{1}{2}T^{\bar m\bar n}\bar{T}_{\bar m \bar n}-\chi^{\bar m\bar n}\partial_{\bar m}\bar{\Psi}_{\bar n}+\bar{\eta}\partial_m\bar{\Psi}^m-\bar{\Phi}\partial_m \partial^m\Phi\Bigr) \end{eqnarray} \subsection{Gauge coupling to matter} In order to obtain the matter coupling to the pure super--Yang--Mills action, we covariantize the constraints; the covariantized constraints can be shown to be consistent with (\ref{constraints N1}). We thus have \begin{equation} \hat{\nabla}\mathbf{\Upphi} = 0, \quad \hat{\nabla}_{\bar m}\bar{\mathbf{\Upphi}}=0, \quad \hat{\nabla}_{\bar p}\bar{\boldsymbol{\upchi}}_{m n}=2\,g_{\bar p[m}\hat{\partial}_{n]}\bar{\mathbf{\Upphi}} \end{equation} In order to fulfil these new constraints, we modify the matter superfields as follows \begin{eqnarray} \mathbf{\Upphi}&=&\Phi-\vartheta^{\bar m}\bar{\Psi}_{\bar m}-\frac{1}{2}\vartheta^{\bar m}\vartheta^{\bar n}T_{\bar m\bar n}+\theta\Bigl(\vartheta^{\bar m}A_{\bar m}\Phi+\vartheta^{\bar m}\vartheta^{\bar n}(\frac{1}{2}\chi_{\bar m\bar n}\Phi-A_{\bar m}\Psi_{\bar n})\Bigr)\nonumber \\* \bar{\boldsymbol{\upchi}}_{m n}&=&\bar{\chi}_{m n}+2\vartheta_{[m}D_{n]}\bar{\Phi}+\vartheta_m\vartheta_n\eta\bar{\Phi} +\theta\Bigl(T_{m n}+\vartheta^{\bar m}(-\partial_{\bar m}\bar{\chi}_{m n}+2g_{\bar m [m}D_{n]}\bar{\eta}\nonumber \\* &&\hspace{50mm}-2g_{\bar m [m}\Psi_{n]}\bar{\Phi}) +\vartheta_m\vartheta_n(\partial_{\bar p}D^{\bar p}\bar{\Phi}+\eta\bar{\eta}-h\bar{\phi})\Bigr) \end{eqnarray} The total action of super--Yang--Mills coupled to matter then reads \begin{equation} \label{Kithira} \mathcal{S}_{SYM+Matter} = \int {\rm d}\vartheta^m {\rm d}\vartheta^n\, d\theta\, \Bigl(\hbox {Tr}~(\mathbb{A}_m \,\nabla \mathbb{A}_n -\mathbb{C}\mathbb{F}_{m n})-\mathbf{\Upphi}\,\bar{\boldsymbol{\upchi}}_{m n}\Bigr) \end{equation} which matches that of \cite{recons}.
A WZ superpotential can be added in the twisted superspace formalism as the sum of two terms, one which is written as an integral over d$\theta$ and the other as an integral over d$\vartheta^m$d$\vartheta^n$. \section{Prepotential} We now turn to the study of a twisted superspace formulation for the pure ${\bar N}=1$ super-Yang--Mills theory that involves a prepotential. It is sufficient to consider here the abelian case. The super-connections ($\mathbb{C},\mathbf{\Upgamma}_{\bar m},\mathbb{A}_{\bar m},\mathbb{A}_{m}$) amount altogether to $(1+2+2+2)\cdot 2^3=56$ degrees of freedom, $8$ of which are longitudinal degrees of freedom associated to the gauge invariance in superspace (\ref{gauge_transf N1}). The constraints (\ref{constraints N1}) for $\Upsigma$ and $\Upsigma_{\bar m\bar n}$ can be solved by the introduction of unconstrained prepotentials as \begin{equation} \mathbb{C} = \nabla \mathbb{D}\, , \quad \mathbf{\Upgamma}_{\bar m} = \nabla_{\bar m} \mathbf{\Updelta} \end{equation} which reduces the $16$ degrees of freedom in $\mathbf{\Upgamma}_{\bar m}$ to the $8$ degrees of freedom in $\mathbf{\Updelta}$. Gauge invariance for the prepotentials now reads \begin{equation} \mathbb{D} \rightarrow \mathbb{D}+\upalpha\, ,\quad \mathbf{\Updelta} \rightarrow \mathbf{\Updelta}+\upalpha \end{equation} Owing to their definition, the prepotentials are not uniquely defined. Indeed, they can be shifted by the additional transformations $\mathbb{D}\rightarrow \mathbb{D}+\mathbf{S}$ and $\mathbf{\Updelta} \rightarrow \mathbf{\Updelta}-\mathbf{T}$, where $\mathbf{S}$, $\mathbf{T}$ obey $\nabla \mathbf{S}=0$ and $\nabla_{\bar m}\mathbf{T}=0$, respectively. The constraint $\mathbb{L}_{\bar m}=0$ implies that $\mathbb{A}_{\bar m}$ can be expressed in terms of the prepotentials, $\mathbb{A}_{\bar m}=-\nabla\nabla_{\bar m}\mathbf{\Updelta} -\nabla_{\bar m}\nabla\mathbb{D}$.
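The degree-of-freedom counting above is mechanical: each tensor component of a superfield expands over all $2^3=8$ Grassmann monomials in the three odd coordinates. A minimal illustrative check (function names are ours, purely for bookkeeping):

```python
from itertools import combinations

def superfield_dof(tensor_components, n_grassmann=3):
    """Off-shell degrees of freedom of a superfield: one per tensor
    component and per Grassmann monomial in the odd coordinates."""
    monomials = [m for r in range(n_grassmann + 1)
                 for m in combinations(range(n_grassmann), r)]
    assert len(monomials) == 2 ** n_grassmann
    return tensor_components * len(monomials)

# C (scalar), Gamma_{bar m}, A_{bar m}, A_m (2 components each)
total = sum(superfield_dof(n) for n in (1, 2, 2, 2))
assert total == 56
# a single scalar gauge superparameter alpha carries 2^3 = 8 longitudinal dof
assert superfield_dof(1) == 8
```

The same count, $8$ components per scalar prepotential, reproduces the reduction of $\mathbf{\Upgamma}_{\bar m}$ from $16$ to the $8$ degrees of freedom of $\mathbf{\Updelta}$ quoted above.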
Its gauge invariance is given by \begin{equation} \mathbb{A}_{\bar m} \rightarrow -\nabla\nabla_{\bar m}(\mathbf{\Updelta} +\upalpha -\mathbf{T}) -\nabla_{\bar m}\nabla(\mathbb{D} +\upalpha +\mathbf{S}) = \mathbb{A}_{\bar m} +\partial_{\bar m}\upalpha \end{equation} We now make the gauge choice $\upalpha =-\mathbf{\Updelta}$, which fixes $\mathbf{\Upgamma}_{\bar m}=0$. The remaining gauge invariance is then given by $\mathbb{D} \rightarrow \mathbb{D}+\mathbf{S}+\mathbf{T}$. The last constraint $\boldsymbol{\upchi}_{\bar m n} = g_{\bar m n}\boldsymbol{\eta}$ is then solved by introducing \begin{equation} \mathbb{A}_{m}=-\nabla^n\mathbb{P}_{nm} \end{equation} so that $\boldsymbol{\upchi}_{\bar m n}=\nabla_{\bar m}\mathbb{A}_{n}=\frac{1}{2}g_{\bar m n}\nabla_{\bar p}\nabla_{\bar q}\mathbb{P}^{\bar p\bar q}$. Although the residual gauge invariance is well established when we consider the theory with the full set of generators, it is still unclear how exactly it is realized in the reduced superspace. As a matter of fact, we are left with the two unconstrained prepotentials $\mathbb{D}$ and $\mathbb{P}_{mn}$, counting for $16$ degrees of freedom, which permits one to write the classical action. We consider the curvature $\mathbf{\Uppsi}_m$, which reads in terms of the prepotentials as \begin{equation} \label{Psim sf} \mathbf{\Uppsi}_m = \nabla\nabla^n\mathbb{P}_{mn}-\partial_m\nabla\mathbb{D} \end{equation} It trivially obeys the Bianchi identity $\nabla\mathbf{\Uppsi}_m=0$, and its first component is a $(1,0)$-vector of canonical dimension $3/2$, which we identify with $\Psi_m-\partial_mc$ of the previous section, when the super-gauge invariance is restored. Since the first component of the superfield (\ref{Psim sf}) is the same as that of $\mathbf{\Uppsi}_m$ in the previous section, both superfields are equal.
It follows that the classical ${\bar N}=1,d=4$ super-Yang--Mills action can also be written as an integral over the full superspace, in terms of the unconstrained prepotentials \begin{multline} \label{actionEQ prep} \mathcal{S}_{EQ} = \int {\rm d}\vartheta^m {\rm d}\vartheta^n\,\, \hbox {Tr}~ \Bigl(\mathbf{\Uppsi}_m\mathbf{\Uppsi}_n\Bigr)= \int {\rm d}\vartheta^m {\rm d}\vartheta^n\, {\rm d}\theta\,\hbox {Tr}~\Bigl( \mathbb{P}_{mp}\nabla^p\nabla\nabla^q\mathbb{P}_{nq} + \mathbb{P}_{mp}\partial_n\nabla^p\nabla\mathbb{D}\Bigr) \end{multline} In fact, it corresponds to the twisted version of the super-Poincar\'e superspace action, once the tensorial coordinate has been eliminated. We are currently studying how this formulation could be extended to a ${\bar N}=2,d=8$ superspace in a $SU(4)$ formulation \cite{TSSlong}. \section{${\bar N}=2 ,\, d=4$ holomorphic Yang--Mills supersymmetry} We now define the holomorphic formulation of the ${\bar N}=2,\,d=4$ Yang--Mills supersymmetry and see how it can be decomposed into that of the ${\bar N}=1$ supersymmetry. We first focus on the component formulation and afterwards we give its superspace version. The latter will involve $5$ fermionic coordinates, as compared to the $3$ fermionic coordinates of the ${\bar N}=1$ twisted superspace. \subsection{Component formulation} The component formulation of ${\bar N}=2,d=4$ super-Yang--Mills in terms of complex representations has been discussed in \cite{Witten1, Marino.thesis, Park}. We consider a reduction of the euclidean rotation group $SU(2)_L\times SU(2)_R$ to $SU(2)_L\times U(1)_R$, with $U(1)_R\subset SU(2)_R$. The two dimensional representation of $SU(2)_R$ decomposes under $U(1)_R$ as a sum of one dimensional representations with opposite charges.
In particular, the scalar and vector supersymmetry generators decompose as $\updelta=\delta +\bar\delta$ and $\updelta_{K}=\delta_\kappa +\bar\delta_{\bar \kappa}$, where $\bar \kappa$ is the complex conjugate of $\kappa$, so that $\vert\kappa\vert^2=i_{\bar \kappa}g(\kappa)$. The subsets $(\delta,\delta_\kappa)$ and $(\bar\delta,\bar\delta_{\bar\kappa})$ form two ${\bar N}=1$ subalgebras of the ${\bar N}=2$ supersymmetry, $(\delta,\delta_\kappa)$ being related to those of the previous sections \begin{eqnarray} \delta^2=0\, ,\qquad \,\{\delta,\bar\delta\}&=&\delta^{\scriptscriptstyle \,gauge}(\Phi)\, , \qquad \bar\delta^2=0\nonumber \\* \{\delta_m,\delta_n\}=0\, ,\qquad \{\delta_m,\delta_{\bar m}\}&=&g_{m\bar m}\delta^{\scriptscriptstyle \,gauge}(\bar\Phi)\, ,\qquad \{\delta_{\bar m},\delta_{\bar n}\}=0 \end{eqnarray} Quite concretely, the transformation laws for pure ${\bar N}=2$ super-Yang--Mills can be obtained from the holomorphic and antiholomorphic decomposition of the horizontality equation in $SU(2)\times SU(2)^{'}$ twisted formulation \cite{recons}. This equation involves the graded differential operator \begin{equation} {\cal Q} \equiv Q + Q_{K} \end{equation} which verifies $(d + Q + Q_K - \varpi i_{K})^2=0$. One defines \begin{equation} {\cal A}= A_{(1,0)} +A_{(0,1)} + \varpi c +i_K\gamma_{1} \end{equation} where $c$ is a scalar shadow field and $\gamma_1=\gamma_{1\,(1,0)}+\gamma_{1\,(0,1)}$ involves ``holomorphic'' and ``antiholomorphic'' vector shadow fields.
$Q$ and $Q_K$ are constructed out of the five $\updelta$, $\delta_\kappa$ and $\bar\delta_{\bar\kappa}$ supersymmetry generators with shadow-dependent gauge transformations \begin{equation} \label{def Q N2} Q\equiv \varpi\updelta -\delta^{\scriptscriptstyle \,gauge}(\varpi c)\, ,\quad Q_K\equiv\delta_K -\delta^{\scriptscriptstyle \,gauge}(i_K\gamma_1) \end{equation} An antiselfdual 2-form splits in holomorphic coordinates as $\chi \rightarrow (\chi_{(2,0)}, \chi_{(0,2)}, \chi_{(1,1)})$ where $\chi_{(1,1)}$ is subject to the condition $\chi_{m\bar n}=\frac{1}{2}J_{m\bar n}J^{p\bar q}\chi_{p\bar q}$. We thus define a scalar $\chi$ as $\chi_{m\bar n}=g_{m\bar n}\chi$ and the holomorphic horizontality equation can be written as \begin{eqnarray}\label{lois N2 d4} {\cal F }&\equiv&{\cal Q} {\cal A} +{\cal A} {\cal A} \nonumber\\ &=& F_{(2,0)} + F_{(1,1)} + F_{(0,2)} +\varpi\Psi +g(\kappa) (\eta +\chi) +g(\bar \kappa) (\eta -\chi) \nonumber \\* &&\hspace{70mm}+i_{\bar \kappa}\chi_{(2,0)} +i_\kappa \chi_{(0,2)} +\varpi^2 \Phi + \vert\kappa\vert^2 \bar \Phi \end{eqnarray} with Bianchi identity \begin{eqnarray}\label{Bianchi N2 d4}{ \cal Q} {\cal F } = -[{\cal A } , {\cal F }] \end{eqnarray} By expansion over form degrees and $U(1)_{\scriptstyle{R}}$ number, one gets transformation laws for ${\bar N}=2$ super--Yang--Mills in holomorphic and antiholomorphic components. In order to recover the transformation laws for ${\bar N}=1$ supersymmetry (\ref{lois N1 d4}) together with those of the matter multiplet in the adjoint representation $ \varphi = (T _{ {\bar m} {\bar n}}^{}, T_{mn } ^{}, \Psi_ {\bar m}^{}, \chi_{mn}^{} , \bar\eta, \Phi^{}, \bar\Phi^ {} )$, one can proceed as follows. One first derives from (\ref{def Q N2},\ref{lois N2 d4},\ref{Bianchi N2 d4}) the transformation laws for the physical fields under the equivariant operator $\delta_K$.
One then obtains the action of the ${\bar N}=1$ vector generator by restricting the constant vector $K$ to its antiholomorphic component $\kappa^{\bar m}$, so that $\delta_K\rightarrow \kappa^{\bar m}\delta_{\bar m}$. Finally, the action of the holomorphic component $\delta$ of the scalar operator $\updelta$ on the various fields is completely determined by the requirement that it satisfies the ${\bar N}=1$ subalgebra $\delta^2=0$ and $\{\delta,\delta_{\bar m}\}=\partial_{\bar m}+\delta^{\scriptscriptstyle \,gauge}(A_{\bar m})$. \subsection{${\bar N}=2$ holomorphic superspace} We extend the superspace of the ${\bar N}=1$ case into one with complex bosonic coordinates $z_m$, $z_{\bar{m}}$ and five Grassmann coordinates, one scalar $\theta$, two ``holomorphic'' $\vartheta^{m}$ and two ``antiholomorphic'' $\vartheta^{\bar{m}}$ ($m,\bar{m}=1,2$). The supercharges are now given by \begin{equation} \label{susy gen N2} \mathbb{Q}\,\, \equiv \frac{\partial}{\partial\theta}+\vartheta^{m}\partial_{m}+\vartheta^{\bar{m}}\partial_{\bar{m}}, \hspace{10mm} \mathbb{Q}_{m} \equiv\frac{\partial}{\partial\vartheta^{m}},\hspace{10mm} \mathbb{Q}_{\bar{m}} \equiv\frac{\partial}{\partial\vartheta^{\bar m}} \end{equation} They verify \begin{equation} \mathbb{Q}^2=0,\hspace{10mm} \{\mathbb{Q},\mathbb{Q}_{M}\}= \partial_{M}, \hspace{10mm} \{\mathbb{Q}_{M}, \mathbb{Q}_{N}\}=0 \end{equation} with $M=m,{\bar m}$. The covariant derivatives and their anticommutation relations are \begin{eqnarray} \nabla\,\, \equiv \frac{\partial}{\partial\theta} \hspace{20mm} \nabla_{M} \equiv\frac{\partial}{\partial\vartheta^{M}}-\theta\partial_{M} \nonumber \\* \nabla^2=0 \hspace{10mm} \{\nabla,\nabla_{M}\}= -\partial_{M} \hspace{10mm} \{\nabla_{M}, \nabla_{N}\}=0 \end{eqnarray} They anticommute with the supersymmetry generators.
The gauge covariant superderivatives are \begin{equation} \hat{\nabla} \equiv \nabla + \mathbb{C},\quad \hat{\nabla}_{M} \equiv \nabla_{M}+\mathbf{\Upgamma}_{M},\quad \hat{\partial}_M\equiv \partial_M+\mathbb{A}_M \end{equation} from which we define the covariant superspace curvatures \begin{equation} \label{ricci N2}\begin{split} &\mathbb{F}_{MN} \equiv [\hat{\partial}_M,\hat{\partial}_N] \\ &\mathbf{\Uppsi}_M \equiv [\hat{\nabla},\hat{\partial}_M] \\ &\boldsymbol{\upchi}_{MN} \equiv [\hat{\nabla}_{M},\hat{\partial}_N] \end{split} \hspace{10mm}\begin{split} &\mathbf{\Upsigma} \equiv \hat{\nabla}^2 \\ &\mathbb{L}_{M} \equiv \{\hat{\nabla}, \hat{\nabla}_{M}\}+\hat{\partial}_{M} \\ &\mathbf{\bar{\Upsigma}}_{MN} \equiv {\ \scriptstyle \frac{1}{2} } \{\hat{\nabla}_{M},\hat{\nabla}_{N} \} \end{split}\end{equation} They are subject to Bianchi identities, and the super-gauge transformations of the various connections and curvatures follow analogously to the ${\bar N}=1$ case. The constraints for the ${\bar N}=2$ case are \begin{equation} \mathbb{L}_{M}=0,\quad \mathbf{\bar{\Upsigma}}_{mn}=\mathbf{\bar{\Upsigma}}_{\bar m\bar n}=0,\quad\mathbf{\bar{\Upsigma}}_{m\bar n}=\frac{1}{2}\,g_{m\bar n}\mathbf{\bar{\Upsigma}}_{p}^{\phantom{p}p},\quad \boldsymbol{\upchi}_{\bar m n}=\frac{1}{2}\,g_{\bar m n}\boldsymbol{\upchi}_p^{\,\,p} \end{equation} Their solution can be directly deduced from \cite{twistsp,TSSlong}, by decomposition into holomorphic and antiholomorphic coordinates.
The full physical vector supermultiplet now stands in the scalar odd connection, which in the Wess--Zumino-like gauge is \begin{equation} \mathbb{C}= \tilde{A}+\theta(\tilde{\Phi}-\tilde{A}^2) \end{equation} where \begin{multline} \tilde{A}=-\vartheta^m A_m -\vartheta^{\bar m}A_{\bar m} -\frac{1}{2}\vartheta^m\vartheta^n\bar{\chi}_{mn}-\frac{1}{2}\vartheta^{\bar m}\vartheta^{\bar n}\chi_{\bar m\bar n}-\vartheta^m\vartheta_m\chi\\ +\frac{1}{2}\vartheta^n\vartheta_n(\vartheta^m D_m\bar{\Phi}+\vartheta^{\bar m}D_{\bar m}\bar{\Phi})-\vartheta^m\vartheta_m\vartheta^{\bar m}\vartheta_{\bar m} [\bar{\Phi},\eta] \end{multline} There is an analogous decomposition for $\tilde{\Phi}$ in \cite{twistsp}. The general solution to the constraints is recovered by the following super-gauge transformation \begin{equation} e^{\boldsymbol{\upalpha}}=e^{\theta(\vartheta^{\bar m}\partial_{\bar m}+\vartheta^m\partial_m)}e^{\tilde{\gamma}}e^{\theta\tilde{c}}= e^{\tilde{\gamma}} \scal{1+\theta( \tilde{c}+e^{-\tilde{\gamma}}(\vartheta^{\bar m}\partial_{\bar m}+\vartheta^m\partial_m) e^{\tilde{\gamma}})} \end{equation} where $\tilde{\gamma}$ and $\tilde{c}$ are respectively commuting and anticommuting functions of $\vartheta^{\bar m},\vartheta^m$ and the coordinates $z^m,z^{\bar m}$, with the condition $\tilde{\gamma}\vert_0=0$. Transformation laws in components can then be recovered, which match those in (\ref{lois N2 d4}) and (\ref{Bianchi N2 d4}).
The action is then given by \begin{equation} \label{actionEQN2} \mathcal{S}_{SYM}^{{\bar N}=2} = \int\,{\rm d}^4\vartheta\, {\rm d}\theta\,\hbox {Tr}~\Bigl(\mathbb{C}\nabla\mathbb{C}+\frac{2}{3}\mathbb{C}^3 \Bigr)\end{equation} To recover the previous results of the ${\bar N}=1$ super--Yang--Mills theory with matter in the adjoint representation, one first integrates (\ref{actionEQN2}) over the $\theta$ variable, which gives \begin{equation} \mathcal{S}_{SYM}^{{\bar N}=2} = \int\,{\rm d}^4\vartheta\, \hbox {Tr}~ \mathbf{\Upsigma}\mathbf{\Upsigma} \end{equation} Further integration over the $\vartheta^m$ variables, or equivalently differentiation with $\nabla_m$, yields two terms which are both invariant under the ${\bar N}=1$ scalar supersymmetry generator. In turn, they can be expressed as an integral over the full twisted ${\bar N}=1$ superspace, which yields the superspace action given in (\ref{Kithira}) with matter in the adjoint representation. \subsection*{Acknowledgments} We warmly thank G.~Bossard for many useful discussions on the subject. This work has been partially supported by the contract ANR (CNRS-USAR), \texttt{05-BLAN-0079-01}. A.~M. has been supported by the Swiss National Science Foundation, grant \texttt{PBSK2-119127}.
\section{Introduction} The Hyperspherical Adiabatic (HA) method is based on the parametrization of the internal degrees of freedom with hyperspherical coordinates (see Refs.~\cite{nielsen} and references therein). The method then consists in expanding the system's wavefunction on a basis made of optimized hyperangular functions (the adiabatic basis set) times (unknown) hyperradial functions. The hyperangular basis elements are taken as the eigenvectors of the Hamiltonian operator for a fixed value of the hyperradius $\rho$. Once those eigenvectors have been calculated, the hyperradial functions are obtained as the solutions of a system of coupled one-dimensional differential equations. The advantage of such an approach is that the optimized HA basis should drive a quick convergence of the expansion; the price to pay is the need to calculate accurately the first and second derivatives of the adiabatic basis set with respect to the hyperradius, which is difficult. Those terms are crucial to the method, as they represent the coupling terms between the various hyperradial differential equations. In some applications of the HA method it was shown that the strong coupling between pairs of elements of the adiabatic basis makes the hyperradial problem particularly hard to solve \cite{bge00}. The properties of the adiabatic basis functions have been the object of several studies and are well known. In particular, in the asymptotic limit of large hyperradius the HA functions are known to converge towards the scattering states of the three-body system, both below and above break-up. This characteristic makes the adiabatic expansion a valid choice to describe the three-body continuum states. In the literature there are several studies of the bound spectrum of a three-nucleon system by means of the HA method \cite{dcf82,bfr82,ffs88}, but very few dealing with continuum states \cite{fab1}.
This paper investigates the possibility of using the HA approach to describe a three-body elastic process in which a particle collides with the other two, which initially form a bound state. The object of this work is the study of the appropriate boundary conditions to be imposed on the hyperradial functions as $\rho\rightarrow\infty$ and a careful analysis of the convergence properties of the HA expansion. In order to quantitatively understand the pattern of convergence of the HA expansion we make use of the parallelism that can be built between the HA method and the Hyperspherical Harmonic (HH) expansion. In fact, we can consider two different expansions for the system's wavefunction, one in terms of $N_A$ HA basis elements, and the second in terms of $N_H$ HH basis elements. When $N_A=N_H$ the two expansions are connected by a unitary transformation and therefore must yield identical results. Since the HH basis has been used several times to describe scattering states~\cite{krv94,kvr97}, we exploit this knowledge to study the convergence of the HA expansion. In particular, we will study the convergence properties of the $L=0$ phase shift at low energies in a $1+2$ collision, which has been used as a benchmark problem in the literature (see for example~\cite{cpf89}). The problem of the boundary conditions to be imposed on the hyperradial functions is related to the difficulties associated with obtaining the eigenvectors and eigenvalues of the adiabatic Hamiltonian at large values of the hyperradius. As the lowest adiabatic functions tend to the two-body bound wavefunctions, an accurate description of those states using, for example, the expansion in HH functions is known to be very difficult. This is because, as $\rho\rightarrow\infty$, the two-body bound states are localized in a very small zone of the hyperangular phase space. Consequently, a large number of HH functions is needed to describe this particular configuration \cite{bge00}.
In fact, it can be shown that the number of HH functions required to reproduce this type of spatial configuration grows exponentially with the hyperradius. If the interest is limited to the study of deep three-body bound states, the problem just described does not manifest itself, and a tractable number of HH functions suffices for a good accuracy. Due to the finite hyperradial size of the associated wavefunction, the adiabatic Hamiltonian needs to be solved only up to a moderate value of the hyperradius. However, there are cases in which shallow bound states are present (such as Efimov states) and the adiabatic Hamiltonian needs to be solved for very large values of $\rho$. Furthermore, for energies in the continuum, the associated three-body scattering wavefunction has an infinite extension and a direct application of the HA method requires the solution of the adiabatic Hamiltonian at very large values of $\rho$, too. In order to obtain accurate asymptotic solutions to the adiabatic Hamiltonian we have followed in detail the procedure outlined by Nielsen and co-workers~\cite{nielsen}. Finally, interest in this work is also sparked by an article of Fabre de la Ripelle~\cite{fab1}, where he suggested the possibility of expanding the three-body asymptotic scattering states into the adiabatic basis set and retaining only the first term in such an expansion, resulting in a considerable reduction of the numerical burden. We will analyze this truncation together with the contribution of higher terms. This paper is organized as follows: in the next section the HA method is presented, after first introducing the notation. The expansion of the HA basis in terms of the HH functions is given, as well as the method to describe the HA functions and the adiabatic potentials at large values of $\rho$. Section III treats the problem of scattering states. Two different methods of implementing the Kohn Variational Principle are given in conjunction with the HA basis.
The asymptotic conditions are given in terms of the distance between the incident particle and the two-body system and in terms of $\rho$. Section IV is devoted to numerical applications. Results are presented using a simple Gaussian two-body potential and the semi-phenomenological $s$-wave MT-III potential~\cite{pfg82}. The final Section is devoted to the conclusions and perspectives. \section{Hyperspherical Adiabatic Method} Let us consider a system of three identical particles of mass $m$, in a state of total orbital angular momentum $L=0$. Other quantum numbers are represented by the total spin $S$, total isospin $T$, and the symmetry under particle permutation $\Pi$, which can take the values $a$ (anti-symmetric, for three fermions) or $s$ (symmetric, in the case of three bosons). A further quantum number needed to uniquely identify each wavefunction is given by the vibrational number $n$ ($n=1,2, ...$) for bound states or the energy $E$ for continuum states. Let us start from the definition of Jacobi coordinates $\{\xv_i,\yv_i\}$ \bea \xv_i & = & {1\over \sqrt{2}} (\rv_j - \rv_k) \nb \yv_i & = & {1\over \sqrt{6}} (\rv_j + \rv_k - 2 \rv_i) \label{jacs} \eea where $\{ \rv_i \}$ are the Cartesian coordinates of the three particles and $i,j,k=1,2,3$ cyclic. The hyperspherical variables $\{\rho,\theta_i\}$ are defined as follows \be X_i=\rho\cos\theta_i, \quad Y_i=\rho\sin\theta_i \label{hangle} \ee where $\rho$ is the hyperradius which is symmetric under any permutation of the three particles and $\theta_i$ is the hyperangle, which is dependent on the particular choice of the Jacobi coordinate system. In terms of the interparticle distances $r_{ij}=|\rv_i-\rv_j|=\sqrt{2} X_k$ the hyperradius reads: \be \rho={1\over \sqrt{3}}\sqrt{r^2_{12}+r^2_{23}+r^2_{31}} \ . 
\ee In addition to $\rho$ and $\theta_i$ there are four more coordinates needed to parametrize all the possible spatial configurations of the three particles, for example the four polar angles which define the orientation of the two Jacobi vectors with respect to the laboratory frame of reference. However, in the particular case of total orbital angular momentum $L=0$, the number of such coordinates can be reduced to just one non-trivial functional dependence, represented by the cosine $\mu_i$ of the angle between the two Jacobi vectors $\{\xv_i,\yv_i\}$: \be \mu_i = \xv_i \cdot \yv_i / (X_i Y_i) . \ee In the following we will refer to the set of hyperangles $\{\theta_i,\mu_i\}$ as $\Omega_i$, or more generally as $\Omega=\{\theta,\mu\}$ when there is no need to specify the choice of a particular permutation of the particles defining a set of Jacobi coordinates. The Hamiltonian operator $\ham$ takes the following expression in hyperspherical coordinates \be \ham = -\frac{\hbar^2}{2 m} T_\rho + \frac{\hbar^2}{2 m \rho^2}G^2 + V(\rho,\Omega) , \label{hhc} \ee where $V$ is the potential energy operator, $T_\rho$ is the hyperradial operator \be T_\rho = \frac{d^2}{d \rho^2} + \frac{5}{\rho} \frac{d}{d\rho} \label{trho} \ee and $G^2$ is the grand-angular operator \be G^2 = \frac{4}{\sqrt{1-z^2}}\frac{d}{dz}(1-z^2)^{3/2}\frac{d}{dz} +\frac{\ell_x^2}{\cos^2\theta} +\frac{\ell_y^2}{\sin^2\theta} , \label{gqua} \ee where $z=\cos 2\theta$ and $\ell_x$ and $\ell_y$ are the angular momentum operators associated with the $\xv$ and $\yv$ vectors, respectively. The volume element is $\rho^5 d\rho \sqrt{1-z^2}dz d\mu$.
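The consistency of the coordinate definitions above can be checked numerically. The following Python sketch (purely illustrative, with randomly generated particle positions) verifies that the hyperradius computed from any of the three Jacobi sets coincides with the permutation-symmetric expression in terms of the interparticle distances:

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.normal(size=(3, 3))  # Cartesian positions r_1, r_2, r_3

def rho_from_jacobi(r, i):
    """Hyperradius from the Jacobi set attached to particle i (i,j,k cyclic)."""
    j, k = (i + 1) % 3, (i + 2) % 3
    x = (r[j] - r[k]) / np.sqrt(2.0)
    y = (r[j] + r[k] - 2.0 * r[i]) / np.sqrt(6.0)
    return np.sqrt(x @ x + y @ y)

def rho_from_distances(r):
    """Permutation-symmetric form in terms of the interparticle distances."""
    r12 = np.linalg.norm(r[0] - r[1])
    r23 = np.linalg.norm(r[1] - r[2])
    r31 = np.linalg.norm(r[2] - r[0])
    return np.sqrt(r12**2 + r23**2 + r31**2) / np.sqrt(3.0)

# rho does not depend on which Jacobi set is used
rhos = [rho_from_jacobi(r, i) for i in range(3)]
assert np.allclose(rhos, rho_from_distances(r))
```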
The system wavefunction $\Psi$, with quantum numbers $L$, $S$, $T$, $\Pi$, and $n$ (or $E$), is expanded as follows: \be \Psi^{LST\Pi}_n = \sum_{\nu=1}^{\infty} u^{n}_\nu(\rho) \Phi^{LST\Pi}_\nu(\rho,\Omega), \label{adbasis} \ee where $\{\Phi^{LST\Pi}_\nu\}$ are the eigenfunctions of the operator $\ham_\Omega$ made of the hyperangular part of the kinetic operator plus the potential energy operator, in which $\rho$ acts only as a parameter: \be \ham_\Omega \Phi^{LST\Pi}_\nu = \left[ \frac{\hbar^2}{2 m \rho^2}G^2 + V \right] \Phi^{LST\Pi}_\nu(\rho,\Omega) = U_\nu(\rho) \Phi_\nu^{LST\Pi}(\rho,\Omega). \label{adeq} \ee The set of eigenfunctions $\{\Phi_\nu^{LST\Pi}\}$ is known as the adiabatic basis set, and the associated eigenvalues $\{ U_\nu(\rho)\}$ as the adiabatic curves or potentials. In practical calculations, the infinite expansion of eq. \refeq{adbasis} needs to be truncated to a finite number of basis elements, say $N_A$. The convergence of the observables of interest with respect to this parameter is then checked. The initial Hamiltonian problem is thus tackled in two steps: firstly, the HA basis functions $\{\Phi^{LST\Pi}_\nu\}$ and the associated potentials $\{U_\nu(\rho)\}$ are calculated by solving eq. \refeq{adeq}.
Secondly, the hyperradial functions $u^{n}_\nu(\rho)$ are obtained as the solutions of a system of $N_A$ coupled one-dimensional differential equations, which can be expressed as follows \cite{gpk90}: \bea \sum_{\nu=1}^{N_A} & & \left[ \left(-\frac{\hbar^2}{2 m}T_\rho +U_\nu-E\right) \delta_{\nu'\nu} + B_{\nu'\nu} \right] u_\nu + C_{\nu'\nu}\frac{d}{d\rho}u_\nu \nb &+& \frac{d}{d\rho}\left( C_{\nu'\nu} u_\nu \right) =0 \ \ \ (\nu'=1,\dots,N_A), \label{usys} \eea where the coupling terms $B_{\nu'\nu},C_{\nu'\nu}$ follow from the dependence on $\rho$ of the HA basis: \be B_{\nu'\nu}(\rho) = \frac{\hbar^2}{m \rho^2} \bra \frac{d\Phi_{\nu'}}{d\rho} | \frac{d \Phi_\nu}{d\rho} \ket_\Omega, \ee and \be C_{\nu'\nu}(\rho) = \frac{\hbar^2}{m \rho^2} \bra \Phi_{\nu'} | \frac{d \Phi_\nu}{d\rho} \ket_\Omega. \ee For bound-state solutions and short-range potentials, the functions $\{ u_\nu \}$ tend to zero exponentially as $\rho \rightarrow \infty$, whereas for scattering states the boundary conditions to be imposed on the $\{ u_\nu \}$ will be discussed in the next Section. The first step in the implementation of an HA calculation consists in obtaining the adiabatic basis elements and the associated adiabatic potentials, solutions of eq. \refeq{adeq}, for a number of values of $\rho$. Among several available techniques we have chosen to use a variational approach, by expanding the functions $\{\Phi^{LST\Pi}_\nu\}$ onto a set of Hyperspherical Harmonics (HH) of size $N_H$. In order to define a basis set with the desired properties under particle permutation, we suitably combine hyperspherical polynomials based on different sets of Jacobi coordinates \cite{kvr97}.
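A key structural property of the coupling matrix $C_{\nu'\nu}$, which follows from the $\rho$-independent orthonormality of the HA basis, is its antisymmetry (so that, in particular, the diagonal couplings vanish). This can be illustrated with a toy model: in the Python sketch below the hyperangular Hamiltonian is replaced by an arbitrary $4\times 4$ real symmetric matrix depending smoothly on $\rho$ (a made-up stand-in, not the physical operator, and the $\hbar^2/m\rho^2$ prefactor is dropped), and $\bra \Phi_{\nu'} | d\Phi_\nu/d\rho \ket$ is evaluated by central differences:

```python
import numpy as np

def adiabatic_basis(rho):
    """Toy stand-in for the hyperangular Hamiltonian: a 4x4 real symmetric
    matrix depending smoothly on rho (purely illustrative numbers)."""
    n = 4
    k = np.arange(n)
    H = np.diag(k * (k + 4.0) / rho**2)                            # mimics G^2/rho^2
    H -= 1.5 * np.exp(-0.3 * rho) * (np.ones((n, n)) - np.eye(n))  # a 'potential'
    U, Phi = np.linalg.eigh(H)
    return U, Phi

def coupling_C(rho, h=1e-6):
    """C_{nu' nu} ~ <Phi_nu' | dPhi_nu/drho> via central differences, after
    aligning the arbitrary signs of the eigenvectors at rho -/+ h."""
    _, Phi = adiabatic_basis(rho)
    _, P0 = adiabatic_basis(rho - h)
    _, P1 = adiabatic_basis(rho + h)
    P0 *= np.sign(np.sum(Phi * P0, axis=0))
    P1 *= np.sign(np.sum(Phi * P1, axis=0))
    return Phi.T @ ((P1 - P0) / (2.0 * h))

C = coupling_C(2.0)
# orthonormality of the basis at every rho forces C to be antisymmetric,
# so the diagonal couplings C_{nu nu} vanish identically
assert np.allclose(C, -C.T, atol=1e-4)
```

The same finite-difference strategy, with the sign alignment shown here, is one simple way to evaluate the derivative couplings when no analytic expression is available.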
The expansion for $\Phi^{LST\Pi}_\nu$ reads: \be \Phi^{LST\Pi}_\nu = \sum^{N_H}_{kl} D_{kl}^\nu(\rho) | kl, LST\Pi \ket , \label{adexp} \ee with the basis element given, for $L=0$, by \be | kl, 0ST\Pi \ket = \sum_{i=1}^3 \left[ ^{(2)}P^{l,l}_{k}(\Omega_i) \otimes T_i \otimes S_i \right] , \label{basis} \ee where $S_i$ ($T_i$) indicates the coupling of particles $jki$ to a state of total spin $S$ (total isospin $T$), and the hyperspherical polynomial is written as (see for instance Refs. \cite{hh2,hh3} for more details): \be ^{(2)}P^{l,l}_{k}(\Omega) = N_{kl} (1-z^2)^{(l/2)}P_k^{l+1/2,l+1/2}(z) P_l(\mu)\;\; , \ee where $P^{\alpha,\beta}_k$ is a Jacobi polynomial, $P_l$ is a Legendre polynomial and $N_{kl}$ is a normalization factor. The HH so defined are eigenfunctions of the grand-angular operator, \be G^2 | kl, 0ST\Pi \ket = K(K+4) | kl, 0ST\Pi \ket, \ee where $K$ is the grand-angular quantum number ($K=2k+2l$). The unknown coefficients $\{ D_{kl}^\nu \}$ in eq. \refeq{adexp}, and the adiabatic potentials $\{ U_\nu \}$, are obtained as the eigenvectors and eigenvalues, respectively, of the following generalized eigenvalue problem \be \sum_{kl}^{N_H} \bra k'l',LST\Pi |\ham_\Omega - U | kl,LST\Pi \ket D_{kl} = 0 . \label{hhsys} \ee In practical calculations the size $N_H$ of the HH basis set is increased until convergence is reached for the desired number $N_A$ of adiabatic potentials $\{ U_\nu\}$. However, it is well known that the convergence becomes harder to achieve the larger the value of $\rho$. The reason for this behavior is connected to the specific properties of the HA basis set at large $\rho$. Namely, the lowest adiabatic potentials tend to the binding energies of all possible two-body subsystems, and the associated HA basis elements to the two-body wavefunctions, suitably normalized. The HH expansion is not optimal for reproducing wavefunctions with such characteristics, which become more and more localized as $\rho$ increases.
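The orthogonality of the hyperangular polynomials in the variable $z$ can be checked in a few lines of Python (ignoring the $P_l(\mu)$ factor and overall constants; the value $l=1$ is just an illustrative choice). The two factors $(1-z^2)^{l/2}$ combined with the $\sqrt{1-z^2}\,dz$ of the volume element give the weight $(1-z^2)^{l+1/2}$, so Gauss--Jacobi quadrature with $\alpha=\beta=l+1/2$ evaluates the overlaps exactly:

```python
import numpy as np
from scipy.special import eval_jacobi, roots_jacobi

l = 1            # illustrative value of l in P^{l,l}_k
a = l + 0.5      # Jacobi indices alpha = beta = l + 1/2

# Gauss-Jacobi nodes and weights carry the full hyperangular weight
# (1-z^2)^l * sqrt(1-z^2) dz = (1-z^2)^{l+1/2} dz
z, w = roots_jacobi(20, a, a)

def overlap(k, kp):
    """z-part of the overlap of two HH, dropping P_l(mu) and constants."""
    return np.sum(w * eval_jacobi(k, a, a, z) * eval_jacobi(kp, a, a, z))

assert abs(overlap(2, 4)) < 1e-12   # different k: orthogonal
norm2 = overlap(3, 3)               # N_{kl} would be 1/sqrt(norm2)
assert norm2 > 0
```

The same quadrature also provides the normalization constants $N_{kl}$ numerically, as indicated in the comment.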
This convergence problem can be further exacerbated by the presence of a hard-core repulsion in the two-body potential. If the calculation is to be limited to the three-body bound states, and in the absence of very extended ones such as the Efimov states, the limited radius of convergence of the HH expansion does not constitute a problem. When the calculation is extended to the continuum energy region, however, the accurate determination of the adiabatic curves and functions at very large $\rho$ becomes essential for the convergence of the results. In order to overcome this problem Blume and co-workers \cite{bge00} advocate the use of splines, which at large $\rho$ converge significantly faster than the HH. Alternatively, when $\rho$ is much larger than the range of the two-body interaction, approximations for the HA basis elements and potentials can be obtained by solving a non-homogeneous one-dimensional differential equation. A brief illustration of this second approach is summarized below, based on the work of Nielsen and co-workers \cite{nielsen}. Let us start from the definition of the reduced amplitudes $\phi_\nu$ \be \Phi^{LST\Pi}_\nu = \sum_{i=1}^3 \Phi_\nu^{(i)} = \sum_{i=1}^3 \frac{\phi_\nu(\theta_i,\rho)}{\cos\theta_i\sin\theta_i}, \ee each one having the set of quantum numbers ${LST\Pi}$. They are the solutions of the Faddeev equations, which for $s$-wave potentials read \bea && \left(-\frac{\hbar^2}{2 m \rho^2}\frac{d^2}{d\theta^2_i} + V(\sqrt{2}\rho\cos\theta_i) - \lambda_\nu(\rho) \right) \phi_\nu(\rho,\theta_i) = \nb && -{\cos\theta_i\sin\theta_i} V(\sqrt{2}\rho\cos\theta_i) \int_{-1}^{1}d\mu_i\left(\Phi_\nu^{(j)}+\Phi_\nu^{(k)}\right ) \label{fadeq} \eea where $\lambda_\nu(\rho) = U_\nu(\rho) - 4 \hbar^2/(2 m \rho^2)$.
Defining $r_0=\sqrt{2}\rho\cos\theta_0$ as the range of the (short-range) potential, we observe that, for large values of $\rho$, the potential $V(\sqrt{2}\rho\cos\theta_i)$ can be considered different from zero only for values of $\theta_i$ in the interval $\theta_0 \le \theta_i \le \pi/2$, which becomes smaller and smaller as $\rho$ grows. Accordingly, the above equation has two regimes depending on the value of $\theta_i$. It is homogeneous for $\theta_i<\theta_0$. For values at which the potential is not zero we have to evaluate the non-homogeneous term, which depends on the amplitudes $j,k$. From the relation between the different sets of Jacobi coordinates, the region of values of $\theta_i$ where $V$ is different from zero corresponds to the values $\theta_j\approx \pi/6$ and $\theta_k\approx \pi/6$. In this region each of these amplitudes is governed by the corresponding homogeneous Faddeev equation. For example, for the $j$-amplitude, the possible solutions depending on the value of $\lambda_\nu$ are \bea \phi_\nu(\rho,\theta_j)=A\sin(k_\nu\theta_j) & \lambda_\nu> 0 \cr \phi_\nu(\rho,\theta_j)=A({\rm e}^{k_\nu\theta_j}-{\rm e}^{-k_\nu\theta_j}) & \lambda_\nu < 0 \;\;\; , \eea and similarly for the $k$-amplitude, where $k_\nu^2=2 m \rho^2 |\lambda_\nu|/\hbar^2$. Replacing these expressions in the Faddeev equation \refeq{fadeq}, its asymptotic form can be obtained: \be \left(-\frac{\hbar^2}{2 m \rho^2}\frac{d^2}{d\theta^2} + V(\sqrt{2}\rho\cos\theta) - \lambda_\nu(\rho) \right) \phi_\nu(\rho,\theta) = V(\sqrt{2}\rho\cos\theta) A f(\rho,\theta) \label{fedeq} \ee When the equation describes a two-body bound state with a third particle far away, $\lambda_\nu$ is negative and tends to the two-body bound-state energy. The corresponding non-homogeneous term is \be f(\rho,\theta)= - 2 \frac{e^{k(\pi/2-\theta)}-e^{-k(\pi/2-\theta)}}{k}\frac{e^{k\pi/6} -e^{-k\pi/6}}{\sin{(\pi/3)}}.
\ee For positive values of $\lambda_\nu$ the adiabatic functions describe asymptotically three free particles and \be f(\rho,\theta)= - \frac{8 \sin{(k \pi/6)}}{\sqrt{3}} \sin{[k(\pi/2-\theta)]}/k . \ee $A$ is a normalization constant to be determined from the solutions. The boundary conditions for the functions $\phi_\nu$ are $\phi_\nu(\rho,0)=\phi_\nu(\rho,\pi/2)=0$, which completely determine the solutions of eq. \refeq{fedeq}. In practical applications the adiabatic potentials $\{U_\nu\}$ and the HA basis elements $\{\Phi^{0ST\Pi}_\nu\}$ are obtained as solutions of eq. \refeq{hhsys} for $\rho \le \rho_0$ and of eq. \refeq{fedeq} for $\rho> \rho_0$, respectively. The matching point $\rho_0$ needs to be chosen larger than the range $r_0$ of the two-body potential $V$. There is a zone around the matching point in which, for a sufficiently large value of $N_H$, the solutions obtained from the HH expansion or by solving eq. \refeq{fedeq} for each value of $\nu$ become indistinguishable from each other. In this way we link the definitions of $\rho_0$ and $N_H$ as the values for which the solutions of eqs. \refeq{hhsys} and \refeq{fedeq} can be accurately matched. In fact, if the functions $\phi_\nu$ obtained by solving eq. \refeq{fedeq} are themselves expanded into the HH basis, the coefficients of this expansion can be individually matched to the equivalent coefficients obtained through solving eq. \refeq{hhsys} for the same value of $\rho$. In the following we discuss the solutions of the system of coupled differential equations \refeq{usys} in the case of bound states. The hyperradial functions $\{ u_\nu^n\}$ can be expanded into normalized generalized Laguerre polynomials times an exponential function \cite{abr}: \be u_\nu^n(\rho)=\sum_{m=0}^{N_p-1} A_{m\nu}^{n} L^{(5)}_m (\beta \rho) \exp{[-\beta \rho/2]}, \label{lagbasis} \ee where $\beta$ is a non-linear parameter which can be used to improve the convergence of the expansion \cite{pb1}.
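The basis of eq. \refeq{lagbasis} is orthogonal with respect to the measure $\rho^5 d\rho$: substituting $x=\beta\rho$ reduces the overlap of two basis elements to the standard generalized-Laguerre relation $\int_0^\infty x^5 e^{-x} L^{(5)}_m(x) L^{(5)}_n(x)\,dx = \delta_{mn}\,(m+5)!/m!$. A short Python check (illustrative only, with $\beta=1$), using Gauss quadrature built on the same weight:

```python
import numpy as np
from math import factorial
from scipy.special import eval_genlaguerre, roots_genlaguerre

beta = 1.0
# Gauss quadrature with weight x^5 e^{-x}, x = beta*rho: this is exactly
# the measure rho^5 drho combined with the two factors exp(-beta rho/2)
x, w = roots_genlaguerre(30, 5)

def overlap(m, n):
    val = np.sum(w * eval_genlaguerre(m, 5, x) * eval_genlaguerre(n, 5, x))
    return val / beta**6

assert abs(overlap(3, 7)) < 1e-8                                  # orthogonality
assert np.isclose(overlap(4, 4), factorial(9) / factorial(4) / beta**6)
```

Dividing each basis element by $\sqrt{(m+5)!/m!}\,\beta^{-3}$ then yields the normalized functions used in the expansion.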
The coefficients $\{A_{m\nu}^{n}\}$ can be found by means of the Rayleigh-Ritz variational principle, whose implementation requires the solution of the following eigenvalue problem: \be \sum_{m\nu} \bra m'\nu' |\ham - E | m\nu \ket A_{m\nu} = 0 , \label{bsystem} \ee where the orthonormalized basis element $| m \nu \ket$ is defined as \be | m \nu \ket = L^{(5)}_m (\beta \rho) \exp{[-\beta \rho/2]} \Phi^{0ST\Pi}_\nu(\rho,\Omega) . \ee The size of the variational problem is $M=N_p \times N_A$, where $N_A$ is the number of adiabatic basis functions retained in the expansion of eq. \refeq{adbasis}, and $N_p$ is the number of Laguerre polynomials used in the expansion of eq. \refeq{lagbasis}. For the sake of simplicity all functions $u_\nu^n$ are expanded using the same number of Laguerre polynomials, although this is not strictly necessary. The eigenvalues $\{ E_n^{(M)}\}$ ($n=1,2,\dots$) represent upper bounds to the eigenvalues of the Hamiltonian problem \refeq{hhc} and converge towards them monotonically as $M$ is increased. The associated set of coefficients $\{A_{m\nu}^n\}$ provides approximations to the system wavefunctions. As mentioned before, there is a complete equivalence between the two methods if they include the same number of HH functions. In fact the expansion for $\Psi$ in eq. \refeq{adbasis} can also be written as: \be \Psi^{LST\Pi}_n = \sum_{kl}^{N_H} w^{n}_{kl}(\rho) |kl,LST \Pi \ket , \ee and from eq. \refeq{adexp} the following relation can be obtained \be w^n_{kl}(\rho)= \sum_{\nu}^{N_A} u^{n}_\nu(\rho) D^\nu_{kl}(\rho) \,\, . \ee If $N_A$ is set equal to $N_H$, the matrix $D^\nu_{kl}$ represents a unitary transformation between the HA and HH basis sets; therefore, the two expansions must produce identical sets of eigenvalues and eigenvectors.
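The unitary-equivalence argument can be illustrated numerically: conjugating any Hamiltonian matrix by an orthogonal matrix, which plays the role of $D^\nu_{kl}$ at fixed truncation $N_A=N_H$, leaves the spectrum unchanged. A minimal Python sketch with a random model matrix (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10  # common truncation N_A = N_H

# a model Hamiltonian matrix in the 'HH' representation
H_hh = rng.normal(size=(N, N))
H_hh = 0.5 * (H_hh + H_hh.T)

# D plays the role of the eigenvector matrix D^nu_{kl}: any orthogonal
# (unitary) change of basis between the HH and HA sets
D, _ = np.linalg.qr(rng.normal(size=(N, N)))
H_ha = D.T @ H_hh @ D

# identical spectra, hence identical physical predictions at N_A = N_H
assert np.allclose(np.linalg.eigvalsh(H_hh), np.linalg.eigvalsh(H_ha))
```

The practical gain of the HA basis therefore lies entirely in the truncation regime $N_A\ll N_H$, not in the full-basis limit.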
Consequently, if in a specific problem the desired accuracy is reached using $N_H$ HH basis functions, the use of a larger number of HH basis elements in the expansion of the adiabatic basis functions is superfluous. However, we can expect that the number of adiabatic functions $N_A$ needed to reach the same accuracy will be $N_A\ll N_H$. This is because the HA functions have been optimized to the specific Hamiltonian problem by solving eq. \refeq{adeq} for each value of the hyperradius. We would like to stress the fact that the equivalence between the HH and the HA method using a tractable number $N_H$ of HH functions applies in the presence of deep bound states. When shallow bound states, such as Efimov states, are present the situation changes, and a direct application of the HH method encounters the problem of the inclusion of a very large number of basis states in the expansion of the wavefunction. This is related to the correct description of the adiabatic potentials in the asymptotic regime. In this case the use of the asymptotic form of the Faddeev equations given above proves to be extremely useful, as for example in the solution of the system of three helium atoms~\cite{nielsen}. \section{Scattering Observable Calculations} \label{scatt} In this section we apply the HA expansion to the study of continuum states of a three-body system. The case considered is the scattering of one particle off the other two, which form a dimer, at energies below the three-body breakup threshold. The wavefunction for the system can be written as \be \Psi = \Psi_c + \Psi_a , \ee where the first term is $\lleb^2$ and describes the system configurations in which the three particles are all close to each other. The second term represents the solution of the Schroedinger equation in the asymptotic region, in which the incident particle does not interact with the other two (the discussion will be limited to short-range potentials).
Moreover, we will consider the case of a two-body interaction that supports only one dimer bound state, of energy $E^{2b}$. Accordingly, we will consider energies $E^{2b} \le E < 0$. The explicit form of the term $\Psi_a$ depends on the energy $E$ of the system. However, the particular choice of the function $\Psi_a$ is rather arbitrary, as it can be modified by adding or subtracting any $\lleb^2$ function. In the following we will consider and compare two different expressions for the asymptotic function $\Psi_a$. Practical applications will be shown for the case of nucleon-deuteron scattering using the semi-realistic $s$-wave MT-III potential, as the repulsive core of the potential allows a better understanding of the numerical problems associated with the method's implementation. \subsection{Scattering below Break-up: Method 1} The $\Psi_a$ term must describe the asymptotic state of the dimer plus a third particle. Therefore, the most natural choice for this term leads to building two independent and symmetrized states, which, for $L=0$, read as follows: \be \Omega^R_{ST} = \sum_i \n \frac{g(r_i)}{r_i} \frac{\sin{[ k_y y_i]}}{k_y y_i} P_0(\mu_i)|ST \ket , \label{ha1r} \ee and \be \Omega^I_{ST} = \sum_i \n \phi_d(r_i) \frac{\cos{[ k_y y_i]}(1-\exp[-\gamma y_i])}{k_y y_i} P_0(\mu_i)|ST\ket . \label{ha1i} \ee The distance between particle $i$ and particles $j,k$ forming a dimer is $y_i$, $\phi_d(r)$ is the dimer wavefunction of energy $E^{2b}$, $k^2_y =4m(E+|E^{2b}|)/3\hbar^2$, and $\n $ is a normalization factor chosen so that \be \bra \Omega^R_{ST} |\ham-E| \Omega^I_{ST} \ket - \bra \Omega^I_{ST} |\ham-E| \Omega^R_{ST} \ket = 1/2 \,\, . \label{norma} \ee The behavior of the function $\Omega^I_{ST}$ for $y_i\rightarrow 0$ has been regularized by means of a suitable factor. The constant $\gamma$ can be considered a nonlinear parameter of the scattering wavefunction.
The final result should be independent of the value chosen for it, but a poor choice can slow down the convergence significantly. A reasonable choice could be $\gamma\approx\sqrt{m|E^{2b}|/\hbar^2}$. A general scattering state is given by defining the following linear combinations \be \Omega^0_{ST} = u_{0R} \Omega^R_{ST} + u_{0I} \Omega^I_{ST} , \label{omega0} \ee and \be \Omega^1_{ST} = u_{1R} \Omega^R_{ST} + u_{1I} \Omega^I_{ST} . \label{omega1} \ee The term $\Psi_a$, having total spin $S$ and total isospin $T$, can thus be written as \be \Psi_a = \Omega^0_{ST} + \lscat \Omega^1_{ST} \label{psia} \ee where different choices for the matrix $u$ can be used to define the scattering matrix $\lscat$ \cite{k1}. Here we will use \be u = \left( \begin{array}{cc} i & -1 \\ i & 1 \end{array} \right) \ee defining $\lscat\equiv{\cal S}$, the $S$-matrix, with $\det u =2 \imath$. Another possible choice used here corresponds to $u_{0R}=u_{1I}=1$ and $u_{1R}=u_{0I}=0$, defining $\lscat\equiv{\cal R}$, the reactance matrix. The two representations are related as \be {\cal S}= (1+i{\cal R})(1-i{\cal R})^{-1} . \ee This identity holds for the exact matrices; therefore, it can be used as a check of the accuracy of the calculation by comparing the results obtained in both schemes. At energies below the three-body breakup, the $\Psi_c$ term is $\lleb^2$. Accordingly, it can be represented by means of an expansion in the same $\lleb^2$ basis used for bound states, namely \be \Psi_c=\sum_{m\nu} A_{m\nu}|m \nu \ket \label{lagscat} \ee From the above definitions we can construct the scattering state as \be \Psi = \sum_{m\nu} A_{m\nu}|m \nu \ket + \Omega^0_{ST} + \lscat \Omega^1_{ST} \ee The solution of a scattering problem at a given energy requires the determination of the amplitude $\lscat$ and the linear coefficients $A_{m\nu}$. To this aim we make use of the Kohn variational principle \cite{k1}, which can be written as \be [\lscat] = \lscat - \frac{2}{\det u} \bra \Psi^* | \ham - E | \Psi \ket.
\ee The numerical implementation of the variational principle leads to a first-order approximation of the amplitude $\lscat$ obtained through the solution of a linear system of equations of size $M+1$, where $M$ is the size of the basis set for the expansion of the core part of the wavefunction. If we define an array of unknowns $(\{A_{m\nu}\}, \lscat )$ of dimension $M+1$, the linear system can be written as: \be \left( \begin{array}{cc} H_{m'\nu',m\nu} & H_{m'\nu',\Omega^1} \\ H_{\Omega^1,m\nu} & H_{\Omega^1,\Omega^1} \end{array} \right) \left( \begin{array}{c} A_{m\nu} \\ \lscat \end{array} \right) = \left( \begin{array}{c} -H_{m'\nu',\Omega^0} \\ \frac{1}{4}\left( \det u - 2 H_{\Omega^1,\Omega^0} - 2 H_{\Omega^0,\Omega^1} \right) \end{array} \right) , \label{lsystem} \ee where $H_{x',x}$ stands for the matrix element \be H_{x',x} = \bra x'^* | \ham - E | x \ket . \ee The second-order estimate for $\lscat$ is then given by \be \lscat^{2nd} = \lscat^{1st} - \frac{2}{\det u} \bra {\Psi^{1st}}^* | \ham - E | \Psi^{1st} \ket , \label{2ndorder} \ee where $\Psi^{1st}$ is the wavefunction obtained by solving the linear system of eq. \refeq{lsystem}. Let us now discuss in more detail the structure of eq. \refeq{lsystem}. The top left part of the coefficient matrix, of dimension $M \times M$, contains the matrix elements used for the bound-state calculation when the scattering state has the same quantum numbers as the bound state (compare with eq. \refeq{bsystem}). Otherwise, specific states $|m\nu\ket$ with the proper quantum numbers have to be constructed. The additional matrix elements that need to be computed are those between the $\lleb^2$ basis functions and the scattering functions, and among the scattering functions themselves, for a total of $2M+4$ different terms.
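The assembly of the bordered linear system \refeq{lsystem} can be sketched schematically as follows. The matrix elements below are placeholder random numbers, not actual integrals, and the reactance choice $\det u = 1$ is assumed; the last lines also check the unimodularity of ${\cal S}=(1+i{\cal R})(1-i{\cal R})^{-1}$, which holds for any real ${\cal R}$:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 6  # size of the L^2 (core) basis

# placeholder values for the matrix elements <x'|H - E|x>; in a real
# calculation these are integrals involving the kets |m nu> and the
# asymptotic functions Omega^0, Omega^1
H_cc = rng.normal(size=(M, M))
H_cc = 0.5 * (H_cc + H_cc.T)        # core-core block
H_c1 = rng.normal(size=(M, 1))      # core - Omega^1 column
H_1c = rng.normal(size=(1, M))      # Omega^1 - core row
H_c0 = rng.normal(size=(M, 1))      # core - Omega^0 column
H_11 = np.array([[0.4]])            # Omega^1 - Omega^1 element
H_10, H_01 = 0.2, 0.3               # Omega^1-Omega^0 and Omega^0-Omega^1
det_u = 1.0                          # reactance-matrix choice of u

A = np.block([[H_cc, H_c1],
              [H_1c, H_11]])
b = np.concatenate([-H_c0.ravel(),
                    [0.25 * (det_u - 2.0 * H_10 - 2.0 * H_01)]])

sol = np.linalg.solve(A, b)
coeffs, R_first = sol[:M], sol[M]    # first-order estimate of R = tan(delta)

# consistency of the S- and R-matrix representations: for any real R
# the one-channel S-matrix S = (1 + iR)/(1 - iR) is unimodular
S = (1 + 1j * R_first) / (1 - 1j * R_first)
assert np.allclose(A @ sol, b)
assert np.isclose(abs(S), 1.0)
```

Only the last row, last column, and right-hand side change with the energy $E$ through $\Omega^0$ and $\Omega^1$, which is why the extra terms, though few, must be recomputed at every energy.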
The number of such extra terms grows linearly with the basis set size, and, due to the functional form of $\Omega^0_{ST}$ and $\Omega^1_{ST}$, they need to be calculated for every different choice of the system energy $E$. The application discussed above employs the HA basis in the expansion of the $\lleb^2$ $\Psi_c$ term. Alternatively, $\Psi_c$ could also have been expanded solely in terms of HH functions as \be \Psi_c=\sum_{mkl} A_{mkl}|m kl \ket , \ee where we have defined the ket \be | m kl \ket = L^{(5)}_m (\beta \rho) \exp{[-\beta \rho/2]}\otimes |kl, 0ST\Pi \ket . \ee After including a sufficient number of Laguerre polynomials, both expansions, in terms of HH or HA functions, are equivalent, leading to the same value of $\lscat$. Examples of this equivalence will be shown and discussed in the next Section. \subsection{Scattering below Break-up: Method 2} An alternative approach is the direct solution of eq. \refeq{usys}, which is just a different form of the three-body Schroedinger equation. The bound-state solutions have been discussed in Sect. II, and here we will discuss the scattering solutions below three-body breakup: $E^{2b} \le E < 0$. For this purpose it is important to determine the boundary conditions to be imposed on the functions $\{u_\nu^E(\rho)\}$. Firstly, let us observe that at very large $\rho$ the only open channel in the system of eqs. \refeq{usys} is the lowest one, and that the system uncouples: \be \left(-\frac{\hbar^2}{2 m} T_\rho +U_1-E + B_{11} \right) u_1=0. \label{usysa} \ee At $\rho=0$ the condition $u_1(0)=0$ holds, whereas the boundary conditions at large $\rho$ depend on the specific asymptotic forms of the hyperradial potential $U_1(\rho)$ and of the term $B_{1 1}(\rho)$. A detailed study of their asymptotic expressions will be the object of a forthcoming publication \cite{nota}.
For the purpose of this work it suffices to say that \be \left( -\frac{\hbar^2}{2 m} T_\rho +U_1-E + B_{11} \right) u_1 \rightarrow \left( \frac{d^2}{d\rho^2} + k_\rho^2 + o(\rho^{-3})\right) (\rho^{5/2} u_1), \label{asym1} \ee where the wavenumber $k_\rho$ is defined from the relation: \be E=E^{2b} + \frac{\hbar^2}{2 m} k_\rho^2. \ee The boundary conditions associated with $u_1$ thus are \be u_1(0)=0 , \ \ \ \ \lim_{\rho\rightarrow \infty} u_1(\rho) \rightarrow \tilde{u_1} = \frac{\sin{(k_\rho \rho)}}{\rho^{5/2}} + \tan\delta\frac{\cos{(k_\rho \rho)}}{\rho^{5/2}}, \label{asym} \ee while all other $u_\nu\rightarrow 0$ sufficiently fast as $\rho\rightarrow\infty$. Furthermore, the lowest adiabatic function $\Phi_1^{0ST\Pi}(\rho,\Omega)\rightarrow \rho^{3/2}\phi_d(r)|ST\ket$ at very large values of $\rho$~\cite{ffs88}. Therefore, the asymptotic behavior of the scattering wave function in terms of the adiabatic basis is: \be \Psi= \sum_\nu u_\nu(\rho)\Phi_\nu^{0ST\Pi}(\rho,\Omega) \rightarrow \phi_d(r)\left[ \frac{\sin{(k_\rho \rho)}}{\rho} + \tan\delta\frac{\cos{(k_\rho \rho)}}{\rho}\right]|ST\ket . \label{eqas} \ee In the limit $\rho \rightarrow \infty$ the relation $k_y y \approx k_\rho \rho$ holds, as $r$ is constrained by the finite size of the dimer wavefunction and therefore $r/\rho \ll 1$. Consequently, eq. \refeq{eqas} represents the asymptotic limit of $\Omega^R_{ST}+\tan \delta\, \Omega^I_{ST}$ for $\rho\rightarrow \infty$. The full equivalence between the above expression for the asymptotic wavefunction and the one given by eqs.
(\ref{ha1r},\ref{ha1i}) can be established by noticing that $\tilde{u_1}$ constitutes the leading term in the expansion of $\Omega^R_{ST}$ and $\Omega^I_{ST}$ in terms of the small parameter $r/\rho$~\cite{fab1}, which yields \be \bra \Omega^R_{ST} |\Phi_1 \ket \approx \frac{\sin{[k_\rho \rho]}}{\rho^{5/2}} + {\cal O}(\rho^{-7/2}), \label{omexp1} \ee and \be \bra \Omega^R_{ST}|\Phi_\nu \ket \approx \frac{\cos{[k_\rho \rho]}}{\rho^{5}} + \dots \;\;\; (\nu > 1) . \label{omexpnu} \ee A similar expansion holds for $\Omega^I_{ST}$. From the above discussion, we can define an alternative asymptotic term $\Psi_a$ as a combination of the following functions: \be \Omega^R_{\rho,ST} = \sqrt{\frac{m}{2\hbar^2 k_\rho}} (1-\exp[-\gamma \rho])^{\eta} \frac{\sin{[k_\rho \rho]}}{\rho^{5/2}} \Phi_1(\Omega,\rho), \label{ha2r} \ee and \be \Omega^I_{\rho,ST} = \sqrt{\frac{m}{2\hbar^2 k_\rho}} (1-\exp[-\gamma \rho])^{\eta} \frac{\cos{[k_\rho \rho]}}{\rho^{5/2}} \Phi_1(\Omega,\rho), \label{ha2i} \ee where the factor $(1-\exp[-\gamma \rho])^{\eta}$ is introduced as usual to regularize the behavior of the functions for $\rho\rightarrow 0$ (in practical calculations we have set $\eta=4$), and the functions are normalized as in eq. \refeq{norma}. The same approach as in the previous Section can now be applied. Accordingly, the scattering wave function can be written as \be \Psi=\sum_{m\nu}B_{m\nu}|m\nu\ket + \Omega^0_{ST} +{\cal S} \Omega^1_{ST} \;\;\; \ee where the asymptotic part is now given in terms of $\Omega^R_{\rho,ST}$ and $\Omega^I_{\rho,ST}$, and the core part $\Psi_c$ is expanded on the HA basis times a set of $\lleb^2$ functions $u_\nu(\rho)$. We can refer to this expansion as HA2. This approach is justified as the neglected terms in the $r/\rho$ expansion of $\Omega^R$ and $\Omega^I$ do not carry flux and can be incorporated into the unknown term $\Psi_c$.
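As a practical aside, once a numerical $u_1$ is available, the phase shift implied by the boundary condition of eq. \refeq{asym} can be extracted by matching the tail at two large hyperradii. The short sketch below (with a synthetic, hypothetical tail used only for the check) illustrates the procedure:

```python
import numpy as np

def tan_delta_from_tail(rho1, u1, rho2, u2, k):
    """Match u(rho) = [sin(k rho) + tan(delta) cos(k rho)] / rho^{5/2}
    at two large hyperradii rho1, rho2 and return tan(delta)."""
    a = np.array([[np.sin(k * rho1), np.cos(k * rho1)],
                  [np.sin(k * rho2), np.cos(k * rho2)]])
    b = np.array([u1 * rho1 ** 2.5, u2 * rho2 ** 2.5])
    coef = np.linalg.solve(a, b)      # coef = (A, A tan(delta)) with A ~ 1
    return coef[1] / coef[0]

# synthetic tail with a known phase shift, recovered by the matching
k, delta = 0.8, 0.3
u = lambda r: (np.sin(k * r) + np.tan(delta) * np.cos(k * r)) / r ** 2.5
td = tan_delta_from_tail(100.0, u(100.0), 137.0, u(137.0), k)
```

The two matching points must not be separated by a multiple of $\pi/k$, otherwise the $2\times 2$ system becomes singular.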
The approximate expression for the term $\Psi_a$ allows one to speed up the calculation significantly, as there is no need to calculate the overlap integrals between the HA basis functions and the asymptotic functions as in eq. \refeq{lsystem}. On the other hand, its implementation suffers from the following problems. At intermediate distances the expansion in $r/\rho$ of the asymptotic functions converges very slowly, so that a large number of HA functions needs to be taken into account. At large $\rho$, reproducing the functions of eqs. (\ref{ha1r},\ref{ha1i}) results in a very awkward behavior of the $\lleb^2$ term $\Psi_c$. Continuing the expansion of eq. \refeq{omexp1}, for instance, it is possible to show that the next term is $\cos{[k_\rho \rho]}/\rho^{7/2}$, which imposes the asymptotic behavior that the function $u_1$ has to reproduce. This particular functional form decays very slowly, and it is particularly hard to reproduce with a polynomial expansion. This problem is further enhanced by the presence of the oscillations associated with the cosine and sine terms. In order to solve the linear system \refeq{lsystem} taking into account the oscillatory behavior of the hyperradial functions at large $\rho$ values, we have implemented a Discrete Variable Representation (DVR) scheme \cite{dvr} rather than the standard variational approach. In a previous work \cite{lbk04} we have shown how to combine the Kohn variational principle with a DVR scheme for the case of a two-body system, which corresponds to a single one-dimensional differential equation. In this work we have a set of $N_A$ coupled one-dimensional differential equations.
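The difficulty described above can be illustrated with a toy projection experiment: an oscillating, algebraically decaying model tail retains a sizable $L^2$ residual after projection on a few tens of Laguerre functions, while a smooth exponentially decaying function is reproduced essentially exactly. The model functions below are hypothetical stand-ins for the actual hyperradial behavior:

```python
import numpy as np
from scipy.special import eval_laguerre
from scipy.integrate import trapezoid

def residual_fraction(f, rho, n_basis):
    """Fraction of the L^2 norm of f missed by its projection on the first
    n_basis Laguerre functions L_n(rho) exp(-rho/2), orthonormal on [0, inf)."""
    n = np.arange(n_basis)
    phi = eval_laguerre(n[:, None], rho[None, :]) * np.exp(-rho / 2.0)
    coef = trapezoid(phi * f, rho, axis=1)
    return 1.0 - coef @ coef / trapezoid(f ** 2, rho)

rho = np.linspace(1e-6, 100.0, 50001)
f_exp = np.exp(-rho)                                  # smooth, short-ranged
f_osc = ((1.0 - np.exp(-rho)) ** 4                    # regularized at rho = 0
         * np.cos(3.0 * rho) / rho ** 3.5)            # oscillating, slow decay
r_exp = residual_fraction(f_exp, rho, 20)
r_osc = residual_fraction(f_osc, rho, 20)
```

With 20 basis functions the exponential is reproduced to quadrature accuracy, whereas the oscillating tail keeps a residual that is orders of magnitude larger.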
Therefore we define a $(N_A M +1)\times (N_A M +1)$ unitary transformation matrix $\mdvr$ which is the direct sum of $N_A+1$ matrices \be \mdvr = \mdvr^{1d} \oplus \mdvr^{1d} \oplus \dots \oplus 1 , \ee where $\mdvr^{1d}$ is a $M\times M$ unitary matrix associated with a customary one-dimensional DVR of size $N_{DVR}=M$ built in $\rho$: \be \mdvr^{1d}_{ij} = L^{(5)}_{i} (t_j) \exp{[-t_j/2]} \sqrt{w_j} , \ee where $t_j$ and $w_j$ are the appropriate quadrature points and weights. By means of a parameter $\beta$, the last quadrature point $t_{N_{DVR}}$ can be associated with different physical values $\rho_{max}$ by setting $t_j = \beta \rho_j$. In this fashion we can constrain the quadrature points to be distributed between $0$ and $\rho_{max}$. \section{Numerical Applications} In order to illustrate the method outlined in the previous Sections, we present two applications to the $n-d$ system. The potential energy of the system is taken as the sum of three pairwise potentials. We consider the MT-III interaction $V_{MT-III}$, for which benchmark results exist in the literature~\cite{cpf89}. It reads: \be V_{MT-III}(r) = \left( 1438.72 \exp{[-3.11 \, r]} - 626.885 \exp{[-1.55 \, r]}\right)/r. \ee To make contact with the results of Ref.~\cite{fab1}, we have also used the Gaussian potential (named $V_G$): \be V_G(r) = -66.327 \exp{[-(0.64041 \, r)^2]}. \ee For both potentials we assume nuclear distances in fm and energies in MeV. The nucleon mass used is such that $\hbar^2/m=41.47$ MeV fm$^2$. Furthermore, we consider both potentials as acting only on the $l=0$ two-body partial wave. The potential $V_G$ supports one deuteron bound state, with zero angular momentum, of energy $E_{2b}=-2.22448$ MeV. The zero-energy scattering length is $a_s=5.4208$ fm, whereas for the MT-III potential the values are $E_{2b}=-2.23069$ MeV, and $a_s=5.5132$ fm.
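The two-body values quoted above can be verified independently by a direct grid diagonalization of the $s$-wave radial equation. The following is a minimal finite-difference sketch, not the method used in this work; note that for the two-body problem $\hbar^2/2\mu=\hbar^2/m=41.47$ MeV fm$^2$:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

h2m = 41.47                               # hbar^2/m = hbar^2/(2 mu), MeV fm^2
r = np.linspace(0.005, 30.0, 6000)        # radial grid in fm, u(0) = 0 implied
h = r[1] - r[0]

def ground_state(v):
    """Lowest eigenvalue (MeV) of -h2m u'' + v(r) u = E u on the grid."""
    diag = 2.0 * h2m / h ** 2 + v
    off = np.full(r.size - 1, -h2m / h ** 2)
    return eigh_tridiagonal(diag, off, eigvals_only=True,
                            select='i', select_range=(0, 0))[0]

v_g = -66.327 * np.exp(-(0.64041 * r) ** 2)
v_mt = (1438.72 * np.exp(-3.11 * r) - 626.885 * np.exp(-1.55 * r)) / r
e_g, e_mt = ground_state(v_g), ground_state(v_mt)
```

Both eigenvalues reproduce the quoted deuteron energies to within a few $10^{-3}$ MeV on this grid.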
For the potential $V_G$ we consider the three-body system with quantum numbers $\Pi=a$, $T=1/2$ and $S=1/2$, whereas for $V_{MT-III}$ we take $\Pi=a$, $T=1/2$ and $S=3/2$. As the potentials act only on the $s$-wave, the index $l$ in eq. \refeq{basis} is restricted to the value $l=0$, and the index $k$ can take the values $k=0,2,3,4,5,\dots,\infty$ in the first case and $k=1,2,3,4,5,\dots,\infty$ in the second case. \subsection{Bound states} \begin{figure}[htb] \begin{tabular}{cc} \includegraphics[scale=0.20,angle=-90]{adbasis.ps} & \includegraphics[scale=0.20,angle=-90]{conv.ps} \end{tabular} \caption{The top panel shows the lowest adiabatic curves $U_\nu(\rho)$ as functions of $\rho$. In order to display the behavior at large $\rho$ the curves are multiplied by a factor $\rho^2$. The lowest curve thus tends to $E_{2b}\rho^2$, and the others to the free HH spectrum $\hbar^2 K(K+4)/m$, with $K=2k$ and $k=0,2,3,4,\dots$ for $\nu=2,3,4,5,\dots$. The bottom panel shows the convergence of the lowest adiabatic curve $U_1(\rho)$ as a function of the number of HH used in the expansion of eq. \refeq{adexp}. The asymptote at $E_{2b}=-2.2245$ MeV is plotted for comparison.} \label{fig1} \end{figure} Figure \ref{fig1} shows, in the upper panel, the lowest adiabatic curves $U_\nu(\rho)$ calculated for the $V_G$ potential. In order to highlight their asymptotic behavior, the curves have been multiplied by a factor $\rho^2$. The lowest curve $U_1(\rho)$ thus tends to the deuteron energy times $\rho^2$, whereas the upper curves tend to the free HH spectrum, that is $4k(k+2)\hbar^2/m$ with $k=0,2,3,4,5,\dots $ for $\nu=2,3,4,5,6,\dots$. The value $k=1$ is not allowed as there is no completely symmetric HH with $k=1$ and $l=0$. Correspondingly, the adiabatic function $\Phi_1(\rho,\Omega)$ tends to the deuteron wavefunction, whereas the $\Phi_\nu$, $\nu > 1$, tend to the HH functions, with the appropriate normalization factors.
The lower panel shows the convergence of the lowest curve $U_1(\rho)$ as a function of the number of HHs employed in the expansion of eq. \refeq{adexp}. It shows that the larger $\rho$ becomes, the larger the expansion basis must be in order to properly describe the function $\Phi_1$. In practice, the radius of convergence of expansion \refeq{adexp} increases rather slowly when the basis set size is increased. The reason for this behavior is that when $\rho$ is increased the function $\Phi_1$ becomes more and more localized in the hyperangular phase-space, and therefore its description by means of the HH requires a larger and larger basis set. This behavior is not connected with any particular feature of the potential used in this specific calculation, but can be considered a general one, as it is induced by the geometric localization of the deuteron wavefunction in connection with the HH expansion. The thick curve is the solution of eq.~\refeq{fedeq} starting at $\rho=20$ fm. For large values of $\rho$, the corresponding eigenvalue reproduces the two-body binding energy $E_{2b}$. The description of a three-nucleon bound state using a central potential has to be taken as a preliminary test problem, performed to check the capability of the HA basis to treat scattering states in comparison with the HH expansion. The $V_G$ potential predicts two bound states in the three-body system, a very deep ground state and a very shallow excited state. Table \ref{tab0} reports the convergence patterns for the upper bounds $E_1^{N}$ and $E_2^{N}$ to the two bound states supported by the potential $V_G$, as a function of the number $N$ of HA and HH basis elements. The HA functions were expanded in 80 HH functions, which is the number required for the HH expansion to describe accurately both the deep and the shallow bound state. The number of HH functions necessary to obtain full convergence of the energy for the deep bound state is much smaller, around 10 functions.
The most striking feature to be observed in the table is the much more rapid convergence of the HA basis expansion compared to the HH one. Not only can full convergence be achieved with a basis which is one order of magnitude smaller, but already the inclusion of a single basis element yields an energy for the excited state within 1$\%$ of its converged value. \begin{table}[hbt] \begin{tabular}{lllclll} \hline \hline & \multicolumn{2}{c}{n=1} & \hspace{2cm}& & \multicolumn{2}{c}{n=2} \\ \cline{2-3} \cline{6-7} $N$ & HH & HA & & $N$ & HH & HA \\ 1 & -21.5808 & -22.0520 & & 1 & 0.0620 & -2.3484 \\ 2 & -21.9567 & -22.0850 & & 4 & -0.9576 & -2.3627 \\ 3 & -22.0694 & -22.0873 & & 10 & -2.0348 & -2.3632 \\ 4 & -22.0805 & -22.0874 & & 20 & -2.3036 & -2.3632 \\ 5 & -22.0852 & -22.0874 & & 30 & -2.3474 & -2.3632 \\ 6 & -22.0869 & -22.0874 & & 40 & -2.3582 & -2.3632 \\ 7 & -22.0872 & -22.0874 & & 50 & -2.3615 & -2.3632 \\ 8 & -22.0873 & -22.0874 & & 60 & -2.3626 & -2.3632 \\ 9 & -22.0874 & -22.0874 & & 70 & -2.3631 & -2.3632 \\ 10 & -22.0874 & -22.0874 & & 80 & -2.3632 & -2.3632 \\ \hline \hline \end{tabular} \caption{Patterns of convergence for the three-nucleon bound states obtained with the $V_G$ potential, as a function of the number $N$ of hyperangular basis functions included in the expansion. Energies are in MeV. The HA basis elements were calculated with 80 HH, $\beta=1.6$ fm$^{-1}$, and 33 Laguerre polynomials were employed in the expansion of eq. \refeq{lagbasis}. Note the different scales for the ground and excited state patterns of convergence.} \label{tab0} \end{table} \subsection{Scattering States} In the following, results obtained by combining the HA basis expansion with the expressions of eqs. (\ref{ha1r},\ref{ha1i}) are given and will be referred to as HA1. Table \ref{tab1} reports the full patterns of convergence of the $L=0,S=3/2$ MT-III phase shift $\delta$, at $E_{cm}=1$ MeV, as a function of the number of Laguerre polynomials $N_p$ used in expanding the hyperradial functions in eq.
\refeq{lagscat} and the number $N_A$ of adiabatic channels included. The HA functions have been calculated using 200 HH functions. This number of HH functions is sufficient to accurately describe the phase shifts below the three-body breakup. From the table it can be seen that the convergence requires a rather high number of HA basis elements, more than 100, whereas $12$ Laguerre polynomials are enough to achieve final convergence. \begin{table}[htb] \begin{tabular}{llllllll} \hline \hline $N_p \backslash N_A$ & 20 & 40 & 60 & 80 & 120 & 160 & 200 \\ \hline 5 & -55.974 & -55.912 & -55.902 & -55.898 & -55.897 & -55.896 & -55.896 \\ 9 & -55.937 & -55.879 & -55.870 & -55.867 & -55.865 & -55.864 & -55.864 \\ 13 & -55.932 & -55.878 & -55.868 & -55.865 & -55.864 & -55.863 & -55.863 \\ 17 & -55.934 & -55.878 & -55.868 & -55.865 & -55.863 & -55.863 & -55.863 \\ 21 & -55.932 & -55.878 & -55.868 & -55.865 & -55.864 & -55.863 & -55.863 \\ 25 & -55.933 & -55.878 & -55.868 & -55.865 & -55.864 & -55.863 & -55.863 \\ 29 & -55.932 & -55.878 & -55.868 & -55.865 & -55.864 & -55.863 & -55.863 \\ 33 & -55.931 & -55.878 & -55.868 & -55.865 & -55.864 & -55.863 & -55.863 \\ \hline \hline \end{tabular} \caption{Convergence of the phase-shift $\delta$ as a function of the number of Laguerre polynomials $N_p$ (see eq. \refeq{lagbasis}) and of the size $N_A$ of the HA basis set, at an incident energy of $E=1.00$ MeV. The HA basis is calculated with 200 HH elements. The non-linear parameter was fixed to $\beta=1.9$ fm$^{-1}$.} \label{tab1} \end{table} In order to analyze the pattern of convergence more deeply, in Table \ref{tab2} results obtained by means of the HH expansion \cite{pap1} are compared to those obtained with the HA approach. In each row of the table, $N_A$ indicates both the number of HH functions used in the HH calculation and the number of HA functions used in the HA calculation, the latter calculated using 200 HH functions.
As already pointed out, for the special case of $N_H=N_A$ the two expansions are equivalent and the results become identical, provided that a sufficiently high number of Laguerre polynomials is employed to describe the $\{u_\nu(\rho)\}$ set of functions. Therefore the equivalence can be seen in the last row of the table, at $N_A=200$ (in some cases the equivalence is reached already at $N_A=160$). For the case of $E=2.00$ MeV, two patterns of convergence are shown for two different HA bases, obtained with 120 HH and 200 HH, respectively. Here the equivalence can also be seen at $N_A=120$. In this energy range there is little difference in the results obtained with the two bases, for example when 20 or 40 HA basis elements are employed. It should be noticed that the results shown in Table \ref{tab2} present a different pattern of convergence with respect to those given in Table 2 of Ref.~\cite{pap1}: the reason is that in the previous paper the $S-$matrix representation was chosen for the matrix $u$, whereas in this work the $R-$matrix was preferred. The two choices are equivalent and lead, once convergence is achieved, to the same results. We can conclude that, although there is some improvement, the table shows that the convergence is not sped up significantly by transforming the HH basis into the HA basis. This suggests that the HA basis does not provide as optimized a basis for the scattering problem as it does for the bound state problem.
\begin{table}[h] \begin{tabular}{c|ccccccccc} \hline \hline & \multicolumn{2}{c}{0.20 MeV} &\hspace{2cm}& \multicolumn{2}{c}{1.00 MeV} &\hspace{2cm}& \multicolumn{3}{c}{2.00 MeV} \\ \hline $N_A$ & HH & HA1 && HH & HA1 && HH & \multicolumn{2}{c}{HA1} \\ \hline & & && & && & 120 & 200 \\ \hline 20 & -28.263 & -28.312 && -56.913 & -55.931 && -70.741 & -71.594 & -71.597 \\ 40 & -28.201 & -28.299 && -55.948 & -55.878 && -71.701 & -71.501 & -71.500 \\ 60 & -28.306 & -28.295 && -55.922 & -55.868 && -71.508 & -71.485 & -71.483 \\ 80 & -28.296 & -28.294 && -55.872 & -55.865 && -71.483 & -71.480 & -71.478 \\ 120 & -28.294 & -28.294 && -55.865 & -55.864 && -71.476 & -71.476 & -71.475 \\ 160 & -28.294 & -28.294 && -55.863 & -55.863 && -71.474 & - & -71.474 \\ 200 & -28.294 & -28.294 && -55.863 & -55.863 && -71.474 & - & -71.474 \\ \hline \hline \end{tabular} \caption{ Convergence of the phase-shift $\delta$ at three different energies below the break-up threshold for the MT-III potential, as a function of the size $N_A$ of the basis. The patterns of convergence for the HH and HA1 methods are shown for comparison. The HA basis was calculated employing 200 HH basis elements. For $E=2.00$ MeV the calculation with 120 HH basis elements is also shown. All calculations employed 33 Laguerre polynomials (see Table \ref{tab1}), and $\beta=1.9$ fm$^{-1}$.} \label{tab2} \end{table} Table \ref{tab3a} shows the convergence pattern for the phase-shift at $E=1.00$ MeV, obtained using the HA2 expansion for the asymptotic term. As anticipated in the previous Section, in order to obtain stability in the phase shift we have employed a much larger and finer hyperradial grid, consisting of 4153 points distributed up to $\rho=2000$ fm. At the same time, the HA basis set and associated eigenvalues were obtained with a larger number of HH basis functions, up to 2000, or by solving the asymptotic differential eq. \refeq{fedeq} for $\rho\ge\rho_0$ ($\rho_0=40$ fm).
This calculation has been performed using the Laguerre polynomials as an expansion basis for the hyperradial functions. As anticipated, the polynomials are not an appropriate choice to reproduce the long range oscillatory behavior of the hyperradial functions. This can be seen from the poor convergence pattern in terms of $N_p$ as the number of HA functions increases. For $N_A>8$ more than 100 polynomials are necessary. Furthermore, the convergence pattern is also poor with respect to the number of HA basis elements. The differences from the results of Table \ref{tab1} are remarkable. \\ \begin{table}[hbt] \begin{tabular}{r|ccccccc} \hline \hline $N_p \backslash N_A$ & 4 & 8 & 16 & 24 & 32 & 36 & 40 \\ \hline 21 & -57.753 & -57.231 & -57.063 & -57.037 & -57.230 & -57.028 & -57.027 \\ 41 & -57.638 & -56.915 & -56.581 & -56.511 & -56.489 & -56.484 & -56.480 \\ 61 & -57.628 & -56.868 & -56.456 & -56.348 & -56.310 & -56.300 & -56.293 \\ 81 & -57.627 & -56.858 & -56.414 & -56.281 & -56.228 & -56.214 & -56.204 \\ 101 & -57.626 & -56.855 & -56.399 & -56.251 & -56.188 & -56.169 & -56.156 \\ 121 & -57.626 & -56.853 & -56.393 & -56.237 & -56.166 & -56.145 & -56.129 \\ \hline \hline \end{tabular} \caption{Convergence of the phase-shift $\delta$, using the HA2 method, as a function of the number of Laguerre polynomials $N_p$ (see eq. \refeq{lagbasis}) and of the size $N_A$ of the HA basis set, at an incident energy of $E=1.00$ MeV. The HA basis is calculated with 2000 HH elements. The non-linear parameter was fixed to $\beta=1.9$ fm$^{-1}$.} \label{tab3a} \end{table} \begin{figure}[hbt] \begin{tabular}{cc} \includegraphics[scale=0.20,angle=-90]{beta_delta_HA1.ps} & \includegraphics[scale=0.20,angle=-90]{beta_delta_HA2.ps} \end{tabular} \caption{The phase-shift $\delta$ for different choices of the non-linear parameter $\beta$ and of the size of the expansion in Laguerre polynomials.
The top panel shows the convergence for expansion HA1, and the bottom panel for expansion HA2. Note the different scales on the $y-$axis of the two graphs.} \label{fig2} \end{figure} Figure \ref{fig2} shows the effect on the phase-shift of varying the non-linear parameter $\beta$. The upper panel shows results for the HA1 expansion, whereas the lower panel refers to the HA2 expansion. Different sizes of the Laguerre basis are shown. In principle, for a complete basis set, that is $N_p=\infty$, varying the parameter $\beta$ should have no effect. When the basis set is finite, the stability of the result, in this case the phase-shift, with respect to changes of $\beta$ is a measure of the completeness of the expansion. In particular, by comparing the upper and lower panels, one can see that the HA1 polynomial expansion of the functions $u_\nu(\rho)$ is much more effective than in the HA2 case (also note the different scales of the $y-$axis). In the first case, the expansion with $17$ polynomials is completely unaffected by changes in $\beta$, whereas in the second case even a basis set as large as $120$ polynomials yields significantly different results for different choices of $\beta$, indicating that the result is far from convergence.
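The stability test just described can be mimicked in a few lines: projecting a model hyperradial function on a scaled Laguerre basis and monitoring the residual as $\beta$ varies shows a visible $\beta$-dependence for a small basis and essentially none for a larger one. The model function and parameter values below are hypothetical:

```python
import numpy as np
from scipy.special import eval_laguerre
from scipy.integrate import trapezoid

def residual(f, rho, beta, n_basis):
    """Relative L^2 error of projecting f on n_basis Laguerre functions of
    argument beta*rho; the sqrt(beta) keeps them orthonormal in rho."""
    t = beta * rho
    n = np.arange(n_basis)
    phi = np.sqrt(beta) * eval_laguerre(n[:, None], t[None, :]) * np.exp(-t / 2.0)
    coef = trapezoid(phi * f, rho, axis=1)
    return 1.0 - coef @ coef / trapezoid(f ** 2, rho)

rho = np.linspace(1e-6, 60.0, 30001)
f = np.exp(-rho)                          # model hyperradial function
betas = [0.8, 1.0, 1.2, 1.6]
spread4 = np.ptp([residual(f, rho, b, 4) for b in betas])
spread16 = np.ptp([residual(f, rho, b, 16) for b in betas])
```

The spread over $\beta$ with 16 functions is much smaller than with 4, the numerical signature of an (almost) complete expansion.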
\begin{table} \begin{tabular}{l|cccccccc} \hline \hline $\rho_{\rm max} \backslash N_{DVR} $ & 100 & 150 & 200 & 250 & 300 && \multicolumn{2}{c}{350} \\ \cline{8-9} & & & & & && 1$^{st}$& 2$^{nd}$ \\ 200 & -56.179 & -56.161 & -56.159 & -56.159 & -56.159 && -56.161 & -56.160 \\ 400 & -56.124 & -56.100 & -56.095 & -56.093 & -56.092 && -56.091 & -56.092 \\ 600 & -56.096 & -56.089 & -56.085 & -56.084 & -56.084 && -56.085 & -56.083 \\ 800 & -56.119 & -56.089 & -56.084 & -56.083 & -56.082 && -56.080 & -56.081 \\ 1000 & -56.162 & -56.087 & -56.082 & -56.081 & -56.081 && -56.082 & -56.081 \\ 1200 & -56.149 & -56.088 & -56.082 & -56.081 & -56.081 && -56.082 & -56.081 \\ 1400 & -56.106 & -56.084 & -56.082 & -56.081 & -56.081 && -56.077 & -56.080 \\ 1600 & -56.154 & -56.090 & -56.082 & -56.081 & -56.081 && -56.082 & -56.080 \\ \hline \hline \end{tabular} \caption{Convergence of the phase-shift $\delta$ at $E=1.00$ MeV, using the HA2 method, for the MT-III potential, as a function of the number $N_{DVR}$ of DVR points employed and of the last grid point $\rho_{\rm max}$. Convergence is shown for the second order estimate of $\delta$ for all values of $N_{DVR}$ but the last, for which both the first and second order estimates are shown.} \label{tab3} \end{table} In order to circumvent this problem we use the DVR technique in the hyperradius variable. Table \ref{tab3} shows the convergence, in terms of different choices of $\rho_{max}$ and of the number of DVR points employed, of a test calculation with 40 adiabatic functions for the MT-III potential at $E=1.00$ MeV. For the largest case ($N_{DVR}=350$), we show both the first and second order values of the phase-shift obtained by using the Kohn variational principle. In order to obtain a good convergence of the second order value it is important that the integral in eq. \refeq{2ndorder} be calculated with very high numerical accuracy. The hyperradial grid used in the calculation consists of more than 4000 grid points up to $\rho=2000$ fm.
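For reference, the one-dimensional transformation $\mdvr^{1d}$ can be built from standard Gauss-Laguerre quadrature. The sketch below uses a convention in which the quadrature weights absorb the measure $t^5 e^{-t}$, so that the resulting matrix is orthogonal to machine precision; the function and variable names are illustrative:

```python
import numpy as np
from scipy.special import roots_genlaguerre, eval_genlaguerre, gammaln

def laguerre_dvr(m, alpha=5):
    """One-dimensional DVR from m generalized Laguerre functions on the
    Gauss-Laguerre grid; returns the (orthogonal) transformation matrix,
    the nodes t_j and the weights w_j (which absorb t^alpha exp(-t))."""
    t, w = roots_genlaguerre(m, alpha)
    i = np.arange(m)
    # normalization making the polynomials orthonormal with respect to
    # the measure t^alpha exp(-t)
    norm = np.exp(0.5 * (gammaln(i + 1) - gammaln(i + alpha + 1)))
    u = norm[:, None] * eval_genlaguerre(i[:, None], alpha, t[None, :]) * np.sqrt(w)
    return u, t, w

# rescaling t_j = beta * rho_j places the last node at a chosen rho_max
u1d, t, w = laguerre_dvr(12)
beta = t[-1] / 1200.0            # e.g. rho_max = 1200 fm
rho_nodes = t / beta
```

The $m$-point Gauss rule is exact for the polynomial products involved, which is why the transformation is orthogonal up to roundoff.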
The use of the DVR technique allowed for stable results in terms of the hyperradial expansion. The use of 350 DVR points is equivalent to a calculation with 350 Laguerre polynomials, which in general is much more involved to carry out. However, the number $N_A=40$ of HA functions used in this calculation is not enough to describe the phase shift well. At $E=1.00$ MeV the HA1 method, as well as the HH method, predicts $\delta=-55.863$ degrees, to be compared to the result of the HA2 method, $\delta=-56.081$ degrees, using $N_A=40$. In order to obtain a stable result for $\delta$ using the HA2 method, the value $N_A=120$ has to be considered, with $N_{DVR}>350$, since the number of DVR points has to be increased as $N_A$ increases. The dimension of the HA2 problem is $N_A\times N_{DVR}$, and it is clear that the problem very soon becomes computationally unsustainable, unless exceptional computational resources are available. \begin{table}[h] \begin{tabular}{c|llllllll} \hline \hline & \multicolumn{2}{c}{0.20 MeV} && \multicolumn{2}{c}{1.00 MeV} && \multicolumn{2}{c}{2.00 MeV} \\ \cline{2-3} \cline{5-6} \cline{8-9} $N_A$ & HA1 & HA2 && HA1 & HA2 && HA1 & HA2 \\ 4 & -28.364 & -29.065 && -56.136 & -57.625 && -72.344 & -71.988 \\ 8 & -28.340 & -28.739 && -56.038 & -56.852 && -71.965 & -71.871 \\ 12 & -28.328 & -28.604 && -55.984 & -56.545 && -71.770 & -71.437 \\ 16 & -28.319 & -28.532 && -55.947 & -56.385 && -71.660 & -71.210 \\ 20 & -28.312 & -28.487 && -55.922 & -56.286 && -71.597 & -71.070 \\ 24 & -28.308 & -28.456 && -55.906 & -56.218 && -71.558 & -71.975 \\ 28 & -28.304 & -28.434 && -55.895 & -56.169 && -71.534 & -71.907 \\ 32 & -28.302 & -28.417 && -55.888 & -56.133 && -71.518 & -71.855 \\ 36 & -28.300 & -28.404 && -55.882 & -56.104 && -71.507 & -71.815 \\ 40 & -28.299 & -28.394 && -55.877 & -56.081 && -71.500 & -71.783 \\ \cline{2-3} \cline{5-6} \cline{8-9} Table \ref{tab2} & \multicolumn{2}{c}{-28.294} && \multicolumn{2}{c}{-55.863} && \multicolumn{2}{c}{-71.474} \\ \hline
\hline \end{tabular} \caption{Patterns of convergence for the two different choices of the asymptotic term, in terms of the number $N_A$ of HA basis elements, at three different energies. The MT-III potential has been used. The last row reports the converged values from Table \ref{tab2}. The columns refer to a choice of $\beta=1.9$ fm$^{-1}$ for HA1, and $\rho_{max}=1200$ fm for the HA2 expansion. Moreover, the HA1 values are associated with a calculation with 200 HH, whereas the HA2 values with a calculation with 2000 HH. The HA2 results have been obtained with the DVR scheme.} \label{tab4} \end{table} Table \ref{tab4} compares the convergence patterns for $\delta$ at three different energies for the two suggested choices of the asymptotic term $\Psi_a$, namely the one in eqs. (\ref{ha1r},\ref{ha1i}), referred to as HA1, and the one in eqs. (\ref{ha2r},\ref{ha2i}), referred to as HA2, in terms of the number $N_A$ of adiabatic channels. Due to the very large basis sets required to obtain convergence with the HA2 term, the pattern of convergence is limited to a few channels, fewer than required to obtain full convergence. The last row reports the converged values from Table \ref{tab2}. It is possible to see that the expansion HA1 converges faster towards the final value, whereas expansion HA2 approaches it rather slowly. The reason is the difference in the treatment of the asymptotic wavefunction. In the HA1 method, as well as in the HH method, the asymptotic configuration described by $\Psi_a$ is reached at intermediate distances. Conversely, in the HA2 method the configuration described by $\Psi_a$ is reached at much larger values of $\rho$. Furthermore, at intermediate distances a large number of HA functions is needed in order to reproduce the correct behavior.
The following figures present important characteristics of the hyperradial functions used in the expansion HA2.\\ \begin{figure}[htb] \begin{tabular}{cc} \includegraphics[scale=0.20,angle=-90]{ufunc.ps} & \includegraphics[scale=0.20,angle=-90]{ufunc_asym.ps} \end{tabular} \caption{The functions $u_\nu(\rho)$ ($\nu=1,2,3,4$) at $E=2.00$ MeV. The top panel shows the short-range region. Some of the functions were magnified by the factor shown in the legend. The circles represent the DVR amplitudes at the DVR grid points. The lower panel shows the long range region. In order to highlight the asymptotic behavior of each function, $u_1$ is multiplied by $\rho^{7/2}$, and $u_2$, $u_3$ and $u_4$ by $\rho^5$. In the bottom panel $u_1$ is also magnified by a factor 50000.} \label{fig3} \end{figure} Fig. \ref{fig3} shows the functions $u_\nu(\rho)$ calculated with $N_A=4$, $N_{DVR}=300$ and $\rho_{max}=1200$ fm. The dots indicate the DVR amplitudes at the DVR points, whereas the lines represent the $u_\nu(\rho)$ functions obtained by back-transforming to the original polynomial basis. The top panel displays the short-range region ($0\le\rho\le 20$ fm), where the function $u_1$ is predominant. The bottom panel shows a part of the long-range region ($100\le\rho\le 300$ fm). Here the situation is drastically different, and the functions $u_2$, $u_3$ and $u_4$ have a much larger amplitude than $u_1$ (which is magnified by a factor $50000$). Also, in order to highlight the asymptotic behavior, $u_1$ is multiplied by $\rho^{7/2}$, and $u_2$, $u_3$ and $u_4$ by $\rho^5$. The most striking feature is the oscillations present in all curves. This behavior is a consequence of the decomposition of the asymptotic configuration in terms of HA functions. This peculiar long range behavior is the cause of the very slow convergence of the phase-shift shown in Tables \ref{tab3a}, \ref{tab3}, \ref{tab4}.
The behavior obtained for the curves $u_\nu$ is the one expected from the analytical expansion of the asymptotic terms indicated in eqs. (\ref{omexp1},\ref{omexpnu}). \\ \begin{figure}[h] \includegraphics[scale=0.25,angle=-90]{u4.ps} \caption{The function $u_4(\rho)$ (see eq.\refeq{adbasis}) obtained with the methods HA1 (continuous line) and HA2 (dotted line). In the first case the function $u_4$ is short-ranged, decaying exponentially with $\rho$, whereas in the second case it shows long range oscillations.} \label{fig4} \end{figure} Figure \ref{fig4} compares the hyperradial function $u_4(\rho)$ obtained with the methods HA1 and HA2. In particular, it highlights that the former is short-ranged and exponentially decaying with $\rho$, whereas the latter oscillates as indicated in eq. \refeq{omexp1}. As mentioned in the Introduction, in Ref. \cite{fab1} the phase shift for the potential $V_G$ has been calculated from eq.~\refeq{usys} in the so-called uncoupled adiabatic approximation (UUA), retaining one hyperradial function. Namely, the following equation has been solved: \be \left[ -\frac{\hbar^2}{2 m} T_\rho +U_1-E+ B_{11} \right] u_1(\rho) =0 \label{usys1} \ee with the asymptotic condition $u_1(\rho)\rightarrow \sin(k\rho+\delta+3\pi/2)$ as $\rho\rightarrow\infty$. Apart from the factor $3\pi/2$, this is equivalent to the method HA2 given in the previous Section taking into account one HA function.\\ \begin{figure}[hbt] \includegraphics[scale=0.25,angle=-90]{sfasamento.ps} \caption{Elastic deuteron-nucleon phase-shift below the three-body break-up (marked by the dotted line) for the $V_G$ potential. The full line corresponds to a calculation retaining one HA basis element and using the HA2 method. The dots correspond to the full calculation. }\label{fig5} \end{figure} In Figure \ref{fig5} we show the phase-shift $\delta(E)$.
The dots represent fully converged results obtained with the HA1 expansion, whereas the continuous line represents results obtained by including just one adiabatic function in the expansion HA2. It is possible to notice that the UUA provides a very good first-order estimate of the phase-shift. However, the deviation from the complete expansion can be as large as 10\%. Also notice that in Figure \ref{fig5} the phase-shifts have been normalized so that $\delta(E=0)-\delta(E=\infty)=360$ degrees, as there are two bound trimer states. \section{Conclusions} In this paper we have investigated the capability of the HA basis to describe scattering states in a three-nucleon problem. The basis was generated from the hyperangular Hamiltonian by means of an expansion in HH functions. We have shown the complete equivalence between the adiabatic basis generated using $N$ HH functions and the HH basis of dimension $N$. This equivalence provides a useful benchmark when the convergence of the quantities of interest is studied in terms of the number $N_A$ of adiabatic functions. For example, for bound states it is well known that $N_A\ll N$ suffices for the convergence of the binding energies. One goal of this paper was to investigate whether the same relation holds for scattering states. In particular, we studied the convergence of the $L=0$ phase shift $\delta$ corresponding to a process in which a nucleon collides with a deuteron at low energies in the state $S=3/2$. For this purpose we have used the MT-III potential. In the calculation of the phase shift using the HA basis we have followed two different procedures. They were both based on a decomposition of the scattering wavefunction as a sum of two terms. One term describes the configurations in which the three particles are all close to each other and goes to zero as the interparticle distances increase. The second term describes the asymptotic configurations and has been regularized so that it goes to zero as $y\rightarrow 0$.
In the first procedure the HA basis has been used to expand the short-range part of the scattering wave function. The second-order estimate of the phase-shift has been obtained from the Kohn variational principle. A similar approach has been used before with the HH basis. Therefore, a detailed comparative analysis of the convergence patterns was possible. The conclusion is that the number of basis elements needed to achieve a comparable level of convergence for the phase-shift is of the same order for the two bases, that is $N_A\approx N$, which is a surprising difference with respect to what happens in bound state calculations. A possible explanation could be the following. In bound state calculations the wavefunction expansion benefits from the initial optimization performed by constructing the HA basis. Conversely, in scattering state calculations the solution of the linear system of eq.\refeq{lsystem} requires a different short-range behavior in the HA basis elements due to the presence of the terms $\Omega^0_{ST}$ and $\Omega^1_{ST}$ in the short distance region. The second procedure considered was based on a direct solution of the system of equations for the hyperradial functions given in eq.\refeq{usys}. This method, however, suffers from the following complications. The hyperradial boundary conditions to be imposed are those required to reconstruct the asymptotic configuration given by the functions defined in eqs.(\ref{ha1r},\ref{ha1i}). For very large values of $\rho$ the boundary conditions are simple and are given by eq.~\refeq{asym} for the lowest function ($\nu=1$); all other functions go to zero as $\rho\rightarrow\infty$. This means that the solution of the linear system has to be obtained over a very extended hyperradial grid. Moreover, the adiabatic potentials and functions have to be accurately known on that grid.
In the present work we have partially solved the numerical difficulties associated with the solution of eq.~\refeq{usys} by introducing a variational DVR procedure. From the present study we can conclude that the use of the HA basis in the description of scattering states is not as advantageous as for bound states. The main drawback is that the number of basis elements required to reach convergence is not as low (in proportion) as in bound state calculations. Secondly, a number of numerical problems arise from the need to calculate the adiabatic curves and the associated basis elements at large distances. Further studies to improve the description of scattering states using the HA expansion are currently underway.
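As an aside, the asymptotic matching underlying eq.~\refeq{asym} — reading a phase shift off a computed hyperradial tail $u(\rho)\to\sin(k\rho+\delta+3\pi/2)$ — can be illustrated with a short numerical sketch. Writing the tail as $a\sin(k\rho)+b\cos(k\rho)$, the total phase is $\arctan(b/a)$, from which the $3\pi/2$ offset is subtracted. This is only a schematic illustration with synthetic data, not the variational DVR procedure used in this work; all function names are ours.

```python
import numpy as np

def extract_phase_shift(rho, u, k, offset=1.5 * np.pi):
    """Fit u(rho) ~ A*sin(k*rho + delta + offset) on an asymptotic grid.
    Writing the fit as u = a*sin(k*rho) + b*cos(k*rho), the total
    phase is atan2(b, a); delta is returned modulo 2*pi."""
    M = np.column_stack([np.sin(k * rho), np.cos(k * rho)])
    a, b = np.linalg.lstsq(M, u, rcond=None)[0]
    total = np.arctan2(b, a)  # delta + offset, modulo 2*pi
    return (total - offset) % (2 * np.pi)

# synthetic "asymptotic tail" with a known phase shift of 0.7 rad
k, delta_true = 1.3, 0.7
rho = np.linspace(40.0, 60.0, 200)
u = 0.9 * np.sin(k * rho + delta_true + 1.5 * np.pi)

delta = extract_phase_shift(rho, u, k)
print(round(delta, 6))  # -> 0.7
```

In practice the fit window must lie well beyond the range of the potential, where the long-range couplings discussed above have died out.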
\section{Introduction} High resolution observations have revealed the existence of many young OB stars in the galactic center (GC). Accurate measurements of the orbital parameters of these stars give strong evidence for the existence of a massive black hole (MBH) which governs the dynamics in the GC \citep{sch+02b,ghe+03a}. Most of the young stars are observed in the central 0.5 pc around the MBH. The young stellar population in the inner 0.04 pc (the 'S-stars' or the 'S-cluster') contains only young B-stars, in an apparently isotropic distribution around the MBH, with relatively high eccentricities ($0.3\le e\le0.95$) \citep{ghe+03a,eis+05}. The young stars outside this region contain many O-stars residing in a stellar disk moving clockwise in the gravitational potential of the MBH \citep{lev+03,gen+03a,lu+06,pau+06,tan+06}. The orbits of the stars in this disk have an average eccentricity of $\sim0.35$ and the opening of the disk is $h/R\sim0.1$, where $h$ is the disk height and $R$ is its radius. \citet{pau+06} and \citet{tan+06} suggested the existence of another stellar disk rotating counterclockwise and almost perpendicular to the CW disk. This disk is currently debated, as many of the young stars have intermediate inclinations and are possibly just outliers that do not form a coherent disk structure \citep{lu+06}. Here we briefly report on our study of the dynamical evolution of the young stars in the GC, both in the stellar disk and in the S-cluster. We use extensive N-body simulations with a realistic number of stars ($10^{3}-10^{5}$) using the {\tt gravitySimulator}, a 32-node cluster at the Rochester Institute of Technology that incorporates GRAPE accelerator boards in each of the nodes \citep{har+07}. Thus we are able to probe the dynamics of the stars near the MBH and their stellar environment.
We study two basic issues: (1) the long term evolution of the S-stars up to their lifetime of a few $10^{7}$ yrs, including their dynamical interaction with stars in the vicinity of the MBH; (2) the evolution of a realistic stellar disk, taking into account both the effects of non-equal mass stars, as studied earlier, and, more importantly, the effect of the interactions of disk stars with the stellar cusp around the MBH. As we show, the latter component proves to be more important than the other components discussed in previous studies. A detailed report of our complete set of simulations (not shown here), together with analytic calculations, will be presented in upcoming papers (Perets et al., in preparation). \section{Formation and/or migration origin} Analytic calculations and simulations have shown that young stars could have formed and grown over short times of thousands to millions of years in a gaseous disk around the MBH (e.g. \citep{nay+05b,lev07}). Such stars could then form the stellar disk currently observed in the GC. It was suggested that the 'S-stars', with their very different properties, migrated from the stellar disk through a planetary-migration-like process \citep{lev07}. This interesting possibility has not yet been studied quantitatively, but would suggest that the migrating stars should have relatively low eccentricities. Another possibility is that these stars have a different origin, possibly from the disruption of young binaries and the subsequent capture of one of their components \citep{gou+03}. It was recently shown that such a scenario could be consistent with the current knowledge regarding the number of the observed 'S-stars' \citep{per+06}. The initial eccentricity of the captured stars should then be very high $(>0.96)$ in this scenario. We note that other scenarios were suggested for the origin of the young stars in the GC, but they seem to be disfavored by current observations (see e.g. \cite{ale05} and \cite{pau+06}).
\section{The S-stars} Only one study of the dynamical evolution of the S-stars since their capture/formation has been published; it explored the possible role of an IMBH in their evolution \citep{mik+08}. Here we present the results of N-body simulations dedicated to the study of such evolution. In order to do so we modeled a small isotropic cusp of 1200 stars, with masses of $3\, M_{\odot}$ (200 stars) and $10\, M_{\odot}$ (1000 stars), around a MBH of $3.6\times10^{6}\, M_{\odot}$. We used a power law radial distribution of $r^{-\alpha}$ extending from $0.001-0.05$ pc near the MBH, with $\alpha=2$ for the more massive stars and $\alpha=1.5$ for the lower mass stars. The more massive stars correspond to the many stellar black holes (SBHs) thought to exist in this region, whereas the lower mass stars correspond to the S-stars in the same region. Since some of the S-stars may have higher masses of $\sim10\, M_{\odot}$, the higher mass stars in the simulation could also be treated as S-stars. We did not see any major differences in the evolution of the more massive and the less massive stars, and we discuss the evolution of both together. We studied two evolutionary scenarios for the S-stars. In the first we assumed that the S-stars were captured through a binary disruption by the MBH \citep{gou+03,per+06}, and therefore have initially highly eccentric orbits $(>0.96)$; they then evolve for a few $10^{7}$ yrs. In the second scenario we assumed the S-stars formed in a gaseous disk and migrated to their current position, and therefore have low eccentricities ($<0.3$); they then evolved for $5\,$Myrs (the lifetime of the observed stellar disk). In order to check both scenarios we followed the evolution of those stars in our simulation with highly eccentric initial orbits (the first scenario) and those with low eccentricities (the second scenario) for the appropriate time scales. In Fig.
(1) we show the final eccentricity distribution of the S-stars in both scenarios, as compared to the orbits of the observed S-stars (taken from Gillessen et al.). These results suggest that, given the small statistics (16 S-stars with known orbits), the first scenario is strongly favored, since it could explain the currently observed orbits of the S-stars, i.e. stars on highly eccentric orbits could be scattered by other stars or SBHs to smaller, and even much smaller, eccentricities during their lifetimes. The second scenario, however, seems to be excluded (for the given assumptions), since it has major difficulties in explaining the large number of eccentric orbits in the S-stars observations vs. the bias towards low eccentricity orbits seen in the N-body simulations. This is clearly seen both after $5$ Myrs of evolution, and even after longer evolution, if these stars formed in an earlier epoch of star formation in a disk (not currently observed) $20$ Myrs ago. \begin{figure}[h!] \resizebox{\hsize}{!}{\includegraphics[clip=true]{f1a.eps}} \resizebox{\hsize}{!}{\includegraphics[clip=true]{f1b.eps}} \caption{\footnotesize The eccentricities of observed and simulated S-stars from the binary disruption scenario (after $5$ Myrs; upper figure) and from the disk migration scenario (after 5 and 20 Myrs; lower figure). } \label{f1} \end{figure} \section{The disk stars} \citet{ale+07} and \citet{cua+08} explored the dynamical evolution of a single stellar disk using small N-body simulations ($\sim100$ stars), where they studied the effects of massive stars in the disk (the mass function) and the structure of the disk (eccentric vs. circular). \citet{cua+08} also studied the role of wide binaries, following \citet{per+08}, who suggested binaries could have an important role in the evolution of the disk, somewhat similar to their role in stellar clusters \citep{heg75} and in the ejection of OB runaway stars.
These studies showed that although the different components contribute to the disk evolution, it is difficult to explain the current eccentricities of the observed stars and the thickness of the disk with only these components. We studied a single disk of $\sim5000\, M_{\odot}$, composed of either $5000$ equal mass stars, or $\sim2500$ stars with a Salpeter mass function between $0.6-50\, M_{\odot}$. The initial conditions are of a thin disk ($H/R\sim0.01)$ with a surface density of $r^{-2}$, with all stars on circular orbits ($e\lesssim0.01$). In addition we studied the evolution of such disks both with and without an isotropic stellar cusp component around the MBH in which the stellar disk is embedded. The region of the GC disk is thought to contain a few $10^{5}$ up to $10^{6}$ stars; simulating such a large number of stars in orbits close to a MBH, even with a GRAPE cluster, is currently difficult. However, the dynamics and relaxation processes close to the MBH are dominated mostly by the much smaller number of SBHs in this region (a few $10^{3}$ up to $10^{4}$ SBHs are thought to exist in the GC disk region; \citealp{mir+00,hop+06b,fre+06}). Simulating only this SBH component could therefore be more efficient in running time and at the same time capture most of the important relaxation processes affecting the dynamics of the disk. In our simulations we put $1.6\times10^{4}$ SBHs (of $10\, M_{\odot}$ each) with an isotropic power law distribution $(n\propto r^{-2}$), between $0.01-0.8$ pc. The evolution of the mean eccentricity of the disk stars is shown in Fig. (2), both for low mass stars ($<15\, M_{\odot}$) and high mass stars ($M\ge15\, M_{\odot}$, i.e. such as the observed disk stars in the GC). The evolution of a stellar disk embedded in a cusp of SBHs is compared with that of an isolated stellar disk with a Salpeter mass function between $0.6-80\, M_{\odot}$ (i.e.
higher mass cutoff than used in the disk+cusp simulation, to allow for the disk heating by more massive stars, as discussed in \citealp{ale+07}). The results show that the SBH cusp component makes an important contribution to the disk evolution. Although one disk has a lower mass cutoff than the other, it is heated much more rapidly, due to the contribution of the cusp stars. More importantly, the high mass stars, corresponding to the stars that are actually observed in the GC disk, have relatively low eccentricities in the isolated disk, and would present a difficulty for our understanding of the disk evolution, as discussed in \citealp{ale+07,cua+08}. Adding the cusp component solves the problem, as even the eccentricities of the higher mass stars are high in this case and comparable with the observed eccentricities of the disk stars in the GC. We note that these simulations did not take into account the contribution of low mass disk stars (i.e. not the SBHs in the cusp), which will further accelerate the heating of the stellar disk. \begin{figure}[h!] \resizebox{\hsize}{!}{\includegraphics[clip=true]{f2.eps}} \caption{\footnotesize The evolution of the mean eccentricity of the disk stars (both low mass, $M<15\, M_{\odot}$, and high mass stars, $M>15\, M_{\odot}$) with and without interactions with the stellar cusp around the MBH. } \label{f2} \end{figure} \section{Summary} \label{sec:summary} The dynamical evolution of the young stars in the GC, both in the stellar disk(s) and in the inner S-cluster, is not yet understood. We used N-body simulations to study the dynamics and origin of these stars. We found that the S-stars close to the MBH in the GC could be stars that were captured following a binary disruption by the MBH, and later dynamically evolved, due to scattering by other stars or stellar black holes, to obtain their currently observed orbits.
We also show that the young stellar disk could have formed as a cold (thin) circular disk and evolved to its currently observed thick (hot) state, mostly due to scattering by cusp stars, whereas self-relaxation of the disk plays a more minor role, especially in regard to the more massive stars seen in observations. \bibliographystyle{apj}
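As a side note on the setup described above, a Salpeter mass function on $0.6-50\, M_{\odot}$ like the one used for the disk stars can be drawn by inverse-transform sampling of the power law $dN/dm\propto m^{-\gamma}$. The following sketch is illustrative only; the exponent $\gamma=2.35$ and the implementation are our assumptions, not the authors' code.

```python
import random

def sample_salpeter(n, m_lo=0.6, m_hi=50.0, gamma=2.35, seed=0):
    """Draw n stellar masses from dN/dm ~ m**(-gamma) on [m_lo, m_hi]
    by inverting the cumulative distribution (inverse-transform sampling)."""
    rng = random.Random(seed)
    p = 1.0 - gamma
    a, b = m_lo ** p, m_hi ** p
    return [(a + rng.random() * (b - a)) ** (1.0 / p) for _ in range(n)]

masses = sample_salpeter(10000)
assert all(0.6 <= m <= 50.0 for m in masses)
# a steep IMF: most of the drawn stars sit near the low-mass end
print(sum(m < 1.2 for m in masses) / len(masses) > 0.5)  # -> True
```

For $\gamma>1$ the sample is dominated by low-mass stars, which is why the few $M\ge15\, M_{\odot}$ stars discussed above carry most of the observational weight while contributing little to the number counts.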
\section{Introduction} The general-mass variable-flavor-number scheme (GM-VFNS) provides a rigorous theoretical framework for the description of the inclusive production of single heavy-flavored hadrons, combining the fixed-flavor-number scheme (FFNS) and the zero-mass variable-flavor-number scheme (ZM-VFNS), which are valid in complementary kinematic regions, in a unified approach that enjoys the virtues of both schemes and, at the same time, is free of their flaws. Specifically, it resums large logarithms by the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution of non-perturbative fragmentation functions (FFs), guarantees the universality of the latter as in the ZM-VFNS, and simultaneously retains the mass-dependent terms of the FFNS without additional theoretical assumptions. It was elaborated at next-to-leading order (NLO) for photo- \cite{KS} and hadroproduction \cite{KKSS,PRL}. In this presentation, we report recent progress in the implementation of the GM-VFNS at NLO. In Sec.~\ref{sec:d}, we present mass-dependent FFs for $D$-mesons extracted from global fits to $e^+e^-$ annihilation data \cite{Kneesch:2007ey}. In Sec.~\ref{sec:b}, we compare with transverse-momentum ($p_T$) distributions of $B$ mesons produced in run~II at the Tevatron \cite{Kniehl:2008zz}. Our conclusions are summarized in Sec.~\ref{sec:conclusions}. \boldmath \section{$D$-meson fragmentation functions} \label{sec:d} \unboldmath \begin{figure}[ht] \begin{center} \begin{tabular}{ccc} \includegraphics[height=3.7cm]{kniehl_bernda.fig1.eps} & \includegraphics[height=3.7cm]{kniehl_bernda.fig2.eps} & \includegraphics[height=3.7cm]{kniehl_bernda.fig3.eps}\\ (a) & (b) & (c) \end{tabular} \end{center} \vspace{-0.5cm} \caption{Comparison of (a) Belle, CLEO, (b) ALEPH, OPAL \cite{ddata}, and (c) CDF~II data \cite{Acosta:2003ax} on $D^+$ mesons with the global fit.
The dotted line in panel (b) refers to the $c$-quark-initiated contribution.} \label{fig:d} \end{figure} In Ref.~\cite{Kneesch:2007ey}, we determined non-perturbative FFs for $D^0$, $D^+$, and $D^{*+}$ mesons by fitting experimental data from the Belle, CLEO, ALEPH, and OPAL Collaborations \cite{ddata}, taking dominant electroweak corrections due to photonic initial-state radiation into account. The fits for $D^0$, $D^+$, and $D^{*+}$ mesons using the Bowler ansatz \cite{Bowler:1981sb} yielded $\chi^2/\mathrm{d.o.f.}=4.03$, 1.99, and 6.90, respectively. We assessed the significance of finite-mass effects through comparisons with a similar analysis in the ZM-VFNS. Under Belle and CLEO experimental conditions, charmed-hadron mass effects on the phase space turned out to be appreciable, while charm-quark mass effects on the partonic matrix elements are less important. In Figs.~\ref{fig:d}(a) and (b), the scaled-momentum distributions from Belle and CLEO and the normalized scaled-energy distributions from ALEPH and OPAL, respectively, for $D^+$ mesons are compared to the global fits. We found that the Belle and CLEO data tend to drive the average $x$ value of the $c\to D$ FFs to larger values, which leads to a worse description of the ALEPH and OPAL data. Since the $b\to D$ FFs are only indirectly constrained by the Belle and CLEO data, their form is only feebly affected by the inclusion of these data in the fits. Usage of these new FFs leads to an improved description of the CDF data \cite{Acosta:2003ax} from run~II at the Tevatron, as may be seen by comparing Fig.~\ref{fig:d}(c) with Fig.~2(b) of Ref.~\cite{PRL}. 
\boldmath \section{$B$-meson hadroproduction} \label{sec:b} \unboldmath \begin{wrapfigure}{r}{0.3\columnwidth} \centerline{\includegraphics[height=3.7cm]{kniehl_bernda.fig4.eps}} \vspace{-0.5cm} \caption{Comparison of the ALEPH, OPAL, and SLD data \cite{bdata} on $B$ mesons with global fit.}\label{fig:b} \end{wrapfigure} In Ref.~\cite{Kniehl:2008zz}, we performed a comparative analysis of $B$-meson hadroproduction in the ZM-VFNS and GM-VFNS. For this, we also updated the determination of $B$-meson FFs in the ZM-VFNS \cite{BKK} by fitting to recent $e^+e^-$ data from ALEPH, OPAL, and SLD \cite{bdata} and also adjusting the values of $m_b$ and the energy scale $\mu_0$ where the DGLAP evolution starts to conform with modern PDF sets. The fit using the Kartvelishvili-Likhoded ansatz \cite{Kartvelishvili:1985ac} yielded $\chi^2/\mathrm{d.o.f.}=1.495$ (see Fig.~\ref{fig:b}). We found that finite-$m_b$ effects moderately enhance the $p_T$ distribution; the enhancement amounts to about 20\% at $p_T=2m_b$ and rapidly decreases with increasing value of $p_T$, falling below 10\% at $p_T=4m_b$ (see Fig.~\ref{fig:bcdf}a). Such effects are thus comparable in size to the theoretical uncertainty due to the freedom of choice in the setting of the renormalization and factorization scales. This finding contradicts earlier assertions~\cite{Cacciari:2003uh} that mass corrections have a large size up to $p_T\approx20$~GeV and that {\it lack of mass effects \cite{BKK} will therefore erroneously overestimate the production rate at small $p_T$} in all respects. 
\begin{figure}[ht] \begin{center} \begin{tabular}{ccc} \includegraphics[height=4.5cm,viewport=40 4 420 471,clip]{% kniehl_bernda.fig5.eps} & \includegraphics[height=4.5cm,viewport=12 0 518 494,clip]{% kniehl_bernda.fig6.eps} & \includegraphics[height=4.5cm,viewport=14 18 283 293,clip]{% kniehl_bernda.fig7.eps} \\ (a) & (b) & (c) \end{tabular} \end{center} \vspace{-0.5cm} \caption{Comparison of CDF~II data \cite{CDF1,CDF2} on $B$ mesons with GM-VFNS, ZM-VFNS, FFNS, and FONLL predictions taken from Refs.\ (a) \cite{Kniehl:2008zz}, (b) \cite{CDF1}, and (c) \cite{CDF2}.} \label{fig:bcdf} \end{figure} In this connection, we also wish to point out that the statement made in Ref.~\cite{Cacciari:2002xb} that {\it large logarithmic corrections in the function $D(x,m^2)$ are simply discarded} in the approach of Ref.~\cite{BKK} is misleading. In fact, in the ZM-VFNS with non-perturbative FFs adopted in Ref.~\cite{BKK}, the Sudakov logarithms are fully included at NLO, namely both in the coefficient functions and evolution kernels, and there is no room for large logarithmic corrections in the ansatz for the heavy-quark FF at the initial scale $\mu_0$, which represents non-perturbative input to be fitted to experimental data. Looking at Fig.~1 in Ref.~\cite{BKK}, we observe that the theoretical results for $(1/\sigma_{\mathrm{had}})({\mathrm d}\sigma/{\mathrm d}x)(e^+e^-\to B+X)$ exhibit excellent perturbative stability and nicely agree with the OPAL data \cite{OPAL} in the large-$x$ regime, indicating that Sudakov resummation is dispensable in this scheme, in contrast to the fixed-order-next-to-leading-logarithm (FONLL) scheme \cite{Cacciari:2003uh,FONLL}, where the FFs are arranged to have perturbative components. 
\begin{wrapfigure}{r}{0.3\columnwidth} \centerline{ \includegraphics[height=4.5cm,viewport=9 69 528 645,clip]{% kniehl_bernda.fig8.eps}} \vspace{-0.5cm} \caption{Comparison of preliminary CDF~II data \cite{CDF3} on $B$ mesons with GM-VFNS, ZM-VFNS, and FFNS predictions \cite{Kniehl:2008zz}.} \label{fig:kkss} \end{wrapfigure} We must also caution the reader that the comparisons of experimental data with theoretical predictions in recent CDF~II publications \cite{CDF1,CDF2} are prone to misinterpretation. In Fig.~11 of Ref.~\cite{CDF1} (see Fig.~\ref{fig:bcdf}b), the variation of the ad-hoc weight function, $G(m,p_T)=p_T^2/(p_T^2+c^2m^2)$ with $c=5$ \cite{Cacciari:2003uh,FONLL}, which has a crucial impact on the prediction in the small-$p_T$ range by substantially suppressing its ZM-VFNS component, is not included in the theoretical error. In Fig.~11 of Ref.~\cite{CDF2} (see Fig.~\ref{fig:bcdf}c), the FFNS result, labeled {\it NLO}, is evaluated with the obsolete MRSD0 proton PDFs \cite{Martin:1992as}, revoked by their authors long ago, and a value of $\alpha_s^{(5)}(m_Z)$ falling short of the present world average \cite{pdg} by 3.3 standard deviations. Unfortunately, this historical result is still serving as a benchmark \cite{Happacher}. Despite unresummed large logarithms and poorly implemented fragmentation, the FFNS prediction, evaluated with up-to-date input, happens to almost coincide with the GM-VFNS one in the range 15~GeV${}\:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: p_T\alt25$~GeV. It also nicely reproduces the peak exhibited at $p_T\approx2.5$~GeV by the CDF~II data of Ref.~\cite{CDF1} (see Fig.~\ref{fig:bcdf}a). In Fig.~\ref{fig:kkss}, preliminary CDF~II data \cite{CDF3}, which explore the range 25~GeV${}<p_T<40$~GeV for the first time, are compared with NLO predictions in the GM-VFNS, ZM-VFNS, and FFNS \cite{Kniehl:2008zz}.
In the large-$p_T$ limit, the GM-VFNS result steadily merges with the ZM-VFNS one by construction, while the FFNS breaks down due to unresummed large logarithms. The CDF~II data point in the bin 29~GeV${}<p_T<40$~GeV favors the GM-VFNS and ZM-VFNS results, while it undershoots the FFNS result. \section{Conclusions} \label{sec:conclusions} The GM-VFNS provides a rigorous theoretical framework for global analyses of heavy-flavored-hadron inclusive production, retaining the full mass dependence of the FFNS, preserving the scaling violations and universality of the FFs in the ZM-VFNS, avoiding spurious $x\to1$ problems, and doing without ad-hoc weight functions. It has been elaborated at NLO for single production in $\gamma\gamma$, $\gamma p$ \cite{KS}, $p\overline{p}$ \cite{KKSS,PRL,Kniehl:2008zz}, and $e^+e^-$ collisions \cite{Kneesch:2007ey}. More work is in progress. \section*{Acknowledgments} The author thanks T.~Kneesch, G.~Kramer, I.~Schienbein, and H.~Spiesberger for the collaboration on the work presented here. This work was supported in part by DFG Grant No.\ KN~365/7--1 and by BMBF Grant No.\ 05~HT6GUA. \begin{footnotesize}
\section{Preliminaries} Let us define \[ \{0,1\}^n = \{(\epsilon_1, \epsilon_2, \ldots, \epsilon_n), \epsilon_i = 0 \,\mbox{or} \,1, 1 \leq i \leq n\},\quad D(n):=\{0,1\}^n \times \{0,1\}^n. \] We can think of the points of $\{0,1\}^n$ as vertices of an $n$-cube $Q^n$, and $D(n)$ as all the line segments joining two vertices of $Q^n$. We will ordinarily assume that the two vertices are distinct. We can represent the points $(X,Y) \in D(n)$ schematically by the diagram shown in Figure 1. \begin{figure*}[htbp] \begin{center} \includegraphics [scale=.3]{slide1.pdf} \end{center} \caption{Representing points in $D(n)$.} \label{fig1} \end{figure*} In the diagram, we place $X$ to the left of $X'$ if $w(X) < w(X')$, where $w(Z)$ denotes the number of $1$'s in the binary $n$-tuple $Z$. Similarly, we place $Y'$ below $Y$ if $w(Y') < w(Y)$. (If $w(X) = w(X')$ or $w(Y) = w(Y')$, then the order doesn't matter.) With $[n]:=\{1, 2,\ldots, n\}$, $I \subseteq [n]$ and $\overline{I} = [n] \backslash I$, a {\bf line} $L = L(I,C)$ consists of all the pairs $((x_1, x_2, \ldots, x_n),(y_1, y_2, \ldots, y_n))$ where $C = (c_j)_{j \in \overline{I}}$, with $y_i = 1-x_i$ if $ i \in I$, and $x_j = y_j = c_j$ if $j \in \overline{I}$. Thus, \[ |L(I,C)| = 2^{|I|}. \] In this case we say that $L$ has {\it dimension} $|I|$. \noindent {\bf Fact.} Every point $(X,Y) \in D(n)$ lies on a unique line.\\[.1in] {\bf Proof}. Just take $I = \{i \in [n]: x_i \neq y_i\}$ and $c_j = x_j = y_j$ for $j \in \overline{I}$.\\ \begin{figure*}[htbp] \begin{center} \includegraphics [scale=.3]{slide2.pdf} \end{center} \caption{A corner in $D(n)$.} \label{corner} \end{figure*} \noindent By a {\it corner} in $D(n)$, we mean a set of three points of the form $(X,Y), (X',Y), (X,Y')$ where $(X,Y')$ and $(X',Y)$ are on a common line $L$ (see Figure \ref{corner}). We can think of a corner as a binary tree with one level and root $(X,Y)$.
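The Fact above is entirely constructive, and it may help to see it spelled out computationally. The short sketch below (our notation, not part of the paper) recovers the unique line $L(I,C)$ through a point $(X,Y)\in D(n)$ and enumerates its $2^{|I|}$ points:

```python
from itertools import product

def line_through(X, Y):
    """Return (I, C) for the unique line L(I, C) containing (X, Y):
    I collects the coordinates where X and Y differ, and C fixes the
    common values on the remaining coordinates."""
    n = len(X)
    I = frozenset(i for i in range(n) if X[i] != Y[i])
    C = {j: X[j] for j in range(n) if j not in I}
    return I, C

def line_points(I, C, n):
    """Enumerate all 2**|I| points (x, y) on L(I, C)."""
    pts = []
    idx = sorted(I)
    for bits in product((0, 1), repeat=len(I)):
        x = [0] * n
        for k, i in enumerate(idx):
            x[i] = bits[k]
        for j, c in C.items():
            x[j] = c
        y = tuple(1 - x[i] if i in I else x[i] for i in range(n))
        pts.append((tuple(x), y))
    return pts

X, Y = (0, 1, 1, 0), (1, 1, 0, 0)
I, C = line_through(X, Y)
pts = line_points(I, C, 4)
print(len(pts) == 2 ** len(I) and (X, Y) in pts)  # -> True
```

Here $|I|=2$, so the line carries $2^2=4$ points, one of which is the original pair $(X,Y)$.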
More generally, a {\it binary tree} $B(m)$ with $m$ levels and root $(X,Y)$ is defined by joining $(X,Y)$ to the roots of two binary trees with $m-1$ levels. All of the $2^k$ points at level $k$ are required to lie on a common line (see Figure \ref{3_levels}). \begin{figure*}[htbp] \begin{center} \includegraphics [scale=.3]{slide3.pdf} \end{center} \caption{A binary tree with 3 levels} \label{3_levels} \end{figure*} \section{The main result.} Our first theorem is the following.\\[.2in] {\bf Theorem 1.} For all $r$ and $m$, there is an $n_0 = n_0(r,m)$ such that if $n \geq n_0$ and the points of $D(n)$ are arbitrarily $r$-colored, then a monochromatic binary tree $B(m)$ with $m$ levels is always formed. In fact, we can take $n_0(r,m) = c\,6^{rm}$ for some absolute constant $c$.\\[.1in] {\bf Proof.} Let $n$ be large (to be specified later) and suppose the points of $D(n)$ are $r$-colored. Consider the $2^n$ points on the line $L_0 = L([n])$. Let $S_0 \subseteq L_0 $ be the set of points having the ``most popular'' color $c_0$. Thus, $|S_0| \geq \frac{2^n}{r}$. Consider the {\it grid} $G_1$ (lower triangular part of a Cartesian product) defined by: \[ G_1 = \{(X,Y'): (X,Y) \in S_0, (X', Y') \in S_0 \, \mbox{with} \, X < X'\}. \] (See Figure \ref{grid}). \begin{figure*}[htbp] \begin{center} \includegraphics [scale=.3]{slide4.pdf} \end{center} \caption{A grid point} \label{grid} \end{figure*} \noindent Thus, \[ |G_1| \geq \binom{|S_0|}{2} > \frac{1}{4} |S_0|^2 \geq \frac{1}{4r^2} \cdot 4^n:=\alpha_1 4^n.
\] Let us call a line $L$ of dimension $t$ {\it small} if $t < \frac{n}{3}$ and {\it deficient} if \\$|L \cap G_1| \leq (\frac{\alpha_1}{4}) 2^t$.\\ Thus, the total number of points on small or deficient lines is at most \begin{eqnarray*} && \sum_{t<\frac{n}{3}} 2^t \left( n \atop t \right) 2^{n-t} + \sum_{t \geq \frac{n}{3}} \frac{\alpha_1}{4} 2^t \left( n \atop t \right) 2^{n-t}\\ &=& \frac{\alpha_1}{4} \sum_t 2^n \left( n \atop t \right) + (1 - \frac{\alpha_1}{4}) \sum_{t < \frac{n}{3}} 2^n \left( n \atop t \right)\\ & \leq & (\frac{\alpha_1}{4}) 4^n + (1 - \frac{\alpha_1}{4})(3.8^n)\\ &&\quad \quad \quad \quad (\mbox{since} \sum_{t<\frac{n}{3}} \left( n \atop t \right) < 1.9^n \,\mbox{follows easily by induction})\\[-.2in] & \leq & (\frac{\alpha_1}{2}) 4^n \end{eqnarray*} provided $\alpha_1 \geq 2 \cdot (.95)^n$.\\ Thus, if we discard these points, we still have at least $(\frac{\alpha_1}{2})4^n$ points remaining in $G_1$, and all these points are on ``good'' lines, i.e., lines that are neither small nor deficient. \noindent Let $L_1$ be such a good line, say of dimension $|I_1| = n_1 \geq \frac{n}{3}$. Let $S_1$ denote the set of points of $L_1 \cap G_1$ with the most popular color $c_1$. Therefore \[ |S_1| \geq (\frac{\alpha_1}{4r}) 2^{n_1}. \] \begin{figure*}[htbp] \begin{center} \includegraphics [scale=.3]{slide5.pdf} \end{center} \caption{$G_2 \subset G_1$} \label{G1_in_G2} \end{figure*} \noindent Now let $G_2$ denote the ``grid'' formed by $S_1$, i.e., \[ G_2 = \{(X,Y'): (X,Y) \in S_1, (X',Y') \in S_1, \, \mbox{with} \, X < X'\}. \] Observe that $G_2 \subset G_1$ (see Figure \ref{G1_in_G2}). Therefore, we have \[ |G_2| \geq \binom{|S_1|}{2} \geq (\frac{\alpha_1}{8r})^2\,4^{n_1} :=\alpha_2 4^{n_1}.
\] \noindent As before, let us classify a line $L$ of dimension $t$ as {\it small} if $t < \frac{n_1}{3}$, and as {\it deficient} if $|L \cap G_2| \leq (\frac{\alpha_2}{4}) 2^t.$ \noindent A similar calculation as before shows that if we remove from $G_2$ all the points on small or deficient lines, then at least $(\frac{\alpha_2}{2})4^{n_1}$ points will remain in $G_2$, provided $\alpha_2 \geq 2 \cdot (.95)^{n_1}.$ \noindent Let $L_2$ be a good line, say of dimension $n_2$, and let $S_2 \subseteq L_2 \cap G_2$ have the most popular color $c_2$, so that \[ |S_2| \geq (\frac{\alpha_2}{4r})2^{n_2}. \] Then, with $G_3$ defined to be the ``grid'' formed by $S_2$, we have $|G_3| \geq (\frac{\alpha_2}{8r})^2 4^{n_2}$, and so on. Note that $G_3 \subset G_2 \subset G_1$. \noindent We continue this process for $rm$ steps.\\ In general, we define \[ \alpha_{i+1} = (\frac{\alpha_i}{8r})^2, \quad 1 \leq i \leq rm -1 \] with $\alpha_1 = \frac{1}{4r^2}$. By construction, we have $n_{i+1} \geq \frac{n_i}{3}$ for all $i$. In addition, we will need to have $\alpha_i \geq 2 \cdot (.95)^{n_i}$ for all $i$ for the argument to be valid. In particular, this implies that in general \[ \alpha_k = \frac{1}{2^{2^{k+2}- 6} r^{2^{k+1} - 2}}. \] \noindent It is now straightforward to check that all the required inequalities are satisfied by choosing $n \geq n_0(r,m) = c \cdot 6^{rm}$ for a suitable absolute constant $c$. \noindent Hence, there must be $m$ indices $i_1 < i_2 < \ldots < i_m$ such that all the sets $S_{i_k}$ have the same color. \noindent These $m$ sets $S_{i_k}$ contain the desired monochromatic binary tree $B(m)$. \section{Some interpretations} \subsection{Self-crossing paths} As we stated at the beginning, we can think of $D(n)$ as the set of all the diagonals of the $n$-cube $Q^n$. Let us call a pair $\{x,\bar x\} = \{(x_1, \ldots, x_n),(\bar x_1, \ldots, \bar x_n)\} $ a {\bf main} diagonal of $Q^n$, where $\bar x_i = 1 - x_i$.
\noindent An affine $k$-subcube of $Q^n$ is defined to be a subset of $2^k$ points of $Q^n$ in which the coordinates indexed by some $k$-subset $I \subseteq [n] = \{1,2, \ldots, n\}$ take all $2^k$ possible values $0$ or $1$, while the remaining coordinates are held fixed. \noindent We will say that three connected diagonals of the form $\{x,y\}, \{y,z\}, \{z,w\}$ form a {\it self-crossing path}, denoted by $\ltimes$, if $\{x,y\}$ and $\{z,w\}$ are both main diagonals of the same subcube. \noindent {\bf Corollary 1}. In any $r$-coloring of the edges in $D(n)$, there is always a monochromatic self-crossing path $\ltimes$, provided $n > c \cdot 6^r$ (where $c$ is a suitable absolute constant). \noindent The same argument works for any subgraph $G$ of $D(n)$, provided that $G$ has enough edges and, for any pair of crossing main diagonals, $G$ has all the edges between the pair's endpoints. \subsection{Corners} The preceding techniques can be used to prove the following.\\ {\bf Theorem 3.} For every $r$, there exists $\delta = \delta(r)$ and $n_0 = n_0(r)$ with the following property:\\[.2in] If $A$ and $B$ are sets of real numbers with $|A| = |B| = n \geq n_0$ and $|A + B| \leq n^{1+ \delta}$, then any $r$-coloring of $A \times B$ contains a monochromatic ``corner'', i.e., a set of $3$ points of the form $(a,b), (a+d,b), (a,b+d)$ for some positive number $d.$ In fact, the argument shows that we can choose $\delta = \frac{1}{{2^{r+1}}}$. The calculation goes as follows: the Cartesian product $A\times B$ can be covered by $n^{1+\delta}$ lines of slope $-1$.
Choose the line with the most points from $A\times B,$ denoted by $L_0.$ There are at least $n^2/n^{1+\delta}$ points in $L_0\cap A\times B.$ Choose the set of points $S_1$ with the most popular color in $L_0\cap A\times B$, so that $|S_1|\geq n^{1-\delta}/r.$ As before, consider the grid $G_1$ defined by $S_1,$ and choose the slope -1 line, $L_2,$ which has the largest intersection with $G_1.$ Choose the set of points, $S_2,$ having the most popular color and repeat the process with $G_2,$ the grid defined by $S_2.$ We can't have more than $r$ iterations without having a monochromatic corner. Solving the simple recurrence $a_{n+1}=2a_n+1$ in the exponent, one can see that after $r$ steps the size of $S_r$ is at least $c_rn^{1-\delta(2^{r+1}-1)}.$ If this quantity is at least 2, then we have at least one more step and the monochromatic corner is unavoidable. The inequality \[ c_rn^{1-\delta(2^{r+1}-1)} \geq 2 \] can be rearranged into \[ n^{1-\delta{2^{r+1}}}\geq {2\over{c_rn^{\delta}}} . \] From this we see that choosing $\delta=2^{-r-1}$ guarantees that for large enough $n$ the inequality above is true, proving our statement. By iterating these techniques, one can show that the same hypotheses on $|A|$ and $|B|$ (with appropriate $\delta = \delta(r,m)$ and $n_0 = n_0(r,m)$) imply that if $A \times B$ is $r$-colored then each set contains a monochromatic translate of a large ``Hilbert cube'', i.e., a set of the form \[ H_m(a,a_1, \ldots, a_m) = \{a + \sum_{1 \leq i \leq m} \epsilon_i a_i \} \subset A, \] \[ H_m(b,a_1, \ldots, a_m) = \{b + \sum_{1 \leq i \leq m} \epsilon_i a_i \} \subset B \] where $\epsilon_i = 0$ or $1, \, 1 \leq i \leq m$. \subsection{Partial Hales-Jewett lines} {\bf Corollary 2.} For every $r$ there is an $n = n_0(r)\leq c \cdot 6^r$ with the following property.
For every $r$-coloring of $\{0,1,2,3\}^n$ with $n > n_0$, there is always a monochromatic set of $3$ points of the form: \begin{eqnarray*} (\ldots,a,\ldots,0,\ldots,b,\ldots,3,\ldots,0,\ldots,c,\ldots,3,\ldots,d,\ldots)\\ (\ldots,a,\ldots,1,\ldots,b,\ldots,2,\ldots,1,\ldots,c,\ldots,2,\ldots,d,\ldots)\\ (\ldots,a,\ldots,2,\ldots,b,\ldots,1,\ldots,2,\ldots,c,\ldots,1,\ldots,d,\ldots) \end{eqnarray*} In other words, every column is either {\it constant}, {\it increasing} from $0$, or {\it decreasing} from $3$.\\ {\bf Proof.} To each point $(x_1, x_2, \ldots, x_n)$ in $\{0,1,2,3\}^n$, we associate the point $\big( (a_1, a_2, \ldots, a_n), (b_1, b_2, \ldots, b_n) \big)$ in $\{0,1\}^n \times \{0,1\}^n$ by the following rule: \[ \begin{array}{cccc} x_k & \leftrightarrow & a_k & b_k \\ \hline 0 & \vline & 0 & 0 \\ 1 & \vline & 0 & 1 \\ 2 & \vline & 1 & 0 \\ 3 & \vline & 1 & 1 \end{array} \] Then it is not hard to verify that a monochromatic corner in $D(n)$ corresponds to a monochromatic set of $3$ points as described above, a structure which we might call a partial Hales-Jewett line. \medskip \noindent {\bf Corollary 3.} For every $r$ there is an $n = n_0(r)\leq c \cdot 6^r$ with the following property. For every $r$-coloring of $\{0,1,2\}^n$ with $n > n_0$, there is always a monochromatic set of $3$ points of the form: \begin{eqnarray*} (\ldots,a,\ldots,0,\ldots,b,\ldots,0,\ldots,0,\ldots,c,\ldots,0,\ldots,d,\ldots)\\ (\ldots,a,\ldots,1,\ldots,b,\ldots,2,\ldots,1,\ldots,c,\ldots,2,\ldots,d,\ldots)\\ (\ldots,a,\ldots,2,\ldots,b,\ldots,1,\ldots,2,\ldots,c,\ldots,1,\ldots,d,\ldots) \end{eqnarray*} {\bf Proof.} Map the points $(a_1, a_2, \ldots, a_n) \in \{0,1,2,3\}^n$ to points $(b_1, b_2, \ldots, b_n) \in \{0,1,2\}^n$ by:\\[-.3in] \begin{center} $a_i = 0$ or $3 \Rightarrow b_i = 0$, $a_i = 1 \Rightarrow b_i = 1$, $a_i = 2 \Rightarrow b_i = 2$ \end{center} The theorem now follows by applying Corollary 2.
\hfill$\square$ \subsection{3-term geometric progressions.} The simplest non-trivial case of van der Waerden's theorem \cite{vW} states that for any natural number $r$, there is a number $W(r)$ such that for any $r$-coloring of the first $W(r)$ natural numbers there is a monochromatic three-term arithmetic progression. Finding the exact value of $W(r)$ for large $r$ is a hopelessly difficult task. The best upper bound follows from a recent result of Bourgain \cite{Bo}: \[W(r)\leq ce^{r^{3/2}}.\] One can ask a similar question for geometric progressions: what is the maximum number of colors, denoted by $r(N),$ such that for any $r(N)$-coloring of the first $N$ natural numbers there is a monochromatic geometric progression? Applying Bourgain's bound to the exponents of the geometric progression $\{2^i\}_{i=0}^{\infty}$ shows that $r(N)\geq c\log\log{N}.$ Using our method we can obtain the same bound, without applying Bourgain's deep result. \medskip \noindent Observe that if we associate the point $(a_1, a_2,\ldots,a_k, \ldots, a_n)$ with the integer $\prod_k p_k^{a_k}$, where $p_i$ denotes the $i^{th}$ prime, then the points \begin{eqnarray*} (\ldots,a,\ldots,0,\ldots,b,\ldots,3,\ldots,0,\ldots,c,\ldots,3,\ldots,d,\ldots)\\ (\ldots,a,\ldots,1,\ldots,b,\ldots,2,\ldots,1,\ldots,c,\ldots,2,\ldots,d,\ldots)\\ (\ldots,a,\ldots,2,\ldots,b,\ldots,1,\ldots,2,\ldots,c,\ldots,1,\ldots,d,\ldots) \end{eqnarray*} correspond to a $3$-term geometric progression. Our bound from Corollary 2, combined with an estimate for the product of the first $n$ primes, implies that $r(N)\geq c\log\log{N}.$ \section{Concluding remarks} It would be interesting to know if we can ``complete the square'' for some of these results. For example, one can use these methods to show that if the points of $[N] \times [N]$ are colored with at most $c \log \log N$ colors, then there is always a monochromatic ``corner'' formed, i.e., $3$ points $(a,b), (a',b), (a,b')$ with $a' + b = a + b'$.
By projection, this gives a $3$-term arithmetic progression (see \cite{gs}). \noindent Is it the case that with these bounds (or even better ones), we can guarantee the $4^{th}$ point $(a',b')$ to be monochromatic as well? \noindent Similarly, if the diagonals of an $n$-cube are $r$-colored with $r < c \log \log n$, is it true that a monochromatic $\bowtie$ must be formed, i.e., a self-crossing $4$-cycle (which is a self-crossing path with one more edge added)? \noindent Let $\boxtimes$ denote the structure consisting of the set of $6$ edges spanned by $4$ {\bf coplanar} vertices of an $n$-cube. In this case, the occurrence of a monochromatic $\boxtimes$ is guaranteed once $n \geq N_0$, where $N_0$ is a {\bf very} large (but well defined) integer, sometimes referred to as Graham's number (see \cite{wiki}). The best lower bound currently available for $N_0$ is $11$ (due to G. Exoo \cite{exoo}). One can also ask for estimates for the density analogs for the preceding results. For example, Shkredov has shown the following:\\ {\bf Theorem} (\cite{shkredov}). Let $\delta > 0$ and $N \gg \exp \exp(\delta^{-c})$, where $c > 0$ is an absolute constant. Let $A$ be a subset of $\{1, 2,\ldots, N\}^2$ of cardinality at least $\delta N^2$. Then $A$ contains a corner. \noindent It would be interesting to know if the same hypothesis implies that $A$ contains the $4^{th}$ point of the corner, for example.
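The prime-exponent correspondence used in the geometric-progression argument above is easy to check numerically. The sketch below (the function name, the choice $n=4$, and the illustrative constant columns $a=1$ and $b=2$ are ours) maps the three displayed points of Corollary 2 to integers and lets one verify that they form a $3$-term geometric progression:

```python
def to_integer(point, primes):
    # Map (a_1, ..., a_n) to prod_k p_k^(a_k).
    n = 1
    for p, e in zip(primes, point):
        n *= p ** e
    return n

# First four primes; the columns follow the pattern of Corollary 2:
# a constant column (a = 1), an increasing column 0,1,2, a constant
# column (b = 2), and a decreasing column 3,2,1.
primes = [2, 3, 5, 7]
p1 = (1, 0, 2, 3)
p2 = (1, 1, 2, 2)
p3 = (1, 2, 2, 1)
n1, n2, n3 = (to_integer(p, primes) for p in (p1, p2, p3))
```

Since the exponent vectors form componentwise arithmetic progressions, the resulting integers satisfy $n_2^2 = n_1 n_3$, i.e., they are in geometric progression.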
\section{Introduction} Lattice techniques have proved remarkably useful in the quantization of usual gauge theories. This raised the hope that they may also prove useful in the quantization of gravity. A major difference however is that most theories of gravity of interest are invariant under diffeomorphisms and the introduction of a discrete structure breaks diffeomorphism invariance. One of the appealing features of lattice gauge theories is therefore lost in this case, one breaks the symmetry of the theory of interest. The situation gets further compounded in the case of canonical general relativity, since there one also breaks four dimensional covariance into a $3+1$ dimensional split. Spatial diffeomorphisms get implemented via a constraint that has a natural geometrical action and the usual algebra of diffeomorphisms is implemented via the constraint algebra. But the remaining space-time diffeomorphism gets implemented through the complicated Hamiltonian constraint, that has a challenging algebra with spatial diffeomorphisms. In particular the algebra of constraints has structure functions. If we call $C(\vec{N})$ the diffeomorphism constraint smeared by a test vector field (shift) $\vec{N}$ and $H(N)$ the Hamiltonian constraint smeared by a scalar lapse $N$, the constraint algebra is, \begin{eqnarray} \left\{C(\vec{N}),C(\vec{M})\right\}=C([\vec{N},\vec{M}])\\ \left\{C(\vec{N}),H(M)\right\}=H({\cal L}_{\vec{N}}M)\\ \left\{H({N}),H(M)\right\}=C(\vec{K}(q)), \end{eqnarray} where the vector $K^a=q^{ab}(N\partial_a M-M\partial_a N)$ and $q^{ab}$ is the spatial metric. The last Poisson bracket therefore involves structure functions depending on the canonical variables on the right hand side. The algebra of constraints poses important complications in the context of loop quantum gravity when one wishes to implement it as an operator algebra at a quantum level (see \cite{ThiemannGiesel} for a lengthier discussion). 
In particular, if one chooses spin network states with the usual Ashtekar-Lewandowski \cite{AsLe} measure, they form a non-separable Hilbert space. In it, diffeomorphisms are not implemented in a weakly continuous fashion, i.e. finite diffeomorphisms can be represented but infinitesimal ones cannot. This implies that in loop quantum gravity one treats the spatial and temporal diffeomorphisms very asymmetrically. Whereas invariance under spatial diffeomorphisms is implemented via a group averaging procedure \cite{groupaveraging}, invariance under the remaining space-time diffeomorphisms is to be implemented by solving a quantum operatorial equation corresponding to the Hamiltonian constraint. Since the Poisson bracket of two Hamiltonian constraints involves the infinitesimal generator of diffeomorphisms, which is not well defined as a quantum operator, one cannot expect to implement the Poisson algebra at an operatorial level in the quantum theory, at least in the kinematical Hilbert space. A symmetric treatment of the diffeomorphism and Hamiltonian constraints requires developing a technique that allows one to implement the generators of spatial diffeomorphisms as operators in the loop representation. One could attempt to treat the diffeomorphism and Hamiltonian constraints on the same footing, for instance by lattice regularizing them. Unfortunately, such discretized versions of the constraints are not first class. If one treats them properly with the Dirac procedure, the resulting theory is vastly different in symmetries and even in the number of degrees of freedom from what one expects to have in the continuum theory. Therefore there is little chance that one could define a continuum theory as a suitable limit of the constructed lattice theories. These problems have led to the consideration of extensions of the Dirac procedure that could better accommodate this particular problem with the constraint algebra.
One such approach is the ``master constraint'' programme of Thiemann and collaborators \cite{master}. Another approach that we have been studying in the last few years is that of ``uniform discretizations'' \cite{uniform}. Both approaches have some elements in common. Uniform discretizations are discrete versions of a constrained theory in which the discretized forms of the constraints are quantities whose values are under control throughout the system's evolution. Notice that this would not be the case, for instance, if one simply takes a constrained theory and discretizes it. Initial data on which the discrete version of the constraints vanishes will evolve into data with non-vanishing values of the discrete constraints, without any control on the final value. This situation is well known, for instance, in numerical relativity. Uniform discretizations are designed in such a way that the discrete constraints are kept under control upon evolution and that one can take appropriate limits in the initial data such that one can satisfy the constraints to an arbitrary (and controlled) degree of accuracy. This therefore guarantees the existence of a well defined continuum limit at the level of the classical theory. It has been shown \cite{discreteexamples} that the uniform discretization technique is classically equivalent to the Dirac procedure when the constraints are first class. For second class constraints, like the ones that arise when one discretizes continuum systems with first class constraints, the uniform discretization technique is radically different from the Dirac procedure, yielding a dynamical evolution that recovers in the continuum limit the continuum theory one started with.
It is known \cite{discreteexamples} that there are models in which the continuum limit cannot be achieved and one is left with a non-zero minimum value of the expectation value of the sum squared of the constraints. It is therefore of interest to show that in examples of growing complexity and of increasing similarity to general relativity one can indeed define a continuum quantum theory with the desired symmetries by applying the uniform discretization procedure. The purpose of this paper is to discuss one such model. We will consider the quantization via uniform discretizations of a $1+1$ dimensional model with diffeomorphism symmetry and we will show that the symmetry is recovered at the quantum level correctly. This raises the hopes of having a theory where all the constraints are treated on an equal footing. The organization of this paper is as follows. In section II we discuss the model we will consider. In section III we discretize the model. In section IV we review the uniform discretization procedure and how it departs from the traditional Dirac approach. Section V discusses the quantization using uniform discretizations and how one recovers the correct continuum limit. We conclude with a discussion. \section{The model} We would like to construct a model by considering spherically symmetric gravity and ignoring the Hamiltonian constraint. This is analogous to building a ``Husain--Kuchar'' \cite{husainkuchar} version of spherically symmetric gravity. It is known that these models correspond to degenerate space-times when translated in terms of the metric variables. We refer the reader to our previous work on spherically symmetric gravity \cite{spherical} for the setup of the model in terms of Ashtekar's new variables. Just as a recap, the model has two canonical pairs $K_x, E^x$ and $K_\varphi,E^\varphi$.
The relation to the more traditional metric canonical variables is, \begin{eqnarray} g_{xx}&=& \frac{(E^\varphi)^2}{|E^x|},\qquad g_{\theta\theta} = |E^x|,\\ K_{xx}&=&-{\rm sign}(E^x) \frac{(E^\varphi)^2}{\sqrt{|E^x|}}K_x,\qquad K_{\theta\theta} = -\sqrt{|E^x|} {K_\varphi} \end{eqnarray} and we have set the Immirzi parameter to one for simplicity, since it does not play a role in this analysis. The Lagrangian for spherically symmetric gravity ignoring the Hamiltonian constraint is, \begin{equation} L = \int dx E^x \dot{K}_x+E^\varphi \dot{K}_\varphi +N \left((E^x)'K_x - E^\varphi (K_\varphi)'\right) \end{equation} with $N$ a Lagrange multiplier (the radial component of the shift vector). The equations of motion are \begin{eqnarray} \dot{K}_x-\left(NK_x\right)' &=&0,\\ \dot{E}^x-N\left(E^x\right)' &=&0,\\ \dot{K}_\varphi-NK'_\varphi &=&0,\\ \dot{E}^\varphi-\left(NE^\varphi\right)' &=&0. \end{eqnarray} The theory has one constraint, which is the remaining diffeomorphism constraint in the radial $(x)$ direction, $\phi= -\left(E^x\right)'K_x+E^\varphi K'_\varphi$, which we will write smeared as $\phi(N)=\int dx N \phi$. The constraint generates diffeomorphisms of the fields, with $K_\varphi$ and $E^x$ behaving as scalars and $K_x$ and $E^\varphi$ as densities of weight one, \begin{eqnarray} \delta K_\varphi &=& \left\{K_\varphi,\phi(N)\right\}=N K'_\varphi,\\ \delta K_x &=& \left\{K_x,\phi(N)\right\}=\left(N K_x\right)',\\ \delta E^\varphi &=& \left\{E^\varphi,\phi(N)\right\}=\left(N E^\varphi\right)',\\ \delta E^x &=& \left\{E^x,\phi(N)\right\}=N \left(E^x\right)'. \end{eqnarray} The constraint has the usual algebra of diffeomorphisms, \begin{equation} \left\{\phi(N),\phi(M)\right\}=\phi\left(N M'-M N'\right). \end{equation} Observables are integrals of densities of weight one constructed with the fields, for example, $O=\int dx f(E^x,K_\varphi)K_x$ with $f$ a function.
One then has \begin{equation} \left\{O, \phi(N)\right\}=\int dx \left[\frac{\partial f}{\partial E^x} N \left(E^x\right)' +\frac{\partial f}{\partial K_\varphi} N K'_\varphi+ \left(NK_x\right)' f\right] = \int dx \partial_x \left(f NK_x\right)=0, \end{equation} if one considers a compact spatial manifold, $S^{1}$, which we will do throughout this paper. (This may not make a lot of sense if one is thinking of the model as a reduction of $3+1$ spherical symmetry, but we are just avoiding including boundary terms, which are straightforward to treat in the spherical case, see \cite{spherical}, in order to simplify the discussion of diffeomorphism invariance). \section{Discretization} We now proceed to discretize the model. The spatial direction $x$ is discretized into points $x_i$ such that $x_{i+1}-x_i=\epsilon_i$ and the distances are smaller than a bound $d(\epsilon_i)<d_\epsilon$ when measured in some fiducial metric. To simplify notation, from now on we will assume the points are equally spaced and drop the suffix $i$ on $\epsilon$, but the analysis can be straightforwardly extended to the case with variable $\epsilon_i$. The variables of the model become $K_{x,i}=K_x(x_i)$, $K_{\varphi,i}=K_\varphi(x_i)$ and $E^x_i=\epsilon E^x(x_i)$ and $E^\varphi_i=\epsilon E^\varphi(x_i)$. The constraint is, \begin{equation} \phi_i =E^\varphi_i\left(K_{\varphi,i+1}-K_{\varphi,i}\right) -K_{x,i} \left(E^x_{i+1}-E^x_i\right). \end{equation} The constraint algebra is not first class, i.e., \begin{eqnarray} \left\{\phi_i,\phi_j\right\}&=&-E^\varphi_{i-1}\left(K_{\varphi,i+1}-K_{\varphi,i} \right) \delta_{i,j+1}+E^\varphi_{j-1}\left(K_{\varphi,j+1}-K_{\varphi,j} \right)\delta_{j,i+1}\nonumber\\ &&+K_{x,{i-1}}\left(E^{x}_{i+1}-E^x_i \right) \delta_{i,j+1}-K_{x,j-1}\left(E^x_{j+1}-E^x_j \right)\delta_{j,i+1} \end{eqnarray} which does not reproduce the constraint.
What one has is a ``classical anomaly'' of the form $\left(E^\varphi_{i+1}-E^\varphi_{i}\right) \left(K_{\varphi,i}-K_{\varphi,i-1}\right) -\left(E^x_{i+1}-E^x_i\right)\left(K_{x,i}-K_{x,i-1}\right)$. These terms would tend to zero if one takes the separation $\epsilon$ to zero and the variables behave continuously in such a limit. So if one were to simply quantize the discrete model, one would run into trouble since one would be quantizing a classical theory with second class constraints. We will expand more on the problems one faces in the next section. In this paper we would like to show that in spite of this problem of the classical theory, which implies that the discrete theory loses diffeomorphism invariance, if one follows the uniform discretization approach to quantization the diffeomorphism invariance is recovered in the limit $\epsilon\to 0$ both at the classical and quantum level. In the uniform discretization approach one constructs a ``master constraint'' ${\mathbb H}$ by considering the sum of the discretized constraints squared. One then promotes the resulting quantity to a quantum operator and seeks the eigenstates of $\hat{\mathbb H}$ with minimum eigenvalue. In the full theory the quantity ${\mathbb H}$ would be constructed from the diffeomorphism constraints $\phi_a$ as, \begin{equation} {\mathbb H} =\frac{1}{2} \int dx \phi_a \phi_b \frac{g^{ab}}{\sqrt{g}}, \end{equation} which motivates in our example the choice, \begin{equation} {\mathbb H} = \frac{1}{2}\int dx \phi \phi \frac{\sqrt{E^x}}{\left(E^\varphi\right)^3}, \end{equation} or, in the discretized theory, \begin{equation} {\mathbb H}^\epsilon = \frac{1}{2}\sum_{i=0}^N \phi_i \phi_i \frac{\sqrt{E^x_i}}{\left(E^\varphi_i\right)^3} \epsilon^{3/2}. \end{equation} To understand better how to promote these quantities to quantum operators, it is best to start with the constraint itself.
Let us go back for a second to the continuum notation, and write, \begin{equation} \phi^\epsilon(N)= \sum_{j=0}^N \epsilon N(x_j) \left\{ -\frac{\left[E^x(x_{j+1})-E^x(x_j)\right]}{\epsilon} K_x(x_j)+ \frac{1}{2}\left[E^\varphi(x_{j})+E^\varphi(x_{j+1})\right] \frac{\left(K_\varphi(x_{j+1})-K_\varphi(x_j)\right)}{\epsilon}\right\} , \end{equation} which would reproduce the constraint $\phi(N)=\lim_{\epsilon\to0} \phi^\epsilon(N)$ though we see that the explicit dependence on $\epsilon$ drops out. We have chosen to regularize $E^\varphi$ at the midpoint in order to simplify the action of the resulting quantum operator as we will see later. When one is to promote these quantities to quantum operators, one needs to remember that although the $E$ variables promote readily to quantum operators in the loop representation, the $K$'s need to be written in exponentiated form. To this aim, we write, classically, \begin{equation} \phi^\epsilon(N)= \sum_{j=0}^N \frac{N(x_j)}{2i\epsilon}\left\{ \exp\left(-2i\epsilon{\left[E^x(x_{j+1})-E^x(x_j)\right]} K_x(x_j)+ i\epsilon{\left[E^\varphi(x_{j})+E^\varphi(x_{j+1})\right]} \left(K_\varphi(x_{j+1})-K_\varphi(x_j)\right)\right)-1\right\}, \end{equation} which again would reproduce the constraint in the continuum limit. Let us rewrite it in terms of the discrete variables, \begin{equation} \phi^\epsilon(N)= \sum_{j=0}^N \frac{N_j}{2i\epsilon}\left\{ \exp\left[i\left({-2\left[E^x_{j+1}-E^x_j\right]} K_{x,j}+ {\left[E^\varphi_{j}+E^\varphi_{j+1}\right]} \left(K_{\varphi,j+1}-K_{\varphi,j}\right)\right)\right]-1\right\}. 
\end{equation} For later use, it is convenient to rewrite $\phi_j^\epsilon = (D_j-1)/(2i\epsilon)$ and then one has that, \begin{equation} {\mathbb H}^\epsilon = \sum_{j=0}^N \left(D_j-1\right)\left(D_j-1\right)^* \epsilon^{-1/2} \frac{\sqrt{E^x_j}} {\left(E^\varphi_j\right)^3}.\label{27} \end{equation} We dropped the $\epsilon$ in $D$ since it does not depend on it explicitly (though it does implicitly, through the dependence on $E^x$), and we dropped an irrelevant global factor of $1/8$ to simplify future expressions. \section{Uniform discretizations} Before quantizing, we will study the classical theory using uniform discretizations and we will verify that one gets in the continuum limit a theory with diffeomorphism constraints that are first class. The continuum theory can be treated with the Dirac technique and has first class constraints that generate diffeomorphisms on the dynamical variables. However, the discrete theory, when treated with the Dirac technique, has second class constraints and does not have the gauge invariances of the continuum theory. The number of degrees of freedom changes and the continuum limit generically does not recover the theory one started with. As mentioned before, it has been shown \cite{discreteexamples} that the uniform discretization technique is equivalent to the Dirac procedure when the constraints are first class. For second class constraints, like the ones that appear when one discretizes continuum systems with first class constraints, the uniform discretization technique is radically different from the Dirac procedure, yielding a dynamical evolution that recovers in the continuum limit the continuum theory one started with. Let us review how this works. We start with a classical canonical system with $N$ configuration variables, parameterized by a continuous parameter $\alpha$ such that $\alpha\to 0$ is the ``continuum limit''. We will assume the theory in the continuum has $M$ constraints $\phi_j = \lim_{\alpha\to 0} \phi_j^\alpha$.
In the discrete theory we will assume the constraints generically fail to be first class, \begin{equation} \left\{\phi^\alpha_j,\phi^\alpha_k\right\} = \sum_{m=1}^M C^\alpha_{jk}{}^m \phi^\alpha_m+ A^\alpha_{jk}, \end{equation} where the failure is quantified by $A^\alpha_{jk}$. We assume that in the continuum limit one has $\lim_{\alpha\to 0} A^\alpha_{jk}=0$ and that the quantities $C^\alpha_{jk}{}^m$ become in the limit the structure functions of the (first class) constraint algebra of the continuum theory $C_{jk}{}^m=\lim_{\alpha\to 0} C^\alpha_{jk}{}^m$, so that, \begin{equation} \left\{\phi_j,\phi_k\right\} =\sum_{m=1}^M C_{jk}{}^m \phi_m. \end{equation} If one were to insist on treating the above discrete theory using the Dirac procedure, that is, taking the constraints $\phi^\alpha_j=0$ and a total Hamiltonian $H_{T}=\sum_{j=1}^M C_j \phi^\alpha_j$ with $C_j$ functions of the canonical variables, one immediately finds restrictions on the $C_j{}'s$ of the form $\sum_{j=1}^M C_j A^\alpha_{jk}=0$ in order to preserve the constraints upon evolution. Only in the continuum $\alpha\to 0$ limit are the $C_j$ free functions and one has in the theory $2N-2M$ observables. Notice that away from the continuum limit the number of observables is generically larger and could even reach $2N$ if the matrix $A^\alpha_{jk}$ is invertible. Therefore one cannot view the theory in the $\alpha\to 0$ limit as a limit of the theories for finite values of $\alpha$, since they do not even have the same number of observables and have a completely different evolution. The uniform discretizations, on the other hand, lead to discrete theories that have the same number of observables and an evolution resembling those of the continuum theory. One can then claim that the discrete theories approximate the continuum theory and the latter arises as the continuum limit of them. 
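As a concrete aside, the failure of the discrete constraints of the previous section to close can be verified numerically. The sketch below (the lattice size, variable names, and sample point are our own choices) evaluates $\{\phi_1,\phi_2\}$ with the canonical brackets $\{K_{x,i},E^x_j\}=\{K_{\varphi,i},E^\varphi_j\}=\delta_{ij}$, using central differences, which are exact up to rounding here because each constraint is linear in every single variable:

```python
import copy

N_SITES = 5
CANONICAL_PAIRS = [("Kx", "Ex"), ("Kphi", "Ephi")]

def phi(state, i):
    """Discrete constraint phi_i = E^phi_i (K_phi,i+1 - K_phi,i)
    - K_x,i (E^x_i+1 - E^x_i) of the discretization section."""
    return (state["Ephi"][i] * (state["Kphi"][i + 1] - state["Kphi"][i])
            - state["Kx"][i] * (state["Ex"][i + 1] - state["Ex"][i]))

def partial(f, state, name, k, h=1e-4):
    """Central difference; exact (up to rounding) since f is linear
    in each single variable."""
    up, dn = copy.deepcopy(state), copy.deepcopy(state)
    up[name][k] += h
    dn[name][k] -= h
    return (f(up) - f(dn)) / (2 * h)

def poisson(f, g, state):
    """{f, g} with {K_i, E_j} = delta_ij for each canonical pair."""
    total = 0.0
    for kn, en in CANONICAL_PAIRS:
        for k in range(N_SITES):
            total += (partial(f, state, kn, k) * partial(g, state, en, k)
                      - partial(f, state, en, k) * partial(g, state, kn, k))
    return total
```

At a generic sample point one recovers $E^\varphi_1(K_{\varphi,3}-K_{\varphi,2})-K_{x,1}(E^x_3-E^x_2)$ for $\{\phi_1,\phi_2\}$, in agreement with the bracket displayed in the discretization section, while constraints at non-neighboring sites commute.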
The treatment of the system in question would start with the construction of the ``master constraint'' \begin{equation} {\mathbb H}^\alpha=\frac{1}{2} \sum_{j=1}^M \left(\phi^\alpha_j\right)^2 \end{equation} and the definition of a discrete time evolution through ${\mathbb H}$. In particular, this implies a discrete time evolution from instant $n$ to $n+1$ for the constraints of the form, \begin{eqnarray} \phi^\alpha_j(n+1) &=& \phi^\alpha_j(n)+ \label{31} \left\{ \phi^\alpha_j(n),{\mathbb H}^\alpha\right\}+ \frac{1}{2} \left\{\left\{ \phi^\alpha_j(n),{\mathbb H}^\alpha\right\}, {\mathbb H}^\alpha\right\}+ \ldots\\ &=&\phi^\alpha_j(n)+ \sum_{i,k=1}^M C^\alpha_{ji}{}^k \phi^\alpha_k(n) \phi^\alpha_i(n)+ \sum_{i=1}^M A^\alpha_{ji} \phi^\alpha_i(n)+\ldots\label{evolution} \end{eqnarray} This evolution implies that ${\mathbb H}^\alpha$ is a constant of the motion, which for convenience we denote via a parameter $\delta$ such that ${\mathbb H}^\alpha=\delta^2/2$. The preservation upon evolution of ${\mathbb H}^\alpha$ implies that the constraints remain bounded, $|\phi^\alpha_j|\leq \delta$. If one now divides by $\delta$ and defines the quantities $\lambda^\alpha_i\equiv \phi^\alpha_i/\delta$ one can rewrite (\ref{evolution}) as, \begin{equation} \frac{\phi^\alpha_j(n+1)-\phi^\alpha_j(n)}{\delta}= \sum_{i,k=1}^M C^\alpha_{ji}{}^k \phi^\alpha_k(n) \lambda^\alpha_i(n)+ \sum_{i=1}^M A^\alpha_{ji} \lambda^\alpha_i(n)+\ldots \end{equation} Notice that the $\lambda^\alpha_i$ remain finite when one takes the limits $\delta\to 0$ and $\alpha\to 0$. If one now considers the limit of small $\delta$'s, one notes that the first term on the right is of order $\delta$, the second one goes to zero with $\alpha\to 0$, at least as fast as $\alpha$, and the rest of the terms are of higher orders in $\delta,\alpha$.
If one now introduces a continuum variable $\tau$ such that $\tau=n\delta+\tau_0$, with $\phi^\alpha_j(\tau)\equiv\phi^\alpha_j(n)$ and $\phi^\alpha_j(\tau+\delta)\equiv\phi^\alpha_j(n+1)$, one can take the limits $\alpha\to 0$ and $\delta\to 0$; irrespective of the order of the limits, one gets that the evolution equations (\ref{evolution}) for the constraints become those of the continuum theory, i.e., \begin{equation} \dot{\phi}_j \equiv \lim_{\alpha,\delta\to 0} \frac{\phi^\alpha_j(\tau+\delta)-\phi^\alpha_j(\tau)}{\delta}= \sum_{i,k=1}^M C_{ji}{}^k \phi_k \lambda_i \end{equation} where the $\lambda_i$ become the (freely specifiable) Lagrange multipliers of the continuum theory. At this point the reader may be puzzled, since the $\lambda$'s are defined as limits of those of the discrete theory and therefore do not appear to be free. However, one has to recall that the $\lambda$'s in the discrete theory are determined by the values of the constraints evaluated on the initial data, and these can be chosen arbitrarily by modifying the initial data.
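A toy example (entirely ours, not part of the model) may help visualize why ${\mathbb H}^\alpha$ is conserved and the constraints stay bounded. Take two constraints with a constant anomaly $\{\phi_1,\phi_2\}=-\alpha$, e.g. $\phi_1=p_1$ and $\phi_2=p_2+\alpha q_1$ for a particle in two dimensions. The flow generated by ${\mathbb H}=(\phi_1^2+\phi_2^2)/2$ obeys $\dot\phi_1=\{\phi_1,{\mathbb H}\}=-\alpha\phi_2$ and $\dot\phi_2=\{\phi_2,{\mathbb H}\}=\alpha\phi_1$, i.e., it rigidly rotates the pair, so ${\mathbb H}$ is exactly constant and $|\phi_i|\leq\delta=\sqrt{2{\mathbb H}}$ at every discrete time step:

```python
import math

def evolve_constraints(phi1, phi2, alpha, n_steps):
    """Discrete time evolution of a toy constraint pair with anomaly
    {phi1, phi2} = -alpha under the flow of H = (phi1**2 + phi2**2)/2.
    The flow equations d(phi1)/ds = -alpha*phi2, d(phi2)/ds = alpha*phi1
    rotate (phi1, phi2) by the angle alpha per unit of evolution
    parameter; one unit per step here."""
    c, s = math.cos(alpha), math.sin(alpha)
    history = [(phi1, phi2)]
    for _ in range(n_steps):
        phi1, phi2 = c * phi1 - s * phi2, s * phi1 + c * phi2
        history.append((phi1, phi2))
    return history
```

The anomaly only reshuffles the constraints among themselves; it cannot pump ${\mathbb H}$, which is the mechanism the uniform discretization exploits.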
If one considers the limit $\delta\to 0$ for a finite value of $\alpha$ (``continuous in time, discrete in space'') and considers the evolution of a function of phase space $O$, one has that, \begin{equation} \dot{O}=\left\{O,{\mathbb H}^\alpha\right\}=\sum_{i=1}^M\left\{O,\phi^\alpha_i\right\}\lambda^\alpha_i+\sum_{i,j=1}^M \left\{O,\phi^\alpha_i\right\}A^\alpha_{ij} \lambda^\alpha_j +\sum_{i,j,k=1}^M\left\{O,\phi^\alpha_i\right\} A^\alpha_{ij}A^\alpha_{jk} \lambda^\alpha_k+\ldots \end{equation} The necessary and sufficient condition for $O$ to be a constant of the motion (that is, $\dot{O}=0$) is that \begin{equation} \left\{O,\phi^\alpha_i\right\}=\sum_{j=1}^M C_{ij} \phi^\alpha_j+B^\alpha_i, \end{equation} with $B^\alpha_i$ a vector, perhaps vanishing, that is annihilated by the matrix, \begin{equation} \Lambda^\alpha_{ij} = \delta_{ij} + A^\alpha_{ij}+ \sum_{k=1}^MA^\alpha_{ik}A^\alpha_{kj}+\ldots+\sum_{k_1=1,\ldots,k_s=1}^M A^\alpha_{i,k_1}\cdots A^\alpha_{k_s,j}+\ldots \end{equation} Up to now we have assumed the $\lambda^\alpha_i$ arbitrary and not necessarily satisfying $\sum_{j=1}^M A^\alpha_{ij} \lambda^\alpha_j =0$. It is clear that $\lim_{\alpha\to 0} \Lambda^\alpha_{ij}=\delta_{ij}$ and therefore $\lim_{\alpha\to 0} B^\alpha_i = 0$, which implies that conserved quantities in the discrete theory yield in the limit $\alpha\to 0$ the observables of the continuum theory. Since the $\lambda_i$'s are free, the theory with continuous time is not the one that would result naively from applying the Dirac procedure, since in the latter the Lagrange multipliers are restricted by $\sum_{j=1}^M A^\alpha_{ij} \lambda^\alpha_j=0$ and therefore the theory admits more observables than the $2N-2M$ of the continuum theory. That is, if one takes the ``continuum in time'' limit first, the discrete theory has a dynamics that differs from the usual one unless $A^\alpha_{ij} \phi^\alpha_i(n)=0$ and one is really treating two different theories. At this point it would be good to clarify a bit the notation.
The above discussion has been for a mechanical system with $M$ configuration degrees of freedom. When one discretizes a field theory with $M$ configuration degrees of freedom on a lattice with $N$ points one ends up with a mechanical system that has $M\times N$ degrees of freedom. An example of such a system would be the diffeomorphism constraints of general relativity in $3+1$ dimensions when discretized on a uniform lattice of spacing $\alpha$ \cite{rentelnsmolin}. Of course, it is not clear at this point if such a system could be completely treated with our technique up to the last consequences, we just mention it here as an example of the type of system one would like to treat. The above discussion extends immediately to systems of this kind, only the bookkeeping has to be improved a bit. If we consider a parameter $\alpha(N)=1/N$, such that the continuum limit is achieved in $N\to\infty$ the classical continuum constraints can be thought of as limits \begin{equation} \phi_j(x)= \lim_{N\to\infty} \phi^{\alpha(N)}_{j,i(x,N)} \end{equation} where $i(x,N)$ is such that the point $x$ in the continuum lies between $i(x,N)$ and $i(x,N)+1$ on the lattice for every $N$. We are assuming a one dimensional lattice. Similar bookkeepings can be set up in higher dimensional cases. Just like we did in the mechanical system we can define \begin{equation} \left\{ \phi^{\alpha(N)}_{j,i},\phi^{\alpha(N)}_{k,i\pm 1}\right\} =\sum_{l,m=1}^MC^{\alpha(N)}_{j,i,k,i\pm1}{}^{lm}\phi^{\alpha(N)}_{l,m} +A^{\alpha(N)}_{j,i,k,i\pm1}, \end{equation} (where we have assumed that for the sites different from $i\pm 1$ on the lattice the Poisson bracket vanishes, the generalization to other cases is immediate) and one has that \begin{equation} \lim_{N\to \infty} A^{\alpha(N)}_{j,i,k,i\pm 1}=0. 
\end{equation} If one takes the spatial limit $\alpha\to 0$ first, one has a theory with discrete time and continuous space and with first class constraints, and we know that in that case the uniform discretization procedure agrees with the Dirac quantization. If one has more than one spatial dimension to discretize, the situation becomes more involved, since the continuum limit can be achieved with lattices of different topologies and connectivities. Once one has chosen a given topology and connectivity for the lattice, the continuum limit will only produce spin networks of connectivities compatible with such lattices. For instance, if one takes a ``square'' lattice in terms of connectivity in two spatial dimensions, one would produce at most spin networks in the continuum with four valent vertices. If one takes a lattice that resembles a honeycomb with triangular plaquettes one would produce six valent vertices, etc. How to achieve the continuum limit in theories with more than one spatial dimension clearly deserves further study. In addition to this, following the uniform discretization approach one does not need to modify the discrete constraint algebra, since it satisfies $\lim_{N\to\infty} \left\{\phi_i,\phi_j\right\}\sim 0$, and all the observables of the continuum theory arise by taking the continuum limit of the constants of the motion of the discrete theory. The encouraging fact that we recover the continuum theory classically in the limit is what raises hopes that a similar technique will also work at the quantum level. \section{Quantization} To proceed to quantize the model, we need to consider the master constraint given in equation (\ref{27}), \begin{equation} {\mathbb H}^\epsilon = \sum_{j=0}^N \left(D_j-1\right)\left(D_j-1\right)^* \epsilon^{-1/2} \frac{\sqrt{E^x_j}} {\left(E^\varphi_j\right)^3}, \end{equation} and quantize it.
The quantization of this expression will require an appropriate ordering of the exponential that appears in $D_j$, putting the $K$'s to the left of the $E$'s, as in usual normal ordering. One would then have, \begin{equation} \hat{D}_j = :\exp i\left({-2\left[\hat{E}^x_{j+1}-\hat{E}^x_j\right]} \hat{K}_{x,j}+ {\left[\hat{E}^\varphi_{j}+\hat{E}^\varphi_{j+1}\right]} \left(\hat{K}_{\varphi,j+1}-\hat{K}_{\varphi,j}\right)\right): \label{D} \end{equation} Notice that $\hat{D}_j$ is not self-adjoint and, due to the factor ordering, neither is $\hat{\phi}_j$, but we will see that one can construct an $\mathbb H$ that is self-adjoint. To write the explicit action, let us recall the nature of the basis of spin network states in one dimension (see \cite{spherical} for details). One has a lattice of points $j=0\ldots N$. On such a lattice one has a graph $g$ consisting of a collection of links $e$ connecting the vertices $v$. It is natural to associate the variable $K_x$ with links in the graph and the variable $K_\varphi$ with vertices of the graph. For bookkeeping purposes we will associate each link with the lattice site to its left. One then constructs the ``point holonomies'' for both variables as, \begin{equation} T_{g,\vec{k},\vec{\mu}}(K_x,K_\varphi) = \langle K_x,K_\varphi \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f1new}}\right\rangle= \exp\left(i\sum_{j} {k_j} K_{x,j}\epsilon \right) \exp\left(i\sum_j \mu_{j,v} K_{\varphi,j}\right) \end{equation} The summations run over all the points in the lattice, and we allow the possibility of using ``empty'' links to define the graph, i.e. links where $k_j=0$. The vertices of the graph therefore correspond to lattice sites where one of the following two conditions is met: either $\mu_i\neq0$ or $k_{i-1}\neq k_i$.
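The vertex rule just stated can be made concrete with a small bookkeeping sketch (ours, purely illustrative, not part of the paper's formalism): a basis state is labelled by an array of link labels $\vec{k}$ (each link associated with the site to its left) and an array of site labels $\vec{\mu}$, and a site is a vertex precisely when $\mu_i\neq 0$ or $k_{i-1}\neq k_i$. We take the lattice to be cyclic, as appropriate for a compact spatial slice.

```python
# A state is labelled by integer arrays k (one per link, associated with
# the site to its left) and mu (one per site). Site i is a vertex of the
# graph iff mu[i] != 0 or k[i-1] != k[i]; Python's negative indexing
# implements the cyclic (compact) lattice.
def vertices(k, mu):
    return [i for i in range(len(mu)) if mu[i] != 0 or k[i - 1] != k[i]]

k  = [0, 2, 2, 2, 0, 0]   # link labels; "empty" links carry k = 0
mu = [0, 0, 3, 0, 0, 0]   # point-holonomy labels at the sites
assert vertices(k, mu) == [1, 2, 4]   # k jumps at 1 and 4, mu insertion at 2
assert vertices([0] * 4, [0] * 4) == []   # trivial state: no vertices
```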
In terms of this basis it is straightforward to write the action of the operator defined in (\ref{D}), \begin{eqnarray} \hat{D}_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f1}} \right\rangle &=& \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f2}} \right\rangle \\ &=& \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f3}} \right\rangle. \end{eqnarray} The above expression is easy to obtain, since the $\hat{E}^\varphi_j$ may be substituted by the corresponding eigenvalues $\mu_j$ and $\hat{E}^x_j$ produces $(k_{j-1}+k_j)/(2\epsilon)$. The exponential of $\lambda K_{\varphi,j}$ adds $\lambda$ to $\mu_j$, whereas the exponential of $\epsilon n K_{x,i}$ adds $n$ to $k_i$. An interesting particular case is that of an isolated $\mu$-populated vertex, \begin{equation} \hat{D}_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f4}} \right\rangle = \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f5}} \right\rangle. \end{equation} So we see that the operator $\hat{D}$ moves the line to a new vertex. This clean action is in part due to the ``midpoint'' regularization we chose for the $E^\varphi$. This will in the end be important to recover diffeomorphism invariance in the continuum. Something we will have to study later is the possibility of ``coalescing'' two vertices, as in the case, \begin{equation} \hat{D}_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f6}} \right\rangle = \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f7}} \right\rangle\label{34} \end{equation} or the case in which a new vertex is created, \begin{equation} \hat{D}_i \left\vert\vphantom{\frac{1}{1}}\right.\left.
\raisebox{-5mm}{\includegraphics[height=1.5cm]{f8}} \right\rangle = \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f9}} \right\rangle. \end{equation} Computing the adjoint of $\hat{D}$ is easy, since it is a one-to-one operator. We start by noting that, \begin{equation} \left\langle \raisebox{-5mm}{\includegraphics[height=1.5cm]{f3}}\right. \left.\vphantom{\frac{1}{1}}\right\vert \hat{D}_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f1}} \right\rangle=1, \end{equation} and the insertion of any other bra on the left gives zero. Therefore \begin{equation} \hat{D}^\dagger_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f1}} \right\rangle= \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f10}} \right\rangle, \end{equation} with particular cases that ``translate'' a $\mu$ insertion, \begin{equation} \hat{D}^\dagger_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f5}} \right\rangle= \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f4}} \right\rangle, \end{equation} or create a vertex, \begin{equation} \hat{D}^\dagger_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f4}} \right\rangle= \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f11}} \right\rangle. \end{equation} In addition there is a third particular case of interest in which a vertex is annihilated; this happens if $\mu_2= -2\mu_1$ and $k=(k_1+k_2)/2$. We now need to turn our attention to the other terms in the construction of $\hat{\mathbb H}$ in order to have a complete quantum version of (\ref{27}).
The discretization we will propose is, \begin{equation}\label{discretizacion} {\mathbb H} = \sum_{j=0}^N \left(O_{j+1}D_j-O_j\right)^\dagger \left(O_{j+1}D_j-O_j\right) \end{equation} where $O_j =\sqrt[4]{\epsilon E^x_j}/(E^\varphi_j)^{3/2}$, and we have chosen to localize $O_j$ and $D_j$ at different points. Intuitively this can be seen in the fact that $\hat{D}$ ``shifts'' links in the spin nets to the next neighbor whereas $\hat{O}$ just acts as a prefactor, as we will discuss in the next paragraph. Therefore, if one wishes to find cancellations between both terms in (\ref{discretizacion}) one needs to delocalize the action of both $\hat{O}$'s. The quantization of $O_j$ has been studied in the literature before \cite{aspasi}. Since these operators only act multiplicatively, it is better to revert to a simpler notation for the states $\vert\vec{\mu},\vec{k}\rangle$. The action of the operator is, \begin{equation} \frac{\sqrt[4]{\hat{E}^x_j}}{\left(E^\varphi_j\right)^{3/2}\epsilon^{1/4}} \vert\vec{\mu},\vec{k}\rangle = \left(\frac{4}{3\rho } \right)^6 \sqrt[4]{\frac{k_{j-1}+k_{j+1}}{2}} \left[\vert \mu_j+\frac{\rho}{2}\vert^{3/4} - \vert \mu_j-\frac{\rho}{2}\vert^{3/4} \right]^6 \vert\vec{\mu},\vec{k}\rangle, \end{equation} where $\rho$ is the minimum allowable value of $\mu$ as is customary in loop quantum cosmology. Since this operator has a simple action through a prefactor, we will call this prefactor $f(\vec{\mu},\vec{k},j)$. One therefore has, for example, \begin{equation} \hat{O}_{i+1} \hat{D}_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f4}} \right\rangle=f(\vec{\mu},\vec{k},i+1) \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f5}} \right\rangle, \end{equation} or, \begin{equation} \hat{O}_{i+1} \hat{D}_i \left\vert\vphantom{\frac{1}{1}}\right.\left.
\raisebox{-5mm}{\includegraphics[height=1.5cm]{f1}} \right\rangle=f(\vec{\mu},\vec{k},i+1) \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f3}} \right\rangle, \end{equation} where the $\vec{\mu},\vec{k}$ that appear in the prefactor are the ones that appear in the state to the right of the prefactor. It is worth noticing that if $\mu_2=0$ the map takes a diagram with one insertion to another with one insertion; if $\mu_1=0$ it goes from one insertion to two; and if both $\mu_1$ and $\mu_2$ are non-vanishing it maps two insertions to two insertions. It is not possible to go from a state with two consecutive insertions into one with only one insertion, since if $2\mu_2+\mu_1 =0$ then $f=0$. This is a key property one seeks in the regularization. If the regularization were able to fuse two insertions it would be problematic, as we will discuss later on. This allows us to evaluate the action of the quadratic Hamiltonian ${\mathbb H}$ explicitly on a set of states that capture in the discrete theory the flavor of diffeomorphism invariance. For instance, consider a normalized state obtained by superposing all possible states with a given insertion \begin{equation} \left\vert \psi_1\right\rangle =\frac{1}{\sqrt{N}}\sum_{i=0}^N \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f4}} \right\rangle. \end{equation} Such a state would be the analogue in the discrete theory of a ``group averaged'' state. If we now consider the action of $\hat{O}_{i+1} \hat{D}_i-\hat{O}_i$ on such a state we get, \begin{equation} \left\langle \psi_1\vphantom{\frac{1}{1}}\right\vert \hat{O}_{i+1} \hat{D}_i-\hat{O}_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f4}} \right\rangle=0 \end{equation} since both terms in the difference produce the same prefactor when acting on the state on the right.
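Two properties of the prefactor $f$ can be checked with a short numerical sketch (ours, purely illustrative and not part of the paper's code): $f$ vanishes when the $\mu$ label at the site where it is evaluated is zero, and for $\mu\gg\rho$ it behaves as $k^{1/4}\mu^{-3/2}$, the dependence one expects from the classical $(E^x)^{1/4}/(E^\varphi)^{3/2}$. Here \texttt{k\_avg} stands for $(k_{j-1}+k_{j+1})/2$.

```python
import numpy as np

# Eigenvalue of the multiplicative operator, as printed above:
# f = (4/(3 rho))^6 * k_avg^{1/4} * (|mu + rho/2|^{3/4} - |mu - rho/2|^{3/4})^6
def prefactor(mu, k_avg, rho=1.0):
    bracket = abs(mu + rho / 2) ** 0.75 - abs(mu - rho / 2) ** 0.75
    return (4.0 / (3.0 * rho)) ** 6 * k_avg ** 0.25 * bracket ** 6

# f vanishes when mu = 0 at the evaluated site: the operator cannot
# produce a state whose mu insertion has been annihilated there.
assert prefactor(0.0, k_avg=2.0) == 0.0

# Large-mu behaviour: the bracket ~ (3 rho/4) mu^{-1/4}, so
# f ~ k_avg^{1/4} mu^{-3/2}, the classical (E^x)^{1/4}/(E^phi)^{3/2} scaling.
mu = 1.0e4
assert np.isclose(prefactor(mu, 2.0), 2.0 ** 0.25 * mu ** -1.5, rtol=1e-3)
```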
If one were to consider on the right a state with multiple insertions, the result would also be zero, since the operators do not convert two consecutive insertions at $i,i+1$ into one and the inner product would vanish. As a consequence, we have that, \begin{equation} \left\langle \psi_1\right\vert \hat{O}_{i+1} \hat{D}_i-\hat{O}_i =0. \end{equation} Let us now consider states with two insertions, again ``group averaged'' in the sense that we sum over all possible locations of the two insertions respecting a relative order within the lattice (in this case this is irrelevant due to cyclicity in a compact manifold), \begin{equation} \left\vert \psi_2\right\rangle =\frac{1}{\sqrt{N(N-1)}}\sum_{i=0}^N\sum_{\scriptstyle\begin{array}{c}\scriptstyle j \neq i\\ \scriptstyle j=0\end{array}}^N \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f12}} \right\rangle. \end{equation} If one considers a state $\vert \nu \rangle$ with three or more insertions of $\mu$, one has that \begin{equation} \left\langle \psi_2\right\vert \hat{O}_{i+1} \hat{D}_i-\hat{O}_i \left\vert \nu \right\rangle =0, \end{equation} since in the first term $\hat{D}_i$ could produce a two insertion diagram, but then the action of $\hat{O}$ at site $i+1$ would vanish, and the term on the right does not produce a two insertion diagram, as seen in (\ref{34}). If one considers a state $\vert\nu\rangle$ with two non-consecutive vertices, the operator also vanishes, for the same reasons as before. Finally, if $\vert\nu\rangle$ has two consecutive insertions then we will have a non-trivial contribution. We will see, however, that such a contribution vanishes in the continuum limit.
To see this we evaluate, \begin{eqnarray} \left\langle \psi_2\left\vert \hat{O}_{i+1} \hat{D}_i-\hat{O}_i \right\vert\vphantom{\frac{1}{1}} \raisebox{-5mm}{\includegraphics[height=1.5cm]{f14}} \right\rangle &=& f(\vec{\nu},\vec{m},i+1) \left\langle \psi_2 \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f15}}\right. \right\rangle\nonumber\\ && -f(\vec{\nu},\vec{m},i) \left\langle \psi_2 \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f14}}\right. \right\rangle\nonumber\\ &=&\left[ f(\vec{\nu},\vec{m},i+1) \delta_{\mu_2,2\nu_{i+1}+\nu_i} \delta_{\mu_1,-\nu_{i+1}} \delta_{{k'},m_{i-1}}\delta_{{k'},m_i+m_{i-1}-m_{i+1}} \delta_{k,m_{i+1}}\right.\nonumber\\ &&\left.- f(\vec{\nu},\vec{m},i) \delta_{\mu_1,\nu_i} \delta_{\mu_2,\nu_{i+1}} \delta_{k,m_{i-1}} \delta_{{k'},m_{i}} \delta_{k,m_{i+1}}\right]\frac{1}{\sqrt{N(N-1)}} \end{eqnarray} If $\vert \nu\rangle$ has one $\mu$ insertion then there is another contribution, \begin{eqnarray} \left\langle \psi_2\left\vert \hat{O}_{i+1} \hat{D}_i-\hat{O}_i \right\vert\vphantom{\frac{1}{1}} \raisebox{-5mm}{\includegraphics[height=1.5cm]{f16}} \right\rangle &=& f(\vec{\nu},\vec{m},i+1) \left\langle \psi_2 \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f17}}\right. \right\rangle\nonumber\\ &=&\frac{1}{\sqrt{N(N-1)}}\left[ \delta_{k,m_i} \delta_{{k'},2m_i-m_{i+1}} \delta_{\mu_1,-\nu_{i+1}} \delta_{k,m_{i+1}} f(\vec{\nu},\vec{m},i+1)\right] \end{eqnarray} We are now in a position to evaluate the expectation value of $\hat{\mathbb H}$. To do that we compute, \begin{equation} \langle \psi_2\vert\hat{\mathbb H} \vert \psi_2\rangle= \sum_{j=0}^N \langle \psi_2\vert \left(\hat{O}_{j+1} \hat{D}_j-\hat{O}_j\right) \left(\hat{O}_{j+1} \hat{D}_j-\hat{O}_j\right)^\dagger \vert \psi_2\rangle. \end{equation} and we insert a complete basis of states between the two parentheses. 
Then we can apply all the results we have just worked out. The final result is that only three finite contributions appear for every $j$ and therefore \begin{equation} \langle \psi_2\vert\hat{\mathbb H}\vert \psi_2\rangle=O\left(\frac{1}{N}\right), \end{equation} and in the limit $N\to \infty$ we see that the spectrum of $\hat{\mathbb H}$ contains zero; therefore no anomalies appear and the constraints are enforced exactly. Analogously, one can show that for spin networks with $m$ vertices $\langle\psi_m\vert{\mathbb H}\vert\psi_m\rangle=O(1/N)$, and therefore the states that minimize $\langle\hat{\mathbb H}\rangle$ include in the limit $N\to\infty$ the diffeomorphism invariant states obtained via the group averaging procedure. To see this more clearly we note that the state with $m$ vertices we are considering is of the form, \begin{eqnarray} \left\vert \psi_m\right\rangle = \frac{1}{\sqrt{NC^{N-1}_{m-1}}}\sum_{i_{v_1}<\ldots<i_{v_j}<\ldots<i_{v_m}<i_{v_1}} \left\vert\vphantom{\frac{1}{1}} \raisebox{-5mm}{\includegraphics[height=1.5cm]{f18}} \right\rangle \end{eqnarray} where the sum is over all the spin nets with the only condition that the cyclic order of the vertices is preserved, that is $v_1$ is always between $v_m$ and $v_2$, etc. The quantities $C^{N-1}_{m-1}$ are the combinatorial numbers of $N-1$ elements taken in groups of $m-1$; the product $NC^{N-1}_{m-1}$ counts the terms in the sum and normalizes the state. This sum is the discrete version of the sum over the group that is performed in the continuum group averaging procedure. The sum preserves the cyclic order, placing the vertices in all the positions compatible with such order. We have shown that the expectation value of $\hat{\mathbb H}$ vanishes in the continuum limit. Since $\hat{\mathbb H}$ is a positive definite operator this also implies that $\hat{\mathbb H}\vert \psi_m\rangle =0$, which is the condition one seeks in the uniform discretization approach.
This can be explicitly checked by computing, for instance for a state $\langle \psi_2\vert$, \begin{equation} \sum_s \langle \psi_2 \vert \hat{\mathbb H}\vert s\rangle\langle s\vert= \frac{1}{\sqrt{N(N-1)}} \sum_{i=1}^N f_i \langle s_i\vert \end{equation} where the sum over $s$ means a sum over a basis of spin networks $\vert s\rangle$ and the $\langle s_i\vert $ are spin network states that have vertices at consecutive sites $i$ and $i+1$. Given that the $f_i$'s are finite coefficients independent of $N$, one immediately sees that the right hand side has zero norm when $N\to \infty$. There is a rather important difference with the continuum case, however. The states constructed here as limits of discrete states are normalizable with the kinematical inner product, and therefore the calculation suggests that in a problem with a Hamiltonian constraint in addition to diffeomorphism constraints one could treat all constraints in the discrete theory on an equal footing. \section{Discussion} We have seen that a $1+1$ dimensional model with diffeomorphism invariance can be discretized, thereby breaking the invariance, and treated using the ``uniform discretizations'' approach, yielding a diffeomorphism invariant theory in the continuum limit. We have argued that this would have been close to impossible if one had naively discretized the constraints and quantized the resulting theory. An important point to realize is that the kinematical Hilbert space has been changed, by considering spin networks on ``lattices'' with a countable number of points. There exist infinitely many possible such lattices built by considering different spacings between the points.
However, in $1+1$ dimensions the choice of lattice does not influence the diffeomorphism invariant quantum theory, whose observables can be written in terms of the canonical variables and invariant combinations of their derivatives that can be entirely framed in terms of $\vec{k}$ and $\vec{\mu}$ without reference to details of the lattice. For instance, the total volume of a slice evaluated on a diffeomorphism invariant spin network $\vert\psi_1\rangle$ is given by \begin{equation} \hat{V}\vert \psi_1 \rangle= 4 \pi \ell_{\rm Planck}^3 \sum_v \vert \mu_v\vert \sqrt{\frac{k_{e^+(v)}+k_{e^-(v)}}{2}} \vert \psi_1 \rangle \end{equation} where the sum is over all vertices of the continuum spin network and $k_{e^\pm}$ are the values of $k$ emanating to the right and left of vertex $v$. More generally, consider an observable $O_{\rm Diff}$, that is, an operator invariant under diffeomorphisms. Let us study, in the space of lattices with a countable number of points, its expectation value on diffeomorphism invariant states $\langle \psi_{m,\vec{k},\vec{\mu}} \vert \hat{O}_{\rm Diff}\vert \psi_{m,\vec{k},\vec{\mu}}\rangle$, with $\vert \psi_{m,\vec{k},\vec{\mu}}\rangle$ the cyclic state we considered in the previous section. In the continuum, the vectors $\vert \{s\}\rangle$ of the Hilbert space of diffeomorphism invariant states, where $\{s\}$ is the knot class of a spin network $s$, belong to the dual of the space of kinematic spin network states $\vert s\rangle$. The expectation value of the observable in the continuum is $\langle \{s\}\vert \hat{O}_{\rm Diff} \vert \{s\}\rangle$, and the expectation values in the continuum and in the discrete theory coincide. The reason for this is that the action of $\hat{O}_{\rm Diff}$ on one of the terms in $\vert \psi_m \rangle$ coincides with $\hat{O}_{\rm Diff} \vert s\rangle$ except when $s$ has vertices that occupy consecutive positions on the lattice.
In this case, depending on the specific form of $\hat{O}_{\rm Diff}$ the results could differ. Due to the normalization factor, however, such exceptional terms contribute a factor of $1/N$ in the $N\to \infty$ limit, so in the continuum limit the expectation values in the continuum and the discrete theory always agree. An issue of importance in loop quantum gravity is the problem of ambiguities in the definition of the quantum theory. Apart from the usual factor ordering ambiguities, in a discrete theory one adds the ambiguities of the discretization process. In this example we have made several careful choices in this process to ensure that the operator $\hat{\mathbb H}$ has a non-trivial kernel in the continuum limit. This requirement proved quite onerous to satisfy in practice. Though we in no way claim that the results are unique, this hints that requiring that $\hat{\mathbb H}$ have a non-trivial kernel in the continuum significantly reduces the level of ambiguity in the definition of a quantum discrete theory. We have not been able to find another regularization satisfying the requirement and leading to a different non-trivial kernel. Another point to note is that the quantum diffeomorphism constraints $\phi^\epsilon(M)=\sum_{j=0}^N \frac{M_j}{2i\epsilon}\left(D_j-1\right)$ with $M_j$ stemming from discretizing a smooth shift function do not reproduce the continuum algebra of constraints when they act on generic spin networks on the lattice that belong to the kinematical Hilbert space. The algebra almost works, but there appear anomalous contributions for spin networks with vertices at two consecutive sites of the lattice.
In spite of this the constraints can be imposed at a quantum level through the condition $\langle \psi \vert \hat{\mathbb H}=0$ and imply, as we showed, that the solutions correspond to a discrete version of the sum over the group that is performed in the group averaging procedure. The difference is that these states are normalizable with the inner product of the kinematical space itself. In this construction the Hilbert space ${\cal H}_{\rm Diff}$ is a subspace of ${\cal H}_{\rm Kin}$, unlike the situation in the ordinary group averaging procedure. This property opens interesting possibilities: if it were to hold in more complex models, for instance involving a Hamiltonian constraint, it would be very important, since it would provide immediate access to a physical inner product. All of the above suggests that in more realistic models than the one we studied, for instance when there is a Hamiltonian constraint (with structure functions in the constraint algebra), one will also be able to define the diffeomorphism and the Hamiltonian constraints as quantum operators and impose them as constraints (or equivalently, to impose the ``master constraint'' ${\mathbb H}$). They would act on the kinematic Hilbert space of the discrete theory, and one would hope that a suitable continuum limit can be defined. We would therefore have a way of defining a continuum quantum theory via discretization and taking the continuum limit, even in systems where the discretization changes the nature of the constraints from first to second class. In $1+1$ dimensions the procedure appears quite promising. It should be noted that this is a rather rich arena of physical phenomena, including Gowdy cosmologies, the Choptuik phenomenon and several models of black hole formation. The fact that we could envision treating these problems in detail in the quantum theory in the near future is quite attractive.
In higher dimensions the viability of the approach will require further study, in particular since the discretization scheme chosen could significantly constrain the types of spin networks that one can construct in the continuum theory. Summarizing, we have presented the first example of a model with infinitely many degrees of freedom where the uniform discretization procedure can be carried through to completion, providing a continuum theory with diffeomorphism invariance and where the master constraint has a non-trivial kernel. It also leads to an explicit construction of the physical Hilbert space that is different from the usual one, allowing the introduction of the kinematical inner product as the physical one. \section{Acknowledgements} This work was supported in part by grant NSF-PHY0650715 and by funds of the Horace C. Hearne Jr. Institute for Theoretical Physics, FQXi, PEDECIBA and PDT \#63/076 (Uruguay) and CCT-LSU.
\section{Introduction} Gravitational lensing can be used as a powerful astrophysical tool for probing the density profiles of galaxies, and is one of the few ways in which dark matter can be detected \citep[e.g.][]{2005MNRAS.363.1136K}. In addition, it often magnifies source objects by one to two orders of magnitude. This allows us to use the intervening gravitational lens as a kind of natural telescope, magnifying the source so that we can observe more detail than we would have been able to without the lens. This extra magnification provided by lensing has been very beneficial to studies of star formation and galaxy morphology at high redshift. Regions of the galaxy size and luminosity distribution that are inaccessible in unlensed observations are made (more) visible by lensing \citep[e.g.][]{2000ApJ...528...96P, wayth, 2006ApJ...651....8B, 2007ApJ...671.1196M, 2008arXiv0804.4002D}. The properties of the lens galaxies (typically elliptical galaxies) can also be inferred from their lensing effect \citep[e.g.][]{2006ApJ...649..599K, 2008arXiv0806.1056T}. Of course, gravitational lensing distorts the image of the source, as well as magnifying it. Thus, techniques have been developed that aim to infer the mass profile of the lens galaxy and the surface brightness profile of the source, given observed images \citep[e.g.][]{2003ApJ...590..673W, 2006ApJ...637..608B}. The aim of this paper is to carry out this process with the recently discovered gravitationally lensed quasar/host galaxy system RXS J1131-1231 \citep{2003A&A...406L..43S}. This system consists of a quadruply imaged quasar at redshift $z=0.658$ lensed by a galaxy at $z=0.295$. At the time of its discovery, it was the closest known lensed quasar, with some evidence for an extended optical Einstein ring - the image of the quasar host galaxy. Initial simple modelling suggested that the quasar source was magnified by a factor of $\sim$ 50. 
Thus, subsequent observations with the ACS aboard the Hubble Space Telescope \citep[][hereafter C06]{2006A&A...451..865C} allow the recovery of the morphology of the quasar's host galaxy down to a resolution of about 0.01 arc seconds - at least in principle, for the parts of the source nearest the caustic. Indeed, C06 presented a wide array of results based on HST observations (at 555nm and 814nm with ACS, and 1600nm with NICMOS), including a detailed reconstruction of the extended source. The source reconstruction method used by C06 is based on lensing the image back to a pixellated grid in the source plane, setting the source surface brightnesses to equal the image surface brightness, and using a decision rule (in this case, the median) to decide on the value of a source pixel whenever two or more image pixels disagree about the value of the same source pixel. If the point spread function (PSF) is small or the image has been deconvolved (in C06, the deconvolution step was neglected for the purposes of the extended source reconstruction) and the lens model is correct, this method can be expected to be quite accurate. However, in principle, the uncertainty in the lens model parameters and the deconvolution step should always be taken into account. In this paper, we focus our attention on the 555nm ACS image (the drizzled images, as reduced by C06, were provided to us), and the process by which we reconstruct the original, unlensed source from it. Any differences between our reconstruction and the C06 one can be attributed to the following advantages of our approach: PSF deconvolution, averaging over the lens parameter uncertainties, simultaneous fitting of all parameters, and the prior information that Bayesian methods are capable of taking into account; in the case of our model, this is the knowledge that the majority of pixels in an astrophysical source should be dark \citep{2006ApJ...637..608B}.
The 555nm image is also of particular interest because its rest wavelength (335nm) probes regions of recent star formation in a galaxy with an AGN. In the case of the Einstein Ring 0047-2808 \citep{2006ApJ...651....8B}, our method was able to resolve structure down to scales of $\sim$ 0.01 arcsec, a factor of five smaller than that obtainable in an unlensed observation with the Hubble Space Telescope and about double the resolution obtained by \citet{2005ApJ...623...31D} using adaptive pixellation and least squares {\it applied to exactly the same data}. This was possible because we used a prior distribution over possible sources that is more realistic as a model of our knowledge of an unknown astrophysical source, that is, we took into account the fact that it should be a positive structure against a dark background, a fact many methods (such as least squares and some popular regularisation formulas) implicitly ignore \citep{2006ApJ...637..608B}. These differences between methods are likely to be most significant when data are sparse or noisy, and all methods tend to agree as the data quality increases and we approach the regime where the observed image uniquely determines the correct reconstruction. \section{Background to the Method} The conceptual basis of the Bayesian reconstruction method was presented in \citet{2006ApJ...637..608B}. The idea is to fit a complex model to some data, but rather than simply optimising the parameters of the model to achieve the best fit, we try to explore the whole volume of the parameter space that remains viable after taking into account the data. The effect of data is usually to reduce the volume of the plausible regions of the parameter space considerably\footnote{For non-uniform probability distributions, ``volume'' is effectively the exponential of the information theory entropy of the distribution.}. 
The exploration of the parameter space can be achieved by using Markov Chain Monte Carlo (MCMC) algorithms, which are designed to produce a set of models sampled from the posterior distribution. In the case of modelling the background source and lens mass distribution of a gravitational lensing system, this allows us to obtain a sample of model sources and lenses that, when lensed and blurred by a PSF, match the observational data. The diversity of the models gives the uncertainties in any quantity of interest. The reader is referred to \citet{gregory} for an introduction to Bayesian Inference and MCMC. \section{Method and Assumptions} The first step of a Bayesian analysis is to assign a likelihood function, or the probability density we would assign to the observed data if we knew the values of all of the parameters. To assign this, we need a noise frame, a measure of how uncertain we are about the noise level in each pixel. This is typically done by assuming that the observational error at pixel $i$ is from a normal distribution with mean 0 and known standard deviation $\sigma_i$. We extended this to include two ``extra noise parameters'' $\sigma_a$ and $\sigma_b$, such that the standard deviation for the error in the $i$th pixel is $\sqrt{\sigma_i^2 + \sigma_a^2 + \sigma_b^2 f_i}$, where $f_i$ is the predicted flux in the $i$'th pixel. $\sigma_a$ and $\sigma_b$ then become extra model parameters to be estimated from the data. The inclusion of $\sigma_a$ and $\sigma_b$ implies that the extra noise level sigma varies with the predicted brightness of the pixel, with a square root dependence expected from Poisson photon counting. We chose the $\{\sigma_i\}$ values to be zero for most of the image, but infinite for the brightest regions of the quasar images, effectively masking out those parts of the image; this mask can be seen in Figure~\ref{data}. A model PSF was obtained using the TinyTim software \citep{tinytimreference}. 
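The error model just described can be summarized in a short sketch of the per-pixel Gaussian log-likelihood (our notation and variable names, not code from the paper): the standard deviation in pixel $i$ is $\sqrt{\sigma_i^2 + \sigma_a^2 + \sigma_b^2 f_i}$, with $f_i$ the model-predicted flux, and pixels assigned $\sigma_i = \infty$ drop out of the fit, which is how the masking operates.

```python
import numpy as np

def log_likelihood(data, model_flux, sigma_pix, sigma_a, sigma_b):
    """Gaussian log-likelihood with the two extra noise parameters.

    Per-pixel variance: sigma_i^2 + sigma_a^2 + sigma_b^2 * f_i, where
    f_i is the predicted flux. Pixels with infinite sigma_i are excluded,
    so they place no constraint on the model (the masking mechanism).
    """
    var = sigma_pix ** 2 + sigma_a ** 2 + sigma_b ** 2 * model_flux
    keep = np.isfinite(var)
    resid = data[keep] - model_flux[keep]
    return -0.5 * np.sum(resid ** 2 / var[keep] + np.log(2 * np.pi * var[keep]))

data = np.array([1.0, 2.0, 100.0])
flux = np.array([1.1, 1.9, 0.0])
sig = np.array([0.1, 0.1, np.inf])   # third pixel masked out
ll = log_likelihood(data, flux, sig, sigma_a=0.05, sigma_b=0.1)
assert np.isfinite(ll)
# The masked pixel's data value is irrelevant to the likelihood:
assert ll == log_likelihood(np.array([1.0, 2.0, -5.0]), flux, sig, 0.05, 0.1)
```

In an MCMC run, $\sigma_a$ and $\sigma_b$ would simply be two more sampled parameters alongside the lens, source and galaxy-profile parameters.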
As noted by C06, the TinyTim simulations did not successfully perform the geometric distortion correction, and the output PSF had slightly non-orthogonal diffraction spikes, whereas the spikes in the image are perpendicular. To correct this, the PSF was ``straightened out'' by evaluating it with respect to non-orthogonal axes; the resulting PSF is shown in Figure~\ref{tinytim}. Whilst this process is imperfect, the extra noise parameters protect against serious consequences resulting from slight inaccuracies. Although our choice of $\{\sigma_i\}$ was designed to block out the brightest parts of the quasar images, the images are so bright that their light still extends past the masked regions and overlaps with interesting Einstein Ring structures. Thus, when modelling the image, we still required a flux component due to the quasar images. The four quasar images were modelled as being proportional to the corrected TinyTim profiles with unknown fluxes and central positions. The surface brightness profile of the lens galaxy was modelled as the sum of two elliptical Gaussian-like profiles (Sersic profiles, one for the core and one for the extended emission) proportional to $e^{-(\frac{R}{L})^{\alpha}}$, where $R = \sqrt{Qx'^2 + y'^2/Q}$, with unknown peak surface brightness, ellipticity $Q$, length scale $L$, and angle of orientation. The central position was also considered initially unknown, but for MCMC purposes the starting point was to have both profiles centred near the observed centre of the lens galaxy core. The slopes $\alpha$ were restricted to the range $[0,10]$ and assigned a uniform prior distribution, along with all of the other free parameters. Although elliptical galaxies are well modelled by a single Sersic profile with $\alpha = 1/4$, we are modelling this galaxy by {\it two} such profiles.
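The stated functional form of each profile can be sketched schematically as follows; the function name and argument defaults are ours:

```python
import math

def sersic_like(x, y, x0=0.0, y0=0.0, I0=1.0, Q=1.0, L=1.0, alpha=1.0, phi=0.0):
    """Elliptical profile I0 * exp(-(R/L)^alpha), with
    R = sqrt(Q*x'^2 + y'^2/Q) measured in axes rotated by phi about (x0, y0)."""
    dx, dy = x - x0, y - y0
    xp = dx * math.cos(phi) + dy * math.sin(phi)   # rotated coordinates
    yp = -dx * math.sin(phi) + dy * math.cos(phi)
    R = math.sqrt(Q * xp ** 2 + yp ** 2 / Q)       # elliptical radius
    return I0 * math.exp(-((R / L) ** alpha))
```

With $\alpha = 2$ and $Q = 1$ this reduces to a circular Gaussian; the lens galaxy model sums two such profiles with independent parameters.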
This was done because the wings of the lens galaxy light profile (where it overlaps the Einstein ring) are of great significance for our source reconstruction, and we do not want the core of the lens galaxy to be relevant to the wings. Note that there are parts of the image where flux is present from three sources: the lensed Einstein Ring, the wings of the PSF from the quasar images, and the foreground lens galaxy. The fact that these all overlap suggests that the optimal approach (in all senses apart from CPU time) involves simultaneously fitting all of these components. Throughout this paper, any modelling has included all of the lens galaxy profile parameters as free parameters\footnote{Except for the first part of Section~\ref{pixellated}, where computational restrictions required that we fix the lens parameters.}, as well as the source, the lens model parameters, and the positions and brightnesses of the four TinyTim PSF profiles, to model the contribution from the quasar images that remains even after masking out their central regions. \begin{figure*} \begin{center} \includegraphics[scale=0.6]{data.eps} \end{center} \caption{The logarithm of the observed image (on the left) shows both the quasar images, along with their additional effects such as diffraction spikes, and the faint Einstein ring image of the host galaxy. On the right is the image (scaled linearly) with some parts blocked out - these blanked regions are those where the $\{\sigma_i\}$ have been adjusted to block out the brightest emission from the quasar. The $\sigma_i$ for some pixels has also been artificially boosted to reduce the effects of the outer parts of the diffraction spikes from the quasar images. We expect the inner parts of these spikes to be well modelled by the TinyTim profiles (Figure~\ref{tinytim}).
Note that the lens galaxy light profile extends over this entire image; where it appears that the image is blank, there is actually a positive flux.\label{data}} \end{figure*} \section{Lens Model Parameterisation} The particular lens model we assumed for this system was a pseudo-isothermal elliptical potential (PIEP) \citep{piepref}, primarily for computational speed but also because it is fairly general and realistic, at least for single galaxy lenses that are not too elliptical. This model has five parameters: strength $b$, ellipticity $q$ (actually the axis ratio: $q=1$ implies a circularly symmetric lens), orientation angle $\theta$, and two parameters $(x_c,y_c)$ for the central position (measured relative to the central pixel of the images in Figure~\ref{data}). Although any Bayesian modelling can only explore a particular slice through the full hypothesis space we might have in our minds, using a simplified analytical model is often sufficient to illuminate the general properties of the true lens mass distribution. Also, it is typically the case that inferences about the source of an Einstein Ring are insensitive to the specific parameterisation for the lens model \citep[e.g.][]{wayth}, as long as the model is able to fit the observed image at all. All Einstein rings can be expected to reside in an environment where the external shear due to neighbouring galaxies is nonzero (Kochanek, private communication), and thus, external shear was also included in the lens model. There are two parameters for the external shear: $\gamma$, its magnitude, and its orientation angle $\theta_{\gamma}$. \citet{2007ApJ...660.1016S} have observed the flux ratios of the quasar images (via integral field spectroscopy) and found that most of these ratios are consistent with a model of this type (elliptical potential plus external shear). 
A similar model was used by C06 (they used a singular isothermal ellipse+$\gamma$), who found it to be the simplest parameterised model that can reproduce the observations. In principle, we could adopt ever less restrictive parameterisations for the lens, to hunt for substructures in its density profile. However, such an approach is extraordinarily computationally expensive (unless simplifying assumptions about the source are also made) and is beyond the scope of this paper. In terms of all of these parameters, the deflection angle formula, relating a point $(x,y)$ in the image plane to a corresponding point $(x_s,y_s)$ in the source plane, is \begin{eqnarray}\label{lenseqn} x_s = x - \alpha_x(x,y) \nonumber \\ y_s = y - \alpha_y(x,y) \end{eqnarray} where the deflection angles $\alpha$ are given by the gradient of the potential \begin{equation} \psi(x,y) = \frac{1}{2}\gamma (x_{\gamma}^2 - y_{\gamma}^2) + b\sqrt{qx_{\theta}^2 + y_{\theta}^2/q} \end{equation} and $(x_{\gamma},y_{\gamma})$ are the ray coordinates in the rotated coordinate system whose origin is $(0,0)$ and is aligned with the external shear (i.e. rotated by an angle $\theta_{\gamma}$), and $(x_{\theta},y_{\theta})$ are the ray coordinates in the rotated and translated coordinate system centred at $(x_c,y_c)$ and oriented at an angle $\theta$. The physical interpretation of each of these parameters suggests a plausible prior range for their values. To represent this knowledge we used the following prior distributions (Table~\ref{lenspriors}). Since these are broad distributions, and the data are good, the influence of these choices is negligible; they are included for completeness.
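A schematic implementation of the PIEP+shear potential and the lens equation is given below; for brevity the deflection angles are obtained by finite differences of $\psi$ rather than its analytic gradient, and the function names are ours:

```python
import math

def potential(x, y, b, q, theta, xc, yc, gamma, theta_g):
    """PIEP + external shear lens potential psi(x, y) from the text."""
    # coordinates rotated by theta_g about the origin (shear-aligned frame)
    xg = x * math.cos(theta_g) + y * math.sin(theta_g)
    yg = -x * math.sin(theta_g) + y * math.cos(theta_g)
    # coordinates centred on (xc, yc) and rotated by theta (lens frame)
    dx, dy = x - xc, y - yc
    xt = dx * math.cos(theta) + dy * math.sin(theta)
    yt = -dx * math.sin(theta) + dy * math.cos(theta)
    return 0.5 * gamma * (xg ** 2 - yg ** 2) + b * math.sqrt(q * xt ** 2 + yt ** 2 / q)

def source_position(x, y, h=1e-6, **lens):
    """Lens equation: back-trace an image-plane point via alpha = grad(psi),
    estimated here by central finite differences."""
    ax = (potential(x + h, y, **lens) - potential(x - h, y, **lens)) / (2 * h)
    ay = (potential(x, y + h, **lens) - potential(x, y - h, **lens)) / (2 * h)
    return x - ax, y - ay
```

For a circular lens ($q=1$, $\gamma=0$) the deflection magnitude is $b$ everywhere outside the centre, as expected for an isothermal profile.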
\begin{table*} \begin{center} \caption{Prior probability densities for the lens model parameters, and also the extra noise parameters $\sigma_a$ and $\sigma_b$.}\label{lenspriors} \begin{tabular}{lc} \hline Parameter & Prior Distribution \\ \hline $b$ & Normal, mean 1.8'', SD 0.5, $b > 1$\\ $q$ & Normal, mean 0.9, SD 0.2, $0 < q < 1$\\ $x_c$ & Normal, centred at the lens galaxy core, SD 1.0'' \\ $y_c$ & Normal, centred at the lens galaxy core, SD 1.0'' \\ $\theta$ & Uniform, between 0 and $\pi$ \\ $\sigma_a$ & Improper Uniform, $\sigma_a > 0$ \\ $\sigma_b$ & Improper Uniform, $\sigma_b > 0$ \\ $\theta_{\gamma}$ & Uniform, between 0 and $2\pi$ \\ $\gamma$ & Exponential, mean 0.1 \end{tabular} \medskip\\ \end{center} \end{table*} \begin{figure} \begin{center} \includegraphics[scale=0.4]{tinytim.eps} \end{center} \caption{Simulated PSFs from TinyTim. Each pixel corresponds to an ACS image pixel and is 0.049 arcseconds across. On the left is the actual PSF. For the purposes of the modelling of the extended images, this PSF was truncated to its brightest central 5$\times$5 pixels. For the quasar images, the wings are most important, and this can be seen most easily in the right hand panel. Since the quasars are so bright, the wings of the PSF are not negligible.\label{tinytim}} \end{figure} In summary, the observed image was modelled as the sum of the following components:\\ (i) Four TinyTim PSF profiles (Figure~\ref{tinytim}) with initially unknown amplitude and central position, to model the quasar images. While the bright parts of the images are masked out, the wings of the PSF are still important, so these components are required.\\ (ii) Two elliptical Sersic profiles with initially unknown central position, peak surface brightness, scale radius, slope (Sersic index) and angle of orientation.
One of these models the lens galaxy's core and the other models the fainter outer regions.\\ (iii) A source, which is lensed by a PIEP+$\gamma$ lens with unknown parameters and blurred by the $5\times5$ pixel core of the TinyTim PSF (Figure~\ref{tinytim}). In Section~\ref{simple}, the source is modelled in a simple way as a sum of six elliptical Sersic profiles (in order to find a good initial value for the lens parameters), and in Section~\ref{pixellated} the source is modelled as a pixellated grid with a prior distribution favouring non-negative pixel values and a dark background.\\ (iv) Noise, to which we assign a Gaussian prior probability distribution with unknown standard deviation $\sqrt{\sigma_i^2 + \sigma_a^2 + \sigma_b^2 f_i}$ for each pixel, with $\sigma_a$ and $\sigma_b$ initially unknown and the $\{\sigma_i\}$ specified in advance to mask out the pixels with the most systematics. One of the most difficult tasks in a source reconstruction problem is to find good values of the lens parameters to serve as the starting point for an MCMC run. If the source is pixellated, then exploration of the lens parameter space is slow because we effectively have to marginalise over thousands of source dimensions - so if we start with incorrect lens model values, the burn-in approach to the more plausible values will be slow. A good starting point for the lens can usually be obtained by running a much simpler version of the whole inference problem, for instance with a simpler model for the source, or by using only the quasar image positions and brightnesses as constraints. In the next section, we apply the latter approach to see how well the quasar images alone can constrain the lens model parameters. Later (Section~\ref{simple}), the extended images are taken into account by using a simply parameterised source model, where the number of source dimensions to be marginalised is only 36.
Finally (Section~\ref{pixellated}), we use a pixellated model for the source in order to reconstruct its structure with minimal assumptions. \section{How the quasar Images Constrain the Lens} \subsection{Theory} The four quasar images can constrain the lens model because we require that the four image positions lens back to the same point in the source plane. Actually, with an upper limit of $\sim$ 0.02 pc for the continuum source size \citep[e.g.][]{2005MNRAS.359..561W}, this exact requirement is too strong - we can really only insist that they lens back to within $\sim$ 3 microarcseconds of each other (using a concordance cosmology $\Omega_m = 0.27, \Omega_\Lambda = 0.73, H_0 = 71$ km s$^{-1}$Mpc$^{-1}$, \citet{2004ApJ...608...10N}). The results of this subsection are unchanged if we use a smaller quasar size of $10^{-3}$pc \citep{2008ApJ...676...80M} because the limiting factor is the accuracy of the astrometry, rather than the assumed quasar source size. The magnifications of the images can also provide some information (e.g. \citet{2002MNRAS.330L..15L} used the brightness of the third image of a quasar to argue for a model for the lens mass profile that creates a naked cusp in the source plane) although microlensing and absorption effects can limit the usefulness of including the magnifications as constraints for more typical systems. The quasar positions were measured by fitting Gaussians to the peaks of the images. For the purposes of centroiding, this is an adequate approximation: the alternative is to calculate very high resolution simulated PSFs. Different (unknown) noise levels were assumed for each of the four images; however, the uncertainties in position were found to be $\sim$ 0.003 arc seconds for all of the four images. This corresponds to less than 1/10th of an image pixel. 
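A minimal sketch of image centroiding is given below; for simplicity it uses flux-weighted first moments rather than the Gaussian peak fits actually employed, and the function name is ours:

```python
def centroid(image):
    """Flux-weighted centroid (first moments) of a 2-D image, given as a
    list of rows; a simpler stand-in for fitting a Gaussian to the peak."""
    total = sx = sy = 0.0
    for iy, row in enumerate(image):
        for ix, flux in enumerate(row):
            total += flux
            sx += flux * ix   # weight each column index by its flux
            sy += flux * iy   # likewise for the row index
    return sx / total, sy / total
```

For a symmetric, well-sampled peak the moment centroid and the Gaussian-fit centroid agree closely; the quoted $\sim$ 0.003 arcsecond uncertainties come from the full fit with per-image noise levels.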
To infer our PIEP+$\gamma$ parameters from the quasar positions, we first implemented an MCMC algorithm to explore the prior distribution for the lens parameters (Table~\ref{lenspriors}). Then we imposed a constraint on the probability distribution: that the expected value of the mean inter-pair distance, or spread, of the four points in the source plane upon back ray tracing, should be about $10^{-5}$ arc seconds. This modifies the probability distribution over the lens parameter space by a multiplicative factor $\exp\left(-k \times \texttt{Spread(Lens Parameters)}\right)$, where the value of $k$ is chosen so that the expectation value takes the value we wish to impose, $10^{-5}$ arc seconds. Conventionally, one would estimate the lens parameters by finding those values that minimise the spread in the source plane. Our probabilistic approach softens this constraint and implies that we sample from the range of lens models that reduce the scatter to about $10^{-5}$ arc seconds or less. This approach assumes that we know the true exact image positions, although it can be extended to allow for uncertainty in the image positions, as follows. Denote the true unknown quasar image positions collectively by $X$, the estimated positions by $x$ and the lens parameter values by $L$. 
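The spread statistic and its multiplicative constraint factor can be sketched as follows (function names are ours; the points are the four back-traced source-plane positions):

```python
import itertools, math

def spread(points):
    """Mean inter-pair distance of the back-traced source-plane positions."""
    pairs = list(itertools.combinations(points, 2))
    return sum(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in pairs) / len(pairs)

def constraint_factor(points, k):
    """Multiplicative factor exp(-k * spread) applied to the prior density."""
    return math.exp(-k * spread(points))
```

When the four points coincide the factor is 1, and it decays exponentially as the back-traced positions scatter; $k$ is tuned so that the expected spread is $10^{-5}$ arc seconds.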
By the rules of probability theory, the posterior probability distribution for $L$ given $x$ (here, we assume that the known data are the $x$'s, rather than the entire image) can be written as: \begin{eqnarray} p(L|x) = \int p(L, X|x) dX \nonumber \\ = \int p(X|x) p(L|X,x) dX \nonumber \end{eqnarray} Since knowledge of $X$ would make $x$ irrelevant, this becomes \begin{eqnarray}\label{qsopost} p(L|x) = \int p(X|x) p(L|X) dX \nonumber \\ \propto \int p(X|x)p(L)\exp\left(-k \times \texttt{Spread}(X, L)\right) dX \end{eqnarray} Hence, the true image positions $\{X\}$ can be introduced as extra nuisance parameters to be estimated, and then we can sample the distribution under the integral sign in Equation~\ref{qsopost}. The small centroiding uncertainties of $\sim$0.003 arcseconds were taken to specify the standard deviations of Gaussian probability distributions for $p(X|x)$. The reader may wonder whether it would be more correct to introduce the unknown source plane position of the quasar as the nuisance parameters, calculate the image positions using the lens model, and use the $x$'s and their error bars to define a likelihood function. Whilst there is nothing wrong with this approach, it does involve the computationally challenging task of inverting the lens equation (Equation~\ref{lenseqn}) to find the image positions. Our source plane spread approach is much easier to implement computationally, but relies on the unconventional step of directly assigning a posterior probability distribution: $p(X|x)$. \subsection{Results} To implement the inference described in the previous section, a Metropolis-Hastings sampler was written to target the posterior distribution of Equation~\ref{qsopost}. Unfortunately, this simple implementation had serious drawbacks.
The joint posterior distribution for the lens parameters and true quasar image positions consists of long, thin, curved tunnels of high probability, and most standard sampling techniques have very poor mixing properties when sampling from such highly correlated distributions. To overcome this challenge, we used a different sampling technique, Linked Importance Sampling (LIS) \citep{lis}. LIS produces independent weighted samples from the target probability distribution. It only requires that we can independently sample from a simple distribution (e.g. the prior) and can define valid MCMC transitions with respect to distributions that are in some sense intermediate between the prior and posterior. For example, the common `annealing' approach of raising the likelihood to some power ($<$ 1) can be used within the LIS framework. The fact that each LIS run produces an {\it independent} sample from the target distribution makes it an attractive option for sampling from highly correlated distributions. The only possible pitfall is that the weights can vary significantly, such that the sample is completely dominated by only one or two points with large weight. We performed this analysis twice: first with the conservative priors on the central position of the lens (the priors in Table~\ref{lenspriors}). For comparison, we ran the algorithm with much more informative priors on the central position $(x_c,y_c)$ of the lens, an uncertainty of 1 pixel or $\pm 0.049$ arcseconds. The results are shown as the blue and red curves in Figure~\ref{comparison}. Far from uniquely determining the lens model, the quasar images have only managed to give moderately strong constraints on the overall strength of the lens ($b$) and the angle of orientation of the external shear ($\theta_{\gamma}$). For all other parameters, the marginal distributions are very wide, in some cases nearly as wide as the priors, so the quasars have provided only a small amount of information about them.
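A standard diagnostic for the weight-degeneracy pitfall mentioned above (not named in the text, but commonly used with importance sampling) is the Kish effective sample size:

```python
def effective_sample_size(weights):
    """Kish effective sample size, (sum w)^2 / sum(w^2): close to
    len(weights) when the weights are even, close to 1 when a single
    point dominates the weighted sample."""
    s = sum(weights)
    return s * s / sum(w * w for w in weights)
```

A weighted sample whose effective size is near 1 carries almost no usable information, which is the failure mode described for LIS when the weights vary by orders of magnitude.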
In the seven-dimensional space of the lens parameters, the quasar data yield a posterior distribution that contains long, narrow tunnels: while the volume of possible lens models is significantly decreased, the degeneracies inherent in lensing prevent precise inference about any single parameter. With the stronger prior information about the central position (red curves), the marginal probability distributions tighten substantially, but not enough to give lens parameter estimates reliable enough for a trustworthy source reconstruction. In the next section, we use a simplistic model for the extended source in an attempt to get a good starting estimate for the lens model parameters. This estimate will then be used in the final run (Section~\ref{pixellated}) with a pixellated source plane. \begin{figure*} \begin{center} \includegraphics[scale=0.75]{comparison.eps} \end{center} \caption{Comparison of the inferred lens parameters (estimated marginal probability densities, unnormalised) based on three data sets: (1) The quasar image positions and uncertainties, with a weak prior distribution for the central position of the lens (blue curves). (2) The quasar image positions and uncertainties, with a strong prior ($\pm $ 1 pixel) on the central position of the lens (red curves), and (3) The entire image, and a pixellated source model, run at a temperature T=10 (black curves). Note that the parameter spaces for the two angles $\theta$ and $\theta_{\gamma}$ are periodic with period $\pi$.\label{comparison}} \end{figure*} \subsection{Simply-Parameterised Modelling of the Extended Source}\label{simple} Rather than using pixels from the outset, we first modelled the source as the sum of six elliptical Gaussian-like (Sersic) profiles, with unknown brightness, orientation, central position, slope and ellipticity for each. Similarly, the lensing galaxy light profile was modelled as two elliptical Sersic profiles in the image plane.
This simplified model reduces the dimensionality of the problem from thousands to 36, and makes it much more likely that a simple search algorithm can find something close to the optimal values for the lens model parameters. We used a simple Metropolis algorithm to derive estimates and uncertainties on the lens parameters. This estimate was used as a starting point for the chains in Section~\ref{pixellated}, where we use a pixellated source. A typical simple source model from the sample is shown in Figure~\ref{mickey}. Within this parameterisation, the uncertainties about the source are very small and Figure~\ref{mickey} can essentially be interpreted as the unique source reconstruction. The scales on the axes are the same for the source plane and the image plane, so an idea of the magnification can be obtained visually. Given this model, the data favour a highly complex source (since the blobs do not overlap to the extent that they become indistinguishable), lensed by a slightly elliptical lens whose centre is located close to the centre of the observed lensing galaxy. The individual Sersic profiles in the source shown in Figure~\ref{mickey} have been colour coded and labelled, making identification of the corresponding images easier. These labels will be used throughout the paper to refer to specific substructures in the source, and their corresponding images. Comparing the image in Figure~\ref{mickey} to the observed one in the right hand panel of Figure~\ref{data}, we see that all of the basic structures observed have been reproduced by this simple model. However, the simply-parameterised model cannot reproduce the exact shapes of the observed images. For example, the part of the source labelled A should contain substructure, because we can see that the simple model has predicted a continuous image A1, yet the actual data contain blank gaps in some parts of that image.
Another example is that image B2 has irregular brightness variations along its length, something the simply parameterised source model cannot reproduce. There are also some very faint additional structures that have been missed, such as a third faint inner ring below A2, which could be a continuation of the image D1. These differences can also be seen in the residuals in Figure~\ref{resid_simple}. The bright ring (images C1 and C2) passes through the quasar image positions, so the quasar source is located inside source component C, and, since the quasar has been imaged 4 times, within the diamond caustic. However, due to uncertainty in the lens parameters, the quasar source cannot be located more accurately than this in the source plane. Source component C is moderately elliptical and is about 0.2 arcseconds in length, corresponding to a physical length scale of $\sim$ 1400 pc. The estimated magnifications for each component of the source are as follows: A (12.3), B (12.2), C (20.0), D (3.9), E (7.5), F (4.8), Quasar (45.0). \begin{figure*} \begin{center} \includegraphics[scale=0.6]{colour_labelled.eps} \end{center} \caption{Reconstruction of the extended source with six elliptical Gaussian-like Sersic profiles, shown on the same scale as the observed image. Most of the basic features of the observed image have been reproduced, but not their detailed shapes. The differences tell us where we should expect to see some additional substructure in the pixellated source model (see text). The positions of the caustics and critical curves are shown in white, although there should really be a small amount of uncertainty about their positions due to the uncertainty in the lens parameters.\label{mickey}} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.7]{resid_simple.eps} \end{center} \caption{Model-predicted lensed, blurred image, using a typical model sampled from the posterior distribution, and the simply-parameterised source model.
On the right are the standardised residuals. While the basic features of the observed image are reproduced, there are details that have not been well modelled by the simply-parameterised source.\label{resid_simple}} \end{figure*} \section{Pixellated Source Model}\label{pixellated} To obtain high resolution imaging of the source, we divided the source plane into a grid of 200$\times$200 pixels, each having a width of 0.015 arc seconds. The MCMC algorithm used is almost the same as that of \citet{2006ApJ...651....8B}, where it is described in greater detail. Starting from a blank source, proposal changes are made that either add a bright ``atom'' of light to the source plane or remove an existing atom. Each atom has a position in the source plane indicating which pixel it is in, as well as a positive brightness with an exponential prior distribution. Small proposal transitions can be made which slightly adjust the parameters of an atom, in accordance with our chosen prior for these parameters. If the proposed source is a better fit, it is accepted; if it is a worse fit, it is accepted with probability equal to the ratio of the proposed likelihood to the current likelihood; this is just the standard Metropolis algorithm \citep{gregory} with a Massive Inference prior distribution \citep{massinf} for the source. The prior expected value for an atom's flux can either be specified in advance, or incorporated as an additional hyperparameter to be estimated. The latter approach is attractive and has been used in the context of Gaussian priors for the source pixel values \citep{2006MNRAS.371..983S}. We chose the former approach for increased computational efficiency, and found that different values for this hyperparameter did not significantly change the final reconstruction, provided that the value was not so low that the reconstructed sky was bright or so high that the model could not detect the presence of structures that are obviously real.
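The acceptance rule described above is, schematically (working in log space for numerical stability; the optional temperature argument implements the annealed runs used elsewhere in the paper, and the function name is ours):

```python
import math, random

def metropolis_accept(loglike_current, loglike_proposed, T=1.0, rng=random):
    """Standard Metropolis rule, optionally at annealing temperature T:
    always accept a better fit; accept a worse fit with probability
    (L_proposed / L_current)^(1/T), evaluated in log space."""
    log_ratio = (loglike_proposed - loglike_current) / T
    return log_ratio >= 0 or math.log(rng.random()) < log_ratio
```

With $T = 1$ this is the standard Metropolis algorithm; larger $T$ makes the chain more tolerant of worse fits.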
A straightforward implementation of any MCMC method is highly inefficient for the problem we have just posed. This is because we need to be able to explore the marginal posterior distribution (marginalising over possibly thousands of source parameters) for the lens parameters with reasonable efficiency. If we just alternated between source updates and lens updates, the lens parameters would only change in extremely small steps (typically $\sim 10^{-6}$, since that is as far as the data and the current source allow), so the lens model would explore the marginal distribution for the lens parameters very slowly. Unfortunately, LIS was found to be infeasible (the weights varied by orders of magnitude) due to the massive number of source parameters used. Hence, there was no realistic option other than to fix the lens model at a reasonable value and run the MCMC for the source variables only. We fixed the lens at the best values obtained from the simply-parameterised extended source model, and then ran a slow ``annealing'' \citep{annealing} simulation to determine a good lens model, which was then fixed, and the source space was explored at an annealing temperature of 1, to sample from the posterior distribution for the source pixel values given the lens. A large number of simulations with slightly different lens parameter values were run to verify that the source reconstruction is insensitive to slight uncertainties in the lens parameters. The results are presented in the next section. \section{Results} \subsection{Source} A final estimate for the source is shown in Figure~\ref{results1}, which is obtained by averaging all sources encountered by the Markov chain. Although the diversity of the samples encountered is an accurate representation of the level of uncertainty about any pixel values, it is inconvenient to present a large sample of images. Additionally, the spiky fluctuations caused by the Massive Inference prior remain in the sampled sources.
Taking the mean of all of the sources provides a single estimate of the source profile that is optimal in the sense of minimising the expected square error; it also creates a more visually appealing smooth reconstruction in which all of the spiky fluctuations in the individual samples have been averaged out. It is also quite natural for uncertain areas of an image to appear blurred, and for viewers to interpret a smooth region as possibly arising from a more complex underlying structure. \begin{figure*} \begin{center} \includegraphics[scale=0.7]{mean.eps} \end{center} \caption{Posterior mean source, with fixed lens parameters. On the right is a greyscale version of the original reconstruction by C06, for comparison. Note that the C06 reconstruction used data in three filters, which partially accounts for its more diffuse appearance: the compact substructures are most notable in the 555nm image (the subject of this paper).\label{results1}} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.7]{resid.eps} \end{center} \caption{Model-predicted lensed, blurred image, using a typical model sampled from the posterior distribution, and the pixellated source model. On the right are the standardised residuals.\label{resid}} \end{figure*} The positions of the major bright substructures in the source (Figure~\ref{results1}) are in agreement with those found by C06. However, our reconstruction presents a clearer view of the compact central source C, where the quasar resides. In the simply parameterised case, source C was found to be elliptical. In the pixellated reconstruction, this part of the source has an elliptical component but with extra nearby structures that follow the caustic. The images of this extra structure lie within images C1 and C2.
This suggests that the elliptical component is sufficient to explain the position of the ring C2 but not its brightness; the algorithm tries to account for this by adding extra source flux in regions that have images within C2 but only within those parts of C1 that have been masked out. It seems more plausible that image C1 is more affected by dust absorption than C2, rather than the source coincidentally following the caustic. In principle, we could parameterise the unknown dust profile of the lens galaxy and simultaneously estimate this from the data (as done by e.g. \citet{2008arXiv0804.2827S}), but this is beyond the scope of the present paper. The predicted gap in the middle of source component A is also clearly present in our reconstruction, whereas it is much less clear, if present at all, in the reconstruction by C06. Source component E also contains some substructure of its own. In addition, the surface brightness contrast between these features and the diffuse background is much greater in our reconstruction, although this is simply because we are focusing on the 555nm image. At other wavelengths, the compact substructures are less pronounced. To reduce these systematic effects, we repeated the simulations at an annealing temperature of 10, to allow freer exploration of the parameter space. This is an ad-hoc device that is not as justified as explicit modelling of dust \citep{2008arXiv0804.2827S}, but is helpful nonetheless. At a temperature of 10, it also becomes computationally feasible to free the lens parameters. The results, shown in Figure~\ref{temp10}, still show clear images of the bright, compact substructures that are present in the source. These bright substructures account for all of the bright images that are visible in the data; the diffuse emission that the T=10 analysis misses is caused by a thicker, fainter ring (image E2) that is not the most striking feature of the image. 
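Raising the temperature amounts to targeting a tempered posterior, in which the likelihood is raised to the power $1/T$; schematically (function name ours):

```python
def tempered_log_posterior(log_prior, log_likelihood, T):
    """Annealed target distribution: the likelihood raised to the power 1/T,
    i.e. the log-likelihood divided by T. T = 1 recovers the true posterior;
    larger T flattens the likelihood, allowing freer exploration."""
    return log_prior + log_likelihood / T
```

At $T = 10$ the data pull on the model a tenth as strongly, which is what makes the chain more permissive about fluxes while still constraining positions and shapes.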
The reconstruction produced by the high temperature run still contains source plane structure that follows the caustic. This suggests that increasing the temperature has not completely negated the effect of dust. However, the raised temperature simulation still results in model images that reproduce the positions and shapes of all of the details in the observed Einstein ring, while being more permissive about their fluxes. Thus, the source reconstruction in Figure~\ref{temp10} is a good model for the positions and (excluding the central source C) shapes of the source galaxy substructures. \begin{figure*} \begin{center} \includegraphics[scale=0.6]{temp10.eps} \end{center} \caption{Mean source encountered by a chain run at a temperature of 10. This allows the chain to explore the parameter space more freely, reducing the effect of unmodelled systematics present in the Temperature=1 reconstruction (albeit by discarding information). Most of the substructures are still sharply resolved. On the right, the posterior probability that any pixel is nonzero is plotted. White corresponds to 0, while black corresponds to 1. While it is very difficult to be confident about any particular pixel (due to the finite information content of the image), groups of pixels with consistently high values of this probability represent a secure detection.\label{temp10}} \end{figure*} This system is probably one of the most complex Einstein Rings known, with a spectacular number of distinct extended images. Hence, it is unsurprising that the source galaxy morphology is also very complex. The redshift of the source ($z=0.658$) implies that this light was emitted in the near UV (wavelength of 335 nm), suggesting that these structures are mapping the star forming regions of the source galaxy. With the assumed cosmology, the length scale in the source plane is $\sim$ 6.959 kpc/arcsecond. 
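Under the assumed flat concordance cosmology, the quoted scale can be reproduced with a short numerical integration (the function name is ours):

```python
import math

# Concordance cosmology assumed in the text:
# Omega_m = 0.27, Omega_Lambda = 0.73, H0 = 71 km/s/Mpc
OMEGA_M, OMEGA_L, H0 = 0.27, 0.73, 71.0
C_KM_S = 299792.458  # speed of light, km/s

def kpc_per_arcsec(z, n=10000):
    """Physical scale at redshift z for a flat universe: the angular diameter
    distance D_A = D_C / (1 + z), with the comoving distance D_C obtained by
    trapezoidal integration of (c / H0) / E(z')."""
    E = lambda zp: math.sqrt(OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L)
    dz = z / n
    integral = sum(0.5 * (1.0 / E(i * dz) + 1.0 / E((i + 1) * dz)) * dz
                   for i in range(n))
    d_a_mpc = (C_KM_S / H0) * integral / (1.0 + z)
    return d_a_mpc * 1000.0 * math.pi / (180.0 * 3600.0)  # kpc per arcsecond
```

Evaluating this at $z = 0.658$ recovers the $\sim$ 6.96 kpc/arcsecond scale quoted above.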
The bulk of the galaxy is just over 1 arc second across, so the entire source is about 8 kpc across; the source is a medium-sized galaxy. Compared to C06, our reconstruction is devoid of a large amount of extended emission. This is probably caused by some light from the lens galaxy being attributed to the source in their analysis. The surface brightness ratio of the compact structures to the diffuse emission can be estimated visually; the contrast is much stronger in our reconstruction than it appears to be in the C06 reconstruction. There is also evidence for a companion dwarf galaxy 2.4 kpc away and roughly 700 pc in diameter, to the left of the main galaxy (source F). We have also found that the quasar resides within an elliptical ($q\sim 0.5$) region of young stars that is about 1400 pc in extent. \subsection{Lens Parameters} When running the MCMC simulation at a temperature of 10, inferring the lens parameter uncertainties becomes computationally feasible. Thus, we can compare the lens constraints derived from the extended images with those derived from the point-like images. The marginal posterior distributions for the lens parameters, with a pixellated source, are shown as the black curves in Figure~\ref{comparison}. This shows clearly that the extended images provide significantly stronger constraints on the lens model parameters in this system, in contrast to the claim made by C06 that the opposite is true, and in agreement with \citet{2001ApJ...547...50K}. Note that the distributions for $b$ inferred from the quasar astrometry and the extended image reconstruction overlap only slightly. 
This is because the very small quoted astrometric uncertainties of 0.003'' (less than 1/10th of an image pixel) do not take into account known systematic effects such as the presence of background flux from the lens galaxy and the Einstein Ring, the fact that the peak of the PSF is not really a Gaussian, and the fact that the lens is not {\it really} a PIEP+$\gamma$. This small disagreement means that the parameters inferred from the extended images are only marginally capable of giving quasar image positions correctly to within 0.003''. They are, however, capable of reproducing the positions to within a relaxed tolerance of $\sim$ 1/3 of a pixel. The axis ratio $q$, describing the ellipticity of the lens potential, is found to be 0.935 $\pm$ 0.005. Clearly, the data rule out $q=1$ and therefore Singular Isothermal Sphere + $\gamma$ models (this is still the case even when only the quasar data is used). If we had assigned a delta function of prior probability at $q=1$, the data would downgrade its posterior probability significantly when compared to any realistic diffuse prior for $q$. The mass of the lens can be calculated from these results. For the purposes of this calculation, the lens is approximated as circular. For an isothermal sphere, the deflection angle at the Einstein radius of the ring (now equal to $b$) is simply $b$. For a point mass at the origin, the required mass to produce the same deflection would be $b^2$. Since lensing obeys Gauss' law, $b^2$ must also be the amount of mass enclosed within the ring. An alternative approach would be to calculate the nondimensional density, which is proportional to the 2-D Laplacian of the lens potential \citep{book}. In scaled units, the total mass of the lens is therefore $b^2=3.205 \pm 0.01$, where the uncertainty was found, as usual, by considering an ensemble of lens models from the MCMC output and calculating the mass for each. 
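The uncertainty propagation described above is straightforward to reproduce: push each MCMC sample of $b$ through $M=b^2$ and read off the spread of the resulting ensemble. The sketch below substitutes synthetic Gaussian samples for the real chain output (their mean and width are chosen purely for illustration, to match the quoted $b^2 = 3.205 \pm 0.01$):

```python
import numpy as np

# Hypothetical stand-in for the MCMC samples of the Einstein radius b;
# a real analysis would use the chain output directly.
rng = np.random.default_rng(0)
b_samples = rng.normal(loc=1.7903, scale=0.0028, size=20000)

# Since lensing obeys Gauss' law, the mass enclosed within the ring is b^2
# (in scaled units); propagate the whole ensemble through this relation.
mass_samples = b_samples**2
print(f"b^2 = {mass_samples.mean():.3f} +/- {mass_samples.std():.3f}")
```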
However, systematics introduced by the approximations in computing this value will probably overwhelm this quoted uncertainty. The mass unit that would give an Einstein Radius of 1 arc second needs to be computed to convert this figure into physical units. When this is done, the estimated mass of the lensing galaxy (within the ring; the total mass outside of this is very poorly constrained by lensing) is found to be $(6.95 \pm 0.02)\times 10^{11}$ $M_0$. With the isothermal assumption, the velocity dispersion is $\sim$ 350 km s$^{-1}$. From the lens galaxy light profile parameters, we find that the flux of the lens galaxy (within the Einstein Ring) is close to 50\% of the flux of the brightest QSO image, which has a magnitude of 17.74 in the 555nm filter. At a redshift of $z$=0.295, the luminosity distance to the lens (with our assumed cosmology) is 1510 Mpc. Thus, the average (within the ring) mass to light ratio of the lens galaxy is found to be 8.8 $M_0/L_0$. The lens potential is only slightly elliptical and is centred near the centre of the lens galaxy, if the centre is defined by the brightest pixel of the galaxy core. All of these conclusions are similar to those made of ER 0047-2808, and are probably typical features of Einstein rings and all other systems with single galaxy lenses. This lends more support to the often used assumption that the centre of a lens galaxy's light profile is also the point at which the lens model should be centred. It remains unclear how the galaxy light profile information could be taken into account in a more complex kiloparametric model of the lens; that is a topic for further research. The lens light profile parameters are presented in Table~\ref{lenslight}. \begin{table*} \begin{center} \caption{Lens galaxy light profile parameters, and the noise parameters. When the dimensions are those of surface brightness, the units are 1/20th of the flux of the brightest QSO image (mag 17.74) per square arcsecond. 
Length units are arcseconds.}\label{lenslight} \begin{tabular}{lcc} \hline Parameter & Component 1 (compact) & Component 2 (diffuse)\\ \hline Peak Brightness & 2.99 $\pm$ 0.31 & 0.283 $\pm$ 0.007\\ Ellipticity & 0.85 $\pm$ 0.01 & 0.80 $\pm$ 0.02\\ x Central Position & -0.166 $\pm$ 0.004 & 0.45 $\pm$ 0.03\\ y Central Position & 0.156 $\pm$ 0.003 & 0.07 $\pm$ 0.01\\ Scale Radius & 0.33 $\pm$ 0.03 & 2.65 $\pm$ 0.03\\ Angle of Orientation & 3.6 $\pm$ 2.4$^\circ$ & -16 $\pm$ 2$^\circ$\\ Slope $\alpha$ & 1.3 $\pm$ 0.1 & 2.70 $\pm$ 0.14\\ $\sigma_a$ & 0.0479 $\pm$ 0.0007\\ $\sigma_b$ & 0.009 $\pm$ 0.002 \\ \end{tabular} \medskip\\ \end{center} \end{table*} A preliminary investigation of the time delays predicted by our lens model suggests that it does not exactly reproduce the time delays measured by \citet{timedelay}. Since this system exhibits significant microlensing \citep{2008arXiv0805.4492C, 2008RMxAC..32...83S}, the time delay measurements are uncertain, but it is possible that the PIEP+$\gamma$ lens model will be ruled out by further observations. This would not be catastrophic for the present study, for several reasons. Firstly, source reconstructions tend to be insensitive to slight misspecification of the lens model \citep[e.g.][]{wayth}. Secondly, all parameterisations are false. We already know from prior information that the lens is not {\it really} a PIEP+$\gamma$ model. All modelling can only consider a single ``slice'' through a full hypothesis space, and the conclusions reached on that slice may or may not be representative of the full space. They often are, but there are never any guarantees. \section{Conclusions and Further Work} In this paper, we have presented a detailed gravitational lens reconstruction of the optical extended source in the Einstein Ring RXS J1131-1231. The source is a medium sized galaxy ($\sim$ 8 kpc in visible extent) with several compact bright emission regions. 
The substructures we found are in general agreement with those found by C06 in terms of their position, but we have shown that they are brighter and more compact than was previously determined. In addition, our reconstruction provides a clearer view of the substructures, including near the central regions of the source. The quasar resides in a bright emission region with an extent of $\sim$ 0.15 arcseconds or 1 kpc. It should be noted that the wavelength of the observations in the rest frame is 335 nm, so this reconstruction traces regions of recent star formation in the source galaxy. We have also directly compared point-like images versus extended images with regard to how well each is able to constrain the lens model. We found that there is a significant gain to be made in taking into account all of the information from the extended images. It has been suggested that this is not true in general \citep{2007arXiv0710.3159F}, although it really depends on the resolution and number of extended images, which in this case is high. Certainly, in using both, there is nothing to lose but CPU cycles. This system has the potential to become one of the most well-constrained gravitational lenses, with multiple images of the extended ring, quasar image positions and flux ratios in multiple bands, and time delay measurements available \citep{timedelay, keeton}. Hence, it should be possible to carry out a detailed kiloparametric study of its mass profile to shed some light on the dark matter halo of the lens galaxy. This paper was based on a single image of this system, the 555nm ACS image. Other HST images at different wavelengths (814nm, 1.6$\mu$m) are available (C06) and can further constrain the lens model. Simultaneous multi-wavelength reconstructions are now becoming routine \citep[e.g.][]{2007ApJ...671.1196M}. 
However, all of the structures in these images are in the same locations, and so a multi-wavelength reconstruction would not produce significantly different conclusions to those reached here. C06 note that in the near infrared image, the compact bright images are less pronounced compared to the diffuse background, which is what would be expected if the substructures are regions dense in hot young stars. This study has relied on a number of common assumptions that future research will seek to relax. Extending lens reconstruction techniques to incorporate kiloparametric models of the source and the lens simultaneously is an ambitious task, but some steps are already being taken in that direction \citep{2008arXiv0804.2827S}. Flexible lens modelling plus information from time delay measurements and other sources would be extremely valuable for studies of galaxy dark matter haloes. Also, explicit modelling of dust absorption by the lens galaxy is proving to be an important ingredient in the inversion of Einstein Rings and would be an essential part of future work on this system. \section*{Acknowledgments} BJB thanks Olivia Ross for encouragement, and Dennis Stello for allowing him to use his fast computer. This research is supported under the Australian Research Council's Discovery funding scheme (project number DP0665574) and is part of the Commonwealth Cosmology Initiative (CCI). The authors would like to thank Jean-Francois Claeskens and Dominique Sluse for providing the ACS data. The constructive comments of the anonymous referee helped us to improve the paper significantly.
\section{Introduction} The electron localization in disordered systems \cite{1} is responsible for a broad variety of transport phenomena experimentally observed in mesoscopic systems: the non-Ohmic behavior of electron conductivity, weak localization, universal conductance fluctuations, and strong electron localization. \cite{MacKK,Janssen} Localization arises in systems with a random potential. Let us consider the time evolution of a quantum particle located at a time $t=0$ in a certain small area of the sample. For $t>0$, the electron wave function is scattered by spatial inhomogeneities (spatial fluctuations of the potential). Multiple reflected components of the wave function interfere with each other. As Anderson \cite{1} proved, this interference can halt the propagation. \cite{ziman,ATAF,WaveP,2} As a result, the wave function is non-zero only within a specific area, determined by the initial electron distribution, and decays exponentially as a function of the distance from the center of localization. The probability to find an electron in its initial position is non-zero for any time $t$, even when time increases to infinity, $t\to\infty$. Similarly to a quantum bound state, the spatial extent of the localized wave function is finite. However, the physical origin of the localization differs: a bound particle is trapped in a potential well, while localization results from the interference of various components of the wave function scattered by randomly distributed fluctuations of the potential. Localized electrons cannot conduct electric current. Consequently, the probability of electron transmission, $T$, through a disordered system decreases exponentially as a function of the system length $L$: $T\propto \exp(-L/\xi)$. The length $\xi$ is called the localization length. Materials that do not conduct electric current due to electron localization are called Anderson insulators. 
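Since $T\propto\exp(-L/\xi)$, the localization length can be read off from the slope of $\ln T$ versus the system length $L$. A minimal sketch using synthetic transmission data (in practice one would average $\ln T$ over many disorder realizations):

```python
import numpy as np

# Synthetic transmission data obeying T = T0 * exp(-L/xi) with xi = 25
# (in units of the lattice constant); a stand-in for simulated values.
xi_true = 25.0
L = np.array([50, 100, 150, 200, 250], dtype=float)
T = 0.9 * np.exp(-L / xi_true)

# ln T is linear in L with slope -1/xi; fit a line and invert the slope.
slope, intercept = np.polyfit(L, np.log(T), 1)
xi_est = -1.0 / slope
print(f"estimated localization length: {xi_est:.1f}")
```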
In spite of significant theoretical effort, our understanding of electron localization is still not complete. Rigorous analytical results were obtained only in the limit of weak randomness, where perturbation theories are applicable. \cite{LSF,DMPK,PNato,Been} In the localized regime, we do not have any small parameter, so no perturbation analysis is possible. Also, as we will see below, the transmission of electrons is extremely sensitive to changes of the sample properties. In particular, a small local change of the random potential might change the transmission amplitude by many orders of magnitude. Clearly, an analytical description of such systems is difficult. Fortunately, it is rather easy to simulate the transport properties numerically. In fact, much of the quantitative data about electron localization has been obtained numerically. \cite{PS,McKK,KK,2} In this paper we describe two simple numerical experiments which demonstrate the key features of quantum localization. In Section \ref{model} we introduce the Anderson model, the simplest model for the study of electron propagation in a two-dimensional system with a random potential. In Sections \ref{diffusion} and \ref{localization} we show how randomness influences the ability of an electron to propagate over large distances. We solve numerically the time-dependent Schr\"odinger equation and confirm that after a certain time electron diffusion ceases. The electron becomes spatially localized in a certain part of the disordered lattice. This numerical experiment reproduces Anderson's original problem.\cite{1} In Section \ref{path} we simulate a scattering experiment. We consider an electron approaching the disordered system from outside, and calculate the amplitude of the transmission through the sample. 
Since the transmission depends on the actual realization of the random potential, we can, by a small local change of the potential, estimate the probability that the electron propagates through any given sample area. In this way, we investigate the spatial distribution of the electron inside the disordered sample. For weak disorder, we find that the electron is homogeneously distributed throughout the sample. On the other hand, in the localized regime we show that electrons propagate through a narrow spatial channel across the sample. Although this channel resembles the trajectory of a classical particle, we argue that the electron still behaves as a quantum particle. We demonstrate the wave character of the electron propagation by a simple numerical experiment. Both numerical experiments confirm the main feature of electron localization: it has its origin in the wave character of quantum particle propagation. There is no localization phenomenon in classical mechanics. \section{The model}\label{model} The left panel of Figure \ref{f-1} represents a two-dimensional lattice created by a regular arrangement of atoms. We consider one electron per atom and define its local energy $\epsilon(\vec{r})=\epsilon_0$. If the electronic wave functions of neighboring atoms overlap, electrons can propagate through the lattice. The periodicity of the lattice creates a conduction band, $\epsilon_0-4V\le E\le \epsilon_0+4V$, \cite{economou} where $V$ is given by the overlap of electron wave functions located on neighboring sites. A disordered two-dimensional lattice is shown in the right panel of Fig. \ref{f-1}. Now, lattice sites are occupied by different atoms. Therefore, both the energy of the electron on a given site, $\epsilon(\vec{r})$, and the hopping term between two neighboring atoms, $V(\vec{r}-\vec{r'})$, become position-dependent. 
In our analysis we assume that the energies $\varepsilon(\vec{r})$ are randomly distributed according to the box probability distribution, $P(\epsilon) = 1/W$ if $-W/2\le \epsilon<W/2$, otherwise $P(\epsilon)=0$. We also require these random energies on different sites to be statistically independent and assume that the hopping amplitude $V(\vec{r}-\vec{r'})\equiv V$. Although such a random lattice is rather unrealistic, it imitates all physical features of a disordered electron system. The random energies $\varepsilon(\vec{r})$ simulate the random potential. Thus, our random model is characterized by two parameters: $W$ represents the strength of the disorder and $V$ determines the hopping amplitude. Note that $V$ defines the energy scale, so we have only one parameter: the ratio $W/V$, which we use as a measure of the strength of the disorder. Let us assume that at a time $t=0$ a quantum particle is located at the position $\vec{r}_0$. The initial wave function is \begin{equation}\label{dvax} \Psi(\vec{r},t=0) = \delta(\vec{r}-\vec{r}_0). \end{equation} We want to estimate the probability of the electron still being in its original position after an infinite time $t\to\infty$. The time evolution of the electron wave function is determined by the Schr\"odinger equation, \begin{equation}\label{ham} i\hbar\displaystyle{ \frac{\partial\Psi(\vec{r},t)}{\partial t}} = \epsilon(\vec{r}) \Psi(\vec{r},t) + V\sum_{\vec{r'}} \Psi(\vec{r'},t), \end{equation} where the sum runs over the nearest neighbors, $|\vec{r}-\vec{r'}|=a$, and $a$ is the lattice constant. Equation (\ref{ham}) defines the Anderson model. Let us first consider the case of zero disorder, $W=0$. An electron located at time $t=0$ in a specific lattice site will diffuse to the neighboring sites. In the limit of infinite time $t\to\infty$, the electron will occupy all sites of the lattice. Consequently, the probability to find it in its original position equals zero (or, more accurately, it is approximately proportional to $1/$(lattice volume)). 
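The Anderson model of Eq. (\ref{ham}) is simple to set up numerically. The sketch below (an illustrative implementation, not the code used in the paper) builds the Hamiltonian matrix on a small $L\times L$ lattice with box-distributed site energies; for $W=0$ and $\epsilon_0=0$ its eigenvalues fall inside the clean band $[-4V,4V]$:

```python
import numpy as np

def anderson_hamiltonian(L, W, V=1.0, seed=None):
    """Anderson Hamiltonian on an L x L square lattice (open boundaries).

    Site energies are drawn from the box distribution on [-W/2, W/2);
    the hopping amplitude V couples nearest neighbours only.
    """
    rng = np.random.default_rng(seed)
    n = L * L
    H = np.diag(rng.uniform(-W / 2, W / 2, size=n))
    for x in range(L):
        for y in range(L):
            i = x * L + y
            if x + 1 < L:                    # neighbour at (x+1, y)
                H[i, i + L] = H[i + L, i] = V
            if y + 1 < L:                    # neighbour at (x, y+1)
                H[i, i + 1] = H[i + 1, i] = V
    return H

# Clean lattice (W = 0): the spectrum lies within the band [-4V, 4V].
E = np.linalg.eigvalsh(anderson_hamiltonian(L=10, W=0.0))
print(E.min(), E.max())
```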
\begin{figure} \begin{center} \includegraphics[width=7cm,clip]{Fig1.png} \end{center} \caption{ Left: A regular two-dimensional lattice is a periodic arrangement of identical atoms in a rectangular lattice. Right: A disordered lattice whose sites are randomly occupied by different atoms. The closest distance between two neighboring atoms is $a$.} \label{f-1} \end{figure} In a disordered lattice, $W\ne 0$, electron propagation depends on the strength of the disorder. Intuitively, one expects a very weak disorder not to affect the diffusion considerably, but a sufficiently strong disorder should stop the diffusion. Then there should be a critical value $W_c$: diffusion continues forever when $W<W_c$ but ceases when $W>W_c$. In the original paper \cite{1} Anderson derived the equation for the critical disorder as \begin{equation}\label{ek} \displaystyle{\frac{W_c}{V}} = 2eK\ln (eK). \end{equation} According to Eq. (\ref{ek}), the critical disorder depends only on the lattice connectivity $K$ (the number of nearest neighbor sites). Nowadays, we know \cite{wegner,AALR} that the dimension $d$ of the lattice is a more important parameter. In the absence of a magnetic field and of electron spin, all states are localized in disordered systems with dimension $d\le d_c = 2$. Therefore, the critical disorder $W_c=0$ for $d=2$ and is non-zero in systems with higher dimensionality $d>2$. \section{Diffusion}\label{diffusion} Now we demonstrate Anderson's ideas in a numerical simulation. We examine how the electron diffuses in the disordered lattice, defined by Eq. (\ref{ham}). The size of the system is $L\times L$, where $L=2048a$ for weakly disordered samples and $L=1024a$ for systems with a stronger disorder ($W/V>4$). \begin{figure} \begin{center} \includegraphics[width=6.0cm,clip]{Fig2.png} \end{center} \caption{(Color online) The quadratic displacement $\langle r^2(t)\rangle$ (in units $a^2$) as a function of time $t$. Time is measured in $\hbar/V$. 
The size of the system is $L\times L$ where $L=2048a$ ($L=1024a$ for $W/V=6$). Note the logarithmic scale of both axes. For weak disorder, we expect the electron to diffuse, so that $\langle r^2(t)\rangle = 2D t$, in accordance with Eq. (\ref{dif}). Numerically, we find that $\langle r^2\rangle = 2Dt^\alpha$ with $\alpha = 1.004$ for disorder $W/V=1$ and $\alpha = 0.98$ for $W/V=2$. The corresponding diffusion constants are $D=25.7$ and $9.1$ (in units $a^2V/\hbar$). Only the data for times $t<4000 \hbar/V$ were used for $W/V=1$, since at longer times the electron could reach the edge of the sample. The dashed line represents the limit $\langle r^2\rangle_{\rm max}=L^2/6$, given by Eq. (\ref{max}). For stronger disorder, the time evolution of the wave function is not diffusive. We find the exponents $\alpha \approx 0.82$ ($W/V=4$) and $\alpha\approx 0.39$ ($W/V=6$). } \label{w4-d} \end{figure} \begin{figure} \begin{center} \includegraphics[width=6.0cm,clip]{Fig3.png} \end{center} \caption{(Color online) The same data as in Fig. \ref{w4-d}, but on a linear scale. Only the data for small disorder are shown. Note that for $W/V=1$, $\langle r^2(t)\rangle$ is linear only when $t< 4000 \hbar/V$; beyond this time the electron reaches the edge of the sample. } \label{dva} \end{figure} \medskip First, we need to define the initial wave function $\Psi(\vec{r},t=0)$. A more suitable candidate than the $\delta$-function (\ref{dvax}) is any eigenfunction of the Hamiltonian defined on a small sublattice (typically of the size $24a\times 24a$) located in the center of the sample.\cite{ohtsuki} Usually we chose the eigenfunction which corresponds to the eigenenergy closest to $E=0$ (the middle of the conduction band). To see how the initial wave function develops in time $t>0$, we solve the Schr\"odinger equation (\ref{ham}) numerically and find the time evolution of the wave function $\Psi(\vec{r},t)$. 
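The time evolution itself is computed with an alternating-direction implicit scheme (Appendix A). As an illustration of the same idea on a toy scale, the sketch below uses the closely related Crank-Nicolson step, $(1+iH\,\delta t/2\hbar)\Psi^{t+\delta t}=(1-iH\,\delta t/2\hbar)\Psi^{t}$, which is likewise unconditionally stable and conserves the norm of the wave function exactly; the small one-dimensional chain stands in for the full two-dimensional sparse problem:

```python
import numpy as np

def crank_nicolson_step(H, psi, dt):
    """One norm-conserving time step of i d(psi)/dt = H psi (hbar = 1)."""
    n = H.shape[0]
    A = np.eye(n) + 0.5j * dt * H       # implicit half step
    B = np.eye(n) - 0.5j * dt * H       # explicit half step
    return np.linalg.solve(A, B @ psi)

# Tiny Anderson-model stand-in: random site energies and nearest-neighbour
# hopping on a 1D chain (a 2D lattice works identically with a sparse H).
rng = np.random.default_rng(0)
n = 64
H = np.diag(rng.uniform(-3, 3, size=n)) \
    + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

psi = np.zeros(n, dtype=complex)
psi[n // 2] = 1.0                       # electron initially on a single site
for _ in range(200):
    psi = crank_nicolson_step(H, psi, dt=0.1)
print(np.linalg.norm(psi))              # stays 1 to machine precision
```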
The numerical program is based on the alternating-direction implicit method \cite{NRCPS,elipt} used for the solution of elliptic partial differential equations. The algorithm is described in Appendix A. \begin{figure}[b] \begin{center} \includegraphics[width=5.0cm,clip]{Fig4.png} \end{center} \caption{(Color online) Quadratic displacement $\langle r^2(t)\rangle$ as a function of time $t/t_0$, $t_0=1000\hbar/V$, for three systems of the size $L=1024 a$ and disorder $W/V = 6$ (triangles). Although $\langle r^2\rangle$ does not increase when time increases, it fluctuates as a function of time. The limiting value, $R^2$ (Eq. \ref{lim}), depends on the actual realization of the random disorder $\varepsilon(\vec{r})$ in the given sample. The dashed line shows $\langle r^2\rangle_{\rm max}=L^2/6 = 174 762~a^2$, which is $50\times$ larger than the actual values of $\langle r^2\rangle$. For comparison, we also show the quadratic displacement for a system with stronger disorder, $W/V=8$, which is typically $130 a^2$. } \label{w6-d} \end{figure} The ability of an electron to diffuse through the sample is measured by the quadratic displacement, defined as \begin{equation} \langle r^2(t)\rangle = \int d\vec{r}~ r^2 |\Psi(\vec{r},t)|^2. \end{equation} Figures \ref{w4-d} and \ref{dva} show that for weak disorder, $W/V=1$ and 2, $\langle r^2(t)\rangle$ is a linear function of time $t$, \begin{equation}\label{dif} \langle r^2(t)\rangle = 2Dt. \end{equation} The parameter $D$ is the diffusion constant which enters the Einstein formula for the electric conductivity $\sigma$, \begin{equation} \sigma = e^2 D\rho. \end{equation} Here $e$ is the electron charge and $\rho$ is the density of states.\cite{2} Since we analyze only a lattice of a finite size, we have to take into account that the $t$-dependence of the electron wave function might be affected by the finiteness of our sample. In this case, we not only observe the diffusion, but also the reflection of the electron from the edges. 
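On the lattice, the quadratic displacement is a plain sum over sites. The sketch below (an illustrative implementation) evaluates $\langle r^2\rangle$ for a probability density $|\Psi|^2$ given on an $L\times L$ grid, with $r$ measured from the lattice center; a homogeneously distributed electron reproduces the saturation value $\langle r^2\rangle_{\rm max}\approx L^2/6$ marked by the dashed lines in Figs. \ref{w4-d} and \ref{w6-d}:

```python
import numpy as np

def quadratic_displacement(prob):
    """<r^2> = sum over sites of r^2 |Psi|^2, with r measured from the
    lattice center (in units of the lattice constant a)."""
    L = prob.shape[0]
    x = np.arange(L) - (L - 1) / 2.0
    X, Y = np.meshgrid(x, x, indexing="ij")
    return np.sum((X**2 + Y**2) * prob)

# A homogeneously spread electron, |Psi|^2 = 1/L^2, gives the saturation
# value <r^2>_max ~ L^2/6 (exactly (L^2 - 1)/6 on the discrete lattice).
L = 1024
print(quadratic_displacement(np.full((L, L), 1.0 / L**2)))
```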
Quantitatively, diffusion (\ref{dif}) is observable only when \begin{equation}\label{max} \langle r^2(t)\rangle\ll \langle r^2\rangle_{\rm max}= \frac{1}{L^2}\int_{-L/2}^{L/2} \int_{-L/2}^{L/2}\left(x^2+y^2\right)~dxdy =\frac{L^2}{6}, \end{equation} where $\langle r^2\rangle_{\rm max}$ corresponds to the homogeneously distributed wave function, $|\Psi(\vec{r})|^2 = {\rm const} = 1/L^2$. It might seem that the diffusion of electrons shown in Figs. \ref{w4-d} and \ref{dva} contradicts the localization theory\cite{AALR} that predicts all states to be localized in two-dimensional systems. However, this is not the case. The prediction of the localization theory concerns the limit of an infinite system size. Physically, localization occurs only when the size of the sample exceeds the localization length, $L>\xi$. Since $\xi$ is very large in weak disorder ($\xi\sim 10^6a$ when $W=1$),\cite{KK} we observe metallic behavior and diffusion of electrons in Fig. \ref{w4-d}. Of course, even in the case of $W/V=1$ we would observe localization if much larger systems are taken into account.\cite{2} In general, we can observe the localization if we either increase the size of the system or reduce the localization length. The latter is easier to perform, as it requires us only to increase the disorder strength $W$. We will do it in the next Section. \section{Absence of diffusion - localization}\label{localization} \begin{figure} \begin{center} \includegraphics[width=12.0cm,clip]{Fig5a.png} \vspace*{-1cm} \includegraphics[width=12.0cm,clip]{Fig5b.png} \end{center} \vspace*{-1cm} \caption{(Color online) Spatial distribution of an electron in a sample with disorder $W/V=6$. The size of the lattice is $1024 a\times 1024 a$. Time is given in units of $t_0 = 1000 \hbar/V$. The different colors show sites where $|\Psi(r)| >10^{-4}$ (gray), $> 5\times 10^{-4}$ (brown), $10^{-3}$ (blue), $5\times 10^{-3}$ (red), and $> 5\times 10^{-3}$ (black). 
The probability to find an electron on any other site is less than $10^{-8}$. } \label{w6-t} \end{figure} \begin{figure} \begin{center} \includegraphics[width=12.0cm,clip]{Fig6a.png} \vspace*{-1cm} \includegraphics[width=12.0cm,clip]{Fig6b.png} \end{center} \vspace*{-1cm} \caption{(Color online) The same as in Fig. \ref{w6-t}, only the time is $t= 500 t_0$ and $900 t_0$ ($t_0=1000\hbar/V$). } \label{w6-tt} \end{figure} The data in Fig. \ref{w4-d} also confirm that the time evolution of the wave function is not diffusive when the disorder $W$ increases. The linear increase of $\langle r^2(t)\rangle$ is observable only for a short initial time interval. For any longer time, the spatial extent of the electron increases very slowly and finally ceases (Fig. \ref{w6-d}). The electron becomes localized. To demonstrate the electron localization more explicitly, we repeat the experiment of Section \ref{diffusion} with a stronger disorder, $W/V=6$. Similarly to the previous experiment, the initial wave function is non-zero in a small area $24a\times 24a$ located at the center of the sample. At short times, we observe that the spatial extent of the wave function increases. Then, after a while, $\langle r^2(t)\rangle$ saturates: \begin{equation}\label{lim} \lim_{t\to\infty} \langle r^2(t)\rangle = R^2 \ll \langle r^2\rangle_{\rm max}. \end{equation} Although the spatial distribution of the electron varies in time, $\langle r^2(t)\rangle$ no longer increases even when the time $t$ increases by a factor of ten or more. Figures \ref{w6-t} and \ref{w6-tt} show the spatial distribution of the wave function, $|\Psi(\vec{r},t)|$. They represent the lattice sites with $|\Psi(\vec{r})|>10^{-4}$. This means that the probability to find the electron in any other lattice site is less than $10^{-8}$. 
\begin{figure} \begin{center} \includegraphics[width=11.0cm,clip]{Fig7a.png} \vspace*{-2cm} \includegraphics[width=11.0cm,clip]{Fig7b.png} \end{center} \caption{(Color online) The time development of four electrons located at time $t=0$ in four different areas of the same lattice. The electrons do not leave the initial areas. The size of the sample is $L=1024a$. Disorder $W/V=8$. Again, time is measured in units of $t_0=1000\hbar/V$. } \label{W84-20} \end{figure} Note that there is no potential well in the center of the sample where the electron is localized. The only reason for the electron being localized in the lattice center is the initial wave function, $\Psi(\vec{r},t=0)$, which was non-zero only in the center of the lattice. Applying an initial wave function localized in any other area of the sample, we would achieve electron localization in that area. This is demonstrated in Fig. \ref{W84-20}, showing the time development of the wave functions of four electrons in the same lattice. The initial positions of the electrons are centered around the four points \begin{equation} x_\pm = L/2\pm L/4,~~~~y_\pm = L/2\pm L/4. \end{equation} We see that at times $t>0$ each electron is localized around its initial position. This proves that localization is indeed the result of interference of wave functions. The electron is not trapped in any potential well. The localized state is not a bound state. Figure \ref{W84-20} also shows that the localized states are very sensitive to the realization of the random potential. The spatial distribution of each electron reflects the local distribution of random energies $\varepsilon(\vec{r})$. This is shown quantitatively in Fig. \ref{w6-d}, where we plot $\langle r^2(t)\rangle$ as a function of time for three different realizations of the random disorder. 
We see that although all three samples have the same macroscopic parameter $W/V=6$, the limiting value $R^2 = \lim_{t\to\infty}\langle r^2(t)\rangle$ is not universal but depends on the actual distribution of random energies in the given sample. Moreover, $\langle r^2(t)\rangle$ fluctuates as a function of time $t$. \section{Transmission through a disordered sample: how does an electron propagate through a disordered system?}\label{path} Consider now another experiment, frequently used in mesoscopic physics: we take a disordered sample, the same as used in the previous Sections, and examine the probability that an electron propagates from one side of the sample to the opposite side. Both in experiments and in numerical simulations, the sample is connected to two semi-infinite, disorder-free leads which guide the electron propagation towards and out of the sample (Fig. \ref{obr}). An incoming electron either propagates through the sample or is reflected back. The probability of transmission, $T$, determines the conductance, \cite{SE,landauer} \begin{equation}\label{se} g = \displaystyle{\frac{e^2}{h}}~T. \end{equation} Eq. (\ref{se}) is commonly referred to as the Landauer formula. It was originally derived for a one-dimensional system but can also be used for the analysis of two- and more-dimensional samples. Since the width of the leads is non-zero, the transmission $T$ can be larger than 1. \cite{comment} The transmission is calculated by the transfer matrix method described in Refs. \cite{Ando-91,PMcKR,2}. \begin{figure} \begin{center} \includegraphics[width=5.0cm,clip]{Fig8.png} \end{center} \caption{Schematic description of the scattering experiment for the measurement of the transmission. The sample is connected to two semi-infinite leads represented by a regular lattice with zero disorder. Inside the sample, the disorder is non-zero. 
If an electron comes from the left, it either propagates through the sample and contributes to the transmission, or is reflected back to the left lead. } \label{obr} \end{figure} \begin{figure} \begin{center} \includegraphics[width=5.0cm,clip,angle=-90]{Fig9a.png}\\ ~~\\ \includegraphics[width=5.0cm,clip,angle=-90]{Fig9b.png}\\ ~~\\ \includegraphics[width=5.0cm,clip,angle=-90]{Fig9c.png} \end{center} \caption{(Color online) Sensitivity of the transmission through the disordered system to the change of the sign of a single random energy at the site $\vec{r}_0$. A change of the sign of the random energy on orange, red and black sites causes a change of the conductance by more than 1\%, 10\% and 100\%, respectively. The transmission $T$ is $4.998$, $0.52$ and 0.00084 for the disorder $W/V=2$, 4 and 6 (from top to bottom). The size of the system is $100a \times 100a$, and the electron propagates from the left side of the sample to the right side. } \label{w-L100} \end{figure} \begin{figure} \begin{center} \includegraphics[width=5.0cm,clip,angle=-90]{Fig10.png} \end{center} \caption{(Color online) The same as in Fig. \ref{w-L100} but for disorder $W/V=10$. The transmission is $T=9\times 10^{-15}$. } \label{w10-L100} \end{figure} Contrary to the diffusion problem discussed in Sects. \ref{diffusion} and \ref{localization}, in the present experiment we do not analyze the time development of the electron wave function. Instead, we choose the energy $E$ of the electron ($E=0$, i.e., the center of the energy band), and calculate the time-independent transmission $T$ from the left side of the sample to the right side. To show how electrons are distributed within the sample, we apply Pichard's idea. \cite{PNato} Let us change the sign of a single random energy $\epsilon(\vec{r}_0)$ at a site $\vec{r}_0$: $\epsilon(\vec{r}_0)\to -\epsilon(\vec{r}_0)$, and calculate how this change will influence the total transmission $T$ of an electron through the sample. 
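The sign-flip probe can be illustrated in one dimension, where the transmission at energy $E$ is cheap to evaluate. The sketch below is not the transfer-matrix code of Refs. \cite{Ando-91,PMcKR,2}; it uses the equivalent Green's-function formulation for a single chain with clean semi-infinite leads (lead dispersion $E=2V\cos k$, retarded lead self-energy $\Sigma=Ve^{-ik}$), then flips the sign of each site energy in turn and records the relative change of $T$:

```python
import numpy as np

def transmission(eps, E=0.0, V=1.0):
    """Landauer transmission of a 1D chain with site energies eps,
    coupled to clean semi-infinite leads (hopping V everywhere)."""
    k = np.arccos(E / (2 * V))          # lead dispersion E = 2 V cos k
    sigma = V * np.exp(-1j * k)         # retarded lead self-energy
    n = len(eps)
    M = (E * np.eye(n) - np.diag(eps)).astype(complex)
    off = np.full(n - 1, V)
    M -= np.diag(off, 1) + np.diag(off, -1)
    M[0, 0] -= sigma                    # attach the left lead
    M[-1, -1] -= sigma                  # attach the right lead
    G = np.linalg.inv(M)                # retarded Green's function
    gamma = 2 * V * np.sin(k)           # lead coupling Gamma
    return gamma**2 * abs(G[0, -1])**2

rng = np.random.default_rng(2)
eps = rng.uniform(-2.0, 2.0, size=40)   # disorder W/V = 4
T0 = transmission(eps)

# Pichard's probe: flip the sign of one site energy at a time and record
# the relative change of the transmission.
sensitivity = np.empty(len(eps))
for i in range(len(eps)):
    flipped = eps.copy()
    flipped[i] = -flipped[i]
    sensitivity[i] = abs(transmission(flipped) - T0) / T0
print(T0, sensitivity.max())
```

In two dimensions the same probe requires a full transmission calculation per lattice site, which is why the sample size in this experiment was restricted to $100a\times 100a$.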
We expect that $T$ is sensitive to the change of $\epsilon(\vec{r}_0)$ only if the electron occupies the site $\vec{r}_0$, i.e., when $|\Psi(\vec{r}_0)|$ is large. Conversely, if $|\Psi(\vec{r}_0)|$ is negligible, then the change of $\epsilon(\vec{r}_0)$ cannot affect the transmission $T$. Thus, by comparing the transmissions through two systems differing only in the sign of the random energy $\varepsilon(\vec{r}_0)$, we can estimate whether or not the electron, propagating through the sample, travels through the site $\vec{r}_0$. By repeating this analysis for all lattice sites, we can visualize the path of the electron through the sample. For numerical reasons, we restricted the sample size to $100 a\times 100 a$. \begin{figure} \begin{center} \includegraphics[width=10.0cm,clip,angle=-90]{Fig11.png} \end{center} \vspace*{-3cm} \caption{(Color online) The same as in Fig. \ref{w-L100} but for disorder $W/V = 20$. The transmission is extremely small, $\ln T = -96$. The change of the sign of the random energy on gray, brown, orange and red sites causes a change of the logarithm of the transmission by more than 0.01\%, 0.1\%, 1\% and 10\%, respectively. Although it seems that the path through the sample is determined by a valley in the potential landscape, this is not the case. The inset shows the sites of the sample where the random energy satisfies $|\epsilon| < 1$. } \label{w20-L100} \end{figure} \begin{figure} \begin{center} \includegraphics[width=4.0cm,clip,angle=-90]{Fig12-1.png} ~~~\includegraphics[width=4.0cm,clip,angle=-90]{Fig12-2.png} \end{center} \caption{(Color online) The electron path through two strongly disordered samples: both samples have the same realization of random energies and differ only in the amplitude of the fluctuations. Shown are the lattice sites where the change of the sign of the random energy causes a change of the logarithm of the transmission by 1\% (orange) and 10\% (red). 
We see that the electron prefers completely different trajectories through these samples. } \label{w10-w20} \end{figure} Our results are summarized in Fig. \ref{w-L100}. For weak disorder, $W/V=2$, we see that changing only one random energy has an almost negligible influence on the transmission. Typically, $T$ changes only by 1\% (or even less) when the sign of $\varepsilon(\vec{r})$ changes. Also, all lattice sites are more or less equivalent. We conclude that in the course of the transmission the electron is ``everywhere'': it propagates through the entire sample as a quantum wave. This observation is the key idea of the Dorokhov-Mello-Pereyra-Kumar theory of electron transport in weakly disordered systems \cite{DMPK} and of the random matrix theory of diffusive transport. \cite{PNato} The homogeneity of the electron distribution is lost when the disorder increases.\cite{muttalib,MMW} The change of the sign of the random energy on some sites influences the transmission more than the same change on other sites. Some areas of the sample seem not to be visited at all. We can see the formation of an electron ``path'' through the sample.\cite{prior} This path is clearly visible for the very strong disorder shown in Figs. \ref{w10-L100}, \ref{w20-L100} and \ref{w10-w20}. However, we want to stress that even in the case of strong disorder we cannot speak about the path in its classical sense. Even if the electron path is clearly visible, there are still other sites, often located on the opposite side of the sample, that influence the transmission as strongly as the sites on the main trajectory (Fig. \ref{w10-L100}). This indicates that the electron still feels the entire sample and its propagation is highly sensitive to any change of the realization of the random potential. The resulting trajectory cannot be identified with any valley or equipotential line in the random potential landscape. To demonstrate this, we show in Fig. 
\ref{w20-L100} the trajectory of an electron through an extremely strongly disordered system ($W/V=20$; in this case, we consider the change of the logarithm of the conductance). Although the trajectory of the electron seems to be well defined, there is no continuous potential valley which might support the propagation. The inset of Fig. \ref{w20-L100}, which shows the sites where the random energy satisfies $|\epsilon| < 1$, confirms that these sites do not form any continuous valley across the sample. Thus, the choice of the transmission path is the result of quantum interference: an electron arriving from the left inspects the entire sample and finds the most convenient spatial ``channel'' for its propagation. We cannot speak about a trajectory in the sense of classical particles. To support our last claim, let us consider two samples with the same realization of random energies $\varepsilon(\vec{r})$ but different amplitudes of the random energies: $W/V=10$ for sample I and $W/V = 20$ for sample II, so that \begin{equation} \varepsilon(\vec{r})^{II} = 2\varepsilon(\vec{r})^{I} \end{equation} for each site $\vec{r}$. With the help of the above-mentioned method, we find the trajectories of electrons through these two samples. For the propagation of a classical particle, both trajectories (for sample I and sample II) should coincide. However, an electron is not a classical particle. As shown in Fig. \ref{w10-w20}, the paths along which an electron propagates through the two samples, I and II, are completely different. An increase of the fluctuations of the random potential causes the electron to choose a completely different route. \section{Conclusion} We discussed two features of the localization of a quantum particle in a disordered sample. Firstly, we demonstrated numerically that the diffusion of a quantum particle through a randomly fluctuating potential ceases after a certain time. The particle becomes spatially localized. The physical origin of localization is different from the binding of a particle in a potential well. 
Localization is caused by multiple scattering of the wave function on randomly distributed impurities (fluctuations of the random potential); it is not due to the trapping of the particle in a potential well. In the second part of the paper, we examined the propagation of a quantum particle through a disordered sample and discussed how this propagation depends on the disorder. Again, we indirectly confirmed the wave character of the propagation. This led us to the conclusion that electron localization is a purely quantum effect without any analogy in classical mechanics. In both numerical experiments, the key condition for localization to happen is the quantum coherence of the wave function. This is generally not fulfilled in experiments, where incoherent scattering (for instance, the scattering of electrons by phonons) plays a crucial role. As any incoherent scattering destroys quantum coherence, observing electron localization experimentally requires the mean free path for incoherent scattering to be larger than, or at least comparable to, the size of the sample. This happens at very low temperatures. Of course, localization also affects the transport of electrons at higher temperatures. These effects are, however, beyond the scope of the present discussion. Since localization is a wave phenomenon, we can expect an analogy between quantum propagation and classical wave phenomena. \cite{dragoman} This enables us to observe localization in many other settings. In particular, we can expect that classical waves, electromagnetic or acoustic, will also be localized in a disordered medium. \cite{S} The localization of microwave electromagnetic waves was observed experimentally. \cite{GG} Another very interesting experiment \cite{qq} demonstrates the weak localization of seismic waves.
\section{Introduction} \label{sec:int} \setcounter{equation}{0} The ring $\La=\La(x)$ of symmetric functions in the set of variables $x=(x_1,x_2,\dots)$ admits a multiparameter generalization $\La(x\vt a)$, where $a$ is a sequence of variables $a=(a_i)$, $i\in\ZZ$. Let $\QQ[a]$ denote the ring of polynomials in the variables $a_i$ with rational coefficients. The ring $\La(x\vt a)$ is generated over $\QQ[a]$ by the {\it double power sums symmetric functions\/} \beql{powess} p_k(x\vt a)=\sum_{i=1}^{\infty} (x_i^k-a_i^k). \eeq Moreover, it possesses a distinguished basis over $\QQ[a]$ formed by the {\it double Schur functions\/} $s_{\la}(x\vt a)$ parameterized by partitions $\la$. The double Schur functions $s_{\la}(x\vt a)$ are closely related to the `factorial' or `double' Schur polynomials $s_{\la}(x|a)$ which were introduced by Goulden and Greene~\cite{gg:nt} and Macdonald~\cite{m:sf} as a generalization of the factorial Schur polynomials of Biedenharn and Louck~\cite{bl:nc, bl:ib}. Moreover, the polynomials $s_{\la}(x|a)$ are also obtained as a special case of the double Schubert polynomials of Lascoux and Sch\"utzenberger; see \cite{cll:fd}, \cite{l:sf}. A formal definition of the ring $\La(x\vt a)$ and its basis elements $s_{\la}(x\vt a)$ can be found in a paper of Okounkov~\cite[Remark~2.11]{o:ni} and reproduced below in Section~\ref{sec:def}. The ring $\La$ is obtained from $\La(x\vt a)$ in the specialization $a_i=0$ for all $i\in\ZZ$ while the elements $s_{\la}(x\vt a)$ turn into the classical Schur functions $s_{\la}(x)\in\La$; see Macdonald~\cite{m:sfh} for a detailed account of the properties of $\La$. Another specialization $a_i=-i+1$ for all $i\in\ZZ$ yields the ring of {\it shifted symmetric functions\/} $\La^*$, introduced and studied by Okounkov and Olshanski~\cite{oo:ss}. Many combinatorial results of \cite{oo:ss} can be reproduced for the ring $\La(x\vt a)$ in a rather straightforward way. 
The respective specializations of the double Schur functions in $\La^*$, known as the {\it shifted Schur functions\/} were studied in \cite{o:qi}, \cite{oo:ss} in relation with the higher Capelli identities and quantum immanants for the Lie algebra $\gl_n$. In a different kind of specialization, the double Schur functions become the equivariant Schubert classes on Grassmannians; see e.g. Knutson and Tao~\cite{kt:pe}, Fulton~\cite{f:ec} and Mihalcea~\cite{m:gf}. The structure coefficients $c_{\la\mu}^{\ts\nu}(a)$ of $\La(x\vt a)$ in the basis of $s_{\la}(x\vt a)$, defined by the expansion \beql{lrpoldef} s_{\la}(x\vt a)\ts s_{\mu}(x\vt a)= \sum_{\nu} c_{\la\mu}^{\ts\nu}(a)\ts s_{\tss\nu}(x\vt a), \eeq were called the {\it Littlewood--Richardson polynomials\/} in \cite{m:lr}. Under the respective specializations they describe the multiplicative structure of the equivariant cohomology ring on the Grassmannian and the center of the enveloping algebra $\U(\gl_n)$. The polynomials $c_{\la\mu}^{\ts\nu}(a)$ possess the Graham positivity property: they are polynomials in the differences $a_i-a_j$, $i<j$, with positive integer coefficients; see \cite{g:pe}. Explicit positive formulas for the polynomials $c_{\la\mu}^{\ts\nu}(a)$ were found in \cite{kt:pe}, \cite{k:elr} and \cite{m:lr}; an earlier formula found in \cite{ms:lr} lacks the positivity property. The Graham positivity brings natural combinatorics of polynomials into the structure theory of $\La(x\vt a)$. Namely, the entries of some transition matrices between bases of $\La(x\vt a)$ such as analogues of the Kostka numbers, turn out to be Graham positive. The {\it comultiplication\/} on the ring $\La(x\vt a)$ is the $\QQ[a]$-linear ring homomorphism \ben \Delta:\La(x\vt a)\to \La(x\vt a)\ot^{}_{\ts\QQ[a]} \La(x\vt a) \een defined on the generators by \ben \Delta\big(p_k(x\vt a)\big)= p_k(x\vt a)\ot 1+1\ot p_k(x\vt a). 
\een In the specialization $a_i=0$ this homomorphism turns into the comultiplication on the ring of symmetric functions $\La$; see \cite[Chapter~I]{m:sfh}. Define the {\it dual Littlewood--Richardson polynomials\/} $\wh c_{\la\mu}^{\ts\tss\nu}(a)$ as the coefficients in the expansion \ben \Delta\big(s_{\nu}(x\vt a)\big)=\sum_{\la,\ts\mu} \wh c_{\la\mu}^{\ts\tss\nu}(a)\ts s_{\la}(x\vt a) \ot s_{\mu}(x\vt a). \een The central problem we address in this paper is the calculation of the polynomials $\wh c_{\la\mu}^{\ts\tss\nu}(a)$ in an explicit form. Note that if $|\nu|=|\la|+|\mu|$ then $c_{\la\mu}^{\ts\nu}(a)=\wh c_{\la\mu}^{\ts\tss\nu}(a) =c_{\la\mu}^{\ts\tss\nu}$ is the Littlewood--Richardson coefficient. Moreover, \ben c_{\la\mu}^{\ts\nu}(a)=0\quad\text{unless} \quad |\nu|\leqslant|\la|+|\mu|,\fand \wh c_{\la\mu}^{\ts\tss\nu}(a)=0\quad\text{unless} \quad |\nu|\geqslant|\la|+|\mu|. \een We will show that the polynomials $\wh c_{\la\mu}^{\ts\tss\nu}(a)$ can be interpreted as the multiplication coefficients for certain analogues of the Schur functions, \ben \wh s_{\la}(x\vt a)\ts \wh s_{\mu}(x\vt a)= \sum_{\nu} \wh c_{\la\mu}^{\ts\tss\nu}(a)\ts \wh s_{\tss\nu}(x\vt a), \een where the $\wh s_{\la}(x\vt a)$ are symmetric functions in $x$ which we call the {\it dual Schur functions\/} (apparently, the term `dual double Schur functions' would be more precise; we have chosen a shorter name for the sake of brevity). They can be given by the combinatorial formula \beql{combi} \wh s_{\la}(x\vt a)=\sum_{T} \prod_{\al\in\la}X_{T(\al)}(a^{}_{-c(\al)+1},a^{}_{-c(\al)}), \eeq summed over the {\it reverse\/} $\la$-tableaux $T$, where \ben X_i(g,h)= \frac{x_i\ts(1-g\ts x_{i-1})\dots (1-g\ts x_1)} {(1-h\ts x_{i})\dots (1-h\ts x_1)}, \een and $c(\al)=j-i$ denotes the content of the box $\al=(i,j)$; see Section~\ref{sec:dua} below. We calculate in an explicit form the coefficients of the expansion of $\wh s_{\la}(x\vt a)$ as a series of the Schur functions $s_{\mu}(x)$ and vice versa. 
This makes it possible to express $\wh c_{\la\mu}^{\ts\tss\nu}(a)$ explicitly as polynomials in the $a_i$ with the use of the Littlewood--Richardson coefficients $c_{\la\mu}^{\ts\tss\nu}$. The combinatorial formula \eqref{combi} can be used to define the skew dual Schur functions, and we show that the following decomposition holds \ben \wh s_{\tss\nu/\mu}(x\vt a)= \sum_{\la} c_{\la\mu}^{\ts\nu}(a)\ts \wh s_{\la}(x\vt a), \een where the $c_{\la\mu}^{\ts\nu}(a)$ are the Littlewood--Richardson polynomials. The functions $\wh s_{\la}(x\vt a)$ turn out to be dual to the double Schur functions via the following analogue of the classical Cauchy identity: \beql{cauchy} \prod_{i,\ts j\geqslant 1}\frac{1-a_i\ts y_j}{1-x_i\ts y_j} =\sum_{\la\in\Pc} s_{\la}(x\vt a)\ts \wh s_{\la}(y\vt a), \eeq where $\Pc$ denotes the set of all partitions and $y=(y_1,y_2,\dots)$ is a set of variables. The dual Schur functions $\wh s_{\la}(x\vt a)$ are elements of the extended ring $\wh\La(x\vt a)$ of formal series of elements of $\La(x)$ whose coefficients are polynomials in the $a_i$. If $x=(x_1,x_2,\dots, x_n)$ is a finite set of variables (i.e., $x_i=0$ for $i\geqslant n+1$), then $\wh s_{\la}(x\vt a)$ can be defined as the ratio of alternants by analogy with the classical Schur polynomials. With this definition of the dual Schur functions, the identity \eqref{cauchy} can be deduced from the `dual Cauchy formula' obtained in \cite[(6.17)]{m:sf} and which is a particular case of the Cauchy identity for the double Schubert polynomials \cite{l:cc}. An independent proof of a version of \eqref{cauchy} for the shifted Schur functions (i.e., in the specialization $a_i=-i+1$) was given by Olshanski~\cite{o:un}. In the specialization $a_i=0$ each $\wh s_{\la}(x\vt a)$ becomes the Schur function $s_{\la}(x)$, and \eqref{cauchy} turns into the classical Cauchy identity. We will also need a super version of the ring of symmetric functions. 
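Identity \eqref{cauchy} can be checked numerically in the specialization $a_i=0$, where it reduces to the classical Cauchy identity $\prod_{i,j}(1-x_iy_j)^{-1}=\sum_{\la} s_{\la}(x)\tss s_{\la}(y)$. The sketch below is our own illustration: it takes two variables on each side, so only partitions of length at most two contribute, and truncates the (rapidly converging) sum at $\la_1<40$.

```python
def schur2(l1, l2, x1, x2):
    # s_{(l1,l2)}(x1,x2) = (x1*x2)**l2 * h_{l1-l2}(x1,x2), where h_m is the
    # complete homogeneous sum; this is the bialternant formula for n = 2.
    m = l1 - l2
    hm = sum(x1 ** i * x2 ** (m - i) for i in range(m + 1))
    return (x1 * x2) ** l2 * hm

x = (0.10, 0.05)
y = (0.20, 0.03)

# left-hand side: the product over all pairs (i, j)
lhs = 1.0
for xi in x:
    for yj in y:
        lhs /= 1.0 - xi * yj

# right-hand side: partitions of length <= 2 suffice, since the Schur
# polynomial s_lambda(x1,x2) vanishes when ell(lambda) > 2
rhs = sum(schur2(l1, l2, *x) * schur2(l1, l2, *y)
          for l1 in range(40) for l2 in range(l1 + 1))
```

With $|x_i|,|y_j|<1$ the tail beyond $\la_1=40$ is far below machine precision, so `lhs` and `rhs` agree to rounding error.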
The elements \beql{poss} p_k(x/y)=\sum_{i=1}^{\infty}\big(x_i^k+(-1)^{k-1}y_i^k\big) \eeq with $k=1,2,\dots$ are generators of the ring of {\it supersymmetric functions\/} which we will regard as a $\QQ[a]$-module and denote by $\La(x/y\vt a)$. A distinguished basis of $\La(x/y\vt a)$ was introduced by Olshanski, Regev and Vershik~\cite{orv:fs}. In a certain specialization the basis elements become the {\it Frobenius--Schur functions\/} $Fs_{\la}$ associated with the relative dimension function on partitions; see \cite{orv:fs}. In order to indicate dependence on the variables, we will denote the basis elements by $s_{\la}(x/y\vt a)$ and call them the ({\it multiparameter\/}) {\it supersymmetric Schur functions\/}. They are closely related to the {\it factorial supersymmetric Schur polynomials\/} introduced in \cite{m:fs}; see Section~\ref{sec:def} for precise formulas. Note that the evaluation map $y_i\mapsto -a_i$ for all $i\geqslant 1$ defines an isomorphism \beql{isom} \La(x/y\vt a)\to \La(x\vt a). \eeq The images of the generators \eqref{poss} under this isomorphism are the double power sums symmetric functions \eqref{powess}. We will show that under the isomorphism \eqref{isom} we have \beql{isomimsf} s_{\la}(x/y\vt a)\mapsto s_{\la}(x\vt a). \eeq Due to \cite{orv:fs}, the supersymmetric Schur functions possess a remarkable combinatorial presentation in terms of diagonal-strict or `shuffle' tableaux. The isomorphism \eqref{isom} implies the corresponding combinatorial presentation for $s_{\la}(x\vt a)$ and allows us to introduce the {\it skew double Schur functions\/} $s_{\nu/\mu}(x\vt a)$. The dual Littlewood--Richardson polynomials $\wh c_{\la\mu}^{\ts\tss\nu}(a)$ can then be found from the expansion \beql{skewde} s_{\tss\nu/\mu}(x\vt a)=\sum_{\la} \wh c_{\la\mu}^{\ts\tss\nu}(a)\ts s_{\la}(x\vt a), \eeq which leads to an alternative rule for the calculation of $\wh c_{\la\mu}^{\ts\tss\nu}(a)$. 
This rule relies on the combinatorial objects called `barred tableaux' which were introduced in \cite{ms:lr} for the calculation of the polynomials $c_{\la\mu}^{\tss\nu}(a)$; see also \cite{k:elr}, \cite{k:pf} and \cite{m:lr}. The coefficients in the expansion of $s_{\mu}(x)$ in terms of the $\wh s_{\la}(x\vt a)$ turn out to coincide with those in the decomposition of $s_{\la}(x/y\vt a)$ in terms of the ordinary supersymmetric Schur functions $s_{\la}(x/y)$ thus providing another expression for these coefficients; cf. \cite{orv:fs}. The identity \eqref{cauchy} allows us to introduce a pairing between the rings $\La(x\vt a)$ and $\wh\La(x\vt a)$ so that the respective families $\{s_{\la}(x\vt a)\}$ and $\{\wh s_{\la}(x\vt a)\}$ are dual to each other. This leads to a natural definition of the monomial and forgotten symmetric functions in $\La(x\vt a)$ and $\wh\La(x\vt a)$ by analogy with \cite{m:sfh} and provides a relationship between the transition matrices relating different bases of these rings. It is well known that the ring of symmetric functions $\La$ admits an involutive automorphism $\om:\La\to\La$ which interchanges the elementary and complete symmetric functions; see \cite{m:sfh}. We show that there is an isomorphism $\om_a:\La(x\vt a)\to\La(x\vt a')$, and $\om_a$ has the property $\om_{a'}\circ\om_{a}=\text{id}$, where $a'$ denotes the sequence of parameters with $(a')_i=-a_{-i+1}$. Moreover, the images of the natural bases elements of $\La(x\vt a)$ with respect to $\om_a$ can be explicitly described; see also \cite{oo:ss} where such an involution was constructed for the specialization $a_i=-i+1$, and \cite{orv:fs} for its super version. 
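The inverse relation between the elementary and complete symmetric functions that underlies the involution $\om$ is the identity $\sum_{k=0}^m(-1)^k e_k\tss h_{m-k}=0$ for $m\geqslant 1$. A quick exact check in the classical specialization $a_i=0$, with four variables (our code, for illustration only):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

x = (1, 2, 3, 4)

def e(k):
    # elementary symmetric polynomial e_k(x): strictly increasing index tuples
    return sum(prod(c) for c in combinations(x, k))

def h(k):
    # complete homogeneous symmetric polynomial h_k(x): weakly increasing tuples
    return sum(prod(c) for c in combinations_with_replacement(x, k))

# sum_{k=0}^{m} (-1)^k e_k h_{m-k} = 0 for every m >= 1 (exact, in integers)
for m in range(1, 7):
    assert sum((-1) ** k * e(k) * h(m - k) for k in range(m + 1)) == 0
```

The same convolution, with the deformed generating functions \eqref{gene} and \eqref{genh} in place of the classical ones, is exactly what the proof of Proposition 2.2 below exploits.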
Furthermore, using a symmetry property of the supersymmetric Schur functions, we derive the symmetry properties of the Littlewood--Richardson polynomials and their dual counterparts \ben c_{\la\mu}^{\tss\nu}(a)=c_{\la'\mu'}^{\tss\nu^{\tss\prime}}(a') \Fand \wh c_{\la\mu}^{\ts\tss\nu}(a)= \wh c_{\la'\mu'}^{\ts\tss\nu^{\tss\prime}}(a'), \een where $\rho^{\tss\prime}$ denotes the conjugate partition to any partition $\rho$. In the context of equivariant cohomology, the first relation is a consequence of the Grassmann duality; see e.g. \cite[Lecture~8]{f:ec} and \cite{kt:pe}. An essential role in the proof of \eqref{cauchy} is played by interpolation formulas for symmetric functions. The interpolation approach goes back to the work of Okounkov~\cite{o:qi, o:ni}, where the key {\it vanishing theorem\/} for the double Schur functions $s_{\la}(x\vt a)$ was proved; see also \cite{oo:ss}. In a more general context, the Newton interpolation for polynomials in several variables relies on the theory of Schubert polynomials of Lascoux and Sch\"utzenberger; see \cite{l:sf}. The interpolation approach leads to a recurrence relation for the coefficients $c_{P,\ts\mu}^{\ts\nu}(a)$ in the expansion \beql{interp} P\ts s_{\mu}(x\vt a)=\sum_{\nu} c_{P,\ts\mu}^{\ts\nu}(a)\ts s_{\nu}(x\vt a),\qquad P\in\La(x\vt a), \eeq as well as to an explicit formula for the $c_{P,\ts\mu}^{\ts\nu}(a)$ in terms of the values of $P$; see \cite{ms:lr}. Therefore, the (dual) Littlewood--Richardson polynomials and the entries of the transition matrices between various bases of $\La(x\vt a)$ can be given as rational functions in the variables $a_i$. Under appropriate specializations, these formulas imply some combinatorial identities involving Kostka numbers, irreducible characters of the symmetric group and dimensions of skew diagrams; cf. \cite{oo:ss}. \medskip I am grateful to Grigori Olshanski for valuable remarks and discussions. 
\section{Double and supersymmetric Schur functions} \label{sec:def} \setcounter{equation}{0} \subsection{Definitions and preliminaries} \label{subsec:dpr} Recall the definition of the ring $\La(x\vt a)$ from \cite[Remark~2.11]{o:ni}; see also \cite{m:lr}. For each nonnegative integer $n$ denote by $\La_n$ the ring of symmetric polynomials in $x_1,\dots,x_n$ with coefficients in $\QQ[a]$ and let $\La^k_n$ denote the $\QQ[a]$-submodule of $\La_n$ which consists of the polynomials $P_n(x_1,\dots,x_n)$ such that the total degree of $P_n$ in the variables $x_i$ does not exceed $k$. Consider the evaluation maps \beql{eval} \varphi_n:\La^{k}_{n}\to\La^{k}_{n-1},\qquad P_n(x_1,\dots,x_n)\mapsto P_n(x_1,\dots,x_{n-1},a_n) \eeq and the corresponding inverse limit \ben \La^{k}=\lim_{\longleftarrow} \La^{k}_n,\qquad n\to\infty. \een The elements of $\La^{k}$ are sequences $P=(P_0,P_1,P_2,\dots)$ with $P_n\in \La^{k}_n$ such that \ben \varphi_n(P_n)=P_{n-1} \qquad\text{for}\quad n=1,2,\dots. \een Then the union \ben \La(x\vt a)= \bigcup_{k\geqslant 0}\La^{k} \een is a ring with the product \ben P\tss Q=(P_0\tss Q_0,P_1\tss Q_1,P_2\tss Q_2,\dots), \qquad Q=(Q_0,Q_1,Q_2,\dots). \een The elements of $\La(x\vt a)$ may be regarded as formal series in the variables $x_i$ with coefficients in $\QQ[a]$. For instance, the sequence of polynomials \ben \sum_{i=1}^n (x_i^k-a_i^k),\qquad n \geqslant 0, \een determines the {\it double power sums symmetric function\/} \eqref{powess}. Note that if $k$ is fixed, then the evaluation maps \eqref{eval} are isomorphisms for all sufficiently large values of $n$. This allows one to establish many properties of $\La(x\vt a)$ by working with finite sets of variables $x=(x_1,\dots,x_n)$. Now we recall the definition and some key properties of the double Schur functions. We basically follow \cite[6th~Variation]{m:sf} and \cite{o:ni}, although our notation is slightly different. 
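The consistency condition $\varphi_n(P_n)=P_{n-1}$ behind the inverse limit is easy to see concretely for the double power sums: substituting $x_n=a_n$ into $\sum_{i=1}^n(x_i^k-a_i^k)$ kills the $n$-th summand. A toy check (the values and the zero-based indexing are our own, purely illustrative):

```python
def p_trunc(k, xs, a):
    # truncated double power sum  P_n = sum_{i=1}^{n} (x_i^k - a_i^k);
    # here a[i-1] plays the role of a_i (illustrative indexing)
    return sum(x ** k - ai ** k for x, ai in zip(xs, a))

a  = [3, 1, 4, 1, 5]          # sample values of a_1, ..., a_5
xs = [2, 7, 1, 8, 2]          # sample values of x_1, ..., x_5
k  = 3

# evaluation map phi_5: substitute x_5 = a_5 in P_5 ...
evaluated = p_trunc(k, xs[:4] + [a[4]], a)
# ... which recovers the (n-1)-variable polynomial P_4 exactly
assert evaluated == p_trunc(k, xs[:4], a)
```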
A partition $\la$ is a weakly decreasing sequence $\la=(\la_1,\dots,\la_l)$ of integers $\la_i$ such that $\la_1\geqslant\dots\geqslant\la_l\geqslant 0$. Sometimes this sequence is considered to be completed by a finite or infinite sequence of zeros. We will identify $\la$ with its diagram represented graphically as the array of left justified rows of unit boxes with $\la_1$ boxes in the top row, $\la_2$ boxes in the second row, etc. The total number of boxes in $\la$ will be denoted by $|\la|$ and the number of nonzero rows will be called the length of $\la$ and denoted $\ell(\la)$. The transposed diagram $\la'=(\la'_1,\dots,\la'_p)$ is obtained from $\la$ by applying the symmetry with respect to the main diagonal, so that $\la'_j$ is the number of boxes in the $j$-th column of $\la$. If $\mu$ is a diagram contained in $\la$, then the skew diagram $\la/\mu$ is the set-theoretical difference of diagrams $\la$ and $\mu$. Suppose now that $x=(x_1,\dots,x_n)$ is a finite set of variables. For any $n$-tuple of nonnegative integers $\al=(\al_1,\dots,\al_n)$ set \ben A_{\al}(x\vt a)=\det\big[(x_i\vt a)^{\al_j}\big]_{i,j=1}^n, \een where $(x_i\vt a)^0=1$ and \ben (x_i\vt a)^r=(x_i-a_n)(x_i-a_{n-1})\dots (x_i-a_{n-r+1}), \qquad r\geqslant 1. \een For any partition $\la=(\la_1,\dots,\la_n)$ of length not exceeding $n$ set \ben s_{\la}(x\vt a)=\frac{A_{\la+\de}(x\vt a)}{A_{\de}(x\vt a)}, \een where $\de=(n-1,\dots,1,0)$. Note that since $A_{\de}(x\vt a)$ is a skew-symmetric polynomial in $x$ of degree $n(n-1)/2$, it coincides with the Vandermonde determinant, \ben A_{\de}(x\vt a)=\prod_{1\leqslant i<j\leqslant n}(x_i-x_j) \een and so $s_{\la}(x\vt a)$ belongs to the ring $\La_n$. Moreover, \ben s_{\la}(x\vt a)=s_{\la}(x)+\ \text{lower degree terms in}\ \ x, \een where $s_{\la}(x)$ is the Schur polynomial; see e.g. \cite[Chapter~I]{m:sfh}. We also set $s_{\la}(x\vt a)=0$ if $\ell(\la)> n$. 
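For a small concrete case the ratio of alternants can be evaluated directly. The sketch below (our own illustration, for $n=3$ only) computes $s_{\la}(x\vt a)$ as $A_{\la+\de}(x\vt a)/A_{\de}(x\vt a)$ in exact rational arithmetic; setting all $a_i=0$ recovers the classical Schur polynomial, e.g. $s_{(2,1)}(1,2,3)=60$.

```python
from fractions import Fraction

def falling(x, a, r, n):
    # (x | a)^r = (x - a_n)(x - a_{n-1}) ... (x - a_{n-r+1}); a maps i -> a_i
    out = Fraction(1)
    for s in range(r):
        out *= x - a[n - s]
    return out

def det3(M):
    (p, q, r), (s, t, u), (v, w, z) = M
    return p * (t * z - u * w) - q * (s * z - u * v) + r * (s * w - t * v)

def double_schur(lam, xs, a):
    # s_lambda(x || a) = A_{lambda+delta}(x || a) / A_delta(x || a), n = 3
    n = len(xs)
    alpha = [lam[j] + (n - 1 - j) for j in range(n)]        # lambda + delta
    num = det3([[falling(Fraction(xi), a, r, n) for r in alpha] for xi in xs])
    den = det3([[falling(Fraction(xi), a, n - 1 - j, n) for j in range(n)]
                for xi in xs])
    return num / den

zero = {i: 0 for i in range(4)}          # a_i = 0: classical specialization
some = {0: 2, 1: -1, 2: 3, 3: 5}         # arbitrary integer values of a_0..a_3
```

The denominator equals the Vandermonde determinant for any choice of $a$, as noted above, so the ratio is always a polynomial in the $x_i$.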
Then under the evaluation map \eqref{eval} we have \ben \varphi_n: s_{\la}(x\vt a)\mapsto s_{\la}(x'\vt a), \qquad x'=(x_1,\dots,x_{n-1}), \een so that the sequence $\big(s_{\la}(x\vt a)\ |\ n\geqslant 0\big)$ defines an element of the ring $\La(x\vt a)$. We will keep the notation $s_{\la}(x\vt a)$ for this element of $\La(x\vt a)$, where $x$ is now understood as the infinite sequence of variables, and call it the {\it double Schur function\/}. By a {\it reverse $\la$-tableau\/} $T$ we will mean a tableau obtained by filling in the boxes of $\la$ with the positive integers in such a way that the entries weakly decrease along the rows and strictly decrease down the columns. If $\al=(i,j)$ is a box of $\la$ in row $i$ and column $j$, we let $T(\al)=T(i,j)$ denote the entry of $T$ in the box $\al$ and let $c(\alpha)=j-i$ denote the content of this box. The double Schur functions admit the following tableau presentation \beql{defdouble} s_{\la}(x\vt a)=\sum_{T} \prod_{\al\in\la} (x^{}_{T(\alpha)}-a^{}_{T(\alpha)-c(\al)}), \eeq summed over all reverse $\la$-tableaux $T$. When the entries of $T$ are restricted to the set $\{1,\dots,n\}$, formula \eqref{defdouble} provides the respective tableau presentation of the polynomials $s_{\la}(x\vt a)$ with $x=(x_1,\dots,x_n)$. Moreover, in this case the formula can be extended to skew diagrams and we define the corresponding polynomials by \beql{defdoubleskew} \wt s_{\theta}(x\vt a)=\sum_{T} \prod_{\al\in\theta} (x^{}_{T(\alpha)}-a^{}_{T(\alpha)-c(\al)}), \eeq summed over all reverse $\theta$-tableaux $T$ with entries in $\{1,\dots,n\}$, where $\theta$ is a skew diagram. We suppose that $\wt s_{\theta}(x\vt a)=0$ unless all columns of $\theta$ contain at most $n$ boxes. \bre\label{rem:skew} (i)\quad Although the polynomials \eqref{defdoubleskew} belong to the ring $\La_n$, they are generally not consistent with respect to the evaluation maps \eqref{eval}. 
We used different notation in \eqref{defdouble} and \eqref{defdoubleskew} in order to distinguish between the polynomials $\wt s_{\theta}(x\vt a)$ and the skew double Schur functions $s_{\theta}(x\vt a)$ to be introduced in Definition~\ref{def:skdsf} below. \medskip \noindent (ii)\quad In order to relate our notation to \cite{m:sf}, note that for the polynomials $\wt s_{\theta}(x\vt a)$ with $x=(x_1,\dots,x_n)$ we have \ben \wt s_{\theta}(x\vt a)=s_{\theta}(x\tss|\tss u), \een where the sequences $a=(a_i)$ and $u=(u_i)$ are related by \beql{seqs} u_i=a_{n-i+1},\qquad i\in\ZZ. \eeq The polynomials $s_{\theta}(x\tss|\tss u)$ are often called the {\it factorial Schur polynomials\/} ({\it functions\/}) in the literature. They can be given by the combinatorial formula \beql{facsf} s_{\theta}(x\tss|\tss u)=\sum_{T} \prod_{\al\in\theta} (x^{}_{T(\alpha)}-u^{}_{T(\alpha)+c(\al)}), \eeq summed over all {\it semistandard\/} $\theta$-tableaux $T$ with entries in $\{1,\dots,n\}$; the entries of $T$ weakly increase along the rows and strictly increase down the columns. \medskip \noindent (iii)\quad If we replace $a_i$ with $c_{-i}$ and index the variables $x$ with nonnegative integers, the double Schur functions $s_{\la}(x\vt a)$ will become the corresponding symmetric functions of \cite{o:ni}; cf. formula (3.7) in that paper. Moreover, under the specialization $a_i=-i+1$ for all $i\in\ZZ$ the double Schur functions become the {\it shifted Schur functions\/} of \cite{oo:ss} in the variables $y_i=x_i+i-1$. \qed \ere \subsection{Analogues of classical bases} The {\it double elementary\/} and {\it complete symmetric functions} are defined respectively by \ben e_k(x\vt a)=s_{(1^k)}(x\vt a),\qquad h_k(x\vt a)=s_{(k)}(x\vt a) \een and hence, they can be given by the formulas \ben \bal e_k(x\vt a)&=\sum_{i_1>\dots>i_k}(x_{i_1}-a_{i_1})\dots (x_{i_k}-a_{i_k+k-1}),\\ h_k(x\vt a)&=\sum_{i_1\geqslant\dots\geqslant i_k} (x_{i_1}-a_{i_1})\dots (x_{i_k}-a_{i_k-k+1}). 
\eal \een Their generating functions can be written by analogy with the classical case as in \cite{m:sfh} and they take the form \begin{align}\label{gene} 1+\sum_{k=1}^{\infty} \frac{e_k(x\vt a)\ts t^k} {(1+a_1\tss t)\dots(1+a_k\tss t)}&= \prod_{i=1}^{\infty}\frac{1+x_i\tss t}{1+a_i\tss t},\\[1em] \label{genh} 1+\sum_{k=1}^{\infty} \frac{h_k(x\vt a)\ts t^k} {(1-a_0\tss t)\dots(1-a_{-k+1}\tss t)}&= \prod_{i=1}^{\infty}\frac{1-a_i\tss t}{1-x_i\tss t}; \end{align} see e.g. \cite{m:sf}, \cite{oo:ss}. Given a partition $\la=(\la_1,\dots,\la_l)$, set \ben \bal p_{\la}(x\vt a)&=p_{\la_1}(x\vt a)\dots p_{\la_l}(x\vt a),\\ e_{\la}(x\vt a)&=e_{\la_1}(x\vt a)\dots e_{\la_l}(x\vt a),\\ h_{\la}(x\vt a)&=h_{\la_1}(x\vt a)\dots h_{\la_l}(x\vt a). \eal \een The following proposition is easy to deduce from the properties of the classical symmetric functions; see \cite{m:sfh}. \bpr\label{prop:basis} Each of the families $p_{\la}(x\vt a)$, $e_{\la}(x\vt a)$, $h_{\la}(x\vt a)$ and $s_{\la}(x\vt a)$, parameterized by all partitions $\la$, forms a basis of $\La(x\vt a)$ over $\QQ[a]$. \qed \epr In particular, each of the families $p_k(x\vt a)$, $e_k(x\vt a)$ and $h_k(x\vt a)$ with $k\geqslant 1$ is a set of algebraically independent generators of $\La(x\vt a)$ over $\QQ[a]$. Under the specialization $a_i=0$, the bases of Proposition~\ref{prop:basis} turn into the classical bases $p_{\la}(x)$, $e_{\la}(x)$, $h_{\la}(x)$ and $s_{\la}(x)$ of $\La$. The ring of symmetric functions $\La$ possesses two more bases $m_{\la}(x)$ and $f_{\la}(x)$; see \cite[Chapter~I]{m:sfh}. The {\it monomial symmetric functions\/} $m_{\la}(x)$ are defined by \ben m_{\la}(x)=\sum_{\si} x_{\si(1)}^{\la_1}x_{\si(2)}^{\la_2}\dots x_{\si(l)}^{\la_l}, \een summed over permutations $\si$ of the $x_i$ which give distinct monomials. 
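In the classical specialization $a_i=0$ the monomial symmetric functions are easy to compute directly. The sketch below (our own illustration) builds $m_{\la}$ for a finite set of variables and checks two standard identities, $e_2=m_{(1,1)}$ and $h_2=m_{(2)}+m_{(1,1)}$:

```python
from itertools import combinations, combinations_with_replacement, permutations
from math import prod

def m_lam(lam, xs):
    # monomial symmetric polynomial: sum of all *distinct* monomials x^alpha
    # with alpha running over permutations of (lam_1, ..., lam_l, 0, ..., 0)
    exps = tuple(lam) + (0,) * (len(xs) - len(lam))
    return sum(prod(x ** e for x, e in zip(xs, al))
               for al in set(permutations(exps)))

xs = (1, 2, 3)
e2 = sum(prod(c) for c in combinations(xs, 2))                    # e_2 = 11
h2 = sum(prod(c) for c in combinations_with_replacement(xs, 2))   # h_2 = 25
```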
The basis elements $f_{\la}(x)$ are called the {\it forgotten symmetric functions\/}; they are defined as the images of the $m_{\la}(x)$ under the involution $\om:\La\to\La$ which takes $e_{\la}(x)$ to $h_{\la}(x)$; see \cite{m:sfh}. The corresponding basis elements $m_{\la}(x\vt a)$ and $f_{\la}(x\vt a)$ in $\La(x\vt a)$ will be defined in Section~\ref{sec:ome}. \subsection{Duality isomorphism} Introduce the sequence of variables $a'$ which is related to the sequence $a$ by the rule \ben (a')_i=-a_{-i+1},\qquad i\in\ZZ. \een The operation $a\mapsto a'$ is clearly involutive so that $(a')'=a$. Note that any element of the polynomial ring $\QQ[a']$ can be identified with the element of $\QQ[a]$ obtained by replacing each $(a')_i$ by $-a_{-i+1}$. Define the ring homomorphism \ben \om_a:\La(x\vt a)\to\La(x\vt a') \een as the $\QQ[a]$-linear map such that \beql{omega} \om_a: e_k(x\vt a)\mapsto h_k(x\vt a'),\qquad k=1,2,\dots. \eeq An arbitrary element of $\La(x\vt a)$ can be written as a unique linear combination of the basis elements $e_{\la}(x\vt a)$ with coefficients in $\QQ[a]$. The image of such a linear combination under $\om_a$ is then found by \ben \om_a:\sum_{\la}c_{\la}(a)\ts e_{\la}(x\vt a) \mapsto \sum_{\la}c_{\la}(a)\ts h_{\la}(x\vt a'),\qquad c_{\la}(a)\in\QQ[a], \een and $c_{\la}(a)$ is regarded as an element of $\QQ[a']$. Clearly, $\om_a$ is a ring isomorphism, since the $h_k(x\vt a')$ are algebraically independent generators of $\La(x\vt a')$ over $\QQ[a']$. In the case of a finite set of variables $x=(x_1,\dots,x_n)$ the respective isomorphism $\om_a$ is defined by the same rule \eqref{omega} with the values $k=1,\dots,n$. \bpr\label{prop:om} We have $\om_{a'}\circ \om_a={\rm id}^{}_{\La(x\vt a)}$ and \beql{omhla} \om_a: h_{\la}(x\vt a)\mapsto e_{\la}(x\vt a'). 
\eeq \epr \bpf Relations \eqref{gene} and \eqref{genh} imply that \ben \Bigg(\sum_{k=0}^{\infty} \frac{(-1)^k\ts e_k(x\vt a)\ts t^k} {(1-a_1\tss t)\dots(1-a_k\tss t)}\Bigg) \Bigg(\sum_{r=0}^{\infty} \frac{h_r(x\vt a)\ts t^r} {(1-a_0\tss t)\dots(1-a_{-r+1}\tss t)}\Bigg)=1. \een Applying the isomorphism $\om_a$, we get \ben \Bigg(\sum_{k=0}^{\infty} \frac{(-1)^k\ts h_k(x\vt a')\ts t^k} {(1+(a')_0\tss t)\dots(1+(a')_{-k+1}\tss t)}\Bigg) \Bigg(\sum_{r=0}^{\infty} \frac{\om_a\big(h_r(x\vt a)\big)\ts t^r} {(1+(a')_1\tss t)\dots(1+(a')_r\tss t)}\Bigg)=1. \een Replacing here $t$ by $-t$ and comparing with the previous identity, we can conclude that $\om_a\big(h_r(x\vt a)\big)=e_r(x\vt a')$. This proves \eqref{omhla} and the first part of the proposition, because $\om_{a'}\big(h_r(x\vt a')\big)=e_r(x\vt a)$. \epf We will often use the shift operator $\tau$ whose powers act on sequences by the rule \ben (\tau^k a)_i=a_{k+i}\qquad\text{for}\quad k\in\ZZ. \een The following analogues of the Jacobi--Trudi and N\"{a}gelsbach--Kostka formulas are immediate from \cite[(6.7)]{m:sf}. Namely, if the set of variables $x=(x_1,\dots,x_n)$ is finite and $\la$ is a partition of length not exceeding $n$, then \beql{jt} s_{\la}(x\vt a) =\det\big[h_{\la_i-i+j}(x\vt \tau^{\tss j-1}\tss a)\big] \eeq and \beql{nk} s_{\la}(x\vt a) =\det\big[e_{\la'_i-i+j}(x\vt \tau^{\tss -j+1}\tss a)\big], \eeq where the determinants are taken over the respective sets of indices $i,j=1,\dots,\ell(\la)$ and $i,j=1,\dots,\ell(\la')$. \subsection{Skew double Schur functions} Consider now the ring of supersymmetric functions $\La(x/y\vt a)$ defined in the Introduction. 
Taking two finite sets of variables $x=(x_1,\dots,x_n)$ and $y=(y_1,\dots,y_n)$, define the {\it supersymmetric Schur polynomial\/} $s_{\nu/\mu}(x/y\vt a)$ associated with a skew diagram $\nu/\mu$ by the formula \beql{defssf} s_{\nu/\mu}(x/y\vt a)=\sum_{\mu\subseteq\ts\rho\ts\subseteq\tss\nu} \wt s_{\nu/\rho}(x\vt a)\ts s_{\rho^{\tss\prime}/\mu^{\tss\prime}}(y \tss|\tss {-}a), \eeq where the polynomials $\wt s_{\nu/\rho}(x\vt a)$ and $s_{\rho^{\tss\prime}/\mu^{\tss\prime}}(y \tss|\tss{-}a)$ are defined by the respective combinatorial formulas \eqref{defdoubleskew} and \eqref{facsf}. The polynomials \eqref{defssf} coincide with the factorial supersymmetric Schur polynomials $s_{\nu/\mu}(x/y\tss|\tss u)$ of \cite{m:fs} associated with the sequence $u$ related to $a$ by \eqref{seqs}. It was observed in \cite{orv:fs} that the sequence of polynomials $\big(s_{\nu/\mu}(x/y\vt a)\ts|\ts n\geqslant 1\big)$ is consistent with respect to the evaluations $x_n=y_n=0$ and hence it defines the {\it supersymmetric Schur function\/} $s_{\nu/\mu}(x/y\vt a)$, where $x$ and $y$ are infinite sequences of variables (in fact, Proposition~3.4 in \cite{orv:fs} needs to be extended to skew diagrams, which is immediate). Moreover, in \cite{orv:fs} these functions were given by new combinatorial formulas. In order to write them down, consider the ordered alphabet \ben \AAb=\{1'<1<2^{\tss\prime}<2<\dots\}. \een Given a skew diagram $\theta$, an $\AAb$-{\it tableau\/} $T$ of shape $\theta$ is obtained by filling in the boxes of $\theta$ with the elements of $\AAb$ in such a way that the entries of $T$ weakly increase along each row and down each column, and for each $i=1,2,\dots$ there is at most one symbol $i'$ in each row and at most one symbol $i$ in each column of $T$.
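For example, if $\theta=(2,1)$, then the filling of $\theta$ with first row $1'\ts 1$ and second row $2$ is an $\AAb$-tableau: the entries weakly increase along the first row and down the first column, the symbol $1'$ occurs only once in its row, and each of the symbols $1$ and $2$ occurs only once in its column.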
The following formula gives the supersymmetric Schur function $s_{\theta}(x/y\vt a)$ associated with $\theta$: \beql{tabapr} s_{\theta}(x/y\vt a)= \sum_{T} \prod_{\underset{\scriptstyle T(\alpha) \text{\ts\ts unprimed}}{\alpha\in\theta}} \big(x^{}_{T(\al)}-a_{-c(\al)+1}\big) \prod_{\underset{\scriptstyle T(\alpha) \text{\ts\ts primed}}{\alpha\in\theta}} \big(y^{}_{T(\al)}+a_{-c(\al)+1}\big), \eeq summed over all $\AAb$-tableaux $T$ of shape $\theta$, where the subscripts of the variables $y_i$ are identified with the primed indices. An alternative formula is obtained by using a different ordering of the alphabet: \ben \AAb'=\{1<1'<2<2^{\tss\prime}<\dots\}. \een The $\AAb'$-tableaux $T$ of shape $\theta$ are defined in exactly the same way as the $\AAb$-tableaux, only taking into account the new ordering. Then \beql{taba} s_{\theta}(x/y\vt a)= \sum_{T} \prod_{\underset{\scriptstyle T(\alpha) \text{\ts\ts unprimed}}{\alpha\in\theta}} \big(x^{}_{T(\al)}-a_{-c(\al)}\big) \prod_{\underset{\scriptstyle T(\alpha) \text{\ts\ts primed}}{\alpha\in\theta}} \big(y^{}_{T(\al)}+a_{-c(\al)}\big), \eeq summed over all $\AAb'$-tableaux $T$ of shape $\theta$. The supersymmetric Schur functions have the following symmetry property \beql{symprsu} s_{\theta}(x/y\vt a)=s_{\theta^{\tss\prime}}(y/x\vt a') \eeq implied by their combinatorial presentation. Moreover, if $x_i=y_i=0$ for all $i\geqslant n+1$, then only tableaux $T$ with entries in $\{1,1',\dots,n,n'\}$ make nonzero contributions in either \eqref{tabapr} or \eqref{taba}. \bre\label{rem:deforv} The supersymmetric Schur function $s_{\theta}(x/y\vt a)$ given in \eqref{tabapr} coincides with $\Sigma_{\theta;-a'}(x;y)$ as defined in \cite[Proposition~4.4]{orv:fs}. In order to derive \eqref{taba}, first use \eqref{symprsu}, then apply the transposition of the tableaux with respect to the main diagonal and swap $i$ and $i^{\tss\prime}$ for each $i$. 
Note that \cite{orv:fs} also contains an equivalent combinatorial formula for $\Sigma_{\theta;a}(x;y)$ in terms of skew hooks. \qed \ere \bpr\label{prop:imisom} The image of the supersymmetric Schur function $s_{\nu}(x/y\vt a)$ associated with a {\rm(}nonskew{\rm)} diagram $\nu$ under the isomorphism \eqref{isom} coincides with the double Schur function $s_{\nu}(x\vt a)$; that is, \ben s_{\nu}(x/y\vt a)\big|^{}_{y=-a}=s_{\nu}(x\vt a), \een where $y=-a$ denotes the evaluation $y_i=-a_i$ for $i\geqslant 1$. \epr \bpf We may assume that the sets of variables $x$ and $y$ are finite, $x=(x_1,\dots,x_n)$ and $y=(y_1,\dots,y_n)$. The claim now follows from relation \eqref{defssf} with $\mu=\varnothing$, if we observe that $s_{\rho^{\tss\prime}}(y \tss|\tss {-}a)\big|^{}_{y=-a}=0$ unless $\rho=\varnothing$. \epf The symmetry property \eqref{symprsu} implies the following dual version of Proposition~\ref{prop:imisom}. \bco\label{cor:dueva} Under the isomorphism $\La(x/y\vt a)\to \La(y\vt a')$ defined by the evaluation $x_i=-(a')_i$ for all $i\geqslant 1$ we have \ben s_{\theta}(x/y\vt a)\big|^{}_{x=-a'}=s_{\theta'}(y\vt a'). \een \eco Using Proposition~\ref{prop:imisom}, we can find the images of the double Schur functions with respect to the duality isomorphism $\om_a$ defined in \eqref{omega}. \bco\label{cor:imsuf} Under the isomorphism $\om_a:\La(x\vt a)\to\La(x\vt a')$ we have \beql{omsla} \om_a: s_{\la}(x\vt a)\mapsto s_{\la'}(x\vt a'). \eeq \eco \bpf The Littlewood--Richardson polynomials $c_{\la\mu}^{\tss\nu}(a)$ are defined by the expansion \eqref{lrpoldef}. Hence, by Proposition~\ref{prop:imisom} we have \ben s_{\la}(x/y\vt a)\ts s_{\mu}(x/y\vt a)= \sum_{\nu} c_{\la\mu}^{\ts\nu}(a)\ts s_{\tss\nu}(x/y\vt a). \een Using \eqref{symprsu}, we get \beql{symlr} c_{\la\mu}^{\tss\nu}(a)=c_{\la'\mu'}^{\tss\nu^{\tss\prime}}(a'). \eeq Now, observe that relation \eqref{omsla} can be taken as a definition of the $\QQ[a]$-module isomorphism $\La(x\vt a)\to\La(x\vt a')$. 
Moreover, this definition agrees with \eqref{omega}. Therefore, it is sufficient to verify that this $\QQ[a]$-module isomorphism is a ring homomorphism. Applying \eqref{symlr} we obtain \ben \bal \om_a\big(s_{\la}(x\vt a)\ts s_{\mu}(x\vt a)\big) {}&{}=\sum_{\nu} c_{\la\mu}^{\ts\nu}(a)\ts \om_a\big(s_{\tss\nu}(x\vt a)\big) =\sum_{\nu} c_{\la'\mu'}^{\ts\nu^{\tss\prime}}(a')\ts s_{\tss\nu^{\tss\prime}}(x\vt a')\\ {}&{}=s_{\la'}(x\vt a')\ts s_{\mu'}(x\vt a')= \om_a\big(s_{\la}(x\vt a)\big)\ts \om_a\big(s_{\mu}(x\vt a)\big). \eal \een \epf Proposition~\ref{prop:imisom} leads to the following definition. \bde\label{def:skdsf} For any skew diagram $\theta$ define the {\it skew double Schur function\/} $s_{\theta}(x\vt a)\in\La(x\vt a)$ as the image of $s_{\theta}(x/y\vt a)\in\La(x/y\vt a)$ under the isomorphism \eqref{isom}; that is, \ben s_{\theta}(x\vt a)=s_{\theta}(x/y\vt a)\big|^{}_{y=-a}. \een Equivalently, using \eqref{tabapr} and \eqref{taba}, respectively, we have \beql{tabadsdu} s_{\theta}(x\vt a)= \sum_{T} \prod_{\underset{\scriptstyle T(\alpha) \text{\ts\ts unprimed}}{\alpha\in\theta}} \big(x^{}_{T(\al)}-a_{-c(\al)+1}\big) \prod_{\underset{\scriptstyle T(\alpha) \text{\ts\ts primed}}{\alpha\in\theta}} \big(a_{-c(\al)+1}-a^{}_{T(\al)}\big), \eeq summed over all $\AAb$-tableaux $T$ of shape $\theta$; and \beql{tabads} s_{\theta}(x\vt a)= \sum_{T} \prod_{\underset{\scriptstyle T(\alpha) \text{\ts\ts unprimed}}{\alpha\in\theta}} \big(x^{}_{T(\al)}-a_{-c(\al)}\big) \prod_{\underset{\scriptstyle T(\alpha) \text{\ts\ts primed}}{\alpha\in\theta}} \big(a_{-c(\al)}-a^{}_{T(\al)}\big), \eeq summed over all $\AAb'$-tableaux $T$ of shape $\theta$. 
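For instance, when $\theta$ consists of a single box, each tableau comprises a single entry $i$ or $i^{\tss\prime}$, and the box has content $0$, so that \eqref{tabads} formally gives
\ben
s_{(1)}(x\vt a)=\sum_{i\geqslant 1}\big((x_i-a_0)+(a_0-a_i)\big)
=\sum_{i\geqslant 1}\ts(x_i-a_i),
\een
which agrees with the expansion of $h_1(x\vt a)$ provided by the generating function \eqref{genh}.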
Furthermore, by \eqref{defssf} the skew double Schur function $s_{\nu/\mu}(x\vt a)$ can also be defined as the sequence of polynomials \beql{seqpol} s_{\nu/\mu}(x\vt a)=\sum_{\mu\subseteq\ts\rho\ts\subseteq\tss\nu} \wt s_{\nu/\rho}(x\vt a)\ts s_{\rho^{\tss\prime}/\mu^{\tss\prime}}(-a^{(n)} \tss|\tss {-}a), \qquad n=1,2,\dots, \eeq where $x=(x_1,\dots,x_n)$ and $a^{(n)}=(a_1,\dots,a_n)$. \qed \ede For any partition $\mu$ introduce the sequence $a_{\mu}$ and the series $|a_{\mu}|$ by \ben a_{\mu}=(a_{1-\mu_1},a_{2-\mu_2},\dots)\Fand |a_{\mu}|=a_{1-\mu_1}+a_{2-\mu_2}+\dots. \een Given any element $P(x)\in\La(x\vt a)$, the value $P(a_{\mu})$ is a well-defined element of $\QQ[a]$. The vanishing theorem of Okounkov~\cite{o:qi, o:ni} states that \ben s_{\la}(a_{\rho}\vt a)=0\qquad\text{unless} \quad \la\subseteq\rho, \een and \beql{hoo} s_{\la}(a_{\la}\vt a)=\prod_{(i,j)\in\la} \big(a^{}_{i-\la_i}-a^{}_{\la'_j-j+1}\big). \eeq This theorem can be used to derive the interpolation formulas given in the next proposition. In a slightly different situation this derivation was performed in \cite[Propositions~3.3 \& 3.4]{ms:lr} relying on the approach of \cite{oo:ss}, and an obvious modification of those arguments works in the present context; see also \cite{f:ec}, \cite{kt:pe}. The expressions like $|a_{\nu}|-|a_{\mu}|$ used below are understood as the polynomials $\sum_{i\geqslant 1} (a_{i-\nu_i}-a_{i-\mu_i})$. We will write $\rho\to\sigma$ if the diagram $\si$ is obtained from the diagram $\rho$ by adding one box. \bpr\label{prop:interp} Given an element $P(x)\in\La(x\vt a)$, define the polynomials $c_{P,\ts\mu}^{\ts\nu}(a)$ by the expansion \beql{interpo} P(x)\ts s_{\mu}(x\vt a)=\sum_{\nu} c_{P,\ts\mu}^{\ts\nu}(a)\ts s_{\nu}(x\vt a). \eeq Then $c_{P,\ts\mu}^{\ts\nu}(a)=0$ unless $\mu\subseteq \nu$, and $c_{P,\ts\mu}^{\ts\mu}(a)=P(a_{\mu})$. 
Moreover, if $\mu\subseteq \nu$, then \ben c_{P,\ts\mu}^{\ts\nu}(a)=\frac{1}{|a_{\nu}|-|a_{\mu}|} \Bigg(\sum_{\mu^+,\ts\mu\to\mu^+}c_{P,\ts\mu^+}^{\ts\nu}(a) -\sum_{\nu^-,\ts\nu^-\to\nu}c_{P,\ts\mu}^{\ts\nu^-}(a)\Bigg). \een The same coefficient can also be found by the formula \beql{rati} c_{P,\ts\mu}^{\ts\nu}(a)=\sum_{R}\sum_{k=0}^{l} \frac{P(a_{\rho^{(k)}})}{(|a_{\rho^{(k)}}|-|a_{\rho^{(0)}}|) \ldots\wedge\ldots(|a_{\rho^{(k)}}|-|a_{\rho^{(l)}}|)}, \eeq summed over all sequences of partitions $R$ of the form \ben \mu=\rho^{(0)}\to\rho^{(1)}\to \dots\to\rho^{(l-1)}\to\rho^{(l)}=\nu, \een where the symbol $\wedge$ indicates that the zero factor should be skipped. \qed \epr \section{Cauchy identities and dual Schur functions} \label{sec:dua} \setcounter{equation}{0} \subsection{Definition of dual Schur functions and Cauchy identities} We let $\wh\La(x\vt a)$ denote the ring of formal series of the symmetric functions in the set of indeterminates $x=(x_1,x_2,\dots)$ with coefficients in $\QQ[a]$. More precisely, \beql{defla} \wh\La(x\vt a)=\Big\{\sum_{\la\in\Pc} c_{\la}(a)\ts s_{\la}(x) \ |\ c_{\la}(a)\in \QQ[a]\Big\}. \eeq The Schur functions $s_{\la}(x)$ can certainly be replaced here by any other classical basis of $\La$ parameterized by the set of partitions $\Pc$. We will use the symbol $\wh\La_n=\wh\La_n(x\vt a)$ to indicate the ring defined as in \eqref{defla} for the case of the finite set of variables $x=(x_1,\dots,x_n)$. An element of $\wh\La(x\vt a)$ can be viewed as a sequence of elements of $\wh\La_n$ with $n=0,1,\dots$, consistent with respect to the evaluation maps \ben \psi_n: \wh\La_n\to \wh\La_{n-1},\qquad Q(x_1,\dots,x_n)\mapsto Q(x_1,\dots,x_{n-1},0). 
\een For any $n$-tuple of nonnegative integers $\be=(\be_1,\dots,\be_n)$ set \ben A_{\be}(x, a)=\det\big[(x_i,a)^{\be_j} \tss(1-a^{}_{n-\be_j-1}x_i)(1-a^{}_{n-\be_j-2}\ts x_i)\dots (1-a^{}_{1-\be_j}x_i)\big]_{i,j=1}^n, \een where $(x_i,a)^{0}=1$ and \beql{dumon} (x_i,a)^{r}=\frac{x_i^r} {(1-a^{}_0\tss x_i)(1-a^{}_{-1}\tss x_i)\dots (1-a^{}_{1-r}\tss x_i)}, \qquad r\geqslant 1. \eeq Let $\la=(\la_1,\dots,\la_n)$ be a partition of length not exceeding $n$. Denote by $d$ the number of boxes on the diagonal of $\la$. That is, $d$ is determined by the condition that $\la_{d+1}\leqslant d\leqslant \la_d$. The $(i,j)$ entry $A_{ij}$ of the determinant $A_{\la+\de}(x, a)$ can be written more explicitly as \ben A_{ij}=\begin{cases}\dfrac{x_i^{\la_j+n-j}} {(1-a^{}_0\tss x_i)(1-a^{}_{-1}\tss x_i)\dots (1-a^{}_{j-\la_j}\tss x_i)} \quad&\text{for}\quad j=1,\dots,d,\\[1.5em] x_i^{\la_j+n-j}\ts (1-a^{}_1\tss x_i)(1-a^{}_2\tss x_i)\dots (1-a^{}_{j-\la_j-1}\tss x_i) \quad&\text{for}\quad j=d+1,\dots,n. \end{cases} \een Observe that the determinant $A_{\de}(x, a)$ corresponding to the empty partition equals the Vandermonde determinant, \ben A_{\de}(x, a)=\prod_{1\leqslant i<j\leqslant n}(x_i-x_j). \een Hence, the formula \beql{dusf} \wh s_{\la}(x\vt a)=\frac{A_{\la+\de}(x, a)}{A_{\de}(x, a)} \eeq defines an element of the ring $\wh\La_n$. Furthermore, setting $\wh s_{\la}(x\vt a)=0$ if the length of $\la$ exceeds the number of the $x$ variables, we obtain that the evaluation of the element $\wh s_{\la}(x\vt a)\in\wh\La_n$ at $x_n=0$ yields the corresponding element of $\wh\La_{n-1}$ associated with $\la$. Thus, the sequence $\wh s_{\la}(x\vt a)\in\wh\La_n$ for $n=0,1,\dots$ defines an element $\wh s_{\la}(x\vt a)$ of $\wh\La(x\vt a)$ which we call the {\it dual Schur function\/}. The lowest degree component of $\wh s_{\la}(x\vt a)$ in $x$ coincides with the Schur function $s_{\la}(x)$. 
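As an illustration, the case $n=2$, $\la=(1)$ of the determinant formula \eqref{dusf} can be checked numerically. The following sketch (the helper function and the sample values are ours, not part of the text) verifies, in exact rational arithmetic, the consistency of $\wh s_{(1)}$ under the evaluation $x_2=0$ and its specialization at $a_i=0$.

```python
from fractions import Fraction as F

def dual_schur_row_n2(x1, x2, a0, a1):
    """s-hat_{(1)}(x1, x2 | a) via the determinant formula for n = 2.

    Here lambda = (1), d = 1, and the matrix A_{lambda+delta} has entries
    A_{i1} = x_i^2 / (1 - a0*x_i) and A_{i2} = 1 - a1*x_i, while
    A_delta = x1 - x2 is the Vandermonde determinant.
    """
    det = (x1**2 / (1 - a0 * x1)) * (1 - a1 * x2) \
        - (x2**2 / (1 - a0 * x2)) * (1 - a1 * x1)
    return det / (x1 - x2)

# Sample points, chosen arbitrarily (exact rational arithmetic).
x1, x2, a0, a1 = F(1, 5), F(1, 7), F(1, 3), F(1, 2)

# Specializing a_i = 0 recovers the classical Schur polynomial s_(1) = x1 + x2.
assert dual_schur_row_n2(x1, x2, F(0), F(0)) == x1 + x2

# Consistency under the evaluation x2 = 0: the n = 2 value reduces to the
# n = 1 value (x1, a)^1 = x1 / (1 - a0*x1).
assert dual_schur_row_n2(x1, F(0), a0, a1) == x1 / (1 - a0 * x1)
```

Since the arithmetic is exact, no floating-point tolerance is involved in either check.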
Moreover, if $a$ is specialized to the sequence of zeros, then $\wh s_{\la}(x\vt a)$ specializes to $s_{\la}(x)$. Now we prove an analogue of the Cauchy identity involving the double and dual Schur functions. Consider one more set of variables $y=(y_1,y_2,\dots)$. \bth\label{thm:cch} The following identity holds \beql{cchid} \prod_{i,\ts j\geqslant 1}\frac{1-a_i\ts y_j}{1-x_i\ts y_j} =\sum_{\la\in\Pc} s_{\la}(x\vt a)\ts \wh s_{\la}(y\vt a). \eeq \eth \bpf We use a modification of the argument applied in \cite[Section~I.4]{m:sfh} for the proof of the classical Cauchy identity (see formula (4.3) there). As we pointed out in Section~\ref{subsec:dpr}, it will be sufficient to prove the identity in the case of finite sets of variables $x=(x_1,\dots,x_n)$ and $y=(y_1,\dots,y_n)$. We have \beql{adexy} A_{\de}(x\vt a)\ts A_{\de}(y, a) \sum_{\la\in\Pc} s_{\la}(x\vt a)\ts \wh s_{\la}(y\vt a) =\sum_{\ga} A_{\ga}(x\vt a)\ts A_{\ga}(y, a), \eeq summed over $n$-tuples $\ga=(\ga_1,\dots,\ga_n)$ with $\ga_1>\dots>\ga_n\geqslant 0$. Since \ben A_{\ga}(y, a)=\sum_{\si\in\Sym_n}\sgn\tss\si\ts \prod_{i=1}^n (y_i,a)^{\ga_{\si(i)}} \tss(1-a^{}_{n-\ga_{\si(i)}-1}y_i)\dots(1-a^{}_{1-\ga_{\si(i)}}y_i) \een and $A_{\ga}(x\vt a)$ is skew-symmetric under permutations of the components of $\ga$, we can write \eqref{adexy} in the form \beql{abexa} \sum_{\be} A_{\be}(x\vt a) \prod_{i=1}^n (y_i,a)^{\be_i} \tss(1-a^{}_{n-\be_i-1}y_i)\dots(1-a^{}_{1-\be_i}y_i), \eeq summed over $n$-tuples $\be=(\be_1,\dots,\be_n)$ of nonnegative integers. Due to the Jacobi--Trudi formula \eqref{jt}, we have \ben A_{\be}(x\vt a)=A_{\de}(x\vt a)\sum_{\si\in\Sym_n}\sgn\tss\si\cdot h_{\be_{\si(1)}-n+1}(x\vt a)\dots h_{\be_{\si(n)}}(x\vt \tau^{n-1}a).
\een Hence, \eqref{abexa} becomes \begin{multline} \label{axy} A_{\de}(x\vt a) \sum_{\al}h_{\al_1}(x\vt a)\dots h_{\al_n}(x\vt \tau^{n-1}a)\\ {}\times\sum_{\si\in\Sym_n}\sgn\tss\si\cdot \prod_{i=1}^n (y_{\si(i)},a)^{\al_i+n-i} \tss(1-a^{}_{i-\al_i-1}y_{\si(i)}) \dots(1-a^{}_{i-\al_i-n+1}y_{\si(i)}), \end{multline} summed over $n$-tuples $\al=(\al_1,\dots,\al_n)$ of nonnegative integers. However, using \eqref{genh}, for each $i=1,\dots,n$ we obtain \ben \bal \sum_{k=0}^{\infty}& h_k(x\vt \tau^{i-1}a)\ts (z,a)^{k+n-i} \tss(1-a^{}_{i-k-1}z) \dots(1-a^{}_{i-k-n+1}z)\\ {}&=z^{n-i}\ts (1-a_1\tss z)\dots(1-a_{i-1}\tss z) \sum_{k=0}^{\infty} h_k(x\vt \tau^{i-1}a)\ts (z,\tau^{i-1}a)^k\\ {}&=z^{n-i}\ts (1-a_1\tss z)\dots(1-a_{i-1}\tss z) \prod_{r=1}^{n}\frac{1-a_{i+r-1}\tss z}{1-x_r\tss z}, \eal \een where we put $z=y_{\si(i)}$. Therefore, \eqref{axy} simplifies to \ben \bal A_{\de}(x\vt a)&\ts \prod_{i,j=1}^{n}\frac{1-a_{i}\tss y_j}{1-x_i\tss y_j} \sum_{\si\in\Sym_n}\sgn\tss\si\cdot \prod_{i=1}^n y_{\si(i)}^{n-i} \ts(1-a^{}_{n+1}y_{\si(i)}) \dots(1-a^{}_{n+i-1}y_{\si(i)})\\ {}&=A_{\de}(x\vt a)\tss A_{\de}(y, a)\ts \prod_{i,j=1}^{n}\frac{1-a_{i}\tss y_j}{1-x_i\tss y_j}, \eal \een thus completing the proof. \epf Let $z=(z_1,z_2,\dots)$ be another set of variables. \bco\label{cor:ssduco} The following identity holds \ben \prod_{i,\ts j\geqslant 1}\frac{1+y_i\ts z_j}{1-x_i\ts z_j} =\sum_{\la\in\Pc} s_{\la}(x/y\vt a)\ts \wh s_{\la}(z\vt a). \een \eco \bpf Observe that the elements $\wh s_{\la}(z\vt a)\in \wh\La(z\vt a)$ are uniquely determined by this relation. Hence, the claim follows by the application of Proposition~\ref{prop:imisom} and Theorem~\ref{thm:cch}. \epf Some other identities of this kind are immediate from the symmetry property \eqref{symprsu} and Corollary~\ref{cor:ssduco}.
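Note also that in the case of a single variable $y=(y_1)$ the identity \eqref{cchid} reduces to the generating function \eqref{genh}: here $\wh s_{\la}(y_1\vt a)$ vanishes unless $\ell(\la)\leqslant 1$, while $\wh s_{(k)}(y_1\vt a)=(y_1,a)^k$ by \eqref{dusf} and $s_{(k)}(x\vt a)=h_k(x\vt a)$ by \eqref{jt}, so that \eqref{cchid} takes the form
\ben
\prod_{i\geqslant 1}\frac{1-a_i\ts y_1}{1-x_i\ts y_1}
=\sum_{k=0}^{\infty} \frac{h_k(x\vt a)\ts y_1^k}
{(1-a_0\tss y_1)\dots(1-a_{-k+1}\tss y_1)}.
\een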
\bco\label{cor:duco} We have the identities \ben \prod_{i,\ts j\geqslant 1}\frac{1+x_i\ts z_j}{1-y_i\ts z_j} =\sum_{\la\in\Pc} s_{\la}(x/y\vt a)\ts \wh s_{\la'}(z\vt a') \een and \ben \prod_{i,\ts j\geqslant 1}\frac{1+x_i\ts y_j}{1+a_i\ts y_j} =\sum_{\la\in\Pc} s_{\la}(x\vt a)\ts \wh s_{\la'}(y\vt a'). \een \vskip-1.2\baselineskip \qed \eco \subsection{Combinatorial presentation} Given a skew diagram $\theta$, introduce the corresponding {\it skew dual Schur function\/} $\wh s_{\theta}(x\vt a)$ by the formula \beql{skewd} \wh s_{\theta}(x\vt a)=\sum_{T} \prod_{\al\in\theta}X_{T(\al)}(a^{}_{-c(\al)+1},a^{}_{-c(\al)}), \eeq summed over the reverse $\theta$-tableaux $T$, where \ben X_i(g,h)= \frac{x_i\ts(1-g\ts x_{i-1})\dots (1-g\ts x_1)} {(1-h\ts x_{i})\dots (1-h\ts x_1)}. \een \bth\label{thm:tab} For any partition $\mu$ the following identity holds \beql{skcid} \prod_{i,\ts j\geqslant 1}\frac{1-a_{i-\mu_i}\ts y_j}{1-x_i\ts y_j} \ts s_{\mu}(x\vt a) =\sum_{\nu} s_{\nu}(x\vt a)\ts \wh s_{\nu/\mu}(y\vt a), \eeq summed over partitions $\nu$ containing $\mu$. In particular, if $\theta=\la$ is a normal {\rm(}nonskew{\rm)} diagram, then the dual Schur function $\wh s_{\la}(x\vt a)$ admits the tableau presentation \eqref{skewd}. \eth \bpf It will be sufficient to consider the case where the set of variables $y$ is finite, $y=(y_1,\dots,y_n)$. We will argue by induction on $n$ and suppose that $n\geqslant 1$. By the induction hypothesis, the identity \eqref{skcid} holds for the set of variables $y'=(y_2,\dots,y_n)$. Hence, we need to verify that \ben \prod_{i\geqslant 1}\frac{1-a_{i-\mu_i}\ts y_1}{1-x_i\ts y_1} \ts\sum_{\la} s_{\la}(x\vt a)\ts \wh s_{\la/\mu}(y'\vt a) =\sum_{\nu} s_{\nu}(x\vt a)\ts \wh s_{\nu/\mu}(y\vt a). 
\een However, due to \eqref{genh}, \ben \prod_{i\geqslant 1}\frac{1-a_{i-\mu_i}\ts y_1}{1-x_i\ts y_1} =\sum_{k=0}^{\infty} \frac{h_k(x\vt a^{\mu})\ts y_1^k} {(1-a_0\tss y_1)\dots(1-a_{-k+1}\tss y_1)}, \een where $a^{\mu}$ denotes the sequence of parameters such that $(a^{\mu})_i=a_{i-\mu_i}$ for $i\geqslant 1$ and $(a^{\mu})_i=a_i$ for $i\leqslant 0$. Now define polynomials $c_{\la,(k)}^{\nu}(a,a^{\mu})\in\QQ[a]$ by the expansion \ben s_{\la}(x\vt a)\ts h_k(x\vt a^{\mu})=\sum_{\nu} c_{\la,(k)}^{\nu}(a,a^{\mu})\ts s_{\nu}(x\vt a). \een Hence, the claim will follow if we show that \beql{snumu} \wh s_{\nu/\mu}(y\vt a)=\sum_{\la,\ts k}c_{\la,(k)}^{\nu}(a,a^{\mu}) \ts \wh s_{\la/\mu}(y'\vt a)\ts \frac{y_1^k} {(1-a_0\tss y_1)\dots(1-a_{-k+1}\tss y_1)}. \eeq The definition \eqref{skewd} of the skew dual Schur functions implies that \ben \wh s_{\nu/\mu}(y\vt a)=\sum_{\la} \wh s_{\la/\mu}(y'\vt a)\ts\prod_{\al\in\la/\mu} \frac{1-a_{-c(\al)+1}\tss y_1}{1-a_{-c(\al)}\tss y_1} \ts\prod_{\be\in\nu/\la} \frac{y_1}{1-a_{-c(\be)}\tss y_1}, \een summed over diagrams $\la$ such that $\mu\subseteq\la\subseteq\nu$ and $\nu/\la$ is a horizontal strip (i.e., every column of this diagram contains at most one box). Therefore, \eqref{snumu} will follow from the relation \ben \bal \sum_{k}&c_{\la,(k)}^{\nu}(a,a^{\mu}) \ts\frac{y_1^k} {(1-a_0\tss y_1)\dots(1-a_{-k+1}\tss y_1)}\\ {}&=\prod_{\al\in\la/\mu} \frac{1-a_{-c(\al)+1}\tss y_1}{1-a_{-c(\al)}\tss y_1} \ts\prod_{\be\in\nu/\la} \frac{y_1}{1-a_{-c(\be)}\tss y_1} \eal \een which takes a more convenient form after the substitution $t=y_1^{-1}$: \beql{sumts} \sum_{k} \ts\frac{c_{\la,(k)}^{\nu}(a,a^{\mu})} {(t-a_0)\dots(t-a_{-k+1})}=\prod_{\al\in\la/\mu} \big(t-a_{-c(\al)+1}\big) \ts\prod_{\be\in\nu/\mu} \big(t-a_{-c(\be)}\big)^{-1}. \eeq We will verify the latter by induction on $|\nu|-|\la|$. Suppose first that $\nu=\la$.
Then $c_{\la,(k)}^{\la}(a,a^{\mu})=h_k(a_{\la}\vt a^{\mu})$ by Proposition~\ref{prop:interp}, and relation \eqref{genh} implies that \ben \sum_{k} \ts\frac{h_k(a_{\la}\vt a^{\mu})} {(t-a_0)\dots(t-a_{-k+1})}=\prod_{i\geqslant 1} \frac{t-a_{i-\mu_i}}{t-a_{i-\la_i}}. \een This expression coincides with \ben \prod_{\al\in\la/\mu} \frac{t-a_{-c(\al)+1}}{t-a_{-c(\al)}}, \een thus verifying \eqref{sumts} in the case under consideration. Suppose now that $|\nu|-|\la|\geqslant 1$. By Proposition~\ref{prop:interp}, we have \ben c_{\la,(k)}^{\nu}(a,a^{\mu})=\frac{1}{|a_{\nu}|-|a_{\la}|}\ts \Bigg(\sum_{\la^+,\ts\la\to\la^+}c_{\la^+,(k)}^{\nu}(a,a^{\mu}) -\sum_{\nu^-,\ts\nu^-\to\nu}c_{\la,(k)}^{\nu^-}(a,a^{\mu})\Bigg). \een Hence, applying the induction hypothesis, we can write the left hand side of \eqref{sumts} in the form \ben \bal \frac{1}{|a_{\nu}|-|a_{\la}|}\ts \Big(&\sum_{\la^+}\prod_{\al\in\la^+/\mu} \big(t-a_{-c(\al)+1}\big) \ts\prod_{\be\in\nu/\mu} \big(t-a_{-c(\be)}\big)^{-1}\\ {}-{}&\sum_{\nu^-}\prod_{\al\in\la/\mu} \big(t-a_{-c(\al)+1}\big) \ts\prod_{\be\in\nu^-/\mu} \big(t-a_{-c(\be)}\big)^{-1}\Big). \eal \een Since $\nu/\la$ is a horizontal strip, we have \ben \sum_{\al=\la^+/\la}\big(t-a_{-c(\al)+1}\big)- \sum_{\al=\nu/\nu^-}\big(t-a_{-c(\al)}\big)=|a_{\nu}|-|a_{\la}|, \een so that the previous expression simplifies to \ben \prod_{\al\in\la/\mu} \big(t-a_{-c(\al)+1}\big) \ts\prod_{\be\in\nu/\mu} \big(t-a_{-c(\be)}\big)^{-1} \een completing the proof of \eqref{sumts}. The second part of the proposition follows from Theorem~\ref{thm:cch} and the fact that the elements $\wh s_{\la}(y\vt a)\in\wh\La(y\vt a)$ are uniquely determined by the relation \eqref{cchid}. \epf \bre\label{rem:aze} Under the specialization $a_i=0$ the identity of Theorem~\ref{thm:tab} turns into a particular case of the identity in \cite[Example~I.5.26]{m:sfh}.
\qed \ere Since the skew dual Schur functions are uniquely determined by the expansion \eqref{skcid}, the following corollary is immediate from Theorem~\ref{thm:tab}. \bco\label{cor:symme} The skew dual Schur functions defined in \eqref{skewd} belong to the ring $\wh\La(x\vt a)$. In particular, they are symmetric in the variables $x$. \qed \eco Recall the Littlewood--Richardson polynomials defined by \eqref{lrpoldef}. \bpr\label{prop:skelrp} For any skew diagram $\nu/\mu$ we have the expansion \ben \wh s_{\nu/\mu}(y\vt a)=\sum_{\la} c_{\la\mu}^{\tss\nu}(a)\ts \wh s_{\la}(y\vt a). \een \epr \bpf We use an argument similar to the one used in \cite[Section~I.5]{m:sfh}. Consider the set of variables $(y,y^{\tss\prime})$, where $y=(y_1,y_2,\dots)$ and $y^{\tss\prime}=(y^{\tss\prime}_1,y^{\tss\prime}_2,\dots)$ and assume they are ordered in such a way that each $y_i$ precedes each $y^{\tss\prime}_j$. By the tableau presentation \eqref{skewd} of the dual Schur functions, we get \beql{snuyza} \wh s_{\nu}(y,y^{\tss\prime}\vt a)=\sum_{\mu\subseteq\nu} \wh s_{\nu/\mu}(y\vt a)\ts \wh s_{\mu}(y^{\tss\prime}\vt a). \eeq On the other hand, by Theorem~\ref{thm:cch}, \ben \bal {}&\sum_{\nu}s_{\nu}(x\vt a)\ts \wh s_{\nu}(y,y^{\tss\prime}\vt a)= \prod_{i,j\geqslant 1}\frac{1-a_i\tss y_j}{1-x_i\tss y_j} \ts\prod_{i,k\geqslant 1} \frac{1-a_i\tss y^{\tss\prime}_k}{1-x_i\tss y^{\tss\prime}_k}\\ {}&=\sum_{\la,\ts\mu}s_{\la}(x\vt a)\ts \wh s_{\la}(y\vt a)\ts s_{\mu}(x\vt a)\ts \wh s_{\mu}(y^{\tss\prime}\vt a) =\sum_{\la,\ts\mu,\ts\nu}c_{\la\mu}^{\tss\nu}(a)\ts s_{\nu}(x\vt a)\ts \wh s_{\la}(y\vt a)\ts \wh s_{\mu}(y^{\tss\prime}\vt a) \eal \een which proves that \beql{snulrp} \wh s_{\nu}(y,y^{\tss\prime}\vt a)= \sum_{\la,\ts\mu}c_{\la\mu}^{\tss\nu}(a)\ts \wh s_{\la}(y\vt a)\ts \wh s_{\mu}(y^{\tss\prime}\vt a). \eeq The desired relation now follows by comparing \eqref{snuyza} and \eqref{snulrp}.
\epf \subsection{Jacobi--Trudi-type formulas} Introduce the {\it dual elementary\/} and {\it complete symmetric functions\/} by \ben \wh e_k(x\vt a)=\wh s_{(1^k)}(x\vt a),\qquad \wh h_k(x\vt a)=\wh s_{(k)}(x\vt a). \een By Theorem~\ref{thm:tab}, \ben \bal \wh e_k(x\vt a)&=\sum_{i_1>\dots>i_k} X_{i_1}(a_1,a_0)X_{i_2}(a_2,a_1)\dots X_{i_k}(a_k,a_{k-1}),\\ \wh h_k(x\vt a)&=\sum_{i_1\geqslant\dots\geqslant i_k} X_{i_1}(a_1,a_0)X_{i_2}(a_0,a_{-1})\dots X_{i_k}(a_{-k+2},a_{-k+1}). \eal \een \bpr\label{prop:gendu} We have the following generating series formulas \ben \bal 1+\sum_{k=1}^{\infty}\wh e_k(x\vt a)\ts (t+a_0)(t+a_1)\dots (t+a_{k-1})= \prod_{i=1}^{\infty}\frac{1+t\tss x_i}{1-a_0\tss x_i},\\ 1+\sum_{k=1}^{\infty}\wh h_k(x\vt a)\ts (t-a_1)(t-a_0)\dots (t-a_{-k+2})= \prod_{i=1}^{\infty}\frac{1-a_1\tss x_i}{1-t\tss x_i}. \eal \een \epr \bpf The first relation follows from the second identity in Corollary~\ref{cor:duco} by taking $x=(t)$ and then replacing $a$ by $a'$ and $y_i$ by $x_i$ for all $i$. Similarly, the second relation follows from Theorem~\ref{thm:cch} by taking $x=(t)$ and replacing $y_i$ by $x_i$. \epf We can now prove an analogue of the Jacobi--Trudi formula for the dual Schur functions. \bpr\label{prop:jtdu} If $\la$ and $\mu$ are partitions of length not exceeding $n$, then \beql{jtdu} \wh s_{\la/\mu}(x\vt a) =\det\big[\tss\wh h_{\la_i-\mu_j-i+j} (x\vt \tau^{\tss -\mu_j+j-1}\tss a)\big]_{i,j=1}^n. \eeq \epr \bpf Apply Theorem~\ref{thm:tab} for the finite set of variables $x=(x_1,\dots,x_n)$ and multiply both sides of \eqref{skcid} by $A_{\de}(x\vt a)$. This gives \beql{pramu} \prod_{j\geqslant 1}\prod_{i=1}^n \frac{1-a_{i-\mu_i}\ts y_j}{1-x_i\ts y_j} \ts A_{\mu+\de}(x\vt a) =\sum_{\la} A_{\la+\de}(x\vt a)\ts \wh s_{\la/\mu}(y\vt a). \eeq For any $\si\in\Sym_n$ we have \ben \prod_{j\geqslant 1}\prod_{i=1}^n \frac{1-a_{i-\mu_i}\ts y_j}{1-x_i\ts y_j} =\prod_{j\geqslant 1}\prod_{i=1}^n \frac{1-a_{i-\mu_i}\ts y_j}{1-x_{\si(i)}\ts y_j}. 
\een By the second formula of Proposition~\ref{prop:gendu}, \ben \prod_{j\geqslant 1} \frac{1-a_{i-\mu_i}\ts y_j}{1-x_{\si(i)}\ts y_j} =\sum_{k=0}^{\infty} \wh h_k(y\vt\tau^{-\mu_i+i-1}a)\ts (x_{\si(i)}-a_{i-\mu_i})\dots(x_{\si(i)}-a_{i-\mu_i-k+1}). \een Since \ben A_{\mu+\de}(x\vt a)=\sum_{\si\in\Sym_n}\sgn\si\cdot (x_{\si(1)}\vt a)^{\mu_1+n-1}\dots (x_{\si(n)}\vt a)^{\mu_n}, \een the left hand side of \eqref{pramu} can be written in the form \ben \sum_{\si\in\Sym_n}\sgn\si\ts \prod_{i=1}^n\sum_{k_i=0}^{\infty} (x_{\si(i)}-a_n)\dots(x_{\si(i)}-a_{i-\mu_i-k_i+1}) \ts \wh h_{k_i}(y\vt\tau^{-\mu_i+i-1}a). \een Hence, comparing the coefficients of $(x_{1}\vt a)^{\la_1+n-1}\dots (x_n\vt a)^{\la_n}$ on both sides of \eqref{pramu}, we get \ben \wh s_{\la/\mu}(y\vt a)=\sum_{\rho\in\Sym_n} \sgn\rho\ts\prod_{i=1}^n \wh h_{\la_i-\mu_{\rho(i)}-i+\rho(i)} (y\vt\tau^{-\mu_{\rho(i)}+\rho(i)-1}a), \een as required. \epf Proposition~\ref{prop:jtdu} implies that the dual Schur functions may be regarded as a specialization of the generalized Schur functions described in \cite[9th~Variation]{m:sf}. Namely, in the notation of that paper, specialize the variables $h_{rs}$ by \beql{hrsspe} h_{rs}=\wh h_r(x\vt\tau^{-s}a),\qquad r\geqslant 1,\quad s\in\ZZ. \eeq Then the Schur functions $s_{\la/\mu}$ of \cite{m:sf} become $\wh s_{\la/\mu}(x\vt a)$. Hence the following corollaries are immediate from $(9.6')$ and $(9.7)$ in \cite{m:sf} and Proposition~\ref{prop:jtdu}. The first of them is an analogue of the N\"{a}gelsbach--Kostka formula. \bco\label{cor:nkdu} If $\la$ and $\mu$ are partitions such that the lengths of $\la'$ and $\mu'$ do not exceed $m$, then \beql{nkdu} \wh s_{\la/\mu}(x\vt a) =\det\big[\wh e_{\la'_i-\mu'_j-i+j} (x\vt\tau^{\tss \mu'_j-j+1}\tss a)\big]_{i,j=1}^m. \eeq \eco Suppose that $\la$ is a diagram with $d$ boxes on the main diagonal.
Write $\la$ in the Frobenius notation \ben \la=(\al_1,\dots,\al_d\tss|\tss\be_1,\dots,\be_d)=(\al\tss|\tss\be), \een where $\al_i=\la_i-i$ and $\be_i=\la'_i-i$. The following is an analogue of the Giambelli formula. \bco\label{cor:gidu} We have the identity \beql{gidu} \wh s_{(\al\tss|\tss\be)}(x\vt a) =\det\big[\wh s_{(\al_i\tss|\tss\be_j)} (x\vt a)\big]_{i,j=1}^d. \eeq \eco \subsection{Expansions in terms of Schur functions} We will now deduce expansions of the dual Schur functions in terms of the Schur functions $s_{\la}(x)$ whose coefficients are elements of $\QQ[a]$ written explicitly as certain determinants. In Theorem~\ref{thm:tabre} below we will give alternative tableau presentations for these coefficients. Suppose that $\mu$ is a diagram containing $d$ boxes on the main diagonal. \bpr\label{prop:dsfexp} The dual Schur function $\wh s_{\mu}(x\vt a)$ can be written as the series \ben \bal \wh s_{\mu}(x\vt a)=\sum_{\la}(-1)^{n(\la/\mu)}\ts &\det\big[h_{\la_i-\mu_j-i+j} (a^{}_0,a^{}_{-1},\dots,a^{}_{j-\mu_j})\big]_{i,j=1}^d\\ {}\times{}& \det\big[e_{\la_i-\mu_j-i+j} (a^{}_1,a^{}_2,\dots,a^{}_{j-\mu_j-1})\big]_{i,j=d+1}^n \ts s_{\la}(x), \eal \een summed over diagrams $\la$ which contain $\mu$ and such that $\la$ has $d$ boxes on the main diagonal, where $n(\la/\mu)$ denotes the total number of boxes in the diagram $\la/\mu$ in rows $d+1,d+2,\dots,n=\ell(\la)$. \epr \bpf It will be sufficient to prove the formula for the case of a finite set of variables $x=(x_1,\dots,x_n)$. We use the definition \eqref{dusf} of the dual Schur functions. The entries $A_{ij}$ of the determinant $A_{\mu+\de}(x, a)$ can be written as \ben A_{ij}=\begin{cases}{\displaystyle\sum_{p_j\geqslant 0} h_{p_j}(a^{}_0,a^{}_{-1},\dots,a^{}_{j-\mu_j})\ts x_i^{\mu_j+p_j+n-j}} \quad&\text{for}\quad j=1,\dots,d,\\[2em] {\displaystyle\sum_{p_j\geqslant 0} (-1)^{p_j}\ts e_{p_j}(a^{}_1,a^{}_2,\dots,a^{}_{j-\mu_j-1})\ts x_i^{\mu_j+p_j+n-j}} \quad&\text{for}\quad j=d+1,\dots,n.
\end{cases} \een Hence, \eqref{dusf} gives \ben \bal \wh s_{\mu}(x\vt a)={}&\sum_{p_1,\dots,\ts p_n} \prod_{j=1}^d h_{p_j}(a^{}_0,a^{}_{-1},\dots,a^{}_{j-\mu_j}) \prod_{j=d+1}^n (-1)^{p_j}\ts e_{p_j}(a^{}_1,a^{}_2,\dots,a^{}_{j-\mu_j-1})\\[1em] {}&{}\times\det[x_i^{\mu_j+p_j+n-j}]/\det[x_i^{n-j}]. \eal \een The ratio of the determinants in this formula is nonzero only if \ben \mu_{\si(j)}+p_{\si(j)}+n-{\si(j)}=\la_j+n-j,\qquad j=1,\dots,n, \een for some diagram $\la$ containing $\mu$ and some permutation $\si$ of the set $\{1,\dots,n\}$. Moreover, since $e_{p_j}(a^{}_1,a^{}_2,\dots,a^{}_{j-\mu_j-1})=0$ for $p_j>j-\mu_j-1$, the number of diagonal boxes in $\la$ equals $d$. The ratio can then be written as \ben \det[x_i^{\mu_j+p_j+n-j}]/\det[x_i^{n-j}]=\sgn\si\cdot s_{\la}(x), \een which gives the desired formula for the coefficients. \epf \bco\label{cor:sone} Using the Frobenius notation $(\al\tss|\tss\be)$ for the hook diagram $(\al+1,1^{\be})$, we have \ben \wh s_{(\al\tss|\tss\be)}(x\vt a)= \sum_{p,\ts q\geqslant 0}(-1)^q\ts h_p(a^{}_0,a^{}_{-1},\dots,a^{}_{-\al})\ts h_q(a^{}_1,a^{}_2,\dots,a^{}_{\be+1})\ts s^{}_{(\al+p\tss|\tss\be+q)}(x). \een \eco \bpf By Proposition~\ref{prop:dsfexp}, the coefficient of $s^{}_{(\al+p\tss|\tss\be+q)}(x)$ in the expansion of the dual Schur function $\wh s_{(\al\tss|\tss\be)}(x\vt a)$ equals \beql{hdete} (-1)^q\ts h_p(a^{}_0,a^{}_{-1},\dots,a^{}_{-\al})\ts \det\big[e_{j-i+1} (a^{}_1,a^{}_2,\dots,a^{}_{\be+j})\big]_{i,j=1}^q. \eeq Using the relations for the elementary symmetric polynomials \ben e_k(a^{}_1,a^{}_2,\dots,a^{}_{\be+j}) =e_k(a^{}_1,a^{}_2,\dots,a^{}_{\be+j-1})+ e_{k-1}(a^{}_1,a^{}_2,\dots,a^{}_{\be+j-1})\ts a^{}_{\be+j}, \een it is not difficult to bring the determinant which occurs in \eqref{hdete} to the form \beql{deteq} \det\big[e_{j-i+1} (a^{}_1,a^{}_2,\dots,a^{}_{\be+j})\big]_{i,j=1}^q =\det\big[e_{j-i+1} (a^{}_1,a^{}_2,\dots,a^{}_{\be+1})\big]_{i,j=1}^q. 
\eeq Indeed, denote by $C_1,\dots,C_q$ the columns of the $q\times q$ matrix which occurs on the left hand side. Now replace $C_j$ by $C_j-a^{}_{\be+j}\ts C_{j-1}$ consecutively for $j=q,q-1,\dots,2$. These operations leave the determinant of the matrix unchanged, while for $j\geqslant 2$ the $(i,j)$ entry of the new matrix equals $e_{j-i+1} (a^{}_1,a^{}_2,\dots,a^{}_{\be+j-1})$. Applying similar column operations to the new matrix and using an obvious induction, we bring its determinant to the form which occurs on the right hand side of \eqref{deteq}. However, this determinant coincides with $h_q(a^{}_1,a^{}_2,\dots,a^{}_{\be+1})$ due to the N\"{a}gelsbach--Kostka formula (i.e., \eqref{nkdu} with the zero sequence $a$; that is, $a_i=0$ for all $i\in\ZZ$). \epf \bex\label{ex:sone} The dual Schur function corresponding to the single box diagram is given by \ben \wh s_{(1)}(x\vt a)=\sum_{p,\ts q\geqslant 0} (-1)^q\ts a_0^p\ts a_1^q\ts s^{}_{(p\tss|\tss q)}(x). \een \vskip-1.2\baselineskip \qed \eex Recall that the involution $\om:\La\to\La$ on the ring of symmetric functions in $x$ takes $s_{\la}(x)$ to $s_{\la'}(x)$; see \cite[Section~I.2]{m:sfh} or Section~\ref{sec:def} above. Let us extend $\om$ to the $\QQ[a]$-linear involution \beql{whome} \wh\om:\wh\La(x\vt a)\to \wh\La(x\vt a),\qquad \sum_{\la\in\Pc} c_{\la}(a)\ts s_{\la}(x)\mapsto \sum_{\la\in\Pc} c_{\la}(a)\ts s_{\la'}(x), \eeq where $c_{\la}(a)\in\QQ[a]$. We will find the images of the dual Schur functions under $\wh\om$. As before, by $a'$ we denote the sequence of variables such that $(a')_i=-a_{-i+1}$ for all $i\in\ZZ$. \bco\label{cor:omhat} For any skew diagram $\la/\mu$ we have \beql{imomdu} \wh\om:\wh s_{\la/\mu}(x\vt a)\mapsto \wh s_{\la'/\mu'}(x\vt a'). \eeq \eco \bpf By Corollary~\ref{cor:sone}, for any $m\in\ZZ$ \ben \wh\om: \wh s_{(\al\tss|\tss\be)}(x\vt\tau^ma)\mapsto \wh s_{(\be\tss|\tss\al)}(x\vt\tau^{-m}a').
\een In particular, \ben \wh\om: \wh h_k(x\vt\tau^m a)\mapsto \wh e_k(x\vt\tau^{-m}a'), \qquad k\geqslant 0. \een The statement now follows from \eqref{jtdu} and \eqref{nkdu}. \epf Note that \eqref{imomdu} with $\mu=\varnothing$ also follows from Corollary~\ref{cor:sone} and the Giambelli formula \eqref{gidu}. \bre\label{rem:athom} The involution $\wh\om$ does not coincide with the involution introduced in \cite[(9.6)]{m:sfh}. The latter is defined on the ring generated by the elements $h_{rs}$ and takes the generalized Schur function $s_{\la/\mu}$ to $s_{\la'/\mu'}$. Therefore, under the specialization \eqref{hrsspe}, the image of $\wh s_{\la/\mu}(x\vt a)$ would be $\wh s_{\la'/\mu'}(x\vt a)$ which is different from \eqref{imomdu}. \qed \ere We can now derive an alternative expansion of the dual Schur functions in terms of the Schur functions $s_{\la}(x)$; cf. Proposition~\ref{prop:dsfexp}. Suppose that $\la$ is a diagram which contains $\mu$ and such that $\mu$ and $\la$ have the same number of boxes $d$ on the diagonal. By a {\it hook $\la/\mu$-tableau\/} $T$ we will mean a tableau obtained by filling in the boxes of $\la/\mu$ with integers in the following way. The entries in the first $d$ rows weakly increase along the rows and strictly increase down the columns, and all entries in row $i$ belong to the set $\{i-\mu_i,\dots,-1,0\}$ for $i=1,\dots,d$; the entries in the first $d$ columns weakly decrease down the columns and strictly decrease along the rows, and all entries in column $j$ belong to the set $\{1,2,\dots,\mu'_j-j+1\}$ for $j=1,\dots,d$. Then we define the corresponding flagged Schur function $\vp_{\la/\mu}(a)$ by the formula \ben \vp_{\la/\mu}(a)=\sum_T \prod_{\al\in\la/\mu} a^{}_{T(\al)}, \een summed over the hook $\la/\mu$-tableaux $T$. \bth\label{thm:tabre} Let $\mu$ be a diagram and let $d$ be the number of boxes on the main diagonal of $\mu$. 
We have the expansion of the dual Schur function $\wh s_{\mu}(x\vt a)$ \ben \wh s_{\mu}(x\vt a)=\sum_{\la}(-1)^{n(\la/\mu)}\ts \vp_{\la/\mu}(a)\ts s_{\la}(x), \een summed over diagrams $\la$ which contain $\mu$ and such that $\la$ has $d$ boxes on the main diagonal, where $n(\la/\mu)$ denotes the total number of boxes in the diagram $\la/\mu$ in rows $d+1,d+2,\dots$. \eth \bpf Consider the expansions of $\wh s_{\mu}(x\vt a)$ and $\wh s_{\mu'}(x\vt a')$ provided by Proposition~\ref{prop:dsfexp}. By Corollary~\ref{cor:omhat}, $\wh s_{\mu}(x\vt a)$ is the image of $\wh s_{\mu'}(x\vt a')$ under the involution $\wh\om$. Since $\wh\om:s_{\la}(x)\mapsto s_{\la'}(x)$, taking $\la_i=\mu_i$ for $i=1,\dots,d$ and comparing the coefficients of $s_{\la}(x)$ in the expansions of $\wh s_{\mu}(x\vt a)$ and $\wh\om\big(\wh s_{\mu'}(x\vt a')\big)$, we can conclude that \begin{multline} (-1)^{n(\la/\mu)} \det\big[e_{\la_i-\mu_j-i+j} (a^{}_1,a^{}_2,\dots,a^{}_{j-\mu_j-1})\big]_{i,j\geqslant d+1}\\ =\det\big[h_{\la'_i-\mu'_j-i+j} (a'_0,a'_{-1},\dots,a'_{j-\mu'_j})\big]_{i,j=1}^d \non \end{multline} so that \ben \det\big[e_{\la_i-\mu_j-i+j} (a^{}_1,a^{}_2,\dots,a^{}_{j-\mu_j-1})\big]_{i,j\geqslant d+1} =\det\big[h_{\la'_i-\mu'_j-i+j} (a^{}_1,a^{}_2,\dots,a^{}_{\mu'_j-j+1})\big]_{i,j=1}^d. \een On the other hand, if $\la$ is a diagram containing $\mu$ and such that $\la$ has $d$ boxes on the main diagonal, both determinants \ben \det\big[h_{\la_i-\mu_j-i+j} (a^{}_0,a^{}_{-1},\dots,a^{}_{j-\mu_j})\big]_{i,j=1}^d,\quad \det\big[h_{\la'_i-\mu'_j-i+j} (a^{}_1,a^{}_2,\dots,a^{}_{\mu'_j-j+1})\big]_{i,j=1}^d \een coincide with the respective `row-flagged Schur functions' of \cite[(8.2)]{m:sf}, \cite{w:fs}, and they admit the required tableau presentations. \epf It is clear from the definition of the flagged Schur function $\vp_{\la/\mu}(a)$ that it can be written as the product of two polynomials. 
More precisely, suppose that the diagram $\la$ contains $\mu$ and both $\la$ and $\mu$ have $d$ boxes on their main diagonals. Let $(\la/\mu)_+$ denote the part of the skew diagram $\la/\mu$ contained in the top $d$ rows. With this notation, the hook flagged Schur function $\vp_{\la/\mu}(a)$ can be written as \beql{whsufa} \vp_{\la/\mu}(a)=(-1)^{n(\la/\mu)}\ts \vp_{(\la/\mu)_+}(a)\ts\vp_{(\la'/\mu')_+}(a'). \eeq In addition to the tableau presentation of the polynomial $\vp_{(\la/\mu)_+}(a)$ given above, we can get an alternative presentation based on the column-flagged Schur functions; see \cite[$(8.2')$]{m:sf}, \cite{w:fs}. Due to \eqref{whsufa}, this also gives alternative formulas for the coefficients in the expansion of $\wh s_{\mu}(x\vt a)$. \bco\label{cor:anexpa} We have the tableau presentation \ben \vp_{(\la/\mu)_+}(a)= \sum_T \prod_{\al\in(\la/\mu)_+} a^{}_{T(\al)}, \een summed over the $(\la/\mu)_+$-tableaux $T$ whose entries in column $j$ belong to the set $\{0,-1,\dots,-j+\mu'_j+2\}$ for $j\geqslant d+1$, and the entries weakly increase along the rows and strictly increase down the columns. \qed \eco Our next goal is to derive the inverse formulas expressing the Schur functions $s_{\mu}(x)$ as a series of the dual Schur functions $\wh s_{\la}(x\vt a)$. \bpr\label{prop:inve} We have the expansion \ben \bal s_{\mu}(x)=\sum_{\la}(-1)^{m(\la/\mu)}\ts &\det\big[e_{\la_i-\mu_j-i+j} (a^{}_0,a^{}_{-1},\dots,a^{}_{i-\la_i+1})\big]_{i,j=1}^d\\ {}\times{}& \det\big[h_{\la_i-\mu_j-i+j} (a^{}_1,a^{}_2,\dots,a^{}_{i-\la_i})\big]_{i,j\geqslant d+1} \ts \wh s_{\la}(x\vt a), \eal \een summed over diagrams $\la$ which contain $\mu$ and such that $\la$ has $d$ boxes on the main diagonal, where $m(\la/\mu)$ denotes the total number of boxes in the diagram $\la/\mu$ in rows $1,\dots,d$. \epr \bpf We will work with a finite set of variables $x=(x_1,\dots,x_n)$. 
The one variable specialization of \eqref{genh} gives \ben 1+\sum_{k=1}^{\infty} \frac{(x-a_1)(x-a_0)\dots (x-a_{-k+2})\ts t^k} {(1-a_0\tss t)\dots(1-a_{-k+1}\tss t)}= \frac{1-a_1\tss t}{1-x\tss t}. \een This implies \beql{decxt} \sum_{k=1}^{\infty} \frac{(x-a_0)\dots (x-a_{-k+2})\ts t^k} {(1-a_0\tss t)\dots(1-a_{-k+1}\tss t)}= \frac{t}{1-x\tss t}. \eeq Writing \ben (x-a_0)\dots (x-a_{-k+2})=\sum_{i=1}^k(-1)^{k-i} e_{k-i}(a_0,a_{-1},\dots,a_{-k+2})\ts x^{i-1} \een and comparing the coefficients of $x^{r-1}$ on both sides of \eqref{decxt} we come to the relation \beql{tpor} t^{\tss r}=\sum_{k=r}^{\infty} \frac{(-1)^{k-r}\ts e_{k-r}(a_0,a_{-1},\dots,a_{-k+2})\ts t^k} {(1-a_0\tss t)\dots(1-a_{-k+1}\tss t)},\qquad r\geqslant 1. \eeq Similarly, writing \ben \frac{1} {(1-a_0\tss t)\dots(1-a_{-k+1}\tss t)} =\sum_{j=0}^{\infty} h_{j}(a_0,a_{-1},\dots,a_{-k+1})\ts t^{j} \een and comparing the coefficients of $t^{\tss r+1}$ on both sides of \eqref{decxt} we come to \beql{xpor} x^r=\sum_{k=0}^r h_{r-k}(a_0,a_{-1},\dots,a_{-k}) \tss(x-a_0)(x-a_{-1})\dots(x-a_{-k+1}),\qquad r\geqslant 0. \eeq Assuming that the length of $\mu$ does not exceed $n$, represent $s_{\mu}(x)$ as the ratio of determinants \ben s_{\mu}(x)=\frac{A_{\mu+\de}(x)}{A_{\de}(x)}, \een where \ben A_{\al}(x)=\det\big[x_i^{\al_j}\big]_{i,j=1}^n, \qquad \al=(\al_1,\dots,\al_n). \een By \eqref{tpor}, for any $j=1,\dots,d$ we have \ben x_i^{\mu_j-j+1}=\sum_{p=\mu_j-j+1}^{\infty} \frac{(-1)^{p-\mu_j+j-1}\ts e_{p-\mu_j+j-1} (a_0,a_{-1},\dots,a_{-p+2})\ts x_i^p} {(1-a_0\tss x_i)\dots(1-a_{-p+1}\tss x_i)}. \een Similarly, for $j=d+1,\dots,n$ we find from \eqref{xpor} applied for $x=x_i^{-1}$ and $r=j-\mu_j-1$ that \ben x_i^{\mu_j-j+1}=\sum_{p=0}^{j-\mu_j-1} h_{j-\mu_j-p-1}(a_1,a_2,\dots,a_{p+1})\ts x_i^{-p} \tss(1-a_1 x_i)(1-a_2 x_i)\dots(1-a_p x_i). 
\een Multiplying both sides of these relations by $x_i^{n-1}$ we get the respective expansions of $x_i^{\mu_j+n-j}$ which allow us to write \ben \bal A_{\mu+\de}(x)=\sum_{\be_1,\dots,\ts \be_n} {}&{}\prod_{j=1}^d (-1)^{\be_j-\mu_j+j-1}\ts e_{\be_j-\mu_j+j-1}(a^{}_0,a^{}_{-1},\dots,a^{}_{-\be_j+2})\\ {}\times{}&{}\prod_{j=d+1}^n \ts h_{j-\mu_j-\be_j-1}(a^{}_1,a^{}_2,\dots,a^{}_{\be_j+1}) \ts A_{\be}(x, a). \eal \een Nonzero summands here correspond to the $n$-tuples $\be$ of the form \ben \be_j=\la_{\si(j)}-\si(j)+1,\qquad j=1,\dots,d, \een and \ben \be_j=-\la_{\tau(j)}+\tau(j)-1,\qquad j=d+1,\dots,n, \een where $\si$ is a permutation of $\{1,\dots,d\}$ and $\tau$ is a permutation of $\{d+1,\dots,n\}$, and $\la$ is a diagram containing $\mu$ such that $\la$ has $d$ boxes on the main diagonal. Dividing both sides of the above relation by the Vandermonde determinant, we get the desired expansion formula. \epf Now we obtain a tableau presentation of the coefficients in the expansion of $s_{\mu}(x)$; cf. Theorem~\ref{thm:tabre}. We assume, as before, that $\la$ and $\mu$ have the same number of boxes $d$ on their main diagonals. By a {\it dual hook $\la/\mu$-tableau\/} $T$ we will mean a tableau obtained by filling in the boxes of $\la/\mu$ with integers in the following way. The entries in the first $d$ rows strictly decrease along the rows and weakly decrease down the columns, and all entries in row $i$ belong to the set $\{0,-1,\dots,i-\la_i+1\}$ for $i=1,\dots,d$; the entries in the first $d$ columns strictly increase down the columns and weakly increase along the rows, and all entries in column $j$ belong to the set $\{1,2,\dots,\la'_j-j\}$ for $j=1,\dots,d$. Then we define the corresponding dual flagged Schur function $\psi_{\la/\mu}(a)$ by the formula \ben \psi_{\la/\mu}(a)=\sum_T \prod_{\al\in\la/\mu} a^{}_{T(\al)}, \een summed over the dual hook $\la/\mu$-tableaux $T$. 
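Before turning to the tableau form of the inverse expansion, we note that the one-variable identity \eqref{xpor} used in the proof of Proposition~\ref{prop:inve} is a finite polynomial identity which is easy to verify symbolically. A small sympy sketch (illustrative only; the symbols a0, am1, am2, am3 model the entries $a_0,a_{-1},a_{-2},a_{-3}$ of the doubly infinite sequence $a$):

```python
from itertools import combinations_with_replacement
import sympy as sp

def h(m, vars_):
    # complete homogeneous symmetric polynomial h_m in the given variables
    if m < 0:
        return sp.Integer(0)
    return sp.Add(*[sp.Mul(*c) for c in combinations_with_replacement(vars_, m)])

x = sp.symbols('x')
a = sp.symbols('a0 am1 am2 am3')  # models a_0, a_{-1}, a_{-2}, a_{-3}

def interpolation_rhs(r):
    # sum_{k=0}^{r} h_{r-k}(a_0,...,a_{-k}) (x - a_0)...(x - a_{-k+1})
    total = sp.Integer(0)
    for k in range(r + 1):
        falling = sp.Mul(*[x - a[j] for j in range(k)])
        total += h(r - k, a[:k + 1]) * falling
    return sp.expand(total)

# the identity x**r == interpolation_rhs(r), checked for r = 0, 1, 2, 3
for r in range(4):
    assert interpolation_rhs(r) == sp.expand(x**r)
```

For each $r$ the sum on the right hand side collapses back to $x^r$, in agreement with \eqref{xpor}.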
\bth\label{thm:invtabre} We have the expansion of the Schur function $s_{\mu}(x)$ \ben s_{\mu}(x)=\sum_{\la}(-1)^{m(\la/\mu)}\ts \psi_{\la/\mu}(a)\ts \wh s_{\la}(x\vt a), \een summed over diagrams $\la$ which contain $\mu$ and such that $\la$ has $d$ boxes on the main diagonal, where $m(\la/\mu)$ denotes the total number of boxes in the diagram $\la/\mu$ in rows $1,\dots,d$. \eth \bpf This is deduced from Proposition~\ref{prop:inve} and the formulas for the flagged Schur functions in \cite[8th~Variation]{m:sf} exactly as in the proof of Theorem~\ref{thm:tabre}. \epf \bco\label{cor:inext} For the expansion of the hook Schur function we have \ben s_{(\al\tss|\tss\be)}(x)= \sum_{p,\ts q\geqslant 0}(-1)^p\ts e_p(a^{}_0,a^{}_{-1},\dots,a^{}_{-\al-p+1})\ts e_q(a^{}_1,a^{}_2,\dots,a^{}_{\be+q})\ts \wh s^{}_{(\al+p\tss|\tss\be+q)}(x\vt a). \een \eco \bex\label{ex:sonee} We have \ben s_{(1)}(x)= \sum_{p,\ts q\geqslant 0}(-1)^p\ts a^{}_0\ts a^{}_{-1}\dots a^{}_{-p+1}\ts a^{}_1\ts a^{}_2\dots a^{}_{q}\ts \wh s^{}_{(p\tss|\tss q)}(x\vt a). \een \eex As with the flagged Schur functions $\vp_{\la/\mu}(a)$, we have the following factorization formula \beql{invwhsufa} \psi_{\la/\mu}(a)=(-1)^{m(\la/\mu)}\ts \psi_{(\la/\mu)_-}(a)\ts\psi_{(\la'/\mu')_-}(a'), \eeq where $(\la/\mu)_-$ denotes the part of the skew diagram $\la/\mu$ whose boxes lie in the rows $d+1,d+2,\dots$. An alternative tableau presentation for the polynomials $\psi_{(\la/\mu)_-}(a)$ is implied by the formulas \cite[$(8.2)$]{m:sf}, \cite{w:fs}. By \eqref{invwhsufa}, this also gives alternative formulas for the coefficients in the expansion of $s_{\mu}(x)$. 
\bco\label{cor:invanexpa} We have the tableau presentation \ben \psi_{(\la/\mu)_-}(a)= \sum_T \prod_{\al\in(\la/\mu)_-} a^{}_{T(\al)}, \een where the sum is taken over the $(\la/\mu)_-$-tableaux $T$ whose entries in row $i$ belong to the set $\{1,2,\dots,i-\la_i\}$ for $i=d+1,d+2,\dots$, and the entries weakly increase along the rows and strictly increase down the columns. \eco Completing this section we note that the canonical comultiplication on the ring $\La$ is naturally extended to the comultiplication \ben \Delta:\wh\La(x\vt a)\to \wh\La(x\vt a)\ot^{}_{\ts\QQ[a]} \wh\La(x\vt a) \een defined on the generators by \ben \Delta\big(p_k(x)\big)= p_k(x)\ot 1+1\ot p_k(x). \een Hence, Proposition~\ref{prop:skelrp} can be interpreted in terms of $\Delta$ as the following decomposition of the image of the dual Schur function \ben \Delta\big(\wh s_{\nu}(x\vt a)\big)= \sum_{\mu} \wh s_{\nu/\mu}(x\vt a) \ot \wh s_{\mu}(x\vt a)= \sum_{\la,\ts\mu} c_{\la\mu}^{\ts\tss\nu}(a)\ts \wh s_{\la}(x\vt a) \ot \wh s_{\mu}(x\vt a). \een \section{Dual Littlewood--Richardson polynomials} \label{sec:dlr} \setcounter{equation}{0} It was pointed out in \cite[Remark~3.3]{orv:fs} that the ring of supersymmetric functions $\La(x/y\vt a)$ is equipped with the comultiplication $\Delta$ such that \ben \Delta\big(p_k(x/y)\big)= p_k(x/y)\ot 1+1\ot p_k(x/y); \een cf. \cite[Chapter~I]{m:sfh}. The isomorphism \eqref{isom} allows us to transfer the comultiplication to the ring of double symmetric functions $\La(x\vt a)$ so that $\Delta$ is a $\QQ[a]$-linear ring homomorphism \ben \Delta:\La(x\vt a)\to \La(x\vt a)\ot^{}_{\ts\QQ[a]} \La(x\vt a) \een such that \ben \Delta\big(p_k(x\vt a)\big)= p_k(x\vt a)\ot 1+1\ot p_k(x\vt a). 
\een \bde\label{def:dulrpol} The {\it dual Littlewood--Richardson polynomials\/} $\wh c_{\la\mu}^{\ts\tss\nu}(a)$ are defined as the coefficients in the expansion \ben \Delta\big(s_{\nu}(x\vt a)\big)=\sum_{\la,\ts\mu} \wh c_{\la\mu}^{\ts\tss\nu}(a)\ts s_{\la}(x\vt a) \ot s_{\mu}(x\vt a). \een Equivalently, these polynomials can be found from the decomposition \beql{skewdec} s_{\tss\nu/\mu}(x\vt a)=\sum_{\la} \wh c_{\la\mu}^{\ts\tss\nu}(a)\ts s_{\la}(x\vt a). \eeq \vskip-1.3\baselineskip \qed \ede In order to verify the equivalence of the definitions, note that by \cite[Remark~3.3]{orv:fs}, \ben \Delta\big(s_{\nu}(x/y\vt a)\big)=\sum_{\mu} s_{\nu/\mu}(x/y\vt a) \ot s_{\mu}(x/y\vt a). \een The claim now follows by the application of Proposition~\ref{prop:imisom}. It is clear from the definition that the polynomial $\wh c_{\la\mu}^{\ts\tss\nu}(a)$ is nonzero only if the inequality $|\nu|\geqslant |\la|+|\mu|$ holds. In this case it is a homogeneous polynomial in the variables $a_i$ of degree $|\nu|-|\la|-|\mu|$. Moreover, in the particular case $|\nu|=|\la|+|\mu|$ the constant $\wh c_{\la\mu}^{\ts\tss\nu}(a)$ equals $c_{\la\mu}^{\tss\nu}$, the Littlewood--Richardson coefficient. \bco\label{cor:symlr} We have the following symmetry property \ben \wh c_{\la\mu}^{\ts\tss\nu}(a)= \wh c_{\la'\mu'}^{\ts\tss\nu^{\tss\prime}}(a'). \een \eco \bpf By Proposition~\ref{prop:imisom} and Definition~\ref{def:skdsf}, we have \ben s_{\tss\nu/\mu}(x/y\vt a)=\sum_{\la} \wh c_{\la\mu}^{\ts\tss\nu}(a)\ts s_{\la}(x/y\vt a). \een The desired relations now follow from the symmetry property \eqref{symprsu}. \epf We can now prove that the dual Littlewood--Richardson polynomials $\wh c_{\la\mu}^{\ts\tss\nu}(a)$ introduced in Definition~\ref{def:dulrpol} describe the multiplication rule for the dual Schur functions. \bth\label{thm:proddsf} We have the expansion \ben \wh s_{\la}(x\vt a)\ts \wh s_{\mu}(x\vt a)= \sum_{\nu} \wh c_{\la\mu}^{\ts\tss\nu}(a)\ts \wh s_{\tss\nu}(x\vt a). 
\een \eth \bpf We argue as in the proof of the classical analogue of this result; see \cite[Chapter~I]{m:sfh}. Applying Corollary~\ref{cor:ssduco} for the families of variables $x=x'\cup x''$ and $y=y^{\tss\prime}\cup y^{\tss\prime\prime}$ we get \ben \bal \sum_{\nu\in\Pc} s_{\nu}(x/y\vt a)\ts \wh s_{\nu}(z\vt a) {}&=\prod_{i,\ts j\geqslant 1} \frac{1+y^{\tss\prime}_i\ts z_j}{1-x'_i\ts z_j} \prod_{i,\ts j\geqslant 1} \frac{1+y^{\tss\prime\prime}_i\ts z_j}{1-x''_i\ts z_j}\\ {}&=\sum_{\la,\ts\mu\in\Pc} s_{\la}(x'/y^{\tss\prime}\vt a) \ts \wh s_{\la}(z\vt a) \ts s_{\mu}(x''/y^{\tss\prime\prime}\vt a)\ts \wh s_{\mu}(z\vt a). \eal \een On the other hand, an alternative expansion of the sum on the left hand side is obtained by using the relation \ben s_{\nu}(x/y\vt a)=\sum_{\la\subseteq\nu}s_{\la}(x'/y^{\tss\prime}\vt a) \ts s_{\nu/\la}(x''/y^{\tss\prime\prime}\vt a) =\sum_{\la,\ts\mu}s_{\la}(x'/y^{\tss\prime}\vt a) \ts \wh c_{\la\mu}^{\ts\tss\nu}(a)\ts s_{\mu}(x''/y^{\tss\prime\prime}\vt a), \een implied by the combinatorial formula \eqref{taba}. Therefore, the required relation follows by comparing the two expansions. \epf An explicit formula for the polynomials $\wh c_{\la\mu}^{\ts\tss\nu}(a)$ is provided by the following corollary, where the $c_{\al\be}^{\tss\ga}$ denote the classical Littlewood--Richardson coefficients defined by the decomposition of the product of the Schur functions \ben s_{\al}(x)\ts s_{\be}(x)= \sum_{\ga} c_{\al\be}^{\tss\ga}\ts s_{\tss\ga}(x). \een We suppose that $\vp_{\al/\la}(a)=\psi_{\al/\la}(a)=0$ unless $\la\subseteq\al$ and the diagrams $\la$ and $\al$ have the same number of boxes on their main diagonals. \bco\label{cor:dulr} We have \ben \wh c_{\la\mu}^{\ts\tss\nu}(a)=\sum_{\al,\be,\ga} (-1)^{n(\al/\la)+n(\be/\mu)+m(\nu/\ga)}\ts c_{\al\be}^{\tss\ga}\ts \vp_{\al/\la}(a)\ts \vp_{\be/\mu}(a)\ts \psi_{\nu/\ga}(a), \een summed over diagrams $\al$, $\be$, $\ga$. 
In particular, $\wh c_{\la\mu}^{\ts\tss\nu}(a)=0$ unless $\la\subseteq\nu$ and $\mu\subseteq\nu$. \eco \bpf The formula follows from Theorems~\ref{thm:tabre}, \ref{thm:invtabre} and \ref{thm:proddsf}. The second statement is implied by the same property of the Littlewood--Richardson coefficients. \epf \bex\label{ex:skewsf} If $k\leqslant l$ and $k+l\leqslant m$ then \ben \wh c_{(k)(l)}^{\ts\tss(m)}(a)=\sum_{r+s=m-k-l} (-1)^s\ts h_r(a_0,a_{-1},\dots,a_{-k+1}) \ts e_s(a_{-l},a_{-l-1},\dots,a_{-m+2}). \een In particular, \ben \wh c_{(1)(l)}^{\ts\tss(m)}(a)= (a_0-a_{-l})(a_0-a_{-l-1})\dots(a_0-a_{-m+2}). \een Applying Corollary~\ref{cor:symlr}, we also get \ben \wh c_{(1^k)(1^l)}^{\ts\tss(1^m)}(a)=\sum_{r+s=m-k-l} (-1)^r\ts h_r(a_1,a_2,\dots,a_k) \ts e_s(a_{l+1},a_{l+2},\dots,a_{m-1}) \een and \ben \wh c_{(1)(1^l)}^{\ts\tss(1^m)}(a)= (a_{l+1}-a_1)(a_{l+2}-a_1)\dots(a_{m-1}-a_1). \een These relations provide explicit formulas for the images of the double complete and elementary symmetric functions $h_m(x\vt a)$ and $e_m(x\vt a)$ with respect to the comultiplication $\Delta$. \qed \eex Another formula for the dual Littlewood--Richardson polynomials $\wh c_{\la\mu}^{\ts\tss\nu}(a)$ can be obtained with the use of the decomposition \eqref{skewdec}. We will consider the skew double Schur function as the sequence of polynomials $s_{\nu/\mu}(x\vt a)$ defined in \eqref{seqpol}. For a given skew diagram $\nu/\mu$ consider the finite set of variables $x=(x_1,\dots,x_n)$, where $\nu'_j-\mu'_j\leqslant n$ for all $j$; that is, the number of boxes in each column of $\nu/\mu$ does not exceed $n$. Since the skew double Schur functions are consistent with the evaluation homomorphisms \eqref{eval}, the polynomials $\wh c_{\la\mu}^{\ts\tss\nu}(a)$ are determined by the decomposition \eqref{skewdec}, where $x$ is understood as the above finite set of variables.
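Returning briefly to Example~\ref{ex:skewsf}: for $k=1$ only the factor $h_r(a_0)=a_0^r$ survives in the sum, so the formula for $\wh c_{(1)(l)}^{\ts\tss(m)}(a)$ is an instance of the elementary generating identity $\prod_i(t-y_i)=\sum_s(-1)^s e_s(y)\ts t^{n-s}$ with $t=a_0$. A quick symbolic check for $l=2$, $m=6$ (the symbols a0, am2, am3, am4 stand for $a_0,a_{-2},a_{-3},a_{-4}$; an illustrative encoding):

```python
from itertools import combinations
import sympy as sp

def e(s, vars_):
    # elementary symmetric polynomial e_s in the given variables
    return sp.Add(*[sp.Mul(*c) for c in combinations(vars_, s)])

a0, am2, am3, am4 = sp.symbols('a0 am2 am3 am4')
l, m = 2, 6
ys = (am2, am3, am4)  # a_{-l}, ..., a_{-m+2}

# sum formula from Example ex:skewsf with k = 1, where h_r(a_0) = a_0**r
sum_form = sp.Add(*[(-1)**s * a0**(m - 1 - l - s) * e(s, ys)
                    for s in range(m - l)])
# product formula: (a_0 - a_{-l}) ... (a_0 - a_{-m+2})
prod_form = sp.Mul(*[a0 - y for y in ys])

assert sp.expand(sum_form - prod_form) == 0
```

The same comparison can be run for other admissible $l$ and $m$ by adjusting the list of symbols.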
In order to formulate the result, introduce $\nu/\mu$-{\it supertableaux\/} $T$ which are obtained by filling in the boxes of $\nu/\mu$ with the symbols $1,1^{\tss\prime},\dots,n,n'$ in such a way that in each row (resp. column) each primed index is to the left (resp. above) of each unprimed index; unprimed indices weakly decrease along the rows and strictly decrease down the columns; primed indices strictly increase along the rows and weakly increase down the columns. Introduce the ordering on the set of boxes of a skew diagram by reading them by columns from left to right and from bottom to top in each column. We call this the {\it column order\/}. We shall write $\al\prec \be$ if $\al$ (strictly) precedes $\be$ with respect to the column order. Suppose that $\la$ is a diagram. Given a sequence of diagrams $R$ of the form \beql{r} \varnothing=\rho^{(0)}\to\rho^{(1)}\to \dots\to\rho^{(l-1)}\to\rho^{(l)}=\la, \eeq we let $r_i$ denote the row number of the box added to the diagram $\rho^{(i-1)}$. The sequence $r_1r_2\dots r_l$ is called the {\it Yamanouchi symbol\/} of $R$. Construct the set $\Tc(\nu/\mu,R)$ of {\it barred\/} $\nu/\mu$-supertableaux $T$ such that $T$ contains boxes $\al_1,\dots,\al_l$ with \ben \al_1\prec\dots\prec\al_l\Fand T(\al_i)=r_i,\quad 1\leqslant i\leqslant l, \een where all entries $r_i$ are unprimed and the boxes are listed in the column order which is restricted to the subtableau of $T$ formed by the unprimed indices. We will distinguish the entries in $\al_1,\dots,\al_l$ by barring each of them. So, an element of $\Tc(\nu/\mu,R)$ is a pair consisting of a $\nu/\mu$-supertableau and a chosen sequence of barred entries compatible with $R$. We shall keep the notation $T$ for such a pair. For each box $\alpha$ with $\al_i\prec\al\prec\al_{i+1}$, $0\leqslant i\leqslant l$, which is occupied by an unprimed index, set $\rho(\alpha)=\rho^{(i)}$. 
\bth\label{thm:supt} The dual Littlewood--Richardson polynomials can be given by \ben \bal \wh c_{\la\mu}^{\ts\tss\nu}(a)=\sum_R \sum_{T} \prod_{\underset{\scriptstyle T(\alpha) \text{\ts\ts unprimed,\ts unbarred}}{\alpha\in\nu/\mu}} &\big(a^{}_{\tss T(\alpha)-\rho(\alpha)^{}_{T(\al)}}- a^{}_{\tss T(\alpha)-c(\alpha)}\big)\\ {}\times{}\prod_{\underset{\scriptstyle T(\alpha) \text{\ts\ts primed}}{\alpha\in\nu/\mu}} &\big(a^{}_{\tss T(\alpha)-c(\alpha)} -a^{}_{\tss T(\alpha)}\big), \eal \een summed over sequences $R$ of the form \eqref{r} and barred supertableaux $T\in\Tc(\nu/\mu,R)$. \eth \bpf Due to \eqref{seqpol}, we have \ben \wh c_{\la\mu}^{\ts\tss\nu}(a) =\sum_{\mu\subseteq\ts\rho\ts\subseteq\tss\nu} \wt c_{\la\rho}^{\ts\tss\nu}(a)\ts s_{\rho^{\tss\prime}/\mu^{\tss\prime}}(-a^{(n)} \tss|\tss {-}a), \een where the polynomials $\wt c_{\la\rho}^{\ts\tss\nu}(a)$ are defined by the decomposition \ben \wt s_{\nu/\rho}(x\vt a)=\sum_{\la} \wt c_{\la\rho}^{\ts\tss\nu}(a)\ts s_{\la}(x\vt a). \een The desired formula is now implied by \cite[Lemma~2.4]{m:lr} which gives the combinatorial expression for the coefficients $\wt c_{\la\rho}^{\ts\tss\nu}(a)$ and thus takes care of the unprimed part of $T$; the expression for the primed part is implied by \eqref{facsf}. \epf \bre\label{rem:open} Both the formulas for $\wh c_{\la\mu}^{\ts\tss\nu}(a)$ provided by Corollary~\ref{cor:dulr} and Theorem~\ref{thm:supt} involve some terms which cancel pairwise. It would be interesting to find a combinatorial presentation of the polynomials $\wh c_{\la\mu}^{\ts\tss\nu}(a)$ analogous to \cite{kt:pe}, \cite{k:elr} or \cite{m:lr} and to understand their positivity properties. A possible way to find such a presentation could rely on the vanishing theorem of the supersymmetric Schur functions obtained in \cite[Theorems~5.1 \& 5.2]{orv:fs}; see also \cite[Theorem~4.4]{m:fs} for a similar result. 
\qed \ere \bex\label{ex:superca} In order to calculate the polynomial $\wh c_{(1)\ts(2)}^{\ts\tss(2^2)}(a)$, take $\la=(1)$, $\mu=(2)$, $\nu=(2^2)$ and $n=1$. The barred supertableaux compatible with the sequence $\varnothing\to (1)$ are \setlength{\unitlength}{0.75em} \begin{center} \begin{picture}(30,4.2) \put(0,0){\line(0,1){3.8}} \put(2,0){\line(0,1){3.8}} \put(4,0){\line(0,1){3.8}} \put(0,0){\line(1,0){4}} \put(0,2){\line(1,0){4}} \put(0.7,0.5){$\overline{1}$} \put(0.7,2.5){} \put(2.7,2.5){} \put(2.7,0.5){1} \put(0,2.2){\line(1,0){4}} \put(0,2.4){\line(1,0){4}} \put(0,2.6){\line(1,0){4}} \put(0,2.8){\line(1,0){4}} \put(0,3.0){\line(1,0){4}} \put(0,3.2){\line(1,0){4}} \put(0,3.4){\line(1,0){4}} \put(0,3.6){\line(1,0){4}} \put(0,3.8){\line(1,0){4}} \put(12,0){\line(0,1){3.8}} \put(14,0){\line(0,1){3.8}} \put(16,0){\line(0,1){3.8}} \put(12,0){\line(1,0){4}} \put(12,2){\line(1,0){4}} \put(12.7,0.5){1} \put(12.7,2.5){} \put(14.7,2.5){} \put(14.7,0.5){$\overline{1}$} \put(12,2.2){\line(1,0){4}} \put(12,2.4){\line(1,0){4}} \put(12,2.6){\line(1,0){4}} \put(12,2.8){\line(1,0){4}} \put(12,3.0){\line(1,0){4}} \put(12,3.2){\line(1,0){4}} \put(12,3.4){\line(1,0){4}} \put(12,3.6){\line(1,0){4}} \put(12,3.8){\line(1,0){4}} \put(24,0){\line(0,1){3.8}} \put(26,0){\line(0,1){3.8}} \put(28,0){\line(0,1){3.8}} \put(24,0){\line(1,0){4}} \put(24,2){\line(1,0){4}} \put(24.5,0.5){$1^{\tss\prime}$} \put(24.7,2.5){} \put(26.7,2.5){} \put(26.7,0.5){$\overline{1}$} \put(24,2.2){\line(1,0){4}} \put(24,2.4){\line(1,0){4}} \put(24,2.6){\line(1,0){4}} \put(24,2.8){\line(1,0){4}} \put(24,3.0){\line(1,0){4}} \put(24,3.2){\line(1,0){4}} \put(24,3.4){\line(1,0){4}} \put(24,3.6){\line(1,0){4}} \put(24,3.8){\line(1,0){4}} \end{picture} \end{center} \setlength{\unitlength}{1pt} \noindent so that \ben \wh c_{(1)\ts(2)}^{\ts\tss(2^2)}(a) =a_0-a_1+a_1-a_2 +a_{2}-a_1=a_0-a_1. \een Alternatively, we can take $\la=(2)$, $\mu=(1)$, $\nu=(2^2)$ and $n=2$. 
The barred supertableaux compatible with the sequence $\varnothing\to (1)\to (2)$ are \setlength{\unitlength}{0.75em} \begin{center} \begin{picture}(30,4.6) \put(0,0){\line(0,1){4}} \put(2,0){\line(0,1){4}} \put(4,0){\line(0,1){4}} \put(0,0){\line(1,0){4}} \put(0,2){\line(1,0){4}} \put(0,4){\line(1,0){4}} \put(0.7,0.5){$\overline{1}$} \put(0.7,2.5){} \put(2.7,2.5){2} \put(2.7,0.5){$\overline{1}$} \put(0,2.2){\line(1,0){2}} \put(0,2.4){\line(1,0){2}} \put(0,2.6){\line(1,0){2}} \put(0,2.8){\line(1,0){2}} \put(0,3.0){\line(1,0){2}} \put(0,3.2){\line(1,0){2}} \put(0,3.4){\line(1,0){2}} \put(0,3.6){\line(1,0){2}} \put(0,3.8){\line(1,0){2}} \put(12,0){\line(0,1){4}} \put(14,0){\line(0,1){4}} \put(16,0){\line(0,1){4}} \put(12,0){\line(1,0){4}} \put(12,2){\line(1,0){4}} \put(12,4){\line(1,0){4}} \put(12.7,0.5){$\overline{1}$} \put(12.7,2.5){} \put(14.7,2.5){$1^{\tss\prime}$} \put(14.7,0.5){$\overline{1}$} \put(12,2.2){\line(1,0){2}} \put(12,2.4){\line(1,0){2}} \put(12,2.6){\line(1,0){2}} \put(12,2.8){\line(1,0){2}} \put(12,3.0){\line(1,0){2}} \put(12,3.2){\line(1,0){2}} \put(12,3.4){\line(1,0){2}} \put(12,3.6){\line(1,0){2}} \put(12,3.8){\line(1,0){2}} \put(24,0){\line(0,1){4}} \put(26,0){\line(0,1){4}} \put(28,0){\line(0,1){4}} \put(24,0){\line(1,0){4}} \put(24,2){\line(1,0){4}} \put(24,4){\line(1,0){4}} \put(24.5,0.5){$\overline{1}$} \put(24.7,2.5){} \put(26.7,2.5){$2^{\tss\prime}$} \put(26.7,0.5){$\overline{1}$} \put(24,2.2){\line(1,0){2}} \put(24,2.4){\line(1,0){2}} \put(24,2.6){\line(1,0){2}} \put(24,2.8){\line(1,0){2}} \put(24,3.0){\line(1,0){2}} \put(24,3.2){\line(1,0){2}} \put(24,3.4){\line(1,0){2}} \put(24,3.6){\line(1,0){2}} \put(24,3.8){\line(1,0){2}} \end{picture} \end{center} \setlength{\unitlength}{1pt} \noindent so that \ben \wh c_{(1)\ts(2)}^{\ts\tss(2^2)}(a) =a_2-a_1+a_0-a_1 +a_{1}-a_2=a_0-a_1. \een This agrees with the previous calculation and the formula implied by Corollary~\ref{cor:dulr}. 
\eex \bex\label{ex:supediff} Theorem~\ref{thm:supt} gives formulas for the polynomials $\wh c_{(k)(l)}^{\ts\tss(m)}(a)$ and $\wh c_{(1^k)(1^l)}^{\ts\tss(1^m)}(a)$ in a different form as compared to Example~\ref{ex:skewsf}. If $k+l\leqslant m$ then \ben \bal \wh c_{(k)(l)}^{\ts\tss(m)}(a)=\sum &(a_0-a_{-l})(a_0-a_{-l-1})\cdots (a_0-a_{-l-i_1+1})\\ &{}\times{}(a_{-1}-a_{-l-i_1-1})\cdots (a_{-1}-a_{-l-i_2+1})\\ &{}\times{}\cdots (a_{-k+1}-a_{-l-i_{k-1}-1})\cdots (a_{-k+1}-a_{-m+2}) \eal \een summed over the sets of indices $0\leqslant i_1<\dots<i_{k-1}\leqslant m-l-2$. A similar expression for $\wh c_{(1^k)(1^l)}^{\ts\tss(1^m)}(a)$ follows by the application of Corollary~\ref{cor:symlr}. \eex \section{Transition matrices} \label{sec:ome} \setcounter{equation}{0} \subsection{Pairing between the double and dual symmetric functions} We now prove alternative expansion formulas for the infinite product which occurs in the Cauchy formula \eqref{cchid}. These formulas turn into the well known identities when $a$ is specialized to the sequence of zeros; see \cite[Chapter~I]{m:sfh}. Let $\la=(\la_1,\dots,\la_l)$ be a partition and suppose that the length of $\la$ does not exceed $l$. Using the notation \eqref{dumon}, introduce the {\it dual monomial symmetric function\/} $\wh m_{\la}(x\vt a)\in\wh\La(x\vt a)$ by the formula \ben \wh m_{\la}(x\vt a)= \sum_{\si} (x_{\si(1)},a)^{\la_1}\ts (x_{\si(2)},a)^{\la_2}\dots (x_{\si(l)},a)^{\la_l}, \een summed over permutations $\si$ of the $x_i$ which give distinct monomials. For a partition $\la=(1^{m_1}\ts 2^{m_2}\dots)$ set $z_{\la}=\prod_{i\geqslant 1} i^{m_i}\ts m_i!$. \bpr\label{prop:expa} We have the expansions \beql{hmde} \prod_{i,\ts j\geqslant 1}\frac{1-a_i\ts y_j}{1-x_i\ts y_j} =\sum_{\la\in\Pc} h_{\la}(x\vt a)\ts \wh m_{\la}(y\vt a) \eeq and \beql{ppde} \prod_{i,\ts j\geqslant 1}\frac{1-a_i\ts y_j}{1-x_i\ts y_j} =\sum_{\la\in\Pc} z_{\la}^{-1}\ts p_{\la}(x\vt a)\ts p_{\la}(y). 
\eeq \epr \bpf Let us set \ben H(t)=\prod_{i=1}^{\infty}\frac{1-a_i\tss t}{1-x_i\tss t}. \een Then using \eqref{genh} and arguing as in \cite[Chapter~I]{m:sfh}, we can write \ben \prod_{i,\ts j\geqslant 1}\frac{1-a_i\ts y_j}{1-x_i\ts y_j} =\prod_{j\geqslant 1} H(y_j) =\prod_{j\geqslant 1}\sum_{k=0}^{\infty} h_k(x\vt a) \ts (y_j,a)^k=\sum_{\la\in\Pc} h_{\la}(x\vt a)\ts \wh m_{\la}(y\vt a), \een which proves \eqref{hmde}. For the proof of \eqref{ppde} note that \ben \bal \ln H(t)&=\sum_{i\geqslant 1}\Big(\ln(1-a_it)-\ln(1-x_it)\Big)\\ {}&=\sum_{i\geqslant 1}\sum_{k\geqslant 1} \Big(\frac{x_i^k\ts t^k}{k}-\frac{a_i^k\ts t^k}{k}\Big) =\sum_{k\geqslant 1}\frac{p_k(x\vt a) t^k}{k}. \eal \een Hence, \ben H(t)=\sum_{\la\in\Pc} z_{\la}^{-1}\ts p_{\la}(x\vt a)\ts t^{|\la|}. \een Now apply this relation to the sets of variables $x$ and $a$ respectively replaced with the sets $\{x_iy_j\}$ and $\{a_iy_j\}$. Then $p_{\la}(x\vt a)$ is replaced by $p_{\la}(x\vt a)\ts p_{\la}(y)$, and \eqref{ppde} follows by putting $t=1$. \epr Now define the $\QQ[a]$-bilinear pairing between the rings $\La(x\vt a)$ and $\wh\La(y\vt a)$, \beql{pairi} \langle\ ,\ \rangle:\big(\La(x\vt a),\wh\La(y\vt a)\big)\to \QQ[a], \eeq by setting \beql{defpa} \big\langle h_{\la}(x\vt a),\wh m_{\mu}(y\vt a)\big\rangle =\de_{\la\mu}. \eeq Clearly, $\langle u,\wh v\rangle$ is a well-defined polynomial in $a$ for any elements $u\in\La(x\vt a)$ and $\wh v\in\wh\La(y\vt a)$; it is determined from \eqref{defpa} by linearity. The following is an analogue of the duality properties of the classical bases of the ring of symmetric functions; see \cite[Chapter~I]{m:sfh}. \bpr\label{prop:dubas} Let $\{u_{\la}(x\vt a)\}$ and $\{\wh v_{\la}(y\vt a)\}$ be families of elements of the rings $\La(x\vt a)$ and $\wh\La(y\vt a)$, respectively, which are parameterized by all partitions.
Suppose that for any $n\geqslant 0$ the highest degree components in $x$ {\rm(}resp., the lowest degree components in $y${\rm)} of the elements $u_{\la}(x\vt a)$ {\rm(}resp., $\wh v_{\la}(y\vt a)${\rm)} with $|\la|=n$ form a basis of the space of homogeneous symmetric functions in $x$ {\rm(}resp., $y${\rm)} of degree $n$. Then the following conditions are equivalent: \beql{uvdu} \big\langle u_{\la}(x\vt a),\wh v_{\mu}(y\vt a)\big\rangle =\de_{\la\mu},\qquad\text{for all}\quad \la,\mu; \eeq \beql{unsum} \sum_{\la\in\Pc} u_{\la}(x\vt a)\ts \wh v_{\la}(y\vt a) =\prod_{i,\ts j\geqslant 1}\frac{1-a_i\ts y_j}{1-x_i\ts y_j}. \eeq \epr \bpf We only need to slightly modify the respective argument of \cite[Chapter~I]{m:sfh}. Write \ben u_{\la}(x\vt a)=\sum_{\rho} A_{\la\rho}(a)\ts h_{\rho}(x\vt a), \qquad \wh v_{\mu}(y\vt a)=\sum_{\si} B_{\mu\si}(a)\ts \wh m_{\si}(y\vt a), \een where the first sum is taken over partitions $\rho$ with $|\rho|\leqslant |\la|$, while the second is taken over partitions $\si$ with $|\si|\geqslant |\mu|$. Then \ben \big\langle u_{\la}(x\vt a),\wh v_{\mu}(y\vt a)\big\rangle =\sum_{\rho} A_{\la\rho}(a)\ts B_{\mu\rho}(a). \een Hence, condition \eqref{uvdu} is equivalent to \beql{abmu} \sum_{\rho} A_{\la\rho}(a)\ts B_{\mu\rho}(a)=\de_{\la\mu}. \eeq On the other hand, due to \eqref{hmde}, \eqref{unsum} can be written as \ben \sum_{\la\in\Pc} u_{\la}(x\vt a)\ts \wh v_{\la}(y\vt a)=\sum_{\rho\in\Pc} h_{\rho}(x\vt a)\ts \wh m_{\rho}(y\vt a), \een which is equivalent to \ben \sum_{\la} A_{\la\rho}(a)\ts B_{\la\si}(a)=\de_{\rho\si}. \een This condition is easily verified to be equivalent to \eqref{abmu}. \epf Applying Theorem~\ref{thm:cch} and Proposition~\ref{prop:expa} we get the following corollary. \bco\label{cor:dumup} Under the pairing \eqref{pairi} we have \ben \big\langle s_{\la}(x\vt a),\wh s_{\mu}(y\vt a)\big\rangle =\de_{\la\mu} \Fand \big\langle p_{\la}(x\vt a),p_{\mu}(y)\big\rangle =\de_{\la\mu}\ts z_{\la}. 
\een \vskip-1.2\baselineskip \qed \eco Thus, the symmetric functions $\wh s_{\la}(y\vt a)$ are dual to the double Schur functions $s_{\la}(x\vt a)$ in the sense of the pairing \eqref{pairi}. Using the isomorphism \eqref{isom} and the pairing \eqref{pairi}, we get another $\QQ[a]$-bilinear pairing \beql{pairisu} \langle\ ,\ \rangle:\big(\La(x/y\vt a),\wh\La(z\vt a)\big)\to \QQ[a] \eeq such that \beql{duoda} \big\langle s_{\la}(x/y\vt a),\wh s_{\mu}(z\vt a)\big\rangle =\de_{\la\mu}. \eeq Note that Proposition~\ref{prop:dubas} can be easily reformulated for the pairing \eqref{pairisu}. In particular, the condition \eqref{unsum} is now replaced by \beql{unsumsu} \sum_{\la\in\Pc} u_{\la}(x/y\vt a)\ts \wh v_{\la}(z\vt a) =\prod_{i,\ts j\geqslant 1}\frac{1+y_i\ts z_j}{1-x_i\ts z_j}. \eeq This implies that \beql{duod} \big\langle s_{\la}(x/y),s_{\mu}(z)\big\rangle =\de_{\la\mu}, \eeq where $s_{\la}(x/y)$ denotes the ordinary supersymmetric Schur function which is obtained from $s_{\la}(x/y\vt a)$ by the specialization $a_i=0$. Together with Theorems~\ref{thm:tabre} and \ref{thm:invtabre}, the relations \eqref{duoda} and \eqref{duod} imply the following expansions for the supersymmetric Schur functions. \bco\label{cor:expasdo} We have the decompositions \ben s_{\la}(x/y\vt a)=\sum_{\mu}(-1)^{m(\la/\mu)}\ts \psi_{\la/\mu}(a)\ts s_{\mu}(x/y), \een summed over diagrams $\mu$ contained in $\la$ and such that $\la$ and $\mu$ have the same number of boxes on the main diagonal; and \ben s_{\mu}(x/y)=\sum_{\la}(-1)^{n(\la/\mu)}\ts \vp_{\la/\mu}(a)\ts s_{\la}(x/y\vt a), \een summed over diagrams $\la$ which contain $\mu$ and such that $\la$ and $\mu$ have the same number of boxes on the main diagonal. \qed \eco Note that expressions for $\psi_{\la/\mu}(a)$ and $\vp_{\la/\mu}(a)$ in terms of determinants as in Propositions~\ref{prop:dsfexp} and \ref{prop:inve} were given in \cite{orv:fs}. Corollary~\ref{cor:expasdo} gives new tableau formulas for these coefficients.
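At the specialization $a_i=0$ the duality \eqref{duod} reduces to the classical Cauchy identity $\prod_{i,j}(1-x_i z_j)^{-1}=\sum_{\la} s_{\la}(x)\ts s_{\la}(z)$. As an informal numerical illustration (not part of the argument), the identity can be checked degree by degree at exact rational points: the coefficient of $t^n$ in $\prod_{i,j}(1-t\ts x_i z_j)^{-1}$ is the complete homogeneous function $h_n$ of the products $x_i z_j$, to be compared with $\sum_{|\la|=n}s_{\la}(x)\ts s_{\la}(z)$ computed from the bialternant formula. The sample points, truncation degree, and all helper names below are our own choices:

```python
from fractions import Fraction as F

# exact rational sample points in two pairs of variables
x = (F(1, 2), F(1, 3))
z = (F(1, 5), F(2, 7))

def power_sum(vals, k):
    return sum(v ** k for v in vals)

def complete_h(vals, n):
    # h_n via Newton's identity: m*h_m = sum_{k=1..m} p_k * h_{m-k}
    h = [F(1)]
    for m in range(1, n + 1):
        h.append(sum(power_sum(vals, k) * h[m - k] for k in range(1, m + 1)) / m)
    return h[n]

def schur2(lam, v):
    # bialternant formula for s_lambda in two variables, lam = (a, b), a >= b
    a, b = lam
    return (v[0] ** (a + 1) * v[1] ** b - v[1] ** (a + 1) * v[0] ** b) / (v[0] - v[1])

prods = [xi * zj for xi in x for zj in z]
checks = []
for n in range(6):
    lhs = complete_h(prods, n)  # coefficient of t^n in prod 1/(1 - t x_i z_j)
    # partitions of n with at most two parts (others vanish in two variables)
    rhs = sum(schur2((a, n - a), x) * schur2((a, n - a), z)
              for a in range(n, (n - 1) // 2, -1))
    checks.append(lhs == rhs)
```

Since the arithmetic is exact, every entry of `checks` comes out `True` in degrees $0$ through $5$.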
Moreover, under the specialization $a_i=-i+1/2$ the supersymmetric Schur functions $s_{\la}(x/y\vt a)$ turn into the {\it Frobenius--Schur functions\/} $Fs_{\mu}$; see \cite{orv:fs}. Hence, the transition coefficients between the $Fs_{\mu}$ and the Schur functions can be found as follows; cf. \cite[Theorem~2.6]{orv:fs}. \bco\label{cor:frschur} We have the decompositions \ben Fs_{\la}=\sum_{\mu}(-1)^{m(\la/\mu)}\ts \psi_{\la/\mu}\ts s_{\mu}(x/y) \een and \ben s_{\mu}(x/y)=\sum_{\la}(-1)^{n(\la/\mu)}\ts \vp_{\la/\mu}\ts Fs_{\la}, \een where $\psi_{\la/\mu}$ and $\vp_{\la/\mu}$ are the respective values of the polynomials $\psi_{\la/\mu}(a)$ and $\vp_{\la/\mu}(a)$ at $a_i=-i+1/2$, $i\in\ZZ$. \qed \eco Using the notation of Corollary~\ref{cor:expasdo} and applying the isomorphism \eqref{isom} we get the respective expansion formulas involving the double Schur functions. \bco\label{cor:expasdouble} We have the decompositions \ben s_{\la}(x\vt a)=\sum_{\mu}(-1)^{m(\la/\mu)}\ts \psi_{\la/\mu}(a)\ts s_{\mu}(x/{-}a^+) \een and \ben s_{\mu}(x/{-}a^+)=\sum_{\la}(-1)^{n(\la/\mu)}\ts \vp_{\la/\mu}(a)\ts s_{\la}(x\vt a), \een where $a^+=(a_1,a_2,\dots)$. \qed \eco Expressions for the coefficients in the expansions relating the double and ordinary Schur functions or polynomials can be found in \cite{k:elr}, \cite{k:pf}, \cite{m:sf}, \cite{m:lr} and \cite{ms:lr}. Let us now recall the isomorphism $\om_a:\La(x\vt a)\to\La(x\vt a')$ and the involution $\wh\om:\wh\La(x\vt a)\to \wh\La(x\vt a)$; see \eqref{omega} and \eqref{whome}. Since every polynomial $c(a)\in \QQ[a]$ can be regarded as an element of $\QQ[a']$, the ring $\wh\La(x\vt a')$ can be naturally identified with $\wh\La(x\vt a)$ via the map $c(a)\mapsto c^{\tss\prime}(a')$, where $c^{\tss\prime}(a')=c(a)$ as polynomials in the $a_i$, $i\in\ZZ$. 
\bpr\label{prop:oma} For any elements $u\in \La(x\vt a)$ and $\wh v\in \wh\La(y\vt a)$ we have \ben \big\langle \om_a\tss u,\ts\wh\om\tss \wh v\big\rangle' =\big\langle u, \wh v\big\rangle, \een where $\langle \ ,\ \rangle'$ denotes the pairing \eqref{pairi} between $\La(x\vt a')$ and $\wh\La(y\vt a)\simeq \wh\La(y\vt a')$. \epr \bpf It suffices to take $u=s_{\la}(x\vt a)$ and $\wh v=\wh s_{\mu}(y\vt a)$. Using \eqref{omsla} and \eqref{whome}, we get \ben \big\langle \om_a\tss s_{\la}(x\vt a), \ts\wh\om\tss \wh s_{\mu}(y\vt a)\big\rangle' =\big\langle s_{\la'}(x\vt a'), \ts \wh s_{\mu'}(y\vt a')\big\rangle'. \een By Corollary~\ref{cor:dumup} this equals $\de_{\la\mu}$, and hence coincides with $\langle s_{\la}(x\vt a), \ts \wh s_{\mu}(y\vt a)\rangle$. \epf Introduce the {\it dual forgotten symmetric functions\/} $\wh f_{\la}(y\vt a)\in\wh\La(y\vt a)$ as the images of the dual monomial symmetric functions under the involution $\wh\om$, that is, \ben \wh f_{\la}(y\vt a)=\wh\om\ts \wh m_{\la}(y\vt a'),\qquad \la\in\Pc. \een Furthermore, for any partition $\la$ define the {\it double monomial symmetric functions\/} $m_{\la}(x\vt a)\in\La(x\vt a)$ and the {\it double forgotten symmetric functions\/} $f_{\la}(x\vt a)\in\La(x\vt a)$ by the relations \beql{hmdua} \prod_{i,\ts j\geqslant 1}\frac{1-a_i\ts y_j}{1-x_i\ts y_j} =\sum_{\la\in\Pc} m_{\la}(x\vt a)\ts \wh h_{\la}(y\vt a) \eeq and \beql{ppdua} \prod_{i,\ts j\geqslant 1}\frac{1-a_i\ts y_j}{1-x_i\ts y_j} =\sum_{\la\in\Pc} f_{\la}(x\vt a)\ts \wh e_{\la}(y\vt a). \eeq Hence, by Proposition~\ref{prop:dubas}, under the pairing \eqref{pairi} we have \beql{pamh} \big\langle m_{\la}(x\vt a),\wh h_{\mu}(y\vt a)\big\rangle =\de_{\la\mu} \Fand \big\langle f_{\la}(x\vt a),\wh e_{\mu}(y\vt a)\big\rangle =\de_{\la\mu}. 
\eeq Moreover, Propositions~\ref{prop:om} and \ref{prop:oma} imply \ben \om_a: m_{\la}(x\vt a)\mapsto h_{\la}(x\vt a'), \qquad f_{\la}(x\vt a)\mapsto e_{\la}(x\vt a'), \qquad p_{\la}(x\vt a)\mapsto \ve_{\la}\ts p_{\la}(x\vt a'), \een where $\ve_{\la}=(-1)^{|\la|-\ell(\la)}$. To check the latter relation we need to recall that under the involution $\om$ of the ring of symmetric functions we have $\om:p_{\la}(y)\mapsto \ve_{\la}\ts p_{\la}(y)$; see \cite[Chapter~I]{m:sfh}. We can now obtain analogues of the decomposition of Corollary~\ref{cor:duco} for other families of symmetric functions. \bco\label{cor:decmf} We have the decompositions \ben \bal \prod_{i,\ts j\geqslant 1}\frac{1+x_i\ts y_j}{1+a_i\ts y_j} &=\sum_{\la\in\Pc} e_{\la}(x\vt a)\ts \wh m_{\la}(y\vt a^{\ts\prime}),\\[1em] \prod_{i,\ts j\geqslant 1}\frac{1+x_i\ts y_j}{1+a_i\ts y_j} &=\sum_{\la\in\Pc} \ve_{\la}\ts z_{\la}^{-1}\ts p_{\la}(x\vt a)\ts p_{\la}(y),\\[1em] \prod_{i,\ts j\geqslant 1}\frac{1+x_i\ts y_j}{1+a_i\ts y_j} &=\sum_{\la\in\Pc} m_{\la}(x\vt a)\ts \wh e_{\la}(y\vt a^{\ts\prime}). \eal \een \eco \bpf The relations follow by the application of $\om_a$ to the expansions \eqref{hmde}, \eqref{ppde} and by the application of $\wh\om$ to \eqref{hmdua}. \epf Note that relations of this kind involving the forgotten symmetric functions can be obtained in a similar way. \subsection{Kostka-type and character polynomials} The entries of the transition matrices between the classical bases of the ring of symmetric functions can be expressed in terms of the Kostka numbers $K_{\la\mu}$ and the values $\chi^{\la}_{\mu}$ of the irreducible characters of the symmetric groups; see \cite[Sections~I.6,~I.7]{m:sfh}. 
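For concreteness, recall that the classical Kostka number $K_{\la\mu}$ counts the semistandard tableaux of shape $\la$ and content $\mu$. A brute-force counter (adequate only for tiny shapes; the function name is ours) makes the entries of these transition matrices tangible:

```python
from itertools import product

def kostka(lam, mu):
    """Brute-force Kostka number: count semistandard tableaux of shape lam
    and content mu (rows weakly increasing, columns strictly increasing)."""
    n = sum(lam)
    cells = [(i, j) for i, row in enumerate(lam) for j in range(row)]
    count = 0
    for fill in product(range(1, len(mu) + 1), repeat=n):
        # content check: entry k must appear exactly mu[k-1] times
        if any(fill.count(k + 1) != mk for k, mk in enumerate(mu)):
            continue
        T = dict(zip(cells, fill))
        rows_ok = all(T[i, j] <= T[i, j + 1] for i, j in cells if (i, j + 1) in T)
        cols_ok = all(T[i, j] < T[i + 1, j] for i, j in cells if (i + 1, j) in T)
        if rows_ok and cols_ok:
            count += 1
    return count
```

For example, `kostka((2, 1), (1, 1, 1))` returns `2`, the two standard tableaux of shape $(2,1)$.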
By analogy with the classical case, introduce the {\it Kostka-type polynomials\/} $K_{\la\mu}(a)$ and the {\it character polynomials\/} $\chi^{\la}_{\mu}(a)$ as well as their dual counterparts $\wh K_{\la\mu}(a)$ and $\wh\chi^{\ts\la}_{\mu}(a)$ by the respective expansions \ben s_{\la}(x\vt a)=\sum_{\mu} K_{\la\mu}(a)\ts m_{\mu}(x\vt a), \qquad \wh s_{\la}(y\vt a)=\sum_{\mu} \wh K_{\la\mu}(a)\ts \wh m_{\mu}(y\vt a), \een and \ben p_{\mu}(x\vt a)=\sum_{\la} \chi^{\la}_{\mu}(a)\ts s_{\la}(x\vt a), \qquad p_{\mu}(y)=\sum_{\la} \wh\chi^{\ts\la}_{\mu}(a)\ts \wh s_{\la}(y\vt a). \een If $|\la|=|\mu|$, then \beql{cla} K_{\la\mu}(a)=\wh K_{\la\mu}(a)=K_{\la\mu}\Fand \chi^{\la}_{\mu}(a)=\wh\chi^{\ts\la}_{\mu}(a)=\chi^{\la}_{\mu}. \eeq Moreover, $K_{\la\mu}(a)$ and $\wh\chi^{\ts\la}_{\mu}(a)$ are zero unless $|\la|\geqslant |\mu|$, while $\wh K_{\la\mu}(a)$ and $\chi^{\la}_{\mu}(a)$ are zero unless $|\la|\leqslant |\mu|$. Using the duality properties of the double and dual symmetric functions, we can get all other transition matrices in the same way as this is done in \cite[Sections~I.6,~I.7]{m:sfh}. In particular, we have the relations \ben h_{\mu}(x\vt a)=\sum_{\la} \wh K_{\la\mu}(a)\ts s_{\la}(x\vt a), \qquad \wh h_{\mu}(y\vt a)=\sum_{\la} K_{\la\mu}(a)\ts \wh s_{\la}(y\vt a). \een The Littlewood--Richardson polynomials $c_{\la\mu}^{\tss\nu}(a)$ defined in \eqref{lrpoldef} are Graham positive as they can be written as polynomials in the differences $a_i-a_j$, $i<j$, with positive integer coefficients; see \cite{g:pe}. Explicit positive formulas for $c_{\la\mu}^{\ts\nu}(a)$ were found in \cite{kt:pe}, \cite{k:elr} and \cite{m:lr}. 
Using the fact that $h_k(x\vt a)$ coincides with $s_{(k)}(x\vt a)$, we come to the following expression for the polynomials $\wh K_{\la\mu}(a)$: \ben \wh K_{\la\mu}(a)=\sum_{\rho^{(1)},\dots,\ts\rho^{(l-2)}} c_{(\mu_1)\tss\rho^{(1)}}^{\tss\la}(a)\ts c_{(\mu_2)\tss\rho^{(2)}}^{\tss\rho^{(1)}}(a)\ts \dots c_{(\mu_{l-2})\tss\rho^{(l-2)}}^{\tss\rho^{(l-3)}}(a)\ts c_{(\mu_{l-1})\tss(\mu_{l})}^{\tss\rho^{(l-2)}}(a), \een summed over partitions $\rho^{(i)}$, where $\mu=(\mu_1,\dots,\mu_l)$. In particular, each dual Kostka-type polynomial $\wh K_{\la\mu}(a)$ is Graham positive. For a more explicit tableau presentation of the polynomials $\wh K_{\la\mu}(a)$ see \cite{f:tm}. \bex\label{ex:kona} We have \ben \wh K_{(3\ts 2)\tss(3\ts 2\ts 1)}(a)= \sum_{\rho} c_{(3)\tss\rho}^{\tss(3\ts 2)}(a)\ts c_{(2)\tss(1)}^{\tss\rho}(a) =c_{(3)\tss(2)}^{\tss(3\ts 2)}(a)\ts c_{(2)\tss(1)}^{\tss(2)}(a) +c_{(3)\tss(2\ts 1)}^{\tss(3\ts 2)}(a) \ts c_{(2)\tss(1)}^{\tss(2\ts 1)}(a). \een Now, $c_{(3)\tss(2)}^{\tss(3\ts 2)}(a)=c_{(3)\tss(2)}^{\tss(3\ts 2)}=1$ and $c_{(2)\tss(1)}^{\tss(2\ts 1)}(a)=c_{(2)\tss(1)}^{\tss(2\ts 1)}=1$, while applying \cite[Theorem~2.1]{m:lr} we get $c_{(2)\tss(1)}^{\tss(2)}(a)=a_{-1}-a_1$ and $c_{(3)\tss(2\ts 1)}^{\tss(3\ts 2)}(a)=a_{-2}-a_2$. Hence, \ben \wh K_{(3\ts 2)\tss(3\ts 2\ts 1)}(a)= a_{-2}+a_{-1}-a_1-a_2. \een \vskip-1.2\baselineskip \qed \eex The polynomials $K_{\la\mu}(a)$ can be calculated by the following procedure. Given a partition $\mu=(\mu_1,\dots,\mu_l)$, write each dual complete symmetric function $\wh h_{\mu_i}(y\vt a)$ as a series of the hook Schur functions with coefficients in $\QQ[a]$ using Corollary~\ref{cor:sone}. Then multiply the Schur functions using the classical Littlewood--Richardson rule. Finally, use Theorem~\ref{thm:invtabre} to represent each Schur function as a series of the dual Schur functions.
\bex\label{ex:konadu} By Example~\ref{ex:sone}, $\wh h_1(y\vt a)^2$ equals \ben \big(s_{(1)}(y)+a_0\tss s_{(2)}(y)- a_1\tss s_{(1^2)}(y)+a^2_0\tss s_{(3)}(y) -a_0\tss a_1\tss s_{(2\ts 1)}(y)+a^2_1\tss s_{(1^3)}(y)+\dots\big)^2. \een Hence, multiplying the Schur functions, we find that \ben \bal \wh h_1(y\vt a)^2&=s_{(2)}(y)+s_{(1^2)}(y)+2\tss a_0\tss s_{(3)}(y) +2\tss(a_0-a_1)\tss s_{(2\ts 1)}(y)-2\tss a_1\tss s_{(1^3)}(y)\\ {}&+3\tss a^2_0\tss s_{(4)}(y) +(3\tss a^2_0-4\tss a_0\tss a_1)\tss s_{(3\ts 1)}(y) +(a^2_0+a^2_1-2\tss a_0\tss a_1)\tss s_{(2^2)}(y)\\ {}&+(3\tss a^2_1-4\tss a_0\tss a_1)\tss s_{(2\ts 1^2)}(y) +3\tss a^2_1\tss s_{(1^4)}(y)+\cdots. \eal \een Expanding now each Schur function with the use of Theorem~\ref{thm:invtabre} or Corollary~\ref{cor:inext}, we come to \ben \bal \wh h_1(y\vt a)^2&=\wh s_{(2)}(y\vt a)+\wh s_{(1^2)}(y\vt a) +(a_0-a_{-1})\tss \wh s_{(3)}(y\vt a) +(a_0-a_1)\tss \wh s_{(2\ts 1)}(y\vt a)\\ {}&+(a_2-a_1)\tss \wh s_{(1^3)}(y\vt a) +(a_0-a_{-2})\tss(a_0-a_{-1})\tss \wh s_{(4)}(y\vt a)\\ {}&+(a_0-a_1)\tss(a_0-a_{-1})\tss \wh s_{(3\ts 1)}(y\vt a) +(a_0-a_1)^2\tss \wh s_{(2^2)}(y\vt a)\\ {}&+(a_1-a_0)\tss(a_1-a_2)\tss \wh s_{(2\ts 1^2)}(y\vt a) +(a_1-a_2)\tss(a_1-a_3)\tss \wh s_{(1^4)}(y\vt a)+\cdots, \eal \een thus calculating the first few polynomials $K_{\la\tss(1^2)}(a)$. \qed \eex \bex\label{ex:monom} Using Example~\ref{ex:konadu}, we can calculate the first few double monomial symmetric functions: \ben \bal m_{(1)}(x\vt a)&=s_{(1)}(x\vt a),\qquad m_{(1^2)}(x\vt a)=s_{(1^2)}(x\vt a)\\ m_{(2)}(x\vt a)&=s_{(2)}(x\vt a)-s_{(1^2)}(x\vt a)\\ m_{(1^3)}(x\vt a)&=s_{(1^3)}(x\vt a) +(a_1-a_2)\tss s_{(1^2)}(x\vt a)\\ m_{(2\ts 1)}(x\vt a)&=s_{(2\ts 1)}(x\vt a)-2\tss s_{(1^3)}(x\vt a) +(2\tss a_2-a_1-a_0)\tss s_{(1^2)}(x\vt a)\\ m_{(3)}(x\vt a)&=s_{(3)}(x\vt a)- s_{(2\ts 1)}(x\vt a)+\tss s_{(1^3)}(x\vt a) +(a_{-1}-a_0)\tss s_{(1^2)}(x\vt a). 
\eal \een \vskip-1.1\baselineskip \qed \eex The following formula for the dual character polynomials is implied by Corollary~\ref{cor:expasdouble}. \bco\label{cor:char} We have \ben \wh\chi^{\ts\la}_{\mu}(a)=\sum_{\rho} (-1)^{m(\la/\rho)}\ts \chi^{\rho}_{\mu}\ts \psi_{\la/\rho}(a), \een summed over diagrams $\rho$ with $|\rho|=|\mu|$. \qed \eco \section{Interpolation formulas} \label{sec:idd} \setcounter{equation}{0} \subsection{Rational expressions for the transition coefficients} Applying Proposition~\ref{prop:interp}, we can get expressions for the polynomials $\wh K_{\la\mu}(a)$, $\chi_{\mu}^{\la}(a)$, $\wh c_{\la\mu}^{\ts\tss\nu}(a)$ and $c_{\la\mu}^{\tss\nu}(a)$ as rational functions in the variables $a_i$. \bpr\label{prop:ratfe} We have the expressions \begin{align} \label{kostka} \wh K_{\la\mu}(a)&=\sum_{R}\sum_{k=0}^{l} \frac{h_{\mu}(a_{\rho^{(k)}}\vt a)}{(|a_{\rho^{(k)}}|-|a_{\rho^{(0)}}|) \ldots\wedge\ldots(|a_{\rho^{(k)}}|-|a_{\rho^{(l)}}|)},\\ \label{charact} \chi_{\mu}^{\la}(a)&=\sum_{R}\sum_{k=0}^{l} \frac{p_{\mu}(a_{\rho^{(k)}}\vt a)}{(|a_{\rho^{(k)}}|-|a_{\rho^{(0)}}|) \ldots\wedge\ldots(|a_{\rho^{(k)}}|-|a_{\rho^{(l)}}|)},\\ \label{duallr} \wh c_{\la\mu}^{\ts\tss\nu}(a)&=\sum_{R}\sum_{k=0}^{l} \frac{s_{\nu/\mu}(a_{\rho^{(k)}}\vt a)}{(|a_{\rho^{(k)}}|-|a_{\rho^{(0)}}|) \ldots\wedge\ldots(|a_{\rho^{(k)}}|-|a_{\rho^{(l)}}|)}, \end{align} summed over all sequences of partitions $R$ of the form \ben \varnothing=\rho^{(0)}\to\rho^{(1)}\to \dots\to\rho^{(l-1)}\to\rho^{(l)}=\la. \een Moreover, \beql{lrpo} c_{\la\mu}^{\tss\nu}(a)=\sum_{R}\sum_{k=0}^{l} \frac{s_{\la}(a_{\rho^{(k)}}\vt a)}{(|a_{\rho^{(k)}}|-|a_{\rho^{(0)}}|) \ldots\wedge\ldots(|a_{\rho^{(k)}}|-|a_{\rho^{(l)}}|)}, \eeq summed over all sequences of partitions $R$ of the form \ben \mu=\rho^{(0)}\to\rho^{(1)}\to \dots\to\rho^{(l-1)}\to\rho^{(l)}=\nu. \een \vskip-1.2\baselineskip \qed \epr The last formula was given in \cite{ms:lr} for polynomials closely related to $c_{\la\mu}^{\tss\nu}(a)$. 
Due to \eqref{cla}, the Kostka numbers $K_{\la\mu}$ and the values of the irreducible characters $\chi^{\la}_{\mu}$ of the symmetric group can be found from \eqref{kostka} and \eqref{charact}. \bex\label{ex:valide} If $|\la|=n$, then \ben \chi^{\la}_{(1^n)}=\sum_{R}\sum_{k=1}^{n} \frac{(|a_{\rho^{(k)}}|-|a_{\rho^{(0)}}|)^{n-1}} {(|a_{\rho^{(k)}}|-|a_{\rho^{(1)}}|) \ldots\wedge\ldots(|a_{\rho^{(k)}}|-|a_{\rho^{(n)}}|)} =\sum_{R}\ts 1, \een which coincides with the number of standard $\la$-tableaux. \qed \eex \subsection{Identities with dimensions of skew diagrams} Specializing the variables by setting $a_i=-i+1$ for all $i\in\ZZ$ in the expressions of Proposition~\ref{prop:ratfe}, we obtain some identities for the Kostka numbers, the values of the irreducible characters and the Littlewood--Richardson coefficients involving dimensions of skew diagrams. Under this specialization, the double symmetric functions become the {\it shifted symmetric functions\/} of \cite{oo:ss}, so that some of the combinatorial results concerning the ring $\La(x\vt a)$ discussed above in the paper reduce to the respective results of \cite{oo:ss} for the ring $\La^*$ of shifted symmetric functions; see also \cite{io:kc} for an alternative description of the ring $\La^*$. For any skew diagram $\theta$ denote by $\dim\theta$ the number of standard $\theta$-tableaux (i.e., row and column strict) with entries in $\{1,2,\dots,|\theta|\}$ and set \ben H_{\theta}=\frac{|\theta|!}{\dim\theta}. \een If $\theta$ is normal (nonskew), then $H_{\theta}$ coincides with the product of the hooks of $\theta$ due to the hook formula. Under the specialization $a_i=-i+1$, for any partition $\mu$ we have \ben a_{\mu}=(\mu_1,\mu_2-1,\dots). \een The following formula for the values of the double Schur functions was proved in \cite{oo:ss}: if $\mu\subseteq \nu$, then \ben s_{\mu}(a_{\nu}\vt a)=\frac{H_{\nu}}{H_{\nu/\mu}}. 
\een This formula is deduced from Proposition~\ref{prop:interp} with the use of \eqref{hoo} which takes the form $s_{\la}(a_{\la}\vt a)=H_{\la}$. Then \eqref{lrpo} implies the identity for the Littlewood--Richardson coefficients $c_{\la\mu}^{\tss\nu}$ which was proved in \cite{ms:lr}: \ben c_{\la\mu}^{\nu}=\sum_{\rho}(-1)^{|\nu/\rho|} \frac{H_{\rho}}{H_{\nu/\rho}\ts H_{\rho/\la}\ts H_{\rho/\mu}}, \een summed over diagrams $\rho$ which contain both $\la$ and $\mu$, and are contained in $\nu$. We also have the respective consequences of \eqref{kostka} and \eqref{charact}. For partitions $\mu=(1^{m_1}2^{m_2}\dots r^{m_r})$ and $\rho=(\rho_1,\dots,\rho_l)$ set \ben \pi_{\mu}(\rho)=\prod_{k=1}^r \Big((1-\rho_1)^{k}+\dots+ (l-\rho_l)^{k}-1^k-\dots-l^{\tss k}\Big)^{m_k} \een and \ben \varkappa_{\mu}(\rho)=\prod_{k=1}^r \Big(\sum_{i_1\geqslant\dots\geqslant i_k\geqslant 1} \rho_{i_1}(\rho_{i_2}-1)\dots (\rho_{i_k}-k+1)\Big)^{m_k}. \een The following formulas are obtained by specializing $a_i=i$ and $a_i=-i+1$, respectively, in \eqref{charact} and \eqref{kostka}. \bco\label{cor:kochar} Let $\la$ and $\mu$ be partitions of $n$. Then \ben \chi^{\la}_{\mu}=\sum_{\rho\subseteq\la}\ts \frac{(-1)^{|\rho|}\ts \pi_{\mu}(\rho)}{H_{\rho}\ts H_{\la/\rho}} \een and \ben K_{\la\mu}=\sum_{\rho\subseteq\la}\ts \frac{(-1)^{|\la/\rho|}\ts \varkappa_{\mu}(\rho)}{H_{\rho}\ts H_{\la/\rho}}. \een \vskip-1.2\baselineskip \qed \eco \bex\label{ex:char} Let $\la=(3\ts 2)$ and $\mu=(2\ts 1^3)$. Then \ben \pi_{\mu}(\rho)=-(\rho_1+\rho_2)^3\ts (\rho_1^{\tss 2}+\rho_2^{\tss 2}-2\ts\rho^{}_1-4\ts\rho^{}_2), \een and \ben \bal H_{(3\ts 2)/(1)}&=24/5,\qquad H_{(3\ts 2)/(2)}=2,\qquad H_{(3\ts 2)/(1^2)}=3,\qquad H_{(3\ts 2)/(3)}=2,\\ H_{(3\ts 2)/(2\ts 1)}&=1,\qquad H_{(3\ts 2)/(2\ts 2)}=1,\qquad H_{(3\ts 2)/(3\ts 2)}=1. \eal \een Hence, \ben \chi^{(3\ts 2)}_{(2\ts 1^3)}= -\frac{5}{24}+\frac{32}{6}+\frac{81}{12} -\frac{81}{3}+\frac{256}{12}-\frac{125}{24}=1. \een \eex
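Example~\ref{ex:char} can be reproduced mechanically: the skew dimensions, and hence the quantities $H_{\theta}$, are computable by peeling off corner boxes, after which the sum of Corollary~\ref{cor:kochar} collapses to $1$. The following stdlib sketch (all function names are ours) hard-codes $\pi_{\mu}$ for $\mu=(2\ts 1^3)$ in the two-row form used in the example:

```python
from fractions import Fraction
from math import factorial

def dim_skew(lam, mu=()):
    """Number of standard tableaux of the skew shape lam/mu, obtained by
    peeling off removable corner cells of lam that lie outside mu."""
    mu = tuple(mu) + (0,) * (len(lam) - len(mu))
    if tuple(lam) == mu:
        return 1
    total = 0
    for i, li in enumerate(lam):
        # cell (i, li-1) is a removable corner of lam outside mu
        if li > mu[i] and (i + 1 == len(lam) or lam[i + 1] < li):
            total += dim_skew(lam[:i] + (li - 1,) + lam[i + 1:], mu)
    return total

def H(lam, mu=()):
    # H_theta = |theta|! / dim(theta) for theta = lam/mu
    n = sum(lam) - sum(mu)
    return Fraction(factorial(n), dim_skew(tuple(lam), mu))

def pi_mu(rho):
    # pi_mu(rho) for mu = (2,1,1,1), rho with at most two parts
    r1, r2 = (tuple(rho) + (0, 0))[:2]
    return -(r1 + r2) ** 3 * (r1 ** 2 + r2 ** 2 - 2 * r1 - 4 * r2)

lam = (3, 2)
rhos = [(a, b) for a in range(4) for b in range(min(a, 2) + 1)]  # rho contained in lam
chi = sum((-1) ** (a + b) * pi_mu((a, b)) / (H((a, b)) * H(lam, (a, b)))
          for a, b in rhos)
```

The exact arithmetic reproduces the listed hook quotients, e.g. $H_{(3\ts2)/(1)}=24/5$, and gives `chi == 1`.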
\section{Introduction} Stellar mass compact binaries, involving black holes and neutron stars, are the most promising sources of gravitational radiation for the operational and planned ground-based laser interferometric gravitational wave (GW) detectors. Gravitational wave signals from inspiraling compact binaries are being searched for in the detector data by using {\it matched filtering} \cite{Helstrom} with several types of theoretically modeled inspiral templates \cite{Blanchet:1995ez,Damour:2000zb}. A good resource for computing these templates in software, which is being actively used by the LIGO Scientific Collaboration (LSC) \cite{LSC} and the Virgo Collaboration \cite{VirgoC} for analyzing detector data, is the LSC Algorithm Library (LAL) \cite{LAL}. The construction of these search templates requires {\it two} crucial inputs from the post-Newtonian (PN) approximation to general relativity, appropriate for describing the dynamics of a compact binary during its inspiral phase. These are the 3PN accurate dynamical (orbital) energy ${\cal E}(x)$ and the 3.5PN accurate expression for the GW luminosity ${\cal L}(x)$ \cite{PN_results}, both of which are usually expressed as PN series in the gauge invariant quantity $x\equiv (G\,m\, \omega/c^3)^{2/3}$, where $m$ and $\omega$ are the total mass and the orbital angular frequency of the binary, and the familiar symbols $G$ and $c$ denote the universal gravitational constant and the speed of light in vacuum, respectively. Recall that the 3PN accurate expression for ${\cal E}(x)$ provides corrections to the Newtonian orbital energy to the order of $(v/c)^6$, where $v$ is the orbital speed. Further, the currently employed search templates only require the Newtonian contributions to the amplitude of GW polarizations, $h_{+} (t)$ and $h_{\times}(t)$.
However, expressions for $h_{+} (t)$ and $h_{\times}(t)$ that include the 3PN amplitude corrections are available in Ref.~\cite{BFIS} and are being used to develop amplitude corrected templates for GW inspiral searches. With the help of the two aforesaid PN-accurate inputs, one can construct two distinct classes of inspiral GW templates. The templates belonging to the first category require \begin{equation}\label{phiEvolution} \frac{d \phi (t)}{dt} = \omega (t) \equiv \frac{c^3}{G\,m}\, x^{3/2}\, \end{equation} and PN-accurate prescriptions for the reactive evolution of $x(t)$. Such templates are usually referred to as {\em adiabatic} inspiral templates, and all PN-accurate inspiral templates that LAL employs are of this type. In this paper, we consider from this class time-domain templates of the TaylorT1 \cite{PN_results} and the Numerical Relativity (NR) inspired TaylorT4 \cite{CC07} families and frequency-domain templates of the TaylorF2 family \cite{AISS}. For all three families, we incorporate radiation reaction effects to the (relative) 3.5PN order (see Eqs.~(\ref{EqP1}), (\ref{EqP3}), and (\ref{EqP4}) below). Due to their use of the $x$-based phase evolution expression in Eq.~(\ref{phiEvolution}), it may be argued that these templates model GWs from compact binaries inspiraling under PN-accurate radiation reaction along {\it exact} circular orbits \cite{AG07}. A new class of inspiral approximants introduced in Ref.~\cite{AG07}, termed TaylorEt, requires a PN expansion for $d\phi/dt$ in terms of the orbital binding {\em Energy} to derive the {\em temporal} GW phase evolution. This alternative phasing prescription models GWs from compact binaries inspiraling under PN-accurate reactive dynamics along {\it PN-accurate circular orbits} \cite{AG07}. In other words, the TaylorEt approximant {\it explicitly} incorporates the secular contributions to GW phase evolution appearing at the 1PN, 2PN, and 3PN orders.
In contrast, in the case of $x$-based adiabatic inspiral templates and due to the use of Eq.~(\ref{phiEvolution}), the above-mentioned conservative (and secular) contributions to the GW phase evolution do not appear before the radiation reaction kicks in at the absolute 2.5PN order. It should be noted that the cost of computing TaylorEt templates is comparable to that of TaylorT1/T4 templates. In this paper, we study how effectively and faithfully the TaylorT1, TaylorT4, and TaylorF2 inspiral templates, at 3.5PN order, can capture a GW signal modeled using the TaylorEt approximant of the same order. The main motivation for using the latter as the fiducial signal originates from the observation that the TaylorEt approximant is an appropriate zero-eccentricity limit of GW phasing for compact binaries inspiraling along PN-accurate eccentric orbits \cite{AG07}. We quantify our results by computing fitting factors (FF) following prescriptions detailed in Refs.~\cite{A95,DIS}, and inherent systematic errors in the estimated value of $m$ and the symmetric mass-ratio, $\eta$, for the various search templates, relevant for the initial LIGO (hereafter referred to as ``LIGO''), Advanced LIGO (or ``AdLIGO'') and Virgo detectors. We conclude that it is desirable to incorporate the TaylorEt approximant at 3.5PN order into LAL to minimize possible loss of inspiral events. Further, one might also view this work as an exercise in assessing the effects of using inspiral templates from different representations on a GW signal's detectability and parameter estimation in earth-based detectors. Similar assessments of systematic errors on GW searches in LISA and Virgo were made by comparing inspiral templates of different PN orders from the same representation in Refs.~\cite{CV07} and~\cite{PC01}, respectively. The plan of the paper is as follows.
In the next section, Sec.~\ref{approximants}, we provide explicit PN-accurate equations required for constructing TaylorT1, TaylorT4, TaylorF2 and TaylorEt templates having 3.5PN accurate reactive evolution. Section~\ref{FF} explains how we perform our FF computations and tabulates their values, along with the associated systematic errors in $m$ and $\eta$, for our different templates. We briefly discuss the implications of these results on the on-going searches in real interferometric data. We conclude in Sec.~\ref{conclusions} by providing a brief summary and future directions. \section{Phasing formulae for various inspiral templates} \label{approximants} The PN approximation to general relativity is expected to describe accurately the adiabatic inspiral phase of a comparable mass compact binary \cite{CC07}. During this phase, the change in the orbital frequency over one orbit may be considered to be tiny compared to the mean orbital frequency itself. For compact binaries, having negligible eccentricities, the adiabatic orbital phase evolution can be accurately described with the help of 3PN and 3.5PN accurate expressions for the orbital energy and the GW luminosity, respectively, available in Refs.~\cite{PN_results}. While employing $x$ as a PN expansion parameter, there exist several prescriptions to compute the adiabatic GW phase evolution. Each prescription, termed a PN approximant, provides a slightly different GW phase evolution and, correspondingly, a different inspiral template family. Following Ref.~\cite{LAL}, we first list the equations describing the TaylorT1 and the TaylorF2 approximants, which are regularly employed by various GW data analysis groups. 
The time-domain TaylorT1 approximant is given by \begin{subequations} \label{EqP1} \begin{align} \label{EqP1a} h(t) & \propto x\, \cos 2\,\phi(t) \,,\\ \label{EqP1b} \frac{d \phi (t)}{dt} & = \omega (t) \equiv \frac{c^3}{G\,m}\, x^{3/2}\,,\\ \frac{d\,x(t)}{dt} &= -\frac{{\cal L}( x)}{ \left( d {\cal E} / d x \right)}\,, \label{EqP1c} \end{align} \end{subequations} where the proportionality constant in Eq. (\ref{EqP1a}) may be set to unity for our analysis. To construct the TaylorT1 3.5PN order adiabatic inspiral templates, one needs to use 3.5PN accurate ${\cal L}(x)$ and 3PN accurate ${\cal E}(x)$, respectively. The explicit expressions for these quantities, extracted from Refs.~\cite{PN_results}, read \begin{subequations} \label{EqP2} \begin{align} \label{EqP2a} {\cal L}(x) &= \frac{32\,\eta^2\,c^5}{5\,G}\, x^{5}\, \biggl \{ 1 - \biggl [ {\frac {1247}{336}}+{\frac {35}{12}}\,\eta \biggr ] x +4\,\pi \,{x}^{3/2} \nonumber \\ & \quad - \biggl [ {\frac {44711}{9072}} -{\frac { 9271}{504}}\,\eta -{\frac {65}{18}}\,{\eta}^{2} \biggr ] {x}^{2} - \biggl [ {\frac {8191}{ 672}} \nonumber \\ & \quad +{\frac {583}{24}}\,\eta \biggr ]\, \pi\, {x}^{5/2} + \biggl [ {\frac {6643739519}{69854400}} +\frac{ 16\, {\pi }^{2}}{3} \nonumber \\ & \quad -{\frac {1712}{ 105}}\,\gamma - \left({\frac {134543}{7776}}-{\frac {41}{48}}\,{\pi } ^{2} \right) \eta -{\frac {94403}{3024}}\,{\eta}^{2} \nonumber \\ & \quad -{\frac {775}{324}} \,{\eta}^{3}-{\frac {1712}{105}}\,\ln \left( 4\,\sqrt {x} \right) \biggr ] {x}^{3} - \biggl [ {\frac {16285}{504}}\, \nonumber \\ & \quad -{\frac {214745}{ 1728}}\,\eta - {\frac {193385}{3024}}\,{\eta}^{2} \biggr ]\, \pi\, {x}^{7/2} \biggr \} \,,\\ {\cal E}(x) &= -\frac{\eta\, m\, c^2}{2}\,x \biggl \{ 1 - \frac{1}{12} \biggl [ 9 + \eta \biggr ] x - \biggl [ {\frac {27}{8}} -{ \frac {19}{8}}\,\eta \nonumber \\ & \quad +\frac{1}{24}\,{\eta}^{2} \biggr ]{x}^{2} - \biggl [ {\frac {675}{64}}+{\frac {35}{5184}}\,{\eta}^{3}+{\frac { 155}{96}}\,{\eta}^{2} \nonumber \\ & 
\quad + \left( {\frac {205}{96}}\,{\pi}^{2}-{\frac {34445}{576}} \right) \eta \biggr ] {x}^{3} \biggr \} \,, \label{EqP2b} \end{align} \end{subequations} where $\gamma$ is the Euler constant and $\eta \equiv \mu/m$, with $\mu$ being the reduced mass of the binary. The frequency-domain TaylorF2 approximant at 3.5PN order, extracted from Ref.~\cite{AISS}, reads \begin{subequations} \label{EqP3} \begin{align} \label{EqP3a} \tilde h(f) & \propto f^{-7/6}\, e^{i\, \psi(f)}\,,\\ \psi(f) &= 2\, \pi\, f\, t_c - \phi_c -\frac{\pi}{4} \nonumber \\ & \quad + \frac{3}{128\, \eta\, (v/c)^5} \sum_{k=0}^{k=7} \alpha_k\, \left(\frac{v}{c}\right)^{k}\,, \label{EqP3b} \end{align} \end{subequations} where $ v = ( G\pi\, m\, f / c^3)^{1/3}$, and $t_c$ and $\phi_c$ are the fiducial time and phase of coalescence, respectively. The explicit expressions for the PN coefficients $\alpha_k$ are \begin{subequations} \begin{align} \alpha_0&=1,\\ \alpha_1&=0,\\ \alpha_2&=\frac{20}{9}\,\left( \frac{743}{336} + \frac{11}{4}\eta \right),\label{Eq:alpha2}\\ \alpha_{3}&= -16\pi,\label{Eq:alpha3}\\ \alpha_4&=10\,\left( \frac{3058673}{1016064} + \frac{5429}{1008}\,\eta + \frac{617}{144}\,\eta^2 \right),\\ \alpha_5&=\pi\biggl\{\frac{38645}{756}+ \frac{38645}{252}\, \log \left(\frac{v}{v_{\rm lso}}\right) \nonumber \\ & \quad - {65\over9}\eta\left[1 + 3\log \left(\frac{v}{v_{\rm lso}}\right)\right]\biggr\},\\ \alpha_{6}&=\left(\frac{11583231236531}{4694215680} - \frac{640\,{\pi }^2}{3} - \frac{6848\,\gamma }{21}\right) \nonumber \\ & \quad -\eta \,\biggl ( \frac{15737765635}{3048192} - \frac{2255\,{\pi }^2}{12} \biggr ) \nonumber \\ & \quad +{76055\over 1728}\eta^2-{127825\over 1296}\eta^3-{6848\over 21} \log\left(4\;{v}\right),\\ \alpha_7 &=\left(\frac{77096675}{254016} + \frac{378515}{1512}\,\eta - \frac{74045}{756}\,\eta^2\right)\,\pi\,, \end{align} \end{subequations} where $v_{\rm lso}$ is the orbital speed at the last stable orbit, which we take to lie at $6\, Gm/c^2$.
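These coefficients are straightforward to transcribe into code. The sketch below (function names are ours; $v$ stands for the dimensionless $v/c$, and $\gamma$ is the Euler constant) evaluates the $\alpha_k$ and assembles the stationary-phase $\psi(f)$ of Eq.~(\ref{EqP3b}) with the $2\pi f t_c$ and $\phi_c$ terms set to zero:

```python
import math

def taylorf2_alphas(eta, v, v_lso):
    """3.5PN TaylorF2 phase coefficients alpha_k, transcribed from the text."""
    g = 0.5772156649015329  # Euler's constant gamma
    L = math.log(v / v_lso)
    a = [0.0] * 8
    a[0] = 1.0
    a[1] = 0.0
    a[2] = 20.0 / 9.0 * (743.0 / 336.0 + 11.0 / 4.0 * eta)
    a[3] = -16.0 * math.pi
    a[4] = 10.0 * (3058673.0 / 1016064.0 + 5429.0 / 1008.0 * eta
                   + 617.0 / 144.0 * eta ** 2)
    a[5] = math.pi * (38645.0 / 756.0 + 38645.0 / 252.0 * L
                      - 65.0 / 9.0 * eta * (1.0 + 3.0 * L))
    a[6] = (11583231236531.0 / 4694215680.0 - 640.0 * math.pi ** 2 / 3.0
            - 6848.0 * g / 21.0
            - eta * (15737765635.0 / 3048192.0 - 2255.0 * math.pi ** 2 / 12.0)
            + 76055.0 / 1728.0 * eta ** 2 - 127825.0 / 1296.0 * eta ** 3
            - 6848.0 / 21.0 * math.log(4.0 * v))
    a[7] = math.pi * (77096675.0 / 254016.0 + 378515.0 / 1512.0 * eta
                      - 74045.0 / 756.0 * eta ** 2)
    return a

def psi_f(eta, v, v_lso):
    """Stationary-phase Fourier phase of Eq. (EqP3b) with t_c = phi_c = 0."""
    a = taylorf2_alphas(eta, v, v_lso)
    return -math.pi / 4.0 + 3.0 / (128.0 * eta * v ** 5) * sum(
        ak * v ** k for k, ak in enumerate(a))
```

For an equal-mass binary ($\eta=1/4$) the $\eta$-dependent coefficients simplify; e.g. $\alpha_2 = (20/9)(743/336 + 11/16)$.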
Recently, Ref.~\cite{CC07} introduced another Taylor approximant, termed TaylorT4. This approximant is obtained by Taylor expanding in $x$ the right-hand side of Eq.~(\ref{EqP1c}) for $dx/dt$ and truncating it at the appropriate reactive PN order. This approximant at 3.5PN order has an interesting (and accidental) property that was discovered due to the recent advances in Numerical Relativity (NR) involving coalescing binary black holes \cite{FP}. It was observed in Ref.~\cite{CC07} that the NR-based GW phase evolution for an equal-mass binary black hole agrees quite well with its counterpart in the TaylorT4 approximant at 3.5PN order. Specifically, Ref.~\cite{CC07} observed that the accumulated GW phase difference between TaylorT4 waveforms at 3.5PN order and NR waveforms remains within 0.06 radians over 30 wave cycles when the waveforms are matched at $x \sim 0.215$. The time-domain TaylorT4 approximant at 3.5PN order is specified by \begin{subequations} \label{EqP4} \begin{align} \label{EqP4a} h(t) & \propto x\, \cos 2\,\phi(t) \,,\\ \label{EqP4b} \frac{d \phi (t)}{dt} & = \omega (t) \equiv \frac{c^3}{G\,m}\, x^{3/2}\, ,\\ \label{EqP4c} \frac{d\,x(t)}{dt} &= \frac{c^3}{G\,m}\, \frac{64\,\eta}{5}\, x^5 \biggl \{ 1 - \left( {\frac {743}{336}}+ \frac{11}{4}\,\eta \right) x +4\,\pi\,{x}^{3/2} \nonumber \\ & \quad + \left( {\frac {34103}{18144}}+{\frac {13661}{2016}}\,\eta+{\frac {59}{18}}\,{\eta}^{2} \right) {x}^{2} - \biggl [ {\frac {4159}{672}}\, \nonumber \\ & \quad +{\frac {189}{8}}\,\eta \biggr]\,\pi\, {x}^{5/2} + \biggl [ {\frac {16447322263}{139708800}} -{\frac {1712}{105}}\, \gamma \nonumber \\ & \quad +\frac{16\,{\pi}^{2}}{3}-{\frac {3424}{105}}\,\ln \left( 2 \right) -{\frac {856}{105}}\,\ln \left( x \right) - \biggl ( {\frac {56198689}{217728}} \nonumber \\ & \quad -{\frac {451}{48}}\,{\pi}^{2} \biggr ) \eta +{\frac {541}{896}}\,{\eta}^{2} -{\frac {5605}{2592}}\,{\eta}^{3} \biggr ] {x}^{3} - \biggl [ {\frac {4415}{4032}}\, \nonumber \\ & \quad -{\frac {358675}{6048}}\,\eta
-{\frac {91495}{1512}}\,{\eta}^{2} \biggr ]\,\pi\, {x}^{7/2} \biggr \} \,. \end{align} \end{subequations} It should be noted that the TaylorF2 waveform in Eqs.~(\ref{EqP3}) is the Fourier transform of $h(t)$, given by Eqs.~(\ref{EqP4}) above, computed with the help of the stationary phase approximation \cite{BO}; we speculate that this is the reason that the TaylorT4 approximant is not directly employed in LAL (which already uses the TaylorF2 approximant). A close inspection of various time-domain adiabatic inspiral templates available in LAL reveals that they all invoke Eq.~(\ref{phiEvolution}). These template families differ from one another only in the manner in which they incorporate the reactive evolution of $x(t)$. For example, PadeT1 time-domain inspiral templates are constructed by invoking a specific Pad\'e resummation for the right-hand side of Eq.~(\ref{EqP4c}). Therefore, we may state that various $x$-based inspiral templates provide slightly different GW phase evolution by perturbing a compact binary in an exact circular orbit, defined by Eq.~(\ref{phiEvolution}), by different prescriptions for the reactive evolution of $x(t)$. This is the main reason behind the observation that these templates model GWs from compact binaries inspiraling under PN-accurate radiation reaction along {\it exact} circular orbits. Interestingly, it is possible to construct, in a gauge-invariant manner, inspiral GW search templates that do not require working in terms of the $x$ variable. The TaylorEt approximant \cite{AG07} employs the orbital binding energy in lieu of that variable to describe PN-accurate adiabatic GW phase evolution. Hence, it requires an appropriate PN expansion for $d\phi/dt$ in terms of the orbital binding energy. Accordingly, it can be argued that the TaylorEt approximant models GWs from compact binaries inspiraling under PN-accurate radiation reaction along {\it PN-accurate circular orbits}.
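Returning to the $x$-based evolution, the TaylorT4 system of Eqs.~(\ref{EqP4}) is straightforward to integrate numerically. A rough sketch in units with $G\,m/c^3=1$ follows; the step size, initial condition, simple Euler phase update, and function names are demonstration choices of ours:

```python
import math

def dxdt_T4(x, eta):
    """Right-hand side of Eq. (EqP4c), transcribed, in units G*m/c^3 = 1."""
    g = 0.5772156649015329  # Euler's constant gamma
    pi = math.pi
    b = (1.0
         - (743.0 / 336.0 + 11.0 / 4.0 * eta) * x
         + 4.0 * pi * x ** 1.5
         + (34103.0 / 18144.0 + 13661.0 / 2016.0 * eta + 59.0 / 18.0 * eta ** 2) * x ** 2
         - (4159.0 / 672.0 + 189.0 / 8.0 * eta) * pi * x ** 2.5
         + (16447322263.0 / 139708800.0 - 1712.0 / 105.0 * g + 16.0 * pi ** 2 / 3.0
            - 3424.0 / 105.0 * math.log(2.0) - 856.0 / 105.0 * math.log(x)
            - (56198689.0 / 217728.0 - 451.0 / 48.0 * pi ** 2) * eta
            + 541.0 / 896.0 * eta ** 2 - 5605.0 / 2592.0 * eta ** 3) * x ** 3
         - (4415.0 / 4032.0 - 358675.0 / 6048.0 * eta
            - 91495.0 / 1512.0 * eta ** 2) * pi * x ** 3.5)
    return 64.0 * eta / 5.0 * x ** 5 * b

def rk4_step(f, y, h, *args):
    # one classical fourth-order Runge--Kutta step
    k1 = f(y, *args)
    k2 = f(y + 0.5 * h * k1, *args)
    k3 = f(y + 0.5 * h * k2, *args)
    k4 = f(y + h * k3, *args)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# evolve x(t) and the orbital phase, dphi/dt = x**1.5 (Eq. EqP4b in these units)
eta, h = 0.25, 50.0
x, phi = 0.05, 0.0
xs = [x]
for _ in range(200):
    phi += h * x ** 1.5  # crude Euler update for the phase (sketch only)
    x = rk4_step(dxdt_T4, x, h, eta)
    xs.append(x)
```

At small $x$ the PN corrections are tiny and the rate reduces to the leading $64\,\eta\,x^5/5$, while during the integration $x(t)$ grows monotonically, as expected for an inspiral.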
The TaylorEt approximant at 3.5PN order is defined by \begin{subequations} \label{EqP6} \begin{align} h(t) & \propto {\cal E}(t)\, \cos 2\,\phi(t) \label{EqP6a} \,,\\ \label{EqP6b} \frac{d \phi (t)}{dt} &\equiv \omega (t) = \frac{c^3}{G\,m}\, \xi^{3/2} \biggl \{ 1 + {\frac {1}{8}} \left( {9}+\eta \right) \xi + \biggl [ {\frac {891}{128}} \nonumber \\ & \quad -{\frac {201}{ 64}}\,\eta +{\frac {11}{128}}\,{\eta}^{2} \biggr ] {\xi}^{2} + \biggl [ {\frac {41445}{1024}} - \biggl ( {\frac {309715}{3072}} \nonumber \\ & \quad -{\frac {205} {64}}\,{\pi}^{2} \biggr) \eta +{\frac {1215}{1024}}\,{\eta}^{2} +{\frac {45}{1024}}\,{\eta}^{3} \biggr ] {\xi}^{3} \biggr \} \,,\\ \frac{d\,\xi (t)}{dt} &= {\frac {64}{5}}\,\eta\,{\xi}^{5} \biggl \{ 1 + \left( {\frac {13}{336}}- \frac{5}{2}\,\eta \right) \xi +4\,\pi\,{\xi}^{3/2} \nonumber \\ & \quad + \left( {\frac {117857}{18144}} -{\frac {12017}{2016}}\,\eta +\frac{5}{2} \,{\eta}^{2} \right) {\xi}^{2} + \biggl [ {\frac {4913}{672}} \nonumber \\ & \quad -{\frac {177}{8}}\,\eta \biggr ]\, \pi\, {\xi}^{5/2} + \biggl [ {\frac {37999588601}{279417600}} \nonumber \\ & \quad -{\frac {1712}{105}}\,\ln \left( 4\,\sqrt {\xi} \right) -{\frac {1712}{105}} \,\gamma +\frac{16\,{\pi}^{2}}{3} \nonumber \\ & \quad + \biggl ( {\frac {369}{32}}\,{\pi}^{2}-{\frac {24861497}{72576}} \biggr ) \eta +{\frac {488849}{16128}}\,{\eta}^{2} \nonumber \\ & \quad -{\frac {85}{64}}\,{\eta}^{3} \biggr ] {\xi}^{3} + \biggl [ {\frac {129817}{2304}}\, -{\frac {3207739}{48384}}\,\eta \nonumber \\ & \quad + {\frac {613373}{12096}}\,{\eta}^{2} \biggr ]\, \pi\, {\xi}^{7/2} \biggr \} \,, \label{EqP6c} \end{align} \end{subequations} where $\xi = -{2\, \cal E}/\mu\,c^2$. A close inspection of Eqs.~(\ref{EqP6}) reveals that the above inspiral $h(t)$ is obtained by perturbing a compact binary in a 3PN accurate circular orbit, defined by Eq.~(\ref{EqP6b}), by radiation reaction effects at 3.5PN order, given by Eq.~(\ref{EqP6c}). 
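To make the ``PN-accurate circular orbit'' statement concrete, the angular frequency of Eq.~(\ref{EqP6b}) can be evaluated as an explicit 3PN series in $\xi$. The helper below is a sketch in units $G=c=m=1$, with coefficients transcribed from the text; the function name is ours.

```python
import math

def omega_taylor_et(xi, eta):
    """d(phi)/dt of Eq. (EqP6b) in units G = c = m = 1: a 3PN series in
    xi = -2 E / (mu c^2), coefficients transcribed from the text."""
    pi2 = math.pi**2
    series = (1.0
              + (9.0 + eta)/8.0*xi
              + (891.0/128.0 - 201.0/64.0*eta + 11.0/128.0*eta**2)*xi**2
              + (41445.0/1024.0
                 - (309715.0/3072.0 - 205.0/64.0*pi2)*eta
                 + 1215.0/1024.0*eta**2 + 45.0/1024.0*eta**3)*xi**3)
    return xi**1.5 * series

# At leading order xi coincides with x, so omega -> xi^{3/2} as xi -> 0;
# at finite xi the PN corrections distinguish the orbit from an exact one.
print(abs(omega_taylor_et(1e-6, 0.25)/1e-9 - 1.0) < 1e-5)
print(omega_taylor_et(0.1, 0.25) > 0.1**1.5)
```

The key structural difference from Eq.~(\ref{EqP4b}) is visible here: $d\phi/dt$ is itself a PN series in the energy variable, not the bare $\xi^{3/2}$.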
The explicit use of a PN-accurate expression for $d \phi/dt$ allows us to state that the TaylorEt approximant models GWs from compact binaries inspiraling under PN-accurate reactive dynamics along PN-accurate circular orbits. Importantly, a recent study of the accumulated phase difference between NR waveforms on the one hand and TaylorEt, TaylorT1, and TaylorT4 waveforms on the other hand reveals the following characteristics \cite{GHHB}. In the interval $x \sim 0.127$ to $x \sim 0.215$ this difference for the TaylorEt approximant at 3.5PN order is $\delta \phi \sim -1.18$ radians, which is larger in magnitude than what is found for the TaylorT1 and TaylorT4 counterparts (with $\delta \phi \sim 0.6$ and $0.06$ radians, respectively). However, significantly, TaylorEt is the only approximant studied so far that exhibits monotonic phase convergence with the NR waveforms when its reactive PN order is increased \cite{GHHB}. Recall that sophisticated Pad\'{e} approximations are required to make $x$-based Taylor approximants converge monotonically to the $h(t)$ obtained from numerical relativity in the $\eta = 0$ case \cite{DIS}. In the context of the present paper, the analysis detailed in Ref.~\cite{GHHB} also suggests that the TaylorEt approximant at 3.5PN order remains fairly accurate in describing the inspiral $h(t)$ even near the last stable orbit. These properties make it worthwhile to study the data analysis implications of TaylorEt approximants. Another motivation for using the TaylorEt approximant to model the expected inspiral GW signal is as follows: with the help of Refs.~\cite{AG07,DGI}, it can be argued that TaylorEt is an appropriate approximant resulting from the zero-eccentricity limit of GW phasing of compact binaries inspiraling along PN-accurate eccentric orbits. 
By contrast, the construction of the usual {\em adiabatic} inspiral templates requires redefining the right-hand side of Eq.~(\ref{EqP6b}) to be $c^3 x^{3/2}/ Gm$, which cannot be extended to yield GWs from precessing and inspiraling eccentric binaries as obtained in Ref.~\cite{DGI}. Therefore, the TaylorEt approximant can be expected to closely model GW signals from inspiraling compact binaries, which realistically will not move along exactly circular orbits. The above statements are based on Ref.~\cite{TG07}, which, while restricting radiation reaction to the dominant quadrupole contributions, demonstrated the undesirable consequences of redefining the right-hand side of Eq.~(\ref{EqP6b}) at 2PN order to be $c^3 x^{3/2}/ Gm$. Currently, for low-mass binary signal searches the LSC usually employs templates based on the TaylorT$n$ (where $n$=1, 2, and 3) and TaylorF2 approximants \cite{LAL,Abbott:2007xi}. Therefore, it is important to probe whether some of these templates can capture inspiral signals modeled on the TaylorEt approximant. This is what we pursue in the next section. \section{Fitting Factors} \label{FF} Inspiraling compact binaries are the most promising sources of GWs for LIGO/Virgo. Detailed source population synthesis studies suggest that achieving an appreciable event rate, of at least a few compact binary coalescences per year, is possible if one could hear sources in the far reaches of our local supercluster and beyond \cite{Kalogera:2003tn,O'Shaughnessy:2005qs}. Such an endeavor necessitates the ability to detect signals with relatively low signal-to-noise ratios (SNRs), even with second-generation detectors, such as AdLIGO. Let the GW strain from a non-spinning compact binary be denoted by $h(t;\vek{\lambda})$, where $\vek{\lambda}$ represents the signal parameters, namely, $m$, $\eta$, $t_c$, and $\phi_c$, or an alternative set of transformed coordinates in that parameter space. 
If a detector's strain-data is denoted by $s(t)$ and its noise power-spectral-density (PSD) by $S_n(f)$, then the SNR when filtering the data with template $h(t;\vek{\lambda'})$ is \begin{equation}\label{SNRDef} {\rm SNR} = \frac{\langle s | h(\vek{\lambda'}) \rangle } {\sqrt{ \langle h(\vek{\lambda'}) |h(\vek{\lambda'}) \rangle}} \,, \end{equation} where $\vek{\lambda'}$ symbolizes the template parameters, which need not be the same as the parameters of a signal embedded in the data, and the inner product $\langle a|b \rangle$ is defined as, \begin{equation} \langle a|b \rangle \equiv 4\Re \int_0^{\infty} \frac{\tilde a^*(f) \tilde b(f)}{S_n(f)}\,df \,. \end{equation} Above, $\tilde{a}(f)$ is the Fourier transform of $a(t)$ and the asterisk denotes complex conjugation. Using Eq.~(\ref{SNRDef}), it can be shown that the quantity \begin{equation}\label{match} {\rm M}\left( \vek{\lambda}, \vek{\lambda'} \right) = \frac{\langle g(\vek{\lambda}) | h(\vek{\lambda'}) \rangle } {\sqrt{ \langle g(\vek{\lambda}) |g(\vek{\lambda}) \rangle \langle h(\vek{\lambda'}) |h(\vek{\lambda'}) \rangle}} \,, \end{equation} also known as the ``match'', is useful in describing how well two normalized waveforms, not necessarily from the same template family, overlap \cite{BO96}. For the problem of detecting a GW inspiral signal, the prevailing sentiment in the community is that it is not as essential to search with a template bank that is an exact representation of the signal, as it is to search with an approximate one that can filter the data in real-time, provided its expected maximal match with a signal from anywhere in the parameter space is above a desired threshold. In other words, it should be possible to obtain a sufficiently large `match' with a family of templates having $\vek{\lambda'} \neq \vek{\lambda}$. 
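A minimal discrete version of the inner product and match defined above is easy to write down. The sketch below is purely illustrative: it uses a flat toy PSD and windowed sinusoids in place of a detector noise curve and PN waveforms, and all names and numbers here are our own choices.

```python
import numpy as np

def inner(a, b, Sn, df):
    # Discrete one-sided inner product <a|b> = 4 Re \int a*(f) b(f) / Sn(f) df
    af, bf = np.fft.rfft(a), np.fft.rfft(b)
    return 4.0*df*np.real(np.sum(np.conj(af)*bf/Sn))

def match(g, h, Sn, df):
    # Normalized overlap of Eq. (match); no maximization over time or phase here
    return inner(g, h, Sn, df)/np.sqrt(inner(g, g, Sn, df)*inner(h, h, Sn, df))

N, dt = 4096, 1.0/1024          # 4 s of data sampled at 1024 Hz
t = np.arange(N)*dt
df = 1.0/(N*dt)
Sn = np.ones(N//2 + 1)          # flat toy PSD, not a detector curve
g = np.sin(2*np.pi*100.0*t)*np.hanning(N)   # toy 'signal'
h = np.sin(2*np.pi*100.5*t)*np.hanning(N)   # slightly mismatched 'template'

print(round(match(g, g, Sn, df), 6))  # identical waveforms: match = 1
print(match(g, h, Sn, df) < 1.0)
```

By the Cauchy--Schwarz inequality the match is bounded by unity, attained only when the two normalized waveforms are proportional; the mismatched template therefore returns a strictly smaller value.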
It is often stressed that this faithlessness of a template in estimating the signal parameters does not concern the detection problem {\it per se}, but that it affects the parameter-estimation problem, which can be tackled {\it a posteriori}, i.e., after the transient signal has been detected and localized in time. The effectiveness of a template family, say, $h(\vek{\lambda'})$, in detecting the target signal $g(\vek{\lambda})$ is quantified by the fitting factor (FF) \cite{A95} \begin{equation}\label{FFDef} {\rm FF}(\vek{\lambda}) = \max_{\vek{\lambda'}} \, {\rm M}\left( \vek{\lambda}, \vek{\lambda'} \right)\,. \end{equation} If a template bank provides near-unity FF values for a given signal, it is considered to be {\it effectual} in detecting it \cite{DIS}. Employing an approximate template bank results in a fractional drop in event rate of $(1 - {\rm FF}^3)$ for a homogeneous distribution of sources. This is easily seen when one realizes that the FF measures the fraction of the optimal SNR retained when using such a bank, and that the SNR scales inversely with source distance. So, e.g., a FF of 90\% results in a 27\% loss in event rate in any given detector. The expected rate in LIGO or Enhanced LIGO, which is a proposed upgrade in sensitivity of LIGO by roughly a factor of two while making minimal changes to the shape of the LIGO noise PSD \cite{ELIGOFFs}, is very low, i.e., realistically, less than one event in a few years. Therefore, a FF of 90\% can potentially subvert a detection in the era of first-generation detectors. This is why a FF $\geq$ 97\% is so desirable. The values of the fitting factors for a couple of template banks against the TaylorEt 3.5PN waveforms are given in Tables~\ref{tab:paramErrTaylorT1Q1}, \ref{tab:paramErrTaylorT1Q1_3}, and \ref{tab:paramErrTaylorT1Q1_4}. The FFs were computed using two separate codes. 
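The event-rate bookkeeping quoted above is quick to verify: the FF is the fraction of optimal SNR retained, the SNR falls off as the inverse of the source distance, and the surveyed volume grows as the distance cubed, so the fractional loss in event rate is $1-{\rm FF}^3$. A two-line check:

```python
# Fractional event-rate loss for a homogeneous source distribution:
# the retained horizon distance scales as FF, the surveyed volume as FF**3.
for ff in (0.97, 0.90):
    print(f"FF = {ff:.2f}: event-rate loss = {1.0 - ff**3:.1%}")
```

This reproduces the 27\% figure quoted for ${\rm FF}=0.90$ and shows that even the desired ${\rm FF}=0.97$ threshold concedes roughly a 9\% loss.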
One of these employs LAL \cite{LAL}, which is used by the LSC in its inspiral searches, and the other is a home-grown code that extensively uses routines from {\it Numerical Recipes} \cite{Num_Rec}. Both codes have the ability to compute the FF in Eq.~(\ref{FFDef}) as well as the more conservative (or lower) {\em minimax} match, detailed in Ref.~\cite{DIS}. The latter is obtained by minimizing the FF of Eq.~(\ref{FFDef}) with respect to the coalescence phase of the target waveform. The numbers presented in the tables are FFs (and not minimax matches) and, therefore, are larger than the values that are realistically achievable with the above-listed inspiral template banks, available in LAL, and the TaylorT4 template bank. Importantly, the first few detections will likely require validation from more than one detector, which implies that in addition to being effectual a template bank must also be {\it faithful} \cite{DIS}. The latter requirement means that the parameter values of the best-matched template are allowed to differ from (a subset of) those of the signal only by acceptably small biases. This is because, unless these systematics are accounted for, the same signal can be picked up by templates with parameter values different enough in two (or more) detectors so as to fail a parameter-value coincidence test \cite{ethinca}. We infer that the differences in the estimated masses, illustrated in Tables~\ref{tab:sh_f}--\ref{tab:paramErrTaylorT1Q1_4}, between different comparable-class detectors, such as AdLIGO and Virgo, are due to their different noise PSDs. Based on the tables and figures presented here, a few observations are in order. First, a good fraction of equal-mass compact binary templates, which are chosen to be from the TaylorT1, TaylorT4 (presented only in the figures), and TaylorF2 (presented only in the tables) 3.5PN families, have ${\rm FF} \lesssim 0.97$. 
They also show substantial biases for the estimated total mass against TaylorEt (3.5PN) signals as long as the symmetric mass ratio of the templates is limited to $\eta' \leq 0.25$, which is the upper limit for physical signals. Note how in the first row of plots in Fig.~\ref{fig:T1T4Etq1} the FF first decreases as $m$ is increased before eventually recovering to higher values. This behavior can be explained by the fact that in any given signal band the TaylorEt approximant has a greater number of GW cycles than the $x$-based templates of the same mass system. This means that $x$-based templates with $m'<m$ and $\eta'>\eta$ are more likely to provide a higher match than the one with $m'=m$ and $\eta'=\eta$. However, since for the equal-mass signals in Table~\ref{tab:paramErrTaylorT1Q1} and Fig.~\ref{fig:T1T4Etq1} we restrict the templates to have $\eta' \leq 0.25$, the templates yielding the highest match saturate this bound and attain $\eta'= \eta =0.25$. This wave-cycle argument alone also implies that the highest match will decrease with increasing $m$. This is because, while decreasing $m'$ increases the number of template cycles, which helps in improving the match, decreasing it too much adversely affects the match by lowering the template $f_{\rm lso}$ and, thereby, decreasing the integration band. The reason why the FF values eventually regain high values at large $m$ is that there the number of wave cycles is small and it is easier to obtain higher matches on signals with a smaller number of time-frequency bins. Compact binaries with mass ratios smaller than unity can yield high FFs, but at the expense of introducing high systematic errors in estimating the values of $m$ and $\eta$. The high FFs can be explained by the fact that, unlike in the case of equal-mass signals, here $\eta'$ can exceed $\eta$, simply because $\eta<0.25$. 
For illustration purposes, we consider signals with two different values of the mass ratio, namely, $q \equiv m_1/m_2 = 1/3\,\,\mbox{and} \,\, 1/4$. In general, our inspiral templates are found to be fairly {\it unfaithful} with respect to the fiducial signal for these cases. Moreover, Table~\ref{tab:paramErrTaylorT1Q1} and Fig.~\ref{fig:ambiguityT1Et10_10Msun} show that the TaylorT1 template with the maximum match almost always has $\eta'=0.25$. This arises from restricting the template banks to have physical values of the symmetric mass ratio, namely, $\eta' \leq 0.25$. As shown in Fig.~\ref{fig:ambiguityT1Et10_10Msun}, this can be seen from the fact that the match in Eq.~(\ref{match}), also known as the ambiguity function in this context, has a sharp wedge that rises with increasing $\eta'$ and attains its maximum beyond $\eta'=0.25$. The above observation prompted us to compute the FFs in Table~\ref{tab:paramErrTaylorT1Q1WideEta} with $\eta' > 0.25$. We find that $\eta'$ needs to be as high as 0.35 for the FF to attain values of at least 97\% for the total-mass range considered here (i.e., $m \leq 40 M_\odot$), albeit at the cost of large biases in the estimated values of both $m'$ (up to almost 20\%) and $\eta'$ (up to about 40\%). In our opinion, while this allowance increases the match, it does not necessarily translate into an increase in the detection confidence. This is because an expanded range in $\eta'$ has the potential to increase the false-alarm rate. To test what the effective gain is, it is imperative to include templates with $\eta'>0.25$ in signal simulation studies involving real interferometric data. As presented in Tables~\ref{tab:paramErrTaylorT1Q1}-\ref{tab:paramErrTaylorT1Q1_3}, the systematic errors also vary with the shape of the detector noise power spectral density. 
This implies, e.g., that the estimated value of the total mass of a signal in LIGO and Virgo can disagree and, consequently, fail a sufficiently stringent mass-consistency check in a multi-detector search \cite{Abbott:2007xi,Pai:2000zt}. To wit, in the search for inspiral signals in LIGO data from its third and fourth science runs by the LSC, the estimated chirp mass, ${\cal M}_c$, in the three LIGO detectors was allowed to differ by 0.020$M_\odot$. A comparison of the estimated total-mass values in Table~\ref{tab:paramErrTaylorT1Q1} shows that this window would need to be relaxed if the search involved both the LIGO and Virgo detectors and if TaylorEt were indeed a more appropriate representation of a GW signal. For instance, Table~\ref{tab:paramErrTaylorT1Q1} shows that for $m=5.0~M_\odot$ the measured $m'$ in LIGO and Virgo are expected to differ by as much as 0.062$M_\odot$, which amounts to $\Delta{\cal M}_c = 0.026M_\odot$ (where we assumed, conservatively, that all the error in ${\cal M}_c$ arose from the error in $m'$). This is larger than the allowed window and can, therefore, fail a multi-detector mass-consistency test. The biases shown in Table~\ref{tab:paramErrTaylorT1Q1} only get worse as the total mass of the signal is increased. Note that this exercise is meant to serve as a guide for sources of systematic effects and how to deal with them; it is not clear if a concurrent search with the LIGO and Virgo detectors at the LIGO and Virgo design sensitivities, respectively (as used for Table~\ref{tab:paramErrTaylorT1Q1}), is likely. (Nevertheless, it is more likely that the shapes of the respective curves are maintained in the next set of science runs, such as with the planned Enhanced-LIGO design. This is because the FFs depend on the shape of these curves and not on their overall scale.) 
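The chirp-mass window arithmetic above is easy to reproduce. Since ${\cal M}_c = \eta^{3/5}\, m$, a total-mass disagreement $\Delta m$ at fixed $\eta$ maps to $\Delta {\cal M}_c = \eta^{3/5}\, \Delta m$. The check below assumes $\eta = 0.25$ and, as in the text, attributes all of the ${\cal M}_c$ error to the error in $m'$; it recovers the quoted $\simeq 0.026\,M_\odot$ up to rounding.

```python
eta = 0.25            # symmetric mass ratio of an equal-mass binary
dm = 0.062            # quoted LIGO-vs-Virgo difference in m' (M_sun) at m = 5 M_sun
d_mc = eta**0.6 * dm  # chirp mass Mc = eta**(3/5) * m, so dMc = eta**(3/5) * dm
print(round(d_mc, 3))  # ~0.027 M_sun, exceeding the 0.020 M_sun coincidence window
```

The point survives the small rounding difference: either way the bias exceeds the 0.020$M_\odot$ window used in the S3/S4 searches.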
It is possible, however, to use our studies to model the variation of the estimated parameter bias in real detector data so that the windows can be scaled and shifted appropriately to mitigate the effect on detection efficiency. \section{Conclusions} \label{conclusions} In this paper, we investigated the GW data analysis implications of the TaylorEt approximant at 3.5PN order. We limited our attention to the case of GW signals from non-spinning, comparable-mass, compact binaries in the LIGO, AdLIGO, and Virgo interferometers. With the help of detailed fitting factor computations, we compared the performance of three $x$-based inspiral templates, namely, TaylorT1, TaylorT4, and TaylorF2 at 3.5PN order, in detecting a fiducial TaylorEt signal of the same PN order. For the equal-mass binaries, we generally obtain ${\rm FF} \lesssim 0.97$ when restricting the above templates to physically allowed mass ratios. In the case of unequal-mass binaries, it is possible to obtain high FFs with the LAL inspiral templates. However, the templates that provide those high FFs have substantially different values of $m$ and $\eta$ compared to those of the fiducial TaylorEt signals. In all cases, templates giving high FFs have lower values of the total-mass parameter compared to their associated TaylorEt signals. This is due to the fact that in a given GW frequency band the TaylorEt approximant always provides more accumulated GW cycles than the $x$-based templates. Further, the systematic errors in the $m$ and $\eta$ parameters of TaylorF2 templates are substantially higher than the statistical errors in those parameters reported in Ref.~\cite{AISS}. These observations lead us to believe that the unfaithful nature of the $x$-based inspiral templates vis-\`{a}-vis the TaylorEt approximant may adversely affect the chances of detecting GW inspiral signals, if TaylorEt waveforms indeed represent such signals more accurately. 
To summarize, the present study shows that it should be worthwhile to include the theoretically motivated TaylorEt templates, which have a number of attractive features as detailed in Ref.~\cite{AG07}, in the search for inspiral GW signals from non-spinning compact binaries in the data of ground-based broadband detectors. Further, this work should also be useful in assessing the effects of the systematic errors that arise from employing inspiral templates from different representations on a GW signal's detectability and parameter estimation with earth-based detectors. In the literature, there exist gauge-dependent prescriptions for constructing inspiral $h(t)$ that give equal emphasis to both the conservative and the reactive orbital phase evolution, such as the Effective One-Body (EOB) approach \cite{DamourEOB} and the Semi-Analytic Puncture Evolution (SAPE) \cite{SAPE}. Recall that the conservative Hamiltonian relevant for the EOB scheme is in Schwarzschild-type coordinates, while for SAPE it is in the Arnowitt-Deser-Misner gauge. By contrast, the PN-accurate TaylorEt-based $h(t)$ is fully gauge-invariant. Therefore, we are pursuing a study, similar to the one presented here, comparing the effectualness and faithfulness of EOB- and SAPE-based inspiral waveforms vis-\`{a}-vis TaylorEt waveforms. We are also extending the present analysis by including spin effects with the help of a generalized version of the fiducial signals and templates detailed in Ref.~\cite{HHBG}. \acknowledgments It is our pleasure to thank Bruce Allen and Gerhard Sch\"afer for helpful discussions and persistent encouragement. We are grateful for their warm hospitality at Hannover and Jena during various stages of the present study. We would also like to thank B. S. Sathyaprakash for useful observations on the comparison of the different template banks studied here, and G.~Esposito-Far\`ese and C.~R\"{o}ver for their careful reading of the manuscript and helpful suggestions. 
SB thanks P.~Ajith for his explanation of a fast algorithm for computing the fitting factor of a generic template bank and a target signal, and for providing useful comments on this work. This work is supported in part by the NSF grant PHY-0239735 to SB and by grants from DLR (Deutsches Zentrum f\"ur Luft- und Raumfahrt) and DFG's SFB/TR 7 ``Gravitational Wave Astronomy'' to AG and MT, respectively. \begin{widetext}
\section{INTRODUCTION} The classical T Tauri stars (CTTS) are optically revealed, low-mass, pre--main-sequence stars that accrete material from a circumstellar disk and have a well-defined connection between accretion and outflow \citep[HEG hereafter]{har95}. Accretion from the disk to the star is thought to be guided by the stellar magnetosphere, where a sufficiently strong magnetic field truncates the disk at several stellar radii and material follows field lines that direct it to the stellar surface at high latitudes \citep{gho78,kon91,col93,shu94}. Magnetospheric accretion controls the star-disk interaction, and an improved understanding of this process will shed light on outstanding issues in the innermost 10~$R_*$ of CTTS systems, such as the regulation of stellar angular momentum and the launching of the inner wind. Leading diagnostics of CTTS accretion include the optical/UV continuum excess and the profiles of permitted emission lines \citep{bou07a}. Kinematic evidence for infalling gas in CTTS began with the discovery that some CTTS show inverse P Cygni structure in upper Balmer lines extending to velocities of several hundred km~s$^{-1}$\ \citep{wal72}. Later, more sensitive surveys found that redshifted absorption components are relatively common in some lines, especially in the upper Balmer and Paschen series \citep{edw94,ale00,fol01}. Although redshifted absorption extending to several hundred km~s$^{-1}$\ clarifies that material accretes in free-fall from at least several stellar radii, it has been the success of radiative transfer modeling of line formation in magnetospheric accretion flows in a key series of papers culminating with \citet{muz01} that has provided the strongest underpinning for this phenomenon. Under the assumption of an aligned, axisymmetric dipole, the models have had reasonably good success in reproducing the general morphology of hydrogen profiles and emission fluxes in some stars. 
The complementary assessment of accretion rates follows from interpreting the SED of the optical/UV excess, which has been successfully modeled for wavelengths shortward of 0.5~$\micron$ as arising in a hot accretion shock, where accreting material impacts the stellar surface after free-fall along funnel flows coupled to the disk \citep[CG hereafter]{joh00,cal98}. To match the observations, the accretion shock filling factor is less than 1\% in most cases but can climb to 10\% in a few of the most active accretors. The derived accretion rates range from $10^{-10}$ to $10^{-6}$ $M_\sun$~yr$^{-1}$\ with a median of $10^{-8}$~$M_\sun$~yr$^{-1}$\ \citep{cal00}. Additionally, Zeeman broadening of unpolarized CTTS photospheric lines indicates mean surface field strengths in the range 1-3~kG \citep{joh07}, sufficiently strong to induce disk truncation and drive funnel flows. However, these strong surface fields are not predominantly dipolar, as photospheric lines show only weak net circular polarization implying dipole components an order of magnitude smaller \citep{val04,yan07}. Nevertheless, an extended dipole component is inferred for the accretion flow, since the same authors find significant circular polarization in the narrow component of the \ion{He}{1}~$\lambda$5876\ emission line, thought to be formed in the accretion shock at the base of the funnel flow \citep{ber01}. The evidence for magnetospheric accretion is thus compelling; however, the topology of the magnetosphere, the geometry of the accretion flow, and the disk truncation radius remain topics of considerable investigation, since their configuration impacts processes for angular momentum regulation and wind launching. One form of angular momentum regulation, known as disk locking, invokes a spin-up torque from accreting material just inside the corotation radius balanced by a spin-down torque at larger radii \citep{col93}. 
This approach has been questioned by \citet{mat05,mat08a,mat08b}, who instead suggest accretion-powered stellar winds as a more likely means for stellar spindown, which must occur simultaneously with magnetospheric accretion from the disk (see also \citealt{sau94}). Alternatively the X-wind model \citep{shu94} originally featured a narrow annulus of star-disk coupling close to the corotation radius, where closed field lines develop funnel flows and open field lines drive a centrifugal outflow that carries away angular momentum from accreting material, thus inhibiting stellar spin-up. The flexibility of this model to maintain its basic properties in the face of complex magnetospheric accretion geometries has recently been demonstrated by \citet{moh08}. If the stellar and disk fields are parallel, then an intermittent outflow can develop via a Reconnection X-wind, which removes angular momentum from the star as well as the inner disk \citep{fer06}. Evidence for non-aligned fields coupled with complex accretion geometries is mounting, coming from a variety of recent studies. Using time-resolved spectropolarimetry of the mildly accreting CTTS V2129 Oph, \citet{don07} used Zeeman detections from both photospheric features and emission lines from the accretion shock to construct a Doppler tomographic map of the magnetic topology on the stellar surface. The dominant field on the star is a misaligned octupole, and accretion is confined largely to a high-latitude spot, covering $\le5$\% of the stellar surface. These authors also attempt to reconstruct the 3D field geometry out to the disk interaction region and suggest that the large-scale field funneling accreting material from the disk is more complex than a simple dipole. However, such extrapolation techniques, while tantalizing, must be applied with caution at present, since there are numerous uncertainties in the reconstruction process \citep{moh08}. 
Numerical magnetohydrodynamic (MHD) simulations of star-disk interactions demonstrate that non-axisymmetric funnel flows arise if the stellar field is tipped by only a few degrees relative to the rotation axis, breaking into two discrete streams under stable accretion conditions \citep{rom03}. If the accretion rate is sufficiently high, accretion is predicted to proceed through equatorial ``tongues'' that can push apart field lines \citep{rom08}. Similarly, accretion spots can appear at a wide range of latitudes, including equatorial belts. Such behavior has been observed in models featuring accretion along quadrupolar as well as dipolar field lines \citep{lon07,lon08}, accretion along field lines extrapolated from surface magnetograms \citep{gre06}, and accretion mediated by a dynamo-generated disk magnetic field \citep{von06}. Observational signatures for misaligned dipoles are being explored in radiative transfer models for hydrogen line formation in funnel flows, with subsequent predictions for rotationally modulated profile variations. Initially \citet{sym05} presented radiative transfer models of hydrogen lines featuring curtains of accretion covering a limited extent in azimuth in geometries consistent with aligned dipoles. Their model profiles exhibit certain characteristics of the observed line profile variability, such as rotationally modulated line strengths and the appearance of red absorption components at certain phases and inclinations, but the predicted level of variability is higher than observed. More recently, \citet{kur08} applied a radiative transfer code for H line formation to the 3-D output from the MHD simulations of \citet{rom03,rom04}, which prescribe the geometry, density, and velocity of two-armed accretion streams that result from dipoles misaligned with the rotation axis by angles ranging from 10$^\circ$ to 90$^\circ$. 
Applying temperatures similar to those from the \citet{muz01} axisymmetric models, they were able to reproduce some of the trends in continuum and profile variability from models. One of the larger discrepancies in comparing the model profiles to observed ones is that the model profiles for Paschen and Brackett lines are a factor of two narrower than the mean value observed by \citet{fol01}. The problem is likely more complex than simply finding another line broadening mechanism, since Paschen~$\gamma$ has recently been shown to have line widths that are correlated with the 1-$\micron$ continuum excess, in the sense that the narrowest lines are found among objects with the lowest disk accretion rates \citep[EFHK hereafter]{EFHK}. Evidence that some of the hydrogen emission is formed in the accretion shock rather than the funnel flow is now clearly demonstrated from the discovery of circular polarization in the core of Balmer lines \citep{don08}. The implication is that hydrogen lines are not necessarily a definitive means for probing the properties of funnel flows, so additional probes are desirable. In this paper, we explore a different means of diagnosing the geometry of the accretion flow, making use of the redshifted subcontinuum absorption in the $2p~{}^3P^o$\ $\rightarrow$ $2s~{}^3S$\ transition of neutral helium ($\lambda10830$), recently demonstrated to be a very sensitive probe of both outflowing gas in the inner wind and infalling gas in the funnel flow due to its frequent display of blue and red absorptions (EFHK). A subcontinuum absorption feature is a more tell-tale diagnostic of a kinematic flow than an emission profile, since its position (blue or red) indicates the direction of the flow, its width indicates the range of line-of-sight velocities in the flow, and its depth at a particular velocity indicates the fraction of the continuum (stellar plus veiling) occulted by material moving at that velocity. 
In the case of a red absorption, the absorption depth signals the fraction of the stellar surface covered by the funnel flow at each velocity, making it an effective probe of the CTTS accretion geometry. This is the third in a series of papers about 1-$\micron$ diagnostics of accretion and outflow in CTTS. The first, EFHK, presented 1-$\micron$ spectra from 38 CTTS including \ion{He}{1}~$\lambda$10830\ profiles, P$\gamma$\ profiles, and measurements of the continuum excess ``veiling'' in the 1-$\micron$ region. The second paper, \citet[KEF hereafter]{kwa07}, modeled blueshifted absorption components at \ion{He}{1}~$\lambda$10830, which appear in about three quarters of the sample, and found that while some stars have winds best explained as arising from the inner disk, others require an outflow moving radially away from the star in an accretion-powered ``stellar'' wind. In this paper, we analyze redshifted subcontinuum absorption at \ion{He}{1}~$\lambda$10830\ in 21 CTTS that present red absorption in at least one observation. The following section describes the sample selection, data acquisition, and data reduction. Section 3 presents the data and discusses variability. In Section 4, we present model scattering profiles that arise in a dipolar flow geometry, show that they explain only a fraction of the red absorptions, and explore modifications to a dipolar flow that better explain the remaining observations. Discussion and conclusions follow in Sections 5 and 6. \section{SAMPLE AND DATA REDUCTION} In this paper we focus on the 21 of 38 CTTS included in EFHK that display redshifted subcontinuum absorption at \ion{He}{1}~$\lambda$10830\ at least once in a multi-epoch observing program with Keck NIRSPEC. It includes spectra presented in EFHK acquired in November 2001 and November 2002, when 8 of the 38 CTTS were observed twice and 1 on three occasions. It also includes 33 additional spectra of 24 objects from that study, taken in 2005, 2006, and 2007. 
In EFHK, 19 out of 38 CTTS showed red absorption at \ion{He}{1}~$\lambda$10830. In the subsequent observing runs, \ion{He}{1}~$\lambda$10830\ red absorption was seen in 2 additional stars. Thus, the 21 of 38 CTTS (55\%) that have shown subcontinuum red helium absorption in at least one spectrum of 81 acquired between 2001 and 2007 form the sample for this paper. Among the 21 stars with helium red absorption, 12 were observed more than once, with 6 observed twice, 1 observed three times, 4 observed four times, and 1 observed six times. The EFHK sample was assembled to span the full range of mass accretion rates observed for CTTS, from less than 10$^{-9}$ to $\sim 10^{-6}$~$M_\sun$~yr$^{-1}$, with a median rate of 10$^{-8}$~$M_\sun$~yr$^{-1}$. Most of them are from the Taurus-Auriga star-forming region and have spectral types of K7 to M0. The subset of 21 stars that are the focus of this paper is identified in Table~\ref{t.sample}, along with their spectral types, masses, radii, rotation periods, median veilings $r_V$ at 0.57 $\micron$, and mass accretion rates from the literature. We have included $r_V$ only for the 29 sources in common with HEG, obtained more than a decade earlier. We also list the number of observations for each star. 
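The escape velocities and corotation radii tabulated in Table~\ref{t.sample} follow directly from the listed masses, radii, and rotation periods (the calculation is described in the text below). A minimal sketch of the arithmetic, with our own function names and standard solar constants (not code from this work):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.957e8    # solar radius, m

def escape_velocity(m_star, r_star):
    """Surface escape velocity in km/s; mass and radius in solar units."""
    return math.sqrt(2.0 * G * m_star * M_SUN / (r_star * R_SUN)) / 1e3

def corotation_radius(m_star, r_star, p_rot_days):
    """Keplerian corotation radius in units of R*, from the rotation period."""
    p = p_rot_days * 86400.0
    r_co = (G * m_star * M_SUN * p ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
    return r_co / (r_star * R_SUN)

# AA Tau: M = 0.70 Msun, R = 1.75 Rsun, P = 8.22 d
print(round(escape_velocity(0.70, 1.75)))             # ~390 km/s
print(round(corotation_radius(0.70, 1.75, 8.22), 1))  # ~8.7 R*
```

For AA Tau these reproduce columns 5 and 7 of Table~\ref{t.sample} to the tabulated precision.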
\begin{deluxetable*}{lcccccccccc} \tablecaption{21 of 38 CTTS with Subcontinuum Red Absorption at \ion{He}{1}~$\lambda$10830\label{t.sample}} \tablewidth{6in} \tablehead{\colhead{Object} & \colhead{Sp Type} & \colhead{$M_*$} & \colhead{$R_*$} & \colhead{$V_{\rm esc}$} & \colhead{$P_{\rm rot}$} & \colhead{$R_{\rm co}$} & \colhead{$\left<r_V\right>$} & \colhead{$\log \dot{M}_{\rm{acc}}$} & \colhead{Ref} & \colhead{$N_{\rm obs}$} \\ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)} & \colhead{(8)} & \colhead{(9)} & \colhead{(10)} & \colhead{(11)}} \startdata AA Tau\dotfill & K7 & 0.70 & 1.75 & 390 & 8.22 & 8.7 & 0.32 & -8.5 & 12,9,5,9 & 4 \\ BM And\dotfill & G8 & 2.03 & 3.02 & 510 & \nodata & \nodata & \nodata & $>$-9 & 15,15,7 & 2 \\ CI Tau\dotfill & K7 & 0.70 & 1.94 & 370 & \nodata & \nodata & 0.47 & -6.8 & 12,12,10 & 1 \\ CY Tau\dotfill & M1 & 0.43 & 1.70 & 310 & 7.5 & 7.2 & 1.20 & -8.1 & 12,9,4,9 & 3 \\ DK Tau\dotfill & K7 & 0.69 & 2.51 & 320 & 8.4 & 6.1 & 0.49 & -7.4 & 12,9,3,9 & 4 \\ DN Tau\dotfill & M0 & 0.52 & 2.15 & 300 & 6.0 & 5.2 & 0.08 & -8.5 & 12,9,2,9 & 2 \\ DR Tau\dotfill & K7 & 0.69 & 2.75 & 310 & 9.0 & 5.9 & 9.60 & -5.1 & 12,10,4,10 & 4 \\ DS Tau\dotfill & K5 & 1.09 & 1.30 & 570 & \nodata & \nodata & 0.96 & -7.9 & 12,9,9 & 1 \\ FP Tau\dotfill & M4 & 0.21 & 2.00 & 200 & \nodata & \nodata & 0.15 & -7.7 & 12,12,10 & 1 \\ GI Tau\dotfill & K6 & 0.93 & 1.74 & 450 & 7.2 & 8.8 & 0.24 & -8.0 & 12,9,17,9 & 2 \\ GK Tau\dotfill & K7 & 0.69 & 2.16 & 350 & 4.65 & 4.8 & 0.23 & -8.2 & 12,9,3,9 & 2 \\ HK Tau\dotfill & M0.5 & 0.45 & 1.65 & 320 & \nodata & \nodata & 1.10 & -6.5 & 12,12,10 & 1 \\ LkCa 8\dotfill & M0 & 0.53 & 1.48 & 370 & 3.25 & 5.1 & 0.15 & -9.1 & 12,9,3,9 & 2 \\ RW Aur B\dotfill & K5 & 0.96 & 1.09 & 580 & \nodata & \nodata & \nodata & -8.8 & 19,19,19 & 1 \\ SU Aur\dotfill & G2 & 2.02 & 3.27 & 490 & 1.7 & 2.3 & \nodata & -8.0 & 12,12,6,8 & 1 \\ TW Hya\dotfill & K7 & 0.75 & 1.04 & 520 & 2.80 & 
7.3 & \nodata & -9.3 & 18,18,13,14 & 6 \\ UY Aur\dotfill & M0 & 0.54 & 1.30 & 400 & \nodata & \nodata & 0.40 & -7.6 & 11,11,11 & 4 \\ UZ Tau E\dotfill & M1 & 0.43 & 1.39 & 340 & \nodata & \nodata & 0.73 & -8.7 & 19,19,19 & 1 \\ UZ Tau W\dotfill & M2 & 0.33 & 1.88 & 260 & \nodata & \nodata & \nodata & -8.0 & 11,11,11 & 1 \\ V836 Tau\dotfill & K7 & 0.71 & 1.43 & 440 & 7.0 & 9.6 & 0.05 & -8.2 & 12,12,16,10 & 2 \\ YY Ori\dotfill & K7 & 0.68 & 3.00 & 290 & 7.58 & 4.8 & 1.80 & -5.5 & 10,10,1,10 & 1 \\ \enddata \tablecomments{Col.~2: Spectral type; Col.~3: Stellar mass in $M_\odot$; Col.~4: Stellar radius in $R_\odot$; Col.~5: Stellar escape velocity in km~s$^{-1}$, calculated from columns 3, 4; Col.~6: Rotation period in days; Col.~7: Corotation radius in $R_*$, calculated from columns 3, 4, 6; Col.~8: Median veiling at 5700~\AA\ from HEG; Col.~9: Logarithm of the mass accretion rate in $M_\sun$~yr$^{-1}$; Col.~10: References for the spectral type, stellar luminosity (to determine $M_*$ and $R_*$), rotation rate (where available), and mass accretion rate; Col.~11: Number of spectra acquired with NIRSPEC.}\tablerefs{(1) \citealt{ber96}; (2) \citealt{bou86}; (3) \citealt{bou93}; (4) \citealt{bou95}; (5) \citealt{bou07b}; (6) \citealt{dew03}; (7) \citealt{gue93}; (8) \citealt{gul00}; (9) \citealt{gul98}; (10) HEG; (11) \citealt{har03}; (12) \citealt{ken95}; (13) \citealt{law05}; (14) \citealt{muz00}; (15) \citealt{ros99}; (16) \citealt{ryd84}; (17) \citealt{vrb86}; (18) \citealt{web99}; (19) \citealt{whi01}.} \end{deluxetable*} As in the 2006 paper, the additional spectra were acquired with NIRSPEC on Keck II \citep{mcl98} using the N1 filter ($Y$ band), which covers the range 0.95 to 1.12~\micron\ at a resolution $R=25,000$ ($\Delta V=12$~km~s$^{-1}$). The echelle order of primary interest extends from 1.081 to 1.096~\micron\ and contains both \ion{He}{1}~$\lambda$10830\ and P$\gamma$. Spectra from the 2005-06 season were acquired by G. Blake (13 December 2005) and D. 
Stark (13 January 2006). Those from November and December 2006 were obtained by L. Hillenbrand, W. Fischer, S. Edwards, and C. Sharon. Finally, Hillenbrand obtained two additional spectra of TW Hya in December 2007. Data reduction, including wavelength calibration and spatial rectification, extraction of one-dimensional spectra from the images, and removal of telluric emission and absorption features, is discussed in EFHK. While we used an IRAF script to reduce the EFHK data, we used the IDL package REDSPEC by S. S. Kim, L. Prato, and I. McLean to reduce data acquired in Fall 2006 and later. EFHK also describes the procedure for measuring photospheric lines to determine the 1-$\micron$ veiling $r_Y$, defined as the ratio of excess flux to photospheric flux near the \ion{He}{1}~$\lambda$10830\ line \citep[see also][]{har89}. After the veilings are determined, a non-accreting template that has been artificially veiled to match the observed CTTS is subtracted from each target spectrum. This removes photospheric absorption lines from the \ion{He}{1}~$\lambda$10830\ and P$\gamma$\ regions, which allows for a more accurate definition of the remaining structure in each of these two lines. We augmented the spectral templates from those of EFHK, resulting in a reassessment of the 1-$\micron$ veiling for one object. Recent determinations of the spectral type of BM And in the V band range from G8 \citep{gue93} to K5 \citep{mor01}. In EFHK we used an early K star to deveil the 2002 spectrum of BM And. However, using our new grid of templates acquired in Fall 2006, we found that the G8 dwarf HD 75935 provides a better match to the photosphere of BM And. Deveiling the 2002 spectrum of BM And with this template yields a veiling of 0.4, in contrast to the value of 0.1 reported in EFHK. We adopt the more appropriate veilings for BM And in this work, $r_Y=0.4$ in 2002 and $r_Y=0.5$ in 2006. 
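The deveiling step just described can be made concrete. The sketch below is our own schematic, not code from this work; it assumes continuum-normalized spectra and a wavelength-independent excess, consistent with the definition of $r_Y$ as the ratio of excess to photospheric flux:

```python
def deveil_residual(target, template, r_y):
    """Subtract an artificially veiled photospheric template from a target
    spectrum (both continuum-normalized to 1). A flat excess r_y raises the
    continuum; renormalizing keeps the continuum at 1 while diluting the
    template's photospheric line depths by a factor 1/(1 + r_y)."""
    veiled = [(t + r_y) / (1.0 + r_y) for t in template]
    return [f - v for f, v in zip(target, veiled)]

# A template line of depth 0.5, veiled with r_y = 1, has its depth halved;
# subtracting it from a matching target leaves a flat (zero) residual.
print(deveil_residual([1.0, 0.75, 1.0], [1.0, 0.5, 1.0], 1.0))
```

Any nonzero structure left in the residual near \ion{He}{1}~$\lambda$10830\ or P$\gamma$\ is then attributable to emission or absorption rather than the photosphere.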
The veiling determinations for all the other objects from EFHK are unaffected by our extended grid of templates. We use the stellar mass, radius, and rotation period to calculate the escape velocity and the star-disk corotation radius for each star, which are included in Table~\ref{t.sample} and will be used in later analysis of the accretion geometry. We carefully surveyed the literature to acquire the most up-to-date estimates of spectral types and stellar luminosities, using \citet{ken95} and \citet{gul98} in most cases. The luminosity of YY Ori (HEG) was updated to reflect the latest estimate of the distance to Orion \citep{men07}, which is 10\% less than the earlier value. Spectral types were converted into effective temperatures using the scale from \citet{hil04}, and stellar radii follow directly from application of the Stefan-Boltzmann law to the effective temperatures and luminosities. Stellar masses are then derived from the \citet{sie00} pre--main-sequence tracks, available online. Since the escape velocity will be an important parameter in comparing observed to model profiles, we have given some thought to its accuracy. Because $V_{\rm esc}$ scales as $(M_*/R_*)^{1/2}$, the largest source of uncertainty in calculating the escape velocity is the uncertainty in $T_{\rm eff}$, since temperature strongly affects both the mass and radius determination, while luminosity only weakly influences the radius estimate. We estimate that the typical error in the escape velocity is $\sim20\%$. For the 12 stars with rotation periods in the literature, we calculate corotation radii with a typical error of 20\%, provided the photometric period is equivalent to the rotation period. Three of the 21 objects are known members of binary pairs resolved in our spectra where we have observed only the primary: DK Tau A, HK Tau A, and UY Aur A. 
For another system, RW Aur, we have resolved spectra of RW Aur A and RW Aur B, but only the latter shows red absorption at \ion{He}{1}~$\lambda$10830\ and thus qualifies as part of the sample for this study. There is conflicting evidence in the literature on whether RW Aur B has a close companion at an angular separation of 0.12\arcsec\ with a K-band flux ratio of 0.024 \citep{ghe93,cor06}. In our spectra we see the lines of only one object consistent with a K5 spectral type, and we call this RW Aur B. An additional two objects are unresolved binaries: UZ Tau E and UZ Tau W. For these, we also see lines from only one star and attribute the 1-$\micron$ continuum and line profiles to the primary. \section{EMPIRICAL RESULTS\label{s.obs}} In this study we concentrate on the red absorption seen at \ion{He}{1}~$\lambda$10830\ as a probe of the accreting gas, ignoring the blue absorptions that arise from disk and stellar winds (KEF). For each spectrum of the 21 stars that show subcontinuum redshifted \ion{He}{1}~$\lambda$10830\ absorption at least once, Table~\ref{t.redabs} lists the HJD of observation, the 1-$\micron$ veiling, and measurements of the red absorption. In this section we first compare the veilings of the stars in this study to those from the ensemble of 38 CTTS in EFHK, and then we present the profiles and kinematic data for the {\it reference sample}, consisting of the single observation of each star with the deepest red absorption (identified with an asterisk in Table~\ref{t.redabs}). Next we demonstrate that the propensity for \ion{He}{1}~$\lambda$10830\ to absorb all impinging 1-$\micron$ photons provides a means of estimating the origination radius in the disk for infalling gas and the filling factor of accreting material immediately before the accretion shock. We conclude the section with a discussion of profile and veiling variability. 
\begin{deluxetable*}{lcccccccc} \tablecaption{Veilings and Measurements of \ion{He}{1}~$\lambda$10830\ Subcontinuum Red Absorption\label{t.redabs}} \tablewidth{6in} \tablehead{& & & \colhead{$W_\lambda$} & \colhead{$D_{\rm max}$} & \colhead{$V_C$} & \colhead{FWQM} & \colhead{$V_{\rm{blue}}$} & \colhead{$V_{\rm red}$}\\ \colhead{Object} & \colhead{HJD} & \colhead{$r_Y$} & \colhead{(\AA)} & \colhead{(\%)} & \colhead{(km~s$^{-1}$)} & \colhead{(km~s$^{-1}$)} & \colhead{(km~s$^{-1}$)} & \colhead{(km~s$^{-1}$)}\\ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)} & \colhead{(8)} & \colhead{(9)}} \startdata AA Tau\dotfill & 605.0 & 0.2 & 0.5 & 14 & 185 & 120 & 110 & 250 \\ & 606.9 & 0.1 & 1.9 & 42 & 110 & 170 & 40 & 250 \\ & 1718.0* & 0.0 & 4.5 & 61 & 90 & 310 & -40 & 310 \\ & 2069.0 & 0.1 & 0.9 & 24 & 50 & 70 & 0\tablenotemark{a} & 80 \\ BM And\dotfill & 604.8 & 0.4 & 2.3 & 28 & 115 & 260 & -20 & 290 \\ & 2068.7* & 0.5 & 2.9 & 40 & 90 & 270 & -40 & 280 \\ CI Tau\dotfill & 605.9 & 0.2 & 1.3 & 17 & 140 & 290 & 10 & 310 \\ CY Tau\dotfill & 606.8 & 0.1 & 1.0 & 27 & 140 & 120 & 80 & 230 \\ & 1718.0* & 0.0 & 1.1 & 37 & 140 & 100 & 80 & 240 \\ & 2068.8 & 0.2 & 0.0 & \nodata & \nodata & \nodata & \nodata & \nodata \\ DK Tau\dotfill & 604.9 & 0.5 & 2.1 & 28 & 145 & 290 & 20 & 330 \\ & 606.9 & 0.5 & 1.9 & 37 & 80 & 240 & 0\tablenotemark{a} & 280 \\ & 1748.9* & 0.0 & 3.1 & 40 & 150 & 320 & -20 & 340 \\ & 2068.9 & 0.4 & 1.9 & 34 & 95 & 290 & 0\tablenotemark{a} & 310 \\ DN Tau\dotfill & 606.0 & 0.0 & 1.3 & 30 & 145 & 170 & 60 & 250 \\ & 1718.0* & 0.0 & 1.2 & 33 & 150 & 140 & 70 & 260 \\ DR Tau\dotfill & 605.0 & 2.0 & 0.0 & \nodata & \nodata & \nodata & \nodata & \nodata \\ & 606.0 & 2.0 & 0.0 & \nodata & \nodata & \nodata & \nodata & \nodata \\ & 606.9* & 2.0 & 0.7 & 14 & 235 & 160 & 150 & 320 \\ & 2069.1 & 3.5 & 0.0 & \nodata & \nodata & \nodata & \nodata & \nodata \\ DS Tau\dotfill & 605.9 & 0.4 & 1.1 & 18 & 205 & 240 & 90 
& 340 \\ FP Tau\dotfill & 605.0 & 0.1 & 0.5 & 17 & 50 & 120 & 0\tablenotemark{a} & 120 \\ GI Tau\dotfill & 606.0 & 0.1 & 3.1 & 47 & 160 & 230 & 50 & 330 \\ & 2069.8* & 0.0 & 3.3 & 52 & 180 & 240 & 50 & 350 \\ GK Tau\dotfill & 606.0 & 0.3 & 0.0 & \nodata & \nodata & \nodata & \nodata & \nodata \\ & 2069.8* & 0.1 & 0.5 & 11 & 160 & 140 & 50 & 220 \\ HK Tau\dotfill & 606.1 & 0.4 & 0.5 & 17 & 75 & 100 & 30 & 140 \\ LkCa 8\dotfill & 604.9* & 0.05 & 1.4 & 32 & 160 & 160 & 70 & 280 \\ & 2068.9 & 0.1 & 1.2 & 24 & 125 & 190 & 40 & 250 \\ RW Aur B\dotfill & 605.1 & 0.1 & 2.8 & 43 & 160 & 230 & 50 & 330 \\ SU Aur\dotfill & 607.0 & 0.0 & 1.6 & 35 & 50 & 180 & -50 & 150 \\ TW Hya\dotfill & 605.2 & 0.0 & 0.0 & \nodata & \nodata & \nodata & \nodata & \nodata \\ & 606.1 & 0.0 & 0.0 & \nodata & \nodata & \nodata & \nodata & \nodata \\ & 1718.1* & 0.1 & 1.4 & 32 & 245 & 170 & 170 & 370 \\ & 2069.1 & 0.1 & 0.9 & 17 & 255 & 170 & 170 & 350 \\ & 2452.1 & 0.0 & 0.8 & 17 & 230 & 190 & 150 & 350 \\ & 2453.1 & 0.1 & 0.7 & 14 & 240 & 190 & 160 & 330 \\ UY Aur\dotfill & 605.0 & 0.4 & 0.4 & 14 & 160 & 110 & 110 & 220 \\ & 607.0 & 0.4 & 0.4 & 12 & 160 & 110 & 100 & 220 \\ & 1718.1* & 0.2 & 0.7 & 20 & 160 & 130 & 90 & 240 \\ & 2069.9 & 0.3 & 0.5 & 13 & 160 & 120 & 90 & 230 \\ UZ Tau E\dotfill & 605.9 & 0.3 & 0.2 & 8 & 185 & 60 & 150 & 210 \\ UZ Tau W\dotfill & 605.9 & 0.1 & 0.6 & 18 & 75 & 140 & 20 & 170 \\ V836 Tau\dotfill & 606.0* & 0.0 & 1.7 & 35 & 160 & 170 & 80 & 300 \\ & 1749.0 & 0.0 & 0.0 & \nodata & \nodata & \nodata & \nodata & \nodata \\ YY Ori\dotfill & 607.1 & 0.4 & 2.1 & 37 & 225 & 210 & 110 & 390 \\ \enddata \tablecomments{Col.~2: Heliocentric Julian Date (2,452,000 +); for multiple observations an asterisk indicates membership in the reference sample; Col.~3: Veiling at one micron; Col.~4: Equivalent width of red absorption below the continuum; Col.~5: Percentage of the continuum absorbed at the deepest point of the profile; Col.~6: Centroid of the red absorption; Col.~7: Width 
at one quarter of red absorption minimum; Col.~8: Minimum velocity of red absorption; Col.~9: Maximum velocity of red absorption.} \tablenotetext{a}{The true minimum velocity is obscured by central absorption; we assume $V_{\rm blue}=0$.} \end{deluxetable*} \subsection{Veiling and Redshifted Absorption} Our additional observations beyond those in EFHK confirm the result reported therein, that subcontinuum redshifted absorption is more prevalent in CTTS with low veiling. We illustrate this in Figure~\ref{f.veil}, where the equivalent width of the redshifted \ion{He}{1}~$\lambda$10830\ absorption below the continuum is plotted against both the simultaneous 1-$\micron$ veiling $r_Y$ and the non-simultaneous optical veiling $r_V$ for the 38 CTTS in EFHK. The 21 CTTS that are the focus of this study, showing redshifted absorption at least once among 46 spectra, are each identified by name. The remaining 17 CTTS that have not yet been seen to show redshifted absorption among 35 spectra acquired to date appear as symbols (but can be identified from EFHK). All points in the figure are averages for objects with multiple observations taken between 2001-2007. \begin{figure} \epsscale{1.2} \plotone{f1.eps} \figcaption{ Equivalent width of red absorption at \ion{He}{1}~$\lambda$10830\ versus veiling for CTTS from EFHK. The top panel shows the relation for the simultaneously measured 1-$\micron$ veiling $r_Y$ for all 38 CTTS, using averages for stars with multiple observations. The 21 stars featured here show helium red absorption in at least one observation and are labeled with abbreviations of their names. The 17 that have not been observed to show red absorption are identified with plus signs. The bottom panel plots the same equivalent width data versus the average of optical veiling measurements ($\lambda=5700$~\AA) from HEG, obtained a decade before the NIRSPEC campaign, which exist for 29 of the EFHK stars. 
In this and future scatter plots, points that would otherwise overlap are slightly offset for clarity. \label{f.veil}} \epsscale{1} \end{figure} We have included the optical veiling measurements in Figure~\ref{f.veil} because it is the excess emission at optical and shorter wavelengths that is associated with luminosity from accretion shocks and is the basis for deriving disk accretion rates. Note that the range of veilings is different at each of the two wavelengths, with maxima of $r_V=9.6$ and $r_Y=2$, and that although all CTTS show detectable veiling in the optical (HEG), 7/38 have no detectable veiling at 1 $\micron$. (All the CTTS with $r_Y=0$ do show \ion{He}{1}~$\lambda$10830\ and P$\gamma$\ in emission, differentiating them from WTTS.) Despite these differences, objects with low $r_V$ have low $r_Y$, and objects with high $r_V$ have high $r_Y$, indicating that the 1-$\micron$ veiling is a rough proxy for optical/UV veiling and disk accretion rate, and that variations in the disk accretion rate are relatively modest over timescales of a decade. The proportionality between $r_Y$ and disk accretion rate is further corroborated by the excellent correlation between $r_Y$ and the equivalent width of P$\gamma$\ emission (EFHK), assuming that the equivalent width of P$\gamma$, like that of P$\beta$, is correlated with accretion rate \citep{muz98,fol01,nat04}. The prime message from Figure~\ref{f.veil} is that when the veiling is high, $r_Y>0.5$ or $r_V>2$, red absorption at \ion{He}{1}~$\lambda$10830\ is rare. Although the number of observations of each of the 38 stars ranges from 1 to 6, we note that out of a total of 25 observations of the 9 objects with $r_Y>0.5$, only once, in one of four observations of the highest-veiling object DR Tau, did a weak redshifted absorption appear. In contrast, out of the 56 total spectra of the 29 objects with $r_Y\le0.5$ or $r_V\le2$, redshifted absorption is detected in 37 spectra. 
Even with our non-uniform sampling of individual objects, it is clear that the frequency of redshifted absorption in CTTS with the highest veilings, seen in only 4\% (1/25) of the total spectra of 9 objects, is significantly lower than that in CTTS with more modest veilings, where red absorption is seen in 66\% (37/56) of the total spectra of 29 objects. Spectral variability for these objects will be discussed in Section~3.4. \subsection{Line Profiles and Subcontinuum Absorption} \ion{He}{1}~$\lambda$10830\ profiles for the 21 CTTS that have shown subcontinuum redshifted absorption at least once are presented in Figure~\ref{f.redabs}. This set of profiles is for the reference sample, and the profiles are ordered by their simultaneous 1-$\micron$ veiling. The part of the profile we identify as the red absorption component is delineated in Figure~\ref{f.redabs} by shading. As noted above, the reference sample contains the profile with the deepest red absorption at \ion{He}{1}~$\lambda$10830\ for each star (in contrast to the reference sample from EFHK that emphasized blueshifted absorption from winds). The full set of profiles observed for the 12 stars with multiple spectra appears in Section~\ref{s.var} where we discuss variability. The reference sample will be used in all subsequent analysis unless we are explicitly considering profile or veiling variations. \begin{figure*} \plotone{f2.eps} \figcaption{The reference sample of residual \ion{He}{1}~$\lambda$10830\ profiles for the 21 CTTS with subcontinuum redshifted absorption (shaded), ordered by decreasing 1-$\micron$ veiling $r_Y$. Since the veiled photospheric contribution has been subtracted, the continuum corresponds to zero on the flux axis, and total absorption of the continuum corresponds to -1. 
Velocities are relative to the stellar photosphere, and the spectra are plotted with three-pixel binning.\label{f.redabs}} \end{figure*} The uniqueness of the \ion{He}{1}~$\lambda$10830\ line in the study of accreting gas is immediately seen by comparing it to the P$\gamma$\ profile found in the same NIRSPEC order. This comparison is made in Figure~\ref{f.width}, which zooms in on the red side of the profile for each of the 21 objects in the reference sample, sorted now by the equivalent width of the \ion{He}{1}~$\lambda$10830\ red absorption. (We have ignored the blue half of the line in order to draw attention to the red absorption; full P$\gamma$\ profiles can be found in EFHK.) Only 5 of 21 (24\%) stars show red absorption at both \ion{He}{1}~$\lambda$10830\ and P$\gamma$, and when seen, it is considerably weaker at P$\gamma$. Specifically, the maximum depth of red absorption seen at P$\gamma$\ is 21\% of the continuum, compared to 61\% for $\lambda$10830, and the maximum equivalent width is 1.1~\AA, versus 4.5~\AA\ at $\lambda$10830. Surprisingly, the 3 stars with the strongest helium absorption (AA Tau, GI Tau, and DK Tau) show no absorption at P$\gamma$, while the 3 stars with the strongest P$\gamma$\ absorption (TW Hya, BM And, and YY Ori) have intermediate helium absorptions, although their P$\gamma$\ absorptions do share similar velocity structures with their helium absorptions. In models for H line formation in magnetospheric accretion scenarios, inverse P Cygni absorption is seen only when the accretion rate is favorable and the line of sight is directed toward the hot continuum in the accretion shock. In contrast, our data indicate a much wider range of formation conditions for red absorption at \ion{He}{1}~$\lambda$10830, which offers a unique probe of the infalling gas projected in front of the stellar surface by absorbing continuum photons from both the star and the accretion shock. 
\begin{figure*} \plotone{f3.eps} \figcaption{Comparison of the red half of \ion{He}{1}~$\lambda$10830\ (left) and P$\gamma$\ (right) profiles from the reference sample, arranged top to bottom in order of increasing \ion{He}{1}~$\lambda$10830\ red absorption equivalent width. Subcontinuum absorption is shaded in both lines.\label{f.width}} \end{figure*} Measured parameters of the \ion{He}{1}~$\lambda$10830\ red absorption, i.e., the section of the profile shaded in Figure~\ref{f.redabs}, are listed for each observation in Table~\ref{t.redabs} with an asterisk identifying the spectrum in the reference sample for stars with multiple observations. Parameters include the equivalent width $W_\lambda$, the depth of maximum penetration into the continuum $D_{\rm max}$, the centroid velocity $V_C$, and the width measured at one quarter of the absorption minimum, FWQM. We also tabulate velocities at the blueward and redward edges of the absorption, $V_{\rm blue}$ and $V_{\rm red}$. In most cases, $V_{\rm blue}$ is easily identified as the location where emission sharply transitions to red absorption. On the other hand, the gradual return to the continuum at the high-velocity end makes $V_{\rm red}$\ less straightforward to measure. In order to have a uniform definition for all stars, we conservatively define $V_{\rm red}$\ as the velocity where the absorption reaches 95\% of the continuum level, with the consequence that it is somewhat smaller than the extreme infall velocity. Histograms illustrating the diversity of these parameters appear in Figure~\ref{f.hist}. The equivalent widths range from 0.2 to 4.5~\AA, maximum penetrations into the continuum range from 8\% to 61\%, centroids range from 50 to 255~km~s$^{-1}$, and FWQM range from 60 to 320~km~s$^{-1}$. In many stars the absorptions begin near the stellar rest velocity, so the FWQM reflects the true width of the absorbing velocities. 
In others (e.g., DR Tau and TW Hya), $V_{\rm blue}$ is well redward of the rest velocity, owing to helium emission, likely from another region such as the wind, that fills in the red absorption and reduces its depth. \begin{figure} \epsscale{1.2} \plotone{f4.eps} \figcaption{Histograms of red absorption profile measurements for the reference sample.\label{f.hist}} \epsscale{1} \end{figure} \subsection{Maximum Infall Velocities\label{s.rred}} Without assuming any particular infall geometry, we can estimate the outer extent of an accretion flow by comparing the most redward velocity in an absorption profile with the stellar escape velocity. A particle undergoing ballistic infall from a distance $R$ toward a star of mass $M_*$ and radius $R_*$ has a free-fall speed at a distance $r$ of \begin{eqnarray}v_{ff}&=&\left[\frac{2GM_*}{R_*}\left(\frac{R_*}{r}-\frac{R_*}{R}\right)\right]^{1/2}\nonumber\\ &=&V_{\rm esc}\left(\frac{R_*}{r}-\frac{R_*}{R}\right)^{1/2}\label{e.speed}\end{eqnarray} where $V_{\rm esc}$\ is the escape velocity from the surface of the star. The largest infall velocity achieved in the funnel flow, immediately before impact, is thus set by the maximum distance where infalling gas leaves the disk, $R_{\rm max}$, i.e., $V_{\rm max}=V_{\rm esc}\left(1-R_*/R_{\rm max}\right)^{1/2}$. We can then use the velocity of the red edge of the \ion{He}{1}~$\lambda$10830\ absorption, $V_{\rm red}$, as an indicator for $V_{\rm max}$ and hence determine $R_{\rm max}$. The depth of the absorption near $V_{\rm red}$\ will then indicate the filling factor of infalling gas immediately before it impacts the photosphere. However, because of projection effects and the conservative assumption for measuring $V_{\rm red}$, this gives a lower limit to the actual $V_{\rm max}$, and thus the corresponding inferred maximum distance for infall, $R_{\rm red}$, will be a lower limit to the actual value of $R_{\rm max}$. 
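The free-fall relation above, and its inversion to obtain the lower limit $R_{\rm red}$ on the launch radius, can be checked numerically. A minimal sketch (our own function names), using the AA Tau values from the tables:

```python
import math

def freefall_speed(r, r_launch, v_esc):
    """Ballistic infall speed at radius r (in units of R*) for gas falling
    from rest at r_launch (in R*); v_esc is the surface escape velocity."""
    return v_esc * math.sqrt(1.0 / r - 1.0 / r_launch)

def r_red(v_red, v_esc):
    """Lower limit (in R*) to the launch radius implied by the red-edge
    velocity, inverting V_max = V_esc * (1 - R*/R_max)**0.5."""
    return 1.0 / (1.0 - (v_red / v_esc) ** 2)

# AA Tau: V_red = 310 km/s, V_esc = 390 km/s -> R_red ~ 2.7 R*,
# and infall from that radius reaches ~310 km/s at the stellar surface.
print(r_red(310.0, 390.0))
print(freefall_speed(1.0, r_red(310.0, 390.0), 390.0))
```

Self-consistency is immediate: gas launched from $R_{\rm red}$ arrives at the photosphere moving at exactly $V_{\rm red}$.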
Thus $V_{\rm max}\ge V_{\rm red}$ and with $R_{\rm max}$ in units of $R_*$, we have \begin{equation}R_{\rm max}\ge\left(1-V_{\rm red}^2/V_{\rm esc}^2\right)^{-1}\equiv R_{\rm red}.\label{e.rred}\end{equation} In Table~\ref{t.radius} we list $V_{\rm red}$, $V_{\rm esc}$, and their ratio, followed by the implied $R_{\rm red}$. Figure~\ref{f.rmax} shows the locations of the 21 observations of the reference sample in the ($V_{\rm red}$, $V_{\rm esc}$) space as well as dotted lines corresponding to $V_{\rm red}/V_{\rm esc}$ of 0.94, 0.87, 0.71, and 0.30, or $R_{\rm red}=8$, 4, 2, and 1.1~$R_*$ respectively. The average value of $R_{\rm red}$ is 2.9 $R_*$, and the median is 1.9 $R_*$, where we adopt $R_{\rm red}\ge8R_*$ for the 3 stars with $V_{\rm red}>V_{\rm esc}$. Of these 3 outliers, YY Ori has $V_{\rm red}/V_{\rm esc}=1.4$, which indicates an error in the stellar parameters, while the other two, DR Tau and DK Tau, have $V_{\rm red}$\ slightly larger than $V_{\rm esc}$\ but within the estimated 20\% uncertainty. If we used a less conservative estimate for $V_{\rm red}$\ as the outermost redward velocity (see Section 3.2), at a penetration depth of 2\% rather than 5\% of the continuum, then the median $R_{\rm red}$ increases to 2.9 $R_*$. \begin{figure} \epsscale{1.2} \plotone{f5.eps} \figcaption{Velocity at the red edge of the \ion{He}{1}~$\lambda$10830\ absorption ($V_{\rm red}$) versus the stellar escape velocity ($V_{\rm esc}$) for the 21 profiles of the reference sample. 
The solid line represents $V_{\rm red}=V_{\rm esc}$, while the dotted lines represent the ratios of $V_{\rm red}$\ to $V_{\rm esc}$\ for ballistic infall from 8, 4, 2, and 1.1~$R_*$, assuming $V_{\rm red}=V_{\rm max}$.\label{f.rmax}} \epsscale{1} \end{figure} \begin{deluxetable*}{lcccccc} \tablecaption{Maximum Infall Distances and Corotation Radii\label{t.radius}} \tablewidth{6in} \tablehead{\colhead{Object} & \colhead{$V_{\rm red}$} & \colhead{$V_{\rm esc}$} & \colhead{$V_{\rm red}/V_{\rm esc}$} & \colhead{$R_{\rm red}$\tablenotemark{a}} & \colhead{$R_{\rm co}$\tablenotemark{b}} & \colhead{$R_{\rm red}/R_{\rm co}$\tablenotemark{c}} \\ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)}} \startdata AA Tau\dotfill & 310 & 390 & 0.79 & 2.7 & 8.7 & 0.3 \\ BM And\dotfill & 280 & 510 & 0.55 & 1.4 & [6.3] & [0.2] \\ CI Tau\dotfill & 310 & 370 & 0.84 & 3.4 & [6.3] & [0.5] \\ CY Tau\dotfill & 240 & 310 & 0.77 & 2.5 & 7.2 & 0.4 \\ DK Tau\dotfill & 340 & 320 & 1.06 & [$\ge$8] & 6.1 & [$\ge$1.3] \\ DN Tau\dotfill & 260 & 300 & 0.87 & 4.0 & 5.2 & 0.8 \\ DR Tau\dotfill & 320 & 310 & 1.03 & [$\ge$8] & 5.9 & [$\ge$1.4] \\ DS Tau\dotfill & 340 & 570 & 0.60 & 1.6 & [6.3] & [0.3] \\ FP Tau\dotfill & 120 & 200 & 0.60 & 1.6 & [6.3] & [0.3] \\ GI Tau\dotfill & 350 & 450 & 0.78 & 2.5 & 8.8 & 0.3 \\ GK Tau\dotfill & 220 & 350 & 0.63 & 1.7 & 4.8 & 0.4 \\ HK Tau\dotfill & 140 & 320 & 0.44 & 1.2 & [6.3] & [0.2] \\ LkCa 8\dotfill & 280 & 370 & 0.76 & 2.3 & 5.1 & 0.5 \\ RW Aur B\dotfill & 330 & 580 & 0.57 & 1.5 & [6.3] & [0.2] \\ SU Aur\dotfill & 150 & 490 & 0.31 & 1.1 & 2.3 & 0.5 \\ TW Hya\dotfill & 370 & 520 & 0.71 & 2.0 & 7.3 & 0.3 \\ UY Aur\dotfill & 240 & 400 & 0.60 & 1.6 & [6.3] & [0.3] \\ UZ Tau E\dotfill & 210 & 340 & 0.62 & 1.6 & [6.3] & [0.3] \\ UZ Tau W\dotfill & 170 & 260 & 0.65 & 1.7 & [6.3] & [0.3] \\ V836 Tau\dotfill & 300 & 440 & 0.68 & 1.9 & 9.6 & 0.2 \\ YY Ori\dotfill & 390 & 290 & 1.34 & [$\ge$8] & 4.8 & [$\ge$1.7] 
\\ \enddata \tablecomments{For the reference sample only. Col.~2: Maximum velocity of \ion{He}{1}~$\lambda$10830\ red absorption (km~s$^{-1}$); Col.~3: Escape velocity from the stellar surface (km~s$^{-1}$); Col.~5: Lower limit to maximum distance of infalling material ($R_*$); Col.~6: Corotation radius ($R_*$).} \tablenotetext{a}{Brackets indicate an assumed $R_{\rm red}\ge8~R_*$ since $V_{\rm red}>V_{\rm esc}$.} \tablenotetext{b}{Brackets indicate that, since $P_{\rm rot}$ and thus $R_{\rm co}$ are unknown, $R_{\rm co}$ is set to the mean of the known values.} \tablenotetext{c}{Brackets indicate uncertainty due to one of the preceding conditions.} \end{deluxetable*} In Figure~\ref{f.rcoro} we compare $R_{\rm red}$ to $R_{\rm co}$ for the 12 stars with published rotation periods (listed in Table~\ref{t.sample}). For this group of stars, the median ratio of $R_{\rm red}$ to $R_{\rm co}$ is 0.4, which increases to 0.5 for the less conservative estimate of the maximum redward velocity and can increase further when projection effects are considered. This is well inside the corotation radius for most stars, but as will be apparent from model profiles in Section 4, for some viewing angles projection effects can result in a significant underestimate of $R_{\rm max}$ as determined from $R_{\rm red}$. Three stars show $R_{\rm red}/R_{\rm co}>1.3$ (YY Ori, DR Tau, and DK Tau), indicating that infall originates close to or possibly outside corotation, unless the error in $R_{\rm co}$ is larger than the typical 20\% uncertainty. \begin{figure} \epsscale{1.2} \plotone{f6.eps} \figcaption{Graphical comparison between $R_{\rm red}$, the {\em minimum} distance from the star of the outermost edge of the flow estimated from $V_{\rm red}$, and $R_{\rm co}$, the star-disk corotation radius, for the twelve stars with known $P_{\rm rot}$. Sort is by $R_{\rm red}/R_{\rm co}$, which decreases from top to bottom. 
Lower limits to $R_{\rm red}$ are indicated by greater-than symbols.\label{f.rcoro}} \epsscale{1} \end{figure} The projected area of the accretion flow immediately above the accretion shock can be equated to the depth of the absorption at the highest velocity in the red absorption profile. This estimate will be most reliable for objects where both (1) $V_{\rm red}$\ is near $V_{\rm esc}$, indicating that projection effects are not significantly altering our determination of the velocity just before impact, and (2) the 1-$\micron$ veiling is near zero, indicating that the red absorption is solely due to scattering of stellar photons so the absorption depth at each velocity is the minimum percentage of the stellar surface that is obscured at that velocity. In the reference sample, three stars with $V_{\rm red}\ge0.8~V_{\rm esc}$ and no 1-$\micron$ veiling have significant absorption depths at 0.9~$V_{\rm red}$: AA Tau (14\%), DK Tau (10\%), and GI Tau (10\%). Their implied projected areas of material moving close to the escape velocity are an order of magnitude larger than the accretion shock filling factors estimated from their optical continuum excesses (CG). Although accretion flows do ``funnel'' into narrow columns as they arrive at magnetic footpoints on the stellar surface, the coverage fraction of the flow in typical dipole flow geometries diminishes by less than 50\% from $\sim1.2~R_*$ to $R_*$ as the speed increases from $\sim0.9$~$V_{\rm red}$\ to $V_{\rm red}$, not by an order of magnitude. This hints that in at least several stars, a conventional dipolar accretion flow will have difficulty reconciling small shock filling factors with deep, broad red absorptions. \subsection{Variability\label{s.var}} The \ion{He}{1}~$\lambda$10830\ profiles for the 12 CTTS in this study with multiple spectra are shown in Figure~\ref{f.multi}, where for each star the full set of observed profiles is superposed and the range of simultaneous veilings is indicated. 
Since the time intervals are randomly distributed, ranging from days to years, only very general statements about variability can be made. Three categories of variability are seen: (1) five objects always show redshifted absorption with little variation in the absorption morphology (BM And, UY Aur, LkCa 8, DN Tau, and GI Tau); (2) two objects always show red absorption, but the profile morphology changes dramatically (AA Tau and DK Tau); and (3) five objects have no redshifted absorption at one epoch, but do show it at another epoch (DR Tau, GK Tau, TW Hya, CY Tau, and V836 Tau). \begin{figure*} \plotone{f7.eps} \figcaption{Residual \ion{He}{1}~$\lambda$10830\ profiles of the twelve CTTS that were observed more than once and that show subcontinuum redshifted absorption in at least one observation. The reference sample spectra are shown with heavier lines, and the range of observed 1-$\micron$ veilings appears in each box.\label{f.multi}} \end{figure*} The 5 stars observed at least four times (AA Tau, DK Tau, DR Tau, TW Hya and UY Aur) can be examined to see if there is a relation between veiling and the \ion{He}{1}~$\lambda$10830\ red absorption. The red absorption equivalent width is plotted against $r_Y$ for each observation of these five stars in Figure~\ref{f.veilvar}. For the star with little change in the morphology of its red absorption (UY Aur), the veiling varies by a factor of 2. For the two stars where redshifted absorption is always present but changes dramatically (AA Tau and DK Tau), the absorption is strongest when the veiling is lowest (i.e., not detected). For the two stars where redshifted absorptions come and go (DR Tau and TW Hya), there is no relation between veiling and the strength of the absorption. 
\begin{figure} \epsscale{1.2} \plotone{f8.eps} \figcaption{Equivalent width of red absorption at \ion{He}{1}~$\lambda$10830\ versus the 1-$\micron$ veiling for stars with at least four observations and at least one helium profile with subcontinuum red absorption. Observations on contiguous days are represented by open points for 2002 and asterisks for 2007.\label{f.veilvar}} \epsscale{1} \end{figure} Each of the five stars with at least four observations was observed on at least two nights of a three-night run in 2002, providing a look at short-term variability and the possible role of rotation. Data points from this run appear in Figure~\ref{f.veilvar} as open symbols, and the pair of asterisks for TW Hya is from a second pair of consecutive nights five years later, in 2007. The only objects to show much variation over a time scale of days are DR Tau, which showed weak red absorption only on the last of three consecutive nights in 2002, and AA Tau, which we now discuss further. The variability of AA Tau at optical wavelengths has been thoroughly examined in the context of rotational modulation from a misaligned magnetosphere interacting with the inner disk \citep{bou99,bou03,bou07b}. Briefly, the system is close to edge-on, with an inclination angle of 75$^\circ$, and the rotation period is 8.22 days. Phase zero corresponds to the epoch of maximum V-band flux, while phase 0.5 is characterized by a reduction in V-band flux due to occultation of the star by a warped disk. Accretion diagnostics are strongest near phase 0.5, with redshifted absorption appearing at H$\alpha$ and H$\beta$ between phase 0.39 and phase 0.52 accompanied by a rise in the optical veiling (measured between 5400 and 6700~\AA) from 0.2 at phase zero to between 0.4 and 0.7 during the occultation phase.
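For reference, converting an observation date to a rotational phase under this ephemeris is a one-line computation. The sketch below is our own illustration, using the 8.22-day period together with the phase-0.51 epoch HJD 2,453,308 of \citet{bou07b}:

```python
# Convert heliocentric Julian dates to AA Tau rotation phases, using
# the 8.22-day period and phase 0.51 at HJD 2,453,308 (Bouvier et al. 2007).

P_ROT = 8.22          # rotation period (days)
HJD_REF = 2453308.0   # reference epoch
PHASE_REF = 0.51      # rotational phase at the reference epoch

def rotation_phase(hjd):
    """Rotational phase in [0, 1) at heliocentric Julian date hjd."""
    return (PHASE_REF + (hjd - HJD_REF) / P_ROT) % 1.0

# By construction the reference epoch recovers phase 0.51, and one
# full period later the phase is unchanged:
print(round(rotation_phase(HJD_REF), 2))          # 0.51
print(round(rotation_phase(HJD_REF + P_ROT), 2))  # 0.51
```

The same extrapolation, applied to our observation dates, yields the projected phases discussed in the text.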
To see whether our 1-$\micron$ data of AA Tau are consistent with this picture, we adopt the 8.22-day rotation period, assign phase 0.51 to HJD 2,453,308 \citep{bou07b}, and convert our observation dates to rotation phases. Figure~\ref{f.aavar} shows the \ion{He}{1}~$\lambda$10830\ and P$\gamma$\ profiles and veilings for each of our 4 observations, corresponding to projected phases from $\sim$~0 to 0.4. The figure also plots the equivalent width of \ion{He}{1}~$\lambda$10830\ red absorption against the derived phase, where each point is roughly aligned with its corresponding profiles. While \ion{He}{1}~$\lambda$10830\ red absorption appears at all phases, it is weakest near phase zero and increases steadily to phase 0.39. The velocity at the red absorption edge ($V_{\rm red}$) also varies, increasing from 250~km~s$^{-1}$\ near phase zero to 310~km~s$^{-1}$\ at phase 0.39. In contrast, P$\gamma$\ shows red absorption only once, close to phase zero, when the 1-$\micron$ veiling is also highest. If the phasing from the Bouvier epoch is accurate, then the deepest and widest red absorption at \ion{He}{1}~$\lambda$10830\ occurs at the phase associated with maximum accretion effects in the optical when the line of sight pierces the disk warp and the accretion shock. However, this would then mean that the 1-$\micron$ veiling and the P$\gamma$\ red absorption are out of phase with respect to optical veilings and profiles. Whether or not this phase projection is accurate, the profile sequence for \ion{He}{1}~$\lambda$10830\ and P$\gamma$\ provides another illustration of the very different kinds of information about the accretion flow that can be inferred from these two lines. Clearly, time-monitoring studies at 1 $\micron$ will be revealing! \begin{figure} \epsscale{1.2} \plotone{f9.eps} \figcaption{Relation between red absorption and rotational phase for 4 spectra of AA Tau (phased from \citealt{bou07b}). 
The \ion{He}{1}~$\lambda$10830\ and P$\gamma$\ profiles of AA Tau are shown above the corresponding equivalent width of the red absorption at \ion{He}{1}~$\lambda$10830\ for each phase. Profiles are labeled with the simultaneous 1-$\micron$ veiling, and their velocity axes run from $-500$ to 500~km~s$^{-1}$. For the helium line, the flux axis runs from $-1$ to 1, while for P$\gamma$, it runs from $-0.5$ to 0.5 to elucidate the morphology of the weaker profiles.\label{f.aavar}} \epsscale{1} \end{figure} \section{SCATTERING MODELS AND COMPARISON TO OBSERVATIONS\label{s.models}} Radiative transfer models of hydrogen lines arising in the accretion flow \citep{muz01,sym05,kur06,kur08} have been successful in reproducing general characteristics of these lines in some stars. However, these models assume that all of the hydrogen emission arises in the funnel flow, depend on an assumed temperature in the flow that is not well understood, and are limited in their ability to constrain the accretion geometry. In this section, we take a new approach to understanding CTTS accretion flows, {\em modeling the scattering of continuum photons by \ion{He}{1}~$\lambda$10830\ in the infalling gas}. The lower level of the \ion{He}{1}~$\lambda$10830\ transition ($2s~{}^3S$) is 21~eV above the ground state, restricting formation of the line to regions near the star where the ionizing photon flux is high. Further, since the only permitted transition downward from the upper level ($2p~{}^3P^o$) is emission of a $\lambda$10830 photon, we model this line as resonance scattering. We first lay out the assumptions of our model, in which the accretion geometry is the commonly adopted dipolar flow, a geometrically flat disk is truncated by the innermost field lines, and all accreting field lines terminate in an accretion shock of uniform temperature at the stellar photosphere that generates a continuum excess observable as veiling.
We then compare profiles generated from these models to the observed \ion{He}{1}~$\lambda$10830\ red absorption profiles. The basic dipolar flow is found lacking in a significant number of objects, so we then explore modifications to this geometry that better explain these observations. \subsection{Basic Dipolar Flow} We first consider an axisymmetric dipolar field in which the stellar magnetic and rotational axes are aligned and an opaque accretion disk extends from an initial radius $R_i$ to infinity in the equatorial plane. The outline of the overall structure of accretion from the disk to the star is completely specified by two parameters, although there is some flexibility in which two parameters we choose. One pair is $R_i$ and $R_f$, which indicate the range in radial distance from the star, i.e., $R_i\le R\le R_f$, over which the dipolar field lines that participate in accretion are distributed over the disk. Alternatively, we can specify $\theta_i$ and $\theta_f$, which mark the range in polar angle, i.e., $\theta_f\le\theta\le\theta_i$ and $\left(\pi-\theta_i\right)\le\theta\le\left(\pi-\theta_f\right)$, where the same field lines are distributed at the stellar surface. The relation between the two pairs is apparent from the dipolar field structure, which stipulates that \begin{equation}R_{i,f}=R_*/\sin^2\theta_{i,f}.\label{e.dipole}\end{equation} A third pair of parameters is $F$ and $R_0$, where $F=\cos\theta_f-\cos\theta_i$ is the fraction of the stellar surface outlined by the above field lines, and $R_0=R_*/\sin^2\theta_0$, with $\cos\theta_f-\cos\theta_0=\cos\theta_0-\cos\theta_i$, marks the approximate median radius of where the accretion flow originates in the disk. The median field line originating at $R_0$ will thus correspond to a median polar angle on the star of $\theta_0$. 
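The mapping between these parameter pairs can be made concrete with a short numerical sketch (ours, not part of the published analysis). The function below combines equation~(\ref{e.dipole}) with the definitions of $F$ and $R_0$, i.e., $\cos\theta_{f,i}=\cos\theta_0\pm F/2$, to recover $(\theta_0, R_i, R_f)$ from $(R_0, F)$:

```python
from math import cos, asin, sqrt, degrees

def dipole_radii(r0, big_f):
    """Return (theta0 in degrees, R_i, R_f) for a dipolar flow with
    median disk coupling radius r0 (in stellar radii) and surface
    fraction F, using R = R*/sin^2(theta) and
    cos(theta_f) - cos(theta_0) = cos(theta_0) - cos(theta_i) = F/2.
    """
    theta0 = asin(sqrt(1.0 / r0))        # sin^2(theta0) = R*/R0
    cos_f = cos(theta0) + big_f / 2.0    # innermost footpoint -> outer radius
    cos_i = cos(theta0) - big_f / 2.0    # outermost footpoint -> inner radius
    r_f = 1.0 / (1.0 - cos_f ** 2)       # R_f = R*/sin^2(theta_f)
    r_i = 1.0 / (1.0 - cos_i ** 2)
    return degrees(theta0), r_i, r_f

# Reproduces the R0 = 4 R*, F = 0.05 model geometry quoted in the text:
theta0, r_i, r_f = dipole_radii(4.0, 0.05)
print(round(theta0), round(r_i, 2), round(r_f, 2))  # 30 3.42 4.85
```

Applied to the other $(R_0, F)$ combinations, this reproduces the $(R_i, R_f)$ ranges listed in Table~\ref{t.space}.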
We find this pair of parameters to be instructive, since if the whole geometric structure is fully occupied by accreting gas, then $F$ will equal $f$, the filling factor of the accretion shock. In this subsection, we consider $F=f$ and thus use $f$ to indicate both the fraction of the stellar surface outlined by the overall flow structure and the filling factor of shocked gas at the terminus of accreting field lines, as in previous work by others. The modeled values of $R_0$ and $f$ are chosen to sample the full range of plausible accretion flow sizes and filling factors. The values of $R_0$, taken to be 2, 4, and 8~$R_*$, are consistent with the understanding that the accretion flow arises near the star-disk corotation radius \citep{gho78,kon91,shu94}. The values of $f$, taken to be 0.01, 0.05, and 0.1, cover the range found from shock models of CG. The upper section of Table~\ref{t.space} lists the 9 modeled combinations of $R_0$ ($\theta_0$), $f$ ($=F$), and the associated ranges $\left(R_i, R_f\right)$ over which material leaves the disk. These configurations are visualized in Figure~\ref{f.geom}. (The lower half of the table lists cases with $F\ne f$, which will be explored beginning in Section 4.2.) Three cases correspond closely to geometries used in previous models of magnetospheric accretion \citep{muz01}. The case with $R_0=4R_*$ and $f=0.01$ approximates their SN (small/narrow) case, the case with $R_0=4R_*$ and $f=0.05$ approximates their SW (small/wide) case, and the case with $R_0=8R_*$ and $f=0.01$ approximates their LW (large/wide) case. \begin{figure} \epsscale{1.2} \plotone{f10.eps} \figcaption{Schematic representations of accretion geometries used in scattering calculations for dipoles with $F=f$. The star is black, the accretion flows are gray, and the disk is the solid line in the equatorial plane. Each column shows a different $R_0$ ($\theta_0$), and each row shows a different $f$ with a corresponding $r_Y$. 
We note that in the extreme case of $R_0=8R_*$ and $f=0.1$, the accreting field lines thread the disk out to 35~$R_*$, so the shading in the lower right panel extends far beyond the figure boundary.\label{f.geom}} \epsscale{1} \end{figure} \begin{deluxetable}{ccccccc} \tablecaption{Model Magnetospheric Geometry Parameters\label{t.space}} \tablewidth{3in} \tablecolumns{7} \tablehead{\colhead{$R_0$} & \colhead{$\theta_0$} & \colhead{$F$} & \colhead{$f$} & \colhead{$r_Y$} & \colhead{$R_i$} & \colhead{$R_f$} \\ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)}} \startdata \cutinhead{Undiluted} 2 & 45 & 0.01 & 0.01 & 0.06 & 1.97 & 2.03 \\ & & 0.05 & 0.05 & 0.33 & 1.87 & 2.16 \\ & & 0.10 & 0.10 & 0.70 & 1.76 & 2.34 \\ 4 & 30 & 0.01 & 0.01 & 0.06 & 3.87 & 4.14 \\ & & 0.05 & 0.05 & 0.33 & 3.42 & 4.85 \\ & & 0.10 & 0.10 & 0.70 & 2.99 & 6.22 \\ 8 & 20.7 & 0.01 & 0.01 & 0.06 & 7.44 & 8.65 \\ & & 0.05 & 0.05 & 0.33 & 5.84 & 12.9 \\ & & 0.10 & 0.10 & 0.70 & 4.63 & 34.5 \\ \cutinhead{Diluted} 2 & 45 & 0.05 & 0.01 & 0.06 & 1.87 & 2.16 \\ & & 0.10 & 0.01 & 0.06 & 1.76 & 2.34 \\ & & 0.20 & 0.01 & 0.06 & 1.58 & 2.87 \\ 4 & 30 & 0.05 & 0.01 & 0.06 & 3.42 & 4.85 \\ & & 0.10 & 0.01 & 0.06 & 2.99 & 6.22 \\ & & 0.20 & 0.01 & 0.06 & 2.42 & 15.0 \\ 8 & 20.7 & 0.05 & 0.01 & 0.06 & 5.84 & 12.9 \\ & & 0.10 & 0.01 & 0.06 & 4.63 & 34.5 \\ \enddata \tablecomments{Col.~1: Fiducial disk coupling radius ($R_*$); Col.~2: Stellar impact angle in degrees from the pole; Col.~3: Fraction of the star over which the full range of magnetospheric footpoints is distributed; Col.~4: Filling factor of accretion shocks on the stellar surface; Col.~5: Approximate 1-$\micron$ veiling (eq.~[\ref{e.veil}]); Col.~6: Innermost radius at which accreting material leaves the disk, also the disk truncation radius ($R_*$); Col.~7: Outermost radius at which accreting material leaves the disk ($R_*$). 
$R_i$ and $R_f$ follow directly from $R_0$ and $F$.} \end{deluxetable} For each model the veiling $r_\lambda$ from the associated accretion shock, defined as the ratio of the continuum excess flux $F_v$ to the stellar flux $F_*$, is determined by the blackbody temperatures of the star and of the shock-heated gas, the magnitude of $f$, and the viewing angle. In all cases we assume a $T_*=4000$~K blackbody for the stellar continuum and a $T_v=8000$~K blackbody for the continuum from the accretion shock. The latter is a typical value found by CG from continuum excesses shortward of 0.5 $\micron$, although values as high as 10,000 K or as low as 6000 K are sometimes indicated. The veiling at wavelength $\lambda$~is \begin{equation}r_\lambda\equiv\frac{F_v}{F_*}\approx\frac{I_\lambda^{bb}\left(T_v\right)}{I_\lambda^{bb}\left(T_*\right)}\left(\frac{f}{1-f}\right)\label{e.veil},\end{equation} where the approximate equality arises from replacing the ratio of the projected areas of the two continua perpendicular to the line of sight, which depends on viewing angle, by simply $f/(1-f)$. Over the full range of viewing angle, the observed $r_\lambda$ for the same $f$ can vary by a factor of a few (see Section 4.1.1). The approximate value for $r_Y$ from equation~(\ref{e.veil}), without including the effect of viewing angle, is identified in Figure~\ref{f.geom} for each of the 3 values of $f$. With our assumed temperatures, the ratio of the blackbody intensities from the veiling continuum and the photosphere, $I_\lambda^{bb}\left(T_v\right)/I_\lambda^{bb}\left(T_*\right)$, is 24.5 at $\lambda=5700$~\AA\ and 6.3 at $\lambda=1.08$~\micron, so that for a typical observed $f=0.01$, the approximate veilings at these wavelengths are $r_V=0.25$ and $r_Y=0.06$. The corresponding ratio of $r_V/r_Y\sim4$ is preserved for all $f$ and is independent of viewing angle. The velocity of the flow has contributions from both free-fall and rotation.
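The intensity ratios just quoted follow directly from the Planck function. As a cross-check (our own sketch with standard SI constants, not part of the original text), the ratios and the implied veilings for $f=0.01$ can be reproduced numerically:

```python
from math import exp

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
K = 1.380649e-23     # Boltzmann constant (J/K)

def planck_ratio(lam, t_hot, t_cool):
    """Ratio of blackbody specific intensities B_lam(t_hot)/B_lam(t_cool);
    the lam^-5 prefactors cancel, leaving only the exponential terms."""
    x = H * C / (lam * K)
    return (exp(x / t_cool) - 1.0) / (exp(x / t_hot) - 1.0)

ratio_y = planck_ratio(1.083e-6, 8000.0, 4000.0)  # 1.08-micron (Y) band
ratio_v = planck_ratio(5.7e-7, 8000.0, 4000.0)    # 5700 A (optical)

f = 0.01
print(round(ratio_y, 1), round(ratio_v, 1))   # ~6.3 and ~24.5
print(round(ratio_y * f / (1 - f), 2))        # r_Y ~ 0.06
print(round(ratio_v * f / (1 - f), 2))        # r_V ~ 0.25
```

The ratio of the two veilings, $r_V/r_Y\approx4$, also follows immediately, since the $f/(1-f)$ factor cancels.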
The free-fall speed at a distance $r$ from the star along a field line threading the disk at $R$ is given by equation~(\ref{e.speed}). Since the gas follows the field lines, the velocity vector takes the form \begin{equation}{\mathbf v_{ff}} =-v_{ff}\left[\frac{3q^{1/2}(1-q)^{1/2}\hat{\mbox{\boldmath $\rho$}}\pm(2-3q)\hat{\mathbf z}}{(4-3q)^{1/2}}\right]\end{equation} \citep{cal92,har94}. Here $q=\sin^2\theta$, and $(\hat{\mbox{\boldmath $\rho$}}, \hat{\mbox{\boldmath $\phi$}}, \hat{\mathbf z})$ are unit vectors in the cylindrical coordinate system. Above the equatorial plane, the plus sign applies, while below the equatorial plane, the minus sign applies, so that the flow is always from the disk to the star. For the rotational component of the flow, the magnetosphere is assumed to rotate rigidly with velocity \begin{equation}{\mathbf v_{\phi}}=v_*\frac{\rho}{R_*}\hat{\mbox{\boldmath $\phi$}},\end{equation} where $v_*$ is the rotation speed of the star at its equator, assumed here to be $0.05~V_{\rm esc}$, or 15~km~s$^{-1}$\ when $V_{\rm esc}=300$~km~s$^{-1}$, a typical value for TTS \citep[][and references therein]{reb04}, and $\rho$ is the cylindrical radial distance of a point from the rotation axis. Since the rotational motion is for the most part transverse to the line of sight for the absorbing gas seen projected in front of the star, it has a very small effect on the absorption part of the line profile. The flow scatters continuum photons, which arise from the star and the accretion-heated photosphere. To maximize the red absorption, the $\lambda$10830 transition in the accreting flow is assumed to be optically thick. A rectangular line absorption profile with a 10~km~s$^{-1}$\ half-width is adopted to account for thermal and turbulent broadening.
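A useful consistency check on the velocity field above (our own algebra, implicit in the cited derivations) is that the bracketed vector in the expression for ${\mathbf v_{ff}}$ has unit magnitude, so the flow speed along every field line is exactly the free-fall speed $v_{ff}$:

```latex
\left|{\mathbf v_{ff}}\right|^2
  = v_{ff}^2\,\frac{9q(1-q) + (2-3q)^2}{4-3q}
  = v_{ff}^2\,\frac{9q - 9q^2 + 4 - 12q + 9q^2}{4-3q}
  = v_{ff}^2 .
```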
Thus, if a particular ray from a point on the stellar surface intersects the accreting flow such that the projection of the gas velocity along the ray extends from $v_{\rm min}$ to $v_{\rm max}$, then continuum photons from ($v_{\rm min}-10$~km~s$^{-1}$) to ($v_{\rm max}+10$~km~s$^{-1}$) are scattered. Because spontaneous emission is the dominant de-excitation route of the $\lambda$10830 upper state ($2p~{}^3P^o$) in comparison with other decay, collisional, or ionization processes, the photon absorption and subsequent re-emission is, in effect, a resonant scattering process if the small fine-structure energy differences among the three sub-levels are ignored. Rather than following the photon path in detail (e.g., with a Monte Carlo simulation), we simply assume a single scattering in which the absorbed photon is re-emitted isotropically with the appropriate Doppler shift, and it either hits the star; hits the opaque, flat disk; or escapes the system. While this is inconsistent with the assumption of an opaque line, we find that the exact contribution to the observed profile from the scattered photons has no significant bearing on our conclusions (see subsequent sections), so the extra effort is unwarranted. The emergent spectrum at a particular viewing angle $i$ is made up of photons that escape into a solid angle bin centered on $i$, either because they were never absorbed or because they were scattered into that direction. For orientations selected randomly over $4\pi$ steradians, $\cos i$ is uniformly distributed, so we consider five viewing angles equally spaced in $\cos i$: $\cos i=0.9$, 0.7, 0.5, 0.3, and 0.1, or $i=26^\circ$, $46^\circ$, $60^\circ$, $73^\circ$, and $84^\circ$. \subsubsection{Basic Dipolar Flow: Results} Figure~\ref{f.example} shows an example of the components contributing to the emergent model profile in the case of $R_0=4R_*$ ($\theta_0=30^\circ$) and $f=0.05$ ($3.4\le R/R_*\le4.9$), viewed at $i=60^\circ$.
The final emergent spectrum, shown in black, is the sum of contributions from the stellar and the veiling (accretion shock) continua, each shown separately in solid gray. The stellar contribution, with a normalized continuum level of 0.78, arises from scattering of the 4000-K continuum, while the veiling contribution, with a normalized continuum level of 0.22, arises from scattering of the 8000-K continuum. The ratio of these two continua, $0.22/0.78\approx0.3$, is the 1-$\micron$ veiling $r_Y$, also noted in the figure. Each solid gray component is further the sum of two subcomponents. One, shown with a dashed line in each case, is the absorption profile of the respective continuum. The other, shown with a dotted line in each case, is the emission profile, produced by scattered photons that escape toward the specified line of sight. The emission subcomponent is broad because scattered photons can be either red- or blueshifted, and weak because for each photon absorbed, the re-emitted photon may hit the disk or the star and not escape. Thus the filling-in of the red absorption by its own associated scattered emission is generally slight. \begin{figure} \epsscale{1.2} \plotone{f11.eps} \figcaption{Example scattering profile for dipolar infall with $\theta_0=30^\circ$ and $F=f=0.05$ (equivalently, $R_0=4R_*$ and $3.4<R/R_*<4.9$), viewed from an angle $i=60^\circ$. The emergent profile (black line) is the sum of two components: the profile due to photons from the 4000-K stellar continuum (upper gray line) and the profile due to photons from the 8000-K veiling continuum (lower gray line). The veiling, shown in the lower left, is the ratio of the veiling continuum height to the stellar continuum height.
Each component is further made up of two subprofiles: the absorption profile of the continuum (dashed) and the emission profile of scattered photons that escape toward the line of sight (dotted).\label{f.example}} \epsscale{1} \end{figure} Figure~\ref{f.example} also illustrates an important aspect of the models, that the redshifted absorption in the emergent profile is affected differently by scattering of the stellar and the veiling continua. In this model, with $\theta_0=30^\circ$ and $i=60^\circ$, the line of sight toward the veiling continuum intersects the portion of the accretion flow close to the star where the gas velocity is high (see Fig.~\ref{f.geom}), and scattering of the veiling continuum produces a red absorption that extends from 0.27 to 0.87~$V_{\rm esc}$. In contrast, the line of sight toward the stellar continuum intercepts portions of the accretion column with smaller infall speeds and a smaller velocity component is projected onto the line of sight. The red absorption thus produced ranges from $\sim 0$ to 0.74~$V_{\rm esc}$\ and is also shallower than the one from the veiling continuum. The resultant absorption profile is thus complex in shape and broader than either of the two individually, and in this case it has a maximum depth of about 20\% into the summed continuum. \begin{figure*} \includegraphics[angle=90,width=\textwidth]{f12.eps} \figcaption{Scattering profiles for dipolar infall with $R_0=2R_*$ ($\theta_0=45^\circ$) and $F=f$. Each row shows a different value of $f$. Within each row, each panel shows the profile for a different viewing angle and the corresponding 1-$\micron$ veiling $r_Y$. Emergent profiles (upper black lines) are the sum of the profiles from the stellar continuum ($T=4000$~K; gray lines) and the veiling continuum ($T=8000$~K; dotted lines). 
The optical veiling (at $\lambda=5700$~\AA) is approximately 4 times greater than $r_Y$.\label{f.simr2}} \end{figure*} \begin{figure*} \includegraphics[angle=90,width=\textwidth]{f13.eps} \figcaption{Scattering profiles for dipolar infall with $R_0=4R_*$ ($\theta_0=30^\circ$) and $F=f$, as in Figure~\ref{f.simr2}. \label{f.simr4}} \end{figure*} \begin{figure*} \includegraphics[angle=90,width=\textwidth]{f14.eps} \figcaption{Scattering profiles for dipolar infall with $R_0=8R_*$ ($\theta_0=20.7^\circ$) and $F=f$, as in Figure~\ref{f.simr2}.\label{f.simr8}} \end{figure*} Figures~\ref{f.simr2}, \ref{f.simr4}, and \ref{f.simr8} show the full range of model profiles for the three chosen values of $R_0$: 2, 4, and 8~$R_*$ respectively (corresponding to $\theta_0=45^\circ$, $30^\circ$, and $20.7^\circ$). In each figure, the three rows show, from top to bottom, the three selections of $f=0.01$, 0.05, and 0.1. In each row, the five panels show the profiles for the five viewing angles, from $i=26-84^\circ$. Within each of the 15 panels representing a unique combination of $f$ and $i$, the final emergent profile is shown (solid black curve) along with the separate contributions from scattering of the veiling continuum (dotted curve) and the stellar continuum (solid gray curve), but not the subcomponents of absorption and scattered emission from each continuum source. We emphasize the following points from these three figures: 1. When $f=0.01$, the red absorption is dominated by scattering of the stellar continuum, generally showing small absorption equivalent widths and velocity widths. The profiles have a strong dependence on inclination, with shallow absorption at small inclinations and narrow, deeper, low-velocity absorption at high inclinations. 2. The magnitude of the red absorption, measured by either the equivalent width or the maximum depth of absorption, is sensitive to the parameter $f$. 
As $f$ increases, there is both an increase in the veiling continuum and an increase in the coverage of accreting field lines projected in front of the stellar surface for a given $R_0$, enabling the line of sight to each point on the star to intersect more accreting field lines and hence yield a broader range in the projected velocity of the infalling gas. 3. For a given $f$, the red absorption is generally stronger at a larger $R_0$ (smaller $\theta_0$), since the accreting field lines then cover a greater range of solid angles, and the larger span between $R_i$ and $R_f$ produces a broader range in the gas velocity. However, inclination also plays a role, so that for a given $f$ and $R_0$, the strongest absorption occurs at a line of sight $i$ that parallels the final part of the trajectory of the accretion flow. From the schematic in Figure~\ref{f.geom}, it can be seen that for $\theta_0=45^\circ$, $30^\circ$, and $20.7^\circ$, the corresponding viewing angle to maximize the red absorption is roughly $i\approx84^\circ$, $60^\circ$, and $46^\circ$ respectively. Thus an increased $\theta_0$ (smaller $R_0$) requires a higher $i$ for a strong red absorption. This occurs because the contribution to the absorption from scattering of the veiling continuum is broadest when viewed in a direction parallel to the flow just before it impacts the star. 4. The observed emission, i.e., the part of the profile above the continuum, is usually weaker than the absorption and is mostly blueshifted. Only in the extreme case when $R_0=8R_*$ ($\theta_0=20.7^\circ$) with high $f$ (0.05 or 0.1) and excessive $R_f$ (13 and 35 $R_*$ for $f=0.05$ and 0.1 respectively) does a double-peaked profile result when viewed close to edge-on. The resulting accretion flow subtends a large solid angle, and since it is assumed to be in corotation with the star, the rotational broadening is considerable. This situation was included to complete our chosen parameter space and is not realistic.
5. The veiling ($r_\lambda$) from the 8000-K accretion zone depends on $f$, $\theta_0$, and $i$. The dependence on $f$ is obvious, since $r_\lambda$ scales almost linearly with $f$ (eq.~[\ref{e.veil}]). The dependences on $\theta_0$ and $i$ arise through their influence on the projected area of each continuum source. The parameter $\theta_0$ sets the location of the veiling continuum on the stellar surface, and hence directly affects how much of the veiled area is projected toward the observer. The projected area of the stellar continuum is less sensitive to the viewing angle, but it also changes because of the opaque disk extending from $R_i$ (which depends on $\theta_0$ and $f$) to infinity. For both $\theta_0=20.7^\circ$ and $30^\circ$, $r_Y$ at a given $f$ drops monotonically as $i$ increases from pole-on to edge-on, by factors of 5 and 3 respectively. At $\theta_0=45^\circ$, $r_Y$ varies less, dropping by a factor of 1.6 from $i=26^\circ$ to $73^\circ$, then increasing slightly toward $i=84^\circ$. For example, when $R_0=4R_*$ ($\theta_0=30^\circ$) and $f=0.1$, $r_Y$ ranges from 0.4 to 1.15 and $r_V$ ranges from 1.5 to 4.5 with viewing angle. In sum, a strong red absorption extending to high velocities, like those observed, requires $f$ (and thus $r_\lambda$) to be large, so that the contribution from scattering of the veiling continuum is enhanced and the angular extent of the flow on the star is increased. Strong, broad absorption is also more likely when $R_0$ is large and the line of sight parallels the accretion flow close to the star.
The relation between absorption magnitude and $r_\lambda$ is a crucial test of the dipolar accretion model, as we will show in the following subsection when we compare the observations to our model profiles.\\ \subsubsection{Basic Dipolar Flow: Comparison to Observations} In comparing our models with observed profiles, we focus on the red absorptions, since the small emission at blueward velocities expected from scattering in the funnel flow will often be overwhelmed by additional sources of emission, such as scattering and in-situ emission from a wind. The red absorptions are evaluated in the context of the observed veiling, which is the basis for estimating $f$. It is immediately apparent that there is a mismatch between the model profiles in Figures~\ref{f.simr2}, \ref{f.simr4}, and \ref{f.simr8} and the observed spectra in Figure~\ref{f.redabs}, since the majority of CTTS are known to have $f \lesssim0.01$ (CG) while model sequences for $f=0.01$ (implied $r_Y \sim 0.06$ and $r_V \sim 0.25$) have shallow and/or narrow red absorptions bearing little resemblance to the ensemble of broad and deep observed \ion{He}{1}~$\lambda$10830\ profiles. A more explicit demonstration of the limitation of the models can be made from a quantitative comparison between the equivalent width and veiling for model and observed profiles. This comparison requires normalizing the observed profiles to their respective escape velocities, since in the models all velocities are in units of the escape velocity. The normalized equivalent width, $W_\lambda'=W_\lambda/V_{\rm esc}$, has an intuitive interpretation: it is simply the fraction of the continuum absorbed between rest and the escape velocity, with a value of 1 indicating total absorption over the entire range. Figure~\ref{f.comp} compares the normalized red absorption equivalent width $W_\lambda'$ to $r_Y$ and $r_V$ for both models and observations of the reference sample.
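Operationally, $W_\lambda'$ amounts to integrating the fractional absorption depth over the redshifted side of the profile and dividing by the escape velocity. The sketch below is our own schematic illustration of that definition on a toy profile (not the measurement code used for the actual spectra):

```python
def normalized_ew(velocities, fluxes, v_esc):
    """Red-absorption equivalent width normalized to the escape velocity.

    velocities: km/s, ascending; fluxes: continuum-normalized profile.
    Only subcontinuum points at redshifted (positive) velocity contribute.
    """
    ew = 0.0
    for k in range(len(velocities) - 1):
        v0, v1 = velocities[k], velocities[k + 1]
        if v0 < 0.0:
            continue  # ignore the blueshifted side of the profile
        d0 = max(0.0, 1.0 - fluxes[k])      # absorption depth (>= 0)
        d1 = max(0.0, 1.0 - fluxes[k + 1])
        ew += 0.5 * (d0 + d1) * (v1 - v0)   # trapezoidal rule
    return ew / v_esc

# Toy profile: a flat absorption of depth 0.5 between 80 and 320 km/s
# (0.2 to 0.6 of the 0-to-V_esc range deep) gives W' ~ 0.5 * 0.6 = 0.3.
v_esc = 400.0
vels = [k * 4.0 for k in range(-100, 101)]   # -400 to 400 km/s
flux = [0.5 if 80.0 <= v <= 320.0 else 1.0 for v in vels]
print(round(normalized_ew(vels, flux, v_esc), 2))  # ~0.3
```

A $W_\lambda'$ of 0.3 would thus correspond, for example, to half-depth absorption over 60\% of the velocity range below $V_{\rm esc}$.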
In the models we have assumed that the excesses at both $Y$ and $V$ arise from an accretion shock that emits an 8000-K blackbody continuum. This is known to be a valid assumption for optical veilings, and while the $r_V$ data points are not simultaneous with the observed absorption profiles, the fact that all but one of the objects with $r_Y=0$ also have low $r_V$ indicates that this is a reasonable approach. Unless the $r_V$ values for these objects were all a factor of 5 to 10 higher when the \ion{He}{1}~$\lambda$10830\ profiles were obtained than when the HEG data were obtained, the two panels together clearly indicate that only a fraction of the observed data lie within the realm of the model results: those with weak red absorption and small veiling or those with modest red absorption and intermediate veiling. There is a glaring discrepancy between models and observations for stars with large $W_\lambda'$ and small $r_Y$. \begin{figure} \epsscale{1.2} \includegraphics[angle=90,width=0.5\textwidth]{f15.eps} \figcaption{Comparison of the red absorption equivalent width (normalized to the escape velocity) to the 1-$\micron$ veiling (left) and the non-simultaneous average optical veiling (right; from HEG) for basic dipolar models and the profiles from the reference sample. The model properties appear as lines connected by symbols. Each symbol type is for a different $R_0$ / $\theta_0$, with circles for $R_0=2R_*$, asterisks for $R_0=4R_*$, and diamonds for $R_0=8R_*$. Each line type is for a different viewing angle, with solid black for $26^\circ$, solid gray for $46^\circ$, dotted for $60^\circ$, dashed gray for $73^\circ$, and dashed black for $84^\circ$. Along a line, symbols indicate filling factors $f=0.01$, 0.05, and 0.10, always increasing toward increasing veiling. 
Since the veiling axes are logarithmic, stars with no detected 1-$\micron$ veiling are placed at $r_Y=0.025$.\label{f.comp}} \epsscale{1} \end{figure} The values for $W_\lambda'$ and adopted escape velocities are listed in Table~\ref{t.normal} along with two additional properties of the red absorption: the normalized width and the depth. The normalized full-width at quarter-minimum, FWQM$'$, is the width measured at one quarter of the absorption minimum as a fraction of the escape velocity. The depth of the absorption component at 0.75~$V_{\rm esc}$, expressed as a percentage of the continuum, $D_{0.75}$, was chosen since it is sensitive to the infall geometry close to the star. As with $W_\lambda'$, a number of stars indicate a discrepancy with the basic dipole model: objects with small veiling frequently have both FWQM$'$ and $D_{0.75}$ values much larger than the models can produce. This is illustrated in Figure~\ref{f.comp2}, which shows the comparison of the velocity-normalized equivalent width, velocity-normalized line width, and high-velocity depth to 1-$\micron$ veiling for observations and models.
As before, stars with no detected 1-$\micron$ veiling are placed at $r_Y=0.025$.\label{f.comp2}} \epsscale{1} \end{figure} \begin{deluxetable}{lcccc} \tablecaption{Measurements of Red Absorption in Velocity-Normalized \ion{He}{1}~$\lambda$10830\ Profiles\label{t.normal}} \tablewidth{3in} \tablehead{\colhead{Object} & \colhead{$V_{\rm esc^*}$\tablenotemark{a}} & \colhead{$W_\lambda'$} & \colhead{FWQM$'$} & \colhead{$D_{0.75}$} \\ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)}} \startdata AA Tau\dotfill & 390 & 0.32 & 0.79 & 10 \\ BM And\dotfill & 510 & 0.16 & 0.53 & 0 \\ CI Tau\dotfill & 370 & 0.10 & 0.78 & 7 \\ CY Tau\dotfill & 310 & 0.10 & 0.32 & 4 \\ DK Tau\dotfill & [380] & 0.22 & 0.84 & 14 \\ DN Tau\dotfill & 300 & 0.11 & 0.47 & 11 \\ DR Tau\dotfill & [360] & 0.05 & 0.44 & 10 \\ DS Tau\dotfill & 570 & 0.05 & 0.42 & 0 \\ FP Tau\dotfill & 200 & 0.08 & 0.60 & 3 \\ GI Tau\dotfill & 450 & 0.20 & 0.53 & 6 \\ GK Tau\dotfill & 350 & 0.04 & 0.40 & 0 \\ HK Tau\dotfill & 320 & 0.05 & 0.31 & 0 \\ LkCa 8\dotfill & 370 & 0.11 & 0.43 & 5 \\ RW Aur B\dotfill & 580 & 0.14 & 0.40 & 0 \\ SU Aur\dotfill & 490 & 0.09 & 0.37 & 0 \\ TW Hya\dotfill & 520 & 0.07 & 0.33 & 1 \\ UY Aur\dotfill & 400 & 0.05 & 0.32 & 1 \\ UZ Tau E\dotfill & 340 & 0.01 & 0.18 & 1 \\ UZ Tau W\dotfill & 260 & 0.07 & 0.54 & 3 \\ V836 Tau\dotfill & 440 & 0.11 & 0.39 & 2 \\ YY Ori\dotfill & [430] & 0.13 & 0.49 & 11 \\ \enddata \tablecomments{For the reference sample only. 
Col.~2: Adopted escape velocity (km~s$^{-1}$); Col.~3: Equivalent width of velocity-normalized absorption (dimensionless); Col.~4: Full-width at quarter-minimum of velocity-normalized absorption (dimensionless); Col.~5: Depth at 75\% of the escape velocity as a percentage of the continuum.} \tablenotetext{a}{Brackets indicate $V_{\rm red}/V_{\rm esc}>1$, so we assume $V_{\rm esc^*}=V_{\rm red}/0.9$.} \end{deluxetable} This comparison demonstrates that a fraction of CTTS with subcontinuum red absorption at \ion{He}{1}~$\lambda$10830\ have red absorptions too strong to be accounted for by magnetospheric accretion in a basic dipole, where the filling factor of the flow on the stellar surface $F$ is equivalent to the filling factor of shocked gas at the terminus of accreting field lines $f$. The observations that present the greatest challenge to the model are those in which the red absorption is strong ($W_\lambda'\ge0.1$) but the veiling is weak ($r_Y\le0.1$). This conclusion is robust, since the models have been constructed to produce maximal red absorption for a given $R_0$ and $f$, in that the $\lambda$10830 transition is assumed to be optically thick and the thermal/turbulent broadening has a generous 10~km~s$^{-1}$\ half-width. This conclusion is not compromised by the choice of 8000~K for the temperature of the shock-heated photosphere. This value corresponds to the low end of the temperature range derived from modeling the SEDs of observed continuum excesses \citep{har91,gul98,joh00}. If higher temperatures were assumed, the veiling for a given $f$ would be even larger, worsening the agreement between the models and the observations. If we adopted the lowest temperature allowed by the SED models of the optical continuum excess, $T\sim6000$~K, the associated veiling for a given $f$ would be reduced by a factor of $\sim2$ at $Y$ and a factor of $\sim3$ at $V$. 
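The quoted reduction factors follow directly from the ratio of Planck functions at the two shock temperatures. The short sketch below (an illustrative aside in Python, not part of the original analysis) evaluates $B_\lambda(8000~{\rm K})/B_\lambda(6000~{\rm K})$ at representative $Y$- and $V$-band wavelengths; since the veiling at a given $f$ scales with the shock blackbody flux, this ratio is the factor by which the veiling drops.

```python
import math

def planck(lam_um, T):
    """Planck spectral radiance B_lambda (arbitrary units) at wavelength lam_um [micron]."""
    hc_over_k = 14387.77  # hc/k in micron K
    x = hc_over_k / (lam_um * T)
    return lam_um**-5 / (math.exp(x) - 1.0)

# Dropping the shock temperature from 8000 K to 6000 K reduces the veiling,
# for a fixed filling factor f, by the ratio B(8000 K)/B(6000 K):
ratio_Y = planck(1.083, 8000) / planck(1.083, 6000)  # ~1.9 at Y (1.083 micron)
ratio_V = planck(0.550, 8000) / planck(0.550, 6000)  # ~3.1 at V (0.55 micron)
print(ratio_Y, ratio_V)
```

These ratios reproduce the factors of $\sim2$ at $Y$ and $\sim3$ at $V$ quoted above.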
Figure~\ref{f.comp} demonstrates that shifting all the model results to the left by a factor of 2 in $r_Y$ or 3 in $r_V$ is still insufficient to account for the strong absorptions and low veilings. We thus conclude that those profiles with strong absorptions and small veilings lie outside the realm of model results for self-consistent dipole flows. \subsection{Dilution} A simple way to keep the veiling small and yet have the accretion flow project a broad velocity range in front of the star is to let the flow arise over a large range of $R$ (thus impacting the star over a large range of $\theta$) but to fill the whole enclosed volume only {\em dilutely} with accreting gas. We now distinguish between $F$, the fractional surface area on the star over which the magnetospheric footpoints are distributed, and $f$, the fractional surface area on the star occupied by accretion shocks at the base of field lines that carry accreting gas. We define $f'\equiv f/F$ as the fraction of $F$ occupied by all the accretion shocks. With enough dilution, i.e., $f'$ sufficiently small, $F$ can be large enough to provide the areal coverage over a large velocity range that is necessary for a broad and deep red absorption, while $f=Ff'$ can remain small, as required to produce a low veiling. One way to achieve this is to postulate a large number of narrow accretion streamlets spatially separated from one another that together impact only a fraction $f'$ of the outlined area $F$. (We assume the many accretion shocks are dispersed randomly throughout $F$.) Then, with an intrinsic thermal or turbulent line broadening of $\sim10$~km~s$^{-1}$\ associated with each streamlet, photons from the star can intersect a sufficient number of streamlets such that the continuum (stellar or veiling) will be absorbed over the full velocity range specified by the parameter $F$ as though the whole volume were filled. 
The concept of many accretion streamlets dilutely filling a volume has the additional advantage of offering a credible explanation for how the lower level of $\lambda$10830 ($2s~{}^3S$) is populated over all streamlines. With the difficulty of maintaining a temperature high enough ($\gtrsim2\times10^4$~K) for collisional excitation to the $2s~{}^3S$\ level in a freely falling gas, it is likely that photoionization is the excitation mechanism. Then, if the source of ionizing photons is the accretion shock itself, the much smaller shocked area of an individual streamlet within a diluted flow, as compared to the shocked area of a single undiluted flow, will enable more ionizing photons to escape from the sides and ionize the gas in other streamlets, even at positions far from the star. Or, if the dominant source of ionizing radiation is located away from the streamlets (e.g., the stellar corona), these photons will be able to penetrate into the volume and ionize individual streamlets as opposed to ionizing just the skin of a single completely filled accretion flow. Thus, many narrow streamlets dilutely filling a large volume not only yield a deep red absorption from the large coverage area over a broad velocity range of infalling gas, but they also readily account for the ionization of gas at each location in the flow to produce an optically thick $\lambda$10830 transition over the whole velocity range. \subsubsection{Profiles for Wide, Dilutely Filled Flows} We compute scattering profiles for diluted dipole flows for the same 3 geometries shown earlier, with ($R_0$, $\theta_0$) pairs of (2 $R_*$, $45^\circ$), (4 $R_*$, $30^\circ$), and (8 $R_*$, $20.7^\circ$). We introduce a wider range in $F$, from 0.01 to 0.2, although now all models have $f=0.01$, corresponding to a range in $f'$ from 1 to 0.05. The resulting profiles are shown in Figure~\ref{f.dilute}, and the model parameters are listed in the lower portion of Table~\ref{t.space}. 
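The bookkeeping among $F$, $f$, and $f'$ in these models is simple enough to verify directly; the fragment below (an illustrative aside, not from the original text) reproduces the quoted $f'$ range for the fixed $f=0.01$ models.

```python
f = 0.01  # filling factor of accretion shocks, fixed for all diluted models
for F in (0.01, 0.05, 0.1, 0.2):  # areal coverage of magnetospheric footpoints
    f_prime = f / F  # f' = f/F, the fraction of F that carries accreting gas
    print(F, round(f_prime, 2))
# f' runs from 1 (undiluted, f = F) down to 0.05 at F = 0.2
```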
In the figure, the 3 columns correspond to the 3 $R_0$ values and each row is a common value of $f'$. The degree of dilution increases downward in the figure, with the case for no dilution shown in the top row ($f'=1$ and $f=F$) repeated from Figures~\ref{f.simr2}, \ref{f.simr4}, and \ref{f.simr8}. In subsequent rows the dilution grows as $F$ increases to 0.05, 0.1, and finally 0.2. Each panel shows the superposed profiles for all 5 viewing angles for each $R_0$, $f'$ (or $F$) combination and the corresponding inner and outer radii of the accreting volume, $R_i<R<R_f$. Since the effect of the viewing angle on the profile morphology is roughly independent of dilution, the individual viewing angles can be identified by referring to the earlier figures. We highlight the $i=60^\circ$ profile with a darker line, since this is the most probable viewing angle. \begin{figure} \epsscale{1.2} \plotone{f17.eps} \figcaption{Red side of scattering profiles for a series of ``diluted'' dipoles all with $f=0.01$ ($r_Y \le 0.11$) but with $F$ ranging from 0.01 (top row) to 0.2 (bottom row). From left to right, columns correspond to $R_0=2$, 4, and 8~$R_*$, and the range of $R$ for each $F$ is specified. The profile sequences for each panel correspond to viewing angles of $26^\circ$, $46^\circ$, $60^\circ$ (black), $73^\circ$, and $84^\circ$. The top row with $F=f=0.01$ corresponds to the models shown in the upper rows of Figures~\ref{f.simr2}, \ref{f.simr4}, and \ref{f.simr8}. For comparison with observations, dotted horizontal lines mark depths of 20\% and 30\% and crosses mark a depth at $V/V_{\rm esc}=0.75$ ($D_{0.75}$) of 10\%. \label{f.dilute}} \epsscale{1} \end{figure} Although all profiles in Figure~\ref{f.dilute} are for $f=0.01$, the associated veilings differ slightly because the accretion-heated area is distributed differently for different values of $F$ and $\theta_0$, leading to slightly different projected areas. Nonetheless, in all cases, $r_Y<0.11$. 
For these small veilings, the absorption, while quite strong when there is significant dilution, is almost entirely due to scattering of the stellar continuum, in contrast to the undiluted models where large veilings and scattering of the veiling continuum were necessary to produce strong absorption. As dilution increases for a given $R_0$, the red absorptions become increasingly strong and broad, due to the increased areal coverage over a broader range of velocities as the interval between $R_i$ and $R_f$ increases. For example, the maximum penetration depth of the red absorption into the continuum, $D_{\rm max}$, increases from 10\% to 30\% for $R_0=2R_*$ between an undiluted and an $f'=0.05$ flow. As before, larger $R_0$ also increases the areal coverage and thus the depth and breadth of the absorption: For $R_0=4R_*$ and $F=0.2$, $D_{\rm max}$ reaches 50\% of the stellar continuum for all viewing angles. In the unrealistic case of $R_0=8R_*$ and $F=0.1$, where $R_f$ extends to $35~R_*$, $D_{\rm max}$ can be 70\% of the stellar continuum. However, for flows confined to maximum sizes on the order of corotation, the deepest penetrations are about 50\% of the stellar continuum. \subsubsection{Further Comparison to Observations} By introducing the concept of a diluted dipole, where field lines carrying accreting gas only dilutely fill the volume occupied by a wide magnetosphere, we can simultaneously generate deep and broad red absorption features while maintaining small filling factors for hot accretion shocks with $f\sim0.01$. This is the empirical regime in Figure~\ref{f.comp} where dipolar models with $f=F$ were unable to account for stars with both strong absorption (large $W_\lambda'$) and very low veiling ($r_Y\sim0$). 
We compare observed and model profiles for a few individual stars in Figure~\ref{f.fits}, in 4 cases for undiluted, fairly narrow dipoles for stars with $r_Y$ ranging from 0 to 0.4 and in 2 cases for wide, diluted dipoles with $r_Y\sim0$, where we have rescaled the model profiles to the escape velocity of each star. Since we have not computed a large grid of models, the magnetospheric properties listed for each fit are not intended to be predictions for a particular star. However, this fitting procedure shows that weaker red absorptions can be reasonably described by basic undiluted models with a small range of origination radii in the disk, where veilings $r_Y$ from 0 to 0.4 can be consistently modeled with an appropriate choice of $f$, and the red absorption can include scattering contributions from both stellar and accretion shock continua. Similarly, strong red absorptions in stars with low veilings can be well fit by dilutely filled flows with small $f$ but a wide span of origination radii in the disk, resulting in a large projected area of accreting gas for the scattering of the stellar continuum. \begin{figure} \epsscale{1.2} \plotone{f18.eps} \figcaption{Examples of least-squares fits of dipolar model profiles (gray) to selected observations (black), with the stellar escape velocities marked by short vertical lines. The top four panels use dipoles with $F=f$, and the corresponding $f$, $r_Y$, and $i$ are shown. The bottom two panels have strong red absorptions and no detected veiling; they are well fit by extended dilute dipolar models ($R_0=4R_*$, $F=0.2$ for GI Tau; $R_0=4R_*$, $F=0.1$ for V836 Tau). Since processes other than scattering by the accretion flow can be important at low velocities, points with $V/V_{\rm esc}\le0.1$ are ignored in the fitting procedure. 
\label{f.fits}} \epsscale{1} \end{figure} The overall applicability of the diluted dipolar model can be appreciated by comparing the model profiles from Figure~\ref{f.dilute} to the ensemble of observed helium profiles for those stars with $r_Y\le0.1$ and thus $f \sim 0.01$, where the effect of scattering from a hot accretion shock will be inconsequential and the properties of the red absorption will be shaped almost entirely by scattering of the stellar continuum. To effect this comparison in a general way, rather than focusing on individual stars, in Figure~\ref{f.normed} we plot superposed observed profiles for the redward side of \ion{He}{1}~$\lambda$10830, each normalized to their respective escape velocity and separated into 3 groups on the basis of their depths both at 0.75~$V_{\rm esc}$\ ($D_{0.75}$) and at maximum absorption ($D_{\rm max}$). To aid in the comparison, both Figures~\ref{f.dilute} and \ref{f.normed} denote depths for $D_{0.75}=10\%$ and $D_{\rm max}= 20\%$ and 30\%. \begin{figure} \epsscale{1.2} \plotone{f19.eps} \figcaption{Superposed \ion{He}{1}~$\lambda$10830\ lines from the reference sample for the 13 stars with $r_Y\le0.1$, appropriate for modeling with dilute dipole flows with $f=0.01$. Only the red half of the profile is shown, normalized to the individual escape velocity of each star. Profiles are grouped by $D_{0.75}$, the penetration depth into the continuum at $V/V_{\rm esc}=0.75$, and by $D_{\rm max}$, the maximum penetration into the continuum. For comparison with models, the dotted cross in each panel marks $D_{0.75}=10\%$, and the dotted horizontal lines mark depths of 20\% and 30\%. \label{f.normed}} \epsscale{1} \end{figure} In Figure~\ref{f.normed} the left panel contains the 3 shallowest profiles, with $D_{\rm max}\le20$\% and $D_{0.75}<10$\%. 
Compared to the predicted profiles in Figure~\ref{f.dilute}, the model flows that most resemble such broad but shallow profiles have $R_0\sim2R_*$ and $F\lesssim0.1$, although some viewing angles for larger flows with relatively small areal coverage of magnetic footpoints, $F\lesssim0.05$, could also apply. Our coverage of parameter space is not exhaustive, but it is clear that for the broad but shallow red absorptions the range of radii over which the accretion flow leaves the disk is narrow, corresponding to a fairly small area on the star for the magnetospheric footpoints, but still larger than 1\%. The central and right panels contain the 10 deeper profiles among the stars with low veiling, where $D_{\rm max}$ ranges from 30 to 60\%. Model flows that produce deeper profiles generally have significant areal coverage of magnetic footpoints $F$, as seen in Figure~\ref{f.dilute} where the accretion flow leaves the disk over a wide range of radii, impacting the star over a wide range of angles, in some cases with magnetic footpoint coverage up to 20\% of the stellar surface area. This is considerably larger than has been modeled in previous work on magnetospheric infall. Even with wide diluted flows, the profiles of the 3 stars in the right panel of Figure~\ref{f.normed} (AA Tau, DK Tau, DN Tau) are a challenge to explain under the constraints of a dipolar geometry. These profiles not only have $D_{\rm max}>30$\% but also have $D_{0.75}\ge10$\%, with the caveat that errors in escape velocity may be up to 20\%. From the models explored in Figure~\ref{f.dilute}, flows with very wide extents, leaving the disk over a range of radii from a few $R_*$ to beyond corotation and viewed fairly close to pole-on, are required to produce profiles with $D_{0.75}\ge10$\%. 
Rather than postulate an enormous dipolar flow with a polar viewing angle (which is clearly not the case for, at least, the edge-on source AA Tau), in the next section we will explore an example of a non-dipolar geometry to find a more plausible explanation for these three observations. Only 8 CTTS in the reference sample have $r_Y>0.1$, such that the properties of the \ion{He}{1}~$\lambda$10830\ red absorption may be affected by scattering of continuum photons from the hot accretion shock. One of these is DR Tau, where the high $r_Y=2$ implies $f\approx0.24$ (eq.~[\ref{e.veil}]), which as shown in Section 4.1 would yield a red absorption at least an order of magnitude stronger than the observed $W_\lambda'=0.05$. As will be addressed in Section 5, we suspect that in this case the red absorption has been filled in by a wind exterior to the accretion flow. \subsection{Diluted Radial Flows} We have identified the 3 stars in the right panel of Figure~\ref{f.normed}, AA Tau, DK Tau, and DN Tau, as difficult to explain with scattering in a dipolar geometry due to their absorption depths at velocities in excess of 0.5 $V_{\rm esc}$. In a dipolar flow, the impact velocity at the stellar surface depends on the polar angle $\theta$, which is determined by the initial distance of infall $R$ (eq.~[\ref{e.dipole}]), such that the impact velocity is greatest when $\theta$ is near the pole (i.e., $R$ is large) and diminishes as $\theta$ approaches the equator (i.e., $R$ becomes small). Thus if $\theta$ is small enough, high impact velocities will result, although flows with small $\theta$ become highly curved and pinched as they reach the star (Fig.~\ref{f.geom}), resulting in small areal coverage and thus a shallow absorption profile at the highest velocities. 
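The dependence of impact velocity on origination radius can be quantified with energy conservation: for gas falling from rest at radius $R$, neglecting rotation and gas pressure, the impact velocity is $v = V_{\rm esc}\sqrt{1 - R_*/R}$, regardless of the trajectory's shape. A short illustrative check (an aside, not part of the original text):

```python
import math

def v_impact(R_over_Rstar):
    """Impact velocity at the stellar surface, in units of V_esc, for gas
    falling from rest at radius R (energy conservation, gravity only)."""
    return math.sqrt(1.0 - 1.0 / R_over_Rstar)

# Material originating at small radii cannot reach high infall speeds:
print(v_impact(2.0))  # ~0.71 V_esc from R = 2 R_*
print(v_impact(8.0))  # ~0.94 V_esc from R = 8 R_*
```

This is why deep absorption at velocities well above $0.5~V_{\rm esc}$ requires flows that originate at several stellar radii or beyond.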
\begin{figure*} \includegraphics[angle=90,width=\textwidth]{f20.eps} \vskip -1.42in \figcaption{Scattering profiles for diluted radial infall in non-rotating, azimuthally symmetric flows that begin at 8~$R_*$ and impact the star over a range of polar angles $\theta$ that encompasses $F=20\%$ of the stellar surface area. Accreting field lines and their accretion shocks fill only 5\% of $F$ (i.e., $f=1\%$), with 1-$\micron$ veilings $r_Y$ as listed. In the top row, the flow impacts the star over the range $66^\circ<\theta<78^\circ$, while in the bottom row, the flow impacts the star over the range $78^\circ<\theta<90^\circ$. The same five viewing angles are used as in previous figures.\label{f.radial}} \end{figure*} We investigate radial infall trajectories as an alternative geometry that could produce deep absorption at high velocities. Apart from the geometry and the omission of rotation, the radial models share the same assumptions as our dipolar models. The axisymmetric flow begins at some distance from the star $R_{\rm max}$, and it falls radially toward the star, impacting the stellar surface between polar angles $\theta_1$ and $\theta_2$ in one hemisphere and between $\pi-\theta_1$ and $\pi-\theta_2$ in the other. The fractional surface of the star spanned by the accretion flow, $F$, is $\cos\theta_1-\cos\theta_2$, and the shocks within this region together occupy a fraction $f'$ of the area $F$, so that $f=Ff'$. The disk truncation radius is a free parameter, but we set it equal to $R_{\rm max}$, which is 8~$R_*$ in all radial models. Figure~\ref{f.radial} shows scattering profiles from two radial geometries at 5 viewing angles. In the top row, the impact region extends from $\theta_1=66.4^\circ$ to $\theta_2=78.5^\circ$, while in the bottom row, the impact region extends from $\theta_1=78.5^\circ$ to the equator. In both cases, $F=0.2$ and $f=0.01$. 
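The relation $F=\cos\theta_1-\cos\theta_2$ can be checked against the two impact zones quoted above; the fragment below (an illustrative aside, not part of the original text) confirms that both geometries cover $F\approx0.2$ of the stellar surface.

```python
import math

def coverage(theta1_deg, theta2_deg):
    """Fraction of the total stellar surface covered by the band
    theta1 < theta < theta2 plus its mirror band in the opposite
    hemisphere: F = cos(theta1) - cos(theta2)."""
    return math.cos(math.radians(theta1_deg)) - math.cos(math.radians(theta2_deg))

F_top = coverage(66.4, 78.5)     # top row of the figure, ~0.20
F_bottom = coverage(78.5, 90.0)  # bottom row, band reaching the equator, ~0.20
print(F_top, F_bottom)
```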
As expected, the absorption is strongest for a viewing angle within the confines of the flow (i.e., $\theta_1<i<\theta_2$), and the profile becomes a nearly symmetric emission profile (assuming axisymmetry and no rotation) for views close to pole-on. When the viewing angle is aligned or nearly aligned with the column of absorbing gas, each radial model can produce the observed range of absorption depths at high velocities, with $D_{0.75}>10$\% for profiles with $i>60^\circ$ in the top row and $i>73^\circ$ in the bottom row. We are not advocating radial infall starting from a large distance, and thus, the profile sequences in Figure~\ref{f.radial} are not expected to be realistic for the whole velocity range. However, the requisite deep absorption at high velocities, resulting from material moving faster than $\sim2/3~V_{\rm esc}$, all arises inside of about 2~$R_*$. Thus the message from these calculations is that the accretion stream only need move in a radial trajectory, i.e., become less curved than a dipole, as it nears the star. \begin{figure} \epsscale{1.2} \plotone{f21.eps} \figcaption{High-velocity tails of observed and model profiles for 3 stars with the largest values of $D_{0.75}$ and $r_Y=0$, inverted to show the minimum fraction of the star occulted by infalling material at each velocity. Dark shading indicates the regime of diluted dipolar models with $f=0.01$ and infall contained entirely within a typical corotation radius, marked by the profile (dotted line) with $F=0.1$, $3.0<R/R_*<6.2$, and $i=46^\circ$. The hatched region indicates the extension when dipolar field lines out to $\sim$ twice the corotation radius participate in infall, marked by the profiles (dashed lines) with $F=0.2$, $2.4<R/R_*<15$, and $i=26^\circ$ or $46^\circ$. Light shading shows the regime for profiles formed in diluted radial infall with $F=0.2$ and $f=0.01$. 
For both radial geometries in Figure~\ref{f.radial}, two profiles (solid lines) with $i$ close to the infall angle are shown.\label{f.oplot}} \epsscale{1} \end{figure} The effectiveness of radial infall trajectories for material near the star in accounting for the high-velocity absorption in AA Tau, DK Tau, and DN Tau is shown in Figure~\ref{f.oplot}. The figure shows model and observed profiles where (1) profiles are inverted so the vertical axis is a measure of the minimum stellar coverage fraction at each velocity and (2) only velocities in excess of 0.5~$V_{\rm esc}$\ are plotted. The regime of diluted dipolar models with the largest $D_{0.75}$ is shown with dark and hatched shading, while the regime of flows with radial trajectories for gas near the star is shown with light shading. The dark shading is for the best case from our diluted dipolar models for a flow contained entirely within the corotation radius: $F=0.1$ originating between 3.0 and 6.2~$R_*$ in the disk and viewed from $i=46^\circ$. Although 8~$R_*$ is a more typical corotation radius, extending the flow out to this distance would not produce much additional absorption. The hatched region is for a diluted dipole that allows field lines extending out to nearly twice the corotation radius to participate in the flow, where the dashed lines are for the case $R_0=4R_*$, $F=0.20$ ($2.4<R/R_*<15$), seen from two viewing angles, $i=26^\circ$ and $i=46^\circ$. Although the latter two extreme dipolar models come close to producing sufficient absorption at high velocities, significant accretion beyond corotation is likely not physical. In contrast to the dipole trajectories, the regime of the four radial models from Figure~\ref{f.radial} with viewing angles nearly aligned with the infalling gas easily contains the observed stellar coverage fraction from 0.6 to 0.85~$V_{\rm esc}$, with no requirement that the flow originate at radii beyond corotation. 
The realistic situation likely involves complex magnetic field topologies, with trajectories becoming nearly radial as they approach the star. \section{DISCUSSION\label{s.discuss}} \subsection{Implications of Diluted Funnel Flows} The high opacity and resonance scattering properties of \ion{He}{1}~$\lambda$10830\ enable the geometry of magnetospheric accretion to be probed via absorption of gas seen in projection against the star, in contrast to previous studies that rely on the morphology of emission lines. Under the assumptions that the flow is an azimuthally symmetric dipole and helium is sufficiently optically thick that all incident 1-$\micron$ radiation is scattered, we have illustrated the sensitivity of the red absorption to both the angular extent of the magnetosphere and the filling factor of hot gas from the accretion shock $f$. If $f$ exceeds a few percent, the hot spot will be an important contributor to the scattering of the 1-$\micron$ continuum; however, since the strongest and broadest \ion{He}{1}~$\lambda$10830\ absorptions are seen in stars with little or no 1-$\micron$ veiling, these red absorptions must instead arise almost solely from scattering of photospheric radiation. Achieving the observed breadth and depth of the absorption requires a large angular coverage of the stellar continuum in the azimuthal direction over a wide range of velocities for many stars, with areal coverage in footpoints on the star of $F=10-20\%$. We suggest that the required combination of wide flows and low filling factors of hot gas is a result of accretion in many narrow streamlets, each of which may have a dipolar configuration but which together only fill a small fraction of the enclosed volume. We have explored the case where the streamlets are uniformly distributed through the accreting volume, producing wide, dilutely filled flows that reconcile the need for absorption over a broad range of velocities with filling factors of hot gas $f<1\%$, as observed (CG). 
Earlier studies also imply a discrepancy between the areal coverage $F$ of magnetospheric footpoints and the filling factor of hot accretion shocks $f$. For example, magnetospheres with $f=F=8\%$ were invoked to model hydrogen lines arising from accretion flows in order to produce sufficient line fluxes and mass accretion rates \citep{sym05,kur06}. The seminal sequence of papers modeling hydrogen line formation in funnel flows from \citet{har94} to \citet{muz01} also required filling factors that were larger than predicted by SED modeling of continuum excesses to account for observed emission line luminosities. The notion of accretion via streamlets that dilutely fill a large volume is a straightforward way to reconcile this discrepancy, simultaneously allowing large field sizes and small shock filling factors. Although our model invokes diluted accretion flows in widely distributed streamlets of gas, an alternate scenario for diluted accretion is the one suggested by the MHD simulations of \citet{rom04}, where internal structure within the accretion flow gives a mass accretion rate (and a corresponding blackbody continuum temperature) that is highest at the interior and falls off toward the sides. Although this scenario can also, for a large $F$, produce a smaller veiling from the area-weighted blackbody continua than the undiluted $f=F$ case, the advantage to widely dispersed streamlets is that they provide a facile means for ionizing radiation to penetrate to most of the infalling gas, since distributed accretion shocks with small individual areas would allow ionizing photons produced in each shock to escape more easily from the sides and ionize helium at other locations. Another consequence of such distributed accretion shocks is that photons from the shocks emitted toward the star would be incident on a larger area of the photosphere than for a single shock with the same $f$. 
This may invalidate the usual assumption of a plane-parallel geometry for the radiative transfer of photons with the effect that, independent of the internal structure within an individual streamlet, the resultant veiling continuum would encompass a range in blackbody temperatures. In a wide flow where dilution is somewhat uniform, there will be many separate shocks with a range of blackbody temperatures surrounding them. There may be some observational support for this phenomenon in that the veiling continuum longward of 0.5~$\micron$ (\citealt{bas90}; \citealt{whi04}; EFHK) is broader than the single 8000-K blackbody that is a good match to the excess at shorter wavelengths (CG). Constraints on the angular extent of accreting gas and the location in the disk where infall originates are relevant to models for disk locking and wind launching. Although there are some cases where \ion{He}{1}~$\lambda$10830\ profiles resemble those expected from viewing an accretion funnel restricted to a narrow origination around the corotation radius, the suite of profiles expected from viewing this magnetic topology from all inclination angles is not consistent with the observations. The most extreme deep and broad absorptions instead require infall spanning a wide extent of origination radii, from a few $R_*$ out to at least typical corotation radii of 6 to 8~$R_*$ if the flows are dipolar. For other magnetic field configurations, such as a tilted dipole or a multipole field, significant red absorption need not require such a wide range in initial infall distances. In general, the depth of the red absorption is governed more by the range of impact latitudes than by the range of initial radii; only for a dipolar flow aligned with the rotational axis are the two ranges so closely linked. For example, in an aligned dipolar flow with $R_0=4R_*$ and $F=0.1$, the range in impact latitude of 23.6-35.3$^\circ$ corresponds to a range in initial radius of 3.0-6.2~$R_*$. 
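In an aligned dipole the footpoint angle and the initial radius are tied by $R = R_*/\sin^2\theta$ (eq.~[\ref{e.dipole}]), which reproduces the quoted 3.0-6.2~$R_*$ range if the 23.6-35.3$^\circ$ impact angles are read as polar angles $\theta$ measured from the rotation axis. A short illustrative check (an aside, not part of the original text):

```python
import math

def initial_radius(theta_deg):
    """Initial (equatorial-crossing) radius, in units of R_*, of the aligned
    dipolar field line whose footpoint lies at polar angle theta (measured
    from the rotation axis): R = R_* / sin^2(theta)."""
    return 1.0 / math.sin(math.radians(theta_deg))**2

print(initial_radius(23.6))  # ~6.2 R_*, outer edge of the flow
print(initial_radius(35.3))  # ~3.0 R_*, inner edge

# The same angles give the footpoint coverage (both hemispheres):
F = math.cos(math.radians(23.6)) - math.cos(math.radians(35.3))  # ~0.10
print(F)
```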
In a more complex magnetic configuration, a comparably wide range of impact angles could produce a strong red absorption without the need for such a large range of initial radii. The necessity for a dilutely filled flow does imply that there is not a sharp delineation on the disk for accretion onto the star. It likely indicates a very inhomogeneous field structure at large distances, with many local pockets distributed over a broad radial range on the disk giving rise to accretion streamlets. Since our analysis assumes axial symmetry in a set of nested dipolar flows, the constraints that the breadth and depth of the red absorption place on the angular extent of the accreting gas are even more extreme if, as likely, accretion channels are in restricted azimuth zones. Furthermore, there are some red absorptions that are so deep at velocities $\ge 0.5~V_{\rm esc}$ that a dipole morphology is inadequate, even when arising from 2 $R_*$ to the corotation radius. In these cases we find that radially directed infall can achieve the requisite depth of absorption, although other topologies that result in a large covering factor of the star at the highest velocities can likely be constructed. Recent Doppler tomographic maps of the CTTS V2129 Oph and BP Tau, based on circular polarization of \ion{Ca}{2} \citep{don07,don08}, reveal the locations of accretion hot spots on the stars. The spots span quite a broad latitude range (extending roughly from the pole to $45^\circ$) but a very narrow azimuthal range. The narrow azimuthal range implies that the detection of \ion{He}{1}~$\lambda$10830\ red absorption requires an opportune time at which the accretion spots are directly in view. This situation is consistent with the result that BP Tau, a mildly accreting CTTS included in our \ion{He}{1}~$\lambda$10830\ survey, did not show any \ion{He}{1}~$\lambda$10830\ red absorption on the two occasions we observed it. 
At present, there are not enough tomographic data to see how consistent this pattern is among a range of accreting stars, although our detection of subcontinuum red absorption in 21 of 38 CTTS, including 20 of 29 stars (and 37 of 56 total spectra) with $r_Y\le0.5$ ($r_V\le2$), would imply a large azimuthal coverage by the accretion spots. However, we note that even in the two stars with tomographic maps, there is a possibility that accretion impacts the stars over a wide range of longitudes. Donati et al.\ attribute only 2/3 of the \ion{He}{1}~$\lambda$5876\ emission but all of the \ion{He}{1}~$\lambda$5876\ circular polarization to accretion spots, based on the fraction of the emission that shows rotational modulation compared to that which is time-independent. The time-independent component, responsible for 1/3 of the \ion{He}{1}~$\lambda$5876\ emission, is attributed to a chromospheric component distributed uniformly over the stellar surface. However, since non-accreting WTTS show either very weak or, more commonly, no \ion{He}{1}~$\lambda$5876\ emission \citep{ber01}, it would appear that TTS chromospheres are not significant contributors to this line. Instead, the time-independent component may be from more widely distributed accretion shocks that cover a broader range of longitudes. \subsection{Absence of \ion{He}{1}~$\lambda$10830\ Red Absorption} In this paper we have focused on the 21/38 CTTS that show redshifted absorption in \ion{He}{1}~$\lambda$10830\ at least once in an observational program with sporadic time coverage. Clearly the absence of \ion{He}{1}~$\lambda$10830\ red absorption is also important in constraining the topology of magnetospheric accretion. An important point is that \ion{He}{1}~$\lambda$10830\ red absorption is rarely seen among CTTS with the highest 1-$\micron$ veiling (1/25 observations; see Section 3.1). 
Of the 9 stars in the EFHK survey in this category, the only one that showed redshifted absorption, on 1 of 4 occasions, is DR Tau. We suspect that in all 9 of these stars, emission from a wind exterior to the accretion flow, instead of from the flow itself, is filling in any redshifted absorption that may be present. If in-situ emission from the funnel flow were significant, it would be difficult for it to fill up the absorption at the red edge of the profile, since the geometry of the funnel flow results in smaller volumes at higher velocities, producing centrally peaked emission profiles that fall off rapidly toward both blue and red high velocities (see the contribution to the emission from scattering of the stellar continuum in Fig.~\ref{f.example}). The near absence of red absorption among these stars instead calls for a situation in which the redshifted absorption, if present, is filled in completely. In the case of DR Tau, it is clear that weak red absorption, confined to high velocities, is visible when the emission from the P Cygni wind profile is weakest (see Fig.~\ref{f.multi}). Among the other stars in this high-veiling group, all have either broad blue helium absorptions indicative of viewing through a stellar wind or strong helium emission interpreted to arise in a conical stellar wind viewed obliquely (see KEF and EFHK). Either of these contributions to redward emission would be sufficient to fill up even a strong red absorption, provided the wind is optically thick and exterior to the accretion flow. When profiles from both \ion{He}{1}~$\lambda$10830\ and \ion{He}{1}~$\lambda$5876\ are considered, the evidence suggests that \ion{He}{1}~$\lambda$10830\ red absorption is rare or absent in CTTS with large 1-$\micron$ veiling not primarily because the absorption is being filled in by wind emission but more because the geometry of the funnel flows is altered compared to that of low-veiling CTTS. 
This inference is drawn from a study of \ion{He}{1}~$\lambda$5876\ profiles and optical veiling presented in \citet{ber01}, which includes many stars in common with EFHK. They found that CTTS whose \ion{He}{1}~$\lambda$5876\ profiles showed only a narrow component, consistent with formation in post-shock gas from an accretion shock, show an excellent correlation between the strength of narrow-component helium emission and optical veiling. In contrast, CTTS whose \ion{He}{1}~$\lambda$5876\ profiles show a contribution from a broad component have reduced or absent emission from a narrow component relative to stars of similar optical veiling. This is not merely an esoteric point: it suggests that CTTS with strong stellar winds and high optical and 1-$\micron$ veilings may have crunched or otherwise-altered magnetospheres resulting in weak narrow-component emission from a hot accretion shock, and there is a significant contribution to the veiling continuum from another source. In contrast, CTTS without strong stellar winds (many of which show disk wind profiles at \ion{He}{1}~$\lambda$10830; see KEF) have extensive magnetospheres carrying accreting gas to the star, and hot accretion shocks are the dominant contributor to their optical veiling. We anticipate being able to test this suggestion shortly, following analysis of simultaneously obtained spectra extending from 0.4 to 2.2~$\micron$. A second point regarding the frequency of \ion{He}{1}~$\lambda$10830\ red absorption is that, in contrast to CTTS with high 1-$\micron$ veiling, red absorption is commonly seen in stars with lower veiling (37 out of 56 spectra for $r_Y \le 0.5$; see Section 3.1). Among this group, some objects (e.g., TW Hya and CY Tau; see Fig.~\ref{f.multi}) clearly show reduction of the red absorption as the emission, likely that of a stellar wind as indicated by the strong P Cygni profile, increases. 
In such stars the appearance and disappearance of the red absorption is likely due, at least in part, to filling in by an exterior stellar wind, as in DR Tau. In others (e.g., V836 Tau and GK Tau; Fig.~\ref{f.multi}), the weaker helium emission could arise simply from scattering in the funnel flow, and the absence of red absorption may indicate viewing at an azimuth with no funnel flow activity. Azimuthal asymmetry in the funnel flow is also the likely explanation for the strongly variable red absorption morphology in objects such as AA Tau and DK Tau (Fig.~\ref{f.multi}). The possibility that red absorption may be partially filled in, either by in-situ emission from the accretion flow or by scattered or in-situ emission from a wind, implies that the true strengths of some red absorptions are greater, and the constraints they place on the flow structure more stringent, than the observations indicate. Further, since red absorption can be completely filled in by in-situ wind emission or, in some cases, not be observed at all due to azimuthal inhomogeneities, \ion{He}{1}~$\lambda$10830\ red absorption is likely more pervasive among CTTS than is already apparent. \subsection{Size and Structure of the Accretion Flow} Inferences to date on the physical extent of accretion flows have largely relied on models positing that hydrogen and sodium lines are formed primarily in these flows \citep{cal00}. A correlation between the emitting area of the accretion flow and the magnitude of the mass accretion rate has been suggested by \citet{muz01} as the explanation for the well-established empirical correlation between infrared hydrogen line luminosities and accretion luminosities \citep{muz98,fol01,nat04}. The models of hydrogen line formation in magnetospheric flows predict that hydrogen line luminosities are primarily determined by the surface area of the accreting gas, not the density in the flow. 
The suggestion is that objects with higher accretion rates require larger emitting areas for their magnetospheres than objects with smaller accretion rates. Since more extended magnetospheres are expected on theoretical grounds in objects with lower disk accretion rates, a further suggestion is that high-accretion-rate objects have wider azimuthal coverage of accreting columns. The red absorption profiles of \ion{He}{1}~$\lambda$10830\ give new insight into this phenomenon, since we have a clear indication of very extended and wide flows in stars with low accretion rates. For example, our limited phase coverage of the edge-on system AA Tau shows that at the same time exceptionally strong red absorption at \ion{He}{1}~$\lambda$10830\ is observed, requiring extensive but dilutely filled accretion flows, the hydrogen P$\gamma$ profile is weak, narrow, and symmetric, suggesting a small magnetospheric emitting area if it is formed in the accretion flow. We anticipate that time-monitoring campaigns combining profile monitoring of both \ion{He}{1}~$\lambda$10830\ and the immediately adjacent P$\gamma$ line of hydrogen will provide a definitive assessment of the size and azimuthal coverage of the funnel flow and possibly also clarify the origin of the correlation between infrared hydrogen line luminosities and the accretion luminosity. \section{CONCLUSIONS} We have probed the geometry of magnetospheric accretion in classical T Tauri stars by modeling red absorption at \ion{He}{1}~$\lambda$10830\ via scattering of the stellar and veiling continua. Between 2001 and 2007, we acquired 81 1-$\micron$ spectra of 38 CTTS spanning the full observed range of mass accretion rates. 
Of the 38 stars, 1 of 9 with $r_Y>0.5$ and 20 of 29 with $r_Y\le0.5$ show red absorption at \ion{He}{1}~$\lambda$10830\ that extends below the 1-$\micron$ continuum in one or more spectra, demonstrating that red absorption from magnetospheric accretion is rare in objects with high veiling but is found in about two-thirds of objects with moderate to low veiling. The red absorptions can be strong, deep, and broad, with equivalent widths up to 4.5~\AA, maximum penetrations into the 1-$\micron$ continuum up to 61\%, and widths at one quarter of the absorption minimum up to 320~km~s$^{-1}$; furthermore, they tend to be strongest in stars with the lowest veilings. We model the red absorption by assuming that an axisymmetric dipolar accretion flow scatters photons from the star and from hot zones in the accretion-heated photosphere that produce the 1-$\micron$ veiling and have filling factor $f$. Testing a range of magnetosphere widths and $f$ consistent with shock filling factors from the literature, we find that about half of the absorption profiles can be explained by dipolar flows in which the width of the flow is consistent with the shock filling factor $f$. Weak absorptions in stars with weak veiling and intermediate absorptions in stars with intermediate veiling are explained by such flows, but strong absorptions in stars with little to no veiling are not. We introduce the concept of dilution as a means of producing a strong red absorption while keeping the filling factor and thus the veiling low. In a diluted flow, the magnetosphere can extend over a wide range of radii, with a large covering factor on the stellar surface, but this volume is incompletely filled by accreting gas. 
Instead of a single thick flow, we posit multiple nested streamlets with a total filling factor small enough for a low veiling, but each with an intrinsic thermal or turbulent width sufficient to scatter photons as though the entire volume were filled, thereby yielding a large red absorption. The multiple streamlets can also explain how helium is ionized through the entire flow, rather than just the skin of a thick flow. Large, dilutely filled accretion flows are necessary for about half of the objects, some of which require accreting streamlets to connect to the disk over a range from 2~$R_*$ out to or beyond corotation. A few stars show such deep absorption at redward velocities exceeding 50\% of the stellar escape velocity that flows near the star with less curvature than a dipolar trajectory seem to be required. The frequency of \ion{He}{1}~$\lambda$10830\ red absorption is also informative. Our limited temporal coverage suggests that the frequency of helium absorption differs in stars with high and low veiling. Red absorption at \ion{He}{1}~$\lambda$10830\ is far more common in stars with low veiling. When it is absent from these stars, it is sometimes because helium emission from another source such as a wind fills it in and sometimes because of inhomogeneous azimuthal coverage of accreting magnetic columns. Among stars with high veiling ($r_Y\ge0.5$), red absorption at \ion{He}{1}~$\lambda$10830\ is rarely seen. If these stars had accretion geometries similar to those of the low-veiling stars, they would be expected to have extremely strong red absorptions. Even if the absorption were filled in by emission from the accretion flow, the stars would still be expected to show red absorption at high velocities. 
In the high-veiling stars, the paucity of \ion{He}{1}~$\lambda$10830\ red absorption, the presence of \ion{He}{1}~$\lambda$10830\ emission and blue absorption that suggest formation in accretion-powered stellar winds, and the weakness or absence of narrow-component \ion{He}{1}~$\lambda$5876\ emission from an accretion shock lead us to suggest that the magnetospheric accretion structure may be crunched or otherwise reduced in CTTS with the highest disk accretion rates. We find the study of \ion{He}{1}~$\lambda$10830\ red absorption due to infalling gas projected in front of the star to be complementary to studies of emission lines modeled as arising over the full size of the accretion flow. The proximity of \ion{He}{1}~$\lambda$10830\ and P$\gamma$ makes them an excellent pair of lines for deeper investigation of magnetospheric geometries through intensive time-monitoring programs that can track non-axisymmetric structures as stars rotate. Our limited phase coverage of AA Tau demonstrates that this approach will be very effective, particularly when coupled with radiative transfer models that can constrain formation conditions for both lines simultaneously. \acknowledgments NASA grant NNG506GE47G issued through the Office of Space Science provides support for this project. Thanks to A. Rostopchina for personally providing the last measurement needed to derive stellar parameters for every star in the sample and to M. Romanova for stimulating conversations on accretion flows. We acknowledge helpful conversations with J. Bjorkman, S. Cabrit, N. Calvet, L. Hartmann, S. Matt, and an anonymous referee. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have had the opportunity to conduct observations with the Keck II telescope from this mountain.
\section{\label{sec:level1}Introduction} The discovery of superconductivity in doped LaFeAsO triggered intensive research on layered FeAs systems\cite{LaFeAsOF_YKamihara_01}. So far, the superconducting (SC) transition temperature {$T_{\rm sc}$} of $R$FeAsO has rapidly been raised to 54\,K\cite{PrFeAsOF_ZARen_01,NdFeAsOF_ZARen_01, CeFeAsOF_GFChen_01,SmOFFeAs_XHChen_01,GdFeAsO_JYang_01}. Furthermore, a similarly high {$T_{\rm sc}$} exceeding 35\,K was discovered upon doping the related compounds $A$Fe$_2$As$_2$ ($A$=Ca, Sr, Ba).\cite{AFe2As2_CKrellner_01,SrFe2As2_GFChen_01} $A$Fe$_2$As$_2$ has the well-known ThCr$_2$Si$_2$-type structure, which can be regarded as replacing the ($R_2$O$_2$)$^{2+}$ layer in $R$FeAsO by a single divalent-ion ($A^{2+}$) layer while keeping the same electron count.\cite{AFe2As2_CKrellner_01} $R$FeAsO and $A$Fe$_2$As$_2$ were found to exhibit very similar physical properties. Undoped $R$FeAsO compounds ($R$=La-Gd) exhibit a structural transition at $T_0{\sim}$150\,K followed by the formation of a spin-density wave (SDW) at a slightly lower temperature {$T_{\rm N}$}${\sim}$140\,K. $A$Fe$_2$As$_2$ exhibits a similar structural distortion, whereas the SDW forms at the same or a slightly lower temperature\cite{SrFe2As2_MTegel_01}. Electron or hole doping leads to the suppression of the SDW and to the onset of superconductivity. This connection between a vanishing magnetic transition and the simultaneous formation of an SC state is reminiscent of the behavior in the cuprates and in the heavy-fermion systems, suggesting the SC state in these doped FeAs systems to be of unconventional nature. In addition, the SDW state is strongly coupled to the lattice distortion in this system, and it is therefore of high interest to reveal the relationship between the lattice, the magnetism, and the superconductivity. 
The magnetic structures of the FeAs system have been investigated by neutron diffraction studies on $R$FeAs(O,F) for $R$=La, Ce, and Nd, and on {BaFe$_2$As$_{2}$}\cite{LaFeAsOF_CdlCruz_01,CeFeAsOF_JZhao_01,BaFe2As2_QHuang_01}. The antiferromagnetic (AFM) reflection was observed around the (1,0) or (0,1) position with respect to the Fe-As layer. This corresponds to Fe moments with AFM coupling along the $a$ or $b$ direction, called the columnar structure. In contrast, the interlayer coupling is not unique among these compounds, suggesting a relatively weak coupling between the layers. The size of the estimated Fe moment is less than 1\,{${\mu}_{\rm B}$}, which is much smaller than theoretical predictions. So far, the magnetic structures of these iron arsenides have not been determined uniquely; in particular, the relation of the direction of the AFM ordering to the short or long Fe-Fe distances is not settled. This is one of the fundamental questions on the magnetic properties of these materials and is indispensable for understanding the interplay between magnetism and superconductivity. Among the reported FeAs systems, {SrFe$_2$As$_{2}$} should be a suitable compound to study the detailed magnetic structure. Upon substitution of Sr with K or Cs, superconductivity appears with a maximum {$T_{\rm sc}$} of 38\,K.\cite{SrFe2As2_GFChen_01,SrFe2As2_KSasmal_01} The parent compound {SrFe$_2$As$_{2}$} undergoes a first-order transition at $T_0$=205\,K, where both the SDW and the structural transition occur simultaneously.\cite{SrFe2As2_AJesche_01} A detailed x-ray diffraction study clarified that the structural transition at {$T_0$} is from tetragonal ($I4/mmm$) to orthorhombic ($Fmmm$), and therefore similar to BaFe$_2$As$_2$\cite{SrFe2As2_MTegel_01,SrFe2As2_AJesche_01}. 
A stronger magnetism in {SrFe$_2$As$_{2}$} is inferred from the higher ordering temperature, the larger value of the Pauli-like susceptibility above $T_0$, as well as the larger Fe hyperfine field observed in M\"ossbauer experiments.\cite{SrFe2As2_MTegel_01,SrFe2As2_AJesche_01} Therefore, a larger ordered moment is expected for this compound, which should allow a detailed analysis. In this paper, we report a neutron powder diffraction study on {SrFe$_2$As$_{2}$}. The precise analysis of the observed magnetic Bragg peaks on {SrFe$_2$As$_{2}$}\,\,allowed us to uniquely determine the magnetic structure. The magnetic propagation vector of {SrFe$_2$As$_{2}$} is {\textit{\textbf{q}}}=(1\,0\,1); thus the AFM coupling is realized along the longer Fe-Fe direction within the Fe-As layer. The stacking along the $c$-direction is also AFM. The direction of the magnetic moment is parallel to the $a$-axis as well. A remarkable agreement was found between the temperature evolution of the magnetic moment and that of the structural distortion, obtained from independent measurements. These facts clearly demonstrate the close relationship between magnetism and lattice distortion in {SrFe$_2$As$_{2}$}. The details of the sample preparation of {SrFe$_2$As$_{2}$} have been described in Ref.~\onlinecite{AFe2As2_CKrellner_01}. Neutron powder diffraction experiments were carried out on the two-axis diffractometer E6, installed at Helmholtz Center Berlin, Germany. A double focusing pyrolytic graphite (PG) monochromator results in a high neutron flux at the sample position. Data were recorded at scattering angles up to 110$^{\circ}$ using a two-dimensional position sensitive detector (2-D PSD) with a size of 300${\times}$300\,mm$^2$. Simultaneous use of the double focusing monochromator and the 2-D PSD, in combination with a radial oscillating collimator, gives a high efficiency for recording diffraction patterns. 
The neutron wavelength was chosen to be 2.4\,\AA\,\,in connection with a PG filter in order to avoid higher-order contaminations. As a trade-off for the high efficiency, the errors in the absolute values of the lattice constants become relatively large. Since accurate absolute values were already obtained from x-ray diffraction\cite{SrFe2As2_AJesche_01}, we rely on these data and focus here on the details of the magnetic structure. Fine powder of {SrFe$_2$As$_{2}$} with a total mass of ${\sim}$2\,g was sealed in a vanadium cylinder as the sample container. A standard $^4$He cryostat was used to cool the sample down to 1.5\,K, well below {$T_0$}. Neutron diffraction patterns were taken at different temperatures between 1.5\,K and 220\,K, and the obtained diffraction patterns were analyzed by the Rietveld method using the software RIETAN-FP\cite{RIETAN_FIzumi_01}. The software VESTA\cite{VESTA_KMomma_01} was used to draw both the crystal and magnetic structures. Figure~\ref{f1} shows neutron diffraction patterns of {SrFe$_2$As$_{2}$} taken at (a)\,$T$=220\,K ($T>T_0$) and (b)\,$T$=1.5\,K ($T<T_0$). Results of the Rietveld analysis, the residual intensity curve, and tick marks indicating the expected reflection angles are also plotted in the figure. For the high-temperature phase, $T>${$T_0$}, the observed pattern is well reproduced by assuming the crystal structure with the space group $I4/mmm$, as shown in Fig.~\ref{f1}(a). These results are consistent with those reported from the x-ray diffraction study, including the positional parameter of As, $z_{\rm As}$=0.3602(2), where the number in parentheses indicates the uncertainty in the last digit. The conventional reliability factors $R_{\rm wp}$=3.73\,\%, $R_{\rm I}$=3.18\,\%, and $R_{\rm F}$=1.93\,\% indicate the high quality of the present analysis. Small impurity peaks observed at around 63$^{\circ}$ and 84$^{\circ}$, which originate from Cu and Al, were excluded from the analysis. 
Below {$T_0$}=205\,K, some nuclear Bragg reflections became broad. The inset of Fig.~\ref{f1} shows the Bragg peak profile around 57.5$^{\circ}$, where the 1\,1\,2 Bragg peak of the high-temperature tetragonal phase was observed. The peak drops in intensity and broadens on passing through {$T_0$}, as expected for the orthorhombic distortion in which the lattice constant $a$ becomes longer than $b$. The reflection profile could be well reproduced by the lattice distortion from the tetragonal to the orthorhombic lattice reported by the x-ray diffraction study\cite{SrFe2As2_AJesche_01}. Within the errors, the positional parameter of As in the orthorhombic phase, $z_{\rm As}$=0.3604(2), is the same as that above $T_0$. In addition to the lattice distortion, additional reflections were observed at $T$=1.5\,K, as indicated by arrows in Fig.~\ref{f1}(b). These superlattice peaks are most prominent at low scattering angles, and the corresponding peaks were not observed in the x-ray diffraction\cite{SrFe2As2_MTegel_01,SrFe2As2_AJesche_01}. Therefore, the origin of these superlattice peaks should be magnetic. Figure~\ref{f2} shows the temperature dependence of the integrated intensity of the superlattice reflection around 43$^{\circ}$, which can be indexed as 1\,0\,3 as described later. The left axis is set to be proportional to the size of the magnetic moment, ${\sqrt{I/I_0}}$, where $I_0$ is the intensity at the lowest temperature. The 1\,0\,3 reflection appears below $T_0$=205\,K and shows a sharp increase in its intensity, which is clearly seen in the profile shown in the inset. The magnetic moment at 201\,K, just 4\,K below {$T_0$}, already attains 70\,\% of that at 1.5\,K. These findings strongly support the first-order nature of the transition at {$T_0$} in {SrFe$_2$As$_{2}$}. In the same figure, the relative lattice distortion and the muon precession frequency of {SrFe$_2$As$_{2}$} taken from Ref.~\onlinecite{SrFe2As2_AJesche_01}, both normalized to their saturated values, are plotted. 
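The conversion used for the left axis of the figure follows directly from the fact that magnetic Bragg intensity scales as the square of the ordered moment. A minimal sketch (our illustration, with a hypothetical intensity value chosen to reproduce the 70\,\% figure quoted above):

```python
from math import sqrt

def moment_fraction(I, I0):
    """Relative ordered moment from a magnetic Bragg intensity:
    neutron intensity scales as the moment squared, so mu/mu0 = sqrt(I/I0)."""
    return sqrt(I / I0)

# Illustrative: an integrated intensity at 49% of its base-temperature value
# corresponds to 70% of the saturated moment, as found just 4 K below T0.
print(round(moment_fraction(0.49, 1.0), 3))  # -> 0.7
```

This square-root relation is why a sharp jump in intensity just below $T_0$ translates into an even sharper onset of the ordered moment.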
A remarkable agreement of the temperature dependences is seen for these quantities obtained from independent measurements. This clearly demonstrates the close relationship between magnetism and lattice distortion in {SrFe$_2$As$_{2}$}. Hereafter, we analyze the magnetic structure of {SrFe$_2$As$_{2}$}. For simplicity, in the magnetic structure analysis the structural parameters were fixed to the best values determined from the nuclear Bragg peaks. In the following, the three representative models shown in the right panel of Fig.~\ref{f3} are considered. First, the direction of the AFM coupling with respect to the orthorhombic axes is examined. This corresponds to the difference between Model I with the propagation vector {\textbf{\textit{q}}}=(1\,0\,1) and Model II with {\textbf{\textit{q}}}=(0\,1\,1). The difference between these models appears in the scattering angle, arising from the subtle difference between $a$ and $b$. The difference can be clearly seen in the comparison for the 1\,0\,1 (upper panel) and 1\,2\,1 (bottom) reflections. When Model I is used as a reference, Model II gives a higher scattering angle for 1\,0\,1, which then corresponds to 0\,1\,1, and a lower angle for 1\,2\,1 (2\,1\,1). The comparison in Fig.~\ref{f3} clearly shows that only Model I gives the correct positions of the 1\,0\,1 and 1\,2\,1 reflections. Therefore, the magnetic propagation vector of {SrFe$_2$As$_{2}$} is determined to be {\textbf{\textit{q}}}=(1\,0\,1). In a second step, we analyze the direction of the AFM moment, which corresponds to Models I and III. In Model I, the magnetic moment is set to be parallel to the AFM coupling in the Fe-As plane, ${\mu}{\parallel}a$, whereas it is perpendicular in Model III. The magnetic diffraction intensity depends on the angle between the magnetic moment and the scattering vector. 
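The peak-position argument that discriminates Model I from Model II can be made quantitative with Bragg's law. The sketch below is ours; the lattice constants are assumed round values with $a>b$, as in the orthorhombic phase, and only the ordering of the angles (not their exact values) carries the argument:

```python
from math import asin, degrees, sqrt

# Assumed orthorhombic lattice constants (angstroms); only a > b matters here.
a, b, c = 5.57, 5.51, 12.30
wavelength = 2.4  # neutron wavelength in angstroms, as in the experiment

def d_spacing(h, k, l):
    """Orthorhombic d-spacing: 1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2."""
    return 1.0 / sqrt((h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2)

def two_theta(h, k, l):
    """Bragg angle 2*theta (degrees) from lambda = 2 d sin(theta)."""
    return degrees(2.0 * asin(wavelength / (2.0 * d_spacing(h, k, l))))

# Model I places the magnetic peak at (1 0 1), Model II at (0 1 1):
# since a > b, the (1 0 1) spacing is larger and its 2-theta smaller,
# while for (1 2 1) versus (2 1 1) the ordering reverses.
for hkl in [(1, 0, 1), (0, 1, 1), (1, 2, 1), (2, 1, 1)]:
    print(hkl, f"2theta = {two_theta(*hkl):.2f} deg")
```

With these assumed constants the splittings are a few tenths of a degree, consistent with the "subtle difference" the text describes, yet resolvable in the measured profiles.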
The powder average of this angle factor for the orthorhombic symmetry is given as follows: \begin{equation} {\langle}q^2{\rangle}=1-(h^2a^{*2}\cos^2{\psi}_a+k^2b^{*2}\cos^2{\psi}_b+l^2c^{*2}\cos^2{\psi}_c)d^2 \end{equation} where $h, k, l$ are the reflection indices, $a^*, b^*, c^*$ are the primitive lattice vectors in reciprocal space, and $d$ is the spacing of the ($hkl$) planes in real space. ${\psi}_a$, ${\psi}_b$, and ${\psi}_c$ are the angles between the magnetic moment and the crystallographic axes $a, b$, and $c$. This factor affects the relative magnetic intensities of the 1\,0\,1, 1\,0\,3, and 1\,2\,1 diffraction peaks. Model I gives comparable intensities for all three peaks, in good agreement with the experimental results. On the contrary, Model III results in a strong 1\,0\,1 and a vanishing 1\,2\,1 peak, which does not fit the measured intensities at all. As a result, the calculation based on Model I gives an excellent agreement for the positions and intensities of all magnetic reflections. Therefore, we can conclusively determine the direction of the magnetic moment to be along $a$. By using the structure of Model I, the size of the Fe magnetic moment is deduced to be 1.01(3)\,{${\mu}_{\rm B}$} with good reliability factors $R_{\rm wp}$=3.54\,\%, $R_{\rm I}$=2.49\,\%, and $R_{\rm F}$=1.63\,\%. The determined magnetic structure of {SrFe$_2$As$_{2}$} is shown in Fig.~\ref{f4}; the AFM coupling occurs along the longer $a$ direction and the moment orients in the same direction. The magnetic ordering in {SrFe$_2$As$_{2}$} within the FeAs layer seems identical to that in BaFe$_2$As$_{2}$\cite{BaFe2As2_QHuang_01}, in LaFeAs(O,F)\cite{LaFeAsOF_CdlCruz_01}, and in CeFeAs(O,F)\cite{CeFeAsOF_JZhao_01}, although the stacking order is not common. This supports the importance of the FeAs intralayer coupling as well as the weakness of the interlayer coupling. We now turn to the discussion of the relation between the structural distortion and the magnetic structure. 
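The orientation factor of Eq.~(1) can be evaluated numerically as a sanity check. The sketch below is ours: the lattice constants are assumed round values for the orthorhombic phase (with $a^*=1/a$, etc.), and only the formula itself is taken from the text. A useful limiting case it must reproduce: when the moment is parallel to the scattering vector, the magnetic intensity vanishes, so ${\langle}q^2{\rangle}=0$ for an $(h\,0\,0)$ reflection with the moment along $a$.

```python
from math import cos, radians

# Assumed orthorhombic cell (angstroms); a > b as in the distorted phase.
a, b, c = 5.57, 5.51, 12.30

def q2_factor(h, k, l, psi_a, psi_b, psi_c):
    """Powder-averaged orientation factor <q^2> of Eq. (1).
    psi_* are the angles (degrees) between the moment and the a, b, c axes;
    a* = 1/a etc. for an orthorhombic cell."""
    inv_d2 = (h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2
    d2 = 1.0 / inv_d2
    return 1.0 - ((h / a) ** 2 * cos(radians(psi_a)) ** 2
                  + (k / b) ** 2 * cos(radians(psi_b)) ** 2
                  + (l / c) ** 2 * cos(radians(psi_c)) ** 2) * d2

# Limiting case: moment || a for an (h 0 0) reflection gives zero intensity.
print(q2_factor(2, 0, 0, 0.0, 90.0, 90.0))

# Model I (moment || a, so psi_a = 0) for the observed magnetic peaks:
for hkl in [(1, 0, 1), (1, 0, 3), (1, 2, 1)]:
    print(hkl, round(q2_factor(*hkl, 0.0, 90.0, 90.0), 3))
```

Note that the observed relative intensities also involve the magnetic form factor and the Lorentz factor, so ${\langle}q^2{\rangle}$ alone does not reproduce the measured peak ratios; it only supplies the moment-direction dependence used to discriminate Models I and III.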
As mentioned before, the magnetic order in the FeAs systems occurs either after the structural distortion or simultaneously with it. A simple approach would be to consider a superexchange path between the nearest Fe neighbors. Since the positional parameter of As and the lattice constant $c$ do not exhibit significant changes, both the packing of the Fe-As layers and the Fe-As distance stay constant on passing through $T_0$. Thus, the distortion within the Fe-As layer at {$T_0$} mainly results in a slight change in the Fe-Fe distances and the Fe-As-Fe bond angles. The longer $a$ lattice constant leads to a slightly smaller bond angle along $a$, $\angle$(Fe-As-Fe)$_a$=71.2$^{\circ}$, as compared to that along $b$, $\angle$(Fe-As-Fe)$_b$=72.1$^{\circ}$; in other words, the difference is less than 1$^{\circ}$. It is unlikely that such minor distortions lead to significant differences in the exchange parameters. In contrast, it should be noted that the observed orthorhombic distortion and the columnar magnetic structure in {SrFe$_2$As$_{2}$} are consistent with the results of a band structure calculation\cite{SrFe2As2_AJesche_01}. The calculation gives both a comparable distortion and the correct magnetic structure, in which the AFM arrangement is along the long $a$-axis. These calculations predict the distortion to be stable only for the columnar state, not for other magnetic structures or the non-magnetic state, indicating the strong coupling of magnetic order and orthorhombic distortion. Similar results were also obtained for LaFeAsO.\cite{LaFeAsO_yildirim_01,LaFeAsOF_PVSushko_01} For this compound, T. Yildirim argues that the lattice distortion is related to a lifting of the double degeneracy of the columnar state within a localized spin model for a frustrated square lattice\cite{LaFeAsO_yildirim_01}. However, I. I. Mazin \textit{et al.} questioned the applicability of such a model for these layered FeAs systems\cite{LaFeAsO_IIMazin_01}. 
A detailed neutron scattering study on single crystals is expected to give further insights into this fascinating coupling. The size of the magnetic moment of 1.01\,{${\mu}_{\rm B}$} determined by the present neutron diffraction study is the largest among the $R$FeAsO and ternary arsenide $A$Fe$_2$As$_2$ compounds reported so far: 0.36\,{${\mu}_{\rm B}$} for LaFeAs(O,F)\cite{LaFeAsOF_CdlCruz_01}, 0.8\,{${\mu}_{\rm B}$} for CeFeAs(O,F)\cite{CeFeAsOF_JZhao_01}, and 0.87\,{${\mu}_{\rm B}$} for BaFe$_2$As$_2$\cite{BaFe2As2_QHuang_01}. This corresponds to the stronger magnetism deduced for {SrFe$_2$As$_{2}$} from bulk measurements, i.e., the higher {$T_0$} and the larger hyperfine field in M\"ossbauer experiments. On the other hand, it should be noted that the size of the ordered moment obtained in our neutron diffraction study is almost twice as large as that deduced from other microscopic measurements, ${\mu}$SR and M\"ossbauer spectroscopy. Since the neutron diffraction intensity is proportional to the square of the size of the moment, this large difference can hardly be attributed to experimental error. Similar differences were obtained for {BaFe$_2$As$_{2}$} and $R$FeAsO as well\cite{LaFeAsOF_HHKauss_01,BaFe2As2_MRotter_01}. This points to a general problem. While neutrons directly probe the density of the magnetic moment, M\"ossbauer spectroscopy and ${\mu}$SR rely on a scaling of the observed quantity, i.e., the precession frequency and the hyperfine field. It might be that these scaling procedures, which are well established for stable magnetic Fe systems with large moments, are not appropriate for the unusual magnetism in these layered FeAs systems. \begin{acknowledgments} We thank H.-H. Klaus and H. Rosner for stimulating discussions. \end{acknowledgments}
\subsection*{Acknowledgements} The work was partially supported by the VEGA grant agency projects Nos. 6070 and 7012, the CNRS-SAV Project No.~20246 and by the Action Austria--Slovakia project No. 58s2. We thank Hans Havlicek (Vienna University of Technology) for valuable comments and suggestions. \pdfbookmark[1]{References}{ref}
\section{Introduction}\label{intro} Let $X$ be a closed oriented Riemannian manifold of dimension $n \geq 3$. {The Ricci flow} on $X$ is the following evolution equation: \begin{eqnarray*} \frac{\partial }{\partial t}{g}=-2{Ric}_{g}, \end{eqnarray*} where ${Ric}_{g}$ is the Ricci curvature of the evolving Riemannian metric $g$. The Ricci flow was first introduced in the celebrated work \cite{ha-0} of Hamilton for producing metrics of constant positive sectional curvature on 3-manifolds. Since the above equation does not preserve volume in general, one often considers the normalized Ricci flow on $X$: \begin{eqnarray*}\label{Ricci} \frac{\partial }{\partial t}{g}=-2{Ric}_{g} + \frac{2}{n}\overline{s}_{g} {g}, \end{eqnarray*} where $\overline{s}_{g}:={{\int}_{X} {s}_{g} d{\mu}_{g}}/{vol_{{g}}}$ and ${s}_{g}$ denotes the scalar curvature of the evolving Riemannian metric $g$, $vol_{g}:={\int}_{X}d{\mu}_{g}$, and $d{\mu}_{g}$ is the volume measure with respect to $g$. A one-parameter family of metrics $\{g(t)\}$, where $t \in [0, T)$ for some $0<T\leq \infty$, is called a solution to the normalized Ricci flow if it satisfies the above equation at all $x \in X$ and $t \in [0, T)$. It is known that the normalized flow is equivalent to the unnormalized flow under a reparametrization in time $t$ and a scaling of the metric in space by a function of $t$. The volume of the solution metric to the normalized Ricci flow is constant in time. \par The key point of an approach to understanding the topology of a given manifold via the normalized Ricci flow is to control the long-time behavior of the solution. Recall that a solution $\{g(t)\}$ to the normalized Ricci flow on a time interval $[0, T)$ is said to be maximal if it cannot be extended past time $T$. 
Let us also recall the following definition first introduced by Hamilton \cite{ha-1, c-c}: \begin{defn}\label{non-sin} A maximal solution $\{g(t)\}$, $t \in [0, T)$, to the normalized Ricci flow on $X$ is called non-singular if $T=\infty$ and the Riemannian curvature tensor $Rm_{g(t)}$ of $g(t)$ satisfies $$ \sup_{X \times [0, T)}|Rm_{g(t)}| < \infty. $$ \end{defn} As a pioneering work, Hamilton \cite{ha-0} proved that, in dimension 3, there exists a unique non-singular solution to the normalized Ricci flow if the initial metric has positive Ricci curvature. Moreover, Hamilton \cite{ha-1} classified non-singular solutions to the normalized Ricci flow on 3-manifolds, and this work was very important for understanding the long-time behavior of solutions of the Ricci flow on 3-manifolds. On the other hand, many authors have studied the properties of non-singular solutions in higher dimensions. For example, Hamilton \cite{ha-2} proved that, for any closed oriented Riemannian 4-manifold with positive curvature operator, there is a unique non-singular solution to the normalized flow which converges to a smooth Riemannian metric of positive sectional curvature. On the other hand, it is known that the solution on a 4-manifold with positive isotropic curvature definitely becomes singular \cite{ha-3, ha-4}. See also a recent very nice work of Chen and Zhu \cite{c-z} on Ricci flow with surgery on 4-manifolds with positive isotropic curvature inspired by the work of Hamilton \cite{ha-3} and the celebrated work of Perelman \cite{p-1, p-2, p-3, c-c, lott, m-t}. There is also an interesting work concerning Ricci flow on homogeneous 4-manifolds due to Isenberg, Jackson and Lu \cite{isen}. See also Lott's work \cite{lott-r} concerning the long-time behavior of Type-III Ricci flow solutions on homogeneous manifolds. However, the existence and non-existence of non-singular solutions to the normalized Ricci flow in higher dimensions $n \geq 4$ are still mysterious in general.
The main purpose of this article is to study, from the gauge theoretic point of view, this problem in the case of dimension 4 and to point out that the existence or non-existence of non-singular solutions to the normalized Ricci flow depends strictly on one's choice of smooth structure. The main result of the present article is Theorem \ref{main-A} stated below. \par In \cite{fz-1}, Fang, Zhang and Zhang also studied the properties of non-singular solutions to the normalized Ricci flow in higher dimensions. Inspired by their work, we shall introduce the following definition: \begin{defn}\label{bs} A maximal solution $\{g(t)\}$, $t \in [0, T)$, to the normalized Ricci flow on $X$ is called quasi-non-singular if $T=\infty$ and the scalar curvature $s_{g(t)}$ of $g(t)$ satisfies $$ \sup_{X \times [0, T)}|{s}_{g(t)}| < \infty. $$ \end{defn} Of course, the condition of Definition \ref{bs} is weaker than that of Definition \ref{non-sin}. Namely, any non-singular solution is quasi-non-singular, but the converse is not true in general. In dimension 4, the authors of \cite{fz-1} observed, among others, that any closed oriented smooth 4-manifold $X$ must satisfy the following {\it topological} constraint on the Euler characteristic $\chi(X)$ and signature $\tau(X)$ of $X$: \begin{eqnarray}\label{FZZ} 2 \chi(X) \geq 3|\tau(X)| \end{eqnarray} if there is a quasi-non-singular solution to the normalized Ricci flow on $X$ and, moreover, if the solution satisfies \begin{eqnarray}\label{FZZ-s} \hat{s}_{g(t)} \leq -c <0, \end{eqnarray} where the constant $c$ is independent of $t$ and $\hat{s}_{g} := \min_{x \in X}{s}_{g}(x)$ for a given Riemannian metric $g$. In this article, we shall call the inequality (\ref{FZZ}) the {\it Fang-Zhang-Zhang inequality} (or, for brevity, FZZ inequality) for the normalized Ricci flow and we shall also call $2 \chi(X) > 3|\tau(X)|$ the {\it strict} FZZ inequality for the normalized Ricci flow.
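A simple example may clarify these conditions. A closed hyperbolic 4-manifold $(X, g)$ of constant sectional curvature $-1$ is Einstein with $s_{g} \equiv -12$, so $g(t) := g$ is a non-singular solution to the normalized Ricci flow which satisfies (\ref{FZZ-s}) with $c = 12$. Moreover, since the Weyl curvature of $g$ vanishes, the standard characteristic number formulas give \begin{eqnarray*} \tau(X) = 0, \qquad \chi(X) = \frac{3}{4{\pi}^2}\, vol_{g} > 0, \end{eqnarray*} so that the FZZ inequality $2 \chi(X) \geq 3|\tau(X)|$ holds, in fact strictly, for such manifolds.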
The FZZ inequality gives us, under the condition (\ref{FZZ-s}), the only known topological obstruction to the existence of quasi-non-singular solutions to the normalized Ricci flow. It is also known that any Einstein 4-manifold $X$ must satisfy the same bound $2 \chi(X) \geq 3|\tau(X)|$, which is the so-called Hitchin-Thorpe inequality \cite{thor, hit}. We notice that, however, under the bound (\ref{FZZ-s}), (quasi-)non-singular solutions do not necessarily converge to smooth Einstein metrics on $X$. Hence, the FZZ inequality never follows from the Hitchin-Thorpe inequality in general. See \cite{fz-1} for more details. \par On the other hand, there is a natural diffeomorphism invariant arising from a variational problem for the total scalar curvature of Riemannian metrics on any given closed oriented Riemannian manifold $X$ of dimension $n\geq 3$. As was conjectured by Yamabe \cite{yam}, and later proved by Trudinger, Aubin, and Schoen \cite{aubyam,lp,rick,trud}, every conformal class on any smooth compact manifold contains a Riemannian metric of constant scalar curvature. For each conformal class $[g]=\{ vg ~|~v: X\to {\mathbb R}^+\}$, we are able to consider an associated number $Y_{[g]}$, the so-called {\em Yamabe constant} of the conformal class $[g]$, defined by \begin{eqnarray*} Y_{[g]} = \inf_{h \in [g]} \frac{\int_X s_{{h}}~d\mu_{{h}}}{\left(\int_X d\mu_{{h}}\right)^{\frac{n-2}{n}}}, \end{eqnarray*} where $d\mu_{{h}}$ is the volume form with respect to the metric $h$. The Trudinger-Aubin-Schoen theorem tells us that this number is actually realized as the constant scalar curvature of some unit volume metric in the conformal class $[g]$. Then, Kobayashi \cite{kob} and Schoen \cite{sch} independently introduced the following invariant of $X$: \begin{eqnarray*} {\mathcal Y}(X) = \sup_{\mathcal{C}}Y_{[g]}, \end{eqnarray*} where $\mathcal{C}$ is the set of all conformal classes on $X$. This is now commonly known as the {\em Yamabe invariant} of $X$.
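For example, it is well known that, for the sphere, the supremum is attained by the standard conformal class: \begin{eqnarray*} {\mathcal Y}(S^n) = Y_{[g_{0}]} = n(n-1)\Big( vol(S^n, g_{0}) \Big)^{2/n}, \end{eqnarray*} where $g_{0}$ denotes the round metric. In dimension $4$, using $vol(S^4, g_{0}) = 8{\pi}^2/3$, this gives ${\mathcal Y}(S^4) = 8\sqrt{6}\,\pi$.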
It is known that ${\mathcal Y}(X) \leq 0$ if and only if $X$ does not admit a metric of positive scalar curvature. There is now a substantial literature \cite{ishi-leb-2,leb-4, leb-7, leb-11,jp2, jp3, petyun} concerning manifolds of non-positive Yamabe invariant, and the exact value of the invariant is computed for a large number of these manifolds. In particular, it is also known that the Yamabe invariant is sensitive to the choice of smooth structure of a 4-manifold. After the celebrated works of Donaldson \cite{don, don-kro} and Freedman \cite{free}, it now turns out that a great many exotic smooth structures exist in dimension 4. Indeed, there exists a compact topological 4-manifold $X$ which admits many distinct smooth structures ${Z}^i$. Equivalently, each of the smooth 4-manifolds ${Z}^i$ is homeomorphic to $X$, but never diffeomorphic to each other. One can construct a great many explicit examples of compact topological 4-manifolds admitting distinct smooth structures for which the values of the Yamabe invariants are different by using, for instance, a result of LeBrun with the present author \cite{ishi-leb-2}. \par Now, let us come back to the Ricci flow picture. In this article, we shall observe that the condition (\ref{FZZ-s}) above is closely related to the negativity of the Yamabe invariant of a given smooth Riemannian manifold. More precisely, in Proposition \ref{yama-b} proved in Section \ref{ya} below, we shall see that the condition (\ref{FZZ-s}) is always satisfied for {\it any} solution to the normalized Ricci flow if a given smooth Riemannian manifold $X$ of dimension $n \geq 3$ has ${\mathcal Y}(X)<0$.
Moreover, we shall also observe, in Theorem \ref{bound-four} in Section \ref{ya} below, that if a compact topological 4-manifold $M$ admits a smooth structure $Z$ with ${\mathcal Y}<0$ and for which there exists a non-singular solution to the normalized Ricci flow, then the strict FZZ inequality for $Z$ must hold: $$ 2 \chi(Z) > 3|\tau(Z)|, $$ where, of course, we identify the compact topological 4-manifold $M$ endowed with the smooth structure $Z$ with the smooth 4-manifold $Z$. Let us here emphasize that $2 \chi(Z) > 3|\tau(Z)|$ is just a topological constraint, and {\it not} a differential topological one. The observations made in this article and the special feature of smooth structures in dimension 4 naturally lead us to ask the following: \begin{Pro}\label{Q} Let $X$ be any compact topological 4-manifold which admits at least two distinct smooth structures $Z^i$ with negative Yamabe invariant ${\mathcal Y}<0$. Suppose that, for at least one of these smooth structures $Z^i$, there exist non-singular solutions to the normalized Ricci flow. Then, for every other smooth structure $Z^i$ with ${\mathcal Y}<0$, are there always non-singular solutions to the normalized Ricci flow? \end{Pro} Since $X$ admits, for at least one of these smooth structures $Z^i$, non-singular solutions to the normalized Ricci flow, we are able to conclude that $2 \chi({Z}^{i}) > 3|\tau({Z}^{i})|$ holds for every $i$. Notice that this is equivalent to $2 \chi(X) > 3|\tau(X)|$. Hence, even if there are always non-singular solutions to the normalized Ricci flow for every other smooth structure $Z^i$, it does not contradict the strict FZZ inequality.
\par Interestingly, the main result of this article tells us that the answer to Problem \ref{Q} is negative as follows: \begin{main}\label{main-A} For every natural number $\ell$, there exists a simply connected compact topological non-spin 4-manifold $X_{\ell}$ satisfying the following properties: \begin{itemize} \item $X_{\ell}$ admits at least $\ell$ different smooth structures $M^i_{\ell}$ with ${\mathcal Y}<0$ and for which there exist non-singular solutions to the normalized Ricci flow in the sense of Definition \ref{non-sin}. Moreover, the existence of the solutions forces the strict FZZ inequality $2 \chi > 3|\tau|$ as a topological constraint, \item $X_{\ell}$ also admits infinitely many different smooth structures $N^j_{\ell}$ with ${\mathcal Y}<0$ and for which there exists no quasi-non-singular solution to the normalized Ricci flow in the sense of Definition \ref{bs}. In particular, there exists no non-singular solution to the normalized Ricci flow in the sense of Definition \ref{non-sin}. \end{itemize} \end{main} Notice that Freedman's classification \cite{free} implies that $X_{\ell}$ above must be homeomorphic to a connected sum $p{\mathbb C}{P}^2 \# q \overline{{\mathbb C}{P}^2}$, where ${\mathbb C}{P}^2$ is the complex projective plane and $\overline{{\mathbb C}{P}^2}$ is the complex projective plane with the reversed orientation, and $p$ and $q$ are some appropriate positive integers which depend on the natural number $\ell$. Notice also that, for the standard smooth structure on $p{\mathbb C}{P}^2 \# q \overline{{\mathbb C}{P}^2}$, we have ${\mathcal Y} >0$ because, by a result of Schoen and Yau \cite{rick-yau} or Gromov and Lawson \cite{g-l}, there exists a Riemannian metric of positive scalar curvature for such a smooth structure. Hence, the smooth structures which appear in Theorem \ref{main-A} are far from the standard smooth structure.
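For later reference, the characteristic numbers appearing here are easily computed. Since $\chi({\mathbb C}{P}^2)=3$ and $\tau({\mathbb C}{P}^2)=1$, and since a connected sum satisfies $\chi(A \# B)=\chi(A)+\chi(B)-2$ and $\tau(A \# B)=\tau(A)+\tau(B)$, we have \begin{eqnarray*} \chi\Big(p{\mathbb C}{P}^2 \# q \overline{{\mathbb C}{P}^2}\Big) = 2 + p + q, \qquad \tau\Big(p{\mathbb C}{P}^2 \# q \overline{{\mathbb C}{P}^2}\Big) = p - q, \end{eqnarray*} so that the strict FZZ inequality $2\chi > 3|\tau|$ reads $4 + 2(p+q) > 3|p-q|$ in this case.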
On the other hand, notice also that the second statement of Theorem \ref{main-A} tells us that the topological 4-manifold $X_{\ell}$ admits infinitely many different smooth structures $N^j_{\ell}$ with ${\mathcal Y}<0$ and for which any solution to the normalized Ricci flow always becomes singular for any initial metric. In the case of 4-manifolds with ${\mathcal Y} >0$, for example, consider a smooth 4-manifold carrying a metric $g$ of positive isotropic curvature and containing no essential incompressible space form. Then it is known that the Ricci flow develops singularities for the initial metric $g$. The structure of singularities is studied deeply by Hamilton \cite{ha-3, ha-4} and Chen and Zhu \cite{c-z}. In the present article, however, we do not pursue this issue in our case ${\mathcal Y}<0$. \par To the best of our knowledge, Theorem \ref{main-A} is the first result which shows that, in dimension 4, smooth structures become definite obstructions to the existence of non-singular solutions to the normalized Ricci flow. Namely, Theorem \ref{main-A} tells us that the existence or non-existence of non-singular solutions to the normalized Ricci flow depends strictly on the diffeomorphism type of a 4-manifold and is {\it not} determined by the homeomorphism type alone. This gives a completely new insight into the properties of solutions to the Ricci flow on 4-manifolds. \par To prove the non-existence result in Theorem \ref{main-A}, we need to prove new obstructions to the existence of non-singular solutions to the normalized Ricci flow. Indeed, this is the main non-trivial step in the proof of Theorem \ref{main-A}. For instance, we shall prove the following obstruction: \begin{main}\label{main-B} Let $X$ be a closed symplectic 4-manifold with $b^{+}(X) \geq 2$ and $2 \chi(X) + 3\tau(X)>0$, where ${b}^{+}(X)$ stands for the dimension of a maximal positive definite subspace of ${H}^{2}(X, {\mathbb R})$ with respect to the intersection form.
Then, there is no non-singular solution of the normalized Ricci flow on a connected sum $M:=X \# k{\overline{{\mathbb C}{P}^2}}$ if \begin{eqnarray*} k \geq \frac{1}{3}\Big(2 \chi(X) + 3\tau(X) \Big). \end{eqnarray*} \end{main} See also Theorem \ref{ricci-ob-1} and Theorem \ref{ricci-ob-2} below for more general obstructions. We shall use the Seiberg-Witten monopole equations \cite{w} to prove the obstructions. We should notice that, under the same condition, LeBrun \cite{leb-11} first proved that $M$ above cannot admit any Einstein metric by using the Seiberg-Witten monopole equations. As was already mentioned above, notice that, however, (quasi-)non-singular solutions do not necessarily converge to smooth Einstein metrics on $M$ under the bound (\ref{FZZ-s}). Hence, the above non-existence result on non-singular solutions never follows from the obstruction of LeBrun in general. In this sense, the above obstruction in Theorem \ref{main-B} is new and non-trivial. On the other hand, to prove the existence result of non-singular solutions in Theorem \ref{main-A}, we shall use a very nice result of Cao \cite{c, c-c} concerning the existence of non-singular solutions to the normalized Ricci flow on compact K{\"{a}}hler manifolds. By combining the non-existence result derived from Theorem \ref{main-B} with the existence result of Cao, we shall give a proof of Theorem \ref{main-A}. \par The organization of this article is as follows. In Section \ref{ht}, we shall recall the proof of the FZZ inequality (\ref{FZZ}) because we shall use, in Section \ref{obstruction} below, the idea of the proof to prove new obstructions to the existence of non-singular solutions to the normalized Ricci flow. In Section 3, first of all, we shall prove that the condition (\ref{FZZ-s}) above is always satisfied for any solution to the normalized Ricci flow if a given Riemannian manifold $X$ has negative Yamabe invariant.
Moreover, we shall improve the FZZ inequality (\ref{FZZ}) under the assumption that a given Riemannian manifold $X$ has negative Yamabe invariant. This partially motivates Problem \ref{Q}. See Theorem \ref{bound-four} below. In Section \ref{monopoles}, we shall discuss curvature bounds arising from the Seiberg-Witten monopole equations. In fact, we shall first recall, for the reader who is unfamiliar with Seiberg-Witten theory, these curvature bounds following a recent beautiful article \cite{leb-17} of LeBrun. And we shall prove, by using the curvature bounds, some results which are needed to prove the new obstructions. The main results of this section are Theorems \ref{yamabe-pere} and \ref{key-mono-b} below. In Section \ref{obstruction}, we shall prove the new obstructions by gathering results proved in the previous several sections. See Theorem \ref{ricci-ob-1}, Corollary \ref{non-sin-cor} (Theorem \ref{main-B}) and Theorem \ref{ricci-ob-2} below. In Section \ref{final-main}, we shall finally give a proof of the main theorem, i.e., Theorem \ref{main-A}, by using particularly Corollary \ref{non-sin-cor} (Theorem \ref{main-B}). Finally, in Section \ref{remark}, we shall conclude this article by giving some open questions which are closely related to Theorem \ref{main-A}. \par The main part of this work was done during the present author's stay at State University of New York at Stony Brook in 2006. I would like to express my deep gratitude to Claude LeBrun for his warm encouragement and hospitality. I would like to thank the Department of Mathematics of SUNY at Stony Brook for their hospitality and nice atmosphere during the preparation of this article. \section{Hitchin-Thorpe Type Inequality for the Normalized Ricci Flow}\label{ht} In this section, we shall recall the proof of the Fang-Zhang-Zhang inequality (\ref{FZZ}) for the normalized Ricci flow. We shall use the idea of the proof, in Section 5 below, to prove new obstructions.
We notice that, throughout the article \cite{fz-1}, the authors of \cite{fz-1} assume that any solution $\{g(t)\}$, $t \in [0, \infty)$, to the normalized Ricci flow has {\it unit volume}, namely, that ${vol}_{g(t)}=1$ holds for all $t \in [0, \infty)$. Since the normalized Ricci flow preserves the volume of the solution, this condition is equivalent to the condition that ${vol}_{g(0)}=1$ for the initial metric $g(0)$. Though one can always assume this condition by rescaling the metric, such a condition is not essential. In what follows, let us give a proof of the FZZ inequality without such a condition on the volume. Lemma \ref{FZZ-lem}, Proposition \ref{FZZ-prop} and Theorem \ref{fz-key} below are essentially due to the authors of \cite{fz-1}. We shall include their proofs for completeness and the reader's convenience. \par Now, let $X$ be a closed oriented Riemannian 4-manifold. Then, the Chern-Gauss-Bonnet formula and the Hirzebruch signature formula tell us that the following formulas hold for {\it any} Riemannian metric $g$ on $X$: \begin{eqnarray*} \tau(X)=\frac{1}{12{\pi}^2}{\int}_{X}\Big(|W^+_{g}|^2-|W^-_{g}|^2 \Big) d{\mu}_{g}, \\ \chi(X) = \frac{1}{8{\pi}^2}{\int}_{X}\Big(\frac{{s}^2_{g}}{24}+|W^+_{g}|^2+|W^-_{g}|^2-\frac{|\stackrel{\circ}{r}_{g}|^2}{2} \Big) d{\mu}_{g}, \end{eqnarray*} where $W^+_{g}$ and $W^-_{g}$ denote respectively the self-dual and anti-self-dual Weyl curvature of the metric $g$ and $\stackrel{\circ}{r}_{g}$ is the trace-free part of the Ricci curvature of the metric $g$. And $s_{g}$ is again the scalar curvature of the metric $g$ and $d\mu_{{g}}$ is the volume form with respect to $g$. By these formulas, we are able to get the following important equality: \begin{eqnarray}\label{4-im} 2\chi(X) \pm 3\tau(X) = \frac{1}{4{\pi}^2}{\int}_{X}\Big(2|W^{\pm}_{g}|^2+\frac{{s}^2_{g}}{24}-\frac{|\stackrel{\circ}{r}_{g}|^2}{2} \Big) d{\mu}_{g}. \end{eqnarray} If $X$ admits an Einstein metric $g$, then we have $\stackrel{\circ}{r}_{g} \equiv 0$.
The above formula therefore implies that any Einstein 4-manifold must satisfy $$ 2 \chi(X) \geq 3|\tau(X)|. $$ This is nothing but the Hitchin-Thorpe inequality \cite{thor, hit}. As was already mentioned in Introduction, it is proved in \cite{fz-1} that the same inequality still holds for 4-manifolds which are {\it not} necessarily Einstein. Namely, under the existence of quasi-non-singular solutions satisfying the uniform bound (\ref{FZZ-s}) to the normalized Ricci flow, the same inequality still holds. \par A key observation is the following lemma. This is proved in Lemma 2.7 of \cite{fz-1} for unit-volume solutions. We would like to point out that the following lemma was already proved essentially by Lemma 7.1 in the article \cite{ha-1} of Hamilton. Notice that we do not assume that $vol_{{g(t)}}=1$ holds for any $t \in [0, \infty)$: \begin{lem}\label{FZZ-lem} Let $X$ be a closed oriented Riemannian manifold of dimension $n$ and assume that there is a quasi-non-singular solution $\{g(t)\}$, $t \in [0, \infty)$, to the normalized Ricci flow in the sense of Definition \ref{bs}. Assume moreover that the solution satisfies the uniform bound (\ref{FZZ-s}), namely, \begin{eqnarray*} \hat{s}_{g(t)} \leq -c <0 \end{eqnarray*} holds, where the constant $c$ is independent of $t$ and $\hat{s}_{g} := \min_{x \in X}{s}_{g}(x)$ for a given Riemannian metric $g$. Then the following two bounds \begin{eqnarray}\label{fzz-key-0} {\int}^{\infty}_{0}\Big(\overline{s}_{g(t)}- \hat{{s}}_{g(t)} \Big)dt < \infty, \end{eqnarray} \begin{eqnarray}\label{fzz-key} {\int}^{\infty}_{0}{\int}_{X}| {{s}}_{g(t)}-\overline{s}_{g(t)} |d{\mu}_{g(t)}dt \leq 2{vol}_{g(0)} {\int}^{\infty}_{0}\Big(\overline{s}_{g(t)}- \hat{{s}}_{g(t)} \Big) dt <\infty \end{eqnarray} hold, where $\overline{s}_{g(t)}:={{\int}_{X} {s}_{g(t)} d{\mu}_{g(t)}}/{vol_{{g(t)}}}$. \end{lem} \begin{proof} As was already used in Lemma 2.7 in \cite{fz-1}, we shall also use an idea due to Hamilton \cite{ha-1}.
More precisely, we shall use the idea of the proof of Lemma 7.1 in \cite{ha-1}. Recall the evolution equation for $s_{g(t)}$: \begin{eqnarray*} \frac{\partial s_{g(t)}}{\partial t} = \Delta s_{g(t)} + 2|Ric_{g(t)}|^2 - \frac{2}{n}\overline{s}_{g(t)}{s}_{g(t)} \end{eqnarray*} which was first derived by Hamilton \cite{ha-0}. If we decompose the Ricci tensor $Ric$ into its trace-free part $\stackrel{\circ}{r}$ and its trace $s$, then we have \begin{eqnarray*} |Ric_{g(t)}|^2 = |\stackrel{\circ}{r}_{g(t)}|^2 + \frac{1}{n}s_{g(t)}\Big(s_{g(t)}-\overline{s}_{g(t)} \Big) + \frac{\overline{s}_{g(t)}}{n}{s}_{g(t)}. \end{eqnarray*} We therefore obtain the following \begin{eqnarray}\label{evolution} \frac{\partial s_{g(t)}}{\partial t} = \Delta s_{g(t)} + 2|\stackrel{\circ}{r}_{g(t)}|^2 + \frac{2}{n}s_{g(t)}\Big(s_{g(t)}-\overline{s}_{g(t)} \Big). \end{eqnarray} From this, applying the maximum principle at a point where $s_{g(t)}$ attains its minimum, we are able to get the ordinary differential inequality: \begin{eqnarray*} \frac{d}{d t}\hat{s}_{g(t)} \geq \frac{2}{n}\hat{s}_{g(t)}\Big(\hat{s}_{g(t)}-\overline{s}_{g(t)} \Big). \end{eqnarray*} Since the solution satisfies the uniform bound (\ref{FZZ-s}), we have \begin{eqnarray*} \frac{d}{d t}\hat{s}_{g(t)} \geq \frac{2c}{n}\Big(\overline{s}_{g(t)}-\hat{s}_{g(t)} \Big). \end{eqnarray*} Indeed, integrating this inequality over $[0, m]$ gives $\frac{2c}{n}{\int}^{m}_{0}(\overline{s}_{g(t)}- \hat{{s}}_{g(t)})dt \leq \hat{s}_{g(m)} - \hat{s}_{g(0)} \leq -\hat{s}_{g(0)}$, since $\hat{s}_{g(m)} < 0$; letting $m \rightarrow \infty$ yields the desired bound (\ref{fzz-key-0}). \par On the other hand, we have the following inequality (see also the proof of Lemma 7.1 in \cite{ha-1}): \begin{eqnarray*} \Big|{s}_{g(t)}-\overline{s}_{g(t)}\Big| = \Big|\Big({s}_{g(t)}-\hat{{s}}_{g(t)}\Big)-\Big(\overline{s}_{g(t)}-\hat{{s}}_{g(t)}\Big)\Big| \leq \Big({s}_{g(t)}-\hat{{s}}_{g(t)}\Big)+\Big(\overline{s}_{g(t)}-\hat{{s}}_{g(t)}\Big). \end{eqnarray*} This implies the following: \begin{eqnarray*} {\int}_{X}|{s}_{g(t)}-\overline{s}_{g(t)}|d{\mu}_{g(t)} \leq {\int}_{X}\Big({s}_{g(t)}-\hat{{s}}_{g(t)}\Big)d{\mu}_{g(t)} + {\int}_{X}\Big(\overline{s}_{g(t)}-\hat{{s}}_{g(t)}\Big)d{\mu}_{g(t)}.
\end{eqnarray*} On the other hand, notice that the following holds: \begin{eqnarray*} {\int}_{X}\overline{s}_{g(t)}d{\mu}_{g(t)}={\int}_{X}\Big(\frac{{\int}_{X} {s}_{g(t)} d{\mu}_{g(t)}}{vol_{{g(t)}}} \Big)d{\mu}_{g(t)} ={\int}_{X}{s}_{g(t)}d{\mu}_{g(t)}. \end{eqnarray*} We therefore obtain \begin{eqnarray*} {\int}_{X}|{s}_{g(t)}-\overline{s}_{g(t)}|d{\mu}_{g(t)} \leq 2{\int}_{X}\Big(\overline{s}_{g(t)}-\hat{{s}}_{g(t)}\Big)d{\mu}_{g(t)}=2{vol}_{g(t)}\Big(\overline{s}_{g(t)}-\hat{{s}}_{g(t)}\Big). \end{eqnarray*} Moreover, as was already mentioned, the normalized Ricci flow preserves the volume of the solution. We therefore have $vol_{g(t)}=vol_{g(0)}$. Hence, \begin{eqnarray*} {\int}_{X}|{s}_{g(t)}-\overline{s}_{g(t)}|d{\mu}_{g(t)} \leq 2{vol}_{g(0)}\Big(\overline{s}_{g(t)}-\hat{{s}}_{g(t)}\Big). \end{eqnarray*} This tells us that \begin{eqnarray*} {\int}^{\infty}_{0}{\int}_{X}| {{s}}_{g(t)}-\overline{s}_{g(t)} |d{\mu}_{g(t)}dt \leq 2{vol}_{g(0)} {\int}^{\infty}_{0}\Big(\overline{s}_{g(t)}- \hat{{s}}_{g(t)}\Big)dt. \end{eqnarray*} This inequality with the bound (\ref{fzz-key-0}) implies the desired bound (\ref{fzz-key}). \end{proof} Using the above lemma, we are able to show a key proposition for proving the FZZ inequality. The following result is pointed out in Lemma 3.1 of \cite{fz-1} for unit-volume solutions and $n=4$: \begin{prop}\label{FZZ-prop} Let $X$ be a closed oriented Riemannian manifold of dimension $n$ and assume that there is a quasi-non-singular solution $\{g(t)\}$, $t \in [0, \infty)$, to the normalized Ricci flow in the sense of Definition \ref{bs}. Assume moreover that the solution satisfies the uniform bound (\ref{FZZ-s}), namely, \begin{eqnarray*} \hat{s}_{g(t)} \leq -c <0 \end{eqnarray*} holds, where the constant $c$ is independent of $t$ and $\hat{s}_{g} := \min_{x \in X}{s}_{g}(x)$ for a given Riemannian metric $g$.
Then, the trace-free part $\stackrel{\circ}{r}_{g(t)}$ of the Ricci curvature satisfies \begin{eqnarray}\label{fzz-ricci} {\int}^{\infty}_{0} {\int}_{X} |\stackrel{\circ}{r}_{g(t)}|^2 d{\mu}_{g(t)}dt < \infty. \end{eqnarray} \end{prop} \begin{proof} Now suppose that there exists a quasi-non-singular solution to the normalized Ricci flow on a closed oriented manifold $X$ of dimension $n$. As before, let us consider the evolution equation (\ref{evolution}) for the scalar curvature of the solution: \begin{eqnarray*} \frac{\partial s_{g(t)}}{\partial t} = \Delta s_{g(t)} + 2|\stackrel{\circ}{r}_{g(t)}|^2 + \frac{2}{n}s_{g(t)}\Big(s_{g(t)}-\overline{s}_{g(t)} \Big). \end{eqnarray*} Notice that, by the assumption that the solution is quasi-non-singular in the sense of Definition \ref{bs}, we are able to conclude that there is a constant $C$, independent of both $t \in [0, \infty)$ and $x \in X$, such that $|{s}_{g(t)}|< C$ holds. We therefore obtain \begin{eqnarray*} {\int}^{\infty}_{0} {\int}_{X} |\stackrel{\circ}{r}_{g(t)}|^2 d{\mu}_{g(t)}dt &=& \frac{1}{2}{\int}^{\infty}_{0} {\int}_{X} \frac{\partial s_{g(t)}}{\partial t} d{\mu}_{g(t)}dt-\frac{1}{n}{\int}^{\infty}_{0} {\int}_{X} {s_{g(t)}}\Big(s_{g(t)}-\overline{s}_{g(t)} \Big) d{\mu}_{g(t)}dt \\ &\leq& \frac{1}{2}{\int}^{\infty}_{0} {\int}_{X} \frac{\partial s_{g(t)}}{\partial t} d{\mu}_{g(t)}dt + \frac{1}{n} {\int}^{\infty}_{0} {\int}_{X}| s_{g(t)}| |s_{g(t)}-\overline{s}_{g(t)}| d{\mu}_{g(t)}dt \\ &=& \frac{1}{2}{\int}^{\infty}_{0} \frac{\partial}{\partial t}\Big({\int}_{X} s_{g(t)} d{\mu}_{g(t)} \Big) dt + \frac{1}{n} {\int}^{\infty}_{0} {\int}_{X}| s_{g(t)}||s_{g(t)}-\overline{s}_{g(t)}| d{\mu}_{g(t)}dt \\ &\leq& \frac{1}{2}{\int}^{\infty}_{0} \frac{\partial}{\partial t}\Big( \overline{s}_{g(t)}{vol}_{g(t)} \Big) dt + \frac{C}{n} {\int}^{\infty}_{0} {\int}_{X}|s_{g(t)}-\overline{s}_{g(t)}| d{\mu}_{g(t)}dt \\ &=& \frac{{vol}_{g(0)}}{2}{\int}^{\infty}_{0} \frac{\partial}{\partial t}\overline{s}_{g(t)}dt +
\frac{C}{n} {\int}^{\infty}_{0} {\int}_{X}|s_{g(t)}-\overline{s}_{g(t)}| d{\mu}_{g(t)}dt, \end{eqnarray*} where we used the fact that $vol_{g(t)}=vol_{g(0)}$ holds for any $t \in [0, \infty)$. Hence we have \begin{eqnarray*} {\int}^{\infty}_{0} {\int}_{X} |\stackrel{\circ}{r}_{g(t)}|^2 d{\mu}_{g(t)}dt &\leq& \frac{{vol}_{g(0)}}{2}{\int}^{\infty}_{0} \frac{\partial}{\partial t} \overline{s}_{g(t)} dt + \frac{C}{n} {\int}^{\infty}_{0} {\int}_{X}|s_{g(t)}-\overline{s}_{g(t)}| d{\mu}_{g(t)}dt \\ &\leq& \frac{{vol}_{g(0)}}{2}\lim_{t \rightarrow \infty} \sup|\overline{s}_{g(t)}-\overline{s}_{g(0)}| + \frac{C}{n} {\int}^{\infty}_{0} {\int}_{X}|s_{g(t)}-\overline{s}_{g(t)}| d{\mu}_{g(t)}dt. \end{eqnarray*} On the other hand, the uniform bound $|{s}_{g(t)}|< C$ implies \begin{eqnarray*} |\overline{s}_{g(t)}|=|\frac{{\int}_{X} {s}_{g(t)} d{\mu}_{g(t)}}{vol_{{g(t)}}}|\leq \frac{{\int}_{X} |{s}_{g(t)}| d{\mu}_{g(t)}}{vol_{{g(t)}}} \leq \frac{{\int}_{X} {C} d{\mu}_{g(t)}}{vol_{{g(t)}}} = C. \end{eqnarray*} This tells us that \begin{eqnarray*} |\overline{s}_{g(t)}-\overline{s}_{g(0)}| \leq |\overline{s}_{g(t)}|+|\overline{s}_{g(0)}| \leq C+C = 2C. \end{eqnarray*} Therefore, we are able to conclude that the following holds: \begin{eqnarray*} \sup|\overline{s}_{g(t)}-\overline{s}_{g(0)}| \leq 2C. \end{eqnarray*} Hence we obtain \begin{eqnarray*} {\int}^{\infty}_{0} {\int}_{X} |\stackrel{\circ}{r}_{g(t)}|^2 d{\mu}_{g(t)}dt &\leq& \frac{{vol}_{g(0)}}{2} \cdot 2C + \frac{C}{n} {\int}^{\infty}_{0} {\int}_{X}|s_{g(t)}-\overline{s}_{g(t)}| d{\mu}_{g(t)}dt \\ &\leq& {vol}_{g(0)}C+ \frac{C}{n} {\int}^{\infty}_{0} {\int}_{X}|s_{g(t)}-\overline{s}_{g(t)}| d{\mu}_{g(t)}dt. \end{eqnarray*} This estimate with the bound (\ref{fzz-key}) implies \begin{eqnarray*} {\int}^{\infty}_{0} {\int}_{X} |\stackrel{\circ}{r}_{g(t)}|^2 d{\mu}_{g(t)}dt < \infty \end{eqnarray*} as promised.
\end{proof} As was already noticed in \cite{fz-1}, the bound (\ref{fzz-ricci}) tells us that, as $m \rightarrow \infty$, \begin{eqnarray}\label{fzz-ricci-0} {\int}^{m+1}_{m} {\int}_{X} |\stackrel{\circ}{r}_{g(t)}|^2 d{\mu}_{g(t)}dt \longrightarrow 0 \end{eqnarray} holds since ${\int}_{X} |\stackrel{\circ}{r}_{g(t)}|^2 d{\mu}_{g(t)} \geq 0$. Indeed, one can see this for completely elementary reasons. Particularly, in dimension $n=4$, the bound (\ref{fzz-ricci-0}) immediately implies the Fang-Zhang-Zhang inequality as follows (see also Lemma 3.2 in \cite{fz-1}): \begin{thm}\label{fz-key} Let $X$ be a closed oriented Riemannian 4-manifold and assume that there is a quasi-non-singular solution to the normalized Ricci flow in the sense of Definition \ref{bs}. Assume moreover that the solution satisfies the uniform bound (\ref{FZZ-s}), namely, \begin{eqnarray*} \hat{s}_{g(t)} \leq -c <0 \end{eqnarray*} holds, where the constant $c$ is independent of $t$ and $\hat{s}_{g} := \min_{x \in X}{s}_{g}(x)$ for a given Riemannian metric $g$. Then, $X$ must satisfy \begin{eqnarray*} 2 \chi(X) \geq 3|\tau(X)|. \end{eqnarray*} \end{thm} \begin{proof} Suppose that there exists a quasi-non-singular solution $\{g(t) \}$, $t \in [0, \infty)$, to the normalized Ricci flow on $X$. Assume also that the bound (\ref{FZZ-s}) is satisfied. By the equality (\ref{4-im}), which holds for any Riemannian metric on $X$, we are able to get \begin{eqnarray*} 2\chi(X) \pm 3\tau(X) = \frac{1}{4{\pi}^2}{\int}_{X}\Big(2|W^{\pm}_{g(t)}|^2+\frac{{s}^2_{g(t)}}{24}-\frac{|\stackrel{\circ}{r}_{g(t)}|^2}{2} \Big) d{\mu}_{g(t)}.
\end{eqnarray*} From this and (\ref{fzz-ricci-0}), we are able to obtain \begin{eqnarray*} 2\chi(X) \pm 3\tau(X) &=& {\int}^{m+1}_{m} \Big(2\chi(X) \pm 3\tau(X) \Big)dt \\ &=& \frac{1}{4{\pi}^2}{\int}^{m+1}_{m} {\int}_{X}\Big(2|W^{\pm}_{g(t)}|^2+\frac{{s}^2_{g(t)}}{24}-\frac{|\stackrel{\circ}{r}_{g(t)}|^2}{2} \Big) d{\mu}_{g(t)}dt \\ &\geq & \liminf_{m \longrightarrow \infty}\frac{1}{4{\pi}^2}{\int}^{m+1}_{m} {\int}_{X}\Big(2|W^{\pm}_{g(t)}|^2+\frac{{s}^2_{g(t)}}{24}-\frac{|\stackrel{\circ}{r}_{g(t)}|^2}{2} \Big) d{\mu}_{g(t)}dt \\ &=& \liminf_{m \longrightarrow \infty}\frac{1}{4{\pi}^2}{\int}^{m+1}_{m} {\int}_{X}\Big(2|W^{\pm}_{g(t)}|^2+\frac{{s}^2_{g(t)}}{24}\Big) d{\mu}_{g(t)}dt \geq 0. \end{eqnarray*} We therefore get the desired inequality. \end{proof} \section{Fang-Zhang-Zhang Inequality and Negativity of the Yamabe Invariant}\label{ya} In this section, we shall improve the FZZ inequality under the assumption that the Yamabe invariant of a given 4-manifold is negative. This partially motivates Problem \ref{Q}, which was already mentioned in Introduction. The main result of this section is Theorem \ref{bound-four} below. \par Suppose now that $X$ is a closed oriented Riemannian manifold of dimension $n\geq 3$, and moreover that $[g]=\{ ug ~|~u: X \to {\mathbb R}^+\}$ is the conformal class of an arbitrary metric $g$. Trudinger, Aubin, and Schoen \cite{aubyam,lp,rick,trud} proved that every conformal class on any smooth compact manifold contains a Riemannian metric of constant scalar curvature. Such a metric $\hat{g}$ can be constructed by minimizing the Einstein-Hilbert functional: $$ \hat{g}\mapsto \frac{\int_X s_{\hat{g}}~d\mu_{\hat{g}}}{\left(\int_X d\mu_{\hat{g}}\right)^{\frac{n-2}{n}}}, $$ among all metrics conformal to $g$.
Notice that, by setting $\hat{g} = u^{4/(n-2)}g$, we have the following identity: $$ \frac{\int_X s_{\hat{g}}~d\mu_{\hat{g}}}{\left(\int_X d\mu_{\hat{g}}\right)^{\frac{n-2}{n}}}= \frac{\int_X\left[ s_gu^2 + 4 \frac{n-1}{n-2}|\nabla u|^2\right] d\mu_g}{\left(\int_X u^{2n/(n-2)}d\mu_g\right)^{(n-2)/n}}. $$ As was already mentioned in the Introduction, associated to each conformal class $[g]$, we are able to define the following number, which is called the Yamabe constant of the conformal class $[g]$: $$ Y_{[g]} = \inf_{\hat{g} \in [g]}\frac{\int_X s_{\hat{g}}~d\mu_{\hat{g}}}{\left(\int_X d\mu_{\hat{g}}\right)^{\frac{n-2}{n}}}. $$ Equivalently, $$ Y_{[g]} = \inf_{u \in {C}^{\infty}_{+}(X)}\frac{\int_X\left[ s_gu^2 + 4 \frac{n-1}{n-2}|\nabla u|^2\right] d\mu_g}{\left(\int_X u^{2n/(n-2)}d\mu_g\right)^{(n-2)/n}}, $$ where ${C}^{\infty}_{+}(X)$ is the set of all positive smooth functions $u: X \to {\mathbb R}^+$. Kobayashi \cite{kob} and Schoen \cite{sch} independently introduced the following interesting invariant, which is now called the Yamabe invariant of $X$: \begin{eqnarray}\label{yama-def-1} {\mathcal Y}(X) = \sup_{[g] \in \mathcal{C}} Y_{[g]}, \end{eqnarray} where $\mathcal{C}$ is the set of all conformal classes on $X$. This is a diffeomorphism invariant of $X$. Notice again that ${\mathcal Y}(X) \leq 0$ if and only if $X$ does not admit Riemannian metrics of positive scalar curvature. In this case, it is also known that the Yamabe invariant of $X$ can be rewritten as \begin{eqnarray}\label{yama-def-2} {\mathcal Y}(X) = - \Big(\inf_{g}{\int}_{X}|s_{g}|^{{n}/{2}} d{\mu}_{g} \Big)^{{2}/{n}}, \end{eqnarray} where the infimum is taken over all smooth metrics $g$ on $X$. For instance, see Proposition 12 in \cite{ishi-leb-2}. In dimension 4, it is known that there are many manifolds whose Yamabe invariants are strictly negative \cite{leb-4, ishi-leb-2}.
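Since the identity (\ref{yama-def-2}) will be used later in the special case $n=4$, let us record its specialization here. When ${\mathcal Y}(X) < 0$ and $n=4$, we have $|s_{g}|^{n/2} = s^{2}_{g}$, and hence \begin{eqnarray*} {\mathcal Y}(X) = - \Big(\inf_{g}{\int}_{X}s^{2}_{g} d{\mu}_{g} \Big)^{1/2}, \end{eqnarray*} so that $|{\mathcal Y}(X)|^2 = \inf_{g}{\int}_{X}s^{2}_{g} d{\mu}_{g}$.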
\par For any Riemannian metric $g$, consider the minimum $\hat{s}_{g}:=\min_{x \in X}{s}_{g}(x)$ of the scalar curvature ${s}_{g}$ of the metric $g$ as before. In Theorem 2.1 in \cite{ha-1}, Hamilton pointed out that the minimum $\hat{s}_{g}$ is increasing along the normalized Ricci flow when it is non-positive. Hence, it may be interesting to give an upper bound for this quantity. We shall give the following upper bound in terms of the Yamabe invariant. This result is simple, but important for our purpose: \begin{prop}\label{yama-b} Let $X$ be a closed oriented Riemannian manifold of dimension $n \geq 3$ and assume that the Yamabe invariant of $X$ is negative, i.e., ${\mathcal Y}(X)<0$. If there is a solution $\{g(t)\}$, $t \in [0, T)$, to the normalized Ricci flow, then the solution satisfies the bound (\ref{FZZ-s}). More precisely, the following is satisfied: \begin{eqnarray*} \hat{s}_{g(t)}:=\min_{x \in X}{s}_{g(t)}(x) \leq \frac{{\mathcal Y}(X)}{(vol_{g(0)})^{2/n}} < 0. \end{eqnarray*} \end{prop} \begin{proof} Suppose that there is a solution $\{g(t)\}$, $t \in [0, T)$, to the normalized Ricci flow. Let us consider the Yamabe constant $Y_{[g(t)]}$ of a conformal class $[g(t)]$ of a metric $g(t)$ for any $t \in [0, T)$. By definition, we have $$ {\mathcal Y}(X) \geq Y_{[g(t)]} = \inf_{u \in {C}^{\infty}_{+}(X)}\frac{\int_X\left[ s_{g(t)}u^2 + 4 \frac{n-1}{n-2}|\nabla u|^2\right] d\mu_{g(t)}}{\left(\int_X u^{2n/(n-2)}d\mu_{g(t)}\right)^{(n-2)/n}}. $$ We therefore obtain \begin{eqnarray*} {\mathcal Y}(X) &\geq& \inf_{u \in {C}^{\infty}_{+}(X)}\frac{{\int}_{X} \Big(\displaystyle\min_{x \in X}{s}_{g(t)}u^2 + 4\frac{n-1}{n-2}|\nabla u|^2 \Big) d{\mu}_{g(t)}}{\Big({\int}_{X}u^{{2n}/{(n-2)}} d{\mu}_{g(t)}\Big)^{{(n-2)}/{n}}} \\ &\geq& \inf_{u \in {C}^{\infty}_{+}(X)} \hat{s}_{g(t)}\frac{{\int}_{X} u^2 d{\mu}_{g(t)}}{\Big({\int}_{X}u^{{2n}/{(n-2)}} d{\mu}_{g(t)}\Big)^{{(n-2)}/{n}}}, \end{eqnarray*} where $\hat{s}_{g} := \min_{x \in X}{s}_{g}(x)$ as before; the second inequality holds because the gradient term which is dropped is non-negative.
If $\hat{s}_{g(t)} \geq 0$ holds, then the above estimate tells us that ${\mathcal Y}(X) \geq 0$. Since we assume that ${\mathcal Y}(X) < 0$, we are able to conclude that $\hat{s}_{g(t)} < 0$ must hold. \par On the other hand, the H{\"{o}}lder inequality, applied with the conjugate exponents ${n}/{(n-2)}$ and ${n}/{2}$, tells us that the following inequality holds for every $u \in {C}^{\infty}_{+}(X)$: \begin{eqnarray*} {\int}_{X} u^2 d{\mu}_{g(t)} &\leq& \Big({\int}_{X} u^{2n/(n-2)} d{\mu}_{g(t)} \Big)^{{(n-2)}/{n}}\Big({\int}_{X} d{\mu}_{g(t)} \Big)^{2/n} \\ &=& \Big({\int}_{X} u^{2n/(n-2)} d{\mu}_{g(t)} \Big)^{{(n-2)}/{n}}{(vol_{g(t)})^{2/n}}. \end{eqnarray*} This implies that, for every $u \in {C}^{\infty}_{+}(X)$, \begin{eqnarray*} \frac{{\int}_{X} u^2 d{\mu}_{g(t)}}{\Big({\int}_{X} u^{2n/(n-2)} d{\mu}_{g(t)} \Big)^{{(n-2)}/{n}}} \leq {(vol_{g(t)})^{2/n}}. \end{eqnarray*} Since we have $\hat{s}_{g(t)} < 0$, this also implies that, for every $u \in {C}^{\infty}_{+}(X)$, \begin{eqnarray*} \hat{s}_{g(t)}\frac{{\int}_{X} u^2 d{\mu}_{g(t)}}{\Big({\int}_{X} u^{2n/(n-2)} d{\mu}_{g(t)} \Big)^{{(n-2)}/{n}}} \geq \hat{s}_{g(t)}{(vol_{g(t)})^{2/n}}. \end{eqnarray*} We therefore obtain \begin{eqnarray*} {\mathcal Y}(X) &\geq& \inf_{u \in {C}^{\infty}_{+}(X)} \hat{s}_{g(t)}\frac{{\int}_{X} u^2 d{\mu}_{g(t)}}{\Big({\int}_{X} u^{2n/(n-2)} d{\mu}_{g(t)} \Big)^{{(n-2)}/{n}}} \\ &\geq& \hat{s}_{g(t)}{(vol_{g(t)})^{2/n}}. \end{eqnarray*} On the other hand, notice that the normalized Ricci flow preserves the volume of the solution. We therefore have $vol_{g(t)}=vol_{g(0)}$ for any $t \in [0, T)$. Hence, we get the desired bound for any $t \in [0, T)$: \begin{eqnarray*} \hat{s}_{g(t)} \leq \frac{{\mathcal Y}(X)}{(vol_{g(t)})^{2/n}}=\frac{{\mathcal Y}(X)}{(vol_{g(0)})^{2/n}} < 0. \end{eqnarray*} In particular, the solution $\{g(t)\}$ satisfies the bound (\ref{FZZ-s}) by setting $-c={{\mathcal Y}(X)}/{(vol_{g(0)})^{2/n}}$. \end{proof} The following theorem is the main result of this section. Let inj$(x, g)$ be the injectivity radius of the metric $g$ at $x \in X$.
Recall that, following \cite{c-g, fz-1}, a solution $\{g(t)\}$ to the normalized Ricci flow on a Riemannian manifold $X$ is said to {\it collapse} if there is a sequence of times $t_{k} \rightarrow T$ such that $\sup_{x \in X}$inj$(x, g(t_{k})) \rightarrow 0$, where $T$ is the maximal existence time for the solution, which may be finite or infinite: \begin{thm}\label{bound-four} Let $X$ be a closed oriented Riemannian 4-manifold. Suppose that there is a quasi-non-singular solution $\{g(t)\}$, $t \in [0, \infty)$, to the normalized Ricci flow in the sense of Definition \ref{bs}. If the Yamabe invariant of $X$ is negative, i.e., ${\mathcal Y}(X)<0$, then the following holds: \begin{eqnarray*} 2 \chi(X) -3|\tau(X)| \geq \frac{1}{96{\pi}^2}|{\mathcal Y}(X)|^2. \end{eqnarray*} In particular, $X$ must satisfy the strict FZZ inequality \begin{eqnarray*} 2 \chi(X) > 3|\tau(X)| \end{eqnarray*} in this case. Moreover, if the solution is non-singular in the sense of Definition \ref{non-sin}, then the solution does not collapse. \end{thm} \begin{proof} Suppose that there is a quasi-non-singular solution $\{g(t)\}$, $t \in [0, \infty)$, to the normalized Ricci flow. By Proposition \ref{yama-b} and the assumption that ${\mathcal Y}(X)<0$, the solution automatically satisfies \begin{eqnarray*} \hat{s}_{g(t)} \leq \frac{{\mathcal Y}(X)}{(vol_{g(0)})^{1/2}} < 0, \end{eqnarray*} since $2/n = 1/2$ in dimension 4. In particular, the solution satisfies the bound (\ref{FZZ-s}). By the proof of Theorem \ref{fz-key} above, we are able to obtain the following bound because there is a quasi-non-singular solution with the uniform bound (\ref{FZZ-s}): \begin{eqnarray*} 2\chi(X) \pm 3\tau(X) &\geq& \liminf_{m \longrightarrow \infty}\frac{1}{4{\pi}^2}{\int}^{m+1}_{m} {\int}_{X}\Big(2|W^{\pm}_{g(t)}|^2+\frac{{s}^2_{g(t)}}{24}\Big) d{\mu}_{g(t)}dt \\ &\geq& \liminf_{m \longrightarrow \infty}\frac{1}{96{\pi}^2}{\int}^{m+1}_{m} {\int}_{X}{{s}^2_{g(t)}} d{\mu}_{g(t)}dt.
\end{eqnarray*} On the other hand, we have the equality (\ref{yama-def-2}) under ${\mathcal Y}(X)<0$. In the case $n=4$, this tells us that \begin{eqnarray*} |{\mathcal Y}(X)|^2 = \inf_{g}{\int}_{X}s^{2}_{g} d{\mu}_{g}. \end{eqnarray*} We therefore have \begin{eqnarray*} 2\chi(X) \pm 3\tau(X) &\geq& \liminf_{m \longrightarrow \infty}\frac{1}{96{\pi}^2}{\int}^{m+1}_{m} {\int}_{X}{{s}^2_{g(t)}} d{\mu}_{g(t)}dt \\ &\geq& \frac{1}{96{\pi}^2}|{\mathcal Y}(X)|^2. \end{eqnarray*} Since $|{\mathcal Y}(X)| \not=0$, we obtain, in particular, $2 \chi(X) > 3|\tau(X)|$ as desired. On the other hand, as was already used in \cite{ha-1} and \cite{fz-1}, Cheeger-Gromov's collapsing theorem \cite{c-g} tells us that $X$ must satisfy $\chi(X) = 0$ if it collapses with bounded sectional curvature. However, $X$ now satisfies $2 \chi(X) > 3|\tau(X)|$ and hence $\chi(X) \not=0$. Therefore, we are able to conclude that if the solution is non-singular in the sense of Definition \ref{non-sin}, then the solution does not collapse. \end{proof} \begin{rmk} It is a natural question to ask whether or not a similar bound holds in the case where ${\mathcal Y}(X) \geq 0$. Suppose now that a given closed 4-manifold $X$ has ${\mathcal Y}(X) > 0$. Notice that the positivity of the Yamabe invariant of $X$ implies the existence of a Riemannian metric $g$ of positive scalar curvature on $X$. According to Proposition 2.2 in \cite{fz-1}, any non-singular solution to the normalized Ricci flow on $X$ with the positive scalar curvature metric $g$ as an initial metric always converges along a subsequence of times to a shrinking Ricci soliton $h$. If $h$ is a {\it gradient} shrinking Ricci soliton, then the following bound (\ref{soli}) is known. In this case, there are a smooth function $f$ and a positive constant $\lambda >0$ satisfying \begin{eqnarray*} {Ric}_{h}={\lambda}h+{D}^2f, \end{eqnarray*} where ${D}^2f$ is the Hessian of the Ricci potential function $f$ with respect to $h$.
Under the following constraint on the Ricci potential function $f$ \begin{eqnarray*} {\int}_{X}f{d}{\mu}_{h}=0, \end{eqnarray*} one can see from the proof of the main theorem of Ma \cite{li-ma} that the following bound holds: \begin{eqnarray}\label{soli} 2\chi(X)-{3}|\tau(X)| \geq \frac{1}{48{\pi}^2}{\mathcal A}_{\frac{3}{2}}(X, h). \end{eqnarray} Here, for any positive constant $a$, define \begin{eqnarray*} {\mathcal A}_{a}(X, h):=a \Big(\frac{({\int}_{X}s_{h} d{\mu}_{h})^2}{vol_{h}}\Big)-{\int}_{X}{s}_{h}^2d{\mu}_{h}. \end{eqnarray*} Notice that, by the Schwarz inequality, ${\mathcal A}_{1}(X, h) \leq 0$ holds. ${\mathcal A}_{\frac{3}{2}}(X, h) \geq 0$ is equivalent to ${\int}_{X}s^2_{h} d{\mu}_{h} \leq 24{\lambda}^2{vol_{h}}$. See the bound (1) in the main theorem of Ma \cite{li-ma}. We also notice that there is a conjecture of Hamilton which asserts that any compact gradient shrinking Ricci soliton with positive curvature operator must be Einstein. See the interesting article of Cao \cite{cao-X}, which includes a partial affirmative answer under a certain integral inequality concerning the Ricci soliton. \end{rmk} On the other hand, let us next briefly recall the definition of Perelman's $\bar{\lambda}$ invariant \cite{p-1, p-2, lott}. We shall first recall the entropy functional, the so-called ${\cal F}$-functional, introduced and investigated by Perelman \cite{p-1}. Let $X$ be a closed oriented Riemannian manifold of dimension $n$ and $g$ any Riemannian metric on $X$. We shall denote the space of all Riemannian metrics on $X$ by ${\cal R}_{X}$ and the space of all $C^{\infty}$ functions on $X$ by $C^{\infty}(X)$.
The ${\cal F}$-functional is then the functional ${\cal F} : {\cal R}_{X} \times C^{\infty}(X) \rightarrow {\mathbb R}$ defined by \begin{eqnarray*} {\cal F}(g, f):={\int}_{X}({s}_{g} + |{\nabla}f|^{2}){e}^{-f} d\mu_{g}, \end{eqnarray*} where $f \in C^{\infty}(X)$, ${s}_{g}$ is again the scalar curvature and $d\mu_{g}$ is the volume measure with respect to $g$. It is then known that, for a given metric $g$, there exists a unique minimizer of the ${\cal F}$-functional under the constraint ${\int}_{X}{e}^{-f} d\mu_{g} =1$. Hence it is natural to consider the following, the so-called Perelman $\lambda$-functional: \begin{eqnarray*} {{\lambda}}_g:=\inf_{f} \ \{ {\cal F}(g, f) \ | \ {\int}_{X}{e}^{-f} d\mu_{g} =1 \}. \end{eqnarray*} It turns out that $\lambda_g$ is nothing but the least eigenvalue of the elliptic operator $4 \Delta_g+s_g$, where $\Delta = d^*d= - \nabla\cdot\nabla $ is the positive-spectrum Laplace-Beltrami operator associated with $g$. Consider the scale-invariant quantity $\lambda_g (vol_g)^{2/n}$. Then Perelman's $\bar{\lambda}$ invariant of $X$ is defined to be \begin{eqnarray*}\label{p-inv} \bar{\lambda}(X)= \sup_g \lambda_g (vol_g)^{2/n}, \end{eqnarray*} where the supremum is taken over all smooth metrics $g$ on $X$. This quantity is closely related to the Yamabe invariant. In fact, the following result holds: \begin{thm}[\cite{A-ishi-leb-3}]\label{AIL} Let $X$ be a closed oriented Riemannian $n$-manifold, $n\geq 3$. Then \begin{eqnarray*} \bar{\lambda}(X) = \begin{cases} {\mathcal Y}(X) & \text{ if } {\mathcal Y}(X) \leq 0, \\ +\infty & \text{ if } {\mathcal Y}(X) > 0. \end{cases} \end{eqnarray*} \end{thm} Theorem \ref{bound-four} and Theorem \ref{AIL} immediately imply \begin{cor}\label{bound-four-perel} Let $X$ be a closed oriented Riemannian 4-manifold. Suppose that there is a quasi-non-singular solution $\{g(t)\}$, $t \in [0, \infty)$, to the normalized Ricci flow in the sense of Definition \ref{bs}.
If Perelman's $\bar{\lambda}$ invariant of $X$ is negative, i.e., $\bar{\lambda}(X)<0$, then the following holds: \begin{eqnarray*} 2 \chi(X) -3|\tau(X)| \geq \frac{1}{96{\pi}^2}|\bar{\lambda}(X)|^2. \end{eqnarray*} In particular, $X$ must satisfy the strict FZZ inequality $2 \chi(X) > 3|\tau(X)|$ in this case. Moreover, if the solution is non-singular in the sense of Definition \ref{non-sin}, then the solution does not collapse. \end{cor} Notice that this corollary was first proved in \cite{fz-1} under the assumption that the solution to the normalized Ricci flow has unit volume. See Theorem 1.4 in \cite{fz-1}. \section{Curvature Bounds and Convex Hull of the Set of Monopole Classes}\label{monopoles} By important works \cite{leb-1, leb-2, leb-4, leb-11, leb-12, leb-17} of LeBrun, it is now well known that the Seiberg-Witten monopole equations \cite{w} lead to a remarkable family of curvature estimates which have many strong applications to 4-dimensional geometry. In this section, following a recent beautiful article \cite{leb-17} of LeBrun, we shall first recall these curvature estimates in terms of the {\it convex hull} of the set of all monopole classes on 4-manifolds. We shall use these estimates to prove new obstructions to the existence of non-singular solutions to the normalized Ricci flow in Section 5 below. The main results of this section are Theorems \ref{yamabe-pere} and \ref{key-mono-b} below. \par For the convenience of the reader who is unfamiliar with Seiberg-Witten theory, we shall briefly recall the definition of the Seiberg-Witten monopole equations. Let $X$ be a closed oriented Riemannian 4-manifold and assume that $X$ satisfies ${b}^{+}(X) \geq 2$, where ${b}^{+}(X)$ stands again for the dimension of a maximal positive definite subspace of ${H}^{2}(X, {\mathbb R})$ with respect to the intersection form.
Recall that a ${spin}^{c}$-structure $\Gamma_{X}$ on a smooth Riemannian 4-manifold $X$ induces a pair of spinor bundles ${S}^{\pm}_{\Gamma_{X}}$ which are Hermitian vector bundles of rank 2 over $X$. A Riemannian metric on $X$ and a unitary connection $A$ on the determinant line bundle ${\cal L}_{\Gamma_{X}} := det({S}^{+}_{\Gamma_{X}})$ induce the twisted Dirac operator ${\cal D}_{{A}} : \Gamma({S}^{+}_{\Gamma_{X}}) \longrightarrow \Gamma({S}^{-}_{\Gamma_{X}})$. The Seiberg-Witten monopole equations over $X$ are the following system of non-linear partial differential equations for a unitary connection $A \in {\cal A}_{{\cal L}_{\Gamma_{X}}}$ and a spinor $\phi \in \Gamma({S}^{+}_{\Gamma_{X}})$: \begin{eqnarray}\label{sw-mono} {\cal D}_{{A}}{\phi} = 0, \ {F}^{+}_{{A}} = iq(\phi), \end{eqnarray} where ${F}^{+}_{{A}}$ is the self-dual part of the curvature of $A$ and $q : {S}^{+}_{\Gamma_{X}} \rightarrow {\wedge}^{+}$ is a certain natural real-quadratic map satisfying \begin{eqnarray*} |q(\phi)|=\frac{1}{2\sqrt{2}}|\phi|^2, \end{eqnarray*} where ${\wedge}^{+}$ is the bundle of self-dual 2-forms. \par We are now in a position to recall the definition of a monopole class \cite{kro, leb-11, ishi-leb-1, ishi-leb-2, leb-17}. \begin{defn}\label{ishi-leb-2-key} Let $X$ be a closed oriented smooth 4-manifold with $b^+(X) \geq 2$. An element $\frak{a} \in H^2(X, {\mathbb Z})$/torsion $\subset H^2(X, {\mathbb R})$ is called a monopole class of $X$ if there exists a spin${}^c$-structure $\Gamma_{X}$ with \begin{eqnarray*} {c}^{\mathbb R}_{1}({\cal L}_{\Gamma_{X}}) = \frak{a} \end{eqnarray*} which has the property that the corresponding Seiberg-Witten monopole equations (\ref{sw-mono}) have a solution for every Riemannian metric on $X$. Here ${c}^{\mathbb R}_{1}({\cal L}_{\Gamma_{X}})$ is the image of the first Chern class ${c}_{1}({\cal L}_{\Gamma_{X}})$ of the complex line bundle ${\cal L}_{\Gamma_{X}}$ in $H^2(X, {\mathbb R})$.
We shall denote the set of all monopole classes on $X$ by ${\frak C}(X)$. \end{defn} Crucial properties of the set ${\frak C}(X)$ are summarized as follows \cite{leb-17, ishi-leb-2}: \begin{prop}[\cite{leb-17}]\label{mono} Let $X$ be a closed oriented smooth 4-manifold with $b^+(X) \geq 2$. Then ${\frak C}(X)$ is a finite set. Moreover, ${\frak C}(X) = -{\frak C}(X)$ holds, i.e., $\frak{a} \in H^2(X, {\mathbb R})$ is a monopole class if and only if $-\frak{a} \in H^2(X, {\mathbb R})$ is a monopole class, too. \end{prop} These properties of ${\frak C}(X)$, which sits in the real vector space $H^2(X, {\mathbb R})$, naturally lead us to consider the convex hull ${\bf{Hull}}({\frak C}(X))$ of ${\frak C}(X)$. Recall that, for any subset $W$ of a real vector space $V$, one can consider the convex hull ${\bf{Hull}}(W) \subset V$, meaning the smallest convex subset of $V$ containing $W$. Then, Proposition \ref{mono} immediately implies the following result: \begin{prop}[\cite{leb-17}]\label{mono-leb} Let $X$ be a closed oriented smooth 4-manifold with $b^+(X) \geq 2$. Then the convex hull ${\bf{Hull}}({\frak C}(X)) \subset H^2(X, {\mathbb R})$ of ${\frak C}(X)$ is compact and symmetric, i.e., ${\bf{Hull}}({\frak C}(X)) = -{\bf{Hull}}({\frak C}(X))$. \end{prop} By Proposition \ref{mono}, ${\frak C}(X)$ is a finite set and hence we are able to write ${\frak C}(X)=\{{\frak{a}}_{1},{\frak{a}}_{2}, \cdots, {\frak{a}}_{n} \}$. The convex hull ${\bf{Hull}}({\frak C}(X))$ is then expressed as follows: \begin{eqnarray}\label{hull} {\bf{Hull}}({\frak C}(X))= \{ \sum^{n}_{i=1}t_{i} {\frak{a}}_{i} \ | \ t_{i} \in [0,1], \ \sum^{n}_{i=1}t_{i}=1 \}. \end{eqnarray} Notice also that the symmetric property tells us that ${\bf{Hull}}({\frak C}(X))$ contains the zero element.
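Indeed, this last assertion can be checked directly. If ${\frak C}(X) \not= \emptyset$, pick any ${\frak{a}} \in {\frak C}(X)$; then $-{\frak{a}} \in {\frak C}(X)$ by Proposition \ref{mono}, and hence, by convexity, \begin{eqnarray*} 0 = \frac{1}{2}{\frak{a}} + \frac{1}{2}(-{\frak{a}}) \in {\bf{Hull}}({\frak C}(X)). \end{eqnarray*}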
\par Now, consider the following self-intersection function: \begin{eqnarray*} {\cal Q} : H^2(X, {\mathbb R}) \rightarrow {\mathbb R} \end{eqnarray*} which is defined by $x \mapsto x^2:=\langle x \cup x, [X]\rangle$, where $[X]$ is the fundamental class of $X$. Since this function ${\cal Q}$ is a polynomial function, it is continuous on $H^2(X, {\mathbb R})$. We can therefore conclude that the restriction ${\cal Q} |_{{\bf{Hull}}({\frak C}(X))}$ to the compact subset ${\bf{Hull}}({\frak C}(X))$ of $H^2(X, {\mathbb R})$ must achieve its maximum. This leads us naturally to introduce the following quantity ${\beta}^2(X)$: \begin{defn}[\cite{leb-17}]\label{beta} Suppose that $X$ is a closed oriented smooth 4-manifold with $b^+(X) \geq 2$. Let ${\bf{Hull}}({\frak C}(X)) \subset H^2(X, {\mathbb R})$ be the convex hull of the set ${\frak C}(X)$ of all monopole classes on $X$. If ${\frak C}(X) \not= \emptyset$, define \begin{eqnarray*} {\beta}^2(X):= \max \{ {\cal Q}(x):=x^2 \ | \ x \in {\bf{Hull}}({\frak C}(X)) \}. \end{eqnarray*} On the other hand, if ${\frak C}(X) = \emptyset$ holds, simply define ${\beta}^2(X):=0$. \end{defn} Since ${\bf{Hull}}({\frak C}(X))$ contains the zero element, the above definition particularly implies that ${\beta}^2(X) \geq 0$ holds. \par On the other hand, the Hodge star operator associated to a given metric $g$ defines an involution on the real vector space $H^2(X, {\mathbb R})$ and this gives rise to an eigenspace decomposition: \begin{eqnarray}\label{h-de} H^2(X, {\mathbb R}) = {\cal H}^+_{g} \oplus {\cal H}^-_{g}, \end{eqnarray} where ${\cal H}^{\pm}_{g}:=\{ \psi \in \Gamma({\wedge}^{\pm}) \ | \ d \psi=0 \}$ are the spaces of self-dual and anti-self-dual harmonic 2-forms. Notice that this decomposition depends on the metric $g$. This dependence can also be described in terms of the {\it period map}.
In fact, consider the following map, the so-called period map of the Riemannian 4-manifold $X$: \begin{eqnarray}\label{h-de-p} {\cal P} : {\cal R}_{X} \longrightarrow {Gr}^+_{b^+(X)} \Big(H^2(X, {\mathbb R}) \Big) \end{eqnarray} which is defined by $g \mapsto {\cal H}^+_{g}$. Here, ${\cal R}_{X}$ is the infinite dimensional space of all Riemannian metrics on $X$ and ${Gr}^+_{b^+(X)} \Big(H^2(X, {\mathbb R}) \Big)$ is the finite dimensional Grassmannian of $b^+(X)$-dimensional subspaces of $H^2(X, {\mathbb R})$ on which the intersection form of $X$ is positive definite. Namely, we are able to conclude that the decomposition (\ref{h-de}) depends on the image of the metric $g$ under the period map (\ref{h-de-p}). \par Now, let $\frak{a} \in H^2(X, {\mathbb R})$ be a monopole class of $X$. Then we can consider the self-dual part $\frak{a}^+$ of $\frak{a}$ with respect to the decomposition (\ref{h-de}) and take the square $\big(\frak{a}^+ \big)^2$. From the above argument, it is clear that this quantity $\big(\frak{a}^+ \big)^2$ also depends on the image of the metric $g$ under the period map (\ref{h-de-p}). On the other hand, the quantity ${\beta}^2(X)$ introduced in Definition \ref{beta} above does not depend on the metric, and hence does not depend on the period map (\ref{h-de-p}). One of the important observations made in \cite{leb-17} is the following: \begin{prop}[\cite{leb-17}]\label{main-leb} Let $X$ be a closed oriented smooth 4-manifold with $b^+(X) \geq 2$. Suppose that ${\frak C}(X) \not= \emptyset$. Then, for any Riemannian metric $g$ on $X$, there is a monopole class $\frak{a} \in {\frak C}(X)$ satisfying \begin{eqnarray}\label{h-de-p-1} \big(\frak{a}^+ \big)^2 \geq {\beta}^2(X). \end{eqnarray} \end{prop} On the other hand, it is well known that, as was first pointed out by Witten \cite{w}, the existence of a monopole class gives rise to an a priori lower bound on the $L^2$-norm of the scalar curvature of Riemannian metrics.
A refined version of it was proved by LeBrun \cite{leb-2, leb-17}: \begin{prop}[\cite{leb-2, leb-17}]\label{slight-leb} Let $X$ be a closed oriented smooth 4-manifold with $b^+(X) \geq 2$ and a monopole class $\frak{a} \in H^2(X, {\mathbb Z})$/torsion $\subset H^2(X, {\mathbb R})$. Let $g$ be any Riemannian metric on $X$ and let $\frak{a}^+$ be the self-dual part of $\frak{a}$ with respect to the decomposition $H^2(X, {\mathbb R}) = {\cal H}^+_{g} \oplus {\cal H}^-_{g}$, identified with the space of $g$-harmonic 2-forms, into eigenspaces of the Hodge star operator. Then, the scalar curvature $s_{g}$ of $g$ must satisfy the following bound: \begin{eqnarray}\label{sca-leb} {\int}_{X}{{s}^2_{g}}d{\mu}_{g} \geq {32}{\pi}^{2}\big(\frak{a}^+ \big)^2. \end{eqnarray} If $\frak{a}^+ \not=0$, furthermore, equality holds if and only if there is an integrable complex structure $J$ with ${c}^{\mathbb R}_{1}(X, J)=\frak{a}$ such that $(X, g, J)$ is a K{\"{a}}hler manifold of constant negative scalar curvature. \end{prop} In \cite{leb-11, leb-17}, LeBrun moreover found that the existence of a monopole class implies an estimate involving both the scalar curvature and the self-dual Weyl curvature as follows: \begin{prop}[\cite{leb-11, leb-17}]\label{fami-leb} Let $X$ be a closed oriented smooth 4-manifold with $b^+(X) \geq 2$ and a monopole class $\frak{a} \in H^2(X, {\mathbb Z})$/torsion $\subset H^2(X, {\mathbb R})$. Let $g$ be any Riemannian metric on $X$ and let $\frak{a}^+$ be the self-dual part of $\frak{a}$ with respect to the decomposition $H^2(X, {\mathbb R}) = {\cal H}^+_{g} \oplus {\cal H}^-_{g}$, identified with the space of $g$-harmonic 2-forms, into eigenspaces of the Hodge star operator.
Then, the scalar curvature $s_{g}$ and the self-dual Weyl curvature $W^{+}_{g}$ of $g$ satisfy the following: \begin{eqnarray}\label{weyl-leb} {\int}_{X}\Big({s}_{g}-\sqrt{6}|W^{+}_{g}|\Big)^2 d{\mu}_{g} \geq 72{\pi}^{2}\big(\frak{a}^+ \big)^2, \end{eqnarray} where the pointwise norms are calculated with respect to $g$. If $\frak{a}^+ \not =0$, furthermore, equality holds if and only if there is a symplectic form $\omega$, whose de Rham class $[\omega]$ is a negative multiple of $\frak{a}^+$ and ${c}^{\mathbb R}_{1}(X, \omega)=\frak{a}$, such that $(X, g, \omega)$ is an almost-K{\"{a}}hler manifold with the following properties: \begin{itemize} \item $2{s}_{g} + |\nabla \omega|^2$ is a negative constant; \item $\omega$ belongs to the lowest eigenspace of $W^+_{g} : \wedge^+ \rightarrow \wedge^+$ everywhere; and \item the two largest eigenvalues of $W^+_{g} : \wedge^+ \rightarrow \wedge^+$ are everywhere equal. \end{itemize} \end{prop} Notice that, as was already mentioned above, the lower bounds of both (\ref{sca-leb}) and (\ref{weyl-leb}) depend on the image of the Riemannian metric under the period map (\ref{h-de-p}). This means that these curvature estimates are not uniform in the metric. Propositions \ref{main-leb}, \ref{slight-leb} and \ref{fami-leb} together imply, however, the following curvature estimates which do not depend on the image of the Riemannian metric under the period map (\ref{h-de-p}): \begin{thm}[\cite{leb-17}]\label{beta-ine-key} Suppose that $X$ is a closed oriented smooth 4-manifold with $b^+(X) \geq 2$.
Then any Riemannian metric $g$ on $X$ satisfies the following curvature estimates: \begin{eqnarray}\label{weyl-leb-sca-1} {\int}_{X}{{s}^2_{g}}d{\mu}_{g} \geq {32}{\pi}^{2}\beta^2(X), \end{eqnarray} \begin{eqnarray}\label{weyl-leb-sca-2} {\int}_{X}\Big({s}_{g}-\sqrt{6}|W^{+}_{g}|\Big)^2 d{\mu}_{g} \geq 72{\pi}^{2}\beta^2(X), \end{eqnarray} where $s_{g}$ and $W^{+}_{g}$ denote respectively the scalar curvature and the self-dual Weyl curvature of $g$. If $X$ has a non-zero monopole class, then moreover equality occurs in either the first or the second estimate if and only if $g$ is a K{\"{a}}hler-Einstein metric with negative scalar curvature. \end{thm} Notice that if $X$ has no monopole class, we defined $\beta^2(X):=0$ (see Definition \ref{beta} above). On the other hand, notice also that the left-hand sides of these two curvature estimates in Theorem \ref{beta-ine-key} are always non-negative. Therefore, Propositions \ref{main-leb}, \ref{slight-leb} and \ref{fami-leb} indeed tell us that the desired estimates hold. To prove the statement of the boundary case, we need to analyze the curvature estimates more deeply. See the proof of Theorem 4.10 in \cite{leb-17}. \par As a corollary of the second curvature estimate, we particularly obtain the following curvature bound (cf. Proposition 3.1 in \cite{leb-11}): \begin{cor}\label{bound-cor} Let $X$ be a closed oriented smooth 4-manifold with $b^+(X) \geq 2$. Then any Riemannian metric $g$ on $X$ satisfies the following curvature estimate: \begin{eqnarray}\label{monopole-123} \frac{1}{4{\pi}^2}{\int}_{X}\Big(2|W^{+}_{g}|^2+\frac{{s}^2_{g}}{24}\Big) d{\mu}_{g} \geq \frac{2}{3}\beta^2(X), \end{eqnarray} where $s_{g}$ and $W^{+}_{g}$ denote respectively the scalar curvature and the self-dual Weyl curvature of $g$. If $X$ has a non-zero monopole class, then moreover equality occurs in the above estimate if and only if $g$ is a K{\"{a}}hler-Einstein metric with negative scalar curvature.
\end{cor} \begin{proof} First of all, we have the curvature estimate (\ref{weyl-leb-sca-2}): \begin{eqnarray}\label{dot-1} {\int}_{X}\Big({s}_{g}-\sqrt{6}|W^{+}_{g}|\Big)^2 d{\mu}_{g} \geq 72{\pi}^{2}\beta^2(X). \end{eqnarray} By multiplying this by $4/9$, we are able to get \begin{eqnarray*} {\int}_{X}\Big(\frac{2}{3}{s}_{g}-2\sqrt{\frac{2}{3}}|W^{+}_{g}|\Big)^2 d{\mu}_{g} \geq 32{\pi}^{2}\beta^2(X). \end{eqnarray*} We are able to rewrite this estimate as follows: \begin{eqnarray*} ||\frac{2}{3}{s}_{g}-2\sqrt{\frac{2}{3}}|W^{+}_{g}| || \geq 4\sqrt{2}{\pi}\sqrt{\beta^2(X)}, \end{eqnarray*} where $|| \cdot ||$ is the $L^2$ norm with respect to $g$ and notice that we always have $\beta^2(X) \geq 0$. The rest of the proof is essentially the same as that of Proposition 3.1 in \cite{leb-11}. For completeness, let us include the proof. Indeed, by the triangle inequality, we get the following estimate from the above \begin{eqnarray}\label{dot} \frac{2}{3}||{s}_{g}|| +\frac{1}{3}||\sqrt{24} |W^{+}_{g}| || \geq 4\sqrt{2}{\pi}\sqrt{\beta^2(X)}. \end{eqnarray} The left-hand side of this can be interpreted as the dot product in $\mathbb R^2$: \begin{eqnarray*} \Big(\frac{2}{3}, \frac{1}{3\sqrt{2}} \Big) \cdot \Big(||{s}_{g}||, || \sqrt{48}|W^{+}_{g}| || \Big)=\frac{2}{3}||{s}_{g}||+\frac{1}{3}||\sqrt{24} |W^{+}_{g}| ||. \end{eqnarray*} By applying the Cauchy-Schwarz inequality, we have \begin{eqnarray}\label{Cauchy-Schwartz} \Big( \Big(\frac{2}{3}\Big)^2 + \Big(\frac{1}{3\sqrt{2}} \Big)^2 \Big)^{\frac{1}{2}}\Big( {\int}_{X}({s}^2_{g}+48|W^{+}_{g}|^2)d{\mu}_{g} \Big)^{\frac{1}{2}} \geq \frac{2}{3}||{s}_{g}||+\frac{1}{3}||\sqrt{24} |W^{+}_{g}| ||. \end{eqnarray} On the other hand, notice that \begin{eqnarray*} \Big( \Big(\frac{2}{3}\Big)^2 + \Big(\frac{1}{3\sqrt{2}} \Big)^2 \Big)^{\frac{1}{2}}\Big( {\int}_{X}({s}^2_{g}+48|W^{+}_{g}|^2)d{\mu}_{g} \Big)^{\frac{1}{2}} = \frac{1}{\sqrt{2}}\Big({\int}_{X}({s}^2_{g}+48|W^{+}_{g}|^2)d{\mu}_{g}\Big)^{\frac{1}{2}}.
\end{eqnarray*} This with the bounds (\ref{dot}) and (\ref{Cauchy-Schwartz}) tells us that \begin{eqnarray*} \frac{1}{\sqrt{2}}\Big({\int}_{X}({s}^2_{g}+48|W^{+}_{g}|^2)d{\mu}_{g}\Big)^{\frac{1}{2}} \geq 4\sqrt{2}{\pi}\sqrt{\beta^2(X)}. \end{eqnarray*} Thus we have \begin{eqnarray*} \frac{1}{{2}}{\int}_{X}\Big({s}^2_{g}+48|W^{+}_{g}|^2 \Big)d{\mu}_{g} \geq 32{\pi}^2\beta^2(X). \end{eqnarray*} This immediately implies the desired bound: \begin{eqnarray*} \frac{1}{4{\pi}^2}{\int}_{X}\Big(2|W^{+}_{g}|^2+\frac{{s}^2_{g}}{24}\Big) d{\mu}_{g} \geq \frac{2}{3}\beta^2(X). \end{eqnarray*} Finally, if $X$ has a non-zero monopole class and, moreover, equality occurs in the above estimate, then the above argument tells us that equality occurs in (\ref{dot-1}). Therefore the last claim follows from the last assertion in Theorem \ref{beta-ine-key}. \end{proof} On the other hand, we use the following result to prove Theorem \ref{yamabe-pere} below: \begin{prop}[\cite{leb-17}]\label{positive-mono} Let $X$ be a closed oriented smooth 4-manifold with $b^+(X) \geq 2$. If there is a non-zero monopole class $\frak{a} \in {H}^2(X, {\mathbb R}) - \{0\}$, then $X$ cannot admit any Riemannian metric $g$ of scalar curvature $s_{g} \geq 0$. \end{prop} This result is well known to experts in Seiberg-Witten theory. We would like to point out, however, that a complete proof first appeared in the proof of Proposition 3.3 in \cite{leb-17}. \par On the other hand, there are several ways to detect the existence of monopole classes. For example, if $X$ is a closed symplectic 4-manifold with ${b}^{+}(X) \geq 2$, then $\pm c_{1}(K_{X})$ are both monopole classes by the celebrated result of Taubes \cite{t-1}, where $c_{1}(K_{X})$ is the first Chern class of the canonical bundle $K_{X}$ of $X$. This is proved by regarding the moduli space of solutions of the Seiberg-Witten monopole equations as a cycle which represents an element of the homology of a certain configuration space.
More precisely, for any closed oriented smooth 4-manifold $X$ with $b^+(X) \geq 2$, one can define the integer valued Seiberg-Witten invariant $SW_{X}(\Gamma_{X}) \in {\mathbb Z}$ for any spin${}^{c}$-structure $\Gamma_{X}$ by integrating a cohomology class on the moduli space of solutions of the Seiberg-Witten monopole equations associated to $\Gamma_{X}$: \begin{eqnarray*} SW_{X} : Spin(X) \longrightarrow {\mathbb Z}, \end{eqnarray*} where $Spin(X)$ is the set of all spin${}^c$-structures on $X$. For more details, see \cite{w, nico}. Taubes indeed proved that, for any closed symplectic 4-manifold $X$ with ${b}^{+}(X) \geq 2$, $SW_{X}(\hat{\Gamma}_{X}) \equiv 1 \ (\bmod \ 2)$ holds for the canonical spin${}^{c}$-structure $\hat{\Gamma}_{X}$ induced from the symplectic structure. This actually implies that $\pm c_{1}(K_{X})$ are monopole classes of $X$. \par On the other hand, there is a sophisticated refinement of the idea of this construction. It detects the presence of a monopole class by an element of a stable cohomotopy group. This is due to Bauer and Furuta \cite{b-f, b-1}. They interpreted the Seiberg-Witten monopole equations as a map between two Hilbert bundles over the Picard torus of a 4-manifold $X$. The map is called the Seiberg-Witten map (or monopole map). Roughly speaking, the cohomotopy refinement of the integer valued Seiberg-Witten invariant is defined by taking an equivariant stable cohomotopy class of the finite dimensional approximation of the Seiberg-Witten map. The invariant takes its value in a certain complicated equivariant stable cohomotopy group. We notice that the Seiberg-Witten moduli space {\it does not} appear in their construction.
By using the stable cohomotopy refinement of the Seiberg-Witten invariant, the following result was proved, essentially, by LeBrun and the present author (Proposition 10 and Corollary 11 in \cite{ishi-leb-2}): \begin{prop}\label{prop-2} For $i= 1,2,3,4$, suppose that $X_{i}$ is a closed almost-complex 4-manifold whose integer valued Seiberg-Witten invariant satisfies $SW_{X_{i}}(\Gamma_{X_{i}}) \equiv 1 \ (\bmod \ 2)$, where $\Gamma_{X_{i}}$ is the spin${}^c$-structure compatible with the almost-complex structure. Moreover, assume that the following conditions are satisfied: \begin{itemize} \item $b_{1}(X_{i})=0$, \ $b^{+}(X_{i}) \equiv 3 \ (\bmod \ 4)$, \item $\displaystyle\sum^{4}_{i=1}b^{+}(X_{i}) \equiv 4 \ (\bmod \ 8)$. \end{itemize} Suppose that $N$ is a closed oriented smooth 4-manifold with $b^{+}(N)=0$ and let $E_{1}, E_{2}, \cdots, E_{k}$ be a set of generators for $H^2(N, {\mathbb Z})$/torsion relative to which the intersection form is diagonal. Then, for any $j=1, 2,3,4$, \begin{eqnarray}\label{mono-cone} \sum^{j}_{i=1} \pm {c}_{1}(X_{i}) + \sum^{k}_{i=1} \pm{E}_{i} \end{eqnarray} is a monopole class of $M:=\Big(\#^{j}_{i=1}{X}_{i} \Big) \# N$, where ${c}_{1}(X_{i})$ is the first Chern class of the canonical bundle of the almost-complex 4-manifold $X_{i}$ and the $\pm$ signs are arbitrary, and are independent of one another. Moreover, for any $j=1, 2,3,4$, \begin{eqnarray}\label{monopole-123446} \beta^2(M) \geq \sum^{j}_{i=1}{c}^2_{1}(X_{i}). \end{eqnarray} \end{prop} \begin{proof} Thanks to Proposition 10 in \cite{ishi-leb-2}, it is enough to prove only the bound (\ref{monopole-123446}). First of all, by the very definition, we have \begin{eqnarray*} {\beta}^2(M):= \max \{ {\cal Q}(x):=x^2 \ | \ x \in {\bf{Hull}}({\frak C}(M)) \}.
\end{eqnarray*} On the other hand, by (\ref{mono-cone}), we especially have the following two monopole classes of $M$: \begin{eqnarray*} {\frak{a}}_{1}:=\sum^{j}_{i=1} {c}_{1}(X_{i}) + \sum^{k}_{i=1} {E}_{i}, \ {\frak{a}}_{2}:=\sum^{j}_{i=1} {c}_{1}(X_{i}) - \sum^{k}_{i=1} {E}_{i}. \end{eqnarray*} By (\ref{hull}), we are able to conclude that \begin{eqnarray*} \sum^{j}_{i=1} {c}_{1}(X_{i})= \frac{1}{2}{\frak{a}}_{1}+\frac{1}{2}{\frak{a}}_{2} \in {\bf{Hull}}({\frak C}(M)). \end{eqnarray*} We therefore obtain \begin{eqnarray*} {\beta}^2(M) \geq \Big( \sum^{j}_{i=1} {c}_{1}(X_{i})\Big)^2=\sum^{j}_{i=1}{c}^2_{1}(X_{i}) \end{eqnarray*} as desired. \end{proof} Notice here that, in the case $j=1$, we assumed that $b_{1}=0$ and $b^{+} \equiv 3 \ (\bmod \ 4)$ hold. It turns out, however, that these conditions are superfluous, although this is not asserted in \cite{ishi-leb-2}. In fact, we are able to show the following: \begin{prop}\label{prop-3} Let $X$ be a closed almost-complex 4-manifold with a non-trivial integer valued Seiberg-Witten invariant $SW_{X}(\Gamma_{X}) \not=0$, where $\Gamma_{X}$ is the spin${}^c$-structure compatible with the almost complex structure. Let $N$ be a closed oriented smooth 4-manifold with $b^{+}(N)=0$ and let $E_{1}, E_{2}, \cdots, E_{k}$ be a set of generators for $H^2(N, {\mathbb Z})$/torsion relative to which the intersection form is diagonal. Then, \begin{eqnarray*} \pm {c}_{1}(X) + \sum^{k}_{i=1} \pm{E}_{i} \end{eqnarray*} is a monopole class of $M:={X} \# N$, where ${c}_{1}(X)$ is the first Chern class of the canonical bundle of $X$ and the $\pm$ signs are arbitrary, and are independent of one another. Moreover, the following holds: \begin{eqnarray}\label{monopole-1234} \beta^2(M) \geq {c}^2_{1}(X). \end{eqnarray} \end{prop} \begin{proof} It is known that there is a comparison map between the stable cohomotopy refinement of the Seiberg-Witten invariant and the integer valued Seiberg-Witten invariant \cite{b-f, b-2}.
In particular, Proposition 5.4 in \cite{b-2} tells us that the comparison map becomes an isomorphism when the given 4-manifold is almost-complex and ${b}^+ > 1$. Hence, the value of Bauer-Furuta's stable cohomotopy invariant of $X$ for the spin${}^c$-structure $\Gamma_{X}$ compatible with the almost complex structure is non-trivial if $X$ is a closed almost-complex 4-manifold with a non-trivial integer valued Seiberg-Witten invariant $SW_{X}(\Gamma_{X}) \not=0$. Moreover, the proofs of Proposition 6 and Corollary 8 in \cite{ishi-leb-2} (see also Theorem 8.8 in \cite{b-2}) imply that \begin{eqnarray}\label{mono-cone-1} \pm {c}_{1}(X) + \sum^{k}_{i=1} \pm{E}_{i} \end{eqnarray} is indeed a monopole class of the connected sum $M:={X} \# N$. \par On the other hand, the last claim can be seen as follows. Indeed, by (\ref{mono-cone-1}), we are able to obtain the following two monopole classes of $M$: \begin{eqnarray*} {\frak{b}}_{1}:={c}_{1}(X) + \sum^{k}_{i=1} {E}_{i}, \ {\frak{b}}_{2}:={c}_{1}(X) - \sum^{k}_{i=1} {E}_{i}. \end{eqnarray*} By (\ref{hull}), we obtain \begin{eqnarray*} {c}_{1}(X)= \frac{1}{2}{\frak{b}}_{1}+\frac{1}{2}{\frak{b}}_{2} \in {\bf{Hull}}({\frak C}(M)). \end{eqnarray*} We therefore get \begin{eqnarray*} {\beta}^2(M) \geq {c}^2_{1}(X) \end{eqnarray*} as promised. \end{proof} Theorem \ref{AIL}, Theorem \ref{beta-ine-key}, Proposition \ref{positive-mono}, Proposition \ref{prop-2}, and Proposition \ref{prop-3} together imply Theorem \ref{yamabe-pere} below. We shall use Theorem \ref{yamabe-pere} in the next section. Moreover, Theorem \ref{yamabe-pere} is of interest independently of the applications to the Ricci flow. Compare Theorem \ref{yamabe-pere} with Theorem A in \cite{ishi-leb-2} and several results of \cite{fang}: \begin{thm}\label{yamabe-pere} Let $N$ be a closed oriented smooth 4-manifold with $b^{+}(N)=0$. Let $X$ be a closed almost-complex 4-manifold with $b^{+}(X) \geq 2$ and $c^2_{1}(X)=2\chi(X) + 3 \tau(X) > 0$.
Assume that $X$ has a non-trivial integer valued Seiberg-Witten invariant $SW_{X}(\Gamma_{X}) \not=0$, where $\Gamma_{X}$ is the spin${}^c$-structure compatible with the almost-complex structure. Then, \begin{eqnarray}\label{one-ya} {\mathcal Y}(X \# N) = \bar{\lambda}(X \# N) \leq -4{\pi}\sqrt{2c^2_{1}(X)} < 0. \end{eqnarray} Moreover, if $X$ is a minimal K{\"{a}}hler surface and if $N$ admits a Riemannian metric of non-negative scalar curvature, then \begin{eqnarray*} {\mathcal Y}(X \# N) = \bar{\lambda}(X \# N) = -4{\pi}\sqrt{2c^2_{1}(X)} < 0. \end{eqnarray*} On the other hand, let ${X}_{i}$ be as in Proposition \ref{prop-2} and assume that $\sum^j_{i=1}c^2_{1}(X_{i})=\sum^j_{i=1}(2\chi(X_{i}) + 3 \tau(X_{i})) > 0$ is satisfied, where $j=2,3,4$. Then, for $j=2,3,4$, \begin{eqnarray}\label{se-ya} {\mathcal Y}((\#^{j}_{i=1}{X}_{i}) \# N) = \bar{\lambda}((\#^{j}_{i=1}{X}_{i}) \# N) \leq -4{\pi}\sqrt{2\sum^j_{i=1}c^2_{1}(X_{i})} < 0. \end{eqnarray} Moreover, if each $X_{i}$, where $i=1,2,3,4$, is a minimal K{\"{a}}hler surface and if $N$ admits a Riemannian metric of non-negative scalar curvature, then \begin{eqnarray*} {\mathcal Y}((\#^{j}_{i=1}{X}_{i}) \# N) = \bar{\lambda}((\#^{j}_{i=1}{X}_{i}) \# N) = -4{\pi}\sqrt{2\sum^j_{i=1}c^2_{1}(X_{i})} < 0. \end{eqnarray*} \end{thm} \begin{proof} First of all, the condition that $c^2_{1}(X)=2\chi(X) + 3 \tau(X) > 0$ forces $\frak{a}:=c^{\mathbb R}_{1}({\cal L}_{\Gamma_{X}})$ to be a non-zero monopole class. This fact with Proposition \ref{prop-3} allows us to conclude that the connected sum $X \# N$ has non-zero monopole classes. By Proposition \ref{positive-mono} and this fact, $X \# N$ does not admit any Riemannian metric $g$ with $s_{g} \geq 0$. This particularly implies that the Yamabe invariant of $X \# N$ is non-positive. By formula (\ref{yama-def-2}), we are able to obtain \begin{eqnarray}\label{sca-yama} {\mathcal Y}(X \# N) = - \Big(\inf_{g}{\int}_{X\# N}s^{{2}}_{g} d{\mu}_{g} \Big)^{{1}/{2}}.
\end{eqnarray} On the other hand, the bounds (\ref{weyl-leb-sca-1}) and (\ref{monopole-1234}) immediately imply \begin{eqnarray}\label{sca-1} {\mathcal I}_{s}(X \# N):=\inf_{g}{\int}_{X \# N}{{s}^2_{g}}d{\mu}_{g} \geq {32}{\pi}^{2}{c}^2_{1}(X). \end{eqnarray} We are therefore able to obtain the desired bound (\ref{one-ya}), where we used Theorem \ref{AIL}. On the other hand, it is known that, for any minimal K{\"{a}}hler surface $X$ with $b^+(X) \geq 2$, ${\mathcal I}_{s}(X)=32{\pi}^2c^2_{1}(X)$ holds \cite{leb-4, leb-7}. Moreover, ${\mathcal I}_{s}(N)=0$ holds because we assume that $N$ admits a Riemannian metric of non-negative scalar curvature. Proposition 13 of \cite{ishi-leb-2}, together with these facts, tells us that \begin{eqnarray}\label{sca-2} {\mathcal I}_{s}(X \# N) \leq {\mathcal I}_{s}(X) + {\mathcal I}_{s}(N) = 32{\pi}^2c^2_{1}(X). \end{eqnarray} It is clear that (\ref{sca-1}) and (\ref{sca-2}) imply ${\mathcal I}_{s}(X \# N)= {32}{\pi}^{2}{c}^2_{1}(X)$. This equality with (\ref{sca-yama}) and Theorem \ref{AIL} gives us the desired equality: \begin{eqnarray*} {\mathcal Y}(X \# N) = \bar{\lambda}(X \# N) = -4{\pi}\sqrt{2c^2_{1}(X)}. \end{eqnarray*} We should notice that, in the case where $b_{1}(X)=0$ and $b^{+}(X) \equiv 3 \ (\bmod \ 4)$, this result can be recovered from Theorem \ref{AIL} and Theorem A of \cite{ishi-leb-2}. Moreover, the bound (\ref{se-ya}) is also essentially proved in \cite{ishi-leb-2}. For the reader's convenience, we shall include a proof. The method is quite similar to the above. In fact, since we know that $(\#^{j}_{i=1}{X}_{i}) \# N$ has non-zero monopole classes, the bounds (\ref{weyl-leb-sca-1}) and (\ref{monopole-123446}) tell us that the following holds for $j=2,3,4$: \begin{eqnarray*} {\mathcal I}_{s}((\#^{j}_{i=1}{X}_{i}) \# N):=\inf_{g}{\int}_{(\#^{j}_{i=1}{X}_{i}) \# N}{{s}^2_{g}}d{\mu}_{g} \geq {32}{\pi}^{2}\sum^{j}_{i=1}{c}^2_{1}(X_{i}).
\end{eqnarray*} This bound with Theorem \ref{AIL} implies the desired bound (\ref{se-ya}) because the existence of non-zero monopole classes of $(\#^{j}_{i=1}{X}_{i}) \# N$ forces that \begin{eqnarray}\label{sp} {\mathcal Y}((\#^{j}_{i=1}{X}_{i}) \# N) = - \Big(\inf_{g}{\int}_{(\#^{j}_{i=1}{X}_{i}) \# N}s^{{2}}_{g} d{\mu}_{g} \Big)^{{1}/{2}} \end{eqnarray} as before. On the other hand, if each $X_{i}$, where $i=1,2,3,4$, is a minimal K{\"{a}}hler surface, then Proposition 13 in \cite{ishi-leb-2} tells us that \begin{eqnarray*} {\mathcal I}_{s}((\#^{j}_{i=1}{X}_{i}) \# N) \leq \sum^{j}_{i=1}{\mathcal I}_{s}(X_{i}) + {\mathcal I}_{s}(N) = 32{\pi}^2\sum^{j}_{i=1}{c}^2_{1}(X_{i}), \end{eqnarray*} where we again used ${\mathcal I}_{s}(N)=0$. We therefore get ${\mathcal I}_{s}((\#^{j}_{i=1}{X}_{i}) \# N) = 32{\pi}^2\sum^{j}_{i=1}{c}^2_{1}(X_{i})$. This equality with (\ref{sp}) and Theorem \ref{AIL} implies \begin{eqnarray*} {\mathcal Y}((\#^{j}_{i=1}{X}_{i}) \# N) = \bar{\lambda}((\#^{j}_{i=1}{X}_{i}) \# N) = -4{\pi}\sqrt{2\sum^j_{i=1}c^2_{1}(X_{i})}. \end{eqnarray*} Hence we obtain the promised result. \end{proof} As was already mentioned in the Introduction, it is known that the Yamabe invariant is sensitive to the choice of smooth structure of a 4-manifold. In fact, by using Theorem \ref{yamabe-pere}, one can easily construct many examples of compact topological 4-manifolds admitting distinct smooth structures for which the values of the Yamabe invariant are different. We leave this as an exercise for the interested reader. \par We shall close this section with the following result, which is important for our purpose and follows immediately from the bounds (\ref{monopole-123}), (\ref{monopole-123446}) and (\ref{monopole-1234}): \begin{thm}\label{key-mono-b} Let $N$ be a closed oriented smooth 4-manifold with $b^{+}(N)=0$.
Let $X$ be a closed almost-complex 4-manifold with ${b}^+(X) \geq 2$ and with a non-trivial integer valued Seiberg-Witten invariant $SW_{X}(\Gamma_{X}) \not=0$, where $\Gamma_{X}$ is the spin${}^c$-structure compatible with the almost-complex structure. Then, any Riemannian metric $g$ on the connected sum $M_{1}:={X} \# N$ satisfies \begin{eqnarray}\label{monopolee-1} \frac{1}{4{\pi}^2}{\int}_{M_{1}}\Big(2|W^{+}_{g}|^2+\frac{{s}^2_{g}}{24}\Big) d{\mu}_{g} \geq \frac{2}{3} c^2_{1}({X}). \end{eqnarray} On the other hand, let ${X}_{i}$ be as in Proposition \ref{prop-2}. For $j=2,3,4$, any Riemannian metric $g$ on the connected sum $M_{2}:=\Big(\#^{j}_{i=1}{X}_{i} \Big) \# N$ satisfies the following bound: \begin{eqnarray}\label{monopoleee-2} \frac{1}{4{\pi}^2}{\int}_{M_{2}}\Big(2|W^{+}_{g}|^2+\frac{{s}^2_{g}}{24}\Big) d{\mu}_{g} \geq \frac{2}{3} \sum^{j}_{i=1}c^2_{1}({X}_{i}). \end{eqnarray} \end{thm} \section{Obstructions to the Existence of Non-Singular Solutions to the Normalized Ricci Flow}\label{obstruction} In this section, we shall prove new obstructions to the existence of non-singular solutions to the normalized Ricci flow by using the results proved in the previous sections. One of the main results of this section is the following: \begin{thm}\label{ricci-ob-1} Let $N$ be a closed oriented smooth 4-manifold with $b^{+}(N)=0$. Let $X$ be a closed almost-complex 4-manifold with ${b}^+(X) \geq 2$ and ${c}^2_{1}(X)=2\chi(X) + 3 \tau(X)>0$. Assume that $X$ has a non-trivial integer valued Seiberg-Witten invariant $SW_{X}(\Gamma_{X}) \not=0$, where $\Gamma_{X}$ is the spin${}^c$-structure compatible with the almost-complex structure. Then, there exists no quasi-non-singular solution to the normalized Ricci flow in the sense of Definition \ref{bs} on the connected sum $M:=X \# N$ if the following holds: \begin{eqnarray}\label{ob-N-Ricci} (12b_{1}(N) + 3{b}^{-}(N)) > {c}^2_{1}(X).
\end{eqnarray} In particular, under this condition, there exists no non-singular solution to the normalized Ricci flow in the sense of Definition \ref{non-sin}. \end{thm} \begin{proof} Suppose that there is a quasi-non-singular solution $\{g(t)\}$, $t \in [0, \infty)$, to the normalized Ricci flow on $M:=X \# N$. First of all, the bound (\ref{one-ya}) in Theorem \ref{yamabe-pere} tells us that \begin{eqnarray*} {\mathcal Y}(M) = \bar{\lambda}(M) \leq -4{\pi}\sqrt{2c^2_{1}(X)} < 0, \end{eqnarray*} where we have used the assumption that ${c}^2_{1}(X)=2\chi(X) + 3 \tau(X)>0$. Theorem \ref{bound-four} therefore tells us that the connected sum $M$ must satisfy the strict FZZ inequality. More precisely, as was already seen in the proof of Theorem \ref{bound-four} or Theorem \ref{fz-key}, the following holds: \begin{eqnarray*} 2\chi(M) + 3\tau(M) \geq \liminf_{m \longrightarrow \infty}\frac{1}{4{\pi}^2}{\int}^{m+1}_{m} {\int}_{M}\Big(2|W^{+}_{g(t)}|^2+\frac{{s}^2_{g(t)}}{24}\Big) d{\mu}_{g(t)}dt. \end{eqnarray*} On the other hand, by the bound (\ref{monopolee-1}) in Theorem \ref{key-mono-b}, we get the following bound for any Riemannian metric $g$ on $M$: \begin{eqnarray*} \frac{1}{4{\pi}^2}{\int}_{M}\Big(2|W^{+}_{g}|^2+\frac{{s}^2_{g}}{24}\Big) d{\mu}_{g} \geq \frac{2}{3} c^2_{1}({X}). \end{eqnarray*} We therefore obtain \begin{eqnarray*} 2\chi(M) + 3\tau(M) &\geq& \liminf_{m \longrightarrow \infty}\frac{1}{4{\pi}^2}{\int}^{m+1}_{m} {\int}_{M}\Big(2|W^{+}_{g(t)}|^2+\frac{{s}^2_{g(t)}}{24}\Big) d{\mu}_{g(t)}dt \\ &\geq& \frac{2}{3}c^2_{1}({X}). \end{eqnarray*} On the other hand, a direct computation tells us that \begin{eqnarray*} 2\chi(M) + 3\tau(M) &=& 2\chi(X) + 3 \tau(X) + (2\chi(N) + 3 \tau(N)) -4 \\ &=& c^2_{1}({X}) - (4b_{1}(N) + {b}^{-}(N)), \end{eqnarray*} where we used the assumption that ${b}^{+}(N)=0$, so that $2\chi(N) + 3\tau(N) = 4 - 4b_{1}(N) - {b}^{-}(N)$. We therefore obtain \begin{eqnarray*} c^2_{1}({X}) - (4b_{1}(N) + {b}^{-}(N)) \geq \frac{2}{3}c^2_{1}({X}).
\end{eqnarray*} Namely, \begin{eqnarray*} (12b_{1}(N) + 3{b}^{-}(N)) \leq {c}^2_{1}(X). \end{eqnarray*} By contraposition, we are able to get the desired result. \end{proof} In Section \ref{final-main} below, we shall actually use the following special case of Theorem \ref{ricci-ob-1}, which is, in a sense, a slightly stronger result: \begin{cor}\label{non-sin-cor} Let $X$ be a closed symplectic 4-manifold with $b^{+}(X) \geq 2$ and ${c}^2_{1}(X) >0$. Then, there is no non-singular solution of the normalized Ricci flow on a connected sum $M:=X \# k{\overline{{\mathbb C}{P}^2}}$ if the following holds: \begin{eqnarray}\label{Ricci-sym} k \geq \frac{1}{3}{c}^2_{1}(X). \end{eqnarray} \end{cor} \begin{proof} Let us again recall that a celebrated result of Taubes \cite{t-1} asserts that, for any symplectic 4-manifold with $b^+(X)>1$, the integer valued Seiberg-Witten invariant satisfies $SW_{X}(\Gamma_{X}) \equiv 1 \ (\bmod \ 2)$, where $\Gamma_{X}$ is the canonical spin${}^c$-structure compatible with the symplectic structure. Notice also that $k{\overline{{\mathbb C}{P}^2}}$ satisfies $b^+=0$. These facts with (\ref{ob-N-Ricci}) tell us that, if $3k > {c}^2_{1}(X)$, then there is no non-singular solution of the normalized Ricci flow on $M$. However, notice that the symplectic 4-manifold $M$ cannot admit any K{\"{a}}hler-Einstein metric with negative scalar curvature if $k > 0$. This particularly implies the following strict bound: \begin{eqnarray*} \frac{1}{4{\pi}^2}{\int}_{M}\Big(2|W^{+}_{g}|^2+\frac{{s}^2_{g}}{24}\Big) d{\mu}_{g} > \frac{2}{3} c^2_{1}({X}); \end{eqnarray*} see also Corollary \ref{bound-cor}. This bound and the above proof of Theorem \ref{ricci-ob-1} immediately imply the slightly stronger bound (\ref{Ricci-sym}) as desired.
\end{proof} A similar method also allows us to prove the following obstruction, which is the second main result of this section: \begin{thm}\label{ricci-ob-2} For $i= 1,2,3,4$, let $X_{i}$ be a closed almost-complex 4-manifold whose integer valued Seiberg-Witten invariant satisfies $SW_{X_{i}}(\Gamma_{X_{i}}) \equiv 1 \ (\bmod \ 2)$, where $\Gamma_{X_{i}}$ is the spin${}^c$-structure compatible with the almost-complex structure. Assume that the following conditions are satisfied: \begin{itemize} \item $b_{1}(X_{i})=0$, \ $b^{+}(X_{i}) \equiv 3 \ (\bmod \ 4)$, \ $\displaystyle\sum^{4}_{i=1}b^{+}(X_{i}) \equiv 4 \ (\bmod \ 8)$, \item $\displaystyle\sum^j_{i=1}c^2_{1}(X_{i})=\sum^j_{i=1}(2\chi(X_{i}) + 3 \tau(X_{i})) > 0$, where $j=2,3,4$. \end{itemize} Let $N$ be a closed oriented smooth 4-manifold with $b^{+}(N)=0$. Then, for $j=2,3,4$, there exists no quasi-non-singular solution to the normalized Ricci flow in the sense of Definition \ref{bs} on the connected sum $M:=\Big(\#^{j}_{i=1}{X}_{i} \Big) \# N$ if the following holds: \begin{eqnarray*} 12(j-1)+(12b_{1}(N) + 3{b}^{-}(N)) \geq \sum^{j}_{i=1}{c}^2_{1}(X_{i}) . \end{eqnarray*} In particular, under this condition, there exists no non-singular solution to the normalized Ricci flow on $M$ in the sense of Definition \ref{non-sin}. \end{thm} \begin{proof} Suppose now that there is a quasi-non-singular solution $\{g(t)\}$, $t \in [0, \infty)$, to the normalized Ricci flow on $M$. The bound (\ref{se-ya}) in Theorem \ref{yamabe-pere} tells us that \begin{eqnarray*} {\mathcal Y}(M) = \bar{\lambda}(M) \leq -4{\pi}\sqrt{2\sum^{j}_{i=1}c^2_{1}(X_{i})} < 0. \end{eqnarray*} This particularly tells us that, as before, the following must hold (see the proofs of Theorem \ref{fz-key} and Theorem \ref{bound-four} above): \begin{eqnarray*} 2\chi(M) + 3\tau(M) \geq \liminf_{m \longrightarrow \infty}\frac{1}{4{\pi}^2}{\int}^{m+1}_{m} {\int}_{M}\Big(2|W^{+}_{g(t)}|^2+\frac{{s}^2_{g(t)}}{24}\Big) d{\mu}_{g(t)}dt.
\end{eqnarray*} On the other hand, notice that the connected sum $M$ admits non-zero monopole classes and cannot admit symplectic structures. This fact and Theorem \ref{beta-ine-key} tell us that the bound (\ref{monopoleee-2}) must be strict: \begin{eqnarray*} \frac{1}{4{\pi}^2}{\int}_{M}\Big(2|W^{+}_{g(t)}|^2+\frac{{s}^2_{g(t)}}{24}\Big) d{\mu}_{g(t)} > \frac{2}{3}\sum^{j}_{i=1}{c}^2_{1}(X_{i}). \end{eqnarray*} We therefore obtain \begin{eqnarray*} 2\chi(M) + 3\tau(M) &\geq& \liminf_{m \longrightarrow \infty}\frac{1}{4{\pi}^2}{\int}^{m+1}_{m} {\int}_{M}\Big(2|W^{+}_{g(t)}|^2+\frac{{s}^2_{g(t)}}{24}\Big) d{\mu}_{g(t)}dt \\ &>&\ \frac{2}{3}\sum^{j}_{i=1}{c}^2_{1}(X_{i}). \end{eqnarray*} On the other hand, a direct computation implies \begin{eqnarray*} 2\chi(M) + 3\tau(M) &=& \sum^{j}_{i=1}(2\chi(X_{i}) + 3 \tau(X_{i})) + (2\chi(N) + 3 \tau(N)) -4j \\ &=& -(4b_{1}(N) + {b}^{-}(N)) - 4(j-1) + \sum^{j}_{i=1}{c}^2_{1}(X_{i}) , \end{eqnarray*} where we used the assumption that ${b}^{+}(N)=0$. We therefore get \begin{eqnarray*} -(4b_{1}(N) + {b}^{-}(N)) - 4(j-1)+\sum^{j}_{i=1}{c}^2_{1}(X_{i}) > \frac{2}{3}\sum^{j}_{i=1}{c}^2_{1}(X_{i}). \end{eqnarray*} Namely, we have \begin{eqnarray*} 12(j-1)+(12b_{1}(N) + 3{b}^{-}(N)) < \sum^{j}_{i=1}{c}^2_{1}(X_{i}). \end{eqnarray*} By contraposition, the desired result follows. \end{proof} Theorem \ref{ricci-ob-2}, a result of Taubes \cite{t-1}, and the fact that a connected sum $k{\overline{{\mathbb C}{P}^2}} \# {\ell}({S^1} \times {S}^3)$ satisfies $b^+=0$ enable us to prove \begin{cor}\label{main-cor} For $i=1,2,3,4$, let ${X}_{i}$ be a simply connected closed symplectic 4-manifold satisfying \begin{itemize} \item $b^{+}(X_{i}) \equiv 3 \ (\bmod \ 4)$, \ $\displaystyle\sum^{4}_{i=1}b^{+}(X_{i}) \equiv 4 \ (\bmod \ 8)$, \item $\displaystyle\sum^j_{i=1}c^2_{1}(X_{i})=\sum^j_{i=1}(2\chi(X_{i}) + 3 \tau(X_{i})) > 0$, where $j=2,3,4$.
\end{itemize} Then, for $j=2,3,4$, there is also no non-singular solution to the normalized Ricci flow on a connected sum $\Big(\#^{j}_{i=1}{X}_{i} \Big) \# k{\overline{{\mathbb C}{P}^2}} \# {\ell}({S^1} \times {S}^3)$ if the following holds: \begin{eqnarray*}\label{Ricci-sym-1} 12(j-1)+12{\ell}+3k \geq \sum^{j}_{i=1}{c}^2_{1}(X_{i}). \end{eqnarray*} Similarly, for $j=2,3,4$, there is also no non-singular solution to the normalized Ricci flow on $\#^{j}_{i=1}{X}_{i}$ if the following holds: \begin{eqnarray*}\label{ishi-leb-ob-22222} 12(j-1) \geq \sum^{j}_{i=1}{c}^2_{1}(X_{i}). \end{eqnarray*} \end{cor} Let us close this section with the following result. Though it is not used in what follows, it is perhaps worth pointing out that the following holds (cf. Corollary 1.5 in \cite{fz-1}, Theorems 5.1 and 5.2 in \cite{leb-17}): \begin{thm}\label{ricci-ein-ob} Let $X$ be a closed oriented smooth 4-manifold with $b^+(X) \geq 2$. Suppose that there is a quasi-non-singular solution $\{g(t)\}$, $t \in [0, \infty)$, to the normalized Ricci flow in the sense of Definition \ref{bs}. If the Yamabe invariant of $X$ is negative, i.e., ${\mathcal Y}(X)<0$, then the following two inequalities hold: \begin{eqnarray}\label{mi-ya-ricci-2} 2 \chi(X) +3\tau(X) \geq \frac{2}{3}\beta^2(X), \end{eqnarray} \begin{eqnarray}\label{mi-ya-ricci} 2 \chi(X) -3\tau(X) \geq \frac{1}{3}\beta^2(X). \end{eqnarray} In particular, if $X$ is a closed almost-complex 4-manifold with a non-trivial integer valued Seiberg-Witten invariant $SW_{X}(\Gamma_{X}) \not=0$, where $\Gamma_{X}$ is the spin${}^c$-structure compatible with the almost-complex structure, then the bound (\ref{mi-ya-ricci}) implies the Bogomolov-Miyaoka-Yau type inequality: \begin{eqnarray*} \chi(X) \geq 3\tau(X).
\end{eqnarray*} \end{thm} \begin{proof} By the assumption that ${\mathcal Y}(X)<0$ and the proof of Theorem \ref{bound-four} above, we know that the existence of a quasi-non-singular solution $\{g(t)\}$, $t \in [0, \infty)$, to the normalized Ricci flow implies the following: \begin{eqnarray*} 2\chi(X) \pm 3\tau(X) \geq \liminf_{m \longrightarrow \infty}\frac{1}{4{\pi}^2}{\int}^{m+1}_{m} {\int}_{X}\Big(2|W^{\pm}_{g(t)}|^2+\frac{{s}^2_{g(t)}}{24}\Big) d{\mu}_{g(t)}dt. \end{eqnarray*} The inequality (\ref{mi-ya-ricci}) is derived from this and (\ref{weyl-leb-sca-1}). In fact, \begin{eqnarray*} 2\chi(X) - 3\tau(X) &\geq& \liminf_{m \longrightarrow \infty}\frac{1}{4{\pi}^2}{\int}^{m+1}_{m} {\int}_{X}\Big(2|W^{-}_{g(t)}|^2+\frac{{s}^2_{g(t)}}{24}\Big) d{\mu}_{g(t)}dt \\ &\geq& \liminf_{m \longrightarrow \infty}\frac{1}{96{\pi}^2}{\int}^{m+1}_{m} {\int}_{X}{{s}^2_{g(t)}} d{\mu}_{g(t)}dt \\ &\geq& \frac{1}{3}\beta^2(X). \end{eqnarray*} We used (\ref{weyl-leb-sca-1}) in the last step. Moreover, suppose that $X$ is a closed almost-complex 4-manifold with a non-trivial integer valued Seiberg-Witten invariant $SW_{X}(\Gamma_{X}) \not=0$. Then, the bound (\ref{monopole-1234}) particularly tells us that the following holds: \begin{eqnarray*} \beta^2(X) \geq {c}^2_{1}(X)= 2\chi(X) + 3\tau(X). \end{eqnarray*} We therefore get $$ 2\chi(X) - 3\tau(X) \geq \frac{1}{3}(2\chi(X) + 3\tau(X)). $$ Namely, we obtain \begin{eqnarray*} \chi(X) \geq 3\tau(X) \end{eqnarray*} as promised. \par Finally, we also have \begin{eqnarray*} 2\chi(X) + 3\tau(X) \geq \liminf_{m \longrightarrow \infty}\frac{1}{4{\pi}^2}{\int}^{m+1}_{m} {\int}_{X}\Big(2|W^{+}_{g(t)}|^2+\frac{{s}^2_{g(t)}}{24}\Big) d{\mu}_{g(t)}dt. \end{eqnarray*} This bound with (\ref{monopole-123}) immediately implies the desired inequality: \begin{eqnarray*} 2\chi(X) + 3\tau(X) \geq \frac{2}{3}\beta^2(X). \end{eqnarray*} Hence the claim follows.
\end{proof} \begin{rmk} Both (\ref{mi-ya-ricci-2}) and (\ref{mi-ya-ricci}) still hold even if $\beta^2(X)$ is replaced by the quantity $\alpha^2(X)$ introduced in \cite{leb-12, leb-17}. For the reader's convenience, let us briefly recall the definition of $\alpha^2(X)$. Let $X$ be a closed oriented smooth 4-manifold with $b^+(X) \geq 2$. Consider the Grassmannian ${\bf Gr}:={Gr}^+_{b^+} \Big(H^2(X, {\mathbb R}) \Big)$ which consists of all maximal linear subspaces $\bf H$ of $H^2(X, {\mathbb R})$ on which the intersection form of $X$ is positive definite. For each element ${\bf H} \in {\bf Gr}$, we have an orthogonal decomposition with respect to the intersection form: $$ H^2(X, {\mathbb R}) = {\bf H} \oplus \overline{\bf H}. $$ Hence, for a given monopole class $\frak{a} \in {\frak C}(X)$ and an element ${\bf H} \in {\bf Gr}$, one can define $\frak{a}^+$ to be the orthogonal projection of $\frak{a}$ to ${\bf H}$. Using this projection, we can define the following natural quantity: $$ \alpha^2(X) := \inf_{{\bf H} \in {\bf Gr}} \Big(\max_{\frak{a}\in {\frak C}(X)}(\frak{a}^+)^2 \Big). $$ Though this definition is quite different from that of $\beta^2(X)$, it is observed in \cite{leb-17} that $\alpha^2(X) = \beta^2(X)$ actually holds in many cases. In this direction, see Section 5 of \cite{leb-17}. \end{rmk} \section{Proof of Theorem \ref{main-A}}\label{final-main} In this section, we shall give a proof of Theorem \ref{main-A}. In what follows, we shall use the following notation: \begin{eqnarray*} {\chi}_{h}(X):=\frac{1}{4}\Big(\chi(X) + \tau(X)\Big), \ {c}^{2}_{1}(X):=2\chi(X) + 3\tau(X) \end{eqnarray*} for any 4-manifold $X$.
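As a quick illustration of this notation, recall that the K3 surface has $\chi = 24$ and $\tau = -16$, while ${\mathbb C}{P}^2$ has $\chi = 3$ and $\tau = 1$; hence \begin{eqnarray*} {\chi}_{h}(K3)=\frac{1}{4}\Big(24-16\Big)=2, \ {c}^{2}_{1}(K3)=2\cdot 24+3\cdot (-16)=0, \end{eqnarray*} \begin{eqnarray*} {\chi}_{h}({\mathbb C}{P}^2)=\frac{1}{4}\Big(3+1\Big)=1, \ {c}^{2}_{1}({\mathbb C}{P}^2)=2\cdot 3+3\cdot 1=9. \end{eqnarray*} We also remark that, for a compact complex surface $X$, the Noether formula identifies ${\chi}_{h}(X)$ with the holomorphic Euler characteristic of $X$.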
\par First of all, we shall prove the following result by using the obstruction proved in Corollary \ref{non-sin-cor} above: \begin{prop}\label{non-prop} For every $\delta>0$, there exists a constant $d_\delta>0$ satisfying the following property: every lattice point $(\alpha, \beta)$ satisfying \begin{eqnarray}\label{geo} 0 < \beta \leq (6-\delta)\alpha-d_\delta \end{eqnarray} is realized by $({\chi}_{h}, {c}^{2}_{1})$ of infinitely many pairwise non-diffeomorphic simply connected symplectic 4-manifolds with the following properties: \begin{itemize} \item each symplectic 4-manifold $N$ is non-spin, \item each symplectic 4-manifold $N$ has negative Yamabe and Perelman's $\bar{\lambda}$ invariant, i.e., ${\mathcal Y}(N)=\bar{\lambda}(N) <0$, \item on each symplectic 4-manifold $N$, there exists no quasi-non-singular solution of the normalized Ricci flow in the sense of Definition \ref{bs}. In particular, there is also no non-singular solution of the normalized Ricci flow in the sense of Definition \ref{non-sin}. \end{itemize} \end{prop} \begin{proof} Building upon the symplectic sum construction due to Gompf \cite{g} and the gluing formulas for Seiberg-Witten invariants due to Morgan-Mrowka-Szab{\'{o}} \cite{mms} and Morgan-Szab{\'{o}}-Taubes \cite{mst}, a nice result on infinitely many pairwise non-diffeomorphic simply connected symplectic 4-manifolds was proved in \cite{b-k}. In particular, the infinitely many smooth structures are obtained by performing logarithmic transformations in the sense of Kodaira. Theorem 4 of \cite{b-k} tells us that, for every $\delta>0$, there exists a constant $d_\delta>0$ satisfying the following property: every lattice point $(\alpha, \beta)$ satisfying $$ 0 < \beta \leq (9-\delta)\alpha-d_\delta $$ is realized by $({\chi}_{h}, {c}^{2}_{1})$ of infinitely many pairwise non-diffeomorphic simply connected symplectic 4-manifolds.
In particular, each symplectic 4-manifold $X$ satisfies ${c}^{2}_{1}(X)=\beta > 0$, and $b^+(X) \geq 2$ holds by the construction. By the bound (\ref{Ricci-sym}), we are able to conclude that, if a positive integer $k$ satisfies \begin{eqnarray*} k \geq \frac{1}{3}{c}^2_{1}(X) = \frac{\beta}{3}, \end{eqnarray*} then there exists no quasi-non-singular solution to the normalized Ricci flow on the symplectic 4-manifold $N:=X \# k \overline{{\mathbb C}{P}^2}$. Moreover, $N:=X \# k \overline{{\mathbb C}{P}^2}$ is non-spin. These non-spin symplectic 4-manifolds actually cover the region (\ref{geo}); notice also that \begin{eqnarray*} {\chi}_{h}(N) = {\chi}_{h}(X), \ c^{2}_{1}(N)=\beta-k. \end{eqnarray*} Moreover, under the connected sum with $\overline{{\mathbb C}{P}^2}$, the infinitely many different smooth structures remain distinct, as was already noticed in \cite{b-k}. Finally, since $X$ has a non-trivial integer valued Seiberg-Witten invariant by a result of Taubes \cite{t-1}, the bound (\ref{one-ya}) tells us that \begin{eqnarray*} {\mathcal Y}(N) = \bar{\lambda}(N) \leq -4{\pi}\sqrt{2c^2_{1}(X)}=-4{\pi}\sqrt{2\beta}<0. \end{eqnarray*} We therefore obtain the desired result. \end{proof} \begin{rmk} By using Corollary \ref{main-cor}, Proposition \ref{non-prop} and Theorem 4 of \cite{b-k}, it is not hard to prove the following general non-existence result on non-singular solutions: for every $\delta >0$, there is a constant $d_{\delta}>0$ such that a non-spin 4-manifold $m{\mathbb C}{P}^2 \# n \overline{{\mathbb C}{P}^2}$ has infinitely many smooth structures with ${\mathcal Y}<0$ for which there exists no non-singular solution to the normalized Ricci flow for every large enough $m \not\equiv 0 \ (\bmod \ 8)$ and $n \geq (2+\delta)m + d_{\delta}$. The details are left to the interested reader.
Under these conditions, the author does not know, however, whether or not $m{\mathbb C}{P}^2 \# n \overline{{\mathbb C}{P}^2}$ actually admits a smooth structure for which non-singular solutions of the normalized Ricci flow exist. \end{rmk} On the other hand, there is a nice result of Cao \cite{c, c-c} concerning the existence of non-singular solutions to the normalized Ricci flow. We shall recall the following version of Cao's result which appears in \cite{c-c}. \begin{thm}[\cite{c, c-c}]\label{cao-K} Let $M$ be a compact K{\"{a}}hler manifold with definite first Chern class ${c}_{1}(M)$. If ${c}_{1}(M)=0$, then for any initial K{\"{a}}hler metric $g_{0}$, the solution to the normalized Ricci flow exists for all time and converges to a Ricci-flat metric as $t \rightarrow \infty$. If ${c}_{1}(M) < 0$ and the initial metric $g_0$ is chosen to represent the first Chern class, then the solution to the normalized Ricci flow exists for all time and converges to an Einstein metric of negative scalar curvature as $t \rightarrow \infty$. If ${c}_{1}(M) > 0$ and the initial metric $g_0$ is chosen to represent the first Chern class, then the solution to the normalized Ricci flow exists for all time. \end{thm} Notice that, in the case where ${c}_{1}(M) = 0$ or ${c}_{1}(M) < 0$, the solution is actually non-singular in the sense of Definition \ref{non-sin}. Notice also that the affirmative answer to the Calabi conjecture due to Aubin \cite{a} and Yau \cite{yau, yau-1} tells us that K{\"{a}}hler-Einstein metrics exist in these cases. See also Section 4 in \cite{c-c}.
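For the reader's convenience, we also recall the equation under consideration: on a closed oriented Riemannian 4-manifold $M$, the normalized Ricci flow is the evolution equation \begin{eqnarray*} \frac{\partial g(t)}{\partial t} = -2{\rm Ric}_{g(t)} + \frac{1}{2}\Big(\frac{{\int}_{M}{s}_{g(t)}d{\mu}_{g(t)}}{{\int}_{M}d{\mu}_{g(t)}}\Big)g(t), \end{eqnarray*} where the second term involves the average of the scalar curvature; this normalization is chosen precisely so that the total volume of $(M, g(t))$ remains constant in time.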
We shall use Theorem \ref{cao-K} to prove \begin{prop}\label{exis-prop} For every positive integer $\ell > 0$, there are $\ell$-tuples of simply connected spin and non-spin algebraic surfaces with the following properties: \begin{itemize} \item these are homeomorphic, but are pairwise non-diffeomorphic, \item for every fixed $\ell > 0$, the ratios $c^2_{1}/{\chi}_{h}$ of the $\ell$-tuples are dense in the interval $[4,8]$, \item each algebraic surface $M$ has negative Yamabe and Perelman's $\bar{\lambda}$ invariant, i.e., ${\mathcal Y}(M)=\bar{\lambda}(M) <0$, \item on each algebraic surface $M$, there exists a non-singular solution to the normalized Ricci flow in the sense of Definition \ref{non-sin}. Moreover, the existence of the solution forces the strict FZZ inequality $2 \chi(M)> 3|\tau(M)|$ as a topological constraint. \end{itemize} \end{prop} \begin{proof} Salvetti \cite{sal} proved that, for any $k > 0$, there exists a pair $(\chi_h, c^{2}_{1})$ such that for this pair one has at least $k$ homeomorphic algebraic surfaces with different divisibilities for their canonical classes by taking iterated branched covers of the projective plane. This construction was considerably generalized in \cite{b-k}. By Corollary 1 of \cite{b-k}, we know that, for every $\ell$, there are $\ell$-tuples of simply connected spin and non-spin algebraic surfaces with ample canonical bundles which are homeomorphic, but are pairwise non-diffeomorphic. Moreover, it is shown that, for every fixed $\ell$, the ratios $c^2_{1}/\chi_{h}$ of the $\ell$-tuples are dense in the interval $[4,8]$. Therefore, to prove this proposition, it is enough to prove the third and fourth statements above. We notice that each such algebraic surface $M$ has $b^+(M) \geq 3$ by the construction. Now, the negativity of the Yamabe and Perelman's $\bar{\lambda}$ invariant of the algebraic surface $M$ is a direct consequence of Theorem \ref{yamabe-pere}.
In fact, the canonical bundle of each algebraic surface $M$ is ample and hence ${c}_{1}(M) < 0$. In particular, since $M$ is a minimal K{\"{a}}hler surface with ${b}^{+}_{2}(M) \geq 3$ and ${c}^2_{1}(M) > 0$, Theorem \ref{yamabe-pere} tells us that \begin{eqnarray*} {\mathcal Y}(M) = \bar{\lambda}(M) = -4{\pi}\sqrt{2c^2_{1}(M)} < 0. \end{eqnarray*} Hence the third statement follows. \par The fourth statement follows from Theorem \ref{cao-K} above because each algebraic surface $M$ has ample canonical bundle and hence ${c}_{1}(M) < 0$. We therefore conclude that, for an initial metric $g_0$ chosen to represent the first Chern class, there always exists a non-singular solution to the normalized Ricci flow, and it converges to an Einstein metric of negative scalar curvature as $t \rightarrow \infty$. On the other hand, notice that a non-singular solution is in particular a quasi-non-singular solution in the sense of Definition \ref{bs}. Theorem \ref{bound-four} and the fact that $M$ has negative Yamabe invariant imply that $M$ must satisfy the strict FZZ inequality $2 \chi(M)> 3|\tau(M)|$ as a topological constraint. \end{proof} Propositions \ref{non-prop} and \ref{exis-prop} enable us to prove the main result of this article, i.e., Theorem \ref{main-A} stated in the Introduction: \begin{thm} For every natural number $\ell$, there exists a simply connected topological non-spin 4-manifold $X_{\ell}$ satisfying the following properties: \begin{itemize} \item $X_{\ell}$ admits at least $\ell$ different smooth structures $M^i_{\ell}$ with ${\mathcal Y}<0$ and for which there exist non-singular solutions to the normalized Ricci flow in the sense of Definition \ref{non-sin}.
Moreover, the existence of the solutions forces the strict FZZ inequality $2 \chi > 3|\tau|$ as a topological constraint, \item $X_{\ell}$ also admits infinitely many different smooth structures $N^j_{\ell}$ with ${\mathcal Y}<0$ and for which there exists no quasi-non-singular solution to the normalized Ricci flow in the sense of Definition \ref{bs}. In particular, there exists no non-singular solution to the normalized Ricci flow in the sense of Definition \ref{non-sin}. \end{itemize} \end{thm} \begin{proof} Proposition \ref{exis-prop} tells us that, for every positive integer $\ell > 0$, we are always able to find $\ell$-tuples $M^i_{\ell}$ of simply connected non-spin algebraic surfaces of general type which are homeomorphic, but are pairwise non-diffeomorphic. Moreover, the ratios $c^2_{1}/\chi_h$ of $M^i_{\ell}$ are dense in the interval $[4,8]$ for every fixed $\ell > 0$. Proposition \ref{exis-prop} also tells us that each of the $M^i_{\ell}$ has ${\mathcal Y}<0$ and that, on each of the $M^i_{\ell}$, there exists a non-singular solution to the normalized Ricci flow, whose existence forces the strict FZZ inequality $2 \chi> 3|\tau|$ as a topological constraint. \par On the other hand, Proposition \ref{non-prop} tells us that any pair $(\alpha, \beta)$ in the region (\ref{geo}) can be realized as $(\chi_h, {c}^{2}_{1})$ of infinitely many pairwise non-diffeomorphic simply connected non-spin symplectic 4-manifolds with ${\mathcal Y}<0$, on each of which there exists no quasi-non-singular solution of the normalized Ricci flow. Notice that the ratios $c^2_{1}/\chi_h$ of these non-spin symplectic 4-manifolds are not more than 6; see again the region (\ref{geo}).
By this fact and the density of the ratios $c^2_{1}/\chi_h$ of $M^i_{\ell}$ in the interval $[4,8]$, we are able to find infinitely many pairwise non-diffeomorphic simply connected non-spin symplectic 4-manifolds $N^i_{\ell}$ such that ${\mathcal Y}<0$ and, on each of $N^i_{\ell}$, there exists no quasi-non-singular solution of the normalized Ricci flow, and moreover, $M^i_{\ell}$ and $N^i_{\ell}$ are both non-spin and have the same $(\chi_h, {c}^{2}_{1})$. Freedman's classification \cite{free} implies that they must be homeomorphic. However, no $M^i_{\ell}$ is diffeomorphic to any $N^i_{\ell}$ because, on each of the $M^i_{\ell}$, a non-singular solution exists, while no non-singular solution exists on any of the $N^i_{\ell}$. Therefore, we are able to conclude that, for every natural number $\ell$, there exists a simply connected topological non-spin 4-manifold $X_{\ell}$ satisfying the desired properties. \end{proof} \section{Concluding Remarks}\label{remark} In this article, we have seen that the existence or non-existence of non-singular solutions to the normalized Ricci flow depends on the diffeotype of a 4-manifold and is not determined by the homeotype alone. In particular, we considered distinct smooth structures on simply connected topological non-spin 4-manifolds $p{\mathbb C}{P}^2 \# q \overline{{\mathbb C}{P}^2}$ in Theorem \ref{main-A}. Freedman's classification \cite{free} tells us that, up to homeomorphism, the connected sums $j{\mathbb C}{P}^2 \# k \overline{{\mathbb C}{P}^2}$ provide a complete list of the simply connected non-spin 4-manifolds. In light of this fact, it is tempting to ask whether or not a phenomenon like Theorem \ref{main-A} is a general feature of the Ricci flow on simply connected non-spin 4-manifolds admitting exotic smooth structures. However, there are many difficulties in proving such a result, and hence this is a completely open problem.
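For the reader's convenience, we record how the FZZ inequality appearing above translates into the coordinates $({\chi}_{h}, c^{2}_{1})$; this computation is ours, using only the standard identities ${\chi}_{h} = (\chi+\tau)/4$ and $c^{2}_{1} = 2\chi+3\tau$: \begin{eqnarray*} \chi = 12{\chi}_{h} - c^{2}_{1}, \qquad \tau = c^{2}_{1} - 8{\chi}_{h}, \end{eqnarray*} and hence, whenever $\tau \leq 0$ (equivalently $c^{2}_{1} \leq 8{\chi}_{h}$), \begin{eqnarray*} 2\chi - 3|\tau| = (24{\chi}_{h} - 2c^{2}_{1}) - (24{\chi}_{h} - 3c^{2}_{1}) = c^{2}_{1}. \end{eqnarray*} In this range, which covers the surfaces with $c^{2}_{1}/{\chi}_{h} \in [4,8]$ considered above, the strict FZZ inequality $2\chi > 3|\tau|$ is therefore equivalent to $c^{2}_{1} > 0$.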
\par On the other hand, in the case of topological spin 4-manifolds, the situation regarding homeotypes is a bit more unsettled. However, the connected sums $m(K3)\#n(S^2 \times S^2)$ and their orientation-reversed versions, together with the 4-sphere $S^4$, at least exhaust all the simply connected spin homeotypes satisfying $\chi \geq \frac{11}{8}|\tau|+2$. The $11/8$-conjecture asserts that this constraint is indeed satisfied automatically, and hence that the above list of spin homeotypes is complete. Notice that there is a strong partial result due to Furuta \cite{f-1} which asserts that $\chi \geq \frac{10}{8}|\tau|+2$ holds. It is also tempting to ask whether or not a result like Theorem \ref{main-A} still holds for the Ricci flow on simply connected spin 4-manifolds admitting exotic smooth structures. However, the present method cannot prove an abundance theorem like Theorem \ref{main-A} in the spin case, because it cannot prove a result like Proposition \ref{non-prop} in the spin case. Hence the situation is quite different from the non-spin case. \par Finally, it is also interesting to ask whether or not a phenomenon like Theorem \ref{main-A} still occurs in the {\it non} simply connected case. We hope to return to this interesting subject in future research.
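As a quick arithmetic sanity check of the spin constraint quoted above (this sketch is ours, not from the text), one can verify with the standard values $\chi(K3)=24$, $\tau(K3)=-16$, $\chi(S^2\times S^2)=4$, $\tau(S^2\times S^2)=0$, and the fact that a connected sum of $k$ pieces loses $2(k-1)$ from $\chi$, that every $m(K3)\#n(S^2\times S^2)$ satisfies $\chi \geq \frac{11}{8}|\tau|+2$, with equality exactly when $n=0$:

```python
def char_numbers(m, n):
    """chi and tau of m(K3) # n(S^2 x S^2); tau is additive, and a
    connected sum of k pieces loses 2*(k - 1) from chi."""
    k = m + n
    chi = 24 * m + 4 * n - 2 * (k - 1)
    tau = -16 * m
    return chi, tau

def satisfies_11_8(chi, tau):
    # chi >= (11/8)|tau| + 2, checked with exact integer arithmetic
    return 8 * chi >= 11 * abs(tau) + 16

results = [satisfies_11_8(*char_numbers(m, n))
           for m in range(1, 6) for n in range(0, 6)]
```

For $n=0$ the two sides agree exactly, reflecting that the $K3$ connected sums realize the conjectured boundary line.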
\section{Introduction} GX~339--4 is an X-ray binary system hosting one of the most promising Galactic black hole (BH) candidates, with a mass function of $\sim 6$~M$_{\odot}$\ \citep{hynes03_gx339}. It is classified as a micro-quasar, based on tight radio/X-ray correlations found over a large dynamic range in source luminosity \citep[e.g. ][]{gallo03}. Extensive X-ray studies have revealed a rich timing structure \citep[e.g. ][]{dunn08, belloni05, nowak99, miyamoto91}. In its low state, the source is typically associated with an optically-bright counterpart, in which flickering on timescales as short as 10~ms has been observed. Rare transitions to the X-ray--off state have enabled placing some constraints on the much fainter optical companion star \citep{shahbaz01}. During X-ray outbursts, the optical and X-ray fluxes display complex correlations as well as anti-correlations over extended timescales \citep[e.g. ][]{makishima86,russell06}. Rapid timing observations simultaneous over a broad energy range remain one of the lesser-explored aspects of GX~339--4. The last published multi-wavelength studies to probe sub-second timescales were carried out over 20 years ago (but see Spruit et al. in prep.). Besides high-state observations by \citeauthor{makishima86}, \citet[][]{motch82, motch83, motch85} studied the source during a low- to high-state transition, and found an anti-correlated cross-correlation function (CCF) signal with optical leading X-rays by a few seconds. Although there was an indication in the low-frequency light curves that the optical and 13--20 keV X-ray fluxes were correlated at a significance of about 98 per cent, no obvious high frequency ($\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 20$ s) correlation was uncovered at any of the selected energies. The Galactic-halo X-ray binary XTE~J1118+480\ has recently been a target of extensive, simultaneous multi-wavelength coverage.
Among its interesting properties, a complex optical/X-ray CCF has been found \citep[e.g. ][]{kanbach01, spruitkanbach02}, including positive and negative correlation components and an optical-vs.-X-ray peak lag of $\sim 0.5$ s. Complex CCFs are now being found in other sources as well (\citealt{durant08}). These provide important time domain constraints for physical models. In this Letter, we present the first simultaneous optical/X-ray timing analysis of GX~339--4\ on rapid timescales of $\sim 50-130$ ms in a low/hard state associated with a relatively-faint optical counterpart. A clear cross-correlation signal is detected with the optical peak lagging X-rays by $\sim 150$ ms. The CCF has a complex pattern with some similarities as well as differences to the CCF found for XTE~J1118+480, and is likely to be the result of distinct interacting accretion/ejection components. Full details of the timing and spectral analysis are presented in a forthcoming paper. \section{Observations} The triple-beam optical camera ULTRACAM\ \citep{ultracam}, capable of high-speed photometry at up to 500 Hz, was mounted on the {\sl Very Large Telescope (VLT)} as a visitor instrument during Jun 2007. We carried out three 1~h long observations of GX~339--4 on alternate nights of UT Jun 14th, 16th and 18th (hereafter referred to as Nights 3, 2 and 1 respectively, in order of improving weather), simultaneously with the {\sl Rossi X-ray Timing Explorer (RXTE)} satellite. Only the final night [Night 1] was photometric, while the other two had variable transparency, Night 3 being the worst. This period fell a few weeks after the source had returned to the low/hard state following a large X-ray outburst \citep{kalemci07} and also coincided with rising optical emission \citep{buxtonbailyn07}. Our choice of time resolution was governed by the need to obtain a good signal:noise under prevailing atmospheric conditions.
The final values used were $\approx 50, 133$ and 136 ms on Nights 1, 2 and 3 respectively. Data calibration and relative photometry (with respect to a brighter comparison star observed simultaneously) were carried out with the ULTRACAM\ pipeline v. 8.1.1. Three-filter simultaneous observations are possible and we used $u'$, $g'$ and $r'$, but much longer exposures were required in $u'$ to obtain comparable signal:noise; consequently, the $u'$ data are not considered further in this Letter. Optical spectro-photometry carried out with the {\em VLT}/FORS2 instrument on Night 1 gives $F_{\lambda}^{5000 \AA}= 6\times 10^{-16}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$ $(V_{\rm Vega}\approx 17)$. Correction for Galactic extinction of $A_{\rm V}$$\approx$3.3 implies $\lambda L_\lambda^{5000 \AA}\approx 4.6\times 10^{35} (d/8\ {\rm kpc})^2$ erg s$^{-1}$ \citep[distance from ][]{zdziarski04}. {\sl RXTE}\ observed the target in its canonical {\tt GoodXenon} and {\tt Standard} PCA modes, and in a 32-s on-off rocking mode with HEXTE cluster 1. Recommended HEADAS v. 6.4 procedures were followed for data reduction and extraction of light curves and spectra, including the latest calibration information and background model corrections. A simple hard power-law with photon-index $\Gamma=1.65\pm0.02$ provided a statistically acceptable fit to the spectra for energies of 3--200 keV. The source had a flux $F_{2-10}=1.6\times 10^{-10}$ erg s$^{-1}$ cm$^{-2}$, corresponding to $L_{2-10}=1.2\times 10^{36}$ erg s$^{-1}$ and implying an optical($V$):X-ray($2-10$ keV) luminosity ratio of about 40 per cent. The low flux and hard power-law are characteristic of the source in the low/hard state. \citet{tomsick08} find that the X-ray flux reached a minimum during our observation period. The source showed a high fractional variability amplitude in X-rays, approaching 50 per cent in the full-band PCA energy range (above the value expected from Poisson fluctuations) on all nights.
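The luminosities quoted above follow from $L=4\pi d^{2}F$ at the adopted $d=8$ kpc. The short sketch below is ours, not part of the actual reduction; the optical point uses the $A_{\rm V}\approx 3.3$ extinction correction quoted in the text (the exact extinction law at 5000~\AA\ is an assumption, so the result agrees with the quoted value only to $\sim 5$ per cent):

```python
import math

KPC_CM = 3.086e21  # centimetres per kiloparsec

def flux_to_luminosity(flux_cgs, d_kpc):
    """Isotropic luminosity L = 4 pi d^2 F (erg/s) from a flux in erg/s/cm^2."""
    d_cm = d_kpc * KPC_CM
    return 4.0 * math.pi * d_cm**2 * flux_cgs

# X-ray: F(2-10 keV) = 1.6e-10 erg/s/cm^2  ->  ~1.2e36 erg/s at 8 kpc
L_x = flux_to_luminosity(1.6e-10, 8.0)

# Optical: lambda * F_lambda at 5000 A, dereddened by ~3.3 mag of extinction
lam_F_lam = 5000.0 * 6e-16 * 10 ** (0.4 * 3.3)
L_opt = flux_to_luminosity(lam_F_lam, 8.0)  # ~5e35 erg/s, near the quoted 4.6e35
```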
In the optical, the net rms variability over the full light curves was $\approx 13$ and 15 per cent in the $g'$ and $r'$ filters, respectively. \section{Results : the cross-correlation function} \begin{figure*} \begin{center} \includegraphics[angle=90,width=15cm]{fig1.ps} \caption{The $r'$ vs. X-ray full-band PCA cross-correlation for data on all three nights, obtained from simultaneous light curve sections of 60 s length. A positive delay (in this case peaked at $\approx 150$ ms) implies that optical lags X-rays. The inset shows a zoom-in average CCF of Nights 1 and 2 interpolated onto the fastest timescale of 50 ms in order to clearly illustrate the delay. The shaded region is the average scatter computed in an ensemble of light curve sections. \label{fig:crosscorr}} \end{center} \end{figure*} The net optical and X-ray light curves were translated to a common Barycentric frame, and cross-correlated on the fastest optical time resolutions available on each night. The absolute and relative timing accuracies of ULTRACAM\ are $\sim 1$ ms and 50 $\mu$s, respectively \citep{ultracam}, much better than the smallest timescales in our light curves. The main result of our work is shown in Fig.~\ref{fig:crosscorr}. The CCF shows a single, significant peak at an optical lag of $\sim$ 150 ms. The peak itself has a narrow core, with a shallow rise from $\sim -1.5$ s to 0 s, and a steep decline from 150 ms to $\sim 0.5$ s. Weaker, but significant anti-correlation troughs are centred at $\sim -4$ s and 1 s. Each of these structures is visible in all the observations (in spite of some clear inter-night variation), suggesting that each of them is real. The peak narrowness and position are constant between the three nights, within the $\sim 50-130$ ms resolution available. The $g'$ data shows a very similar CCF to the $r'$/X-ray one presented. 
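The lag convention used throughout can be illustrated with a toy cross-correlation on synthetic light curves (a sketch, not the actual pipeline; a positive peak lag means the optical lags the X-rays, and we inject a 3-bin lag at the 50-ms Night 1 resolution):

```python
import numpy as np

def ccf(x, y, max_lag):
    """Normalized cross-correlation; positive lag k means y lags x by k bins."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    vals = [np.mean(x[:n - k] * y[k:]) if k >= 0 else np.mean(x[-k:] * y[:n + k])
            for k in lags]
    return lags, np.array(vals)

rng = np.random.default_rng(1)
dt = 0.05                                                  # 50-ms bins
xray = rng.normal(size=4000)
optical = np.roll(xray, 3) + 0.5 * rng.normal(size=4000)   # inject a 150-ms lag
lags, c = ccf(xray, optical, 20)
peak_lag = lags[np.argmax(c)] * dt                         # recovers ~0.15 s
```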
The asymmetric shallow rise and steep decline of the CCF is clearly reminiscent of that seen in XTE~J1118+480\ \citep{kanbach01}, but \lq mirror-imaged\rq\ about a vertical axis and shifted to a lag of 0.15 s. This lag was confirmed by constructing averaged optical light curves around micro- (local) flares and dips selected in the full-band X-ray data. Fig.~\ref{fig:likemalzac} shows the resultant optical (as well as 2--5 and 5--20 keV X-ray) superpositions around several hundred X-ray peaks and dips. An optical extremum appears at $\sim 150$ ms lag on all nights (though only the best weather Nights 1 and 2 are shown). Furthermore, the light curve around the peaks clearly shows lower-than-average troughs, as well as a preceding rise of the optical, all matching the CCF within $t$=$\pm$2 s. There are also some matches beyond this range, including the local maximum of the peaks curve at $t$=$+10$ s. But strong intrinsic (not Poisson) variability dominates, especially in the higher-resolution Night 1 data. This suggests that more flaring is present on timescales smaller than those probed by us. \begin{figure*} \begin{center} \includegraphics[angle=90,width=6.5cm]{fig2a.ps} \includegraphics[angle=90,width=6.5cm]{fig2b.ps} \caption{Averaged 2--5 keV, 5--20 keV and optical ($r'$) light curves around full-band PCA X-ray flares {\bf\em (left)} and dips {\bf\em (right)}. Optical light curves from Nights 1 and 2 are shown in red and blue; averages from both nights are shown in black. For making this plot, full-band extrema are selected according to the method of \citet[][ cf. their Fig.~9]{malzac03}. A flare (or dip) must be at least $f$=2 times above (or $1/f$ times below) the local X-ray mean in a running $t_m$=32 s long section, and is also required to be the local extremum within a contiguous segment of $\pm t_p$=8 s (this effectively selects significant flares only). Corresponding light curve sections in other bands are then normalized to their local means before being averaged.
Error bars show typical Poisson uncertainties. \label{fig:likemalzac}} \end{center} \end{figure*} There may be several reasons why the previous observations of \citet{motch83} and \citet{makishima86} did not find a positive CCF signal. Firstly, the CCF strength itself probably evolves between the different states probed: a high state during the observation of \citeauthor{makishima86} and an optically-bright [$V\approx 15.4$] low state in \citeauthor{motch83}. During our observations, the CCF in Fig.~\ref{fig:crosscorr} has a peak strength of only $\sim 0.1$; a perfect correlation would show a peak of 1 (for comparison, XTE~J1118+480\ has a peak of 0.4). It is also possible that the length of the simultaneous optical/X-ray observation ($\sim 100$ s) available to \citet{motch83} was too short to reveal any positive CCF components present. \section{Discussion} The main question that we wish to address is the origin of the rapidly variable optical power and the complex cross-correlation. Published models for the XTE~J1118+480\ CCF include, among others: \citet{merloni00}, who invoke a magnetically-dominated corona, \citeauthor{esin01} (\citeyear{esin01}, a dominant advection-dominated flow [ADAF] with additional synchrotron), \citet[][ a pure jet]{markoff01}, \citet[][ feedback in a common jet+corona reservoir]{malzac04} and \citet[][ an ADAF and a jet dominating at different energies]{yuan05}. Most can explain the averaged broad-band energetics, and provide a qualitative description of the expected variability. But all agree that the origin and the details of the rapid variability patterns are likely to be complicated. The case of GX~339--4\ may well be similar, and a full investigation of the parameter space of the various models is beyond the scope of this Letter. Nevertheless, several important conclusions can be drawn from our simultaneous multi-wavelength data.
\subsection{Reprocessing?} The peak of the CCF time delay (150~ms) corresponds to a light-travel distance of $5000$ $R_{\rm G}$\ [$\equiv GM/c^2$] for $M$$\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$6~M$_{\odot}$, too small for reprocessing on the companion star (GX~339--4\ has a binary separation of $\approx 25$ light-seconds). An auto-correlation analysis of the individual light curves can also be used to constrain the emission processes. Fig.~\ref{fig:autocorr} shows the computed X-ray and optical auto-correlation functions (ACFs). Low count-rate Poisson noise dominating the X-ray ACF at zero lag has been corrected for by subtracting white noise from an X-ray power spectrum, followed by an inverse Fourier transform. The final X-ray ACF is broader than the optical one, similar to the result found in the case of XTE~J1118+480. This also argues against a reprocessing origin (on the outer parts of the accretion disk, say) for the rapidly-variable component, at least as described by a simple, linear transfer function \citep[cf.][]{kanbach01}. \begin{figure} \begin{center} \includegraphics[angle=90,width=8.5cm]{fig3.ps} \caption{The X-ray (black) and optical (red dashed) auto-correlation functions (ACF) computed from the highest time resolution (50 ms) light curves on Night 1. The X-ray ACF is for the full-band PCA, and the optical refers to the $r'$ filter. Both are corrected for Poisson noise (dominant in X-rays). \label{fig:autocorr}} \end{center} \end{figure} \subsection{Behaviour around flares and dips} Fig.~\ref{fig:likemalzac} shows that the behaviour of low- and high-energy X-ray (in this case, 2--5 keV and 5--20 keV) photon intensities is similar, when selected around full-band PCA peaks and dips, with no obvious lag. We note that CCFs of light curves extracted in these energy ranges, with respect to the optical, showed little difference to the full-band result of Fig.~\ref{fig:crosscorr}.
The optical light curves follow the CCF shape, as already mentioned. No significant colour ($g'-r'$) changes around the positions of X-ray flares/dips were detected. What about the source spectral behaviour? Using standard HEADAS tools, we extracted average X-ray spectra (and background) within short time-bins, $\sim \pm 50-150$ ms, centred on X-ray PCA full-band flares and dips (HEXTE was not used for this analysis, as the source is always background-dominated above $\sim 20$ keV). What we find is that the source hardens when it flares, and vice-versa. This is illustrated in Fig.~\ref{fig:flares_dips_spectra}, which shows the contours of independent fits to the extracted flares and dips spectra. A simple power-law (with absorption fixed to Galactic) was used to parametrize the spectral change between the two cases. The photon-index fitted to the flares spectrum is harder than that fitted to dips at 99 per cent confidence on all nights. Simulations were used to confirm that the results are robust to fluctuations in the background, which dominates above $\sim 5$ keV in the lower-flux dips spectrum. On the other hand, no significant changes were detected in X-ray spectra extracted at the positions of {\em optical} peaks as compared to optical dips -- the spectral slopes were close to the slope for the average spectrum of the full dataset ($\Gamma\approx 1.6$). Extracting spectra 150 ms {\em before} optical flares and dips (as suggested by the CCF delay) showed a small but clear difference in the intensities of the X-ray flare and dip spectra, but no obvious change in spectral slope within the errors. This seems consistent with the low CCF peak strength -- i.e. every optical flare need not have been preceded by a locally-maximum X-ray flare, in spite of the average correlation. 
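For concreteness, the flare/dip selection criteria summarized in the caption of Fig.~\ref{fig:likemalzac} (threshold factor $f$, running-mean window $t_m$, local-extremum window $t_p$) can be sketched as follows; this is a toy implementation, not the code actually used:

```python
import numpy as np

def select_flares(lc, dt, f=2.0, t_m=32.0, t_p=8.0):
    """Indices of flares: samples at least f times the local running mean
    (window t_m seconds) that are also the maximum within +/- t_p seconds."""
    n = len(lc)
    w_m, w_p = int(t_m / dt), int(t_p / dt)
    idx = []
    for i in range(w_p, n - w_p):
        lo, hi = max(0, i - w_m // 2), min(n, i + w_m // 2)
        local_mean = lc[lo:hi].mean()
        seg = lc[i - w_p : i + w_p + 1]
        if lc[i] >= f * local_mean and lc[i] == seg.max():
            idx.append(i)
    return np.array(idx)

lc = np.ones(2000)
lc[1000] = 5.0                       # one injected flare
flares = select_flares(lc, dt=0.05)  # -> [1000]
```

Dip selection is the mirror image (at most $1/f$ times the local mean, and the local minimum within $\pm t_p$).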
\begin{figure} \begin{center} \includegraphics[angle=90,width=8.5cm]{fig4.ps} \caption{ Photon-index ($\Gamma$) contours for single power-law fits to the extracted 3--30 keV flare and dip background-subtracted spectra, with absorption fixed to Galactic. The total resultant time interval for spectral extraction is $\approx 30$ s long for both flares and dips, and the net count rate is $\approx 40$ and 5 ct s$^{-1}$ respectively. Contour levels correspond to 68.3, 95 and 99\% for two interesting parameters, and include 1 per cent systematic errors. Results are shown for all three nights. The dotted diagonal line is the 1:1 line. \label{fig:flares_dips_spectra}} \end{center} \end{figure} \subsection{Implications} The above analysis shows that the source is harder during X-ray flares, and vice-versa. In the context of a hot accretion disk corona model, this is consistent with increased Compton up-scattering when the source is (momentarily) brighter, and vice-versa. Fast Compton cooling timescales of tens of ms or smaller have been inferred in Galactic black holes in the low/hard state \citep[cf. ][]{guilbert82_nature}, for coronal electron temperatures $kT_e \sim 100$ keV and seed-photon disk-blackbody X-ray luminosities $< 0.01 L/L_{\rm Edd}$. Very similar physical parameters are inferred for GX~339--4\ during previous low/hard states \citep{joinet07, miller06}, and also contemporaneous with our observations (cf. \citealt{tomsick08}, who favour an inner disk radius [$r_{\rm in}$] of $\sim 10$~$R_{\rm G}$). Thus, similarly-fast cooling timescales are likely to hold for the case of GX~339--4, resulting in apparently-simultaneous flaring and hardening, which is what we observe. As for the optical emission, we have already excluded simple reprocessing models. Bremsstrahlung can also be ruled out, as this would require a corresponding X-ray flux higher than that observed by several orders of magnitude. 
The most likely remaining physical mechanism is then synchrotron emission. Cyclo-synchrotron models in which several magnetized active regions with sizes of a few Schwarzschild radii contribute significantly to the dereddened optical flux have been investigated by \citet[ in addition cf. \citealt{fabian82}]{dimatteo99}. The active regions are characterized by $B\sim$ few $\times 10^6$ G, coronal optical depths $\tau\sim 0.2-1$ and $kT_e\sim 150-200$~keV. \citet{wardzinski00} have also discussed important modifications to such models. Within the context of these models, what could the shape and the time delay of the CCF correspond to? The steep decline that follows the CCF peak (Fig.~\ref{fig:crosscorr}) suggests the presence of some mechanism that cuts off the optical flares suddenly -- like infall of synchrotron-emitting blobs into the BH, say. A potential difficulty of this scenario is that a free-fall time of 150 ms for a 6--10 M$_{\odot}$\ BH corresponds to a physical radius of $\sim 200-250$ $R_{\rm G}$, which is larger than the value of $r_{\rm in}$\ inferred by \citet{tomsick08}. Unless the disk has receded further or an atypically large corona is present, this does not correspond to an obvious physically meaningful scale. An alternate hypothesis is that the rapid variability originates in non-thermal emission within a relativistic outflow or a jet. \citet{malzac04} invoked a magnetic energy reservoir that feeds both a jet component (dominating the optical), as well as an electron corona (dominating in X-rays) to explain the CCF of XTE~J1118+480. Energy injection was modeled as shot flares. Feedback between the two components resulted in complex correlations, with the optical power being proportional to the differential of the X-ray flux. In such a model, the CCF shape and time lag are determined by the dissipation timescale of the process that injects energy into the jet.
This should be largely independent of the exact injection mechanism, though the authors discuss the context of a magnetic energy reservoir. For our observations, a simple exponential ($\propto e^{-t/\tau}$) fit to the innermost part of the X-ray ACF (lags $<$0.5 s) in Fig.~\ref{fig:autocorr} gives a dissipation timescale $\tau=0.2\pm 0.05$ s (90\% error). This agrees with the observed optical delay, and suggests that such a differential correlation may apply to GX~339--4 as well. If X-ray flares above the accretion disk trigger the dissipation and large-scale re-ordering of the poloidal magnetic field threading a jet, it is possible that the synchrotron (optical) emission will respond on timescales related to subsequent field build-up. Significant modulation of the poloidal field can occur on timescales orders of magnitude longer than the dynamical time of the inner accretion disk regions where this field originates \citep[][ see their Eq. 4]{livio03}. For $r_{\rm in} \sim 10$~$R_{\rm G}$, the Keplerian dynamical time is 6 ms. Our observed delay of the optical (i.e., jet) component ($\sim 150$ ms) is 25 times longer, which can easily be accommodated within the picture of magnetic modulation. Synchrotron emission by plasma accelerated along the jet during this period, followed by rapid radiative cooling, may thus explain the positive CCF peak and delay. As for the anti-correlation troughs: if the initial X-ray flares that triggered magnetic re-ordering are related to field reconnection events in a coexistent corona, the coronal magnetic energy density will be released on the X-ray flaring timescale. This will lead to a decrease of any ambient synchrotron emission that is occurring within the corona itself, resulting in an anti-correlation of optical with X-rays. 
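The characteristic numbers invoked in this discussion are straightforward to reproduce (a sketch with standard cgs constants; the $6$~M$_{\odot}$\ mass and $r_{\rm in}\sim 10$~$R_{\rm G}$\ are the values adopted above):

```python
import math

G, C, MSUN = 6.674e-8, 2.998e10, 1.989e33   # cgs units

def r_g_cm(m_sun):
    """Gravitational radius R_G = GM/c^2 in cm."""
    return G * m_sun * MSUN / C**2

def kepler_period(m_sun, r_over_rg):
    """Keplerian orbital period 2*pi*sqrt(r^3 / GM) at r = r_over_rg * R_G."""
    r = r_over_rg * r_g_cm(m_sun)
    return 2 * math.pi * math.sqrt(r**3 / (G * m_sun * MSUN))

t_dyn = kepler_period(6.0, 10.0)        # ~6 ms at r_in ~ 10 R_G
lag_ratio = 0.150 / t_dyn               # the 150-ms delay is ~25 t_dyn
light_travel = 0.150 * C / r_g_cm(6.0)  # ~5000 R_G, as in the reprocessing argument
```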
X-ray flaring is coherent over timescales of $\sim$ several seconds, as seen in the X-ray ACF (Fig.~\ref{fig:autocorr}), and this is also the total length of the anti-correlation seen in the CCF plot of Fig.~\ref{fig:crosscorr}. In short, interaction between the jet and coronal components may be the key to understanding the complex correlation structure. A testable prediction of this model is that the appearance and strength of the positive CCF signal should be intimately related to the prominence of the jet. Our observations probed the beginning of the low/hard state. As the source enters deeper into this state and the jet establishes itself, its contribution to the overall energetics should grow (e.g. \citealt{fender04}), as should the related optical fractional variability rms. Support for this comes from the fact that we find an optical rms variability of $\sim 15$ per cent over the full light curve timescales, while \citet{motch83} found an rms of 50 per cent when the source was optically brighter (this is well-matched to the estimate of $\sim 50$ per cent by \citealt{corbel02}, based on broad-band photometry). Increased corona/jet coupling will result in a higher peak strength of the optical/X-ray CCF. Finally, if a stronger poloidal field is required to establish a stronger jet component, this may result in longer timescales for breaking and re-establishing this field component following reconnection flares. Changing CCF delays could then be used to directly probe the evolution of characteristic accretion/ejection structures in X-ray binaries. \section{Acknowledgements} ULTRACAM\ is supported by STFC grant PP/D002370/1. PG acknowledges JSPS \& RIKEN Foreign Researcher Fellowships. He thanks R.P. Fender, A.A. Zdziarski \& T. Belloni for valuable comments, and the referee for a prompt report. TS and MD acknowledge Spanish Ministry grants AYA2004 02646 \& AYA2007 66887. \bibliographystyle{mnras}
\section{Test for Bell's inequality} \label{test} In Ref.~\cite{stobinska}, it was shown that a superposition of two coherent states, $|c_+\rangle=N_+(|\alpha\rangle+|-\alpha\rangle)$ with the normalization factor $N_+$ and coherent states $|\pm\alpha\rangle$ of amplitudes $\pm \alpha$, when divided at a beam splitter, violates the Bell-CHSH inequality using homodyne measurements and nonlinear interactions. One can show that the fidelity $\cal F$ between a coherent-state superposition $|c_+\rangle$ and a single-mode squeezed state is very high when $\alpha$ is relatively small (e.g. ${\cal F}\geq 0.99$ for $\alpha<0.75$). This motivates us to first investigate the violation of Bell's inequality for single-mode squeezed states divided at a beam splitter. We shall later study another set of Gaussian states which outperform the results for this case. Let us suppose that two parties, Alice and Bob, share an entangled state generated using a single-mode squeezed vacuum and a $50:50$ beam splitter~\cite{kokralph}. Analytically, the state can be described by the following Gaussian-weighted continuous superposition of coherent states~\cite{papers} \begin{equation} \label{initial} \ket{\xi}_{AB}={\cal N}\int{d}^2\alpha~{\cal G}(r,\alpha) |\frac{\alpha}{\sqrt{2}},\frac{\alpha}{\sqrt{2}}\rangle_{AB}, \end{equation} where ${\cal G}(r,\alpha)=\exp[{-(1-\tanh{r})\alpha^2/({2\tanh{r}})}]$, $r$ is the squeezing parameter, $\alpha\in\mathbb{R}$ and ${\cal N}=1/\sqrt{2\pi\sinh{r}}$ is the normalization factor.
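The fidelity statement above is easy to reproduce numerically. The sketch below is ours, not from Ref.~\cite{stobinska}; it assumes the standard overlap $|\langle\alpha|\hat{S}(r)|0\rangle|^2 = \mathrm{sech}(r)\, e^{-\alpha^{2}(1-\tanh r)}$ for real $\alpha$ under one common sign convention for the squeezing operator (maximizing over $r\in[-2,2]$ covers both conventions):

```python
import math

def fidelity_cat_squeezed(alpha, r):
    """|<c_+|S(r)|0>|^2 for the even cat state of real amplitude alpha.
    Assumes <alpha|S(r)|0> = sech(r)^0.5 * exp(-alpha^2 (1 - tanh r) / 2)."""
    n_plus_sq = 1.0 / (2.0 * (1.0 + math.exp(-2.0 * alpha ** 2)))
    overlap = (2.0 * math.sqrt(n_plus_sq)
               * (1.0 / math.cosh(r)) ** 0.5
               * math.exp(-alpha ** 2 * (1.0 - math.tanh(r)) / 2.0))
    return overlap ** 2

def best_fidelity(alpha, n_grid=4001):
    """Maximize the fidelity over the squeezing parameter on a grid."""
    rs = (-2.0 + 4.0 * i / (n_grid - 1) for i in range(n_grid))
    return max(fidelity_cat_squeezed(alpha, r) for r in rs)

F = best_fidelity(0.7)  # close to 0.99, consistent with the claim above
```

The optimized fidelity decreases as $\alpha$ grows, consistent with the quoted $\alpha < 0.75$ threshold for ${\cal F}\geq 0.99$.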
The class of nonlinear transformations we consider can be understood as an approximation of the following rotations performed in the bidimensional space spanned by the generic coherent states $\{\ket{\pm\beta}\}$ ($\beta\in\mathbb{C}$)~\cite{stobinska} \begin{equation} \label{ideale} \begin{split} &\hat{R}_{j}(\theta)\ket{\beta}_j\rightarrow \sin(2\theta_j)\ket{\beta}_j+\cos(2\theta_j) \ket{-\beta}_j,\\ &\hat{R}_{j}(\theta)\ket{-\beta}_j\rightarrow \cos(2\theta_j)\ket{\beta}_j-\sin(2\theta_j)\ket{-\beta}_j, \end{split} \end{equation} where $\theta_j$ is the effective ``angle'' of such idealized rotations and $j=A,B$ labels Alice's or Bob's site. It should be noted that the ``idealized'' transformation described in Eq.~(\ref{ideale}) is {\it not} unitary (it is approximately unitary when $|\beta|$ is large), so that it cannot be performed deterministically. The actual physical local transformation using nonlinear interactions will be considered later in this Section. After the application of the local operations~(\ref{ideale}) to their respective modes, Alice and Bob perform bilocal homodyne measurements, which result in the joint probability-amplitude function \begin{equation} C_{id}(\theta_A,\theta_B,x,y)\propto\langle{x,y} |\hat{R}_A(\theta_A)\hat{R}_B(\theta_B)\ket{\xi}_{AB} \end{equation} with $\ket{x}$ ($\ket{y}$) the in-phase quadrature eigenstate of Alice's (Bob's) mode. A sketch of the thought-experiment is presented in Fig.~\ref{fig:scheme}. In order to test the CHSH version of Bell's inequality, we need to construct a set of bounded dichotomic observables, which we do by assigning value $+1$ to a homodyne-measurement's outcome larger than 0, and $-1$ otherwise~\cite{peculiar}.
With this, a joint probability of outcomes can be calculated as \begin{equation} P_{kl}(\theta_A,\theta_B)= \int^{k_s}_{k_i} dx \int^{l_s}_{l_i} dy~|C_{id}(\theta_A,\theta_B,x,y)|^2, \end{equation} where the subscripts $k,l=\pm$ correspond to Alice's and Bob's assigned measurement outcomes $\pm{1}$ and the integration limits are such that $+_s=\infty,~+_i=-_s=0$ and $-_i=-\infty$. We can now calculate the Bell-CHSH function, $B(\theta_A,\theta_B,\theta'_A,\theta'_B)={\cal C} (\theta_A,\theta_B)+{\cal C}(\theta'_A,\theta_B) +{\cal C}(\theta_A,\theta'_B)-{\cal C}(\theta'_A,\theta'_B)$, where we have introduced the correlation function \begin{equation} \label{corre} {\cal C}(\theta_A,\theta_B)=\sum_{k=\pm}P_{kk} (\theta_A,\theta_B)-\sum_{k\neq{l}=\pm}P_{kl}(\theta_A,\theta_B). \end{equation} According to local-realistic theories, the Bell-CHSH inequality $|B(\theta_A,\theta_B,\theta'_A,\theta'_B)|\le{2}$ holds. Quantitatively, we have found that \begin{equation} \label{corre1M} {\cal C}_{id}(\theta_A,\theta_B,r)\!=\!\frac{2\text{arctan}(\sinh{r}) \cos(4\theta_A)\cos(4\theta_B)}{\pi(1+\mathop{\sum}\limits_{j\neq{k}} \sin(4\theta_j)[\frac{\sin(4\theta_k)}{2}+\sinh{r}])} \end{equation} with $j,k=A,B$, where the subscript $id$ is a reminder that the idealized version of the local operations is being used. The behavior of the numerically optimized Bell-CHSH function corresponding to Eq.~(\ref{corre1M}) is shown by the solid curve in Fig.~\ref{fig:ide}, which demonstrates that a local realistic description of $\ket{\xi}_{AB}$ is impossible once the squeezing parameter of the initial single-mode state surpasses $\sim{2.1}$. The degree of violation of the Bell-CHSH inequality then reaches a maximum of $\sim2.23$ and remains robust against further increases of $r$.
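The sign-binning step can be illustrated independently of the specific state: for two jointly Gaussian quadratures with correlation coefficient $\rho$, the dichotomized correlation is $(2/\pi)\arcsin\rho$ (Sheppard's formula), and the Gudermannian identity $\arcsin(\tanh r)=\arctan(\sinh r)$ connects this to the $\theta_A=\theta_B=0$ value of Eq.~(\ref{corre1M}). The Monte Carlo sketch below is our own illustration; the choice $\rho=\tanh r$ is the correlation coefficient implied by that identity, not a quantity quoted in the text.

```python
import numpy as np

r = 1.0
rho = np.tanh(r)   # implied quadrature correlation at θ_A = θ_B = 0

# Gudermannian identity linking the two closed forms
assert abs(np.arcsin(np.tanh(r)) - np.arctan(np.sinh(r))) < 1e-12

rng = np.random.default_rng(0)
xy = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=200_000)
signs = np.sign(xy)
C_mc = np.mean(signs[:, 0] * signs[:, 1])      # sign-binned correlation
C_analytic = (2/np.pi) * np.arctan(np.sinh(r)) # θ = 0 value of Eq. (corre1M)
```

With $2\times10^5$ samples the Monte Carlo estimate agrees with the closed form to within statistical error ($\sim2\times10^{-3}$).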
Now that we have gained a quantitative picture of the behavior of the Bell function under the class of formal operations and Gaussian measurements considered in our work, it is time to provide a physically effective description of each rotation $\hat{R}_{j}(\theta_j)$. Such a physical implementation stems from the observation made in Ref.~\cite{stobinska} that Eqs.~(\ref{ideale}) can be approximated by a combination of the single-mode Kerr interaction $\hat{U}_{\text{Kerr}}=e^{-i\hat{H}_{\text{Kerr}}t}$ with $\hat{H}_{\text{Kerr}}=\hbar\Omega(\hat{a}^\dag\hat{a})^2$ ($\Omega$ being the strength of the nonlinear coupling) and a displacement of amplitude $\varphi\in\mathbb{C}$, $\hat{D}(\varphi)=e^{\varphi\hat{a}^\dag-\varphi^*\hat{a}}$. Here $\hat{a}$ ($\hat{a}^\dag$) is the annihilation (creation) operator of a field mode. A single-mode Kerr interaction may be implemented, for example, by nonlinear crystals \cite{j04,v04}, while the displacement can be easily performed via a beam splitter with high transmittivity and a local oscillator. \begin{figure}[t] \centerline{\scalebox{0.65}{\includegraphics{coninefficienze2D.eps}} } \caption{Bell test for ideal rotations. The Bell-CHSH function is plotted against the squeezing parameter $r$ for three values of the detection efficiency $\eta$. We show the case corresponding to ideal homodyne detection (solid line), $\eta=0.5$ (dashed line) and $\eta=0.05$ (dotted line).
The horizontal line shows the bound for local realistic theories.} \label{fig:ide} \end{figure} In detail, the evolution induced by the effective rotations $\hat{V}_j(\theta_j)=\hat{U}_{\text{Kerr}}\hat{D} (i\theta_j/d)\hat{U}_{\text{Kerr}}$ on input coherent states $\ket{\pm\beta}_j$ is given by the following expressions ($\beta=\beta_r+i\beta_i$ and $d\in{\mathbb R}$ determines the amplitude of the displacement)~\cite{stobinska} \begin{equation} \label{rotazioni} \begin{split} \hat{V}_j(\theta_j)\ket{\beta}_j&=\frac{1}{2} \left\{e^{i\frac{\theta_j}{d}\beta_r} (|{\beta+\frac{i\theta_j}{d}}\rangle_j+i| {-\beta-\frac{i\theta_j}{d}}\rangle_j)\right.\\ &\left.+ie^{-i\frac{\theta_j}{d}\beta_r} (|{-\beta+\frac{i\theta_j}{d}}\rangle_j+ i|{\beta-\frac{i\theta_j}{d}}\rangle_j)\right\},\\ \hat{V}_j(\theta_j)\ket{-\beta}_j&=\frac{1}{2} \left\{ie^{i\frac{\theta_j}{d}\beta_r}(|{\beta +\frac{i\theta_j}{d}}\rangle_j +i|{-\beta-\frac{i\theta_j}{d}}\rangle_j)\right.\\ &\left.+e^{-i\frac{\theta_j}{d}\beta_r}(|{-\beta +\frac{i\theta_j}{d}}\rangle_j +i|{\beta-\frac{i\theta_j}{d}}\rangle_j)\right\}. \end{split} \end{equation} Note that $\hat{V}_j(\theta_j)$ is unitary while $\hat{R}_j(\theta_j)$ is not strictly a unitary operation. The physical operation $\hat{V}_j(\theta_j)$ is a good approximation of the ideal operation $\hat{R}_j(\theta_j)$ when the amplitudes of the coherent states on which it acts are large. As seen in Eq.~(\ref{initial}), our squeezed state can be expanded in terms of coherent states with a Gaussian weight factor as a function of the coherent amplitude. When the squeezing $r$ is large, contributions of coherent states of small amplitudes become arbitrarily small. This implies that, as the squeezing $r$ becomes large, the results of the Bell-CHSH inequality violation using the ideal rotation $\hat{R}_j(\theta_j)$ should approach the results obtained using the physical rotation $\hat{V}_j(\theta_j)$.
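The cat-state-generating property of the Kerr interaction underlying these rotations can be verified directly in a truncated Fock basis: for $\hat{H}_{\text{Kerr}}=\hbar\Omega(\hat{a}^\dag\hat{a})^2$ and interaction time $\Omega t=\pi/2$ (our illustrative choice, not necessarily the interaction time used in the scheme), a coherent state evolves into a balanced superposition of $\ket{\alpha}$ and $\ket{-\alpha}$:

```python
import numpy as np
from math import factorial, sqrt, exp, pi

nmax, alpha = 60, 2.0
def coh(b):
    """Fock amplitudes of a coherent state |b> (b real here)."""
    return np.array([exp(-abs(b)**2/2) * b**n / sqrt(factorial(n))
                     for n in range(nmax)], dtype=complex)

n = np.arange(nmax)
U = np.exp(-1j * (pi/2) * n**2)   # Kerr phases e^{-i (π/2) n²}, i.e. Ωt = π/2
out = U * coh(alpha)

# expected quarter-revival cat: (e^{-iπ/4}|α> + e^{iπ/4}|-α>)/√2
target = (np.exp(-1j*pi/4)*coh(alpha) + np.exp(1j*pi/4)*coh(-alpha)) / sqrt(2)
fid = abs(np.vdot(target, out))**2
```

The fidelity between the Kerr-evolved state and the cat superposition is $1$ up to truncation error, which is the mechanism that lets $\hat{V}_j(\theta_j)$ mimic rotations in the $\{\ket{\pm\beta}\}$ space.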
We then adjust our notation and indicate with ${C}_{ef}(\theta_A,\theta_B,x,y)=\langle{x,y} |\hat{V}_A(\theta_A) \hat{V}_B(\theta_B)|\xi\rangle_{AB}$ the probability amplitude for measuring the values $x$ and $y$ of the quadrature variables at the homodyne detectors. The subscript indicates that this is the function associated with the use of the physical effective rotations. Quantitatively, ${C}_{ef}$ is easily found using the projection of a coherent state onto a position quadrature eigenstate $\ket{x}$, which is given by $\langle{x}|\beta\rangle={\pi^{-1/4}}e^{\sqrt{2} i\beta_ix-\frac{1}{2}(x-\sqrt{2}\beta_r)^2 -i\beta_r\beta_i}$~\cite{kokralph}. We eventually obtain \begin{equation} \label{corrphys} \begin{split} &|{C}_{ef}(\theta_A,\theta_B,x,y)|^2 =\frac{1}{\pi}e^{-r-e^{-r}\cosh{r}(x^2+y^2)} \left(e^{xy{e}^{-2r}}\times\right.\\ &\sin[\frac{\sqrt{2}(y\,\theta_A+x\,\theta_B)} {d}]\!+\!e^{xy}\cos[\frac{\sqrt{2} (y\,\theta_A-x\,\theta_B)}{d}]\Big). \end{split} \end{equation} We are now in a position to build up the Bell-CHSH function for our Bell's inequality test in this physically effective case. Unfortunately, producing an analytic result is rather demanding due to the semi-infinite range of the integrations over the quadrature variables $x$ and $y$, which also enter the trigonometric functions in Eq.~(\ref{corrphys}), required in order to gather the joint probabilities $P_{kl}(\theta_A,\theta_B)$. We have therefore performed the Bell's inequality test by numerically evaluating the Bell-CHSH function for a set value of $d$ and by scanning the squeezing parameter $r$. The results are shown by the topmost curve in Fig.~\ref{fig:physical2D}, where violation of local realistic theories starting from $r\gtrsim{2.1}$ is observed, in full agreement with the ideal-rotation case. Also, the degree of violation is consistent between the two cases, $|B|_{max}$ being $2.229$ at $r=3.3$.
Although the reproduction of the behavior for large $r$ is computationally demanding, it is possible to perform a qualitative comparison between the ideal and effective cases by looking at the corresponding joint probability functions $|C_{id}(\theta_A,\theta_B,x,y)|^2$ and Eq.~(\ref{corrphys}), evaluated at the angles corresponding to the (numerically optimized) associated Bell-CHSH function. This is done in Fig.~\ref{fig:probs}, where the clear similarity of the two probability functions ensures the closeness of the values of the corresponding Bell-CHSH functions. \begin{figure}[b] \centerline{\scalebox{0.8} {\includegraphics{physicalconinefficienze2Dconpunti2small.eps}}} \caption{The numerically optimized Bell function is plotted against the squeezing parameter $r$ for the case of physical effective rotations and three values of the detection efficiency. The horizontal line shows the bound for local realistic theories. The actual value of $d$ is irrelevant in this figure. The solid line ($\eta=1$) corresponds to the ideal-detector case, while the other two curves correspond to $\eta=0.8$ and $\eta=0.3$.} \label{fig:physical2D} \end{figure} \begin{figure}[t] \centerline{\scalebox{0.55}{\includegraphics{checkcorrelations1small.eps}}} \caption{We compare the behavior of the joint-probability functions $|{C}_{id}(\theta_A,\theta_B,x,y)|^2$ and $|{C}_{ef}(\theta_A,\theta_B,x,y)|^2$ against the quadrature variables $x$ and $y$ for $r=4$. The angles $\theta_{A,B}$ are those maximizing the corresponding Bell-CHSH function. We thus have $\theta_A\!=\!0.061$ and $\theta_{B}\!=\!0.182$ ($\theta_A\!=\!-0.009$ and $\theta_{B}\!=\!0.004$) for the leftmost (rightmost) plot. Moreover, $\int\!\int{d}xdy|{C}_{ef}(\theta_A,\theta_B,x,y)|^2 \simeq\int\!\int{d}xdy|{C}_{id}(\theta_A,\theta_B,x,y)|^2$, regardless of the domain of integration.
} \label{fig:probs} \end{figure} \section{Robustness to imperfections} \label{robustness} Although homodyne detectors have rather high efficiencies, the violation of the Bell-CHSH inequality by $\ket{\xi}_{AB}$ is far from $2\sqrt 2$, the maximum given by Tsirelson's bound~\cite{cirelson}. One might thus wonder whether even mild detection inefficiencies are sufficient to wash out the Bell-CHSH inequality violation unveiled in Fig.~\ref{fig:physical2D}. As done before, we first gain an idea of the expected behavior by studying the idealized picture. In order to quantitatively assess this point, we have modeled the imperfect homodyne detector onto which mode $j=A,B$ impinges as the cascade of a beam splitter of transmittivity $\eta$, mixing mode $j$ with an ancillary vacuum mode $a_j$, and a perfect homodyne detector. We are not interested in the state of the ancillae, which are discarded by tracing them out of the overall state, so that $|C_{id}(\theta_A,\theta_B,x,y)|^2$ is changed into $\langle{x,y}|{\rm Tr}_{a_Aa_B}\psi(\theta_A,\theta_B)|x,y\rangle$ with \begin{equation} \begin{split} &{\psi(\theta_A,\theta_B)}={\hat B}_{Aa_A}(\eta){\hat B}_{Ba_B} (\eta)\hat{R}_A(\theta_A)\hat{R}_B(\theta_B) \ket{\xi}_{AB}\!\bra{\xi}\\ &\otimes\ket{00}_{a_Aa_B} \!\!\bra{00}\hat{R}^\dag_A(\theta_A) \hat{R}^{\dag}_B(\theta_B){\hat B}^\dag_{Aa_A} (\eta){\hat B}^\dag_{Ba_B}(\eta), \end{split} \end{equation} where the beam splitter operation between mode $j$ and the corresponding ancilla ${a}_j$ is defined as ${\hat B}_{ja_j}(\zeta)=\exp[{\frac{\zeta}{2} ({\hat a}_j^\dagger {\hat b}_{a_j} -{\hat a}_j {\hat b}_{a_j}^\dagger)}]$ with $\cos\zeta=\sqrt{\eta}$ and $\hat{b}_{a_j}$ the annihilation operator of $a_j$~\cite{efficiency}. The remaining procedure for the construction of the appropriate Bell-CHSH function remains as described above.
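The beam-splitter model of inefficiency has a simple analogue at the level of sign-binned Gaussian statistics: admixing vacuum noise rescales the effective correlation coefficient. The sketch below is a deliberately simplified toy version with unit-variance signals and noise (our own construction; the actual quadrature variances of $\ket{\xi}_{AB}$ differ), for which the rescaling is $\rho\to\eta\rho$.

```python
import numpy as np

eta, rho, N = 0.6, 0.8, 400_000
rng = np.random.default_rng(1)

# correlated unit-variance "signal" quadratures
xy = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=N)
# detected quadrature = √η · signal + √(1-η) · vacuum noise (unit variance)
noise = rng.standard_normal((N, 2))
detected = np.sqrt(eta)*xy + np.sqrt(1-eta)*noise

C_mc = np.mean(np.sign(detected[:, 0]) * np.sign(detected[:, 1]))
C_th = (2/np.pi) * np.arcsin(eta*rho)   # Sheppard's formula with ρ_eff = ηρ
```

Even at $\eta=0.6$ the dichotomized correlation survives, merely rescaled, which is the qualitative origin of the robustness discussed next.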
The final form of the correlation function, which now depends on the efficiency as well, is obtained from Eq.~(\ref{corre1M}) by simply replacing $\arctan(\sinh{r})\rightarrow\arctan (\frac{\eta{e}^r\sinh{r}}{\sqrt{1+2\eta{e}^r\sinh{r}}})$. The behavior of the associated Bell function is shown, for two values of $\eta$, in Fig.~\ref{fig:ide}. We observe a rather striking robustness of the Bell function with respect to the homodyners' inefficiency: Even severely inefficient homodyne detectors would be able to unveil Bell-CHSH inequality violations with a state that is initially squeezed enough. By simply increasing the squeezing of the input state, one can compensate for the effects of detection inefficiencies. Although for small values of $\eta$ the required squeezing factor becomes prohibitively large, the trend revealed by the ideal case bodes well for the physically effective one too. In fact, such robustness persists when the local operations~(\ref{rotazioni}) are used, as shown in Fig.~\ref{fig:physical2D} for $\eta=0.8$ and $0.3$ (chosen for ease of representation). The squeezing threshold at which Bell's inequality starts to be violated increases only quite slowly as the quality of the homodyne detectors is degraded. In passing, we should stress that the beam-splitter model used for the description of an inefficient homodyne detector can also be used to describe the influence of external zero-temperature reservoirs coupled to the correlated two-mode state we are studying. Thus, similar conclusions regarding the resilience of the Bell-CHSH function to losses induced by a low-temperature environment can be drawn. We complete our study of the effects of imperfections by investigating the case in which we start with a mixed resource. This is practically quite relevant, given that, experimentally, a single-mode squeezed thermal state is in general produced instead of a pure single-mode squeezed vacuum state.
This is formally accounted for by considering the resource state \begin{equation} \label{squeezedthermal} \rho^{st}_{AB}\!=\!\int{d}^2\alpha{\cal T}(\overline{n},\alpha)\hat{S}_A(r) |{\frac{\alpha}{\sqrt 2},\frac{\alpha}{\sqrt 2}}\rangle_{AB} \langle{\frac{\alpha}{\sqrt 2},\frac{\alpha}{\sqrt 2}}|\hat{S}^\dag_A(r), \end{equation} where ${\cal T}(\overline{n},\alpha)= {e}^{-|\alpha-d|^2/\overline{n}}/\pi\overline{n}$ $(\alpha=\alpha_r+i\alpha_i)$ is the Glauber-Sudarshan function of a single-mode state at thermal equilibrium with mean photon number $\overline{n}$, displaced in phase space by $d\in\mathbb{R}$~\cite{kokralph}, while $\hat{S}_A(r)=e^{\frac{r}{2}(\hat{a}^{\dag{2}}-\hat{a}^2)}$ is the mode-$A$ squeezing operator. Eq.~(\ref{squeezedthermal}) results from superimposing at a $50:50$ beam splitter a squeezed displaced thermal state of mode $A$ and the vacuum state of mode $B$. It is straightforward to find that Eq.~(\ref{squeezedthermal}) can be written as $\rho^{st}_{AB}=\int{d}^2\alpha\tilde{\cal T}(r,V,\alpha) |{{\alpha}/{\sqrt 2},{\alpha}/{\sqrt 2}}\rangle_{AB}\!\langle{{\alpha}/ {\sqrt 2},{\alpha}/{\sqrt 2}}|$ with $V=2\overline{n}+1$ and \begin{equation} \tilde{\cal T}(r,V,\alpha)=\frac{2e^{-\frac{2\alpha^2_i}{e^{2r}V-1} -\frac{2(\alpha_r-d)^2}{e^{-2r}V-1}}}{\pi\sqrt{V^2+1-2V\cosh(2r)}}. \end{equation} This state is then locally rotated and projected onto quadrature eigenstates by means of homodyne measurements. Once more, for clarity of our arguments, we refer to the case of ideal rotations. Applying the formal procedure used so far leads to the correlation function \begin{equation} {\cal C}_{st}(\theta_A,\theta_B,r)\!=\!\frac{2\text{arctan} (\frac{e^{r}-Ve^{-r}}{2\sqrt{V}})\cos(4\theta_A)\cos(4\theta_B)} {\pi\left[1+\frac{\sin(4\theta_A)\sin(4\theta_B)}{V}+ \frac{2(\sin(4\theta_A)+\sin(4\theta_B))}{\sqrt{V^2+1+2V\cosh(2r)}}\right]}. \end{equation} Clearly, ${\cal C}_{st}\equiv{\cal C}_{id}$ when $V=1$, {\it i.e.} when a pure state is generated.
With this expression, one can easily build up the Bell-CHSH function and test its behavior against the thermal parameter $V$ and, as usual, the squeezing. The results are shown in Fig.~\ref{fig:misto}, which shows that it is enough to consider a slightly more squeezed initial resource in order to counteract any thermal effect. The same conclusions are reached by using the set of nonlinear unitary transformations $\hat{V}_j(\theta_j)$, although the analysis is largely numerical and more involved. \begin{figure}[t] \centerline{\scalebox{0.7}{\includegraphics{MISTOsmall.eps}}} \caption{Bell-CHSH test for an input squeezed thermal state superimposed with the vacuum at a $50:50$ beam splitter. The Bell-CHSH function is plotted against $r$ and $V=2\overline{n}+1$, {\it i.e.} the thermal variance of the state. The horizontal plane shows the bound for local realistic theories.} \label{fig:misto} \end{figure} \section{Improvement using two-mode squeezed states} \label{testTMSS} The level of squeezing, {\it e.g.} $r\gtrsim 2$ for $\eta \geq 0.8$, required in Figs.~\ref{fig:ide} and \ref{fig:physical2D} to demonstrate Bell-CHSH inequality violations is experimentally difficult to achieve using current technology. In this Section, we show that this requirement can be radically reduced by using another class of Gaussian states. So far, we have investigated the Bell's inequality test under nonlinear operations using the paradigmatic source given by the state $\ket{\xi}_{AB}$. However, the behavior of a Bell-CHSH function strongly depends on intrinsic properties of the tested quantum correlated state. In fact, this can be seen as the ``dual'' of the well-known fact that the same bipartite entangled state behaves differently, in terms of Bell inequality tests, under different sets of local operations. Here, we are interested in finding out whether another realistic Gaussian resource is conceivable for the violation of our Bell-CHSH inequality for smaller values of $r$.
Our starting point is the observation~\cite{KSBK,bowen} \begin{equation} \hat{B}_{AB}(\frac{\pi}{2})\hat{S}_{A}(r)\ket{0,0}_{AB} =\hat{S}_{A}(\frac{r}{2})\hat{S}_B(\frac{r}{2}) \hat{S}_{AB}(\frac{r}{2})\ket{00}_{AB}, \end{equation} where we have used the single-mode squeezing operator $\hat{S}_{j}(r)=\exp[\frac{r}{2} (\hat{a}^2_j-\hat{a}^{\dag2}_j)]~(j=A,B)$ and its two-mode version $\hat{S}_{AB}(r)=\exp[r(\hat{a}^\dag_A \hat{a}^\dag_B-\hat{a}_A\hat{a}_B)]$. Therefore, our resource $\ket{\xi}_{AB}$ is formally equivalent to a two-mode squeezed state subjected to additional local squeezing operations. The latter are unable to change the nonlocal content of the state being used and could well be regarded as a pre-stage of the local actions (comprising nonlinear rotations and homodyne measurements) performed at Alice's and Bob's sites, respectively. We now remove them from the overall setup for Bell's inequality tests by considering, instead of Eq.~(\ref{initial}), the standard two-mode squeezed vacuum~\cite{originalsqueezing} \begin{equation} \ket{\xi'}={\cal M}\int{d}^2\beta~{\cal G}'(r,\beta)\ket{\beta,\beta^*} \end{equation} with weight function~\cite{jlk} \begin{equation} {\cal G}'(r,\beta)=\exp[{-\frac{(1-\tanh{r})|\beta|^2}{\tanh{r}}}] \end{equation} and normalization factor ${\cal M}=(\pi\,\sinh{r})^{-1}$. The adaptation of the formal procedure described in our work to the use of this Gaussian resource is quite straightforward. For the simple case of ideal local rotations, the correlation function for joint outcomes at Alice's and Bob's sites is identical to Eq.~(\ref{corre1M}) with the replacement $r\rightarrow{2r}$. The violation of the local realistic bound now occurs for $r\sim{1}$ and the entire Bell-CHSH function shown in Fig.~\ref{fig:ide} is ``shifted back'' on the $r$ axis accordingly. This effect is the same when Eqs.~(\ref{rotazioni}) are used, although the form of $C_{ef}$ is too cumbersome to be shown here.
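The operator identity above can be checked at the level of covariance matrices, where beam splitter and squeezers act as symplectic transformations on the quadrature vector $(x_A,p_A,x_B,p_B)$. The sketch below uses one fixed set of phase conventions (our choice, made so that the two sides match; other conventions differ by local phase rotations):

```python
import numpy as np

r = 0.8
ch, sh = np.cosh, np.sinh

# LHS: single-mode squeezed vacuum (mode A) ⊗ vacuum through a 50:50 BS.
# Vacuum covariance is I/2 in these units.
V_in = 0.5 * np.diag([np.exp(2*r), np.exp(-2*r), 1.0, 1.0])
S_bs = (1/np.sqrt(2)) * np.array([[ 1, 0, 1, 0],
                                  [ 0, 1, 0, 1],
                                  [-1, 0, 1, 0],
                                  [ 0,-1, 0, 1]])
lhs = S_bs @ V_in @ S_bs.T

# RHS: two-mode squeezing r/2 followed by local squeezing r/2 on each mode
u = r/2
Z = np.diag([1.0, -1.0])
V_tmsv = 0.5 * np.block([[ch(2*u)*np.eye(2), -sh(2*u)*Z],
                         [-sh(2*u)*Z,        ch(2*u)*np.eye(2)]])
S_loc = np.diag([np.exp(u), np.exp(-u), np.exp(u), np.exp(-u)])
rhs = S_loc @ V_tmsv @ S_loc.T
```

Both covariance matrices coincide, confirming that the split squeezed vacuum is a locally squeezed two-mode squeezed state of half the squeezing.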
For clarity, we note that a two-mode squeezed state of squeezing $r$ can be generated using two single-mode squeezed states of the same degree of squeezing and a beam splitter as \begin{equation} \hat{S}_{AB}(r)\ket{00}_{AB}= \hat{B}_{AB}(\frac{\pi}{2})\hat{S}_{A}({r}) \hat{S}_B(-{r})\ket{0,0}_{AB}. \end{equation} This means that single-mode squeezed states with $|r|\gtrsim 1$ ($\gtrsim 8.7$~dB) can be used as resources to show violations of Bell's inequality. This brings our proposal closer to an experimental implementation, since such high levels of squeezing can be generated (for example, up to 10~dB \cite{squeezing}) using current technology. On the other hand, the local nonlinear operations may be more demanding, and various types of unitary interactions need to be investigated to improve the experimental feasibility of our approach. \section{Conclusions} \label{conclusions} We have shown a way to unveil violations of Bell's inequality for two-mode Gaussian states by means of nonlinear local operations and Gaussian homodyne measurements. Besides its theoretical interest, which lies at the center of current investigations on entangled CV systems and their fundamental features, our study emerges as an appealing alternative to the current strategy for Bell's inequality tests based on the use of appropriately de-Gaussified resources and high-efficiency homodyning. Our proposal has been shown to be robust against the inefficiency of the homodyne detection and mixedness in the initial resource. This robustness is consistent with a previous study using entangled thermal states \cite{ourown}. While the squeezing degree of $r \gtrsim 1$ required for the resource Gaussian states is possible to achieve using present-day technology, the strong nonlinear interactions required to implement the local operations may be more difficult to realize. On the other hand, it is worth noting that there has been remarkable progress towards obtaining strong nonlinear effects~\cite{v04,strong}.
Some interesting future work remains. As the local operations used in our paper are not necessarily optimized, the search for more efficient local operations is desirable. Since we have used nonlinear operations to reveal violations of Bell's inequality for Gaussian states and Gaussian measurements, it is also natural to extend this investigation to entanglement distillation protocols for Gaussian states. Interesting open questions are therefore whether there exist entanglement distillation protocols using the type of local operations employed in this paper and how feasible and useful they would be. \acknowledgments MP thanks M. S. Kim for useful discussions and acknowledges the UK EPSRC for financial support (Grant number EP/G004579/1). This work was supported by the Australian Research Council, the Defence Science and Technology Organization, and the Korea Science and Engineering Foundation (KOSEF) grant funded by the Korea government (MEST) (R11-2008-095-01000-0).
\section{Introduction} During the period 11-12 May 1999 solar wind densities, as observed by the Advanced Composition Explorer spacecraft (ACE; \citet{StF98}) located upstream of the Earth at the Lagrangian point L1, dropped to unusually low values ($<$ 1 cm${^{-3}}$) for extended periods of time ($>$ 24 hours). This unusual and extended density depletion was also accompanied by very low-velocity solar wind flows ($<$300 km s${^{-1}}$) and caused a dramatic expansion of the Earth's magnetosphere and bow shock \citep{LeC00}. It has been estimated that the expanding bow shock moved outwards to a distance of $\sim$ 60 Earth radii, the lunar orbit, from its normal location of $\sim$10 Earth radii. The extremely spectacular nature of this event has caused it to be referred to as the ``{\it{day the solar wind nearly died}}'' \citep{Loc01}. In a recent study of this event, \cite{JaF05} traced the solar wind outflows, observed at 1 AU, back to the Sun and showed that the flows responsible for the event began on 05 May 1999 from an active region-coronal hole (AR-CH) complex located at central meridian. They suggested that a continuously evolving CH boundary could cause a pinch-off, leading in turn to a separation of the CH outflow. The expansion of this detached solar wind flow, by a factor of 6-7, could then yield the observed low densities at 1 AU. Given that the travel time between the Sun and the Earth, at the low velocities observed, was $\sim$5$-$6 days, it was argued that a pinch-off taking place $\sim$24$-$48 hours after the start of the coronal hole outflow would cause typical particle densities of 20$-$25 particles cm${^{-3}}$ at $\sim$0.5 AU (the approximate distance the CH outflow would have moved outwards in 48 hours) to be reduced to 0.1 particle cm${^{-3}}$ at 1 AU.
Thus, the expansion of a large, detached, low-velocity flow region from a small CH (as it propagated out to 1 AU) could give rise to an extremely large, low-density cloud that engulfed the Earth on 11 May 1999, as seen from interplanetary scintillation (IPS) observations \citep{BaJ03}. Ionic charge-state ratios of O${^{7+}}$/O${^{6+}}$ are known to be good proxies for associating solar wind outflows with either an AR or a CH origin \citep{LNZ04}. In a recent study it was shown that the solar source of this event could not be pinned down to either an AR or a CH \citep{JaF08}, as the O${^{7+}}$/O${^{6+}}$ ratios were sometimes indicative of a CH origin and at other times indicative of an AR origin. Such a fluctuating O${^{7+}}$/O${^{6+}}$ signature was attributed by \cite{JaF08} to a dynamic and rapid evolution taking place at the AR-CH boundary region, which was at the solar source of the event. Although many studies have attempted to understand the disappearance event of 11 May 1999 \citep{CrS00, FaS00, RiB00, UGF00, BaJ03, JaF05, JaF08}, none have examined the source region of this event. We study the \begin{figure} \centering \includegraphics[width=0.85\textwidth]{xarchive-f01.eps} \caption{Left: Full-disk EIT 195~{\AA} image on 05 May 1999. The boxed region contains AR8525 and the small CH lying against its westernmost boundary. Right: Three-dimensional structure of the coronal magnetic field on 05 May 1999, with the field lines projected on to a source surface at 2.5~R${_{\odot}}$. The thick wavy line is the magnetic neutral line. See text for details.} \label{fig1} \end{figure} source region of the 11 May 1999 disappearance event to understand the evolution and dynamics of this AR-CH complex and to pinpoint its implications for solar-terrestrial relationships in the absence of explosive solar events.
\section{Observations} The EIT \citep{DeA95} provides observations of the Sun at four different wavelengths, {\it viz.} 171~{\AA}~(\ion{Fe}{ix/x}; 1.0~MK), 195~{\AA}~(\ion{Fe}{xii}; 1.5~MK), 284~{\AA}~(\ion{Fe}{xv}; 1.8~MK), and 304~{\AA}~(\ion{He}{ii}; 0.05~MK). The images recorded at 171~{\AA}, 195~{\AA}, and 284~{\AA} are mainly dominated by iron lines and probe systematically greater heights and temperature regions in the corona. However, the images recorded at 195~{\AA} are contaminated by the \ion{Fe}{viii} line as well as the \ion{Fe}{xxiv} line, with the former being dominant in CH regions and the latter in flaring ARs \citep{DBM03, TrD06}. Therefore, these images can be used to study the evolution of both ARs and CHs. In addition to EIT images, SXT images with the Al/Mg filter \citep{TsA91} were used to study the higher temperature (3$-$5 MK) responses, and MDI line-of-sight magnetograms \citep{ScB95}, from the Solar and Heliospheric Observatory \citep{DFP95}, were used to study the evolution of photospheric magnetic fields around 05 May 1999, the approximate launch time of the disappearance event of 11$-$12 May 1999. The images were processed using the standard IDL-based SolarSoft software tree. \section{Data analysis and results} Figure \ref{fig1} (left) shows a full-disk EIT 195~{\AA} image on 05 May 1999. The white box on the solar disk encloses the AR-CH complex, which is the region of interest in this study. The small CH can be clearly seen butting up against the westernmost boundary of AR8525, which is located at central meridian. Figure~\ref{fig1} (right) shows the three-dimensional structure of the coronal magnetic field on 05 May 1999 as viewed from a Carrington longitude of 315${^{\circ}}$, the central meridian passage longitude for 05 May 1999. The fields were computed using a potential field source surface (PFSS) model \citep{HKo99}.
The differently shaded magnetic field lines distinguish the two polarities (black: positive; grey: negative) and are shown projected onto a source surface at 2.5 R${_{\odot}}$, beyond which the potential field lines are assumed to be radial. Only fields between 5$-$250 G on the photosphere are plotted. The thick wavy line is the solar magnetic neutral line. The black, outward-pointing open field lines at central meridian and slightly north of the equator are clearly visible and correspond to the location of AR8525 and the CH. Based on the PFSS model, it is clear that the target region shows open field lines emanating from the AR-CH complex. Figure~\ref{fig2} shows images of the solar disk corresponding to the boxed region from the left-hand panel of Fig.~\ref{fig1}. Each image is approximately centered on the AR-CH complex AR8525 on 05 May 1999 (left-hand panels) and 06 May 1999 (right-hand panels). Starting from the top, the panels show, respectively, EIT 171~{\AA}, EIT 195~{\AA}, and EIT 284~{\AA} images, and SXT images with the Al/Mg filter on 05 May 1999 (left) and 06 May 1999 (right). \begin{figure} \centering \includegraphics[width=0.85\textwidth]{xarchive-f02.eps} \caption{The boxed region of the solar disk from Fig. \ref{fig1} on 05 May 1999 (left column) and 06 May 1999 (right column). From the top down are shown, respectively, EIT 171~{\AA}; EIT 195~{\AA}; 284~{\AA}; and an SXT image. The white arrows in two of the right-hand panels point to new bright features in the CH.} \label{fig2} \end{figure} The small CH lying $\sim$300 arcsec north of and immediately adjacent to AR8525, whose westernmost boundary is located almost exactly at central meridian, can be easily identified in the images. New bright features at the center can be seen to be producing a discernible change in the CH region on 06 May 1999, as compared with the previous day. These changes, perceived as a constriction developing across the CH, are indicated by white arrows in two of the right-hand panels.
The two SXT images (lowermost panels) also show a change in the emission on 06 May as compared to 05 May. It may be noted that the images taken on 05 May and 06 May in Fig. \ref{fig2} are normalized to the same intensity scaling. To further substantiate the occurrence of this constriction or pinch-off taking place in the CH, we show EIT base-difference images of the region in Fig. \ref{fig3}. The images were obtained by subtracting a reference EIT image taken on 05 May 1999 at 06:35:25 UT from EIT images obtained at intervals of $\sim$9 hours after the reference image. Note that the images were differentially rotated to the time of the reference image. The black regions in the difference images represent original features from the reference image, while the white regions show changes that have occurred since the time of the reference image. The three panels clearly show the changes that produce a constriction or narrowing of the CH in the $\sim$24 hour interval between the first panel on the left and the third panel on the right. The evolution of the CH, as observed by the EIT at 195~{\AA}, can be unambiguously seen in the base-difference movie Movie1.gif\footnote{Movies are available on line at http://www.edpsciences.org}, wherein the new bright features within the CH can be seen to be producing a progressive reduction in its area by causing a clear constriction or ``{\sl{pinch-off}}'' across the CH. \begin{figure} \centering \includegraphics[width=0.85\textwidth]{xarchive-f03.eps} \caption{Base difference images obtained at intervals of $\sim$9 hours. In the middle and right-hand panels, the reference image at the left at 06:35:25 UT on 05 May 1999 has been subtracted from images $\sim$9 hours ahead of it.
The arrows in the middle and right-hand panels indicate the changes taking place in the CH.} \label{fig3} \end{figure} Figure \ref{fig4} shows MDI magnetograms, displayed between $\pm$300 G, of the boxed region of the solar disk from the left-hand panel of Fig.{\ref{fig1}}. The left-hand panel is at 06:24:03 UT on 05 May 1999 while the right-hand panel is from 06 May, a little over 24 hours later. Note that the image in the right-hand panel has been rotated to the time of the panel on the left. The black and white regions in each panel correspond to negative and positive polarities, respectively. The small, white, circular region of strong magnetic field lying slightly north and almost exactly at central meridian corresponds to the location of a small sunspot. The negative polarities surrounding the strong sunspot field on 05 May are moving magnetic features that appear around spots during their decay phase \citep{HHa73}. On 06 May, a new negative polarity (shown by the arrow numbered 1), whose corresponding positive polarity cannot be unambiguously identified, is seen to appear to the northwest of the sunspot field. Also seen are two bipolar regions to the far west (arrows numbered 2 and 3), with the westernmost being clearly seen from 04 May and the one to its east beginning to emerge and evolve from 04 May. The brightness observed in EIT and SXT images at these locations indicates the presence of hot closed loops. The constriction taking place in the CH, seen as new bright features in its central region (see Fig. \ref{fig2}), can take place by a process of interchange reconnection \citep{BVA07} wherein the open CH field lines reconnect with the closed field lines to the west. This process will reduce the number of open field lines in the CH, thereby reducing the Earth-directed solar wind outflows, produce the observed brightness at its center, and shift the open field lines to a new location.
The new locations of these shifted open fields need not be ideally located to produce earth-directed outflows and would therefore not contribute to the subsequent events at 1 AU. The interchange reconnection process could also occur between the open CH fields and the closed field lines anchored at one end at the new negative polarity seen to emerge on 06 May (marked by arrow numbered 1 in Fig. \ref{fig4}) or between the open CH fields and the closed loops at the two bipoles to the west. A number of small new positive and negative polarities are also seen to appear around the CH location on 06 May. It is therefore possible that the interchange reconnection process could initially start between the CH open fields and these new closed loops to initiate the constriction process and then lead up to interchange reconnection with closed loops at the bipole locations to the west, in a gradual and stepwise reconnection process \citep{AtH07, MaN07}. For a detailed view of the evolution sequence in the AR-CH complex, see the movie Movie2.gif${^{1}}$. \section{Discussion and Conclusions} Using both spacecraft observations and tomographic IPS observations, \cite{JaF05} located the solar source region of the disappearance event of 11 May 1999 and showed that the flows responsible for the event originated around 05 May 1999 from a small CH lying adjacent to AR8525. We examined the AR-CH complex at the source region of this event using EIT, SXT, and MDI observations. The observations have clearly shown the rapid evolution and changes taking place in the CH lying adjacent to AR8525. The changes are seen to take place in a $\sim$24 hour interval starting from 05 May 1999, the approximate launch time of the disappearance event. \begin{figure} \centering \includegraphics[width=0.85\textwidth]{xarchive-f04.eps} \caption{MDI magnetograms of the boxed region of the solar disk from Fig. \ref{fig1}. The panels differ in time by a little over 24 hours.
Arrow 1 shows a small region of newly emerging negative flux while arrows 2 and 3 show two evolving bipolar regions.} \label{fig4} \end{figure} Based on the combined observations, it appears that the rapid evolution seen in the CH is due to a process of interchange reconnection taking place between the open CH fields and the closed fields from either the bipolar regions to their west or other small closed field regions as described above. The exact magnetic topology of the AR-CH region is, however, complicated and would require a much more detailed study to isolate and pinpoint the reconnection sites and locations of the opposing polarities involved. What is clear, however, is that the interchange reconnection process causes the formation of new bright loops within the CH that can be perceived as a progressive constriction taking place across the CH. Since there is a high degree of correlation between solar wind speed and the size of the CH from which it emanates \citep{NoK76, Neu94, Wan94, NeF98, KoF99}, we believe that, in this event, the formation of the new EUV loops would cause a reduction in the CH area and lead to a suppression of CH outflow. This would then give rise to slower velocity flows from regions that earlier produced faster flows, as has been observed in this event. As stated above, the rapid changes taking place can be seen to be producing a progressive reduction in the area of the CH by causing a clear constriction or pinch-off across it. The observations thus provide support for the mechanism suggested by \cite{JaF05} for causing the long-lasting low-density anomaly or ``{\it{disappearance event}}'' at 1 AU. Since this disappearance event was known to have had significant space weather effects \citep{Ros00, PaC00}, these observations clearly link the observed effects at 1 AU to a sequence of discernible changes taking place in an AR-CH complex on the Sun.
Not considering CIRs, these observations thus provide, to the best of our knowledge, the first evidence for the so-called Sun-Earth ``transmission-line'' arising from non-explosive solar events. Whether or not AR open fields connect to the interplanetary medium to produce solar wind outflows has been debated for some years now \citep{KoF99, LuL02, ArH03, SDe03}. However, the first actual observations of solar wind outflows from AR open fields located at central meridian and lying at the boundary of an AR and a CH have recently been reported \citep{SaK07}. These authors have shown that the observed solar wind outflows came from regions that showed large flux expansion factors and low-velocity solar wind, as identified by tomographic IPS observations. It must be noted here that the work by \cite{JaF05} has reported both large flux expansion factors and low velocities from the source region of the 11 May 1999 disappearance event. As opposed to the well-known drivers of space weather phenomena like CMEs or large flares, disappearance events are not associated with explosive solar phenomena. However, they do produce other observable effects that are not fully understood. For example, \cite{BaJ03} have reported very unusual IPS power spectra attributed to high-energy Strahl electrons. The study of such events is therefore important in establishing and understanding solar-terrestrial relationships in the absence of explosive solar events. With the exception of CIRs, our observations, as stated earlier, provide the first evidence for a solar-terrestrial connection caused by a non-explosive solar event. Solar wind disappearance events constitute extreme deviations from the average conditions expected in the solar wind at 1 AU. It would therefore be important to continue such studies using both ground- and space-based data to gain a better understanding of the dynamics and evolution of AR-CH boundary fields.
\begin{acknowledgements} The authors would like to acknowledge the EIT and MDI consortia for providing data in the public domain via the world wide web. SoHO is a project of international collaboration between ESA and NASA. One of the authors, JP, would like to thank DAMTP, Cambridge, and STFC for support to initiate this work while DT and HEM acknowledge support from STFC. We thank G. Del Zanna for his comments on the manuscript. We also thank the referee for his critical comments and suggestions. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Rarefied gas flow simulation has long been a research hotspot of computational fluid dynamics (CFD). In recent years, multiscale gas-kinetic methods based on the discrete velocity method (DVM, \cite{Goldstein1989Investigations,Yang1995Rarefied,Mieussens2000Discretev,li2004Study,Titarev2007Conservative}) framework for nonequilibrium rarefied flow simulation have been developed, such as the unified gas-kinetic scheme (UGKS) \cite{Xu2010A} by Xu and Huang and the discrete unified gas-kinetic scheme (DUGKS) \cite{guo2013discrete,guo2015discrete} by Guo et al. These multiscale methods overcome the time step and cell size restrictions of the original DVM, which requires a time step and cell size on the order of the mean collision time and mean free path, and have thus attracted increasing attention. It is worth pointing out that although UGKS and DUGKS can adopt time steps and cell sizes comparable to those of traditional macroscopic Navier-Stokes (NS) methods, they still involve a large amount of computation due to the curse of dimensionality. Hence, much research on accelerating these multiscale methods has been carried out, including Mao et al.'s implicit UGKS \cite{Mao2015STUDY}, Zhu et al.'s prediction-based implicit UGKS \cite{Zhu2016Implicit,zhu2019implicit}, Zhu et al.'s implicit multigrid UGKS algorithm \cite{Zhu2017Unified}, Yang et al.'s memory-saving implicit multiscale scheme \cite{Yang2018An}, and Pan et al.'s implicit DUGKS \cite{pan2019implicit}. Following these previous works, it is quite valuable to further develop fast algorithms for multiscale methods. In this paper, a multiple prediction implicit multiscale method for the steady state calculation of gas flows in all flow regimes is proposed. The idea of macroscopic prediction presented by Zhu et al.~\cite{Zhu2016Implicit} is further developed.
A prediction solver is used to predict the macroscopic variable based on the macroscopic residual, and a multiple prediction procedure is constructed. The prediction solver is designed to ensure the accuracy of the predicted macroscopic variable in the continuum flow regime and the stability of the numerical system in all flow regimes, which makes the method very efficient in the continuum flow regime and stable in all flow regimes. Our test cases show that the present method is one order of magnitude faster than the previous implicit multiscale method in the continuum flow regime. \section{Numerical method}\label{sec:numerical_method} In this paper, the monatomic gas is considered and the governing equation is BGK-type equation \cite{bhatnagar1954model}, \begin{equation}\label{eq:bgk} \frac{{\partial f}}{{\partial t}}{\rm{ + }}\vec u\cdot\frac{{\partial f}}{{\partial \vec x}} = \frac{{g - f}}{\tau }, \end{equation} where $f$ is the gas particle velocity distribution function, $\vec u$ is the particle velocity, $\tau$ is the relaxation time calculated as $\tau = \mu /p$ ($\mu$ and $p$ are the viscosity and pressure). $g$ is the equilibrium state which has a form of Maxwellian distribution, \begin{equation} g = \rho {\left( {\frac{\lambda }{\pi }} \right)^{\frac{3}{2}}}{e^{ - \lambda {{\vec c}^2}}}, \end{equation} or if the Shakhov model \cite{shakhov1968generalization} is used \begin{equation}\label{eq:eqstate} g^* = \rho {\left( {\frac{\lambda }{\pi }} \right)^{\frac{3}{2}}}{e^{ - \lambda {{\vec c}^2}}}\left[ {1 + \frac{{4(1 - \Pr ){\lambda ^2}\vec q \cdot \vec c}}{{5\rho }}(2\lambda {{\vec c}^2} - 5)} \right], \end{equation} where $\vec c$ is the peculiar velocity $\vec c = \vec u - \vec U$ and $\vec U$ is the macroscopic gas velocity, $\vec q$ is the heat flux, $\lambda $ is a variable related to the temperature $T$ by $\lambda = 1/(2RT)$. Pr is the Prandtl number and has a value of $2/3$ for monatomic gas. 
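For concreteness, the equilibrium states above can be evaluated on a discrete velocity grid. The following Python sketch (our own illustration with NumPy; the array layout and function names are not part of the paper) implements the Maxwellian and the Shakhov-corrected equilibrium for a flat array of discrete velocity points:

```python
import numpy as np

def maxwellian(rho, U, lam, u):
    # g = rho * (lam/pi)^{3/2} * exp(-lam * |c|^2), with peculiar velocity c = u - U
    c2 = np.sum((u - U) ** 2, axis=1)
    return rho * (lam / np.pi) ** 1.5 * np.exp(-lam * c2)

def shakhov(rho, U, lam, q, u, Pr=2.0 / 3.0):
    # g* = g * [1 + 4(1 - Pr) lam^2 (q . c)(2 lam c^2 - 5) / (5 rho)]
    c = u - U
    c2 = np.sum(c ** 2, axis=1)
    corr = 1.0 + 4.0 * (1.0 - Pr) * lam ** 2 * (c @ q) * (2.0 * lam * c2 - 5.0) / (5.0 * rho)
    return maxwellian(rho, U, lam, u) * corr
```

On a velocity grid that resolves the distribution, the discrete density moment of both equilibria recovers $\rho$, since the Shakhov correction carries zero conservative moments.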
$f$ is related to the macroscopic variables by \begin{equation}\label{eq:f_int_conserve} \vec W = \int {\vec \psi fd\Xi}, \end{equation} where $\vec W=(\rho,\rho\vec U,\rho E)^T$ is the vector of the macroscopic conservative variables, $\vec \psi$ is the vector of moments $\vec \psi = {\left( {1,\vec u,\frac{1}{2}{{\vec u}^2}}\right)^T}$, and $d\Xi = du_xdu_ydu_z$ is the velocity space element. The stress tensor $\pmb{P}$ and the heat flux $\vec q$ can also be calculated from $f$ as \begin{equation}\label{eq:f_int_stress} \pmb{P} = \int {\vec c\vec cfd\Xi }, \end{equation} \begin{equation}\label{eq:f_int_qflux} \vec q = \int {\frac{1}{2}\vec c{{\vec c}^2}fd\Xi }. \end{equation} Moreover, $f$ and $g$ obey the conservation law, \begin{equation}\label{eq:int_conserve_law} \int {\vec \psi (g - f)d\Xi } = \vec 0. \end{equation} In integral form, the steady state of the governing equation Eq.~\ref{eq:bgk} satisfies \begin{equation}\label{eq:mic_fixedpoint} \int\limits_{\partial \Omega } {\vec u \cdot \vec nfdA} = \int\limits_\Omega {\frac{{g - f}}{\tau }dV} , \end{equation} where $\Omega$ is the control volume, $dV$ is the volume element, $dA$ is the surface area element and $\vec n$ is the outward normal unit vector. Taking the moments of Eq.~\ref{eq:mic_fixedpoint} with $\vec \psi = {\left( {1,\vec u,\frac{1}{2}{{\vec u}^2}}\right)^T}$, the corresponding macroscopic governing equation can be written as \begin{equation}\label{eq:mac_fixedpoint} \int\limits_{\partial \Omega } {\vec FdA} = \vec 0, \end{equation} where the flux $\vec F$ is related to the distribution function $f$ by \begin{equation} \vec F = \int {\vec u \cdot \vec n\vec \psi fd\Xi }. \end{equation} This paper concerns the numerical method for determining the steady state defined by Eq.~\ref{eq:mic_fixedpoint}. It is time-consuming to directly solve Eq.~\ref{eq:mic_fixedpoint} through a microscopic scheme involving discretization in both physical space and velocity space.
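The moment relations above translate directly into velocity-space quadratures. A minimal Python sketch (our own naming, assuming a flat array of discrete velocity points with a common scalar quadrature weight) is:

```python
import numpy as np

def conservative_moments(f, u, w):
    # W = (rho, rho*U, rho*E): moments of f with psi = (1, u, u^2/2)
    rho = np.sum(w * f)
    rhoU = (w * f) @ u
    rhoE = 0.5 * np.sum(w * f * np.sum(u ** 2, axis=1))
    return rho, rhoU, rhoE

def stress_and_heat_flux(f, u, w, U):
    # P = int c c f dXi and q = int (1/2) c c^2 f dXi, with c = u - U
    c = u - U
    P = np.einsum('n,ni,nj->ij', w * f, c, c)
    q = 0.5 * ((w * f * np.sum(c ** 2, axis=1)) @ c)
    return P, q
```

For a Maxwellian these quadratures reproduce the analytic moments: the stress tensor is diagonal with $p = \rho/(2\lambda)$ and the heat flux vanishes.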
The main idea of the present method can be summarized as follows: an accurate but expensive scheme is used to calculate the residual of the system's deviation from the steady state, and this residual is then fed to a less accurate but efficient scheme that performs the evolution. More specifically, an accurate multiscale microscopic scheme based on the DVM framework is used to handle the microscopic numerical system given by Eq.~\ref{eq:mic_fixedpoint}, and a fast prediction scheme is used to evolve the macroscopic variables. The prediction scheme can be some kind of macroscopic scheme based on macroscopic variables, or even a scheme based on the DVM framework but with fewer velocity points. The schematic of the general algorithmic framework for the present method is shown in Fig.~\ref{fig:general_frame}. The method consists of several loops in different layers. The outermost loop is denoted by $n$. One iteration of the $n$ loop includes a loop denoted by $m$ and a loop denoted by $l$, where the macroscopic variable $\vec W^{n}_{i}$ and the residual $\vec R^{n}_{i}$ are given as the input, and the new $\vec W^{n+1}_{i}$ and $\vec R^{n+1}_{i}$ are the output. In the $m$ loop, the predicted macroscopic variable $\tilde{\vec W}^{n+1}_{i}$ is determined by the prediction scheme and by the numerical smoothing process. In the $l$ loop, the microscopic variable $f^{n+1}_{i,k}$ is calculated and the new $\vec W^{n+1}_{i}$ and $\vec R^{n+1}_{i}$ are obtained. The present method is a development of the prediction method of Zhu et al.~\cite{Zhu2016Implicit} and has a structure similar to the multigrid method of Zhu et al.~\cite{Zhu2017Unified}; we therefore call it the ``multiple prediction method''. The method is detailed in the following paragraphs.
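The nested loop structure can be sketched abstractly as follows. This Python skeleton only illustrates the control flow of the $n$, $m$ and $l$ loops; the callables `predict_step`, `smooth_step` and `residual` are hypothetical stand-ins for the schemes detailed below, and in the real method $\vec W^{n+1}_{i}$ is re-integrated from $f^{n+1}_{i,k}$ rather than copied from the prediction:

```python
def multiple_prediction_solve(W0, f0, predict_step, smooth_step, residual,
                              tol=1e-10, max_outer=200, inner_m=1, inner_l=10):
    # Outer n loop: each pass runs an m loop (macroscopic prediction) and
    # an l loop (microscopic smoothing), then re-evaluates the residual.
    W, f = W0, f0
    R = residual(W, f)
    for n in range(max_outer):
        W_pred = W
        for m in range(inner_m):        # m loop: predict W^{n+1} from R^n
            W_pred = predict_step(W_pred, W, R)
        for l in range(inner_l):        # l loop: relax f under the predicted W
            f = smooth_step(f, W_pred)
        W = W_pred                      # stand-in: real method re-integrates W from f
        R = residual(W, f)
        if abs(R) < tol:
            break
    return W, f, R
```

With damped scalar stand-ins for the three callables, the skeleton converges to the joint fixed point where the residual vanishes, mimicking how the prediction and smoothing layers cooperate.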
\subsection{Construction of the $l$ loop}\label{sec:loop_l} In the $l$ loop, residuals of the numerical system are evaluated through the microscopic scheme, and the microscopic variables (the discrete distribution function) are updated through an implicit method (the numerical smoothing process). The microscopic scheme is very important because it determines the final steady state of the whole numerical system and thus determines the nature of the present numerical method. The microscopic scheme is based on Eq.~\ref{eq:mic_fixedpoint}. Discretizing the physical space by the finite volume method and the velocity space into discrete velocity points, the microscopic governing equation Eq.~\ref{eq:mic_fixedpoint} can be expressed as \begin{equation}\label{eq:mic_fixedpoint_disc0} \sum\limits_{j \in N\left( i \right)} {{A_{ij}}{{\vec u}_k} \cdot {{\vec n}_{ij}}f_{ij,k}^{}} = {V_i}\frac{{g_{i,k}^{} - f_{i,k}^{}}}{{\tau _i^{}}}, \end{equation} where the subscripts $i,k$ correspond to the discretizations in physical space and velocity space, respectively. $j$ denotes a neighboring cell of cell $i$ and $N\left( i \right)$ is the set of all of the neighbors of $i$. The subscript $ij$ denotes a variable at the interface between cells $i$ and $j$. $A_{ij}$ is the interface area, ${\vec n_{ij}}$ is the outward normal unit vector of interface $ij$ relative to cell $i$, and $V_i$ is the volume of cell $i$. The $l$ loop aims to find the solution of Eq.~\ref{eq:mic_fixedpoint_disc0} with the input predicted variable $\tilde{\vec W}^{n+1}_{i}$, therefore Eq.~\ref{eq:mic_fixedpoint_disc0} can be written more exactly as \begin{equation}\label{eq:mic_fixedpoint_disc} \sum\limits_{j \in N\left( i \right)} {{A_{ij}}{{\vec u}_k} \cdot {{\vec n}_{ij}}f_{ij,k}^{n + 1}} = {V_i}\frac{{\tilde g_{i,k}^{n + 1} - f_{i,k}^{n + 1}}}{{\tilde \tau _i^{n + 1}}}, \end{equation} where the symbol $\sim$ denotes the predicted variables at the $(n+1)$th step.
$\tilde g_{i,k}^{n + 1}$ and $\tilde \tau _i^{n + 1}$ can be directly calculated from the input variable $\tilde{\vec W}^{n+1}_{i}$. The distribution function $f_{ij,k}^{n + 1}$ at the interface $ij$ is very important for ensuring the multiscale property of the scheme. In this paper, following the idea of DUGKS \cite{guo2013discrete,guo2015discrete}, the construction of $f_{ij,k}^{n + 1}$ in reference \cite{yuan2018conservative} is adopted, i.e. \begin{equation}\label{eq:interfacef} f_{ij,k}^{n + 1} = \frac{{\tilde \tau _{ij}^{n + 1}}}{{\tilde \tau _{ij}^{n + 1} + {h_{ij}}}}f\left( {{{\vec x}_{ij}} - {{\vec u}_k}{h_{ij}},0,{{\vec u}_k}} \right) + \frac{{{h_{ij}}}}{{\tilde \tau _{ij}^{n + 1} + {h_{ij}}}}\tilde g\left( {{{\vec x}_{ij}},0,{{\vec u}_k}} \right), \end{equation} where \begin{equation} f({\vec x_{ij}} - {\vec u_k}{h_{ij}},0,{\vec u_k}) = \left\{ {\begin{array}{*{20}{l}} {f_{i,k}^{n + 1} + ({{\vec x}_{ij}} - {{\vec x}_i} - {{\vec u}_k}{h_{ij}}) \cdot \nabla f_{i,k}^{n + 1},\quad {{\vec u}_k} \cdot {{\vec n}_{ij}} \ge 0,}\\ {f_{j,k}^{n + 1} + ({{\vec x}_{ij}} - {{\vec x}_j} - {{\vec u}_k}{h_{ij}}) \cdot \nabla f_{j,k}^{n + 1},\quad {{\vec u}_k} \cdot {{\vec n}_{ij}} < 0.} \end{array}} \right. \end{equation} In the above equations, $\nabla f_{i,k}^{n+1}$ and $\nabla f_{j,k}^{n+1}$ can be obtained through the reconstruction of the distribution function data. $\tilde g\left( {{{\vec x}_{ij}},0,{{\vec u}_k}} \right)$ and ${\tilde \tau _{ij}^{n+1}}$ are calculated in the same way as in the GKS method \cite{xu2001gas}, and both can be calculated from the predicted macroscopic variable $\tilde {\vec W}^{n+1}_{i}$.
$\tilde g({\vec x_{ij}},0,{\vec u_k})$ is determined by the interface macroscopic variables $\tilde {\vec W}^{n+1}_{ij}$, which can be calculated as \begin{equation} \tilde {\vec W}_{ij}^{n + 1} = \int_{\vec u\cdot{{\vec n}_{ij}} \ge 0} {\vec \psi \tilde g_{ij}^{\rm{l},n + 1}d\Xi + } \int_{\vec u\cdot{{\vec n}_{ij}} < 0} {\vec \psi \tilde g_{ij}^{\rm{r},n + 1}d\Xi } , \end{equation} where the superscripts $\rm{l}$ and $\rm{r}$ denote variables at the left and right sides of the interface; ${\tilde g_{ij}^{\rm{l},n + 1}}$ and ${\tilde g_{ij}^{\rm{r},n + 1}}$ can be determined after the spatial reconstruction of $\tilde {\vec W}^{n+1}_{i}$. ${\tilde \tau _{ij}^{n+1}}$ is calculated as \begin{equation} \tilde \tau _{ij}^{n + 1} = \frac{{\mu (\tilde {\vec W}_{ij}^{n + 1})}}{{p(\tilde {\vec W}_{ij}^{n + 1})}} + \frac{{\left| {p_{ij}^{{\rm{l}},n + 1} - p_{ij}^{{\rm{r}},n + 1}} \right|}}{{\left| {p_{ij}^{{\rm{l}},n + 1} + p_{ij}^{{\rm{r}},n + 1}} \right|}}{h_{ij}}, \end{equation} where the pressures ${p_{ij}^{{\rm{l}},n + 1}}$ and ${p_{ij}^{{\rm{r}},n + 1}}$ at the two sides of the interface can be obtained from the reconstruction, and the second term on the right provides artificial viscosity. $h_{ij}$ in the above equations is calculated from the physical local time step \begin{equation} {h_{ij}} = \min ({h_i},{h_j}). \end{equation} The physical local time step $h_i$ for the cell $i$ is determined by the local CFL condition as \begin{equation} {h_i} = \frac{{{V_i}}}{{\mathop {\max }\limits_k \left( {\sum\limits_{j \in N(i)} {\left( {{{\vec u}_k} \cdot {{\vec n}_{ij}}{A_{ij}}{\rm{H}}[{{\vec u}_k} \cdot {{\vec n}_{ij}}]} \right)} } \right)}}{\rm{CFL}}, \end{equation} where ${\rm{H}}[x]$ is the Heaviside function defined as \begin{equation} {\rm{H}}[x] = \left\{ \begin{array}{l} 0,\quad x < 0,\\ 1,\quad x \ge 0. \end{array} \right.
\end{equation} For more details about the construction of the interface distribution function $f_{ij,k}^{n + 1}$, please refer to reference \cite{yuan2018conservative}. Eq.~\ref{eq:mic_fixedpoint_disc} is solved iteratively. The microscopic residual $r_{i,k}^{n + 1,(l)}$ at the $l$th iteration can be defined as \begin{equation}\label{eq:mic_residual} r_{i,k}^{n + 1,(l)} = \frac{{\tilde g_{i,k}^{n + 1} - f_{i,k}^{n + 1,(l)}}}{{\tilde \tau _i^{n + 1}}} - \frac{1}{V_i}\sum\limits_{j \in N\left( i \right)} {{A_{ij}}{{\vec u}_k} \cdot {{\vec n}_{ij}}f_{ij,k}^{n + 1,(l)}}. \end{equation} According to the previous descriptions, $r_{i,k}^{n + 1,(l)}$ can be calculated from $f_{i,k}^{n + 1,(l)}$ and $\tilde {\vec W}^{n+1}_{i}$ through the spatial data reconstruction. The increment equation to obtain the microscopic variable $f_{i,k}^{n + 1,(l+1)}$ at the iteration $l+1$ is constructed by the backward Euler method, \begin{equation}\label{eq:mic_iter_rsd} r_{i,k}^{n + 1,(l)} + \Delta r_{i,k}^{n + 1,(l + 1)} = \frac{1}{{\Delta \xi _{i,k}^{n + 1,(l + 1)}}}\Delta f_{i,k}^{n + 1,(l + 1)}, \end{equation} where ${\Delta \xi _{i,k}^{n + 1,(l + 1)}}$ is the pseudo time step, which is always set to $\infty$ in the present study. Combined with the residual expression Eq.~\ref{eq:mic_residual}, Eq.~\ref{eq:mic_iter_rsd} can be written as \begin{equation}\label{eq:mic_iter} \left( {\frac{1}{{\Delta \xi _{i,k}^{n + 1,(l + 1)}}} + \frac{1}{{\tilde \tau _i^{n + 1}}}} \right)\Delta f_{i,k}^{n + 1,(l + 1)} = r_{i,k}^{n + 1,(l)} - \frac{1}{V_i}\sum\limits_{j \in N(i)} {{A_{ij}}{{\vec u}_k} \cdot {{\vec n}_{ij}}\Delta f_{ij,k}^{n + 1,(l + 1)}}.
\end{equation} The increment of the interface distribution function ${\Delta f_{ij,k}^{n + 1,(l + 1)}}$ is simply handled by a modified upwind scheme \begin{equation}\label{eq:mic_itff_jne0} \Delta f_{ij,k}^{n + 1,(l + 1)} = \left\{ \begin{array}{l} \frac{{\tilde \tau _{ij}^{n + 1}}}{{\tilde \tau _{ij}^{n + 1} + {h_{ij}}}}\Delta f_{i,k}^{n + 1,(l + 1)},\quad {{\vec u}_k} \cdot {{\vec n}_{ij}} \ge 0\\ \frac{{\tilde \tau _{ij}^{n + 1}}}{{\tilde \tau _{ij}^{n + 1} + {h_{ij}}}}\Delta f_{j,k}^{n + 1,(l + 1)},\quad {{\vec u}_k} \cdot {{\vec n}_{ij}} < 0 \end{array} \right. , \end{equation} where the coefficient ${\frac{{\tilde \tau _{ij}^{n + 1}}}{{\tilde \tau _{ij}^{n + 1} + {h_{ij}}}}}$ is the coefficient multiplying $f\left( {{{\vec x}_{ij}} - {{\vec u}_k}{h_{ij}},0,{{\vec u}_k}} \right)$ in Eq.~\ref{eq:interfacef}. This coefficient appears because, during the whole $l$ loop, the term $\tilde g\left( {{{\vec x}_{ij}},0,{{\vec u}_k}} \right)$ in Eq.~\ref{eq:interfacef} is calculated from the predicted macroscopic variable $\tilde {\vec W}^{n+1}_{i}$ and is therefore invariant; the variation of the microscopic variable $f_{i,k}^{n + 1,(l + 1)}$ thus only influences the term $f\left( {{{\vec x}_{ij}} - {{\vec u}_k}{h_{ij}},0,{{\vec u}_k}} \right)$, which is multiplied by the coefficient ${\frac{{\tilde \tau _{ij}^{n + 1}}}{{\tilde \tau _{ij}^{n + 1} + {h_{ij}}}}}$. It is worth noting that in an actual implementation of the method, the interface distribution function $f_{ij,k}^n$ at the $n$th step may be taken as the initial value $f_{ij,k}^{n+1,(0)}$ at $l=0$ for the step $n+1$ to reduce the computational cost. In this situation, the variation $\Delta f_{i,k}^{n + 1,(1)}$ should also account for the variation of $\tilde g\left( {{{\vec x}_{ij}},0,{{\vec u}_k}} \right)$, and the coefficient ${\frac{{\tilde \tau _{ij}^{n + 1}}}{{\tilde \tau _{ij}^{n + 1} + {h_{ij}}}}}$ should not be multiplied at the first iteration of the $l$ loop, i.e.
\begin{equation}\label{eq:mic_itff_je0} \Delta f_{ij,k}^{n + 1,(1)} = \left\{ {\begin{array}{*{20}{l}} {\Delta f_{i,k}^{n + 1,(1)},\quad {{\vec u}_k} \cdot {{\vec n}_{ij}} \ge 0}\\ {\Delta f_{j,k}^{n + 1,(1)},\quad {{\vec u}_k} \cdot {{\vec n}_{ij}} < 0} \end{array}} \right.. \end{equation} In this situation, after the first iteration of the $l$ loop, the interface distribution function $f_{ij,k}^{n + 1,(l>0)}$ will be calculated with the newly predicted $\tilde {\vec W}^{n+1}_{i}$, and Eq.~\ref{eq:mic_itff_jne0} is used to handle ${\Delta f_{ij,k}^{n + 1,(l + 1)}}$ again. Without loss of generality, substituting Eq.~\ref{eq:mic_itff_jne0} into Eq.~\ref{eq:mic_iter} yields \begin{equation}\label{eq:mic_update} \begin{aligned} & \left( {\frac{1}{{\Delta \xi _{i,k}^{n + 1,(l + 1)}}} + \frac{1}{{\tilde \tau _i^{n + 1}}} + \frac{1}{V_i}\sum\limits_{j \in N_k^ + (i)} {\frac{{\tilde \tau _{ij}^{n + 1}}}{{\tilde \tau _{ij}^{n + 1} + {h_{ij}}}}{A_{ij}}{{\vec u}_k}\cdot{{\vec n}_{ij}}} } \right)\Delta f_{i,k}^{n + 1,(l + 1)}\\ = & r_{i,k}^{n + 1,(l)} - \frac{1}{V_i}\sum\limits_{j \in N_k^ - (i)} {\frac{{\tilde \tau _{ij}^{n + 1}}}{{\tilde \tau _{ij}^{n + 1} + {h_{ij}}}}{A_{ij}}{{\vec u}_k}\cdot{{\vec n}_{ij}}\Delta f_{j,k}^{n + 1,(l + 1)}} , \end{aligned} \end{equation} where $ N_k^ + (i)$ is the set of $i$'s neighboring cells satisfying ${\vec u_k} \cdot {\vec n_{ij}} \ge 0$, while $N_k^ - (i)$ is the set of those satisfying ${\vec u_k} \cdot {\vec n_{ij}} < 0$. For simplicity, Eq.~\ref{eq:mic_update} is solved by the Symmetric Gauss-Seidel (SGS) method, also known as the Point Relaxation Symmetric Gauss-Seidel (PRSGS) method \cite{Rogers1995Comparison,Yuan2002Comparison}. In each SGS iteration, a forward sweep from the first to the last cell and a backward sweep from the last to the first cell are implemented, during which the data of a cell are always updated using the latest data of its adjacent cells through Eq.~\ref{eq:mic_update}.
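The sweeping pattern can be illustrated on a generic diagonally dominant linear system. This toy Python sketch (a stand-in for Eq.~\ref{eq:mic_update}, with a dense matrix in place of the actual matrix-free flux coupling) performs the forward and backward passes, always using the latest neighbor data:

```python
import numpy as np

def sgs_sweeps(A, b, x, n_sweeps=1):
    # Symmetric Gauss-Seidel for A x = b: one forward sweep (first -> last)
    # followed by one backward sweep (last -> first) per iteration.
    n = len(b)
    for _ in range(n_sweeps):
        for i in range(n):               # forward sweep
            s = b[i] - A[i] @ x + A[i, i] * x[i]   # exclude the diagonal term
            x[i] = s / A[i, i]
        for i in range(n - 1, -1, -1):   # backward sweep
            s = b[i] - A[i] @ x + A[i, i] * x[i]
            x[i] = s / A[i, i]
    return x
```

On a diagonally dominant system such as the implicit operator of Eq.~\ref{eq:mic_update}, a modest number of sweeps drives the inner residual to near machine precision.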
Such an SGS iteration procedure is completely matrix-free and easy to implement. After several SGS iterations for solving Eq.~\ref{eq:mic_update}, an evaluation of $f_{i,k}^{n + 1,(l + 1)}$ with a certain precision can be obtained. Then the residual $r_{i,k}^{n + 1,(l+1)}$ at the $(l+1)$th iteration of the $l$ loop can be computed from $f_{i,k}^{n + 1,(l + 1)}$ and $\tilde {\vec W}^{n+1}_{i}$, and a new iteration of the $l$ loop is performed. After several iterations of the $l$ loop, an evaluation of $f_{i,k}^{n + 1}$ with a certain precision can be obtained, and the interface distribution function $f_{ij,k}^{n + 1}$ can be calculated by Eq.~\ref{eq:interfacef}. Then the macroscopic numerical flux ${\vec F_{ij}^{n + 1}}$ at the interface can be obtained by numerical integration in the discrete velocity space \begin{equation}\label{eq:f_inc_disc_flux} \vec F_{ij}^{n + 1} = \sum\limits_k {{{\vec \psi }_k}{{\vec u}_k} \cdot {{\vec n}_{ij}}f_{ij,k}^{n + 1}\Delta {\Xi _k}} , \end{equation} and the macroscopic residual $\vec R_i^{n + 1}$ defined by the macroscopic governing equation Eq.~\ref{eq:mac_fixedpoint} at the $(n+1)$th step can be calculated from the flux by \begin{equation}\label{eq:mac_fluxrsd} \vec R_i^{n + 1} = - \frac{1}{{{V_i}}}\sum\limits_{j \in N(i)} {{A_{ij}}\vec F_{ij}^{n + 1}}. \end{equation} Note that in the $l$ loop we solve the microscopic system Eq.~\ref{eq:mic_fixedpoint_disc}, which is under the condition of the predicted variable $\tilde {\vec W}^{n+1}_{i}$, so $\vec R_i^{n + 1}$ is not zero even if the microscopic system is solved sufficiently accurately.
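As an illustration of Eq.~\ref{eq:f_inc_disc_flux} and Eq.~\ref{eq:mac_fluxrsd}, the velocity-space quadrature for the interface flux and the resulting cell residual can be sketched in Python (our own naming; a single scalar quadrature weight and a flat array of velocity points are assumed):

```python
import numpy as np

def interface_flux(f_ij, u, n_ij, w):
    # F_ij = sum_k psi_k (u_k . n_ij) f_{ij,k} dXi_k
    un = u @ n_ij                                    # u_k . n_ij per velocity point
    psi = np.column_stack([np.ones(len(u)), u, 0.5 * np.sum(u ** 2, axis=1)])
    return psi.T @ (w * un * f_ij)                   # 5 conservative flux components

def macro_residual(fluxes, areas, V):
    # R_i = -(1/V_i) sum_j A_ij F_ij, fluxes taken with outward normals of cell i
    return -sum(A * F for A, F in zip(areas, fluxes)) / V
```

For a resting Maxwellian the quadrature gives zero mass and energy fluxes and a momentum flux equal to the pressure, and equal-and-opposite interface fluxes yield a zero cell residual.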
Finally, the macroscopic variable ${\vec W}^{n+1}_{i}$ is calculated by numerical integration as \begin{equation}\label{eq:f_inc_disc_mac} \vec W_i^{n + 1} = \sum\limits_k {{{\vec \psi }_k}f_{i,k}^{n + 1}\Delta {\Xi _k}} + \tilde {\vec W}_i^{n + 1} - \sum\limits_k {{{\vec \psi }_k}\tilde g_{i,k}^{n + 1}\Delta {\Xi _k}} , \end{equation} where the term $\tilde {\vec W}_i^{n + 1} - \sum\limits_k {{{\vec \psi }_k}\tilde g_{i,k}^{n + 1}\Delta {\Xi _k}} $ is the integral error compensation term that makes the scheme conservative; for more details about this term, please refer to reference \cite{yuan2018conservative}. The iteration of the $l$ loop is similar to the numerical smoothing process in the multigrid method \cite{Zhu2017Unified}. The computation procedure of the $l$ loop is listed as follows: \begin{description} \item[Step 1.] Set the initial value $f_{i,k}^{n + 1,(0)}=f_{i,k}^{n}$. \item[Step 2.] Calculate the interface distribution function $f_{ij,k}^{n + 1,(l)}$ by Eq.~\ref{eq:interfacef} from $f_{i,k}^{n + 1,(l)}$ and $\tilde {\vec W}_i^{n + 1}$ through spatial data reconstruction. Calculate the microscopic residual $r_{i,k}^{n + 1,(l)}$ by Eq.~\ref{eq:mic_residual}. \item[Step 3.] Make a judgement: if the residual $r_{i,k}^{n + 1,(l)}$ meets the convergence criterion, or if the iteration number of the $l$ loop meets the maximum limit, break out of the $l$ loop and go to Step 5. \item[Step 4.] Solve Eq.~\ref{eq:mic_update} by several SGS iterations, obtain $f_{i,k}^{n + 1,(l+1)}$, and go to Step 2. \item[Step 5.] By Eq.~\ref{eq:f_inc_disc_flux}, Eq.~\ref{eq:mac_fluxrsd} and Eq.~\ref{eq:f_inc_disc_mac}, perform numerical integration in the velocity space to obtain ${\vec W}_{i}^{n+1}$ and $\vec R_i^{n + 1}$ for the step $n+1$.
\end{description} \subsection{Construction of the $m$ loop}\label{sec:multiscaleflux} In the $m$ loop, based on the macroscopic variable ${\vec W}_{i}^{n}$ and the macroscopic residual $\vec R_i^{n}$ at the $n$th step, a reasonable estimate of the macroscopic variable $\tilde {\vec W}_i^{n + 1}$ is obtained through a fast prediction scheme to accelerate convergence. Theoretically speaking, the prediction scheme can be either a macroscopic scheme based on macroscopic variables or a microscopic scheme based on the DVM framework but with fewer velocity points. In this paper, a macroscopic scheme is designed to do the prediction. The process of the $m$ loop has a certain similarity to the coarse grid correction in the multigrid method \cite{Zhu2017Unified}. \subsubsection{Framework} The macroscopic residual has the form of Eq.~\ref{eq:mac_fluxrsd}. To reduce the residual, a prediction equation is constructed by the backward Euler formula \begin{equation}\label{eq:mac_prediction} \frac{1}{{\Delta {t_i^{n + 1}}}}\left( {\tilde {\vec W}_i^{n + 1} - \vec W_i^n} \right) = \vec R_i^n + \Delta \tilde {\vec R}_i^{n + 1}. \end{equation} $\Delta {t_i^{n + 1}}$ is the local prediction time step; its purpose is to constrain the marching time depth of the prediction process so as to keep the scheme stable in extreme cases. The predicted residual increment $\Delta \tilde {\vec R}_i^{n + 1}$ is calculated by \begin{equation}\label{eq:mac_prediction_rsd} \Delta \tilde {\vec R}_i^{n + 1} = - \frac{1}{{{V_i}}}\sum\limits_{j \in N(i)} {{A_{ij}}\tilde {\vec {\mathcal{F}}}_{ij}^{n + 1}} + \frac{1}{{{V_i}}}\sum\limits_{j \in N(i)} {{A_{ij}}\vec {\mathcal{F}}_{ij}^n}, \end{equation} where $\vec {\mathcal{F}}_{ij}^n$ and $\tilde {\vec {\mathcal{F}}}_{ij}^{n + 1}$ are fluxes calculated by the prediction solver from $\vec W_{i}^n$ and the predicted $\tilde {\vec W}_{i}^{n + 1}$ with data reconstruction.
This prediction solver is carefully designed to balance accuracy and stability, and will be presented in the next section. The aim of the $m$ loop is to solve Eq.~\ref{eq:mac_prediction} and give an estimate of $\tilde {\vec W}_{i}^{n + 1}$ with a certain precision. As in the $l$ loop, Eq.~\ref{eq:mac_prediction} is solved iteratively. The residual $\vec {\mathcal{R}}_i^{n+1,(m)}$ at the $m$th iteration can be defined by Eq.~\ref{eq:mac_prediction} and expressed as \begin{equation}\label{eq:mac_iter_rsd} \begin{aligned} \vec {\mathcal{R}}_i^{n+1,(m)} = & - \frac{1}{{{V_i}}}\sum\limits_{j \in N(i)} {{A_{ij}}\tilde {\vec {\mathcal{F}}}_{ij}^{n + 1,(m)}} + \frac{1}{{{V_i}}}\sum\limits_{j \in N(i)} {{A_{ij}}\vec {\mathcal{F}}_{ij}^n} + \vec R_i^n \\ & - \frac{1}{{\Delta t_i^{n + 1}}}\left( {\tilde {\vec W}_i^{n + 1,(m)} - \vec W_i^n} \right), \end{aligned} \end{equation} and the corresponding increment equation for $\tilde {\vec W}_i^{n + 1,(m+1)}$ is \begin{equation}\label{eq:mac_iter} \vec {\mathcal{R}}_i^{n + 1,(m)} + \Delta \vec {\mathcal{R}}_i^{n + 1,(m + 1)} = \frac{1}{{\Delta \eta _i^{n + 1,(m + 1)}}}\Delta \tilde {\vec W}_i^{n + 1,(m + 1)}, \end{equation} where ${\Delta \eta _i^{n + 1,(m + 1)}}$ is the pseudo time step.
Considering Eq.~\ref{eq:mac_iter_rsd}, the increment of the residual $\Delta \vec {\mathcal{R}}_i^{n + 1,(m + 1)}$ can be expressed as \begin{equation}\label{eq:mac_iter_rsd_inc} \Delta \vec {\mathcal{R}}_i^{n + 1,(m + 1)} = - \frac{1}{{\Delta t_i^{n + 1}}}\Delta \tilde {\vec W}_i^{n + 1,(m + 1)} - \frac{1}{{{V_i}}}\sum\limits_{j \in N(i)} {{A_{ij}}\Delta \tilde {\vec {\mathcal{F}}}_{ij}^{n + 1,(m+1)}}, \end{equation} the variation of the flux $\Delta \tilde {\vec {\mathcal{F}}}_{ij}^{n + 1,(m+1)}$ is further approximated by \begin{equation}\label{eq:mac_iter_flux_inc} \Delta \tilde {\vec {\mathcal{F}}}_{ij}^{n + 1,(m+1)} = {\vec {\mathsf{F}}}_{ij}^{n + 1,(m+1)} - {\vec {\mathsf{F}}}_{ij}^{n + 1,(m)}, \end{equation} where $\vec {\mathsf{F}}_{ij}$ has the form \cite{luo1998fast} of the well-known Roe's flux function \begin{equation}\label{eq:mac_iter_flux_roe} \vec {\mathsf{F}}_{ij} = \frac{1}{2}\left( {{{\vec {\mathbb{F}}}_{ij}}({\vec W}_i) + {{\vec {\mathbb{F}}}_{ij}}({\vec W}_j) + {\mathfrak{r}_{ij}}{{\vec W}_i} - {\mathfrak{r}_{ij}}{{\vec W}_j}} \right). \end{equation} Here ${\vec {\mathbb{F}}_{ij}}(\vec W)$ is the Euler flux \begin{equation} {\vec {\mathbb{F}}_{ij}}(\vec W) = \left( \begin{array}{c} \rho \vec U \cdot {{\vec n}_{ij}}\\ \rho {U_x}\vec U \cdot {{\vec n}_{ij}} + {n_{ij,x}}p\\ \rho {U_y}\vec U \cdot {{\vec n}_{ij}} + {n_{ij,y}}p\\ \rho {U_z}\vec U \cdot {{\vec n}_{ij}} + {n_{ij,z}}p\\ (\rho E + p)\vec U \cdot {{\vec n}_{ij}} \end{array} \right), \end{equation} and ${\mathfrak{r}_{ij}}$ is \begin{equation} {\mathfrak{r}_{ij}} = \left| {{{\vec U}_{ij}} \cdot {{\vec n}_{ij}}} \right| + {a_{ij}} + 2\frac{{{\mu _{ij}}}}{{{\rho _{ij}}\Delta {x_{ij}}}}, \end{equation} where $a_{ij}$ is the acoustic speed at the interface and $\Delta {x_{ij}}$ is the distance between cell center $i$ and $j$. 
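A sketch of this simplified Roe-type flux Eq.~\ref{eq:mac_iter_flux_roe} for a monatomic gas ($\gamma = 5/3$) is given below in Python. The interface state used to evaluate ${\mathfrak{r}_{ij}}$ is taken here as the arithmetic mean of the two cell states, which is our own assumption for illustration; the averaging is not specified above:

```python
import numpy as np

def euler_flux(W, n, gamma=5.0 / 3.0):
    # Euler flux F(W).n for W = (rho, rho*U, rho*E)
    rho, mom, rhoE = W[0], W[1:4], W[4]
    U = mom / rho
    p = (gamma - 1.0) * (rhoE - 0.5 * rho * (U @ U))
    un = U @ n
    return np.concatenate([[rho * un], rho * U * un + p * n, [(rhoE + p) * un]])

def roe_type_flux(Wi, Wj, n, mu, dx, gamma=5.0 / 3.0):
    # F = 0.5 * (F(Wi) + F(Wj) + r_ij * (Wi - Wj)),
    # r_ij = |U.n| + a + 2*mu/(rho*dx) from an (assumed) arithmetic-mean state
    Wm = 0.5 * (Wi + Wj)
    rho, U = Wm[0], Wm[1:4] / Wm[0]
    p = (gamma - 1.0) * (Wm[4] - 0.5 * rho * (U @ U))
    a = np.sqrt(gamma * p / rho)                   # acoustic speed at the interface
    r = abs(U @ n) + a + 2.0 * mu / (rho * dx)     # scalar dissipation coefficient
    return 0.5 * (euler_flux(Wi, n, gamma) + euler_flux(Wj, n, gamma) + r * (Wi - Wj))
```

For a uniform resting state the dissipation term vanishes and the flux reduces to the pressure term alone, as expected of a consistent flux function.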
Substituting Eq.~\ref{eq:mac_iter_rsd_inc}, Eq.~\ref{eq:mac_iter_flux_inc} and Eq.~\ref{eq:mac_iter_flux_roe} into Eq.~\ref{eq:mac_iter}, approximating ${{\mathfrak{r}}_{ij}^{n+1,(m+1)}}$ by ${\mathfrak{r}_{ij}^{n+1,(m)}}$, and noting that $\sum\limits_{j \in N(i)} {{A_{ij}}{\vec{\mathbb{F}}_{ij}}({{\vec W}_i})} = \vec 0$ holds, we obtain \begin{equation}\label{eq:mac_update} \begin{aligned} & \left( {\frac{1}{{\Delta t_i^{n + 1}}} + \frac{1}{{\Delta \eta _i^{n + 1,(m + 1)}}} + \frac{1}{{2{V_i}}}\sum\limits_{j \in N(i)} {{\mathfrak{r}}_{ij}^{n + 1,(m)}{A_{ij}}} } \right)\Delta \tilde {\vec W}_i^{n + 1,(m + 1)} \\ = & \vec {\mathcal {R}}_i^{n + 1,(m)} + \frac{1}{{2{V_i}}}\sum\limits_{j \in N(i)} {{\mathfrak{r}}_{ij}^{n + 1,(m)}{A_{ij}}\Delta \tilde {\vec W}_j^{n + 1,(m + 1)}} \\ & - \frac{1}{{2{V_i}}}\sum\limits_{j \in N(i)} {{A_{ij}}\left( {{{\vec {\mathbb{F}}}_{ij}}(\tilde {\vec W}_j^{n + 1,(m + 1)}) - {{\vec {\mathbb{F}}}_{ij}}(\tilde {\vec W}_j^{n + 1,(m)})} \right)} . \end{aligned} \end{equation} Eq.~\ref{eq:mac_update} is solved by several symmetric Gauss--Seidel (SGS) iterations. An estimate of $\tilde {\vec W}_i^{n + 1,(m + 1)}$ to a certain precision can be obtained from Eq.~\ref{eq:mac_update}; then ${\mathfrak{r}}_{ij}^{n + 1,(m+1)}$ and the residual $\vec {\mathcal {R}}_i^{n + 1,(m+1)}$ at the $(m+1)$th iteration of the $m$ loop can be calculated. After several turns of the $m$ loop, the predicted macroscopic variable $\tilde {\vec W}_i^{n + 1}$ is determined to a certain precision. In fact, utilizing ${\vec W}_i^{n}$ and $\tilde {\vec W}_i^{n + 1}$, a prediction for the microscopic variable ${\tilde f}_{i,k}^{n + 1}$ can also be obtained to accelerate the convergence of the microscopic numerical system (i.e. the $l$ loop). The increment of the distribution function $\Delta {\tilde f}_{i,k}^{n + 1}$ can be calculated from the Chapman-Enskog expansions \cite{chapman1990mathematical} based on the macroscopic variables ${\vec W}_i^{n}$ and $\tilde {\vec W}_i^{n + 1}$.
This strategy would increase the complexity of the algorithm and thus is not adopted in the present method. As one can see, the process of the $m$ loop is similar to the numerical smoothing process in the multigrid method \cite{Zhu2017Unified}. The computation procedure of the $m$ loop is listed as follows: \begin{description} \item[Step 1.] Set the initial value $\tilde {\vec W}_i^{n + 1,(0)}={\vec W}_i^n$. \item[Step 2.] Calculate the residual $\vec {\mathcal {R}}_i^{n + 1,(m)}$ by Eq.~\ref{eq:mac_iter_rsd} from $\vec R_{i}^n$, $\vec W_{i}^n$ and $\tilde {\vec W}_i^{n + 1,(m)}$ (data reconstruction is implemented). \item[Step 3.] Check convergence: if the residual $\vec {\mathcal {R}}_i^{n + 1,(m)}$ meets the convergence criterion, or if the iteration number of the $m$ loop reaches the maximum limit, break out of the $m$ loop; the predicted macroscopic variable $\tilde {\vec W}_{i}^{n+1}$ is then determined. \item[Step 4.] Solve Eq.~\ref{eq:mac_update} by several SGS iterations, obtain $\tilde {\vec W}_i^{n + 1,(m+1)}$, and go to Step 2. \end{description} \subsubsection{Prediction solver} The prediction solver used to calculate the fluxes $\vec {\mathcal{F}}_{ij}^n$ and $\tilde {\vec {\mathcal{F}}}_{ij}^{n + 1}$ in Eq.~\ref{eq:mac_prediction_rsd} requires careful design. For the continuum flow, the prediction solver should be as accurate as a traditional NS solver. For the rarefied flow, it is unrealistic for a fast solver based on macroscopic variables to provide a very precise flux, but the solver should be stable so that the present method can be applied to all flow regimes. Thus, there are two principles for the prediction solver: accuracy in the continuum flow regime and stability in all flow regimes. We construct the solver starting from the viewpoint of gas kinetic theory.
Based on the Chapman-Enskog expansion \cite{chapman1990mathematical}, the distribution function $f$ obtained from the BGK equation Eq.~\ref{eq:bgk} to the first order of $\tau$ is \begin{equation}\label{eq:bgk_ce1} f = g - \tau (\frac{{\partial g}}{{\partial t}} + \vec u \cdot \frac{{\partial g}}{{\partial \vec x}}). \end{equation} Suppose there is an interface in the $x$ direction. If the interface distribution function has the form of Eq.~\ref{eq:bgk_ce1}, taking moments of $u_x\vec \psi$ of Eq.~\ref{eq:bgk_ce1} and ignoring second- and higher-order terms of $\tau$ yields the NS flux \cite{xu2001gas,Xu2015Direct}, where the term $g$ corresponds to the Euler flux and the terms with $\tau$ (i.e. all terms except $g$) correspond to the viscous terms in the NS flux. A flux directly calculated from Eq.~\ref{eq:bgk_ce1} leads to divergence in many cases, so we introduce some modifications below. The Euler flux often causes stability issues. Inspired by the gas-kinetic scheme (GKS), also known as the BGK-NS scheme \cite{xu2001gas}, we replace it by a weighted combination of the Euler flux and the kinetic flux vector splitting (KFVS) flux \cite{mandal1994kinetic}. That is, we replace the term $g$ in Eq.~\ref{eq:bgk_ce1} and the interface distribution function is expressed as \begin{equation}\label{eq:mac_interfacef_0} f = \frac{\tau' }{{\tau' + h}}{g^{\rm{lr}}} + \frac{h}{{\tau' + h}}{g} - \tau (\frac{{\partial g}}{{\partial t}} + \vec u \cdot \frac{{\partial g}}{{\partial \vec x}}), \end{equation} where $g^{\rm{lr}}$ is \begin{equation} {g^{{\rm{lr}}}} = \left\{ \begin{array}{l} {g^{\rm{l}}},u_x \ge 0\\ {g^{\rm{r}}},u_x < 0 \end{array} \right. \end{equation} which is determined by the reconstructed macroscopic variables on the two sides of the interface. The interface macroscopic variable $\vec W$ is calculated as \begin{equation} \vec W = \int_{u_x \ge 0} {\vec \psi {g^{\rm{l}}}d\Xi } + \int_{u_x < 0} {\vec \psi {g^{\rm{r}}}d\Xi } , \end{equation} and $g$ is obtained from $\vec W$.
The weight factors $\tau' /(\tau' + h)$ and $h /(\tau' + h)$ share the same forms as those in Eq.~\ref{eq:interfacef} (for the construction of these weight factors, refer to \cite{yuan2018conservative}), and $\tau'$ is calculated by \begin{equation} \tau ' = \tau + {\tau _{{\rm{artificial}}}} = \frac{\mu }{p} + \frac{{\left| {{p^{\rm{l}}} -{p^{\rm{r}}}} \right|}}{{\left| {{p^{\rm{l}}} + {p^{\rm{r}}}} \right|}}h, \end{equation} where ${\tau _{{\rm{artificial}}}}$ provides artificial viscosity. $h$ is the local CFL time step and is equal to $h_{ij}$ in Eq.~\ref{eq:interfacef}. Eq.~\ref{eq:mac_interfacef_0} has a form similar to the interface distribution function of GKS \cite{xu2001gas}, except that the viscous term is not upwind split and the weight factor is constructed following the idea of DUGKS \cite{guo2013discrete,guo2015discrete}. Because the KFVS scheme is very robust, the flux obtained from Eq.~\ref{eq:mac_interfacef_0} makes the numerical system more stable than directly using Eq.~\ref{eq:bgk_ce1}. In the continuum flow regime, where $h\gg\tau$, if the flow is continuous the artificial-viscosity term ${\tau _{{\rm{artificial}}}}$ is negligible and Eq.~\ref{eq:mac_interfacef_0} recovers the NS flux, while if the flow is discontinuous the term ${\tau _{{\rm{artificial}}}}$ is activated and Eq.~\ref{eq:mac_interfacef_0} works as a stable KFVS solver. In rarefied flow simulation, $\tau > h$ and the inviscid part of Eq.~\ref{eq:mac_interfacef_0} generally provides a KFVS flux, which increases the stability of the scheme. The flux obtained from Eq.~\ref{eq:mac_interfacef_0} works well in the continuum flow regime. However, in the case of large Kn number, the numerical system based on Eq.~\ref{eq:mac_interfacef_0} is very stiff due to the large NS-type linear viscous term, and the scheme is prone to blow up.
Therefore, we multiply the viscous term by a limiting factor ${\mathfrak{q}}(\kappa)$ and Eq.~\ref{eq:mac_interfacef_0} is transformed into \begin{equation}\label{eq:mac_interfacef} f = \frac{\tau' }{{\tau' + h}}{g^{\rm{lr}}} + \frac{h}{{\tau' + h}}{g} - {\mathfrak{q}}(\kappa)\tau (\frac{{\partial g}}{{\partial t}} + \vec u \cdot \frac{{\partial g}}{{\partial \vec x}}). \end{equation} Here we emphasize that the limiting factor ${\mathfrak{q}}(\kappa)$ aims not to calculate the flux accurately, but to increase the stability in the case of large Kn number. One can view it as an empirical parameter. The limiting factor ${\mathfrak{q}}(\kappa)$ is constructed considering the form of the nonlinear coupled constitutive relations (NCCR) \cite{xiao2014computational,liu2019Anextended}, and is expressed as \begin{equation}\label{eq:mac_interfacef_nccrlim} {\mathfrak{q}}(\kappa) = \frac{\kappa}{\sinh (\kappa)}, \end{equation} which satisfies $\mathop {\lim }\limits_{\kappa \to 0} {\mathfrak{q}}(\kappa ) = 1$ and $\mathop {\lim }\limits_{\kappa \to +\infty } {\mathfrak{q}}(\kappa ) = 0$. $\kappa$ is related to the viscous term and calculated as \begin{equation}\label{eq:mac_interfacef_nccrkap} \kappa = \ln \left( {2\frac{{{\pi ^{\frac{1}{4}}}}}{{\sqrt {2\beta } }}\sqrt {\frac{{\Pr {{\left| {k\nabla T} \right|}^2}}}{{{C_p}T{p^2}}} + \frac{{{{\left| {2\mu {S_{ij}}} \right|}^2}}}{{2{p^2}}}} + 1} \right), \end{equation} where $-k\nabla T$ and $2\mu {S_{ij}}$ correspond to the heat flux and stress in the NS equations, and $C_p$ is the specific heat at constant pressure. $\beta$ is a molecular model coefficient \cite{liu2019Anextended} involved in the variable soft sphere (VSS) model \cite{Koura1991Variable,Koura1992Variable} and is calculated as \begin{equation} \beta = \frac{{5(\alpha + 1)(\alpha + 2)}}{{4\alpha (5 - 2\omega )(7 - 2\omega )}}, \end{equation} where the molecular scattering factor $\alpha$ and the heat index $\omega$ depend on the type of gas molecule.
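The limiting behavior of ${\mathfrak{q}}(\kappa)$ is easy to check numerically; the few lines below (a plain evaluation of Eq.~\ref{eq:mac_interfacef_nccrlim}, with the $\kappa \to 0$ limit taken explicitly to avoid a $0/0$ evaluation) confirm that the factor decays monotonically from $1$ toward $0$:

```python
import math

def q_limiter(kappa):
    """q(kappa) = kappa / sinh(kappa); equals 1 at kappa = 0 by continuity."""
    if kappa == 0.0:
        return 1.0
    return kappa / math.sinh(kappa)

# q(kappa) decreases monotonically from 1 toward 0 as kappa grows
values = [q_limiter(k) for k in (0.0, 1.0, 2.0, 5.0, 20.0)]
```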
The limiting factor ${\mathfrak{q}}(\kappa)$ is constructed to weaken the viscous term in the large Kn number case to make the scheme stable. It can be seen from Eq.~\ref{eq:mac_interfacef_nccrlim} and Eq.~\ref{eq:mac_interfacef_nccrkap} that when the stress and heat flux are small, ${\mathfrak{q}}(\kappa)$ approaches $1$ and the NS viscous term in Eq.~\ref{eq:mac_interfacef} is recovered, while when the stress and heat flux are large, ${\mathfrak{q}}(\kappa)$ approaches $0$ and the viscous term is weakened. Here we further reveal the mechanism of ${\mathfrak{q}}(\kappa)$ through a simple one-dimensional case where there is no stress but only heat flux, i.e.~$k\partial T/\partial x \ne 0$ and $\partial {U}/\partial x = 0$. In this case $\kappa$ is \begin{equation} \kappa = \ln \left( {2\frac{{{\pi ^{\frac{1}{4}}}}}{{\sqrt {2\beta } }}\sqrt {\frac{{\Pr {{\left| {k\partial T/\partial x} \right|}^2}}}{{{C_p}T{p^2}}}} + 1} \right), \end{equation} and the heat flux from the viscous term of Eq.~\ref{eq:mac_interfacef} is \begin{equation}\label{eq:mac_qflux_nccrlim} q = {\mathfrak{q}}(\kappa)\left( { - k\frac{{\partial T}}{{\partial x}}} \right) = {\mathfrak{q}}(\kappa){q_{{\rm{NS}}}}. \end{equation} If the magnitude of the NS heat flux $\left|{q_{{\rm{NS}}}}\right|$ approaches $0$, ${\mathfrak{q}}(\kappa)$ approaches $1$ and $q \to q_{{\rm{NS}}}$ in Eq.~\ref{eq:mac_qflux_nccrlim}. If $\left|{q_{{\rm{NS}}}}\right|$ approaches $+\infty$, the heat flux $q$ from Eq.~\ref{eq:mac_qflux_nccrlim} goes to \begin{equation}\label{eq:mac_qflux_nccrlim_infty} q = \frac{{\ln \left( {2M\left| {{q_{{\rm{NS}}}}} \right|} \right)}}{M}\frac{{{q_{{\rm{NS}}}}}}{{\left| {{q_{{\rm{NS}}}}} \right|}}, \end{equation} where $M$ is \begin{equation}\label{eq:mac_qflux_nccrlim_M} M = \frac{\pi ^{\frac{1}{4}}}{{\sqrt {2\beta } }}\sqrt {\frac{{\Pr }}{{{C_p}T{p^2}}}}.
\end{equation} On the other hand, in the NCCR relation \cite{liu2019Anextended}, for the one-dimensional case, if $\partial {U}/\partial x = 0$, the heat flux is calculated as \begin{equation}\label{eq:nccr_qflux_1d} {q_{{\rm{NCCR}}}} = {\mathfrak{q}}({\kappa _{{\rm{NCCR}}}})\left( { - k\frac{{\partial T}}{{\partial x}}} \right) = {\mathfrak{q}}({\kappa _{{\rm{NCCR}}}}){q_{{\rm{NS}}}}, \end{equation} where ${\kappa _{{\rm{NCCR}}}}$ is \begin{equation}\label{eq:nccr_kapa_1d} {\kappa _{{\rm{NCCR}}}} = \frac{{{\pi ^{\frac{1}{4}}}}}{{\sqrt {2\beta } }}\sqrt {\frac{{\Pr {{\left| {{q_{{\rm{NCCR}}}}} \right|}^2}}}{{{C_p}T{p^2}}}}. \end{equation} If $\left|{q_{{\rm{NCCR}}}}\right|$ approaches $0$, similarly $q_{{\rm{NCCR}}} \to q_{{\rm{NS}}}$, i.e.~the NS heat flux is recovered. If $\left|{q_{{\rm{NCCR}}}}\right|$ approaches $+\infty$, in this limiting case the magnitude of the heat flux can be deduced from Eq.~\ref{eq:nccr_qflux_1d} and Eq.~\ref{eq:nccr_kapa_1d} as \begin{equation}\label{eq:nccr_qflux_1d_infty} \left| {{q_{{\rm{NCCR}}}}} \right| = \frac{{\ln \left( {2M\left| {{q_{{\rm{NS}}}}} \right|} \right)}}{M}, \end{equation} where $M$ has the same definition as in Eq.~\ref{eq:mac_qflux_nccrlim_M}. Comparing Eq.~\ref{eq:mac_qflux_nccrlim_infty} and Eq.~\ref{eq:nccr_qflux_1d_infty}, one finds that $q$ and $q_{{\rm{NCCR}}}$ are identical in the limiting case. The above derivation implies that $q$ and $q_{{\rm{NCCR}}}$ are very similar when their magnitudes are very small or very large. Of course, beyond the above special case, in the more general multidimensional case, $\vec q$ from the viscous term of Eq.~\ref{eq:mac_interfacef} and $\vec q_{{\rm{NCCR}}}$ based on the NCCR relation \cite{liu2019Anextended} are not exactly the same when their magnitudes approach $+\infty$, but they are generally of the same order of magnitude when they are large.
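The agreement in the two limits can also be checked numerically. The sketch below (our own check; $M = 1$ is an arbitrary test value) evaluates the limited 1D heat flux of Eq.~\ref{eq:mac_qflux_nccrlim} with $\kappa = \ln(2M\left|q_{\rm NS}\right| + 1)$ and compares it against the asymptote Eq.~\ref{eq:mac_qflux_nccrlim_infty}:

```python
import math

def limited_heat_flux(q_ns, M):
    """Heat flux of Eq. (mac_qflux_nccrlim) in the 1D no-stress case:
    kappa = ln(2*M*|q_ns| + 1), q = (kappa / sinh(kappa)) * q_ns."""
    kappa = math.log(2.0 * M * abs(q_ns) + 1.0)
    return (kappa / math.sinh(kappa)) * q_ns

def nccr_limit(q_ns, M):
    """Large-|q_ns| asymptote, Eq. (mac_qflux_nccrlim_infty)."""
    return math.log(2.0 * M * abs(q_ns)) / M * math.copysign(1.0, q_ns)

M = 1.0          # assumed value of the coefficient M for this check
q_ns = 1.0e8     # a very large NS heat flux
rel_err = abs(limited_heat_flux(q_ns, M) - nccr_limit(q_ns, M)) / abs(nccr_limit(q_ns, M))
```

For small $|q_{\rm NS}|$ the limited flux reduces to the NS value, while for large $|q_{\rm NS}|$ it grows only logarithmically, matching the NCCR behavior derived above.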
All in all, the viscous term of Eq.~\ref{eq:mac_interfacef} recovers the NS viscous term when the stress and heat flux are small, and this viscous term is reduced to the same order of magnitude as the NCCR viscous term when the stress and heat flux are large. Thus, in the small Kn number case the flux obtained from Eq.~\ref{eq:mac_interfacef} is as accurate as the NS flux, while in the large Kn number case the viscous term of Eq.~\ref{eq:mac_interfacef} is suppressed to make the numerical system more stable. Finally, taking moments of $u_x\vec \psi$ of Eq.~\ref{eq:mac_interfacef} (ignoring second- and higher-order terms of $\tau$; i.e.~$\int {\vec \psi \left( {\frac{{\partial g}}{{\partial t}} + \vec u \cdot \frac{{\partial g}}{{\partial \vec x}}} \right)d\Xi } = \vec 0$ is used to transform time derivatives into spatial derivatives), the prediction flux is \begin{equation}\label{eq:mac_predict_flux} \begin{aligned} \vec {\mathcal{F}} = & \frac{{\tau '}}{{\tau ' + h}}\int {{u_x}\left( \begin{aligned} &1\\ &{u_x}\\ &{u_y}\\ &{u_z}\\ &{\frac{1}{2}}{{\vec u}^2} \end{aligned} \right){g^{{\rm{lr}}}}d\Xi } + \frac{h}{{\tau ' + h}}\left( \begin{aligned} &\rho {U_x}\\ &\rho {U_x}{U_x} + p\\ &\rho {U_y}{U_x}\\ &\rho {U_z}{U_x}\\ &(\rho E + p){U_x} \end{aligned} \right) \\ & + {\mathfrak{q}}(\kappa )\left( \begin{aligned} & \;\quad 0\\ & - 2\mu {S_{xx}}\\ & - 2\mu {S_{xy}}\\ & - 2\mu {S_{xz}}\\ & - 2\mu {{\vec S}_x} \cdot \vec U - k\frac{{\partial T}}{{\partial x}} \end{aligned} \right) \end{aligned}. \end{equation} For the calculation of the moments of the Maxwellian distribution function, one can refer to \cite{xu2001gas}. The present prediction solver based on Eq.~\ref{eq:mac_predict_flux} is efficient compared to the solver of GKS \cite{xu2001gas}. It is as accurate as an NS solver in the continuum flow regime and has enhanced stability in the large Kn number case.
It is not accurate for rarefied flow calculation, but as mentioned before, it is unrealistic for a solver based on macroscopic variables to provide a very precise flux in the large Kn number case. For a prediction solver, stability is the most important property. The accuracy of the final solution obtained from the present method depends only on the microscopic scheme described in Section \ref{sec:loop_l}. \section{Numerical results and discussions}\label{sec:numericaltest} More test cases will be added during the preparation of the final paper. \subsection{Lid-driven cavity flow}\label{sec:testcav} The test case of lid-driven cavity flow is performed to test the efficiency of the present method, and to test whether the viscous effect can be correctly simulated by the present method. Three cases, Re=1000 and Kn=0.075, 10, are considered, covering gas flows from the continuum regime to the free molecular regime. The Mach number, which is defined by the upper wall velocity $U_{\rm{wall}}$ and the acoustic velocity, is 0.16. The Shakhov model is used and the Prandtl number Pr=$2/3$. The hard sphere (HS) model is used, with heat index $\omega$=0.5 and molecular scattering factor $\alpha$=1. On the wall of the cavity, the diffuse reflection boundary condition with full thermal accommodation \cite{Xu2015Direct} is implemented. For the physical space discretization, as shown in Fig.~\ref{fig:testcav_mesh}, a nonuniform 61$\times$61 mesh with a mesh size $0.004L$ ($L$ is the width of the cavity) near the wall is used for the case Re=1000, while a uniform 61$\times$61 mesh is used for the cases Kn=0.075, 10. For the velocity space discretization, as shown in Fig.~\ref{fig:testcav_meshmic}, an unstructured mesh with 1192 cells is used, where the central area is refined to reduce the ray effect.
For the iteration strategy, in each step $n$, 60 turns of the $m$ loop and 3 turns of the $l$ loop are performed, with 40 and 6 SGS iterations executed in each turn of the $m$ loop and the $l$ loop, respectively. The prediction step $\Delta t_i^{n + 1}$ in Eq.~\ref{eq:mac_prediction} is set to $ + \infty $ in this set of test cases. The convergence criterion is that the global root-mean-square of the infinity norm of the macroscopic residual vector defined by Eq.~\ref{eq:mac_fluxrsd} is less than $10^{-9}$. Computations are run on a single core of a computer with an \emph{Intel(R) Xeon(R) CPU E5-2678 v3 @ 2.50GHz}. The computational efficiency compared with the implicit multiscale method of reference \cite{yuan2018conservative} is shown in Tab.~\ref{tab:testcav_eff}. It can be seen that in the continuum flow regime (the case Re=1000), the present method is one order of magnitude faster than the implicit method of reference \cite{yuan2018conservative}. Considering that the implicit method of reference \cite{yuan2018conservative} is two orders of magnitude faster than the explicit UGKS (discussed in \cite{yuan2018conservative}) in the continuum flow regime, the present method should be thousands of times faster than the explicit UGKS in the continuum flow regime. For the cases Kn=0.075, 10, the present method is only one to two times faster than the method of reference \cite{yuan2018conservative}. This efficiency is reasonable because in the continuum flow regime the prediction scheme (the $m$ loop) gives very accurate predicted macroscopic variables and the numerical system converges rapidly, while for the rarefied flow the prediction scheme is not as precise, and the slight efficiency increase compared with the method of reference \cite{yuan2018conservative} is due to the improved iteration strategy (namely, the $l$ loop) of the present method.
The results of the present method for this set of test cases are shown in Fig.~\ref{fig:testcav_1000}, Fig.~\ref{fig:testcav_75} and Fig.~\ref{fig:testcav_10}. The present results agree very well with the results of GKS and UGKS. \clearpage \bibliographystyle{yuan_mimplicit}
\section{Introduction} In this paper we consider a $D$-dimensional gravitational model with a Gauss-Bonnet term and a cosmological term $\Lambda$, which extends the model with a cosmological $\Lambda$ term from ref.~\cite{Deruelle}. Attempts at a geometric description of dark energy have led to the analysis of a number of possible extensions of general relativity. In addition to the ``classical'' and scalar-tensor theories, more complicated extensions have been developed, which are associated with the presence of torsion or more involved invariants in the gravitational action. Among those theories that include second-order derivatives of the metric, one can distinguish the Einstein-Gauss-Bonnet theory of gravity, which has interesting properties. The idea of these theories follows from the concept of cosmology on branes, which is based on string theory. In \cite{Lovelock_1} a new class of gravity theories was proposed, called Lovelock gravity, of which Gauss-Bonnet gravity is a particular case. The cosmology of Gauss-Bonnet gravity has been studied in detail in a number of papers, and at present exact analytical solutions have been obtained: cosmological (\cite{ElMakObOsFil} - \cite{IvKob-19-1}), centrally symmetric (generalizations of the Schwarzschild metric based on Einstein-Gauss-Bonnet gravity) (\cite{R.Kon-1} - \cite{Torii}) and wormhole (\cite{Kanti} - \cite{Cu_Kon_Zhid}) solutions. \section{The setup} In this multidimensional Einstein-Gauss-Bonnet model, the metric is expressed in the following form: \begin{equation} g= w e^{2\gamma(u)} du \otimes du + \sum_{i=1}^{n} e^{2\beta^i(u)}\epsilon_i dy^i \otimes dy^i \label{2.1A} \end{equation} on the manifold \begin{equation} M = R \times M_1 \times \ldots \times M_n \label{2.2A}, \end{equation} where $ w=\pm1$, $\epsilon_i=\pm1$, $ i=1, ... , n$. The manifold $M$ is defined as a product of $R$ and the one-dimensional manifolds $M_1, ... , M_n$. In the open real set $R_*=(u_-, u_+) $, the functions $ \gamma(u) $ and $\beta^i(u)$ are smooth.
The metric (\ref{2.1A}) is cosmological for $w=-1$, $\epsilon_1=\epsilon_2= ...=\epsilon_n=1$; for physical applications the manifolds $M_1$, $M_2$ and $M_3$ are equal to $\mathbb{R}$ and the other manifolds are considered compact. In this model, the action is expressed as \begin{equation} S = \int_{M} d^{D}z \sqrt{|g|} \{ \alpha_1 (R[g] - 2 \Lambda) + \alpha_2 {\cal L}_2[g] \}, \label{2.3A} \end{equation} where $g=g_{MN}dz^M \otimes dz^N$ is the metric defined on the manifold $M$, $dim M=D$, $|g|=|\det(g_{MN})| $ and \begin{equation} {\cal L}_2 = R_{MNPQ} R^{MNPQ} - 4 R_{MN} R^{MN} +R^2 \label{2.4A} \end{equation} is the standard Gauss-Bonnet term and $\alpha_1$, $\alpha_2$ are nonzero constants. Further we denote $\alpha= \frac{\alpha_2}{\alpha_1}$. Recent astronomical observations and studies show that the Universe expands with acceleration, instead of decelerating according to the scheme of the standard Friedmann model. Observations of the large-scale structure of the Universe show that visible matter and invisible dark matter contribute only $ 31.7 \% $ of the total amount. The remaining $68.3 \%$ is dark energy, which causes the accelerated expansion of the Universe. Several interesting articles on Einstein-Gauss-Bonnet gravity have been published which attempt to explain the problems of dark energy in cosmology (\cite{ChPavTop1} - \cite{Myrzakulov_1}). It should be noted that in the Einstein-Gauss-Bonnet theory of gravity the expansion of the Universe may be explained without a cosmological term. This is the main feature of this theory of gravity. Here we deal with cosmological solutions with diagonal metrics (of Bianchi-I-like type) governed by $n$ scale factors depending upon one variable, where $n>3$. Moreover, we restrict ourselves to solutions with exponential dependence of the scale factors (with respect to the synchronous variable $t$): \begin{equation} a_i(t) \sim \exp(h^i t) \label{2.5A}, \end{equation} $i=1, ... ,n; $ $D = n+1$.
Recent astronomical observations show that our Universe in its present state expands with acceleration. Therefore, in order to describe the 3-dimensional exponential expansion of the Universe, we will assume that \begin{equation} h^1=h^2=h^3=H>0 \label{2.6A}. \end{equation} The integrand in (\ref{2.3A}), when the metric (\ref{2.1A}) is substituted, reads as follows \begin{equation} \sqrt{|g|}\Bigl\{\alpha_1(R[g]-2\Lambda) +\alpha_2 {\cal L}_2[g]\Bigl\}=L+\frac{df}{du} \label{2.7A}, \end{equation} where \begin{equation} L=\alpha_1 L_1+\alpha_2 L_2 \label{2.8A}. \end{equation} Here the terms $L_1$ and $L_2$ are expressed in the following form \cite{IvKob-19-1}: \begin{equation} L_1=(-w)e^{-\gamma+\gamma_0}G_{ij}\dot\beta^i\dot\beta^j - 2 \Lambda e^{\gamma+\gamma_0} \label{2.9A}, \end{equation} \begin{equation} L_2=-\frac{1}{3}e^{-3\gamma+\gamma_0}G_{ijkl}\dot\beta^i\dot\beta^j\dot\beta^k\dot\beta^l \label{2.10A}, \end{equation} where \begin{equation} \gamma_0= \sum_{i=1}^n \beta^i \label{2.11A}. \end{equation} Here we use a bilinear symmetric form, the ``mini-supermetric'' (a 2-metric of pseudo-Euclidean signature): \begin{equation} <\upsilon_1, \upsilon_2> = G_{ij}\upsilon_1^i\upsilon_2^j \label{2.12A}, \end{equation} where \begin{equation} G_{ij}= \delta_{ij}-1 \label{2.13A}, \end{equation} and a 4-linear symmetric form (the Finslerian 4-metric): \begin{equation} <\upsilon_1, \upsilon_2, \upsilon_3, \upsilon_4> = G_{ijkl}\upsilon_1^i\upsilon_2^j\upsilon_3^k\upsilon_4^l \label{2.14A} \end{equation} with components \begin{equation} G_{ijkl}= (\delta_{ij}-1) (\delta_{ik}-1) (\delta_{il}-1) (\delta_{jk}-1) (\delta_{jl}-1) (\delta_{kl}-1) \label{2.15A}. \end{equation} Here we denote $ \dot A=\frac{dA}{du} $, and the function $f(u)$ in (\ref{2.7A}) is irrelevant for our consideration (see \cite{Iv-09}, \cite{Iv-10}).
The derivation of (\ref{2.8A}) - (\ref{2.10A}) is based on the following identities (\cite{Iv-09}, \cite{Iv-10}): \begin{equation} G_{ij}\upsilon^i\upsilon^j= \sum_{i=1}^n (\upsilon^i)^2 - \Biggl(\sum_{i=1}^n \upsilon^i\Biggl)^2 \label{2.16A}, \end{equation} \begin{eqnarray} G_{ijkl}\upsilon^i\upsilon^j\upsilon^k\upsilon^l= \Biggl( \sum_{i=1}^n \upsilon^i \Biggl)^4 - 6\Biggl(\sum_{i=1}^n \upsilon^i\Biggl)^2\sum_{j=1}^n (\upsilon^j)^2 \nonumber \\ + 3\Biggl(\sum_{i=1}^n (\upsilon^i)^2\Biggl)^2 + 8\Biggl(\sum_{i=1}^n \upsilon^i\Biggl) \sum_{j=1}^n (\upsilon^j)^3 - 6\sum_{i=1}^n (\upsilon^i)^4 \label{2.17A}. \end{eqnarray} From the action (\ref{2.3A}) we can get the following form of the equations of motion: \begin{equation} \epsilon_{MN} = \alpha_1\epsilon_{MN}^{(1)} + \alpha_2\epsilon_{MN}^{(2)} = 0 \label{3.1A}, \end{equation} where \begin{equation} \epsilon_{MN}^{(1)} = R_{MN} - \frac{1}{2}Rg_{MN} + \Lambda g_{MN} \label{3.2A}, \end{equation} \begin{equation} \epsilon_{MN}^{(2)} = 2 \Biggl(R_{MPQS}R_N^{PQS} - 2R_{MP}R_N^P - 2R_{MPNQ}R^{PQ} + RR_{MN} \Biggl) - \frac{1}{2}{\cal L}_2 g_{MN} \label{3.3A}. \end{equation} Now we put $w= -1$, and the equations of motion for the action (\ref{2.3A}) give us the set of polynomial equations \cite{ErIvKob-16} \begin{eqnarray} E = G_{ij} v^i v^j + 2 \Lambda - \alpha G_{ijkl} v^i v^j v^k v^l = 0, \label{3.4A} \\ Y_i = \left[ 2 G_{ij} v^j - \frac{4}{3} \alpha G_{ijkl} v^j v^k v^l \right] \sum_{s=1}^{n} v^s - \frac{2}{3} G_{ij} v^i v^j + \frac{8}{3} \Lambda = 0, \label{3.5A} \end{eqnarray} $i = 1,\ldots, n$, where $\alpha = \alpha_2/\alpha_1$. For $n > 3$ we get a set of fourth-order polynomial equations. We note that for $\Lambda =0$ and $n > 3$ the set of equations (\ref{3.4A}) and (\ref{3.5A}) has an isotropic solution $v^1 = \cdots = v^n = H$ only if $\alpha < 0$ \cite{Iv-09,Iv-10}. This solution was generalized in \cite{ChPavTop} to the case $\Lambda \neq 0$.
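The identities (\ref{2.16A}) and (\ref{2.17A}) are easy to verify numerically by contracting the component definitions (\ref{2.13A}) and (\ref{2.15A}) with a random vector; the short script below is our own independent check:

```python
import itertools
import random

def d(a, b):
    """delta_{ab} - 1: the basic factor in Eqs. (2.13A) and (2.15A)."""
    return (1.0 if a == b else 0.0) - 1.0

def contract2(v):
    """G_ij v^i v^j computed directly from the components G_ij = delta_ij - 1."""
    n = len(v)
    return sum(d(i, j) * v[i] * v[j] for i in range(n) for j in range(n))

def contract4(v):
    """G_ijkl v^i v^j v^k v^l from the components of Eq. (2.15A)."""
    n = len(v)
    return sum(d(i, j) * d(i, k) * d(i, l) * d(j, k) * d(j, l) * d(k, l)
               * v[i] * v[j] * v[k] * v[l]
               for i, j, k, l in itertools.product(range(n), repeat=4))

def closed2(v):
    """Right-hand side of the identity (2.16A)."""
    return sum(x * x for x in v) - sum(v) ** 2

def closed4(v):
    """Right-hand side of the identity (2.17A)."""
    s1 = sum(v); s2 = sum(x**2 for x in v)
    s3 = sum(x**3 for x in v); s4 = sum(x**4 for x in v)
    return s1**4 - 6 * s1**2 * s2 + 3 * s2**2 + 8 * s1 * s3 - 6 * s4

random.seed(0)
v = [random.uniform(-1.0, 1.0) for _ in range(5)]  # a random 5-vector
```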
It was shown in \cite{Iv-09,Iv-10} that there are no more than three different numbers among $v^1,\dots ,v^n$ when $\Lambda =0$. This is also valid for $\Lambda \neq 0$ if $\sum_{i = 1}^{n} v^i \neq 0$. \section{Exponential solutions with three factor spaces} In this section we deal with a class of solutions to the set of equations (\ref{3.4A}), (\ref{3.5A}) of the following form: \begin{equation} \label{3.1} v =(\underbrace{H,H,H}_{``our'' \ space},\underbrace{\overbrace{h_1, \ldots, h_1}^{k_1}, \overbrace{h_2, \ldots, h_2}^{k_2}}_{internal \ space}), \end{equation} where $H$ is the Hubble-like parameter corresponding to a $3$-dimensional factor space, $h_1$ is the Hubble-like parameter corresponding to a $k_1$-dimensional factor space with $k_1 > 1$ and $h_2$ ($h_2 \neq h_1$) is the Hubble-like parameter corresponding to a $k_2$-dimensional factor space with $k_2 > 1$. The first factor space is identified with ``our'' $3d$ space, while the others are considered subspaces of the $( k_1 + k_2)$-dimensional internal space. We assume \begin{equation} \label{3.2a} H > 0 \end{equation} for a description of the accelerated expansion of the $3$-dimensional subspace (which may describe our Universe). According to the ansatz (\ref{3.1}), the first $3$-dimensional factor space is expanding with the Hubble parameter $H >0$, while the $k_i$-dimensional internal factor space is contracting with the Hubble-like parameter $h_i < 0$, where $i$ is either $1$ or $2$. Now we consider the ansatz (\ref{3.1}) with three Hubble parameters $H$, $h_1$ and $h_2$ which obey the following restrictions: \begin{equation} S_1 = 3 H + k_1h_1 + k_2 h_2 \neq 0, \quad H \neq h_1, \quad H \neq h_2, \quad h_1 \neq h_2, \quad k \neq 1. \label{3.3} \end{equation} It was proved in ref.
\cite{ErIv-17-2} that the set of $(n + 1)$ polynomial equations (\ref{3.4A}), (\ref{3.5A}) under the ansatz (\ref{3.1}) and the restrictions (\ref{3.3}) is reduced to a set of three polynomial equations (of fourth, second and first orders, respectively) \begin{eqnarray} E =0, \label{3.4E} \\ Q = - \frac{1}{2 \alpha}, \label{3.4Q} \\ L = H + h_1 + h_2 - S_1 = 0, \label{3.4L} \end{eqnarray} where $E$ is defined in (\ref{3.4A}) with ($v^1$,$v^2$,$v^3$) = (H, $h_1$, $h_2$) and \begin{equation} Q = Q_{h_1 h_2} = S_1^2 - S_2 - 2 S_1 (h_1 + h_2) + 2 (h_1^2 + h_1 h_2 + h_2^2), \label{3.5} \end{equation} $S_1$ is defined in (\ref{3.3}) and $ S_2 = 3 H^2 + k_1 (h_1)^2 + k_2 (h_2)^2$. As was proved in \cite{ErIv-17-2} by using the results of ref. \cite{Ivas-16-1} (see also \cite{Pavl-15}), the exponential solutions with $v$ from (\ref{3.1}) and $k_1 > 1$, $k_2 > 1$ are stable if and only if \begin{equation} S_1 = 3 H + k_1h_1 + k_2 h_2 = H + h_1 + h_2 > 0. \label{3.6} \end{equation} Here we use the relation (\ref{3.4L}). \subsection{Exact stable solutions in $(3+ 3+ k)$-dimensional case} In this subsection, we present solutions to the set of equations of motion in the form: \begin{equation} v =(\underbrace{H,H,H}_{``our'' \ space},\underbrace{\overbrace{h_1, h_1, h_1}^3, \overbrace{h_2, \ldots, h_2}^{k_2}}_{internal \ space}) \label{6.1A} \end{equation} \noindent where $k_2 = k > 2$ and $H$ is the Hubble-like parameter that corresponds to the 3-dimensional ``our'' subspace, and $h_1$, $h_2$ are Hubble-like parameters that correspond to the internal subspaces of dimensions $3$ and $k_2 >2$, respectively. These solutions may be readily rewritten for the case \begin{equation} v =(\underbrace{H,H,H}_{``our'' \ space},\underbrace{\overbrace{h_1, \ldots, h_1}^{k_1}, \overbrace{h_2, h_2, h_2}^3}_{internal \ space}), \label{6.2A} \end{equation} where $k_1 >1$. Our solutions must satisfy the following conditions: A) $H>0$.
This condition is necessary to describe the accelerated expansion of ``our'' 3-dimensional world, i.e., we assume that our 3-dimensional world corresponds to an expanding subspace in the multidimensional model. The remaining dimensions are considered as internal subspace dimensions. B) As is well known, our multidimensional model is anisotropic; therefore, some dimensions expand while the remaining dimensions contract. In this particular case, we assume that ``our'' 3-dimensional world expands, while the internal dimensions either all contract, or some of them expand and the others contract. Therefore, one of the following conditions must hold: \noindent B.1) ($h_1 < 0$, $h_2 < 0$) - a contraction in the internal subspace; \newline B.2) ($h_1 < 0$, $h_2 > 0$) - a contraction in the internal $k_1$-dimensions and an expansion in the internal $ k_2 $-dimensions; \newline B.3) ($h_1 > 0$, $h_2 < 0$) - an expansion in the internal $k_1$-dimensions and a contraction in the internal $k_2$ -dimensions. We note that the solutions with $H>0$, $h_1>0$, $h_2 >0$ do not appear in our consideration due to relation (\ref{6.6A}). The solutions obeying B.2) are unstable. When the above conditions are fulfilled, from (\ref{3.4Q}) we get the following solutions in the case $(m, k_1, k_2) = (3, 3, k_2)$: \begin{equation} h_1 = - \frac{1}{4}\Biggl((k_2 -1)h_2 \pm\sqrt{\frac{2}{\alpha} - (k_2 +3)(k_2 -1)h_2^2}\Biggl) \label{6.3A}. \end{equation}
The substitution of (\ref{6.3A}) and $\lambda=\Lambda\alpha$ into relation (\ref{3.4E}) gives us the following expressions: \begin{equation} h_2 = \frac{\sqrt{\frac{1}{\alpha}(k_2+1)(k_2-1)(k_2 + 3)(k_2 - 3)\Bigl((k_2 - 1)(k_2-3) \pm 2A\Bigr)}}{(k_2 - 1)(k_2 + 1)(k_2 - 3)(k_2 +3)} \label{6.4A}, \end{equation} \begin{equation} h_2 = -\frac{\sqrt{\frac{1}{\alpha}(k_2+1)(k_2-1)(k_2 + 3)(k_2 - 3)\Bigl((k_2 - 1)(k_2-3) \pm 2A\Bigr)}}{(k_2 - 1)(k_2 + 1)(k_2 - 3)(k_2 + 3)} \label{6.4ABN}, \end{equation} where \begin{equation} A=\sqrt{(k_2 - 1)(k_2 - 3)\Bigl((1 - 4\lambda)k_2^2 + 2(1 - 8\lambda)k_2 + 3(1 - 4\lambda)\Bigr)} \label{6.5A}. \end{equation} \noindent Above, part of the expression is denoted by $A$ in order to shorten the long formulas. Thus, from the last relations we see that the Hubble-like parameters $h_1$ and $h_2$ of the internal space are uniquely determined by $k_2$ and $\lambda$. The Hubble-like parameter $H$ of ``our'' 3-dimensional world is obtained from equation (\ref{3.4L}): \begin{equation} H = - \frac{2h_1 + (k_2 - 1)h_2}{2} \label{6.6A}. \end{equation} The stable solutions are selected by the relation \begin{equation} S_1 = \frac{(3 - k_2)}{2} h_2 > 0, \label{6.6B} \end{equation} which follows from substituting (\ref{6.6A}) into $S_1 = H + h_1 + h_2$. For $k_2 > 2$ we have stable solutions for $h_2 < 0$ and unstable ones for $h_2 > 0$. The case $k_2 = 2$ will be considered in a separate publication. \section{Examples} \subsection{$k_2 =5$ and $\alpha > 0$} Let us consider the case $ k_2 = 5$. From (\ref{6.4A}) and (\ref{6.4ABN}) we get four solutions: \begin{equation} h_2 = \frac{1}{4\sqrt{3\alpha}}\sqrt{1 \pm \sqrt{19 - 96\lambda}} \label{7.1A}, \end{equation} \begin{equation} h_2 = -\frac{1}{4\sqrt{3\alpha}}\sqrt{1 \pm \sqrt{19 - 96\lambda}} \label{7.1ABK} \end{equation} \noindent and further, as our calculations show, each of these four values of $h_2$ corresponds to two values of $h_1$ (see (\ref{6.3A})) and one value of $H$ (see (\ref{6.6A})). Therefore, we find eight sets of real solutions.
As our calculations show, four of them are unstable. Therefore, when we impose the stability condition and the conditions A), B), the number of stable real solutions reduces to three: 1) \begin{equation} H = \frac{1}{4\sqrt{3\alpha}}\Biggl(\sqrt{1 + \sqrt{19 - 96\lambda}} + \sqrt{4 - 2\sqrt{19 - 96\lambda}}\Biggr) \label{7.2A}, \end{equation} \begin{equation} h_1 = \frac{1}{4\sqrt{3\alpha}}\Biggl(\sqrt{1 + \sqrt{19 - 96\lambda}} - \sqrt{4 - 2\sqrt{19 - 96\lambda}}\Biggr) \label{7.3A}, \end{equation} \begin{equation} h_2 =- \frac{1}{4\sqrt{3\alpha}}\sqrt{1 + \sqrt{19 - 96\lambda}} \label{7.4A}, \end{equation} \begin{equation} S_1 = \frac{1}{4\sqrt{3\alpha}}\sqrt{1 + \sqrt{19 - 96\lambda}} > 0 \label{7.4AB}. \end{equation} The solutions with $H > 0$, $h_1 < 0$ and $h_2 < 0 $ occur in the interval of $\lambda$: \begin{displaymath} \frac{3}{16} < \lambda < \frac{19}{96} \end{displaymath} and the solutions with $H > 0$, $h_1 > 0$ and $h_2 < 0 $ exist in the interval of $\lambda$: \begin{displaymath} \frac{5}{32} < \lambda < \frac{3}{16}. \end{displaymath} 2) \begin{equation} H = \frac{1}{4\sqrt{3\alpha}}\Biggl(\sqrt{1 + \sqrt{19 - 96\lambda}} - \sqrt{4 - 2\sqrt{19 - 96\lambda}}\Biggr) \label{7.5AB}, \end{equation} \begin{equation} h_1 = \frac{1}{4\sqrt{3\alpha}}\Biggl(\sqrt{1 + \sqrt{19 - 96\lambda}} + \sqrt{4 - 2\sqrt{19 - 96\lambda}}\Biggr) \label{7.6AB}, \end{equation} \begin{equation} h_2 =- \frac{1}{4\sqrt{3\alpha}}\sqrt{1 + \sqrt{19 - 96\lambda}} \label{7.7AB}, \end{equation} \begin{equation} S_1 = \frac{1}{4\sqrt{3\alpha}}\sqrt{1 + \sqrt{19 - 96\lambda}} > 0 \label{7.7AB_2}. \end{equation} The solutions with $H > 0$, $h_1 > 0$ and $h_2 < 0 $ occur in the interval of $\lambda$: \begin{displaymath} \frac{5}{32} < \lambda < \frac{3}{16}.
\end{displaymath} 3) \begin{equation} H = \frac{1}{4\sqrt{3\alpha}}\Biggl(\sqrt{1 - \sqrt{19 - 96\lambda}} + \sqrt{4 + 2\sqrt{19 - 96\lambda}}\Biggr) \label{7.8AC}, \end{equation} \begin{equation} h_1 = \frac{1}{4\sqrt{3\alpha}}\Biggl(\sqrt{1 - \sqrt{19 - 96\lambda}} - \sqrt{4 + 2\sqrt{19 - 96\lambda}}\Biggr) \label{7.9AC}, \end{equation} \begin{equation} h_2 =- \frac{1}{4\sqrt{3\alpha}}\sqrt{1 - \sqrt{19 - 96\lambda}} \label{7.10AC}, \end{equation} \begin{equation} S_1 = \frac{1}{4\sqrt{3\alpha}}\sqrt{1 - \sqrt{19 - 96\lambda}} > 0 \label{7.10AC_2}. \end{equation} The solutions with $H > 0$, $h_1 < 0$ and $h_2 < 0 $ occur in the interval of $\lambda$: \begin{displaymath} \frac{3}{16} < \lambda < \frac{19}{96}. \end{displaymath} \subsection{$k_1=6$ and $\alpha > 0$} For the set of dimensions $ (m, k_1, k_2 ) = ( 3, 6, 3 )$, solving the set of polynomial equations (\ref{3.4E})--(\ref{3.4L}), one can obtain formulas and expressions analogous to those for the set of dimensions $ (m, k_1, k_2 ) = ( 3, 3, 5 )$. In this case, from (\ref{3.4E}) we get four real solutions for $h_1$; each of these four values of $h_1$ corresponds to two values of $h_2$ (see (\ref{3.4Q})), with $H$ determined by (\ref{3.4L}). Therefore, eight sets of real solutions arise. As our calculations show, four of them are unstable.
Therefore, when we impose the stability condition and the conditions A), B), the number of stable real solutions reduces to three: 1) \begin{equation} H = \frac{1}{12\sqrt{7\alpha}}\Biggl(\sqrt{25 + 10\sqrt{85 - 420\lambda}} + 3\sqrt{9 - 2\sqrt{85 - 420\lambda}}\Biggr) \label{7.11AD}, \end{equation} \begin{equation} h_1 =- \frac{1}{\sqrt{315\alpha}}\sqrt{5 + 2\sqrt{85 - 420\lambda}} \label{7.12AD}, \end{equation} \begin{equation} h_2 = \frac{1}{12\sqrt{7\alpha}}\Biggl(\sqrt{25 + 10\sqrt{85 - 420\lambda}} - 3\sqrt{9 - 2\sqrt{85 - 420\lambda}}\Biggr) \label{7.13AD}, \end{equation} \begin{equation} S_1 = \frac{1}{2\sqrt{35\alpha}}\sqrt{5 + 2\sqrt{85 - 420\lambda}} > 0 \label{7.13ADC}. \end{equation} The solutions with $H > 0$, $h_1 < 0$ and $h_2 < 0 $ occur in the interval of $\lambda$: \begin{displaymath} \frac{27}{140} < \lambda < \frac{17}{84} \end{displaymath} and the solutions with $H > 0$, $h_1 < 0$ and $h_2 > 0 $ exist in the interval of $\lambda$: \begin{displaymath} \frac{37}{240} < \lambda < \frac{27}{140}. \end{displaymath} 2) \begin{equation} H = \frac{1}{12\sqrt{7\alpha}}\Biggl(\sqrt{25 - 10\sqrt{85 - 420\lambda}} + 3\sqrt{9 + 2\sqrt{85 - 420\lambda}}\Biggr) \label{7.14AD}, \end{equation} \begin{equation} h_1 =- \frac{1}{\sqrt{315\alpha}}\sqrt{5 - 2\sqrt{85 - 420\lambda}} \label{7.15AD}, \end{equation} \begin{equation} h_2 = \frac{1}{12\sqrt{7\alpha}}\Biggl(\sqrt{25 - 10\sqrt{85 - 420\lambda}} - 3\sqrt{9 + 2\sqrt{85 - 420\lambda}}\Biggr) \label{7.16AD}, \end{equation} \begin{equation} S_1 = \frac{1}{2\sqrt{35\alpha}}\sqrt{5 - 2\sqrt{85 - 420\lambda}} > 0 \label{7.16ADC}. \end{equation} The solutions with $H > 0$, $h_1 < 0$ and $h_2 < 0 $ occur in the interval of $\lambda$: \begin{displaymath} \frac{3}{16} < \lambda < \frac{17}{84}.
\end{displaymath} 3) \begin{equation} H = \frac{1}{12\sqrt{7\alpha}}\Biggl(\sqrt{25 + 10\sqrt{85 - 420\lambda}} - 3\sqrt{9 - 2\sqrt{85 - 420\lambda}}\Biggr) \label{7.17AD}, \end{equation} \begin{equation} h_1 =- \frac{1}{\sqrt{315\alpha}}\sqrt{5 + 2\sqrt{85 - 420\lambda}} \label{7.18AD}, \end{equation} \begin{equation} h_2 = \frac{1}{12\sqrt{7\alpha}}\Biggl(\sqrt{25 + 10\sqrt{85 - 420\lambda}} + 3\sqrt{9 - 2\sqrt{85 - 420\lambda}}\Biggr) \label{7.19AD}, \end{equation} \begin{equation} S_1 = \frac{1}{2\sqrt{35\alpha}}\sqrt{5 + 2\sqrt{85 - 420\lambda}} > 0 \label{7.19ADC}. \end{equation} The solutions with $H > 0$, $h_1 < 0$ and $h_2 > 0 $ occur in the interval of $\lambda$: \begin{displaymath} \frac{37}{240} < \lambda < \frac{27}{140}. \end{displaymath} \section{Stable solutions with zero variation of G} In this multidimensional Einstein-Gauss-Bonnet model, the solutions with zero variation of the effective gravitational constant $G$ are of particular interest. Certain results on exact solutions with zero variation of the effective gravitational constant $G$ were obtained in refs. \cite{ErIvKob-16}--\cite{Ivas-16-1}. The condition of zero variation of the effective gravitational constant $G$ in our case reads \begin{equation} k_1h_1 + k_2h_2 = 0 \label{8.1A}. \end{equation} As shown in our earlier study \cite{ErIv-17-2}, in any dimension of the multidimensional model there are many exact solutions, among which is the exact solution with zero variation of the effective gravitational constant $G$. In this case, the cosmological term $\Lambda$ is determined by formula (3.25) of \cite{ErIv-17-2}. This means that each set of dimensions and cosmological term $ (m, k_1, k_2, \lambda)$ corresponds to a set of exact cosmological solutions, which includes a solution with zero variation of the effective gravitational constant $G$.
In this case we obtained \cite{ErIv-17-2}: \begin{eqnarray} \lambda(m, k_1, k_2) = \Lambda\alpha = \frac{1}{8P^2} (m + k_1 + k_2 -3)\Biggl[\Bigl(k_1 + k_2\Bigr)\Bigl(k_1 + k_2 - 2\Bigr)m^3 \nonumber \\ + \Bigl(k_1^3 + k_2^3 + 11 (k_1^2 k_2 + k_1 k_2^2) - 19 (k_1^2 + k_2^2) - 22 k_1 k_2 + 18 (k_1 + k_2)\Bigr) m^2 \nonumber \\ - \Bigl(8(k_1^3 + k_2^3) - 63 (k_1 + k_2)^2 - 8 k_1^2 (k_1 - 11) k_2 \nonumber \\ - 8 k_2^2 (k_2 - 11) k_1 - 32 k_1^2 k_2^2 + 54 (k_1 + k_2)\Bigr) m\nonumber \\ - \Bigl( 9 (k_1^3 + k_2^3) + 45 (k_1^2 + k_2^2) - 54 (k_1 + k_2) + 8 (k_1^2 + k_2^2) k_1 k_2 \nonumber \\ - 16 (k_1 + k_2 -10) k_1^2 k_2^2 - 9 (21 k_1 + 21 k_2 - 26) k_1 k_2\Bigr)\Biggr],\nonumber \\ \label{8.2A} \end{eqnarray} where \begin{eqnarray} P = P(m, k_1, k_2) = -(m+ k_1 + k_2 -3)\Biggl(m(k_1 + k_2 - 2) + \nonumber\\ k_1(2k_2 - 5) + k_2(2k_1 - 5) + 6\Biggr) \neq 0. \label{8.3A} \end{eqnarray} In the case under consideration, when $( m, k_1, k_2 ) = (3, 3, 5)$, the value of the cosmological term $\lambda$ is equal to \begin{equation} \lambda = \Lambda\alpha = \frac{681}{3872} \label{8.4A}. \end{equation} Then for the set of dimensions and cosmological term $ (m, k_1, k_2, \lambda) = (3, 3, 5, \frac{681}{3872})$, we obtain the following real solutions: 1) $H = \frac{5}{4}\frac{1}{\sqrt{11\alpha}}$, $h_1 = \frac{H}{5}$, $h_2 = - \frac{3}{5}H$; 2) $H = \frac{1}{4}\frac{1}{\sqrt{11\alpha}}$, $h_1 = 5H$, $h_2 = - 3H$. Here the solution 2) is stable \cite{ErIv-17-2}.
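As a consistency check (not part of the original derivation), the values of $\lambda$ in (\ref{8.4A}) and (\ref{8.5A}) can be reproduced from (\ref{8.2A})--(\ref{8.3A}) in exact rational arithmetic, and the zero-variation solution 2) can be checked against (\ref{3.4Q}) and (\ref{8.1A}):

```python
from fractions import Fraction as F

def lam(m, k1, k2):
    """lambda(m, k1, k2) = Lambda*alpha from (8.2A)-(8.3A), exact arithmetic."""
    P = -(m + k1 + k2 - 3) * (m * (k1 + k2 - 2)
                              + k1 * (2 * k2 - 5) + k2 * (2 * k1 - 5) + 6)
    bracket = ((k1 + k2) * (k1 + k2 - 2) * m**3
               + (k1**3 + k2**3 + 11 * (k1**2 * k2 + k1 * k2**2)
                  - 19 * (k1**2 + k2**2) - 22 * k1 * k2 + 18 * (k1 + k2)) * m**2
               - (8 * (k1**3 + k2**3) - 63 * (k1 + k2)**2
                  - 8 * k1**2 * (k1 - 11) * k2 - 8 * k2**2 * (k2 - 11) * k1
                  - 32 * k1**2 * k2**2 + 54 * (k1 + k2)) * m
               - (9 * (k1**3 + k2**3) + 45 * (k1**2 + k2**2) - 54 * (k1 + k2)
                  + 8 * (k1**2 + k2**2) * k1 * k2
                  - 16 * (k1 + k2 - 10) * k1**2 * k2**2
                  - 9 * (21 * k1 + 21 * k2 - 26) * k1 * k2))
    return F(m + k1 + k2 - 3, 8 * P**2) * bracket

# Solution 2) of the (3, 3, 5) case: h1 = 5H, h2 = -3H.  Every term of Q
# in (3.5) is quadratic in the Hubble-like parameters, so Q/H^2 is a number.
c1, c2 = 5, -3                       # h1 = c1*H, h2 = c2*H
S1 = 3 + 3 * c1 + 5 * c2             # S1/H, from (3.3) with (m, k1, k2) = (3, 3, 5)
S2 = 3 + 3 * c1**2 + 5 * c2**2       # S2/H^2
Q_over_H2 = S1**2 - S2 - 2 * S1 * (c1 + c2) + 2 * (c1**2 + c1 * c2 + c2**2)
```

With $H^2 = 1/(176\alpha)$, as in solution 2), the value $Q/H^2 = -88$ gives $Q = -1/(2\alpha)$, i.e. (\ref{3.4Q}) holds, while $k_1 h_1 + k_2 h_2 = (15 - 15)H = 0$ confirms (\ref{8.1A}).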
In the second case, when $( m, k_1, k_2 ) = (3, 6, 3)$, the value of the cosmological term $\lambda$ is equal to \begin{equation} \lambda = \Lambda\alpha = \frac{19}{108} \label{8.5A}, \end{equation} \noindent and for the set of dimensions and cosmological term $ (m, k_1, k_2, \lambda) = (3, 6, 3, \frac{19}{108})$, we obtain the following real solutions: 1) $H = \frac{2}{3}\frac{1}{\sqrt{3\alpha}}$, $h_1 = -\frac{H}{2}$, $h_2 = \frac{H}{4}$; 2) $H =\frac{1}{6} \frac{1}{\sqrt{3\alpha}}$, $h_1 = -2H$, $h_2 = 4H$. In this case the solution 2) is stable \cite{ErIv-17-2}. \section{Conclusions} We have considered the $(7 + k)$-dimensional Einstein-Gauss-Bonnet (EGB) model with the $\Lambda$-term. By using the ansatz with diagonal cosmological metrics, we have found, for $D = 7 + k $, $\alpha = \alpha_2 / \alpha_1 > 0$ and certain $\lambda = \alpha \Lambda$, a class of exponential solutions with three Hubble-like parameters $H >0$, $h_1$, and $h_2$ corresponding to submanifolds of dimensions $m=3$, $k_1 = 3$ and $k_2 > 2$ (or $m=3$, $k_1 > 2$ and $k_2 = 3$), respectively. The obtained solutions are exact and stable. Two examples of solutions (for $(m,k_1,k_2) = (3,3,5), (3,6,3)$) are considered. Stability plays a predominant role in the study of exact cosmological solutions; we therefore expect the obtained results to be useful in further research. {\bf Acknowledgments} The publication was prepared with the support of the ``RUDN University Program 5-100''. It was also partially supported by the Russian Foundation for Basic Research, grant Nr. 19-02-00346. \small
\section{Introduction} Owing to the popularity of smartphones, tablets, and other wireless devices, the recent widespread adoption of wireless broadband has resulted in a tremendous growth in the volume of mobile data traffic, which is projected to continue unabated \cite{bio1_Cisco,bio2_Nokia_WP}. As a consequence, the system capacity of wireless communication systems has been severely challenged. However, restricted by the lack of available spectrum resources in the licensed bands, the traditional Long Term Evolution (LTE) technology is powerless to tackle this problem. Therefore, the available resources in the unlicensed bands have recently attracted more and more attention as an important complement to alleviate the high data traffic load \cite{bio3_HW_WP,bio4_Q_WP}. In this regard, 3GPP has introduced operation in the unlicensed band via Licensed Assisted Access (LAA) in Release 13 \cite{bio5_LAA,bio6_TR889}. LAA uses carrier aggregation in the downlink to combine LTE in unlicensed spectrum with LTE in the licensed band to expand the system bandwidth. {While significant changes have been made compared to the LTE framework through the introduction of several mechanisms \cite{bio_new2_LTEU_LAA}, the authors of \cite{bio_new1_LAA_Rel13} have shown that LAA ensures fair coexistence with existing Wi-Fi networks.} {Following the current momentum on unlicensed spectrum, 3GPP has recently started two new work items, named ``new radio (NR) based unlicensed access'' \cite{bio_new3_NRSI} and ``Enhancements to LTE operation in unlicensed spectrum'' \cite{bio_new4_NRWI}.} With this in mind, apart from LAA systems, MulteFire (MF) systems \cite{bio7_MF} employ LTE technology, but work solely in unlicensed spectrum without the assistance of an ``anchor'' in licensed spectrum. For these systems, the control information and reference signals, along with all the data, must be transmitted on unlicensed carriers.
In this regard, an MF system is fundamentally different from an LAA system, and its system framework needs to be modified to support stand-alone operation in the unlicensed band. Even though the MF technology is still at an embryonic stage, the combination of LTE-like performance benefits and Wi-Fi-like deployment simplicity makes MF a significantly important supplement, and a valuable study topic to meet the ever-increasing wireless traffic. \begin{figure*}[!t] \centering \includegraphics[width=6.3in, height=2.0 in]{Fig1_SUL_mode_new} \vspace{-0.35cm} \caption{{Scheduled based uplink transmission mode}} \label{SUL_mode} \vspace{-0.25cm} \end{figure*} \newcounter{mytempeqncnt} \begin{figure*}[hb] \normalsize \setcounter{mytempeqncnt}{\value{equation}} \setcounter{equation}{0} \hrulefill \begin{equation}\label{eq:1} \begin{split} p_{tx}^{WiFi}&=\frac{2q(1\!-\!p_b)(1\!-\!2p_f)}{2(1-p_b)(1-p_f)(1-2p_f)+q[W_0p_f(1-(2p_f)^m)+(1+W_0-2p_b)(1-2p_f)]}\\ \end{split} \end{equation} \begin{equation}\label{eq:1b} \begin{split} p_{tx}^{Cat4}&=\frac{2q(1\!-\!p_b)(1\!-\!p_f)R}{Q\!+\!q[W_0P(1\!-\!p_f)(1\!-\!(2p_f)^{(m\!+\!1)})\!+\!PR(1\!-\!2p_b)(1\!-\!2p_f)\!+\!2R(1\!-\!p_b)^2(1\!-\!p_f)(1\!-\!2p_f)]} \end{split} \end{equation} \setcounter{equation}{\value{mytempeqncnt}} \vspace*{1pt} \end{figure*} \setcounter{equation}{2} In legacy LTE systems, scheduled based uplink (SUL) transmission has been considered in the 3GPP Release 14 study, wherein uplink transmission is conditional on an explicit uplink grant via the physical downlink control channel (PDCCH) \cite{eLAA}. In order to comply with the FCC regulatory requirements, and in order to maintain fair coexistence with other technologies, the listen-before-talk (LBT) mechanism is applied to check whether the channel is clear or occupied before using it. However, the use of the legacy two-stage modality for LBT reduces the uplink channel access probability.
This drawback is highlighted and verified in terms of channel access probability by modeling the dynamics in time of a system employing LBT through a Markov chain, similarly to \cite{bio8_Bianchi,bio9_CChen,bio10_MarkovChain}. Besides the penalty imputable to a lowered channel access probability, the performance of SUL is also negatively affected by the processing delay (generally 4 ms due to hardware constraints) between the uplink grant and the scheduled transmission, which may also lead to transmission latency and resource waste in case there is no downlink data. Hence, scheduled based uplink transmissions are not suitable in the unlicensed band. In this paper, we consider a new uplink transmission scheme, which does not require any eNB scheduling grant, named \textit{grant-less uplink (GUL)} transmission. While this methodology highly resembles that currently used in the Wi-Fi uplink design, it is a significant departure from the existing SUL transmission of legacy LTE. In this regard, a number of enhancements, which are discussed throughout this manuscript, need to be made with respect to the legacy LTE design in order to properly enable and perform GUL transmissions. The rest of this paper is organized as follows. Section \ref{section_2} begins with a brief introduction of the SUL scheme. This section continues by building an analytical framework based on a Markov chain model, which is employed to model the dynamics in time of the LBT procedure for both the user equipment (UE) and the eNB. This analytical framework is then used to compare the channel access probabilities of the SUL and GUL schemes, and to highlight the benefits of the proposed scheme. The overall procedure and design details for the GUL mode are then provided in Section \ref{section_3}. In Section \ref{section_4}, the performance of the proposed scheme is evaluated via system level simulations. Finally, conclusions are drawn in Section \ref{section_5}.
\section{System Model} \label{section_2} In legacy LTE, the UE that intends to transmit data needs to obtain an uplink grant from the serving eNB, and only then can it start uplink transmission, as illustrated in Fig. \ref{SUL_mode}. First, the eNB is required to perform Cat.4 LBT on the target carrier for the uplink grant transmission, as regulated in \cite{bio11_ETSI893,bio12_TS213}. Once it is able to successfully access the channel, the subsequent maximum channel occupancy time (MCOT) can be occupied and scheduled for either downlink or uplink transmission by the eNB. While the PDCCH carrying the uplink grant can be transmitted in the first available subframe (SF), due to the processing delay the physical uplink shared channel (PUSCH) is scheduled in the later SFs of the same MCOT. The remaining symbols in the downlink SFs can be utilized for downlink data transmission, if any. Before uplink transmissions can take place, the scheduled UE needs to complete an additional LBT (either single interval LBT or Cat.4 LBT) \cite{bio12_TS213} after receiving the grant. If this second LBT fails, the resources reserved for the uplink are wasted. Intuitively, the SUL mode hampers the channel access probability for the UE. In order to overcome this issue, it is proposed here to adopt a one-LBT uplink access mechanism instead of this double-LBT procedure, so that uplink transmissions can be performed autonomously without requiring grants, which we refer to as GUL. For the proposed GUL scheme, on the other hand, similarly to SUL, Cat.4 LBT is still employed for the fair sharing of the unlicensed band. {While Markov chains and their properties have been extensively used to model and characterize the procedure of LBT for Wi-Fi and LAA \cite{bio8_Bianchi,bio9_CChen,bio10_MarkovChain,bio_new5_ThvLAA}, in this contribution they are used to model the LBT for MF systems, in order to study its coexistence with Wi-Fi.
In particular, utilizing} a Markov chain model, the LBT procedure of a Wi-Fi and an MF access node is modeled, and the transmit probabilities in a randomly chosen slot time can be calculated by (\ref{eq:1}) and (\ref{eq:1b}), respectively \cite{bio9_CChen,bio10_MarkovChain}. In these equations, $Q=2(1-p_b)(1-p_f)(1-2p_f)$, $P=(p_b+p_f-p_bp_f)$, $R=(1-p_f^{(m+1)})$, $q$ denotes the probability of packet arrival, $m$ is the maximum clear channel assessment (CCA) stage, and $W_0$ is the initial contention window size. $p_f$ and $p_b$ denote the probability of transmission failure due to collisions, and the probability that the channel is detected to be occupied, respectively. In \cite{bio9_CChen,bio10_MarkovChain}, however, these probabilities are determined under the simplified assumption that all the nodes in the coexistence scenario can detect the signals from all other nodes above the carrier sense threshold. In order to address this issue, the distribution of the detected energy \cite{bio13_ED1,bio14_ED2,bio15_LBT} is here taken into account. For simplicity, let us assume that the path losses between any two nodes are identical. The total receiving power $P_{rx}$ can then be obtained by multiplying the receiving power $P_{0rx}$ from a single transmitting node by the total number of transmitters $n$. Thus, the distribution of the detected energy conditioned on the number of transmitters can be expressed as follows \begin{equation}\label{eq:2} \hspace{-0.1 cm}f_Y(y|n)= \begin{cases} \frac{1}{2^\mu\varGamma(\mu)}y^{\mu-1}\mathrm{e}^{-\frac{y}{2}}& \text{idle}\\ \frac{1}{2}(\frac{y}{2\gamma})^{\frac{\mu-1}{2}}\mathrm{e}^{-\frac{2\gamma+y}{2}}I_{\mu-1}(\sqrt{2\gamma y})& \text{busy}\\ \end{cases} \end{equation} where $\gamma$ is the signal-to-noise ratio (SNR), which depends on the number of transmitters $n$ since $\gamma= nP_{0rx}/P_{noise}$, $\varGamma(.)$ represents the gamma function, and $I_v(.)$ is the $v$th-order modified Bessel function of the first kind.
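For intuition, the idle branch of (\ref{eq:2}) is a central chi-square density with $2\mu$ degrees of freedom, while the busy branch is a noncentral one with non-centrality parameter $2\gamma$. A Monte Carlo sketch (the values $\mu=5$, $\gamma=10$ and the threshold are illustrative assumptions, not taken from the paper) estimates the tail probabilities $\int_{y_{thv}}^{\infty} f_Y(y|n)\,dy$ that enter the derivation below:

```python
import math
import random

def detected_energy(mu, gamma=0.0, rng=random):
    """One sample of the normalized detected energy Y.

    Idle:  Y ~ chi-square with 2*mu degrees of freedom.
    Busy:  Y ~ noncentral chi-square with 2*mu degrees of freedom and
           non-centrality 2*gamma (gamma = SNR), matching the two
           branches of the conditional density.
    """
    # Put the whole non-centrality on the first Gaussian component.
    y = rng.gauss(math.sqrt(2.0 * gamma), 1.0) ** 2
    for _ in range(2 * mu - 1):
        y += rng.gauss(0.0, 1.0) ** 2
    return y

def exceed_prob(mu, gamma, y_thv, trials=20000, seed=1):
    """Monte Carlo estimate of P(Y > y_thv)."""
    rng = random.Random(seed)
    hits = sum(detected_energy(mu, gamma, rng) > y_thv for _ in range(trials))
    return hits / trials

mu, gamma = 5, 10.0     # assumed time-bandwidth product and per-node SNR
y_thv = 18.0            # example threshold in the same normalized units
p_fa = exceed_prob(mu, 0.0, y_thv)     # false alarm (channel idle)
p_d  = exceed_prob(mu, gamma, y_thv)   # detection (one transmitter busy)
```

As expected, a threshold placed between the idle mean ($2\mu$) and the busy mean ($2\mu + 2\gamma$) yields a small false-alarm probability together with a high detection probability.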
Assume that both channel sensing failures and transmission collisions occur when the detected energy is above the LBT threshold $y_{thv}$. In a network with $N$ access nodes, it follows that \begin{equation}\label{eq:3} p_b\hspace{-0.08cm}=\hspace{-0.08cm}p_f\hspace{-0.08cm}=\hspace{-0.1cm}\sum_{n=1}^{N-1}\hspace{-0.1cm} \dbinom{N}{n} p_{tx}^n(1\hspace{-0.08cm}-\hspace{-0.08cm}p_{tx})^{N\hspace{-0.05cm}-\hspace{-0.05cm}1\hspace{-0.05cm}-\hspace{-0.05cm}n} \hspace{-0.15cm}\int_{y_{thv}}^{+\infty}\hspace{-0.25cm} f_Y(y|n)\,dy. \end{equation} The transmission probabilities for a Wi-Fi Access Point (AP) and for a system with Cat.4 LBT are then evaluated by solving (\ref{eq:1}) and (\ref{eq:1b}), respectively, together with (\ref{eq:3}). For the SUL scheme, only the $N$ eNBs perform Cat.4 LBT, and the uplink channel is available only when both the downlink Cat.4 LBT and the single slot LBT at the UE side succeed. Thus, in this case the uplink channel access probability can be expressed as \begin{equation}\label{eq:4} p_{tx}^{SUL}=(1-p_b)p_{tx}^{Cat4}. \end{equation} In the proposed GUL mode, the UE performs an independent Cat.4 LBT, which is nearly the same behaviour as that of the eNB with respect to the channel access procedure. Therefore, the channel access probability for both the UE and the eNB can be obtained by substituting for $N$ in (\ref{eq:3}) the total number of UEs and eNBs involved. \begin{figure}[!t] \centering \includegraphics[width=7.00cm]{Fig2_chAccessPro} \vspace{-0.5cm} \caption{Channel access probability} \label{ChAccessPro} \vspace{-0.25cm} \end{figure} \begin{figure*}[!t] \centering \includegraphics[width=6.3in, height=2.0 in]{Fig3_GUL_mode} \vspace{-0.1cm} \caption{Grant-less uplink transmission mode} \label{GUL_mode} \vspace{-0.25cm} \end{figure*} Fig. \ref{ChAccessPro} shows the channel access probability under the assumption that $y_{thv}=-72$ dBm for both the Wi-Fi and the MF system.
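The curves in Fig. \ref{ChAccessPro} result from solving (\ref{eq:1})--(\ref{eq:4}) jointly. The following sketch illustrates the fixed-point structure under the simplifying perfect-sensing assumption $p_b = p_f = 1-(1-p_{tx})^{N-1}$ (i.e., replacing the energy-detection integral of (\ref{eq:3}) by an indicator), solving (\ref{eq:1}) by damped iteration; the arrival probability $q$ is an assumed value, and the resulting probability is used as a stand-in for $p_{tx}^{Cat4}$ in (\ref{eq:4}):

```python
def wifi_tx_prob(q, W0, m, N, iters=500, damp=0.5):
    """Damped fixed-point solution of the per-slot transmit probability,
    with the simplifying perfect-sensing assumption
    p_b = p_f = 1 - (1 - p_tx)**(N - 1)."""
    p_tx = 0.05                                  # initial guess
    for _ in range(iters):
        p = 1.0 - (1.0 - p_tx) ** (N - 1)        # p_b = p_f
        num = 2.0 * q * (1.0 - p) * (1.0 - 2.0 * p)
        den = (2.0 * (1.0 - p) ** 2 * (1.0 - 2.0 * p)
               + q * (W0 * p * (1.0 - (2.0 * p) ** m)
                      + (1.0 + W0 - 2.0 * p) * (1.0 - 2.0 * p)))
        p_tx = (1.0 - damp) * p_tx + damp * num / den
    return p_tx, 1.0 - (1.0 - p_tx) ** (N - 1)

# W0 and m follow the text (W0 = 16, m = 4, N = 5); q = 0.5 is assumed.
p_tx, p_b = wifi_tx_prob(q=0.5, W0=16, m=4, N=5)
p_sul = (1.0 - p_b) * p_tx   # double-LBT penalty, as in the SUL expression
```

The extra factor $(1-p_b)$ makes the SUL access probability strictly smaller than the single-LBT one, which is the effect visible in Fig. \ref{ChAccessPro}.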
The number of Wi-Fi APs or MF eNBs deployed by each operator is $N=5$, and each eNB has only one active UE. For this plot, $m=4$, $W_0 = 16$, $P_{0rx}/P_{noise}\approx 10$, and the following two scenarios are shown: \begin{itemize} \item WiFi+SUL-MF: Wi-Fi APs of operator 1 coexist with MF eNBs and UEs of operator 2, which operates in the scheduled based uplink modality; \item WiFi+GUL-MF: Wi-Fi APs of operator 1 coexist with MF eNBs and UEs of operator 2, which operates in the grant-less based uplink modality. \end{itemize} As illustrated by Fig. \ref{ChAccessPro}, the SUL scheme is subject to a small uplink channel access probability due to the double LBT required, while for the GUL mode this improves significantly with negligible impact on the Wi-Fi system performance. \section{System Design of Grant-less Uplink Mode} \label{section_3} Fig. \ref{GUL_mode} provides an illustration of the overall procedure for the GUL mode. Firstly, the UE with uplink data performs channel sensing. In this case, Cat.4 LBT is adopted to maintain fair coexistence with the incumbent system and other technologies. A preamble signal is needed before data transmission for detection at the serving eNB and for signalling of control information. A reservation signal may also be needed to align with the predefined boundary. Then, the UE can use the whole MCOT for data transmission rather than sharing it with the downlink. Finally, the eNB needs to feed back the ACK/NACK information for the Hybrid Automatic Repeat Request (HARQ) process. The proper use of the GUL mode requires a rather different framework from that used by SUL in the LTE technology. Therefore, a number of enhancements, such as the control information and feedback, are needed with respect to the legacy LTE design, and the details are discussed in this section. \subsection{Detection of PUSCH at eNB side} Due to the lack of scheduling, the serving eNB is not aware of the UE's transmission and it needs to detect the presence of the uplink burst.
Two candidate methods can be taken into consideration for such an indication: \begin{itemize} \item Implicit indication by the demodulation reference signal (DMRS): the serving eNB performs blind detection of the DMRS sequence to infer the presence of the PUSCH; \item Explicit indication through the Uplink Control Indicator (UCI): in this context, the existing UCI formats can be reused to provide additional information regarding the uplink burst. The content of the UCI includes but is not limited to the following fields: HARQ process number, UE identifier, and new data indicator (NDI). \end{itemize} \subsection{Uplink Sub-frame Design} Since the LBT could be completed at any time instant, mostly not aligned with the primary cell (PCell) SF boundary, this may result in a waste of resources due to the fact that the transmission is postponed until the boundary of the next SF. In order to better utilize the interval of time from the end of the LBT until the PCell sub-frame boundary, a more flexible design of the uplink SF is required. As shown in Fig. \ref{GUL_mode}, the following uplink SF types can be adopted: \begin{itemize} \item Synchronous SF, which is aligned with the boundary of the PCell SF to minimize the implementation impact. In this context, a partial SF or super SF can be defined on a subset of OFDM symbols within the uplink SF (similar to the partial TTI for downlink LAA), while the PCell still remains aligned in terms of timing relationship with the uplink burst transmission. In this case, the UE can start the PUSCH transmission at certain known OFDM symbol positions within a SF with the aim of limiting the UE scheduling complexity. In particular, as the UE may not know in advance the duration of the partial TTI, it may need to create multiple potential partial SFs corresponding to the different hypotheses of possible partial SFs. However, this incurs a significant computation and buffer complexity at the UE side.
Thus, it is desirable to limit the set of possible starting positions to a few predefined and restricted values, e.g. \{1, 8\}. \item Asynchronous SF, which cannot be aligned with the PCell boundary, as illustrated in Fig. \ref{GUL_mode}. As long as the channel is acquired through LBT, the UE can carry out the uplink transmission based on the legacy 1 ms SF design. \end{itemize} \subsection{Scheduling, Link Adaptation and HARQ Operation} \label{LA_HARQ} Instead of relying on an indication from the serving eNB, the UE needs to autonomously select the resource allocation in the GUL mode. Accurate channel state information (CSI) is essential for both the scheduling at the UE side and the demodulation at the eNB side. Apart from this, the UE also needs feedback information for HARQ retransmissions. In this regard, the process can be summarized in the following steps: \begin{itemize} \item Step 1: The eNB estimates and calculates the uplink CSI based on the sounding reference signals (SRSs) from the UE. In particular, in this case the legacy LTE design for SRS can be reused, and they can be transmitted in the last OFDM symbol. Additionally, the CSI request can be transmitted along with the SRSs. \item Step 2: The UE chooses an appropriate modulation and coding scheme (MCS). The selection can be done by the eNB, which can indicate the most suitable MCS to the UE. Alternatively, the UE can request the CSI, and based upon this information it can select the appropriate MCS by itself. \item Step 3: The UE transmits data along with the scheduling information via PUCCH, which may contain the HARQ process number and the NDI. \item Step 4: The eNB transmits the ACK/NACK feedback via PDCCH, after receiving the uplink data. \end{itemize} For link adaptation, the possible suitable options are: \begin{itemize} \item The eNB dynamically feeds back the uplink CSI for the MCS selection, while indicating the HARQ ACK/NACK feedback; \item The UE uses the MCS indicated in the latest DCI.
\end{itemize} \subsection{Control Channel Design} When UEs have simultaneous uplink data and control transmissions, the control signaling can be multiplexed with the data prior to the discrete Fourier transform (DFT) to preserve the single-carrier property of the uplink transmission, as shown in Fig. \ref{PUCCH}. This methodology can be reused in systems such as MF, which work solely in unlicensed spectrum, but only to a certain extent. In fact, in these systems a different content can be carried, and a list of possible fields is as follows: \begin{itemize} \item Cell radio network temporary identifier (C-RNTI); \item HARQ process number; \item NDI, which is used to state whether the current transmission is a retransmission or not; \item MCOT and uplink burst related information, described as the number of SFs. In case not all the SFs are used, the remaining SFs can be scheduled by the eNB for downlink or for uplink transmissions of other UEs; \item Carrier used; \item A-CSI and HARQ ACK/NACK bitmaps. \end{itemize} In order to reduce the UCI signalling overhead, it is preferable to transmit some of this information only once, especially fields such as the A-CSI and HARQ ACK/NACK bitmaps, while other fields which are essential (i.e., HARQ process number, C-RNTI and NDI) should be transmitted in each SF. In this regard, at least two different sizes for the UCI should be predefined: one which includes the complete UCI with all fields, and one which incorporates only the necessary fields. Since both the MCS index (which can be determined according to subsection \ref{LA_HARQ}) and the UCI size are needed to separate the control information, the eNB can perform blind detection to determine the latter. This option is, however, very computationally intense; alternatively, the HARQ ACK/NACK or the rank indicator (RI) can be used to indicate the UCI size.
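The two-size UCI idea above can be sketched as a simple data structure. The field list follows the text, but the bit widths are hypothetical (the MF specification is not reproduced here): essential fields appear in every SF, while the one-shot fields are present only in the full format.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative field widths in bits; these sizes are assumptions,
# not taken from any specification.
ESSENTIAL_FIELDS = {"c_rnti": 16, "harq_process": 4, "ndi": 1}
OPTIONAL_FIELDS = {"burst_sfs": 4, "carrier": 3, "a_csi": 8, "harq_bitmap": 8}

@dataclass
class GulUci:
    """Grant-less UCI payload with two predefined sizes: a compact
    format carrying only the essential fields (sent in each SF) and a
    full format that additionally carries the one-shot fields."""
    c_rnti: int
    harq_process: int
    ndi: int
    burst_sfs: Optional[int] = None   # present only in the full format
    carrier: Optional[int] = None
    a_csi: Optional[int] = None
    harq_bitmap: Optional[int] = None

    def is_full(self) -> bool:
        return self.burst_sfs is not None

    def size_bits(self) -> int:
        bits = sum(ESSENTIAL_FIELDS.values())
        if self.is_full():
            bits += sum(OPTIONAL_FIELDS.values())
        return bits

# Full UCI in the first SF of a burst, compact UCI in the later SFs.
first_sf = GulUci(c_rnti=0x1A2B, harq_process=3, ndi=1,
                  burst_sfs=4, carrier=1, a_csi=200, harq_bitmap=0b1011)
later_sf = GulUci(c_rnti=0x1A2B, harq_process=3, ndi=1)
```

Having only two predefined sizes keeps the blind detection at the eNB to a two-hypothesis test, which is the overhead/complexity trade-off discussed above.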
\begin{figure} \centering \includegraphics[width=7.75cm]{Fig4_PUCCH} \vspace{-0.6cm} \caption{Illustration of PUCCH control region} \label{PUCCH} \vspace{-0.50cm} \end{figure} \begin{figure} \centering \includegraphics[width=7.75cm]{Fig5_indoorDeploy} \caption{Network layout for the indoor scenario} \label{deploy} \end{figure} \begin{table} \centering \caption{Main Simulation Parameters} \begin{tabular*}{2.4in}{c|c} \hline \hline \textbf{Parameters} & \textbf{Value}\\ \hline Scenario Layout & Indoor Scenario \\ \hline Number of UEs & 20 UEs per sector \\ \hline Channel Model & WINNER B+ \\ \hline Carrier Frequency & 5 GHz\\ \hline Inter-Station Distance & 500 m\\ \hline MCOT & 5 ms \\ \hline Traffic Model & FTP Model 3 \\ \hline File Size & 0.5 MByte \\ \hline DL:UL Traffic Ratio & 50:50 \\ \hline eNB Tx Power & 18 dBm \\ \hline eNB Antenna Gain & 5 dB \\ \hline UE Tx Power & 18 dBm \\ \hline UE Antenna Gain & 0 dB \\ \hline \end{tabular*} \label{ParaList} \vspace{-0.5cm} \end{table} \section{Simulation and Performance Evaluation} \label{section_4} This section provides the results obtained from comprehensive system level simulations performed with the aim of evaluating the performance of the proposed scheme. The simulations are performed under the assumptions agreed in \cite{bio6_TR889}, which are summarized in Table \ref{ParaList}. An indoor deployment with 7 hexagonal cell sites is considered for each operator, as shown in Fig. \ref{deploy}. Every site has 3 sectors with 4 MF eNBs or 4 Wi-Fi APs randomly dropped and grouped as a cluster. Similarly to Fig. \ref{ChAccessPro}, two scenarios are considered: WiFi+SUL-MF and WiFi+GUL-MF. Fig. \ref{UPT_all} shows the performance in terms of the mean user perceived throughput (UPT) for these two scenarios, for both the Wi-Fi and MF systems and for both uplink and downlink.
This figure highlights that the uplink throughput of an MF operator is quite low when the SUL scheme is adopted, due to the aforementioned issues, and that the proposed GUL scheme significantly improves its performance, up to the point of achieving performance comparable to that of Wi-Fi. As for the downlink, the proposed scheme leads to a slight performance loss, which is negligible compared to the gain obtained in the uplink. In fact, the performance in terms of the sum throughput of downlink and uplink is still significantly improved with the proposed scheme. Moreover, by comparing the Wi-Fi throughput between the cases in which the SUL and the GUL schemes are used, it can be noticed that the proposed scheme still guarantees coexistence between MF and Wi-Fi. For both downlink and uplink, the Wi-Fi performance is slightly degraded due to the intense channel access competition with MF. However, such performance degradation is acceptable, and a similar degradation would also occur if the number of Wi-Fi APs in a given area were increased. In conclusion, the proposed GUL scheme achieves a remarkable performance gain for the MF uplink while maintaining friendly coexistence with the incumbent MF downlink and Wi-Fi technology. \begin{figure} \centering \hfill \subfigure[Average uplink UPT] {\includegraphics [width=7.25cm, height=1.6 in] {Fig6_UL} \hfill \label{UL_UPT} } \hfill \vspace{-0.10cm} \subfigure[Average downlink UPT] {\includegraphics [width=7.25cm, height=1.6 in] {Fig6_DL} \label{DL_UPT} } \caption{Average UPT performance} \label{UPT_all} \vspace{-0.25cm} \end{figure} \section{Conclusion} \label{section_5} In this paper, in order to cope with the severe deterioration of the uplink performance of LTE systems operating in unlicensed spectrum, such as MF, a new transmission scheme is proposed that allows grant-less transmissions.
By developing an analytical framework based on a Markov chain representation of the LBT procedure, it is shown that the GUL scheme is able to increase the uplink channel access probability of an MF system compared to a schedule-based scheme. In addition, the system design and the details of how to enable this transmission scheme within the LTE ecosystem are elaborated throughout the paper. Finally, comprehensive system level simulations are provided, and the evaluation indicates that the proposed GUL mode can lead to a significant improvement of the uplink UPT performance with negligible performance loss for the MF downlink and Wi-Fi systems.
\section{Introduction}\label{sec:Introduction} Massive star-forming galaxies (SFGs) with centrally concentrated luminosity profiles at $z$>2 have recently been suggested by different authors to be the direct progenitors of compact quiescent galaxies (cQGs) at $z$=1.5-3 \citep[e.g.][]{wuyts2011, whitaker2012a, barro2013, vandokkum2015, toft2007, cassata2011, vanderwel2014}. Several theories have been proposed to explain the high stellar densities observed in cQGs, including gas-rich mergers and/or disk instabilities \citep[e.g.][]{tacconi2008, zolotov2015, tacchella2016}, or in-situ inside-out growth \citep[e.g.][]{wellons2015, lilly2016}. Most scenarios predict the formation of a compact SFG (cSFG) as the last stage before the quenching of star formation. Observationally, cSFG candidates have been identified as being dense, compact, and dusty \citep{barro2013, nelson2014, vandokkum2015}. The kinematics of the H$\alpha$ emission line suggests that cSFGs have rotating disks of ionized gas slightly larger than, or comparable in size to, the stellar distribution \citep[][]{vandokkum2015, wisnioski2017}. From the comparison between the dynamical and stellar masses it is inferred that these galaxies must have low gas mass fractions and short gas depletion timescales, as would be expected if they were soon to terminate their star formation. A key element in the understanding of quenching mechanisms is a direct measurement of the size and mass of the cold gas reservoirs that provide the fuel for star formation, but until now only a few such measurements have been performed \citep[e.g.][]{barro2016, spilker2016, popping2017, tadaki2017b}. cSFGs also seem to show a higher AGN incidence than the overall galaxy population at fixed stellar mass \citep{kocevski2017, wisnioski2017}, suggesting that AGN activity might play a role in quenching, possibly through feedback provided by large-scale outflows \citep[e.g.][]{gilli2014, genzel2014, brusa2015b}.
In order to investigate the gas properties of the progenitors of cQGs, in this letter we present ALMA spatially resolved observations of the dust continuum and CO line emission of GMASS 0953, a heavily obscured AGN host selected from the GMASS sample \citep{kurk2013}. We adopt a cosmology with $H_{0}{=}70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{m}{=}0.3$, $\Omega_{\Lambda}{=}0.7$ and assume a \citet{chabrier2003} IMF. \begin{figure} \centering \includegraphics[scale=0.35]{Figure/sed_gmass_0953.png} \caption{SED of GMASS 0953. The black line is the total best-fit model, while the blue and red curves indicate the star formation and AGN contributions, respectively. Red dots mark the observed photometry from: MUSIC \citep{grazian2006}, SPITZER/MIPS \citep{magnelli2011}, \emph{Herschel}/PACS \citep{magnelli2013} and SPIRE \citep{roseboom2010}, ALMA at 1.2 mm \citep{ueda2018}, 1.4 mm (this work, Sec. \ref{sec:data}), and 2.1 mm \citep{popping2017}. The black arrow represents the 5$\sigma$ upper limit on the ALMA band 3 continuum obtained from the map combining our data with those of \citet{popping2017}.} \label{ivanplot} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5, trim= 0mm 0mm 140mm 0mm, clip=true]{Figure/hst.png} \includegraphics[scale=0.5, trim= 140mm 0mm 0mm 0mm, clip=true]{Figure/hst.png} \caption{HST/ACS z-band \citep[top,][]{giavalisco2004} and CANDELS HST/WFC3 H-band \citep[bottom,][]{grogin2011} images. The lowest contours are at the 3$\sigma$ level. The source to the north of our target is a foreground galaxy.} \label{hst} \end{figure} \section{GMASS 0953}\label{sec:gmass0953} GMASS 0953\footnote[1]{a.k.a. K20-ID5, GS3-19791, 3D-HST GS30274 \citep[e.g.][]{daddi2004b, forsterschreiber2009, popping2017}.} (R.A. 03:32:31.48, Dec. -27:46:23.40) is an SFG at z$_{CO}$=2.2256. It is detected in the 7 Ms CDF-S X-ray maps \citep{luo2017} and hosts a heavily obscured ($N_{\rm H}$>10$^{24}$ cm$^{-2}$; Dalla Mura et al. in prep.)
AGN with a rest-frame intrinsic luminosity (i.e. corrected for obscuration) $L_{\rm 2-10keV}{\sim}$6.0$\times$10$^{44}$ erg/s. The target shows marginally extended emission in the 1.4 GHz VLA radio continuum maps \citep{miller2013}. The monochromatic 1.4 GHz luminosity $L_{\rm 1.4GHz}$ = 10$^{24.84}$ W Hz$^{-1}$ is consistent with radio emission predominantly arising from an AGN \citep{bonzini2012, bonzini2013}. Despite the clear presence of the AGN, the rest-frame UV spectrum remarkably does not show high-ionization emission lines (e.g. CIV$\lambda$1550, SiIV$\lambda$1400), likely because of the large obscuration of the nucleus \citep{cimatti2013}. Optical line ratios are consistent with a type II AGN, though shocks have also been proposed as an ionization mechanism \citep{vandokkum2005}, supported by evidence of large-scale outflows in multiple gas phases \citep[][Loiacono et al. in prep.]{cimatti2013, forsterschreiber2014, genzel2014}. Following \citet{delvecchio2014}, we performed a multi-component SED fitting of the available broadband photometry with the SED3FIT code \citep{berta2013}, which combines the \citet{bruzual2003} stellar libraries, the \citet{dacunha2008} IR-dust libraries, and the \citet{feltre2012} torus+disc models. The full SED is shown in Fig. \ref{ivanplot}. We derive a stellar mass $M_{\star}$=(1.15$\pm$0.1)$\times$10$^{11}$ $M_{\odot}$ and $SFR_{\rm IR}$=214$\pm$20 $M_{\odot}$ yr$^{-1}$, the latter assuming the \citet{kennicutt1998} relation (scaled to a \citet{chabrier2003} IMF) between the rest-frame 8-1000 $\mu$m $L_{\rm IR}$, corrected for the AGN contribution, and the SFR. These values would place GMASS 0953 on the SFR-mass main sequence \citep[MS; e.g.][]{rodighiero2011}. A fit of the FIR points ($\lambda$>24$\mu$m) with a greybody gives $M_{\rm dust}$=(2.6$\pm$0.5)$\times$10$^{8}$ $M_{\odot}$ and $T_{\rm dust}$=38$\pm$2 K, consistent with \citet{popping2017}.
\begin{figure*} \centering \includegraphics[scale=0.34, trim= 95mm -8mm 110mm 25mm, clip=true]{Figure/co65.png} \includegraphics[scale=0.368, trim=105mm 10mm 110mm 20mm, clip=true]{Figure/b6cont.png} \includegraphics[scale=0.39, trim=110mm 10mm 110mm 20mm, clip=true]{Figure/co32.png} \caption{\emph{From left to right:} ALMA 1.4 mm continuum map (band 6), \emph{moment 0} map of the CO(3-2) line, and \emph{moment 0} map of the CO(6-5) line. The beam size is also shown in grey. In all images the lowest continuous contours are at the 3$\sigma$ level. The scales are the same as in Fig. \ref{hst}, but no astrometric correction was applied (see Sec. \ref{sec:data}).} \label{alma} \end{figure*} HST images (Fig. \ref{hst}; top: ACS/z-band; bottom: WFC3/H-band) show a compact morphology \citep[$r_{\rm e,Hband}$=2.5 kpc,][]{vanderwel2014} with a low-surface-brightness tail to the west of the core that has been interpreted as either a merger remnant \citep[][]{vandokkum2015} or a faint disk \citep[][]{wisnioski2017}. \section{ALMA observations}\label{sec:data} ALMA observations were carried out in bands 3 and 6 during the Cycle 3 project 2015.1.01379.S (PI: P. Cassata), for total on-source integration times of 32 min and 1.3 h, respectively, and with an angular resolution of 0.6$\arcsec$. The precipitable water vapour during the observations was between 1.4 mm and 3.1 mm. We centred one spectral window of 1.875 GHz bandwidth, covering 3840 channels, at 107.291 GHz and 214.532 GHz in bands 3 and 6, respectively, to target the CO(3-2) and CO(6-5) lines, and placed in each band two further spectral windows of 1.875 GHz bandwidth, covering 960 channels, on line-free regions to target the dust continuum. The data were calibrated, imaged and analysed using the standard ALMA pipeline and the CASA software package \citep[version 4.5.3;][]{mcmullin2007}. The calibrated data were cleaned interactively using masks at the source position and setting a threshold of 3$\times$ the r.m.s. noise level as measured on the dirty images.
We adopted a Briggs weighting scheme \citep{briggs1995} with a \emph{robust} parameter of 0.2 and a channel width of 100 km s$^{-1}$ as the best trade-off between sensitivity and spatial resolution, resulting in a clean beam of $FWHM$=0.6$\arcsec$$\times$0.5$\arcsec$ with a position angle (P.A.) of 60$\degree$ in band 6, and of $FWHM$=1.0$\arcsec$$\times$0.7$\arcsec$ with a P.A. of 80$\degree$ in band 3. We used the cleaned data-cubes to produce continuum and line intensity maps (\emph{moment 0}; Fig. \ref{alma}) and to study the gas kinematics (Sec. \ref{sec:disk}). The ALMA 1.4 mm continuum map (band 6) was obtained by averaging all the line-free channels in the data-cube over a total velocity range of $\sim$4000 km s$^{-1}$. The r.m.s. noise level is $\sigma_{1.4mm}$=0.03 mJy/beam. We do not have a significant continuum detection in band 3. We also combined our band 3 continuum data with those from \citet{popping2017}, taken from the ALMA archive, after accounting for the different angular resolutions. We do not find a significant continuum detection in the combined map either. From the combined image we quote a 5$\sigma$ upper limit on the band 3 flux of $\sim$0.05 mJy, which is consistent with the flux density predicted by the SED (Fig. \ref{ivanplot}). The \emph{moment 0} maps of the CO(3-2) and CO(6-5) lines shown in Fig. \ref{alma} were obtained by integrating the line channels over the velocity range between -1000 and 1000 km s$^{-1}$. In band 6 we first subtracted the continuum in the \emph{uv} plane with the task \texttt{uvcontsub}. The noise level measured in line-free channels is 0.15 and 0.10 mJy/beam in bands 3 and 6, respectively. We derived the source size and fluxes by fitting an elliptical Gaussian to the visibility data (task \texttt{uvmodelfit}), thus avoiding the uncertainty related to the cleaning parameters for these quantities.
We measure line fluxes $I_{\rm CO(3-2)}$=0.82$\pm$0.12 Jy km s$^{-1}$, consistent with that reported by \citet{popping2017}, and $I_{\rm CO(6-5)}$=1.21$\pm$0.13 Jy km s$^{-1}$, and a 1.4 mm continuum flux $S_{\rm 1.4mm}$=378$\pm$65 $\mu$Jy. Flux errors account for both the measurement error and the 10$\%$ absolute flux accuracy of the calibrator.\\ \indent While previous ALMA observations of GMASS 0953 \citep{popping2017} could not constrain the size of the molecular gas because of their lower angular resolution (2$\arcsec$), we marginally resolve the target in band 6. For the CO(6-5) line we measure a deconvolved $FWHM$=0.18$\arcsec$$\pm$0.06$\arcsec$ (with an axis ratio of 1.0$^{+0.0}_{-0.3}$), which corresponds to a radius $r_{\rm CO}$=0.5$\times FWHM \sim$ 0.75$\pm$0.25 kpc, and an intrinsic size of the continuum emission of $FWHM$=0.30$\arcsec$$\pm$0.09$\arcsec$ ($r_{\rm 1.4mm}$=1.24$\pm$0.37 kpc), consistent with the CO(6-5) line within the errors. We performed Monte Carlo simulations to test the reliability of the \texttt{uvmodelfit} errors on the size of our target. In particular, we simulated the observation of a mock galaxy with the same best-fit properties as our target, both in line and in continuum. We then created 100 realizations of the background noise matching our observations and measured the properties of the simulated sources using \texttt{uvmodelfit}. In both cases (i.e. line and continuum) the peak and the sigma of the distribution of the measurements are fully consistent with the \texttt{uvmodelfit} best-fit size and errors. The emission centroids in the ALMA and HST images are co-spatial, after accounting for the known systematic $\lesssim$0.5$\arcsec$ shift in the NW direction between ALMA and HST positions in the CDFS \citep[e.g.][]{rujopakarn2016, dunlop2017, ginolfi2017}.
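As a cross-check of the quoted physical sizes, the arcsecond-to-kiloparsec conversion can be sketched for the adopted flat $\Lambda$CDM cosmology ($H_{0}$=70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{m}$=0.3, $\Omega_{\Lambda}$=0.7); the trapezoidal integrator below is a minimal stand-in for a proper cosmology package, not part of the analysis pipeline:

```python
import math

# Angular-to-physical size conversion at z = 2.2256 for the adopted flat
# LambdaCDM cosmology (H0 = 70 km/s/Mpc, Om = 0.3, OL = 0.7).
H0, OM, OL = 70.0, 0.3, 0.7
C = 299792.458  # speed of light, km/s

def comoving_distance(z, n=10000):
    """Line-of-sight comoving distance in Mpc (trapezoidal integration)."""
    dz = z / n
    E = lambda zz: math.sqrt(OM * (1 + zz) ** 3 + OL)
    s = 0.5 * (1 / E(0) + 1 / E(z)) + sum(1 / E(i * dz) for i in range(1, n))
    return (C / H0) * s * dz

z = 2.2256
d_a = comoving_distance(z) / (1 + z)                  # angular diameter distance, Mpc
kpc_per_arcsec = d_a * 1e3 * math.pi / (180 * 3600)   # ~8.2 kpc/arcsec at this z

r_co = 0.5 * 0.18 * kpc_per_arcsec     # CO(6-5): FWHM = 0.18" -> r ~ 0.75 kpc
r_cont = 0.5 * 0.30 * kpc_per_arcsec   # 1.4 mm continuum: FWHM = 0.30" -> r ~ 1.24 kpc
```

Both radii reproduce the values quoted above to within the rounding of the measured FWHMs.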
\subsection{ISM modeling}\label{sec:gas} GMASS 0953 had already been observed with ALMA by \citet{popping2017} in bands 3 and 4, targeting the CO(3-2), CO(4-3), and [C I](1-0) lines. \begin{figure} \centering \includegraphics[trim=0mm 2mm 0mm 125mm, height=45mm, width=85mm, clip=true]{Figure/sled_v4.png} \caption{Observed CO SLED of GMASS 0953 (red diamonds). We also plot, normalized to the J$_{\rm up}$=3 transition of our source, the expected scaling in the LTE approximation, the Milky Way \citep{fixsen1999}, and the average values for different classes of objects, namely SMGs \citep{bothwell2013}, BzK galaxies \citep{daddi2015}, and ULIRGs \citep{papadopoulos2012}. We added in quadrature a 10$\%$ flux accuracy uncertainty to the CO(4-3) line flux of GMASS 0953, which was not accounted for in \citet{popping2017} (Popping, private communication).} \label{sled} \end{figure} They derived a CO(1-0) luminosity of $L^{\prime}$$_{\rm CO}$=(2.1$\pm$0.2)$\times$10$^{10}$ K km s$^{-1}$ pc$^{2}$ assuming that the lines are all in the Rayleigh-Jeans limit and in local thermodynamic equilibrium (LTE)\footnote[4]{The published value is actually a typo (Popping, private communication).}. However, our new value of the CO(6-5) transition differs from what we would expect in the LTE approximation (Fig. \ref{sled}); therefore, we adopt an empirical method to estimate the $L^{\prime}$$_{\rm CO}$ luminosity. The shape of the observed CO SLED of GMASS 0953 shows a strong similarity to the average SLEDs of presumably similar sources, namely BzK galaxies (mostly MS galaxies at $z$$\sim$2) and local ULIRGs (often hosting an AGN), normalized to the flux of the J$_{\rm up}$=3 transition of our target (Fig. \ref{sled}). Extrapolating the CO(1-0) transition from our observed CO(3-2) flux, assuming the average flux ratio from the appropriate literature SLEDs, we estimate a flux $I_{\rm CO(1-0)}$=0.17$\pm$0.03 Jy km s$^{-1}$.
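The conversion of the extrapolated CO(1-0) flux into a line luminosity and gas mass can be sketched numerically, using the standard \citet{solomon1997} relation $L^{\prime}_{\rm CO}$=3.25$\times$10$^{7}\,I_{\rm CO}\,\nu_{\rm obs}^{-2}\,D_{L}^{2}\,(1+z)^{-3}$ and the cosmology adopted in Sec. \ref{sec:Introduction}; the trapezoidal integrator is an illustrative stand-in for a cosmology package:

```python
import math

# Flux -> luminosity -> gas mass chain, with I_CO in Jy km/s, nu_obs in GHz,
# D_L in Mpc, and L'_CO in K km/s pc^2 (Solomon et al. 1997 convention).
H0, OM, OL, C = 70.0, 0.3, 0.7, 299792.458

def luminosity_distance(z, n=10000):
    """Luminosity distance in Mpc for a flat LambdaCDM cosmology."""
    dz = z / n
    E = lambda zz: math.sqrt(OM * (1 + zz) ** 3 + OL)
    s = 0.5 * (1 / E(0) + 1 / E(z)) + sum(1 / E(i * dz) for i in range(1, n))
    return (1 + z) * (C / H0) * s * dz

z = 2.2256
nu_obs = 115.271 / (1 + z)     # observed CO(1-0) frequency, GHz
d_l = luminosity_distance(z)   # ~17.7 Gpc at this redshift
i_co10 = 0.17                  # extrapolated CO(1-0) flux, Jy km/s

l_co = 3.25e7 * i_co10 * nu_obs ** -2 * d_l ** 2 * (1 + z) ** -3  # ~4e10
m_h2 = 0.8 * l_co              # alpha_CO = 0.8 Msun/(K km/s pc^2) -> ~3.2e10 Msun
```

The result reproduces both the $L^{\prime}_{\rm CO}$ and $M_{\rm H_{2}}$ values derived below to within the quoted uncertainties.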
A similar value would be derived by normalizing the SLEDs to the CO(6-5) flux of our target. From the CO(1-0) flux, following \citet{solomon1997}, we derive $L^{\prime}$$_{\rm CO}$=(4.0$\pm$0.7)$\times$10$^{10}$ K km s$^{-1}$ pc$^{2}$, about twice the value quoted by \citet{popping2017}. Assuming a CO-to-H$_{\rm 2}$ conversion factor $\alpha_{\rm CO}$=0.8 $M_{\odot}$/(K km s$^{-1}$ pc$^{2}$), we derive the gas mass $M_{\rm H_{\rm 2}}$=(3.24$\pm$0.6)$\times$10$^{10}$ $M_{\odot}$, which is in good agreement with the estimate derived from the [C I] emission line and a factor of $\sim$4 higher than that estimated from the dust mass \citep{popping2017}. Our choice of $\alpha_{\rm CO}$ is motivated by the compactness of our source and its high SFR surface density \citep[Sec. \ref{sec:discussion}; see also][]{bolatto2013}.\\ \indent Concerning the ISM physical properties, \citet{popping2017} derived an estimate of the molecular gas density and of the far-UV (6-13.6 eV) radiation field flux from the comparison of the [C I]/CO(4-3) intensity ratio of GMASS 0953 with the outputs of single photo-dissociation region (PDR) models from \citet{kaufman1999, kaufman2006}. We carried out the same investigation with the code \emph{CLOUDY} v17.00 \citep{ferland2017}, adding our new observations. In particular, we ran a grid of CLOUDY PDR models spanning ranges in the density and in the intensity of the UV radiation field that illuminates the cloud, assumed to be a 1-D gas slab, and we linearly scaled the CLOUDY default cosmic-ray ionization rate with the SFR \citep[see][]{bisbas2015, vallini2017}. In Fig.
\ref{liviaplot} we show the predicted [C I]/CO(6-5) luminosity ratio as a function of $G_{0}$\footnote[5]{$G_{0}$ is the flux in the far-ultraviolet band (6-13.6 eV) scaled to that in the solar neighborhood ($\sim$1.6$\times$10$^{-3}$ erg s$^{-1}$ cm$^{-2}$) \citep{habing1968}.} and of the density \emph{n}, highlighting the regions of parameter space that reproduce the observed [C I]/CO(6-5) (white) and CO(6-5)/CO(4-3) (magenta) luminosity ratios. It is evident that a single PDR with constant density and $G_{0}$ is not able to reproduce both luminosity ratios, because the two ratios do not trace the same region of parameter space. We argue that at least two components are needed to correctly model the observations, though a robust fit is not currently feasible because the degrees of freedom outnumber the data. Multiple phases are usually required to fit the ISM in local LIRGs and ULIRGs and in high-redshift galaxies \citep[e.g.][]{ward2003, carilli2010, danielson2011, daddi2015, pozzi2017, mingozzi2018}, consisting of a diffuse, lower-excitation component and a more concentrated, higher-excitation gas. Our measurement of the CO(6-5) transition points towards the existence of a very dense ISM component with $n\sim10^{5.5}$ cm$^{-3}$. We also point out that GMASS 0953 hosts a Compton-thick AGN, and therefore the higher-excitation emission could also be associated with a dense \emph{X-ray dominated region} (XDR), though higher-J CO lines would be needed to properly constrain its contribution \citep{meijerink2007, vanderwerf2010, pozzi2017, mingozzi2018}. \begin{figure} \centering \includegraphics[trim=0mm 0mm 0mm 0mm, height=45mm, width=85mm, clip=true]{Figure/liviaplot_mod.png} \caption{The [C I]/CO(6-5) luminosity ratio (expressed in erg s$^{-1}$ cm$^{-2}$) as a function of $G_{0}$ and of the density \emph{n}.
The observed values of [C I]/CO(6-5), CO(6-5)/CO(4-3) and CO(6-5)/CO(3-2) are marked by the thick white, magenta and purple contours, respectively.} \label{liviaplot} \end{figure} \section{Kinematics}\label{sec:disk} The CO(3-2) and CO(6-5) lines have $FWHM$ of 733$\pm$98 and 751$\pm$40 km s$^{-1}$, respectively, consistent with the FWHM of the CO(3-2), CO(4-3) and [C I](1-0) lines reported by \citet{popping2017}. We show the CO(6-5) velocity map in Fig. \ref{velmap} and the position-velocity (PV) diagram extracted along the major axis at a P.A. of 95$\degree$ in Fig. \ref{pv}. A velocity gradient is clearly detected. A merger system in a coalescence phase observed at a favourable orientation could in principle produce such a gradient. However, this picture seems unlikely given the absence of two distinct nuclei in the core of the high-resolution HST/ACS image (Fig. \ref{hst}). Alternatively, the PV diagram could be the signature of a rotating disk of molecular gas. Possible evidence of a rotating disk of ionized gas in GMASS 0953, from the study of the H$\alpha$ and [OIII]$\lambda$5007 emission line kinematics, has also been reported \citep[][Loiacono et al. in prep.]{wisnioski2017}. Under the assumption of a rotating disk, we investigate the kinematic properties of the dense molecular gas traced by the CO(6-5) line with \texttt{$^{3D}$BAROLO} \citep{diteodoro2015}, a tool for fitting 3D tilted-ring models to emission-line datacubes that takes into account the effect of beam smearing. We assumed a disk model with two rings and a ring width of 0.2$\arcsec$, approximately half the clean beam size of the datacube. We fixed the P.A. at 95$\degree$, the value that maximizes the spatial extension of the galaxy in the PV diagram. This value is consistent with both the photometric and kinematic H$\alpha$ P.A.s as determined from HST imaging and KMOS data \citep{vanderwel2014, wisnioski2017}.
We then ran \texttt{BAROLO} leaving the rotation velocity ($V_{rot}$) and the intrinsic velocity dispersion ($\sigma$) as free parameters, for different values of the inclination. \begin{figure} \centering \includegraphics[scale=0.27]{Figure/velmap_def.png} \caption{CO(6-5) velocity map with the continuum superimposed (black contours, 3$\sigma$ and 6$\sigma$, see Fig. \ref{alma}). The black line shows the direction of the major axis.} \label{velmap} \end{figure} We derive a fiducial interval for the inclination (i.e. the range of values over which the model does not change significantly, as estimated from the residual maps) between 60$\degree$ and 90$\degree$, and a best-fit $V_{rot}$=320$^{+92}_{-53}$ km s$^{-1}$, where the error includes both the formal error of the fit and the uncertainty from varying the inclination within our fiducial range. From our simulations we also conclude that the model is quite insensitive to large variations in $\sigma$, due to the large channel width, and we estimate an upper limit of $\sigma$=140 km s$^{-1}$. The best-fit model, normalized to the azimuthally-averaged flux in each ring, is shown as red contours in Fig. \ref{pv}. We also show the 1D spectrum extracted from the model, along with the CO(3-2) spectrum, in Fig. \ref{spec}. We note that the intrinsic H$\alpha$ rotation curve presented in Fig. 5 of \citet{wisnioski2017} suggests an intrinsic velocity of $\sim$200 km s$^{-1}$ on nuclear scales, broadly consistent with our results, and a lower velocity of $\sim$100 km s$^{-1}$ at larger radii. Though it is difficult to make a direct comparison with the results of \citet{wisnioski2017} because of the different approaches to dealing with beam-smearing effects, we speculate that the combination of the two results might suggest a declining rotation curve in the inner regions of the gas disk of GMASS 0953, as observed in massive early-type galaxies both locally and at high redshift \citep{noordermeer2007, genzel2017}.
We note a 2.5$\sigma$ flux excess with respect to the disk model, offset by $\Delta v\sim$ -700 km s$^{-1}$ with respect to the line peak and also visible in the PV diagram. The velocity offset is consistent with the signatures of AGN-driven large-scale outflows in the neutral and ionized gas phases, namely the blueshift of rest-frame UV ISM absorption lines and the offset of a broad component detected in the [OIII]$\lambda$5007 emission line \citep[][Loiacono et al. in prep]{cimatti2013}. \citet{popping2017} report that they do not find any signature of an outflow in the flux density profile of the CO(4-3) line, which is detected at a similar significance level as the CO(6-5) line in this work. The lack of a flux excess in the lower-J observed transitions could indicate that the excitation ratio between the CO(6-5) and the lower-J transitions is higher in the possible outflow than in the rest of the molecular gas in the host galaxy \citep[e.g.][]{dasyra2016, richings2017}. \begin{figure} \centering \includegraphics[scale=0.24]{Figure/pv_v6.png} \caption{CO(6-5) PV diagram extracted along the major axis at $\phi$=95$\degree$, assuming an inclination of 75$\degree$. The iso-density contours (2.5, 5, 10$\sigma$) of the galaxy and of the \texttt{BAROLO} best-fit model are shown in blue and red, respectively. The yellow points mark the rotation curve.} \label{pv} \end{figure} \begin{figure} \centering \includegraphics[scale=0.27, trim=0mm 0mm 0mm 70mm, clip=true]{Figure/co32_spec.png} \includegraphics[scale=0.27, trim=0mm 0mm 0mm 70mm, clip=true]{Figure/barolo_v6.png} \caption{Velocity-integrated flux densities of the CO(3-2) (top) and CO(6-5) (bottom) lines. Both spectra were extracted from the respective data-cubes in the region delimited by the 3$\sigma$ contour of the \emph{moment 0} map (see Fig. \ref{alma}). The red dashed line in the top plot marks the Gaussian fit to the flux density profile.
The red continuous line in the bottom plot is the disk model extracted from the 3D-BAROLO model-cube in the same region as the source spectrum.} \label{spec} \end{figure} \section{Discussion}\label{sec:discussion} We have presented ALMA observations of GMASS 0953, a heavily obscured AGN host at z$\sim$2.226. The $M_{H_{2}}$ derived in Sec. \ref{sec:gas} yields a gas fraction $M_{\rm H_{\rm 2}}$/($M_{\rm H_{\rm 2}}$+$M_{\star}$)=0.2 and a gas depletion timescale $\tau_{\rm depl}$=$M_{\rm H_{\rm2}}$/$SFR$$\sim$150 Myr. As pointed out by \citet{popping2017}, this value of $\tau_{\rm depl}$ is much shorter than in more extended MS galaxies at the same redshift \citep{sargent2014, scoville2017, tacconi2017}, but consistent with the values measured in off-MS galaxies, other cSFGs, and a few galaxies hosting an obscured AGN \citep[e.g.][]{polletta2011, brusa2015b, barro2016, spilker2016, tadaki2017b}. We find evidence for a multi-phase ISM in our galaxy and estimate the density of the higher-excitation gas probed by the observed CO(6-5) line: $n\sim10^{5.5}$ cm$^{-3}$. We measure a very compact radius ($\sim$1 kpc) for both the molecular gas and the dust emission, $\sim$2 times smaller than the stellar distribution. We derive a gas mass surface density of $\Sigma_{M_{\rm H_{\rm 2}}}$=0.5$\times$$M_{\rm H_{\rm 2}}$/$\pi$($r_{\rm CO}$)$^{2}$$\sim$9000 $M_{\odot}$ pc$^{-2}$. This value is similar to the typical stellar mass surface density of quiescent galaxies of similar stellar mass at the same redshift \citep{barro2015}.
Considering that the SFR surface density at the radius of the dust continuum is $\Sigma_{SFR}$=0.5$\times$$SFR$/$\pi$($r_{\rm 1.4mm}$)$^{2}$$\sim$22 $M_{\odot}$ yr$^{-1}$ kpc$^{-2}$, GMASS 0953 would lie at the high star-formation and gas-density end of the Kennicutt-Schmidt relation, consistent with local ULIRGs \citep[e.g.][]{genzel2010}.\\ \indent From the aforementioned results we conclude that GMASS 0953, though formally lying on the MS of SFGs, has a lower gas content than MS galaxies of the same stellar mass and is consuming it much more rapidly in a very compact core \citep{elbaz2017}. On short timescales this galaxy will likely exhaust its gas reservoirs and become a cQG. This scenario, consistent with previous analyses of the ISM properties and of the optical emission line kinematics \citep{vandokkum2015, popping2017, wisnioski2017}, finds further confirmation in our direct measurement of the extremely compact size of the star-forming region and of the molecular gas of GMASS 0953 \citep{gilli2014, barro2016, tadaki2017a, tadaki2017b, brusa2018}. With our data we are unable to discriminate between the different mechanisms that could have produced such a compact core, i.e. a past merger \citep[e.g.][]{tacconi2008, wellons2015}, disk instabilities \citep[e.g.][]{dekel2014, ceverino2015, zolotov2015}, or in-situ secular processes \citep[e.g.][]{wellons2015, vandokkum2015}, though recent studies tend to favour dissipative formation mechanisms to explain the smaller size of the nuclear region of intense star formation with respect to the stellar distribution \citep{barro2016, tadaki2017a}. AGN activity has also been advocated as a quenching mechanism for cSFGs, in addition to the gas consumption driven by the strong star-formation activity \citep{barro2013}, both likely triggered by the same mechanism that led to the formation of the compact core \citep{kocevski2017}.
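The derived quantities quoted in this section follow directly from the measured values by simple arithmetic; a quick numerical check (a sketch only, without error propagation) is:

```python
import math

# Check of the derived quantities from the values measured in this work.
m_h2 = 3.24e10        # molecular gas mass, Msun
m_star = 1.15e11      # stellar mass, Msun
sfr = 214.0           # SFR, Msun/yr
r_co = 0.75           # CO(6-5) radius, kpc
r_cont = 1.24         # 1.4 mm continuum radius, kpc

f_gas = m_h2 / (m_h2 + m_star)                           # ~0.2
tau_depl = m_h2 / sfr / 1e6                              # ~150 Myr
sigma_gas = 0.5 * m_h2 / (math.pi * (r_co * 1e3) ** 2)   # ~9000 Msun/pc^2
sigma_sfr = 0.5 * sfr / (math.pi * r_cont ** 2)          # ~22 Msun/yr/kpc^2
```

All four quantities land on the values quoted in the text, placing the source at the high-density end of the Kennicutt-Schmidt relation.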
In particular, because of its compactness and the presence of a luminous, obscured AGN, GMASS 0953 is consistent with the ``quasar mode'' postulated by \citet{hopkins2006b}, in which the AGN quenches the star formation within the host galaxy through feedback mechanisms \citep[see also][and Lapi et al. in prep.]{rangel2014}, e.g. fast large-scale outflows such as those that have been observed in different gas phases of GMASS 0953 \citep[][Loiacono et al. in prep.]{cimatti2013, forsterschreiber2014, genzel2014}, tentatively including the molecular phase. GMASS 0953 is also one of the first cases in which, thanks to the quality of the data, we are able to measure the rapid rotation ($V_{rot}$=320$^{+92}_{-53}$ km s$^{-1}$) of the molecular gas disk in the core \citep{tadaki2017b, barro2017, brusa2018}, predicted by some simulations before the gas is completely depleted \citep{shi2017}, though it is not yet clear whether this is a common feature of all cQG progenitors \citep[e.g.][]{spilker2016}. The observation of stellar rotation in cQGs \citep{newman2015, toft2017} could indicate that cSFG cores might retain their rotation after the quenching process. In conclusion, in this work we have highlighted the importance of spatially-resolved ALMA observations for the study of a prototypical progenitor of cQGs, likely caught in the act of quenching through the combined action of efficient compact nuclear star-formation activity and AGN feedback. \section{Acknowledgements} This paper makes use of the following ALMA data: ADS/JAO.ALMA$\#$2015.1.01379.S (PI: Cassata); $\#$2015.1.00228.S (PI: Popping). ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. We acknowledge extensive support in data reduction and analysis from the ALMA Regional Centre in Bologna.
MT gratefully thanks E. Di Teodoro for his support with the \texttt{$^{3D}$BAROLO} code, P. Popesso for her warm hospitality in Munich during the writing of this paper, L. Pantoni for providing the dust mass and temperature of GMASS 0953, G. Popping, R. Decarli, R. Paladino and A. Lapi for useful discussions. PC acknowledges support from CONICYT through the project FONDECYT regular 1150216. MB acknowledges support from the FP7 Career Integration Grant "eEASy" (CIG 321913). FP, CG, AR, LP, GR acknowledge funding from the INAF PRIN-SKA 2017 program 1.05.01.88.04. EI acknowledges partial support from FONDECYT through grant N$^\circ$\,1171710. The authors thank the anonymous referee for constructive comments that helped to improve the presentation of the results. \bibliographystyle{mnras}
\section{Introduction} The matrix of second-order energy derivatives with respect to nuclear displacements, or simply the Hessian, is directly related to many properties of great interest to chemists \cite{Papai1990,Frisch1990,Thomas1993,Wong1996,Pulay2013}. Derivative methods are widely used to characterize the stationary points of the potential energy surface, but are also essential for the study of high-resolution molecular spectroscopy \cite{Yamaguchi2011} and of geometry-dependent molecular properties such as electrostatic moments \cite{Mitxelena}. Analytic first-order derivatives are well established for reduced density matrix (RDM) methods, e.g. for the parametric second-order RDM method \cite{referee1-4}, and analytic expressions for second-order energy derivatives are well known for standard electronic structure methods. Nevertheless, the latter are still missing for methods that have appeared in the last few decades, such as those derived directly from RDMs \cite{Mazziotti2007,Sokolovdcft,Piris2014a,referee1-1,referee1-2,referee1-3} without using the wavefunction. In fact, the Hamiltonian corresponding to Coulombic systems involves only one- and two-particle operators, hence the ground-state energy of an electronic system can be computed using the first- and second-order RDMs, denoted hereafter as $\varGamma$ and $D$, respectively. Within the Born-Oppenheimer approximation, the electronic energy is then written as \begin{equation} E_{el}=\sum\limits _{ik}\varGamma_{ki}\mathcal{H}_{ki}+\sum\limits _{ijkl}D_{kl,ij}\left\langle ij|kl\right\rangle ,\label{Eelec_0} \end{equation} where $\mathcal{H}_{ki}$ are the one-electron matrix elements of the core Hamiltonian, whereas $\left\langle ij|kl\right\rangle $ are the two-electron integrals of the Coulomb interaction. Accordingly, the role of the N-particle wavefunction can be assumed by the RDMs.
Of particular interest are one-particle theories, where the ground-state energy is represented in terms of $\varGamma$, because the necessary and sufficient conditions that guarantee the ensemble N-representability of $\varGamma$ are well established and very easy to implement \cite{Coleman1963}. In addition, the unknown functional in a $\varGamma$-based theory only needs to reconstruct the electron-electron potential energy \cite{Piris2007}, which is a notable advantage over density functional theory, where the kinetic energy functional also needs to be reconstructed. $\varGamma$-functional theories therefore seem a promising way of overcoming the drawbacks of the density functional approximations currently in use. Most functionals employ the exact energy expression (\ref{Eelec_0}) but use only a reconstruction functional $D\left[\varGamma\right]$. This implies that the exact ground-state energy will not, in general, be entirely recovered. Approximating the energy functional has important consequences \cite{Piris2017}. First, the theorems obtained for the exact functional $E\left[\varGamma\right]$ are no longer valid. The point is that an approximate functional still depends on $D$. An undesired implication of the $D$-dependence is that the functional N-representability problem arises, that is, we have to comply with the requirement that $D$ reconstructed in terms of $\varGamma$ must satisfy the same N-representability conditions \cite{Mazziotti2007,referee1-1} as those imposed on unreconstructed second-order RDMs to ensure a physical value of the approximate ground-state energy. Otherwise, the functional approximation will not be correct, since there will be no N-electron system with the energy value (\ref{Eelec_0}). In addition, due to this $D$-dependence, the resulting functional depends only implicitly on $\varGamma$ and is not invariant with respect to a unitary transformation of the orbitals. 
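Coleman's conditions mentioned above are indeed easy to implement: in the natural-orbital basis they state that the eigenvalues of $\varGamma$ lie in $[0,1]$ and sum to $N$. A minimal sketch (the helper name and the toy $\varGamma$ are illustrative, not from the paper):

```python
import numpy as np

# Coleman's ensemble N-representability conditions for Gamma:
# eigenvalues (occupation numbers) in [0, 1], trace equal to N.
def is_ensemble_n_representable(Gamma, N, tol=1e-8):
    occ = np.linalg.eigvalsh(Gamma)
    return bool(occ.min() > -tol and occ.max() < 1.0 + tol
                and abs(occ.sum() - N) < tol)

# An idempotent projector onto two orthonormal orbitals is a valid Gamma for N = 2
Q = np.linalg.qr(np.random.default_rng(1).standard_normal((4, 4)))[0]
Gamma_ok = Q[:, :2] @ Q[:, :2].T
```

Scaling such a $\varGamma$ by any factor larger than one immediately violates the upper bound on the occupations, which is why these conditions are so cheap to enforce compared with the functional N-representability problem for $D$.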
Nowadays, the approximate functionals are constructed in the basis where $\varGamma$ is diagonal, which is the definition of a natural orbital functional (NOF). Accordingly, it is more appropriate to speak of a NOF rather than a functional of $\varGamma$, due to the existing dependence on $D$. In this vein, in the NOF theory (NOFT) \cite{Piris2007}, the natural orbitals (NOs) are the orbitals that diagonalize the $\varGamma$ corresponding to an approximate energy expression, such as those obtained from an approximate wavefunction. The electronic energy can therefore be considered a functional of the NOs and occupation numbers (ONs). In the following, we refer only to this basis, hence the ground-state functional for N-electron systems is given by the formula \begin{equation} E_{el}=\sum_{i}n_{i}\mathcal{H}_{ii}+\sum\limits _{ijkl}D\left[n_{i},n_{j},n_{k},n_{l}\right]\left\langle ij|kl\right\rangle .\label{Eelec_1} \end{equation} In Eq. (\ref{Eelec_1}), $D\left[n_{i},n_{j},n_{k},n_{l}\right]$ represents the reconstructed two-particle RDM in terms of the ONs. It is worth noting that we neglect any explicit dependence of $D$ on the NOs themselves because the energy functional already has a strong dependence on the NOs via the two-electron integrals. In the last two decades, much effort has been put into making NOFT able to compete with well-established electronic structure methods \cite{Piris2014a,Pernal2016}. Recently, the analytic energy gradients in the atomic orbital representation for NOFT were obtained \cite{Mitxelena2017}. In the present paper, an alternative expression for them in terms of the NOs is given. On the other hand, the analytic calculation of second-order derivatives is also preferable to a numerical treatment when high accuracy is required. Here, for the first time in the context of NOFT, the second-order analytic energy derivatives with respect to nuclear displacements are given. 
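The structure of Eq. (\ref{Eelec_1}) can be sketched numerically. The reconstruction used below (a bare product of occupations) is a toy placeholder, not one of the published NOF approximations, and all arrays are random.

```python
import numpy as np

# Sketch of the NOF energy in the natural-orbital basis:
#   E = sum_i n_i H_ii + sum_ijkl D[n_i, n_j, n_k, n_l] <ij|kl>.
def nof_energy(n, Hcore, eri, reconstruct):
    E = float(np.dot(n, np.diag(Hcore)))      # one-electron part, diagonal in NOs
    norb = len(n)
    for i in range(norb):
        for j in range(norb):
            for k in range(norb):
                for l in range(norb):
                    # two-electron part: ON-dependent reconstruction times <ij|kl>
                    E += reconstruct(n[i], n[j], n[k], n[l]) * eri[i, j, k, l]
    return E

# Hypothetical toy reconstruction, NOT a published NOF functional
toy_D = lambda ni, nj, nk, nl: 0.25 * ni * nj * nk * nl
```

Because the reconstruction depends only on the ONs, differentiating such an energy with respect to $n_{m}$ needs only $\partial D/\partial n_{m}$, which is the quantity entering the occupation Euler equation below.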
\section{The Hessian} The procedure for the minimization of the energy (\ref{Eelec_1}) requires optimizing with respect to the ONs and the NOs separately. The method of Lagrange multipliers is used to enforce the orthonormality requirement for the NOs and the ensemble N-representability restrictions on $\varGamma$, which reduce to $0\leq n_{i}\leq1$ and $\sum_{i}n_{i}=N$ \cite{Coleman1963}. The bounds on $\left\{ n_{i}\right\} $ are enforced by means of auxiliary variables, so only one Lagrange multiplier $\mu$ is needed to ensure the normalization of the ONs. Hence, the auxiliary functional $\Lambda\left[\mathrm{N},\left\{ n_{i}\right\} ,\left\{ \phi_{i}\right\} \right]$ is given by \begin{equation} \begin{array}{c} \Lambda=E_{el}-\mu\left({\displaystyle \sum_{i}}n_{i}-N\right)-{\displaystyle \sum_{ki}}\lambda_{ik}\left(\left\langle \phi_{k}|\phi_{i}\right\rangle -\delta_{ki}\right).\end{array}\label{Lagrangian} \end{equation} By making (\ref{Lagrangian}) stationary with respect to the NOs and ONs, we obtain the Euler equations: \begin{equation} \frac{\partial E_{el}}{\partial n_{m}}=\mathcal{H}_{mm}+\sum\limits _{ijkl}\frac{\partial D_{kl,ij}}{\partial n_{m}}\left\langle ij|kl\right\rangle =\mu,\label{equation_for_occupations} \end{equation} \begin{equation} \frac{\partial E_{el}}{\partial\phi_{m}^{*}}=n_{m}\hat{\mathcal{H}}\phi_{m}+\sum\limits _{ijkl}D_{kl,ij}\frac{\partial\left\langle ij|kl\right\rangle }{\partial\phi_{m}^{*}}=\sum_{k}\lambda_{km}\phi_{k}.\label{orbital_EULER_equation} \end{equation} Eq. (\ref{equation_for_occupations}) is obtained holding the orbitals fixed, whereas the set of orbital Euler Eqs. (\ref{orbital_EULER_equation}) is satisfied for a fixed set of occupancies. For the sake of simplicity, we consider only real orbitals throughout this work. At present, the procedure of simultaneously solving Eqs. 
(\ref{equation_for_occupations}) and (\ref{orbital_EULER_equation}) is carried out by the iterative diagonalization method described in Ref. \cite{Piris2009a}, which is based on the Hermiticity of the matrix of Lagrange multipliers $\lambda$ at the extremum, i.e. $\left[\lambda-\lambda^{\dagger},\varGamma\right]=0$ (where the superscript $\dagger$ denotes the conjugate transpose). As shown in Ref. \cite{Mitxelena2017}, the first-order derivative of the electronic energy with respect to Cartesian coordinate $x$ of nucleus $A$, written in the atomic orbital representation, reads as \begin{equation} \begin{array}{c} {\displaystyle \frac{dE_{el}}{dx_{A}}}={\displaystyle \sum_{\mu\upsilon}}\Gamma_{\mu\upsilon}\dfrac{\partial\mathcal{H}_{\mu\upsilon}}{\partial x_{A}}+{\displaystyle \sum_{\mu\upsilon\eta\delta}}D_{\eta\delta,\mu\upsilon}\dfrac{\partial\left\langle \mu\upsilon|\eta\delta\right\rangle }{\partial x_{A}}\\ \\ -{\displaystyle \sum_{\mu\upsilon}}\lambda_{\mu\upsilon}\dfrac{\partial\mathcal{S_{\mu\upsilon}}}{\partial x_{A}},\qquad\qquad\qquad\qquad \end{array}\label{eq:gradient_initial} \end{equation} so the energy gradient depends only on the explicit derivatives of the one- and two-electron integrals and the overlap matrix. Therefore, there is no contribution from the ONs, and the resulting Eq. (\ref{eq:gradient_initial}) does not require obtaining the NOs and ONs at the perturbed geometry. One could differentiate Eq. (\ref{eq:gradient_initial}) to obtain an expression for the Hessian; nevertheless, the perturbation of both NOs and ONs must then be considered. For that purpose it is more convenient to work in the natural orbital (NO) representation $\left\{ \phi_{i}\right\} $, so that Eq. 
(\ref{eq:gradient_initial}) transforms into \begin{equation} \begin{array}{c} {\displaystyle \frac{dE_{el}}{dx_{A}}}={\displaystyle \sum_{i}}n_{i}{\displaystyle \frac{\partial\mathcal{H}_{ii}}{\partial x_{A}}}+{\displaystyle \sum_{ijkl}}D_{kl,ij}{\displaystyle \frac{\partial\left\langle ij|kl\right\rangle }{\partial x_{A}}}\qquad\qquad\\ \\ {\textstyle \;-{\displaystyle \sum_{ij}}S_{ij}^{x_{A}}\lambda_{ij},\quad S_{ij}^{x_{A}}={\displaystyle {\textstyle {\displaystyle \sum_{\mu\upsilon}}}}C_{\mu i}C_{\upsilon j}{\displaystyle \frac{\partial\mathcal{S_{\mu\upsilon}}}{\partial x_{A}}}}. \end{array}\label{eq:gradient_MO} \end{equation} The NOs associated with the perturbed geometry are usually expressed as a linear combination of the NOs corresponding to the reference state, so a perturbation of $x_{A}$ produces, to first order, the following change in $\phi_{i}$: \begin{equation} \phi_{i}+\delta x_{A}\left(\sum_{j}U_{ij}^{x_{A}}\phi_{j}+\sum_{\mu}C_{\mu i}\frac{\partial\zeta_{\mu}}{\partial x_{A}}\right)+\mathcal{O}\left(\delta x_{A}^{2}\right).\label{eq:perturbation} \end{equation} In Eq. (\ref{eq:perturbation}), $\left\{ \zeta_{\mu}\right\} $ are the atomic orbitals, whereas the changes in the NO coefficients are accounted for by the standard coupled-perturbed coefficients $\left\{ U_{ij}^{x_{A}}\right\} $. 
The orthonormality relation of the perturbed NOs provides the relationship \cite{Yamaguchi2011} \begin{equation} \frac{\partial S_{ij}}{\partial x_{A}}=U_{ij}^{x_{A}}+U_{ji}^{x_{A}}+S_{ij}^{x_{A}}=0,\label{U+U+S=00003D00003D0} \end{equation} which can be used to derive the relation \begin{equation} \begin{array}{c} {\displaystyle \sum_{ij}S_{ij}^{x_{A}}\lambda_{ij}=-2\sum_{ij}U_{ij}^{x_{A}}\lambda_{ij}},\end{array}\label{S-->U} \end{equation} so the electronic energy gradients with respect to Cartesian coordinate $x$ of nucleus $A$ in the NO representation reads as \begin{equation} {\displaystyle \begin{array}{c} {\displaystyle \frac{dE_{el}}{dx_{A}}=}{\displaystyle \sum_{i}}n_{i}{\displaystyle \frac{\partial\mathcal{H}_{ii}}{\partial x_{A}}}+{\displaystyle \sum_{ijkl}}D_{kl,ij}{\displaystyle \frac{\partial\left\langle ij|kl\right\rangle }{\partial x_{A}}}\\ \\ +\;2{\displaystyle \sum_{ij}}U_{ij}^{x_{A}}\lambda_{ij}.\qquad\qquad\quad \end{array}}\label{eq:gradient} \end{equation} We may obtain second derivatives of the NOF energy by differentiating Eq. (\ref{eq:gradient}) with respect to coordinate $y$ of nucleus $B$, namely, \begin{equation} \begin{array}{c} {\displaystyle \frac{d^{2}E_{el}}{dx_{A}dy_{B}}=\sum_{i}n_{i}\frac{\partial^{2}\mathcal{H}_{ii}}{\partial x_{A}\partial y_{B}}+\sum_{ijkl}D_{kl,ij}\frac{\partial^{2}\left\langle ij|kl\right\rangle }{\partial x_{A}\partial y_{B}}}\\ \\ {\displaystyle \qquad\quad\;+\,2\sum_{ij}U_{ij}^{y_{B}}\lambda_{ij}^{x_{A}}+2\sum_{ij}\frac{d}{dy_{B}}{\textstyle \left(U_{ij}^{x_{A}}\lambda_{ij}\right)}}\\ \\ {\displaystyle {\displaystyle \quad+\sum_{m}n_{m}^{y_{B}}\frac{\partial}{\partial n_{m}}\left(\frac{dE_{el}}{dx_{A}}\right)}.\qquad\quad} \end{array}\label{eq: HESSIAN} \end{equation} The first two terms in Eq. (\ref{eq: HESSIAN}) contain the explicit derivatives of the core Hamiltonian and the two-electron integrals, respectively. 
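The step from Eq. (\ref{U+U+S=00003D00003D0}) to Eq. (\ref{S-->U}) can be checked numerically: for real orbitals it relies only on $U+U^{T}=-S^{x_{A}}$ and on the symmetry of $\lambda$ at the extremum. A sketch with random placeholder matrices (note that only the symmetric part of $U$ is fixed by $S^{x_{A}}$; the antisymmetric part, supplied later by the coupled-perturbed equations, drops out of the contraction):

```python
import numpy as np

# Check of Eq. (9): sum_ij S^x_ij lambda_ij = -2 sum_ij U_ij lambda_ij,
# assuming U + U^T = -S^x (Eq. (8)) and lambda = lambda^T (Hermiticity at
# the extremum).  All matrices are random placeholders.
rng = np.random.default_rng(3)
norb = 5
Sx = rng.standard_normal((norb, norb)); Sx = Sx + Sx.T    # overlap derivative (symmetric)
lam = rng.standard_normal((norb, norb)); lam = lam + lam.T  # symmetric lambda
skew = rng.standard_normal((norb, norb)); skew = skew - skew.T
U = -0.5 * Sx + skew    # any such U satisfies U + U^T = -S^x

lhs = np.sum(Sx * lam)
rhs = -2.0 * np.sum(U * lam)
```

The antisymmetric part of $U$ contracts to zero against the symmetric $\lambda$, which is exactly why Eq. (\ref{S-->U}) holds for every admissible $U$.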
The next two terms arise from the derivatives of NO coefficients with respect to the nuclear perturbation. Finally, $n_{m}^{y_{B}}$ represents the change in ON $m$ due to perturbation $y_{B}$, so the last term in Eq. (\ref{eq: HESSIAN}) accounts for the contribution from the perturbation of the ONs. Taking into account Eq. (\ref{orbital_EULER_equation}), the matrix of Lagrange multipliers can be written as \begin{equation} \lambda_{ij}=n_{j}\mathcal{H}_{ij}+2\,\sum_{mkl}D_{kl,jm}\left\langle im|kl\right\rangle ,\label{LAMBDA-1} \end{equation} so explicit derivatives read as \begin{equation} \lambda_{ij}^{x_{A}}=n_{j}\frac{\partial\mathcal{H}_{ij}}{\partial x_{A}}+2\,\sum_{mkl}D_{kl,jm}\frac{\partial\left\langle im|kl\right\rangle }{\partial x_{A}}.\label{lambda-explicit} \end{equation} Regarding the fourth summation of Eq. (\ref{eq: HESSIAN}), a more comprehensive expression can be obtained, namely, \begin{equation} \begin{array}{c} {\displaystyle {\textstyle {\displaystyle \sum_{ij}\frac{d}{dy_{B}}}\left(U_{ij}^{x_{A}}\lambda_{ij}\right)}=\sum_{ij}\left\{ \frac{dU_{ij}^{x_{A}}}{dy_{B}}\lambda_{ij}+U_{ij}^{x_{A}}\frac{d\lambda_{ij}}{dy_{B}}\right\} }\end{array},\label{eq:3rd summation} \end{equation} where the first term in Eq. (\ref{eq:3rd summation}) is given by \cite{Yamaguchi2011} \begin{equation} \frac{dU_{ij}^{x_{A}}}{dy_{B}}=U_{ij}^{x_{A}y_{B}}-\sum_{k}U_{ik}^{y_{B}}U_{kj}^{x_{A}}.\label{dU/dY} \end{equation} By using Eq. 
(\ref{U+U+S=00003D00003D0}) together with the orthonormality relation of the NOs we arrive at \cite{Yamaguchi2011} \begin{equation} \begin{array}{c} {\displaystyle \frac{\partial^{2}S_{ij}}{\partial x_{A}\partial y_{B}}}=U_{ij}^{x_{A}y_{B}}+U_{ji}^{x_{A}y_{B}}-{\displaystyle \sum_{m}}\left\{ S_{im}^{y_{B}}S_{jm}^{x_{A}}+S_{jm}^{y_{B}}S_{im}^{x_{A}}\right.\\ \left.-U_{im}^{y_{B}}U_{jm}^{x_{A}}-U_{jm}^{y_{B}}U_{im}^{x_{A}}\right\} +{\displaystyle {\textstyle {\displaystyle \sum_{\mu\upsilon}}}}C_{\mu i}C_{\upsilon j}{\displaystyle \frac{\partial^{2}\mathcal{S_{\mu\upsilon}}}{\partial x_{A}\partial y_{B}}}=0, \end{array}\label{eq:algebra_U_ab_S_ab=00003D00003D0} \end{equation} then \begin{equation} \begin{array}{c} 2{\displaystyle \sum_{ij}}U_{ij}^{x_{A}y_{B}}\lambda_{ij}={\displaystyle \sum_{ij}}\lambda_{ij}\left({\displaystyle \sum_{m}}\left\{ S_{im}^{y_{B}}S_{jm}^{x_{A}}+S_{jm}^{y_{B}}S_{im}^{x_{A}}\right.\right.\\ \left.\left.-U_{im}^{y_{B}}U_{jm}^{x_{A}}-U_{jm}^{y_{B}}U_{im}^{x_{A}}\right\} -{\displaystyle {\textstyle {\displaystyle \sum_{\mu\upsilon}}}}C_{\mu i}C_{\upsilon j}{\displaystyle \frac{\partial^{2}\mathcal{S_{\mu\upsilon}}}{\partial x_{A}\partial y_{B}}}\right). \end{array}\label{eq:algebra_U_ab} \end{equation} The derivative of Lagrange multipliers is obtained differentiating Eq. (\ref{LAMBDA-1}) \begin{equation} \frac{d\lambda_{ij}}{dy_{B}}=\lambda_{ij}^{y_{B}}+\sum_{k}U_{ki}^{y_{B}}\lambda_{kj}+\sum_{kl}U_{kl}^{y_{B}}Y_{ijkl},\label{lambda-derivative} \end{equation} where \[ \begin{array}{c} Y_{ijkl}=n_{j}\delta_{jl}\mathcal{H}_{ik}+2\,{\displaystyle \sum_{mn}}D_{ln,jm}\left\langle im|kn\right\rangle \\ +\;4\,{\displaystyle \sum_{mn}}D_{mn,jl}\left\langle ik|mn\right\rangle .\qquad \end{array} \] In Eq. (\ref{lambda-derivative}), the response from ONs has been omitted since it is included later. Overall the fourth summation in Eq. 
(\ref{eq: HESSIAN}) is given by \begin{equation} \begin{array}{c} {\displaystyle \sum_{ij}\frac{d}{dy_{B}}{\textstyle \left(U_{ij}^{x_{A}}\lambda_{ij}\right)}=\sum_{ij}\biggl\{ U_{ij}^{x_{A}y_{B}}\lambda_{ij}+U_{ij}^{x_{A}}\lambda_{ij}^{y_{B}}}\\ \begin{array}{c} \qquad\qquad{\displaystyle {\displaystyle \qquad\qquad\qquad+{\displaystyle {\displaystyle {\displaystyle \sum_{kl}U_{ij}^{x_{A}}}}}U_{kl}^{y_{B}}Y_{ijkl}\biggr\}}}\end{array}. \end{array}\label{eq:3rd summation-1} \end{equation} In the last summation of Eq. (\ref{eq: HESSIAN}), the derivatives with respect to the occupancies read as \begin{equation} \begin{array}{c} {\displaystyle {\displaystyle \frac{\partial}{\partial n_{m}}\left(\frac{\partial E_{el}}{\partial x_{A}}\right)=}\frac{\partial\mathcal{H}_{mm}}{\partial x_{A}}+2\sum_{ij}U_{ij}^{x_{A}}\frac{\partial\lambda_{ij}}{\partial n_{m}}}\\ \\ {\displaystyle \qquad\qquad\quad+\sum_{ijkl}\frac{\partial D_{kl,ij}}{\partial n_{m}}\frac{\partial\left\langle ij|kl\right\rangle }{\partial x_{A}}}, \end{array}\label{eq:contribution_occ} \end{equation} where \begin{equation} \begin{array}{c} {\displaystyle \frac{\partial\lambda_{ij}}{\partial n_{m}}=\delta_{mj}\mathcal{H}_{ij}+2\,\sum_{rkl}\frac{\partial D_{kl,jr}}{\partial n_{m}}\left\langle ir|kl\right\rangle }\end{array}.\label{eq:lambda-respect-OCC} \end{equation} Note that $\partial D_{kl,jr}/\partial n_{m}$ is determined by the given two-particle RDM reconstruction $D\left[n_{i},n_{j},n_{k},n_{l}\right]$ (see Eq. \ref{Eelec_1}). Substituting Eqs. (\ref{eq:3rd summation-1}) and (\ref{eq:contribution_occ}) into Eq. 
(\ref{eq: HESSIAN}), we obtain the general expression for the Hessian in the NO representation, namely, \begin{equation} \begin{array}{c} {\displaystyle \frac{d^{2}E_{el}}{dx_{A}dy_{B}}=\sum_{i}n_{i}\frac{\partial^{2}\mathcal{H}_{ii}}{\partial x_{A}\partial y_{B}}+\sum_{ijkl}D_{kl,ij}\frac{\partial^{2}\left\langle ij|kl\right\rangle }{\partial x_{A}\partial y_{B}}}\\ \\ \qquad\quad\;+\;2\;{\displaystyle \sum_{ij}}\left(U_{ij}^{y_{B}}\lambda_{ij}^{x_{A}}+U_{ij}^{x_{A}}\lambda_{ij}^{y_{B}}+U_{ij}^{x_{A}y_{B}}\lambda_{ij}\right)\\ {\displaystyle \qquad\quad+\;2\,\sum_{ijkl}U_{ij}^{x_{A}}U_{kl}^{y_{B}}Y_{ijkl}+{\textstyle {\displaystyle {\displaystyle \sum_{m}n_{m}^{y_{B}}}\left(\frac{\partial\mathcal{H}_{mm}}{\partial x_{A}}\right.}}}\\ \qquad\;\,\left.{\displaystyle +\;2\sum_{ij}U_{ij}^{x_{A}}\frac{\partial\lambda_{ij}}{\partial n_{m}}+\sum_{ijkl}\frac{\partial D_{kl,ij}}{\partial n_{m}}\frac{\partial\left\langle ij|kl\right\rangle }{\partial x_{A}}}\right). \end{array}\label{eq:Hessian_FINAL} \end{equation} In contrast to first-order energy derivatives, the calculation of the analytic Hessian requires knowledge of the NOs and ONs at the perturbed geometry, expressed in Eq. (\ref{eq:Hessian_FINAL}) by the coefficients $U$ and $n_{m}^{y_{B}}$, respectively. Both quantities are obtained from the solution of coupled-perturbed equations, which result from differentiating the variational conditions (\ref{equation_for_occupations}-\ref{orbital_EULER_equation}). It is worth noting that in the case of Eq. (\ref{orbital_EULER_equation}), it is more convenient to use its combination with its Hermitian conjugate, which gives the variational condition on the Hermiticity of the Lagrange multipliers ($\lambda-\lambda^{\dagger}=0$). \section{Coupled-perturbed equations} Coupled-perturbed equations for NOs and ONs were derived by Pernal and Baerends \cite{Pernal2006} to obtain the linear response of $\varGamma$ in a problem with a one-electron static perturbation in the Hamiltonian. 
In particular, these equations were employed in the calculation of the static polarizabilities of atoms and molecules. The formalism was later extended by Giesbertz \cite{giesbertz_thesis} to deal with pinned ONs. Here we present the coupled-perturbed equations for NOs and ONs considering from the beginning that the NOs have an explicit dependence on the perturbation (Eq. \ref{eq:perturbation}) through the position dependence of the basis functions. Therefore, instead of considering an anti-Hermitian $U$ matrix as done in Refs. \cite{Pernal2006,giesbertz_thesis}, the standard coupled-perturbed coefficients are related to the overlap matrix $S$ by Eq. (\ref{U+U+S=00003D00003D0}). In addition, the existence of a generalized Fock matrix has not been assumed in the present derivation. Our coupled-perturbed equations are obtained from the Euler equations (\ref{equation_for_occupations}-\ref{orbital_EULER_equation}), which are valid for any approximate NOF. For real orbitals, at the extremum, the total derivative of the variational condition on the Hermiticity of the Lagrange multipliers vanishes, \begin{equation} \frac{d}{dx_{A}}\left(\lambda_{ij}-\lambda_{ji}\right)=0.\label{eq:L-L} \end{equation} Taking into account Eqs. (\ref{lambda-derivative}) and (\ref{eq:lambda-respect-OCC}), Eq. (\ref{eq:L-L}) can be rewritten as \begin{equation} \begin{array}{c} \lambda_{ij}^{x_{A}}-\lambda_{ji}^{x_{A}}+{\displaystyle \sum_{k}}\left(U_{ki}^{x_{A}}\lambda_{kj}-U_{kj}^{x_{A}}\lambda_{ki}\right)+{\displaystyle \sum_{kl}\left(U_{kl}^{x_{A}}\right.}\\ \\ \left.Y_{ijkl}-U_{kl}^{x_{A}}Y_{jikl}\right)+{\displaystyle \sum_{k}}{\displaystyle \left(\frac{\partial\lambda_{ij}}{\partial n_{k}}-\frac{\partial\lambda_{ji}}{\partial n_{k}}\right)}n_{k}^{x_{A}}=0. \end{array}\label{eq: lambda_a - lambda_a} \end{equation} Eq. (\ref{U+U+S=00003D00003D0}) can be used to simplify the first and second summations in Eq. 
(\ref{eq: lambda_a - lambda_a}), namely, \begin{equation} {\displaystyle \begin{array}{c} {\displaystyle \sum_{k}U_{ki}^{x_{A}}\lambda_{kj}={\displaystyle \sum_{k>l}}\left[U_{kl}^{x_{A}}\left(\lambda_{kj}\delta_{li}-\lambda_{lj}\delta_{ki}\right)\right.}\\ \\ \qquad\qquad\qquad\quad-\left.S_{kl}^{x_{A}}\lambda_{lj}\delta_{ki}\right]{\displaystyle -\frac{1}{2}\,\sum_{k}S_{kk}^{x_{A}}\lambda_{kj}\delta_{ki}}, \end{array}}\label{eq:algebra} \end{equation} \begin{equation} \begin{array}{c} {\displaystyle \sum_{kl}}U_{kl}^{x_{A}}Y_{ijkl}={\displaystyle \sum_{k>l}}\left[U_{kl}^{x_{A}}\left(Y_{ijkl}-Y_{ijlk}\right)\right.\\ \\ \begin{array}{c} \qquad\qquad\qquad\quad-\left.S_{kl}^{x_{A}}Y_{ijlk}\right]{\displaystyle -\frac{1}{2}\,\sum_{k}S_{kk}^{x_{A}}Y_{ijkk}.}\end{array} \end{array}\label{eq:algebra_2} \end{equation} Accordingly, Eq. (\ref{eq: lambda_a - lambda_a}) can be rewritten as \begin{equation} \begin{array}{c} \lambda_{ij}^{x_{A}}-\lambda_{ji}^{x_{A}}+{\displaystyle \sum_{k}\left(\frac{\partial\lambda_{ij}}{\partial n_{k}}-\frac{\partial\lambda_{ji}}{\partial n_{k}}\right)}n_{k}^{x_{A}}\qquad\\ \\ -{\displaystyle \frac{1}{2}\,}{\displaystyle \sum_{k}}S_{kk}^{x_{A}}\left(\delta_{ki}\lambda_{kj}-\delta_{kj}\lambda_{ki}+Y_{ijkk}-Y_{jikk}\right)\\ \\ +\;{\displaystyle \sum_{k>l}}U_{kl}^{x_{A}}\left(\delta_{li}\lambda_{kj}-\delta_{ki}\lambda_{lj}-\delta_{lj}\lambda_{ki}+\delta_{kj}\lambda_{li}\right.\\ \qquad\quad\left.+Y_{ijkl}-Y_{ijlk}-Y_{jikl}+Y_{jilk}\right)\\ \\ {\displaystyle -{\displaystyle \sum_{k>l}}S_{kl}^{x_{A}}\left(\delta_{ki}\lambda_{lj}-\delta_{kj}\lambda_{li}+Y_{ijlk}-Y_{jilk}\right)}=0 \end{array}\label{eq:lambda-lamda_definitive} \end{equation} Let us now consider the Eq. (\ref{equation_for_occupations}) involving derivatives with respect to ONs. 
A perturbation up to first order transforms it into \begin{equation} \begin{array}{c} {\displaystyle \frac{\partial\mathcal{H}_{mm}}{\partial x_{A}}+{\displaystyle \sum_{ijkl}\frac{\partial D_{kl,ij}}{\partial n_{m}}\frac{\partial\left\langle ij|kl\right\rangle }{\partial x_{A}}}+}{\displaystyle \sum_{rijkl}\frac{\partial^{2}D_{kl,ij}}{\partial n_{m}\partial n_{r}}}\left\langle ij|kl\right\rangle n_{r}^{x_{A}}\\ \\ +\;2{\displaystyle \sum_{ij}}\left[U_{ij}^{x_{A}}\left(\delta_{jm}\mathcal{H}_{ij}+2{\displaystyle \sum_{rkl}\frac{\partial D_{kl,jr}}{\partial n_{m}}}\left\langle ir|kl\right\rangle \right)\right]=\mu^{x_{A}}. \end{array}\label{eq:coupled_occ_perturbed} \end{equation} Taking into account Eq. (\ref{U+U+S=00003D00003D0}), Eq. (\ref{eq:coupled_occ_perturbed}) can be rewritten in compact form as \begin{equation} {\displaystyle \sum_{r}W_{mr}n_{r}^{x_{A}}+\sum_{i>j}U_{ij}^{x_{A}}\left(E_{ij}^{m}-E_{ji}^{m}\right)=F_{m}^{x_{A}},}\label{eq:coupled_occ_response} \end{equation} where \[ \begin{array}{c} F_{m}^{x_{A}}=\mu^{x_{A}}-{\textstyle \left({\displaystyle \frac{\partial\mathcal{H}_{mm}}{\partial x_{A}}}+{\displaystyle {\textstyle {\displaystyle \sum_{ijkl}\frac{\partial D_{kl,ij}}{\partial n_{m}}\frac{\partial\left\langle ij|kl\right\rangle }{\partial x_{A}}}}}\right)}\\ \\ +\;{\displaystyle \sum_{i>j}S_{ij}^{x_{A}}E_{ji}^{m}+{\textstyle \frac{1}{2}}\,\sum_{i}S_{ii}^{x_{A}}E_{ii}^{m}},\qquad\quad\\ \\ E_{ij}^{m}=2\delta_{jm}\mathcal{H}_{ij}+4{\displaystyle \sum_{rkl}\frac{\partial D_{kl,jr}}{\partial n_{m}}}\left\langle ir|kl\right\rangle ,\qquad\qquad\\ \\ W_{mr}={\displaystyle \sum_{ijkl}\frac{\partial^{2}D_{kl,ij}}{\partial n_{m}\partial n_{r}}}\left\langle ij|kl\right\rangle .\qquad\qquad\qquad\qquad \end{array} \] Note that $E_{ij}^{m}$ relates to $\partial\lambda_{ij}/\partial n_{m}$ by a factor $1/2$ according to Eq. (\ref{eq:lambda-respect-OCC}), so Eqs. 
(\ref{eq:lambda-lamda_definitive}) and (\ref{eq:coupled_occ_response}) can be brought together to obtain the complete expression for the coupled-perturbed NOF equations \begin{equation} {\displaystyle \begin{array}{c} {\displaystyle \forall_{i>j}\;\sum_{k>l}A_{ij,kl}U_{kl}^{x_{A}}+\sum_{k}\left(E_{ij}^{k}-E_{ji}^{k}\right)n_{k}^{x_{A}}=B_{ij}^{x_{A}}}\\ \\ \quad{\displaystyle \forall_{i}\quad\sum_{k>l}\left(E_{kl}^{i}-E_{lk}^{i}\right)U_{kl}^{x_{A}}+\sum_{k}W_{ik}n_{k}^{x_{A}}=F_{i}^{x_{A}}} \end{array}}\label{eq:CP-NOF} \end{equation} where \[ \begin{array}{c} {\displaystyle A_{ij,kl}=\delta_{li}\lambda_{kj}-\delta_{ki}\lambda_{lj}-\delta_{lj}\lambda_{ki}+\delta_{kj}\lambda_{li}\qquad}\\ \\ +\;Y_{ijkl}-Y_{ijlk}-Y_{jikl}+Y_{jilk},\quad \end{array} \] \[ \begin{array}{c} {\displaystyle B_{ij}^{x_{A}}=\sum_{k>l}S_{kl}^{x_{A}}\left(\delta_{ki}\lambda_{lj}-\delta_{kj}\lambda_{li}+Y_{ijkl}-Y_{jilk}\right)}\quad\\ \\ {\displaystyle \qquad\quad+\,\frac{1}{2}\,}{\displaystyle \sum_{k}}S_{kk}^{x_{A}}\left(\delta_{ki}\lambda_{kj}-\delta_{kj}\lambda_{ki}+Y_{ijkk}-Y_{jikk}\right)\\ \\ -\,\lambda_{ij}^{x_{A}}+\lambda_{ji}^{x_{A}}.\qquad\qquad\qquad\qquad\qquad\; \end{array} \] It is worth noting that the coupled-perturbed equations given by Eq. (\ref{eq:CP-NOF}) are completely general and can be easily implemented, so that only an expression for the reconstructed $D\left[n_{i},n_{j},n_{k},n_{l}\right]$ is required. The formulation of these equations presented here exploits Eq. (\ref{U+U+S=00003D00003D0}) to calculate only the necessary $U$ coefficients, namely, the lower (or upper) triangular block of the matrix $U$. The matrix form of Eq. 
(\ref{eq:CP-NOF}) is \begin{equation} \left(\begin{array}{cc} A & E-E^{\dagger}\\ E-E^{\dagger} & W \end{array}\right)\left(\begin{array}{c} U^{x_{A}}\\ n^{x_{A}} \end{array}\right)=\left(\begin{array}{c} B^{x_{A}}\\ F^{x_{A}} \end{array}\right),\label{eq:CP-NOF___MATRIX} \end{equation} where $E^{\dagger}$ denotes the conjugate transpose acting only on the subindices; this form makes clear the symmetric nature of the square matrix. The latter has to be computed and inverted only once, since it is independent of the perturbation $\delta x_{A}$ and depends only on the unperturbed NOs and ONs.\bigskip{} \section{Closing remarks} Simple analytic expressions have been derived for the computation of the second-order energy derivatives with respect to nuclear displacements in the context of natural orbital functional theory. An alternative expression for the analytic gradients in terms of the NOs is given as well. In contrast to first-order energy derivatives, the calculation of the analytic Hessian requires knowledge of the NOs and ONs at the perturbed geometry, which are obtained from the solution of coupled-perturbed equations. The coupled-perturbed equations were obtained from the corresponding variational Euler equations, taking into account that the basis functions also depend explicitly on the geometry perturbations. Consequently, the linear response of both NOs and ONs to non-external perturbations of the Hamiltonian, as in the case of nuclear geometry displacements, can be easily obtained by solving a set of equations that only requires specifying the reconstruction of the second-order RDM in terms of the ONs. In geometry optimization problems, algorithms that employ the Hessian are superior to methods that use only the gradient. The Hessian can be used for a more efficient search for an extremum, as well as to test whether an extremum is a minimum or a maximum. 
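As a practical note, the "build and factorize once, solve for every perturbation" strategy allowed by Eq. (\ref{eq:CP-NOF___MATRIX}) can be sketched as follows. All blocks below are random, well-conditioned placeholders standing in for $A$, $W$ and $E-E^{\dagger}$; only the symmetric block structure is taken from the equation.

```python
import numpy as np

# Sketch of the coupled-perturbed linear system: the symmetric coefficient
# matrix depends only on unperturbed NOs/ONs, so it is assembled once and
# reused for every nuclear perturbation x_A (one right-hand-side column each).
rng = np.random.default_rng(4)
p, q = 6, 3                            # sizes of the U-block and the ON-block (toy)
A = rng.standard_normal((p, p)); A = A + A.T
W = rng.standard_normal((q, q)); W = W + W.T
EmEd = rng.standard_normal((p, q))     # placeholder for the block E - E^dagger

# Shift by a multiple of the identity only to keep the toy system well conditioned
M = np.block([[A, EmEd], [EmEd.T, W]]) + 10.0 * np.eye(p + q)
rhs = rng.standard_normal((p + q, 3))  # one column [B^x; F^x] per perturbation
sol = np.linalg.solve(M, rhs)
U_x, n_x = sol[:p], sol[p:]            # coupled-perturbed U coefficients and ON responses
```

In a production code one would factorize $M$ once (e.g. a symmetric factorization) and back-substitute for each of the $3N_{\mathrm{atoms}}$ right-hand sides.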
The formulas presented here constitute the groundwork for practical calculations involving second-order energy derivatives with respect to nuclear displacements, such as the computation of harmonic vibrational frequencies and thermochemical analysis. \subsection*{Acknowledgments} Financial support comes from Eusko Jaurlaritza (Ref. IT588-13) and Ministerio de Economia y Competitividad (Ref. CTQ2015-67608-P). \foreignlanguage{american}{One of us (I.M.) is grateful to the Vice-Rectory for research of the UPV/EHU for the PhD grant }(PIF//15/043\foreignlanguage{american}{).} The SGI/IZO\textendash SGIker UPV/EHU is gratefully acknowledged for generous allocation of computational resources.
\section{Introduction} In this article, by a semisimple algebraic group we always mean a {\em connected} semisimple group. Let $k$ be a field and let $G$ be a linear algebraic group over $k$. Let $H$ and $H'$ be two algebraic $k$-subgroups of $G$. Let $K/k$ be a field extension. We write $H_K=H\times_k K$ and $H'_K=H'\times_k K$. We say that $H$ and $H'$ are {\em conjugate over $K$} if there exists an element $g\in G(K)$ such that \begin{equation*} H'_K =g\cdot H_K \cdot g^{-1}. \end{equation*} We start with the following finiteness results for the set of conjugacy classes of semisimple subgroups: \begin{proposition} \label{finiteconjugation} Let $G$ be a linear algebraic group over an algebraically closed field $k$ of characteristic 0. Then the set $C(G)$ of $G(k)$-conjugacy classes of semisimple $k$-subgroups of $G$ is finite. \end{proposition} \begin{corollary}\label{finiteconjugation-R} Let $k$ be a field of characteristic 0 of type (F) in the sense of Serre \cite{SerreCG}, III.4.2, for example, the field of real numbers $\R$ or a $p$-adic field (a finite extension of the field of $p$-adic numbers $\Q_p$). Let $G$ be a linear algebraic group over $k$. Then the set $C(G)$ of $G(k)$-conjugacy classes of semisimple $k$-subgroups of $G$ is finite. \end{corollary} Proposition \ref{finiteconjugation} and Corollary \ref{finiteconjugation-R} will be proved in Section \ref{s:conjugation}. Our main results are Theorems \ref{t:abs} and \ref{t:abs-R} below. \begin{theorem}\label{t:abs} Let $k$ be a field of characteristic 0 and let $\kbar$ be a fixed algebraic closure of $k$. Let $G$ be a linear algebraic group over $k$. There exists a natural number $d$ depending only on $G_\kbar$ with the following property: if $H$ and $H'$ are two semisimple $k$-subgroups of $G$ that are conjugate over $\kbar$, then they are conjugate over a finite extension $K/k$ of degree $[K:k]\le d$. 
\end{theorem} In Section \ref{s:reduction}, we shall deduce Theorem \ref{t:abs} from Proposition \ref{finiteconjugation} and the following theorem: \begin{theorem}\label{t:cohom} Let $k$ be a field of characteristic 0 and let $\kbar$ be a fixed algebraic closure of $k$. Let $N$ be a linear algebraic group over $k$. Then there exists a natural number $d=d(N_\kbar)$ such that any cohomology class $\xi\in H^1(k,N)$ can be killed by a finite field extension of degree at most $d$ (that is, there exists a field extension $K=K(N,\xi)$ of $k$ of degree $[K:k]\le d$ such that the image of $\xi$ in $H^1(K,N)$ is 1). \end{theorem} We shall prove Theorem \ref{t:abs} in Section \ref{s:reduction} by applying Theorem \ref{t:cohom} to the normalizer $N=\mathcal{N}_G(H)$ of $H$ in $G$. For a proof of Theorem \ref{t:cohom}, see Section \ref{s:cohom}. We need a version of Theorem \ref{t:abs} for real number fields. A {\em real number field} is a finite extension $k$ of $\Q$ contained in the field of real numbers $\R$, or, in other words, a finite extension $k$ of $\Q$ equipped with an embedding into $\R$. \begin{theorem}\label{t:abs-R} Let $G$ be a linear algebraic group over a {\em real number field} $k\subset \R$. There exists a natural number $d_\R$ depending only on $G_\C$ with the following property: if $H$ and $H'$ are two semisimple $k$-subgroups of $G$ that are conjugate over $\R$, then they are conjugate over some finite {\em real extension} $K/k$, \ $k\subset K\subset \R$, of degree $[K:k]\le d_\R$. \end{theorem} In Section \ref{s:reduction}, we shall deduce Theorem \ref{t:abs-R} from Corollary \ref{finiteconjugation-R} and the following theorem: \begin{theorem}\label{t:cohom-R} Let $k\subset \R$ be a real number field. Let $N$ be a linear algebraic group over $k$. Then there exists a natural number $d_\R=d_\R(N_\C)$ such that any cohomology class $\xi\in H^1(k,N)$ becoming 1 over $\R$ can be killed by a finite real extension of degree at most $d_\R$. 
\end{theorem} We shall prove Theorem \ref{t:cohom-R} in Section \ref{s:cohom-R}. This is the most involved proof in the present article. The current proof, proposed by an anonymous referee, uses a result of G.~Lucchini Arteche \cite{LA15} (see Proposition \ref{p:LA15}) and real approximation for homogeneous spaces with cyclic finite stabilizers (see Theorem \ref{p:LA-cyclic}). Theorem \ref{t:cohom-R} can also be proved without using the cited result of Lucchini Arteche (see version 1 of \cite{BDR}). \begin{subsec} Now let $k$ be a real number field and let $G$ be a linear algebraic group over $k$. Consider the set $C(G_\R)$ of $G(\R)$-conjugacy classes of semisimple $\R$-subgroups of $G$; by Corollary \ref{finiteconjugation-R}, this set is finite. Let $C_0\subset C(G_\R)$ denote the set of those $c\in C(G_\R)$ for which there exists a semisimple subgroup in $c$ {\em defined over $k$}. For each $c\in C_0$\hs, let us choose such a semisimple $k$-subgroup $H_c\subset G$ in $c$. We obtain a finite set of subgroups $\Omega=\{H_c\ |\ c\in C_0\}$ with the following property: any semisimple $k$-subgroup of $G$ is conjugate over $\R$ to some $H_c\in\Omega$. The next corollary follows immediately from Theorem \ref{t:abs-R}. \end{subsec} \begin{corollary}\label{finite-real} Let $k$ be a real number field and let $G$ be a linear algebraic group over $k$. Let $\Omega=\{H_c\ |\ c\in C_0\}$ be a finite set of semisimple $k$-subgroups of $G$ as above. Then there exists a natural number $d=d(G_\C)$ such that any semisimple $k$-subgroup $H\subset G$ is conjugate to some $H_c$ ($c\in C_0$) over a finite real extension $K/k$, $k\subset K\subset\R$, of degree $[K:k]\le d$. \end{corollary} \noindent {\em Motivation.} This article was motivated by earlier work of Daw and Ren on the Zilber--Pink conjecture for Shimura varieties. The relevance of Corollary \ref{finite-real} is explained in Section 12 of their article \cite{DawRen17}. 
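As a quick illustration of the shape of these bounds (this example is ours and is not part of the article's argument), Theorem \ref{t:cohom} can be verified directly for the finite group $\mu_n$ of $n$-th roots of unity:

```latex
% Illustration (ours): Theorem \ref{t:cohom} for N = \mu_n.
% In characteristic 0, Kummer theory (via Hilbert's Theorem 90) gives
% the identification below; the class of a \in k^\times becomes an
% n-th power, hence trivial, over K = k(\sqrt[n]{a}).
\[
H^1(k,\mu_n)\;\simeq\; k^{\times}/(k^{\times})^{n},
\qquad
\xi=[a]\ \text{is killed by}\ K=k\bigl(\sqrt[n]{a}\hs\bigr),
\quad [K:k]\le n=|\mu_n|.
\]
```

This matches the bound $d=|\Gbar|$ of Lemma \ref{l:finite-k} for finite groups.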
\begin{notation*} Let $k$ be a field of characteristic 0. In this article, by a $k$-variety we mean a separated scheme of finite type over $k$, not necessarily irreducible. By an algebraic group over $k$, or, shorter, a $k$-group, we always mean a {\em linear} algebraic group over $k$, that is, an affine group scheme of finite type over $k$, not necessarily connected. If $G$ is a $k$-group, we write $G^0$ for the identity component of $G$. If, moreover, $G$ is connected, we denote by $R_u(G)$ the unipotent radical of $G$. By a $k$-subgroup of $G$ we mean an algebraic $k$-subgroup of $G$. \end{notation*} \paragraph{\em Acknowledgements.} Jinbo Ren would like to thank his supervisor Emmanuel Ullmo for regular discussions and constant support during the preparation of this article and he would like to thank Yongqi Liang for several useful discussions. The authors are grateful to Friedrich Knop for his MathOverflow answer \cite{Knop-MO} to Jinbo Ren's question, to Sean Lawton for the reference to Richardson's article \cite{Richardson}, and to Giancarlo Lucchini Arteche for useful comments. We thank the anonymous referees for their helpful comments. We especially thank one of the referees for his/her suggestion to use a result of Lucchini Arteche and a result on real approximation for homogeneous spaces with cyclic finite stabilizers, which permitted us to shorten the proofs significantly. \section{Conjugation over an algebraically closed field\\ and over a field of type (F)} \label{s:conjugation} \begin{subsec} {\em Proof of Proposition \ref{finiteconjugation}.} Write $\gGer=\Lie G$; the group $G(k)$ acts on $\gGer$ via the adjoint representation. Let $\hh\subset \gGer$ be a Lie subalgebra; $\hh$ acts on itself, on $\gGer$, and on the quotient space $\gGer/\hh$, so we may consider the cohomology space $H^1(\hh,\gGer/\hh)$. 
By Richardson \cite{Richardson}, Corollary (b) in the introduction, there exist only finitely many $G(k)$-conjugacy classes of Lie subalgebras $\hh\subset \gGer$ such that $H^1(\hh,\gGer/\hh)=0$. If $\hh$ is semisimple, then for any $\hh$-module $M$ we have $H^1(\hh,M)=0$ (see Chevalley and Eilenberg \cite{Chevalley-Eilenberg}, Theorem 25.1). In particular, we have $H^1(\hh,\gGer/\hh)=0$. It follows that there exist only finitely many $G(k)$-conjugacy classes of {\em semisimple} Lie subalgebras of $\gGer$. Now let $H_1$ and $H_2$ be two semisimple subgroups of $G$. Let $\hh_1$ and $\hh_2$ denote their respective Lie algebras. Assume that $\hh_1$ and $\hh_2$ are conjugate under $G(k)$, that is, there exists $g\in G(k)$ such that $\Ad(g)(\hh_1)=\hh_2$. Since ${\rm char}(k)=0$, a connected algebraic subgroup $H\subset G$ is uniquely determined by its Lie algebra $\Lie H\subset \gGer$ (see Humphreys \cite{Humphreys}, Section 13.1). It follows that $g\hs H_1\hs g^{-1}= H_2$. We see that if $\Lie H_1$ and $\Lie H_2$ are conjugate, then $H_1$ and $H_2$ are conjugate. We conclude that there are only finitely many conjugacy classes of semisimple subgroups of $G$, which proves the proposition. \qed \end{subsec} \begin{remark} The properties `connected' and `semisimple' required of the subgroups in Proposition \ref{finiteconjugation} are necessary. Indeed, let $G$ be the two-dimensional torus $\G_{m,k}^2$\hs, where $\G_{m,k}$ denotes the multiplicative group over $k$. Then $G$ has infinitely many different $k$-subgroups, and even infinitely many different connected $k$-subgroups, and, since $G$ is commutative, no two distinct ones are conjugate in $G$. Moreover, let $n\in\Z_{>0}$. Set \[S_n=\{(x_1,x_2)\in \G_{m,k}^2\ |\ x_2=x_1^n\}\hs,\quad H_n=\G_{a,k}^2\rtimes S_n\subset \G_{a,k}^2\rtimes \G_{m,k}^2\hs,\] where $\G_{a,k}$ denotes the additive group over $k$, on which $\G_{m,k}$ naturally acts. 
We embed \[H_n\subset \G_{a,k}^2\rtimes \G_{m,k}^2\into \GL_{4,k}\into {\rm SL}_{5,k}\hs.\] One can easily check that the connected solvable algebraic groups $H_n$ are pairwise non-isomorphic, and therefore, they are not conjugate in the simple group ${\rm SL}_{5,k}$. \end{remark} \begin{subsec}\label{ss:gen-1} Let $k$ be a field of characteristic 0 and let $\kbar$ be a fixed algebraic closure of $k$. Let $G$ be a linear algebraic group over $k$. Let $C$ be a conjugacy class of connected $\kbar$-subgroups $\Hbar'$ of $G_\kbar$ that contains a subgroup $H_\kbar=H\times_k \kbar$ defined over $k$. Then $C$ is the set of $\kbar$-points of a $k$-variety $V$ on which $G$ acts transitively by \[ g*\Hbar'=g\cdot \Hbar'\cdot g^{-1}.\] The stabilizer of the $k$-point $H$ of this variety is $N=\mathcal{N}_G(H)$. Therefore, we may identify $V$ with $G/N$. We identify the set of $k$-subgroups $H'\subset G$ that are conjugate to $H$ over $\kbar$ with $(G/N)(k)$, and we identify the set of $G(k)$-conjugacy classes of such $k$-subgroups with the set of orbits of $G(k)$ in $(G/N)(k)$. \end{subsec} \begin{subsec} {\em Proof of Corollary \ref{finiteconjugation-R}.} Let $k$ be a field of characteristic 0 of type (F) in the sense of Serre. By Serre \cite{SerreCG}, III.4.4, Theorem 5, the set of orbits of $G(k)$ in $(G/N)(k)$ in Subsection \ref{ss:gen-1} is finite. Therefore, the set of $G(k)$-conjugacy classes of connected $k$-subgroups in a $G(\kbar)$-conjugacy class is finite. Let $C_k$ denote the set of $G(k)$-conjugacy classes of connected algebraic $k$-sub\-groups, and let $C_\kbar$ denote the corresponding set for $\kbar$. Then we have a canonical map \[ C_k\to C_\kbar.\] As explained above, since $k$ is of type (F), all fibers of this map are finite. By Proposition \ref{finiteconjugation}, the set of $G(\kbar)$-conjugacy classes of connected {\em semisimple} $\kbar$-subgroups is finite, and the corollary follows. 
\qed \end{subsec} \section{Reductions} \label{s:reduction} \begin{subsec} We show that, in order to prove Theorem \ref{t:abs}, it suffices to prove Theorem \ref{t:cohom}. Indeed, by Proposition \ref{finiteconjugation}, the set $C$ of conjugacy classes of semisimple subgroups of $G_\kbar$ is finite. Therefore, it suffices to show that, if $H,H'\subset G$ are two such $k$-subgroups {\em in a given $G(\kbar)$-conjugacy class $c\in C$}, then they are conjugate over a finite extension $K$ of $k$ of degree at most $d_c$\hs, where $d_c$ depends only on $G_\kbar$ and $c$. Set \[Y=\{\hs g\in G(\kbar)\ |\ gHg^{-1}=H'\hs\}\] and write $N=\sN_G(H)$. Since $H$ and $H'$ are connected and ${\rm char}(k)=0$, by Humphreys \cite{Humphreys}, Theorem 13.1, we have \[Y=\{\hs g\in G(\kbar)\ |\ \Ad(g) (\Lie H)=\Lie H'\hs\},\] and hence, the variety $Y$ is defined over $k$. The $k$-group $N$ acts on the right on $Y$ by \[g\mapsto gn \qquad (g\in G(\kbar),\ n\in N(\kbar)).\] This action is simply transitive (over $\kbar$). Hence, $Y$ is a principal homogeneous space (torsor) of $N$, and so we obtain a cohomology class $\xi=\xi(H,H')\in H^1(k,N)$. The two subgroups $H$ and $H'$ are conjugate over a field extension $K$ of $k$ if and only if $Y$ has a $K$-point, hence, if and only if the extension $K/k$ kills $\xi$. Since, by Theorem \ref{t:cohom}, the class $\xi$ can be killed by a finite field extension $K/k$ of degree at most $d(N_\kbar)$, the subgroups $H$ and $H'$ are conjugate over this field $K$. Clearly, $N_\kbar$ depends (up to isomorphism) only on $G_\kbar$ and $c$. \qed \end{subsec} \begin{subsec} We show that, in order to prove Theorem \ref{t:abs-R}, it suffices to prove Theorem \ref{t:cohom-R}. Indeed, by Corollary \ref{finiteconjugation-R}, the set $C_\R$ of $G(\R)$-conjugacy classes of semisimple subgroups of $G_\R$ is finite. 
Therefore, it suffices to show that, if $H,H'\subset G$ are two semisimple $k$-subgroups of $G$ {\em in a given $G(\R)$-conjugacy class $c\in C_\R$}, then they are conjugate over a finite real extension $K$ of $k$ of degree at most $ d_{\R,c}$, where $d_{\R,c}$ depends only on $G_\C$ and $c$. As above, starting from two semisimple $k$-subgroups $H,H'\subset G$ that are conjugate over $\R$, we obtain a cohomology class \[\xi\in\ker[H^1(k,N)\to H^1(\R,N)],\] where $N=\mathcal{N}_G(H)$. By Theorem \ref{t:cohom-R}, the class $\xi$ can be killed by a finite real extension $K/k$ of degree at most $d_\R(N_\C)$. Then the subgroups $H$ and $H'$ are conjugate over this field $K$. Clearly, $N_\C$ depends (up to isomorphism) only on $G_\C$ and the conjugacy class of $H$ over $\C$. \qed \end{subsec} \section{Taking quotient by a unipotent normal subgroup} \label{s:Serre-Sansuc} In this section $k$ is a field of characteristic 0. We shall need the following result of Sansuc: \begin{proposition}[\cite{Sansuc}, Lemma 1.13] \label{l:Sansuc} Let $G$ be an algebraic group over a field $k$ of characteristic 0, and let $U\subset G$ be a unipotent $k$-subgroup. Assume that $U$ is normal in $G$. Then the canonical map $H^1(k,G)\to H^1(k,G/U)$ is bijective. \end{proposition} Sansuc assumes that $G$ is connected, but his proof does not use this assumption. As Sansuc's proof is concise, we provide a more detailed proof. We need a lemma. \begin{lemma}[well-known] \label{l:comm-unip} Let $U$ be a {\emm commutative} unipotent algebraic group over a field $k$ of characteristic 0. Then: \begin{enumerate} \item[(i)] $U\simeq \G_{a,k}^d$, where $\G_{a,k}$ is the additive group and $d\ge 0$ is an integer (the dimension of $U$). \item[(ii)] Any twisted form of $U$ is isomorphic to $U$. \item[(iii)] $H^n(k,U)=0$ for all $n\ge 1$. \end{enumerate} \end{lemma} \begin{proof} For (i) see, for instance, Milne \cite[Corollary 14.33]{Milne}. 
Any twisted form of $U=\G_{a,k}^d$ is again a commutative unipotent $k$-group of the same dimension $d$, and by (i) it is isomorphic to $\G_{a,k}^d$, whence (ii). By Serre \cite{SerreLF}, X.1, Proposition 1, we have $H^n(k,\G_{a,k})=0$ for all $n\ge1$, and (iii) follows. \end{proof} \begin{subsec} {\em Proof of Proposition \ref{l:Sansuc}.} We prove the proposition by induction on the dimension of $U$. We may and shall assume that $U\ne \{1\}$. Let $Z=Z(U)$ denote the center of $U$. Then $Z\neq 1$ because $U$ is nilpotent. Clearly, $Z$ is a commutative unipotent group. By Lemma \ref{l:comm-unip}(i) we have $\dim Z>0$. Since $Z$ is a characteristic subgroup of $U$, we see that $Z$ is normal in $G$. Set $U'=U/Z$ and $G'=G/Z$. Then $U'$ is a normal unipotent subgroup of $G'$ and $\dim \,U'<\dim\, U$. We factorize the canonical homomorphism $G\to G/U$ as \[ G\labelto{p} G'\labelto{q} G'/U'=G/U.\] We obtain a factorization of the canonical map $H^1(k,G)\to H^1(k,G/U)$ as \[H^1(k,G) \labelto{p_*} H^1(k,G') \labelto{q_*} H^1(k,G'/U')=H^1(k,G/U).\] Since $\dim U'<\dim U$, by the induction hypothesis the map $q_*$ is bijective. It remains to show that the map $p_*$ is bijective. Consider the short exact sequence of $k$-groups \[1\lra Z\labelto{i} G\labelto{p} G/Z\lra 1\] and the induced cohomology exact sequence \[H^1(k,Z)\labelto{i_*} H^1(k,G)\labelto{p_*} H^1(k,G/Z).\] Since by Lemma \ref{l:comm-unip} we have $H^1(k,\hs_c Z)=0$ and $H^2(k,\hs_c Z)=0$ for any cocycle $c\in Z^1(k,G/Z)$, by Serre \cite{SerreCG}, I.5.5, Corollary 2 of Proposition 39, and I.5.6, Corollary of Proposition 41, the map $p_*$ is bijective, which completes the proof of the proposition. \qed \end{subsec} \section{Killing a first cohomology class over an arbitrary field\\ of characteristic 0} \label{s:cohom} In this section we prove Theorem \ref{t:cohom}, which we state here in a more precise form. \begin{theorem}\label{t:cohom-bis} Let $\kbar$ be an algebraically closed field of characteristic 0. 
Let $\Gbar$ be a linear algebraic group over $\kbar$. Then there exists a natural number $d=d(\Gbar)$ with the following property: \begin{property*}[$*$] For any subfield $k\subset \kbar$ such that $\kbar$ is an algebraic closure of $k$, for any $k$-form $G$ of $\Gbar$, and for any cohomology class $\xi\in H^1(k,G)$, the class $\xi$ can be killed by a finite field extension $K/k$ of degree at most $d$. \end{property*} \end{theorem} First, we prove Theorem \ref{t:cohom-bis} for finite groups. \begin{lemma}\label{l:finite-k} Let $\Gbar$ be a finite $\kbar$-group. Then $\Gbar$ satisfies $(*)$ with $d=|\Gbar|$. \end{lemma} \begin{proof} Let $G$ be a $k$-form of $\Gbar$. A cohomology class $\xi\in H^1(k,G)$ defines a torsor $\sT=\sT_\xi$ of $G$. By definition, we have $|\sT(\kbar)|=|\Gbar|$. For any point $t\in \sT(\kbar)$, the $\Gal(\kbar/k)$-orbit of $t$ has cardinality at most $|\Gbar|$, hence, $t$ is defined over a finite extension of $k$ in $\kbar$ of degree at most $|G|$. Since this extension kills $\xi$, the proof is complete. \end{proof} In what follows, we shall need two known results. \begin{proposition}[Minkowski, cf.~Friedland \cite{finitegln}] \label{p:minkowski} For any natural number $r$, there is a constant $\beta=\beta(r)>0$ such that every finite subgroup of $\GL_r(\ZZ)$ has cardinality at most $\beta$. \end{proposition} \begin{proposition}[Lucchini Arteche \cite{LA15}, Corollary 18] \label{p:LA15} Let $k$ be a field of characteristic 0. Let $G$ be a linear algebraic $k$-group with reductive identity component. Let $T$ be a maximal $k$-torus in the identity component $G^0$ of $G$, and let $W=\sN_G(T)/T$, which is a finite group. We write $r$ for the dimension of $T$, and write $w$ for the order of $W$. Let $L/k$ be a field extension splitting $T$ whose degree we denote by $d_T$. 
Then there exists a finite $k$-subgroup $H$ of $G$ of order at most $w^{2r+1}d_T^{2r}$ with the property \begin{property*}[\rm SUR] For every field $K\supset k$, the natural map $H^1(K,H)\to H^1(K,G)$ is surjective. \end{property*} \end{proposition} \begin{corollary}\label{c:LA} Let $\kbar$ be an algebraically closed field of characteristic 0, and let $\Gbar$ be a linear algebraic group over $\kbar$ with reductive identity component. Let $k\subset \kbar$ be a subfield. Then there exists a positive integer $\gamma(\Gbar)$ depending only on $\Gbar$ such that, for any $k$-form $G$ of $\Gbar$, there exists a finite $k$-subgroup $H$ of $G$ of order at most $\gamma(\Gbar)$ with the property {\rm (SUR)} of Proposition \ref{p:LA15}. \end{corollary} \begin{proof} In Proposition \ref{p:LA15} we may take for $L$ the minimal splitting field of the maximal $k$-torus $T$. Then the Galois group $\Gal(L/k)$ acts faithfully on the character group $\X^*(\overline{T})\simeq \Z^r$, and hence, \[d_T=[L:k]=\#\Gal(L/k)\le\beta(r),\] cf.~Proposition \ref{p:minkowski}. Clearly, $r$ and $w$ in Proposition \ref{p:LA15} depend only on $\Gbar$, and we may take $\gamma(\Gbar)=w^{2r+1}\hs \beta(r)^{2r}$. \end{proof} \begin{subsec} \label{ss:pf-gen-0} \emph{Reduction of Theorem \ref{t:cohom-bis} to the case when $\Gbar^0$ is reductive.} Let $G$ be a $k$-form of $\Gbar$. Consider the normal unipotent subgroup $R_u(G^0)\subset G$, where $^0$ denotes the identity component and $R_u$ denotes the unipotent radical. Write $G^\red=G/R_u(G^0)$. Similarly, write $\Gbar^\red=\Gbar/R_u(\Gbar^0)$. It is clear that the identity component of $\Gbar^\red$ is reductive and that $G^\red$ is a $k$-form of $\Gbar^\red$. Let $\pi\colon G\to G^\red$ denote the canonical surjective homomorphism. By Proposition \ref{l:Sansuc}, for any field extension $K/k$, the canonical map $\pi_*\colon H^1(K,G)\to H^1(K,G^\red)$ is bijective. 
We see that for any $\xi\in H^1(k,G)$, an extension $K/k$ kills $\xi$ if and only if it kills $\pi_*(\xi)$. We conclude that if Theorem \ref{t:cohom-bis} is true for $\Gbar^\red$ with bound $d(\Gbar^\red)$, then this theorem is also true for $\Gbar$ with the same bound $d(\Gbar)=d(\Gbar^\red)$. \end{subsec} \begin{subsec}\label{ss:pf-gen} \emph{Proof of Theorem \ref{t:cohom-bis}.} By \ref{ss:pf-gen-0} we may assume that $\Gbar^0$ is reductive. Let $H\subset G$ be as in Proposition \ref{p:LA15} (due to Lucchini Arteche). By Corollary \ref{c:LA} we may assume that the order of $\Hbar$ is at most $\gamma(\Gbar)$. Let $\xi\in H^1(k,G)$. From the property (SUR) of $H$, we know that $\xi$ is the image of some cohomology class $\xi_H\in H^1(k,H)$. By Lemma \ref{l:finite-k}, the class $\xi_H$, and hence $\xi$, can be killed by a finite field extension of degree at most $d(\Hbar)=|\Hbar|\le\gamma(\Gbar)$. This completes the proof of Theorem \ref{t:cohom-bis} with the bound $d(\Gbar)=\gamma(\Gbar^\red)$, and thus completes the proofs of Theorems \ref{t:cohom} and \ref{t:abs}. \qed \end{subsec} \section{Real approximation for homogeneous spaces} \label{s:real} In this section we prove the following theorem, which was communicated to us by an anonymous referee. \begin{theorem}\label{p:LA-cyclic} Let $G$ be a connected linear algebraic group over a number field $k$, and consider the homogeneous space $Y=G/C$, where $C\subset G$ is a cyclic finite $k$-subgroup. Then $Y$ has the real approximation property, that is, for any set $S$ of {\emm archimedean} places of $k$, the set $Y(k)$ is dense in $\prod_{v\in S} Y(k_v)$. In particular, if $k\subset \R$ is a real number field, then $Y(k)$ is dense in $Y(\R)$. \end{theorem} Let $k$ be a field of characteristic 0. 
We refer to Colliot-Th\'el\`ene \cite{CT}, Definition 2.1 and Proposition 2.2, for the definition of a {\em quasi-trivial $k$-group.} \begin{lemma}[Colliot-Th\'el\`ene] \label{l:CT} Let $G$ be a connected linear algebraic group over a field $k$ of characteristic 0. Then there exists a short exact sequence of $k$-groups \[1\lra Z'\lra G'\labelto{\nu} G\lra 1\] where $G'$ is a quasi-trivial $k$-group and $Z'$ is a $k$-group of multiplicative type contained in the center of $G'$. \end{lemma} \begin{proof} See Colliot-Th\'el\`ene \cite{CT}, Proposition-Definition 3.1, or \cite{Borovoi}, Proposition 2.8. \end{proof} \begin{proposition}\label{p:qt-ab} Let $G$ be a connected linear algebraic group over a field $k$ of characteristic 0, and let $C\subset G$ be a cyclic finite $k$-subgroup. Then there exists an isomorphism of $k$-varieties $G'/H'\isoto G/C$, where $G'$ is a quasi-trivial $k$-group and $H'\subset G'$ is an abelian $k$-subgroup. \end{proposition} \begin{proof} Consider the short exact sequence of Lemma \ref{l:CT}, and set $H'=\nu^{-1}(C)\subset G'$. Clearly, we have $G/C=G'/H'$. The $k$-subgroup $H'$ fits into a short exact sequence \[ 1\to Z'\to H'\to C\to 1,\] where $C$ is cyclic and $Z'$ is central in $H'$. By Lemma \ref{l:sH} below, the group $H'$ is abelian. \end{proof} \begin{lemma}[well-known and very easy] \label{l:sH} Let an abstract group $\sH$ be a central extension of a cyclic group, that is, we assume that $\sH$ fits into a short exact sequence \[ 1\to A\to\sH\to C\to 1,\] where $C$ is a cyclic group and $A$ is contained in the center of \hs$\sH$. Then $\sH$ is abelian. \qed \end{lemma} \begin{subsec} {\em Proof of Theorem \ref{p:LA-cyclic}.} By Proposition \ref{p:qt-ab} we have $Y=G'/H'$, where $G'$ is a quasi-trivial $k$-group and $H'\subset G'$ is an abelian $k$-subgroup. 
By \cite{Borovoi}, Corollary 2.3, any $k$-variety of the form $G'/H'$, where $G'$ is a quasi-trivial $k$-group and $H'\subset G'$ is an abelian $k$-subgroup, has the real approximation property. \qed \end{subsec} \section{Killing a first cohomology class over a real number field} \label{s:cohom-R} In this section we prove Theorem \ref{t:cohom-R}, which we state here in a more precise form. \begin{theorem}\label{t:cohom-R-bis} Let $G_\C$ be a linear algebraic group over the field of complex numbers $\C$. Then there exists a natural number $d_\R=d_\R(G_\C)$ with the following property: \begin{property*}[$*_\R$] For any real number field $k\subset \R$, any $k$-form $G$ of $G_\C$, and any cohomology class $\xi\in H^1(k,G)$ becoming 1 over $\R$, the class $\xi$ can be killed by a finite real extension $K/k$ of degree at most $d_\R$. \end{property*} \end{theorem} In order to prove Theorem \ref{t:cohom-R-bis}, we shall need the following elementary lemma: \begin{lemma}\label{l:degree-two} Let $k\subset \R$ be a real number field and let $L\subset \C$ be a finite {\emm normal} field extension of $k$. Set $K=L\cap \R$. Then $[L:K]\le 2$. \end{lemma} \begin{proof} If $L=K$, there is nothing to prove, so we assume that $L\neq K$. Write $\sigma\colon \C\to\C$ for the complex conjugation. Then $\sigma(k)=k$, because $k\subset\R$. Since $L$ is a normal extension of $k$, we see that $\sigma(L)=L$. Let $x\in L\smallsetminus K$, and set $p=(x+\sigma(x))/2$, $q=x\cdot \sigma(x)$. Then $p,q\in K$ and $x^2-2px+q=0$. Set $D=p^2-q\in K\subset \R$. Then $x=p+\zeta$, where $\zeta^2=D$. Since $x\in L\smallsetminus K\ \subset\ \C\smallsetminus \R$, we have $D<0$. Clearly, $K(x)=K(\zeta)$. If $x'\in L\smallsetminus K$ is another element, then similarly $x'\in K(\zeta')$, where $(\zeta')^2=D'<0$. Then $(\zeta'/\zeta)^2=D'/D>0$, hence $\zeta'/\zeta\in L\cap \R=K$, and therefore, $x'\in K(\zeta)$. Thus $L=K(\zeta)$ and $[L:K]=2$. 
\end{proof} We shall need a lemma and a corollary: \begin{lemma}\label{finitebdd} Let $X$ be a {\emm finite} variety over a real number field $k\subset \R$. Let $x\in X(\R)$ be an $\R$-point. Then $x$ is defined over a real number field $k_x\subset \R$ whose degree over $k$ is at most $|X_\C|$, where we write $|X_\C|$ for $|X(\C)|$. \end{lemma} \begin{proof} Since $X$ is finite, we have $X(\C)=X(\kbar)$, where $\kbar=\Qbar$ denotes the algebraic closure of $k$ in $\C$. The Galois group $\Gal(\kbar/k)$ acts on $X(\kbar)$. The point $x$ is defined over the fixed field $k_x\subset \R$ of the stabilizer of $x$ in $\Gal(\kbar/k)$. Since the cardinality of the orbit of $x$ under $\Gal(\kbar/k)$ is at most $|X(\kbar)|=|X_\C|$, we see that $[k_x:k]\leq |X_\C|$. \end{proof} \begin{corollary}\label{finite-real1finite} Let $G_\C$ be a finite group over $\C$. Then $G_\C$ satisfies $(*_\R)$ with $d_\R=|G_\C|$. \end{corollary} \begin{proof} Let $k\subset\R$ be a real number field, let $G$ be a $k$-form of $G_\C$, and let $\xi\in H^1(k,G)$ be a cohomology class becoming 1 over $\R$. Then $\xi$ defines a $k$-torsor $\sT=\sT_\xi$ of $G$ such that $\sT(\R)$ is nonempty. Let $t\in \sT(\R)$. By Lemma \ref{finitebdd} the torsor $\sT$ has a $k_t$-point over a real number field $k_t\subset \R$ whose degree over $k$ is at most $|\sT_\C|=|G_\C|$. The extension $k_t/k$ kills $\xi$, as required. \end{proof} Now we reduce Theorem \ref{t:cohom-R-bis} to the case when $G_\C$ is connected. \begin{lemma}\label{assume-conn} Assume that Theorem \ref{t:cohom-R-bis} is true for connected linear algebraic groups, and let $G_\C$ be any linear algebraic group over $\C$ (not necessarily connected). Then $G_\C$ satisfies $(*_\R)$ with \begin{align*} d_\R=d_\R(G^0_\C)\cdot |G_\C/G^0_\C|^2. \end{align*} \end{lemma} \begin{proof} Let $k\subset\RR$ be a real number field, let $G$ be a $k$-form of $G_\C$, and let $\xi\in H^1(k,G)$ be a cohomology class becoming 1 over $\R$. 
From the short exact sequence \[1\longrightarrow G^0\longrightarrow G\longrightarrow G\slash G^0\longrightarrow 1\] we obtain a commutative diagram (of pointed sets) with exact rows: \[ \xymatrix{ (G/G^0)(k) \ar[r]^\delta \ar[d] &H^1(k,G^0)\ar[r]^\varphi \ar[d]^{\loc_\R} &H^1(k,G)\ar[r]^\psi\ar[d]^{\loc_\R} &H^1(k,G/G^0)\ar[d]^{\loc_\R} \\ (G/G^0)(\RR) \ar[r]^{\delta_\R} &H^1(\RR,G^0)\ar[r]^{\varphi_\R} &H^1(\RR,G)\ar[r]^{\psi_\R} &H^1(\R,G/G^0). } \] By Corollary \ref{finite-real1finite}, after possibly replacing $k$ by a real extension of degree at most $|G_\C/G^0_\C|$, we may and shall assume that $\psi(\xi)=1$, and hence, $\xi=\varphi(\xi_0)$ for some $\xi_0\in H^1(k, G^0)$. By assumption, the class $(\xi_0)_\R:=\loc_\R(\xi_0)\in H^1(\RR,G^0)$ maps to $1\in H^1(\RR,G)$. It follows that $(\xi_0)_\R=\delta_\R(g)$ for some $g\in (G\slash G^0)(\RR)$. By Lemma \ref{finitebdd}, after possibly replacing $k$ by a real extension of degree at most $|G_\C/G^0_\C|$, we may and shall assume that $g\in (G/G^0)(k)$. By Serre \cite{SerreCG}, I.5.5, Proposition 39(ii), the class $\xi'_0:=\xi_0*g^{-1}\in H^1(k, G^0)$ maps to $\xi$ under $\varphi$ and becomes 1 over $\RR$. By assumption, $\xi'_0$ can be killed by a real extension $K/k$ of degree at most $d_\R(G^0_\C)$. Clearly, $K$ also kills $\xi$, which completes the proof. \end{proof} \begin{subsec} Next we reduce Theorem \ref{t:cohom-R-bis} to the case when $G_\C$ is reductive. Let $G_\C$ be a connected linear $\C$-group. We write $R_u(G_\C)$ for the unipotent radical of $G_\C$\hs, and we write $G_\C^\red=G_\C/R_u(G_\C)$, which is a connected reductive $\C$-group. Assume that Theorem \ref{t:cohom-R-bis} is true for $G_\C^\red$. 
An argument similar to that of Subsection \ref{ss:pf-gen-0} shows that then Theorem \ref{t:cohom-R-bis} is also true for $G_\C$ with the same bound \[ d_\R(G_\C)=d_\R(G^\red_\C).\] \end{subsec} \begin{subsec} \emph{Proof of Theorem \ref{t:cohom-R-bis}.} We have seen that it suffices to prove the theorem for a connected reductive $\C$-group. Let $G_\C$ be such a $\C$-group, and let $G$ be a $k$-form of $G_\C$. Let $\xi_G\in H^1(k,G)$ be a cohomology class such that $\loc_\R(\xi_G)=1$. By Corollary \ref{c:LA}, there exists a \emph{finite} $k$-subgroup $H\subset G$ of order at most $\gamma(G_\C)$ such that $\xi_G$ is the image of some cohomology class $\xi_H\in H^1(k,H)$. Then by Lemma \ref{l:finite-k}, the class $\xi_H$ can be killed by a finite field extension $M/k$ in $\C$ (not necessarily real, not necessarily normal) of degree at most $|H|\le\gamma(G_\C)$. Then $\xi_H$ will be killed by the normal closure $L$ of $M$ over $k$ (which is contained in $\C$ and normal over $k$) of degree at most $\gamma(G_\C)!$ over $k$. Note that $L$ might not be real. Set $K=L\cap \R$. Then \[[K:k]\le [L:k]\le \gamma(G_\C)!\hs.\] If $K=L$, then $K/k$ is a real field extension of degree at most $\gamma(G_\C)!$ \hs killing $\xi_H$ and $\xi_G$\hs, as required. Otherwise, by Lemma \ref{l:degree-two} we have $[L:K]=2$. Set $\Gamma=\Gal(L/K)$. Then $\Gamma=\{1,s\}$, where $s^2=1$. Consider the restriction $\xi_H^{(K)}=\Res_K(\xi_H)\in H^1(K,H)$. Since the Galois extension $L/K$ kills $\xi_H^{(K)}$, by the nonabelian inflation-restriction exact sequence (see Serre \cite{SerreCG}, I.5.8(a)\hs), the class $\xi_H^{(K)}$ comes from some class $\eta^{(L/K)}\in H^1(L/K, H(L)\hs)$. Let $a=a^{(L/K)}\colon \Gamma\to H(L)$ be a cocycle representing $\eta^{(L/K)}$. Then $a_1=1_H$ and $a_s\cdot \hs^s a_s=1_H$. Let $C$ denote the cyclic subgroup of $H_L$ generated by $a_s$. Since $^s a_s=a_s^{-1}\in C(L)$, we see that the $L$-subgroup $C$ of $H$ is $\Gamma$-stable, and hence, is defined over $K$. 
We regard $a^{(L/K)}$ as an element of $Z^1(L/K,C(L)\hs)$, and we write $\alpha^{(L/K)}$ for the class of $a^{(L/K)}$ in $H^1(L/K,C(L)\hs)$. Furthermore, we write $\alpha$ for the image of $\alpha^{(L/K)}$ in $H^1(K,C)$. Then the image of $\alpha$ in $H^1(K,H)$ is $\xi^{(K)}_H$ and hence, the image of $\alpha$ in $H^1(K,G)$ is $\Res_K(\xi_G)$. It follows that the image of $\alpha$ in $H^1(\R,G)$ is 1. We wish to kill the image $\Res_K(\xi_G)$ of $\alpha$ in $H^1(K,G)$ by a real extension of bounded degree. We consider the homogeneous space $Y\!:=G_K/C$ and the commutative diagram of pointed sets \begin{equation*}\label{e:S11} \xymatrix{ 1\ar[r] &Y(K)/G_K(K)\ar[r]^\delta\ar[d]^{\loc_\R} &H^1(K, C)\ar[r]^{i_*}\ar[d]^{\loc_\R} &H^1(K,G_K)\ar[d]^{\loc_\R}\\ 1\ar[r] &Y(\R)/G(\R)\ar[r]^{\delta_\R} &H^1(\R, C)\ar[r]^{i_*} &H^1(\R,G)\hs, } \end{equation*} where $G_K(K)=G(K)$ and $H^1(K,G_K)=H^1(K,G)$. By Serre \cite{SerreCG}, I.5.5, Corollary 1 of Proposition 36, the rows of this diagram are exact. Recall that $\alpha\in H^1(K,C)$. Consider the localization $\loc_\R(\alpha)\in H^1(\R,C)$. Then $i_*(\loc_\R(\alpha)\hs)=1\in H^1(\R,G)$, and hence, $\loc_\R(\alpha)=\delta_\R(o_\R)$ for some orbit $o_\R$ of $G(\R)$ in $Y(\R)$. Since $G_K$ is connected and $C$ is cyclic, by Theorem \ref{p:LA-cyclic} the set of $K$-points $Y(K)$ is dense in $Y(\R)$. Since $o_\R$ is open in $Y(\R)$, we see that $o_\R$ contains a $K$-rational point $y\in Y(K)\cap o_\R$. Let \[o=\ G(K)\cdot y\ \in Y(K)/G(K)\] denote the $G(K)$-orbit of $y$ in $Y(K)$. Then $\loc_\R(o)=o_\R$ and hence, \[\loc_\R(\delta(o))=\delta_\R(o_\R)=\loc_\R(\alpha)\in H^1(\R,C).\] The $K$-group $C$ is cyclic, hence abelian, and therefore, $H^1(K,C)$ is naturally an abelian group, which we shall write additively. Consider $\alpha-\delta(o)\in H^1(K,C)$. 
Then \[\loc_\R(\alpha-\delta(o))=0\in H^1(\R,C).\] By Corollary \ref{finite-real1finite}, the cohomology class $\alpha-\delta(o)$ can be killed by a real extension $K'$ of $K$ of degree at most $|C|\le|H|\le\gamma(G_\C)$. This means that over $K'$, if we write $\alpha'=\Res_{K'}(\alpha)\in H^1(K',C)$, then we have $\alpha'=\delta(o')\in H^1(K',C)$ for the $G(K')$-orbit $o'=\,G(K')\cdot y\ \in\, Y(K')/G(K')$. It follows that \[i_*(\alpha')=1\in H^1(K',G).\] Thus we have killed $\Res_K(\xi_G)=i_*(\alpha)$ by a real extension $K'$ of $K$ of degree at most $\gamma(G_\C)$, and we have killed $\xi_G$ by a real extension $K'$ of $k$ of degree at most $\gamma(G_\C)!\cdot\gamma(G_\C)$. This completes the proof of Theorem \ref{t:cohom-R-bis} with the bound $d_\R(G_\C)=|G_\C/G_\C^0|^2\cdot \gamma(\hs(G^0_\C)^\red)!\cdot\gamma(\hs(G^0_\C)^\red)$, and thus completes the proofs of Theorems \ref{t:cohom-R} and \ref{t:abs-R} and of Corollary \ref{finite-real}. \qed \end{subsec}
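As a computational aside (ours, not the authors'): Lemma \ref{l:sH}, the group-theoretic pivot of Proposition \ref{p:qt-ab}, states that a group with cyclic central quotient is abelian. Its contrapositive — a non-abelian group never has cyclic quotient by its center — can be sanity-checked on small permutation groups; a sketch with SymPy, where the helper name `quotient_by_center_is_cyclic` is our own:

```python
# Sanity check (ours): a central extension of a cyclic group is abelian,
# i.e. if H/Z(H) is cyclic then H is abelian.  We test the contrapositive
# on small non-abelian permutation groups, and the direct statement on C6.
from sympy.combinatorics import PermutationGroup
from sympy.combinatorics.named_groups import (AlternatingGroup, CyclicGroup,
                                              DihedralGroup, SymmetricGroup)

def quotient_by_center_is_cyclic(G):
    """True iff G/Z(G) is cyclic, i.e. some single element g together
    with the center Z(G) already generates all of G."""
    zgens = list(G.center().generators)
    return any(PermutationGroup(zgens + [g]).order() == G.order()
               for g in G.elements)

# Non-abelian examples: the central quotient is never cyclic.
for G in (DihedralGroup(4), SymmetricGroup(3), AlternatingGroup(4)):
    assert not G.is_abelian and not quotient_by_center_is_cyclic(G)

# Abelian example: C6 equals its own center, so the quotient is trivial.
assert CyclicGroup(6).is_abelian and quotient_by_center_is_cyclic(CyclicGroup(6))
```

For $D_4$, for instance, the center is $\{1,r^2\}$ and the quotient is the Klein four-group, which is not cyclic, consistent with $D_4$ being non-abelian.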
\section{Introduction} \subsection{Context} In our previous work on the subject, we argued the need to go beyond vertices when analysing complex networks. In fact, remarks to this end can be found scattered in the literature \cite{Contreras2014,Estrada2005,Milo2002,Mukhtar2011,Yeger2004}. For example, studies of gene regulatory networks have shown that ``motif-based centralities outperforms other methods'' and can discern interesting network features not grasped by more traditional vertex centralities \cite{Koschutzki2007,Koschutzki2008}. Another example is provided by the notion of protein essentiality, a property now understood to be determined at the level of protein complexes, that is, groups of proteins in the protein-protein interaction network (PPI), rather than at the level of individual proteins \cite{Hart2007,Ryan2013}. In addition, further biological properties have been tied to ensembles of genes or proteins, e.g. the notion of synthetic lethality, where the simultaneous deactivation of two genes is lethal while the separate deactivation of each is not \cite{Nijman2011}. Since measures of importance for nodes constitute a key tool in the study of complex networks, it is only logical to expect that similar tools for ranking groups of vertices could find widespread applications throughout network analysis.\\[-.7em] In this spirit, we proposed in \cite{Giscard2017} a measure of importance for groups of nodes (henceforth called ``subgraphs'') that has the following desirable properties: \begin{enumerate} \item Provided the edge weights are non-negative, the centrality $c(H)$ of a subgraph $H$ is always between 0 and 1. \item The precise value $c(H)$ taken by the centrality on a subgraph $H$ is the fraction of all network flows intercepted by $H$. \item For subgraphs comprising a single node $H\equiv \{i\}$, the centrality measure $c(\{i\})$ yields the same ranking as the eigenvector centrality. 
In other words, it induces the eigenvector centrality over vertices. \item Computationally, $c(H)$ costs no more to compute per subgraph $H$ than ordinary vertex-centralities. What is computationally costly, however, is to compute it over all subgraphs. \end{enumerate} In \cite{Giscard2017}, we have shown, by analysing real-world networks from econometrics and biology, that $c(.)$ performs better than centralities defined from naive sums of vertex-centralities. Concretely, we demonstrated that subgraph centralities defined from sums of the resolvent, exponential and eigenvector centralities failed to account for even the dominant events affecting input-output economic networks. In biology, we used $c(.)$ to construct a model of protein-targeting by pathogens that achieved a $25\%$ improvement over the state-of-the-art model.\footnote{We refer to the area under the ROC curves for both the model based on the centrality $c(.)$ and the state-of-the-art model. These are 0.97 and 0.73, respectively.}\\[-.5em] In this work, we establish further properties of the centrality measure $c(.)$ and present its rigorous mathematical underpinnings. We also compare this centrality with the notion of group-centrality presented by Everett and Borgatti in \cite{Everett1999} on real-world networks. \subsection{Notations and centrality definition}\label{Notation1} The measure of cycle and subgraph centrality we propose is rooted in recent advances in the algebraic combinatorics of walks on graphs. Here we only define the few concepts from this background that are necessary to comprehend the centrality measure. We consider a finite network $G = (\mathcal{V};\mathcal{E})$ with $N=|\mathcal{V}|$ nodes and $M=|\mathcal{E}|$ edges, which may be weighted and directed. The adjacency matrix of $G$ is denoted $\mathsf{A}_G$ or simply $\mathsf{A}$.
If $G$ is weighted then the entry $\mathsf{A}_{ij}$ is the weight of the edge $e_{ij}$ from $i$ to $j$ if this edge exists, and 0 otherwise.\\[-.85em] An \textit{induced subgraph} $H$ of $G$, also called simply a \textit{subgraph} of $G$ and denoted $H\prec G$, is a set of vertices $ \mathcal{V}_H\subseteq \mathcal{V}$ together with the set of all edges linking these vertices in $G$, $\mathcal{E}_H=\{e_{ij}\in\mathcal{E}:\,i,j\in\mathcal{V}_H\}$.\\[-.85em] A \textit{walk} $w$ of length $\ell(w)$ from $i$ to $j$ on $G$ is a sequence $w = e_{i i_1} e_{i_1 i_2} \cdots e_{i_{\ell-1} j}$ of $\ell$ contiguous edges. The walk $w$ is \textit{open} if $i \neq j$ and \textit{closed} otherwise.\\[-.85em] A \textit{simple cycle}, also known in the literature under the names \textit{loop}, \textit{cycle}, \textit{elementary circuit} and \textit{self-avoiding polygon}, is a closed walk $w = e_{i i_1} e_{i_1 i_2} \cdots e_{i_{\ell-1} i}$ which does not cross the same vertex twice, that is, the indices $i,i_1,\hdots,i_{\ell-1}$ are all different.\\ We now recall the definition of the centrality for cycles and subgraphs, introduced in \cite{Giscard2017}. \begin{definition}[Centrality] Let $G$ be a possibly weighted (di)graph, and let $\lambda$ be its maximum eigenvalue. Let $\mathsf{A}$ be the adjacency matrix of $G$, including weights if any. For any cycle $\gamma$, let $\mathsf{A}_{G\backslash \gamma}$ be the adjacency matrix of the graph $G$ where all vertices visited by $\gamma$ and the edges adjacent to them have been removed. Then we define the centrality $c(\gamma)$ of the cycle $\gamma$ as \vspace{-2mm} $$ c(\gamma):=\det\left(\mathsf{I}-\frac{1}{\lambda}\mathsf{A}_{G\backslash \gamma}\right). $$ More generally, for any non-empty subgraph $H$ of $G$, we define the centrality of $H$ as $$ c(H):=\det\left(\mathsf{I}-\frac{1}{\lambda}\mathsf{A}_{G\backslash H}\right).
$$ \end{definition} As we have shown in \cite{Giscard2017}, these centralities not only reflect the relative importance of cycles or subgraphs, but their values have a precise meaning too. Indeed, $c(H)$ is the fraction of all information flows on the network that are intercepted by the subgraph $H$. As such, and as long as the network has no negative edge-weights, the centrality is always between 0 and 1, which is numerically advantageous, $$ 0\leq c(H)\leq 1. $$ Because it has a concrete real-world meaning as a fraction of network flows, the value of the centrality can be assessed with respect to external information when available. More generally, it enriches the analysis in that it not only produces a ranking of groups of nodes, but also quantitatively ties these groups' importance to an immediately meaningful quantity, e.g. a fraction of capital flow, of successions of protein interactions or of social interactions, depending on the context.\\[-.5em] In the following section we give the full, rigorous mathematical proof of the main theorem underpinning these results, which relates the centrality $c(\gamma)$ of a cycle $\gamma$ to network flows. This theorem was presented as Proposition~1 in \cite{Giscard2017} but was only given a qualitative proof there, owing to length constraints. Note that we focus on the centrality of simple cycles, as it is precisely in this context that the rigorous proof appears as an extension of a number-theoretic sieve. The case of arbitrary subgraphs is similar, so we operate at no loss of generality. \section{Centrality and network flows: a rigorous mathematical proof} We first need to recall some combinatorial notions introduced in the context of the extension of number theory satisfied by walks on graphs \cite{Giscard2016}. The central objects of this earlier study are \emph{hikes}, a hike $h$ being an unordered collection of disjoint closed walks.
Hikes can also be seen as equivalence classes on words $W=\gamma_{i_1}\gamma_{i_2}\cdots \gamma_{i_n}$ over the alphabet of simple cycles $\gamma_i$ of a graph. Two words $W$ and $W'$ are equivalent if and only if $W'$ can be obtained from $W$ through allowed permutations of consecutive simple cycles. In this context, two simple cycles are allowed to commute if and only if they are vertex-disjoint: $\mathcal{V}(\gamma_i)\cap \mathcal{V}(\gamma_j)=\emptyset \iff \gamma_i\gamma_j=\gamma_j\gamma_i$.\\[-.5em] For example, if $\gamma_1$ and $\gamma_2$ commute but neither commutes with $\gamma_3$, then $\gamma_1\gamma_2$ and $\gamma_2\gamma_1$ represent the same hike, but $\gamma_1\gamma_3\gamma_2$ and $\gamma_2\gamma_3\gamma_1$ are distinct hikes.\\[-.5em] The letters $\gamma_{i_1},\cdots, \gamma_{i_n}$ found in a hike $h$ are called its prime divisors. This terminology is due to the observation that simple cycles obey the defining property of prime elements in the semi-commutative monoid $\mathcal{H}$ of hikes. In addition, they constitute the formal, semi-commutative extension of prime numbers \cite{Giscard2016}.\\[-.7em] Two special types of hikes will be important for our purpose here:\\ A \emph{self-avoiding hike} is a hike all prime factors of which commute with one another. In other words, it is a collection of vertex-disjoint simple cycles. Walks, defined earlier in section~\ref{Notation1}, can be shown to be exactly the hikes with a unique right prime divisor \cite{Giscard2016}: this characterisation is both necessary and sufficient, so that any hike with a unique right prime divisor is a walk. It may perhaps help the reader's intuition to know that, in the extension of number theory satisfied by hikes, hikes are the extension of the integers, self-avoiding hikes are the square-free integers and walks are the integers of the form $p^k$, with $p$ prime and $k\in\mathbb{N}$.
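Before turning to the main claim, the determinantal definition of section~\ref{Notation1} is straightforward to evaluate numerically. The following sketch (Python with NumPy; the toy graph, the function name and the checks are ours, not taken from the paper) computes $c(H)=\det(\mathsf{I}-\mathsf{A}_{G\backslash H}/\lambda)$ and illustrates that the single-vertex centralities $c(\{i\})$ lie in $[0,1]$ and rank the vertices as the eigenvector centrality does.

```python
import numpy as np

def subgraph_centrality(A, nodes):
    """c(H) = det(I - A_{G\\H} / lambda): delete the vertices of H (and all
    incident edges) from G, rescale the remaining adjacency matrix by the
    spectral radius lambda of the FULL graph, and take the determinant."""
    lam = max(abs(np.linalg.eigvals(A)))
    keep = [i for i in range(A.shape[0]) if i not in set(nodes)]
    A_rest = A[np.ix_(keep, keep)]
    return float(np.linalg.det(np.eye(len(keep)) - A_rest / lam))

# Toy graph: a 5-cycle with the chord (1, 3); undirected and unweighted.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]:
    A[i, j] = A[j, i] = 1.0

# Single-vertex centralities: each is the fraction of flows intercepted
# by that vertex, and their ordering matches the eigenvector centrality.
for v in range(5):
    print(v, round(subgraph_centrality(A, [v]), 4))
```

Note that removing the whole vertex set leaves an empty graph, whose centrality is the determinant of an empty matrix, i.e. exactly 1: the full graph intercepts all flows.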
\\[-.7em] Now we claim that the centrality $c(\gamma)$ of a simple cycle $\gamma$ is exactly the fraction of all hikes $h$ (including those of infinite length) such that all right prime divisors of $h$ intercept $\gamma$, that is, no right prime divisor of $h$ is vertex-disjoint from $\gamma$, and hence none commutes with it. This latter observation implies that $\gamma$ is the only right prime divisor of $h\gamma$. Thus, the claim we make is equivalent to stating that $c(\gamma)$ is the proportion of all hikes $h$ such that $h\gamma$ is a walk. \begin{theorem} Let $G$ be a finite (di)graph with adjacency matrix $\mathsf{A}$ and let $\gamma$ be a simple cycle on $G$. Then the total number $n_\gamma(k)$ of closed walks of length $k$ on $G$ with right prime divisor $\gamma$ is asymptotically equal to \vspace{-2mm} $$ n_\gamma(k)\sim c(\gamma)\!\left(\frac{1}{\det(\mathsf{I}-z\mathsf{A})}\right)\![k],~~\text{as}~~k\to\infty, \vspace{-2mm} $$ where $\big(1/\det(\mathsf{I}-z\mathsf{A})\big)[k]$ stands for the coefficient of order $k$ in the series $1/\det(\mathsf{I}-z\mathsf{A})$. \end{theorem} \begin{proof} The proof relies on a very general combinatorial sieve. Let $\mathcal{H}_\ell:=\{h\in\mathcal{H}:~\ell(h)=\ell\}$ be the set of hikes of length $\ell$, $\mathcal{P}\subsetneq\mathcal{H}$ be a set of primes and $\mathcal{P}^{\text{s.a.}}$ the set of all self-avoiding hikes constructible from $\mathcal{P}$. Let $S(\mathcal{H}_\ell,\mathcal{P})$ be the number of hikes in $\mathcal{H}_\ell$ which are not right-divisible by any prime of $\mathcal{P}$. The semi-commutative extension of the sieve of Eratosthenes--Legendre yields $$ S(\mathcal{H}_\ell,\mathcal{P}) = \sum_{d\in \mathcal{P}^{\text{s.a.}}} \mu(d) |\mathcal{M}_d|, $$ with $|\mathcal{M}_d|$ the number of multiples of $d$ in $\mathcal{H}_\ell$.
Furthermore, $\mu(d)$ is the M\"{o}bius function on hikes, which is \cite{Giscard2016} $$ \mu(h)=\begin{cases} (-1)^{\Omega(h)},&\text{if $h$ is self-avoiding,}\\ 0,&\text{otherwise,} \end{cases} $$ where $\Omega(h)$ is the number of prime divisors of $h$, including multiplicity.\\[-.7em] In order to progress, we seek a multiplicative function $\text{prob}(.)$ such that $|\mathcal{M}_d| = \text{prob}(d)|\mathcal{H}_\ell| + r(d)$, $|\mathcal{H}_\ell|:=\text{card}(\mathcal{H}_\ell)$. In this expression, $\text{prob}(d)$ approximates the probability that a hike taken uniformly at random in $\mathcal{H}_\ell$ is right-divisible by $d$. If edge-weights are present, the hikes are not all uniformly probable but follow a distribution dependent on these weights. In any case, no knowledge of this distribution is required here, and the meaning of $\text{prob}(.)$ is mentioned only to aid the reader's understanding. Similarly, $m(d)=\text{prob}(d)|\mathcal{H}_\ell|$ is the expected number of multiples of $d$ in $\mathcal{H}_\ell$. Finally, $r(d)$ is the associated error term, arising from the fact that $|\mathcal{M}_d|$ is not truly multiplicative. Supposing that we can identify the $m(.)$ function, we would obtain $$ S(\mathcal{H}_\ell,\mathcal{P}) = \sum_{d\in \mathcal{P}^{\text{s.a.}}} \mu(d) m(d) + \sum_{d\in \mathcal{P}^{\text{s.a.}}} \mu(d) r(d). $$ Contrary to the situation in number theory, the first term does not admit any simpler form without further assumptions on $\mathcal{P}$, because of the possible lack of commutativity between some elements of $\mathcal{P}$. We note however that since $\mu(d)$ is non-zero if and only if $d$ is self-avoiding, and since we have required that $m(.)$ be multiplicative,\footnote{But not necessarily totally multiplicative.} the first term is entirely determined by the values of $m(.)$ over the primes of $\mathcal{P}$.\\ We therefore turn to determining $m(\gamma)$ for $\gamma$ prime.
The set of left-multiples of $\gamma$ in $\mathcal{H}$ is $\{h\gamma,~h\in \mathcal{H}\}$, which is in bijection with the set $\{h\in \mathcal{H},~\ell(h)\geq \ell(\gamma)\}$. Thus, the number of left-multiples of $\gamma$ in $\mathcal{H}_{\ell}$ is exactly $|\mathcal{H}_{\ell-\ell(\gamma)}|$. Then $$ \text{prob}(\gamma) + \frac{r(\gamma)}{|\mathcal{H}_{\ell}|} = \frac{|\mathcal{H}_{\ell-\ell(\gamma)}|}{|\mathcal{H}_{\ell}|}. $$ Seeking the best possible probability function $\text{prob}(\gamma)$, let us suppose that once this function has been chosen, the error term of the above equation vanishes in the limit $\ell\to\infty$. If this is true, then we obtain $$ \text{prob}(\gamma) = \lim_{\ell\to \infty}\frac{|\mathcal{H}_{\ell-\ell(\gamma)}|}{|\mathcal{H}_{\ell}|}. $$ In order to progress, we make an important observation regarding the cardinality of the set $\mathcal{H}_{\ell}$: \begin{lemma}\label{assumptionAG} Let $G$ be a finite (directed) graph. Let $\mathcal{H}_{\ell}:=\{h\in\mathcal{H}:~\ell(h) = \ell\}$ be the set of all hikes on $G$ of length $\ell$. Then, there exists $\Lambda\in \mathbb{R}^+$ and a bounded function $f:\mathbb{N}\to \mathbb{R}$ such that $\lim_{\ell\to\infty}f(\ell)$ exists and for $\ell\in\mathbb{N}^*$ we have exactly \begin{equation*} |\mathcal{H}_{\ell}|=\Lambda^\ell f(\ell). \end{equation*} If the absolute value of the largest eigenvalue $\lambda$ of $G$ has multiplicity $g$, then $\Lambda = \lambda^g$. \end{lemma} \begin{proof} This follows directly from the ordinary zeta function on hikes $\zeta(z)=\det(\mathsf{I}-z\mathsf{A})^{-1}$, from which we have \begin{equation*} |\mathcal{H}_{\ell}|=\left(\frac{1}{\det(\mathsf{I}-z\mathsf{A})}\right)\![\ell] =\sum_{i_1,\cdots,\, i_N\vdash \ell} \lambda^{i_1}_{1}\lambda^{i_2}_{2}\cdots \lambda^{i_N}_{N}=\lambda^\ell\!\!
\sum_{i_1,\cdots,\, i_N\vdash \ell} \lambda^{i_1-\ell}\lambda^{i_2}_{2}\cdots \lambda^{i_N}_{N} \end{equation*} where the sums run over all $i_j \geq 0$ such that $\sum_j i_j = \ell$, and $\lambda\equiv \lambda_1$ is the eigenvalue of the graph with the largest absolute value. We assume for the moment that $\lambda$ is unique and let $ f(\ell):= \sum_{i_1,\cdots, \,i_N\vdash \ell} \lambda^{i_1-\ell}\lambda^{i_2}_{2}\cdots \lambda^{i_N}_{N}. $ This function is clearly bounded, and the limit $$ \lim_{\ell\to\infty} f(\ell) = \lim_{z\to\lambda^{-1}}(1-z\lambda)\zeta(z) $$ exists and is finite. If $|\lambda|$ is not unique and has multiplicity $g$, then one should pick $\Lambda=\lambda^g$ for the scaling constant, together with $f(\ell)=\zeta(z)[\ell]\,\lambda^{-g\ell}$. In all cases the Lemma follows. \end{proof} Proceeding with the result of Lemma~\ref{assumptionAG}, and assuming for simplicity that the largest eigenvalue is unique, the existence of the limit for $f$ gives $$ \text{prob}(\gamma) = \lim_{\ell\to \infty}\frac{\lambda^{\ell-\ell(\gamma)} f\big(\ell-\ell(\gamma)\big)}{\lambda^\ell f(\ell)} = \lambda^{-\ell(\gamma)}. $$ The $\text{prob}(.)$ function is multiplicative over the primes--recall these are the simple cycles--as desired. It yields $m(\gamma) = |\mathcal{H}_{\ell}| \lambda^{-\ell(\gamma)}$, and the associated error term is \begin{align*} r(\gamma) = |\mathcal{H}_{\ell-\ell(\gamma)}| - |\mathcal{H}_{\ell}| \lambda^{-\ell(\gamma)} &= \lambda^{\ell-\ell(\gamma)}\Big(f\big(\ell-\ell(\gamma)\big)-f(\ell)\Big). \end{align*} To establish the validity of these results, we need only verify that they are consistent with our initial assumption concerning the error term, namely that $r(\gamma)/|\mathcal{H}_{\ell}|$ vanishes in the limit $\ell\to\infty$.
The existence of the limit of $f$ implies $\lim_{\ell\to\infty}|f\big(\ell-\ell(\gamma)\big)-f(\ell)|=0$ and therefore that $$ \lim_{\ell\to\infty }\frac{r(\gamma)}{ |\mathcal{H}_{\ell}|}=\lim_{\ell\to\infty }\,\lambda^{-\ell(\gamma)}\Big(f\big(\ell-\ell(\gamma)\big)-f(\ell)\Big) =0, $$ as required.\\ We are now ready to proceed with general self-avoiding hikes. Let $d=\gamma_1\cdots \gamma_q$ be self-avoiding. Then, since $m$ is multiplicative and the length is totally additive over $\mathcal{H}$, $m(d) = \prod_i m(\gamma_i) = \lambda^{-\sum_i \ell(\gamma_i)} = \lambda^{-\ell(d)}$. The associated error term follows as $$ r(d) = |\mathcal{H}_{\ell-\ell(d)}| - |\mathcal{H}_{\ell}| \lambda^{-\ell(d)} = \lambda^{\ell-\ell(d)}\Big(f\big(\ell-\ell(d)\big)-f(\ell)\Big). $$ Inserting these forms for $m(d)$ and $r(d)$ in the semi-commutative Eratosthenes--Legendre sieve yields the sieve formula \begin{equation*} S(\mathcal{H}_{\ell},\mathcal{P}) = |\mathcal{H}_{\ell}|\sum_{d\in \mathcal{P}^{\text{s.a.}}} \mu(d) \lambda^{-\ell(d)} + \lambda^{\ell}\sum_{d\in \mathcal{P}^{\text{s.a.}}} \mu(d) \lambda^{-\ell(d)} \big(f(\ell-\ell(d))-f(\ell)\big). \end{equation*} We can now progress much further upon making an additional assumption concerning the nature of the prime set $\mathcal{P}$. We could consider two possibilities: i) that $\mathcal{P}$ is the set of all primes on an induced subgraph $H\prec G$; or ii) that $\mathcal{P}$ is a cut-off set, e.g. the set of all primes of length $\ell(\gamma)\leq \Theta$. Remarkably, in number theory, if i) is true then ii) is true as well, and the sieve benefits from the advantages of both situations. In general, however, i) and ii) are not compatible and, while ii) could be used to obtain direct estimates for the number of primes of any length, a problem of great interest, we can show that this makes the sieve NP-hard to implement.
We therefore focus on the first situation.\\ Let $H\prec G$ be an induced subgraph of the graph $G$ and let $\mathcal{P}\equiv \mathcal{P}_H$ be the set of all primes (that is, simple cycles) on $H$. Remark that $\sum_{d\in\mathcal{P}_H^{\text{s.a.}}}\mu(d) \lambda^{-\ell(d)}$ is therefore the sum over all the self-avoiding hikes on $H$, each with coefficient $\mu(d)\lambda^{-\ell(d)}$. It follows \cite{Giscard2016} that $\sum_{d\in\mathcal{P}_H^{\text{s.a.}}}\mu(d) \lambda^{-\ell(d)}=\det(\mathsf{I}-\lambda^{-1}\mathsf{A}_H)$. Concerning the error term, $$ \lambda^{\ell}\sum_{d\in \mathcal{P}_H^{\text{s.a.}}} \mu(d) \lambda^{-\ell(d)} \big(f(\ell-\ell(d))-f(\ell)\big), $$ we note that since $H$ is finite,\footnote{$G$ is finite and so are all its induced subgraphs.} the above sum involves finitely many self-avoiding hikes $d$. In addition, given that $\lim_{\ell\to\infty}f(\ell)$ exists by Lemma~\ref{assumptionAG}, $\lim_{\ell\to\infty}f(\ell-\ell(d))-f(\ell) =0$ as long as $\ell(d)$ is finite, which is guaranteed by the finiteness of $H$. We have consequently established that the error term comprises finitely many terms, each of which vanishes in the $\ell\to\infty$ limit. As a corollary, the first term becomes asymptotically dominant: \begin{equation*} S(\mathcal{H}_{\ell},\mathcal{P}) \sim |\mathcal{H}_{\ell}|\det\big(\mathsf{I}-\lambda^{-1}\mathsf{A}_H\big) ~~\text{as}~~\ell\to\infty. \end{equation*} We can make this more explicit upon using the ordinary form of the zeta function on hikes $\zeta(z)=1/\det(\mathsf{I}-z\mathsf{A})$. Then $|\mathcal{H}_{\ell}|=\zeta(z)[\ell]$ is the coefficient of order $\ell$ in $\zeta(z)$, see also the proof of Lemma~\ref{assumptionAG}.\\[-.5em] \begin{remark} The error term can be given a determinantal form upon using a finite difference expansion of $f$, or a Taylor series expansion of it if one smoothly extends its domain from $\mathbb{N}$ to $\mathbb{R}$.
We write $$ f(\ell-\ell(d))-f(\ell) = \sum_{k\geq 1} \frac{\nabla^k[f](\ell)}{k!}\big(\ell(d)\big)_{(k)}, $$ with $(a)_{(k)}=\prod_{i=0}^{k-1}(a-i)$ the falling factorial and $\nabla$ the backward difference operator. Now we use the properties of the M\"{o}bius function on hikes to write $\sum_{d\in\mathcal{P}_H^{\text{s.a.}}}\mu(d)\big(\ell(d)\big)_{(k)} \,z^{\ell(d)}= (\frac{d}{dz})^k \det(\mathsf{I}-z\mathsf{A}_H)$ and finally \begin{align*} S(\mathcal{H}_{\ell},\mathcal{P}) &= |\mathcal{H}_{\ell}|\,\det\!\left(\mathsf{I}-\frac{1}{\lambda}\mathsf{A}_H\right)+\lambda^{\ell}\sum_{k\geq 1}\frac{\nabla^k[f](\ell)}{k!} \det\!^{(k)}\!\Big(\mathsf{I}-\frac{1}{\lambda}\mathsf{A}_H\Big).\\[-1em] \end{align*} Here $\det\!^{(k)}\!\big(\mathsf{I}-\frac{1}{\lambda}\mathsf{A}_H\big)$ is a short-hand notation for $\big\{(\frac{d}{dz})^k \det(\mathsf{I}-z\mathsf{A}_H)\big\}\Big|_{z=\lambda^{-1}}$.\\[-.1em] \end{remark} To conclude the proof of the Theorem, we now need only choose $H$ correctly. Recall that we seek to count those walks which are left-multiples of a chosen simple cycle $\gamma$. But for $w=h\gamma$ to be a walk, the hike $h$ must be such that none of its right-prime divisors commutes with $\gamma$. This way, $\gamma$ is guaranteed to be the unique prime that can be put to the right of $h$, hence the unique right-prime divisor of $w$, making $w$ a walk. Then the sieve must eliminate all hikes $h$ which are left-multiples of primes \emph{commuting} with $\gamma$. Observe that all such primes are on $H=G\backslash \gamma$.\qed\\[-.5em] \end{proof} \begin{remark} The construction presented here is much more general than it appears at first glance. In particular, it can be extended to any additive function $\rho:\,\mathcal{H}\to \mathbb{R}$ over $\mathcal{H}$ other than the length, provided an equivalent of Lemma~\ref{assumptionAG} exists for $\rho$. Infinite graphs may also be considered, provided additional constraints on the notion of determinant are met.
These generalisations have further applications which will be presented elsewhere. \end{remark} \section{Comparison with Everett and Borgatti's group-centralities} \subsection{Motivations and context} In our previous work on the centrality $c(H)$ \cite{Giscard2017}, we presented comparisons with centralities obtained for $H$ by summing up the vertex centralities of the individual vertices involved in $H$. We have shown the comparative failure of these approaches, which could not, for example, detect even the major crisis affecting the insurance$-$finance$-$real-estate triad in input-output networks over the 2000--2014 period.\\[-.7em] In this section, we propose to further compare $c(H)$ with the notion of group centrality as it was introduced by Everett and Borgatti in 1999 \cite{Everett1999}. The authors of this study proposed to extend any vertex centrality to groups of vertices by summing up the centralities of the vertices of the group, as calculated on a graph where the other members of the group have been deleted. For example, the degree group centrality of an ensemble $H$ of vertices is equal to the external degree of $H$ in $G$. Essentially, this approach is expected to characterise the importance of the group with respect to the rest of the graph, but it is not sensitive to the inner structure of the group. As a consequence, it is easy to construct synthetic graphs where group-centralities ``fail'' to identify a group that should clearly be the most central. For example, a sparse graph with a single large clique can be built such that this clique is less central than a small outlier group of nodes. In our opinion, however, these limitations are more theoretical than practical, and it is much more important to study the behaviour of the proposed measures on \emph{real-world} networks.
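The degree group centrality just described admits a very short implementation. The sketch below is our own minimal Python rendering (function name, toy graph and the convention of counting each outside neighbour once are ours); it also illustrates the point made above that the measure is blind to the group's internal structure.

```python
import numpy as np

def degree_group_centrality(A, group):
    """Degree group centrality in the sense of Everett and Borgatti:
    the number of vertices outside `group` adjacent to at least one of
    its members (the group's external degree)."""
    group = set(group)
    outside = [j for j in range(A.shape[0]) if j not in group]
    return sum(1 for j in outside
               if any(A[i, j] != 0 or A[j, i] != 0 for i in group))

# Toy graph: vertices 0-2 form the group, vertices 3-4 lie outside it.
A = np.zeros((5, 5))
for i, j in [(0, 3), (1, 3), (2, 4)]:        # group-to-outside edges
    A[i, j] = A[j, i] = 1.0
g0 = degree_group_centrality(A, [0, 1, 2])   # outside neighbours: 3 and 4

# Adding edges *inside* the group changes nothing: the measure only sees
# the group's boundary, not its inner structure.
for i, j in [(0, 1), (1, 2), (0, 2)]:
    A[i, j] = A[j, i] = 1.0
g1 = degree_group_centrality(A, [0, 1, 2])
print(g0, g1)
```

Turning the group into a clique leaves the value unchanged, which is exactly the insensitivity discussed above.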
\subsection{The centrality $c(.)$ as an extension of the eigenvector centrality} Incidentally, Everett and Borgatti provide a strong motivation for the development of a centrality akin to the one we propose here. Indeed, noting the lack of an extension of the eigenvector centrality to groups of nodes in their work, they explain that ``[The eigenvector centrality] is virtually impossible to generalise along the lines presented earlier'', that is, unless one resorts to node-merging, a procedure not without problems \cite{Everett1999}. Now recall that the centrality $c(H)$ presented here induces the eigenvector centrality on singleton subgraphs comprising exactly one vertex $H=\{i\}$, a requirement which, following Everett and Borgatti, is sufficient to call $c(.)$ a proper extension of the eigenvector centrality to groups of nodes. In fact, this observation is itself a special case of a more general construction relating the centrality of simple paths with entries of the projector onto the dominant eigenvector: \begin{proposition} Let $G$ be a finite undirected graph with $\{\lambda\equiv \lambda_1, \lambda_2,\cdots ,\lambda_N\}$ its spectrum. For simplicity, we assume that the largest eigenvalue $\lambda$ of $G$ is unique. Let $W:\mathcal{E}\to \mathbb{R}^+$ be the weight function, sending edges of the graph to their weights. If $G$ is not weighted then $W$ is identically 1. Let $\mathsf{P}_\lambda$ be the projector onto the dominant eigenvector of $G$ and $\eta:=\prod_{i>1}^N(1-\lambda_i/\lambda)$. Then $$ \eta(\mathsf{P}_\lambda)_{ij} =\sum_{p:\,i\to j} \lambda^{-\ell(p)}W(p)\,c(p), $$ where the sum runs over all simple paths from $i$ to $j$ and the weight of a path is the product of the weights of the edges it traverses. \end{proposition} \begin{remark} When $i\equiv j$, the only simple path from $i$ to itself is the length-0 path that is stationary on $i$.
The weight of the empty path is the empty product, with value 1, and we therefore recover the result of \cite{Giscard2017} $$ \eta\,\mathrm{eig}(i)^2=\eta (\mathsf{P}_\lambda)_{ii} = c(\{i\}), $$ where $\mathrm{eig}(i)$ is the $i$th entry of the dominant eigenvector. \end{remark} \begin{proof} This relation follows from, e.g., the path-sum formulation of the resolvent function $\mathsf{R}(z):=(\mathsf{I}-z\mathsf{A})^{-1}$ \cite{Giscard2013}. We have $$ \mathsf{R}(z)_{ij} =\sum_{p:\,i\to j} z^{\ell(p)}W(p)\,\frac{\det(\mathsf{I}-z\mathsf{A}_{G\backslash p})}{\det(\mathsf{I}-z\mathsf{A})}. $$ In particular, the case $i\equiv j$ gives the well-known adjugate formula for the inverse, $\mathsf{R}(z)_{ii}=\det(\mathsf{I}-z\mathsf{A}_{G \backslash i})/\det(\mathsf{I}-z\mathsf{A})$. Introducing the adjugate matrix $\mathrm{Adj}(\mathsf{I}-z\mathsf{A})_{ij} := \det(\mathsf{I}-z\mathsf{A})\mathsf{R}(z)_{ij}$ explicitly, we have $$ \mathrm{Adj}(\mathsf{I}-z\mathsf{A})_{ij} =\sum_{p:\,i\to j} z^{\ell(p)}W(p)\,\det(\mathsf{I}-z\mathsf{A}_{G\backslash p}), $$ and the result follows upon noting that $\lim_{z\to1/\lambda}\mathrm{Adj}(\mathsf{I}-z\mathsf{A}) = \eta \mathsf{P}_\lambda$.\\[-.5em] \end{proof} We can go further and establish the centrality $c(.)$ as an extension of the eigenvector centrality to groups of nodes along lines broadly similar to those advocated by Everett and Borgatti. To introduce the main result here, we need the (intuitive) definitions of union and intersection of subgraphs. Let $H,\,H'$ be two subgraphs of $G$. We designate by $H\cup H'$ the subgraph of $G$ whose vertex set is the set-theoretic union of the vertex sets of $H$ and $H'$, $\mathcal{V}(H\cup H')=\mathcal{V}(H)\cup\mathcal{V}(H')$. Similarly, $H\cap H'$ is the subgraph of $G$ with vertex set $\mathcal{V}(H)\cap\mathcal{V}(H')$. \begin{proposition} Let $G$ be a finite graph with no negative weights and let $\{H_1,\cdots, H_n\}$ be a set of connected induced subgraphs of $G$.
Then $$ c\Big(\bigcup_{i=1}^nH_i\Big)=\sum_{S\subseteq\{1,\cdots,n\}}(-1)^{|S|-1} c\Big(\bigcap_{s\in S} H_s\Big) . $$ \end{proposition} \begin{proof} This follows from the definition of $c(H)$ as the fraction of all network flows intercepted by $H$. A direct application of the inclusion-exclusion principle gives the result. \end{proof} An immediate corollary then explicitly shows how the centrality $c(.)$ of any group of nodes arises from the interplay between their eigenvector centralities: \begin{corollary} Let $G$ be a finite graph with no negative weights. Let $\mathcal{V}_H:=\{v_1,\cdots,v_n\}\subseteq\mathcal{V}$ be a group of nodes on $G$. Then $$ c\big(\{v_1,\cdots,v_n\}\big)=\eta\,\sum_{i=1}^n \mathrm{eig}(v_i)^2 - \sum_{i,j\in \mathcal{V}_H}f(\{v_i,v_j\})+ \sum_{i,j,k\in \mathcal{V}_H}f(\{v_i,v_j,v_k\})-\cdots, $$ where $f(\{v_i,v_j, v_k,\cdots\})$ is the fraction of all network flows intercepted by all of $v_i$, $v_j$, $v_k$, etc. \end{corollary} \subsection{Wolfe's dataset} \begin{table} \centering \textbf{Centralities of groups of monkeys in Wolfe's dataset\\} \vspace{1mm} \begin{tabular}{cccccc} \textbf{Group} & \textbf{Members} & $\mathbf{c(H)}$\textbf{ in $\%$} & \thead{Degree\\group centrality}& \thead{Average closeness \\group centrality}&\thead{Group \\betweenness} \\ \hline Age 10$-$13 & 2~3~8~12~16 & 67$\%$ & 11&15&43.5\\ Age 7$-$9 & 4~5~9~10~15~17 & 57$\%$ & 5 &13.7&0 \\ Age 14$-$16 & 1~6~11~13~19& 49$\%$ & 8 &18&2.84 \\ Age 4$-$6 &7~14~18~20&34$\%$ & 5 &20.5&0 \\[.5em] Females &$6-20$&95$\%$ & 4&6.4&0.5 \\ Males &$1-5$&67$\%$ & 10&16&24.34 \\ \hline \end{tabular} \caption{\label{Monkeys} Comparison between several of Everett and Borgatti's group centralities \cite{Everett1999} and the centrality $c(H)$. The centrality values for $c(H)$ are given here in $\%$ as they give the proportions of all successions of interactions between monkeys involving at least one member of the group.
The centrality $c(H)$ was computed with the FlowFraction algorithm available on the Matlab File Exchange \cite{MatlabFiles}.} \vspace{-5mm} \end{table} We begin our concrete comparison with group-centralities on the Wolfe primate dataset \cite{UCINET}, a small real-world network which was studied by Everett and Borgatti. This dataset provides the number of times monkeys of a group of 20 have been spotted together next to a river by the anthropologist Linda Wolfe. Our results are shown in Table~\ref{Monkeys}. Here the properties that $c$ is always between 0 and 1 and that its values have actual meaning are clearly advantageous. For example, we can now not only tell that the age group 10$-$13 is the most central, as Everett and Borgatti noted, but we can concretely assert that $67\%$ of all flows of interactions between monkeys involved at least one member of this group. By a flow (or chain) of interactions, we mean successions of interactions between monkeys, including interactions that may occur simultaneously. For example, we can have monkey 1 interact with 3, who then interacts with 8, while concurrently 2 meets 4, etc.\\[-.8em] Similarly, we note that almost $95\%$ of all flows of interactions involved at least one female, while this percentage dropped to $64\%$ for males, in spite of male 3 being the most central individual monkey in the entire group by all measures. Thus, according to $c(H)$ and contrary to all the group centralities reported here,\footnote{Everett and Borgatti also discuss normalisations of the group-centralities. In the case of the degree group-centrality, the normalisation is defined to be the degree group centrality divided by the number of nodes which do not belong to the group under consideration.
Normalisations tend to rank females ahead of males, as $c(H)$ does, but they represent non-linear transformations of the original group-centralities, making their interpretation more difficult.} females are quantitatively more important in mediating social interactions than the males. Here, it may help to know that the monkeys observed by Wolfe were feral Rhesus macaques (\textit{Macaca mulatta}), a species where females stay in the group of their birth, providing its dominance rank structure, while males must change group when reaching sexual maturity, around 4 years old. Furthermore, during the mating season, females favour multiple interactions with different males, including low-ranking ones \cite{Lindburg1971}. Finally, females typically outnumber males, sometimes by as much as 3 to 1. These observations suggest that females should indeed account for a larger share of all interactions between monkeys than the males.\\[-.8em] Another point of importance for the comparison is the age group 7$-$9, which is ranked higher than the age group 14$-$16 by $c(H)$, while the group-centralities consistently yield the opposite order. On this point, we observe that Rhesus macaques are peculiar in that younger females have higher social ranks than their older peers \cite{Hill1996,Waal1993}. In the closely related Japanese macaques (\textit{Macaca fuscata}), dominance rank is known to be positively correlated with the frequency of social interactions \cite{Singh1992}. \subsection{Yeast PPI network and protein complexes} In this section we study the PPI network of the yeast \textit{Saccharomyces cerevisiae}, using high-quality data from \cite{Hart2007}, which provides a network comprising 5303 interactions between 1689 individual proteins. These proteins are known to belong to complexes, a curated list of which is provided by the Munich Information Center for Protein Sequences (MIPS) \cite{Guldener2006}.
The authors of \cite{Hart2007} have shown that some of the MIPS complexes could be recovered from a run of the MCL clustering algorithm on the network. Our goal here is twofold: i) to show that the centrality $c(.)$ can also be used to recover MIPS protein complexes, for which it provides additional information; and ii) to show that the degree group centrality fails to do so. Here, we focus specifically on the degree group centrality as the degree centrality is the vertex measure of importance which has seen the most success in biology, see e.g. \cite{Mukhtar2011}.\\[-.5em] \begin{figure}[t!] \vspace{-2mm} \centering \includegraphics[width=.8\linewidth]{ComplexProtein.eps}\\ \includegraphics[width=.8\linewidth]{DegreeGroup.eps} \caption{\label{triplets} Distributions of triplet centralities. Top: normalised triplet centralities $c(t)/\max_{t\text{ triplet}}\big(c(t)\big)$, bottom: normalised degree group centrality $g(t)/\max_{t\text{ triplet}}\big(g(t)\big)$ introduced in \cite{Everett1999}.} \end{figure} We calculated the centralities $c(.)$ and degree group centralities of all edges, connected triplets and connected quadruplets of proteins on the network. What is interesting here is the distribution of centrality values, which we show in Fig.~\ref{triplets} in the case of triplets.\footnote{Edges and quadruplets give broadly similar distributions. While complexes Co1, Co2 and Co3 are just as markedly visible in quadruplet data as in triplet data, quadruplets do lead to better segregation of complexes Co4, Co5 and Co6.} In the case of the centrality $c(.)$ proposed here, the distribution of triplet centrality values is organised into separate plateau-like clusters, which actually reveal the underlying protein complexes. Recovering the list of proteins involved in these clusters yields complexes which can be found in curated databases \cite{Pu2009}.
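The triplet scan just described can be sketched in a few lines. The code below is a naive Python/NumPy enumeration on a toy 6-cycle, purely for illustration (the function names and the example graph are ours); it is not a reproduction of the published analysis of the PPI network.

```python
import itertools
import numpy as np

def connected(A, nodes):
    """True if the subgraph induced by `nodes` is connected, ignoring edge direction."""
    nodes = list(nodes)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        i = stack.pop()
        for j in nodes:
            if j not in seen and (A[i, j] != 0 or A[j, i] != 0):
                seen.add(j)
                stack.append(j)
    return len(seen) == len(nodes)

def triplet_centralities(A):
    """c(t) = det(I - A_{G\\t} / lambda) for every connected vertex triplet t."""
    lam = max(abs(np.linalg.eigvals(A)))
    n = A.shape[0]
    out = {}
    for t in itertools.combinations(range(n), 3):
        if connected(A, t):
            keep = [i for i in range(n) if i not in t]
            M = np.eye(len(keep)) - A[np.ix_(keep, keep)] / lam
            out[t] = float(np.linalg.det(M))
    return out

# Toy example: a 6-cycle. Its connected triplets are the six runs of three
# consecutive vertices and, by symmetry, they all intercept the same
# fraction of flows.
m = 6
A6 = np.zeros((m, m))
for i in range(m):
    A6[i, (i + 1) % m] = A6[(i + 1) % m, i] = 1.0
tc = triplet_centralities(A6)
print(tc)
```

On the 6-cycle, removing any three consecutive vertices leaves a 3-vertex path, so all six connected triplets receive the same centrality; real networks, of course, break this symmetry, and it is precisely the resulting clustering of values that reveals the complexes.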
Mathematically, the fact that complexes lead to clustered plateau-like centrality values for triplets means that the frequency with which proteins belonging to these complexes are involved in successions of protein reactions depends first and foremost on the complexes themselves. In other words, the frequency of protein activation is determined at the complex level.\\[-.5em] The dominant complex, here denoted Co1, comprises 30 proteins\footnote{It comprises proteins ASF1, EHD3, FYV4, MAM33, MRP1, MRP4, MRP10, MRP13, MRP21, MRP51, MRPS5, MRPS8, MRPS9, MRPS16, MRPS17, MRPS18, MRPS28, MRPS35, NAM9, PET123, RSM7, RSM10, RSM18, RSM19, RSM22, RSM23, RSM24, RSM25, RSM26 and RSM27.} and is found in both the MIPS database and in \cite{Pu2009}, where it is known as the mitochondrial ribosomal small subunit. Interestingly, Co1 is identical to the third largest complex recovered by running the MCL algorithm on the same dataset \cite{Hart2007}, with the addition of the proteins ASF1 and MAM33, a nucleosome assembly factor and a protein of the mitochondrial matrix involved in oxidative phosphorylation, respectively. In the latter case, we note that several complexes involving MAM33 and proteins of the mitochondrial ribosomal small subunit have been proposed in experimental studies \cite{MAM33}. Complex Co2 comprises 21 proteins.\footnote{These are ASF1, CDC48, CKA1, HAT1, HAT2, HHF1, HHF2, HHT2, HIF1, HIR2, PDS5, POB3, PSE1, PSH1, RAD53, RTG2, RTT106, SPT16, YDL156W, YIL070C and YKU70.} It includes the entire complex C17 determined by the MCL method \cite{Hart2007}, together with 6 additional proteins, all of which have been proposed to form complexes (in particular the HIR and Rad53p-Asf1p complexes) with one or more proteins of C17 in separate studies \cite{Pu2009} as well as in the MIPS database.
Complex Co3 comprises 64 proteins and overlaps significantly with the nucleosomal protein and CID 14 complexes of \cite{Pu2009}, the latter of which includes the Casein kinase II, RNA polymerase II and Cdc73/Paf1 complexes.\footnote{This complex comprises ASF1, CDC34, CDC48, CDC53, CDC73, CDC9, CHD1, CKA1, CKA2, CKB1, CKB2, CTR9, DOA1, GRR1, HAT1, HAT2, HHF1, HHF2, HHT2, HIF1, HIR1, HIR2, HOT1, HPC2, HTA1, KAP114, LEO1, MET30, MKT1, MRF1, NAP1, NPL4, ORC2, ORC3, ORC4, ORC5, PAF1, PDS5, PEX19, POB3, POL12, PSE1, PSH1, RAD27, RAD53, RPS1B, RRP7, RTF1, RTG2, RTT101, RTT106, SHP1, SKP1, SPO12, SPT16, TOP1, UFD1, ULP1, UTP22, YDL156W, YDR049W, YGR017W, YKU70 and YKU80.}\\[-.5em] An advantage of the classification method employed here is that, contrary to MCL, it allows for overlapping complexes, i.e. proteins which function in different complexes, as is expected biologically. At the same time, a drawback is that small centrality values are not segregated well enough to clearly distinguish clusters of values and hence complex boundaries. At least three more complexes, Co4, Co5 and Co6, could possibly be distinguished, all of which can be found in the MIPS database; however, these are less clear-cut than the first three complexes and so are left out of this work. Empirically, we found that this problem could be somewhat reduced by looking at quadruplets, quintuplets etc., but this comes at a great computational cost given the number of such objects. A random sampling scheme may be able to bypass this difficulty.\\[-.5em] In comparison, the distribution of degree group centrality shows no trace of the underlying protein complexes and reveals little more than the simple distribution of vertex degrees.
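For concreteness, the group degree centrality of \cite{Everett1999} used in this comparison can be sketched as follows. This is a minimal, stdlib-only illustration on a toy graph of our own invention, not the yeast data nor the code of \cite{MatlabFiles}; the normalisation by the number of outside vertices follows the standard definition.

```python
from itertools import combinations

def group_degree_centrality(adj, group):
    """Everett-Borgatti group degree centrality: fraction of vertices
    outside `group` having at least one neighbour inside it."""
    group = set(group)
    outside = set(adj) - group
    touched = {v for v in outside if adj[v] & group}
    return len(touched) / len(outside)

def is_connected(adj, group):
    """Check that `group` induces a connected subgraph."""
    group = set(group)
    seen, todo = set(), [next(iter(group))]
    while todo:
        v = todo.pop()
        seen.add(v)
        todo += [u for u in adj[v] & group if u not in seen]
    return seen == group

# Toy graph standing in for the PPI network (NOT the yeast data):
# two triangles {0,1,2} and {3,4,5} bridged by the edge (2,3).
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
adj = {v: set() for e in edges for v in e}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# score every connected triplet, as for the distributions discussed above
triplets = [t for t in combinations(sorted(adj), 3) if is_connected(adj, t)]
scores = {t: group_degree_centrality(adj, t) for t in triplets}
best = max(scores, key=scores.get)
```

On this toy graph the triangle $(0,1,2)$ scores $1/3$ while a bridging triplet such as $(2,3,4)$ scores $1$: the measure essentially tracks local degrees, consistent with the complex-blind distributions it produces on the yeast network.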
While we do not recommend the use of the centrality $c(.)$ as a clustering tool, owing to its greater computational cost compared with algorithms such as MCL, we believe that its performance in this domain bears witness to the sensitivity of the proposed centrality to underlying network features. Conversely, the notion of group-centrality may be too coarse to perceive such features in the data, at least in the case of PPI networks. \section{Conclusion} In this second work on the centrality $c(.)$, we have rigorously established its meaning as a fraction of network flows intercepted by any chosen ensemble of nodes. The centrality $c(.)$ not only induces the eigenvector centrality on vertices, but is a proper extension of it through an application of the inclusion-exclusion principle on network flows. Finally, we have shown on two real-world networks that the centrality $c(.)$ is more sensitive to critical network features than existing group-centralities. In particular, the centrality of triplets of proteins in the PPI network of the yeast was sufficient to distinguish protein complexes found in curated databases of experimental results. We recall that in our previous study \cite{Giscard2017}, the centrality $c(.)$ already produced the best available model for pathogen targeting in \textit{Arabidopsis thaliana}, yielding a $25\%$ improvement over the state-of-the-art model of \cite{Mukhtar2011}. We hope that these results will spur further research on the use of the centrality in biology. \section*{Declarations} \footnotesize{ \noindent \textbf{Availability of data and material} Raw data concerning Wolfe's dataset and the PPI of the yeast can be found in \cite{UCINET} and \cite{Hart2007}, respectively. The algorithms used to compute the centrality values are available online, on the Matlab File Exchange \cite{MatlabFiles}.\\[-.5em] \noindent \textbf{Authors' contributions} P.-L. Giscard performed the research and both P.-L. Giscard and R. C.
Wilson wrote the article.\\[-.5em] \noindent \textbf{Competing interests} P.-L. Giscard and R. C. Wilson declare no competing interests, financial or otherwise.\\[-.5em] \noindent \textbf{Funding} P.-L. Giscard is grateful for the financial support of the Royal Commission for the Exhibition of 1851. The Royal Commission played no role in the present study and had no influence on the analysis of the data.}\\[-.5em] \noindent \textbf{Acknowledgement} We thank Paul Rochet of the Laboratoire Jean-Leray, Nantes, France, for stimulating discussions.
\def\scs#1{\section{\sc #1}} \def\scss#1{\subsection{\sc #1}} \def\scsss#1{\subsubsection{\sc #1}} \def\centertitle#1{ \vspace{20pt} \begin{centerline}{\it #1}\end{centerline} \vspace{10pt} } \begin{document} \begin{titlepage} \begin{flushright} {\small $\,$} \end{flushright} \vskip 1cm \centerline{\LARGE{\bf{Supersymmetric Yang-Mills Theories:}}} \vskip 0.5cm \centerline{\LARGE{\bf{not quite the usual perspective}}} \vskip 1.5cm \centerline{Sudarshan Ananth$^{\,\dagger}$, Hermann Nicolai$^{\,\star}$, Chetan Pandey$^{\,\dagger}$ and Saurabh Pant$^{\,\dagger}$} \vskip 0.5cm \centerline{${\,}^\dagger$\it {Indian Institute of Science Education and Research}} \centerline{\it {Pune 411008, India}} \vskip 0.5cm \centerline{${\,}^\star$\it {Max-Planck-Institut f\"ur Gravitationsphysik (Albert-Einstein-Institut)}} \centerline {\it {Am M\"{u}hlenberg 1, 14476 Potsdam, Germany}} \vskip 1.5cm \centerline{\bf {Abstract}} \vskip .5cm In this paper, we take up an old thread of development concerning the characterization of supersymmetric theories without any use of anticommuting variables that goes back to one of the authors' very early work \cite{Nic1}. Our special focus here will be on the formulation of supersymmetric Yang-Mills theories, extending previous results beyond $D=4$ dimensions. This perspective is likely to provide new insights into these theories, and in particular the maximally extended $N=4$ theory. As a new result we re-derive the admissible dimensions for interacting (pure) super-Yang-Mills theories to exist. \newline This article is dedicated to the memory of Peter Freund, amongst many other things an early contributor to supersymmetry, and an author of one of the very first papers on superconformal gauge theories \cite{Freund1}. The final section contains some personal reminiscences of H.N.'s encounters with Peter Freund.
\vfill \end{titlepage} \renewcommand\footnoterule{} \thispagestyle{empty} \mbox{ } \vspace{5mm} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \section{Introduction and Conventions} There is a large literature on supersymmetric Yang-Mills theories (see e.g. \cite{SYM0}), particularly concerning the maximally extended $N=4$ theory in four dimensions, or equivalently, pure super-Yang-Mills theory in $D=10$ \cite{SYM1}. This theory has been proposed to underlie M theory, either via the AdS/CFT correspondence \cite{Mal}, or, in its dimensionally reduced form, the maximally supersymmetric SU($\infty$) matrix model \cite{Matrix1,Matrix2}. In view of these far reaching conjectures it appears worthwhile and expedient to investigate supersymmetric Yang-Mills theories and their properties from {\em every} possible angle\footnote{In particular to confirm the often heard claim that maximally extended $N=4$ theory {\em defines} non-perturbative quantum gravity in the AdS$_5$ bulk in terms of the boundary theory via a holographic correspondence. At the very least, a full validation of this statement would require a {\em non-perturbative} definition of the boundary theory itself, which is not yet available.}. Here we attempt to do so by following a route different from the one usually taken. This goes back to the early work of one of the authors \cite{Nic1,Nic2} which has been followed up on only intermittently since the mid-eighties, after important early work by Flume and Lechtenfeld \cite{FL}, Dietz and Lechtenfeld \cite{DL1,DL2}, and an intriguing attempt at a closed form expression for the half-maximal $N=2$ theory by de Alfaro, Fubini, Furlan and Veneziano \cite{Fub1}. In this paper we extend, in a minor way, the old results of \cite{Nic1} by showing that the constructions presented there extend to all pure supersymmetric Yang-Mills theories and, in particular the maximally extended $N=4$ theory. 
We also clarify the link between the results of \cite{Nic1,Nic2} and \cite{FL,DL1,DL2,L1}. New results presented here are part of a larger ongoing project \cite{SYM2} where amongst other things, we shall extend these considerations to the next order in the coupling constant. In view of the huge body of results on the $N=4 $ theory, obtained mostly in the context of the AdS/CFT correspondence, there are numerous directions to be explored. The main result of \cite{Nic1} can be summarized as follows: for any rigidly supersymmetric theory with at most quadratic fermionic terms in the Lagrangian, there exists a non-linear and non-local transformation of the bosonic fields (``Nicolai map") that linearizes the bosonic action in such a way that the Jacobian of the bosonic field transformation equals the determinant (Berezinian) obtained upon integrating out all anticommuting fields. Specializing right away to supersymmetric gauge theories (the case of interest here), the statement is that the gauge fields admit a non-linear and non-local transformation \begin{equation} T_g[A]_\mu^a(x) \equiv A^{'a}_\mu(x,g;A) \end{equation} (where $g$ is the Yang Mills coupling constant) with the following properties: \begin{enumerate} \item Substitution of $A'(A)$ into the free Maxwell action (or rather: sum of Maxwell actions) yields the interacting theory, {\em viz.} \begin{equation} {\cal S}_0[A'(A)] = {\cal S}_g[A] \equiv \frac14 \int d^D x\, F_{\mu\nu}^a F_{\mu\nu}^a \end{equation} where \begin{equation}\label{Fmn} F_{\mu\nu}^a \,\equiv \, {\partial }_\mu A_\nu^a - {\partial }_\nu A_\mu^a + gf^{abc} A_\mu^b A_\nu^c \end{equation} is the Yang-Mills field strength [with fully antisymmetric structure constants $f^{abc}$ for the chosen gauge group, usually SU($n$)], and ${\cal S}_0$ is the free Maxwell action (that is, ${\cal S}_g$ for $g=0$). The statement also remains correct with a gauge fixing term, cf. (\ref{gf2}) below. 
\item The Jacobian of the transformation equals the product of the Matthews-Salam-Seiler (MSS) determinant (or Pfaffian) \cite{MSS} obtained by integrating out the gauginos and the Faddeev-Popov (FP) determinant \cite{FP} (obtained by integrating out the ghost fields $C^a,\bar{C}^a$), \begin{equation}\label{Det} {\rm det\,} \left(\frac{\delta A^{'a}_\mu(x,g;A)}{\delta A^b_\nu(y)}\right) = \Delta_{MSS} [A]\;\Delta_{FP}[A] \end{equation} at least in the sense of formal power series. \end{enumerate} One can thus characterize an important class of rigidly supersymmetric theories in a way that makes no use of anticommuting variables at all. In this contribution we will explain this result (which for $D=4$ super-Yang-Mills theory was obtained and proved long ago in \cite{Nic1,Nic2,FL,DL1,DL2}) in simple terms by explicitly rederiving the map up to ${\cal O}(g^2)$, and extending it to all pure supersymmetric Yang-Mills theories (analogous results also hold for non-anomalous matter coupled supersymmetric gauge theories, but these will be of no concern here). As a new result, using this approach we will recover the well-known result of \cite{SYM1} that interacting pure supersymmetric Yang-Mills theories can exist only in space-time dimensions $D=3,4,6,10$ (for the free theories this is simply a consequence of the equality of bosonic and fermionic degrees of freedom on shell). At least as far as the known results are concerned, this formalism does not care about the question of whether there exists an off-shell formulation, and could in principle even provide a (non-perturbative) regulator of these theories that could preserve basic features of supersymmetry and gauge invariance even in the regulated theory, though in disguised form.
Accordingly, this approach adopts the opposite strategy from the usual one, of introducing ever more auxiliary and ghost degrees of freedom which in turn must be removed by yet more auxiliary gauge transformations, with commuting and anticommuting parameters, involving superspace formulations, Wess-Zumino gauges, and the like. We start with some conventions. We will be slightly cavalier about the space-time signature, which can be taken to be either Euclidean (as in \cite{Nic1,Nic2}) or Minkowskian (as in \cite{FL,DL1,DL2}). The Minkowskian signature is perhaps more convenient if one wants to avoid issues concerning the existence (or not) of Majorana spinors in Euclidean space-times. The usual assumption that (interacting) functional measures have a better chance of being rigorously defined when using a Euclidean signature is actually not so relevant in view of the fact that {\em Gaussian} functional measures are well-defined even with imaginary (oscillatory) exponents, via their 2-point correlators and Wick's theorem. Ideally, this is all that is needed here --- of course, provided one can succeed in producing a closed form expression for the map $T_g$, or something close to it, which is no small order! Such closed form solutions indeed exist for special models, such as supersymmetric quantum mechanics \cite{CG}, as well as certain Wess-Zumino-type or Landau-Ginzburg-type $N=2$ models in two dimensions (see \cite{L2}, and \cite{KK} for recent results). Alternatively, one can simply regard the main formulas in section~3 as analytic continuations of the corresponding Minkowskian ones, even independently of their derivation. 
We will need covariant derivatives only for the adjoint representation (in which the gauginos also transform); they are \begin{equation} \label{Dmu} D_\mu V^a \,\equiv \, \partial_\mu V^a + g f^{abc} A_\mu^b V^c \;\; \Rightarrow \quad [D_\mu , D_\nu] \, V^a = g f^{abc} F_{\mu\nu}^b V^c \end{equation} with the Yang-Mills field strength (\ref{Fmn}). The free scalar propagator is (with the Laplacian $\Box \equiv \partial^\mu \partial_\mu$) \begin{equation} C(x) = \int \frac{d^D k}{(2\pi)^D} \frac{e^{ikx}}{k^2} \quad \Rightarrow \quad - \Box C(x) = \delta(x) \end{equation} where $\delta(x) \equiv \delta^{(D)}(x)$ is the $D$-dimensional $\delta$-function. For arbitrary $D$ we have \begin{equation} C(x) = \frac1{(D-2)D\pi^{D/2}} \, \Gamma\left( \frac{D}2 +1\right) (x^2)^{1- \frac{D}2} \; ; \end{equation} in particular, for $D=4$ \begin{equation} C(x) = \frac1{4\pi^2}\cdot \frac1{x^2} \end{equation} When writing $\partial_\lambda C(x-y) \equiv (\partial/\partial x^\lambda) C(x-y) \equiv \partial_\lambda^x C(x-y)$, the derivative by convention {\em always acts on the first argument}. Careful track needs to be kept of the sign flips $\partial_\lambda^x C(x-y) = - \partial_\lambda^y C(x-y) = + \partial_\lambda^x C(y-x) = - \partial^y_\lambda C(y-x)$. The free fermionic propagator is \begin{equation} \gamma^\mu\partial_\mu S_0(x) = \delta (x) \quad \Rightarrow \quad S_0 (x) = - \gamma^\mu \partial_\mu C(x) \end{equation} where the spinor indices are suppressed. This implies $S_0(x-y) = - S_0(y-x)$.
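As a quick numerical cross-check of the normalisation (our addition, not part of the derivation), the general-$D$ coefficient reduces to the familiar $D=4$ expression, using $\Gamma(\frac{D}2+1) = \frac{D}2(\frac{D}2-1)\Gamma(\frac{D}2-1)$:

```python
import math

def C_prefactor(D):
    """Coefficient of (x^2)**(1 - D/2) in the free scalar propagator
    C(x) = Gamma(D/2 + 1) / ((D - 2) * D * pi**(D/2)) * (x^2)**(1 - D/2)."""
    return math.gamma(D / 2 + 1) / ((D - 2) * D * math.pi ** (D / 2))

# D = 4 must reproduce C(x) = 1 / (4 * pi^2 * x^2)
assert math.isclose(C_prefactor(4), 1 / (4 * math.pi ** 2))

# equivalent textbook form Gamma(D/2 - 1) / (4 * pi**(D/2)), via
# Gamma(D/2 + 1) = (D/2) * (D/2 - 1) * Gamma(D/2 - 1)
for D in (3, 4, 6, 10):
    assert math.isclose(C_prefactor(D),
                        math.gamma(D / 2 - 1) / (4 * math.pi ** (D / 2)))
```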
The effective number of fermionic degrees of freedom (spinor components) will be designated by $r_D$, and of course depends on $D$, including extra factors of $\frac12$ for Majorana or Weyl spinors, and $\frac14$ for Majorana-Weyl spinors, respectively. For pure supersymmetric Yang-Mills theories the only possibilities are \begin{equation}\label{Dr} D\,=\, 3,4,6,10 \qquad \Longleftrightarrow \qquad r_D\,=\, 2,4,8,16 \end{equation} With Minkowskian signature, for $D=4$ space-time this corresponds to a Majorana spinor, for $D=6$ to a Weyl spinor, while for $D=10$ we have one more factor of $\frac12$ because of the Majorana-Weyl condition. We shall rederive this constraint in section~3 {\em without} any use of anticommuting objects. We also need the fermionic propagator in a gauge-field dependent background characterized by $A_\mu^a(x)$ \begin{equation} \gamma^\mu (D_\mu S)^{ab}(x) \equiv \gamma^\mu \Big[ \delta^{ac}\partial_\mu - gf^{acd} A_\mu^d(x) \Big] S^{cb}(x) = \delta^{ab} \delta(x) \end{equation} Using the standard relation $(1-X)^{-1} = 1 + X + X^2 + \cdots$ the full propagator can be expanded in terms of free propagators and the background gauge field as \begin{equation}\label{S} S^{ab}(x,y;A) = S_0^{ab}(x-y) \,+\, g \int du \, S_0^{ac}(x-u) f^{cdm} {A\!\!\!/}^m(u) S_0^{db}(u-y) \,+\, \cdots \end{equation} Below we will also use the shorthand notation \begin{equation} S^{ab} = S_0^{ab} + g (S_0 * {A\!\!\!/} * S_0)^{ab} + g^2(S_0 * {A\!\!\!/} * S_0 *{A\!\!\!/} * S_0)^{ab} \, + \cdots \end{equation} for such expansions, with the convention that the contraction of the structure constant with the gauge field is usually through the last index, as displayed above.
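The pairing of $D$ and $r_D$ in (\ref{Dr}) encodes the on-shell matching of bosonic and fermionic degrees of freedom: a gauge field carries $D-2$ transverse polarisations, while a spinor with $r_D$ real components carries $r_D/2$ on shell. A minimal check (assuming the standard count of $2^{\lfloor D/2\rfloor}$ complex components for a Dirac spinor):

```python
# On-shell matching behind D = 3, 4, 6, 10  <->  r_D = 2, 4, 8, 16:
# a gauge field has D - 2 transverse polarisations, while a spinor with
# r_D real components carries r_D / 2 degrees of freedom on shell.
pairs = {3: 2, 4: 4, 6: 8, 10: 16}
for D, rD in pairs.items():
    assert D - 2 == rD // 2

def r(D, factor):
    """Real components of a spinor in D dimensions: a Dirac spinor has
    2**(D // 2) complex components; `factor` is the Majorana/Weyl reduction."""
    return int(2 * 2 ** (D // 2) * factor)

assert r(3, 1/2) == 2    # Majorana
assert r(4, 1/2) == 4    # Majorana
assert r(6, 1/2) == 8    # Weyl
assert r(10, 1/4) == 16  # Majorana-Weyl
```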
Although the formalism works for other gauge choices as well\footnote{In particular the axial, and more specifically, the light-cone gauge, which is of special interest in view of a possible link with the results of \cite{ABR}, \cite{AT}.}, we will consider only one gauge fixing function here, namely the Landau gauge \begin{equation} G^a[A_\mu] = \partial^\mu A_\mu^a \end{equation} The functional integral over gauge fields will thus be understood to contain a $\delta$-functional implementing the gauge condition, that is \begin{equation}\label{deltafunctional} \int {\cal D} A_\mu^a\, (\cdots )\;\; \rightarrow \; \int {\cal D}[A_\mu^a] \, \prod_{x, a} \delta\Big(\partial^\mu A^a_\mu(x)\Big) \, (\cdots ) \end{equation} The ghost propagator \begin{equation} G^{ab} (x) \equiv \underbracket[0.5pt]{C^a(x) \bar{C}^b}(0) \end{equation} obeys \begin{equation} - \partial^\mu (D_\mu G)^{ab}(x) = \delta^{ab}\delta (x) \end{equation} for the Landau gauge. As with the fermions, we can expand it in terms of free propagators. While $G^{ab}(x)$ does depend on $g$ and the background field $A_\mu^a(x)$, this dependence drops out in $D_\mu G^{ab}(x)$; more specifically, we have \begin{equation}\label{DGab} D_\mu G^{ab}(x) = \delta^{ab} \partial_\mu C(x) \end{equation} with the free propagator $C(x)$. \section{${\cal R}$ prescription (Landau gauge)} A systematic order by order construction of the {\em inverse} transformation $T_g^{-1}$, and in fact a proof of the main theorem above, at least for the $N=1$, $D=4$ theory, is provided by the ${\cal R}$ prescription introduced in \cite{FL,DL1,Nic2,DL2,L1}.
To this aim we define the ${\cal R}$ operator \begin{equation} \label{cR} {\cal R} \,\equiv\, \frac{d}{dg} \,+\, {\bf{R}} \end{equation} which can be viewed as the Lie algebra generator of the {\em inverse} map $T_g^{-1}$ \begin{equation}\label{Tginv} (T_g^{-1}A)_\mu^a(x) \,=\, A_\mu^a(x) \,+ \, \sum_{n=1}^\infty \frac1{n!} \, g^n \, \Big({\cal R}^n \big[ A \big]_\mu^a(x)\Big)_{g=0} \end{equation} For the Landau gauge, the action of the second part ${\bf{R}}$ of the ${\cal R}$ operator on $A_\mu^a$ is defined by \begin{equation} \label{bR1} {\bf{R}}[A]_\mu^a(x) \,\equiv\, - \, \frac1{2r_D} \int du\, dv\, \Pi_{\mu\nu}(x-u){\rm Tr}\, \big( \gamma_\nu \gamma^{\rho\sigma} S^{ba}(v-u)\big) f^{bcd} A_\rho^c(v)A_\sigma^d(v) \end{equation} with the transversal projector \begin{equation}\label{Pi1} \Pi_{\mu\nu}(x-y) \,\equiv \, \left( \delta_{\mu\nu} - \frac{\partial_\mu \partial_\nu}{\Box}\right)\delta(x-y) \, \cong \, \delta_{\mu\nu} \delta(x-y) + \partial_\mu C(x-y) \partial_\nu \end{equation} where ``$\cong$'' means equality in the sense of distributions. \newline In (\ref{Tginv}), we need to keep the full $g$-dependence when successively acting with ${\cal R}$ at each step of the iteration, and only set $g=0$ at the very end, before inserting the result into the Taylor series expansion (if this is not implemented properly, crucial contributions will be missed from ${\cal O}(g^3)$ onwards, as outlined below). In the final step this series expansion has to be inverted to obtain the map $T_g$. This will be illustrated by the explicit calculation in the next section, where for simplicity we spell out all the relevant steps in detail (but only for the Landau gauge). We stress that the fermionic propagator $S$ in (\ref{bR1}) is the {\em full} propagator, and hence still depends on $g$ and the background gauge field.
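The final inversion of the formal power series for $T_g^{-1}$ can be illustrated on a one-variable toy model (our simplification: a single commuting variable with arbitrary coefficients $a,b$, not the actual functional inversion of (\ref{Tginv})). If $T_g^{-1}(x) = x + g\,a\,x^2 + g^2 b\,x^3 + {\cal O}(g^3)$, then order-by-order inversion gives $T_g(y) = y - g\,a\,y^2 + g^2(2a^2-b)\,y^3 + {\cal O}(g^3)$:

```python
def T_inv(x, g, a, b):
    # toy analogue of the series for the inverse map: identity plus O(g) terms
    return x + g * a * x**2 + g**2 * b * x**3

def T(y, g, a, b):
    # inverse series to second order, from iterating x = y - g*a*x^2 - g^2*b*x^3
    return y - g * a * y**2 + g**2 * (2 * a**2 - b) * y**3

# composing the two must return the identity up to O(g^3)
a, b, y = 0.7, -1.3, 0.9
for g in (1e-1, 1e-2, 1e-3):
    residual = abs(T_inv(T(y, g, a, b), g, a, b) - y)
    assert residual < 10 * g**3  # residual shrinks like g^3
```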
Furthermore, this formula is valid in all relevant dimensions, with $r_D$ and $D$ related as in (\ref{Dr}). The expression in (\ref{bR1}) follows directly from the formula (4.20) in \cite{Nic2} (originally due to \cite{FL}) \begin{equation} {\cal R}[X] \,=\, \frac{dX}{dg} \,+\, \underbracket{\delta_\alpha X \Delta_\alpha} \,+\, \int \underbracket{\bar C^a \underbracket{ \delta_\alpha G^a[A_\mu] \Delta_\alpha} s(X)} \end{equation} by working out the contractions and by substituting (\ref{DGab}). The correct prefactors and signs in (\ref{bR1}) were obtained by simply comparing this formula with the first order result in \cite{Nic1} (equation (3.24)). The ${\cal R}$ operator acts distributively, \begin{equation} {\cal R}\big[A^a_\mu(x) A_\nu^b(y) \cdots \big] \, \,= {\cal R}\big[A_\mu^a(x)\big] A_\nu^b(y)\cdots \, + \, A_\mu^a(x) {\cal R}\big[A_\nu^b(y)\big]\cdots \, + \, \dots \end{equation} From (\ref{bR1}) it follows immediately that the ${\cal R}$ operation preserves the gauge fixing function \begin{equation}\label{gf1} \partial^\mu \, {\cal R}\big[ A_\mu^a(x)\big] = 0 \end{equation} This will guarantee that the equality \begin{equation}\label{gf2} \partial^\mu (T_g(A)_\mu^a) (x) = \partial^\mu A_\mu^a(x) \end{equation} holds for all values of the Yang-Mills coupling constant $g$. Equations (\ref{cR}) and (\ref{bR1}) are our basic formulas, as their iterative application will yield the expansion coefficients of $T_g^{-1}$ to any desired order, though with a rapidly increasing number of terms.
The ${\cal R}$ operation is compactly represented by the functional differential operator \begin{equation} {\cal R} = \frac{d}{dg} - \, \frac1{2r_D} \int dx\,du\,dv\; \Pi_{\mu\nu}(x-u){\rm Tr}\, \big( \gamma_\nu \gamma^{\rho\sigma} S^{ba}(v-u)\big) f^{bcd} A_\rho^c(v)A_\sigma^d(v) \frac{\delta}{\delta A_\mu^a(x)} \end{equation} In particular, it acts as follows on the full fermionic propagator \begin{eqnarray} {\cal R}\, S^{ab}(x,y;A) &=& \int du \, S^{ac}(x,u;A) f^{cdm} \gamma^\lambda A_\lambda^m (u) S^{db}(u,y;A) \, - \\ && \hspace{-1cm} -\, \frac{g}{2r_D} \int du\, dv\, dw\; \Pi_{\mu\nu}(w-u){\rm Tr}\, \big( \gamma_\nu \gamma^{\rho\sigma} S^{pe}(v-u)\big) f^{pmn} A_\rho^m(v)A_\sigma^n(v) \, \times \nonumber\\ && \hspace{2cm} \times \; S^{ac}(x,w;A) f^{cde} \gamma_\mu S^{db}(w,y;A) \nonumber \end{eqnarray} Importantly, the second term comes with a factor of $g$ and will therefore drop out upon setting $g=0$. However, it will contribute at the next order when acting again with $d/dg$ and then setting $g=0$; this extra contribution will appear from the third order onwards. As mentioned already, the above prescription generates the {\em inverse} map, and can, in principle, be used to calculate $T_g^{-1}$ to arbitrary order.
However, while ${\cal O}(g^2)$ is still fairly straightforward to work out, as shown below, the procedure quickly becomes complicated at higher orders and is already cumbersome to evaluate at ${\cal O}(g^3)$ \cite{SYM2}.\footnote{The results for $T_g^{-1}$ up to ${\cal O}(g^3)$ can already be found in \cite{DL2}, but only in an implicit form, where the $\gamma$-traces have not been evaluated, and $T^{-1}_g$ has not been inverted to determine the map $T_g$ itself up to this order.} Independently of the question of whether the series expansion (\ref{Tginv}) and its inverse (in the sense of a formal power series) can be elevated to closed form expressions, it is a remarkable feature that these series admit a {\em finite} radius of convergence (with suitable norms on the function space of gauge field backgrounds). This follows by inspection of the ${\cal R}$-operation, which can be seen to generate ${\cal O}(c \,n)$ new terms at the $n$-th step of the iteration, and hence only ${\cal O}(c^n \,n!)$ terms at the $n$-th order ${\cal O}(g^n)$ (where $c$ is a model dependent constant). The well known combinatorial divergences of the quantized theory, with extra factors of $n!$, are then generated upon quantization in terms of the {\em free} field $A^{'a}_\mu$, and more specifically after contracting gauge field lines in the tree-like expansion of $T_g^{-1}$ in all possible ways \cite{Nic2,DL1}.
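The counting argument can be made concrete with a toy tally (the constant $c=3$ below is an arbitrary illustrative choice, not a value derived from the theory): if each application of the ${\cal R}$-operation multiplies the number of terms by roughly $c\,n$ at step $n$, then order $g^n$ carries about $c^n\, n!$ terms, and the $1/n!$ of the Taylor series reduces this to geometric growth $c^n$, i.e. a finite radius of convergence $\sim 1/c$.

```python
import math

def terms_at_order(n, c=3):
    """Toy count of terms after n applications of the R-operation,
    each multiplying the tally by c * k at step k, giving c**n * n!."""
    t = 1
    for k in range(1, n + 1):
        t *= c * k
    return t

for n in range(1, 8):
    assert terms_at_order(n) == 3 ** n * math.factorial(n)
    # the 1/n! of the Taylor series leaves only geometric growth c**n
    assert terms_at_order(n) // math.factorial(n) == 3 ** n
```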
\section{Lowest order computations to ${\cal O}(g^2)$} For the Landau gauge, the lowest order result is obtained by simply setting $g=0$ in (\ref{bR1}) (the $\frac{d}{dg}$ piece being trivially zero at lowest order) \begin{eqnarray}\label{Og1} {\cal R}\big[ A\big]_\mu^a (x) \Big|_{g=0} \,&=&\, - \, \frac1{2r_D} \int du\, {\rm Tr}\, \big( \gamma_\mu \gamma^{\rho\sigma} S_0^{ba}(u-x)\big) f^{bcd} A_\rho^c(u)A_\sigma^d(u) \nonumber\\[2mm] &=&\, -\, \int du\, \partial_\lambda C(x-u) f^{abc} A_\mu^b(u) A_\lambda^c(u) \end{eqnarray} The ghost contribution vanishes at this order because \begin{equation}\label{ghost0} \frac{\partial}{\partial u^\lambda} {\rm Tr}\, \big(\gamma_\lambda \gamma^{\rho\sigma} S_0(v-u)\big) \;\propto \; \delta(u-v) \, {\rm Tr}\, \gamma^{\rho\sigma} \,=\, 0 \end{equation} (but the ghost contribution does {\em not} necessarily vanish for the other gauge choices). At second order we have two contributions.
The first one arises from the application of $d/dg$ to (\ref{bR1}) \begin{equation}\label{S2a} - \, \frac1{2r_D} \int du\, dv\, dw\; \Pi_{\mu\nu} (x-u){\rm Tr}\, \Big( \gamma_\nu \gamma^{\rho\sigma} S^{cm}(v-w) f^{mnp} {A\!\!\!/}^p(w) S^{na}(w-u) \Big) f^{cde} A_\rho^d(v) A_\sigma^e(v) \end{equation} The ghost contribution contained in this expression simplifies to \begin{equation}\label{ghost1} +\, \frac1{2r_D} \int du\,dv\,dw\; \partial_\mu C(x-u) {\rm Tr}\, \left( \gamma_\lambda \gamma^{\rho\sigma} S^{cm}(v-w) f^{mnp} {A\!\!\!/}^p(w) \frac{\partial S^{na}(w-u)}{\partial u^\lambda} \right) f^{cde} A_\rho^d(v) A_\sigma^e(v) \end{equation} Working out the gamma traces, with \begin{equation} \frac1{2r_D} {\rm Tr}\, \big(\gamma_\alpha \gamma_\lambda \gamma_\beta \gamma_\mu \gamma^{\rho\sigma} \big) = - \delta_{\alpha\lambda} \delta^{\rho\sigma}_{\beta\mu} + \delta_{\alpha\beta} \delta^{\rho\sigma}_{\lambda\mu} - \delta_{\alpha\mu} \delta^{\rho\sigma}_{\lambda\beta} - \delta_{\beta\mu} \delta^{\rho\sigma}_{\alpha\lambda} + \delta_{\lambda\mu} \delta^{\rho\sigma}_{\alpha\beta} - \delta_{\lambda\beta} \delta^{\rho\sigma}_{\alpha\mu} \end{equation} setting $g=0$ (so that $S^{ab}$ is again replaced by $\delta^{ab} S_0$), and expressing everything in terms of the scalar propagator, the non-ghost part gives \begin{align} f^{abc} f^{bde} \int dv\, dw\; \Big[ & -\partial_\rho C(x-v) A_\sigma^c(v) \partial_\sigma C(v-w) A_\rho^d(w) A_\mu^e(w) \nonumber\\ & + \partial_\rho C(x-v) A_\sigma^c(v) \partial_\rho C(v-w) A_\sigma^d(w) A_\mu^e(w) \nonumber\\[1mm] & -\partial_\rho C(x-v) A_\sigma^c(v) \partial_\mu C(v-w) A_\sigma^d(w) A_\rho^e(w) \nonumber\\[1mm] & -\partial_\mu C(x-v) A_\rho^c(v) \partial_\sigma C(v-w) A_\sigma^d(w) A_\rho^e(w) \nonumber\\[1mm] & + \partial_\rho C(x-v) A_\mu^c(v) \partial_\sigma C(v-w) A_\sigma^d(w) A_\rho^e(w) \nonumber\\[1mm] & -\partial_\rho C(x-v) A_\rho^c(v) \partial_\sigma C(v-w) A_\sigma^d(w) A_\mu^e(w) \Big] \end{align} while (\ref{ghost1}) reduces to \begin{equation} - \, f^{abc} f^{bde} \int dv\,dw\; \partial_\mu C(x-v) A_\rho^c(v) \partial_\sigma C(v-w) A_\rho^d(w) A_\sigma^e(w) \end{equation} We thus see that the gauge transformation term $\propto \, \partial_\mu(\cdots)$ cancels between the two expressions: the effect of the transversal projection is precisely to remove any longitudinal terms.
A second set of terms comes from applying ${\bf{R}}$ to the $A_\rho^e (u) A_\sigma^d(u)$ in (\ref{bR1}), which gives
\begin{eqnarray}\label{bR2}
&+& \frac1{2r_D^2} \int du\,dv\,dw\,dz\; \Pi_{\mu\nu}(x-u)\, {\rm Tr}\, \big( \gamma_\nu \gamma^{\rho\sigma} S^{ba}(v-u)\big) f^{bcd} A_\rho^c(v) \, \times \nonumber\\[2mm]
&& \quad\quad \times \; \Pi_{\sigma\tau}(v-w) \, {\rm Tr}\, \big(\gamma_\tau \gamma^{\alpha\beta} S^{ed}(z-w)\big) f^{efg} A_\alpha^f(z) A_\beta^g(z)
\end{eqnarray}
As before we set $g=0$ (the ghost term does not contribute, again because of (\ref{ghost0})) to get
\begin{align}
f^{abc} f^{bde} \int dv\,dw\; & \Big[ - {\partial }_\rho C(x-v) A_\mu^c(v) {\partial }_\sigma C(v-w) A_\rho^d(w) A_\sigma^e(w) \nonumber\\
& + {\partial }_\rho C(x-v) A_\rho^c(v) {\partial }_\sigma C(v-w) A_\mu^d(w) A_\sigma^e(w) \Big]
\end{align}
Adding up all the contributions (and the factor $\frac12$ from the Taylor expansion) we obtain
\begin{align}\label{Tg2inv}
(T_g^{-1} A)^a_\mu(x) \,&=\, A_\mu^a(x) \, - \, g f^{abc} \int du\, {\partial }_\lambda C(x-u) A_\mu^b(u) A_\lambda^c(u) \nonumber\\[1mm]
& \; + \,\frac12 g^2 f^{abc} f^{bde} \int dv dw\, \Big[ -{\partial }_\rho C(x-v) A_\sigma^c(v) {\partial }_\sigma C(v-w) A_\rho^d(w) A_\mu^e(w) \nonumber\\
& \qquad\qquad+ {\partial }_\rho C(x-v) A_\sigma^c(v) {\partial }_\rho C(v-w) A_\sigma^d(w) A_\mu^e(w) \nonumber\\[1mm]
& \qquad\qquad -{\partial }_\rho C(x-v) A_\sigma^c(v) {\partial }_\mu C(v-w) A_\sigma^d(w) A_\rho^e(w) \nonumber\\[1mm]
& \qquad\qquad + 2 \,{\partial }_\rho C(x-v) A_\mu^c(v) {\partial }_\sigma C(v-w) A_\sigma^d(w) A_\rho^e(w) \nonumber\\[1mm]
& \qquad\qquad -2 \, {\partial }_\rho C(x-v) A_\rho^c(v) {\partial }_\sigma C(v-w) A_\sigma^d(w) A_\mu^e(w) \Big] \; + \;{\cal O}(g^3)
\end{align}
Inverting this result we obtain the map up to second order\footnote{Of course, with $3 f^{bde} {\partial }_{[\mu} C A_\lambda^d A_{\rho ]}^e \,\equiv\, f^{bde} \big({\partial }_{\mu} C A_\lambda^d A_{\rho }^e \,+\, {\partial }_{\lambda} C A_\rho^d A_{\mu }^e \,+\, {\partial }_{\rho} C A_\mu^d A_{\lambda }^e\big) $.}
\begin{eqnarray}\label{Tg2}
(T_gA)^a_\mu(x) \,&=&\, A_\mu^a(x) \, + \, g f^{abc} \int du\, {\partial }_\lambda C(x-u) A_\mu^b(u) A_\lambda^c(u) \\[2mm]
&& + \, \frac32 \, g^2f^{abc}f^{bde} \int du dv \, {\partial }_\rho C(x-u) A_\lambda^c(u) {\partial }_{[\mu} C(u-v) A_\lambda^d (v) A_{\rho]}^e(v) \, + \, {\cal O}(g^3) \nonumber
\end{eqnarray}
which agrees with the original result (Equation (3.24) of \cite{Nic1}). As mentioned already, both (\ref{Tg2inv}) and (\ref{Tg2}) can be read with either Euclidean or Minkowskian signature. For some simple (but non-trivial) quantum correlators involving scalar operators of the $N=4$ theory these formulas do give results which, up to ${\cal O}(g^2)$, precisely agree with those obtained using more standard techniques \cite{NP}. These computations also confirm the claim of \cite{DL1,DL2} that the amount of labor required to determine quantum correlators by means of this ghost- and fermion-free formalism is comparable to the usual one. \section{Jacobians, fermion and ghost determinants to ${\cal O}(g^2)$} In this section, we check the main statement to ${\cal O}(g^2)$.
First, it is easily verified that
\begin{equation}
{\partial }^\mu A^{'a}_\mu(x) \,=\, {\partial }^\mu A_\mu^a(x) + {\cal O}(g^3)
\end{equation}
Likewise, a straightforward calculation shows that
\begin{equation}
\frac12 \int d^D x\, \Big[ A^{'a}_\mu (-\Box) A^{'a}_\mu \,-\, \big(\partial^\mu A^{'a}_\mu\big)^2\Big] \,=\, \frac14 \int d^D x\, F_{\mu\nu}^a F_{\mu\nu}^a + {\cal O}(g^3)
\end{equation}
with the $g$-dependent Yang-Mills field strength (\ref{Fmn}) on the r.h.s. These parts of the calculation do {\em not} make use of the special value of $D$, and therefore work in all dimensions. The dependence on the dimension enters only through the second part (\ref{Det}). For the (perturbative) computation of the relevant functional determinants (or rather their logarithms) we use the standard formula
\begin{equation}
\log {\rm det\,} \big({\bf{1}}-{\bf{X}}\big) = {\rm Tr}\, \log \big({\bf{1}} - {\bf{X}} \big) = - \sum_{n=1}^\infty \frac1{n} {\rm Tr}\, {\bf{X}}^n
\end{equation}
Let us first consider the Jacobian corresponding to (\ref{Tg2}). To first order it simply vanishes because $f^{aac} =0$ (or alternatively, because ${\partial }_\lambda C(0)=0$).
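Since the ghost and Matthews-Salam kernels below are themselves of order $g$, only the first two terms of this series can contribute at ${\cal O}(g^2)$; explicitly,
\begin{equation}
\log {\rm det\,} \big({\bf{1}}-{\bf{X}}\big) \,=\, -\,{\rm Tr}\, {\bf{X}} \,-\, \frac12\, {\rm Tr}\, {\bf{X}}^2 \,+\, {\cal O}\big({\bf{X}}^3\big)
\end{equation}
with the linear trace supplying the (vanishing) ${\cal O}(g)$ term and $-\frac12 {\rm Tr}\, {\bf{X}}^2$ the ${\cal O}(g^2)$ term. For the Jacobian, whose kernel also contains an ${\cal O}(g^2)$ piece, the linear trace of that piece contributes at ${\cal O}(g^2)$ as well.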
After a little computation we arrive at the following result
\begin{eqnarray}\label{Jacdet}
\log {\rm det\,} \left(\frac{\delta A^{'a}_\mu(x,g;A)}{\delta A^b_\nu(y)}\right) &=& \frac12 n g^2 \int dx\,dy\;\Big\{ (2D-3) {\partial }_\mu C(x-y) A_\mu^a(y) {\partial }_\nu C(y-x) A_\nu^a(x) \nonumber\\
&& \hspace{-0.2cm} - (D-2) {\partial }_\mu C(x-y) A_\nu^a(y) {\partial }_\mu C(y-x) A_\nu^a(x) \Big\} + {\cal O}(g^3)
\end{eqnarray}
where we have used $f^{gcd} f^{hcd} = n\, \delta^{gh}$ and the relation
\begin{equation}
\int dx\,dy\;{\partial }_\mu C(x-y) A_\mu^a(y) {\partial }_\nu C(y-x) A_\nu^a(x) = \int dx\,dy\; {\partial }_\mu C(x-y) A_\nu^a(y) {\partial }_\nu C(y-x) A_\mu^a(x)
\end{equation}
which follows by partial integration and use of the Landau gauge condition ${\partial }^\mu A_\mu^a = 0$ (or, more precisely, the presence of the $\delta$-functional in the functional measure (\ref{deltafunctional})). For the ghost determinant the relevant functional matrix is
\begin{equation}
{\bf{X}}^{ab}(x,y;A) \,=\, g f^{abm} C(x-y) A_\mu^m(y) {\partial }_\mu^y
\end{equation}
which gives
\begin{equation}
\log {\rm det\,} \big({\bf{1}} - {\bf{X}}\big) = \frac12 \, n g^2 \int dx\,dy\; {\partial }_\mu C(x-y) A^m_\nu(y) {\partial }_\nu C(y-x) A_\mu^m(x) \,+ \, {\cal O}(g^3)
\end{equation}
Observe that the ${\cal O}(g)$ term vanishes as before. The ${\cal O}(g^2)$ term has again been simplified by using $f^{abc} f^{bad} = - n\,\delta^{cd}$. Because ${\rm det\,} ({\partial }^\mu D_\mu) = {\rm det\,} (D_\mu {\partial }^\mu)$, we can equivalently write
\begin{eqnarray}
\log {\rm det\,} \big({\bf{1}} - {\bf{X}}\big) &=& \frac12 \, n g^2 \int dx\,dy\; {\partial }_\mu C(x-y) A^m_\mu(y) {\partial }_\nu C(y-x) A_\nu^m(x) \,+\, {\cal O}(g^3)
\end{eqnarray}
an equality that can also be checked explicitly by partial integration and use of ${\partial }^\mu A_\mu^a = 0$.
For the Matthews-Salam determinant we have (suppressing spinor indices)
\begin{equation}
{\bf{Y}}^{ab}(x,y;A) \,=\, g f^{abm} {\partial }_\alpha C(x-y)\, \gamma^\alpha \gamma^\lambda A_\lambda^m(y)
\end{equation}
With an extra overall factor of $\frac12$ for Majorana fermions we get
\begin{eqnarray}
\frac12 \log {\rm det\,} \big({\bf{1}} - {\bf{Y}}\big) &=& \frac14 n g^2 {\rm Tr}\, (\gamma_\alpha \gamma_\lambda \gamma_\beta \gamma_\rho ) \int dx\,dy\; {\partial }_\alpha C(x-y) A_\lambda^m(y) {\partial }_\beta C(y-x) A_\rho^m (x) \nonumber\\
&& + \;\; {\cal O}(g^3)
\end{eqnarray}
Adding all the terms and demanding equality with (\ref{Jacdet}) yields two conditions
\begin{equation}
2D-3 = 1+r_D \;\;, \quad D-2 = \frac{r_D}2
\end{equation}
which happily coincide and are thus satisfied for
\begin{equation}
D\,=\, 3,4,6,10 \qquad \Longleftrightarrow \qquad r_D\,=\, 2,4,8,16
\end{equation}
(but not for any other values of $D$ and $r_D\,$).
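That the two conditions coincide is a one-line check:
$$
D - 2 \,=\, \frac{r_D}{2} \quad\Longleftrightarrow\quad r_D \,=\, 2(D-2) \quad\Longrightarrow\quad 1 + r_D \,=\, 2D - 3
$$
so both reduce to the single relation $r_D = 2(D-2)$, which together with the admissible real spinor dimensions $r_D = 2,4,8,16$ yields precisely the four pairs listed above.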
We therefore recover the old result of \cite{SYM0} without any use of anticommuting objects whatsoever.\footnote{For the {\em free} theory this equality follows trivially by demanding cancellation of the free determinants, with $$ \int {\cal D} A \, e^{\frac12 A\Box A} \,\sim\, [{\rm det\,}(-\Box)]^{-D/2} \;\; , \quad \int {\cal D} C {\cal D} \bar{C} \, e^{\bar C \Box C} \,\sim\, {\rm det\,} (-\Box)\;\; , \quad \int {\cal D}\chi \, e^{\frac12 \bar\chi {\partial\!\!/}\chi} \,\sim \, [{\rm det\,}(-\Box)]^{r_D/4} $$ which is just the statement that bosonic and fermionic degrees of freedom must match on shell.} Given that our statement about the permitted dimensions applies to the interacting theory, it is effectively equivalent to the more standard calculation verifying the closure of the supersymmetry transformations, which requires the use of a specific Fierz identity that is valid only for $D\,=\, 3,4,6,10$ \cite{SYM1}. At higher orders, the calculations presented here quickly become technically involved. While the procedure to derive the inverse map is rigorous, it does prove lengthy, with ${\cal O}(n!)$ terms at order ${\cal O}(g^n)$. It will thus be interesting to see whether there exists an algorithmic approach that leads directly to the map itself~\cite{SYM2} (see \cite{L2} for earlier work in this direction). The existence of a simpler algorithm for $T_g$ is also suggested by the comparison of formulas (\ref{Tg2}) and (\ref{Tg2inv}), and by the fact that the MSS and FP determinants $\Delta_{MSS}$ and $\Delta_{FP}$ involve only structures of the type ${\partial } C A \cdots {\partial } C A$, whereas the ${\cal R}$-prescription leads to various other structures as well. All this indicates that the map $T_g$ itself may have a simpler structure than the inverse map $T_g^{-1}$, a feature that can also be seen in other examples.
\section{Afterword: personal memories of Peter Freund (by H.N.)} {\it This final section contains some of H.N.'s personal reminiscences of his encounters with Peter Freund since the early 1980s.} Peter Freund was a source of numerous unusual and fertile ideas in physics that also inspired parts of my own early work, and thus had an important influence on my career. Perhaps best remembered is the pivotal role he played in the development of modern Kaluza-Klein theories, thus contributing to their revival after many decades of dormancy \cite{CF,FR}. I had the privilege of meeting Peter Freund many times, in particular on the occasion of several visits to Chicago. But in the early 1980s he also came to CERN, where I was employed as a junior staff member of the CERN Theory Division at the time. On one of these visits (as far as I remember, in the wake of my work with Bernard de Wit on $N=8$ supergravity) he enquired whether I would be interested in joining the University of Chicago as a junior faculty member (on what I suppose is nowadays called a tenure track position). For me this was definitely a very attractive option, but in the end I did not move to the US, mainly for family reasons, settling for a less glamorous position at the University of Karlsruhe (with Julius Wess). Our main scientific overlap in those days was Kaluza-Klein supergravity, which centered to a large extent on the famous Freund-Rubin solution \cite{FR}, the first real and concrete example of a theory with {\em preferential} compactification to four dimensions -- and still the only one, as far as I am aware! In fact, at about the same time, Antonio Aurilia, Paul Townsend and I had also been wondering about the meaning of an expectation value for the 4-form field \cite{ANT}, showing that the cosmological constant could be interpreted as an integration constant (and hence its value somehow be endowed with a dynamical origin).
Regrettably, however, as Paul Townsend aptly put it later, we did ``miss the boat on the Freund-Rubin solution"! About two years later the heterotic string \cite{GHMR} appeared on stage, eclipsing everything else that had come before, and rolling over the CERN Theory Division like a tsunami. Since it offered for the first time real prospects for linking string theory to Standard Model physics, there immediately arose the question of what the link was between this totally new theory and the more established purely bosonic string or the superstring. It was again Peter Freund who (after early premonitions of E$_8\,\times\,$E$_8$) stepped forward with an audacious idea, namely the proposal that the heterotic string was actually some compactified version (though of a strange kind) of the bosonic string in $D=26$ \cite{Het}. This idea crucially inspired our own work \cite{CENT}. I was actually amazed at all the attention we got for that paper -- for a while this was the only thing people wanted to hear about from me! I even got an invitation from Murray Gell-Mann to his newly founded Santa Fe Institute to speak about this work, a task that I accepted with considerable trepidation because I was aware that the select audience there would consist of some of the smartest minds on the planet, some of whom I knew were not particularly positively inclined towards the idea (I still remember David Gross greeting me at the local airport with the words ``We have seen your paper, and we don't believe your claims"). Looking back, it must be said that the idea ultimately did not fly as we had hoped, remaining mostly a kinematic scheme, and has by now largely faded away from the string landscape (like so many other ideas).
An especially memorable encounter happened in 2008 when Peter invited me to Timisoara to deliver the annual Schr\"odinger Lecture at the local university, an event sponsored by the University of Vienna, of which Peter had been put in charge in recognition of his enduring attachment to the old world cultural charm of the no longer existing Austro-Hungarian empire. Timisoara is the place where Peter was born. On the occasion of this visit he showed me many of the places of his early childhood, telling me about his multicultural upbringing and how he grew up learning to speak so many different languages (Romanian, Hungarian and German, for starters), which later enabled him to become such an impressive polyglot. And he also told me how as a child he only barely escaped the Nazi terror, largely crediting the ineffectiveness of the Romanian bureaucracy for saving his life, because these people had been neither eager nor efficient in implementing the invaders' new rules. On my last visit to Chicago, he invited me not only to a performance of the Chicago Symphony Orchestra (including a Sibelius symphony which we both found boring), but also took me along to some fancy reception at the local Austrian Consulate on top of the Lake Point Tower building, an architectural landmark right on the shore of Lake Michigan, to which he had been invited for some reason. I had obviously {\em not} been invited, but went along anyway, though with a bit of embarrassment as our dress code did not match the standards expected at such an event. Nevertheless, up on the 67th floor, and without being taken much note of by the Austrian Consul or his other guests, we had a great time, enjoying the food and the wine, with fabulous views of Lake Michigan and the surrounding Chicago skyline. The last time I saw him was on the occasion of his visit to Berlin and to AEI in Potsdam where he delivered a colloquium on (also his!)
``passion for physics", and I had the opportunity to invite him (and Jan Plefka) to dinner in the rooftop restaurant on top of the Reichstag, with a splendid vista of the Berlin night sky, which he enjoyed very much. But the thing that sticks in my mind more than anything else is Peter's great love and appreciation of music, especially of the vocal kind. So most of our `off-physics talk' revolved around music, with me learning a lot about his preferences, and also his `dislikes' in musical matters (for instance, he admired Richard Wagner and Francis Poulenc, but had no appreciation at all for Bach Cantatas). Not only a musical expert, he was also a quasi-professional singer and performer (as people in Chicago will surely remember). This is something you would also notice when he gave physics seminars, which always had a kind of operatic touch (and I often thought of Don Giovanni singing on stage while listening to him). Accordingly, my visits to all of his three places in Chicago would invariably end with us trying (with me on the piano) to do some of the highlights of the Lieder repertoire, such as {\em Die Winterreise} by Schubert, or various Schumann Lieder (some of which, by the way, he treasured as the absolute culmination point of this musical genre). I will fondly remember Peter Freund as a great friend and a great physicist. \vskip 0.5cm \noindent {\bf Acknowledgments:} H.N. would like to thank O. Lechtenfeld and J. Plefka for discussions and correspondence, and IISER Pune for hospitality in November 2019. \vskip 0.5cm
\section{Introduction} Security forums hide a wealth of information, but mining it requires novel methods and tools. The problem is driven by practical forces: there is useful information that could help improve security, but the volume of the data requires an automated method. The challenge is that there is a lot of ``noise", a lack of structure, and an abundance of informal and hastily written text. At the same time, security analysts need to receive focused and categorized information, which can help them in their task of sifting through it further. We define the problem more specifically below. Given a security forum, we want to extract threads of interest to a security analyst. We consider two associated problems that together provide a complete solution. First, the input is all the data of a forum, and the user specifies their interest by providing one or more bags of words. Arguably, providing keywords is a relatively easy task for the user. The goal is to return all the threads that are of interest to the user, and we use the term {\bf relevant} to indicate such threads. A key challenge here is how to create a robust solution that is not overly sensitive to the omission of potentially important keywords. We use the term {\bf identification} to refer to this problem. \begin{figure*}[t!] \includegraphics[keepaspectratio, width=1\textwidth]{Fig1-modeljj.pdf} \centering \caption {An overly-simplified overview of analyzing a forum using the REST approach: i) {\bf project} all threads to the embedding space, ii) {\bf select} relevant threads using keyword-based selection, iii) {\bf expand} by adding similar threads, iv) {\bf classify} the threads into classes using supervised learning. We illustrate the embedding space as a three-dimensional space. } \label{fig:model} \end{figure*} Second, we add one more layer of complexity to the problem. To further facilitate the user, we want to group the relevant threads into classes.
Again, the user defines these classes by providing keywords for each class. We refer to this step as the {\bf classification} problem. Note that the user can specify the classes of interest fairly arbitrarily, as long as there is training data for the supervised-learning classification. There is relatively limited work on extracting information from security forums, and even less work on using embedding techniques in analyzing online forum data. We can group prior work into the following categories. First, there is work that analyzes security forums to identify malicious activity~\cite{Portnoff2017,Tavabi2018,Joobin2018}. Moreover, there are some efforts to detect malicious users \cite{Li2014,Marin2018_keyhacker} and emerging threats on forums and other social networks~\cite{Sapienza2017_USC1,Sapienza2018_USC2}. Second, there are several studies on analyzing online forums without a security focus~\cite{Zhang2017,Cong2008SIGIR}. Third, there is a large body of work in embedding techniques for: (a) analyzing text in general~\cite{Mikolov2013,fasttext2017}, and (b) improving text classification~\cite{Bert2018,Wang2018,Tang2015}. Also, note that there exist techniques that can do transfer learning between forums and thus eliminate the need to have training data for every forum~\cite{Joobin2018}. We discuss related work in more detail in our related work section. We propose a systematic approach to identify and classify threads of interest based on a multi-step weighted embedding approach. Our approach consists of two parts: (a) we propose a similarity-based approach with thread embedding to extract relevant threads reliably, and (b) we propose a weighted-embedding based classification method to group relevant threads into user-defined classes. The key technical foundation of our approach relies on: (a) building on a word embedding to define a thread embedding, and (b) conducting similarity and classification at the thread embedding level.
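To make the identification part concrete, the following is a minimal, self-contained Python sketch of the first three steps: averaging word vectors into a thread embedding, selecting seed threads by keyword match, and expanding the relevant set by cosine similarity. The toy 2-d vectors, thread contents, and similarity threshold are illustrative assumptions, not the actual implementation (which would use word embeddings trained on the full forum text).

```python
import numpy as np

def thread_embedding(tokens, word_vecs):
    """Project a thread into the embedding space by averaging the
    vectors of its known words."""
    vecs = [word_vecs[t] for t in tokens if t in word_vecs]
    dim = len(next(iter(word_vecs.values())))
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def select_and_expand(threads, keywords, word_vecs, threshold=0.8):
    """Seed with keyword-matched threads, then add any thread whose
    embedding is close to some seed thread."""
    embs = {i: thread_embedding(toks, word_vecs) for i, toks in threads.items()}
    relevant = {i for i, toks in threads.items() if keywords & set(toks)}
    seeds = set(relevant)
    for i, emb in embs.items():
        if i not in relevant and any(cosine(emb, embs[s]) >= threshold for s in seeds):
            relevant.add(i)
    return relevant

# Toy word embedding: security terms cluster away from unrelated terms.
word_vecs = {
    "exploit":       np.array([1.0, 0.1]),
    "malware":       np.array([0.9, 0.2]),
    "vulnerability": np.array([1.0, 0.0]),
    "recipe":        np.array([0.0, 1.0]),
    "oven":          np.array([0.1, 0.9]),
}
threads = {
    0: ["exploit", "malware"],        # hit by the keyword "exploit"
    1: ["vulnerability", "malware"],  # no keyword hit, but similar
    2: ["recipe", "oven"],            # irrelevant
}
relevant = select_and_expand(threads, keywords={"exploit"}, word_vecs=word_vecs)
print(sorted(relevant))  # → [0, 1]
```

Note how thread 1 contains none of the user's keywords yet is recovered via its similarity to the seed thread, which is exactly the robustness to keyword omission discussed above.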
Figure~\ref{fig:model} depicts a high-level visualization of the key steps of our approach: (a) we start with a word embedding space and we define a thread embedding where we project the threads of the forum, (b) we identify threads relevant to the user-provided keywords, (c) we expand this initial set of relevant threads using thread similarity in the thread embedding, (d) we develop a novel weighted embedding approach to classify threads into the four classes of interest using ensemble learning. In particular, we use the similarity between each word in the forums and the representative keywords of each class in order to up-weight the word embedding vectors. Then we use the weighted embeddings to train an ensemble classifier using supervised learning. We evaluate the proposed method with three security forums with 163k posts and 21k unique threads. The users in these forums seem to have a wide range of goals and intentions. For the evaluation, we created a labeled dataset of 1350 threads across the three forums, which we intend to make available to the research community. We provide more information on our datasets in the next section. Our results can be summarized in the following points: {\bf a. Providing robustness to initial keyword selection.} We show that our similarity-based expansion of the user-defined keywords provides significant improvement and stability compared to simple keyword-based matching. First, the effect of the initial keyword set is minimized: by going from 240 to 300 keywords, the keyword-based method identifies 25\% more threads, while the similarity-based method increases by only 7\%. Second, our approach increases the number of relevant threads by 73-309\% depending on the number of keywords. This suggests that our approach is less sensitive to omissions of important keywords. {\bf b. The relevant threads are 22-25\% of the total threads.} Our approach reduces the number of threads to 22-25\% of the initial threads.
Clearly, these results will vary depending on the keywords given by the user and the type of the forum. {\bf c. Improved classification accuracy.} Our approach classifies threads of interest into four different classes with an accuracy of 63.3-76.9\% and weighted average F1 scores of 76.8\% and 74.9\%, consistently outperforming five other approaches. {\em Our work in perspective.} Our work is a building block towards a systematic, easy-to-use, and effective mining tool for online forums in general. Although here we focused on security forums, our approach could easily apply to other forums, and provide users with the ability to define topics of interest by providing one or more sets of keywords. We argue that our approach is easy to use, since it is robust and forgiving w.r.t. the initial keyword set. \section{Definitions and Datasets} \label{sec:data} \begin{table}[h] \centering \small \begin{tabular}{|p{1.1cm}|r|r|r|} \hline & OffensComm.\xspace & HackThisSite\xspace & EthicalHackers\xspace \\ \hline Posts & 25,538 & 84,745 & 54,176 \\ \hline Users & 5549 & 5904 & 2970 \\ \hline Threads & 3542 & 8504 & 8745 \\ \hline \end{tabular} \caption {The basic statistics of our forums.} \label{tab:forums} \end{table} We have collected data from three different forums: OffensiveCommunity\xspace, HackThisSite\xspace and EthicalHackers\xspace. These forums seem to bring together a wide range of users: system administrators, white-hat hackers, black-hat hackers, and users with variable skills, goals and intentions. We briefly describe our three forums below. {\bf a. OffensiveCommunity\xspace (OC):} This forum seems to be on the fringes of legality. As the name suggests, the forum focuses on ``offensive security", namely, breaking into systems. Indeed, many posts provide step by step instructions on how to compromise systems, and advertise hacking tools and services. {\bf b. HackThisSite\xspace (HT):} As the name suggests, this forum also has an attacking orientation. There are threads that describe how to break into websites and systems, but there are also more general discussions about the users' experiences in cyber-security. {\bf c. EthicalHackers\xspace (EH):} This forum seems to consist mostly of ``white hat" hackers, as its name suggests. Many threads are about making systems more secure. However, many discussions with malicious intent are also going on in this forum.
Moreover, there are also notification discussions alerting about emerging threats. \begin{figure}[!htb] \begin{minipage}{0.32\linewidth} \includegraphics[width=\linewidth]{Fig2A_postInThread_offcomm.pdf} \centering {(a)OffensComm.\xspace} \end{minipage} \begin{minipage}{0.32\linewidth} \includegraphics[width=\linewidth]{Fig2B_postInThread_hackthissite.pdf} \centering {(b)HackThisSite\xspace} \end{minipage} \begin{minipage}{0.32\linewidth}% \includegraphics[width=\linewidth]{Fig2C_postInThread_ethicalhacker.pdf} \centering {(c)EthicalHackers\xspace} \end{minipage} \caption{CCDF of the number of posts per thread ($\log$-$\log$ scale).} \label{fig:ccdf} \end{figure} {\bf Basic forum statistics.} We present basic statistics of our forums in Table \ref{tab:forums}. We also study some of their properties and make the following two observations. {\bf Observation 1: More than half of the threads have one post!} In Figure \ref{fig:ccdf}, we plot the complementary cumulative distribution function of the number of posts per thread for our forums. We observe a skewed distribution, typical of large systems. In addition, the distribution shows that more than half of the threads have a single post, and 73\% of the threads have one or two posts. {\bf Observation 2: The first post defines the thread.} Prior research~\cite{Zhang2017} seems to confirm something that we intuitively expect: the first post of the thread pretty much defines the thread. Intrigued, we sampled and manually verified that this seems to be the case. Specifically, we inspected a random sample of 10\% of the relevant threads (found by our approach), and we found that more than 97\% of the follow-up posts fall in line with the topic of the thread, while the majority of them simply express appreciation, agreement, etc.
For example, the follow up posts to a malicious tutorial in OffensiveCommunity\xspace were: ``Great Tut", ``Thank you for sharing", ``Nice post", ``Work[s] great for me!" {\bf Defining the classes of interest.} As we explained in the introduction, we want to further help a security analyst by giving them the ability to define classes of interest among the threads of interest. These are user-defined classes. To ground our study, we focus on the following classes, which we argue could be of interest to a security analyst. {\bf a. Alerts\xspace :} These are threads where users are reporting about being attacked by a hackers or notifying about exploits and vulnerabilities. An example from EthicalHackers\xspace is a thread with the title ``Worm Hits Unsecured Space Station Laptops" and the first line of the first post is ``NASA spokesman Kelly Humphries said in a statement that this was not the first time that the ISS had been affected by malware, merely calling it a “nuisance.”" {\bf b. Services\xspace:} These are threads where users are offering or requesting malicious hacking services or products. An example from OffensiveCommunity\xspace is a thread with the title ``Need hacking services'' and this first line ``Im new to this website. Im not a hacker. Would like to hire hacking services to hack email account, Facebook account and if possible iPhone.'' {\bf c. Hacks\xspace:} These are threads where users post detailed instructions for performing malicious activities. The difference with the above category is that the information is offered for free here. An example from OffensiveCommunity\xspace is a thread titled ``Hack admin account in XP, Vista, Windows 7 and Mac - Complete beginners guide!!" with a first line: ``Hack administrator account in XP OS – Just by using command prompt is one of the easiest ways (without installation of any programs).....". As expected, these posts are often lengthy as they convey detailed information. {\bf d. 
Experiences\xspace:} These are threads where users share their experiences related to general security topics. Often users provide a personal story, a review or an article on a cyber-security concept or event. For example, in HackThisSite\xspace, in a thread titled ``Stupid people stories'', the author explains cyber-security mistakes that he made. The sets of keywords which ``define'' each class are shown in Table \ref{tab:keywords}. In general, these sets are provided by the user depending on the classes of interest. Note that these keywords are also provided to our annotators as hints for the labeling process. \subsection{Establishing the Groundtruth} \begin{table}[t] \centering \small \begin{tabular}{|l|r|r|r|r|r|r|} \hline & \multicolumn{2}{|c|}{OffensComm.\xspace} & \multicolumn{2}{|c|}{HackThisSite\xspace} & \multicolumn{2}{|c|}{EthicalHackers\xspace} \\ \hline Labeled & \multicolumn{2}{|c|}{450} & \multicolumn{2}{|c|}{450} & \multicolumn{2}{|c|}{450}\\ \hline \hline & \# & \% & \# & \% & \# & \% \\ \hline Hacks\xspace & 202 &45\% & 31 &7\% & 42 &9\% \\ \hline Services\xspace & 204 &45\% & 286 &64\% & 166 &37\%\\ \hline Alerts\xspace & 27 &6\% & 20 &4\% & 78 &18\%\\ \hline Experiences\xspace & 17 &4\% & 113 &25\% & 164 &36\% \\ \hline \end{tabular} \caption { Our groundtruth data for each forum and the breakdown per class. \label{tab:labels} } \end{table} For validating our classification method, we need groundtruth for both training and validation. We randomly selected 450 threads among the relevant threads from each forum, as selected by the identification part. The labeling involves five annotators who manually assign each thread to a category based on the definitions and examples of the four classes listed above. The annotators were selected from an educated and technically savvy group of individuals to improve the quality of the labels. We then combine the ``votes'' and assign the class selected by the majority.
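The majority-vote aggregation of annotator labels can be sketched as follows (a minimal sketch; the function name and label strings are illustrative, and ties would need an explicit tie-breaking rule in practice):

```python
from collections import Counter

def majority_label(votes):
    """Aggregate annotator votes for one thread: pick the most common class.

    Ties are broken by first occurrence here; a production pipeline would
    need an explicit tie-breaking rule (e.g., a sixth adjudicating vote).
    """
    counts = Counter(votes)
    label, _ = counts.most_common(1)[0]
    return label

# Five annotators vote on one thread.
print(majority_label(["Hacks", "Hacks", "Services", "Hacks", "Alerts"]))  # -> Hacks
```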
We assess the annotators' agreement with the Fleiss-Kappa coefficient, and we show the results in Table~\ref{tab:kohenCoeff}. We see that there is high annotator agreement across all forums, as the Fleiss-Kappa coefficient is 78.6, 92.6, and 70.3 for OffensiveCommunity\xspace, HackThisSite\xspace and EthicalHackers\xspace respectively. With this process, we labeled 1350 threads in the three forums, and we present our labeled data in Table~\ref{tab:labels}. We make our groundtruth available to the research community in order to foster follow-up research.\footnote{Data is provided at the following link: https://github.com/icwsmREST2019/RESTDATA.} \begin{table}[t] \centering \small \begin{tabular}{|l|r|r|r|r|} \hline Label & Hacks\xspace & Services\xspace & Alerts\xspace & Experiences\xspace \\ \hline OffensComm.\xspace & 0.778 & 0.702 & 0.816 & 0.732\\ \hline HackThisSite\xspace & 0.953 & 0.966 & 0.793 & 0.875\\ \hline EthicalHackers\xspace & 0.682 & 0.733 & 0.766 & 0.620 \\ \hline \end{tabular} \caption { Assessing the annotator agreement using the Fleiss-Kappa coefficient for each class for our three datasets. \label{tab:kohenCoeff} } \end{table} \subsection{Challenges of simple keyword-based filtering} Given a set of keywords, the most straightforward approach to identifying relevant documents (or threads here) is to count the combined frequency with which these keywords appear in the document. A user needs to identify the keywords that best describe the topics and concepts of interest, which can be challenging for non-trivial scenarios~\cite{Wang2016}. We outline some of the challenges below. \begin{itemize} \item The user may not be able to provide all keywords of interest. In some cases, the user is not aware of a term, and in others, providing it is not even possible: consider trying to specify the name of a malware that has not yet emerged. \item Stemming, variations and compound words are a concern.
The root of a word can appear in many different forms: e.g., hackers, hacking, hacked, hackable, etc. There exist partial solutions for stemming, but challenges still remain~\cite{stemming2011}. \item Spelling errors, intentional modifications, and linguistic variations are also a concern. Especially in an international forum, different languages and backgrounds can add noise. \end{itemize} The above challenges motivated us to consider a new approach that uses a small number of indicative keywords to create a seed set of threads, and then uses similarity in the embedding space to find more similar threads, as we describe in the next section. \section{Identifying threads of interest} \label{sec:identify} \begin{table}[t] \centering \small \begin{tabular}{c|c} \hline Symbol & Description \\ \hline \hline $\word_i$ & Word $i$ in a forum\\ $\vec{v_i}$ & Embedded vector for word $i$ \\ $\word_{i,k}$ & Value of dimension $k$ in embedded vector for word $i$ \\ $t_\tidx$ & Thread $\tidx$ \\ $\mdim$ & The dimensions of the word embedding space \\ $n$ & Number of words in a thread\\ $d$ & Number of words in a forum \\ $\Words(t_\tidx)$ & Set of words in thread $\tidx$ \\ $D$ & Set of words in a forum \\ $\vec{\pro}(e)$ & Embedding projection of entity $e$ (word, thread etc.) \\ $Sim(w,c)$ & Similarity of vectors $w$ and $c$\\ $\classw_\classi$ & The ``center of gravity'' word for class $\classi$ \\ $\vec{\beta}_\classi $ & Affinity vector of class $\classi$ \\ $\vec{\beta}_\classi[i] $ & Value of the affinity vector of class $\classi$ at index $i$ \\ $WS_l$ & Keyword set $l$ for identifying relevant threads\\ $\ThWord$ & Keyword threshold for identifying relevant threads\\ $\ThSim$ & Similarity threshold for identifying relevant threads\\ \hline \hline \end{tabular} \caption { The symbol table with the key notations.} \label{tab:def} \end{table} We present our approach for selecting relevant threads starting from sets of keywords provided by the user.
Our approach consists of the following phases: (a) a keyword-matching phase, where we use the user-defined keywords to identify relevant threads that contain these keywords, and (b) a similarity-based phase, where we identify threads that are ``similar'' to the ones identified above. The similarity is established in the word embedding space, as we describe later. \subsection{Phase 1: Keyword-based selection} Given a set or sets of keywords, we identify the threads where these keywords appear. A simple text-matching approach can identify all occurrences of such keywords in the forum threads. In more detail, we follow the steps below: {\bf Step 1:} The user provides a set or sets of keywords $WS_l$, which capture the user's topics of interest. Having sets of keywords enables the user to specify combinations of concepts. For example, in our case we use the following sets: (a) hacking related, (b) exhibiting concern and agitation, and (c) searching and questioning. {\bf Step 2:} We count the frequency of each keyword in all the threads. This can be done easily with Elasticsearch or any other straightforward implementation. {\bf Step 3:} We identify the relevant threads as the threads that contain a sufficient number of keywords from each keyword set $WS_l$. This can be defined by a threshold, $T_{key_l}$, for each set of keywords. Going beyond simple thresholds in this space, we envision a flexible system, where the user can specify complex queries that involve combinations of several different keyword sets $WS_l$. For example, the user may want to find threads with: (a) at least 5 keywords from set $WS_1$ and 3 keywords from $WS_2$, or (b) at least 10 keywords from $WS_3$. \subsection{Phase 2: Similarity-based selection} We propose an approach to extract additional relevant threads based on their similarity to existing relevant threads.
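Before describing this phase in detail, note that the keyword-based selection of Phase 1 can be sketched as follows (a minimal sketch; the thread representation, function names, and thresholds are illustrative):

```python
def select_by_keywords(threads, keyword_sets, thresholds):
    """Select threads containing at least thresholds[l] keyword occurrences
    (counted with multiplicity) from each keyword set l."""
    selected = []
    for thread in threads:
        words = thread.lower().split()
        meets_all = True
        for kw_set, t_key in zip(keyword_sets, thresholds):
            count = sum(1 for w in words if w in kw_set)  # keyword frequency
            if count < t_key:
                meets_all = False
                break
        if meets_all:
            selected.append(thread)
    return selected

# Toy example: one hacking-related set and one concern/questioning set.
threads = ["how to hack a password please help",
           "my favorite linux distribution review"]
keyword_sets = [{"hack", "exploit"}, {"help", "please"}]
print(select_by_keywords(threads, keyword_sets, [1, 1]))  # only the first thread
```

A real deployment would count frequencies with an index such as Elasticsearch rather than scanning every thread in Python.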
Our approach is inspired by and combines elements from earlier approaches~\cite{Mikolov2013,Shen2018}, which we discuss and contrast with our work in the related work section. {\bf Overview of our approach.} Our approach follows the steps below, which are also depicted visually in Figure~\ref{fig:model}. The input is a forum, a set of keywords, and a set of relevant threads, as identified by the keyword-based phase above. {\bf Step 1. Determining the embedding space.} We project every word as a point in a $\mdim$-dimensional space using a word embedding approach. Therefore, every word is represented by a vector of $\mdim$ dimensions. {\bf Step 2. Projecting threads.} We project all the threads in an appropriately constructed multi-dimensional space: both the relevant threads selected by the keyword-based selection and the non-selected ones. The thread projection is derived from the projections of its words, as we describe below. {\bf Step 3. Identifying relevant threads.} We identify additional relevant threads among the non-selected threads that are ``sufficiently close'' to the relevant threads in the thread embedding space. The advantage of using similarity at the level of threads is that thread similarity can capture higher-order similarity, beyond keyword matching. Thus, we can identify threads that do not necessarily contain the keywords, but use other words for the same ``concept''. We show examples of this in Tables~\ref{tab:SimilSample} and~\ref{tab:SimilSample2}. {\bf Our similarity-based selection in depth.} We provide details on several aspects of our approach. {\bf Step 1: in depth.} We train a skip-gram word embedding model to project every word as a vector in a multi-dimensional space~\cite{Mikolov2013}. Note that we could not use pre-trained embedding models, since there are many words in our corpus that do not exist in the dictionary of previous models.
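The skip-gram training of Step 1 can be sketched with a minimal full-softmax trainer (an illustrative stand-in for a library implementation of word2vec; the corpus, hyper-parameters, and function name are toy assumptions, and real systems use negative sampling for efficiency):

```python
import numpy as np

def train_skipgram(corpus, dim=8, window=2, lr=0.05, epochs=50, seed=0):
    """Tiny full-softmax skip-gram trainer (illustrative only)."""
    vocab = sorted({w for sent in corpus for w in sent})
    idx = {w: i for i, w in enumerate(vocab)}
    rng = np.random.default_rng(seed)
    W_in = rng.normal(scale=0.1, size=(len(vocab), dim))   # word vectors
    W_out = rng.normal(scale=0.1, size=(len(vocab), dim))  # context vectors
    # (center, context) training pairs within the window.
    pairs = [(idx[s[i]], idx[s[j]])
             for s in corpus for i in range(len(s))
             for j in range(max(0, i - window), min(len(s), i + window + 1))
             if i != j]
    for _ in range(epochs):
        for center, context in pairs:
            v = W_in[center]
            scores = W_out @ v
            p = np.exp(scores - scores.max())
            p /= p.sum()                      # softmax over the vocabulary
            p[context] -= 1.0                 # gradient of the cross-entropy
            W_in[center] = v - lr * (W_out.T @ p)
            W_out -= lr * np.outer(p, v)
    return vocab, idx, W_in

corpus = [["hack", "the", "server"], ["hack", "the", "router"]]
vocab, idx, vectors = train_skipgram(corpus)
print(vectors.shape)  # one dim-dimensional vector per vocabulary word
```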
The number of dimensions of the word embedding can be specified by the user: NLP studies usually opt for a few hundred dimensions. We discuss how we selected our dimensions in the experiments section. At the end of this step, every word $\word_i$ is projected to $\vec{\pro}(\word_i)$ or $\vec{\word}_i$, a real-valued $\mdim$-dimensional vector, $(\word_i[1],\word_i[2], ...,\word_i[\mdim])$. A good embedding ensures that two words are similar if they are close in the embedding space. {\bf Step 2: in depth.} We project the threads into a $2\mdim$-dimensional space, by ``doubling'' the $\mdim$-dimensional space that we used for words, as we show below. The thread projection is a function of the vectors of its words and captures both the average and the maximum values of the vectors of its words. \indent {\bf a. Capturing the average: $\pro_{avg}(t_\tidx)$.} Here, we want to capture the average ``values'' of the vectors of the words in the thread. For thread $t_\tidx$, the average projection, $\pro_{avg}(t_\tidx)$, is calculated as follows for each dimension $l$ in the $\mdim$-dimensional word space: \begin{equation} \label{eq:avg} \vec{\pro}_{avg}(t_\tidx)[l] = \frac{1}{|\Words(t_\tidx)|} \cdot \sum_{\word_i \in \Words(t_\tidx)}^{} \vec{v}_i[l] . \end{equation} Recall that $\Words(t_\tidx)$ is the set of words of the thread. For simplicity, we refer to the projection of word $\word_i$ as $\vec{\word}_i$ instead of the more complex $\vec{\pro}(\word_i)$. \indent {\bf b. Capturing the high values: $\pro_{max}(t_\tidx)$.} Averaging can fail to adequately represent ``outlier'' values; to overcome this, we calculate a vector of maximum values, $\vec{\pro}_{max}(t_\tidx)$, for each thread.
For each dimension $l$ in the word embedding, $\pro_{max}[l]$ is the maximum value of that dimension over all the words in the thread, which we state more formally below: \begin{equation} \label{eq:max} \vec{\pro}_{max}(t_\tidx)[l] = \max_{\word_i \in \Words(t_\tidx)} \vec{v_{i}}[l] \end{equation} Finally, we create the projection of thread $t_\tidx$ by using both of these vectors, $\vec{\pro}_{avg}(t_\tidx)$ and $\vec{\pro}_{max}(t_\tidx)$, as this combination has been shown to provide good results~\cite{Shen2018}. Specifically, we concatenate the vectors and create the thread representations in a $2\mdim$-dimensional space. \begin{equation} \label{eq:concat} \vec{\pro}(t_\tidx) = (\vec{\pro}_{avg}(t_\tidx) , \vec{\pro}_{max}(t_\tidx)) \end{equation} {\bf Step 3: in depth.} We identify similar threads in the $2\mdim$-dimensional thread embedding space from Step 2. We use cosine similarity to determine the similarity among threads, which seems to give good results in practice. Most importantly, we can control what constitutes a {\em sufficiently similar} thread using a threshold $\ThSim$. The threshold needs to strike a balance between being too selective and too loose in its definition of similarity. Furthermore, note that the right threshold value also depends on the nature of the problem and the user preferences. For example, a user may want to be very selective if the resources for further analyzing relevant threads are limited, or if the starting number of threads is large. \section{Classifying threads of interest} \label{sec:classification} \begin{figure}[htb] \includegraphics[scale=0.4]{Fig3_Classifier.pdf} \centering \caption{A visual overview of our classifier.} \label{fig:classifier} \end{figure} We present our approach for classifying relevant threads into user-defined classes.
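The thread projection of the previous section (average, maximum, and their concatenation) and the cosine-similarity test can be sketched as follows (a minimal sketch; the word vectors are toy values and the threshold is the one used in our experiments):

```python
import numpy as np

def project_thread(word_vectors):
    """Concatenate the per-dimension average and maximum of a thread's
    word vectors, yielding a 2m-dimensional thread representation."""
    W = np.asarray(word_vectors)          # shape: (n_words, m)
    return np.concatenate([W.mean(axis=0), W.max(axis=0)])  # shape: (2m,)

def cosine(a, b):
    """Cosine similarity of two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional word vectors for two threads.
thread_a = [[1.0, 0.0, 0.2], [0.8, 0.1, 0.0]]
thread_b = [[0.9, 0.1, 0.1], [1.0, 0.0, 0.3]]
pa, pb = project_thread(thread_a), project_thread(thread_b)

T_sim = 0.96  # similarity threshold for "sufficiently similar"
print(cosine(pa, pb) >= T_sim)
```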
To ground the discussion, we presented four classes on which we focus here, but our approach can be applied to any number and type of classes, as long as there is training data for the supervised learning. {\bf Defining Affinity.} We use the term {\bf affinity}, $\vec{\beta}_\classi[i]$, to refer to the ``contribution'' of word $\word_i$ in a thread towards our belief that the thread belongs to class $\classi$. Recall also that each class $\classi$ is characterized by a group of words that we denote as $WordClass\xspace_\classi$. These sets of words are an input to our algorithm, and in practice they will be provided by the user. {\bf High-level overview of creating our classifier.} Our approach consists of the following steps, which are visually represented in Figure~\ref{fig:classifier}. {\bf Step 1.} We create a representation of every class $\classi$ in the word embedding space by using the words that define the class, $WordClass\xspace_\classi$. {\bf Step 2.} For all the words in the forum, we calculate the affinity of word $\word_i$ for each class $\classi$, $\vec{\beta}_\classi[i]$. {\bf Step 3.} For each class, we create a weighted embedding by using the affinity to adjust the embedding projection of each word for that class. {\bf Step 4.} We use the weighted embeddings to train an ensemble classifier using supervised learning. {\bf Using the classifier.} Given a thread, we calculate its projection in the embedding space, and then we pass it to the classifier to determine its class. {\bf Our algorithm in more detail.} In the remainder of this section, we provide a more in-depth description of the algorithm. {\bf Step 1: in depth.} For each class $\classi$, we use the set of words $WordClass\xspace(\classi)$ to define a representation, $\vec{\classw}_\classi$, for that class in the word embedding space.
We project each word in $WordClass\xspace(\classi)$ to the embedding space by using the same word embedding model, which we trained in the previous section. Then, we define the class vector $\classw_\classi$ to be the average of the word embeddings of the words in $WordClass\xspace(\classi)$, similarly to Equation~\ref{eq:avg}. Note that these class embedding vectors correspond to the columns of the matrix $C_{(\mdim, c)}$ in Figure \ref{fig:classifier}, where $\mdim$ is the dimension of the embedding and $c$ is the number of classes. {\bf Step 2: in depth.} The affinity of each word $\word_i$ in the forum for each class is calculated as the similarity of the word $\word_i$ to $\vec{\classw}_\classi$, which represents the class in this space. We calculate the proximity using the cosine similarity, as follows: \begin{equation} \label{eq:sim} Sim(\vec{\word_i},\vec{\classw}_\classi) = \frac{\vec{\word}_i \cdot \vec{\classw}_\classi} {||\vec{\word_i}|| \cdot || \vec{\classw}_\classi||} \end{equation} Then, for each class $\classi$, we create a vector $\vec{\beta_\classi}$ whose element $[i]$ corresponds to the affinity of word $\word_i$ of the forum $D$. Specifically, we normalize the values by using the \textit{softmax} of the similarity vector $Sim(\word_i,w_\classi)$ as follows: \begin{equation} \label{eq:softmax} \vec{\beta_\classi}[i] = \frac{\exp(Sim(\vec{\word}_i, \vec{\classw}_\classi))} {\sum_{y_j \in \Dict}^{}\exp(Sim(\vec{y}_j, \vec{\classw}_\classi))} , \end{equation} where $y_j \in \Dict$ iterates through all the words in the forum. Note that $\vec{\beta_\classi}$ corresponds to row $\classi$ in matrix $B_{d,c}$ in Figure~\ref{fig:classifier}, where $c$ is the number of classes and $d$ is the total number of words in the forum. {\bf Step 3: in depth.} For each class $\classi$, we create a ``custom'' word embedding, $VC_\classi(m,d)$ in Figure \ref{fig:classifier}.
Each such matrix is focused on detecting threads of class $\classi$ and will be used in our ensemble classification. For each class, we create $VC_\classi(m,d)$, a class-specific word embedding, by modifying the word projections $\vec{\word}_i$ using the affinity $\vec{\beta}_\classi[i]$ of the word for the class. Formally, we calculate $VC_\classi$ by computing each column $VC_\classi[ *, i]$ as follows: \begin{equation} \label{eq:VC} VC_\classi[ *, i] = \vec{\beta_\classi}[i] \cdot \vec{\word}_i \end{equation} where $\vec{\beta}_\classi[i]$ is the affinity value of word $\word_i$ for class \classi. For each thread $t_\tidx$, we calculate the projection of the thread by computing $\vec{\pro}_{avg}(t_\tidx)$ and $\vec{\pro}_{max}(t_\tidx)$ using the modified word projections, $\vec{\beta_\classi}[i] \cdot \vec{\word}_i$, captured in the $VC_\classi(m,d)$ matrix, and using Equations \ref{eq:avg} and \ref{eq:max}. Finally, we create the projection of each thread in the $2\mdim$-dimensional space using Equation~\ref{eq:concat}. {\bf Step 4: in depth.} We use the weighted embeddings of threads to train an ensemble classifier using supervised learning. For each class $\classi$, we train a classifier by using the weighted representation vectors in a supervised fashion. Each $VC_\classi$ in Figure \ref{fig:classifier} becomes the basis for a classifier with a weighted penalty in favor of class $\classi$. The ensemble classifier combines the classification results from each $VC_\classi$ classifier using the max-voting approach~\cite{maxvoting1998}. {\bf Using contextual features.} Apart from the words in the forum, we can also consider other types of features, which we refer to as contextual features of the threads.
One could think of various such features, but here we list the features that we use in our evaluation: (1) number of newlines, (2) length of the text, (3) number of replies in the thread (posts following the first post), (4) average number of newlines in replies, (5) average length of replies, and (6) the aggregated frequency of the words of each bag-of-words set provided by the user. These features capture contextual properties of the posts in the threads, and provide additional information not necessarily captured by the words in the thread. Empirically, we find that these features improve the classification accuracy significantly. The inspiration to introduce such features came from manual inspection of posts and threads. For example, we observed that Hacks\xspace and Experiences\xspace usually have longer posts than the other classes. Moreover, Hacks\xspace threads contain a larger number of newline characters. An interesting question is to assess the value of such metrics when used in conjunction with word-based features. \section{Experimental Results} \begin{figure}[t] \includegraphics[width=0.7\linewidth,scale=0.25]{Fig4_BestM.pdf} \centering \caption { Selecting the number of dimensions of the word embedding: the accuracy of REST\xspace for different dimensions in OffensiveCommunity\xspace.} \label{fig:bestM} \end{figure} \begin{table}[t] \centering \small \begin{tabular}{|c|c|c|c|} \hline Hacks\xspace & Services\xspace & Alerts\xspace & Experiences\xspace \\ \hline \hline Tutorial & tool & announced & article \\ guide & price & reported & story \\ steps & pay & hacked & challenge \\ \hline \end{tabular} \caption {$WordClass\xspace$, the sets of words which ``define'' each class.} \label{tab:keywords} \end{table} We present our experimental results and the evaluation of our approach. \subsection{Conducting our study} We use the three forums presented in Table~\ref{tab:forums} and the groundtruth, which we created as explained in the definitions section.
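As a concrete illustration of the classifier described in the previous section, Steps 1--3 (class vectors, softmax affinities, and affinity-weighted embeddings) can be sketched as follows (a minimal sketch with toy two-dimensional embeddings; the vocabulary loosely mirrors Table \ref{tab:keywords}):

```python
import numpy as np

def class_vector(word_idx, class_words, E):
    """Step 1: average the embeddings of the words that define a class."""
    return np.mean([E[word_idx[w]] for w in class_words], axis=0)

def affinities(E, c_vec):
    """Step 2: softmax-normalized cosine similarity of every forum word
    to the class vector."""
    sims = (E @ c_vec) / (np.linalg.norm(E, axis=1) * np.linalg.norm(c_vec))
    e = np.exp(sims - sims.max())
    return e / e.sum()

# Toy forum vocabulary with 2-dimensional embeddings.
word_idx = {"tutorial": 0, "guide": 1, "price": 2, "pay": 3}
E = np.array([[1.0, 0.0],   # tutorial
              [0.9, 0.1],   # guide
              [0.0, 1.0],   # price
              [0.1, 0.9]])  # pay

c_hacks = class_vector(word_idx, ["tutorial", "guide"], E)
beta = affinities(E, c_hacks)
# Step 3: class-specific weighted embedding (rows here correspond to the
# columns of the VC matrix in the text: beta_i * w_i).
VC_hacks = beta[:, None] * E
print(beta)
```

The ensemble of Step 4 would then train one supervised classifier (random forest in our case) per class on thread projections built from each $VC_\classi$, and combine them by max-voting.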
{\bf Keyword sets}: We considered three keyword sets to capture relevant threads. These keyword sets are: (a) hacking related, (b) exhibiting concern and agitation, and (c) searching and questioning. We collected more than 290 keywords across the three sets. We started with a small core group of keywords, which we expanded by adding their synonyms using thesaurus.com and Google's dictionary. We ended up with 68, 207 and 17 keywords for the three groups respectively. These keyword sets are used for extracting relevant threads with the keyword-based selection. We select a thread if it contains at least one word from each keyword set: $T_{key_1}, T_{key_2}, T_{key_3} \geq 1$. As we discussed earlier, there are many different ways to perform this selection in the presence of multiple groups of words, depending on the needs of the problem. {\bf Pre-processing text:} As with any NLP method, we perform several pre-processing steps in order to extract an appropriate set of words from the target document. First, we tokenize the documents into bigrams; then we remove stopwords, numbers and IP addresses, following recent work~\cite{Joobin2018}. In addition, here we opt to focus on the title and the first post of a thread instead of using all the posts. Our rationale is based on our two observations regarding the nature of the threads: (a) most of them have one post anyway, and (b) the title and the first post typically define their essence. In the future, we will examine the effect of using the text from more posts of each thread. {\bf Identification: Implementation choices.} The identification algorithm requires some implementation choices, which we describe below. {\bf Embedding parameters:} We set the window size to 10, and we tried several values for the embedding dimension between 50 and 300; we found that $\mdim = 100$ gives the highest accuracy, as depicted in Figure \ref{fig:bestM}, and this value of $\mdim$ is in the range chosen by other studies in this space.
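The pre-processing pipeline described above can be sketched as follows (a minimal sketch; the stopword list and regular expressions are illustrative simplifications of the actual pipeline):

```python
import re

STOPWORDS = {"the", "a", "to", "of", "is"}        # illustrative subset
IP_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")    # IPv4 addresses
NUM_RE = re.compile(r"^\d+$")                     # bare numbers

def preprocess(text):
    """Lowercase, drop stopwords/numbers/IP addresses, then form bigrams."""
    tokens = [t for t in re.findall(r"[\w.]+", text.lower())
              if t not in STOPWORDS
              and not NUM_RE.match(t)
              and not IP_RE.match(t)]
    return list(zip(tokens, tokens[1:]))          # adjacent-token bigrams

print(preprocess("Scan the host 10.0.0.1 with 2 open ports"))
```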
{\bf Similarity threshold: $\ThSim = 0.96$.} The similarity threshold $\ThSim$ determines the ``selectiveness'' in identifying similar threads, as we described in a previous section. We find that a value of 0.96 worked best among all the different values we tried. It strikes a balance between being sufficiently selective to filter out non-relevant threads and sufficiently flexible to identify similar threads. {\bf Classification: Implementation choices.} We present the implementation choices for our classification study. {\bf Evaluation metrics:} We use the accuracy of the classification along with the average weighted F1 score, which is designed to take into consideration the sizes of the different classes in the data. {\bf Our classifier.} We use random forest as our classification engine, which performed better than several others that we examined, including SVM, neural networks, and k-nearest-neighbors. Results are not shown due to space limitations. {\bf Class-defining words:} The sets of keywords we used for each class are shown in Table \ref{tab:keywords}. {\bf Baseline methods.} We evaluate our approach against six other state-of-the-art methods, which we briefly describe below. \begin{itemize} \item {\bf Bag of Words (BOW)}: This method uses the word frequency (more accurately, the TFIDF value) as its main feature \cite{mccallum1998naive,Joobin2017,Jin2016}. \item {\bf Non-negative Matrix Factorization (NMF)}: This method uses linear algebra to represent high-dimensional data in a low-dimensional space, in an effort to capture latent features of the data~\cite{Lee1999}. \item {\bf Simple Word Embedding Method (SWEM)}: This is a family of methods that use word2vec as their basis; we use a recently proposed variant \cite{Shen2018}. \item {\bf FastText (FT)}: Similar to NMF and SWEM, FastText represents words and text in a low-dimensional space~\cite{fasttext2017}.
\item {\bf Label Embedding Attentive Model (LEAM)}: This recent approach \cite{Wang2018} claims to outperform other state-of-the-art methods, including PTE~\cite{Tang2015}. We used the provided linear implementation of their attentive model. \item {\bf Bidirectional Encoder Representations from Transformers (BERT)}: This is a pre-trained deep bidirectional Transformer for language understanding introduced by Google~\cite{Bert2018}. BERT provides contextual representations for text, which can be used for a wide range of NLP tasks. As we discuss later, BERT did not provide good results initially, and we created a tuned version to make the comparison more meaningful. \end{itemize} \subsection{Results 1: Thread Identification} \begin{figure}[t] \includegraphics[width=1\linewidth]{Fig5_CompSimilKey.pdf} \centering \caption {The robustness of the similarity approach to the initial keywords: number of relevant threads as a function of the number of keywords for OffensiveCommunity\xspace.} \label{fig:sensitivity} \end{figure} \begin{table}[t] \centering \small \begin{tabular}{|p{1.5cm}|r|r|r|} \hline Relev. Threads & OffensComm.\xspace & HackThisSite\xspace & EthicHack\xspace \\ \hline \hline Keyword & 291 & 840 & 893 \\ \hline Similarity & 505 & 1121 & 1360 \\ \hline Total & 796 & 1961 & 2753 \\ \hline Total(\%) & 22\% & 23\% & 25\% \\ \hline \end{tabular} \caption {The relevant threads and their identification method: keywords and similarity. The total percentage refers to the selected threads over all the threads in the forum.} \label{tab:relevant} \end{table} We present the results from the identification part of our approach. {\bf Our similarity-based method is robust to the number of initial keywords.} We want to evaluate the impact of the number of keywords on the similarity-based method. In Figure \ref{fig:sensitivity}, we show the robustness of each identification method to the initial set of keywords for OffensiveCommunity\xspace.
By adding 60 keywords, from 240 to 300, the keyword-based method identifies 25\% more threads, while the similarity-based method shows only a 7\% increase. Similarly, doubling the initial number of keywords results in a 242\% increase for the keyword-based method, but only 45\% for the similarity-based method. We argue that our approach is more robust to the initial number of keywords. First, even with fewer keywords, we retrieve more threads. Second, an increase in the number of keywords leads to a smaller relative increase in the number of retrieved threads. This is an initial indication that our approach can achieve good results even with a small initial set of keywords. {\bf Evaluation of our approach: high precision and reasonable recall.} We show that our approach is effective in identifying relevant threads. Evaluating precision and recall would have been easy if all the threads in a forum were labeled. Instead, we use an indirect method to gauge recall and precision, as we describe below. {\bf Indirect estimation of recall.} We consider as ``groundtruth'' the relevant threads found by the keyword-based selection method, and report how many of those threads our method finds with only 50\% of the keywords in the similarity-based selection. The experiment is shown in Figure \ref{fig:sensitivity}. We use only 50\% of the keywords to extract the relevant threads with the similarity selection approach, and then compare them with the relevant threads identified with larger sets of keywords ([60-100]\%). We show in Table \ref{tab:relevantRecall} that with 50\% of the keywords we can identify 60-70\% of the relevant threads that we would identify with more keywords available. \begin{table}[t] \centering \small \begin{tabular}{|p{1.6cm}|r|r|r|r|r||r|} \hline keywords \% & 60 & 70 & 80 & 90 & 100 & Avg.
\\ \hline OffensComm.\xspace & 78.2 & 76.9 & 72.9& 70.8 & 70.1 & 70.94\\ \hline HackThisSite\xspace & 74.82 & 72.01 & 70.68 & 69.92 & 69.74 & 71.43 \\ \hline EthicHack\xspace & 68.41 & 60.4 & 60.8& 57.2 & 56.51 & 60.67 \\ \hline \end{tabular} \caption {Identification: indirect ``gauge'' of recall: we report how many threads our method finds with 50\% of the keywords, compared to the keyword-based selection with larger sets of keywords ([60-100]\%).} \label{tab:relevantRecall} \end{table} \begin{table}[t] \centering \small \begin{tabular}{|l|c|c|c||c|} \hline & OffensComm.\xspace & HackThisSite\xspace & EthicHack\xspace & Avg.\\ \hline Precision & 98.2 & 97.5 & 97.0& 97.5\\ \hline \end{tabular} \caption { Identification precision: the precision of the identified threads of interest with the similarity-based method. } \label{tab:relevantPercision} \end{table} {\bf Estimating precision.} To evaluate precision, we want to identify what percentage of the retrieved threads are relevant. To this end, we resort to manual evaluation. We labeled 300 threads from each dataset, retrieved with 50\% of the keywords, and asked our annotators to identify whether they are relevant. We show the results in Table \ref{tab:relevantPercision}. We find that, on average, 97.5\% of the threads identified with the similarity-based method are relevant, with an inter-annotator agreement (Fleiss-Kappa coefficient) of 0.952. \begin{figure}[t] \includegraphics[width=1\linewidth]{Fig6_relevantthreads.pdf} \centering \caption {Number of relevant threads in each forum identified by our approach: (a) irrelevant (not selected), (b) selected via keyword matching and (c) selected via similarity.} \label{fig:relevant} \end{figure} {\bf The power of the embedding in determining similarity.} We find that the similarity step identifies threads that are deemed relevant by a human reader, but are not ``obviously similar'' when examined word for word.
We provide a few examples of threads that were identified by the keyword-based selection, and the related similar threads that our approach identified. Tables \ref{tab:SimilSample} and \ref{tab:SimilSample2} illustrate how the retrieved threads are similar to the target thread conceptually, without matching linguistically. {\bf A four-fold thread reduction.} Our approach reduces the number of threads to only 22-25\% of the initial threads, as shown in Table~\ref{tab:relevant}. Figure~\ref{fig:relevant} depicts the same data visually. Clearly, these results will vary depending on the keywords given by the user and the type of the forum. \begin{table*}[ht] \centering \small \begin{tabular}{|c|p{4cm}|p{8cm}|} \hline Selection Method & Title & Post \\ \hline Keyword selected & [ULTIMATE] How to SPREAD your viruses successfully [TUTORIAL] & Educational Purposes NOT MINE In this tutorial I will show you how to spread your trojans/viruses etc. I will show you many methods, and later you choose which one .... \\ \hline \multirow{3}{*}{Similarity selected} & Botnet QA! & Just something I compiled quickly. Im also posting my bot setup guide soon. If you want any questions or links added to the Q\&A, please ask and Ill add them. \\\cline{2-3} & The COMPLETE beginners guide to hacking & another great guide i found :D Sections: 1) Introduction 2) The hacker manifesto 3) What is hacking? 4) Choosing your path 5) Where should I start? 6) Basic terminology 7) Keylogging... \\\cline{2-3} & [TUT]DDoS Attack - A life lesson & Introduction I know their are a lot more ways to DoS than are shown here, but ill let you figure them out yourself. If you find any mistake in this tutorial please tell me^^ What is \'DDoS\'?
\\ \hline \end{tabular} \caption {Examples of similar threads for class Hacks\xspace: threads offering hacking tutorials.} \label{tab:SimilSample} \end{table*} \begin{table*}[ht] \centering \small \begin{tabular}{|c|p{4cm}|p{8cm}|} \hline Selection Method & Title & Post \\ \hline Keyword selected & Blackmailed! How to hack twitter? & Hey, everyone. Im new on this website and I need help. Im trying to hack a twitter account because theyve been harassing me and reporting isnt helping at all. \\ \hline \multirow{4}{*}{Similarity selected} & Need hacking services & IIm new to this website. Im not a hacker. Would like to hire hacking services to hack email account, Facebook account and if possible iPhone. Drop me a pm if you can do it. Fee is negotiable. Thanks \\\cline{2-3} & Hello hacker members & My name is XXXX and im looking for someone to help me crack a WordPress password from a site that has stolen all our copyrighted content. Weve reported to google but is taking forever. I have the username of the site, just need help to crack the password so i can remove our content. Please message me with details if you can help \\\cline{2-3} & finding a person with his email & Hello guys! I need to find out how I can find a person \'behind\' an email! Let me explain please ... \\\cline{2-3} & Hi & hello everyone im new here and i want to learn how to hack an account any account in fact fb twitter even credit card hope you code help me out who knows maybe i can help you in the future right give and take \\ \hline \end{tabular} \caption {Examples of similar threads for class Services\xspace: threads looking for hacking services.} \label{tab:SimilSample2} \end{table*} \subsection{Results 2: Thread Classification} We present the results of our classification study. {\bf REST\xspace compared to the state-of-the-art.} Our approach compares favourably against the competition.
Table \ref{tab:results} summarizes the results of the baseline methods and our REST\xspace for the three forums. REST\xspace outperforms the other baseline methods by at least 1.4 percentage points in accuracy and 0.7 percentage points in F1 score, with the exception of BERT. Using BERT ``right out of the box'' did not give good results, so we fine-tuned it for this domain. Even so, BERT performs poorly on two sites, HackThisSite\xspace and EthicalHackers\xspace, while it performs well on OffensiveCommunity\xspace. We attribute this to the limited training data in terms of text size and to the nature of the language used in such forums. For example, we found that the titles of two misclassified threads contained typos and used unconventional slang and writing structure: ``Hw 2 gt st4rtd with r3v3r53 3ngin33ring 4 n00bs!!'', ``metaXploit 3xplained fa b3ginners!!!''. We intend to investigate how BERT can be tuned further in future work. Note that the BOW and NMF methods did not assign any instances to the minority classes correctly; therefore, the F1 score in Table \ref{tab:results} is reported as NA.
\begin{table*}[ht] \centering \small \begin{tabular}{|l|r||r|r|r|r|r|r|| r|} \hline Datasets & Metrics & BOW & NMF & SWEM & FastText & LEAM & BERT & {\bf REST} \\ \hline \hline \multirow{2}{*}{OffensComm.\xspace} & Accuracy & 75.33$\pm$0.1 & 74.31$\pm$0.1 & 75.55$\pm$0.21 & 74.64$\pm$0.15 & 74.88$\pm$0.22 & \textbf{78.58$\pm$ 0.08} & 77.1$\pm$0.18 \\\cline{2-9} & F1 Score & NA & NA & 74.15$\pm$0.23 & 72.5$\pm$0.15 & 72.91$\pm$0.18 & \textbf{78.47$\pm$0.01}& 75.10$\pm$0.14\\ \hline \hline \multirow{2}{*}{HackThisSite\xspace} & Accuracy & 65.3$\pm$0.41 & 69.46$\pm$0.12 &73.27$\pm$0.10 & 69.92$\pm$0.08 & 74.6$\pm$0.04 & 68.99$\pm$0.4 & \textbf{76.8$\pm$ 0.1} \\\cline{2-9} & F1 Score & NA & 70.23$\pm$0.13 & 71.89$\pm$0.14 & 65.81$\pm$0.4 & 71.41$\pm$0.09 &63.61$\pm$0.41& \textbf{74.47$\pm$0.24}\\ \hline \hline \multirow{2}{*}{EthicalHackers\xspace} & Accuracy & 59.74$\pm$ 0.21 & 58.3$\pm$ 0.15 & 61.3$\pm$ 0.17 & 59.73$\pm$ 0.21 & 61.80 $\pm$0.13 & 54.91$\pm$ 0.32 & \textbf{63.3$\pm$ 0.09} \\\cline{2-9} & F1 Score & NA & 57.83$\pm$0.16 & 59.6$\pm$0.23 & 59.5$\pm$0.13 & 60.9$\pm$0.17 & 51.78$\pm$0.15 & \textbf{61.7$\pm$0.21}\\ \hline \end{tabular} \caption { Classification: the performance of the seven different methods in classifying threads in 10-fold cross validation. \label{tab:results}} \end{table*} \begin{figure}[htb!] \includegraphics[width=1\linewidth]{Fig7_Features.pdf} \centering \caption {Classification accuracy for two different feature sets in 10-fold cross validation in the OffensiveCommunity\xspace forum.} \label{fig:featureComp} \end{figure} {\bf The contextual features improve classification for all approaches.} We briefly discussed contextual features in our classification section. We conducted experiments with and without these features for all six algorithms and show the results in Figure \ref{fig:featureComp} for OffensiveCommunity\xspace.
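For concreteness, appending contextual features to a text feature vector before classification can be sketched as follows; the specific features below (token count, number of posts, thread position) are hypothetical choices for illustration, not necessarily the exact set used in our system:

```python
def add_contextual_features(text_vec, thread):
    # Hypothetical contextual features: total token count, number of
    # posts, and the thread's position in the forum listing.
    n_tokens = sum(len(post.split()) for post in thread["posts"])
    ctx = [float(n_tokens), float(len(thread["posts"])),
           float(thread.get("position", 0))]
    return list(text_vec) + ctx

thread = {"posts": ["how to hack wifi", "use aircrack"], "position": 3}
fv = add_contextual_features([0.1, 0.7], thread)
assert len(fv) == 5   # 2 text dimensions + 3 contextual dimensions
assert fv[2] == 6.0   # 4 + 2 tokens across the two posts
```

The concatenated vector is then fed to any of the classifiers compared above.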
Including the contextual features in our classification improves the accuracy for all approaches (on average by 2.4\%). The greatest beneficiary is the Bag-of-Words method, whose accuracy improves by roughly 6\%. \section{Related Work} \label{sec:related} We summarize related work grouped into areas of relevance. {\bf a. Identifying entities of interest in security forums.} Recently there have been a few efforts focused on extracting entities of interest in security forums. A very interesting study focuses on the dynamics of the black market of hacking goods and services and their pricing~\cite{Portnoff2017}, which for us is one of the categories of interest. Other recent efforts focus on identifying malicious IP addresses that are reported in the forum~\cite{Joobin2018,Joobin2017}, which is a relatively different task, since there the entity of interest has a well-defined format. Another interesting work~\cite{Tavabi2018} uses a word embedding technique focused on identifying vulnerabilities and exploits. {\bf b. Identifying malicious users and events.} Several studies focus on identifying key actors and malicious users in security forums by utilizing their social and linguistic behavior~\cite{Li2014,Marin2018_keyhacker,abbasi2014}. Other works~\cite{Gharibshah2019WWW,Sapienza2017_USC1} identify emerging threats by monitoring thread activities and the behavior of malicious users, and correlating them with information from security experts on Twitter. User behaviors have also been studied~\cite{Zhabiz2020,eli2016} to identify abnormal users through their interactions with each other. Another study~\cite{Sapienza2018_USC2} detects emerging security concerns by monitoring the keywords used in forums and other online platforms, such as blogs. {\bf c. Analyzing other online forums.} Researchers have analyzed a wide range of online forums, such as blogs, commenting platforms, and Reddit. Indicatively, we refer to a few recent studies.
Google~\cite{Zhang2017} analyzed {\em question-answer} forums and also published the large dataset that they collected. Another study focuses on detecting question-answer threads within a discussion forum using linguistic features~\cite{Cong2008SIGIR}. Despite many common algorithmic approaches, we argue that each type of forum and each focus question necessitates novel algorithms. {\bf d. NLP, Bag-of-Words, and Word Embedding techniques.} Natural Language Processing is a vast field, and even the more recent approaches, such as query transformation and word embedding, have benefited from a significant number of studies~\cite{Scells2018,mccallum1998naive,Mikolov2013,Le2014_Doc2Vec,Li2014,Jin2016,Wang2018,Zamani2017,Shen2018,Lee1999}. Most recently, several methods combine word embedding and deep learning approaches for text classification~\cite{Wang2018,Zamani2017,Shen2018,Bert2018}. We now discuss the most relevant previous efforts. These efforts use word embedding representations for text classification, but: (a) neither focuses on forums, and (b) there are other technical differences with our work. The first work, predictive text embedding (PTE)~\cite{Tang2015}, uses a network-based approach, where each thread is described by a network of interconnected entities (title, body, author, etc.). The second study, LEAM~\cite{Wang2018}, uses a word embedding and a Neural Network classifier to create a thread embedding. LEAM argues that it outperforms PTE, and as we show here, we outperform LEAM. Recently, Google introduced BERT~\cite{Bert2018}, a deep bidirectional transformer for language understanding, pre-trained as an unsupervised language model on a large corpus of data. Although the power of a large training corpus is indisputable, we saw first-hand the need for some customization for each domain.
Finally, there are some efforts that use Doc2Vec to identify the embedding of a document (equivalently, a thread in our case). However, these techniques would not work well here due to the small size of the datasets~\cite{Le2014_Doc2Vec}. This technique could be applied in much larger forums, and we will consider it in such a scenario in the future. \section{Conclusion} There is a wealth of information in security forums, but the analysis of security forums is still in its infancy, despite several promising recent works. We propose a novel approach to identify and classify threads of interest based on a multi-step weighted word embedding approach. As we saw, our approach consists of two parts: (a) a similarity-based approach to extract relevant threads reliably, and (b) a weighted embedding-based classification method to classify threads of interest into user-defined classes. The key novelty of the work is a multi-step weighted embedding approach: we project words, threads and classes in the embedding space and establish relevance and similarity there. Our work is a first step towards developing an easy-to-use methodology that can harness some of the information in security forums. The ease of use stems from the ability of our method to operate with an initial bag of keywords, which our system uses as seeds to identify threads that the user is interested in. \section{Acknowledgments} This work is supported by DHS ST Cyber Security (DDoSD) HSHQDC-14-R-B00017 grant, NSF NeTS 1518878 and UC-NL-CRT LFR 18548554.
\section{Introduction} Shaping codes are used to encode information for use on channels with a cost constraint. A prominent application is in data transmission with a power constraint, where constellation shaping is achieved by addressing into a suitably designed multidimensional constellation or, equivalently, by incorporating, either explicitly or implicitly, some form of non-equiprobable signaling. An excellent reference on this topic is Fischer~\cite{Fischer}. More recently, shaping codes have been proposed for use in data storage on flash memories subject to a constraint on memory cell wear. In that application, storage system requirements often impose a rate constraint, and the data source may be structured, rather than unconstrained. Motivated by this scenario, this paper investigates information-theoretic properties and design of rate-constrained fixed-to-variable length shaping codes for noiseless, memoryless costly channels and general i.i.d. sources. The analysis relies on the theory of word-valued sources developed in Nishiara and Morita~\cite{NishiaraMorita}. Our primary interest is in the design of codes that minimize the average cost per code symbol for a given rate, or expansion factor, which we refer to as the \emph{type-\Romannum{1} shaping problem}. We also consider the well-studied problem of designing codes that minimize average cost per source symbol, or \emph{total cost}, which we refer to as the \emph{type-\Romannum{2} shaping problem}.
Such codes have been studied in the context of random number generating source codes by Han and Uchida~\cite{HanUchida} and as distribution matching (DM) codes by B\"{o}cherer and Mathar~\cite{BochererMatharDM}, B\"{o}cherer~\cite{BochererThesis}, Amjad and B\"{o}cherer~\cite{AmjadBocherer}, B\"{o}cherer and Amjad~\cite{BochererAmjad}, Schulte and B\"{o}cherer~\cite{SchulteBocherer} and Schulte and Steiner~\cite{SchulteSteiner}. Our shaping code analysis suggests a new performance measure -- generalized expansion factor (GEF) -- for fixed-to-variable length DM codes, which we use to study codes that minimize informational divergence and normalized informational divergence from a shaping code perspective. There is a substantial literature on shaping codes and, more recently, a body of work relating to DM codes. Therefore, before summarizing our results in more detail, we provide a brief review of relevant work in both of these areas as a framework for our contributions. \subsection{Shaping Codes} \subsubsection{Codes minimizing total cost} The problem of coding for noiseless costly channels, or source coding with unequal symbol costs, traces its conceptual origins to Shannon's 1948 paper that launched the study of information theory~\cite{Shannon}. In that paper, Shannon considered the problem of transmitting information over a telegraph channel. The channel symbols -- dots and dashes -- have different time durations, which can be interpreted as transmission costs. Shannon determined the symbol probabilities that maximize the data transmission rate with integer symbol costs. This result was then generalized to arbitrary positive symbol costs by Krause~\cite{Krause} and Csisz\'{a}r~\cite{Csiszar1969}. Several researchers have considered the problem of designing codes for costly channels with an i.i.d. source.
Most of this work has emphasized construction of codes that minimize average cost per source symbol, which we refer to as \emph{total cost}, without an explicit rate constraint. In Karp~\cite{Karp}, costly channel coding was studied from an algebraic perspective, and the problem of designing a shaping code to minimize total cost was recast as an integer programming problem. However, this code design approach is not computationally practical, and the algorithm proposed to reduce the complexity yields sub-optimal results. In Golin et al.~\cite{Golin}, a dynamic programming solution for this integer programming problem was proposed, providing a polynomial time bound on complexity. Other approaches using tree-based constructions were proposed in~\cite{Krause}, Mehlhorn~\cite{Mehlhorn}, and Csisz\'{a}r and K\"{o}rner~\cite{CsiszarKorner}. They all constructed asymptotically optimal prefix-free variable-length shaping codes. A universal coding scheme based on types was also introduced in~\cite{CsiszarKorner}. A special case, corresponding to a uniform i.i.d. source in which all codewords are equally likely to occur, was studied by Varn~\cite{Varn}, who proposed a variable-length code construction that minimizes the average codeword cost for a fixed codebook size. This coding technique was then incorporated into a universal coding scheme in Iwata~\cite{Iwata}, which combines LZ78 compression with Varn coding. Later in this paper, we generalize the Iwata scheme, which can be viewed as an embodiment of a separation theorem proved in Section~\ref{OptimalCodes}, and further explore properties and applications of Varn codes. A generalization of Huffman coding for unequal symbol costs was proposed in Gilbert~\cite{Gilbert}. In Guazzo~\cite{Guazzo}, a practical arithmetic coding technique was introduced. This coding technique was then generalized by Savari and Gallager~\cite{SavariGalleger} and its properties, such as optimality and coding delay, were analyzed.
However, the analysis is based on infinite precision arithmetic coding, which cannot be realized in practice. In B\"{o}cherer and Mathar~\cite{BochererMatharDM} and B\"{o}cherer~\cite{BochererThesis}, a variable-to-fixed length code construction called geometric Huffman coding was used to design codes for an i.i.d. uniform source that asymptotically minimize the total cost of a noiseless channel with unequal symbol durations. (This construction matches codeword probabilities to dyadic symbol distributions that best approximate the optimal symbol distribution.) We emphasize that all of the codes mentioned above considered the problem of minimizing cost per source symbol, i.e., total cost, with no explicit consideration of rate. The dependence of total cost on code rate was not thoroughly investigated. \subsubsection{Rate-constrained codes minimizing average cost} The problem of designing rate-constrained codes for costly channels has received less attention. The maximum entropy of a stationary Markov chain on a finite-state channel with associated symbol/transition costs, along with the entropy-maximizing symbol/transition probabilities, can be found in McEliece and Rodemich~\cite{McElieceRodemich}, Justesen and H{\o}holdt~\cite{Justesen}, and Khandekar, McEliece, and Rodemich~\cite{Khandekar}. In McEliece~\cite{McElieceBook} and, later, B\"{o}cherer~\cite{BochererThesis}, the special case corresponding to a memoryless channel is addressed. B\"{o}cherer and Mathar~\cite{BochererMatharDM} and B\"{o}cherer~\cite{BochererThesis} apply the geometric Huffman coding approach to design variable-to-fixed length codes that match codeword probabilities to dyadic symbol distributions that approximate the entropy-maximizing probability mass function for memoryless costly channels subject to an average cost constraint, thereby asymptotically achieving the maximum rate.
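For intuition, the greedy tree-growing idea behind Varn's construction for a uniform source can be sketched as follows; the cost values are illustrative, and the final selection of the $n$ cheapest leaves is a simplification of the full construction:

```python
import heapq

def varn_code(costs, n):
    """Greedy sketch of a Varn code for n equally likely source words:
    grow a prefix tree over the channel alphabet, always expanding the
    cheapest leaf, then keep the n cheapest leaves as codewords."""
    heap = [(0.0, "")]  # (accumulated cost, codeword-so-far)
    while len(heap) < n:
        c, w = heapq.heappop(heap)       # cheapest leaf becomes internal
        for sym, sym_cost in costs.items():
            heapq.heappush(heap, (c + sym_cost, w + sym))
    return sorted(heap)[:n]

# Telegraph-style channel: a dot costs 1 time unit, a dash costs 3.
leaves = varn_code({".": 1.0, "-": 3.0}, 4)
codewords = [w for _, w in leaves]
avg_cost = sum(c for c, _ in leaves) / len(leaves)
assert avg_cost == 3.75
```

For n = 4 this yields the prefix-free codewords {-, .-, ..., ..-} with average cost 3.75, cheaper than the uniform-depth alternative {.., .-, -., --}, whose average cost is 4.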
The state-splitting algorithm~\cite{ACH}, which was developed to construct finite-state codes for constrained channels, has been extended for application to construction of codes for costly channels. Heegard, Marcus, and Siegel~\cite{HMS} studied a class of channels with average runlength constraints, which represent a special case of noiseless channels with a cost constraint. They constructed variable-to-variable length synchronous codes using state-splitting techniques adapted for channels with variable-length symbols. Khayrallah and Neuhoff~\cite{KhayNeu} and McLaughlin and Khayrallah~\cite{McLKhay} construct fixed-to-fixed length and variable-to-fixed length codes based on state-splitting methods for magnetic recording and constellation shaping applications. Krachkovsky et al.~\cite{Krach} determine a costly channel model matched to a Markov source and construct corresponding codes using enumerative techniques for application to transmission over an intersymbol-interference channel. All of these works strive to construct codes that come close to the capacity-cost functions originally presented in~\cite{McElieceBook},\cite{McElieceRodemich}, and~\cite{Justesen}. Other recent work relating to this problem has been motivated by non-volatile memory applications, so we briefly describe the corresponding costly channel model. NAND flash memory uses floating-gate transistors, commonly referred to as \emph{cells}, to store information in the form of different cell voltage levels. The flash memory cells gradually wear out with repeated writing and erasing, referred to as program/erase cycling, and the damage caused by the cycling is dependent on the programmed voltage levels~\cite{LiuNVMW2016},~\cite{LiuSieGC16}. The costly channel model associates to each cell voltage level a wear cost reflecting the extent of the damage induced by writing that level. Recently, in~\cite{Jagmohan}, Jagmohan et al. 
proposed \textit{endurance coding}, intended for shaping of programmed data for flash memories. For a given cost model and a specified target code rate, the optimal distribution of cell levels that minimizes the average cost was determined analytically, reproducing the results in the references cited above. For single bit per cell (SLC) flash memory, with associated level costs of 0 and 1, greedy enumerative codes that minimize the number of cells with cost 1 were designed and evaluated in terms of the rate-cost trade-off. However, endurance coding is intended for uniform i.i.d. source data. For structured source data, which would include a general i.i.d. source, the idea of combining source compression with endurance coding was proposed, but the relationship between the code performance and the code rate for arbitrary sources was not thoroughly studied. In Sharon et al.~\cite{Sharon}, low-complexity, rate-1, fixed-length \textit{direct shaping codes} for structured data were proposed for use on SLC flash memory. The code construction used a greedy approach based upon an adaptively-built encoding dictionary that does not require knowledge of the source statistics. This construction was extended to a direct shaping code compatible with two-bit per cell (MLC) flash memory operation by Liu et al. in~\cite{LiuNVMW2016}, \cite{LiuSieGC16}. However, it was proved in Liu and Siegel~\cite{LiuISIT2017} that direct shaping codes are in general suboptimal. (Our experimental results in Section~\ref{sec:experiment} contain a comparison of a shaping scheme motivated by our analysis to a direct shaping code on MLC flash memory.) \subsubsection{Summary of contributions on shaping codes} In this paper, our goal is to systematically study the fundamental performance limits of fixed-to-variable length shaping codes from a rate and distribution perspective.
We first use known properties of word-valued sources to determine the symbol occurrence probability of shaping code output sequences (Lemma~4). We then derive an upper bound on the code sequence entropy rate (Lemma~5). Using these results, we are able to reduce the problem of minimizing average code symbol cost subject to a constraint on the code rate to an optimization problem for an i.i.d. process. This problem can be viewed as the dual problem to the entropy-maximization problem considered in the prior literature. We refer to this minimization problem as the \textit{type-\Romannum{1} shaping problem}, and we call shaping codes that achieve the minimum average cost for a given rate \textit{optimal type-\Romannum{1} shaping codes.} We develop a theoretical bound on the trade-off between the rate -- or more precisely, the corresponding \emph{expansion factor} -- and the average cost of a type-\Romannum{1} shaping code (Theorem~6). We then study shaping codes that minimize total cost (minimum average cost per source symbol). We refer to the problem of minimizing the total cost as the \textit{type-\Romannum{2} shaping problem} and shaping codes that achieve the minimum total cost are referred to as \textit{optimal type-\Romannum{2} shaping codes}. We derive the relationship between the code expansion factor and the total cost and determine the optimal expansion factor (Theorem~7). We then prove an equivalence theorem showing that an optimal type-\Romannum{1} shaping code can be realized using an optimal type-\Romannum{2} shaping code for another suitably chosen costly channel model (Theorem~8). We can therefore solve the type-\Romannum{1} shaping problem using known coding techniques such as generalized Shannon-Fano codes~\cite{CsiszarKorner}. A consequence of the analysis is a separation theorem for type-\Romannum{2} shaping codes, which states that optimal shaping can be achieved by a concatenation of lossless compression and optimal shaping for a uniform i.i.d.
source. This provides an alternative architecture for implementing asymptotically optimal shaping codes using, for example, Varn codes. Finally, we prove a separation theorem for type-\Romannum{1} shaping codes with given expansion factor, using a careful analysis of the behavior of the minimum average cost as a function of the expansion factor. \subsection{Distribution Matching (DM) Codes} \subsubsection{Applications of DM codes to shaping} The application of non-equiprobable signaling in the context of coding with a cost constraint reflects the interesting interplay between shaping codes and DM-type codes (in the broad sense of codes that map an i.i.d. sequence of source symbols to an output sequence of symbols that are approximately independent and distributed according to a target distribution $\{P_i\}$). Beginning with the work on constellation shaping, there have been a number of applications of DM-type codes to coding for a costly channel. In~\cite{Forney}, signal constellations with non-uniform symbol probabilities were used for efficient modulation on band-limited channels. Noting the dual nature of non-equiprobable signaling and source coding, several authors proposed the use of ``reverse'' source codes derived from, for example, Huffman codes, Tunstall codes, and arithmetic codes as DM codes for shaping applications. See, for example, works by Kschischang and Pasupathy~\cite{KP}, Ungerboeck~\cite{Ungerboeck}, Abrahams~\cite{Abrahams}, Baur and B\"{o}cherer~\cite{Baur}. Gallager~\cite[p. 208]{Gallager} proposed a method of generating symbols with a biased distribution to be combined with linear coding as an approach to achieving capacity of an asymmetric channel. This idea was incorporated into a general scheme that can use capacity-achieving codes for symmetric channels, such as polar codes, to achieve the capacity of arbitrary discrete memoryless asymmetric channels in Mondelli et al.~\cite{Mondelli}. In~\cite{BochererSteinSchulte}, B\"{o}cherer et al.
propose a scheme that combines DM codes (such as constant composition codes) with systematic error correction codes. This scheme can be regarded as a simplification of the bootstrap scheme in B\"{o}cherer and Mathar~\cite{BochererMatharLDPC}, which concatenates the check bits generated by the systematic ECC encoder with the following information bits and applies a DM encoder to them. In~\cite{Mondelli}, the authors also proved that the bootstrap scheme, which they refer to as a chaining construction, can be used to achieve the capacity of any discrete memoryless asymmetric channel. \subsubsection{DM codes with optimality measures} In Han~\cite{HanBook} and Visweswariah et al.~\cite{Vis1998}, it was shown that an optimal variable-length source code can be regarded as an optimal variable-length DM code for a uniform distribution. The criterion for optimality was the vanishing of a form of normalized conditional Kullback-Leibler (KL) divergence between a subset of codewords of fixed length and words generated i.i.d. with the target distribution, asymptotically in the block length. This result was further developed in Han and Uchida~\cite{HanUchida}, where an optimal variable-length source code with cost, meaning a code that minimizes total cost, was shown to be an optimal DM code. The maximum achievable rate of non-prefix-free DM codes was discussed in Uchida~\cite{Uchida}. In~\cite{BochererMatharDM}, dyadic probability mass functions with some optimality properties were used to match the capacity-achieving probability distribution of a discrete memoryless channel, and variable-to-fixed length geometric Huffman codes, mentioned earlier, were used as DM codes. Normalized informational divergence -- defined as the KL-divergence between a codeword probability distribution and the distribution of the codewords when generated i.i.d. by the target distribution, normalized by the codeword length -- was introduced as the DM code optimality measure. 
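These two divergence measures can be illustrated with a short sketch. The code and target below are made up, and the codebook is complete (the i.i.d.-induced codeword probabilities sum to one), so the divergence is guaranteed to be non-negative:

```python
import math

def info_divergence(code, target):
    """KL divergence between the codeword distribution and the probability
    that the same strings are emitted i.i.d. by the target distribution."""
    div = 0.0
    for word, p in code.items():
        q = 1.0
        for sym in word:          # i.i.d. probability of the codeword
            q *= target[sym]
        div += p * math.log2(p / q)
    return div

def normalized_info_divergence(code, target):
    # Normalize by the expected codeword length, as in the measure above.
    expected_len = sum(p * len(w) for w, p in code.items())
    return info_divergence(code, target) / expected_len

# Complete binary prefix-free code with codeword probabilities,
# compared against a target distribution with Pr(0) = 2/3.
code = {"0": 0.5, "10": 0.25, "11": 0.25}
target = {"0": 2 / 3, "1": 1 / 3}

assert info_divergence(code, {"0": 0.5, "1": 0.5}) == 0.0  # perfect match
assert normalized_info_divergence(code, target) > 0.0      # mismatch
```

When the codeword probabilities exactly equal the i.i.d.-induced probabilities, both measures vanish; any mismatch yields a strictly positive divergence.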
It was then proved that geometric Huffman coding is asymptotically optimal, in the sense that the normalized informational divergence converges to zero as the codeword length increases. Other fixed-length DM codes with vanishing normalized informational divergence were presented in Ramabadran~\cite{Ramabadran} and~\cite{SchulteBocherer}. Constellation shaping techniques have also been adapted for use in DM coding. For example, a DM code using shell mapping was presented in~\cite{SchulteSteiner} and a DM code using trellis shaping was presented by Gultekin et al.~\cite{Gultekin}. In~\cite{AmjadBocherer}, the notions of informational divergence and normalized informational divergence were extended to measure the performance of fixed-to-variable length codes. Optimality of complete Tunstall code trees with respect to minimizing informational divergence was proved, a result we extend in Section~\ref{sec::compare}. An efficient algorithm for finding binary DM codes that minimize the normalized informational divergence, based on an iterative adaptation of binary Tunstall coding, was presented, and asymptotic optimality with increasing block length was established. In~\cite{BochererAmjad}, the relationship between the normalized informational divergence of a DM code and its rate was studied, a topic that we further address in Section~\ref{sec::compare}. \subsubsection{Summary of contributions on DM codes} In this paper, we systematically study the problem of designing optimal fixed-to-variable length, prefix-free DM codes from the perspective of word-valued sources and shaping codes. The degree of distribution matching is measured by the KL-divergence between the distribution on word-valued source output sequences and the distribution on those sequences generated i.i.d. according to the target distribution.
Vanishing asymptotic normalized KL-divergence at the sequence level, suggested by the approach in~\cite{NishiaraMorita} and also studied by Soriaga~\cite{Soriaga}, is used as the criterion for optimality. We first characterize the expansion factor of an optimal DM code for a general i.i.d. source (Theorem~12). We then show that an optimal type-\Romannum{2} shaping code for a cost model determined by the negative logarithm of a target distribution is an optimal DM code for that distribution (Theorem~13). (This ``self-information'' cost model was also used in~\cite{SchulteSteiner} to design information divergence optimal fixed-to-fixed length DM codes using shell mapping.) The connection between shaping codes and DM codes suggests another measure for evaluating DM code performance, which we refer to as \textit{generalized expansion factor} (GEF). We establish a lower bound on the generalized expansion factor, and show that a code that achieves the lower bound is an optimal DM code (Theorem~15). This implies that Varn codes are asymptotically optimal DM codes for a uniform i.i.d. source. Using the GEF, we also extend the separation theorem of shaping codes to DM codes (Theorem~16). Finally, we discuss relationships between different DM code performance measures. We show that for a DM code with fixed codebook size, minimizing the GEF is equivalent to minimizing the informational divergence introduced in~\cite{AmjadBocherer}, leading to the conclusion that Varn codes designed for the appropriate cost model minimize informational divergence (Theorem~17), generalizing a result for binary Tunstall codes in~\cite{AmjadBocherer}. We also give an explicit description of the relationship between the normalized informational divergence of a DM code and its expansion factor (Theorem~18), refining a bound in~\cite{BochererAmjad}. \subsection{Organization of the Paper} The remainder of the paper is organized as follows.
In Section~\ref{Preliminaries}, we use known properties of word-valued sources to determine the symbol occurrence probability of shaping code output sequences and the lower bound on the symbol distribution entropy. In Section~\ref{OptimalCodes}, we analyze the distribution, cost, and rate properties of fixed-to-variable length shaping codes. The analysis is then used to prove the equivalence theorem and separation theorem. In Section~\ref{sec:DM}, we establish the equivalence between optimal distribution matching codes and optimal shaping codes. Section~\ref{sec:gef} introduces the generalized expansion factor and proves the separation theorem for DM codes. Section~\ref{sec::compare} compares different DM code performance measures. In Section~\ref{sec:experiment}, we apply a shaping scheme motivated by our theoretical results to a multilevel flash memory, and we show simulation results illustrating the application of Varn codes to DM coding. Section~\ref{sec:conclude} concludes the paper. \section{Information-theoretic Preliminaries} \label{Preliminaries} \subsection{Basic Model} First, we fix some notation. Let $\mathbf{X} = X_1X_2\ldots$, where $X_i \sim X$ for all $i$, be an i.i.d. source with alphabet $\mathcal{X}=\{\alpha_1,\ldots,\alpha_u\}$. We use $|\mathcal{X}|$ to denote the size of the alphabet and use $P(x^*)$ to denote the probability of any finite sequence $x^*$. Let $\mathcal{Y}=\{\beta_1,\ldots, \beta_v\}$ be an alphabet and $\mathcal{Y}^*$ be the set of all finite sequences over $\mathcal{Y}$, including the null string $\lambda$ of length 0. Each $\beta_i$ is associated with a cost $C_i$. Without loss of generality, we assume that $0\leq C_1\leq C_2\leq \ldots \leq C_v$, and we also assume that there exists at least one pair of costs, $C_i$ and $C_j$, such that $C_i\neq C_j$. We use a cost vector $\mathcal{C}=[C_1, C_2, \ldots, C_v]$ to represent the cost associated with alphabet $\mathcal{Y}$.
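As a small numerical illustration of this cost model (the SLC-like cost values below are ours, for illustration only): given a candidate symbol distribution on $\mathcal{Y}$, the expected per-symbol cost is the inner product of the distribution with $\mathcal{C}$, and the distribution's entropy governs the achievable code rate:

```python
import math

# Illustrative cost vector with C_1 <= C_2, as assumed above, and a
# candidate symbol distribution on the channel alphabet Y = {beta_1, beta_2}.
costs = [0.0, 1.0]
probs = [0.75, 0.25]

avg_cost = sum(p * c for p, c in zip(probs, costs))     # expected cost/symbol
entropy = -sum(p * math.log2(p) for p in probs if p > 0)

assert abs(avg_cost - 0.25) < 1e-12
assert 0.81 < entropy < 0.82   # H(0.25) ~ 0.8113 bits/symbol
```

Biasing the distribution toward the cheap symbol lowers the average cost but also lowers the entropy, which is the trade-off studied in the rest of the paper.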
A general shaping code is defined as a prefix-free variable-length mapping $\phi: \mathcal{X}^q \rightarrow \mathcal{Y}^*$ which maps a length-$q$ data word $x_1^q$ to a variable-length codeword $y^*$. We use $\mathbf{Y}$ to denote the process $\phi(\mathbf{X}^q)$, where $\mathbf{X}^q$ is the vector process $X_1^q, X_{q+1}^{2q}, \ldots$. The entropy rate of the process $\mathbf{Y}$ is \begin{equation} H(\mathbf{Y})=\lim_{n\rightarrow \infty}\frac{1}{n}H(Y_1Y_2\ldots Y_n). \end{equation} We denote the length of a codeword $\phi(x_1^q)$ by $L(\phi(x_1^q))$, and the expected codeword length for a length-$q$ source word is given by \begin{equation} E(L)=\sum_{x_1^q\in \mathcal{X}^q} P(x_1^q)L(\phi(x_1^q)). \end{equation} The \textit{expansion factor} is defined as the ratio of the expected codeword length to the length of the input source word, namely \begin{equation} f=E(L)/q. \end{equation} \begin{remark} Endurance codes and direct shaping codes can be treated as special cases of this general class of shaping codes. Endurance codes are used when the source has a uniform i.i.d. distribution, with entropy rate $H(\mathbf{X})= \log_2 |\mathcal{X}|$. A length-$m$ direct shaping code is a shaping code with $q=1$, $f=1$, where both $\mathbf{X}$ and $\mathbf{Y}$ have alphabet size $2^m$. \hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{remark} The pair $\mathbf{X}$ and $\phi$ form a word-valued source, as defined in~\cite{NishiaraMorita}. The following theorem, proved in~\cite{NishiaraMorita}, gives the entropy rate of the shaping code $\phi(\mathbf{X}^q)$. \begin{theorem} \label{Hiroyoshitheorem} For a prefix-free variable-length code $\mathbf{Y} = \phi(\mathbf{X}^q)$ such that $H(\mathbf{X}^q)<\infty$ and $E(L)<\infty$, the entropy rate of the encoder output satisfies \begin{equation} H(\mathbf{Y})=\frac{H(\mathbf{X}^q)}{E(L)}=\frac{qH(\mathbf{X})}{E(L)}.
\end{equation}\hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{theorem} \subsection{Asymptotic Symbol Occurrence Probability} For simplicity and without loss of generality, we assume $q=1$. The mapping is $\phi: \mathcal{X}\rightarrow \mathcal{Y}^*$. Let $y_1^l$ denote the first $l$ symbols of $\phi(\mathbf{X})$. We assume the cost is independent and additive, so the cost of sequence $y_1^l$ can be expressed as \begin{equation} W(y_1^l) = \sum_{i=1}^v N_i(y_1^l)C_i \vspace{-1ex} \end{equation} where $N_i(y_1^l)$ stands for the number of occurrences of $\beta_i$ in sequence $y_1^l$. The cost per code symbol is therefore $\sum_i N_i(y_1^l)C_i/l$. Let \begin{equation} Q(y_1^l)=\Pr\{Y_1^l = y_1^l\} \end{equation} denote the probability distribution of $Y_1^l$. The expected cost per symbol of a length-$l$ shaping code sequence is \begin{equation} \vspace{-1ex} \label{equ:averagewearcost1} \begin{split} W_l &= \sum_{y_1^l \in \mathcal{Y}^l} Q(y_1^l)W(y_1^l) /l\\ & = \sum_{i=1}^v \sum_{y_1^l \in \mathcal{Y}^l} Q(y_1^l) N_i(y_1^l) C_i /l. \end{split} \vspace{-1ex} \end{equation} The asymptotic expected cost per symbol, or \textit{average cost}, of a shaping code is \vspace{-1ex} \begin{equation} \label{equ:averagewearcost2} A(\phi(\mathbf{X}))=\lim_{l \rightarrow \infty}W_l. \end{equation} Let \begin{equation} \label{equ:averagewearcost3} \hat{P_i} = \lim_{l\rightarrow \infty} \sum_{y_1^l} Q(y_1^l)N_i(y_1^l)/l =\lim_{l\rightarrow \infty} \frac{E(N_i(Y_1^l))}{l} \end{equation} be the asymptotic probability of occurrence of $\beta_i$. Then the average cost of a shaping code can be expressed as \begin{equation} A(\phi(\mathbf{X})) = \sum_i \hat{P_i} C_i. \end{equation} In the rest of this subsection, we will show how to calculate $\hat{P_i}$. Define the prefix operator $\pi$ as $y_1^n\pi^i = y_1^{n-i}$ for $0\leq i < n$ and $y_1^n\pi^i = \lambda$ for $i\geq n$. Let $\pi\{y^*\}$ denote the set of all the prefixes of a sequence $y^*$.
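The prefix operator just defined can be illustrated with a short sketch over strings (a purely illustrative implementation, assuming sequences are represented as Python strings):

```python
def prefix_op(y, i):
    """y_1^n pi^i: drop the last i symbols; the null string lambda for i >= n."""
    if i == 0:
        return y
    return y[:-i] if i < len(y) else ''

def prefix_set(y):
    """pi{y*}: the set of all prefixes of y, from the null string up to y itself."""
    return {y[:k] for k in range(len(y) + 1)}

assert prefix_op('0110', 2) == '01'
assert prefix_op('0110', 9) == ''              # i >= n yields lambda
assert prefix_set('01') == {'', '0', '01'}
```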
We denote by $\mathcal{G}_\phi(y_1^l)$ the set of all sequences $x^*\in \mathcal{X}^*$ such that $y_1^l$ is a prefix of $\phi(x^*)$ but not of $\phi(x^*\pi)$. That is, \begin{equation} \mathcal{G}_{\phi}(y_1^l) = \{x^*\in \mathcal{X}^*|y_1^l\in\pi\{\phi(x^*)\}\wedge|\phi(x^*\pi)|<l\} \end{equation} and the distribution of $y_1^l$ can be expressed as \begin{equation} Q(y_1^l) = \sum_{x^*\in \mathcal{G}_{\phi}(y_1^l)} P(x^*). \end{equation} We denote by $M_l$ the minimum length of a sequence $x_1^n$ such that $|\phi(x_1^n)|\geq l$ and let $S_{M_l}$ be the length of $\phi(x_1^{M_l})$. Note that \begin{equation} S_{M_l-1}<l\leq S_{M_l}. \end{equation} According to \cite{NishiaraMorita}, the random variable $M_l$ is a \textit{stopping rule} for the sequence of i.i.d. random variables $\{\phi(X^\infty)\}$. Wald's equality \cite{Wald} then implies that \begin{equation} \label{equ:walds} E(N_i(\phi(X_1^{M_l})))=E(N_i(\phi(X)))E(M_l). \vspace{-1ex} \end{equation} The following two lemmas were proved in \cite{NishiaraMorita}. \vspace{-1ex} \begin{lemma} \label{lemma:11} Given a nonnegative-valued function $f$, let $F_i=f(X_i)$. If $E(F)<\infty$, then \begin{equation} \lim_{l\rightarrow \infty}\frac{E(F_{M_l})}{l}=0. \end{equation}\hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{lemma} \begin{remark} The previous lemma is not obvious, because even when $E(F)<\infty$, $E(F_{M_l})$ is not necessarily equal to $E(F)$. \hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{remark} \begin{lemma} \label{lemma::wald} If $E(L)<\infty$, then \begin{equation} \lim_{l\rightarrow \infty}\frac{E(M_l)}{l} = \frac{1}{E(L)}. \end{equation}\hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{lemma} Using these results, we derive a lemma that tells us how to calculate the asymptotic occurrence probability of the encoder output process $\mathbf{Y}$.
\vspace{-1em} \begin{lemma} \label{lemma:10} For a prefix-free variable-length code $\phi: \mathcal{X}^q \rightarrow \mathcal{Y}^*$ such that $E(N_i(\phi(X^q))) < \infty$ for all symbols $\beta_i$ and ${E(L)<\infty}$, the asymptotic probability of occurrence $\hat{P_i}$ of $\beta_i$ is given by \begin{equation} \hat{P_i}=E(N_i(\phi(X^q)))\frac{1}{E(L)}. \end{equation} \end{lemma} \begin{IEEEproof} See Appendix \ref{appen:1}. \end{IEEEproof} It is easy to check that $\sum_i \hat{P_i} = 1$, so this distribution is well defined. \subsection{Lower Bound on Symbol Distribution Entropy} Consider a prefix-free variable-length code $\phi$ as in Lemma~\ref{lemma:10}. Let $\hat{Y}_1^l$ denote an i.i.d. sequence of length $l$ with distribution $\{\hat{P_i}\}$. The probability of a length-$l$ sequence $y_1^l$ with respect to this distribution is $\hat{P}(y_1^l) = \prod_i \hat{P_i}^{N_i(y_1^l)}$. The Kullback-Leibler (KL) divergence (also known as the KL-distance or relative entropy)~\cite{Cover} measures the inefficiency incurred by approximating one distribution with another. The KL-divergence between $Y_1^l$ and $\hat{Y}_1^l$ is \begin{equation} D(Y_1^l || \hat{Y}_1^l) = \sum_{y_1^l\in \mathcal{Y}^l} Q(y_1^l) \log_2 \frac{Q(y_1^l)}{\hat{P}(y_1^l)}. \end{equation} The following lemma provides a lower bound on the symbol distribution entropy. \begin{lemma} \label{upperbound_marginaldistribution} The entropy $H(\hat{Y}) = -\sum_i \hat{P_i}\log_2 \hat{P_i}$ is lower bounded by the entropy rate of the shaping code sequence, i.e., \begin{equation} H(\hat{Y}) \geq H(\mathbf{Y}). \end{equation} Specifically, \begin{equation} \lim_{l\rightarrow \infty} \frac{1}{l}D(Y_1^l || \hat{Y}_1^l) = H(\hat{Y}) - H(\mathbf{Y}).
\end{equation} \end{lemma} \begin{IEEEproof} We rewrite $D(Y_1^l || \hat{Y}_1^l)$ as \begin{equation} \label{I_divergence1} \begin{split} &D(Y_1^l || \hat{Y}_1^l) = \sum_{y_1^l\in \mathcal{Y}^l} Q(y_1^l) \log_2 \frac{Q(y_1^l)}{\hat{P}(y_1^l)}\\ & = \sum_{y_1^l\in \mathcal{Y}^l} Q(y_1^l) \log_2 Q(y_1^l) - \sum_{y_1^l\in \mathcal{Y}^l} Q(y_1^l) \log_2 \hat{P}(y_1^l)\\ & = -H(Y_1^l) - \sum_{y_1^l\in \mathcal{Y}^l} Q(y_1^l) \log_2 \hat{P}(y_1^l). \end{split} \vspace{-1ex} \end{equation} The second term of the right-hand side of this equation is \vspace{-1ex} \begin{equation} \label{I_divergence2} \begin{split} \sum_{y_1^l\in \mathcal{Y}^l}& Q(y_1^l) \log_2 \hat{P}(y_1^l) = \sum_{y_1^l\in \mathcal{Y}^l} Q(y_1^l) \log_2 \prod_i \hat{P_i}^{N_i(y_1^l)}\\ & = \sum_{y_1^l\in \mathcal{Y}^l} Q(y_1^l) \sum_i {N_i(y_1^l)}\log_2 \hat{P_i}\\ & = \sum_i \log_2 \hat{P_i}\sum_{y_1^l\in \mathcal{Y}^l} Q(y_1^l) {N_i(y_1^l)}. \vspace{-1ex} \end{split} \end{equation} Combining equations (\ref{I_divergence1}) and (\ref{I_divergence2}), we have \begin{equation} \label{I_divergence3} \begin{split} &\lim_{l\rightarrow \infty} \frac{1}{l} D(Y_1^l || \hat{Y}_1^l) \\ &=-\lim_{l\rightarrow \infty} \frac{1}{l}H(Y_1^l) - \sum_i \log_2 \hat{P_i}\lim_{l\rightarrow \infty}\sum_{y_1^l\in \mathcal{Y}^l} \frac{Q(y_1^l) {N_i(y_1^l)}}{l}\\ &= -H(\mathbf{Y}) - \sum_i \hat{P_i} \log_2 \hat{P_i} = H(\hat{Y}) - H(\mathbf{Y}). \vspace{-1ex} \end{split} \end{equation} Using the fact that $D(Y_1^l || \hat{Y}_1^l) \geq 0$, we have \begin{equation} H(\hat{Y})\geq H(\mathbf{Y}). \end{equation} This completes the proof. \end{IEEEproof} \begin{remark} \label{remark::iid} From the proof, we see that $H(\hat{Y}) = H(\mathbf{Y})$ implies $\lim_{l\rightarrow \infty}\frac{1}{l}D(Y_1^l|| \hat{Y}_1^l)=0$. Therefore, the codeword sequence $Y_1Y_2\cdots$ approximates an i.i.d.
sequence generated by $\hat{Y}$.\hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{remark} \begin{example} \label{example1} Consider a uniform i.i.d. binary source $\mathbf{X}$ and a prefix-free variable-length code defined by the mapping $\{00\rightarrow 000, 01 \rightarrow 001, 10 \rightarrow 01, 11\rightarrow 1 \}$. The occurrence probabilities of symbols 0 and 1 are $2/3$ and $1/3$, respectively. The symbol distribution entropy is \begin{equation} H(\hat{Y})=-\frac{1}{3}\log_2\frac{1}{3}-\frac{2}{3}\log_2\frac{2}{3}\simeq 0.9183. \end{equation} The entropy rate of the shaping code sequence is \begin{equation} H(\mathbf{Y})= \frac{H(\mathbf{X}^2)}{E(L)}=\frac{2}{2.25} \simeq 0.8889. \end{equation} We see that, in this case, $H(\mathbf{Y})< H(\hat{Y})$.\hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{example} \section{Optimal Shaping Codes} \label{OptimalCodes} \subsection{Cost Minimizing Probability Distribution} In this subsection, we discuss the properties of optimal shaping codes. We consider two scenarios. First, we analyze shaping codes that minimize the average cost with a given expansion factor. We then analyze shaping codes that minimize the expected cost per source symbol, or total cost. We refer to the first minimization problem as the \textit{type-\Romannum{1} shaping problem}, and we call shaping codes that achieve the minimum average cost for a given expansion factor \textit{optimal type-\Romannum{1} shaping codes}. The following theorem gives a lower bound on the average cost and the corresponding asymptotic symbol occurrence probabilities.
\vspace{-0.5em} \begin{theorem}\label{performance:opt} Given the source $\mathbf{X}$ and cost vector $\mathcal{C}$, the average cost of a type-\Romannum{1} shaping code $\phi: \mathcal{X}^q \rightarrow \mathcal{Y}^*$ with expansion factor $f$ is lower bounded by $\sum_{i}\hat{P}_iC_i$, with \begin{equation} \hat{P}_i=\frac{1}{N}2^{-\mu C_i}, \end{equation} where $N$ is a normalization constant such that $\sum_i \hat{P}_i =1$ and $\mu$ is a non-negative constant such that $\sum_i -\hat{P}_i \log_2 \hat{P}_i=H(\mathbf{X})/f$. \end{theorem} \begin{IEEEproof} From Theorem~\ref{Hiroyoshitheorem} and Lemma~\ref{upperbound_marginaldistribution}, we see that, for a shaping code $\phi$ with expansion factor $f$, the following inequality holds: \begin{equation} H(\hat{Y})\geq H(\mathbf{Y})= \frac{qH(\mathbf{X})}{E(L)} = \frac{H(\mathbf{X})}{f}. \end{equation} To calculate the minimum possible average cost, we must solve the optimization problem: \begin{equation}\label{equ:optimize} \begin{aligned} & \underset{\hat{P}_i}{\text{minimize}} & & \sum_{i}\hat{P}_iC_i\\ & \text{subject to} & & H(\hat{Y})\geq \frac{H(\mathbf{X})}{f}\\ & && \sum_{i}\hat{P}_i=1. \end{aligned} \vspace{-0.8em} \end{equation} We divide this optimization problem into two parts. First, we fix $H(\hat{Y})$ and find the optimal symbol occurrence probabilities. Then we find the optimal $H(\hat{Y})$ to minimize the average cost. The optimization problem then becomes \begin{equation}\label{equ:optimizetwostep} \begin{aligned} & \underset{H(\hat{Y})}{\text{minimize}}\quad \underset{\hat{P}_i}{\text{minimize}} & & \sum_{i}\hat{P}_iC_i\\ & \text{subject to} & & H(\hat{Y})\geq \frac{H(\mathbf{X})}{f}\\ & && \sum_{i}\hat{P}_i=1. \end{aligned} \vspace{-0.8em} \end{equation} If we fix $H(\hat{Y})$, we can solve the optimization problem by using the method of Lagrange multipliers.
The solution is \vspace{-0.3em} \begin{equation} \hat{P}_i=\frac{1}{N}2^{-\mu C_i} \vspace{-0.5em} \end{equation} where $N=\sum_i 2^{-\mu C_i}$ is a normalization constant and $\mu$ is a non-negative constant such that \vspace{-0.3em} \begin{equation} \label{equ:relation} H(\hat{Y})=\sum_{i}-\hat{P}_i\log_2 \hat{P}_i.\vspace{-0.5em} \end{equation} Note that $\mu = 0$ if and only if $H(\hat{Y})=\log_2|\mathcal{Y}|$. For simplicity, let $h$ denote $H(\hat{Y})$. Then $\mu$ and $N$ are functions of $h$, which we denote by $N\stackrel{\mathrm{def}}{=}N(h)$ and $\mu\stackrel{\mathrm{def}}{=}\mu(h)$, respectively. Let $C(h)=\sum_i \frac{C_i}{N(h)}2^{-\mu(h)C_i}$ be the minimum cost, given that $h\geq \frac{H(\mathbf{X})}{f}$. From (\ref{equ:relation}), we see that \vspace{-0.3em} \begin{equation} \label{equ::mcliece} C(h)=\frac{h-\log_2 N}{\mu}\quad \text{when $\mu>0$}. \vspace{-0.3em} \end{equation} The optimization problem we have arrived at, minimizing the average cost of a probability mass function subject to a lower bound on its entropy, is dual to the problem considered in prior work such as~\cite[Problem 1.8]{McElieceBook} and~\cite[Sec. 5.2]{BochererThesis}, which is a special case of results in~\cite{McElieceRodemich}, \cite{Justesen}, and \cite{Khandekar}. The relationship between entropy rate and average cost discussed in these papers has the same functional form as (\ref{equ::mcliece}). We can apply the analysis in~\cite[Sec. 5.2]{BochererThesis} to conclude that \begin{equation} \label{equ::averagecostandh} \frac{\mathrm{d}h}{\mathrm{d}C} = \mu \Rightarrow \frac{\mathrm{d}C}{\mathrm{d}h} > 0\quad \text{when $\mu > 0$}. \end{equation} Therefore, the minimum cost for a shaping code with expansion factor $f$ is achieved when $h=H(\hat{Y})= \frac{H(\mathbf{X})}{f}$.
Note that we have minimized average cost by optimizing the asymptotic symbol occurrence probability $\hat{P_i}$ of a prefix-free variable-length mapping whose output entropy rate is fixed, without consideration of whether the output sequence coincides with an i.i.d. sequence. \end{IEEEproof} \begin{remark} \label{rmk::compareendurance} If the source $\mathbf{X}$ has a uniform distribution, then $\mu$ satisfies $-f\sum_i \hat{P}_i \log_2 \hat{P}_i=\log_2 |\mathcal{X}|$. Thus, we recover the result in \cite{Jagmohan} characterizing endurance codes with minimum average cost.\hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{remark} \begin{remark} When the minimum average cost is achieved, we have $H(\hat{Y})= H(\mathbf{Y})$. Thus, the codeword sequence approximates an i.i.d. sequence generated by distribution $\{\hat{P}_i\}$ (see Remark~\ref{remark::iid}). \hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{remark} Given a prefix-free variable-length shaping code $\phi: \mathcal{X}^q \rightarrow \mathcal{Y}^*$, assume that after $nq$ source symbols are encoded, the codeword sequence is $\phi(x_1^{nq})$. As in equations (\ref{equ:averagewearcost1}), (\ref{equ:averagewearcost2}), and (\ref{equ:averagewearcost3}), we formally define the expected cost per source symbol, or \textit{total cost}, of a shaping code as \begin{equation} \begin{split} T(\phi(\mathbf{X}^q))& = \frac{\sum_i E(N_i(\phi(X^{nq}))) C_i }{nq} = \frac{\sum_i E(N_i(\phi(X^q)))C_i}{q}\\ &=\frac{E(L)}{q}\frac{\sum_{i} E(N_i(\phi(X^q)))C_i}{E(L)} = f\sum_{i=1}^v \hat{P_i}C_i. \end{split} \end{equation} We refer to the problem of minimizing the total cost as the \textit{type-\Romannum{2} shaping problem}. Shaping codes that achieve the minimum total cost are referred to as \textit{optimal type-\Romannum{2} shaping codes}.
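As a numerical check of the identity $T(\phi(\mathbf{X}^q)) = f\sum_i \hat{P_i}C_i$, the following sketch evaluates the total cost for the code of Example~\ref{example1}, using an illustrative cost vector $\mathcal{C}=[1,2]$ that is not specified in the example itself:

```python
# Total-cost identity T = f * sum_i P_i C_i, checked on the code of
# Example 1 (q = 2, uniform binary source) with illustrative costs C = [1, 2].
phi = {'00': '000', '01': '001', '10': '01', '11': '1'}
q, p_word = 2, 0.25  # uniform i.i.d. source over {0,1}^2

EL = sum(p_word * len(cw) for cw in phi.values())   # E(L) = 2.25
f = EL / q                                          # expansion factor f = 1.125
# E(N_i(phi(X^q))) and the asymptotic occurrence probabilities E(N_i)/E(L)
EN = {s: sum(p_word * cw.count(s) for cw in phi.values()) for s in '01'}
P_hat = {s: EN[s] / EL for s in '01'}               # {2/3, 1/3}
C = {'0': 1.0, '1': 2.0}

T_direct = sum(EN[s] * C[s] for s in '01') / q      # per-source-symbol cost
T_factored = f * sum(P_hat[s] * C[s] for s in '01') # f * average cost
assert abs(T_direct - T_factored) < 1e-12           # both equal 1.5
```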
The corresponding optimization problem is as follows: \begin{equation} \begin{aligned} & \underset{\hat{P}_i,f}{\text{minimize}} & & f\sum_{i=1}^v\hat{P}_iC_i\\ & \text{subject to} & & H(\hat{Y})\geq H(\mathbf{Y}) = \frac{H(\mathbf{X})}{f}\\ & && \sum_{i}\hat{P}_i=1. \end{aligned} \end{equation} Using Theorem~\ref{performance:opt}, we can calculate the total cost as a function of the expansion factor $f$. Fig.~\ref{fig:costvsf} shows the total cost curve for a quaternary source and code alphabet, a uniformly distributed source $\mathbf{X}$, and cost vector $\mathcal{C}=[1,2,3,4]$. There is an optimal value of $f$ and a corresponding minimum total cost. \begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{figures/costvsf_0123} \caption{Total cost versus $f$ for a uniformly distributed source with $\mathcal{C}=[1,2,3,4]$. } \label{fig:costvsf} \end{figure} We now determine the minimum achievable total cost of a shaping code. \begin{theorem} \label{opt_expansion} Given the source $\mathbf{X}$ and cost vector $\mathcal{C}$, if $C_1 \neq 0$, then the minimum total cost of a shaping code $\phi: \mathcal{X}^q \rightarrow \mathcal{Y}^*$ is given by $f\sum_{i}\hat{P}_iC_i$, where $\hat{P}_i=2^{-\mu C_i}$ and $\mu$ is a positive constant such that $\sum_i 2^{-\mu C_i}=1$. The corresponding expansion factor $f$ is \begin{equation} f = \frac{H(\mathbf{X})}{-\sum_i \hat{P}_i \log_2 \hat{P}_i}. \end{equation} If $C_1 = 0$, then the total cost is a decreasing function of $f$. \end{theorem} \begin{IEEEproof} See Appendix~\ref{appen:2}.
\end{IEEEproof} \begin{remark} \label{rmk::comparetotalcost} For a positive cost vector $\mathcal{C}$, the minimum achievable total cost is \begin{equation} \begin{split} &T(\phi(\mathbf{X}^q)) = f\sum_i \hat{P}_iC_i =\frac{H(\mathbf{X})}{-\sum_i \hat{P}_i \log_2 \hat{P}_i}\sum_i \hat{P}_i C_i\\ & =\frac{H(\mathbf{X})}{-\sum_i \hat{P}_i \log_2 2^{-\mu C_i}}\sum_i \hat{P}_i C_i = \frac{H(\mathbf{X})}{\mu\sum_i \hat{P}_i C_i}\sum_i \hat{P}_i C_i\\ & = \frac{H(\mathbf{X})}{\mu}. \end{split} \end{equation} In~\cite[Theorem~4.4]{CsiszarKorner} and~\cite[Theorem~1]{HanUchida}, the minimum total cost of a prefix-free variable-length code was determined. The capacity of a noiseless finite-state costly channel, which is essentially the inverse of the minimum total cost, was considered in~\cite{McElieceRodemich},~\cite{Justesen},~\cite{Khandekar},~\cite{BochererSCC}, and~\cite{BochererThesis} from combinatorial and probabilistic perspectives. Equivalences between the combinatorial and probabilistic definitions of capacity were established, extending the original results of Shannon. However, these works did not address the code expansion factor and asymptotic symbol occurrence probability corresponding to the minimum total cost. In~\cite[Problem 1.8]{McElieceBook},~\cite{Marcus}, and~\cite[Sec. 5.2]{BochererThesis}, the maximum entropy of a probability mass function on an alphabet with symbol costs, subject to an average cost constraint, was discussed. However, these works did not explore the functional relationship between the total cost and the expansion factor of a code. Here, using the word-valued source perspective, we establish the relationship between the total cost of a rate-constrained prefix-free code and its expansion factor. This relationship plays an important role in the proof of the separation theorem (Theorem~\ref{separation:type1}) in Appendix~\ref{appen:type1}.
We also address the special case of zero lowest cost, i.e., $C_1 =0$, in which no global minimum can be reached. \hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{remark} \begin{remark} \label{remark:compression} If we only apply optimal lossless compression to the source $\mathbf{X}$, the code sequence has a uniform distribution. Therefore, we have $\mu = 0$ and $N=|\mathcal{Y}| > 1$. This implies that simply applying compression to the source data is not the best way to reduce the total cost. \hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{remark} \subsection{Optimal Data Shaping Code Design} Many previous works investigated type-\Romannum{2} shaping code design; for example, see \cite{Karp}, \cite{Varn}, and \cite{CsiszarKorner}. In this subsection, we consider the problem of designing an optimal type-\Romannum{1} shaping code by transforming this problem into a type-\Romannum{2} shaping problem. Combining Theorems~\ref{performance:opt} and~\ref{opt_expansion}, we can prove the following equivalence theorem. \begin{theorem} \label{thm:typeonetypetwo} A code that achieves the minimum total cost for cost vector $\mathcal{C}'$ also achieves minimum average cost for cost vector $\mathcal{C}$ and expansion factor $f$ if \begin{equation} C_i' = -\log_2 \hat{P}_i, \end{equation} where $\{\hat{P}_i\}$ are the probabilities minimizing average cost for the cost vector $\mathcal{C}$ and expansion factor $f$. \end{theorem} \begin{IEEEproof} First we consider the optimal type-\Romannum{2} shaping code $\phi: \mathcal{X}^q\rightarrow \mathcal{Y}^*$ with cost vector $\mathcal{C}'$. By Theorem~\ref{opt_expansion}, this code generates a codeword sequence with probability of occurrence $P_i' =2^{-\mu C_i'}$, where $\mu$ satisfies the equation \begin{equation} \label{muresult} \sum_i 2^{-\mu C_i'} = 1. \end{equation} Since $C_i' = -\log_2 \hat{P}_i$, it is easy to check that the solution of equation (\ref{muresult}) is $\mu = 1$.
This means that when the minimum total cost is achieved, the symbol occurrence probability of the codeword sequence is $P_i' = 2^{-C_i'} = \hat{P}_i$ and the expansion factor of this code is \begin{equation} f' = \frac{H(\mathbf{X})}{-\sum_i P_i'\log_2 P_i'} = f. \end{equation} Referring to Theorem~\ref{performance:opt}, we see that $\phi$ is also optimal with respect to minimizing average cost with cost vector $\mathcal{C}$ and expansion factor $f$. \end{IEEEproof} When designing a type-\Romannum{1} shaping code with expansion factor $f$ and cost vector $\mathcal{C}$, we can first calculate the desired distribution $\{\hat{P}_i\}$ and then transform this problem into a type-\Romannum{2} shaping code problem for the channel with symbol costs $\{C_i' = -\log_2 \hat{P}_i\}$. Thus we can apply known type-\Romannum{2} shaping code algorithms to solve this problem. \begin{remark} For an arbitrary i.i.d. source and a positive cost vector $\mathcal{C}$, generalized Shannon-Fano codes~\cite[Theorem 4.4]{CsiszarKorner} are tree-based variable-length codes, $\phi: \mathcal{X}^q\rightarrow \mathcal{Y}^*$, whose total cost is upper bounded by \begin{equation} \begin{aligned} T(\phi) &< \frac{H(\mathbf{X})}{\mu} + \frac{\max_i \{C_i\}}{q}\\ &\rightarrow \frac{H(\mathbf{X})}{\mu}\quad \text{as $q\rightarrow \infty$.} \end{aligned} \end{equation} This coding scheme involves dividing the $[0,1]$ interval based on $P(x_1^q)$ and calculating $2^{-\mu W}$, where $W$ is the cost of a codeword. This construction may become impractical when $q$ is large. \hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{remark} \begin{remark} \label{Varn} For a uniform i.i.d. source and a positive cost vector $\mathcal{C}$, Varn codes \cite{Varn} are tree-based, variable-length codes $\phi_K: \mathcal{X}^{\log_{|\mathcal{X}|} K} \rightarrow \mathcal{Y}^*$ that minimize total cost for a specified codebook size $K$. In \cite{Savari}, bounds were established on the average \textit{codeword} cost for the Varn code with codebook size $K$, denoted $C(\phi_K)$.
Specifically, \begin{equation} \frac{\log_2 K}{\mu} \leq C(\phi_K) \leq \frac{\log_2 K}{\mu} + \max_i \{C_i\}. \end{equation} Dividing by $\log_{|\mathcal{X}|}K$, we see that the total cost of the Varn code with codebook size $K$ is bounded by \begin{equation} \frac{\log_2 |\mathcal{X}|}{\mu} \leq T(\phi_K) \leq \frac{\log_2 |\mathcal{X}|}{\mu} + \frac{\max_i \{C_i\}}{\log_{|\mathcal{X}|} K}. \end{equation} Therefore \begin{equation} \lim_{K\rightarrow \infty}T(\phi_K) = \frac{\log_2 |\mathcal{X}|}{\mu} \end{equation} which implies that Varn codes are asymptotically optimal type-\Romannum{2} shaping codes (see Remark~\ref{rmk::comparetotalcost}). \hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{remark} We now present a separation theorem for type-\Romannum{2} shaping codes. It states that minimum total cost can be achieved by a concatenation of optimal lossless compression with an optimal type-\Romannum{2} shaping code for a uniform i.i.d. source. The proof uses a construction based on typical sequences. \begin{theorem} \label{separation} Given the source $\mathbf{X}$ and cost vector $\mathcal{C}$, the minimum total cost can be achieved by a concatenation of an optimal lossless compression code with an optimal type-\Romannum{2} shaping code for a uniform i.i.d. source. \end{theorem} \begin{IEEEproof} See Appendix~\ref{appen:3}. \end{IEEEproof} An example of an optimal type-\Romannum{2} shaping scheme that illustrates Theorem~\ref{separation} was described by Iwata in~\cite{Iwata}. It uses a concatenation of an LZ78 code and a Varn code as outer and inner codes, respectively. There is also a separation theorem for type-\Romannum{1} shaping codes, stating that minimum average cost for a given expansion factor can be achieved by a concatenation of optimal lossless compression with an optimal type-\Romannum{1} shaping code for a uniform i.i.d. source and suitable expansion factor.
The proof relies on the type-\Romannum{2} separation theorem and the equivalence between type-\Romannum{2} and type-\Romannum{1} shaping codes established in Theorem~\ref{thm:typeonetypetwo}. It requires an analysis of the behavior of the total cost function in the vicinity of the expansion factor that minimizes total cost. \begin{theorem} \label{separation:type1} Given the source $\mathbf{X}$, cost vector $\mathcal{C}$, and expansion factor $f$, the minimum average cost can be achieved by a concatenation of an optimal lossless compression code with a binary optimal type-\Romannum{1} shaping code for a uniform i.i.d. source and expansion factor \begin{equation} f' = \frac{f}{H(\mathbf{X})}. \end{equation} \end{theorem} \begin{IEEEproof} See Appendix~\ref{appen:type1}. \end{IEEEproof} \section{Distribution Matching Code Design} \label{sec:DM} Given a target distribution $\{P_i\}$, distribution matching (DM) considers the problem of mapping an i.i.d. sequence of source symbols to an output sequence of symbols that are approximately independent and distributed according to $\{P_i\}$. An optimal DM code must satisfy two conditions: the codeword sequence has symbol occurrence probabilities $\hat{P}_i = P_i$, and the output sequence looks like an i.i.d. sequence. We measure the latter property using the asymptotic normalized KL-divergence defined in Lemma~\ref{upperbound_marginaldistribution}. It has been shown in Theorems~\ref{performance:opt} and \ref{opt_expansion} that an optimal shaping code will generate an output sequence such that $\lim_{l\rightarrow \infty} \frac{1}{l}D(Y_1^l || \hat{Y}_1^l) = 0$. Thus the output sequence approximates an i.i.d. sequence with symbol occurrence probability distribution $\{\hat{P_i}\}$. This implies that we can solve the distribution matching problem by designing a corresponding shaping code. In this section, we consider the problem of designing optimal DM codes. We first formulate the problem of \textit{generating an i.i.d.
sequence} and then show the connection between DM codes and shaping codes. We then propose a \textit{generalized expansion factor} to measure the performance of a DM code. A comparison of DM code performance measures is also presented. \subsection{Problem Formulation} We use the asymptotic normalized Kullback-Leibler divergence~\cite{Soriaga} to formally define an optimal DM code $\phi$ for distribution $\{P_i\}$. \begin{definition} A variable-length mapping $\phi: \mathcal{X}^q\rightarrow \mathcal{Y}^*$ is an optimal DM code for distribution $\{P_i\}$ if \begin{equation} \lim_{l\rightarrow \infty} \frac{1}{l}D(Y_1^l || \tilde{Y}_1^l) = 0, \end{equation} where $\mathbf{\tilde{Y}}$ is an i.i.d. process with distribution $\{P_i\}$.\hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{definition} By combining Theorem~\ref{Hiroyoshitheorem} and Lemma~\ref{upperbound_marginaldistribution}, we can prove the following theorem. \begin{theorem} \label{thm:expanofdistribuion} The expansion factor of a mapping satisfies the lower bound \begin{equation} f = \frac{H(\mathbf{X})}{H(\mathbf{Y})} \geq \frac{H(\mathbf{X})}{H(\hat{Y})} \end{equation} with equality if and only if $\lim_{l\rightarrow \infty} \frac{1}{l}D(Y_1^l || \hat{Y}_1^l) = H(\hat{Y}) - H(\mathbf{Y}) = 0$. When $f = \frac{H(\mathbf{X})}{H(\hat{Y})}$, this code is an optimal DM code for the distribution $\{\hat{P}_i\}$. \hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{theorem} \begin{remark} \label{randomiid} Assuming the mapping is an optimal compression code, the compression ratio $g$ is \begin{equation} g = \frac{H(\mathbf{X})}{\log_2 |\mathcal{Y}|}. \end{equation} By Theorem~\ref{Hiroyoshitheorem} and Lemma~\ref{upperbound_marginaldistribution}, we have \begin{equation} H(\hat{Y}) \geq H(\mathbf{Y}) = \frac{H(\mathbf{X})}{g} = \log_2 |\mathcal{Y}|.
\end{equation} Since $H(\hat{Y}) \leq \log_2 |\mathcal{Y}|$, we know that $H(\hat{Y}) = H(\mathbf{Y}) = \log_2 |\mathcal{Y}|$ and \begin{equation} \lim_{l\rightarrow\infty} \frac{1}{l}D(Y_1^l||\hat{Y}_1^l) = 0. \end{equation} This implies the codeword sequence looks i.i.d. and has probability of occurrence \begin{equation} \hat{P_i} = \frac{1}{|\mathcal{Y}|}\quad \text{for all $i$}. \end{equation} This proves the well-known fact that the output of an optimal compression approximates a uniform i.i.d. sequence \cite{Vis1998},\cite{HanBook},\cite{HanUchida}.\hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{remark} Let $\mathbf{\tilde{Y}}$ be the i.i.d. process with distribution $\{P_i\}$. As in the derivation of (\ref{I_divergence3}) in Lemma~\ref{upperbound_marginaldistribution}, we find \begin{equation} \label{equ::kltypeone} \begin{aligned} &\lim_{l\rightarrow \infty} \frac{1}{l} D(Y_1^l || \tilde{Y}_1^l) \\ &=-\lim_{l\rightarrow \infty} \frac{1}{l}H(Y_1^l) - \sum_i \log_2 P_i\lim_{l\rightarrow \infty}\sum_{y_1^l\in \mathcal{Y}^l} \frac{Q(y_1^l) {N_i(y_1^l)}}{l}\\ &= -H(\mathbf{Y}) - \sum_i \hat{P_i} \log_2 P_i = -\sum_{i} \hat{P_i} \log_2 P_i - \frac{H(\mathbf{X})}{f}\\ & = \frac{-f\sum_{i} \hat{P_i} \log_2 P_i - H(\mathbf{X})}{f}. \end{aligned} \end{equation} From Theorem~\ref{opt_expansion}, we know that for a channel with cost $\{C_i = -\log_2 P_i\}$, the total cost $-f\sum_{i} \hat{P_i} \log_2 P_i$ is lower bounded by $H(\mathbf{X})$. The shaping code that achieves this lower bound has the following two properties: \begin{itemize} \item The probability of occurrence of symbol $\beta_i$ satisfies $\hat{P}_i = P_i$ for all $\beta_i$, \item The asymptotic normalized KL-divergence between $\mathbf{Y}$ and $\hat{Y}$ satisfies \begin{equation} \lim_{l\rightarrow \infty}\frac{1}{l} D(Y_1^l|| \hat{Y}_1^l) = 0. \end{equation} \end{itemize} This implies that this code generates a sequence that approximates an i.i.d. sequence with distribution $\{P_i\}$. 
This analysis also implies that the expansion factor of an optimal DM code is \begin{equation} f_\text{opt} = \frac{H(\mathbf{X})}{-\sum_i \hat{P_i}\log_2 \hat{P_i}}= \frac{H(\mathbf{X})}{-\sum_i P_i\log_2 P_i}. \end{equation} We summarize in the following theorem the relationship between optimal shaping codes and optimal DM codes, extending the result in~\cite{HanUchida} by explicitly showing the optimal expansion factor. \begin{theorem} \label{thm:shapinganddm} The optimal type-\Romannum{2} shaping code with cost vector $\mathcal{C}$, or the equivalent type-\Romannum{1} shaping code from Theorem~\ref{thm:typeonetypetwo}, is an optimal DM code for distribution $\{P_i\}$ if \begin{equation} \label{equ::shell} C_i = -\log_2 P_i \end{equation} for every symbol $\beta_i$, in the sense that \begin{equation} \lim_{l\rightarrow \infty} \frac{1}{l}D(Y_1^l || \tilde{Y}_1^l) = 0. \end{equation} The expansion factor of this optimal DM code is \begin{equation} f_\text{opt} = \frac{H(\mathbf{X})}{-\sum_i P_i\log_2 P_i}. \end{equation}\hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{theorem} \begin{remark} Shell mapping was used in~\cite{SchulteSteiner} to design fixed-length DM codes with uniformly distributed input bits. The shell mapper that minimizes informational divergence (introduced later in Section~\ref{sec::comparegef}) uses the ``self-information'' weight function $C_i = -\log_2 P_i$ and the optimal expansion factor is determined by a search. Theorem~\ref{thm:shapinganddm} considers a more general variable-length DM code with arbitrary i.i.d. source and characterizes the optimal expansion factor. Codes minimizing informational divergence are discussed further in Section~\ref{sec::comparegef}. \end{remark} \section{Generalized Expansion Factor} \label{sec:gef} The relationship between optimal shaping codes and optimal DM codes was established above. 
The total cost of the shaping code suggests an alternative performance measure for a DM code which will be useful when analyzing the optimality of a shaping-based DM code construction and in proving a separation theorem for DM codes. Specifically, we define the \textit{generalized expansion factor} (GEF) of a prefix-free variable-length code as follows. \begin{definition} Given a prefix-free variable-length code $\phi: \mathcal{X}^{q}\rightarrow \mathcal{Y}^*$ and a set of positive real numbers $\{P_1,P_2,\ldots,P_v\}$ such that $\sum P_i =1$, the {\bf generalized expansion factor} of this code is defined as \begin{equation} \label{func::gefdef} F(\phi,P_1,\ldots,P_v) =- f\frac{\sum_i \hat{P_i}\log_2 P_i}{\log_2 |\mathcal{Y}|} \end{equation} where $f$ is the code expansion factor and $\{\hat{P_i}\}$ is the asymptotic symbol occurrence probability distribution.\hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{definition} For simplicity, we sometimes use $F$ to represent $F(\phi,P_1,\ldots,P_v)$. The following theorem shows that $F$ can be used to evaluate an optimal DM code. \vspace{-2ex} \begin{theorem} \label{thm::geflowerbound} Given a prefix-free variable-length code $\phi: \mathcal{X}^{q}\rightarrow \mathcal{Y}^*$ and a set of positive real numbers $\{P_1,P_2,\ldots,P_v\}$ such that $\sum P_i =1$, the generalized expansion factor of this mapping is lower bounded by \vspace{-2ex} \begin{equation} F(\phi,P_1,\ldots,P_v) \geq \frac{H(\mathbf{X})}{\log_2 |\mathcal{Y}|}. \vspace{-1ex} \end{equation} If $F = \frac{H(\mathbf{X})}{\log_2 |\mathcal{Y}|}$, this mapping is an optimal DM code for the target distribution $\{P_i\}$, in the sense that \begin{equation} \lim_{l\rightarrow \infty} \frac{1}{l}D(Y_1^l || \tilde{Y}_1^l) = 0. \end{equation} \qe \end{theorem} \begin{IEEEproof} Assume symbol $\beta_i$ in the codeword sequence has cost $C_i = -\log_2 P_i$. 
The total cost of this mapping is \begin{equation} \label{func::totalcostvsgef} T(\phi) = f \sum_i \hat{P_i} C_i = -f\sum_i \hat{P_i}\log_2 P_i. \end{equation} Comparing equations~(\ref{func::gefdef}) and~(\ref{func::totalcostvsgef}) we have \begin{equation} \label{equ::totalgefequivalence} F(\phi,P_1,\ldots,P_v) = \frac{T(\phi)}{\log_{2}|\mathcal{Y}|}. \end{equation} This indicates that the GEF of a DM code is equivalent to its total cost when applying it to a costly channel with cost $C_i = -\log_2 P_i.$ From Theorem~\ref{opt_expansion}, we know the total cost of a prefix-free mapping satisfies the lower bound \begin{equation} \label{equ::NEF} T(\phi) \geq \frac{H(\mathbf{X})}{\mu} \end{equation} where $\mu$ is a constant such that $\sum 2^{-\mu C_i} = 1$. Since $C_i = -\log_2 P_i$, it is easy to check that $\mu = 1$ and \begin{equation} \label{equ::totalcostgefequivalence} F(\phi,P_1,\ldots,P_v) = \frac{T(\phi)}{\log_{2}|\mathcal{Y}|} \geq \frac{H(\mathbf{X})}{\log_2 |\mathcal{Y}|}. \end{equation} When the minimum GEF is achieved, this code is also an optimal type-\Romannum{2} shaping code with $C_i = -\log_2 P_i$. Theorem~\ref{thm:shapinganddm} then implies that this code is an optimal DM code for the target distribution $\{P_i\}$, in the sense that \begin{equation} \lim_{l\rightarrow \infty} \frac{1}{l}D(Y_1^l || \tilde{Y}_1^l) = 0. \end{equation} \end{IEEEproof} \begin{remark} \label{remark:varnasdm} As shown in Remark~\ref{Varn}, for a uniform i.i.d. source and a cost vector $\mathcal{C}$, a Varn code $\phi_K: \mathcal{X}^{\log_{|\mathcal{X}|} K} \rightarrow \mathcal{Y}^*$ is an asymptotically optimal type-\Romannum{2} shaping code. If the costs are given by $C_i= -\log_2 P_i$, where $\sum_i P_i = 1$, the total cost is bounded by \begin{equation} \log_2 |\mathcal{X}| \leq T(\phi_K) \leq \log_2 |\mathcal{X}| + \frac{\max_i \{C_i\}}{\log_{|\mathcal{X}|} K}. 
\end{equation} Equation~(\ref{equ::totalgefequivalence}) implies that for the target distribution $\{P_i\}$, Varn codes minimize GEF for a specified codebook size $K$. Thus, a Varn code can be regarded as a DM code with generalized expansion factor bounded by \begin{equation} \frac{\log_2 |\mathcal{X}|}{\log_2 |\mathcal{Y}|}\leq F\leq \frac{\log_2 |\mathcal{X}|}{\log_2 |\mathcal{Y}|}(1 + \frac{\max_i\{-\log_2 P_i\}}{\log_{2} K}). \end{equation} Therefore, we have \begin{equation} \lim_{K\rightarrow \infty}F(\phi_K,P_1,\ldots,P_v) = \frac{\log_2 |\mathcal{X}|}{\log_2 |\mathcal{Y}|} \end{equation} which implies that Varn codes are asymptotically optimal DM codes. Fig.~\ref{Tunstall23_distribution} and Fig.~\ref{Tunstall23_normalized_rate} show the probability of occurrence and generalized expansion factor of binary Varn codes (i.e., with $\mathcal{X} = \mathcal{Y} = \{0,1\}$) for a target distribution $P_0=2/3$, $P_1 = 1/3$. As the codebook size $K$ increases, we see that the probability of occurrence $\hat{P_0}$ approaches the target distribution value $P_0 =2/3$ and the generalized expansion factor approaches the theoretical lower bound $1$.\hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \begin{figure} \centering \includegraphics[width=1\columnwidth]{figures/Unknown} \caption{Probability of occurrence $\hat{P_0}$ of a Varn code for the target distribution $\{2/3, 1/3\}$.} \label{Tunstall23_distribution} \end{figure} \begin{figure} \centering \includegraphics[width=1\columnwidth]{figures/Unknown-2} \caption{Generalized expansion factor of a Varn code for the target distribution $\{2/3, 1/3\}$.} \label{Tunstall23_normalized_rate} \end{figure} \end{remark} The separation theorem for shaping codes in Theorem~\ref{separation} now extends naturally to DM codes. \begin{theorem} \label{separation:DM} An optimal DM code can be constructed by a concatenation of optimal lossless compression with an optimal DM code for a uniform i.i.d. 
source, in the sense that the minimum generalized expansion factor can be achieved by such a concatenation. \hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{theorem} \begin{remark} When $P_1 = P_2 = \cdots = P_v = \frac{1}{|\mathcal{Y}|}$, the generalized expansion factor reduces to \begin{equation} F = f \frac{\sum_i \hat{P_i} \log_2 |\mathcal{Y}|}{\log_2 |\mathcal{Y}|} = f. \end{equation} This provides the motivation for designating $F$ by this name.\hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{remark} \begin{remark} \label{rmk::han} We use an example to illustrate the difference between the generalized expansion factor and the normalized conditional divergence introduced in~\cite{HanUchida} and~\cite{HanBook} when the encoder has finite length. Given a ternary source with alphabet $\mathcal{X}=\{\alpha_1,\alpha_2,\alpha_3\}$ and probability distribution $\{\frac{1}{2}, \frac{1}{4},\frac{1}{4}\}$, consider two codes defined by the mappings \begin{equation} \begin{aligned} &\Phi_1: \{\alpha_1 \rightarrow 0, \alpha_2 \rightarrow 10, \alpha_3 \rightarrow 11 \},\\ &\Phi_2: \{\alpha_1 \rightarrow 00, \alpha_2 \rightarrow 10, \alpha_3 \rightarrow 11 \}. \end{aligned} \end{equation} Their generalized expansion factors for target distribution $\{1/2,1/2\}$ are \begin{equation} F_1 = \frac{3}{2} < F_2 = 2. \end{equation} This suggests that $\Phi_1$ is a better approximation of an optimal DM code for target distribution $\{\frac{1}{2}, \frac{1}{2}\}$ (in fact, $\Phi_1$ is an optimal DM code). The normalized conditional divergences are \begin{equation} \begin{aligned} D&(\Phi_1(X)||V|I)=\\ &\frac{1}{2}(\log_2 \frac{1}{1/2}) +\frac{1}{2}(\frac{1}{2}\log_2 \frac{1/2}{1/4} + \frac{1}{2}\log_2 \frac{1/2}{1/4}) = 1 \end{aligned} \end{equation} \begin{equation} \begin{aligned} D&(\Phi_2(X)||V|I) \\&= \frac{1}{2}\log_2 \frac{1/2}{1/4} + \frac{1}{4}\log_2 \frac{1/4}{1/4} + \frac{1}{4}\log_2 \frac{1/4}{1/4} = \frac{1}{2}. 
\end{aligned} \end{equation} We find that $D(\Phi_1(X)||V|I) > D(\Phi_2(X)||V|I)$, which suggests the opposite conclusion that $\Phi_2$ would be a better approximation of the optimal DM code.\hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{remark} \section{Comparison of DM Performance Measures} \label{sec::compare} In this section, we use a shaping code perspective to study DM codes whose performance is measured using informational divergence and normalized informational divergence. \subsection{Generalized Expansion Factor and Informational Divergence} \label{sec::comparegef} In this subsection we study the relationship between the generalized expansion factor and the informational divergence introduced in \cite{AmjadBocherer}, which is also used as a performance measure for DM codes. Consider a variable-length code $\phi: \mathcal{X}^{\log_{|\mathcal{X}|} K} \rightarrow \mathcal{Y}^*$ with codebook size $K$. We use $\mathcal{L}$ to denote the set of all codewords generated by this mapping. The leaf probability, or the probability of a codeword $y_1^l$, is defined as \begin{equation} P^{\mathcal{L}}(y_1^l) = P(y_1)P(y_2)\ldots P(y_l) = \prod P_i ^{N_i(y_1^l)}. \end{equation} This is also the probability of sequence $y_1^l$ generated by an i.i.d. source with distribution $\{P_i\}$. The true probability of codeword $y_1^l$ is the probability of the corresponding source sequence $\phi^{-1}(y_1^l)$. The \textit{informational divergence} ({I-divergence}) between these two distributions is defined as \begin{equation} I = \sum_{y_1^l \in \mathcal{L}} P(\phi^{-1}(y_1^l)) \log_2 \frac{P(\phi^{-1}(y_1^l))}{P^{\mathcal{L}}(y_1^l)}. \end{equation} Now we use the same code for type-\Romannum{2} shaping. We set the cost of each symbol to be $C_i = -\log_2 P_i$. 
The cost of codeword $y_1^l$ is \begin{equation} \begin{aligned} W(y_1^l)& = \sum C_i N_i(y_1^l) = -\sum \log_2 P_i ^{N_i(y_1^l)}\\ & = -\log_2 \prod P_i^{N_i(y_1^l)} = -\log_2 P^{\mathcal{L}}(y_1^l) \end{aligned} \end{equation} and the total cost of this shaping code, or equivalently the GEF, is \begin{equation} \begin{aligned} F&(\phi,P_1,\ldots,P_v) = \frac{T(\phi)}{\log_2 |\mathcal{Y}|}\\& = \frac{1}{\log_{|\mathcal{X}|} K \log_2 |\mathcal{Y}|}\sum_{y_1^l \in \mathcal{L}} P(\phi^{-1}(y_1^l))W(y_1^l)\\ &= -\frac{\log_2 |\mathcal{X}|}{\log_{2} K\log_2 |\mathcal{Y}|}\sum_{y_1^l \in \mathcal{L}} P(\phi^{-1}(y_1^l)) \log_2 P^{\mathcal{L}}(y_1^l). \end{aligned} \end{equation} The {I-divergence} of this code can then be expressed in terms of its GEF, namely \begin{equation} \label{equ:IDivergencetocost} \begin{aligned} I& = \sum_{y_1^l \in \mathcal{L}} P(\phi^{-1}(y_1^l)) \log_2 \frac{P(\phi^{-1}(y_1^l))}{P^{\mathcal{L}}(y_1^l)}\\ & =\sum_{y_1^l \in \mathcal{L}} P(\phi^{-1}(y_1^l)) \log_2 P(\phi^{-1}(y_1^l))\\&- \sum_{y_1^l \in \mathcal{L}}P(\phi^{-1}(y_1^l)) \log_2 P^{\mathcal{L}}(y_1^l)\\ & = F\frac{\log_{2}K \log_2 |\mathcal{Y}|}{\log_2 |\mathcal{X}|} - H(\mathbf{X}^{\log_{|\mathcal{X}|} K}) \\ & =( F - \frac{H(\mathbf{X})}{\log_2 |\mathcal{Y}|})\frac{\log_{2}K \log_2 |\mathcal{Y}|}{\log_2 |\mathcal{X}|}. \end{aligned} \end{equation} Since $\log_2 K$ is a constant, minimizing $I$ is equivalent to minimizing $F$. This equation shows the relationship between {I-divergence} and GEF, and also highlights the duality between costly channel coding and DM coding. As shown in Remark~\ref{remark:varnasdm}, Varn codes minimize GEF for a uniform i.i.d. source. Therefore we can conclude the following optimality theorem for Varn codes. \begin{theorem} \label{thm::varnminimizeidivergence} Let $\{P_i\}$ be a target distribution. A code $\phi: \mathcal{X}^{\log_{|\mathcal{X}|} K}\rightarrow \mathcal{Y}^*$ for a uniform i.i.d. 
source that minimizes {I-divergence} is given by a Varn code designed for costs $C_i = -\log_2 P_i$.\hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{theorem} \begin{remark} In~\cite{Savari}, Savari showed that Varn codes and reverse Tunstall codes are identical when finding exhaustive prefix-free codes (i.e., when $(K-1)/(|\mathcal{Y}| - 1)$ is an integer). Specifically, a Tunstall code designed to compress distribution $\{P_i\}$ and a Varn code designed for costly channel $\{C_i = -\log_2 P_i\}$ generate identical code trees. Therefore a reverse Tunstall code minimizes the {I-divergence} when the target distribution is binary (i.e., when $|\mathcal{Y}| = 2$). This was also proved in~\cite[Proposition 1]{AmjadBocherer} using a different method. However, for non-exhaustive codes this equivalence does not exist, and it remains unknown whether a reverse Tunstall code minimizes {I-divergence} when the target distribution is non-binary (i.e., when $|\mathcal{Y}| > 2$). Therefore Theorem~\ref{thm::varnminimizeidivergence} can be viewed as a generalization of~\cite[Proposition 1]{AmjadBocherer}. \end{remark} \subsection{Type-I Shaping Problem and Normalized I-Divergence} \label{subsec::tonenid} Another measure for DM codes used in~\cite{AmjadBocherer} is normalized {I-divergence}. In this subsection, we study its properties using the perspective of the type-\Romannum{1} shaping problem. Normalized {I-divergence} is defined as \begin{equation} \mathscr{I}= \frac{I}{E(L)}. \end{equation} Using (\ref{equ:IDivergencetocost}), we rewrite this as \begin{equation} \label{equ::inorm} \begin{aligned} \mathscr{I} &= ( F - \frac{H(\mathbf{X})}{\log_2 |\mathcal{Y}|})\frac{\log_{2}K \log_2 |\mathcal{Y}|}{E(L)\log_2 |\mathcal{X}|} \\ & = ( - f\frac{\sum_i \hat{P_i}\log_2 P_i}{\log_2 |\mathcal{Y}|} - \frac{H(\mathbf{X})}{\log_2 |\mathcal{Y}|})\frac{\log_2 |\mathcal{Y}|}{f}\\ &= \sum_i \hat{P_i}C_i - \frac{H(\mathbf{X})}{f} \end{aligned} \end{equation} where $C_i = -\log_2 P_i$. 
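As a numerical check of the identity $\mathscr{I} = \sum_i \hat{P_i}C_i - H(\mathbf{X})/f$, the following sketch evaluates both sides for the two ternary-source codes $\Phi_1$ and $\Phi_2$ of Remark~\ref{rmk::han} with target distribution $\{1/2, 1/2\}$. The single-letter evaluation and variable names are our own; treat this as an illustrative sketch rather than the paper's computation:

```python
from math import log2

# Ternary source {a1, a2, a3} with probabilities {1/2, 1/4, 1/4};
# binary target distribution {1/2, 1/2}, so C_0 = C_1 = 1.
source = {'a1': 0.5, 'a2': 0.25, 'a3': 0.25}
phi1 = {'a1': '0', 'a2': '10', 'a3': '11'}
phi2 = {'a1': '00', 'a2': '10', 'a3': '11'}
P = [0.5, 0.5]
C = [-log2(p) for p in P]
H_X = -sum(p * log2(p) for p in source.values())   # = 1.5 bits

def evaluate(code):
    f = sum(source[x] * len(code[x]) for x in code)            # expansion factor
    counts = [sum(source[x] * code[x].count(s) for x in code) for s in '01']
    P_hat = [c / f for c in counts]
    gef = f * sum(ph * c for ph, c in zip(P_hat, C)) / log2(2)  # GEF, |Y| = 2
    # I-divergence between codeword probabilities and i.i.d. leaf probabilities
    leaf = lambda w: P[0] ** w.count('0') * P[1] ** w.count('1')
    I = sum(source[x] * log2(source[x] / leaf(code[x])) for x in code)
    rhs = sum(ph * c for ph, c in zip(P_hat, C)) - H_X / f
    return gef, I / f, rhs   # GEF, normalized I-divergence, identity RHS

F1, nid1, rhs1 = evaluate(phi1)
F2, nid2, rhs2 = evaluate(phi2)
print(F1, F2)       # 1.5 and 2.0, i.e., F_1 = 3/2 < F_2 = 2
print(nid2, rhs2)   # both sides agree: 0.25 for Phi_2 (and 0 for Phi_1)
```

Both sides of the identity coincide for each code, and the optimal code $\Phi_1$ gives zero normalized divergence.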
From equations~(\ref{equ::kltypeone}) and~(\ref{equ::inorm}) we see that asymptotic normalized KL-divergence and normalized {I-divergence} are identical for i.i.d. distribution matching. We divide the problem of finding the minimum $\mathscr{I}$ into two parts. First, we fix the expansion factor $f$ and find the minimum achievable $\mathscr{I}$, denoted by $\mathscr{I}_\text{min}(f)$. Then we find the optimal $f$ that minimizes $\mathscr{I}_\text{min}(f)$. The result is found by noting the similarity to the type-\Romannum{1} shaping problem and invoking Theorem~\ref{performance:opt}. \begin{theorem} \label{thm::nidtypeone} Let $\phi$ be a prefix-free variable-length mapping with expansion factor $f$. Let $\{P_i\}$ be the target distribution and set $C_i = -\log_2 P_i$. The minimum normalized {I-divergence} $\mathscr{I}_\text{min}(f)$ with fixed $f$ is \begin{equation} \label{equ::iminfthm} \mathscr{I}_\text{min}(f) = \sum_i\hat{P_i}C_i - \frac{H(\mathbf{X})}{f}, \end{equation} where $\hat{P_i} = \frac{2^{-\mu C_i}}{\sum_j 2^{-\mu C_j}}$ and $H(\hat{Y}) = -\sum_{i} \hat{P_i}\log_2 \hat{P_i} = H(\mathbf{X}) / f$. \end{theorem} \begin{IEEEproof} We must solve the following optimization problem, which is closely related to the type-\Romannum{1} shaping problem. \begin{equation} \begin{aligned} & \underset{\hat{P}_i}{\text{minimize}} & & \sum_{i}\hat{P}_iC_i - \frac{H(\mathbf{X})}{f}\\ & \text{subject to} & &H(\hat{Y})\geq \frac{H(\mathbf{X})}{f}\\ & && \sum_{i}\hat{P}_i=1. \end{aligned} \end{equation} From Theorem~\ref{performance:opt}, we immediately have \begin{equation} \label{equ::iminandd} \mathscr{I}_\text{min}(f) = \sum_i \hat{P_i}C_i - \frac{H(\mathbf{X})}{f} = \sum_i \hat{P_i}C_i - H(\hat{Y}), \end{equation} where $\hat{P_i} = \frac{2^{-\mu C_i}}{\sum_j 2^{-\mu C_j}}$ and $H(\hat{Y}) = -\sum_{i} \hat{P_i}\log_2 \hat{P_i} = H(\mathbf{X}) / f$. 
\end{IEEEproof} The next proposition determines the derivative of $\mathscr{I}_\text{min}(f)$ and finds the optimal expansion factor, $f_\text{opt}$, that minimizes $\mathscr{I}_\text{min}(f)$. \begin{prop} \label{prop::propertyiminf} The first derivative of $\mathscr{I}_\text{min}(f)$ is \begin{equation} \label{equ:didmu1} \frac{\mathrm{d}\mathscr{I}_\text{min}}{\mathrm{d}f} = \frac{H(\mathbf{X})}{f^2} \frac{\mu - 1}{\mu}\quad \mu > 0. \end{equation} Let $f_\text{opt}= -H(\mathbf{X}) /{\sum_i P_i \log_2 P_i}$. Then $\mathscr{I}_\text{min}(f)$ is continuous, strictly monotone decreasing on $[\frac{H(\mathbf{X})}{\log_2 |\mathcal{Y}|}, f_\text{opt})$ (or, for $\mu\in [0,1)$) and continuous, strictly monotone increasing on $(f_\text{opt}, +\infty)$ (or, for $\mu\in (1,\infty)$). When $f = f_\text{opt}$, $\mathscr{I}_\text{min}(f_\text{opt}) =0$. \end{prop} \begin{IEEEproof} We have studied the behavior of minimum total cost with fixed $f$ in Appendices~\ref{appen:2} and~\ref{appen:type1}. Here we use the same technique to study $\mathscr{I}_\text{min}(f)$. Note that $\mathscr{I}_\text{min}(f)$ is a function of $\mu$. The derivative $\mathrm{d}f/\mathrm{d}\mu$ is already given in equation (\ref{equ:dfdmu}), and it is easy to check that \begin{equation} \label{equ:didmu} \frac{\mathrm{d}\mathscr{I}_\text{min}}{\mathrm{d}\mu} = \frac{ (\mu - 1)\ln 2 \sum_{i< j}2^{-\mu (C_i+C_j)} (C_i - C_j )^2}{N^2}. \end{equation} Applying the chain rule along with (\ref{equ:didmu}) and (\ref{equ:dfdmu}), we have \begin{equation} \label{equ:didf} \frac{\mathrm{d}\mathscr{I}_\text{min}}{\mathrm{d}f} = \frac{H(\mathbf{X})}{f^2} \frac{\mu - 1}{\mu}. \end{equation} Let $f_\text{opt}=\frac{H(\mathbf{X})}{-\sum_i P_i \log_2 P_i}$ (or, equivalently, let $\mu = 1$). 
Equations~(\ref{equ:didmu}) and~(\ref{equ:didf}) imply that $\mathscr{I}_\text{min}(f)$ is continuous, strictly monotone decreasing on $[\frac{H(\mathbf{X})}{\log_2 |\mathcal{Y}|}, f_\text{opt})$ (or, for $\mu\in [0,1)$) and strictly monotone increasing on $(f_\text{opt}, +\infty)$ (or, for $\mu\in (1,\infty)$). The minimum of $\mathscr{I}_\text{min}(f)$ is achieved when $f = f_\text{opt}$. We have \begin{equation} \begin{aligned} \mathscr{I}_\text{min}(f_\text{opt}) &= \sum_i \hat{P_i}C_i - \frac{H(\mathbf{X})}{f_{\text{opt}}} \\ & = \sum_i \frac{2^{-C_i}}{\sum_j 2^{-C_j}} C_i+ \sum_i P_i \log_2 P_i \\ & =-\sum_i \frac{P_i}{\sum_j P_j} \log_2 P_i+ \sum_i P_i \log_2 P_i = 0. \end{aligned} \end{equation} This completes the proof. \end{IEEEproof} \begin{remark} In~\cite[Sec 5.1]{BochererThesis}, the author studied the minimum KL-divergence between a pmf $\{\hat{P_i}\}$ and the target distribution $\{P_i\}$, where each $\hat{P_i}$ is associated with a cost $w_i$ and the average cost of the pmf is upper bounded by $C$. The analysis is similar to the analysis in Proposition~\ref{prop::propertyiminf} if we specialize to the case where $w_i = -\log_2 P_i = C_i$. The KL-divergence is \begin{equation} \begin{aligned} \label{equ::inormpmf} D &= D(\hat{P_i}|| P_i) = \sum_i \hat{P_i} \log_2 \frac{\hat{P_i}}{P_i} \\ & = - \sum_i \hat{P_i} \log_2 P_i + \sum_i\hat{P_i} \log_2 \hat{P_i}\\ & = \sum_i \hat{P_i} C_i- H(\hat{P}). \end{aligned} \end{equation} By combining equation~(\ref{equ::inorm}) with Lemma~\ref{upperbound_marginaldistribution}, we have \begin{equation} \begin{aligned} \mathscr{I} &= \sum_i \hat{P_i}C_i - \frac{H(\mathbf{X})}{f} = \sum_i \hat{P_i}C_i - H(\mathbf{Y})\\ & \geq \sum_i \hat{P_i}C_i - H(\hat{P}) = D, \end{aligned} \end{equation} with equality if and only if the output process generated by $\phi$ approximates an i.i.d. process (Remark~\ref{remark::iid}). 
The minimum KL-divergence with average cost upper bounded by a specified average cost $C$, denoted by $D(C)$, was also studied in~\cite[Sec 5.1]{BochererThesis}. The pmf that achieves $D(C)$ is \begin{equation} \label{equ::de} \hat{P_i} = \frac{2^{-\mu C_i}}{\sum_j 2^{-\mu C_j}},\quad \sum_i \hat{P_i}C_i = C, \end{equation} when $C \leq \sum P_i C_i$, or $\mu \geq 1$. When $C > \sum P_i C_i$, by setting $\mu = 1$, we have $\hat{P_i} = P_i$, $\sum_i \hat{P_i}C_i < C$, and \begin{equation} D(C) = \sum_i \hat{P_i} C_i- H(\hat{P}) = 0. \end{equation} By comparing equations~(\ref{equ::iminfthm}) and~(\ref{equ::de}), we conclude that \begin{equation} \mathscr{I}_\text{min}(f) = D(C), \end{equation} where \begin{equation} C = \sum_i \hat{P_i}C_i\quad f = \frac{H(\mathbf{X})}{-\sum_i \hat{P_i}\log_2 \hat{P_i}}\quad\hat{P_i} = \frac{2^{-\mu C_i}}{\sum_j 2^{-\mu C_j}}, \end{equation} when $\mu \geq 1$, or equivalently when $C \leq \sum P_i C_i$ and $f \geq f_\text{opt}$. The derivative $\mathrm{d}D(C)/\mathrm{d}C$ was found in~\cite[Sec. 5.1]{BochererThesis}. Using the chain rule, we know that \begin{equation} \frac{\mathrm{d}D(C)}{\mathrm{d}C} =\begin{cases} \frac{\mathrm{d}\mathscr{I}_\text{min}}{\mathrm{d}f}\frac{\mathrm{d}f}{\mathrm{d}C}\quad &\text{when $C \leq \sum P_i C_i$ ($\mu \geq 1$),}\\ 0 &\text{when $C > \sum P_i C_i$.} \end{cases} \end{equation} From (\ref{equ::averagecostandh}) we have \begin{equation} \frac{\mathrm{d}h}{\mathrm{d}C} =\frac{\mathrm{d}\frac{H(\mathbf{X})}{f}}{\mathrm{d}C} = \mu \quad\Rightarrow\quad \frac{\mathrm{d}f}{\mathrm{d}C} = -\frac{\mu f^2}{H(\mathbf{X})}. \end{equation} By combining this with (\ref{equ:didmu1}), we have \begin{equation} \frac{\mathrm{d}D(C)}{\mathrm{d}C} =\begin{cases} 1-\mu \quad &\text{when $C \leq \sum P_i C_i$ ($\mu \geq 1$),}\\ 0 &\text{when $C > \sum P_i C_i$.} \end{cases} \end{equation} Therefore Proposition~\ref{prop::propertyiminf} allows us to recover the derivative of $D(C)$.\hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{remark} \begin{remark} In~\cite[Sec. V]{BochererAmjad}, a bound on the rate of a prefix-free variable-length DM code in the vicinity of $\mathscr{I}=0$ was given. Proposition~\ref{prop::propertyiminf} gives an explicit relationship between $\mathscr{I}$ and the code rate over a wider range of rates.\hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{remark} \begin{remark} In~\cite[Section 6.2.2]{Soriaga}, Soriaga considered the case where system requirements dictate that the expansion factor of the DM code encoder cannot exceed $f_0$, where $f_0 < f_\text{opt}$. In such a case, the code cannot be an optimal DM code for target distribution $\{P_i\}$. We may try to approximate an optimal DM code by designing a code with $f \leq f_0$ that minimizes the asymptotic normalized KL-divergence, $\lim_{l\rightarrow \infty} \frac{1}{l}D(Y_1^l || \tilde{Y}_1^l)$. We denote by $\mathscr{D}(f_0)$ the minimum possible value of this divergence. The relationship between $\mathscr{D}(f_0)$ and $f_0$ for a finite-order Markov target distribution was given in~\cite{Soriaga} and the result is also applicable to the i.i.d. case considered here. 
Since the asymptotic normalized KL-divergence for a fixed $f$ is lower bounded by $\mathscr{I}_\text{min}(f)$ (Theorem~\ref{thm::nidtypeone}) and $\mathscr{I}_\text{min}(f)$ is strictly monotone decreasing when $f \leq f_0 < f_\text{opt}$ (Proposition~\ref{prop::propertyiminf}), we have \begin{equation} \begin{aligned} \mathscr{D}(f_0) &=\mathscr{I}_\text{min}(f_0) = \sum_i \hat{P_i}C_i - \frac{H(\mathbf{X})}{f_0} \\ & = \sum_i \hat{P_i}\frac{(-\log_2 {\hat{P_i}}- \log_2 N)}{\mu}- \frac{H(\mathbf{X})}{f_0}\\ &= \frac{-\sum_i \hat{P_i}\log_2 {\hat{P_i}}}{\mu}-\sum_i \hat{P_i}\frac{\log_2 N}{\mu}-\frac{H(\mathbf{X})}{f_0}\\ & = \frac{H(\mathbf{X})}{\mu f_0} - \frac{\log_2 N}{\mu} - \frac{H(\mathbf{X})}{f_0}, \end{aligned} \end{equation} where $\mu$ and $N$ are constants such that $N = \sum_i 2^{-\mu C_i}$ and $\sum_i -\hat{P}_i \log_2 \hat{P}_i=H(\mathbf{X})/f_0$, with $\hat{P}_i = \frac{1}{N} 2^{-\mu C_i}$. Because the code that achieves this lower bound with expansion factor $f_0$ is an optimal type-\Romannum{1} shaping code, based on the equivalence theorem we can extend the result in~\cite[Section 6.2.2]{Soriaga} by concluding that this code is an optimal DM code for target distribution $\{\hat{P}_i\}$. \end{remark} \begin{remark} In Theorem~\ref{thm::nidtypeone}, we have shown that when $\mathscr{I} \rightarrow 0$, then $f\rightarrow f_\text{opt}$. This implies that the GEF of the code satisfies \begin{equation} F = \frac{f \mathscr{I} + H(\mathbf{X})}{\log_2|\mathcal{Y}| } \rightarrow \frac{H(\mathbf{X})}{\log_2 |\mathcal{Y}|}. \end{equation} Similarly, as shown in Appendix~\ref{appen:type1}, when $F\rightarrow H(\mathbf{X})/\log_2 |\mathcal{Y}|$ (or, equivalently, when total cost $T\rightarrow H(\mathbf{X})$), then $f\rightarrow f_\text{opt}$ and \begin{equation} \mathscr{I}= \frac{F - \frac{H(\mathbf{X})}{\log_2 |\mathcal{Y}|}}{f}\rightarrow 0. 
\end{equation} In view of the equivalence between asymptotic normalized KL-divergence and $\mathscr{I}$, these observations extend Theorem~\ref{thm::geflowerbound} by providing bounds on asymptotic normalized KL-divergence in the vicinity of $F = H(\mathbf{X})/ \log_2 |\mathcal{Y}|$.\hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} \end{remark} \section{Experimental Results} \label{sec:experiment} \subsection{Optimal Data Shaping Code for MLC Flash Memory} \label{sec:nand} We evaluated the performance of shaping codes on a multilevel-cell (MLC) NAND flash memory. In MLC flash, the cells are arranged in a rectangular array (also called a \textit{block}) and each row of cells is called a \textit{wordline}. The cells can be programmed to four different voltage levels, denoted $\{0,1,2,3\}$, so each cell can store two bits of information. It was shown in \cite{LiuSieGC16}, \cite{LiuNVMW2016} that MLC flash memory can be modeled as a costly channel with alphabet $\{0,1,2,3\}$, where the cost of the erase level 0 can be taken to be $C_0=0$. Using the methodology described in \cite{LiuSieGC16}, the cost vector for the memory was found empirically to be \begin{equation} \mathcal{C} = [0, 0.58, 0.87, 1.29]. \end{equation} From Theorem~\ref{opt_expansion}, we know that the total cost is a decreasing function of the expansion factor. To assess the performance of optimal shaping, and to permit a comparison to the direct-shaping code in \cite{LiuSieGC16}, \cite{LiuNVMW2016}, we applied a rate-1, type-\Romannum{1} shaping code to the ASCII representation of the English-language text of \textit{The Count of Monte Cristo}. The ``optimal'' shaping scheme was designed according to the principles suggested by the equivalence theorem and separation theorem. We first compressed the file using the LZ77 algorithm. The observed compression rate was $g = 1/2.740$. 
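The next design computation can be sketched numerically. By Theorem~\ref{performance:opt}, the minimum-average-cost occurrence distribution has the Boltzmann form $\hat{P_i} \propto 2^{-\mu C_i}$, with $\mu$ chosen so that $H(\hat{P}) = H(\mathbf{X})/f'$, where $f' = f/g = 2.740$ is the overall expansion factor of the rate-1 scheme. The sketch below assumes the compressed stream acts as a uniform i.i.d. quaternary source, so $H(\mathbf{X}) = 2$ bits per source symbol; this reading is our own assumption, though it is consistent with the numbers reported in the paper:

```python
from math import log2

C = [0.0, 0.58, 0.87, 1.29]   # empirical MLC flash cost vector (erase level costs 0)
H_X = 2.0                     # uniform quaternary source after compression (assumption)
f_prime = 2.740               # overall expansion factor f' = f/g for the rate-1 scheme

def boltzmann(mu):
    # minimum-average-cost family: P_hat_i proportional to 2^{-mu * C_i}
    w = [2 ** (-mu * c) for c in C]
    return [x / sum(w) for x in w]

def entropy(p):
    return -sum(x * log2(x) for x in p if x > 0)

# entropy(boltzmann(mu)) decreases in mu; bisect for H(P_hat) = H(X)/f'
lo, hi = 0.0, 50.0
for _ in range(100):
    mid = (lo + hi) / 2
    if entropy(boltzmann(mid)) > H_X / f_prime:
        lo = mid
    else:
        hi = mid
P_hat = boltzmann((lo + hi) / 2)
C_prime = [-log2(p) for p in P_hat]   # equivalent type-II cost vector C'
print([round(p, 4) for p in P_hat])   # close to the reported [0.8606, 0.0989, 0.0335, 0.0070]
```

The resulting $\mathcal{C}'$ is likewise close to the reported $[0.2167, 3.3378, 4.8983, 7.1585]$; small discrepancies come from rounding in the reported values.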
We then used Theorem~\ref{performance:opt} to compute the target symbol occurrence probabilities of a shaping code that minimizes average cost for a uniform i.i.d. source, a cost vector $\mathcal{C}$, and the expansion factor $f'=f/g= 2.740$. The resulting symbol occurrence probability distribution was given by \begin{equation} \hat{P}=[ 0.8606, 0.0989, 0.0335, 0.0070]. \end{equation} Using Theorem~\ref{thm:typeonetypetwo}, we computed the costs for the equivalent code that minimizes total cost, yielding the cost vector \begin{equation} \mathcal{C}'= [0.2167, 3.3378, 4.8983, 7.1585]. \end{equation} We constructed a Varn code with codebook size $K=256$ based on the cost vector $\mathcal{C}'$. This code is a length-8, type-\Romannum{2} shaping code and the concatenation of the compression and the Varn code is a rate-1, type-\Romannum{1} shaping code. The expansion factor of the Varn code is 2.768, which is close to the expansion factor of the optimal type-\Romannum{2} shaping code for cost vector $\mathcal{C}'$, where $f_{opt} = 2.737$. Its codeword length distribution is shown in Fig.~\ref{fig::length_count}. \begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{figures/length_distribution_thecount} \caption{Codeword length distribution of Varn code with the codebook size $K=256$ for English-language text. } \label{fig::length_count} \end{figure} To characterize the performance of the designed shaping code, we performed a program/erase (P/E) cycling experiment on the MLC flash memory by repeating the following steps, which collectively represent one P/E cycle. The experiment was conducted with the uncoded source data, and then with the output data from the shaping code. \begin{itemize} \item Erase the MLC flash memory block. \item Program the MLC flash memory. 
\item For each successive programming cycle, ``rotate'' the data, so that the data written on the $i$th wordline is written on the $(i+1)$st wordline, wrapping around from the last wordline to the first. \item After every 100 P/E cycles, erase the block and program pseudo-random data. Then perform a read operation, record bit errors, and calculate the bit error rate. \end{itemize} Fig.~\ref{eng:fig1} shows the average bit error rates (BERs) for the uncoded source data, the direct shaping code \cite{LiuSieGC16}, and the optimal shaping code. The results indicate that the optimal shaping code provides a significant increase in the memory lifetime compared to no shaping and direct shaping. \begin{figure}[h] \centering \subfigure[]{\label{eng:fig1}\includegraphics[width=0.49\columnwidth]{figures/TCMC_FIG1_new.png}} \subfigure[]{\label{eng:fig2}\includegraphics[width=0.49\columnwidth]{figures/TCMC_FIG2_new.png}} \vspace{-0.75em} \caption{BER performance for English-language text.} \end{figure} As a way of comparing the performance of optimal shaping to that of data compression alone, we rescaled the P/E cycle count of the shaping code by the compression ratio 2.740 and compared the result to P/E cycling of pseudo-random data. This corresponds to a BER comparison based upon the total amount of source data stored in the memory. The results, shown in Fig.~\ref{eng:fig2}, indicate that the performance of optimal shaping is superior to data compression alone as a function of total source data written. A similar experiment was conducted for a Chinese-language text, \textit{Collected Works of Lu Xun, Volumes 1--4}, represented using UTF-16LE encoding. We constructed a Varn code with codebook size $K=256$ based on the cost vector \begin{equation} \mathcal{C}'= [0.4222, 2.6647, 3.7860, 5.4099]. \end{equation} The expansion factor of the Varn code was 1.751, which is close to the expansion factor of the optimal type-\Romannum{2} shaping code, $f_\text{opt} = 1.759$. 
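Varn's construction itself is a simple greedy tree growth: starting from the single-symbol codewords, repeatedly replace the cheapest leaf of the code tree by its $|\mathcal{Y}|$ children until the tree has $K$ leaves. The sketch below is our own minimal rendering of this procedure, shown for a binary alphabet with costs $C_i = -\log_2 P_i$ for the target $\{2/3, 1/3\}$ and a small $K$ for readability:

```python
import heapq
from math import log2

def varn_code(costs, K):
    """Grow a code tree by always splitting the minimum-cost leaf until
    K leaves (codewords) remain; assumes (K-1)/(len(costs)-1) is an integer,
    i.e., the code is exhaustive."""
    heap = [(costs[i], str(i)) for i in range(len(costs))]   # single-symbol words
    heapq.heapify(heap)
    while len(heap) < K:
        w, word = heapq.heappop(heap)            # cheapest leaf
        for i, c in enumerate(costs):            # replace it by its children
            heapq.heappush(heap, (w + c, word + str(i)))
    return sorted(word for _, word in heap)

C = [-log2(2 / 3), -log2(1 / 3)]   # binary target {2/3, 1/3}
print(varn_code(C, 4))             # ['000', '001', '01', '1']
```

For larger codebooks the zero-frequency of the resulting codeword set moves toward the target value $P_0 = 2/3$, consistent with the asymptotic optimality discussed earlier.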
Its codeword length distribution is shown in Fig.~\ref{fig::length_luxun}. The BER results are shown in Fig.~\ref{chn:fig1} and Fig.~\ref{chn:fig2}. \begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{figures/length_distribution_luxun} \caption{Codeword length distribution of Varn code with codebook size $K=256$ for Chinese-language text.} \label{fig::length_luxun} \end{figure} \begin{figure}[h] \centering \subfigure[]{\label{chn:fig1}\includegraphics[width=0.49\columnwidth]{figures/LUXUN_FIG1_new.png}} \subfigure[]{\label{chn:fig2}\includegraphics[width=0.49\columnwidth]{figures/LUXUN_FIG2_new.png}} \vspace{-0.75em} \caption{BER performance for Chinese-language text.} \end{figure} \subsection{Varn Codes for Distribution Matching} Remark~\ref{remark:varnasdm} shows that the upper bound on the generalized expansion factor of Varn codes decreases as the codebook size increases. This suggests that as the codebook size of a Varn code increases, the approximation to an optimal DM code should improve. In this subsection, we empirically tested this premise by constructing Varn codes with codebook size $K = 100$, 1000 and 10000, respectively, for a target distribution $\{P_0, P_1\} = \{\frac{2}{3}, \frac{1}{3}\}$. The measure of goodness we used here was similar to the serial test in~\cite[Section 2.11]{NIST}, namely KL-divergences for patterns of increasing length. Codeword sequences with 10000 codewords were generated using the random number sequence collected from \cite{random}. The first 71514 bits in codeword sequences were used for comparison (71514 was the length of the codeword sequence generated by the Varn code with codebook size $K = 100$). The probability of occurrence of length 1, 2 and 3 patterns was calculated. 
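These pattern statistics are straightforward to compute. The sketch below is a hedged re-implementation of the measure (sliding-window counts over $l-k+1$ positions; the function itself is ours):

```python
from math import log2

def pattern_divergence(seq, P, k):
    """k-th order KL-divergence between the empirical length-k pattern
    frequencies of a binary string and their i.i.d. probabilities under
    P = {P_0, P_1}."""
    n = len(seq) - k + 1
    freq = {}
    for i in range(n):                 # sliding window of length k
        w = seq[i:i + k]
        freq[w] = freq.get(w, 0) + 1
    div = 0.0
    for w, c in freq.items():
        emp = c / n                    # empirical pattern frequency
        iid = 1.0
        for s in w:                    # i.i.d. probability of the pattern
            iid *= P[int(s)]
        div += emp * log2(emp / iid)
    return div

P = [2 / 3, 1 / 3]
print(pattern_divergence('001001', P, 1))   # 0.0: zero frequency matches P_0 exactly
```

The divergence is zero when the empirical pattern frequencies match the i.i.d. target exactly, and nonnegative otherwise.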
For example, we define the probability of occurrence of the patterns `10' ($P_{10}'$) and `101' ($P_{101}'$) in a codeword sequence $y_1^l$ as \begin{equation} P_{10}' = \frac{\#\{i : y_i^{i+1} = \text{`10'}\}}{l-1}, \end{equation} \begin{equation} P_{101}' = \frac{\#\{i : y_i^{i+2} = \text{`101'}\}}{l-2}. \end{equation} The first-, second-, and third-order KL-divergences between $P'$ and the distribution $\{P_0, P_1\} = \{\frac{2}{3}, \frac{1}{3}\}$ were calculated, using the following definitions: \begin{equation} I_1 = \sum_{i \in \{0,1\}} P'_{i} \log_2 \frac{P'_{i}}{P_i} \end{equation} \begin{equation} I_2 = \sum_{i \in \{0,1\}} \sum_{j \in \{0,1\}} P'_{ij} \log_2 \frac{P'_{ij}}{P_i P_j} \end{equation} \begin{equation} I_3 = \sum_{i \in \{0,1\}} \sum_{j \in \{0,1\}} \sum_{k\in \{0,1\}} P'_{ijk} \log_2 \frac{P'_{ijk}}{P_i P_j P_k}. \end{equation} The results are shown in Table~\ref{table::Idivergence}. The divergences decrease as $K$ increases, indicating that the approximation to an i.i.d. sequence with target distribution $\{P_0, P_1\} = \{\frac{2}{3}, \frac{1}{3}\}$ improves. \begin{table}[h] \centering \begin{tabular}{ccccc} \hline & $P_0'$ & $I_1$ & $I_2$ & $I_3$ \\ \hline $K=100$ & 0.6447 & 0.0015 & 0.0032 & 0.0055 \\ $K=1000$ & 0.6498 & 0.00091 & 0.0018 & 0.0027 \\ $K=10000$ & 0.6602 & 0.00014 & 0.00027 & 0.00028 \\ \hline \end{tabular} \caption{First-, second-, and third-order KL-divergences.} \label{table::Idivergence} \end{table} \section{Conclusion} \label{sec:conclude} In this paper, we studied information-theoretic properties and performance limits of a general class of shaping codes. We derived the asymptotic symbol occurrence probability distribution and used it to determine the minimum achievable average cost for a type-\Romannum{1} shaping code. Using these results, we determined the minimum total cost and optimal expansion factor for a type-\Romannum{2} shaping code. 
A consequence of this analysis is an equivalence theorem, stating that a type-\Romannum{1} shaping code with a given expansion factor and a cost vector can be realized by a type-\Romannum{2} shaping code. We then proved a separation theorem stating that optimal shaping can be achieved by a concatenation of optimal lossless compression and optimal shaping for a uniform i.i.d. source. Experimental results showed that optimal shaping can provide a significant increase in flash memory lifetime when applied to English-language and Chinese-language texts, providing total data capacity greater than that achieved by data compression alone. We also studied properties of prefix-free variable-length distribution matching (DM) codes from the perspective of shaping. We characterized optimal DM codes in terms of the asymptotic normalized divergence and showed that when the divergence equals zero, a DM code encoder generates a codeword sequence that looks i.i.d., with symbol occurrence probability equal to the target distribution. We showed that optimal type-\Romannum{2} shaping codes can be used to construct optimal DM codes. This suggested the definition of the \textit{generalized expansion factor} as a performance measure for DM codes and implied a separation theorem for DM codes. We also established the relationship between the generalized expansion factor and the informational divergence of a DM code. The relationship between the type-\Romannum{1} shaping problem and the minimization of normalized informational divergence was also studied. Simulation results showed an increase in the distribution matching performance of Varn codes designed for a Bernoulli distribution as the codebook size increases. \section*{Acknowledgment} This work was supported in part by National Science Foundation (NSF) Grant CCF-1619053.
\section{Introduction} \label{sec:intro} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{LOOP_Concept.png} \caption{We propose to replace the classic iterative solvers with a parametric (set) function that can be trained to directly map the input to the optimal parameters.} \label{fig:concept} \end{figure} Optimization problems are ubiquitous in computational sciences and engineering. Classic solutions to optimization problems involve iterative algorithms, often relying on predetermined first- and second-order methods like (sub)gradient ascent/descent, conjugate gradients, and simplex basis updates. These methods often come with theoretical convergence guarantees, which is desirable, but their iterative nature could be limiting in applications requiring near-real-time inference. In addition, these algorithms' performance remains the same regardless of the number of times a similar optimization problem is visited. Recently, there has been an emerging interest in leveraging machine learning to enhance the efficiency of optimization processes and address some of these shortcomings. The machine-learning-based solutions are often referred to as \emph{Learning to Optimize} (L2O) methods in the literature. While L2O methods do not come with theoretical guarantees, they hold the promise of: 1) reducing the number of iterations needed to arrive at a solution, and 2) improving over time as more optimization problems are visited. L2O allows for transferring recent advances in machine learning, e.g., self-supervised learning, meta-learning, and continual learning, to learn data-driven optimization algorithms that could improve over time. We note that most existing L2O methods aim to learn a function that receives the current loss or its gradient and, based on the memory of previous loss values (or gradients), provides an update for the optimization parameters.
Hence, these methods do not eliminate the iterative nature of the solution but aim to improve the iterations so as to: 1) reduce the total number of iterations, and 2) arrive at better solutions for non-convex problems. In this paper, we consider an inherently different use-case of machine learning in solving optimization problems. We propose to replace the classic iterative solutions of an optimization problem with a trainable parametric (set) function that directly maps the input of the optimization problem to the optimal parameters in a single feed forward. Figure \ref{fig:concept} demonstrates this concept. This process, which we denote as \emph{Learning to Optimize the Optimization Process} ($\mathcal{LOOP}$), is inspired by biological systems that are capable of solving complex optimization problems upon encountering the problem multiple times. By omitting the traditional iterative solutions, $\mathcal{LOOP}$~overcomes one of the major optimization bottlenecks, enabling near-real-time optimization in a wide range of critical applications. $\mathcal{LOOP}$~is particularly suitable when one needs to repeatedly perform a certain type of optimization (e.g., linear/quadratic programming) over a specific distribution of input data (e.g., collected sensor data). These problems abound in practice, with examples being cyber-physical infrastructures, autonomous vehicle networks, sensor networks monitoring a physical field, financial markets, and supply chains. For example, the resiliency and cost-effectiveness of cyber-physical energy systems rely on finding optimal energy dispatch decisions in near-real-time. This is a prime example of an optimization that must be solved repeatedly over the distribution of electricity demands on the power grid. Another example is traffic flow management in transportation networks, where traffic control systems need to continuously determine the status of traffic lights based on traffic measurements.
\begin{figure*}[t!] \centering \includegraphics[width=\linewidth]{LOOP_Training.png} \caption{Our two proposed approaches for training $\mathcal{LOOP}$: 1) with solver in the loop (left), and 2) without solver in the loop, by directly minimizing the objective function (right).} \label{fig:training} \end{figure*} At first glance, the use of neural networks for solving frequently solved optimization problems may seem inefficient. However, such a paradigm shift would allow us to leverage recent advances in deep learning, in particular, deep learning on edge devices, continual learning, and transfer learning, to improve the performance of an optimizer over time, even for a fixed computational budget. Below we enumerate our specific contributions. \begin{enumerate} \item Providing a generic framework, $\mathcal{LOOP}$, for replacing traditional iterative optimization algorithms with a trainable parametric (set) function that outputs the optimal arguments/parameters in a single feed forward. \item Proposing two generic approaches for training parametric (set) functions to solve a certain type of optimization problem over a distribution of input data. \item Demonstrating the success of our $\mathcal{LOOP}$~framework in solving various types of optimization problems, including linear/nonlinear regression, principal component analysis, transport-based core-sets, and quadratic programming in a supply-management application. \end{enumerate} \section{Prior Work} One of the classic applications of machine learning in optimization has been in predicting proper hyper-parameters for solving an optimization problem. Such hyper-parameters could include the learning rate, momentum decay, and regularization coefficients, among others.
The existing literature on learning to predict proper hyper-parameters includes approaches based on sequential model-based Bayesian optimization (SMBO) \cite{hutter2011sequential,bergstra2011algorithms,snoek2012practical}, and gradient-based methods \cite{bengio2000gradient,maclaurin2015gradient,wei2021meta}. At their core, these methods instantiate different variations of the same optimization algorithm, e.g., stochastic gradient descent (SGD), by selecting different hyper-parameters. More recently, a large body of work has focused on leveraging machine learning to improve the optimization process by replacing the engineered traditional optimizers with learnable ones. These methods, referred to as Learning to Optimize (L2O) approaches, are based on learning a parametric function, often in the form of a recurrent neural network, that receives the current loss (or its gradient) as input and outputs the parameter updates \cite{gregor2010learning,li2016learning,andrychowicz2016learning,wichrowska2017learned,chen2017learning}. Such methods are effective on a wide range of optimization problems, reducing the number of iterations and often achieving better solutions for non-convex optimization problems. Chen et al. \cite{chen2021learning} provide a comprehensive review of these approaches and their numerous applications. Unlike the hyper-parameter search methods that instantiate different variations of the same optimization algorithm (e.g., SGD), L2O approaches effectively search over an expansive space of optimization algorithms to find an optimal algorithm. The optimal algorithm (i.e., the learned optimizer) fits the input data distribution for a specific optimization problem (e.g., linear/quadratic programming); hence, it can lead to better performance than generic algorithms. In this paper, our focus is entirely different from both the hyper-parameter optimization approaches and the \emph{L2O} approaches discussed above.
Instead of searching in the space of possible optimizers, our goal is to replace the optimization algorithm with a parametric (set) function that directly maps the optimization's input data to the optimal arguments/parameters. The motivation behind such a transition is to: 1) discard iterations altogether, and 2) obtain an optimizer that improves over time as it encounters more optimization problems of a specific type. More importantly, the proposed framework allows one to leverage some of the core machine learning concepts, including continual/lifelong learning, transfer learning, domain adaptation, few/one/zero-shot learning, model compression (e.g., through sparse training), and many others, to improve the optimization process. Several recent papers in the literature leverage deep neural networks to approximate the output of an optimization algorithm, which is in essence similar to our proposed framework, $\mathcal{LOOP}$. In VoxelMorph, for instance, Balakrishnan et al. \cite{balakrishnan2019voxelmorph} trained a convolutional neural network to register medical images; image registration is a non-convex optimization problem often solved through time-consuming iterative and multi-scale solvers. In an entirely different application, Pan et al. \cite{pan2020deepopf} trained a neural network to predict the set of independent operating variables (e.g., energy dispatch decisions) for optimal power flow (OPF) optimization problems, denoted as DeepOPF. They showed that DeepOPF requires a fraction of the time used by conventional solvers while achieving competitive performance. More recently, Knyazev et al. \cite{knyazev2021parameter} trained a neural network to directly predict the parameters of an input network (with unseen architecture) to solve the CIFAR-10 and ImageNet datasets. $\mathcal{LOOP}$~is the common theme behind these seemingly unrelated works.
In this paper, we establish $\mathcal{LOOP}$~as a generic alternative framework to traditional optimization algorithms, as well as the L2O approaches, and show that many optimization problems can be directly solved through training neural networks. \section{Method} \label{sec:method} We start by considering unconstrained optimization problems of the following type: \begin{align} u^*=\argmin_u f(\mathcal{X},u) \end{align} where $\mathcal{X}=\{x_n\in \mathbb{R}^d \}_{n=1}^{N}$ is the set of inputs to the optimization problem, $u\in\mathbb{R}^l$ is the vector of optimization parameters, and $f(\mathcal{X},u)$ is the objective function with respect to parameters $u$ and inputs $\mathcal{X}$. To replace this optimization with a set function approximator, we propose two approaches, shown in Figure \ref{fig:training}. \begin{itemize} \item {\bf Solver in the $\mathcal{LOOP}$--} In our first formulation, during training, we use a traditional solver to obtain $u^*$ and use it as the ground truth, posing the problem as supervised learning. Our training objective is shown below: \begin{align} \argmin_\theta~~& \mathbb{E}_{\mathcal{X}\sim P_\mathcal{X}}[ d(\phi_\theta(\mathcal{X}),u^*)]\nonumber\\ s.t.~~~~~~ & u^* = \argmin_u f(\mathcal{X},u) \end{align} where $d(\cdot,\cdot):\mathbb{R}^l\times\mathbb{R}^l\rightarrow \mathbb{R}_+$ is a discrepancy/distance defined on $\mathbb{R}^l$, $\phi_\theta$ denotes our set neural network, and $P_\mathcal{X}$ is a distribution over input sets. \item {\bf Without Solver--} The use of a solver in our first formulation could be limiting, as such solvers are often computationally expensive, making training excruciatingly slow. More importantly, in non-convex problems the calculated $u^*$ for input $\mathcal{X}$ is not unique (e.g., due to different initialization), which leads to solving a regression problem with changing targets.
To avoid these problems, in our second formulation we directly optimize the objective function and, with a slight abuse of the term, call it a ``self-supervised'' formulation: \begin{align} \argmin_\theta~~& \mathbb{E}_{\mathcal{X}\sim P_\mathcal{X}}[ f(\mathcal{X},\phi_\theta(\mathcal{X}))] \end{align} where the expected objective value over the distribution of input sets is minimized. \end{itemize} Note that for constrained problems (depending on the use case) we leverage different optimization techniques. For instance, we can enforce simple constraints (e.g., $u\geq 0$) directly in our model (i.e., the set function) using Rectified Linear Unit (ReLU) activations in the output layer of our network. Alternatively, we can use the Lagrange dual function and absorb the constraints into our objective function as penalty terms. Next we describe the different optimization problems we consider in this paper. \subsection{Problem 1: Linear/Nonlinear Regression} We start with the simple yet routine problem of regression. Let $\mathcal{X}_i=\{(x_n^i\in \mathbb{R}^d, y_n^i\in\mathbb{R})\}_{n=1}^{N_i}$, where the goal is to learn a parametric function $\rho_u:\mathbb{R}^d\rightarrow \mathbb{R}$. Here, index $i$ refers to the $i$'th regression problem of interest. In linear regression, $\rho_u(x)=u^T x$ (where we have absorbed the bias into $x$ for simplicity of notation). Similarly, for nonlinear regression $\rho_u(x)=u^T\psi(x)$, where $\psi:\mathbb{R}^d \rightarrow \mathbb{R}^l$ is a nonlinear mapping to a feature space (i.e., the kernel space). The optimization problem is then as follows: \begin{align} u^*=\argmin_{u} \frac{1}{2}\sum_{n=1}^{N} \|\rho_u(x_n)-y_n\|_2^2+\lambda \Omega(u) \end{align} where $\Omega(u)$ is the regularization term (e.g., the $\ell_2$ or $\ell_1$ norm), and $\lambda$ is the regularization coefficient. Our goal is then to learn a network that can solve the regression problem for unseen input data.
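For the ``solver in the $\mathcal{LOOP}$'' setting, the ground truth $u^*$ in the $\ell_2$-regularized linear case admits a closed form. A minimal sketch, assuming $\Omega(u)=\frac{1}{2}\|u\|_2^2$ (the factor $\frac{1}{2}$ is our convention, not specified in the text):

```python
import numpy as np

def ridge_solver(X, y, lam=0.1):
    """Closed-form 'solver' for the regularized regression problem:
    u* = argmin_u (1/2)||Xu - y||^2 + (lam/2)||u||^2
       = (X^T X + lam I)^{-1} X^T y.
    X is the N x d design matrix (bias absorbed as a column of ones)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

The returned $u^*$ satisfies the first-order optimality condition $X^T(Xu^*-y)+\lambda u^*=0$, which makes it a convenient regression target for supervised training of $\phi_\theta$.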
\subsection{Problem 2: Principal Component Analysis} Next, we consider the principal component analysis (PCA) problem, which is commonly used to project high-dimensional sets of samples into a lower-dimensional space while maximizing the preserved variation of the data. Let $\mathcal{X}_i=\{x_n^i\in\mathbb{R}^d\}_{n=1}^{N_i}$; then PCA seeks an orthonormal set of $k$ vectors, $\{w_l\}_{l=1}^k$, such that: \begin{align} w_l=\argmax_w&~~ w^T S_i w \\ \nonumber s.t.~~~&~~w^T_jw_l=\left\{ \begin{array}{lr} 1 & j=l \\ 0 & j<l \end{array} \right. \end{align} where $S_i=\frac{1}{N_i}\sum_{n=1}^{N_i}(x_n^i-\bar{x}^i)(x_n^i-\bar{x}^i)^T$ is the covariance matrix of the data, and $\bar{x}^i=\frac{1}{N_i}\sum_{n=1}^{N_i} x_n^i$ is the mean. The closed-form solution of this problem involves computing the eigenvectors of the covariance matrix, i.e., \begin{align*} S_iw^*_l=\lambda_l w^*_l \end{align*} where $\lambda_l$ and $w^*_l$ are the $l$'th eigenvalue and eigenvector, respectively. This optimization problem can be represented as a set function that receives a set of $d$-dimensional points, $\mathcal{X}_i$ with cardinality $|\mathcal{X}_i|=N_i$, and returns $U^*=[w_1^*,w_2^*,...,w_k^*]$. Using this representation, $\mathcal{LOOP}$~approximates the discussed set function and outputs the top $k$ principal components for the input set. Put differently, we aim to find a $\phi_\theta$ such that $\phi_\theta(\mathcal{X})\approx U^*$ for $\mathcal{X}\sim P_\mathcal{X}$. \subsection{Problem 3: Transport-based Core-set} For our third problem, we consider the transport-based core-set problem. The notion of core-set originates from computational geometry \cite{agarwal2005geometric} and has been widely used in different machine learning tasks. Constructing a core-set from a large dataset is an optimization problem of finding a smaller set that can best approximate the original dataset with respect to a certain measure. Claici et al.
\cite{claici2018wasserstein} leveraged optimal transport theory and introduced the Wasserstein measure to calculate the core-set. Their work aims to minimize the Wasserstein distance of the core-set from a given input data distribution. In this paper we consider this transport-based core-set problem with a fixed output size. Let $\mathcal{X}=\{x_n\in \mathbb{R}^d\}_{n=1}^N$ be an input set. Here we assume that elements of each set are i.i.d. samples from an underlying distribution. Our sets are represented as empirical distributions, i.e., $p(x)=\frac{1}{N}\sum_{n=1}^N\delta(x-x_n)$. Given a size $M$ ($M\ll N$), we seek a set $\mathcal{U}^*=\{\mu_m\in\mathbb{R}^d\}_{m=1}^M$ with the empirical distribution $q_\mathcal{U}(x)=\frac{1}{M}\sum_{m=1}^M\delta(x-\mu_m)$, such that \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{LOOP_Coreset.pdf} \caption{For an input set of $\mathcal{X}=\{x_n\in\mathbb{R}^d\}_{n=1}^N$, $\mathcal{LOOP}$~ returns a core-set $\mathcal{U}=\{\mu_m\in\mathbb{R}^d\}_{m=1}^M$ such that it minimizes the Wasserstein distance between the empirical distributions $p(x)=\frac{1}{N}\sum_{n=1}^N\delta(x-x_n)$ and $q_\mathcal{U}(x)=\frac{1}{M}\sum_{m=1}^M\delta(x-\mu_m)$, i.e., $W_2(p,q_\mathcal{U})$.} \label{fig:coreset} \end{figure} \begin{align} \label{eq:barycenter} \mathcal{U}^*=\argmin_{\mathcal{U}} W_2(p,q_\mathcal{U}) \end{align} where $W_2(\cdot, \cdot)$ denotes the 2-Wasserstein distance. The existing approach to this optimization problem relies on iterative methods that solve a linear program to compute the optimal transport plan in each iteration. We intend to replace this costly process with a parametric set function $\phi_\theta$ such that $\phi_\theta(\mathcal{X})\approx \mathcal{U}^*$ for $\mathcal{X}\sim P_\mathcal{X}$. Figure \ref{fig:coreset} demonstrates this process.
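While in $\mathbb{R}^d$ evaluating $W_2(p,q_\mathcal{U})$ requires a linear program, in one dimension it has an exact closed form via the quantile coupling. The sketch below is our own illustration of the objective (not the solver used in the experiments); it handles empirical measures of different cardinalities by expanding both sorted supports to a common grid of $N\cdot M$ atoms:

```python
import numpy as np

def w2_1d(x, u):
    """Exact 2-Wasserstein distance between two 1-D empirical measures
    with uniform weights (sizes N and M may differ). Repeating each
    sorted support realizes the monotone (quantile) coupling exactly."""
    xs = np.repeat(np.sort(x), len(u))   # each atom now has mass 1/(N*M)
    us = np.repeat(np.sort(u), len(x))
    return np.sqrt(np.mean((xs - us) ** 2))
```

This one-dimensional distance is also the building block of sliced Wasserstein variants, which average it over random projections of $d$-dimensional point sets.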
It is worth noting that the transport-based core-set problem is equivalent to the free-support Wasserstein barycenter problem \cite{cuturi2014fast} when there is only one input distribution. \subsection{Problem 4: Supply management in Cyber-Physical Systems} Lastly, we utilize $\mathcal{LOOP}$~to solve the fundamental problem of supply management in Cyber-Physical Systems (CPS). The electric power grid is an example of a CPS that is increasingly facing supply-demand issues. Power networks are large-scale systems spanning multiple cities, states, countries, and even continents, and are characterized as a complex interconnect of multiple entities with diverse functionalities. The grid of the future will differ from the current system by the increased integration of decentralized generation, distributed storage, as well as communications and sensing technologies. These advancements, combined with climate change concerns, resiliency needs, and electrification trends, are resulting in a more distributed and interconnected grid, requiring decisions to be made at scale and in a limited time window. In its basic form, the energy supply-demand problem seeks to find the most cost-effective power production for meeting the end-users' needs and can be formulated as \begin{align} \label{eq:ED} &\argmin_u \sum_{n=1}^{N} C_n(u_n) \\ \nonumber &~~~s.t.~~~~ \sum_{n=1}^N u_n= \sum_{m=1}^M x_m, \quad \underline{u}_n \leq u_n \leq \overline{u}_n \end{align} where $u_n$ is the produced electric power from source $n$ and $C_n$ is its corresponding cost, which is a quadratic function. Given that $u_n$ represents the power output, it is bounded by the physical limitations of resource $n$, i.e., $\overline{u}_n$ and $\underline{u}_n$. In this setup, $x_m$ refers to the hourly electric demand at node $m$ (where the term `node' identifies an end-user/consumer). Note that the values of $x_m$ are positive. The equality constraint ensures the supply-demand balance.
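For reference, with quadratic costs this dispatch problem can be solved by the classical equal-marginal-cost (KKT) condition: at the optimum, every source not pinned to a bound runs at the same marginal cost. The sketch below is a stand-in for a commercial QP solver; it assumes $C_n(u)=a_n u^2 + b_n u$ and a feasible demand ($\sum_n \underline{u}_n \le \sum_m x_m \le \sum_n \overline{u}_n$), both assumptions on our part:

```python
import numpy as np

def dispatch(a, b, lo, hi, demand, iters=100):
    """Solve min sum_n a_n u_n^2 + b_n u_n  s.t.  sum_n u_n = demand,
    lo_n <= u_n <= hi_n, by bisecting on the shared marginal price.
    At the optimum each unconstrained source satisfies
    2 a_n u_n + b_n = lambda (the KKT stationarity condition)."""
    def supply(lam):
        # Each source produces up to its box limits at price lam.
        return np.clip((lam - b) / (2 * a), lo, hi)
    lam_lo, lam_hi = (2 * a * lo + b).min(), (2 * a * hi + b).max()
    for _ in range(iters):
        lam = 0.5 * (lam_lo + lam_hi)
        if supply(lam).sum() < demand:
            lam_lo = lam
        else:
            lam_hi = lam
    return supply(0.5 * (lam_lo + lam_hi))
```

Total supply is monotone in the price $\lambda$, so bisection converges to the balancing price; the resulting dispatch satisfies both the balance and box constraints.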
In practice, this problem is solved on an hourly basis to serve the predicted electric demand for the next hour. We aim to approximate this process with a parametric set function, such that $\phi_\theta(\mathcal{X})\approx U^*$ for $\mathcal{X}\sim P_\mathcal{X}$. \section{Experiments} In this section, we demonstrate the application of $\mathcal{LOOP}$~to the problems enumerated in Section \ref{sec:method} and compare it to traditional solvers. Throughout this section, GT refers to the Ground Truth and Solver refers to the results obtained from using commercial solvers to solve the optimization problems of interest. For each problem and for each model architecture, we repeat the training of our $\mathcal{LOOP}$~models five times, and we test the performance on a set of 100 problems per model. We then report the mean and standard deviations of all experiments over the five models and the 100 test sets. We start by laying out the specifics of our models and then discuss the implementation details for each problem. \subsection{Models} Given that the inputs to all our optimization problems are in the form of sets of possibly different cardinalities, we pose these problems as learning permutation-invariant deep neural networks on set-structured data. To that end, we use Deep Sets \cite{zaheer2017deep} with different pooling mechanisms and Set Transformer networks \cite{lee2019set}.
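Both architectures share the permutation-invariance property that motivates their use here. As a minimal toy illustration (random weights, a single linear layer for each of the encoder and decoder; this is our own sketch, not the trained models), the Deep Sets blueprint with mean pooling is invariant to reordering the input set:

```python
import numpy as np

rng = np.random.default_rng(1)
W_enc = rng.normal(size=(3, 16))   # encoder: one linear layer per element
W_dec = rng.normal(size=(16, 2))   # decoder applied after pooling

def deep_sets(X):
    """phi(X) = psi(pool({eta(x_1), ..., eta(x_n)})) with mean pooling.
    Mean pooling makes the output independent of element order."""
    H = np.maximum(X @ W_enc, 0.0)   # eta with ReLU, applied elementwise
    return H.mean(axis=0) @ W_dec    # permutation-invariant pool, then psi

X = rng.normal(size=(10, 3))
out1 = deep_sets(X)
out2 = deep_sets(X[rng.permutation(10)])  # same set, shuffled order
assert np.allclose(out1, out2)            # permutation invariance
```

Any permutation-invariant pooling (average, SWE, or attention-based PMA) yields the same property for the full architecture.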
\begin{itemize} \item {\bf Deep Sets} are permutation-invariant neural architectures (i.e., the output remains unchanged under any permutation of the input set's elements), which consist of: 1) a multi-layer perceptron (MLP) encoder, 2) a global pooling mechanism (e.g., average pooling), and 3) an MLP decoder that projects the pooled representation to the output: \begin{align} \label{eq:DeepSets} \phi(\mathcal{X})=\psi(pool(\{\eta(x_1),\cdots,\eta(x_n)\})) \end{align} where $\eta$ is an encoder (an MLP) that extracts features from each element of $\mathcal{X}$ independently, resulting in a permutation-equivariant function on the input set, and $\psi$ represents the decoder, which generates the final output after a pooling layer ($pool$). Note that, to achieve a permutation-invariant set function, the pooling mechanism must be a permutation-invariant operator (e.g., average pooling, or more advanced methods like \cite{naderializadeh2021pooling}). Specifically, we use global average pooling (GAP) and Sliced-Wasserstein Embedding (SWE) \cite{naderializadeh2021pooling,lu2021slosh} as the pooling layer. In summary, we apply DeepSets-GAP and DeepSets-SWE models to the optimization problems stated above. \item {\bf Set Transformers} follow a similar blueprint of permutation-equivariant encoder, permutation-invariant pooling, and permutation-equivariant decoder as in Deep Sets. However, the feature extractor (i.e., the encoder) in the Deep Sets model only acts on each element of a set independently, while Set Transformers use attention to propagate information between the set elements in the encoder. This effectively allows the encoder to model interactions between elements, which could be crucial for approximating a parametric (set) function in some learning tasks.
More precisely, the encoder is a stack of multiple trainable (Induced) Set Attention Blocks (SAB and ISAB) \cite{lee2019set} that perform self-attention operations on a set and produce an output containing information about pairwise interactions between elements. Note that these blocks are permutation equivariant, that is, for any permutation $\pi$ of elements in $\mathcal{X}=\{x_i\}_{i=1}^n$, $block(\pi\mathcal{X})=\pi block(\mathcal{X})$. As a composition of permutation-equivariant blocks, the encoder is also permutation equivariant and captures higher-order interaction features. The decoder aggregates the features by a learnable pooling layer, Pooling by Multihead Attention (PMA) \cite{lee2019set}, and sends them through an SAB to produce the output. Here PMA is a permutation-invariant operator, and the rest of the operators (SAB or ISAB) are all permutation equivariant, making the overall Set Transformer architecture permutation invariant. \end{itemize} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{linear_nonlinear.pdf}\vspace{-.2in} \caption{Performance comparison between $\mathcal{LOOP}$~and the solver for three different model architectures (left) and for the two proposed learning settings (with or without the solver in the $\mathcal{LOOP}$) for the linear regression (a) and nonlinear regression (b) problems. The plots on the right show the performance of the Set Transformer network and the solver as a function of the number of training samples.} \label{fig:regression} \end{figure} \subsection{Problem 1: Linear/Nonlinear Regression} \textit{Dataset}: We follow a generative model $y=w^T\phi(x)+\epsilon$, where $\phi(\cdot)$ is the feature map, $w$ contains the ground-truth parameters of our regression problem, and $\epsilon$ denotes noise.
Regarding the feature maps, in the linear case we have $\phi(x)=[1,x]^T$, and in the nonlinear case we select $\phi(x)=[\rho(x-\mu_1),...,\rho(x-\mu_M)]$, where $\rho(x)$ is a radial basis function and $\{\mu_m\}_{m=1}^M$ form a grid in a predefined interval (e.g., $[-10,10]$). To generate each dataset, $\mathcal{X}_i$, we first sample the set cardinality, $N_i$, uniformly from a predefined interval. Then, we sample $w$, $\{\epsilon_n\}_{n=1}^{N_i}$, and $\{x_n\}_{n=1}^{N_i}$, and generate our $(x^i_n,y^i_n)$ pairs (train and test). For each model architecture and for each learning setting (i.e., with and without solver in the $\mathcal{LOOP}$), we train our $\mathcal{LOOP}$~model 5 times and report the test MSE of our model, the solver, and the ground truth. In addition, for the Set Transformer architecture, we report the test performance of our trained $\mathcal{LOOP}$~model and the solver as a function of the number of training samples. Figure \ref{fig:regression} showcases our results for the linear and nonlinear regression problems. We see that while for all architectures $\mathcal{LOOP}$~performs comparably with the solver, for the Set Transformer architecture the gap between $\mathcal{LOOP}$~and the solver becomes very small. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{PCA2.pdf} \caption{$\mathcal{LOOP}$'s~ performance in predicting the principal components as measured by the cosine similarity between the solver's and our model's outputs (left). We also provide the performance of the network as a function of the input data cardinality (right). The last row visualizes the first eigenvector calculated by our $\mathcal{LOOP}$~model on four different problems with random pairs of digits. We can see that the network's output is both quantitatively and qualitatively aligned with the first principal component.} \label{fig:pca} \end{figure} \begin{figure}[t!]
\centering \includegraphics[width=\columnwidth]{BaryCenter.pdf} \caption{Performance comparison between $\mathcal{LOOP}$~and the solver for our three different architectures (left) and for the two proposed learning settings. The y-axis represents the average Wasserstein distance between the input distribution, $p$, and the core-set distribution $q_\mathcal{U}$, when the core-set consists of: 1) random samples from the uniform distribution, Rand, 2) the output of the solver, and 3) the output of our $\mathcal{LOOP}$~model. The plots on the right show the performance of the Set Transformer network and the solver as a function of the number of training samples.} \label{fig:barycenter} \end{figure} \subsection{Problem 2: Principal Component Analysis} \textit{Dataset}: We used the MNIST \cite{lecun1998mnist} dataset to sample train and test sets. MNIST contains 60,000 train and 10,000 test images of handwritten digits. The size of a single image is $28\times 28$. In the training stage, we first select a pair of random digits to sample images from. Then a random number of images, ranging from 500 to 1000, is uniformly sampled from the two selected digits to form the input set. Given an input set, $\mathcal{X}_i$, our network aims to predict the top $K=5$ eigenvectors of the input data. In the ``solver in the $\mathcal{LOOP}$'' approach, the top $K=5$ eigenvectors are calculated by the solver. Then our Set Transformer \cite{lee2019set} is trained to maximize the cosine similarities between the ground-truth eigenvectors and the predicted ones (supervised learning). In our ``no solver'' approach, the Set Transformer maximizes the area under the curve of the captured variances along the predicted eigenvectors. We train 5 different models for this experiment and evaluate each model on 100 different test problems. For our metric, we calculate the cosine similarities between the predicted vectors and the principal components obtained from the solver.
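The solver output and the evaluation metric for this problem can be sketched as follows. We take the absolute value of the cosine similarity to discard the inherent sign ambiguity of eigenvectors, an implementation detail the text does not specify:

```python
import numpy as np

def top_k_eigvecs(X, k):
    """'Solver' for the PCA problem: top-k eigenvectors of the
    data covariance matrix S (columns of the returned d x k matrix)."""
    Xc = X - X.mean(axis=0)
    S = (Xc.T @ Xc) / len(X)
    vals, vecs = np.linalg.eigh(S)              # ascending eigenvalues
    return vecs[:, np.argsort(vals)[::-1][:k]]  # take the k largest

def cosine_metric(U_pred, U_true):
    """Per-component cosine similarity between predicted and ground-truth
    eigenvectors; absolute value removes the sign ambiguity."""
    U_pred = U_pred / np.linalg.norm(U_pred, axis=0)
    return np.abs(np.sum(U_pred * U_true, axis=0))
```

A perfect prediction yields a cosine similarity of 1 for every component, regardless of the sign convention of either solver.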
The mean and standard deviation of the cosine similarities for each eigenvector are depicted in Figure \ref{fig:pca} (left). We also show the performance of the trained model as a function of the number of training samples, from 128 to 2048 (right). Results of ``solver in the $\mathcal{LOOP}$'' and ``no solver'' training are shown for the Set Transformer model in Figure \ref{fig:pca}. Our network is able to effectively predict the top principal components in all experiments, while having a higher fidelity for the ones with larger eigenvalues. \subsection{Problem 3: Transport-based Core-set} \textit{Dataset}: We generate datasets by drawing samples from random 2D Gaussian Mixture Models (GMMs). We start by initializing a random number of Gaussians with random means but a fixed covariance matrix. Then, we draw a random number of samples from each of these randomly initialized Gaussians to generate our input sets, $\mathcal{X}^i$. \textit{Results:} Results of the two training approaches, ``solver in the $\mathcal{LOOP}$'' and ``no solver,'' for our three model architectures are shown in Figure \ref{fig:barycenter}. Given that the problem is equivalent to the free-support Wasserstein barycenter problem, we used the solver from the Python Optimal Transport package \cite{flamary2021pot} as our baseline solver. To compare the output of our $\mathcal{LOOP}$~model with the solver, we calculate the objective function, i.e., $W_2(p,q_\mathcal{U})$, as our metric (the lower the better). Also, to provide a reference for comparison, we consider the Wasserstein distance between the input distribution, $p$, and a uniform distribution in the input domain, $\bar{q}$, which we refer to as Rand. Therefore, Rand in our experiments plays the role of the line of chance.
As a practical point, we used the Sliced-Wasserstein distance (SWD) \cite{kolouri2019generalized} as the objective function in the ``no solver'' training, as SWD is significantly faster to compute than the Wasserstein distance. Finally, we also compare the performance of $\mathcal{LOOP}$~and the solver as a function of the number of training samples in Figure \ref{fig:barycenter}. \subsection{Problem 4: Supply management in Cyber-Physical Systems} \textit{Dataset:} We use the dataset from the publicly available IEEE 2000-bus system \cite{xu2017application} as the seed data to generate hourly energy data for one week. We use different load profiles for weekdays and weekends and randomly scale the original data. The scaling coefficient lies between 0.95 and 1.05. This process results in $24\times 7$ data points. We use the data of odd hours for training and that of even hours for testing. The IEEE 2000-bus system is a 2,000-node graph representing a realistic large-scale electric grid. This network consists of 1,125 demand nodes (electricity consumers) and 544 supply nodes (electricity producers). The numbers of demand and supply nodes define the input and output dimensions of our network, respectively. \textit{Results:} For ``solver in the $\mathcal{LOOP}$'', we use the mean squared error as the loss function, \begin{align} & L=\frac{\sum_{n=1}^{N}(u_n-u_n^{*} )^2 }{N} \label{eq:ED_loss_n} \end{align} where $\{u_n^*\}_{n=1}^N$ are the solver's outputs. We use the quadratic programming (QP) solver from the CVXPY library \cite{diamond2016cvxpy} as our solver. In our second learning setting, with no solver in the $\mathcal{LOOP}$, we write the Lagrange dual function and absorb the optimization constraints into our objective as penalty terms.
Therefore, the loss function consists of three terms: \begin{align} &L=\sum_{n=1}^{N} C_n(u_n)+\lambda _1\left ( \sum_{n=1}^N u_n- \sum_{m=1}^M x_m \right )^2\nonumber \\ &+\lambda _2\sum_{n=1}^{N}\left [ (\mathrm{ReLU}(\underline{u}_n-u_n))^2+ (\mathrm{ReLU}(u_n-\overline{u}_n))^2\right ],\label{eq:ED_loss_n_2} \end{align} where the $\lambda_i$s are penalty coefficients; we use $\lambda_1 =0.001$ and $\lambda_2 =10$ in our experiments. In \eqref{eq:ED_loss_n_2}, the first term represents the cost of electricity production, the second term enforces the total supply and demand to be equal, and the third term penalizes violations of the inequality constraints. At test time, we bound the network's output according to the inequality constraints. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{SupplyManagement.pdf} \caption{Performance of our $\mathcal{LOOP}$~model as measured by the distance of its output from the optimal solution and from the feasible set (i.e., the set of solutions that satisfy the constraints). We define the optimality distance as $\sum_{n=1}^{N}\left | u_n-u_n^* \right | / \sum_{n=1}^{N}u_n^*$, where $u_n^*$ and $u_n$ refer to the solver's and $\mathcal{LOOP}$'s~outputs. The feasibility distance is computed as $\sum_{n=1}^{N}\left | u_n-u_n^{proj} \right | / \sum_{n=1}^{N}u_n^{proj}$, where $u_n^{proj}$ denotes the projection of $u_n$ onto the feasible set. This figure depicts the performance of $\mathcal{LOOP}$~and the solver for our three different architectures and for the two proposed learning settings: with solver (left) and without solver (right). } \label{fig:supply_management} \end{figure} Figure \ref{fig:supply_management} illustrates the results of the two $\mathcal{LOOP}$~approaches (``solver in the $\mathcal{LOOP}$" and ``no solver") for the different models.
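The penalty-form loss can be written as a short, framework-free Python sketch. The quadratic per-node cost used in the example is an assumption, since the paper does not specify the form of $C_n$; any convex cost would fit the same template.

```python
def penalized_loss(u, x, u_lo, u_hi, cost, lam1=0.001, lam2=10.0):
    """Penalty-form objective: production cost + balance penalty + bound penalty.

    `cost` models the per-node production cost C_n; its exact form is an
    assumption, as the paper leaves C_n unspecified.
    """
    relu = lambda z: max(z, 0.0)
    production = sum(cost(un) for un in u)                  # sum_n C_n(u_n)
    balance = (sum(u) - sum(x)) ** 2                        # supply == demand
    bounds = sum(relu(lo - un) ** 2 + relu(un - hi) ** 2    # bound violations
                 for un, lo, hi in zip(u, u_lo, u_hi))
    return production + lam1 * balance + lam2 * bounds

# Hypothetical quadratic cost; a feasible, balanced u incurs only the cost term.
quad = lambda z: 0.5 * z * z
loss = penalized_loss([1.0, 1.0], [2.0], [0.0, 0.0], [2.0, 2.0], quad)  # -> 1.0
```

Because the balance and bound terms vanish on feasible outputs, the loss reduces to pure production cost there, while infeasible outputs are pushed back by the $\lambda_2$-weighted penalties.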
To quantify the performance of our $\mathcal{LOOP}$~model, we report two metrics: 1) optimality, which measures how far we are from the solver's output, and 2) feasibility, which measures how far $\mathcal{LOOP}$'s output is from the feasible set. We measure feasibility by projecting the output of the network onto the feasible set and measuring the projection distance. Finally, we note that the gap between our two learning settings for the supply management problem is large. We expect this gap to be reduced by a more careful tuning of the penalty coefficients $\lambda_1$ and $\lambda_2$. \section{Conclusion} This paper presents a novel alternative to existing iterative methods for solving optimization problems. Specifically, this paper introduces the $\mathcal{LOOP}$~(Learning to Optimize Optimization Process) framework, which approximates the optimization process with a trainable parametric (set) function. Such a function maps optimization inputs to the optimal parameters in a single feed-forward pass. We proposed two approaches for training $\mathcal{LOOP}$: using a traditional solver to provide ground truth (supervised learning), and without a solver in the $\mathcal{LOOP}$ (self-supervised learning). The performance of the proposed methods is showcased on diverse optimization problems: (i) linear and non-linear regression, (ii) principal component analysis, (iii) transport-based Core-set, and (iv) supply management in Cyber-Physical Systems. We used three separate models in our experiments, namely deep sets with global average pooling (GAP), deep sets with Sliced-Wasserstein Embedding, and set transformers. Our results demonstrate that one can reduce an optimization problem to a single forward mapping while staying within a reasonable distance from the optimal solutions (calculated using commercial solvers). $\mathcal{LOOP}$~holds promise for a next generation of optimization algorithms that improve by solving more examples of an optimization problem.
In future work, we intend to leverage recent advancements in deep learning on edge devices, continual learning, and transfer learning to continuously improve $\mathcal{LOOP}$'s~performance over time. \bibliographystyle{ieee_fullname}
\section{Introduction} A \emph{tree-decomposition} of a graph $G$ is a pair $(T, \mathcal{B})$ where $T$ is a tree and $\mathcal{B}:=\{B_t \mid t \in V(T)\}$ is a collection of subsets of vertices of $G$ satisfying: \begin{itemize} \item $V(G)= \bigcup_{t \in V(T)} B_t$, \item for each $uv \in E(G)$, there exists $t \in V(T)$ such that $u,v \in B_t$, and \item for each $v \in V(G)$, the set of all $w \in V(T)$ such that $v \in B_w$ induces a connected subtree of $T$. \end{itemize} We call each member of $\mathcal{B}$ a {\em bag}. If $T$ is a path, then we say a tree-decomposition $(T,\mathcal{B})$ of $G$ is a {\em path-decomposition} of $G$. Since a path can be written as a sequence of vertices, we think of a path-decomposition of $G$ as a sequence of sets of vertices $B_1,B_2,\ldots,B_s$ such that \begin{itemize} \item $V(G)= \bigcup_{1\le t \le s} B_t$, \item for each $uv \in E(G)$, there exists $1\le t \le s$ such that $u,v \in B_t$, and \item for each $v \in V(G)$, the sets $B_i$ containing $v$ are consecutive in the sequence. \end{itemize} For a tree-decomposition $(T,\mathcal{B})$ of $G$, the \emph{chromatic number} of $(T, \mathcal{B})$ is $\max \{\chi(G[B_t]) \mid t \in V(T)\}$. The \emph{tree-chromatic number} of $G$, denoted $\chi_T(G)$, is the minimum chromatic number taken over all tree-decompositions of $G$. The \emph{path-chromatic number} of $G$, denoted $\chi_P(G)$, is defined analogously, where we insist that $T$ is a path instead of an arbitrary tree. Both these parameters were introduced by Seymour \cite{Seymour16}. Evidently, $\chi_T(G) \leq \chi_P(G) \leq \chi(G)$ for all graphs $G$. The \emph{closed neighborhood} of a set of vertices $U$, denoted $N[U]$, is the set of vertices with a neighbor in $U$, together with $U$ itself. 
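The three path-decomposition conditions above lend themselves to a mechanical check. The following small Python sketch (illustrative, not from the paper) verifies whether a given sequence of bags is a path-decomposition of a graph:

```python
def is_path_decomposition(bags, vertices, edges):
    """Check the three path-decomposition conditions for bags B_1,...,B_s."""
    # (1) Every vertex is covered by some bag.
    if set().union(*bags) != set(vertices):
        return False
    # (2) Every edge lies inside some bag.
    if any(not any(u in B and v in B for B in bags) for u, v in edges):
        return False
    # (3) The bags containing each vertex are consecutive in the sequence.
    for v in vertices:
        idx = [t for t, B in enumerate(bags) if v in B]
        if idx != list(range(idx[0], idx[-1] + 1)):
            return False
    return True

# Bags {1,2,4}, {2,3,4} form a path-decomposition of the 4-cycle 1-2-3-4-1.
ok = is_path_decomposition([{1, 2, 4}, {2, 3, 4}], [1, 2, 3, 4],
                           [(1, 2), (2, 3), (3, 4), (4, 1)])  # -> True
```

In this example each bag induces a path of the cycle, so the decomposition has chromatic number $2$, matching the fact that cycles have path-chromatic number $2$.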
For every enumeration $\sigma=v_1,\ldots,v_n$ of the vertices of a graph $G$, we denote by $P_{\sigma}^G$ the sequence $X_1, \ldots, X_n$ of sets of vertices of $G$ such that \[ X_{\ell}=N[ \{v_1, \dots, v_{\ell}\}] \setminus \{v_1,\ldots,v_{\ell-1}\}. \] Observe that every vertex $v_i$ of $G$ belongs to $X_i$, and for $v_iv_j \in E(G)$ with $i<j$, both $v_i$ and $v_j$ belong to $X_i$. Furthermore, for $v_i \in V(G)$, if $m$ is the first index such that $v_i \in N[\{v_m\}]$, then $v_i \in X_{j}$ if and only if $ m\le j \le i$. So, $P_{\sigma}^G$ is indeed a path-decomposition of $G$. Let $\chi(P_{\sigma}^G)$ be the chromatic number of $P_{\sigma}^G$. The following shows that for every graph $G$, there is an enumeration $\sigma$ of $V(G)$ such that $\chi(P_{\sigma}^G)=\chi_P(G)$. \begin{LEM}\label{enumeration} If $G$ has path-chromatic number $k$, then there is some enumeration $\sigma$ of $V(G)$ such that $P_{\sigma}^G$ has chromatic number $k$. \end{LEM} We prove this later in this section. Furthermore, the obvious modification of a standard dynamic programming algorithm (see Section 3 of \cite{SV2009}) yields an $O(n4^n)$-time algorithm to test if $G$ has path-chromatic number at most $k$. We write $[n]$ for $\{1,2,\ldots,n\}$. For a graph $G$ with vertex set $V(G)$, let $R_m(G)$ be the graph with vertex set $\{(i,v) \mid i \in [m], v \in V(G) \cup \{v_0\}\}$ where $v_0 \not \in V(G)$, such that two distinct $(i,v)$ and $(i',v')$ are adjacent if and only if one of the following holds: \begin{itemize} \item $i=i'$ and exactly one of $v$ or $v'$ is $v_0$, or \item $i\neq i'$, $v, v' \in V(G)$ and $v v' \in E(G)$. \end{itemize} For a subset of vertices $S$, we let $\tuple{S}$ denote the subgraph induced by $S$ (the underlying graph will always be clear). We also abbreviate $\chi(\tuple{S})$ by $\chi(S)$. The main theorems of this paper are the following.
For an enumeration $\sigma=v_1,\ldots,v_n$ of $V(G)$ with $P_{\sigma}^G=X_1,X_2,\ldots,X_n$, we say $\sigma$ is {\em special} if \begin{itemize} \item $\chi(P_{\sigma}^G)=\chi_P(G)$ and \item for every $1\le i \le n$ with $\chi(X_i)=\chi_P(G)$, $v_i$ has no neighbor in $\{v_1,\ldots,v_{i-1}\}$. \end{itemize} \begin{THM}\label{mainthm} Let $n$ and $k$ be positive integers, with $k \geq 2$. For every integer $m \geq n+k+2$ and every graph $G$ with $\chi_P(G)=k$ and $|V(G)|=n$, the path-chromatic number of $R_m(G)$ is $k$ if there is a special enumeration of $V(G)$. Otherwise, the path-chromatic number of $R_m(G)$ is $k+1$. \end{THM} Theorem~\ref{mainthm} does not guarantee that applying $R_m$ always increases the path-chromatic number. On the other hand, our second theorem shows that applying $R_m$ \emph{twice} always increases the path-chromatic number. \begin{THM}\label{mainsec} Let $G$ be a graph with $\chi_P(G)=k$ and $|V(G)|=n$. For all integers $\ell$ and $m$ such that $m \ge n+k+2$ and $\ell \ge m (n+1)+k+3$, the path-chromatic number of $R_{\ell}(R_{m}(G))$ is strictly larger than $k$. \end{THM} Theorem \ref{mainthm} easily implies the following corollary. \begin{COR} \label{notequal} For every positive integer $k$, there is an infinite family of $k$-connected graphs $G$ for which $\chi_T(G) \neq \chi_P(G)$. \end{COR} These are the first known examples of graphs with differing tree-chromatic and path-chromatic numbers, which settles a question of Seymour \cite{Seymour16}. Seymour also suspects that there is no function $f: \mathbb{N} \to \mathbb{N}$ for which $\chi_P(G) \leq f(\chi_T(G))$ for all graphs $G$, but unfortunately our results are not strong enough to derive this stronger conclusion. Our results also imply that the family of Mycielski graphs has unbounded path-chromatic number. For $k\geq 2$, the \emph{$k$-Mycielski graph $M_k$} is the graph with $3 \cdot 2^{k-2}-1$ vertices constructed recursively in the following way.
$M_2$ is a single edge and $M_k$ is obtained from $M_{k-1}$ by adding $3\cdot 2^{k-3}$ vertices $w,u_1,u_2,\ldots,u_{3\cdot 2^{k-3}-1}$ and adding edges $wu_i$ for all $i$ and $u_iv_j$ for all $i\neq j$ such that $v_iv_j \in E(M_{k-1})$ where $v_1,v_2,\ldots,v_{3\cdot 2^{k-3}-1}$ are the vertices of $M_{k-1}$. Here we say $u_i$ \emph{corresponds} to $v_i$. It is easy to show (see \cite{Mycielski55}) that for all $k \geq 2$, $M_k$ is triangle-free and $\chi(M_k)=k$. \begin{COR} \label{unbounded} For every positive integer $c$, there exists a positive integer $n(c)$ such that the $n(c)$-Mycielski graph has path-chromatic number larger than $c$. \end{COR} We prove Corollary~\ref{notequal} and Corollary~\ref{unbounded} in Section~\ref{sec:corollaries}. We finish this section by proving Lemma~\ref{enumeration}. \begin{proof}[Proof of Lemma~\ref{enumeration}.] For every path-decomposition $(P,\mathcal{B})=B_1,B_2,\ldots,B_s$ of $G$, we prove that there exists an enumeration $\sigma$ of $V(G)$ such that the chromatic number of $P_{\sigma}^G$ is at most that of $(P,\mathcal{B})$. Let $\sigma=v_1,v_2,\ldots,v_n$ be an enumeration of $V(G)$ such that for all $u,v \in V(G)$, if the last bag of $(P,\mathcal{B})$ containing $u$ comes before the last bag of $(P,\mathcal{B})$ containing $v$ then $u$ comes before $v$ in $\sigma$. It is easy to show that such an enumeration always exists. Let $P^G_{\sigma}=X_1,X_2,\ldots,X_n$ and for $1\le i \le n$, let $B_{\ell(i)}$ be the last bag of $(P,\mathcal{B})$ containing $v_i$. It is enough to prove that for $1\le i \le n$, $B_{\ell(i)}$ contains $X_i$. Suppose $v_j \in X_i \setminus B_{\ell(i)}$. Obviously $i\neq j$, and since $v_j \in X_i$, we obtain $i<j$. Let $B_{f(j)}$ be the first bag of $(P,\mathcal{B})$ containing $v_j$. Since the bags containing $v_j$ are consecutive in $(P,\mathcal{B})$, $v_j \not \in B_{\ell(i)}$ and $\ell(i) \le \ell(j)$, we obtain that $\ell(i)<f(j)$. Let $v_k$ be a neighbor of $v_j$ with $k \le i$.
Such a $v_k$ exists since $v_j \in X_i$. Then, $\ell(k) \le \ell(i) $ since $k\le i$, so $\ell(k) <f(j)$. Therefore, there is no bag of $(P,\mathcal{B})$ containing both $v_k$ and $v_j$ because the last bag containing $v_k$ comes before the first bag containing $v_j$. But this is a contradiction since $v_kv_j \in E(G)$. Thus, $X_i \subseteq B_{\ell(i)}$ as claimed, and we deduce that the chromatic number of $P_{\sigma}^G$ is at most that of $(P,\mathcal{B})$. \end{proof} \section{Deriving the Corollaries} \label{sec:corollaries} Assuming Theorems \ref{mainthm} and \ref{mainsec}, it is straightforward to derive Corollaries \ref{notequal} and \ref{unbounded}, which we do in this section. Let $C_n$ denote the $n$-cycle. \begin{LEM}\label{cycle} For all odd integers $n \ge 5$ and all integers $m \geq n+4$, the path-chromatic number of $R_m(C_n)$ is 3. \end{LEM} \begin{proof} Evidently, $\chi_P(C_n)=2$. Hence, by Theorem~\ref{mainthm}, it is enough to show that every enumeration $\sigma = v_1, \dots, v_n$ of $V(C_n)$ is not special. Let $P_{\sigma}^G=X_1,X_2,\ldots,X_n$. Let $(L,M,R)$ be the partition of $V(C_n)$ such that for every $v\in V(C_n)$, \begin{itemize} \item $v \in L$ if both neighbors of $v$ come before $v$ in $\sigma$, \item $v\in R$ if both neighbors of $v$ come after $v$ in $\sigma$, \item $v \in M$ otherwise. \end{itemize} Suppose $M$ is not empty and let $v_{\ell}$ be a vertex of $M$. Obviously, the chromatic number of $\tuple{X_{\ell}}$ is at least 2 because it contains both $v_{\ell}$ and a neighbor of $v_{\ell}$. However, $v_{\ell}$ has a neighbor appearing before $v_{\ell}$ in $\sigma$, so $\sigma$ is not special. So, we may assume $M$ is empty. Since $L$ and $R$ are both stable sets, it follows that $C_n$ is 2-colorable, a contradiction. This completes the proof. \end{proof} On the other hand, we also have the following easy lemma. 
\begin{LEM} \label{c5treechi} For all integers $n \ge 4$ and all positive integers $m$, $R_m(C_n)$ has tree-chromatic number $2$. \end{LEM} \begin{proof} It clearly suffices to show that $R_m(C_n)$ has tree-chromatic number at most $2$. Let $V(C_n)=\{v_1, \dots, v_n\}$ with $v_j$ adjacent to $v_{j'}$ if and only if $|j-j'| \in \{1,n-1\}$. Let the vertex set of $R_m(C_n)$ be $\{(i,v_j)\mid i \in [m], j\in \{0\} \cup [n]\}$. We now describe a tree-decomposition $(T, \mathcal{B})$ of $R_m(C_n)$. Let $T$ be a star with a center vertex $c$ and $m$ leaves $\ell(1),\ldots,\ell(m)$. Let \begin{itemize} \item $B_c=\{(i,v_j) \mid i \in [m], j\in \{2,3,\ldots,n\}\}$, \item $B_{\ell(s)}=\{(s,v_j) \mid j \in \{0,1,2,\ldots,n\}\} \cup \{(i,v_j)\mid i \in [m], j\in \{2,n\}\}$. \end{itemize} We claim that $(T,\mathcal{B})$ is a tree-decomposition of $R_m(C_n)$ where $\mathcal{B}=\{ B_t \mid t \in V(T)\}$. For $i \in [m]$ and $v_j \in V(C_n)\cup \{v_0\}$, the vertex $(i,v_j)$ of $R_m(C_n)$ belongs to $B_{\ell(i)}$. If two distinct vertices $(i,v_j)$ and $(i',v_{j'})$ of $R_m(C_n)$ are adjacent, then either $i=i'$ and one of $v_j$ and $v_{j'}$ is $v_0$ or $i\neq i'$, $j,j'\in [n]$ and $|j-j'| \in \{1,n-1\}$. If the first case holds, then both vertices belong to $B_{\ell(i)}$. If the second case holds, then if either $v_j=v_1$ or $v_{j'}=v_1$ then both vertices belong to $B_{\ell(i)}$, and if neither $v_j$ nor $v_{j'}$ is $v_1$, then both belong to $B_c$. Lastly, for $(i,v_j) \in R_m(C_n)$, if $v_j \notin \{v_0,v_1\}$ then $(i,v_j)$ belongs to $B_c$, so $\{t \mid (i,v_j) \in B_t\}$ automatically induces a subtree in $T$. If $v_j$ is either $v_0$ or $v_1$, then only $B_{\ell(i)}$ contains $(i,v_j)$. Hence, $(T,\mathcal{B})$ is a tree-decomposition, as claimed. The set of vertices $(i,v_j)$ of $B_c$ with even $j$ (or odd $j$) is independent. Hence, $\chi(B_c)$ is at most $2$. 
Moreover, for $i\in [m]$, both of $\{(i,v)\mid v \in V(C_n)\}$ (note $v_0 \notin V(C_n)$) and $B_{\ell(i)}\setminus \{(i,v)\mid v\in V(C_n)\}$ are independent, so $\chi(B_{\ell(i)})$ is at most $2$. We conclude that $(T,\mathcal{B})$ has chromatic number at most $2$. This completes the proof. \end{proof} \begin{proof}[Proof of Corollary~\ref{notequal}] For every odd integer $n \ge 5$ and every integer $m\ge n+4$, Lemma~\ref{cycle} and Lemma~\ref{c5treechi} show that the tree-chromatic number and path-chromatic number of $R_m(C_{n})$ are different. To complete the proof, we prove that $R_m(C_{n})$ is $k$-connected for every $n \ge k$ and $m \geq n+4$. We prove that for every set $U$ of vertices of $R_m(C_{n})$ of size at most $k-1$, $R_m(C_{n}) - U$ is connected. Again, let $V(C_n)=\{v_1,v_2,\ldots,v_n\}$ with $v_j$ adjacent to $v_{j'}$ if $|j-j'| \in \{1,n-1\}$ and $V(R_m(C_n))=\{(i,v_j)\mid i \in [m], j \in \{0\} \cup [n]\}$. Since $m>|U|$, there exists $i^* \in [m]$ such that no vertex in $\{(i^*,v_j)\mid j \in \{0\}\cup [n]\}$ belongs to $U$. Without loss of generality, $i^*=1$. We claim that for every vertex $(i,v_j)$ of $R_m(C_{n}) - U$, there is a path from $(1,v_0)$ to $(i,v_j)$. We may assume that $(i,v_j) \neq (1,v_0)$. If $i=1$, then $(1,v_0),(1,v_j)$ is a path. Hence, we may assume that $i\neq 1$. If $v_j\neq v_0$, then $(1,v_0), (1,v_{j+1}), (i,v_j)$ is a path, where $(1,v_{n+1})=(1,v_1)$. If $v_j=v_0$, there exists $j' \in [n]$ such that $(i,v_{j'}) \notin U$ since $n > |U|$. Then $(1,v_0),(1,v_{j'+1}),(i,v_{j'}), (i,v_0)$ is a path. Therefore, $R_m(C_n) - U$ is connected. This completes the proof. \end{proof} Recall that $M_k$ denotes the $k$-Mycielski graph. \begin{LEM}\label{Mycielskian} For all positive integers $n,m$ and all integers $r\ge m+n$, $M_r$ contains $R_m(M_n)$ as an induced subgraph. 
\end{LEM} \begin{proof} Take a sequence $G_n, G_{n+1},\cdots,G_r$ of induced subgraphs of $M_r$ where $G_i$ is isomorphic to $M_i$ and $G_i$ is an induced subgraph of $G_{i+1}$ for $i=n,n+1,\ldots,r-1$. Let $V(G_n)= \{v^n_1,v^n_2,\ldots,v^n_{3\cdot 2^{n-2}-1}\}$, and for $s > n$, let $V(G_s) \setminus V(G_{s-1})=\{v^s_0,v^s_1,\ldots,v^s_{3\cdot 2^{s-3}-1}\}$ where $v^s_0$ is complete to the other vertices in this set and $v^s_{i}$ corresponds to $v^n_i$ for $ 1\le i \le 3\cdot 2^{n-2}-1$. Then, the graph induced by $\{v^x_y \mid n+1\le x \le m+n , 0\le y \le 3 \cdot 2^{n-2}-1\}$ is isomorphic to $R_m(M_n)$. \end{proof} Lemma \ref{Mycielskian} and Theorem \ref{mainsec} together imply Corollary \ref{unbounded}. Thus, it only remains to prove Theorems \ref{mainthm} and \ref{mainsec}, which we do in the remaining section. \section{Proofs of Theorems \ref{mainthm} and \ref{mainsec}} In this section, we prove Theorem \ref{mainthm} and Theorem \ref{mainsec}. Throughout this section, $G$ is a graph with $n$ vertices and $R_m(G)$ has vertex set $\{(i,v) \mid i \in [m], v\in V(G)\cup\{v_0\}\}$. For $I \subseteq [m]$ and $U \subseteq V(G)\cup\{v_0\}$, we set $[I,U]=\{(i,v) \mid i \in I, v\in U\}$. We start with the following lemmas. \begin{LEM}\label{isomorphic} For $I \subseteq [m]$ and $U\subseteq V(G)$, suppose $|I| \ge \chi(U)$. Then there exists a map $f:U \to [I,U]$ such that \begin{itemize} \item for every $v \in U$, $f(v)$ belongs to $[I,\{v\}]$, and \item $f$ is an isomorphism from $\tuple{U}$ to $\tuple{f(U)}$. \end{itemize} Furthermore, for all $i^* \in [m]\setminus I$ and all $v^* \in V(G)\setminus U$, $\tuple{[I,U]\cup \{(i^*,v^*)\}}$ contains an isomorphic copy of $\tuple{U\cup \{v^*\}}$ as an induced subgraph. \end{LEM} \begin{proof} Let $\chi(U)=c$. Let $\mathcal{U}=(U_1,U_2,\ldots,U_c)$ be a partition of $U$ into independent sets of $G$. Take $c$ distinct elements from $I$, say $i_1,i_2,\ldots,i_c$, and for $v\in U$, let $f(v)=(i_s,v)$ if $v\in U_s$. 
We claim that $f$ is an isomorphism from $\tuple{U}$ to $\tuple{f(U)}$. Let $v$ and $v'$ be distinct vertices in $U$. If $v$ and $v'$ are adjacent, they are contained in distinct classes of $\mathcal{U}$, so $f(v)$ and $f(v')$ are adjacent by the definition of $R_m(G)$. If $v$ and $v'$ are non-adjacent, there are no edges between $[I,\{v\}]$ and $[I,\{v'\}]$. Hence, $f(v)$ and $f(v')$ are non-adjacent. Thus, $f$ is an isomorphism from $\tuple{U}$ to $\tuple{f(U)}$. For the last part, let $i^* \in [m]\setminus I$ and $v^* \in V(G)\setminus U$. Let $f^*$ be the map obtained from $f$ by adding $f^*(v^*)=(i^*,v^*)$. Since $i^*\not \in I$, it easily follows that $f^*$ is an isomorphism from $\tuple{U\cup \{v^*\}}$ to $\tuple{f^*(U\cup \{v^*\})}$. This completes the proof. \end{proof} When considering $k$-colorings of a graph, we always use $[k]$ for the set of colors. \begin{LEM}\label{coloring} For $I \subseteq [m]$ and $U \subseteq V(G)$, let $\chi( U)=c$. If $|I|\geq c$, the chromatic number of $\tuple{[I,U]}$ is $c$. Moreover, if $|I| > c$, then for every $c$-coloring $C$ of $\tuple{[I,U]}$ and every $i \in I$, $[\{i\},U]$ uses all $c$ colors of $C$. In other words, $C([\{i\},U])=[c]$ for every $i\in I$. \end{LEM} \begin{proof} Let $(U_1,U_2,\ldots,U_c)$ be a partition of $U$ into independent sets of $G$. Then, $([I,U_1],[I,U_2],\ldots,[I,U_c])$ is a partition of $[I,U]$ and each set is independent in $\tuple{[I,U]}$. Hence, the chromatic number of $\tuple{[I,U]}$ is at most $c$. On the other hand, $\chi([I,U]) \geq c$ follows from Lemma \ref{isomorphic}. Thus, the chromatic number of $\tuple{[I,U]}$ is $c$. For the second part, let $C:[I,U] \to [c]$ be a $c$-coloring of $\tuple{[I,U]}$. Fix $i \in I$. Since $|I\setminus i|$ is still greater than or equal to $c$, we can apply Lemma~\ref{isomorphic} to $[I\setminus i,U]$. Let $f$ be a map from $U$ to $[I\setminus i,U]$ as in the statement of Lemma~\ref{isomorphic}.
Let $F=f(U)$, and $C_F$ be the restriction of $C$ on $F$. As $\tuple{f(U)}$ is not $(c-1)$-colorable, for each color $\alpha \in [c]$, there must be a vertex $v_{\ell_{\alpha}} \in U$ such that $f(v_{\ell_{\alpha}}) \in C_F^{-1}(\alpha)$ and $f(v_{\ell_{\alpha}})$ has a neighbor in $C_F^{-1}(\beta)$ for every $\beta \in [c]\setminus \alpha$. Then, $(i,v_{\ell_{\alpha}})$ also has a neighbor in $C_F^{-1}(\beta)$ for every $\beta \in [c]\setminus \alpha$, so $C((i,v_{\ell_{\alpha}}))$ is $\alpha$. Hence, $[\{i\},U]$ sees all colors, which proves the second part. \end{proof} \begin{LEM}\label{special} For a graph $G$ with path-chromatic number $k \geq 2$, let $\sigma=v_1,v_2,\ldots,v_n$ be a special vertex enumeration of $G$. Let $P_{\sigma}^{G}=X_1,X_2,\ldots,X_n$. For $j\in [n]$, if $\chi(X_{j})=k$ then $\chi(X_j \setminus v_j) = k-1$. \end{LEM} \begin{proof} It is obvious that $\chi(X_j \setminus v_j) \ge k-1$. We may assume that $X_j \setminus v_j \neq \emptyset$. Let $j'$ be the smallest index such that $v_{j'} \in X_j\setminus v_j$. Note that $j'>j$ and since $v_{j'}$ is contained in $X_{j}$, it has a neighbor in $\{v_1,\ldots,v_{j}\}$. Hence, by the definition of a special vertex enumeration, $\chi(X_{j'}) \leq k-1$. However, by the choice of $j'$, $X_{j} \setminus v_j$ is a subset of $X_{j'}$. Thus, $\chi( X_{j} \setminus v_j)\le k-1$, as required. \end{proof} For an enumeration $\sigma$ of vertices and a vertex $v$, let $\sigma(<v)$ denote the set of vertices which come before $v$ in $\sigma$ and $\sigma(\le v)=\sigma(<v)\cup \{v\}$. \begin{LEM}\label{bound} Let $m\ge n+1$ and $\mu=(i_1,v_{j_1}),(i_2,v_{j_2}),\ldots, (i_{m(n+1)}, v_{j_{m(n+1)}})$ be an enumeration of the vertices of $R_m(G)$. Let $k$ be the chromatic number of $P_{\mu}^{R_m(G)}$. For each $v\in V(G)$, let $t(v)$ be the vertex in $[[m],\{v\}]$ which comes first in $\mu$. 
Suppose that for all $1\le j < j' \le n$, $t(v_j)$ comes before $t(v_{j'})$ in $\mu$ and let $\sigma=v_1,v_2,\ldots,v_n$ be the corresponding enumeration of $V(G)$. Let $P_{\sigma}^G=X_1,X_2,\ldots,X_n$. Then, \begin{itemize} \item[(1)] the chromatic number of $P_{\sigma}^G$ is at most $k$, and \item[(2)] if $\chi( X_{\ell})=k$ for some $\ell \in [n]$, then $\mu( \leq t(v_{\ell}))$ contains at most $k$ vertices in $[[m],\{v_0\}]$. \end{itemize} \begin{proof} For all $v \in V(G)$, let $f(v)\in [m]$ be such that $t(v)=(f(v),v)$. Let $P_{\mu}^{R_m(G)} = Y_{(i_1,v_{j_1})},Y_{(i_2,v_{j_2})},\ldots, Y_{(i_{m(n+1)}, v_{j_{m(n+1)}})}$. For the first statement, it suffices to show that for all $\ell \in [n]$, $\tuple{Y_{(f(v_\ell),v_{\ell})}}$ contains $\tuple{X_{\ell}}$ as an induced subgraph. Let $I=[m] \setminus \{f(v_1),\ldots, f(v_\ell)\}$. Then, $|I| \ge m-\ell \ge n+1-\ell =1+(n-\ell) > |X_{\ell}\setminus v_\ell| \ge \chi ( X_{\ell}\setminus v_\ell)$. Moreover, $f(v_\ell) \not \in I$ and $v_{\ell} \not \in X_{\ell}\setminus v_{\ell}$. By Lemma~\ref{isomorphic}, $\tuple{[I, X_\ell \setminus v_\ell] \cup \{(f(v_\ell), v_\ell)\}}$ contains $\tuple{X_{\ell}}$ as an induced subgraph. Since $t(v_j)$ comes before $t(v_{j'})$ in $\mu$ for all $1\le j < j' \le n$, it follows that $Y_{(f(v_\ell),v_{\ell})}$ contains $[I, X_\ell \setminus v_\ell] \cup \{(f(v_\ell), v_\ell)\}$, as required. For the second statement, suppose $\mu( \leq t(v_{\ell}))$ has exactly $r$ vertices in $[[m],\{v_0\}]$, with $r \geq k+1$. By relabeling, we may assume that $(i,v_0)$ is in $\mu( \leq t(v_{\ell}))$ for all $i \in [r]$ and that $(r,v_0)$ appears last in $\mu$ among them. Observe that $Y_{(r,v_0)}$ contains $\{(r,v_0)\} \cup [[r],X_{\ell}]$. Let $C$ be a $k$-coloring of $Y_{(r,v_0)}$. By Lemma \ref{coloring}, since $r > \chi(X_{\ell})$, for every $k$-coloring of $\tuple{[[r],X_{\ell}]}$, $[\{r\},X_{\ell}]$ sees all $k$ colors. Hence $C([\{r\},X_{\ell}])=[k]$.
But then there is no available color for $(r, v_0)$, which yields a contradiction. This completes the proof. \end{proof} We are now ready to prove Theorem \ref{mainthm}, which we restate for the reader's convenience. \begin{reptheorem}{mainthm} Let $n$ and $k$ be positive integers, with $k \geq 2$. For every integer $m \geq n+k+2$ and every graph $G$ with $\chi_P(G)=k$ and $|V(G)|=n$, the path-chromatic number of $R_m(G)$ is $k$ if there is a special enumeration of $V(G)$. Otherwise, the path-chromatic number of $R_m(G)$ is $k+1$. \end{reptheorem} \begin{proof} By Lemma \ref{isomorphic}, $R_m(G)$ contains $G$ as an induced subgraph, so $\chi_P(R_m(G)) \ge \chi_P(G) =k$. We break the proof up into a series of claims. \begin{clm} If the path-chromatic number of $R_m(G)$ is $k$, then there exists a special enumeration of the vertices of $G$. \end{clm} \begin{subproof}[Subproof] Let $\mu=(i_1,v_{j_1}),(i_2,v_{j_2}),\ldots, (i_{m(n+1)}, v_{j_{m(n+1)}})$ be an enumeration of the vertices of $R_m(G)$ such that $P_{\mu}^{R_m(G)}$ has chromatic number $k$. Let $P_{\mu}^{R_m(G)}=Y_{(i_1,v_{j_1})},Y_{(i_2,v_{j_2})},\ldots, Y_{(i_{m(n+1)}, v_{j_{m(n+1)}})}$. For each $v \in V(G)$, let $t(v)$ be the vertex in $[[m],\{v\}]$ that appears first in $\mu$. By renaming the vertices in $G$, we may assume that $t(v_j)$ comes before $t(v_{j'})$ in $\mu$ for all $1\le j <j' \le n$. Let $\sigma=v_1,v_2,\ldots,v_n$ be the corresponding enumeration of $V(G)$. We claim that $\sigma$ is a special enumeration of $V(G)$. For each $v \in V(G)$, let $f(v)\in [m]$ be such that $t(v)=(f(v),v)$. By (1) of Lemma~\ref{bound}, $P_{\sigma}^G$ has chromatic number at most $k$. Hence, $\chi(P_{\sigma}^G)=k$. Let $P_{\sigma}^G=X_1,X_2,\ldots,X_n$. Suppose $\sigma$ is not special. Then, there exists $\ell \in [n]$ such that $\chi(X_{\ell})=k$ and $v_{\ell}$ has a neighbor in $\{v_1,v_2,\ldots,v_{\ell-1}\}$. Let $I_0 =\{i \mid (i,v_0) \in \mu( \leq t(v_{\ell}))\}$. By (2) of Lemma~\ref{bound}, $|I_0| \le k$.
Let $I=[m] \setminus (I_0 \cup \{f(v_1),\ldots,f(v_{\ell})\})$. Since $|I|\ge m-k -\ell \ge n-\ell+2 > |X_{\ell}| \geq \chi(X_{\ell})$, it follows that $\chi ( [I,X_{\ell}])=k$ and for every $k$-coloring of $\tuple{[I,X_{\ell}]}$, $[\{i\},X_{\ell}]$ sees all colors for every $i \in I$ by Lemma \ref{coloring}. Let $(i,v)$ be the first vertex of $[I,X_{\ell} \cup \{v_0\}]$ that appears in $\mu$. Either $v=v_0$ or $(i,v)$ is adjacent to $(i,v_0)$. In either case, $Y_{(i,v)}$ contains $[I,X_{\ell}] \cup \{(i,v_0)\}$. Since $P_{\mu}^{R_m(G)}$ has chromatic number $k$, there exists a $k$-coloring $C$ of $\tuple{Y_{(i,v)}}$. Note that $C([\{i\},X_{\ell}])=[k]$. But $(i,v_0)$ is complete to $[\{i\},X_{\ell}]$, so there is no available color for $(i,v_0)$, a contradiction. \end{subproof} Let $\sigma=v_1,v_2,\ldots,v_n$ be an enumeration of $V(G)$ with $\chi(P_{\sigma}^G)=k$. Let $\mu$ be the following enumeration of $V(R_m(G))$: \[ (1,v_1),\dots,(m,v_1),\dots,(1,v_n), \dots, (m,v_n),(1,v_0),\dots,(m,v_0). \] Let $P_{\mu}^{R_m(G)}=Y_{(1,v_1)},Y_{(2,v_1)},\dots,Y_{(m,v_1)},\ldots,Y_{(m,v_n)},Y_{(1,v_0)},\ldots,Y_{(m,v_0)}$. \begin{clm} \label{mainclaim} For all $i \in [m]$ and all $j \in [n]$, the chromatic number of $\tuple{Y_{(i,v_j)}}$ is at most $\chi(X_j)+1$. \end{clm} \begin{subproof}[Subproof] Suppose $\chi(X_j)=c$ and let $(U_1,U_2,\ldots,U_c)$ be a partition of $X_j$ into independent sets of $G$. Observe that $Y_{(i,v_j)}$ is a subset of $[[m],X_j\cup \{v_0\}]$, and $$ [[m],X_j\cup \{v_0\}] = [[m],\{v_0\}] \cup \left(\bigcup_{p=1}^c [[m],U_p]\right). $$ Each set in the union is independent in $R_m(G)$, thus it follows that $\chi(Y_{(i,v_j)}) \leq c+1$. \end{subproof} \begin{clm} The chromatic number of $P_{\mu}^{R_m(G)}$ is at most $k+1$. \end{clm} \begin{subproof}[Subproof] For every $i \in [m]$, $Y_{(i,v_0)}$ is a subset of $[[m],\{v_0\}]$ which is an independent set of $R_m(G)$. Hence, $\chi(Y_{(i,v_0)})\le 1$.
By Claim~\ref{mainclaim}, the chromatic number of $\tuple{Y_{(i,v_j)}}$ is at most $k+1$ for $i\in [m], j\in [n]$. Thus, $\chi(P_{\mu}^{R_m(G)}) \le k+1$, as required. \end{subproof} \begin{clm} If $\sigma$ is special, then $P_{\mu}^{R_m(G)}$ has chromatic number $k$. \end{clm} \begin{subproof}[Subproof] Fix $i^* \in [m]$ and $j^* \in \{0\} \cup [n]$. We will show that $\chi(Y_{({i}^*,v_{j^{*}})}) \le k$. If $j^*=0$, then $\chi(Y_{({i}^*,v_{j^{*}})}) =1$, so we may assume $j^*\neq 0$. By Claim~\ref{mainclaim}, if $\chi(X_{j^{*}})\le k-1$, then $\chi(Y_{({i}^*,v_{j^{*}})}) \le k$. Hence, we may assume that $\chi(X_{j^{*}})=k$ and that $v_{j^{*}}$ has no neighbor in $\{v_1,v_2,\ldots,v_{j^*-1}\}$ by the definition of a special enumeration. By Lemma~\ref{special}, there is a partition $(U_1^*,U_2^*,\ldots,U_{k-1}^*)$ of $X_{j^{*}} \setminus v_{j^{*}}$ into independent sets of $G$. For $i>i^*$, ${(i,v_{j^{*}})}$ has no neighbor in $\mu(<(i^*,v_{j^{*}}))$ since $v_{j^{*}}$ has no neighbor in $\{v_1,v_2,\ldots,v_{j^*-1}\}$. So, $Y_{(i^*,v_{j^{*}})}$ is contained in $[[m],(X_{j^{*}}\setminus v_{j^{*}})\cup \{v_0\}] \cup \{(i^*,v_{j^{*}})\}$. Let $C$ be the map from $Y_{(i^*,v_{j^{*}})}$ to $[k]$ defined as \begin{itemize} \item for $i \neq i^*$, $C((i,v))=s$ for all $v \in U_s^*$ and $C((i,v_0))=k$, \item $C((i^*,v))=k$ for all $v \in X_{j^{*}}$, and \item $C((i^*,v_0))=k-1$. \end{itemize} It is easy to see that $C$ is a $k$-coloring of $\tuple{Y_{(i^*,v_{j^{*}})}}$. Thus, $P_{\mu}^{R_m(G)}$ has chromatic number $k$, as required. \end{subproof} This last claim completes the entire proof. \end{proof} We finish the paper by proving Theorem \ref{mainsec}. \begin{reptheorem}{mainsec} Let $G$ be a graph with $\chi_P(G)=k$ and $|V(G)|=n$. For all integers $\ell$ and $m$ such that $m \ge n+k+2$ and $\ell \ge m (n+1)+k+3$, the path-chromatic number of $R_{\ell}(R_{m}(G))$ is strictly larger than $k$.
\end{reptheorem} \begin{proof} Since $m\ge n+k+2$, Theorem~\ref{mainthm} shows that $R_{m}(G)$ has path-chromatic number either $k$ or $k+1$. If $\chi_P(R_{m}(G)) =k+1$, then since $\ell \ge m(n+1)+k+3 = |V(R_{m}(G))| +(k+1)+2$, the path-chromatic number of $R_{\ell}(R_{m}(G))$ is either $k+1$ or $k+2$, which is strictly larger than $k$. So, we may assume that $\chi_P(R_{m}(G)) = k$. To prove that $\chi_P(R_{\ell}(R_{m}(G))) > \chi_P(R_{m}(G))$, it suffices to show that there is no special vertex enumeration of $R_{m}(G)$ by Theorem~\ref{mainthm}. Towards a contradiction, let $\mu=(i_1,v_{j_1}),(i_2,v_{j_2}),\ldots , (i_{m(n+1)}, v_{j_{m(n+1)}})$ be a special vertex enumeration of $R_{m}(G)$. Let $P_{\mu}^{R_{m}(G)} =Y_{(i_1,v_{j_1})},Y_{(i_2,v_{j_2})},\ldots, Y_{(i_{m(n+1)}, v_{j_{m(n+1)}})}$. For each vertex $v_j$ of $G$, let $t(v_j)$ be the vertex that appears first in $\mu$ among $[[m],\{v_j\}]$. We may assume that $t(v_j)$ comes before $t(v_{j'})$ in $\mu$ for every $1\le j < j' \le n$. Let $f(v_j) \in [m]$ be such that $t(v_j)=(f(v_j),v_j)$. Let $\sigma=v_1,\ldots,v_n$ and $P_{\sigma}^G=X_1,X_2,\ldots,X_n$. By (1) of Lemma~\ref{bound}, $P_{\sigma}^G$ has chromatic number $k$. Choose $j \in [n]$ such that $\chi( X_j)=k$. We claim that $\chi(X_j \setminus v_j)=k-1$. Let $I_0=\{i \in [m] \mid (i,v_0)\in \mu(<t(v_j))\}$ and $I=[m] \setminus (I_0 \cup \{f(v_1),f(v_2),\ldots, f(v_j)\})$. By (2) of Lemma~\ref{bound}, $|I_0|\le k$. So, $|I|\ge m - k -j \ge (n-j+1)+1 > |X_j \setminus v_j| \ge \chi(X_j \setminus v_j)$. By Lemma~\ref{isomorphic}, $$ \chi([I,X_j \setminus v_j]) \ge \chi(X_j \setminus v_j ). $$ Note that $Y_{t(v_j)}\setminus t(v_j)$ contains $[I,X_j\setminus v_j]$. So, if $\chi(Y_{t(v_j)}) <k$ then $\chi([I,X_j\setminus v_j]) <k$, and if $\chi(Y_{t(v_j)}) =k$ then by Lemma~\ref{special}, $\chi([I,X_j\setminus v_j])<k$ as well. In either case, $$ k-1 \ge \chi([I,X_j \setminus v_j]). $$ Combining these inequalities, we obtain $k-1 \ge \chi(X_j \setminus v_j )$.
Moreover, it is obvious that $\chi(X_j \setminus v_j) \ge k-1$ since $\chi(X_j) =k$. Therefore, $\chi(X_j \setminus v_j) = k-1$. Again, as $|I|> \chi(X_j \setminus v_j)$, it follows by Lemma~\ref{coloring} that for every $(k-1)$-coloring of $\tuple{[I,X_j\setminus v_j]}$ and every $i \in I$, $[\{i\}, X_j\setminus v_j]$ sees all $k-1$ colors. Let $(i,v)$ be the first vertex of $[I,X_j \cup \{v_0\}]$ that appears in $\mu$. Observe that $Y_{(i,v)}$ contains $[I,X_j \setminus v_j] \cup \{(i,v_0)\}$. For every $(k-1)$-coloring of $\tuple{[I,X_j \setminus v_j]}$, $[\{i\},X_j\setminus v_j]$ sees all $k-1$ colors, so $\tuple{[I,X_j \setminus v_j] \cup \{(i,v_0)\}}$ is not $(k-1)$-colorable since $(i,v_0)$ is complete to $[\{i\},X_j\setminus v_j]$. Thus, $\chi([I,X_j \setminus v_j] \cup \{(i,v_0)\})=k$, and so $\chi(Y_{(i,v)})=k$. Since $\mu$ is special, $(i,v)$ has no neighbor in $\mu(<(i,v))$. So, $v$ is either $v_0$ or $v_j$. By Lemma~\ref{special}, $\tuple{Y_{(i,v)} \setminus (i,v)}$ is $(k-1)$-colorable. If $v=v_0$ then $Y_{(i,v)} \setminus (i,v)$ contains $[I,X_j]$. By Lemma~\ref{isomorphic}, $\tuple{[I,X_j]}$ contains $\tuple{X_j}$ as an induced subgraph, contradicting that $\tuple{Y_{(i,v)}\setminus (i,v)}$ is $(k-1)$-colorable. If $v=v_j$ then $Y_{(i,v)} \setminus (i,v)$ contains $[I,X_j \setminus v_j] \cup \{(i,v_0)\}$. Again, the chromatic number of $\tuple{[I,X_j \setminus v_j] \cup \{(i,v_0)\}}$ is $k$, a contradiction. Therefore, $\mu$ is not special. This completes the proof. \end{proof} \textbf{Acknowledgments.} We would like to thank Jan-Oliver Fröhlich, Irene Muzi, Claudiu Perta and Paul Wollan for many helpful discussions. We would also like to thank the anonymous referees for numerous suggestions for improving the paper.
\section{Introduction} Consider an undirected graph $G = (V, E)$ with a set $V$ of an even number of vertices and a set $E$ of unweighted edges. Two graph bisection problems are considered in this paper: the max-bisection problem and the min-bisection problem. The goal of the max-bisection problem is to divide $V$ into two subsets $A$ and $B$ of the same size so as to maximize the number of edges between $A$ and $B$, while the goal of the min-bisection problem is to minimize the number of edges between $A$ and $B$. Both bisection problems are NP-hard for general graphs \cite{Ref26,NPhard2007}. These classical combinatorial optimization problems are special cases of graph partitioning \cite{Ref25}. Graph partitioning has many applications, for example, divide-and-conquer algorithms \cite{Ref50}, compiler optimization \cite{Ref39}, VLSI circuit layout \cite{Ref8}, load balancing \cite{Ref33}, image processing \cite{Ref62}, computer vision \cite{Ref46}, distributed computing \cite{Ref51}, and route planning \cite{Ref19}. In practice, there are many general-purpose heuristics for graph partitioning, e.g. \cite{Ref31,Ref58,Ref59}, as well as heuristics that handle particular graph classes, such as \cite{Ref53,Ref16,Ref59}. There are also many practical exact algorithms for graph bisection that use the branch-and-bound approach \cite{Ref47,Delling2014}. These approaches make expensive use of time and space to obtain lower bounds \cite{Ref1,Ref3,Ref30,Delling2014}. On conventional computers, approximation algorithms have gained much attention for tackling the max-bisection and the min-bisection problems. The max-bisection problem, for example, admits an approximation ratio of 0.7028 \cite{Feige2006}, which remained the best known ratio for a long time and was obtained by introducing the RPR$^2$ rounding technique into semidefinite programming (SDP) relaxation.
In \cite{Guruswami2011}, a polynomial-time algorithm is proposed that, given a graph admitting a bisection cutting a fraction $1-\varepsilon$ of edges, finds a bisection cutting a $(1-g(\varepsilon))$ fraction of edges, where $g(\varepsilon) \to 0$ as $\varepsilon \to 0$. A 0.85-approximation algorithm for the max-bisection is obtained in \cite{Raghavendra2012}. In \cite{ZiXu2014}, the SDP relaxation and the RPR$^2$ technique of \cite{Feige2006} have been used to obtain a performance curve as a function of the ratio of the optimal SDP value over the total weight, through a finer analysis under the assumption of convexity of the RPR$^2$ function. For the min-bisection problem, the best known approximation ratio is $O(\log n)$ \cite{Ref55}, while some restricted graph classes admit polynomial-time solutions, such as grids without holes \cite{Ref22} and graphs with bounded tree width \cite{Ref37}. The aim of this paper is to propose an algorithm that represents the two bisection problems as Boolean constraint satisfaction problems, where the set of edges is represented as a set of constraints. The algorithm prepares a superposition of all possible graph bisections using an amplitude amplification technique, evaluates the set of constraints for all possible bisections simultaneously, and then amplifies the amplitudes of the best bisections, i.e. those achieving maximum/minimum satisfaction of the set of constraints, using a novel amplitude amplification technique that applies an iterative partial negation and partial measurement. The proposed algorithm targets a general graph: it runs in $O(m^2)$ for a graph with $m$ edges and, in the worst case, runs in $O(n^4)$ for a dense graph with $n$ vertices and a number of edges close to $m = {\textstyle{{n(n - 1)} \over 2}}$, achieving an arbitrarily high probability of success of $1-\epsilon$ for small $\epsilon>0$ using polynomial space resources.
The paper is organized as follows: Section 2 shows the data structure used to represent a graph bisection problem as a Boolean constraint satisfaction problem. Section 3 presents the proposed algorithm with an analysis of its time and space requirements. Section 4 concludes the paper. \section{Data Structures and Graph Representation} Several optimization problems, such as the max-bisection and the min-bisection problems, can be formulated as Boolean constraint satisfaction problems \cite{CombOptBook,BAZ05} where a feasible solution is a solution with as many variables set to 0 as variables set to 1, i.e. a balanced assignment. For a graph $G$ with $n$ vertices and $m$ edges, consider $n$ Boolean variables $v_0, \ldots , v_{n-1}$ and $m$ constraints obtained by associating with each edge $(a, b) \in E$ the constraint $c_l=v_a\oplus v_b$, with $l=0,1,\ldots,m-1$. The max-bisection problem then consists of finding a balanced assignment that maximizes the number of constraints equal to logic-1 among the $m$ constraints, while the min-bisection problem consists of finding a balanced assignment that maximizes the number of constraints equal to logic-0 among the $m$ constraints. A Boolean variable set to 0 means that the associated vertex belongs to the first partition, and a Boolean variable set to 1 means that the associated vertex belongs to the second partition.
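This formulation is straightforward to state in code. The following minimal classical sketch (the helper names \texttt{cut\_size} and \texttt{is\_balanced} are illustrative, not from the paper) counts the XOR constraints equal to logic-1, which is exactly the number of edges crossing the bisection encoded by an assignment:

```python
def cut_size(x, edges):
    """Number of constraints c_l = v_a XOR v_b equal to logic-1,
    i.e. the number of edges crossing the bisection encoded by x."""
    return sum(x[a] ^ x[b] for a, b in edges)

def is_balanced(x):
    """A feasible assignment has as many variables set to 0 as set to 1."""
    return 2 * sum(x) == len(x)
```

For a 4-cycle with edges $(0,1),(1,2),(2,3),(3,0)$, the balanced assignment $(0,1,0,1)$ cuts all four edges.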
\begin{center} \begin{figure*}[htbp] \begin{center}
% Figure artwork (xfig/picture environment) omitted: the extracted picture source was corrupted beyond repair.
\end{center} \caption{(a) A random graph with 8 vertices and 12 edges, (b) A max-bisection instance for the graph in (a) with 10 edges connecting the two subsets, and (c) A min-bisection instance for the graph in (a) with 3 edges connecting the two subsets.} \label{graphex}
\end{figure*} \end{center} For example, consider the graph $G$ shown in Figure \ref{graphex}(a). Let $G = (V ,E)$, where, \begin{equation} \begin{array}{l} V=\{0,1,2,3,4,5,6,7\},\\ E=\{(0,1),(0,2),(0,3),\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,(1,2),(1,7),(2,3),\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,(3,4),(3,6),(4,5),\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,(4,6),(5,7),(6,7)\}.\\ \end{array} \end{equation} Assume that each vertex $a \in V$ is associated with a Boolean variable $v_a$, then the set of vertices $V$ can be represented as a vector $X$ of Boolean variables as follows, \begin{equation} X=(v_0,v_1,v_2,v_3,v_4,v_5,v_6,v_7), \end{equation} \noindent and if each edge $(a, b) \in E$ is associated with a constraint $c_l=v_a\oplus v_b$ then the set of edges $E$ can be represented as a vector $Z$ of constraints as follows, \begin{equation} Z=(c_0,c_1,c_2,c_3,c_4,c_5,c_6,c_7,c_8,c_9,c_{10},c_{11}), \end{equation} \noindent such that, \begin{equation} \begin{array}{l} c_0= (v_0\oplus v_1), c_1= (v_0\oplus v_2), c_2= (v_0\oplus v_3),\\ c_3= (v_1\oplus v_2), c_4= (v_1\oplus v_7), c_5= (v_2\oplus v_3), \\ c_6= (v_3\oplus v_4), c_7= (v_3\oplus v_6), c_8= (v_4\oplus v_5), \\ c_9= (v_4\oplus v_6), c_{10}= (v_5 \oplus v_7), c_{11}= (v_6\oplus v_7).\\ \end{array} \end{equation} In general, a bisection $G_P$ for the graph $G$ can be represented as $G_p = (x ,z(x))$ such that each vector $x \in \{0,1\}^n $ of variable assignments is associated with a vector $z(x)\in \{0,1\}^m$ of constraints evaluated as functions of the variable assignment $x$. 
In the max-bisection and the min-bisection problems, the vector $x$ of variable assignments is restricted to be balanced, so there are $M = \left( {\begin{array}{*{20}c} n \\ {{\textstyle{n \over 2}}} \\ \end{array}} \right)$ feasible variable assignments among the $N=2^n$ possible variable assignments. The solution of the max-bisection problem is the variable assignment associated with a vector of constraints that contains the maximum number of 1's, and the solution of the min-bisection problem is the variable assignment associated with a vector of constraints that contains the maximum number of 0's. For example, for the graph $G$ shown in Figure \ref{graphex}(a), a max-bisection for $G$ is $((0,1,0,1,0,1,1,0),(1,0,1,1,1,1,1,0,1,1,1,1))$ with 10 edges connecting the two partitions as shown in Figure \ref{graphex}(b), and a min-bisection for $G$ is $((0,0,0,0,1,1,1,1),(0,0,0,0,1,0,1,1,0,0,0,0))$ with 3 edges connecting the two partitions as shown in Figure \ref{graphex}(c). It is important to notice that a variable assignment $x= (0,1,0,1,0,1,1,0)$ is equivalent to $\overline x= (1,0,1,0,1,0,0,1)$, where $\overline x$ is the bit-wise negation of $x$. \section{The Algorithm} Given a graph $G$ with $n$ vertices and $m$ edges, the proposed algorithm is divided into three stages. The first stage prepares a superposition of all balanced assignments of the $n$ variables. The second stage evaluates the $m$ constraints associated with the $m$ edges for every balanced assignment and stores the values of the constraints in constraint vectors entangled with the corresponding balanced assignments in the superposition. The third stage amplifies the constraint vector with the maximum (minimum) number of satisfied constraints using a partial negation and iterative measurement technique.
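As a classical baseline against which the three quantum stages can be compared, the exhaustive search over the $M$ balanced assignments can be sketched as follows. The edge list reproduces the example graph of Figure \ref{graphex}(a); the function name is illustrative, not from the paper:

```python
from itertools import combinations

# Edge list of the example graph (8 vertices, 12 edges)
EDGES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 7), (2, 3),
         (3, 4), (3, 6), (4, 5), (4, 6), (5, 7), (6, 7)]

def brute_force_bisections(n, edges):
    """Enumerate all C(n, n/2) balanced assignments and return the
    max-bisection and min-bisection cut sizes."""
    best_max, best_min = -1, len(edges) + 1
    for ones in combinations(range(n), n // 2):
        x = [0] * n
        for a in ones:
            x[a] = 1
        cut = sum(x[a] ^ x[b] for a, b in edges)  # constraints equal to 1
        best_max = max(best_max, cut)
        best_min = min(best_min, cut)
    return best_max, best_min
```

For the example graph this reproduces the values quoted above: a maximum of 10 and a minimum of 3 crossing edges. The exponential number of balanced assignments is what motivates the quantum superposition over all of them.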
\begin{center} \begin{figure*}[htbp] \begin{center}
% Figure artwork (xfig/picture environment) omitted: the extracted picture source was corrupted beyond repair.
% The circuit applies H on each of the n register qubits followed by O(n^{1/4}) iterations of D,
% then U_f with the measurement M_1 on |ax_1>, then C_v on the m constraint qubits,
% then O(n^4) iterations of Q acting on the constraint qubits and |ax_2>.
\end{center} \caption{A quantum circuit for the proposed algorithm.} \label{alg} \end{figure*} \end{center} \subsection{Balanced Assignments Preparation} To prepare a superposition of all balanced assignments of $n$ qubits, the proposed algorithm can use any amplitude amplification technique, e.g. \cite{Grover1997,Younes2007,Younes2013}. An extra step should be added after the amplitude amplification to create an entanglement between the matched items and an auxiliary qubit $\left| {ax_1 } \right\rangle$, so that the correctness of the items in the superposition can be verified by applying a measurement on $\left| {ax_1 } \right\rangle$ without having to examine the superposition itself.
So, if $\left| {ax_1 } \right\rangle = \left| {1 } \right\rangle$ at the end of this stage, then the superposition contains the correct items, i.e. the balanced assignments; otherwise, repeat the preparation stage until $\left| {ax_1 } \right\rangle = \left| {1 } \right\rangle$. This ensures that the algorithm does not proceed to the next stages until the preparation stage succeeds. To prepare the superposition of all balanced assignments of $n$ qubits, we use the amplitude amplification technique of \cite{Younes2013}, since it achieves the highest known probability of success using fixed operators. It can be summarized as follows: prepare a superposition of $2^n$ states by initializing $n$ qubits to state $\left| {0} \right\rangle$ and applying $H^{\otimes n}$ on the $n$ qubits, \begin{equation} \begin{array}{l} \left| {\Psi _0 } \right\rangle = \left( {H^{ \otimes n}} \right) \left| 0 \right\rangle ^{ \otimes n} \\ \,\,\,\,\,\,\,\,\,\,\,\,\, = \frac{1}{{\sqrt N }}\sum\limits_{j = 0}^{N - 1} {\left| j \right\rangle }, \\ \end{array} \end{equation} \noindent where $H$ is the Hadamard gate and $N=2^n$. Assume that the system $\left| {\Psi _0 } \right\rangle$ is re-written as follows, \begin{equation} \label{ENheqn39} \begin{array}{l} \left|\Psi_0\right\rangle = \frac{1}{{\sqrt N }} \sum\limits_{\scriptstyle j = 0, \hfill \atop \scriptstyle j \in X_T \hfill}^{N - 1} {\left| j \right\rangle} + \frac{1}{{\sqrt N }} \sum\limits_{\scriptstyle j = 0, \hfill \atop \scriptstyle j \in X_F \hfill}^{N - 1} {\left| j \right\rangle},\\ \end{array} \end{equation} \noindent where $X_T$ is the set of all balanced assignments of $n$ bits and $X_F$ is the set of all unbalanced assignments.
Let $M=\left( {\begin{array}{*{20}c} n \\ {{\textstyle{n \over 2}}} \\ \end{array}} \right)$ be the number of balanced assignments among the $2^n$ possible assignments, $\sin (\theta ) = \sqrt {{M \mathord{\left/ {\vphantom {M N}} \right.\kern-\nulldelimiterspace} N}}$ and $0 < \theta \le \pi /2$, then the system can be re-written as follows, \begin{equation} \left|\Psi_0\right\rangle = \sin (\theta )\left| {\psi _1 } \right\rangle + \cos (\theta )\left| {\psi _0 } \right\rangle, \end{equation} \noindent where $\left| {\psi _1 } \right\rangle= \left| {\tau } \right\rangle$ represents the balanced assignments subspace and $\left| {\psi _0 } \right\rangle$ represents the unbalanced assignments subspace. Let $D=WR_0 \left( \phi \right)W^\dag R_\tau \left( \phi \right)$, $R_0 \left( \phi \right) = I - (1 - e^{i\phi } )\left| 0 \right\rangle \left\langle 0 \right|$, $R_\tau \left( \phi \right) = I - (1 - e^{i\phi } )\left| \tau \right\rangle \left\langle \tau \right|$, where $W=H^{\otimes n}$ is the Walsh-Hadamard transform \cite{hoyer00}. 
Iterate the operator $D$ on $\left|\Psi_0\right\rangle$ for $q$ times to get, \begin{equation} \left| {\Psi _1 } \right\rangle = D^{q}\left| {\Psi_0 } \right\rangle = a_q \left| {\psi _1 } \right\rangle + b_q \left| {\psi _0 } \right\rangle , \end{equation} \noindent such that, \begin{equation} \label{aqeqn} a_q = \sin (\theta )\left( {e^{iq\phi } U_q \left( y \right) + e^{i(q - 1)\phi } U_{q - 1} \left( y \right)} \right), \end{equation} \begin{equation} b_q = \cos (\theta )e^{i(q - 1)\phi } \left( {U_q \left( y \right) + U_{q - 1} \left( y \right)} \right), \end{equation} \noindent where $y=\cos(\delta)$, $\cos \left( \delta \right) = 2\sin ^2 (\theta )\sin ^2 ({\textstyle{\phi \over 2}}) - 1$, $0<\theta\le \pi/2$, and $U_q$ is the Chebyshev polynomial of the second kind \cite{ChebPoly} defined as follows, \begin{equation} U_q \left( y \right) = \frac{{\sin \left( {\left( {q + 1} \right)\delta } \right)}}{{\sin \left( \delta \right)}}. \end{equation} Setting $\phi=6.02193\approx1.9168\pi$, $M=\left( {\begin{array}{*{20}c} n \\ {{\textstyle{n \over 2}}} \\ \end{array}} \right)$, $N=2^n$ and $q = \left\lfloor {{\textstyle{\phi \over {\sin (\theta)}}}} \right\rfloor$ gives $\left| {a_q } \right|^2 \ge 0.9975$ \cite{Younes2013}. The upper bound for the required number of iterations $q$ to reach the maximum probability of success is, \begin{equation} \label{ENheqn64} q = \left\lfloor {{\textstyle{\phi \over {\sin (\theta)}}}} \right\rfloor \le 1.9168\pi\sqrt {\frac{N}{M}}, \end{equation} \noindent and using Stirling's approximation, \begin{equation} n! \approx \sqrt {2\pi n} \left( {\frac{n}{e}} \right)^n, \end{equation} \noindent the upper bound for the required number of iterations $q$ to prepare the superposition of all balanced assignments is, \begin{equation} q \approx 1.9168\sqrt[4]{{\frac{{\pi ^5 }}{{2 }}n}} = O\left( \sqrt[4]{n} \right).
\end{equation} It is required to preserve the states in $\left| {\psi _1 } \right\rangle$ for further processing in the next stage. This is done, as described above, by adding an auxiliary qubit $\left| {ax_1 } \right\rangle$ initialized to state $\left| {0} \right\rangle$ and entangling the states of the balanced assignments with $\left| {ax_1 } \right\rangle = \left| {1} \right\rangle$, so that the success of the preparation stage can be verified by measuring $\left| {ax_1 } \right\rangle$ without having to examine the superposition itself. To prepare the entanglement, let \begin{equation} \begin{array}{l} \left| {\Psi _2 } \right\rangle = \left| {\Psi _1 } \right\rangle \otimes \left| {0 } \right\rangle\\ \,\,\,\,\,\,\,\,\,\,\,\,\,= a_q \left| {\psi _1 } \right\rangle \otimes \left| {0 } \right\rangle + b_q \left| {\psi _0 } \right\rangle \otimes \left| {0 } \right\rangle,\\ \end{array} \end{equation} \noindent and apply a quantum Boolean operator $U_f$ on $\left| {\Psi _2 } \right\rangle$, where $U_f$ is defined as follows, \begin{equation} U_f \left| {x,0} \right\rangle = \left\{ {\begin{array}{*{20}c} {\left| {x ,0} \right\rangle ,{\rm if }\, \left|x\right\rangle \in \left|\psi _0\right\rangle,} \\ {\left| {x ,1} \right\rangle ,{\rm if }\, \left|x\right\rangle \in \left|\psi _1\right\rangle,} \\ \end{array}} \right.
\end{equation} \noindent and $f:\left\{ {0,1} \right\}^n \to \{ 0,1\}$ is an $n$-input, single-output Boolean function that evaluates to True for any $x \in X_T$ and evaluates to False for any $x \in X_F$; then, \begin{equation} \begin{array}{l} \left| {\Psi _3 } \right\rangle = U_f \left| {\Psi _2 } \right\rangle \\ \,\,\,\,\,\,\,\,\,\,\,\,\,= a_q \left| {\psi _1 } \right\rangle \otimes \left| {1 } \right\rangle + b_q \left| {\psi _0 } \right\rangle \otimes \left| {0 } \right\rangle.\\ \end{array} \end{equation} Apply the measurement $M_1$ on the auxiliary qubit $\left| {ax_1 } \right\rangle$ as shown in Figure \ref{alg}. The probability of finding $\left| {ax_1 } \right\rangle=\left| {1} \right\rangle$ is, \begin{equation} \Pr (M_1 = 1) = \left| {a_q } \right|^2 \ge 0.9975, \end{equation} \noindent and the system will collapse to, \begin{equation} \left|\Psi_3^{(M_1 = 1)}\right\rangle = \left| {\psi _1 } \right\rangle \otimes \left| {1 } \right\rangle. \end{equation} \subsection{Evaluation of Constraints} There are $M$ states in the superposition $\left|\Psi_3^{(M_1 = 1)}\right\rangle$, each with amplitude ${\textstyle{1 \over {\sqrt M }}}$. Let $\left|\Psi_4\right\rangle$ denote the system after the balanced assignment preparation stage, as follows, \begin{equation} \left|\Psi_4\right\rangle = \alpha \sum\limits_{k = 0}^{M - 1} {\left| x_k \right\rangle}, \end{equation} \noindent where $\left|ax_1\right\rangle$ is dropped from the system for simplicity and $\alpha = {\textstyle{1 \over {\sqrt M }}}$.
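The preparation stage can be sanity-checked with a direct statevector simulation. The sketch below (plain Python, $N=2^n$ amplitudes; the function names are illustrative, not from the paper) iterates $D = W R_0(\phi) W^\dag R_\tau(\phi)$ with the fixed phase $\phi = 6.02193$ for $q = \lfloor \phi/\sin\theta \rfloor$ iterations, marking the balanced assignments, and returns the probability mass on the balanced subspace; for small $n$ it should land close to the $\ge 0.9975$ bound quoted above:

```python
import cmath
import math

def walsh_hadamard(state):
    """Apply the normalized Walsh-Hadamard transform W = H tensored n times
    (W is real and its own inverse, so it also serves as W^dagger)."""
    s, h = list(state), 1
    while h < len(s):
        for i in range(0, len(s), 2 * h):
            for j in range(i, i + h):
                s[j], s[j + h] = s[j] + s[j + h], s[j] - s[j + h]
        h *= 2
    norm = math.sqrt(len(s))
    return [v / norm for v in s]

def prepare_balanced(n, phi=6.02193):
    """Iterate D = W R_0(phi) W^dagger R_tau(phi) on the uniform state and
    return the probability of measuring a balanced assignment."""
    N = 2 ** n
    balanced = {j for j in range(N) if bin(j).count("1") == n // 2}
    M = len(balanced)
    q = int(phi / math.sqrt(M / N))        # q = floor(phi / sin(theta))
    phase = cmath.exp(1j * phi)
    s = [1 / math.sqrt(N)] * N             # |Psi_0>, the uniform superposition
    for _ in range(q):
        s = [a * phase if j in balanced else a
             for j, a in enumerate(s)]     # R_tau: phase the marked states
        s = walsh_hadamard(s)              # W^dagger
        s[0] *= phase                      # R_0: phase the |0...0> state
        s = walsh_hadamard(s)              # W
    return sum(abs(s[j]) ** 2 for j in balanced)
```

Phasing each balanced basis state is equivalent to $R_\tau$ restricted to the two-dimensional subspace spanned by $\left|\psi_1\right\rangle$ and $\left|\psi_0\right\rangle$, which the iteration never leaves.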
For a graph $G$ with $n$ vertices and $m$ edges, every edge $(a, b)$ connecting vertices $a,b \in V$ is associated with a constraint $c_l=v_a\oplus v_b$, where $v_a$ and $v_b$ are the corresponding qubits for vertices $a$ and $b$ in $\left|\Psi_4\right\rangle$ respectively, such that $0 \le l < m$, $0 \le m \le {\textstyle{{n(n - 1)} \over 2}}$, $0 \le a,b \le n-1$ and $a \ne b$, where ${\textstyle{{n(n - 1)} \over 2}}$ is the maximum number of edges in a graph with $n$ vertices. To evaluate the $m$ constraints associated with the edges, add $m$ qubits initialized to state $\left|0\right\rangle$, \begin{equation} \begin{array}{l} \left|\Psi_5\right\rangle = \left|\Psi_4\right\rangle \otimes \left|0\right\rangle^{\otimes m}\\ \,\,\,\,\,\,\,\,\,\,\,\,\,=\alpha \sum\limits_{k = 0}^{M - 1} { {\left| x_k \right\rangle \otimes \left|0 \right\rangle^{\otimes m} } }.\\ \end{array} \end{equation} For every constraint $c_l=v_a \oplus v_b$, apply two $Cont\_\sigma_X$ gates, $Cont\_\sigma_X(v_a,c_l)$ and $Cont\_\sigma_X(v_b,c_l)$, so that $\left|c_l\right\rangle=\left|v_a\oplus v_b\right\rangle$. The collection of all $Cont\_\sigma_X$ gates applied to evaluate the $m$ constraints is denoted $C_v$ in Figure \ref{alg}; the system is then transformed to, \begin{equation} \left| {\Psi _6 } \right\rangle = \alpha \sum\limits_{k = 0}^{M - 1} {\left( {\left| x_k \right\rangle \otimes \left| {c_0^k c_1^k \ldots c_{m-1}^k } \right\rangle } \right)}, \end{equation} \noindent where $\sigma _X$ is the Pauli-X gate, the quantum equivalent of the NOT gate.
It can be seen as a rotation of the Bloch sphere around the X-axis by $\pi$ radians as follows, \begin{equation} \sigma _X = \left[ {\begin{array}{*{20}c} 0 & 1 \\ 1 & 0 \\ \end{array}} \right], \end{equation} and the $Cont\_U(v,c)$ gate is a controlled gate with control qubit $\left|v\right\rangle$ and target qubit $\left|c\right\rangle$ that applies a single-qubit unitary operator $U$ on $\left|c\right\rangle$ only if $\left|v\right\rangle=\left|1\right\rangle$. Hence, every qubit $\left|c_l^k\right\rangle$ carries a value of the constraint $c_l$ based on the values of $v_a$ and $v_b$ in the balanced assignment $\left| x_k \right\rangle$, i.e. the values of $v_a^k$ and $v_b^k$ respectively. Let $\left| z_k \right\rangle=\left|{c_0^k c_1^k \ldots c_{m-1}^k }\right\rangle$, then the system can be re-written as follows, \begin{equation} \left| {\Psi _6 } \right\rangle = \alpha \sum\limits_{k = 0}^{M - 1} {\left( {\left| x_k \right\rangle \otimes \left| {z_k} \right\rangle } \right)}, \end{equation} \noindent where every $\left| x_k \right\rangle$ is entangled with the corresponding $\left| z_k \right\rangle$. The aim of the next stage is to find the $\left| z_k \right\rangle$ with the maximum number of $\left| 1 \right\rangle$'s for the max-bisection problem, or the $\left| z_k \right\rangle$ with the minimum number of $\left| 1 \right\rangle$'s for the min-bisection problem.
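On a single basis state the effect of the two $Cont\_\sigma_X$ gates per constraint is easy to trace classically: the target starts at $\left|0\right\rangle$, and a controlled NOT from $v_a$ followed by one from $v_b$ leaves the target equal to $v_a \oplus v_b$. A minimal classical sketch of $C_v$ acting on one basis state (illustrative helper name, not the paper's notation):

```python
def evaluate_constraints(x, edges):
    """Mimic C_v on a single basis state |x_k>: each constraint qubit
    starts at 0 and receives Cont_sigma_X from v_a and then from v_b."""
    z = []
    for a, b in edges:
        c = 0
        c ^= x[a]    # Cont_sigma_X(v_a, c_l)
        c ^= x[b]    # Cont_sigma_X(v_b, c_l)
        z.append(c)  # c now equals v_a XOR v_b
    return z
```

On the quantum register, $C_v$ performs this map on every balanced assignment in the superposition simultaneously; applied to the max-bisection assignment of the example graph, it reproduces the constraint vector $(1,0,1,1,1,1,1,0,1,1,1,1)$ given in Section 2.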
\begin{figure*}[htbp]
\begin{center}
% Circuit diagram omitted: the original picture-environment code was corrupted in extraction.
% (a) MAX: controlled-$V$ gates from $\left|c_0\right\rangle,\ldots,\left|c_{m-1}\right\rangle$ to $\left|ax_2\right\rangle$.
% (b) MIN: the same circuit with each control qubit temporarily negated.
\end{center}
\caption{Quantum circuits for (a) the MAX operator and (b) the MIN operator, each followed by a partial measurement and then a negation to reset the auxiliary qubit $\left| {ax_2 } \right\rangle$.}
\label{mmfig}
\end{figure*}
\subsection{Maximization of the Satisfied Constraints}
Let $\left| {\psi _c } \right\rangle$ be a superposition of $M$ states as follows, \begin{equation} \left| {\psi _c } \right\rangle = \alpha \sum\limits_{k = 0}^{M - 1} { \left| {z_k } \right\rangle }, \end{equation} \noindent where each $\left| {z_k } \right\rangle$ is an $m$-qubit state. Let $d_k$ be the number of 1's in the state $\left| {z_k } \right\rangle$, which will be referred to as the 1-distance of $\left| {z_k } \right\rangle$, and assume $\left| {z_k } \right\rangle \ne \left| {0 } \right\rangle^{\otimes m}$, i.e. $d_k \ne 0$. Finding the max-bisection $\left| {x_{max}} \right\rangle$ is equivalent to finding the state $\left| {z_{max}} \right\rangle$ with $d_{max}=\max\{d_k,\,0\le k \le M-1\}$, and finding the min-bisection $\left| {x_{min}} \right\rangle$ is equivalent to finding the state $\left| {z_{min} } \right\rangle$ with $d_{min}=\min\{d_k,\,0\le k \le M-1\}$.
Finding the state $\left| {z_{min} } \right\rangle$ with the minimum number of 1's is equivalent to finding the state with the maximum number of 0's, so, to clear ambiguity, let $d_{max1}=d_{max}$ be the maximum number of 1's and $d_{max0}=d_{min}$ be the maximum number of 0's, where the number of 0's in $\left| {z_k } \right\rangle$ will be referred to as the 0-distance of $\left| {z_k } \right\rangle$. To find either $\left| {z_{max} } \right\rangle$ or $\left| {z_{min} } \right\rangle$ when $\left| {\psi _c } \right\rangle$ is measured, add an auxiliary qubit $\left| {ax_2 } \right\rangle$ initialized to state $\left| 0 \right\rangle$ to the system $\left| {\psi _c } \right\rangle$ as follows, \begin{equation} \begin{array}{l} \left| {\psi _m } \right\rangle = \left| {\psi _c } \right\rangle \otimes \left| 0 \right\rangle\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,= \alpha \sum\limits_{k = 0}^{M - 1} { \left| {z_k } \right\rangle } \otimes \left| 0 \right\rangle.\\ \end{array} \end{equation} The main idea in finding $\left| {z_{max} } \right\rangle$ is to apply a partial negation to the state of $\left| {ax_2 } \right\rangle$ entangled with $\left| {z_k } \right\rangle$, based on the number of 1's in $\left| {z_k } \right\rangle$: more 1's in $\left| {z_k } \right\rangle$ give more negation to the state of $\left| {ax_2 } \right\rangle$ entangled with $\left| {z_k } \right\rangle$. If the number of 1's in $\left| {z_k } \right\rangle$ is $m$, then the entangled state of $\left| {ax_2 } \right\rangle$ will be fully negated.
The $m^{th}$ partial negation operator is the $m^{th}$ root of $\sigma _X$ and can be calculated using diagonalization as follows, \begin{equation} V=\sqrt[m]{\sigma _X} = \frac{1}{2}\left[ {\begin{array}{*{20}c} {1 + t} & {1 - t} \\ {1 - t} & {1 + t} \\ \end{array}} \right], \end{equation} \noindent where $t={\sqrt[m]{{ - 1}}}$, and applying $V$ $d$ times on a qubit is equivalent to the operator, \begin{equation} V^d = \frac{1}{2}\left[ {\begin{array}{*{20}c} {1 + t^d } & {1 - t^d } \\ {1 - t^d } & {1 + t^d } \\ \end{array}} \right], \end{equation} \noindent such that if $d=m$, then $V^m=\sigma _X$. To amplify the amplitude of the state $\left| {z_{max} } \right\rangle$, apply the operator MAX on $\left| {\psi _m } \right\rangle$ as will be shown later, where MAX is an operator on an $(m+1)$-qubit register that applies $V$ conditionally for $m$ times on $\left|ax_2 \right\rangle$ based on the number of 1's in $\left| {c_0 c_1 \ldots c_{m-1} } \right\rangle$ as follows (as shown in Figure \ref{mmfig}(a)), \begin{equation} MAX = Cont\_V(c_0 ,ax_2 )Cont\_V(c_1 ,ax_2 ) \ldots Cont\_V(c_{m - 1} ,ax_2), \end{equation} \noindent so, if $d_1$ is the number of $c_l=1$ in $\left| {c_0 c_1 \ldots c_{m-1} } \right\rangle$, then \begin{equation} MAX\left( {\left| {c_0 c_1 ...c_{m - 1} } \right\rangle \otimes \left| 0 \right\rangle } \right) = \left| {c_0 c_1 ...c_{m - 1} } \right\rangle \otimes \left( {\frac{{1 + t^{d_1} }}{2}\left| 0 \right\rangle + \frac{{1 - t^{d_1} }}{2}\left| 1 \right\rangle } \right). \end{equation} Amplifying the amplitude of the state $\left| {z_{min} } \right\rangle$ with the minimum number of 1's is equivalent to amplifying the amplitude of the state with the maximum number of 0's.
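The closed forms for $V$ and $V^d$ above can be checked numerically; the following sketch (illustrative values only) takes $t=e^{i\pi/m}$ as the principal $m$-th root of $-1$:

```python
import numpy as np

m = 12                               # number of constraints (edges)
t = np.exp(1j * np.pi / m)           # principal m-th root of -1
V = 0.5 * np.array([[1 + t, 1 - t],
                    [1 - t, 1 + t]])

# V^d has entries (1 +- t^d)/2, matching the displayed formula
d = 5
Vd = np.linalg.matrix_power(V, d)
assert np.allclose(Vd, 0.5 * np.array([[1 + t**d, 1 - t**d],
                                       [1 - t**d, 1 + t**d]]))

# ... and V^m is a full negation, i.e. sigma_X
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
assert np.allclose(np.linalg.matrix_power(V, m), sigma_x)
```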
To find $\left| {z_{min} } \right\rangle$, apply the operator MIN on $\left| {\psi _m } \right\rangle$ as will be shown later, where MIN is an operator on an $(m+1)$-qubit register that applies $V$ conditionally for $m$ times on $\left|ax_2 \right\rangle$ based on the number of 0's in $\left| {c_0 c_1 \ldots c_{m-1} } \right\rangle$ as follows (as shown in Figure \ref{mmfig}(b)), \begin{equation} MIN = Cont\_V(\overline{c_0} ,ax_2 )Cont\_V(\overline{c_1} ,ax_2 ) \ldots Cont\_V(\overline{c_{m-1}} ,ax_2), \end{equation} \noindent where $\overline{c_l}$ is a temporary negation of $c_l$ applied before and after $Cont\_V(c_l ,ax_2)$ as shown in Figure \ref{mmfig}, so, if $d_0$ is the number of $c_l=0$ in $\left| {c_0 c_1 \ldots c_{m-1} } \right\rangle$, then \begin{equation} MIN\left( {\left| {c_0 c_1 ...c_{m - 1} } \right\rangle \otimes \left| 0 \right\rangle } \right) = \left| {c_0 c_1 ...c_{m - 1} } \right\rangle \otimes \left( {\frac{{1 + t^{d_0} }}{2}\left| 0 \right\rangle + \frac{{1 - t^{d_0} }}{2}\left| 1 \right\rangle } \right).
\end{equation} For the sake of simplicity and to avoid duplication, the operator $Q$ will denote either the operator $MAX$ or the operator $MIN$, $d$ will denote either $d_1$ or $d_0$, $\left| {z_s } \right\rangle$ will denote either $\left| {z_{max} } \right\rangle$ or $\left| {z_{min} } \right\rangle$, and $d_s$ will denote either $d_{max1}$ or $d_{max0}$, so, \begin{equation} Q\left( {\left| {c_0 c_1 ...c_{m - 1} } \right\rangle \otimes \left| 0 \right\rangle } \right) = \left| {c_0 c_1 ...c_{m - 1} } \right\rangle \otimes \left( {\frac{{1 + t^{d} }}{2}\left| 0 \right\rangle + \frac{{1 - t^{d} }}{2}\left| 1 \right\rangle } \right), \end{equation} \noindent and the probabilities of finding the auxiliary qubit $\left|ax_2 \right\rangle$ in state ${\left| 0 \right\rangle }$ or ${\left| 1 \right\rangle }$ when measured are, respectively, \begin{equation} \begin{array}{l} Pr{(ax_2 = 0)} = \left| {\frac{{1 + t^d }}{2}} \right|^2 = \cos ^2 \left( {\frac{{d\pi }}{{2m}}} \right), \\ Pr{(ax_2 = 1)} = \left| {\frac{{1 - t^d }}{2}} \right|^2 = \sin ^2 \left( {\frac{{d\pi }}{{2m}}} \right). \\ \end{array} \end{equation} To find the state ${\left| z_s \right\rangle }$ in $\left| {\psi _m } \right\rangle$, the proposed algorithm is as follows, as shown in Figure \ref{mmfig}: \begin{itemize} \item[1-] Let $\left| {\psi _r } \right\rangle = \left| {\psi _m } \right\rangle$. \item[2-] Repeat the following steps $r$ times: \begin{itemize} \item[i-] Apply the operator $Q$ on $\left| {\psi _r } \right\rangle$. \item[ii-] Measure $\left| ax_2 \right\rangle$. If $\left|ax_2 \right\rangle=\left|1 \right\rangle$, let the post-measurement system be $\left| {\psi _r } \right\rangle$, apply $\sigma_X$ on $\left| ax_2 \right\rangle$ to reset it to $\left| 0 \right\rangle$ for the next iteration, and go to Step (i); otherwise restart the stage and go to Step (1).
\end{itemize} \item[3-] Measure the first $m$ qubits in $\left| {\psi _r } \right\rangle$ to read $\left| z_s \right\rangle$. \end{itemize} For simplicity and without loss of generality, assume that a single $\left| z_s \right\rangle$ exists in $\left| \psi_v \right\rangle$, although such states will exist in pairs, since each $\left| z_s \right\rangle$ is entangled with a variable assignment $\left| x_s \right\rangle$ and each $\left| x_s \right\rangle$ is equivalent to $\left| \overline {x_s} \right\rangle$. Moreover, different variable assignments might give rise to constraint vectors with maximum distance, but such information is not known in advance. Assuming that the algorithm finds $\left|ax_2 \right\rangle=\left|1 \right\rangle$ for $r$ times in a row, the probability of finding $\left|ax_2 \right\rangle=\left|1 \right\rangle$ after Step (2-i) in the $1^{st}$ iteration, i.e. $r=1$, is given by, \begin{equation} Pr^{(1)}{(ax_2 = 1)} = \alpha ^2 \sum\limits_{k = 0}^{M - 1} {\sin ^2 \left( {\frac{{d_k \pi }}{{2m}}} \right)}. \label{probax2} \end{equation} The probability of finding $\left|\psi_r \right\rangle=\left|z_s \right\rangle$ after Step (2-i) in the $1^{st}$ iteration, i.e. $r=1$, is given by, \begin{equation} Pr^{(1)}{(\psi_{r} = z_s)} = \alpha ^2 {\sin ^2 \left( {\frac{{d_s \pi }}{{2m}}} \right)} . \end{equation} The probability of finding $\left|ax_2 \right\rangle=\left|1 \right\rangle$ after Step (2-i) in the $r^{th}$ iteration, i.e. $r>1$, is given by, \begin{equation} Pr^{(r)}{(ax_2 = 1)} = \frac{{\sum\limits_{k = 0}^{M - 1} {\sin ^{2r} \left( {\frac{{d_k \pi }}{{2m}}} \right)} }}{{\sum\limits_{k = 0}^{M - 1} {\sin ^{2(r - 1)} \left( {\frac{{d_k \pi }}{{2m}}} \right)} }}.
\end{equation} \begin{figure}[htbp] \centerline{\includegraphics{fig12.eps}} \caption{The probability of success for a max-bisection instance of the graph shown in Figure \ref{graphex} with $n=8$ and $m=12$. The probability of success of $\left|ax_2\right\rangle$ is 0.6091 after the first iteration and reaches 0.7939 after iterating the algorithm, where the probability of success of $\left|z_{max}\right\rangle$ is amplified until it reaches that of $\left|ax_2\right\rangle$.} \label{fig21} \end{figure} The probability of finding $\left|\psi_r \right\rangle=\left|z_s \right\rangle$ after Step (2-i) in the $r^{th}$ iteration, i.e. $r>1$, is given by, \begin{equation} Pr^{(r)}{(\psi_{r} = z_s)} = \frac{{{\sin ^{2r} \left( {\frac{{d_s \pi }}{{2m}}} \right)} }}{{\sum\limits_{k = 0}^{M - 1} {\sin ^{2(r - 1)} \left( {\frac{{d_k \pi }}{{2m}}} \right)} }}. \end{equation} To get the highest probability of success for $Pr{(\psi_{r} = z_s)}$, Step (2) should be repeated until $\left| Pr^{(r)}{(ax_2 = 1)} - Pr^{(r)}{(\psi_r = z_s)} \right| \le \epsilon$ for a small $\epsilon \ge 0$, as shown in Figure \ref{fig21}. This happens when $\sum\nolimits_{k = 0,k\ne s}^{M - 1} {\sin ^{2r} \left( {{\textstyle{{d_k \pi } \over {2m}}}} \right)} \le \epsilon$. Since $\sin ^{2r} \left( x \right)$ decays exponentially in $r$ for $0<x<{\textstyle{\pi \over 2}}$, the sum is dominated by its largest term, so for sufficiently large $r$, \begin{equation} \sum\limits_{k = 0,k\ne s}^{M - 1} {\sin ^{2r} \left( {\frac{{d_k \pi }}{{2m}}} \right)} \approx \sin ^{2r} \left( {\frac{{d_{ns} \pi }}{{2m}}} \right), \end{equation} \noindent where $d_{ns}$ is the next maximum distance less than $d_s$.
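A classical simulation of the re-weighting induced by Step (2) illustrates these formulas; the 1-distances below are hypothetical and not taken from the figure's instance:

```python
import numpy as np

# Toy instance: m = 12 constraints, M = 6 branches with hypothetical
# 1-distances d_k; the branch with d_k = m is the sought z_max.
m = 12
d = np.array([12, 10, 9, 9, 7, 5])
p = np.sin(d * np.pi / (2 * m)) ** 2      # per-branch Pr(ax_2 = 1)
w = np.full(len(d), 1 / len(d))           # branch weights alpha^2

for _ in range(100):                      # Step (2): measure, re-weight
    pr_ax2 = float(np.sum(w * p))         # Pr^{(r)}(ax_2 = 1)
    w = w * p / pr_ax2                    # post-measurement weights

assert abs(w.sum() - 1) < 1e-9
assert w[0] > 0.99    # weight concentrates on the largest d_k
```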
The values of $d_s$ and $d_{ns}$ are unknown in advance, so consider the worst case, where $d_s=m$, $d_{ns}=m-1$ and $m=n(n-1)/2$; the required number of iterations $r$ for $\epsilon = 10^{ - \lambda }$ and $\lambda>0$ can be calculated using the formula, \begin{equation} 0 < \sin ^{2r} \left( {\frac{{(m-1) \pi }}{{2m}}} \right) \le \epsilon, \end{equation} \begin{equation} \begin{array}{l} r \ge \frac{{\log \left( \epsilon \right)}}{{2\log \left( {\sin \left( {\frac{{\left( {m - 1} \right)\pi }}{{2m}}} \right)} \right)}} \\ \,\,\,\, = \frac{{\log \left( {10^{ - \lambda } } \right)}}{{2\log \left( {\cos \left( {\frac{\pi }{{2m}}} \right)} \right)}} \\ \,\,\,\, \ge \lambda \left( {\frac{{2m}}{\pi }} \right)^2 \\ \,\,\,\,=O\left( {m^2 } \right),\\ \end{array} \end{equation} \noindent where $0 \le m \le {\textstyle{{n(n - 1)} \over 2}}$. For a complete graph, where $m={\textstyle{{n(n - 1)} \over 2}}$, the upper bound on the required number of iterations $r$ is $O\left( {n^4 } \right)$. Assuming that a single $\left|z_s \right\rangle$ exists in the superposition will increase the required number of iterations; it is important to notice here that, unlike common amplitude amplification techniques, the probability of success will not be over-amplified (``over-cooked'') by increasing the required number of iterations $r$. \subsection{Adjustments on the Proposed Algorithm} Two problems arise during the implementation of the proposed algorithm as discussed above. The first is finding $\left|ax_2 \right\rangle=\left|1 \right\rangle$ for $r$ times in a row, which is critical for the proposed algorithm to terminate in polynomial time. The second problem is that the value of $d_s$ is not known in advance, where the value of $Pr^{(1)}{(ax_2 = 1)}$ shown in Eqn.
\ref{probax2} plays an important role in the success of finding $\left|ax_2 \right\rangle=\left|1 \right\rangle$ in the subsequent iterations, and this value depends heavily on the density of 1's, i.e. the ratio ${\textstyle{{d_s } \over m}}$. Consider the case of a complete graph with an even number of vertices, where the number of edges is $m = {\textstyle{{n(n - 1)} \over 2}}$ and all $\left|z_k \right\rangle$'s are equivalent, so each can be taken as $\left|z_s \right\rangle$; then, \begin{equation} Pr^{(1)}{(ax_2 = 1)} = M \alpha ^2 {\sin ^2 \left( {\frac{{d_s\pi }}{{2m}}} \right)}. \label{probax2_2} \end{equation} This is an easy case, where setting $m=d_s$ in the $m^{th}$ root of $\sigma _X$ will lead to success with certainty after a single iteration. Assuming a blind approach where $d_{s}$ is not known, this case represents the worst ratio ${\textstyle{{d_s } \over m}}$, where the probability of success will be $\approx0.5$ for a sufficiently large graph. Iterating the algorithm will not lead to any increase in the probability of either $\left|z_s \right\rangle$ or $\left|ax_2 \right\rangle$. In the following, adjustments on the proposed algorithm for the max-bisection and the min-bisection problems will be presented to overcome these problems, i.e. to be able to find $\left|ax_2 \right\rangle=\left|1 \right\rangle$ after the first iteration with the highest probability of success without a priori knowledge of $d_s$.
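A one-line check of this stagnation (toy branch count; the complete-graph case with $n=8$, so $m=28$ and every branch has 1-distance $n^2/4=16$):

```python
import numpy as np

# Complete graph on n = 8 vertices: m = 28 edges, and every balanced
# assignment satisfies exactly n^2/4 = 16 constraints.
m, M = 28, 6
d = np.full(M, 16)
p = np.sin(d * np.pi / (2 * m)) ** 2
w = np.full(M, 1 / M)
w_next = w * p / np.sum(w * p)
assert np.allclose(w_next, w)    # iterating gives no amplification
```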
\begin{figure}[htbp] \centerline{\includegraphics{fig22.eps}} \caption{The probability of success for a max-bisection instance of the graph shown in Figure \ref{graphex} with $n=8$, $m=12$, $\mu_{max}=31$ and $\delta=0.9$. The probability of success of $\left|ax_2\right\rangle$ is 0.9305 after the first iteration and reaches 0.9662 after iterating the algorithm, where the probability of success of $\left|z_{max}\right\rangle$ is amplified until it reaches that of $\left|ax_2\right\rangle$.} \label{fig22} \end{figure} \subsubsection*{Adjustment for the Max-Bisection Problem} In an arbitrary graph, the density of 1's will be ${\textstyle{{d_{max1} } \over m}}$. In the case of a complete graph, there are $M$ states with 1-distance ($d_k$) equal to ${\textstyle{{n^2} \over 4}}$. This case represents the worst density of 1's, where the density will be ${\textstyle{{n^2 } \over {2n(n - 1)}}}$, slightly greater than 0.5 for arbitrarily large $n$. Iterating the proposed algorithm will not amplify the amplitudes after an arbitrary number of iterations. To overcome this problem, add $\mu_{max}$ temporary qubits initialized to state $\left|1 \right\rangle$ to the register $\left| {c_0 c_1 ...c_{m - 1} } \right\rangle$ as follows, \begin{equation} \left| {c_0 c_1 ...c_{m - 1} } \right\rangle \to \left| {c_0 c_1 \ldots c_{m - 1}c_{m}c_{m+1}\ldots c_{m+\mu_{max}-1} } \right\rangle, \end{equation} \noindent so that the extended number of edges $m_{ext}$ will be $m_{ext}=m+\mu_{max}$ and $V=\sqrt[{m_{ext} }]{{\sigma _X }}$ will be used instead of $V=\sqrt[{m}]{{\sigma _X }}$ in the MAX operator; then the density of 1's will be ${\textstyle{{n^2 + 4\mu _{max } } \over {2n(n - 1) + 4\mu _{max } }}}$.
To achieve a probability of success $Pr_{\max }$ of finding $\left|ax_2 \right\rangle=\left|1 \right\rangle$ after the first iteration, \begin{equation} Pr ^{(1)} {\left( {ax_2 = 1} \right)} = M \alpha ^2 \sin ^2 \left( {\frac{{\pi \left( {{\textstyle{{n^2 } \over 4}} + \mu _{\max } } \right)}}{{2\left( {{\textstyle{{n(n - 1)} \over 2}} + \mu _{\max } } \right)}}} \right) \ge Pr_{\max }, \end{equation} \noindent the required number of temporary qubits $\mu_{max}$ is calculated as follows, \begin{equation} \mu _{\max } \ge \frac{1}{{1 - \omega }}\left( {\frac{{n^2 }}{4}\left( {2\omega - 1} \right) - \frac{n}{2}\omega } \right), \end{equation} \noindent where $\omega = {\textstyle{2 \over \pi }}\sin ^{ - 1} \left( {\sqrt {{{{Pr_{\max } } \over {M\alpha ^2 }}}} } \right)$ and $Pr_{\max } < M \alpha ^2 = 1$, so let $Pr_{\max } = \delta M \alpha ^2$ such that $0<\delta<1$. For example, if $\delta=0.9$, then $Pr ^{(1)} \left( {ax_2 = 1} \right)$ will be at least 90$\%$, as shown in Figure \ref{fig22}. To conclude, the problem of low density of 1's can be solved with a polynomial increase in the number of qubits to get the solution $\left|z_{max} \right\rangle$ in $O\left(m_{ext}^2\right)=O\left( {n^4 } \right)$ iterations with arbitrarily high probability $\delta<1$ of terminating in poly-time, i.e. of reading $\left|ax_2 \right\rangle=\left|1 \right\rangle$ for $r$ times in a row. \subsubsection*{Adjustment for the Min-Bisection Problem} Similar to the above approach, in an arbitrary graph, the density of 0's will be ${\textstyle{{d_{max0} } \over m}}$. In the case of a complete graph, there are $M$ states with 0-distance ($d_k$) equal to ${\textstyle{{n(n - 1)} \over 2}}-{\textstyle{{n^2} \over 4}}$. This case represents the worst density of 0's, where the density will be ${\textstyle{{n-2 } \over {2(n - 1)}}}$, slightly less than 0.5 for arbitrarily large $n$.
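Solving the displayed probability inequality directly for $\mu_{max}$ (with $M\alpha^2=1$, so $Pr_{\max}=\delta$) reproduces the value quoted in the caption of Figure \ref{fig22}:

```python
import numpy as np

# Solve sin^2(pi (n^2/4 + mu) / (2 (n(n-1)/2 + mu))) >= delta for mu.
n, delta = 8, 0.9
omega = (2 / np.pi) * np.arcsin(np.sqrt(delta))
m = n * (n - 1) // 2
mu_max = (omega * m - n**2 / 4) / (1 - omega)
assert int(np.ceil(mu_max)) == 31   # value used in Figure fig22
```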
Iterating the proposed algorithm will not lead to any amplification after an arbitrary number of iterations. To overcome this problem, add $\mu_{min}$ temporary qubits initialized to state $\left|0 \right\rangle$ to the register $\left| {c_0 c_1 ...c_{m - 1} } \right\rangle$ as follows, \begin{equation} \left| {c_0 c_1 ...c_{m - 1} } \right\rangle \to \left| {c_0 c_1 \ldots c_{m - 1}c_{m}c_{m+1}\ldots c_{m+\mu_{min}-1} } \right\rangle, \end{equation} \noindent so that the extended number of edges $m_{ext}$ will be $m_{ext}=m+\mu_{min}$ and $V=\sqrt[{m_{ext} }]{{\sigma _X }}$ will be used instead of $V=\sqrt[{m}]{{\sigma _X }}$ in the MIN operator; then the density of 0's will be ${\textstyle{{n^2 -2n + 4\mu _{min } } \over {2n(n - 1) + 4\mu _{min } }}}$. To achieve a probability of success $Pr_{\max }$ of finding $\left|ax_2 \right\rangle=\left|1 \right\rangle$ after the first iteration, \begin{equation} Pr ^{(1)} \left( {ax_2 = 1} \right) = M \alpha ^2 \sin ^2 \left( {\frac{{\pi \left( {{\textstyle{{n(n - 1)} \over 2}} - {\textstyle{{n^2 } \over 4}} + \mu _{min } } \right)}}{{2\left( {{\textstyle{{n(n - 1)} \over 2}} + \mu _{min } } \right)}}} \right) \ge Pr_{\max }, \end{equation} \noindent the required number of temporary qubits $\mu_{min}$ is calculated as follows, \begin{equation} \mu _{\min } \ge \frac{{n^2 }}{4}\left( {\frac{{2\omega - 1}}{{1 - \omega }}} \right) + \frac{n}{2}, \end{equation} \noindent where $\omega = {\textstyle{2 \over \pi }}\sin ^{ - 1} \left( {\sqrt {{{{Pr_{\max } } \over {M\alpha ^2 }}}} } \right)$ and $Pr_{\max } < M \alpha ^2 = 1$, so let $Pr_{\max } = \delta M \alpha ^2$ such that $0<\delta<1$. For example, if $\delta=0.9$, then $Pr ^{(1)} \left( {ax_2 = 1} \right)$ will be at least 90$\%$.
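The $\mu_{min}$ bound can be sanity-checked by plugging the padded 0-distance back into the probability expression (illustrative values $n=8$, $\delta=0.9$; not an instance from the paper):

```python
import numpy as np

# mu_min from the bound above, then a self-consistency check that the
# padded 0-density reaches the target probability delta.
n, delta = 8, 0.9
omega = (2 / np.pi) * np.arcsin(np.sqrt(delta))
mu_min = int(np.ceil((n**2 / 4) * (2 * omega - 1) / (1 - omega) + n / 2))
m = n * (n - 1) // 2
m_ext = m + mu_min
d0 = m - n**2 // 4 + mu_min          # 0-distance of z_min after padding
assert np.sin(np.pi * d0 / (2 * m_ext)) ** 2 >= delta
```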
To conclude, similar to the case of the max-bisection problem, the problem of low density of 0's can be solved with a polynomial increase in the number of qubits (larger than in the max-bisection case) to get the solution $\left|z_{min} \right\rangle$ in $O\left(m_{ext}^2\right)=O\left( {n^4 } \right)$ iterations with arbitrarily high probability $\delta<1$ of terminating in poly-time, i.e. of reading $\left|ax_2 \right\rangle=\left|1 \right\rangle$ for $r$ times in a row. \section{Conclusion} Given an undirected graph $G$ with an even number of vertices $n$ and $m$ unweighted edges, this paper proposed a BQP algorithm to solve the max-bisection problem and the min-bisection problem on general graphs. The proposed algorithm uses a representation of the two problems as a Boolean constraint satisfaction problem, where the set of edges of the graph is represented as a set of constraints. The algorithm is divided into three stages. The first stage prepares a superposition of all possible equally sized graph partitions in $O\left( {\sqrt[4]{n}} \right)$ using an amplitude amplification technique that runs in $O\left( {\sqrt {{\textstyle{N \over M}}} } \right)$, for $N=2^n$, where $M$ is the number of possible graph partitions. In the second stage, the algorithm evaluates the set of constraints for all possible graph partitions. In the third stage, the algorithm amplifies the amplitudes of the best graph bisection, the one achieving maximum/minimum satisfaction of the set of constraints, using an amplitude amplification technique that applies an iterative partial negation, where more negation is given to constraint registers with more satisfied constraints, and a partial measurement that amplifies the constraint registers with more negation. The third stage runs in $O(m^2)$, which in the worst case is $O(n^4)$ for a dense graph.
It is shown that the proposed algorithm achieves an arbitrarily high probability of success of $1-\epsilon$ for small $\epsilon>0$ using a polynomial increase in the space resources, by adding dummy constraints with predefined values to give more negation to the best graph bisection.
\section{MERA state} We define the MERA network as follows. See Fig.~\ref{MERAFig}. We start with a single site with a $1$-dimensional Hilbert space (thus, up to an irrelevant choice of phase, the state of the system on this site is fixed; call this initial state $\psi_0$). We then apply a series of isometries to this state, giving a new state \begin{equation} \label{psidef} \psi=W_L V_L \ldots W_3 V_3 W_2 V_2 W_1 V_1 \psi_0, \end{equation} for some $L$, where $L$ is the number of ``levels". The final state $\psi$ is a state on $N=2^{L}$ sites. Each $V_k$ is an isometry that maps a system on $2^{k-1}$ sites with some Hilbert space dimension $D_{k-1}$ on each site to a system on $2^{k}$ sites with some Hilbert space dimension $D'_k$ on each site. We number the sites before applying $V_k$ by numbers $0,1,2,...,2^{k-1}-1$ and after applying $V_k$ by numbers $0,1,2,...,2^{k}-1$. Each $V_k$ is a product of isometries on each of the $2^{k-1}$ sites, mapping each site to a pair of sites; the $j$-th site is mapped to a pair of sites $2j,2j+1$. Each $W_k$ is another isometry. The isometry $W_k$ preserves the number of sites, mapping a system of $2^{k}$ sites with dimension $D'_k$ on each site to a system of $2^{k}$ sites with dimension $D_{k}$ on each site. Each $W_k$ is also a product of isometries, but in this case it is a product of isometries on pairs of sites; it maps each pair $2j+1,2j+2 \,{\rm mod}\, 2^{k}$ to the same pair. We will say that isometries $W_i$ with smaller $i$ are at {\it higher} levels of the MERA while those with larger $i$ are at {\it lower} levels of the MERA. That is, the {\it height} of a level will increase as we move upwards in the figure. Each ``level" of the MERA will include two rows of the figure, one with the isometry $W$ and one with the isometry $V$. \begin{figure} \includegraphics[width=8cm]{mera.pdf} \caption{Illustration of MERA network. Circle at top represents state $\psi_0$.
Isometry $V_1$ is represented by the lines leading to a pair of circles below it. Isometry $W_1$ is represented by the filled rectangle mapping that pair of circles to another pair of circles (note that in this case, $W_1$ could be absorbed into a redefinition of $V_1$, while $W_i$ for $i>1$ cannot be absorbed into $V_i$). Isometry $V_2$ maps each circle in the pair to another pair of circles. Isometry $W_2$ maps the four sites to another four sites. The isometry on sites $1,2$ is represented by the filled rectangle in the middle, while the isometry on sites $0,3$ is represented by the lines leading to half a filled rectangle on left and right sides of the figure.} \label{MERAFig} \end{figure} Note that pairs of sites are defined modulo $2^{k}$ in the definition of $W_k$. If the sites are written on a line in order $0,...,2^{k}-1$, then $W_k$ will entangle the rightmost and leftmost sites. The introduction of $W_1$ in the definition of $\Psi$ above is slightly redundant, since $V_1$ already produces entanglement between sites $0,1$; however, we leave $W_1$ in to keep the definition of the MERA consistent from level to level. We will explain the choice of dimensions $D_k,D'_k$ later. In a difference from traditional MERA states, the dimensions $D_k,D'_k$ will be chosen differently at each level. Further, the dimension $D_{k}$ will be larger than $D'_k$. That is, the $W_k$ (sometimes called ``disentanglers") will have the effect of increasing the Hilbert space dimension of each site, and hence of the system as a whole. The isometries $W_k,V_k$ will be chosen randomly. More precisely, each $V_k$ is a product of isometries on each of the $2^{k-1}$ sites, mapping each site to a pair of sites. Each of the isometries in this product will be chosen at random from the Haar uniform distribution, independently of all other isometries.
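A standard way to sample such Haar-random isometries numerically (a sketch, not part of the construction itself) is the phase-fixed QR decomposition of a complex Gaussian matrix:

```python
import numpy as np

def haar_isometry(d_in, d_out, rng):
    # A Haar-random isometry: orthonormalize the columns of a complex
    # Gaussian d_out x d_in matrix by QR, fixing the phases of the
    # diagonal of R so the resulting distribution is exactly Haar.
    z = rng.normal(size=(d_out, d_in)) + 1j * rng.normal(size=(d_out, d_in))
    q, r = np.linalg.qr(z)
    ph = np.diagonal(r) / np.abs(np.diagonal(r))
    return q * ph.conj()

rng = np.random.default_rng(0)
W = haar_isometry(3, 7, rng)                     # illustrative dimensions
assert np.allclose(W.conj().T @ W, np.eye(3))    # isometry condition
```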
Similarly, each $W_k$ is also a product of isometries, each of which will again be chosen at random from the Haar uniform distribution, independently of all other isometries. \section{Entanglement Entropy of Interval} We now estimate the entanglement entropy of an interval of sites. We start with some notation. We write $[i,j]$ to denote the interval of sites $i,i+1,...,j-1,j$. We define $\psi(k)=W_{k} V_{k} ... W_1 V_1 \psi_0$, so that $\psi=\psi(L)$ and we define $\sigma(k)=|\psi(k)\rangle\langle \psi(k)|$. We define $\phi(k)=V_{k} W_{k-1} V_{k-1} ... W_1 V_1 \psi_0$ and we define $\tau(k)=|\phi(k)\rangle\langle\phi(k)|$. We begin with an upper bound to the von Neumann entropy using a recurrence relation. We then derive a similar recurrence relation for the expectation value of the second Renyi entropy and use that to lower bound the expected von Neumann entropy. We then combine these bounds to get an estimate on the expected entropy of an interval. These general bounds will hold for any sufficiently large choice of $D_k,D'_k$; we then specialize to a particular choice to obtain the desired state with large entanglement. \subsection{Upper Bound to von Neumann Entropy By Recurrence Relation} We begin with a trivial upper bound for $S(\sigma(k)_{[i,j]})$, which denotes the von Neumann entropy of the reduced density matrix of $\sigma(k)$ on the interval $[i,j]$. Since the $W_k$ are isometries, we have \begin{equation} \label{isomresultW} i \,{\rm mod}\, 2=1,j \,{\rm mod}\, 2=0 \; \rightarrow \; S(\sigma(k)_{[i,j]})=S(\tau(k)_{[i,j]}). \end{equation} That is, in the case that $i$ is odd and $j$ is even, the interval $[i,j]$ in state $\sigma(k)$ is obtained by an isometry acting on that interval in the state $\tau(k)$. If $i=j$, we have the bound \begin{equation} \label{ieqjbound} i = j \; \rightarrow \; S(\sigma(k)_{[i,j]}) \leq \log(D_k).
\end{equation} In all other cases (if, for example $i$ is even or $j$ is odd or both), the entropy can be bounded above using subadditivity: \begin{equation} \label{subaddbound} S(\sigma(k)_{[i,j]})\leq S(\sigma(k)_{[m,n]})+\log(D_k) (|m-i| + |n-j|), \end{equation} for any choice $m,n$. Combining Eqs.~(\ref{isomresultW},\ref{subaddbound}) gives us the bound \begin{equation} \label{svnsigmabound} S(\sigma(k)_{[i,j]}) \leq {\rm min}_{m,n \, s.t. \, |m-i| \leq 1, |n-j| \leq 1}^{m \,{\rm mod}\, 2=1,n\,{\rm mod}\, 2=0} \Bigl( S(\tau(k)_{[m,n]})+\log(D_k) (|m-i| + |n-j|) \Bigr). \end{equation} Although in fact this equation holds for any choices of $m,n$, for all applications we will restrict to $m,n$ such that $|m-i|\leq 1,|n-j| \leq 1$. We emphasize that in the above equation, and from now on, all differences, such as $m-i$, are taken modulo the number of sites at the given level of the MERA. When we compute a difference such as $m-i$, by $|m-i|$ we mean the integer $k$ with minimum $|k|$, such that $m-i = k$ modulo the number of sites. Similarly, if we write, for two sites, $i,j$ that $i=j+1$, we again mean modulo the number of sites at the given level. Of course, if we have the empty interval, which we write as $[i,j]$ for $i=j+1 \,{\rm mod}\, 2^k$, then the entropy is equal to $0$. Similarly we have \begin{equation} \label{svntaubound} S(\tau(k)_{[i,j]}) \leq {\rm min}_{m,n \, s.t. \, |m-i| \leq 1, |n-j| \leq 1}^{m \,{\rm mod}\, 2 =0, n \,{\rm mod}\, 2=1} \Bigl( S(\sigma(k-1)_{[m/2,(n-1)/2]})+\log(D'_{k}) (|m-i| + |n-j|) \Bigr). \end{equation} We will only use Eq.~(\ref{svntaubound}) with $|m-i| \leq 1, |n-j| \leq 1$. \subsection{Expectation Value of Renyi Entropy} We now obtain a recurrence relation for the expectation value of the Renyi entropy $S_2$, defined by $S_2(\rho)=-\log({\rm tr}(\rho^2))$. 
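The reason a lower bound on the expected $S_2$ also lower-bounds the expected von Neumann entropy is the standard inequality $S_2(\rho)\le S(\rho)$; a quick numeric spot check on a random density matrix (illustrative dimensions only):

```python
import numpy as np

# Renyi-2 entropy never exceeds the von Neumann entropy.
rng = np.random.default_rng(3)
z = rng.normal(size=(4, 16)) + 1j * rng.normal(size=(4, 16))
rho = z @ z.conj().T
rho /= np.trace(rho).real            # random 4x4 density matrix
lam = np.linalg.eigvalsh(rho)
lam = lam[lam > 1e-12]
S = -np.sum(lam * np.log(lam))       # von Neumann entropy
S2 = -np.log(np.sum(lam ** 2))       # Renyi-2 entropy
assert S2 <= S + 1e-9
```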
The analogue of Eq.~(\ref{isomresultW}) still holds for $S_2$: \begin{equation} \label{S2isomresultW} i \,{\rm mod}\, 2=1,j \,{\rm mod}\, 2=0 \; \rightarrow \; S_2(\sigma(k)_{[i,j]})=S_2(\tau(k)_{[i,j]}), \end{equation} as does \begin{equation} \label{reduce} S_2(\sigma(k)_{[i,j]}) \leq {\rm min}_{m,n \, s.t. \, |m-i| \leq 1, |n-j| \leq 1}^{m \,{\rm mod}\, 2=1,n\,{\rm mod}\, 2=0} \Bigl( S_2(\tau(k)_{[m,n]})+\log(D_k) (|m-i| + |n-j|) \Bigr) \end{equation} and \begin{equation} \label{reduce2} S_2(\tau(k)_{[i,j]}) \leq {\rm min}_{m,n \, s.t. \, |m-i| \leq 1, |n-j| \leq 1}^{m \,{\rm mod}\, 2 =0, n \,{\rm mod}\, 2=1} \Bigl( S_2(\sigma(k-1)_{[m/2,(n-1)/2]})+\log(D'_{k}) (|m-i| + |n-j|) \Bigr). \end{equation} We refer to Eqs.~(\ref{reduce},\ref{reduce2}) as the {\it reduction equations}. These equations make sense also in the case that we have $m>n$. This can occur, for example, if $j=i$ or $j=i+1$, in which case the equations allow us to bound $S_2(\sigma(k)_{[i,j]}) \leq \log(D_k) |j-i+1|$ and $S_2(\tau(k)_{[i,j]}) \leq \log(D'_k) |j-i+1|$. We now show that the upper bound given by repeatedly applying these equations to obtain the optimum result (i.e., the result which minimizes the $S_2$) is tight for the expectation value of $S_2$, up to some corrections proportional to a certain level in the MERA. That is, we give a lower bound on the expectation value of $S_2$. Consider first $S_2(\sigma(k)_{[i,j]})$. Assume first that $i \neq j$ and $i\,{\rm mod}\, 2=0,j \,{\rm mod}\, 2=0$ (we discuss the other cases later; they will be very analogous to this case). We write the Hilbert space of the system of sites $0,...,2^{k}-1$ as a tensor product of four Hilbert spaces. These will be labelled $A_1,A_2,B,R$ where $A_1$ is the Hilbert space on site $i-1$, $A_2$ is the Hilbert space on site $i$, $B$ is the Hilbert space on sites $i+1,...,j$, and $R$ is the Hilbert space on all other sites.
In this case, the Hilbert space on a set of sites refers to the case in which there is a $D_k$-dimensional Hilbert space on each site. In this notation, $S_2(\sigma(k)_{[i,j]})=S_2(\sigma(k)_{A_2 B})$. The isometry $W_{k}$ is a product of isometries on pairs of sites. Write $W_{k}=W X$, where $W$ is the isometry acting on the pair of sites $i-1,i$ and $X$ is the product of all other isometries. The isometry $W_{k}$ maps from a system of $2^k$ sites to a system of $2^k$ sites, but it changes the Hilbert space dimension from $D'_{k}$ to $D_k$. We introduce different notation to write the Hilbert space of the system with a $D'_{k}$-dimensional space on each site. We write it as a product of spaces $a,b,r$, where $a$ is the Hilbert space for sites $i-1,i$, $b$ is the Hilbert space on sites $i+1,...,j$, and $r$ is the Hilbert space on all other sites. Then, \begin{equation} S_2(\sigma(k)_{A_2 B})=S_2\Bigl({\rm tr}_{A_1 r}(W \tau(k) W^\dagger)\Bigr). \end{equation} That is, we wish to compute the entanglement entropy of the state $W \tau(k) W^\dagger$ on $A_2 b$. The isometry $W$ is from $a$ to $A_1 \otimes A_2$. Note that since the logarithm is a concave function and so the negative of the logarithm is a convex function, we have \begin{eqnarray} \label{convexity} E\Bigl[S_2\Bigl({\rm tr}_{A_1 r}(W \tau(k) W^\dagger)\Bigr)\Bigr]_W &=& -E\Bigl[\log {\rm tr}\Bigl([{\rm tr}_{A_1 r}(W \tau(k) W^\dagger)]^2\Bigr)\Bigr]_W \\ \nonumber &\geq & -\log E\Bigl[{\rm tr}\Bigl([{\rm tr}_{A_1 r}(W \tau(k) W^\dagger)]^2\Bigr)\Bigr]_W, \end{eqnarray} where $E[\ldots]_W$ denotes the average over $W$. The trace ${\rm tr}\Bigl([{\rm tr}_{A_1 r}(W \tau(k) W^\dagger)]^2\Bigr)$ is a second-order polynomial in $W$ and a second-order polynomial in the complex conjugate of $W$.
For an arbitrary isometry $W$ from a Hilbert space of dimension $d_1$ to a Hilbert space of dimension $d_2$ (in this case, $d_1=(D'_{k})^2$ and $d_2=(D_k)^2$ since $W$ is an isometry from pairs of sites to pairs of sites; note also that $d_2 \geq d_1$ since this is an isometry), we can average this trace over choices of $W$ using the identity for the matrix elements of $W$ and $\overline W$: \begin{eqnarray} \label{identity} &&E[W_{ij} \overline W_{kl} W_{ab} \overline W_{cd}]_W \\ \nonumber &=& c\Bigl( \delta_{ik} \delta_{jl} \delta_{ac} \delta_{bd}+\delta_{ic} \delta_{jd} \delta_{ka}\delta_{lb} \Bigr) \\ \nonumber &&+c'\Bigl( \delta_{ik} \delta_{jd} \delta_{ac} \delta_{lb} + \delta_{ic} \delta_{jl} \delta_{ka} \delta_{bd} \Bigr), \end{eqnarray} where \begin{equation} \label{cis} c=\frac{d_1^2 \cdot(d_1^2d_2^2+d_1d_2)-d_1\cdot(d_1^2d_2+d_1d_2^2)}{(d_1^2d_2^2+d_1d_2)^2-(d_1^2d_2+d_1d_2^2)^2}, \end{equation} and \begin{equation} \label{cprimeis} c'= \frac{d_1\cdot(d_1^2d_2^2+d_1d_2)-d_1^2\cdot(d_1^2d_2+d_1d_2^2)}{(d_1^2d_2^2+d_1d_2)^2-(d_1^2d_2+d_1d_2^2)^2} \end{equation} Eq.~(\ref{identity}) is illustrated in Fig.~\ref{WidFig}. Some of these averages are similar to calculations in Ref.~\onlinecite{hw}. Note that the right-hand side of Eq.~(\ref{identity}) is the most general function that is invariant under unitary rotations $W \rightarrow U W U'$ for arbitrary unitaries $U,U'$ and invariant under interchange $i,j \leftrightarrow a,b$ or $k,l \leftrightarrow c,d$. The constants $c,c'$ can be fixed by taking traces with $\delta_{ik} \delta_{jl} \delta_{ac} \delta_{bd}$ and $\delta_{ik} \delta_{jd} \delta_{ac} \delta_{lb}$ and computing the expectation value. The trace of the right-hand side with $\delta_{ik} \delta_{jl} \delta_{ac} \delta_{bd}$ is equal to $c(d_1^2d_2^2+d_1d_2)+c'(d_1^2d_2+d_1d_2^2)$. 
One can readily show that the trace with the left-hand side is equal to $d_1^2$, as the trace with $\delta_{ik} \delta_{jl} \delta_{ac} \delta_{bd}$ is independent of the choice of $W$. The trace of the right-hand side with $\delta_{ik} \delta_{jd} \delta_{ac} \delta_{lb}$ is equal to $c'(d_1^2d_2^2+d_1d_2)+c(d_1^2d_2+d_1d_2^2)$, while the trace with the left-hand side is equal to $d_1$. So, this gives \begin{equation} d_1^2=c(d_1^2d_2^2+d_1d_2)+c'(d_1^2d_2+d_1d_2^2), \end{equation} \begin{equation} d_1=c'(d_1^2d_2^2+d_1d_2)+c(d_1^2d_2+d_1d_2^2). \end{equation} Solving these gives Eqs.~(\ref{cis},\ref{cprimeis}). \begin{figure} \includegraphics[width=10cm]{Wav.pdf} \caption{Identity for average of $W$. Left-hand side represents the expectation value of a product of two powers of $W$ and two powers of $\overline W$. Right-hand side pictorially shows the result of Eq.~(\ref{identity}), where the arcs represent Kronecker $\delta$'s.} \label{WidFig} \end{figure} If we use Eq.~(\ref{identity}) to compute $E\Bigl[{\rm tr}\Bigl([{\rm tr}_{A_1 r}(W \tau(k) W^\dagger)]^2\Bigr)\Bigr]_W$, we find a sum of four terms, one for each of the terms on the right-hand side of Eq.~(\ref{identity}). The result is \begin{eqnarray} \label{resultS2av} &&(c+c')D_k^3\Bigl({\rm tr}\Bigl([{\rm tr}_{a r}(\tau(k))]^2\Bigr)+{\rm tr}\Bigl([{\rm tr}_{r}(\tau(k))]^2\Bigr)\Bigr) \\ \nonumber &=& \frac{D_k^3}{D_k^4+D_k^2}\Bigl({\rm tr}\Bigl([{\rm tr}_{a r}(\tau(k))]^2\Bigr)+{\rm tr}\Bigl([{\rm tr}_{r}(\tau(k))]^2\Bigr)\Bigr) \\ \nonumber &=& \frac{1}{D_k}\cdot(1-O(1/D_k))\cdot \Bigl({\rm tr}\Bigl([{\rm tr}_{a r}(\tau(k))]^2\Bigr)+{\rm tr}\Bigl([{\rm tr}_{r}(\tau(k))]^2\Bigr)\Bigr), \end{eqnarray} where the asymptotic $O(...)$ notation refers to scaling in $D_k$ in this equation. Note that since both terms on the right-hand side of the last line of Eq.~(\ref{resultS2av}) are positive, the last line is at most equal to twice the maximum term.
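As a sanity check on this algebra, the following short Python sketch (with arbitrarily chosen dimensions) solves the two trace conditions numerically and compares the result with the closed forms of Eqs.~(\ref{cis},\ref{cprimeis}):

```python
import numpy as np

d1, d2 = 4, 9   # input and output dimensions of the isometry, d2 >= d1

p = d1**2 * d2**2 + d1 * d2   # coefficient multiplying the "direct" pairing
q = d1**2 * d2 + d1 * d2**2   # coefficient multiplying the "crossed" pairing

# the two trace conditions: d1^2 = c*p + c'*q  and  d1 = c'*p + c*q
c, cp = np.linalg.solve(np.array([[p, q], [q, p]], dtype=float),
                        np.array([d1**2, d1], dtype=float))

# closed forms quoted in the text
c_closed = (d1**2 * p - d1 * q) / (p**2 - q**2)
cp_closed = (d1 * p - d1**2 * q) / (p**2 - q**2)
assert np.isclose(c, c_closed) and np.isclose(cp, cp_closed)
```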
Note that the terms on the right-hand side are related to Renyi entropy; for example, minus the logarithm of ${\rm tr}\Bigl([{\rm tr}_{a r}(\tau(k))]^2\Bigr)$ is equal to $S_2(\tau(k)_{ar})$, i.e. the $S_2$ Renyi entropy of $\tau(k)$ on $ar$, and similarly for $r$. So, we get: \begin{equation} \label{logsumexp} E[S_2(\sigma(k)_{[i,j]})]_W \geq -\log\Bigl\{\sum_{m,n \, s.t. \, |m-i| \leq 1, |n-j| \leq 1}^{m \,{\rm mod}\, 2=1,n\,{\rm mod}\, 2=0} \exp\Bigl(-S_2(\tau(k)_{[m,n]})-\log(D_k) (|m-i| + |n-j|) \Bigr)\Bigr\}. \end{equation} The reader might note that we have in fact only derived Eq.~(\ref{subaddboundavg}) in the case that $j \,{\rm mod}\, 2=0$. However, by averaging over isometries at both left and right ends of the interval, one can handle the case that $j \,{\rm mod}\, 2=1$ identically to above. Note also that since all terms in the sum of Eq.~(\ref{logsumexp}) are positive, the sum is bounded by a constant times the maximum term. So, minus the logarithm of the sum is equal to $$ {\rm min}_{m,n}\Bigl(S_2(\tau(k)_{[m,n]})+\log(D_k) (|m-i| + |n-j|)\Bigr)-O(1).$$ So, using Eq.~(\ref{convexity}), we have that \begin{equation} \label{subaddboundavg} E[S_2(\sigma(k)_{[i,j]})]_W \geq {\rm min}_{m,n \, s.t. \, |m-i| \leq 1, |n-j| \leq 1}^{m \,{\rm mod}\, 2=1,n\,{\rm mod}\, 2=0} \Bigl( S_2(\tau(k)_{[m,n]})+\log(D_k) (|m-i| + |n-j|) \Bigr)-O(1). \end{equation} Here the $O(1)$ notation refers to a term bounded by a constant, independent of the dimensions $D_k,D'_k$. We will also want the result that \begin{equation} \label{exprecursigma} E[\exp\Bigl(-S_2(\sigma(k)_{[i,j]})\Bigr)]_W \leq \sum_{m,n \, s.t. \, |m-i| \leq 1, |n-j| \leq 1}^{m \,{\rm mod}\, 2=1,n\,{\rm mod}\, 2=0} \exp\Bigl(-S_2(\tau(k)_{[m,n]})-\log(D_k) (|m-i| + |n-j|) \Bigr).
\end{equation} Note that the left-hand side of the above equation is the quantity $E\Bigl[{\rm tr}\Bigl([{\rm tr}_{A_1 r}(W \tau(k) W^\dagger)]^2\Bigr)\Bigr]_W$ that we have been considering, and Eq.~(\ref{logsumexp}) follows from this by convexity. We also give the analogs of Eqs.~(\ref{logsumexp},\ref{subaddboundavg},\ref{exprecursigma}) for the entropy of $\tau(k)$: \begin{equation} \label{logsumexptau} E[S_2(\tau(k)_{[i,j]})]_W \geq -\log\Bigl\{\sum_{m,n \, s.t. \, |m-i| \leq 1, |n-j| \leq 1}^{m \,{\rm mod}\, 2 =0, n \,{\rm mod}\, 2=1} \exp\Bigl(-S_2(\sigma(k-1)_{[m/2,(n-1)/2]})-\log(D'_{k}) (|m-i| + |n-j|) \Bigr)\Bigr\}, \end{equation} \begin{equation} \label{subaddboundavgtau} E[S_2(\tau(k)_{[i,j]})]_W \geq {\rm min}_{m,n \, s.t. \, |m-i| \leq 1, |n-j| \leq 1}^{m \,{\rm mod}\, 2 =0, n \,{\rm mod}\, 2=1} \Bigl( S_2(\sigma(k-1)_{[m/2,(n-1)/2]})+\log(D'_{k}) (|m-i| + |n-j|) \Bigr)-O(1), \end{equation} \begin{equation} \label{exprecurtau} E[\exp\Bigl(-S_2(\tau(k)_{[i,j]})\Bigr)]_W \leq \sum_{m,n \, s.t. \, |m-i| \leq 1, |n-j| \leq 1}^{m \,{\rm mod}\, 2 =0, n \,{\rm mod}\, 2=1} \exp\Bigl(-S_2(\sigma(k-1)_{[m/2,(n-1)/2]})-\log(D'_{k}) (|m-i| + |n-j|) \Bigr). \end{equation} This now allows us to upper and lower bound the expectation value of $S_2$ for any interval $[i,j]$. Given an interval $[i,j]$, let a {\it reduction sequence} denote a sequence of choices at each level to reduce $[i,j]$ to the empty interval: at each step we apply either Eq.~(\ref{reduce}) or Eq.~(\ref{reduce2}) until we are left with $i>j$, i.e., with the empty interval, which has entropy $0$. That is, such a sequence consists first of a choice $m,n$ with $m \,{\rm mod}\, 2=1,n\,{\rm mod}\, 2=0$, followed by a choice $m,n$ with $m \,{\rm mod}\, 2 =0, n \,{\rm mod}\, 2=1$, and so on, with $i,j$ at each step being determined by the $m,n$ at the previous step.
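The bookkeeping behind repeatedly applying the reduction equations can be made concrete with a small memoized recursion. The following Python sketch is a toy model: it assumes a uniform dimension schedule in which every log-dimension equals $1$, takes all positions modulo the number of sites at the given level, and terminates any interval of length at most two using the trivial dimension bound. It computes the best upper bound obtainable by combining the reduction equations with that dimension bound:

```python
from functools import lru_cache

L = 6
# assumed toy dimension schedule: log D_k = log D'_k = 1 at every level
logD = {k: 1.0 for k in range(L + 1)}
logDp = {k: 1.0 for k in range(L + 1)}

@lru_cache(maxsize=None)
def bound(stage, k, i, j):
    """Best upper bound on S_2 of sigma(k)/tau(k) on [i,j] from the reduction equations."""
    N = 2 ** k
    length = (j - i) % N + 1
    ld = logD[k] if stage == 'sigma' else logDp[k]
    best = ld * length          # dimension bound: remove the interval directly
    if length > 2:
        for m in (i - 1, i, i + 1):
            for n in (j - 1, j, j + 1):
                cost = ld * (abs(m - i) + abs(n - j))
                # analogue of Eq. (reduce): sigma(k) -> tau(k), m odd, n even
                if stage == 'sigma' and m % 2 == 1 and n % 2 == 0:
                    best = min(best, cost + bound('tau', k, m % N, n % N))
                # analogue of Eq. (reduce2): tau(k) -> sigma(k-1), m even, n odd
                if stage == 'tau' and m % 2 == 0 and n % 2 == 1:
                    best = min(best, cost +
                               bound('sigma', k - 1, (m % N) // 2, ((n % N) - 1) // 2))
    return best

b = bound('tau', L, 10, 25)    # an interval of length 16 on 2^6 sites
assert 1.0 <= b <= 16.0        # nontrivial, and better than the raw dimension bound
```

The level and parity conventions here are a plausible reading of the equations above rather than a faithful implementation of any particular MERA; the point is only to illustrate how the minimum over reduction sequences can be computed by dynamic programming.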
For such a sequence $Q$, let $S(Q)$ denote the upper bound to the entropy obtained from the reduction equations; in this reduction, once we obtain the empty interval, we use the fact that it has entropy $0$. Let $h(Q)$ denote the {\it height} of a given reduction sequence, namely the number of times we apply the reduction equations until we arrive at the empty interval. Note that the height increases by $2$ every time we change the level by $1$, since we apply both equations. Then, we have the result that \begin{equation} S_2(\tau_{[i,j]})\leq {\rm min}_Q S(Q), \end{equation} and \begin{equation} \label{seqsum} E[\exp(-S_2(\tau_{[i,j]}))] \leq \sum_Q \exp(-S(Q)), \end{equation} as follows by using Eqs.~(\ref{exprecursigma},\ref{exprecurtau}) to sum $\exp(-S_2(\ldots))$ over reduction sequences. As a point of notation, from now on we use $E[...]$ to denote the average over all $W,V$ in the MERA. Finally, we have \begin{lemma} \label{lbndlemma} \begin{equation} \label{lbnd} E[S_2(\tau_{[i,j]})] \geq {\rm min}_Q S(Q)-O(1) h(Q). \end{equation} \begin{proof} From Eq.~(\ref{seqsum}) and convexity of $-\log(\ldots)$, \begin{equation} E[S_2(\tau_{[i,j]})] \geq -\log\{\sum_Q \exp(-S(Q))\}. \end{equation} Write the sum over $Q$ inside the logarithm as a sum over heights, \begin{equation} \sum_Q \exp(-S(Q))=\sum_h \sum_{Q, h(Q)=h} \exp(-S(Q)). \end{equation} Since there are at most $4^h$ sequences of height $h$ (we have at most two choices at each end of the interval at each step), \begin{equation} \sum_{Q, h(Q)=h} \exp(-S(Q)) \leq {\rm max}_{Q,h(Q)=h} \exp(-S(Q)+\ln(4) h(Q)). \end{equation} Now, we use a general identity. Let $g(x)$ be any positive function such that $\sum_{x=1,2,\ldots} g(x)^{-1}$ converges to some constant $c$. Then, for any positive function $f(x)$, we have that $\sum_{x=1,2,\ldots} f(x) \leq c \cdot {\rm max}_{x=1,2,\ldots} f(x) g(x)$.
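This identity is also easy to check numerically. The sketch below takes $g(h)=2^h$, for which the constant $c$ is at most $1$; the values of $f$ are arbitrary positive numbers:

```python
import math

# g(h) = 2**h for h = 1, 2, ..., so that sum_h g(h)^{-1} <= 1
f = [math.exp(-s) for s in (3.0, 1.5, 4.0, 2.2, 6.0)]   # arbitrary positive f(h)

lhs = sum(f)
# max_h f(h) g(h), with h starting at 1
rhs = max(fh * 2.0 ** (h + 1) for h, fh in enumerate(f))
assert lhs <= rhs   # sum_h f(h) <= c * max_h f(h) g(h), here with c = 1
```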
To verify this identity, minimize $c \cdot {\rm max}_{x=1,2,\ldots} f(x) g(x)$ over positive functions $f$ subject to a constraint on $\sum_{x=1,2,\ldots} f(x)$; the minimum will be attained for $f(x)$ proportional to $1/g(x)$ and plugging in this choice of $f(x)$ gives the identity. So, picking $g(x)=2^x$, so that $c=1$, we get that \begin{eqnarray} && \sum_h {\rm max}_{Q,h(Q)=h} \exp(-S(Q)+\ln(4) h(Q)) \\ \nonumber & \leq & {\rm max}_h {\rm max}_{Q,h(Q)=h} \exp(-S(Q)+\ln(8) h(Q)) \\ \nonumber &=& {\rm max}_{Q} \exp(-S(Q)+\ln(8) h(Q)). \end{eqnarray} Then, Eq.~(\ref{lbnd}) follows, choosing the $O(1)$ constant to be $\ln(8)$. \end{proof} \end{lemma} We remark (we will not need this for this paper) that for some choices of $D_k,D'_k$, the sum in Eq.~(\ref{seqsum}) will be dominated by a single reduction sequence. In that event, it will be possible to tighten Eq.~(\ref{lbnd}) by improving on the term $-O(1)h(Q)$ on the right-hand side. Further, we also have \begin{lemma} \label{vnineq} The following inequalities for the von Neumann entropy hold: \begin{equation} \label{l1} S(\tau_{[i,j]})\leq {\rm min}_Q S(Q), \end{equation} \begin{equation} \label{l2} E[S(\tau_{[i,j]})] \geq {\rm min}_Q S(Q)-O(1) h(Q). \end{equation} \begin{proof} Eq.~(\ref{l1}) holds by the reduction equations (\ref{svnsigmabound},\ref{svntaubound}) for $S$, and Eq.~(\ref{l2}) follows by lemma \ref{lbndlemma} since $S \geq S_2$. \end{proof} \end{lemma} \section{Choice of $D_k,D'_k$} We now give the choice of $D_k,D'_k$. At the bottom of the MERA state, the leaves have dimension $D_L$, chosen to be any fixed value greater than $1$. For example, we may take $D_L=2$. Then we follow the recursion relations: \begin{equation} \label{apprecur1} \log(D'_k) \approx \log(D_{k})-\epsilon 2^{L-k}, \end{equation} for all $k$ and for $k<L$ \begin{equation} \label{apprecur2} \log(D_{k})\approx 2\log(D'_{k+1})-\epsilon 2^{L-k-1}.
\end{equation} The value $\epsilon$ here will be related to the $\epsilon$ in the Brandao-Horodecki paper and to the mutual information that we find between intervals. The factor of $2^{L-k}$ represents the length scale associated to a given level in the MERA state: there are roughly $2^{L-k}$ leaves of the MERA in the future light-cone of a given node at level $k$. Here, the ``future light-cone" refers to the leaves such that there is a path in the MERA starting at the given node at level $k$ and moving downward, ending at the given leaf. The usual terminology in MERA instead refers to a {\it causal cone} of operators being mapped upwards to higher levels of the MERA; we discuss this later. We write the approximation symbol $\approx$ rather than the equals symbol $=$ because the dimensions $D_k,D'_k$ should be integers. So, the recursion relations we use to obtain integer dimensions are \begin{equation} D'_k=\lceil \exp\{ \log(D_{k})-\epsilon 2^{L-k} \} \rceil, \end{equation} \begin{equation} D_{k} =\lceil \exp\{ 2\log(D'_{k+1})-\epsilon 2^{L-k-1} \} \rceil. \end{equation} We choose $\epsilon,L$ so that $D_0=1$. This can be done by taking $L \sim 1/\epsilon$, so that the total number of sites in the system is equal to $\exp(\Theta(1/\epsilon))$. The calculation is essentially that in Ref.~\onlinecite{hastings1darea} and Ref.~\onlinecite{bh}, where both papers used a recursion relation for the entropy. Let us study the recursion relations ignoring the complications of the ceiling; that is, we treat Eqs.~(\ref{apprecur1},\ref{apprecur2}) as if they were exact. The ceiling in the correct recursion relations has negligible effect on the scaling behavior. We have $D_L$ given. Then, $\log(D_{L-1})=2\log(D_L)-3\epsilon$. Then, $\log(D_{L-2})=4\log(D_L)-12\epsilon$ and $\log(D_{L-3})=8\log(D_L)-36\epsilon$. In general, \begin{equation} \label{Dkest} \log(D_{L-m}) \approx 2^m\log(D_L)-3 m\epsilon 2^{(m-1)}.
\end{equation} This remains positive until $m \sim 1/\epsilon$; so, as claimed, we can take $L \sim 1/\epsilon$. Note also that for all $m<L-1$, \begin{equation} \log(D_{L-m}) \gtrsim 2^m \epsilon. \end{equation} We say this for all $m<L-1$ because $\log(D_1)$ must be positive, so $\log(D_2)$ must be at least $\epsilon 2^{L-2}$; for many choices of $\epsilon,L$, a similar inequality will hold even for $m=L-1$, and we will always choose $\epsilon,L$ such that this holds. \subsection{Entanglement Entropy For This Choice} We now estimate the entanglement entropy for this choice of $D_k,D'_k$ for an interval $[i,j]$. We make a remark on the Big-O notation that we use. When we say in lemma \ref{intentlemma} and lemma \ref{milemma} that a quantity is $\Omega(x)$, we mean that it is lower bounded by $c_1x-c_2\log(l)-c_3$ for some positive constants $c_1,c_2,c_3$ which do not depend on $D_L,\epsilon$. We emphasize this because otherwise one might worry about subleading terms hidden in the Big-O notation: since the leading term often involves a factor of $\epsilon$ (at least in lemma \ref{milemma}), a quantity such as $\epsilon l$ becomes large only once $l$ becomes large enough, and so one might worry about the simultaneous limits of large $l$ and small $\epsilon$. The notation $O(1)$ continues to refer to a quantity bounded by a constant, independent of $\epsilon,D_L$. \begin{lemma} \label{intentlemma} The expected entanglement entropy of an interval $[i,j]$ with length $l=j-i+1$ with $l \neq N/2$ is lower bounded by \begin{equation} \label{intentlower} E[S(\tau_{[i,j]})] \geq \Omega(\log(D_{L-\log_2(l)})). \end{equation} \begin{proof} We estimate ${\rm min}_Q S(Q)-O(1) h(Q)$ and apply lemma \ref{vnineq}. For any choice of $[i,j]$, for any sequence $Q$, each time we apply Eq.~(\ref{reduce}) or Eq.~(\ref{reduce2}), it is possible that we produce a positive term, $\log(D_k) (|m-i| + |n-j|)$ or $\log(D'_{k}) (|m-i| + |n-j|)$, respectively.
Let us say that if $|m-i|=1$ then the term is applied at the ``left end" of the interval, while if $|n-j|=1$, then the term is applied at the right end of the interval (as the interval changes as we change level in the MERA by applying Eqs.~(\ref{reduce},\ref{reduce2}), we continue to define the left end and right end in the natural way). One may verify that at least every other time we apply the equations, we must produce a positive term at the left end, and at least every other time we apply the equations, we must produce a positive term at the right end. That is, if Eq.~(\ref{reduce}) does not produce a positive term at the left (or right) end, then Eq.~(\ref{reduce2}) must produce a positive term at the left (or right, respectively) end. The only exception to this is if the interval becomes sufficiently long that it includes all sites at the given level of the MERA; this does not happen for the intervals considered here. So, \begin{equation} \label{SQlower} S(Q)\geq 2\Bigl(\log(D_L)+\log(D_{L-1})+\log(D_{L-2})+\ldots+\log(D_{L-\lfloor h(Q)/2 \rfloor})\Bigr). \end{equation} We now estimate the minimum $h(Q)$. Every time we apply Eq.~(\ref{reduce}) and then Eq.~(\ref{reduce2}), an interval of length $l$ turns into an interval of length at least $l/2-2$. The factor of $-2$ occurs because Eq.~(\ref{reduce}) can reduce the length by at most $2$; then Eq.~(\ref{reduce2}) can further reduce the length by at most $2$ more, and then divide the length by $2$. Thus, $2$ applications of this pair of equations can map an interval of length $l$ to one of length $(l/2-2)/2-2=l/4-3$, and $k$ applications can map an interval of length $l$ to one of length at least $l/2^k-4$. Once the length becomes four or smaller, then the length can be mapped to zero by a pair of applications. Thus, $h(Q)$ is greater than or equal to $2k$ with $l/2^{k}-4 \leq 4$, so $l/2^{k}\leq 8$ (in fact this estimate is not quite tight, as if the length of the interval is nonzero after $k$ applications, then $h(Q)>2k$).
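The height estimate can be illustrated in a few lines of Python. The sketch below iterates the idealized fastest shrink $l \rightarrow l/2-2$ and checks that the resulting height is $2\log_2(l)$ up to an additive constant (the particular constants in the assertions are loose choices, not sharp bounds):

```python
import math

def min_height(l):
    """Idealized fastest reduction: each pair of applications maps l to at best l/2 - 2."""
    h = 0
    while l > 4:           # an interval of length <= 4 empties in one more pair
        l = l / 2 - 2
        h += 2
    return h + 2

for l in (5, 10, 100, 1000, 10**6):
    h = min_height(l)
    assert h >= 2 * math.log2(l) - 8    # h(Q) >= 2 log2(l) - O(1)
    assert h <= 2 * math.log2(l) + 4    # and the estimate is tight up to O(1)
```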
So, \begin{equation} \label{hQlower} h(Q) \geq 2 \log_2(l)-O(1). \end{equation} Combining Eqs.~(\ref{SQlower},\ref{hQlower}) gives Eq.~(\ref{intentlower}). Here we use Eq.~(\ref{Dkest}) to estimate $D_{L-h(Q)}$ and note that $\log(D_{L-\log_2(l)+O(1)}) \geq \Omega(\log(D_{L-\log_2(l)}))$. Further, we use the fact that the term $-O(1) h(Q)$ in $S(Q) - O(1) h(Q)$ is asymptotically negligible compared to $S(Q)$. \end{proof} \end{lemma} \subsection{Mutual Information} We now estimate the mutual information between a pair of neighboring intervals, each of length $l$. We lower bound this by $\Omega(\epsilon l)$. This implies a similar lower bound on the mutual information between a single interval $[i,j]$ of length $2l$ and its two neighboring intervals of length $l$. \begin{lemma} \label{milemma} The expected mutual information between two neighboring intervals $[i,j]$ and $[j',k]$, with $j'=j+1$ and $l=j-i+1=k-j$, is lower bounded by \begin{equation} \label{milower} E[I([i,j];[j',k])] \geq \Omega(\epsilon l). \end{equation} \begin{proof} Call $[i,j]$ the ``left interval" and call $[j',k]$ the ``right interval". Let $Q_L, Q_R$ be reduction sequences for $[i,j]$ and $[j',k]$, respectively, which minimize $S(Q)-O(1)h(Q)$. So, $E[S(\tau_{[i,j]})+S(\tau_{[j',k]})] \geq S(Q_L)+S(Q_R)-O(1) h(Q_L)-O(1)h(Q_R)$. We now show that $S(\tau_{[i,k]}) \leq S(Q_L)+S(Q_R)-\Omega(\epsilon l)$. Note that the optimum reduction sequences always have $h(Q_L), h(Q_R) \leq {\rm const.} \times \log(l)$. So, this upper bound on $S(\tau_{[i,k]})$ will imply Eq.~(\ref{milower}). This bound will be based on constructing a reduction sequence for $[i,k]$; however, we will in one case need to also use subadditivity and then construct further reduction sequences. That is, it will not simply be a matter of applying Eqs.~(\ref{reduce},\ref{reduce2}) with a given sequence, but a more general reduction procedure will be needed. Let $i$ be the left end of interval $[i,j]$ and $j$ be the right end.
Refer to Fig.~\ref{redcurveFig}. The reduction sequence $Q_L$ describes how both the left and right end of the left interval move as we change levels in the MERA. Let $i_0,i_1,i_2,...,i_{h(Q_L)}$ be the sequence describing where the left end is after each application, and let $j_0,...,j_{h(Q_L)}$ describe where the right end is. That is, after $k$ applications of Eqs.~(\ref{reduce},\ref{reduce2}), we have a new interval $[i_k,j_k]$. Eventually, after $h(Q_L)$ applications of the reduction equations, the interval has length zero so that $i_{h(Q_L)}=j_{h(Q_L)}+1$. Similarly, let $j'_0,...,j'_{h(Q_R)}$ and $k_0,...,k_{h(Q_R)}$ be the left and right ends of the right interval. Let $S_L(Q_L)$ denote the sum of the quantities $\log(D_k)|m-i|$ or $\log(D'_{k}) |m-i|$ obtained using Eqs.~(\ref{reduce},\ref{reduce2}) for reduction sequence $Q_L$, and let $S_R(Q_L)$ denote the sum of $\log(D_k)|n-j|$ or $\log(D'_{k}) |n-j|$. That is, these are the sums of the terms at the left or right ends of the interval, so that $S(Q_L)=S_L(Q_L)+S_R(Q_L)$. Define $S_L(Q_R)$ and $S_R(Q_R)$ similarly, so $S(Q_R)=S_L(Q_R)+S_R(Q_R)$. Suppose first that $i_a=k_a$ for some given $a$, i.e., the left end of $Q_L$ meets the right end of $Q_R$. Then, define a reduction sequence $Q$ by taking the sequence $i_0,...,i_a$ for the left end of $Q$ and $k_0,...,k_a$ for the right end of $Q$. Then, $S(\tau_{[i,k]}) \leq S(Q)\leq S_L(Q_L)+S_R(Q_R)=S(Q_L)+S(Q_R)-S_R(Q_L)-S_L(Q_R)$. However, referring to the calculation in lemma \ref{intentlemma}, $S_R(Q_L)\geq \Omega(\epsilon l)$, as is $S_L(Q_R)$, which gives the desired result. So, let us assume that $i_a \neq k_a$ for all $a$. Suppose, without loss of generality, that $h(Q_L) \geq h(Q_R)$.
In fact, it may be that $h(Q_L)$ never differs from $h(Q_R)$ for the optimal sequences $Q_L,Q_R$ for a given pair of equal-length intervals and a given choice of dimensions in the network, so it might suffice to always assume that $h(Q_L)=h(Q_R)$; however, we are able to lower bound the mutual information even in this possibly hypothetical case (it is certainly possible for $h(Q_L)$ to differ from $h(Q_R)$ if the intervals have different lengths). To simplify notation, let $h=h(Q_R)$. Define $B_L(Q_L)$ to be the sum over the first $h$ applications of Eqs.~(\ref{reduce},\ref{reduce2}) in reduction sequence $Q_L$ of $\log(D_k)|m-i|$ or $\log(D'_{k}) |m-i|$, and let $B_R(Q_L)$ denote the sum over the first $h$ applications of $\log(D_k)|n-j|$ or $\log(D'_{k}) |n-j|$. The notation $B_L$ or $B_R$ is intended to indicate that these are the contributions to $S_L$ or $S_R$ arising from the first $h$ applications, i.e., at the ``bottom" of the MERA. Consider applying Eqs.~(\ref{reduce},\ref{reduce2}) a total of $h$ times, using the sequence $i_0,...,i_{h}$ for the left end and $k_0,...,k_{h}$ for the right end. Note that this sequence of reductions may not end at the empty interval; rather, it leaves the interval $[i_{h},k_{h}]$. This gives \begin{equation} \label{firstbound} S(\tau_{[i,k]}) \leq B_L(Q_L)+S_R(Q_R)+\Delta, \end{equation} where $\Delta$ is either \begin{equation} \Delta=S(\tau(L-h/2)_{[i_{h},k_{h}]}) \end{equation} if $h$ is even or \begin{equation} \Delta=S(\sigma(L-(h-1)/2)_{[i_{h},k_{h}]}) \end{equation} if $h$ is odd. That is, $\Delta$ is the entropy of the interval that remains after applying the reduction sequence. \begin{figure} \includegraphics[width=8cm]{redcurve.pdf} \caption{Illustration of part of a MERA network. Only a fragment of the network is shown, so that the three lines leaving upwards connect to other parts of the network, as do the two lines leaving downwards.
We illustrate computing mutual information between two intervals, each of three sites. The left interval is represented by the unfilled circles on the leaves of the MERA, while the right interval is represented by the circles with diagonal lines. Unfilled circles with thin outer lines and circles with diagonal lines at higher levels of the MERA represent the intervals that result from applying Eqs.~(\ref{reduce},\ref{reduce2}) for the optimum sequences $Q_L,Q_R$, respectively. Both sequences have $h=2$. When computing the optimum reduction of the $6$-site interval containing both of these $3$-site intervals, the resulting intervals contain the unfilled circles with thin outer lines, and the circles with diagonal lines and also the unfilled circles with thicker outer lines. Filled circles indicate sites not in any of these reduction sequences. The squiggly lines crossing the lines of the MERA network represent contributions to $S_R(Q_L)$ and $S_L(Q_R)$, while the dashed squiggly line crossing the line at the top represents an extra term in the entropy to reduce the $6$-site interval. The difference between these is equal to the expectation value of the mutual information, up to subleading terms.} \label{redcurveFig} \end{figure} Now use subadditivity. To simplify notation, let us suppose that $h$ is even (we simply do this so that we can write $\tau(...)$ everywhere, rather than having to specify either $\tau(...)$ or $\sigma(...)$ in each case). Then, \begin{equation} \label{split} \Delta \leq S(\tau(L-h/2)_{[i_h,j_h]}) + S(\tau(L-h/2)_{[j_h+1,k_h]}). \end{equation} Note that $k_h=j'_{h}-1$ since the right interval vanishes after $h$ applications of the reduction equations; this makes the interval $[j_h+1,k_h]$ look more symmetric in left and right. Note also that if $h(Q_L)=h(Q_R)$, then $j_h+1=i_h$. 
We can then upper bound $S(\tau(L-h/2)_{[i_h,j_h]})$ using a reduction sequence with left end $i_h,...,i_{h(Q_L)}$ and right end $j_h,...,j_{h(Q_L)}$, giving \begin{equation} S(\tau(L-h/2)_{[i_h,j_h]}) \leq S_L(Q_L)-B_L(Q_L) + S_R(Q_L)-B_R(Q_L). \end{equation} So, from Eqs.~(\ref{firstbound},\ref{split}), \begin{equation} S(\tau_{[i,k]}) \leq S_L(Q_L)+\Bigl( S_R(Q_L)-B_R(Q_L) \Bigr) + S_R(Q_R)+S(\tau(L-h/2)_{[j_h+1,k_h]}). \end{equation} However, \begin{equation} S(\tau(L-h/2)_{[j_h+1,k_h]}) \leq B_R(Q_L)+S_L(Q_R)-\Omega(\epsilon l), \end{equation} which gives the desired bound on the mutual information. To see this, estimate $S(\tau(L-h/2)_{[j_h+1,k_h]})$ using another reduction sequence. In Fig.~\ref{redcurveFig}, the interval $[j_h+1,k_h]$ consists of the two sites with open circles on the row two rows above the bottom (i.e., the bottom row of the level one level above the bottom). The entropy $S(\tau(L-h/2)_{[j_h+1,k_h]})$ is less than or equal to $(k_h-j_h)\log(D_{L-h/2})$. However, $B_R(Q_L)+S_L(Q_R)$ is greater than or equal to $(k_h-j_h)\log(D_{L-h/2})$, as can be seen in Fig.~\ref{redcurveFig}; that is, as the figure shows, the bonds cut by the squiggly lines have dimensions whose logarithms sum to at least $(k_h-j_h)\log(D_{L-h/2})$, the maximum possible entropy of the sites in $[j_h+1,k_h]$. If $k_h-j_h$ is sufficiently large, then in fact $S(\tau(L-h/2)_{[j_h+1,k_h]}) \leq (k_h-j_h)\log(D_{L-h/2}) - \Omega(\epsilon l)$; this simply requires that $k_h-j_h$ be large enough that at least one pair of sites in the interval $[j_h+1,k_h]$ emerges from the same isometry, as occurs in the figure. Alternatively, if $k_h=j_h+1$, then $S(\tau(L-h/2)_{[j_h+1,k_h]})\leq \log(D_{L-h/2})$, while $B_R(Q_L)+S_L(Q_R)\geq 2\log(D_{L-h/2})$. If $k_h \leq j_h$, then $S(\tau(L-h/2)_{[j_h+1,k_h]})=0$. The remaining case is $k_h=j_h+2$ with the two sites not emerging from the same isometry. However, in this case $S(\tau(L-h/2)_{[j_h+1,k_h]})\leq 2 \log(D_{L-h/2})$.
However, by the same calculation in lemma \ref{intentlemma} that gave Eq.~(\ref{SQlower}), we find that $B_R(Q_L)\geq \log(D_L)+\log(D_{L-1})+\ldots+\log(D_{L-h/2})$ and $S_L(Q_R)\geq \log(D_L)+\log(D_{L-1})+\ldots+\log(D_{L-h/2})$, so $B_R(Q_L)+S_L(Q_R) -S(\tau(L-h/2)_{[j_h+1,k_h]}) \geq 2\log(D_{L-h/2+1})+\ldots$, which is $\Omega(\epsilon l)$. \end{proof} \end{lemma} \section{Correlation Decay} We now discuss decay of correlations in this state. We do not prove correlation decay. However, we conjecture that for the MERA state above, for any two regions $X,Y$ separated by distance $l$, we have $Cor(X:Y)\leq C 2^{-l/\xi}$ for some $C=O(1)$ and some $\xi$ bounded by $O(1)/\epsilon$, with probability that tends to $1$ as $D_{L-\log_2(l)}$ tends to infinity. In the discussion, we briefly discuss controlling rare events. The simplest version of this correlation decay to consider is when $X$ consists of a single site and $Y$ is separated from $X$ by at least $1$ site. Thus, the site in $X$ is one of the two sites in the output of some given isometry $W$, and $Y$ does not contain the other site in the output of that isometry. Let us divide the system into three subsystems. Let $B=X$. Let $E$ be the other site which is in the output of the same isometry as $X$, and let $A$ consist of the rest of the system. (We rename $X$ as $B$ to make the notation more suggestive of a quantum channel from Alice to Bob, as we will use ideas from quantum channels.) Since $Y\subset A$, it suffices to consider correlation functions $\langle \psi | O_A O_B |\psi \rangle$ for $O_A,O_B$ supported on $A,B$ respectively.
Consider the two subsystems $A$ and $BE$, and make a singular value decomposition of the wavefunction $\psi$, so that we write \begin{equation} \psi=\sum_{\alpha} A(\alpha) |\alpha\rangle_A \otimes |\alpha\rangle_{BE}, \end{equation} where $|\alpha\rangle_A$ and $|\alpha\rangle_{BE}$ are complete bases of states on $A$ and $BE$, respectively, and $A(\alpha)$ are complex scalars with $\sum_{\alpha} |A(\alpha)|^2=1$. Let $O_A$ have matrix elements $(O_A)_{\beta\alpha}$ in this basis. Then, \begin{eqnarray} && \langle \psi | O_A O_B |\psi \rangle \\ \nonumber &=&\sum_{\alpha,\beta} \overline{A(\beta)} A(\alpha) \Bigl( {}_A\langle \beta | O_A | \alpha \rangle_A \Bigr) \Bigl( {}_{BE} \langle \beta | O_B | \alpha \rangle_{BE} \Bigr) \\ \nonumber &=& {\rm tr}(\tilde O_A O_B), \end{eqnarray} where $\tilde O_A$ is defined by its matrix elements \begin{equation} (\tilde O_A)_{\beta\alpha}=(O_A)_{\alpha\beta}\overline{A(\beta)} A(\alpha). \end{equation} To estimate the correlation decay, we must maximize the correlation function over $O_A,O_B$ with $\Vert O_A \Vert,\Vert O_B \Vert\leq 1$. Since the maximization over operators with bounded infinity norm may not be easy, we instead derive a bound in terms of a maximization over operators with bounded $\ell_2$ norm (which we write $|\ldots|_2$), for which the maximization reduces to a problem in linear algebra. We have \begin{eqnarray} \label{weakbound} |\tilde O_A|_2 &\equiv &\sqrt{{\rm tr}(\tilde O_A^\dagger \tilde O_A)} \\ \nonumber &=&\sqrt{\sum_{\beta\alpha} |A(\beta)|^2 |A(\alpha)|^2|(O_A)_{\beta\alpha}|^2} \\ \nonumber & \leq & \Bigl( {\rm max}_{\alpha} |A(\alpha)|^2\Bigr) \cdot |O_A|_2 \\ \nonumber & \leq & \Bigl( {\rm max}_{\alpha} |A(\alpha)|^2\Bigr) \sqrt{d_A}, \end{eqnarray} where the last line follows since $\Vert O_A \Vert \leq 1$. Let $A,B,E$ have Hilbert space dimensions $d_A,d_B,d_E$ respectively (in our particular case, we have $d_B=d_E=D_L$).
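The identity $\langle \psi | O_A O_B |\psi \rangle={\rm tr}(\tilde O_A O_B)$, and the bound $|\tilde O_A|_2 \leq \sqrt{{\rm tr}(\rho^2)}$ for $\Vert O_A \Vert \leq 1$, can both be verified on a random bipartite state. In the following Python sketch (all dimensions chosen arbitrarily) a single space of dimension `dBE` stands in for the combined $BE$ system:

```python
import numpy as np

rng = np.random.default_rng(1)
dA, dBE = 5, 4    # here the single space "BE" stands in for B tensor E

# random bipartite pure state and its Schmidt decomposition
M = rng.normal(size=(dA, dBE)) + 1j * rng.normal(size=(dA, dBE))
M /= np.linalg.norm(M)
U, A, Vh = np.linalg.svd(M, full_matrices=False)   # Schmidt coefficients A(alpha)

O_A = rng.normal(size=(dA, dA)) + 1j * rng.normal(size=(dA, dA))
O_B = rng.normal(size=(dBE, dBE)) + 1j * rng.normal(size=(dBE, dBE))
O_A /= np.linalg.norm(O_A, 2)      # enforce operator norm ||O_A|| <= 1

# direct evaluation of <psi| O_A O_B |psi>
psi = M.reshape(-1)
lhs = psi.conj() @ np.kron(O_A, O_B) @ psi

# the same correlator as tr(O~_A O_B), working in the Schmidt bases
OA_s = U.conj().T @ O_A @ U
OB_s = Vh.conj() @ O_B @ Vh.T
OtildeA = OA_s.T * np.outer(A, A)  # (O~_A)_{beta alpha} = (O_A)_{alpha beta} A(beta) A(alpha)
rhs = np.trace(OtildeA @ OB_s)
assert np.isclose(lhs, rhs)

# the purity bound |O~_A|_2 <= sqrt(tr rho^2), with rho = diag(A(alpha)^2)
assert np.linalg.norm(OtildeA) <= np.sqrt(np.sum(A**4)) + 1e-9
```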
In fact, while the Hilbert space dimension of $A$ diverges with system size, since the rank of the density matrix on $BE$ is at most $(D'_{L-1})^2$, we can assume $d_A=(D'_{L-1})^2$. It is not hard to show that ${\rm max}_{\alpha} |A(\alpha)|^2$ is approximately equal to $1/d_A$ times a constant with high probability (i.e., with probability that tends to $1$ as $d_A$ tends to infinity). To see this, note that the $|A(\alpha)|^2$ are the eigenvalues of the reduced density matrix on the two sites {\it entering} the isometry $W$. Each of those two sites is the output of an isometry; call those two isometries $V,V'$. For random choices of $V,V'$, for an arbitrary input state to $V \otimes V'$, the output state on the given two sites will indeed have all its eigenvalues close to $1/d_A$. So, $|\tilde O_A|_2$ is bounded by a constant times $1/\sqrt{d_A}$, with high probability. At the same time, $|O_B|_2$ is bounded by $\sqrt{d_B}$ if we define the $\ell_2$ norm using the trace on $B$, rather than the trace on $BE$. We can in fact tighten the bound (\ref{weakbound}) on $|\tilde O_A|_2$, if desired. Let $\rho$ be the diagonal matrix with entries $|A(\alpha)|^2$. We have $|\tilde O_A|_2^2={\rm tr}(O_A \rho O_A^\dagger \rho)$. Since $\rho$ is diagonal with non-negative entries, the inequality $\rho_{\alpha\alpha}\rho_{\beta\beta} \leq (\rho_{\alpha\alpha}^2+\rho_{\beta\beta}^2)/2$ gives ${\rm tr}(O_A \rho O_A^\dagger \rho)\leq \frac{1}{2}{\rm tr}(O_A^\dagger O_A \rho^2)+\frac{1}{2}{\rm tr}(O_A O_A^\dagger \rho^2)$. For $\Vert O_A \Vert \leq 1$, each of these traces is at most ${\rm tr}(\rho^2)$. So, $|\tilde O_A|_2 \leq \sqrt{{\rm tr}(\rho^2)}$. Note that this is equal to the exponential of minus one-half the $S_2$ entropy of $\rho$. Define a super-operator ${\cal E}(\ldots)$ by \begin{equation} {\cal E}(O)=\sqrt{\frac{d_B}{d_A}} {\rm tr}_E(W O W^\dagger). \end{equation} This super-operator is a quantum channel multiplied by the scalar $\sqrt{\frac{d_B}{d_A}}$.
Then, \begin{equation} Cor(X:Y) \leq {\rm const.} \times {\rm max}_{\tilde O_A, |\tilde O_A|_2\leq 1} {\rm max}_{O_B, |O_B|_2 \leq 1} \Bigl( {\rm tr}(O_B {\cal E}(\tilde O_A))- {\rm tr}(O_B {\cal E}(\sqrt{d_A} \rho)) {\rm tr}(\tilde O_A)\Bigr), \end{equation} where we have rescaled $\tilde O_A,O_B$ to have $\ell_2$ norm equal to $1$, absorbing factors of $1/\sqrt{d_A}$ and $\sqrt{d_B}$ into ${\cal E}(\ldots)$, and where the constant is present because the bound (before re-scaling) is that $|\tilde O_A|_2$ is bounded by a {\it constant} times $1/\sqrt{d_A}$, with high probability. We now consider the super-operator ${\cal E}(\ldots)$ for general $d_A,d_B,d_E$. The state ${\cal E}(\rho)$, which is the output state of this super-operator for the density matrix as input, may not itself be exactly maximally mixed. However, it is very close to maximally mixed with high probability if $d_B<<d_A d_E$ and if $\rho$ is close to maximally mixed. Further, for any traceless operator $O$, we have ${\rm tr}({\cal E}(O))=0$. Hence, if $d_B<<d_A d_E$, the maximally mixed state is very close to a right-singular vector of ${\cal E}$, and there is a singular value of ${\cal E}$ very close to $1$. So, the term $-{\rm tr}(O_B {\cal E}(\sqrt{d_A} \rho)) {\rm tr}(\tilde O_A)$ is close to projecting out the largest singular vector of ${\cal E}(\ldots)$. So, the important quantity for correlations is the magnitude of the second largest singular value. Indeed, what we would like to have is that ${\cal E}(\ldots)$ is a non-Hermitian expander (non-Hermitian in that ${\cal E}(\ldots)$ is not Hermitian viewed as a linear super-operator), meaning that it has one singular value close to $1$ and all others separated from $1$ by a gap. Calculating the singular values of ${\cal E}(\ldots)$ is likely similar to the calculation in Ref.~\onlinecite{rugqe}, with some additional complications because we are interested in a very different choice of dimensions.
For one thing, $d_E$ and $d_B$ are comparable here rather than having $d_E<<d_B$. For another thing, $d_A \neq d_B$, so the super-operator ${\cal E}(\ldots)$ has a multiplicative prefactor $\sqrt{d_B/d_A}$ compared to a quantum channel. We leave a proof that it is an expander to a future paper. However, we give some numerical and analytical evidence. Let $x=d_B/(d_A d_E)$ and $y=d_A/(d_B d_E)$. We conjecture that ${\cal E}(\ldots)$ is an expander if $x,y<<1$. More precisely, what we conjecture is that for a random choice of $W$, with high probability the difference between the largest singular value and $1$ is bounded by some polynomial in $x,y$, and also that the second largest singular value is bounded by some polynomial in $x,y$. Note that certainly we do not expect to get an expander if $y \approx 1$. If $y=1$, then all singular values are equal to $1$. We can estimate the average over $W$ of the sum of squares of the singular values of ${\cal E}(\ldots)$ using the same techniques as we used to estimate $E[\exp(-S_2(\ldots))]_W$ previously, as this sum of squares is also a second order polynomial in $W$ and in $\overline W$. For $d_B=d_E$ and $d_A<<d_B d_E$, one finds that this sum of squares is equal to $d_A$ up to subleading corrections. The number of non-zero singular values is equal to $d_B^2$ in this case, so that if all singular values (with the exception of the largest) have roughly the same magnitude, then this magnitude is roughly \begin{equation} \sqrt{d_A/d_B^2}=\sqrt{y}. \end{equation} We have numerically investigated the properties of this super-operator. First, we observe qualitatively that there indeed is a gap once $x,y<<1$. In Fig.~\ref{gapfig}, we show an example with $d_A=80,d_B=d_E=10$. Even in this case, where $y=0.8$ is not that small, we observe a distinct gap between the first singular value and the rest. We plot the singular values $\lambda(i)$ in descending order as a function of $i$ for $i=0,\ldots,d_B^2-1$.
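These numerical observations are easy to reproduce. In a vectorized basis the matrix of ${\cal E}$ is $\sqrt{d_B/d_A}\sum_e K_e\otimes\overline{K_e}$, with Kraus operators $K_e=(1_B\otimes{}_E\langle e|)W$. The sketch below (seeds and tolerances are arbitrary choices; since $W$ is random, the assertions hold only with high probability) checks for the dimensions of Fig.~\ref{gapfig} that one singular value is near $1$ with a gap to the rest and that the sum of squares of the singular values is close to $d_A$, and, for $d_A=d_B=d_E=d$, that the second singular value scales as $1/\sqrt{d}$.

```python
import numpy as np

def singular_values(d_A, d_B, d_E, seed):
    """Singular values (descending) of E(O) = sqrt(d_B/d_A) tr_E(W O W^dag)
    for a random isometry W : C^{d_A} -> C^{d_B} (x) C^{d_E}."""
    rng = np.random.default_rng(seed)
    G = (rng.normal(size=(d_B * d_E, d_A))
         + 1j * rng.normal(size=(d_B * d_E, d_A)))
    W, _ = np.linalg.qr(G)                 # W^dag W = identity on C^{d_A}
    K = W.reshape(d_B, d_E, d_A)           # K[:, e, :] = (1_B (x) <e|) W
    M = np.sqrt(d_B / d_A) * sum(np.kron(K[:, e, :], K[:, e, :].conj())
                                 for e in range(d_E))
    return np.linalg.svd(M, compute_uv=False)

# Dimensions of the figure: one singular value near 1, a gap to the rest,
# and sum of squares of the singular values close to d_A (for d_B = d_E).
s = singular_values(80, 10, 10, seed=0)
assert abs(s[0] - 1.0) < 0.15
assert s[0] - s[1] > 0.005
assert abs(np.sum(s ** 2) - 80) < 16

# d_A = d_B = d_E = d: the rescaled second singular values sqrt(d) * lambda(1)
# for different d should roughly agree if lambda(1) ~ const / sqrt(d).
r10 = np.sqrt(10) * singular_values(10, 10, 10, seed=1)[1]
r20 = np.sqrt(20) * singular_values(20, 20, 20, seed=2)[1]
assert abs(r10 - r20) < 0.5
```

The `singular_values` helper is our own construction for illustration; only its Kraus-operator decomposition of ${\cal E}$ is taken from the text.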
\begin{figure} \includegraphics[width=12cm]{channel80.pdf} \caption{Singular values of ${\cal E}(\ldots)$ for a random choice of $W$ for $d_A=80$, $d_B=d_E=10$. Singular values $\lambda(i)$ are plotted in descending order. The largest singular value is equal to $1.0067\ldots$, while the next largest singular values are $0.9596\ldots,0.9592\ldots,0.9587\ldots,\ldots$.} \label{gapfig} \end{figure} Next, to test the scaling of the singular values, we first consider the particular case that $d_A=d_B=d_E$. This is {\it not} the relevant case of interest for the MERA state constructed here; however, it is still interesting as a way to test the scaling. In this case, we have $\sqrt{d_A/d_B^2}=1/\sqrt{d_B}$. What we find is that this scaling indeed holds. We are able to construct a scaling collapse by plotting the singular values $\lambda(i)$ for $i=1,\ldots,d_B^2-1$ in descending order, i.e., not including the leading singular value: we plot $\lambda(i)\sqrt{d_B}$ as a function of $i/d_B^2$. As shown in Fig.~\ref{collapseFig}, we are able in this way to almost perfectly collapse curves for different choices of $d_B$. Further, the collapse holds even for the leading singular values; that is, we have observed that the second largest singular value scales as $1/\sqrt{d_B}$. So, in this case, we have strong numerical evidence for the polynomial decay as a function of $x,y$. \begin{figure} \includegraphics[width=12cm]{collapse.pdf} \caption{Singular values of ${\cal E}(\ldots)$ for two random choices of $W$, one with $d_A=d_B=d_E=d=10$ (shown in blue) and the other with $d_A=d_B=d_E=d=20$ (shown in green). The $y$-axis shows $\sqrt{d} \lambda(i)$, while the $x$-axis shows $i/d^2$. The largest singular value for each super-operator is not plotted.
The two largest singular values for the first super-operator are equal to $1.05\ldots,0.645\ldots$, while the two largest singular values for the second super-operator are equal to $1.025\ldots,0.456\ldots$.} \label{collapseFig} \end{figure} Before considering the case of interest to us, let us explain why we are interested in having a polynomial decay of the second largest singular value as a function of $x,y$. This is due to our desire for exponential decay of correlations at all length scales, not just for a single $X$ with $Y$ separated from $X$ by one site. The MERA states used to describe a conformal field theory at criticality display a power law decay of correlation functions as a function of distance\cite{cftmera}. To understand this polynomial decay, consider a correlation of two operators $O_i,O_j$ supported on single sites, $i,j$. Then one can iteratively map an operator such as $O_i$ or $O_j$ into an operator at higher levels of the MERA. This map is a linear map; it is in fact related to the adjoint of a super-operator such as the ${\cal E}(\ldots)$ that we consider; the map in Ref.~\onlinecite{cftmera} is regarded as moving operators up to higher levels of the MERA rather than, as we have described it, moving states to lower levels of the MERA. To move up one level in the MERA, one must apply two super-operators, as each level in the MERA corresponds to two isometries $V,W$. This linear map leads to an exponential decay of the difference between the operator and the identity operator as a function of level in the usual MERA states; since the number of levels between $i,j$ is logarithmic in $|i-j|$, this leads to a polynomial decay. The reason for the exponential decay is that in such MERAs, the isometries are taken in a scale-invariant fashion, so that they are the same at all levels (or all except the bottom few levels), and so the super-operator has a fixed gap to the second largest singular value at all levels.
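The counting in the scale-invariant case can be made explicit in one line. Suppose (a sketch, suppressing constant factors and the finite width of the causal cone) that the scale-invariant super-operator has second largest singular value $\lambda_2<1$ and that two super-operators are applied per level; after the roughly $\log_2 |i-j|$ levels needed before the causal cones of $i$ and $j$ meet, the connected correlation is suppressed by
\begin{equation}
\lambda_2^{2\log_2 |i-j|}=|i-j|^{-2\log_2(1/\lambda_2)},
\end{equation}
which is the polynomial decay of correlations found in the scale-invariant MERA.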
In our MERA, however, the isometries change with level. Thus, we hope that the decay when moving from one level to the next will be polynomial in $x,y$. Since $y \approx \exp(-\epsilon 2^{L-k})$ for isometries in $W_k$, a polynomial decay in the smallest $y$ (which occurs at the highest level, giving a $y$ which is exponentially small in the spacing between sites) will lead to an exponential decay in $|i-j|$. One complication in this is that when we map an operator on a single site $i$ of the MERA to higher levels of the MERA, the result is no longer an operator supported on a single site. Indeed, the so-called {\it causal cone} of such an operator (i.e., the support of the operator after it is mapped to higher levels of the MERA; this support is the same as the set of sites which have $i$ in their light-cone as we have defined the light-cone) does not consist of a single site at each level. Rather, the causal cone consists of some small number of sites\cite{cftmera}, depending upon the exact MERA chosen. However, it seems likely that, since we are considering an $\ell_2$ norm, if we can show a gap in the singular values of the super-operator corresponding to the map of a single site operator upwards by one level of the MERA, it will also be possible to show a gap in the map of an operator supported on some small number of sites, as the $\ell_2$ norm does have the nice property that the singular values of a product of super-operators can be bounded in terms of the singular values of the individual super-operators. If we instead worked with $\ell_\infty$ norms, there would be difficult multiplicativity questions that would arise, and perhaps having a bound in the $\ell_\infty \rightarrow \ell_\infty$ norm of a pair of super-operators would not help in bounding the $\ell_\infty \rightarrow \ell_\infty$ norm of the product.
In this way, we conjecture that it will be possible to show at least an exponential decay of $Cor(X:Y)$ for distances sufficiently large compared to the diameter of $X$ and the diameter of $Y$. A more difficult question is whether we can show an exponential decay even if the diameters of $X,Y$ are large compared to the distance between $X,Y$. We conjecture that this will also hold. We take an operator $O_X$ and apply the super-operator ${\cal E}(\ldots)^\dagger$ to map $O_X$ upward in the MERA, similarly map $O_Y$, and repeat this process until $X,Y$ meet. The intuitive idea is that at every step of this process we consider the site $i$ at the leftmost edge of $X$ and we decompose the operator $O_X$ on $X$ into a sum of two terms, $O_X^0+O_X^\perp$, where $O_X^0$ is the identity operator on $i$ tensored with some other operator on the rest of $X$, while $O_X^\perp$ vanishes after tracing over $i$. The site $i$ is one of two sites output from some given isometry. Assume that the other site output from that isometry is to the left of $i$ so that it is not in $X$; in this case we say that ``a site is traced over at the left end''. Note that it is not necessary that a site be traced over on a given step; for example, if $X$ consists of two sites which are output from the same isometry, then no site is traced over. However, if a site is not traced over on a given step, then a site must be traced over at the next step. So, suppose that a site is traced over. Let ${\cal E}(\ldots)$ be the super-operator associated with this isometry and tracing over the site $i-1$. We would then use the bound on singular values of the super-operator ${\cal E}(\ldots)$ to show that the $\ell_2$ norm of $O_X^\perp$ decays by an amount $\exp(-{\rm const.} \times \epsilon 2^{L-k})$ after applying the super-operator ${\cal E}(\ldots)^\dagger$ to map it to an operator higher in the MERA, while $O_X^0$ maps to an operator with increased separation between $X$ and $Y$.
In this manner, we conjecture that at some level $k$, with $k \sim \log(l)$, we must have a decay in $\ell_2$ norm by $\exp(-{\rm const.} \times \epsilon 2^{L-k})$. When we turn to the case of interest to us, with $d_A>>d_B$ but $y<<1$, we do not find a clear scaling collapse. In this case, since $x<<y$, we might hope that the scaling collapse would hold between different super-operators with the same $y$. In Fig.~\ref{nocollapseFig}, we see that this is not the case for three different super-operators with $y=1/2$ for all three and $d_B=d_E=10,16,20$. It is possible, however, that for large enough $d_B$ at fixed $y$ the singular values will eventually collapse on each other; the curves are becoming flatter with increasing $d_B$, suggesting that this may happen. If such a collapse happens for large $d_B$ for the entire curve, then indeed the second largest singular value must be proportional to $\sqrt{y}$ for large $d_B$. Even if there are corrections to this which vanish polynomially in $d_B$, this would still suffice. Some evidence for a collapse is shown in Fig.~\ref{coll3Fig}. Here we show an attempt to collapse the three curves by rescaling to $(\lambda(i)-{\rm const.}) d_B^{\alpha}$, where the constant $0.706\ldots$ is chosen to match the approximate crossing point of the curves and $\alpha=2/3$ was chosen after some experimentation. Good collapse is seen between the curves with $d_B=16,20$, while the curve with $d_B=10$ does not collapse as well, especially for large $i$. \begin{figure} \includegraphics[width=12cm]{nocollapse3.pdf} \caption{Singular values of ${\cal E}(\ldots)$ for three random choices of $W$, one with $d_A=50$, $d_B=d_E=d=10$ (shown in blue), one with $d_A=128$, $d_B=d_E=16$ (shown in green), and one with $d_A=200, d_B=d_E=20$ (shown in red).
$y$-axis shows $\lambda(i)$, while $x$-axis shows $i/d^2$.} \label{nocollapseFig} \end{figure} \begin{figure} \includegraphics[width=12cm]{coll3.pdf} \caption{Re-scaled singular values for same three channels as in Fig.~\ref{nocollapseFig} for $i=1,\ldots,d_B^2-1$.} \label{coll3Fig} \end{figure} \section{Discussion} While this work was in progress, another work constructed a MERA state for which the entanglement entropy was exactly given by the minimum length of curves cutting through the MERA network\cite{adsmera}. There are two main differences in the type of states constructed. First, we used random tensors, instead of the perfect tensors used there. Second, we considered a very different set of choices of dimensions at different levels of the MERA, in our goal of constructing a state with high entanglement and low correlations. These two choices may have some interpretation in the language of holography and quantum gravity as follows. The different choice of dimensions may correspond to some different choice of geometry in the bulk space, rather than an $AdS$ geometry. The choice of random tensors, however, might be interpretable in terms of quantum fluctuations in the bulk geometry: instead of the entanglement entropy being exactly expressed in terms of a single curve cutting through the MERA, the optimum reduction sequence (note that each reduction sequence corresponds uniquely to a curve) gives only upper and lower bounds on the expected entanglement entropy, with a possible logarithmic difference between those results. However, the expected exponential of minus the $S_2$ Renyi entropy, $E[\exp(-S_2(\ldots))]$, can be exactly expressed as a {\it sum} over reduction sequences (or curves). This difference between a minimization and a sum is reminiscent of the difference between classical and quantum mechanics (least action path compared to path integral). 
If the dimensions $D_k,D'_k$ become large (and importantly also the differences between certain sums of the $D_k,D'_k$ become large), then the sum becomes dominated by a single curve. This is perhaps reminiscent of the fact that certain random matrix theories can be interpreted as a sum over random surfaces, with the limit of large matrix size in the random matrix theory involving a sum only over a single genus; our theory is a more general kind of random matrix theory, but perhaps something similar happens. Finally, the reader might note that we only prove results about the expectation value of the entanglement entropy, rather than proving results about the entanglement entropy for a specific choice of isometries in the MERA. For example, lemma \ref{intentlemma} only lower bounds the expectation value of the entanglement entropy for intervals of length $l$. The reader might wonder: is there a specific choice of isometries for which for all intervals of length $l$, the entanglement entropy is within some constant factor of its expectation value? In this paper, we did not worry at all about trying to prove such results. However, we briefly mention some possible ways to try to do this. One might, for example, try to use concentration of measure arguments to estimate fluctuations about the average. This could perhaps show that the probability of a ``bad event", such as low entanglement entropy (or perhaps long correlation length, if indeed it is true that the state is short-range correlated as we conjecture), is exponentially small in dimension. This approach has the downside that the system size is exponentially large in $D_L$, so that even if a bad event is exponentially unlikely in any particular part of the system, it may be likely to occur somewhere. 
To resolve this issue, one might try to use the Lov\'asz local lemma in some way: it might be possible to show that the event that the entanglement entropy of some given interval $[i,j]$ was small was independent of the event that the entanglement entropy of some other interval $[i',j']$ was small if $|i-i'|,|j-j'|$ are sufficiently large. Or, more simply stated: perhaps if a bad event occurs locally, one might resample those isometries and leave the other isometries unchanged. Perhaps another approach to avoiding having bad events occur somewhere is to reduce the amount of randomness: rather than choosing all isometries independently at random, one might instead take all isometries $W$ at a given level to be the same and sample that isometry at random, independently for each level, and similarly take all $V$ at a given level to be the same. This approach has the downside that it complicates the calculations of the entanglement entropy. For example, if we consider $S_2(\sigma(k)_{[i,j]})$ and $i \,{\rm mod}\, 2=0$ and $j \,{\rm mod}\, 2=1$, then $\exp(-S_2(\ldots))$ is now a fourth order polynomial in $W$ and $\overline W$, where $W$ is the isometry at the given level of the MERA. This leads to additional terms in the equation for $S_2$, beyond those in Eq.~(\ref{exprecursigma}). These extra terms likely do not change the result that we have found for the mutual information, however. One simple way to reduce the randomness without complicating the calculation of the asymptotic behavior of the entanglement entropy is to choose the isometries at each level of the MERA to repeat with some sequence.
That is, consider the isometry $W$ in some given level of the MERA: this isometry is a product of $n$ isometries on pairs of sites. Rather than choosing them all independently as done in this paper, and rather than choosing them all the same, one could choose them so that $W_1,\ldots,W_a$ are sampled independently for some $a$, and then have the sequence repeat so that $W_i=W_{i-a}$. In this way, if we calculate the entanglement entropy of an interval short compared to $a$, we find the same Eqs.~(\ref{exprecursigma},\ref{exprecurtau}). We keep $a$ the same at every level; then, a large interval of some length $l$ would have additional terms present at the lower levels, but once one reaches a level of the MERA of order $\log_2(l)$, we find the same Eqs.~(\ref{exprecursigma},\ref{exprecurtau}); note that it is at such a level that the dominant contributions to the entanglement entropy occur, and so one finds the same results as in lemmas \ref{intentlemma},\ref{milemma} for the asymptotic behavior. We leave these questions aside, however, until a proof of the correlation decay is given. As a final remark, one may modify the state by changing the recursion relations (\ref{apprecur1},\ref{apprecur2}) by replacing the factor $2^{L-k}$ by $(2^{L-k})^{\kappa}$ for an exponent $\kappa$. Having done this, for any $\kappa<1$, for sufficiently small $\epsilon$, one can take $L$ arbitrarily large (i.e., $L$ is no longer restricted to be of order $1/\epsilon$) and have $\log(D_{k}),\log(D'_k)$ roughly proportional to $2^{L-k}$ (the factor $(2^{L-k})^{\kappa}$ becomes negligibly small). In this manner, it seems likely that the resulting state will combine a {\it volume} law for entanglement entropy with almost exponentially decaying correlation functions (correlation between two regions separated by $l$ sites proportional to $\exp(-l^\kappa/{\rm const.})$ for some constant).
Generalizing this to higher dimensional MERA states\cite{hdmera}, we conjecture that one can obtain MERA states in $d$ spatial dimensions with volume law entanglement and correlations decaying as $\exp(-l^\kappa/{\rm const.})$ for any $\kappa<d$ (in particular, for $d=2$ it seems that one can obtain super-exponential correlation decay and volume law entanglement). {\it Acknowledgments---} I thank Fernando Brandao for explaining Ref.~\onlinecite{bh} and for many useful discussions especially on the generalizations discussed in the last paragraph. I thank C. King for useful comments on multiplicativity of norms for super-operators.
\section{Introduction} A classic theorem of Dirac~\cite{Dirac-thm} from 1952 asserts that every graph on $n$ vertices with minimum degree at least $n/2$ is hamiltonian if $n\ge 3$. Following Dirac's result, numerous results on hamiltonicity properties of graphs with restricted degree conditions have been obtained\, (see, for instance, \cite{MR1373655, Ron-gold-hamiltonian-survey}). Traditionally, under similar conditions, results for a graph being hamiltonian, hamiltonian-connected, and pancyclic are obtained separately. We may ask whether, under certain conditions, it is possible to show uniformly that a graph possesses several hamiltonicity properties. The work on finding the square of a hamiltonian cycle in a graph can be seen as an attempt in this direction. However, it requires quite strong degree conditions for a graph to contain the square of a hamiltonian cycle; for example, see~\cite{Posa-H2conjecture,MR1399673, Komlos, Degreesum-H2, 1412.3498}. For bipartite graphs, establishing the existence of a spanning ladder is a way of simultaneously showing that the graph has many hamiltonicity properties~(see~\cite{2-factor-bipartite, MR2646098}). In this paper, we introduce another approach to uniformly showing the possession of several hamiltonicity properties in a graph: we show the existence of a spanning {\it Halin graph} in a graph under a given minimum degree condition. A tree with no vertex of degree 2 is called a {\it homeomorphically irreducible tree}\,(HIT). A {\it Halin graph } $H$ is obtained from a HIT $T$ of at least 4 vertices embedded in the plane by connecting its leaves into a cycle $C$ following the cyclic order determined by the embedding. According to the construction, the Halin graph $H$ is denoted as $H=T\cup C$, and the HIT $T$ is called the {\it underlying tree} of $H$. A wheel graph is an example of a Halin graph, where the underlying tree is a star.
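The construction $H=T\cup C$ is simple enough to illustrate programmatically. The toy sketch below (our own example, not from the paper) builds a Halin graph from a small HIT, a double star with two internal vertices, and checks two basic consequences of the definition: the underlying tree has no vertex of degree 2, and the resulting graph has minimum degree at least 3.

```python
# Toy illustration: a Halin graph H = T cup C from a homeomorphically
# irreducible tree T (here a double star on 6 vertices), with the leaves
# joined in the cyclic order of a planar embedding.

tree = {
    'u': {'v', 1, 2}, 'v': {'u', 3, 4},      # internal vertices, degree 3
    1: {'u'}, 2: {'u'}, 3: {'v'}, 4: {'v'},  # leaves
}
assert all(len(nbrs) != 2 for nbrs in tree.values())   # T is a HIT

cycle_order = [1, 2, 3, 4]        # leaf order determined by the embedding

halin = {x: set(nbrs) for x, nbrs in tree.items()}
for a, b in zip(cycle_order, cycle_order[1:] + cycle_order[:1]):
    halin[a].add(b)
    halin[b].add(a)

# e(H) = (|V(T)| - 1) tree edges plus one cycle edge per leaf.
num_edges = sum(len(nbrs) for nbrs in halin.values()) // 2
assert num_edges == (len(tree) - 1) + len(cycle_order)
# Every Halin graph is 3-connected; in particular delta(H) >= 3.
assert min(len(nbrs) for nbrs in halin.values()) >= 3
```

Here every vertex ends up with degree exactly 3; in general, internal tree vertices of higher degree give vertices of higher degree in $H$.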
Halin constructed Halin graphs in~\cite{Halin-halin-graph} for the study of minimally 3-connected graphs. Lov\'asz and Plummer named such graphs Halin graphs in their study of planar bicritical graphs~\cite{LP-Halingraph-Con}, which are planar graphs having a 1-factor after deleting any two vertices. Intensive research has been done on Halin graphs. Bondy~\cite{Bondy-pancyclic} in 1975 showed that a Halin graph is hamiltonian. In the same year, Lov\'asz and Plummer~\cite{LP-Halingraph-Con} showed that not only is a Halin graph itself hamiltonian, but each of the subgraphs obtained by deleting a vertex is hamiltonian. In 1987, Barefoot~\cite{Barefoot} proved that Halin graphs are hamiltonian-connected, i.e., there is a hamiltonian path connecting any two vertices of the graph. Furthermore, it was proved that each edge of a Halin graph is contained in a hamiltonian cycle and is avoided by another~\cite{Skupie-uniformly-hamiltonian}. Bondy and Lov\'asz~\cite{Almost-pancyclic-halin}, and Skowro\'nska~\cite{pancyclicity-Halin-graphs}, independently, in 1985, showed that a Halin graph is almost pancyclic and is pancyclic if the underlying tree has no vertex of degree 3, where an $n$-vertex graph is {\it almost pancyclic} if it contains cycles of length from 3 to $n$ with the possible exception of a single even length, and is {\it pancyclic} if it contains cycles of length from 3 to $n$. Some problems that are NP-complete for general graphs have been shown to be polynomial time solvable for Halin graphs. For example, Cornu\'ejols, Naddef, and Pulleyblank\,\cite{Cornuejols-halin} showed that in a Halin graph, a hamiltonian cycle can be found in polynomial time. It thus seems promising to show the existence of a spanning Halin subgraph in a given graph in order to show that the graph has many hamiltonicity properties. But nothing comes for free: it is NP-complete to determine whether a graph contains a (spanning) Halin graph~\cite{Halin-NP}.
Despite all these nice properties of Halin graphs mentioned above, the problem of determining whether a graph contains a spanning Halin subgraph has not yet been well studied, except for a conjecture proposed by Lov\'asz and Plummer~\cite{LP-Halingraph-Con} in 1975. The conjecture states that {\it every 4-connected plane triangulation contains a spanning Halin subgraph}\,(disproved recently \cite{Disprove-LP-Con}). In this paper, we investigate the minimum degree condition implying the existence of a spanning Halin subgraph in a graph, thereby giving another approach for uniformly showing the possession of several hamiltonicity properties in a graph under a given minimum degree condition. We obtain the following result. \begin{THM}\label{main result} There exists $n_0>0$ such that for any graph $G$ with $n\ge n_0$ vertices, if $\delta(G)\ge (n+1)/2$, then $G$ contains a spanning Halin subgraph. \end{THM} Note that an $n$-vertex graph with minimum degree at least $(n+1)/2$ is 3-connected if $n\ge 4$. Hence, the minimum degree condition in Theorem~\ref{main result} implies the 3-connectedness, which is a necessary condition for a graph to contain a spanning Halin subgraph, since every Halin graph is 3-connected. A Halin graph contains a triangle, and bipartite graphs are triangle-free. Hence, $K_{\lfloor \frac{n}{2}\rfloor,\lceil \frac{n}{2}\rceil}$ contains no spanning Halin subgraph. Immediately, we see that the minimum degree condition in Theorem~\ref{main result} is best possible. \section{Notations and definitions} We consider simple and finite graphs only. Let $G$ be a graph. Denote by $V(G)$ and $E(G)$ the vertex set and edge set of $G$, respectively, and by $e(G)$ the cardinality of $E(G)$. We denote by $\delta(G)$ the minimum degree of $G$ and by $\Delta(G)$ the maximum degree. Let $v\in V(G)$ be a vertex and $S\subseteq V(G)$ a subset. Then $G[S]$ is the subgraph of $G$ induced on $S$. Similarly, $G[F]$ is the subgraph induced on $F$ if $F\subseteq E(G)$.
The notation $\Gamma_G(v,S)$ denotes the set of neighbors of $v$ in $S$, and $deg_G(v,S)=|\Gamma_G(v,S)|$. We let $\Gamma_{\overline{G}}(v,S)=S-\Gamma_G(v,S)$ and $deg_{\overline{G}}(v,S)=|\Gamma_{\overline{G}}(v,S)|$. Given another set $U\subseteq V(G)$, define $\Gamma_G(U,S)=\cap_{u\in U}\Gamma_G(u,S)$, $deg_G(U,S)=|\Gamma_G(U,S)|$, and $N_G(U,S)=\cup_{u\in U}\Gamma_G(u,S)$. When $U=\{u_1,u_2,\cdots, u_k\}$, we may write $\Gamma_G(U,S)$, $deg_G(U,S)$, and $N_G(U,S)$ as $\Gamma_G(u_1,u_2,\cdots, u_k,S)$, $deg_G(u_1,u_2,\cdots, u_k,S)$, and $N_G(u_1,u_2,\cdots, u_k,S)$, respectively, in specifying the vertices in $U$. When $S=V(G)$, we only write $\Gamma_G(U)$, $deg_G(U)$, and $N_G(U)$. Let $U_1,U_2 \subseteq V(G)$ be two disjoint subsets. Then $\delta_G(U_1,U_2)=\min\{deg_G(u_1,U_2)\,|\, u_1\in U_1\}$ and $\Delta_G(U_1,U_2)=\max\{deg_G(u_1,U_2)\,|\, u_1\in U_1\}$. Notice that the notations $\delta_G(U_1,U_2)$ and $\Delta_G(U_1,U_2)$ are not symmetric with respect to $U_1$ and $U_2$. We denote by $E_G(U_1,U_2)$ the set of edges with one end in $U_1$ and the other in $U_2$, the cardinality of $E_G(U_1,U_2)$ is denoted as $e_G(U_1,U_2)$. We may omit the index $G$ if there is no risk of confusion. Let $u,v\in V(G)$ be two vertices. We write $u\sim v$ if $u$ and $v$ are adjacent. A path connecting $u$ and $v$ is called a $(u,v)$-path. If $G$ is a bipartite graph with partite sets $A$ and $B$, we denote $G$ by $G(A,B)$ in emphasizing the two partite sets. In constructing Halin graphs, we use ladder graphs and a class of ``ladder-like'' graphs as substructures. We give the description of these graphs below. \begin{DEF}\label{ladder} An $n$-ladder $L_n=L_n(A,B)$ is a balanced bipartite graph with $A=\{a_1,a_2,\cdots, a_n\}$ and $B=\{b_1,b_2,\cdots, b_n\}$ such that $a_i\sim b_j$ iff $|i-j|\le 1$. We call $a_ib_i$ the $i$-th rung of $L_n$. 
If $2n (mod \,\,4) \equiv 0$, then we call each of the shortest $(a_1,b_n)$-path $a_1b_2\cdots a_{n-1}b_n$ and $(b_1,a_n)$-path $b_1a_2\cdots b_{n-1}a_n$ a side of $L_n$; and if $2n (mod \,\,4) \equiv 2$, then we call each of the shortest $(a_1,a_n)$-path $a_1b_2\cdots b_{n-1}a_n$ and $(b_1,b_n)$-path $b_1a_2\cdots a_{n-1}b_n$ a side of $L_n$. \end{DEF} Let $L$ be a ladder with $xy$ as one of its rungs. For an edge $gh$, we say $xy$ and $gh$ are {\it adjacent} if $x\sim g, y\sim h$ or $x\sim h, y\sim g$. Suppose $L$ has its first rung $ab$ and its last rung $cd$; we denote $L$ by $ab-L-cd$ in specifying the two rungs, and we always assume that the distance between $a$ and $c$, and thus between $b$ and $d$, is $|V(L)|/2$\,(we make this assumption for convenience in constructing other graphs based on ladders). Under this assumption, we denote $L$ as $\overrightarrow{ab}-L-\overrightarrow{cd}$. Let $A$ and $B$ be two disjoint vertex sets. We say the rung $xy$ of $L$ is {\it contained} in $A\times B$ if either $x\in A, y\in B$ or $x\in B, y\in A$. Let $L'$ be another ladder vertex-disjoint from $L$. If the last rung of $L$ is adjacent to the first rung of $L'$, we write $LL'$ for the new ladder obtained by concatenating $L$ and $L'$. In particular, if $L'=gh$ is an edge, we write $LL'$ as $Lgh$. We now define five types of ``ladder-like'' graphs, calling them $H_1, H_2, H_3, H_4$ and $H_5$, respectively. Let $L_{n}$ be a ladder with $a_1b_1$ and $a_{n}b_{n}$ as the first and last rung, respectively, and let $x,y,z,w, u$ be five new vertices. Then each $H_i$ is obtained from $L_{n}$ by adding some specified vertices and edges as follows. Additionally, for each $i$ with $1\le i\le 5$, we define a graph $T_i$ associated with $H_i$. A depiction of a ladder $L_4$, of $H_1,H_2,H_3,H_4,H_5$ constructed from $L_4$, and of the graph $T_i$ associated with $H_i$ is given in Figure~\ref{cydder}.
\begin{itemize} \item [$H_1$: ] Adding two new vertices $x, y$ and the edges $xa_1,xb_1, ya_{n}, yb_{n}$ and $xy$. Let $T_1=H_1[\{x,y,a_1,b_1, a_n, b_n\}]$. \item [$H_2$: ] Adding three new vertices $x,y,z$ and the edges $za_1,zb_1, xz,xb_1, ya_{n}, yb_{n}$ and $xy$. Let $T_2=H_2[\{x,y,z, a_1,b_1, a_n, b_n\}]$. \item [$H_3$: ] Adding three new vertices $x,y,z$ and the edges $xa_1,xb_1, ya_{n}, yb_{n}, xz, yz$, and either $za_{i}$ or $zb_i$ for some $1\le i \le n$. Let $T_3=H_3[\{x,y,z,a_1,b_1, a_n, b_n\}]$. \item [$H_4$: ] Adding four new vertices $x,y,z, w$ and the edges $wa_1,wb_1, xw, xb_1, ya_{n}, yb_{n},xz,yz$, and either $za_{i}$ or $zb_i$ for some $1\le i \le n$ such that $a_i$ or $b_i$ is a vertex on the side of $L$ which has $b_1$ as one end. Let $T_4=H_4[\{x,y,z,w,a_1,b_1, a_n, b_n\}]$. \item [$H_5$: ] Adding five new vertices $x,y,z, w,u$. If $2n (mod\,\, 4) \equiv 2$, adding the edges $wa_1,wb_1, xw, xb_1, ua_{n}, ub_{n}, yu, yb_{n}, xz,yz$, and either $ za_{i}$ or $zb_i$ for some $1\le i \le n$ such that $a_i$ or $b_i$ is a vertex on the shortest $(b_1,b_n)$-path in $L$; and if $2n (mod \,\,4) \equiv 0$, adding the edges $wa_1,wb_1, xw, xb_1, ua_{n}, ub_{n}, yu, ya_{n}, xz, yz$, and either $ za_{i}$ or $zb_i$ for some $1\le i \le n$ such that $a_i$ or $b_i$ is a vertex on the shortest $(b_1,a_n)$-path in $L$. Let $T_5=H_5[\{x,y,z,w,u,a_1,b_1, a_n, b_n\}]$. \end{itemize} Let $i=1,2,\cdots, 5$. Notice that each of $H_i$ is a Halin graph, and the graph obtained from $H_5$ by deleting the vertex $z$ and adding the edge $xy$ is also a Halin graph. Except $H_1$, each $H_i$ has a unique underlying tree. Notice also that $xy$ is an edge on the cycle along the leaves of any underlying tree of $H_1$ or $H_2$. For each $H_i$ and $T_i$, call $x$ the {\it left end} and $y$ the {\it right end}, and call a vertex of degree at least 3 in the underlying tree of $H_i$ a {\it Halin constructible vertex}. 
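The simplest of these constructions is easy to verify concretely. The sketch below (illustrative only; the function name `h1` is ours) builds $H_1$ from the ladder $L_n$ of Definition~\ref{ladder} by adding $x,y$ and the edges $xa_1,xb_1,ya_n,yb_n,xy$, and checks that $H_1$ is 3-regular, consistent with it being a Halin graph whose underlying tree has all internal vertices of degree 3.

```python
def h1(n):
    """Adjacency of H_1 built on the ladder L_n (a_i ~ b_j iff |i-j| <= 1)."""
    adj = {v: set() for i in range(1, n + 1) for v in (('a', i), ('b', i))}
    adj['x'] = set()
    adj['y'] = set()

    def add(u, v):
        adj[u].add(v)
        adj[v].add(u)

    for i in range(1, n + 1):              # ladder edges
        for j in range(1, n + 1):
            if abs(i - j) <= 1:
                add(('a', i), ('b', j))
    add('x', ('a', 1)); add('x', ('b', 1))  # attach the left end x
    add('y', ('a', n)); add('y', ('b', n))  # attach the right end y
    add('x', 'y')
    return adj

H = h1(4)
assert len(H) == 2 * 4 + 2                  # 2n ladder vertices plus x, y
assert all(len(nbrs) == 3 for nbrs in H.values())   # H_1 is cubic
```

A cubic graph on $2n+2$ vertices has $3n+3$ edges, matching the count $e(L_n)+5=(3n-2)+5$ of edges added in the construction.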
By analyzing the structure of $H_i$, we see that each internal vertex on a shortest $(x,y)$-path is a Halin constructible vertex. Note that any vertex in $V(H_1)-\{x,y\}$ can be a Halin constructible vertex. We call $a_1b_1$ the {\it head link} of $T_i$ and $a_nb_n$ the {\it tail link} of $T_i$, and for each of $T_3, T_4, T_5$, we call the vertex $z$, which is not contained in any triangle, the {\it pendent vertex}. The notations $H_i$ and $T_i$ are fixed hereafter. Let $T\in \{T_1,\cdots, T_5\}$ be a subgraph of a graph $G$. Suppose that $T$ has head link $ab$, tail link $cd$, and possibly the pendent vertex $z$. Suppose $G-V(T)$ contains a spanning ladder $L$ with first rung $c_1d_1$ and last rung $c_nd_n$ such that $c_1d_1$ is adjacent to $ab$ and $c_nd_n$ is adjacent to $cd$. Additionally, if the pendent vertex $z$ of $T$ exists, suppose that $z$ has a neighbor $z'$ on a shortest path between the two ends of $T$. Then $abLcd\cup T$ or $abLcd\cup T \cup \{zz'\}$ is a spanning Halin subgraph of $G$. This technique is used frequently later on in constructing Halin graphs. The following proposition gives another way of constructing a Halin graph based on $H_1$ and $H_2$. \begin{PRO}\label{Prop:Halin_H1_H2} For $i=1,2$, let $G_i\in \{H_1,H_2\}$ with left end $x_i$ and right end $y_i$ be defined as above, and let $u_i\in V(G_i)$ be a Halin constructible vertex. Then $G_1\cup G_2-\{x_1y_1,x_2y_2\}\cup\{x_1x_2, y_1y_2,u_1u_2\}$ is a Halin graph spanning on $V(G_1)\cup V(G_2)$. \end{PRO} \textbf{Proof}.\quad For $i=1,2$, let $G_i$ be embedded in the plane, and let $T_{G_i}$ be an underlying plane tree of $G_i$. Then $T':=T_{G_1}\cup T_{G_2}\cup \{u_1u_2\}$ is a homeomorphically irreducible tree spanning on $V(G_1)\cup V(G_2)$. Moreover, we can draw the edge $u_1u_2$ such that $T_{G_1}\cup T_{G_2}\cup \{u_1u_2\}$ is a plane graph.
Since $G_i[E(G_i-T_{G_i})-\{x_iy_i\}]$ is an $(x_i,y_i)$-path spanning on the leaves of $T_{G_i}$, obtained by connecting the leaves following the order determined by the embedding, we see that $G_1[E(G_1-T_{G_1})-\{x_1y_1\}]\cup G_2[E(G_2-T_{G_2})-\{x_2y_2\}]\cup \{x_1x_2, y_1y_2\}$ is a cycle spanning on the leaves of $T'$, obtained by connecting the leaves following the order determined by the embedding of $T'$. Thus $G_1\cup G_2-\{x_1y_1,x_2y_2\}\cup\{x_1x_2, y_1y_2,u_1u_2\}$ is a Halin graph. \hfill $\square$\vspace{1mm} \begin{figure}[!htb] \psfrag{L_4}{$L_4$} \psfrag{H_1}{$H_1$} \psfrag{H_2}{$H_2$} \psfrag{H_3}{$H_3$} \psfrag{H_4}{$H_4$} \psfrag{H_5}{$H_5$} \psfrag{a1}{$a_1$} \psfrag{a2}{$a_2$} \psfrag{a3}{$a_3$} \psfrag{a4}{$a_4$} \psfrag{b1}{$b_1$} \psfrag{b2}{$b_2$} \psfrag{b3}{$b_3$} \psfrag{b4}{$b_4$} \psfrag{x}{$x$} \psfrag{y}{$y$} \psfrag{z}{$z$} \psfrag{u}{$u$} \psfrag{w}{$w$} \psfrag{T_1}{$T_1$} \psfrag{T_2}{$T_2$} \psfrag{T_3}{$T_3$} \psfrag{T_4}{$T_4$} \psfrag{T_5}{$T_5$} \begin{center} \includegraphics[scale=0.38]{Cydder.eps}\\ \end{center} \vspace{-3mm} \caption{{\small $L_4$, $H_i$ constructed from $L_4$, and $T_i$ associated with $H_i$ for each $i=1,2,\cdots, 5$ }}\label{cydder} \end{figure} \section{Proof of Theorem~\ref{main result}} In this section, we prove Theorem~\ref{main result}. Following the standard setup of proofs applying the Regularity Lemma, we divide the proof into a non-extremal case and two extremal cases. For this purpose, let $G$ be an $n$-vertex graph with vertex set $V$, and let $0\le \beta\le 1$. The two extremal cases are defined as follows. {\noindent \textbf {Extremal Case 1.}} $G$ has a vertex-cut of size at most $5\beta n$. {\noindent \textbf {Extremal Case 2.}} There exists a partition $V_1\cup V_2$ of $V$ such that $|V_1|\ge (1/2-7\beta)n$ and $\Delta(G[V_1])\le \beta n$.
{\noindent \textbf {Non-extremal case.}} We say that an $n$-vertex graph with minimum degree at least $(n+1)/2$ is in the {\it non-extremal case} if it is in neither Extremal Case 1 nor Extremal Case 2. The following three theorems deal with the two extremal cases and the non-extremal case, respectively, and thus together give a proof of Theorem~\ref{main result}. \begin{THM}\label{extremal1} Suppose that $0<\beta \ll 1/(20\cdot17^3)$ and $n$ is a sufficiently large integer. Let $G$ be a graph on $n$ vertices with $\delta(G)\ge (n+1)/2$. If $G$ is in Extremal Case 1, then $G$ contains a spanning Halin subgraph. \end{THM} \begin{THM}\label{extremal2} Suppose that $0<\beta \ll 1/(20\cdot17^3)$ and $n$ is a sufficiently large integer. Let $G$ be a graph on $n$ vertices with $\delta(G)\ge (n+1)/2$. If $G$ is in Extremal Case 2, then $G$ contains a spanning Halin subgraph. \end{THM} \begin{THM}\label{non-extremal} Let $n$ be a sufficiently large integer and $G$ an $n$-vertex graph with $\delta(G)\ge (n+1)/2$. If $G$ is in the Non-extremal case, then $G$ has a spanning Halin subgraph. \end{THM} We need the following ``Absorbing Lemma'' in each of the proofs of Theorems~\ref{extremal1} and~\ref{extremal2} to deal with ``garbage'' vertices. \begin{LEM}[Absorbing Lemma]\label{absorbing} Let $F$ be a graph such that $V(F)$ is partitioned as $S\cup R$. Suppose that (i) $\delta(R,S)\ge 3|R|$, (ii) for any two vertices $u,v\in N(R,S)$, $deg(u,v,S)\ge 6|R|$, and (iii) for any three vertices $u,v, w\in N(N(R,S),S)$, $deg(u,v,w, S)\ge 7|R|$. Then there is a ladder spanning on $R$ and $7|R|-2$ other vertices from $S$. \end{LEM} \textbf{Proof}.\quad Let $R=\{w_1,w_2,\cdots, w_r\}$. Consider first the case $r=1$. Choose $x_{11},x_{12},x_{13}\in \Gamma(w_1,S)$. By (ii), there are distinct vertices $y^1_{12}\in \Gamma(x_{11},x_{12}, S)$ and $y^1_{23}\in \Gamma(x_{12},x_{13}, S)$.
Then the graph $L$ on $\{w_1,x_{11},x_{12},x_{13}, y^1_{12}, y^1_{23}\}$ with edges in $$ \{w_1x_{11}, w_1x_{12}, w_1x_{13}, y^1_{12}x_{11}, y^1_{12}x_{12}, y^1_{23}x_{12}, y^1_{23}x_{13}\} $$ is a ladder covering $R$ with $|V(L)|=6$. Suppose now $r\ge 2$. For each $i$ with $1\le i\le r$, choose distinct\,(and unchosen) vertices $x_{i1}, x_{i2}, x_{i3}\in \Gamma(w_i, S)$. This is possible since $deg(x, S)\ge 3|R|$ for each $x\in R$. By (ii), we choose distinct vertices $y_{12}^1, y_{23}^1,\cdots, y_{12}^r, y_{23}^r$ different from the vertices already chosen such that $y_{12}^i\in \Gamma(x_{i1}, x_{i2}, S)$ and $y_{23}^i\in \Gamma(x_{i2}, x_{i3}, S)$ for each $i$, and at the same time, we choose distinct vertices $z_1,z_2,\cdots, z_{r-1}$ from the unchosen vertices in $S$ such that $z_i\in \Gamma(x_{i3}, x_{i+1,1}, S)$ for each $1\le i\le r-1$. Finally, by (iii), choose distinct vertices $u_1, u_2,\cdots, u_{r-1}$ from the unchosen vertices in $S$ such that $u_i\in \Gamma(x_{i3}, x_{i+1,1}, z_i, S)$. Let $L$ be the graph with $$ V(L)=R\cup \{x_{i1}, x_{i2}, x_{i3}, y_{12}^i, y_{23}^i \,|\, 1\le i\le r \}\cup \{z_i, u_i \,|\, 1\le i\le r-1\}\quad \mbox{and} $$ $E(L)$ consisting of the edges $w_rx_{r1}, w_rx_{r2},w_rx_{r3}, y_{12}^rx_{r1}, y_{12}^rx_{r2},y_{23}^rx_{r2},y_{23}^rx_{r3}$ and the edges indicated below for each $1\le i\le r-1$: $$ w_i\sim x_{i1},x_{i2}, x_{i3};\, y_{12}^i\sim x_{i1}, x_{i2};\, y_{23}^i\sim x_{i2},x_{i3}; \, z_i\sim x_{i3}, x_{i+1,1}; \, u_i\sim x_{i3}, x_{i+1,1}, z_i. $$ It is easy to check that $L$ is a ladder covering $R$ with $|V(L)|=8r-2$. Figure~\ref{insterL} gives a depiction of $L$ for $|R|=2$.
\hfill $\square$\vspace{1mm} \begin{figure}[!htb] \psfrag{w_1}{$w_1$} \psfrag{w_2}{$w_2$} \psfrag{x_{11}}{$x_{11}$} \psfrag{x_{12}}{$x_{12}$} \psfrag{x_{13}}{$x_{13}$} \psfrag{x_{21}}{$x_{21}$} \psfrag{x_{22}}{$x_{22}$} \psfrag{x_{23}}{$x_{23}$} \psfrag{y_{12}^1}{$y_{12}^1$} \psfrag{y_{23}^1}{$y_{23}^1$} \psfrag{y_{12}^2}{$y_{12}^2$} \psfrag{y_{23}^2}{$y_{23}^2$} \psfrag{u_1}{$u_1$} \psfrag{z_1}{$z_1$} \begin{center} \includegraphics[scale=0.5]{insertL.eps}\\ \end{center} \vspace{-3mm} \caption{Ladder $L$ of order 14}\label{insterL} \end{figure} The following simple observation is used heavily, explicitly or implicitly, in the proofs. \begin{LEM}\label{common-vertex} Let $U=\{u_1,u_2,\cdots, u_k\}, S\subseteq V(G)$ be vertex subsets. Then $deg(u_1,u_2,\cdots, u_k, S)\ge |S|-(deg_{\overline{G}}(u_1,S)+ \cdots +deg_{\overline{G}}(u_k,S)) \ge |S|-k(|S|-\delta(U,S))$. \end{LEM} Extremal Case 1 is the easiest of the three cases, so we prove Theorem~\ref{extremal1} first. \subsection{Proof of Theorem~\ref{extremal1}} We assume that $G$ has a vertex-cut $W$ such that $|W|\le 5\beta n$. As $\delta(G)\ge (n+1)/2$, by simply counting degrees we see that $G-W$ has exactly two components. Let $V_1$ and $V_2$ be the vertex sets of the two components, respectively. Then $(1/2-5\beta )n \le |V_i|\le (1/2+5\beta)n$. We partition $W$ into two subsets as follows: $$ W_1=\{w\in W\,|\, deg(w,V_1)\ge (n+1)/4-2.5\beta n\}\quad \mbox{and}\quad W_2=W-W_1. $$ As $\delta(G)\ge (n+1)/2$, we have $deg(w,V_2)\ge (n+1)/4-2.5\beta n$ for any $w\in W_2$. Since $G$ is 3-connected and $(1/2-5\beta)n>3$, there are three independent edges $p_1p_2$, $q_1q_2$, and $r_1r_2$ between $G[V_1\cup W_1]$ and $G[V_2\cup W_2]$ with $p_1, q_1, r_1\in V_1\cup W_1$ and $p_2,q_2,r_2\in V_2\cup W_2$. For $i=1,2$, by the partition of $W_i$, we see that $\delta(W_i, V_i)\ge 3|W_i|+3$. As $\delta(G)\ge (n+1)/2$, we have $\delta(G[V_i])\ge (1/2-5\beta)n$.
Then, as $|V_i|\le (1/2+5\beta)n$, for any $u,v\in V_i$, $deg(u,v, V_i)\ge (1/2-25\beta)n\ge 6|W_i|+2$, and for any $u,v, w\in V_i$, $deg(u,v, w, V_i)\ge (1/2-35\beta)n\ge 7|W_i|+2$. By Lemma~\ref{absorbing}, we can find a ladder $L_i$ that spans $W_i-\{p_i,q_i\}$ and another $7|W_i-\{p_i,q_i\}|-2$ vertices from $V_i-\{p_i, q_i\}$ if $W_i-\{p_i, q_i\}\ne \emptyset$. Denote by $a_ib_i$ and $c_id_i$ the first and last rungs of $L_i$\,(if $L_i$ exists), respectively. Let $$ G_i=G[V_i-V(L_i)] \quad \mbox{and}\quad n_i=|V(G_i)|. $$ Then for $i=1,2$, \begin{equation*}\label{Gisize} n_i\ge (n+1)/2-5\beta n-7|W_i|\ge (n+1)/2-40\beta n \quad \mbox{and}\quad \delta(G_i)\ge \delta(G[V_i])-7|W_i|\ge (n+1)/2-40\beta n. \end{equation*} Let $i=1,2$. We now show that $G_i$ contains a spanning subgraph isomorphic to either $H_1$ or $H_2$ as defined in the beginning of this section. Since $n_i\le (1/2+5\beta)n$ and $\delta(G_i)\ge (n+1)/2-40\beta n$, any subgraph of $G_i$ induced on at least $(1/4-40\beta)n$ vertices has minimum degree at least $(1/4-85\beta)n$, and thus has a matching of size at least 2. Hence, when $n_i$ is even, we can choose independent edges $e_i=x_iy_i$ and $f_i=z_iw_i$ with $$ x_i, y_i \in \Gamma_{G_i}(p_i)-\{q_i\}\quad \mbox{and}\quad z_i, w_i \in \Gamma_{G_i}(q_i)-\{p_i\}. $$ (Notice that $p_i$ or $q_i$ may be contained in $W_i$, and in this case we have $deg_{G_i}(p_i), deg_{G_i}(q_i)\ge (1/4-40\beta)n$.) If $n_i$ is odd, we can choose independent edges $g_iy_i$, $f_i=z_iw_i$, and a vertex $x_i$ with $$ g_i, x_i, y_i \in \Gamma_{G_i}(p_i)-\{q_i\}, x_i\in \Gamma_{G_i}(g_i, y_i)-\{p_i,q_i\} \quad \mbox{and}\quad z_i, w_i \in \Gamma_{G_i}(q_i)-\{x_i, p_i\}, $$ where the existence of the vertex $x_i$ is guaranteed since the subgraph of $G_i$ induced on $\Gamma_{G_i}(p_i)$ has minimum degree at least $(1/2-40\beta)n-((1/2+5\beta)n-|\Gamma_{G_i}(p_i)|)\ge |\Gamma_{G_i}(p_i)|-45\beta n$, and hence contains a triangle. In this case, again, denote $e_i=x_iy_i$.
Let $$ \begin{cases} G_i'=G_i-\{p_i,q_i\}, & \text{if $n_i$ is even}; \\ G_i'=G_i-\{p_i, q_i,g_i\}, & \text{if $n_i$ is odd}. \end{cases} $$ By the definition above, $|V(G_i')|$ is even. The following claim is a modification of (1) of Lemma 2.2 in~\cite{MR2646098}. \begin{CLA}\label{H12} For $i=1,2$, let $a_{i}'b'_{i}, c_i'd_i'\in E(G_i')$ be two independent edges. Then $G_i'$ contains two vertex-disjoint ladders $Q_{i1}$ and $Q_{i2}$ spanning on $V(G_i')$ such that $Q_{i1}$ has $e_i=x_iy_i$ as its first rung and $a_i'b_i' $ as its last rung, and $Q_{i2}$ has $c_i'd_i'$ as its first rung and $f_i=z_iw_i$ as its last rung, where $e_i$ and $f_i$ are defined prior to this claim. \end{CLA} \textbf{Proof}.\quad We only show the claim for $i=1$, as the case $i=2$ is similar. Notice that by the definition of $G_1'$, $|V(G_1')|$ is even. Since $|V(G_1')|\le (1/2+5\beta)n$ and $\delta(G_1')\ge (n+1)/2-40\beta n-2\ge |V(G_1')|/2+4$, $G_1'$ has a perfect matching $M$ containing $e_1, f_1, a_1'b_1', c_1'd_1'$. We identify $a_1'$ and $c_1'$ into a vertex called $s'$, and identify $b_1'$ and $d_1'$ into a vertex called $t'$. Denote by $G_1''$ the resulting graph, and let $s't'\in E(G_1'')$ if the two vertices are not adjacent. Partition $V(G_1'')$ arbitrarily into $U$ and $V$ with $|U|=|V|$ such that $x_1, z_1, s'\in U$, $y_1, w_1, t'\in V$, and let $M':=M-\{a_1'b_1', c_1'd_1'\}\cup \{s't'\}\subseteq E_{G_1'}(U,V)$. Define an auxiliary graph $H'$ with vertex set $M'$ and edge set defined as follows. If $xy, uv\in M'$ with $x,u\in U$, then $xy\sim_{H'} uv$ if and only if $x\sim_{G_1'} v$ and $y\sim_{G_1'} u$\,(we do not include the case that $x\sim_{G_1'} u$ and $y\sim_{G_1'} v$, as we have defined a bipartition here). In particular, for any $pq\in M'-\{s't'\}$ with $p\in U$, $pq\sim_{H'} s't'$ if and only if $p\sim_{G_1'} b_1', d_1'$ and $q\sim_{G_1'} a_1', c_1'$. Notice that a ladder with rungs in $M'$ corresponds to a path in $H'$ and vice versa.
Since $(1/2-40\beta)n-2\le |V(G_1')|\le (1/2+5\beta)n-2$ and $\delta(G_1')\ge (n+1)/2-40\beta n-2$, any two vertices in $G_1'$ have at least $(1/2-130\beta) n$ common neighbors. This, together with the fact that $|U|=|V|\le |V(G_1'')|/2\le (1/4+2.5\beta)n $, gives that $\delta(U,V), \delta(V,U)\ge (1/4-132.5\beta)n$. Hence $$ \delta(H')\ge (1/4-132.5\beta)n-\left((1/4+2.5\beta)n-(1/4-132.5\beta)n\right)= (1/4-267.5\beta)n\ge |V(H')|/2+1, $$ since $\beta <1/2200$ and $n$ is very large. Hence $H'$ has a hamiltonian path starting with $e_1$, ending with $f_1$, and having $s't'$ as an internal vertex. The path with $s't'$ replaced by $a_1'b_1'$ and $c_1'd_1'$ corresponds to the required ladders in $G_1'$. \hfill $\square$\vspace{1mm} We may assume that $n_1$ is even and $n_2$ is odd, and construct a spanning Halin subgraph of $G$\,(the constructions for the other three cases follow a similar argument). Recall that $p_1p_2, q_1q_2, r_1r_2$ are the three prescribed independent edges between $G[V_1\cup W_1]$ and $G[V_2\cup W_2]$, where $p_1,q_1, r_1\in V_1\cup W_1$ and $p_2, q_2, g_2, r_2\in V_2\cup W_2$. For a uniform discussion, we may assume that both of the ladders $L_1$ and $L_2$ exist. Let $i=1,2$. Recall that $L_i$ has $a_ib_i$ as its first rung and $c_id_i$ as its last rung. Choose $a_i'\in \Gamma_{G_i'}(a_i)$, $b_i'\in \Gamma_{G_i'}(b_i)$ such that $a_i'b_i'\in E(G)$, and $c_i'\in \Gamma_{G_i'}(c_i)$, $d_i'\in \Gamma_{G_i'}(d_i)$ such that $c_i'd_i'\in E(G)$. This is possible as $\delta(G_i')\ge (n+1)/2-40\beta n-2$. Let $Q_{1i}$ and $Q_{2i}$ be the ladders of $G_i'$ given by Claim~\ref{H12}. Set $H_a=Q_{11}L_1 Q_{12}\cup \{p_1x_1, p_1y_1, q_1z_1, q_1w_1\}$. The ladder $Q_{21}L_2 Q_{22}$ can be denoted as $\overrightarrow{x_2y_2}-Q_{21}L_2 Q_{22}-\overrightarrow{z_2w_2}$.
To make $r_2$ a Halin constructible vertex, we let $H_b=Q_{21}L_2 Q_{22}\cup \{g_2x_2, g_2y_2, p_2g_2, p_2y_2, q_2z_2, q_2w_2\}$ if $r_2$ is on the shortest $(y_2,w_2)$-path in $Q_{21}L_2 Q_{22}$, and let $H_b=Q_{21}L_2 Q_{22}\cup \{g_2x_2, g_2y_2, p_2g_2, p_2x_2, q_2z_2, q_2w_2\}$ if $r_2$ is on the shortest $(x_2,z_2)$-path\,(recall that $g_2,x_2,y_2\in \Gamma_{G_2}(p_2)$). Let $H=H_a\cup H_b\cup \{p_1p_2, r_1r_2, q_1q_2\}$. Then $H$ is a spanning Halin subgraph of $G$ by Proposition~\ref{Prop:Halin_H1_H2} as $H_a\cup p_1q_1\cong H_1$ and $H_b\cup p_2q_2\cong H_2$. Figure~\ref{halin2} gives a construction of $H$ for the above case when $r_2$ is on the shortest $(y_2,w_2)$-path in $Q_{21}L_2 Q_{22}$. \begin{figure}[!htb] \psfrag{x_1}{$x_1$} \psfrag{x_2}{$x_2$} \psfrag{y_1}{$y_1$} \psfrag{y_2}{$y_2$} \psfrag{a_2}{$a_2$}\psfrag{a_2'}{$a_2'$} \psfrag{b_2}{$b_2$} \psfrag{c_2}{$c_2$} \psfrag{d_2}{$d_2$} \psfrag{a_1}{$a_1$} \psfrag{b_1}{$b_1$} \psfrag{c_1}{$c_1$} \psfrag{d_1}{$d_1$} \psfrag{L_2}{$L_2$} \psfrag{L_1}{$L_1$} \psfrag{b_2'}{$b_2'$} \psfrag{c_2'}{$c_2'$} \psfrag{d_2'}{$d_2'$} \psfrag{a_1'}{$a_1'$} \psfrag{b_1'}{$b_1'$} \psfrag{c_1'}{$c_1'$} \psfrag{d_1'}{$d_1'$}\psfrag{d}{$d$} \psfrag{z_1}{$z_1$} \psfrag{z_2}{$z_2$} \psfrag{w_1}{$w_1$} \psfrag{w_2}{$w_2$} \psfrag{g_2}{$g_2$} \psfrag{p_1}{$p_1$} \psfrag{p_2}{$p_2$} \psfrag{p_3}{$p_3$} \psfrag{q_1}{$q_1$} \psfrag{q_2}{$q_2$} \psfrag{r_1}{$r_1$} \psfrag{r_2}{$r_2$} \psfrag{s_1}{$s_1$} \psfrag{s_2}{$s_2$} \psfrag{q_3}{$q_3$} \psfrag{Q_11'}{$Q_{11}$}\psfrag{Q_12'}{$Q_{12}$}\psfrag{Q_21'}{$Q_{21}$}\psfrag{Q_22'}{$Q_{22}$} \psfrag{L^T_1}{$L^T_1$} \psfrag{F}{$F$} \begin{center} \includegraphics[scale=0.4]{halin2.eps}\\ \end{center} \vspace{-3mm} \caption{A Halin graph $H$ }\label{halin2} \end{figure} \subsection{Proof of Theorem~\ref{extremal2}} Recall Extremal Case 2: There exists a partition $V_1\cup V_2$ of $V$ such that $|V_1|\ge (1/2-7\beta)n$ and $\Delta(G[V_1])\le \beta n$. 
Since $\delta(G)\ge (n+1)/2$, the assumptions imply that $$ (1/2-7\beta)n \le |V_1| \le (1/2+\beta)n \quad \mbox{and} \quad (1/2-\beta)n \le |V_2| \le (1/2+7\beta)n. $$ Let $\beta$ and $\alpha$ be real numbers satisfying $\beta \le \alpha/20$ and $ \alpha\le (1/17)^3$. Set $\alpha_1=\alpha^{1/3}$ and $\alpha_2=\alpha^{2/3}$. We first repartition $V(G)$ as follows. \begin{eqnarray*} V_2'& = & \{v\in V_2\,|\, deg(v,V_1)\ge (1-\alpha_1)|V_1|\} , V_{01}=\{v\in V_2-V_2'\,|\, deg(v,V_2')\ge (1-\alpha_1)|V_2'|\}, \\ V_1' & = & V_1\cup V_{01}, \quad \mbox{and}\quad V_0=V_2-V_2'-V_{01}. \end{eqnarray*} \begin{CLA}\label{V2-V2'_size} $|V_{01}|, |V_0|\le |V_2-V_2'|\le \alpha_2|V_2|$. \end{CLA} \textbf{Proof}.\quad Notice that $e(V_1, V_2)\ge (1/2-7\beta)n |V_2| \ge \frac{1/2-7\beta}{1/2+\beta}|V_1||V_2|\ge (1-\alpha)|V_1||V_2|$ as $\beta \le \alpha/20$. Hence, \begin{eqnarray*} (1-\alpha)|V_1||V_2|\le & e(V_1, V_2) & \le e(V_1, V_2')+e(V_1, V_2-V_2')\le |V_1||V_2'|+(1-\alpha_1)|V_1||V_2-V_2'|. \end{eqnarray*} Since $|V_2'|=|V_2|-|V_2-V_2'|$, this gives $\alpha_1|V_2-V_2'|\le \alpha|V_2|$, and so $|V_2-V_2'|\le (\alpha/\alpha_1)|V_2|=\alpha_2|V_2|$. Thus $|V_{01}|, |V_0|\le |V_2-V_2'|\le \alpha_2|V_2|$. \hfill $\square$\vspace{1mm} As a result of moving vertices from $V_2$ to $V_1$ and by Claim~\ref{V2-V2'_size}, we have the following. \begin{eqnarray} \Delta(G[V_1'])& \le & \beta n +|V_{01}|\le \beta n+\alpha_2|V_2|, \nonumber \\ \delta(V_1', V_2') & \ge & (1/2-\beta)n-|V_2-V_2'| \ge (1/2-\beta)n-\alpha_2|V_2| , \nonumber\\ \delta(V_2', V_1') &\ge &(1-\alpha_1)|V_1|\ge (1-\alpha_1)(1/2-7\beta)n, \label{degrees} \\ \delta(V_{0}, V_1')&\ge & (n+1)/2-(1-\alpha_1)|V_2'|-|V_0| \ge 3\alpha_2n+8 \ge 3|V_0|+10 , \nonumber \\ \delta(V_{0}, V_2')&\ge & (n+1)/2-(1-\alpha_1)|V_1|-|V_0| \ge 3\alpha_2n+8 \ge 3|V_0|+10, \nonumber \end{eqnarray} where the last two inequalities hold because $7\beta+10/n \le \alpha$ and $\alpha \le (1/8)^3$. \begin{CLA}\label{nowheel} We may assume that $\Delta(G)<n-1$.
\end{CLA} \textbf{Proof}.\quad Suppose to the contrary that there is a vertex $w\in V(G)$ with $deg(w)=n-1$. Then by $\delta(G)\ge (n+1)/2$ we have $\delta(G-w)\ge (n-1)/2$, and thus $G-w$ has a hamiltonian cycle. This implies that $G$ has a spanning wheel subgraph, and hence a spanning Halin subgraph. \hfill $\square$\vspace{1mm} \begin{CLA}\label{subgraph_K} There exists a subgraph $T\subseteq G$ such that $|V(T)|\equiv n \pmod 2$, where $T$ is isomorphic to some graph in $\{T_1,T_2,\cdots, T_5\}$. Assume that $T$ has head link $x_1x_2$ and tail link $y_1y_2$. Let $m=n-|V(T)|$. Then $G-V(T)$ contains a balanced spanning bipartite graph $G'$ with partite sets $U_1$ and $U_2$ and a subset $W$ of $U_1\cup U_2$ with at most $\alpha_2n$ vertices such that the following holds: \begin{itemize} \item[$($i$)$] $deg_{G'}(x, V(G')-W)\ge (1-\alpha_1-2\alpha_2)m$ for all $x\not\in W$; \item[$($ii$)$] There exist $x_1'x_2', y_1'y_2'\in E(G')$ such that $x_i', y_i'\in U_i-W$, $x_{3-i}'\sim x_i$, and $y_{3-i}'\sim y_i$, for $i=1,2$; and if $T$ has a pendent vertex, then the vertex is contained in $V'_1\cup V'_2-W$. \item[$($iii$)$] There are $|W|$ vertex-disjoint 3-stars\,($K_{1,3}$s) in $G'-\{x_1',x_2', y_1', y_2'\}$ with the vertices in $W$ as their centers. \end{itemize} \end{CLA} \textbf{Proof}.\quad By~\eqref{degrees}, for $i=1,2$, we notice that for any $u,v,w\in V_i'$, \begin{eqnarray}\label{common} deg(u,v,w, V_{3-i}') & \ge & |V_{3-i}'|-3(|V_{3-i}'|-\delta(V_i', V_{3-i}'))\ge (1/2-28\beta -3\alpha_1)n>n/4. \end{eqnarray} We now separate the proof into two cases according to the parity of $n$. {\noindent \bf Case 1. $n$ is even. } Suppose first that $\max\{|V_1'|, |V_2'|\}\le n/2$. We arbitrarily partition $V_0$ into $V_{10}$ and $V_{20}$ such that $|V'_1\cup V_{10}|=|V'_2\cup V_{20}|=n/2$. Suppose $G[V_1']$ contains an edge $x_1u_1$ and there is a vertex $u_2\in \Gamma(u_1, V_2')$ such that $u_2$ is adjacent to a vertex $y_2\in V_2'$.
By~\eqref{common}, there exist distinct vertices $x_2\in \Gamma(x_1, u_1, V_2')-\{y_2,u_2\}$ and $y_1\in \Gamma(y_2, u_2, V_1')-\{x_1,u_1\}$. Then $G[\{x_1, u_1, x_2, y_1, u_2, y_2\}]$ contains a subgraph $T$ isomorphic to $T_1$. So we assume that $G[V_1']$ contains an edge $x_1u_1$ and no vertex in $\Gamma(u_1, V_2')$ is adjacent to any vertex in $V_2'$. As $\delta(G)\ge (n+1)/2$, $\delta(G[V'_2\cup V_{20}])\ge 1$. Let $u_2\in \Gamma(u_1, V_2') $ and $u_2y_2\in E(G[V'_2\cup V_{20}])$. Since $deg(u_2, V_1')\ge (n+1)/2-|V_0|>|V_1'\cup V_{10}|-|V_0|$ and $deg(y_2, V_1')\ge 3|V_0|+10$, we have $deg(u_2, y_2, V_1'\cup V_{10})\ge 2|V_0|+10$. Let $x_2\in \Gamma(x_1, u_1, V_2')-\{y_2,u_2\}$ and $y_1\in \Gamma(y_2, u_2, V_1')-\{x_1,u_1\}$. Then $G[\{x_1, u_1, x_2, y_1, u_2, y_2\}]$ contains a subgraph $T$ isomorphic to $T_1$. By symmetry, we can find $T\cong T_1$ if $G[V_2']$ contains an edge. Hence we assume that both $V_1'$ and $V_2'$ are independent sets. Again, as $\delta(G)\ge (n+1)/2$, $\delta(G[V'_1\cup V_{10}]), \delta(G[V'_2\cup V_{20}])\ge 1$. Let $x_1u_1\in E(G[V'_1\cup V_{10}])$ and $y_2u_2\in E(G[V'_2\cup V_{20}])$ such that $x_1\in V_1'$ and $u_2\in \Gamma(u_1, V_2')$. Since $deg(x_1, V_2')\ge (n+1)/2-|V_0|>|V_2'\cup V_{20}|-|V_0|$ and $deg(u_1,V_2')\ge 3|V_0|+10$, we have $deg(x_1,u_1, V_2')\ge 2|V_0|+10$. Hence, there exists $x_2\in \Gamma(x_1, u_1, V_2')-\{y_2,u_2\}$. Similarly, there exists $ y_1\in \Gamma(y_2, u_2, V_1')-\{x_1,u_1\}$. Then $G[\{x_1, u_1, x_2, y_1, u_2, y_2\}]$ contains a subgraph $T$ isomorphic to $T_1$. Let $m=(n-6)/2$, $U_1=(V_1'-V(T))\cup V_{10}$, $U_2=(V_2'-V(T))\cup V_{20}$, and $W=V_0-V(T)$. We then have $|U_1|=|U_2|=m$. Let $G'=(V(G)-V(T), E_G(U_1, U_2))$ be the bipartite graph with partite sets $U_1$ and $U_2$. Notice that $|W|\le |V_0|\le \alpha_2|V_2|<\alpha_2n$. By~(\ref{degrees}), we have $deg_{G'}(x, V(G')-W)\ge (1-\alpha_1-2\alpha_2)m$ for all $x\notin W$. This shows (i). By the construction of $T$ above, we have $x_1, y_1\in V_1'-W$.
Let $i=1, 2$. By~(\ref{degrees}), we have $\delta(V_0, U_i-W)\ge 3|V_0|+6$. Applying statement (i), we have $e_{G'}(\Gamma_{G'}(x_1, U_2-W), \Gamma_{G'}(x_2, U_1-W)), e_{G'}(\Gamma_{G'}(y_1, U_2-W), \Gamma_{G'}(y_2, U_1-W))\ge (3|V_0|+4)(1-2\alpha_1-4\alpha_2)m>2m$. Hence, we can find independent edges $x_1'x_2'$ and $y_1'y_2'$ such that $x_i', y_i'\in U_i-W$, $x_{3-i}'\sim x_i$, and $y_{3-i}'\sim y_i$. This gives statement (ii). Finally, as $\delta(V_0, U_i-W)\ge 3|V_0|+6$, we have $\delta(V_0, U_i-W-\{x_1',x_2', y_1', y_2'\})\ge 3|V_0|+2$. Hence, there are $|W|$ vertex-disjoint 3-stars with their centers in $W$. Otherwise we have $\max\{|V_1'|, |V_2'|\}> n/2$. Assume, w.l.o.g., that $|V_1'|\ge n/2+1$. Then $\delta(G[V_1'])\ge 2$ and thus $G[V_1']$ contains two vertex-disjoint paths isomorphic to $P_3$ and $P_2$, respectively. Let $m=(n-8)/2$. We consider three cases here. Case (a): $|V_1'|-5\le m$. Then let $x_1u_1w_1, y_1v_1\subseteq G[V_1']$ be two vertex-disjoint paths, and let $x_2\in \Gamma(x_1, u_1, w_1, V_2'), y_2\in \Gamma(y_1, v_1, V_2')$ and $z\in \Gamma(w_1, v_1, V_2')$ be three distinct vertices. Then $G[\{x_1, u_1, w_1, x_2, z, y_1, v_1, y_2\}]$ contains a subgraph $T$ isomorphic to $T_4$. Notice that $|V_2'-V(T)|\le m$. We arbitrarily partition $V_0$ into $V_{10}$ and $V_{20}$ such that $|V'_1\cup V_{10}|=|V'_2\cup V_{20}|=m$. Let $U_1=(V_1'-V(T))\cup V_{10}$, $U_2=(V_2'-V(T))\cup V_{20}$, and $W=V_0$. Hence we assume $|V_1'|-5 = m+t_1$ for some $t_1\ge 1$. This implies that $|V_1'|\ge n/2+t_1+1$ and thus $\delta(G[V_1'])\ge t_1+2$. Let $V_1^0$ be the set of vertices $u \in V_1'$ such that $deg(u, V_1')\ge \alpha_1 m$. Case (b): $|V_1^0|\ge |V_1'|-5-m$. Then we form a set $W$ with $|V'_1|-5-m$ vertices from $V_1^0$ and all the vertices of $V_0$. Then $|V_1'-W|=m+5+t_1-(|V_1'|-5-m)=m+5=n/2+1$, and hence $\delta(G[V_1'-W])\ge 2$. Similarly as in Case (a), we can find a subgraph $T$ of $G$ contained in $G[V_1'-W]$ isomorphic to $T_4$. 
Let $U_1=V_1'-V(T)-W$, $U_2=(V_2'-V(T))\cup W$. Then $|U_1|=|U_2|=m$. It remains to consider Case (c): $|V_1^0|< |V_1'|-5-m$. Suppose that $|V_1'-V_1^0|=m+5+t_1'=n/2+t_1'+1$ for some $t_1'\ge 1$. This implies that $\delta(G[V_1'-V_1^0])\ge t_1'+2$. We show that $G[V_1'-V_1^0]$ contains $t_1'+2$ vertex-disjoint 3-stars. To see this, suppose to the contrary that a maximum collection $M$ of vertex-disjoint 3-stars in $G[V_1'-V_1^0]$ consists of $s<t_1'+2$ 3-stars. By counting the number of edges between $V(M)$ and $V_1'-V_1^0-V(M)$ in two ways, we get that $t_1'|V_1'-V_1^0-V(M)|\le e_{G-V_1^0}(V(M), V_1'-V_1^0-V(M))\le 4s \Delta(G[V_1'-V_1^0])\le 4s \alpha_1m$. Since $|V_1'-V_1^0|=m+5+t_1'=n/2+t_1'+1$, we have $|V_1'-V_1^0-V(M)|\ge m -3t_1'\ge m-6\alpha_2m$, where the last inequality holds as $|V_1'|\le (1/2+\beta)n +\alpha_2|V_2'|$ implies that $t_1'\le |V_1'|-m -5\le 2\alpha_2m$. This, together with the assumption that $\alpha\le (1/8)^3$, gives that $s\ge t_1'+2$, a contradiction. Hence $G[V_1'-V_1^0]$ contains $t_1'+2$ vertex-disjoint 3-stars; let $M$ be a collection of $t_1'+2$ such stars. Let $x_1u_1w_1$ and $y_1v_1$ be two paths taken from two 3-stars in $M$. Then we can find a subgraph $T$ of $G$ isomorphic to $T_4$ in the same way as in Case (a). We take exactly $t_1'$ 3-stars from the remaining ones in $M$ and denote the centers of these stars by $W'$. Let $U_1=V_1'-V_1^0-V(T)-W'$, $W=W'\cup V_1^0\cup V_0$, and $U_2=(V_2'-V(T))\cup W$. Then $|U_1|=|U_2|=m$. For the partition of $U_1$ and $U_2$ in all the cases discussed in the paragraph above, we let $G'=(V(G)-V(T), E_G(U_1, U_2))$ be the bipartite graph with partite sets $U_1$ and $U_2$. Notice that $|W|\le |V_0|\le \alpha_2 n$ if Case (a) occurs, $|W|\le |V_0|+|V_1'|-m-5\le (1/2+\beta)n+|V_0|-n/2\le \alpha_2n$ if Case (b) occurs, and $|W|=|W'\cup V_1^0\cup V_0|=|V_1'-U_1-V(T)|+|V_0|\le (1/2+\beta)n-m-5+|V_0|\le \alpha_2n$ if Case (c) occurs. Since $\delta(V_2', V_1') \ge (1-\alpha_1)|V_1|$ from~\eqref{degrees} and $|V_1'-U_1|\le 2\alpha_2m$, we have $\delta(U_2-W, U_1-W)\ge (1-\alpha_1-2\alpha_2)m$.
On the other hand, from~\eqref{degrees}, $\delta(V_1', V_2') \ge (1/2-\beta)n-\alpha_2|V_2|$. This gives that $\delta(U_1-W, U_2-W)\ge (1-\alpha_1-2\alpha_2)m$. Hence, we have $deg_{G'}(x, V(G')-W)\ge (1-\alpha_1-2\alpha_2)m$ for all $x\notin W$. According to the construction of $T$, we have $x_1,y_1\in V_1'-W$. Applying statement (i), we have $e_{G'}(\Gamma_{G'}(x_1, U_2-W), \Gamma_{G'}(x_2, U_1-W)), e_{G'}(\Gamma_{G'}(y_1, U_2-W), \Gamma_{G'}(y_2, U_1-W))\ge (3|V_0|+4)(1-2\alpha_1-4\alpha_2)m>2m$. Hence, we can find independent edges $x_1'x_2'$ and $y_1'y_2'$ such that $x_i', y_i'\in U_i-W$, $x_{3-i}'\sim x_i$, and $y_{3-i}'\sim y_i$. By the construction of $T$, $T$ is isomorphic to $T_4$, and the pendent vertex $z\in V_2'-W\subseteq V_1'\cup V_2'-W$. This gives statement (ii). Finally, as $\delta(V_0, U_1-W)\ge 3\alpha_2n+5\ge 3|W|+5$, we have $\delta(V_0, U_1-W-\{x_1', x_2', y_1', y_2'\})\ge 3|W|+1$. By the definition of $V_1^0$, we have $\delta(V_1^0, V_1'-W-\{x_1', x_2', y_1', y_2'\})\ge \alpha_1m-\alpha_2 n-4\ge 3|W|$. For the vertices in $W'$ in Case (c), we already know that there are vertex-disjoint 3-stars in $G'$ with centers in $W'$. Hence, regardless of the construction of $W$, we can always find $|W|$ vertex-disjoint 3-stars with their centers in $W$. {\noindent \bf Case 2. $n$ is odd. } Suppose first that $\max\{|V_1'|, |V_2'|\}\le (n+1)/2$ and let $m=(n-7)/2$. We arbitrarily partition $V_0$ into $V_{10}$ and $V_{20}$ such that, w.l.o.g., $|V'_1\cup V_{10}|=(n+1)/2$ and $|V'_2\cup V_{20}|=(n-1)/2$. We show that $G[V_1'\cup V_{10}]$ either contains two independent edges or is isomorphic to $K_{1,(n-1)/2}$. As $\delta(G)\ge (n+1)/2$, we have $\delta(G[V_1'\cup V_{10}])\ge 1$. Since $n$ is sufficiently large, $(n+1)/2>3$. Then it is easy to see that if $G[V_1'\cup V_{10}]\not\cong K_{1,(n-1)/2}$, then $G[V_1'\cup V_{10}]$ contains two independent edges.
Furthermore, we can choose two independent edges $x_1u_1$ and $y_1v_1$ such that $u_1, v_1\in V_1'$. This is obvious if $|V_{10}|\le 1$. So we assume $|V_{10}|\ge 2$. As $\delta(V_0, V_1')\ge 3|V_0|+10$, by choosing $x_1,y_1\in V_{10}$, we can choose distinct vertices $u_1\in \Gamma(x_1, V_1')$ and $v_1\in \Gamma(y_1, V_1')$. Let $x_2\in \Gamma(x_1, u_1, V_2')$, $y_2\in \Gamma(y_1, v_1, V_2')$ and $z\in \Gamma(u_1, v_1, V_2')$ be distinct vertices. Then $G[\{x_1, u_1, x_2, y_1, v_1, y_2,z \}]$ contains a subgraph $T$ isomorphic to $T_3$. We assume now that $G[V_1'\cup V_{10}]$ is isomorphic to $K_{1, (n-1)/2}$. Let $u_1$ be the center of the star $K_{1, (n-1)/2}$. Then each leaf of the star has at least $(n-1)/2$ neighbors in $V_2'\cup V_{20}$. Since $|V_2'\cup V_{20}|=(n-1)/2$, we have $\Gamma(v, V_2'\cup V_{20})=V_2'\cup V_{20}$ for every $v\in V_1'\cup V_{10}-\{ u_1\}$. By the definition of $V_0$, $\Delta(V_0, V_{1}')< (1-\alpha_1)|V_1|$ and $\Delta(V_0, V_2')<(1-\alpha_1)|V_2'|$, and so $u_1\in V_1'$, $V_{10}=\emptyset$ and $V_{20}=\emptyset$. We claim that $V_2'$ is not an independent set. Otherwise, by $\delta(G)\ge (n+1)/2$, for each $v\in V_2'$, $\Gamma(v, V_1')=V_1'$. This in turn shows that $u_1$ has degree $n-1$, a contradiction to Claim~\ref{nowheel}. So let $y_2v_2\in E(G[V_2'])$ be an edge. Let $w_1\in \Gamma(v_2, V_1')-\{u_1\}$, and let $w_1u_1x_1$ be a path through the center $u_1$, where $x_1$ is another leaf. Choose $y_1\in \Gamma(y_2, v_2, V_1')-\{w_1,u_1,x_1\}$ and $x_2\in \Gamma(x_1, u_1, w_1, V_2')-\{y_2,v_2\} $. Then $G[\{x_1, u_1, x_2, w_1, v_2, y_2, y_1\}]$ contains a subgraph $T$ isomorphic to $T_2$. Let $U_1=(V_1'-V(T))\cup V_{10}$, $U_2=(V_2'-V(T))\cup V_{20}$, and $W=V_0-V(T)$. We have $|U_1|=|U_2|=m$ and $|W|\le |V_0|\le \alpha_2n$. Suppose now that $\max\{|V_1'|, |V_2'|\}\ge (n+1)/2+1$. Assume, w.l.o.g., that $|V_1'|\ge (n+1)/2+1$. Then $\delta(G[V_1'])\ge 2$ and thus $G[V_1']$ contains two independent edges.
Let $m=(n-7)/2$, and let $V_1^0$ be the set of vertices $u \in V_1'$ such that $deg(u, V_1')\ge \alpha_1 m$. We consider three cases here. Since $|V_1'|\ge (n+1)/2+1>m+4$, we assume $|V_1'|=m+4+t_1$ for some $t_1\ge 1$. Case (a): $|V_1^0|\ge |V_1'|-m-4$. Then we form a set $W$ with $|V'_1|-4-m$ vertices from $V_1^0$ and all the vertices of $V_0$. Then $|V_1'-W|=m+4+t_1-(|V_1'|-4-m)=m+4=(n+1)/2+1$, and we have $\delta(G[V_1'-W])\ge 2$. Hence $G[V_1'-W]$ contains two independent edges. Let $x_1u_1, y_1v_1\in E(G[V_1'-W])$ be two independent edges, and let $x_2\in \Gamma(x_1, u_1, V_2')$, $y_2\in \Gamma(y_1, v_1, V_2')$ and $z\in \Gamma(u_1, v_1, V_2')$ be three distinct vertices. Then $G[\{x_1, u_1, x_2, z, y_1, v_1, y_2\}]$ contains a subgraph $T$ isomorphic to $T_3$. Let $U_1=V_1'-V(T)-W$, $U_2=(V_2'-V(T))\cup W$. Then $|U_1|=|U_2|=m$ and $|W|\le |V_0|+|V_1'-U_1|\le |V_2-V_2'|+\beta n +4\le \alpha_2n$. It remains to consider the case $|V_1^0|< |V_1'|-4-m$. Suppose that $|V_1'-V_1^0|=m+4+t_1'=(n+1)/2+t_1'$ for some $t_1'\ge 1$. This implies that $\delta(G[V_1'-V_1^0])\ge t_1'+1$. Case (b): $t_1'\ge 2$. We show that $G[V_1'-V_1^0]$ contains $t_1'+2$ vertex-disjoint 3-stars. To see this, suppose to the contrary that a maximum collection $M$ of vertex-disjoint 3-stars in $G[V_1'-V_1^0]$ consists of $s< t_1'+2$ 3-stars. Then we have $(t_1'-1) |V_1'-V_1^0-V(M)|\le e_{G-V_1^0}(V(M), V_1'-V_1^0-V(M))\le 4s\Delta(G[V_1'-V_1^0])\le 4s\alpha_1 m$. Since $|V_1'-V_1^0|=m+4+t_1'=(n+1)/2+t_1'$, we have $|V_1'-V_1^0-V(M)|\ge m -3t_1'\ge m-6\alpha_2m$, where the last inequality holds as $|V_1'|\le (1/2+\beta)n +\alpha_2 |V_2'|$ implies that $t_1'\le |V_1'|-m -4\le 2\alpha_2m$. This, together with the assumption that $\alpha\le (1/8)^3$, gives that $s\ge t_1'+2$, a contradiction. Hence $G[V_1'-V_1^0]$ contains $t_1'+2$ vertex-disjoint 3-stars; let $M$ be a collection of $t_1'+2$ such stars. Let $x_1u_1$ and $y_1v_1$ be two independent edges taken from two 3-stars in $M$. Then we can find a subgraph $T$ of $G$ isomorphic to $T_3$ in the same way as in Case (a).
We take exactly $t_1'$ 3-stars from the remaining ones in $M$ and denote the set of centers of these stars by $W'$. Let $U_1=V_1'-V_1^0-V(T)-W'$, $W=W'\cup V_1^0\cup V_0$, and $U_2=(V_2'-V(T))\cup W$. Then $|U_1|=|U_2|=m$. Case (c): $t_1'=1$. In this case, we let $m=(n-9)/2$. If $G[V_1'-V_1^0]$ contains a vertex adjacent to all other vertices in $V_1'-V_1^0$, we move this vertex to $V_2'$. This gets us back to Case (a). Hence, we assume that $G[V_1'-V_1^0]$ has no vertex adjacent to all other vertices in $V_1'-V_1^0$. Then by the assumptions that $\delta(G)\ge (n+1)/2$ and $|V_1'-V_1^0|=(n+1)/2+1$, we can find two vertex-disjoint copies of $P_3$ in $G[V_1'-V_1^0]$. Let $x_1u_1w_1$ and $y_1v_1z_1$ be two such $P_3$s in $G[V_1'-V_1^0]$. There exist distinct vertices $x_2\in \Gamma(x_1, u_1, w_1, V_2'), y_2\in \Gamma(y_1, v_1, z_1, V_2')$ and $z\in \Gamma(w_1,z_1, V_2')$. Then $G[\{x_1, u_1, w_1, x_2, y_1, v_1,z_1, y_2,z\}]$ contains a subgraph $T$ isomorphic to $T_5$. Let $U_1=V_1'-V_1^0-V(T)$, $W=V_1^0\cup V_0$, and $U_2=(V_2'-V(T))\cup W$. Then $|U_1|=|U_2|=m$. For the partitions $U_1$ and $U_2$ obtained in all the cases discussed in Case 2, we let $G'=(V(G)-V(T), E_G(U_1, U_2))$ be the bipartite graph with partite sets $U_1$ and $U_2$. Similarly as in Case 1, we can show that all the statements (i)-(iii) hold. \hfill $\square$\vspace{1mm} Let $W_1=U_1\cap W$ and $W_2=U_2\cap W$. For $i=1,2$, by the definition of $W$, we see that $\delta(W_i, U_i-\{x_1', y_1', x_2', y_2'\})\ge 3|W_i|$. Moreover, for any $u,v\in U_i$, $|\Gamma(u,v, U_{3-i})|\ge 6|W_i|$, and for any $u,v, w\in U_i$, $|\Gamma(u,v, w, U_{3-i})|\ge 7|W_i|$. By Lemma~\ref{absorbing}, we can find a ladder $L_i$ spanning on $W_i$ and another $7|W_i|-2$ vertices from $U_i-\{x'_1, x_2', y_1', y_2'\}$ if $W_i\ne \emptyset$. Denote by $a_{1i}a_{2i}$ and $b_{1i}b_{2i}$ the first and last rungs of $L_i$\,(if $L_i$ exists), respectively, where $a_{1i}, b_{1i}\in U_1$.
Let $$ U'_i=U_i- V(L_i), \quad m'=|U_1'|=|U_2'|, \quad \mbox{and} \quad G''=G''(U_1'\cup U_2', E_G(U_1', U_2')). $$ Since $|W|\le \alpha_2n$, $m\ge (n-9)/2$, and $n$ is sufficiently large, we have $1/n+7|W|\le 15\alpha_2m$. As $\delta(G'-W)\ge (1-\alpha_1-2\alpha_2)m$ and $\alpha\le (1/17)^3$, we obtain the following: \begin{equation*}\label{U'size} \delta(G'')\ge 7m'/8+1. \end{equation*} Let $a_{2i}'\in \Gamma(a_{1i}, U_2')$ and $a_{1i}'\in \Gamma(a_{2i}, U_1')$ be such that $a_{1i}'a_{2i}'\in E(G)$; and let $b_{2i}'\in \Gamma(b_{1i}, U_2')$ and $b_{1i}'\in \Gamma(b_{2i}, U_1')$ be such that $b_{1i}'b_{2i}'\in E(G)$. We have the claim below. \begin{CLA}\label{spanning ladder} The balanced bipartite graph $G''$ contains three vertex-disjoint ladders $Q_1$, $Q_2$, and $ Q_3$ spanning on $V(G'')$ such that the first rung of $Q_1$ is $x_1'x_2'$ and the last rung of $Q_1$ is $a_{11}'a_{21}'$, the first rung of $Q_2$ is $b_{11}'b_{21}'$ and the last rung of $Q_2$ is $a_{12}'a_{22}'$, and the first rung of $Q_3$ is $b_{12}'b_{22}'$ and the last rung of $Q_3$ is $y_1'y_2'$. \end{CLA} \textbf{Proof}.\quad Since $\delta(G'')\ge 7m'/8+1>m'/2+6$, $G''$ has a perfect matching $M$ containing the following edges: $x_1'x_2', a_{11}'a_{21}', b_{11}'b_{21}', a_{12}'a_{22}', b_{12}'b_{22}', y_1'y_2'$. We identify $a_{11}'$ and $b_{11}'$, $a_{21}'$ and $b_{21}'$, $a_{12}'$ and $b_{12}'$, and $a_{22}'$ and $b_{22}'$ as vertices called $c_{11}'$, $c_{21}'$, $c_{12}'$, and $c_{22}'$, respectively. Denote by $G^*=G^*(U_1^*,U_2^*)$ the resulting graph, and add the edges $c_{11}'c_{21}'$ and $c_{12}'c_{22}'$ to $E(G^*)$ if they are not already present. Denote $M':=M-\{a_{11}'a_{21}', b_{11}'b_{21}', a_{12}'a_{22}', b_{12}'b_{22}'\}\cup \{c_{11}'c_{21}', c_{12}'c_{22}'\}$. Define an auxiliary graph $H'$ on $M'$ as follows. If $xy, uv\in M'$ with $x,u\in U_1'$, then $xy\sim_{H'} uv$ if and only if $x\sim_{G'} v$ and $y\sim_{G'} u$.
Particularly, for any $pq\in M'-\{c_{11}'c_{21}', c_{12}'c_{22}'\}$ with $p\in U_2'$, $pq\sim_{H'} c_{11}'c_{21}'$\,(resp. $pq\sim_{H'} c_{12}'c_{22}'$) if and only if $p\sim_{G'} a_{11}', b_{11}'$ and $q\sim_{G'} a_{21}', b_{21}'$\,(resp. $p\sim_{G'} a_{12}', b_{12}'$ and $q\sim_{G'} a_{22}', b_{22}'$). Notice that there is a natural one-to-one correspondence between ladders with rungs in $M'$ and paths in $H'$. Since $\delta_{G^*}(U_1^*,U_2^*), \delta_{G^*}(U_2^*,U_1^*)\ge 3m'/4+1$, we get $\delta(H')\ge m'/2+1$. Hence $H'$ has a hamiltonian path starting with $x_1'x_2'$, ending with $y_1'y_2'$, and having $c_{11}'c_{21}'$ and $c_{12}'c_{22}'$ as two internal vertices. The path with the vertex $c_{11}'c_{21}'$ replaced by $a_{11}'a_{21}'$ and $b_{11}'b_{21}'$, and with the vertex $c_{12}'c_{22}'$ replaced by $a_{12}'a_{22}'$ and $b_{12}'b_{22}'$, corresponds to the required ladders in $G''$. \hfill $\square$\vspace{1mm} If $T\in \{T_1,T_2\}$, then $$ H=x_1x_2Q_1L_1Q_2L_2Q_3y_1y_2\cup T $$ is a spanning Halin subgraph of $G$. Suppose now that $T\in \{T_3,T_4,T_5\}$ and $z$ is the pendant vertex. Then $z\in V_1'\cup V_2'-W$ by Claim~\ref{subgraph_K}. Suppose, w.l.o.g., that $z\in V_2'-W$. Then by (i) of Claim~\ref{subgraph_K} and the definition of $U_1'$, we have $deg_G(z, U_1')\ge deg_G(z, U_1'-W_1)\ge (1-\alpha_1-10\alpha_2)m>m/2+1$. So $z$ has a neighbor on each side of the ladder $Q_1L_1Q_2L_2Q_3$, which has $m$ vertices on each side. Let $H'$ be obtained from $x_1x_2Q_1L_1Q_2L_2Q_3y_1y_2\cup T$ by suppressing the degree-2 vertex $z$. Then $H'$ is a Halin graph such that every vertex on one side of $Q_1L_1Q_2L_2Q_3$ is a degree-3 vertex of the underlying tree of $H'$. Let $z'$ be a neighbor of $z$ such that $z'$ has degree 3 in the underlying tree of $H'$. Then $$ H=x_1x_2Q_1L_1Q_2L_2Q_3y_1y_2\cup T\cup\{zz'\} $$ is a spanning Halin subgraph of $G$.
\subsection{Proof of Theorem~\ref{non-extremal}} In this section, we prove Theorem~\ref{non-extremal}. In the first subsubsection, we introduce the Regularity Lemma, the Blow-up Lemma, and some related results. Then we show that $G$ contains a subgraph $T$ isomorphic to $T_1$ if $n$ is even and to $T_2$ if $n$ is odd. By showing that $G-V(T)$ contains a spanning ladder $L$ with its first rung adjacent to the head link of $T$ and its last rung adjacent to the tail link of $T$, we get a spanning Halin subgraph $H$ of $G$ formed by $L\cup T$. \subsubsection{The Regularity Lemma and the Blow-up Lemma} For any two disjoint non-empty vertex-sets $A$ and $B$ of a graph $G$, the \emph{density} of the pair $(A,B)$ is the ratio $d(A,B):=\frac{e(A,B)}{|A|\cdot|B|}$. Let $\varepsilon$ and $\delta$ be two positive real numbers. The pair $(A,B)$ is called $\varepsilon $-regular if for every $X\subseteq A$ and $Y\subseteq B$ with $|X|>\varepsilon |A|$ and $|Y|>\varepsilon |B|$, $|d(X,Y)-d(A,B)|<\varepsilon $ holds. In addition, if $\delta(A,B)>\delta |B|$ and $\delta(B,A)>\delta|A|$, we say that $(A,B)$ is an $(\varepsilon ,\delta)$-super-regular pair. \begin{LEM}[\textbf{Regularity lemma-Degree form~\cite{Szemeredi-regular-partitions}}]\label{regularity-lemma} For every $\varepsilon >0$ there is an $M=M(\varepsilon )$ such that if $G$ is any graph with $n$ vertices and $d\in[0,1]$ is any real number, then there is a partition of the vertex-set $V(G)$ into $l+1$ clusters $V_0,V_1,\cdots,V_l$, and there is a spanning subgraph $G'\subseteq G$ with the following properties. \vspace{-2mm} \begin{itemize} \item $l\le M$; \item $|V_0|\le \varepsilon n$, and $|V_i|=|V_j|\le \lceil\varepsilon n\rceil$ for all $1\le i\ne j\le l$; \item $deg_{G'}(v)>deg_G(v)-(d+\varepsilon )n$ for all $v\in V(G)$; \item $e(G'[V_i])=0$ for all $i\ge 1$; \item all pairs $(V_i,V_j)$ ($1\le i\ne j\le l$) are $\varepsilon $-regular, each with a density either $0$ or greater than $d$.
\end{itemize} \end{LEM} \begin{LEM}[\textbf{Blow-up lemma~\cite{Blow-up}}]\label{blow-up} For every $\delta, \Delta, c>0$, there exist $\varepsilon =\varepsilon (\delta, \Delta, c)>0$ and $\gamma=\gamma(\delta, \Delta, c)>0$ such that the following holds. Let $(X,Y)$ be an $(\varepsilon , \delta)$-super-regular pair with $|X|=|Y|=N$. If a bipartite graph $H$ with $\Delta(H)\le \Delta$ can be embedded in $K_{N,N}$ by a function $\phi$, then $H$ can be embedded in $(X,Y)$. Moreover, in each $\phi^{-1}(X)$ and $\phi^{-1}(Y)$, fix at most $\gamma N$ special vertices $z$, each of which is equipped with a subset $S_z$ of $X$ or $Y$ of size at least $cN$. The embedding of $H$ into $(X,Y)$ exists even if we restrict the image of $z$ to be in $S_z$ for all special vertices $z$. \end{LEM} Besides the above two lemmas, we also need the two lemmas below regarding regular pairs. \begin{LEM}\label{regular-pair-large-degree} If $(A,B)$ is an $\varepsilon $-regular pair with density $d$, then for any $A'\subseteq A$ with $|A'|\ge \varepsilon |A|$, there are at most $\varepsilon |B|$ vertices $b\in B$ such that $deg(b, A')<(d-\varepsilon )|A'|$. \end{LEM} \begin{LEM}[\textbf{Slicing lemma}]\label{slicing lemma} Let $(A,B)$ be an $\varepsilon $-regular pair with density $d$, and for some $\nu >\varepsilon $, let $A'\subseteq A$ and $B'\subseteq B$ with $|A'|\ge \nu|A|$, $|B'|\ge \nu|B|$. Then $(A',B')$ is an $\varepsilon '$-regular pair of density $d'$, where $\varepsilon '=\max\{\varepsilon /\nu, 2\varepsilon \}$ and $d'>d-\varepsilon $. \end{LEM} \subsubsection{Finding subgraph $T$} \begin{CLA}\label{find_K} Let $n$ be a sufficiently large integer and $G$ an $n$-vertex graph with $\delta(G)\ge (n+1)/2$. If $G$ is not in Extremal Case 2, then $G$ contains a subgraph $T$ isomorphic to $T_1$ if $n$ is even and to $T_2$ if $n$ is odd. \end{CLA} \textbf{Proof}.\quad Suppose first that $n$ is even. Let $xy\in E(G)$ be an edge.
We show that $G[N(x)-\{y\}]$ contains an edge $x_1x_2$ and $G[N(y)-\{x\}]$ contains an edge $y_1y_2$ such that the two edges are independent. Since $G$ is not in Extremal Case 2, it has no independent set of size at least $(1/2-7\beta)n$. Hence, we can find the two desired edges, and $G[\{x,y,x_1,x_2,y_1,y_2\}]$ contains a subgraph $T$ isomorphic to $T_1$. Now assume that $n$ is odd. We show in the first step that $G$ contains a subgraph isomorphic to $K_4^-$\,($K_4$ with one edge removed). Let $yz\in E(G)$. As $\delta(G)\ge (n+1)/2$, there exists $y_1\in \Gamma(y,z)$. If there exists $y_2\in \Gamma(y,z)-\{y_1\}$, we are done. Otherwise, $(\Gamma(y)-\{y_1,z\})\cap(\Gamma(z)-\{y_1,y\}) =\emptyset$. As $\delta(G)\ge (n+1)/2$, $y_1$ is adjacent to a vertex $y_2\in \Gamma(y)\cup \Gamma(z)-\{y_1,y,z\}$. Assume, w.l.o.g., that $y_2\in \Gamma(z)-\{y_1,y\}$. Then $G[\{y,y_1,z,y_2\}]$ contains a copy of $K_4^-$. Choose $x\in \Gamma(y)-\{z,y_1, y_2\}$ and choose an edge $x_1x_2\in E(G[\Gamma(x)-\{y,y_1, y_2,z\}])$. Then $G[\{y,y_1,z,y_2, x, x_1, x_2\}]$ contains a subgraph $T$ isomorphic to $T_2$. \hfill $\square$\vspace{1mm} Let $T$ be a subgraph of $G$ as given by Claim~\ref{find_K}. Suppose the head link of $T$ is $x_1x_2$ and the tail link of $T$ is $y_1y_2$. Let $G'=G-V(T)$. We show in the next subsubsection that $G'$ contains a spanning ladder with its first rung adjacent to $x_1x_2$ and its last rung adjacent to $y_1y_2$. Let $n'=|V(G')|$. Then we have $\delta(G')\ge (n+1)/2-7\ge n'/2-4\ge (1/2-\beta)n'$\,(note that $n'=n-6$ if $n$ is even and $n'=n-7$ if $n$ is odd, and that the last inequality holds once $n'\ge 4/\beta$), where $\beta$ is the parameter defined in the two extremal cases. \subsubsection{Finding a spanning ladder of $G'$ with prescribed end rungs} \begin{THM}\label{ladder} Let $n'$ be a sufficiently large even integer and $G'$ the subgraph of $G$ obtained by removing the vertices of $T$. Suppose that $\delta(G')\ge (1/2-\beta)n'$ and that $G=G[V(G')\cup V(T)]$ is in the non-extremal case. Then $G'$ contains a spanning ladder with its first rung adjacent to $x_1x_2$ and its last rung adjacent to $y_1y_2$.
\end{THM} \textbf{Proof}.\quad We fix the following sequence of parameters \[ 0<\varepsilon \ll d\ll \beta \ll 1 \] and specify their dependence as the proof proceeds. Let $\beta$ be the parameter defined in the two extremal cases. Then we choose $d\ll \beta$ and choose \[ \varepsilon =\frac{1}{4}\epsilon(d/2,3,d/4) \] following the definition of $\epsilon$ in the Blow-up Lemma. Applying the Regularity Lemma to $G'$ with parameters $\varepsilon $ and $d$, we obtain a partition of $V(G')$ into $\ell+1$ clusters $V_0,V_1,\cdots, V_{\ell}$ for some $\ell \le M(\varepsilon )$, and a spanning subgraph $G''$ of $G'$ with all the properties described in the Regularity Lemma. In particular, for all $v\in V(G')$, \begin{equation}\label{G prime delta} deg_{G''}(v)>deg_{G'}(v)-(d+\varepsilon )n'\ge (1/2-\beta-\varepsilon -d)n'\ge (1/2-2\beta)n' \end{equation} provided that $\varepsilon +d\le \beta$. On the other hand, \begin{equation*} e(G'')\ge e(G')-\frac{(d+\varepsilon )}{2}(n')^2>e(G')-d(n')^2 \end{equation*} by $\varepsilon <d$. We further assume that $\ell=2k$ is even; otherwise, we eliminate the last cluster $V_{\ell}$ by moving all the vertices in this cluster to $V_0$. As a result, $|V_0|\le 2\varepsilon n'$, and \begin{eqnarray}\label{order_relation} (1-2\varepsilon )n'\le \ell N=2kN\le n', \end{eqnarray} where $N=|V_i|$ for $1\le i\le \ell$. For each pair $i$ and $j$ with $1\le i \ne j \le \ell$, we write $V_i\sim V_j$ if $d(V_i,V_j)\ge d$. As in other applications of the Regularity Lemma, we consider the {\it reduced graph $G_r$}, whose vertex set is $\{1,2,\cdots, \ell\}$ and in which two vertices $i$ and $j$ are adjacent if and only if $V_i\sim V_j$. From $\delta(G'')>(1/2-2\beta)n'$, we claim that $\delta(G_r)\ge (1/2-2\beta)\ell$. Suppose not, and let $i_0\in V(G_r)$ be a vertex with $deg_{G_r}(i_0)<(1/2-2\beta)\ell$. Let $V_{i_0}$ be the cluster in $G$ corresponding to $i_0$.
Then we have \begin{equation*} (1/2-\beta)n'|V_{i_0}|\le |E_{G'}(V_{i_0},V-V_{i_0})|<(1/2-2\beta)\ell N|V_{i_0}|+2\varepsilon n'|V_{i_0}| < (1/2-\beta)n'|V_{i_0}|. \end{equation*} This gives a contradiction, by $\ell N\le n'$ from inequality~\eqref{order_relation}. Let $x\in V(G')$ be a vertex and $A$ a cluster. We say $x$ is {\it typical} to $A$ if $deg(x,A)\ge (d-\varepsilon )|A|$, and in this case, we write $x\sim A$. \begin{CLA}\label{x1-y2} Each vertex in $\{x_1,x_2,y_1,y_2\}$ is typical to at least $(1/2-2\beta)l$ clusters in $\{V_1, \cdots, V_l\}$. \end{CLA} \textbf{Proof}.\quad Suppose on the contrary that there exists $x\in \{x_1,x_2,y_1,y_2\}$ such that $x$ is typical to fewer than $(1/2-2\beta)l$ clusters in $\{V_1, \cdots, V_l\}$. Then we have $deg_{G'}(x)<(1/2-2\beta)l N+(d+\varepsilon ) n'\le (1/2-\beta)n'$ by $lN\le n'$ and $d+\varepsilon \le \beta$, contradicting $\delta(G')\ge (1/2-\beta)n'$. \hfill $\square$\vspace{1mm} Let $x\in V(G')$ be a vertex. Denote by $\mathcal{V}_x$ the set of clusters to which $x$ is typical. \begin{CLA}\label{x1-x2} There exist $V_{x_1}\in \mathcal{V}_{x_1}$ and $V_{x_2}\in \mathcal{V}_{x_2}$ such that $d(V_{x_1}, V_{x_2})\ge d$. \end{CLA} \textbf{Proof}.\quad We show the claim by considering two cases based on the size of $| \mathcal{V}_{x_1}\cap \mathcal{V}_{x_2}|$. Case 1. $| \mathcal{V}_{x_1}\cap \mathcal{V}_{x_2}|\le 2\beta l$. Then we have $| \mathcal{V}_{x_1}- \mathcal{V}_{x_2}|\ge (1/2-4\beta)l$ and $| \mathcal{V}_{x_2}- \mathcal{V}_{x_1}|\ge (1/2-4\beta)l$. We conclude that there is an edge between $\mathcal{V}_{x_1}- \mathcal{V}_{x_2}$ and $\mathcal{V}_{x_2}- \mathcal{V}_{x_1}$ in $G_r$. For otherwise, let $\mathcal{U}$ be the union of the clusters in $\mathcal{V}_{x_1}\cap \mathcal{V}_{x_2}$. Then $V_0\cup \mathcal{U}\cup V(T)$, with $|V_0\cup \mathcal{U}\cup V(T)| \le 5\beta n$, is a vertex-cut of $G$, implying that $G$ is in Extremal Case 1. Case 2. $| \mathcal{V}_{x_1}\cap \mathcal{V}_{x_2}|>2\beta l$. We may assume that $\mathcal{V}_{x_1}\cap \mathcal{V}_{x_2}$ is an independent set in $G_r$.
For otherwise, we are done by finding an edge within $\mathcal{V}_{x_1}\cap \mathcal{V}_{x_2}$. Also, we may assume that $E_{G_r}(\mathcal{V}_{x_1}\cap \mathcal{V}_{x_2}, \mathcal{V}_{x_1}-\mathcal{V}_{x_2})=\emptyset$ and $E_{G_r}(\mathcal{V}_{x_1}\cap \mathcal{V}_{x_2}, \mathcal{V}_{x_2}-\mathcal{V}_{x_1})=\emptyset$. Since $\delta(G_r)\ge (1/2-2\beta)l$ and $\delta_{G_r}(\mathcal{V}_{x_1}\cap \mathcal{V}_{x_2}, \mathcal{V}_{x_1}\cup \mathcal{V}_{x_2})=0$, we know that $l-|\mathcal{V}_{x_1}\cup \mathcal{V}_{x_2}|\ge (1/2-2\beta)l$. Hence, $|\mathcal{V}_{x_1}\cup \mathcal{V}_{x_2}|=|\mathcal{V}_{x_1}|+|\mathcal{V}_{x_2}|- |\mathcal{V}_{x_1}\cap \mathcal{V}_{x_2}|\le (1/2+2\beta)l$. This gives that $|\mathcal{V}_{x_1}\cap \mathcal{V}_{x_2}|\ge |\mathcal{V}_{x_1}|+|\mathcal{V}_{x_2}|-(1/2+2\beta)l \ge (1/2-2\beta)l+(1/2-2\beta )l-(1/2+2\beta )l \ge (1/2-6\beta )l$. Let $\mathcal{U}$ be the union of the clusters in $\mathcal{V}_{x_1}\cap \mathcal{V}_{x_2}$. Then $|\mathcal{U}|\ge (1/2-7\beta)n$ and $\Delta(G[\mathcal{U}])\le (d+\varepsilon )n'\le \beta n$. This shows that $G$ is in Extremal Case 2, a contradiction. \hfill $\square$\vspace{1mm} Similarly, we have the following claim: \begin{CLA}\label{y1-y2} There exist $V_{y_1}\in \mathcal{V}_{y_1}-\{V_{x_1}, V_{x_2}\}$ and $V_{y_2} \in \mathcal{V}_{y_2}-\{V_{x_1}, V_{x_2}\}$ such that $d(V_{y_1}, V_{y_2})\ge d$. \end{CLA} \begin{CLA}\label{hamiltonian_path} The reduced graph $G_r$ has a hamiltonian path $X_1Y_1\cdots X_kY_k$ such that $\{X_1, Y_1\}=\{V_{x_1}, V_{x_2}\}$ and $\{X_k, Y_k\}=\{V_{y_1}, V_{y_2}\}$. \end{CLA} \textbf{Proof}.\quad We contract the edges $V_{x_1}V_{x_2}$ and $V_{y_1}V_{y_2}$ in $G_r$. Denote the two new vertices by $V_x'$ and $V_y'$, respectively, and denote the resulting graph by $G_r'$. Then we show that $G_r'$ contains a hamiltonian $(V_x', V_y')$-path. This path corresponds to a required hamiltonian path in $G_r$.
To show that $G_r'$ has a hamiltonian $(V_x', V_y')$-path, we need the following generalized version of a result due to Nash-Williams~\cite{MR0284366}: Let $Q$ be a 2-connected graph of order $m$. If $\delta(Q)\ge \max\{(m+2)/3+1, \alpha(Q)+1\}$, then $Q$ is hamiltonian-connected, where $\alpha(Q)$ is the size of a largest independent set of $Q$. We claim that $G_r'$ is $2\beta l$-connected. Otherwise, let $S$ be a vertex-cut of $G_r'$ with $|S|<2\beta l$ and $\mathcal{S}$ the vertex set corresponding to $S$ in $G$. Then $|\mathcal{S}\cup V_0\cup V(T)|\le 2\beta n'+2\varepsilon n' <5\beta n$, showing that $G$ is in Extremal Case 1. Since $n'=Nl+|V_0|\le (l+2)\varepsilon n'$, we have $l\ge 1/\varepsilon -2\ge 1/\beta$. Hence, $G_r'$ is 2-connected. As $G$ is not in Extremal Case 2, $\alpha(G_r')\le (1/2-7\beta)l$. By $\delta(G_r)\ge (1/2-2\beta)l$, we have $\delta(G_r')\ge (1/2-2\beta)l-2\ge \max\{(l+2)/3+1, (1/2-7\beta)l+1\}$. Thus, by the result on hamiltonian connectedness given above, we know that $G_r'$ contains a hamiltonian $(V_x', V_y')$-path. \hfill $\square$\vspace{1mm} Following the order of the clusters on the hamiltonian path given in Claim~\ref{hamiltonian_path}, for $i=1,2,\cdots, k$, we call $X_i$ and $Y_i$ partners of each other and write $P(X_i)=Y_i$ and $P(Y_i)=X_i$. \begin{CLA}\label{super-regular} For each $1\le i\le k$, there exist $X_i'\subseteq X_i$ and $Y_i'\subseteq Y_i$ such that $(X_i', Y_i')$ is $(2\varepsilon , d-3\varepsilon )$-super-regular, $|Y_1'|=|X_1'|+1$, $|Y_k'|=|X_k'|+1$, and $|X_i'|=|Y_i'|$ for $2\le i\le k-1$. Additionally, each pair $(Y_i', X_{i+1}')$ is $2\varepsilon $-regular with density at least $d-\varepsilon $ for $i=1,2,\cdots, k$, where $X_{k+1}'=X_1'$. \end{CLA} \textbf{Proof}.\quad For each $1\le i\le k$, let \begin{eqnarray*} X_i'' &=& \{x\in X_i\,|\, deg(x,Y_i)\ge (d-\varepsilon )N\},\, \mbox{and} \\ Y_i'' &=& \{y\in Y_i\,|\,deg(y,X_i)\ge (d-\varepsilon )N\}.
\end{eqnarray*} If necessary, we either take a subset $X_i'$ of $X_i''$ or take a subset $Y_i'$ of $Y_i''$ such that $|Y_1'|=|X_1'|+1$, $|Y_k'|=|X_k'|+1$, and $|X_i'|=|Y_i'|$ for $2\le i\le k-1$. Since $(X_i,Y_i)$ is $\varepsilon $-regular, we have $|X_i''|, |Y_i''|\ge (1-\varepsilon )N$. This gives that $|X_1'|, |X_k'|\ge (1-\varepsilon )N-1$ and $|X_i'|=|Y_i'|\ge (1-\varepsilon )N$ for $2\le i\le k-1$. As a result, we have $deg(x, Y_i')\ge (d-2\varepsilon )N$ for each $x\in X_i'$ and $deg(y, X_i')\ge (d-2\varepsilon )N-1\ge (d-3\varepsilon )N$ for each $y\in Y_i'$. By the Slicing lemma\,(Lemma~\ref{slicing lemma}), $(X_i', Y_i')$ is $2\varepsilon $-regular. Hence $(X_i', Y_i')$ is $(2\varepsilon , d-3\varepsilon )$-super-regular for each $1\le i\le k$. By the Slicing lemma again, we know that $(Y_i', X_{i+1}')$ is $2\varepsilon $-regular with density at least $d-\varepsilon $. \hfill $\square$\vspace{1mm} For $1\le i\le k$, we call $(X_i', Y_i')$ a super-regularized cluster\,(sr-cluster). Denote $R=V_0\cup (\bigcup\limits_{i=1}^{k}((X_i\cup Y_i)-(X_i'\cup Y_i')))$. Since $|(X_i\cup Y_i)-(X_i'\cup Y_i')|\le 2\varepsilon N$ for $2\le i \le k-1$ and $|(X_1\cup Y_1)-(X_1'\cup Y_1')|, |(X_k\cup Y_k)-(X_k'\cup Y_k')|\le 2\varepsilon N+1$, we have $|R|\le 2\varepsilon n'+ 2k \varepsilon N+2 \le 3\varepsilon n'$. As $n'$ is even and $|X_1'|+|Y_1'|+\cdots +|X_k'|+|Y_k'|$ is even, we know $|R|$ is even. We arbitrarily group the vertices in $R$ into $|R|/2$ pairs. Given two vertices $u,v\in R$, we define a $(u,v)$-chain of length $2t$ to be a sequence of distinct clusters $A_1, B_1, \cdots, A_t, B_t$ such that $u\sim A_1\sim B_1\sim \cdots \sim A_t\sim B_t\sim v$ and each $A_j$ and $B_j$ are partners, in other words, $\{A_j,B_j\}=\{X_{i_j}, Y_{i_j}\}$ for some $i_j\in \{1, \cdots, k\}$. We call such a chain of length $2t$ a $2t$-chain.
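To unpack the two uses of $\sim$ in this definition, consider the shortest case $t=1$: a $(u,v)$-chain of length 2 is a single partner pair $\{A_1, B_1\}=\{X_{i_1}, Y_{i_1}\}$ with
\[
u\sim A_1\sim B_1\sim v,
\]
that is, $u$ is typical to the cluster $A_1$, the partner clusters $A_1$ and $B_1$ satisfy $d(A_1,B_1)\ge d$, and $v$ is typical to $B_1$.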
\begin{CLA}\label{absorbing-pre} For each pair $(u,v)$ in $R$, we can find a $(u,v)$-chain of length at most 4 such that every sr-cluster is used in at most $d^2N/5$ chains. \end{CLA} \textbf{Proof}.\quad Suppose we have found chains for the first $m<2\varepsilon n'$ pairs of vertices in $R$ such that no sr-cluster is contained in more than $d^2N/5$ chains. Let $\Omega$ be the set of all sr-clusters that are used by exactly $d^2N/5$ chains. Then \begin{eqnarray*} \frac{d^2N}{5}|\Omega| &\le & 4m <8\varepsilon n' \le 8\varepsilon \frac{2kN}{1-2\varepsilon }, \end{eqnarray*} where the last inequality follows from (\ref{order_relation}). Therefore, \begin{eqnarray*} |\Omega| &\le &\frac{80k\varepsilon }{d^2(1-2\varepsilon )}\le \frac{80l \varepsilon }{d^2}\le \beta l/2, \end{eqnarray*} provided that $1-2\varepsilon \ge 1/2$ and $80\varepsilon \le d^2\beta /2$. Consider now a pair $(w,z)$ of vertices in $R$ for which no chain has been found so far. We want to find a $(w,z)$-chain using sr-clusters not in $\Omega$. Let $\mathcal{U}$ be the set of all sr-clusters adjacent to $w$ but not in $\Omega$, and let $\mathcal{V}$ be the set of all sr-clusters adjacent to $z$ but not in $\Omega$. We claim that $|\mathcal{U}|,|\mathcal{V}|\ge (1/2-2\beta)l$. To see this, we first observe that any vertex $x\in R$ is adjacent to at least $(1/2-3\beta/2)l$ sr-clusters. For otherwise, \begin{eqnarray*} (1/2-\beta)n' &\le &deg_{G'}(x) < (1/2-3\beta/2)lN+ (d-2\varepsilon )lN+ 3\varepsilon n' \\ &\le& (1/2-3\beta/2+d+2\varepsilon )n' \\ &<& (1/2-\beta)n' \,\,(\mbox{provided that $d+2\varepsilon < \beta/2$ }), \end{eqnarray*} showing a contradiction. Since $|\Omega|\le \beta l/2$, we have $|\mathcal{U}|,|\mathcal{V}|\ge (1/2-2\beta)l$. Let $P(\mathcal{U})$ and $P(\mathcal{V})$ be the sets of the partners of the clusters in $\mathcal{U}$ and $\mathcal{V}$, respectively. By the definition of the chains, a cluster $A\in \Omega$ if and only if its partner $P(A)\in \Omega$.
Hence, $(P(\mathcal{U})\cup P(\mathcal{V}))\cap \Omega=\emptyset$. Notice also that each cluster has a unique partner, and so we have $|P(\mathcal{U})|=|\mathcal{U}|\ge (1/2-2\beta)l $ and $|P(\mathcal{V})|=|\mathcal{V}|\ge (1/2-2\beta)l $. If $E_{G_r}(P(\mathcal{U}), P(\mathcal{V}))\ne \emptyset$, then there exist two adjacent clusters $B_1\in P(\mathcal{U})$ and $A_2\in P(\mathcal{V})$. If $B_1$ and $A_2$ are partners of each other, then $w\sim A_2\sim B_1\sim z$ gives a $(w,z)$-chain of length 2. Otherwise, letting $A_1=P(B_1)$ and $B_2=P(A_2)$, the sequence $w\sim A_1\sim B_1\sim A_2\sim B_2\sim z$ gives a $(w,z)$-chain of length 4. Hence we assume that $E_{G_r}(P(\mathcal{U}), P(\mathcal{V}))= \emptyset$. We may assume that $P(\mathcal{U})\cap P(\mathcal{V}) \ne \emptyset $. Otherwise, let $\mathcal{S}$ be the union of the clusters contained in $V(G_r)-(P(\mathcal{U})\cup P(\mathcal{V}))$. Then $\mathcal{S}\cup R\cup V(T)$, with $|\mathcal{S}\cup R\cup V(T)|\le 4\beta n'+3\varepsilon n'+7\le 5\beta n$\,(provided that $3\varepsilon +7/n'<\beta$), is a vertex-cut of $G$, implying that $G$ is in Extremal Case 1. As $E_{G_r}(P(\mathcal{U}), P(\mathcal{V}))= \emptyset$, any cluster in $P(\mathcal{U})\cap P(\mathcal{V})$ is adjacent to at least $(1/2-2\beta)l$ clusters in $V(G_r)-(P(\mathcal{U})\cup P(\mathcal{V}))$ by $\delta(G_r)\ge (1/2-2\beta)l$. This implies that $|P(\mathcal{U})\cup P(\mathcal{V})|\le (1/2+2\beta)l$, and thus $|P(\mathcal{U})\cap P(\mathcal{V})|\ge |P(\mathcal{U})|+|P(\mathcal{V})|-|P(\mathcal{U})\cup P(\mathcal{V})|\ge (1/2-6\beta)l$. Then $P(\mathcal{U})\cap P(\mathcal{V})$ corresponds to a subset $V_1$ of $V(G)$ such that $|V_1|\ge (1/2-6\beta)lN\ge (1/2-7\beta)n$ and $\Delta(G[V_1])\le (d+\varepsilon )n'\le \beta n$. This implies that $G$ is in Extremal Case 2, showing a contradiction.
\hfill $\square$\vspace{1mm} For each cluster $Z\in \{X_1', Y_1', \cdots, X_k', Y_k'\}$, let $R_2(Z)$ denote the set of vertices in $R$ using $Z$ in the 2-chains, and let $R_4(Z)$ denote the set of vertices in $R$ using $Z$ in the 4-chains given by Claim~\ref{absorbing-pre}. By the definition of 2-chains and 4-chains, the following holds. \begin{CLA}\label{small-ladders1} For each $i=1,2,\cdots, k$, if $R_2(X_i')\ne \emptyset$, then $|R_2(X_i')|=|R_2(Y_i')|$; and if $R_4(X_i')\ne \emptyset$, then $|R_4(X_i')|=|R_4(Y_{i+1}')|$. \end{CLA} \begin{CLA}\label{small-ladders} For each $i=1,2,\cdots, k$, if $R_2(X_i')\ne \emptyset$, then there exist vertex-disjoint ladders $L_{2x}^i$ and $L_{2y}^i$ covering all vertices in $R_2(X_i')\cup R_2(Y_i')$ such that $|X_i'\cap V(L_{2x}^i\cup L_{2y}^i)|=|Y_i'\cap V(L_{2x}^i\cup L_{2y}^i)|$; and if $R_4(X_i')\ne \emptyset$, then there exist three vertex-disjoint ladders $L_{4x}^i, L_{4xy}^i, L_{4y}^{i+1}$ covering all vertices in $R_4(X_i')\cup R_4(Y_{i+1}')$ such that $V(L_{4x}^i)\subseteq X_i'\cup Y_i'$, $V(L_{4xy}^i)\subseteq Y_i'\cup X_{i+1}'$, and $V(L_{4y}^{i+1})\subseteq X_{i+1}'\cup Y_{i+1}'$, and that $|X_i'\cap V(L_{4x}^i\cup L_{4xy}^i)|=|Y_i'\cap V(L_{4x}^i\cup L_{4xy}^i\cup L_{4y}^{i+1})|= |X_{i+1}'\cap V(L_{4x}^i\cup L_{4xy}^i\cup L_{4y}^{i+1})|=|Y_{i+1}'\cap V( L_{4xy}^i\cup L_{4y}^{i+1})|$. \end{CLA} \textbf{Proof}.\quad Notice that by Claim~\ref{super-regular}, $(X_i', Y_i')$ is $(2\varepsilon , d-3\varepsilon )$-super-regular and $(Y_i', X_{i+1}')$ is $2\varepsilon $-regular. Assume $R_2(X_i')\ne \emptyset$. By Claim~\ref{absorbing-pre} and Claim~\ref{small-ladders1}, we have $|R_2(X_i')|=|R_2(Y_i')|\le d^2N/5$. Let $R_2(X_i')=\{x_1, \cdots, x_r\}$.
For each $j=1,\cdots, r$, since $|\Gamma(x_j,X_i')|\ge (d-2\varepsilon )|X_i'|>2\varepsilon |X_i'|$, by Lemma~\ref{regular-pair-large-degree}, there exists a vertex set $B_j\subseteq Y_i'$ with $|B_j|\ge (1-2\varepsilon )|Y_i'|$ such that $B_j$ is typical to $\Gamma(x_j,X_i')$. If $r\ge 2$, for $j=1,\cdots, r-1$, there also exists a vertex set $B_{j,j+1}\subseteq Y_i'$ with $|B_{j,j+1}|\ge (1-4\varepsilon )|Y_i'|$ such that $B_{j,j+1}$ is typical to both $\Gamma(x_j,X_i')$ and $\Gamma(x_{j+1}, X_i')$. That is, for each vertex $b_1\in B_j$, we have $deg(b_1,\Gamma(x_j,X_i'))\ge (d-5\varepsilon )|\Gamma(x_j,X_i')|>4|R|$, and for each vertex $b_2\in B_{j,j+1}$, we have $deg(b_2,\Gamma(x_j,X_i')), deg(b_2,\Gamma(x_{j+1},X_i'))\ge (d-5\varepsilon )|\Gamma(x_{j},X_i')|>4|R|$. When $r\ge 2$, since $|B_j|, |B_{j,j+1}|, |B_{j+1}|\ge (1-4\varepsilon )|Y_i'|>2\varepsilon |Y_i'|$, there is a set $A\subseteq X_i'$ with $|A|\ge (1-6\varepsilon )|X_i'|\ge |R|$ such that $A$ is typical to each of $B_j$, $B_{j,j+1}$ and $B_{j+1}$. Notice that $(d-5\varepsilon )|B_j|, (d-5\varepsilon )|B_{j,j+1}|, (d-5\varepsilon )|B_{j+1}|\ge (d-5\varepsilon )(1-4\varepsilon )|Y_i'|>3|R|$. Hence we can choose distinct vertices $u_1, u_2, \cdots, u_{r-1}\in A$ such that $deg(u_j, B_{j}), deg(u_j, B_{j,j+1}), deg(u_j, B_{j+1})\ge 3|R|$. Then we can choose distinct vertices $y_{23}^j\in \Gamma(u_j, B_{j}), z_{j}\in \Gamma(u_j, B_{j,j+1})$ and $y_{12}^{j+1}\in \Gamma(u_j, B_{j+1})$ for each $j$, and choose distinct and previously unchosen vertices $y_{12}^1\in B_1$ and $y_{23}^r\in B_r$. Finally, as for each vertex $b_1\in B_j$ we have $deg(b_1,\Gamma(x_j,X_i'))>4|R|$, and for each vertex $b_2\in B_{j,j+1}$ we have $deg(b_2,\Gamma(x_j,X_i')), deg(b_2,\Gamma(x_{j+1},X_i'))>4|R|$, we can choose $x_{j1}, x_{j2}, x_{j3}\in \Gamma(x_j, X_i')-\{u_1,\cdots, u_{r-1}\}$ such that $y_{12}^j\in \Gamma(x_{j1}, x_{j2}, Y_i')$, $y_{23}^j\in \Gamma(x_{j2}, x_{j3}, Y_i')$, and $z_j\in \Gamma(x_{j3}, x_{j+1,1}, Y_i')$.
Let $L_{2x}^i$ be the graph with $$ V(L_{2x}^i)=R_2(X_i')\cup \{x_{j1}, x_{j2}, x_{j3}, y_{12}^j, y_{23}^j, z_j, u_j \,|\, 1\le j\le r-1 \}\cup \{x_{r1}, x_{r2}, x_{r3}, y_{12}^r, y_{23}^r\}\quad \mbox{and} $$ $E(L_{2x}^i)$ consisting of the edges $x_rx_{r1}, x_rx_{r2},x_rx_{r3}, y_{12}^rx_{r1}, y_{12}^rx_{r2},y_{23}^rx_{r2},y_{23}^rx_{r3}$ and the edges indicated below for each $1\le j\le r-1$: $$ x_j\sim x_{j1},x_{j2}, x_{j3};\, y_{12}^j\sim x_{j1}, x_{j2};\, y_{23}^j\sim x_{j2},x_{j3}; \, z_j\sim x_{j3}, x_{j+1,1}; \, u_j\sim x_{j3}, x_{j+1,1}, z_j. $$ It is easy to check that $L_{2x}^i$ is a ladder spanning on $R_2(X_i')$, $4|R_2(X_i')|-1$ vertices from $X_i'$ and $3|R_2(X_i')|-1$ vertices from $Y_i'$. Similarly, we can find a ladder $L_{2y}^i$ spanning on $R_2(Y_i')$, $4|R_2(Y_i')|-1$ vertices from $Y_i'$ and $3|R_2(Y_i')|-1$ vertices from $X_i'$. Clearly, we have $|X_i'\cap V(L_{2x}^i\cup L_{2y}^i)|=|Y_i'\cap V(L_{2x}^i\cup L_{2y}^i)|$. Assume now that $R_4(X_i')\ne \emptyset$. Then by Claim~\ref{small-ladders1}, we have $|R_4(X_i')|=|R_4(Y_{i+1}')|$. By a similar argument as above, we can find ladders $L_{4x}^i$ and $L_{4y}^{i+1}$ such that $R_4(X_i')\subseteq V(L_{4x}^i)$ and $R_4(Y_{i+1}')\subseteq V(L_{4y}^{i+1})$. Furthermore, we have \begin{eqnarray*} |X_i'\cap V(L_{4x}^i)| & = & 4|R_4(X_i')|-1,\quad |Y_i'\cap V(L_{4x}^i)|\,\,=\,\,3|R_4(X_i')|-1; \\ |Y_{i+1}'\cap V(L_{4y}^{i+1})| & = & 4|R_4(Y_{i+1}')|-1,\quad |X_{i+1}'\cap V(L_{4y}^{i+1})|\,\,=\,\,3|R_4(Y_{i+1}')|-1. \end{eqnarray*} Finally, we claim that we can find a ladder $L_{4xy}^i$ between $(Y_i', X_{i+1}')$ such that $|Y_i'\cap V(L_{4xy}^i)|= |X_{i+1}'\cap V(L_{4xy}^i)|=|R_4(Y_{i+1}')|$ and such that $L_{4xy}^i$ is vertex-disjoint from $L_{4x}^i\cup L_{4y}^{i+1}$.
Since $3|R_4(Y_{i+1}')|\le 3d^2N/5$ and $(Y_i', X_{i+1}')$ is $2\varepsilon $-regular with density at least $d-\varepsilon $ by Claim~\ref{super-regular}, by a similar argument as in the proof of Claim~\ref{super-regular}, we can find $Y_i''\subseteq Y_i'-V(L_{4x}^i)$ and $X_{i+1}''\subseteq X_{i+1}'-V( L_{4y}^{i+1})$ such that $(Y_i'', X_{i+1}'')$ is $(4\varepsilon , d-5\varepsilon )$-super-regular and $|Y_i''|=|X_{i+1}''|$, and thus $(Y_i'', X_{i+1}'')$ is $(4\varepsilon , d/2)$-super-regular\,(provided that $\varepsilon \le d/10$). Notice that there are at least $(d-9\varepsilon )|Y_i''|\ge d|Y_i''|/4$ vertices typical to $X_{i+1}''$, and there are at least $(d-9\varepsilon )|X_{i+1}''|\ge d|X_{i+1}''|/4$ vertices typical to $Y_i''$. Applying the Blow-up Lemma\,(Lemma~\ref{blow-up}), we can find a ladder $L_{4xy}^i$ within $(Y_i'', X_{i+1}'')$ such that $|Y_i'\cap V(L_{4xy}^i)|= |X_{i+1}'\cap V(L_{4xy}^i)|=|R_4(Y_{i+1}')|$. It is routine to check that $L_{4x}^i, L_{4y}^{i+1}, L_{4xy}^i$ are the desired ladders. \hfill $\square$\vspace{1mm} For each $i=1,2,\cdots, k$, let $X_i^{**}=X_i'-V(L_{2x}^i\cup L_{2y}^i\cup L_{4x}^i\cup L_{4xy}^i \cup L_{4y}^{i})$ and $Y_i^{**}=Y_i'-V(L_{2x}^i\cup L_{2y}^i\cup L_{4x}^i\cup L_{4xy}^i \cup L_{4y}^{i})$. Using Lemma~\ref{regular-pair-large-degree}, for $i\in \{1,\cdots, k-1\}$, choose $y_i^*\in Y_i^{**}$ such that $|A_{i+1}|\ge dN/4$, where $A_{i+1}:=X_{i+1}^{**}\cap \Gamma(y_i^*) $. This is possible, as $(Y_i^{**}, X_{i+1}^{**})$ is $4\varepsilon $-regular \,(applying the Slicing lemma based on $(Y_i', X_{i+1}')$). Similarly, choose $x_{i+1}^*\in A_{i+1}$ such that $|B_{i}|\ge dN/4$, where $B_{i}:=Y_i^{**}\cap \Gamma(x_{i+1}^*)$. Let $S=\{y_{i}^*, x_{i+1}^*\,|\, 1\le i \le k-1\}$, and let $X_i^*=X_i^{**}-S$ and $Y_i^*=Y_i^{**}-S$. We have the following claim.
\begin{CLA}\label{final-super-pair} For each $i=1,2,\cdots, k$, $(X_i^*, Y_i^*)$ is $(4\varepsilon , d/2)$-super-regular such that $|Y_1^*|=|X_1^*|+1$, $|Y_k^*|=|X_k^*|+1$, and $|X_i^*|=|Y_i^*|$ for $2\le i\le k-1$. \end{CLA} \textbf{Proof}.\quad Since $|R_2(X_i')|, |R_4(Y_{i+1}')|\le d^2N/5$ for each $i$, we have $|X_i^*|, |Y_i^*|\ge (1-\varepsilon -d^2)N-1$. As $\varepsilon , d \ll 1$, we can assume that $1-\varepsilon -d^2-1/N>1/2$. Thus, by the Slicing lemma based on the $2\varepsilon $-regular pair $(X_i', Y_i')$, we know that $(X_i^*, Y_i^*)$ is $4\varepsilon $-regular. Recall from Claim~\ref{super-regular} that $(X_i', Y_i')$ is $(2\varepsilon , d-3\varepsilon )$-super-regular. As $4|R_2(X_i')|, 4|R_4(Y_{i+1}')|< d^2|Y_i^*|$, we know that for each $x\in X_i^*$, $deg(x, Y_i^*)\ge (d-3\varepsilon -d^2)|Y_i^*|> d|Y_i^*|/2$. Similarly, we have for each $y\in Y_i^*$, $deg(y, X_i^*)\ge d|X_i^*|/2$. Thus $(X_i^*, Y_i^*)$ is $(4\varepsilon , d/2)$-super-regular. Finally, combining Claims~\ref{super-regular} and \ref{small-ladders}, we have $|Y_1^*|=|X_1^*|+1$, $|Y_k^*|=|X_k^*|+1$, and $|X_i^*|=|Y_i^*|$ for $2\le i\le k-1$. \hfill $\square$\vspace{1mm} For each $i=1,2,\cdots, k-1$, now set $B_{i+1}:=Y_i^*\cap \Gamma(x_{i+1}^*)$ and $C_i:=X_i^*\cap \Gamma(y_i^*)$. Since $(X_i^*, Y_i^*)$ is $(4\varepsilon , d/2)$-super-regular, we have $|B_{i+1}|, |C_i|\ge d|X_i^*|/2>d|X_i^*|/4$. Recall from Claim~\ref{hamiltonian_path} that $\{X_1, Y_1\}=\{V_{x_1}, V_{x_2}\}$ and $\{X_k, Y_k\}=\{V_{y_1}, V_{y_2}\}$. We assume, w.l.o.g., that $X_1= V_{x_1}$ and $X_k=V_{y_1}$. Let $A_{1}=X_1^*\cap \Gamma(x_1)$, $B_{1}=Y_1^*\cap \Gamma(x_2)$, $C_{k}=X_k^*\cap \Gamma(y_1)$, and $D_{k}=Y_k^*\cap \Gamma(y_2)$. Since $deg(x_1, X_1)\ge (d-\varepsilon )N$, we have $deg(x_1, X^*_1)\ge (d-\varepsilon -2\varepsilon -d^2)N\ge d|X_1^*|/4$, and thus $|A_1|\ge d|X_1^*|/4$. Similarly, we have $|B_1|, |C_k|, |D_k|\ge d|X_1^*|/4$. 
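For concreteness, the last degree bound can be unwound as follows (our elaboration, valid for instance when $\varepsilon \le d/20$ and $d\le 1/2$, consistent with $\varepsilon, d\ll 1$):
\begin{eqnarray*}
deg(x_1, X_1^*) & \ge & deg(x_1, X_1) - |X_1\setminus X_1^*| \,\,\ge\,\, (d-\varepsilon)N-(2\varepsilon+d^2)N \\
& = & (d-3\varepsilon-d^2)N \,\,\ge\,\, \frac{d}{4}\,N \,\,\ge\,\, \frac{d}{4}\,|X_1^*|,
\end{eqnarray*}
using $|X_1\setminus X_1^*|\le (2\varepsilon+d^2)N$, $d-3\varepsilon-d^2\ge d(1-\tfrac{3}{20}-\tfrac{1}{2})\ge d/4$, and $|X_1^*|\le N$.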
For each $1\le i\le k$, we assume that $L_{2x}^i=a_1^ib_1^i-L_{2x}^i-c_1^id_1^i$, $L_{2y}^i=a_2^ib_2^i-L_{2y}^i-c_2^id_2^i$, $L_{4x}^i=a_3^ib_3^i-L_{4x}^i-c_3^id_3^i$, $L_{4xy}^i=a_4^ib_4^i-L_{4xy}^i-c_4^id_4^i$, and $L_{4y}^i=a_5^ib_5^i-L_{4y}^i-c_5^id_5^i$, where $a_j^i, c_j^i\in Y_i'\subseteq Y_i$ for $j=1,2,\cdots, 5$. For $j=1,2,\cdots, 5$, let $A_j^i=X_i^*\cap \Gamma(a_j^i)$, $C_j^i=X_i^*\cap \Gamma(c_j^i)$, $B_j^i=Y_i^*\cap \Gamma(b_j^i)$, and $D_j^i=Y_i^*\cap \Gamma(d_j^i)$. Since $(X_i', Y_i')$ is $(2\varepsilon , d-3\varepsilon )$-super-regular, for $j=1,2,3,5$, we have $|\Gamma(a_j^i, X_i')|, |\Gamma(c_j^i, X_i')|\ge (d-3\varepsilon )|X_i'|$ and $|\Gamma(b_j^i, Y_i')|, |\Gamma(d_j^i, Y_i')|\ge (d-3\varepsilon )|Y_i'|$. From the proof of Claim~\ref{small-ladders}, the pair $(Y_i^{''}, X_{i+1}^{''})$ is $(4\varepsilon , d-5\varepsilon )$-super-regular. Hence, $|\Gamma(a_4^i, X_{i+1}')|, |\Gamma(c_4^i, X_{i+1}')|\ge (d-4\varepsilon )|X_{i+1}'|$ and $|\Gamma(b_4^i, Y_i')|, |\Gamma(d_4^i, Y_i')|\ge (d-4\varepsilon )|Y_i'|$. Thus, we have $|A_j^i|, |B_j^i|, |C_j^i|, |D_j^i| \ge (d-4\varepsilon )|X_i'|-d^2N\ge d|X_i^*|/4=d|Y_i^*|/4$. We now apply the Blow-up lemma on $(X_i^*, Y_i^*)$ to find a spanning ladder $L^i$ with its first and last rungs being contained in $A_i\times B_i$ and $C_i\times D_i$, respectively, and for $j=1,2,\cdots, 5$, its $(2j)$-th and $(2j+1)$-th rungs being contained in $A_j^i\times B_j^i$ and $C_j^i\times D_j^i$, respectively. We can then insert $L_{2x}^i$ between the 2nd and 3rd rungs of $L^i$, $L_{2y}^i$ between the 4th and 5th rungs of $L^i$, $L_{4x}^i$ between the 6th and 7th rungs of $L^i$, $L_{4xy}^i$ between the 8th and 9th rungs of $L^i$, and $L_{4y}^i$ between the 10th and 11th rungs of $L^i$ to obtain a ladder $\mathcal{L}^i$ spanning on $X_i\cup Y_i-S$. 
Finally, $\mathcal{L}^1y_1^*x_2^*\mathcal{L}^2\cdots y_{k-1}^*x_k^* \mathcal{L}^k$ is a spanning ladder of $G'$ with its first rung adjacent to $x_1x_2$ and its last rung adjacent to $y_1y_2$. The proof is then complete. \hfill $\square$\vspace{1mm} \bibliographystyle{plain}
\section{Introduction} \label{sec:introduction} By enabling logically-centralized and direct control of a network forwarding plane, Software-Defined Networking (SDN) holds great promises in terms of improving network management and performance, while lowering costs at the same time. Realizing this vision is challenging though as SDN requires major changes to a network architecture before the benefits can be realized~\cite{vissicchio2014opportunities}. This is problematic as existing networks tend to have a huge installed base of devices, management tools, and human operators that are not familiar with SDN, leading to significant deployment hurdles. As a result, the number of SDN deployments has been rather limited in scope; there have been efforts in private backbones~\cite{google_b4_sigcomm2013, microsoft_swan_sigcomm2013} and software deployments at the network edge~\cite{Casado:2012:FRE:2342441.2342459}. In order to kickstart a wide-scale SDN deployment, we argue that operators need to be offered SDN-based technologies possessing at least three key characteristics. First, the advantages of SDN should be readily apparent with only a \emph{small deployment}. Ideally, benefits should be reaped with the deployment of a single SDN device; as comfort and enthusiasm increase, new SDN devices can be incrementally deployed. Second, they should be \emph{low-risk}. In particular, they should require minimum changes to existing operational practices and should be compatible with currently deployed technologies. Finally, they should offer a \emph{high return}, meaning the SDN-based technologies should solve a timely problem. As an example of such a technology, we show how we can significantly improve the performance of existing IP routers, \emph{i.e.} ``supercharge'' them, by combining them with SDN-enabled devices. Supercharging a router is a low-risk, high-reward operation. 
First, it provides operators with a strong incentive to deploy SDN-enabled devices, as these enable them to increase the lifetime of their routers, at a considerably lower cost than buying new ones\footnote{Current SDN switches are orders of magnitude cheaper than fully equipped routers.}. Second, supercharging a router does not change the existing router's behavior, just its performance. Consequently, network operators can conveniently troubleshoot and maintain the original network. Third, once enough routers have been supercharged, that deployed SDN equipment can be used to implement a more disruptive SDN architecture. In this short paper, we supercharge one particular aspect of a router's performance: its convergence time after a link or a node failure. Current routers are often slow to converge after a link failure because of the time it takes to update their forwarding tables; this is an entry-by-entry process that can go on for potentially hundreds of thousands of entries. Our key insight is that, by coupling together a router and an SDN switch, we can build a 2-stage forwarding table which spans across the two devices, with a first lookup done in the router and the second one in the switch. With this type of hierarchical FIB, one can speed up the convergence by tagging entries with the same primary and backup Next-Hop (NH) in the first table, and then actually directing the traffic to the primary or backup NH in the second table. This way, if the primary NH fails, only the few entries on the switch have to be updated. One contribution of our work is to show how we can provision those tagging entries in a router using only a vanilla routing protocol. Besides convergence, several other aspects of a router's performance can be ``supercharged'' by having a 2-stage forwarding table. Among others, the size of the router forwarding tables can be increased using an SDN switch as a cache (similarly to~\cite{ballani2009making}). 
In this case, the router table would contain aggregated entries that would get resolved in the switch table. Similarly, poor load-balancing decisions made by routers due to sub-optimal stateless hash-functions~\cite{rfc2992, cao2000performance} can be overridden dynamically as the traffic traverses the neighboring SDN switch, leading to better network utilization. In all three examples, the factor limiting the performance is the \emph{hardware design} itself, \emph{i.e.}, the forwarding table organization, its forwarding table size, or the hash function used by the router. Unlike software, this cannot be improved without buying new equipment, hence the interest. In~\cite{sdx_sigcomm2014}, Gupta \emph{et al.} used a similar technique to scale an SDN-based Internet Exchange Point, with the aim of decreasing the number of forwarding rules that have to be maintained in the SDN switch by leveraging neighboring router resources. While we target convergence, not space, our contribution is also the opposite of theirs. We show how an SDN switch can improve the performance of a router. As such, our work nicely complements theirs. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures/fib_organization_router.pdf} \caption{In a classical router, the Forwarding Information Base (FIB) is flat, meaning each entry points to the actual physical L2 NH. Upon a failure of $R2$, every single entry (512k) has to be updated to restore full connectivity, a time-consuming operation.} \label{fig:fib_router} \end{figure} \myitem{Today's (slow) convergence.} The convergence time of traditional IP routers is directly linked to the time it takes for the router to update its hardware-based Forwarding Information Base (FIB) after it detects the failure. To achieve fast lookup and limit memory cost, the FIB only contains the information strictly necessary to forward packets. 
In the case of an Ethernet interface, each FIB entry maps an IP destination to the L2 NH address (\emph{i.e.}, MAC address) of the chosen IP NH as well as the output interface. In most routers, the FIB is flat, meaning each FIB entry is mapped to a different (but possibly identical in content) L2 NH entry. As an illustration, consider the network depicted in Fig.~\ref{fig:fib_router}. $R1$ is an edge router connected to the routers of two providers, $R2$ and $R3$. Each of these provider routers advertises a full Internet routing table composed of more than 512{,}000 IPv4 prefixes~\cite{cidr_report}. Also, as $R2$ is cheaper than $R3$, $R1$ is configured to prefer $R2$ for all destinations. In such a case, each of the 512k FIB entries in $R1$ is associated with a distinct L2 NH entry, all of which contain the physical MAC address of $R2$ ({\sffamily 00:aa}). Upon the failure of $R2$, every single entry of $R1$'s FIB has to be updated, creating a significant downtime. Our measurements on a recent router (see \S\ref{sec:evaluation}) show that it actually takes \emph{several minutes} for $R1$ to fully converge, during which traffic is lost. With the ever-rising cost of downtime~\cite{cerin2013downtime} and as services increasingly rely on high availability, convergence on the order of minutes is simply not acceptable. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures/fib_organization_supercharged.pdf} \caption{In a supercharged router, the combined FIB is hierarchical: each FIB entry in the router points to a virtual L2 NH, or pointer, that is resolved in the SDN switch. Upon failure of $R2$, only \emph{one entry}---the pointer value---needs to be updated to restore full connectivity.\vspace{-0.5cm}} \label{fig:fib_supercharged} \end{figure} \myitem{Supercharging convergence.} Equipping routers with a hierarchical FIB~\cite{FMBDVSBF11} is an obvious solution to the convergence problem mentioned above. 
In a hierarchical FIB, each IP destination is mapped to a pointer that resolves to the actual L2 NH to be used. Upon failure of an L2 NH, only pointer values have to be updated. Since the number of L2 NHs is several orders of magnitude smaller than the number of FIB entries, convergence is greatly improved. Unfortunately, hierarchical FIB designs also mean much more complex hardware, and therefore, more expensive routers. Fig.~\ref{fig:fib_supercharged} illustrates how we can provide \emph{any} router (here $R1$) with a hierarchical FIB, spanning two devices, by combining it with an SDN switch. To provision forwarding entries in this hierarchical FIB, we built a \emph{supercharged controller}. While the controller can typically rely on OpenFlow to provision forwarding entries in an SDN switch, dynamically provisioning specific forwarding entries in a router is trickier. Our key insight is that the supercharged controller can use any routing protocol spoken by the router as a provisioning interface. Indeed, FIB entries in a router direct traffic to the L2 NH associated with the L3 NH learned via the routing protocol. Our supercharged controller interposes itself between the router and its peers (we explain how to make this reliable in \S\ref{sec:implementation}), computes primary and backup NHs for all IP destinations, and provisions L2 NH ``pointers'' by setting the IP NH field to a virtual L3 NH that gets resolved by the router into an L2 NH using {\sffamily ARP}. Upon failure of $R2$ in Fig.~\ref{fig:fib_supercharged}, all the controller has to do to converge is to modify the switch rule to ({\sffamily rewrite(00:ff) to (02:bb,2)}) in order to redirect \emph{all traffic} to $R3$. 
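To make the contrast between the two FIB organizations concrete, the following toy model (our sketch, not the authors' implementation; all names are illustrative, with 512k entries standing in for a full table) counts how many entries each design must rewrite on failover:

```python
# Toy model contrasting failover cost in a flat FIB versus a
# hierarchical (supercharged) FIB. Names and sizes are illustrative.

def flat_fib_failover(fib, failed_nh, backup_nh):
    """Flat FIB: every entry pointing at the failed NH is rewritten."""
    updates = 0
    for prefix, nh in fib.items():
        if nh == failed_nh:
            fib[prefix] = backup_nh
            updates += 1
    return updates

def hierarchical_fib_failover(pointer_table, failed_nh, backup_nh):
    """Hierarchical FIB: only the shared pointer entries are rewritten."""
    updates = 0
    for tag, nh in pointer_table.items():
        if nh == failed_nh:
            pointer_table[tag] = backup_nh
            updates += 1
    return updates

N = 512_000                            # full IPv4 table size cited above
flat = {p: "R2" for p in range(N)}     # every prefix -> R2
hierarchical = {"00:ff": "R2"}         # one shared pointer -> R2

assert flat_fib_failover(flat, "R2", "R3") == N
assert hierarchical_fib_failover(hierarchical, "R2", "R3") == 1
```

The per-entry rewrite loop in the flat case is exactly what makes convergence time grow with table size; the hierarchical case touches one entry per pointer regardless of how many prefixes share it.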
\myitem{Contributions.} We make the following contributions: \begin{itemize}[leftmargin=*] \setlength{\itemsep}{0pt} \item \textbf{Supercharging router convergence:} We propose novel ways to combine SDN and legacy networking equipment to improve convergence times (\S\ref{sec:supercharging}). \item \textbf{Implementation:} We describe a fully working prototype implementation of a supercharged controller, combining OpenFlow/Floodlight and ExaBGP (\S\ref{sec:implementation}). Our implementation is efficient, reliable, and can be used to supercharge \emph{any} router. \item \textbf{Hardware-based Evaluation:} We supercharged a hardware router (Cisco Nexus 7k) and thoroughly evaluated its performance (\S\ref{sec:evaluation}). To ensure precise measurements, we developed an FPGA-based traffic generator which detects traffic loss within 70$\mu$s. With respect to the normal router convergence under similar conditions, the supercharged version converged systematically within 150ms, a 900$\times$ reduction! \end{itemize} \section{Supercharging convergence} \label{sec:supercharging} In this section, we describe how to supercharge the convergence of any existing router using SDN equipment to build a hierarchical forwarding table. \myitem{Overview.} Since the number of destinations is much greater than the number of neighbors, many destinations (IP prefixes) will share the same primary and backup NH. We refer to the pair (primary NH, backup NH) as a \emph{backup-group}. For instance, in Fig.~\ref{fig:fib_supercharged}, all 512k prefixes share $(R2,R3)$ as backup-group. If $R2$ fails, all entries will be rewritten to point to $R3$. In a supercharged router, we use the router to \emph{tag} the traffic according to the backup-group it belongs to and use the switch to \emph{redirect} the tagged traffic to the primary or backup NH depending on its status. We use the destination MAC address as the tag and provision it in the router using the virtual NH field in routing announcements. 
Fig.~\ref{fig:overview} depicts the overall architecture. \begin{figure} \centering \includegraphics[width=.65\columnwidth]{figures/supercharged_organization} \caption{Supercharged router overview} \label{fig:overview} \vspace{-10px} \end{figure} \myitem{Provisioning \emph{tagging entries} in the router's FIB.} To provision entries in the router's FIB, a routing daemon is interposed between the router and its peers. Its role is to compute the backup-groups for every IP destination. For simplicity, we assume that BGP is used as the routing protocol, but intra-domain routing protocols such as OSPF or IS-IS can also be used~\cite{fibbing_sigcomm_2015}. The routing daemon assigns a Virtual IP NH (VNH) and a corresponding virtual MAC (VMAC) address to each distinct backup-group and rewrites the routing NH in the corresponding announcements that it directs to the supercharged router. In Fig.~\ref{fig:overview}, the backup-group for 1.0.0.0/24 is $(peer_1,peer_n)$ and the corresponding (VNH, VMAC) is {\sffamily (10.1.1.1, 00:ff)}. Upon reception of a route associated with a VNH, the router issues an ARP request to resolve it to a MAC address. This ARP request is caught by the SDN controller, which replies with the corresponding VMAC address. After that, the supercharged router will use the VMAC as the destination MAC for all the corresponding traffic sent in the data-plane. 
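One way to realize the VNH/VMAC allocation described above (the {\sffamily get\_new\_vnh\_vmac} helper assumed by Listing~\ref{alg:bck_group_computation}) is a simple counter-based allocator. This is our sketch, not the authors' code; the 10.1.1.0/24 range and the shortened {\sffamily 00:xx} MACs follow the figures but are otherwise arbitrary:

```python
from collections import namedtuple
import math

# One (virtual IP NH, virtual MAC) pair per backup-group; the field
# name "nh" matches the attribute accessed in Listing 1.
VnhVmac = namedtuple("VnhVmac", ["nh", "vmac"])

class VnhAllocator:
    def __init__(self):
        self._next = 0
        self._by_group = {}

    def get(self, primary_nh, backup_nh):
        """Return the (VNH, VMAC) of a backup-group, allocating a
        fresh pair the first time the group is seen."""
        group = (primary_nh, backup_nh)
        if group not in self._by_group:
            self._next += 1
            self._by_group[group] = VnhVmac(
                nh="10.1.1.%d" % self._next,      # illustrative range
                vmac="00:%02x" % self._next)      # shortened MAC
        return self._by_group[group]

alloc = VnhAllocator()
a = alloc.get("peer_1", "peer_n")
b = alloc.get("peer_1", "peer_n")   # same backup-group -> same pair
c = alloc.get("peer_2", "peer_1")   # different group -> fresh pair
assert a == b and a != c

# With n peers there are at most perm(n, 2) = n*(n-1) ordered
# backup-groups -- e.g. 90 for n = 10 -- so these tables stay tiny.
assert math.perm(10, 2) == 90
```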
\begin{lstlisting}[caption={Online algorithm for computing backup-groups},label={alg:bck_group_computation}]
bck_groups = {}
routing_table = {}

def compute_backup_groups(bgp_upd):
    old = routing_table.get(bgp_upd.pfx)
    insert(routing_table, bgp_upd)
    new = routing_table.get(bgp_upd.pfx)
    if old:
        if not new:
            send_withdraw(bgp_upd.pfx)
        else:
            if new != old:
                if len(new) == 1:
                    send(bgp_upd)
                else:
                    if (new[0].nh, new[1].nh) != \
                            (old[0].nh, old[1].nh):
                        if new[0].nh not in bck_groups:
                            bck_groups[new[0].nh] = {}
                        if new[1].nh not in bck_groups[new[0].nh]:
                            bck_groups[new[0].nh][new[1].nh] = \
                                get_new_vnh_vmac()
                        rewrite_nh(bgp_upd,
                            bck_groups[new[0].nh][new[1].nh].nh)
                        send(bgp_upd)
                    else:
                        send(bgp_upd)
\end{lstlisting} \myitem{Computing backup-groups.} Listing~\ref{alg:bck_group_computation} describes an online algorithm for computing the backup-groups. In essence, the algorithm maintains an ordered list of known NHs for each IP prefix, with the first two elements identifying the backup-group. The algorithm sends a routing update with a VNH whenever one of these elements changes. Observe that the total number of backup-groups depends on the number of peers $n$ the supercharged router has. Taking into account all the neighbors of the supercharged router, the total number of backup-groups is $\frac{n!}{(n - 2)!}$. For instance, considering a router with 10 neighbors (a lot in practice), the number of backup-groups is only 90. In this paper, we worked with backup-groups of size 2, which can protect against any single link or node failure. Our algorithm is general though, and can compute backup-groups of any size. \myitem{Directing tagged traffic to the appropriate NH in the switch's FIB.} The controller provisions dedicated flow entries to match on the VMAC associated with each backup-group. By default, these rules direct the traffic to the primary NH. 
Upon a node or a link failure, all the backup-group entries for which the unreachable NH was the primary NH are rewritten to direct the traffic to the backup NH instead. In the worst case, the number of flow rewritings that has to be done is the number of peers of the supercharged router, \emph{i.e.} a small constant value. Listing~\ref{alg:data_plane_convergence} describes how the controller determines which flows to install. \begin{lstlisting}[caption={Data-plane convergence procedure},label={alg:data_plane_convergence}]
def data_plane_convergence(peer_down_id):
    for backup_nh in bck_groups[peer_down_id]:
        install_flow(
            match(dst_mac=bck_groups[peer_down_id][backup_nh].vmac),
            modify(dst_mac=get_mac(backup_nh)),
            fwd(output_port=get_port(backup_nh))
        )
\end{lstlisting} \section{Implementation} \label{sec:implementation} We now briefly describe a reliable implementation of a supercharged controller. All our source code is available at {\small\url{https://github.com/nsg-ethz/supercharged_router}.} \myitem{Controller.} We built our prototype atop ExaBGP~\cite{exabgp} as \emph{BGP controller}, FreeBFD~\cite{freebfd} as \emph{BFD daemon} (failure detection), and Floodlight~\cite{floodlight} as \emph{SDN controller}. ExaBGP enables us to establish BGP adjacencies and programmatically receive and send BGP routes over them. We extended ExaBGP with a complete implementation of the BGP Decision Process, the full algorithm to compute backup-groups (see Listing~\ref{alg:bck_group_computation}), and the ability to rewrite the BGP NH on-the-fly. FreeBFD provides a user-space implementation of the Bidirectional Forwarding Detection Protocol (BFD)~\cite{bfd_rfc}. We use it to speed up the discovery of peer failures. Upon a peer failure announcement produced by FreeBFD, ExaBGP uses the REST API provided by Floodlight to push the corresponding rewrite rules into the data-plane (see~\S\ref{sec:supercharging}). 
We also extended Floodlight with an {\sffamily ARP} resolver in order to reply to the {\sffamily ARP} queries generated by the router for resolving the virtual NH to the corresponding virtual MAC address. \myitem{Reliability.} Any underlying SDN switch or any control-plane component of the supercharged controller can fail at any time. Since our goal is to enable fast convergence, our controller must be able to survive any component failure to be of any use. Fortunately, reliability at both the data-plane and the control-plane is easily ensured. At the data-plane level, reliability is obtained by using at least two SDN-enabled switches connected to each supercharged router. Observe that redundant SDN switches can be shared across multiple supercharged routers that share physical connectivity, reducing the costs. At the control-plane level, reliability is enforced by running at least two instances of the controller and connecting them to the corresponding supercharged router. Interestingly, no state needs to be synchronized across the backups, as both backups will receive exactly the same input (BGP routes), run the exact same deterministic algorithm, and, hence, eventually compute the same outcome. The cost is for the supercharged router to receive two copies of each route, and for the peers to configure an extra BGP session---slightly increasing the load in the control-plane. However, we note that control-plane memory is inexpensive (being classical DRAM) and routers maintain multiple BGP adjacencies already, for obvious redundancy reasons. \section{Evaluation} \label{sec:evaluation} We now present a thorough evaluation of the convergence time of a recent hardware router before and after supercharging it using our prototype implementation. We then illustrate the scalability of our controller implementation using micro-benchmarks. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{figures/lab_setup.pdf} \caption{Overview of our HW-based convergence lab. 
$R2$ is rendered inaccessible, causing $R1$ to switch to $R3$ for every single prefix. At the same time, we use FPGAs to precisely ($\mu$s resolution) measure the convergence time. Ultimately, we compare the convergence time of the supercharged R1 and the standalone R1. } \label{fig:lab_Setup} \end{figure} \myitem{Setup and methodology.} Our complete setup is depicted in Fig.\ref{fig:lab_Setup}. It consists of three Cisco Nexus 7k C7018 routers (running NX-OS v6.2, with no hierarchical FIB) interconnected through an HP E3800 J9575A OpenFlow-enabled switch. Using this setup, we measured the convergence time of $R1$ before and after supercharging it. To do so, we loaded $R2$ and $R3$ with an increasing number of actual BGP routes collected from the RIPE RIS dataset~\cite{ripe:ris}. Both $R2$ and $R3$ were loaded with the same feed to ensure that they both advertise the same set of prefixes. In both cases (supercharged and not supercharged), $R1$ was configured to prefer $R2$ for all the destinations. Once all routes were advertised, we started to inject traffic at $R1$ using an FPGA-based generator (see below). To compute a representative distribution of the convergence time across different prefixes, we generated traffic towards 100 IP addresses, randomly selected among the IP prefixes advertised by $R2$ and $R3$, and including the first and last prefix advertised. We configured $R2$ and $R3$ to send all received traffic to another FPGA-based board, acting as sink. To ensure the same detection time, we configured BFD on $R2$ in both experiments. We then disconnected $R2$ from the switch, triggering the convergence process at $R1$; subsequently, we measured the time until full connectivity was restored. \myitem{Custom-built hardware-based traffic generator.} Since this project deals with \emph{fast} convergence, we needed a way to accurately measure small convergence times. Our choice rapidly went to hardware-based measurement, using FPGA boards. 
Using the FPGAs, we were able to measure convergence time \emph{with a precision of only 70 $\mu$s}. Such a precision would be impossible to achieve using software-based measurements. We measured the convergence time by monitoring the maximum inter-packet delays seen by each flow between two FPGA boards: a source and a sink. For the FPGA boards, we used a system-on-chip architecture with \emph{(i)} an embedded MicroBlaze soft processor, \emph{(ii)} an Ethernet MAC core, and \emph{(iii)} either a traffic generator (source) or traffic monitor (sink). The traffic monitor matches the destination IP against a content-addressable memory (CAM) containing the expected destination IPs, before it updates the corresponding maximum inter-packet delay. We implemented both source and sink on Xilinx ML605 evaluation boards featuring a Virtex-6 XC6VLX240T-1FFG1156 FPGA. We programmed the source FPGA to continuously send a stream of 64-byte UDP packets to each of the 100 IPs over a 1G Ethernet connection. Doing so generated a traffic load of about 725 MBit/s, which corresponds to about 1.4M packets/s in total and 14K packets/s per flow. \begin{figure} \centering \includegraphics[width=1\columnwidth]{figures/convergence_time} \caption{With respect to the normal convergence time, which increases linearly with the number of prefixes, our supercharged router \emph{systematically converged within 150ms}. In contrast, the non-supercharged router took more than 2 minutes to converge in the worst-case.} \label{fig:convergence_time} \vspace{-10px} \end{figure} \myitem{The non-supercharged R1 took {\raise.17ex\hbox{$\scriptstyle\sim$}}2.5min to converge in the worst-case.} Using the methodology above, we measured the convergence time of the router before and after the supercharging process for an increasing number of prefixes (from 1k to 500k). We repeated the experiment three times per number of advertised prefixes. 
Since for each experiment we measured the convergence of 100 prefixes, we ended up with 300 statistically representative data points per measurement. Fig.~\ref{fig:convergence_time} depicts the distribution of the convergence time using box-plots; both the non-supercharged and supercharged routers are displayed. Each box shows the inter-quartile range of the convergence time; the line in the box depicts the median value; and the whiskers show the 5th and 95th percentiles. The numbers on top are the maximal convergence times recorded. For the non-supercharged R1, we can see that the convergence time is roughly linear\footnote{The linearity of convergence time is not well reflected in Fig.~\ref{fig:convergence_time} because of the non-uniform scaling of the $x$-axis.} in the number of prefixes in the FIB. This is because FIB entries are updated one-by-one; while the first FIB entry is updated immediately, regardless of the total number of prefixes, the last entry must wait for all the preceding FIB entries to be updated. This worst-case highlights the undesirability of the non-supercharged approach: as the FIB grows, so does the convergence time. Here, we see that $R1$ took close to 2.5min to converge when loaded with 512k prefixes. \myitem{The supercharged R1 systematically converged \emph{within 150ms}, for all prefixes.} Thanks to its hierarchical FIB design, the supercharged R1's convergence time was constant---irrespective of the number of prefixes. This is illustrated in Fig.~\ref{fig:convergence_time} by an almost horizontal line around 150ms. With respect to the above worst-case, this constitutes a 900$\times$ improvement factor. Interestingly, the worst-case convergence time of a supercharged router is still more than two times faster than the best-case convergence time of its standalone counterpart. Indeed, in the best case, it took 375 ms for the standalone R1 to update the first FIB entry. 
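Two back-of-envelope sanity checks on the numbers above (our arithmetic, not additional measurements; 150\,s is used as a stand-in for the ``close to 2.5\,min'' worst case):

```python
# 64-byte UDP packets at ~725 Mbit/s, spread over 100 flows:
pkts_per_s = 725e6 / (64 * 8)
assert 1.40e6 < pkts_per_s < 1.45e6        # "about 1.4M packets/s"
assert 14_000 < pkts_per_s / 100 < 14_200  # "14K packets/s per flow"

# Implied per-entry write time if ~512k flat-FIB entries take ~150 s
# to update one-by-one (entry-by-entry convergence):
per_entry_us = 150.0 / 512_000 * 1e6
assert 290 < per_entry_us < 300            # roughly 293 us per entry
```

The second figure makes the linear trend plausible: a fixed per-entry cost of a few hundred microseconds scales to minutes at full-table size, while the supercharged 150\,ms budget is independent of the prefix count.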
\myitem{The supercharged controller processed BGP updates within 125ms (99th percentile).} While supercharging a router drastically improves its data-plane convergence time, it slightly increases its control-plane convergence time due to the need to re-compute the backup-groups upon every BGP announcement and, potentially, update the virtual NH. To quantify this overhead, we measured the time our unoptimized, Python-based BGP controller took to process two batches of 500K updates from two different peers. In the worst-case, processing an update took 0.8s, but the 99th percentile was only 125ms. We argue that this is a reasonable price to pay for improving the convergence by several orders of magnitude. \section{Related Work} \label{sec:related_work} \myitem{Routing.} The problem of minimizing downtime during convergence has been well studied in the domain of distributed routing protocols~\cite{SBPFFB13, FB07, FFEB05, FMBDVSBF11}. Among all these works, BGP Prefix Independent Convergence (PIC)~\cite{FMBDVSBF11} is certainly the most relevant. PIC introduces the idea of using a hierarchical FIB design in order to speed up router convergence upon peering link failure. In essence, our supercharged router replicates the functionality of PIC but on \emph{any} router (even old ones), without requiring expensive line-card updates. \myitem{SDN.} FatTire~\cite{reitblatt2013fattire} is a domain-specific language which aims at simplifying the design of fault-tolerant network programs that can quickly converge by leveraging fast-failover mechanisms provided in recent versions of OpenFlow~\cite{openflow1.4}. While FatTire targets fully-deployed OpenFlow networks, we show that we can already speed up the convergence of existing networks with a single SDN switch. In~\cite{gamperli2014evaluating}, Gamperli \emph{et al.} evaluated the effect of centralization on BGP convergence. They showed that the convergence time decreases as more and more of the network-wide decisions get centralized. 
Supercharging routers is a direct complement to their work. Once enough routers have been supercharged, one can use~\cite{gamperli2014evaluating} at the network-level to speed up convergence even more. Just as a supercharged router, SDX~\cite{sdx_sigcomm2014} is also an example of how routing and SDN can coexist in a symbiotic way, providing each other with benefits. While SDX showed how routers can boost SDN equipment performance, we show how SDN equipment can boost router performance. Also, our technique can immediately be applied to the SDX environment in order to boost the convergence time upon the failure of an IXP participant's equipment. \myitem{Incremental SDN deployment.} RouteFlow~\cite{routeflow} and Panopticon~\cite{Panopticon-atc14} proposed techniques to incrementally deploy SDN equipment in existing networks with the aim of reaping early benefits. RouteFlow enables operators to build fully-fledged IP routers out of SDN switches, while Panopticon enables operators to steer traffic away from an L2 domain to SDN equipment where it can be processed. In contrast to supercharging routers, none of them improves the performance of existing equipment. In~\cite{agarwal2013traffic}, Agarwal \emph{et al.} proposed a way to improve the Traffic Engineering (TE) performance of existing networks even with a partial deployment of SDN capability, highlighting another aspect of the network that can be ``supercharged'' using SDN devices. \section{Conclusions} \label{sec:conclusion} We boost the convergence time of legacy routers by combining them with SDN equipment in a novel way, essentially building a hierarchical forwarding table spanning across devices. Through thorough evaluations on real hardware, we demonstrated significant gains, with convergence time reduced by up to 900$\times$. We believe this paper opens up many interesting future directions for integrating legacy routing and SDN devices in a more ``symbiotic way''. 
By combining the agility of SDN with the tried-and-true routers prevalent in the industry today, we take the best of both worlds and take the first steps towards electrifying modern-day networks through supercharged networking devices. \bibliographystyle{IEEEtran}
\section{MiCRObE} \label{sec:microbe} \vspace{-0.5em} \subsection{Feature Calibration} \vspace{-0.5em} Calibration learns a one-dimensional model that computes the probability of each label $e$ given a single feature $f$. Here, the sparse features can be semantic units that may or may not speak the same vocabulary as the target labels. In addition to providing a simple $\operatorname{max-calibration}$ based label classifier, the calibration process also performs a feature selection that can significantly speed up the training of classifiers such as SVMs or logistic regression. The feature selection process yields: (a) automatic synonym expansion using visual similarity: for example, we allow the sparse feature named \emph{Canyon} from one of our base models to predict the entity \emph{Grand Canyon}, which is not in the set of input sparse features; similarly, \emph{Clock Tower} will be able to predict \emph{Big Ben}; (b) automatic expansion to related terms based on visual co-occurrence: for example, we will get the feature \emph{water} for the label \emph{boat}, which can be used as supporting evidence for the boat classifier.\newline In other words, ``Canyon'', ``Clock Tower'', ``cooking'' and ``water'' are features, while ``Grand Canyon'', ``boat'' and ``Big Ben'' are labels. Formally, the input is a sparse 150,000-dimensional feature vector that combines the predictions of various classifiers, and the output is a set of target labels. The calibration model is a function $p_{e|f}(x)$, defined over pairs of a label ($e$) and a feature ($f$), that is learned subject to a monotonicity constraint, akin to isotonic regression.
We use a modified version of Platt's scaling~\cite{Platt99probabilisticoutputs} to model this probability: \begin{equation} p_{e|f}(x) = \alpha \left(\sigma(\beta x + \gamma) - \sigma(\gamma) \right) \end{equation} where $\sigma(x)=\frac{1}{1+\exp(-x)}$ is the sigmoid function and $\alpha, \beta, \gamma$ are functions of $e$ and $f$. We enforce $\alpha,\beta \geq 0$ so that $p_{e|f}(x)$ monotonically increases with $x$ (the feature value). Furthermore, since $p_{e|f}(x)$ is a probability, we enforce that $p_{e|f}(x_{\operatorname{max}}) \leq 1$, where $x_{\operatorname{max}}$ is the maximum feature value in the training data. The scale $\alpha$ allows the estimated probability to plateau at a value less than 1.0 (a property that cannot be enforced in standard Platt's scaling). For example, one of the input sparse features is the detection ``Canyon'' from a base image classifier. There are at least a dozen canyons in the world (including the Grand Canyon), so it is reasonable for the probability of \emph{Grand Canyon} to be less than 1.0 even if the input sparse feature ``Canyon'' fired with the highest confidence from an extremely precise base classifier. Furthermore, the offset term $\sigma(\gamma)$ enforces that $p_{e|f}(0)=0$, which helps when dealing with sparse features; thus, we only capture positively correlated features for a label $e$. Fitting $p_{e|f}(x)$ can be done by minimizing either the squared error or the log-loss over all instances (video frames in our case) where $x_f> 0$. We used the squared loss in our implementation, as we found it to be more robust near the boundaries, especially given that $p_{e|f}(0)$ is constrained to zero. For each instance where $x_f > 0$, we also have a ground-truth value $g_e$ associated with the label $e$.
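To make the parametrization concrete, the modified Platt calibration can be sketched as follows (a minimal sketch; the parameter values used below are hypothetical):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def platt_modified(x, alpha, beta, gamma):
    # p_{e|f}(x) = alpha * (sigmoid(beta*x + gamma) - sigmoid(gamma));
    # alpha, beta >= 0 make it monotone in x, and p_{e|f}(0) = 0 by construction.
    return alpha * (sigmoid(beta * x + gamma) - sigmoid(gamma))
```

By construction the score is zero at $x=0$ and plateaus at $\alpha\,(1-\sigma(\gamma))$ as $x \to \infty$, which is how the model keeps its estimate below 1.0 for ambiguous features.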
Given training examples $(w_t, x_t, g_t)_{t \in T}$, where $w_t$ is the weight of the example\footnote{To speed up the implementation, we quantize the feature values into buckets of size $10^{-4}$; the weight $w$ is the total number of examples that fell into a bucket and $g$ is the mean ground-truth value in that bucket.}, $x_t$ is the feature value and $g_t$ is the ground truth, we estimate $\alpha, \beta, \gamma$ by solving the regularized least-squares problem \begin{equation} (\hat{\alpha}, \hat{\beta}, \hat{\gamma}) = \operatorname{argmin} \sum_{t \in T} w_t (p_{e|f}(x_t) - g_t)^2 + \lambda (\alpha^2 + \beta^2 + \gamma^2) \end{equation} subject to $\alpha \geq 0$ and $\beta \geq 0$, where $\lambda$ is tuned on a held-out set to minimize the held-out squared loss. We estimate 9 billion triples $(\alpha, \beta, \gamma)$ and only retain the ones where the estimated $\alpha> 0$. Since the problem has only 3 variables, we can compute the exact derivative and Hessian w.r.t. $\alpha, \beta, \gamma$ at each point and perform Newton updates. The various $(e,f)$ pairs are processed in parallel. Once the function $p_{e|f}(x)$ is learned, we choose up to the top $K$ features sorted according to $p_{e|f}(x_{max})$ (the maximum probability of the label given that feature). The outcome is a set $F_e$ of positively correlated features for each label $e$. \vspace{-0.5em} \subsection{Max Calibration Model} \vspace{-0.5em} Once the calibrations $p_{e|f}(x)$ are learned for (label, feature) pairs, the $\operatorname{max-calibration}$ model is an optimistic predictor of the probability of each entity $e$ given the set of all features that fired in the frame $\mathbf{x}$: \begin{equation} p_e(\mathbf{x}) = \max_{f} p_{e|f}(x_f) \label{eqn:maxcal} \end{equation} Note that the max-calibration model works best when the input features are sparse outputs that carry some semantic meaning.
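A minimal sketch of the max-calibration predictor of Eq.~(\ref{eqn:maxcal}), assuming the calibrators $p_{e|f}$ are available as callables (all names are hypothetical):

```python
def max_calibration(x, calibrators, F_e):
    # x: sparse features as {f: value > 0}
    # calibrators: {(e, f): monotone calibration function p_{e|f}}
    # F_e: {e: shortlisted positively correlated features for label e}
    scores = {}
    for e, feats in F_e.items():
        s = 0.0
        for f in feats:
            if x.get(f, 0.0) > 0.0:
                s = max(s, calibrators[(e, f)](x[f]))
        if s > 0.0:
            scores[e] = s
    return scores
```

Only the features shortlisted in $F_e$ are touched, so the cost per example scales with the number of fired features rather than the full 150,000-dimensional vocabulary.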
Despite the simplicity and robustness of the $\operatorname{max-calibration}$ model, several drawbacks may keep it from yielding the best performance:\newline (a) The max-calibration model uses noisy ground-truth data (it assumes that every frame in a video is associated with the video's label). At the very least, we need to correct this by learning another model that uses cleaner ground truth. \newline (b) The non-linear operation of taking a max over all the probabilities may result in overly optimistic predictions for labels that have many features in $F_e$; the output will no longer be well calibrated (unless we learn another calibrator on top of the max-calibrated score). \newline (c) Each feature is treated independently, so the $\operatorname{max-calibration}$ model cannot capture correlations between features. Moreover, the max-calibration model can only deal with sparse features; for example, we cannot use continuous-valued features such as the output of an intermediate layer of a deep network. For these reasons, we use the max-calibration model as a bootstrapping mechanism for training a second order model. \textbf{Hard Negative Mining:} The $\operatorname{max-calibration}$ model provides calibrated probabilities for all labels in our vocabulary, and it is simple and extremely efficient to compute. Hence, we exploit this property to mine good positives and hard negatives (i.e., the ones that score high according to the $\operatorname{max-calibration}$ model). The mining process for an entity $e$ consists of sorting (from highest to lowest) all the training examples (video frames in our case) according to the $\operatorname{max-calibration}$ score of $e$ and retaining the top $M$ examples. We choose $M$ such that it captures more than 95\% of the positives.
Since the number of training examples is huge (e.g., 3.6 billion frames in our case), we do this approximately using map-reduce: in each of the $W$ workers, we collect the top $k$ examples and then choose the top $M$ examples from the resulting $kW$ examples. Although this approach is approximate, if $kW$ is chosen to be sufficiently larger than $M$, we can guarantee that we recover almost all of the top $M$ examples. The expected number of the true top $M$ examples recovered by choosing the top $M$ examples from this $kW$-sized set is \begin{equation} \small E(k, W, M) = k + \sum_{i=k+1}^{M} \left(1 - \frac{1}{W} \right)^{i-1} \sum_{j=0}^{k-1} \binom{i-1}{j} (W-1)^{-j} \end{equation} For example, if $M=80000$ examples, $W=4000$ workers and $k = 40$ examples/worker, this evaluates to $79999.8126$. In general, setting $kW=2M$ yields a good guarantee. In the next section, we show how to get the top $k$ examples from each worker efficiently. \vspace{-0.5em} \subsection{Choosing Top-$k$ Examples per Worker} \vspace{-0.5em} The brute-force approach is to compute the max-calibration score using (\ref{eqn:maxcal}) for each label $e$ given the features $\mathbf{x}$ of every example that belongs to the worker $w$, and to insert $\{p_e(\mathbf{x}), e, \mathbf{x}\}$ into a $k$-sized priority queue (keyed by the max-calibrated probability $p_e(\mathbf{x})$) for the label $e$. Unfortunately, this can be very time consuming, especially when assigning millions of examples per worker. In this section, we propose an approach that makes this mining extremely efficient and is particularly tuned towards the $\operatorname{max-calibration}$ model. The idea is to score only labels that are guaranteed to enter the priority queue. As a result, computing $p_{e|f}(x_f)$ becomes less and less frequent as more examples are processed in the worker and the priority queue for $e$ keeps getting updated.
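The expectation $E(k, W, M)$ can be evaluated directly; note that the inner sum is the probability that a $\operatorname{Binomial}(i-1, 1/W)$ variable is below $k$, i.e., that fewer than $k$ of the $i-1$ better examples landed on the same worker. A sketch (requires $W > 1$):

```python
from math import comb

def expected_recovered(k, W, M):
    # Expected number of the true top-M examples recovered when each of W > 1
    # workers keeps its local top-k and the global top M of the k*W kept
    # examples is taken.  The first k of the true top-M always survive; the
    # i-th best survives iff fewer than k better examples share its worker.
    total = float(k)
    for i in range(k + 1, M + 1):
        tail = sum(comb(i - 1, j) * (W - 1) ** (-j) for j in range(k))
        total += (1.0 - 1.0 / W) ** (i - 1) * tail
    return total
```

For instance, `expected_recovered(1, 2, 2)` is $1.5$: with two workers keeping one example each, the second-best example survives exactly when it does not share a worker with the best one.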
From the calibration step, we have a set of shortlisted features $F_e$ for each label $e$. Inverting this list gives a set of shortlisted labels $E_f$ for each feature $f$. In each worker $w$, we maintain a priority queue $\mathbf{Q}(e, w)$ for each label $e$ that stores up to the top-$k$ examples (according to the $\operatorname{max-calibration}$ score). In each worker $w$, for each feature $f$, we store an inverse lookup to labels $E_f(w)$, which is initially $E_f$. In addition, we store a minimum feature-firing threshold $\tau_{f,e}$ such that we consider inserting $e$ into the priority queue only if $x_f \geq \tau_{f,e}$ for some $f$. Initially $\tau_{f,e} = 0$, which implies that every label $e \in E_f$, for all $f$ such that $x_f > 0$, will be scored. Let $Q_{min}(e,w)$ be the minimum $\operatorname{max-calibration}$ value in the priority queue; it is zero if the size of the priority queue is less than $k$, and otherwise (when the size equals $k$) it is the smallest element in the queue. For each training example, let $\mathbf{x}$ be the sparse feature vector and $g$ the corresponding ground truth, and let $p_e(\mathbf{x})$ be the score of $e$ according to the $\operatorname{max-calibration}$ model for this instance. Instead of computing $p_e(\mathbf{x})$ explicitly for \emph{all} labels, we only compute it for the subset of labels that are guaranteed to enter the priority queue $\mathbf{Q}(e,w)$, as follows. $p_e(\mathbf{x})$ is initially zero for all $e$ (an empty map). For each feature $f : x_f > 0$ and for each label $e \in E_f(w)$, if $x_f \geq \tau_{f,e}$ but $p_{e|f}(x_f) < Q_{min}(e,w)$, we raise $\tau_{f,e}$ to $x_f$; since $p_{e|f}$ is monotone, no smaller feature value can enter the queue either. In addition, if $p_{e|f}(x_{max}) < Q_{min}(e,w)$, we remove $e$ from $E_f(w)$ (so we have fewer labels in the inverse lookup for $f$). On the other hand, if $x_f \geq \tau_{f,e}$ and $p_{e|f}(x_f) \geq Q_{min}(e,w)$, we update $p_e(\mathbf{x})$ to $\max(p_e(\mathbf{x}), p_{e|f}(x_f))$.
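The per-worker mining loop can be sketched as follows (a simplified sketch: it keeps the per-pair thresholds $\tau_{f,e}$ but omits the removal of labels from $E_f(w)$; all names are hypothetical):

```python
import heapq

def mine_top_k(examples, calibrators, E_f, k):
    # examples: iterable of (sparse_x, ground_truth); sparse_x is {f: value > 0}
    # calibrators: {(e, f): monotone calibration function p_{e|f}}
    # E_f: {f: labels shortlisted for feature f}
    queues = {}    # e -> min-heap of (score, example index), size <= k
    tau = {}       # (f, e) -> minimum feature value worth scoring
    for idx, (x, _g) in enumerate(examples):
        scores = {}
        for f, xf in x.items():
            for e in E_f.get(f, ()):
                if xf < tau.get((f, e), 0.0):
                    continue                      # cannot beat the queue minimum
                q = queues.setdefault(e, [])
                qmin = q[0][0] if len(q) == k else 0.0
                s = calibrators[(e, f)](xf)
                if s < qmin:
                    tau[(f, e)] = xf              # raise the scoring threshold
                else:
                    scores[e] = max(scores.get(e, 0.0), s)
        for e, s in scores.items():
            q = queues[e]
            if len(q) < k:
                heapq.heappush(q, (s, idx))
            elif s > q[0][0]:
                heapq.heapreplace(q, (s, idx))
    return queues
```

Because each queue minimum only ever increases, a raised threshold $\tau_{f,e}$ never wrongly excludes an example, and the calibrator is evaluated less and less often as the queues fill up.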
For each $e$ where $p_e(\mathbf{x}) > 0$, we insert $\{p_e(\mathbf{x}), e, \mathbf{x}, g\}$ into the priority queue $\mathbf{Q}(e, w)$. The resulting $M$ examples become the training data for a second order classifier. Since the second order model is trained only on these $M$ examples, it is important to retain this distribution at inference time: the second order model may not do well when it sees a data point outside the distribution of these $M$ examples. Hence, at inference time, we put a threshold on the lowest $\operatorname{max-calibration}$ score of any positive example seen among the $M$ training examples. \textbf{Popular Labels:} For popular YouTube labels like \emph{Minecraft}, $M$ needs to be sufficiently high to capture a significant fraction of the positives. For example, \emph{Minecraft} occurs in $3\%$ of the YouTube videos in our set. On a dataset of $10$ million videos with frames sampled at 1fps and an average video length of 4 minutes, we have $72$ million frames that correspond to \emph{Minecraft}. In this case, $M$ would need to be much higher than $72$ million, which is not feasible on a single machine.\footnote{The second order classifier is trained in parallel across different workers, but we train each label on a single machine.} When considering each example of such labels for addition to the top-$k$ list in each worker, we sample randomly with a probability $p$ proportional to $\frac{M}{\operatorname{positives}(e)}$ [i.e., the step ``insert $\{p_e(\mathbf{x}), e, \mathbf{x}\}$ into the priority queue'' is executed with this probability]. \vspace{-0.5em} \subsection{Training the Second Order Model} \vspace{-0.5em} Given the top $M$ examples of positives and negatives obtained from hard negative mining with the first order $\operatorname{max-calibration}$ model, the second order model learns to discriminate the good positives from the hard negatives in this set.
At inference time, for each example $\mathbf{x}$, we check whether its $\operatorname{max-calibration}$ score is at least $\tau_e$, the lowest $\operatorname{max-calibration}$ score of any positive example in the training set. Note that checking whether the $\operatorname{max-calibration}$ score is at least $\tau_e$ is equivalent to checking whether at least one of the feature values passes a certain threshold. Formally, \begin{equation} \max_{f} p_{e|f}(x_f) \geq \tau_e \equiv \bigvee_{f} I(x_f \geq p_{e|f}^{-1}(\tau_e)) \end{equation} \noindent $p_{e|f}(x)$ is monotonically increasing, hence its inverse is uniquely defined. At inference time, we check whether at least one feature exceeds its threshold $\eta_{e|f} = p_{e|f}^{-1}(\tau_e)$, which is pre-computed during initialization. If the $\operatorname{max-calibration}$ score exceeds this threshold, we apply the second order model $Q_e(\mathbf{x})$ to compute the final score of the label $e$ for the example $\mathbf{x}$; otherwise, the final score $P_e(\mathbf{x})$ of the label $e$ is set to zero. This is essentially a 2-stage cascade, where the cheap max-calibration model is used as an initial filter, followed by a more accurate and more expensive second order model. We used logistic regression and a mixture of experts as candidates for this second order model. \vspace{-0.5em} \subsection{Mixture of Experts} \vspace{-0.5em} Recall that we train a binary classifier for each label $e$; $y=1$ denotes the existence of $e$ given the features $\mathbf{x}$. Mixture of experts (MoE) was first proposed by Jacobs and Jordan \cite{Jordan94hierarchicalmixtures}. In an MoE model, each individual component models a different binary probability distribution.
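Under the modified Platt parametrization, the thresholds $\eta_{e|f} = p_{e|f}^{-1}(\tau_e)$ have a closed form; a sketch (the parameter values below are hypothetical, and $\tau/\alpha + \sigma(\gamma)$ must lie in $(0,1)$):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def calib(x, alpha, beta, gamma):
    # modified Platt calibration p_{e|f}(x)
    return alpha * (sigmoid(beta * x + gamma) - sigmoid(gamma))

def calib_inverse(tau, alpha, beta, gamma):
    # eta = p_{e|f}^{-1}(tau), pre-computed once so the cascade's first
    # stage reduces to the cheap test x_f >= eta.
    s = tau / alpha + sigmoid(gamma)
    return (math.log(s / (1.0 - s)) - gamma) / beta
```

Because the calibration is strictly increasing (for $\beta > 0$), comparing a raw feature value against $\eta_{e|f}$ is equivalent to comparing the calibrated score against $\tau_e$, without evaluating the sigmoid at inference time.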
The probability according to the mixture of $H$ experts is given by \begin{equation} p(y = 1 | \mathbf{x}) = \sum_{h} p(h | \mathbf{x}) p(y = 1 | \mathbf{x}, h) \end{equation} where the conditional probability of the hidden state given the features is a softmax over $H+1$ states, $p(h | \mathbf{x}) = \frac{\exp(\mathbf{w}_h^{T} \mathbf{x})}{1 + \sum_{h'} \exp(\mathbf{w}_{h'}^{T} \mathbf{x})}$. The last, $(H+1)^{th}$, state is a dummy state that always results in the non-existence of the entity. When $H=1$, the model is a product of two logistic functions and hence is more general than a single logistic regression. The conditional probability of $y=1$ given the hidden state and the features is a logistic regression, $p(y = 1 | \mathbf{x}, h) = \sigma(\mathbf{u}_h^{T} \mathbf{x})$. The parameters to be estimated are the softmax gating weights $\mathbf{w}_h$ for each hidden state and the expert logistic weights $\mathbf{u}_h$. For brevity, we denote $p_{y|\mathbf{x}} = p(y=1 | \mathbf{x})$, $p_{h|\mathbf{x}} = p(h | \mathbf{x})$ and $p_{y|h,\mathbf{x}} = p(y=1 | \mathbf{x},h)$. Given a set of training data ${(\mathbf{x}_i, g_i)}_{i=1\ldots N}$ for each label, where $\mathbf{x}_i$ is the feature vector and $g_i$ is the corresponding boolean ground truth, we minimize the regularized loss of the data, \begin{equation} \sum_{i=1}^{N} w_i \mathcal{L} \left[p_{y|\mathbf{x}_i}, g_i\right] +\lambda \left(\Vert \mathbf{w} \Vert_{2}^2 + \Vert \mathbf{u} \Vert_{2}^2 \right) \label{eqn:loss} \end{equation} where the loss function $\mathcal{L}(p,g)$ is the log-loss \begin{equation} \mathcal{L}(p,g) = -g \log p - (1-g) \log (1-p) \label{eqn:logloss} \end{equation} and $w_i$ is the weight of the $i^{th}$ example.
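As an illustration, the MoE probability with the dummy $(H+1)^{th}$ state can be computed as follows (a minimal dense-vector sketch; the real features are sparse):

```python
import numpy as np

def moe_prob(x, W, U):
    # W: (H, d) softmax gating weights; U: (H, d) expert weights.
    # The softmax runs over H+1 states; the implicit dummy state has logit 0
    # and always predicts non-existence, so it contributes nothing to p(y=1|x).
    e = np.exp(W @ x)
    gates = e / (1.0 + e.sum())               # p(h | x)
    experts = 1.0 / (1.0 + np.exp(-(U @ x)))  # p(y = 1 | x, h)
    return float(gates @ experts)
```

With $H=1$ and all weights zero, the gate and the expert each evaluate to $0.5$, giving $p(y=1|\mathbf{x}) = 0.25$; the dummy state is what keeps the prediction below the single expert's output.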
\textbf{Optimization:} We can directly write the derivatives of $\mathcal{L} \left[p_{y|\mathbf{x}}, g\right]$ with respect to the softmax weights $\mathbf{w}_h$ and the logistic weights $\mathbf{u}_h$ as \begin{eqnarray*} \frac{\partial \mathcal{L} \left[p_{y|\mathbf{x}}, g\right]}{\partial \mathbf{w}_h} &=& \mathbf{x} \frac{p_{h|\mathbf{x}} \left(p_{y|h,\mathbf{x}} - p_{y|\mathbf{x}}\right) \left(p_{y|\mathbf{x}}-g\right)}{p_{y|\mathbf{x}}(1-p_{y|\mathbf{x}})} \\ \frac{\partial \mathcal{L} \left[p_{y|\mathbf{x}}, g\right]}{\partial \mathbf{u}_h} &=& \mathbf{x} \frac{p_{h|\mathbf{x}} p_{y|h,\mathbf{x}} (1- p_{y|h,\mathbf{x}})\left(p_{y|\mathbf{x}}-g\right)}{p_{y|\mathbf{x}}(1-p_{y|\mathbf{x}})} \end{eqnarray*} Our implementation uses the \textit{ceres} library~\cite{ceres-solver} to solve the minimization in (\ref{eqn:loss}) for the weights $(\mathbf{w}_h, \mathbf{u}_h)$ using the limited-memory Broyden--Fletcher--Goldfarb--Shanno algorithm (LBFGS). We also implemented an EM variant in which the collected statistics are used to re-estimate the softmax and the logistic weights (both convex problems). In practice, however, we found that LBFGS converges much faster than EM and also produces a better objective in most cases. All reported accuracy numbers use the LBFGS optimization. \textbf{Initialization:} When $H$ (the number of mixtures) is greater than one, we select $H$ positive examples according to the non-deterministic $k$-means++ sampling strategy~\cite{arthur2007k}. The features of these positive examples become the gating weights (the offset term is set to zero), and the expert weights are all initialized to zero. We then run LBFGS until the relative change in the objective function falls below $10^{-6}$. When $H=1$, we initialize the expert weights to the weights obtained by solving a logistic regression, while the gating weights are all set to zero.
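These analytic gradients can be checked against finite differences; a minimal sketch (hypothetical values, $H=1$, two-dimensional features):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def moe_loss_grads(x, g, W, U):
    # Log-loss of the MoE and its analytic gradients w.r.t. the gating (W)
    # and expert (U) weights, following the expressions above.
    e = np.exp(W @ x)
    gates = e / (1.0 + e.sum())      # p(h|x), dummy state excluded
    experts = sigmoid(U @ x)         # p(y=1|x,h)
    p = gates @ experts              # p(y=1|x)
    loss = -g * np.log(p) - (1 - g) * np.log(1 - p)
    common = (p - g) / (p * (1 - p))
    dW = np.outer(gates * (experts - p) * common, x)
    dU = np.outer(gates * experts * (1 - experts) * common, x)
    return loss, dW, dU
```

The factor $(p_{y|\mathbf{x}} - g)/(p_{y|\mathbf{x}}(1-p_{y|\mathbf{x}}))$ is simply $\partial \mathcal{L}/\partial p$, and the remaining factors are $\partial p/\partial \mathbf{w}_h$ and $\partial p/\partial \mathbf{u}_h$ via the chain rule.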
Such an initialization ensures that the likelihood of the trained MoE model is at least that of the logistic regression. In our experiments, we found consistent improvements from using an MoE with 1 mixture compared to a logistic regression, and small improvements from training an MoE with (up to) $5$ mixtures compared to a single mixture. Furthermore, for multiple mixtures, we run several random restarts and pick the one with the best objective. \textbf{Hyperparameter Selection:} To determine the best $L_2$ weight $\lambda$ on $\mathbf{w}_h$ and $\mathbf{u}_h$, we split the training data into two equal, video-disjoint sets and grid-search over $\lambda$, training a \emph{logistic regression} with an $L_2$ weight of $\lambda$ at each step. We start with $\lambda=10^{-2}$ and increase it by a factor of $2$ in each step, stopping the search once the holdout loss starts to increase. \textbf{Training times:} The total training time (from calibration to training the MoE model) for the frame-level model is less than 8 hours using all features on the 10.8 million training videos, with the load distributed across 4000 machines. When the number of mixtures is greater than one, the majority of the training time is spent on the random restarts and the hyperparameter sweep. The corresponding training time for the video-level model is between twelve and sixteen hours on the 12 million set. Training the same models on the sports videos takes less than an hour. Inference takes $\leq1$s per 4-minute video. \section{Conclusion} \vspace{-0.5em} We studied the problem of efficient large scale video classification (12 million videos) with a large label space (150,000 labels). We proposed to use image-based classifiers, trained either on video thumbnails or on Flickr images, to represent the video frames, thereby avoiding a costly pre-training step on video frames.
We demonstrated that we can organically discover the correlated features for a label using the max-calibration model. This allows us to bypass the curse of dimensionality by providing a small set of features for each label. We provided a novel technique for hard negative mining using an underlying max-calibration model, and used it to train a second order mixture-of-experts model. MiCRObE can be used as a frame-level classification method that does not require human-selected, frame-level ground truth. This is crucial when attempting to classify into a large space of labels. MiCRObE shows substantial improvements in the precision of the learned fusion model over simpler baselines such as max-calibration and models trained using random negatives, and provides the highest level of performance at the task of frame-level classification. We also showed how to adapt this model for video classification. Finally, we provided an LSTM-based model that achieves the highest video-level performance on YT-12M. Performance could be further improved by late-fusing the outputs of the two algorithms. \section{Introduction} \vspace{-0.5em} \label{sec:intro} Video classification is the task of producing labels that are relevant to a video given its frames. A good video-level classifier is one that not only provides accurate frame labels, but also best describes the entire video given the features and the annotations of the various frames in the video. For example, a video might contain a tree in some frame, but the label that is central to the video might be something else (e.g., ``hiking''). The granularity of the labels needed to describe the frames and the video depends on the task. Typical tasks include assigning one or more global labels to the video, and assigning one or more labels to each frame of the video. In this paper, we deal with a truly large scale dataset of videos that best represents videos in the wild.
Many of the advancements in object recognition and scene understanding come from convolutional neural networks~\cite{liris2011,ji2013,karpathy2014large,simonyan2014two}. The key factors that enabled such large scale success with neural networks were improvements in distributed training, advancements in optimization techniques and architectural improvements~\cite{szegedy14going}. While the best published results~\cite{lan2014beyond} on academic benchmarks such as UCF-101 use motion features such as IDTF~\cite{wang11}, we do not make use of them in this work due to their high computational cost. \begin{figure}[tbp] \includegraphics[width=0.5\textwidth]{figures/microbe_diagram.png} \caption{Overview of the MiCRObE training pipeline.} \label{fig:microbe} \end{figure} Training neural networks on video is a very challenging task due to the large amount of data involved. Typical approaches take an image-based network and train it on all the frames from all videos in the training dataset. We created a benchmark dataset on which this would simply be infeasible: our dataset contains 12 million videos, and assuming a sampling rate of 1 frame per second, this yields 2.88 billion frames. Training an image-based network on such a large number of images would simply take too long with current-generation hardware. Another challenge we aim to address is handling a very large number of labels; our dataset has 150,000 labels. We approach the problem of training on such a video corpus using two key ideas: 1) we use CNNs that were trained on video thumbnails or Flickr images as base features; and 2) the scale is large enough that only distributed algorithms may be used. Assuming a base image-based CNN classifier with 150,000 classes, of which on average 100 trigger per frame, an average video in our dataset would be represented using 24,000 features. In a na\"{i}ve linear classifier, this may require up to 3.6 billion weight updates per video.
Assuming a single pass over the data, in the worst case this would generate $43 \times 10^{15}$ updates. The main contribution of this paper is describing two orthogonal methods which can be used to learn efficiently on such a dataset. The first method consists of a cascade of steps: we use an initial, relatively weak classifier to quickly learn feature-class mappings while pruning as many of these correlations as possible; this classifier is then used for hard negative mining for a second order classifier, which is then improved iteratively. The second method employs an optimized neural network architecture using long short-term memory (LSTM) neurons~\cite{hochreiter97long} and a hierarchical softmax~\cite{morin2005hierarchical}, while using a distributed training architecture~\cite{dean2012large}. We present two methods for both frame-level and video-level classification. The first, named MiCRObE (Max Calibration mixtuRe Of Experts, see Figure~\ref{fig:microbe}), is described in Section~\ref{sec:microbe}, while the second, which we abbreviate as LSTM, is described in Section~\ref{sec:lstm}. \section{Video and Frame-Level Prediction LSTM} \vspace{-0.5em} \label{sec:lstm} We also tackle the tasks of frame-level and video-level prediction using recurrent neural networks. In this section we describe our approach. A recurrent network operates over a temporally ordered set of inputs $\boldsymbol{x} = \left\{x_1, \dotsc, x_T\right\}$, where $x_t$ corresponds to the features at time step $t$. For each time step $t$, the network computes a hidden state $h_t$ which depends on $h_{t-1}$, the current input and bias terms. The output $y_t$ is computed as a function of the hidden state at the current time step: \begin{align} h_t &= \mathcal{H}(W_{x}x_t + W_hh_{t-1} + b_h) \\ y_t &= W_oh_t + b_o \end{align} \noindent where the $W$ denote weight matrices.
$W_x$ denotes the weight matrix corresponding to the input features, $W_h$ the weights by which the previous hidden state is multiplied, and $W_o$ the weight matrix used to compute the output. $b_h$ and $b_o$ denote the hidden and output biases. $\mathcal{H}$ is the hidden-state activation function, typically chosen to be either the sigmoid or the $\tanh$ function. This type of formulation suffers from the vanishing gradient problem~\cite{bengio94learning}. Long Short-Term Memory (LSTM) neurons were proposed by Hochreiter and Schmidhuber~\cite{hochreiter97long} as a type of neuron that does not suffer from this problem. Thus, LSTM networks can learn longer-term dependencies between inputs, which is why we chose to use them for our purposes. The output of the hidden layer $h_t$ for an LSTM is computed as follows: \begin{align} i_t &= \sigma (W_{xi}x_t + W_{hi}h_{t-1} + W_{ci}c_{t-1} + b_i) \label{eqi}\\ f_t &= \sigma (W_{xf}x_t + W_{hf}h_{t-1} + W_{cf}c_{t-1} + b_f) \label{eqf}\\ c_t &= f_tc_{t-1} + i_t\ \tanh(W_{xc}x_t + W_{hc}h_{t-1} + b_c) \label{eqc}\\ o_t &= \sigma (W_{xo}x_t + W_{ho}h_{t-1} + W_{co}c_t + b_o) \label{eqo}\\ h_t &= o_t\ \tanh(c_t) \label{eqh} \end{align} \noindent where $\sigma$ is the sigmoid function. The main difference from the basic RNN is that $i_t$ decides whether to use the input to update the state, $f_t$ decides whether to forget the state, and $o_t$ decides whether to output. In some sense, this formulation introduces data control flow driven by the state and input of the network. For the first time step, we set $c_{0} = \mathbf{0}$ and $h_{0} = \mathbf{0}$; however, the initial states could also be represented by a learned bias term. For the purposes of both video-level and frame-level classification, we consider a neural network which takes frame-level classifications as inputs.
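Equations (\ref{eqi})--(\ref{eqh}) can be sketched directly in code (a minimal sketch; the peephole weights $W_{ci}, W_{cf}, W_{co}$ are assumed diagonal here and stored as vectors):

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, P):
    # One step of the LSTM recurrence.  P maps names like 'Wxi' to the
    # corresponding weight arrays; biases are 'bi', 'bf', 'bc', 'bo'.
    s = lambda z: 1.0 / (1.0 + np.exp(-z))
    i = s(P['Wxi'] @ x + P['Whi'] @ h_prev + P['Wci'] * c_prev + P['bi'])
    f = s(P['Wxf'] @ x + P['Whf'] @ h_prev + P['Wcf'] * c_prev + P['bf'])
    c = f * c_prev + i * np.tanh(P['Wxc'] @ x + P['Whc'] @ h_prev + P['bc'])
    o = s(P['Wxo'] @ x + P['Who'] @ h_prev + P['Wco'] * c + P['bo'])
    h = o * np.tanh(c)
    return h, c
```

Note that the additive update of $c_t$ (rather than a squashed overwrite) is what lets gradients flow across many time steps.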
These scores are represented as a sparse vector $\boldsymbol{s_t} = \left\{s_t^i \in S_t \mid s_t^i > 0\right\}$, where $S_t$ is a vector containing the scores for all classes at time $t$. The first layer of the network at time $t$ computes its output as $x_t = \sum_i s_t^i w^i + b$, where $b$ is the bias term. This formulation of $s_t$ significantly reduces the amount of computation needed for both the forward and backward passes, because it only considers those elements of $S_t$ which have values greater than zero. For our networks, the number of non-zero elements per frame is less than 1\% of the total possible. In our experiments, each class is represented internally (as $w^i$) with 512 weights. On top of this layer, we stack 5 LSTM layers with 512 units each~\cite{graves2013speech}. We unroll the LSTM layers for 30 time steps, which is equivalent to using 30 seconds' worth of video at a time for training. The output of the top LSTM layer at each time step is further connected to a hierarchical softmax layer~\cite{morin2005hierarchical}, in which we use a splitting factor of 10 and a random tree to approximate the hierarchy. Similarly to~\cite{ng2015beyond}, we use a linearly increasing weight for each time step, starting with $1/30$ for the first frame and assigning a weight of $1$ to the last frame. This avoids penalizing the model heavily when it must make a prediction from only a few frames. We also investigated using a uniform weight for each frame and max-pooling over the LSTM layer, but on our video-level metrics these methods proved inferior to the linear weighting scheme. In our dataset, videos have an average length of 240 seconds. Therefore, when using the 30-frame unrolled LSTM model, it is not immediately clear how to obtain a video-level prediction. In order to process the entire video, we split it into 30-second chunks. Starting with the first chunk of the video, we predict at every frame, and save the state at the end of the sequence.
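The sparse first layer $x_t = \sum_i s_t^i w^i + b$ described above touches only the classes with positive scores; a minimal sketch (names hypothetical):

```python
import numpy as np

def sparse_input_layer(sparse_scores, w, b):
    # sparse_scores: {class_id: score > 0}; w maps class_id -> its (e.g.
    # 512-d) embedding row; b is the bias vector.  Only the classes that
    # actually fired contribute, so cost scales with the non-zero count.
    x = b.copy()
    for i, s in sparse_scores.items():
        x = x + s * w[i]
    return x
```

With fewer than 1\% of classes firing per frame, this keeps the first layer's forward and backward cost roughly two orders of magnitude below the dense equivalent.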
When processing subsequent chunks, we feed the previously saved state back into the LSTM. At the end of the video, we have as many sets of predictions as we have frames. We investigated max-pooling and average pooling over the predictions and, as an alternative, taking the prediction at the last frame. For every video, our LSTM model produces a 512-dimensional representation: the state of the top-most LSTM layer at the last frame in the video. This also allows other classifiers to be trained on this representation. Training was done using distributed stochastic gradient descent~\cite{dean2012large} with 20 model replicas. We used a learning rate of 0.3 and employed the AdaGrad update rule~\cite{duchi2011adaptive}. Training took less than 5 days to converge. Inference takes $\leq1.4$ seconds for the average 4-minute video. \section{Related Work} \vspace{-0.5em} \label{sec:related} Our work targets large scale video classification. The largest publicly available benchmark is Sports-1M~\cite{karpathy2014large}, which contains over one million videos and 487 labels. The best performing classification method on the Sports-1M benchmark uses a frame-level convolutional deep neural network, with either max-pooling or an LSTM on top~\cite{ng2015beyond}. Using the same benchmark, Karpathy~\etal~\cite{karpathy2014large} and Tran~\etal~\cite{tran14corr} propose using a convolutional deep network for making frame- and video-level predictions, while Karpathy~\etal~\cite{karpathy2014large} also present results using hand-crafted features and deep networks. The inputs to these networks are raw pixels, and the networks are trained end-to-end through the convolutional part, resulting in very long training times.
Other large scale video classification methods~\cite{aradhye09,toderici10,yang2011discriminative} used hand-crafted features and per-class AdaBoost classifiers, but were only able to use a fraction of the videos to train the per-class models. Unlike previous work, our goal is to provide fast training times with models capable of frame-level and video-level prediction, while allowing for a much larger number of labels and videos to be used for training. Many of the best performing models in machine learning problems such as image classification, pattern recognition and machine translation come from the fusion of multiple classifiers \cite{sebastiano_fusion,saenko_fusion}. Given scores $p_l^{j}$ for $j=1,\ldots,M$ from each of the $M$ sources for a label $l$, the traditional fusion problem is a function $\hat{p}_l = f(p_l^{1:M})$ that maps these probabilities to a single probability value. This problem is well studied, and one of the most popular techniques is Bayes fusion, which has been successfully applied to vision problems~\cite{Kittler98e,manduchi_cvpr}. Voting-based fusion techniques, such as majority and sum rules, are extremely popular, mostly because they are simple and nonparametric. The current best result on ImageNet is based on a simple ensemble average of six different classifiers that output ImageNet labels~\cite{sergey_ensemble}. The fundamental assumption in these settings is that each of the $M$ sources needs to speak the same vocabulary as that of the target. What if the underlying sources do not speak the same vocabulary, yet output semantically meaningful units? For example, suppose the underlying classifier only detected \emph{canyon}, \emph{river} and \emph{rafting}. Can we learn to infer the probability of the target label being \emph{Grand Canyon} from these detections? Another extreme is to have the underlying classifier so fine-grained that it has (for example) the label \emph{African elephant}, but does not have the label \emph{elephant}.
If the label \emph{elephant} is present in the target vocabulary, can we learn to infer the relation \emph{African elephant} $\Rightarrow$ \emph{elephant} organically from the data? One approach is to treat the underlying classification outputs as \emph{features} and train the classifiers for each label based on these features. This idea has been proposed in the context of scene classification~\cite{li2010object}. This approach can quickly run into the curse of dimensionality, especially if the underlying feature space is huge (which is indeed the case for our problem). \section{Experimental Setup} \vspace{-0.5em} \textbf{Datasets:} We created a new dataset of \textbf{12 million YouTube videos} spanning about $150,000$ visual labels from Freebase~\cite{bollacker2008freebase}. We selected these 12 million videos such that each of them has a view count of at least $10,000$. The $150,000$ labels were selected by removing music topics such as songs, albums and people (to remain within the visual domain and to avoid having to concentrate on face recognition). YouTube provides the labels of the videos, which are obtained by running a Freebase-based annotator \cite{simonet-wole-13} on the title, description and other metadata sources. We retrieved the videos belonging to each label by using the YouTube Topics API~\cite{youtube_topics}. This annotation is fairly reliable for high-view-count videos, where the weighted precision is over $95\%$ based on human evaluation. Many of the labels are, however, extremely fine-grained, making them visually very similar or even indistinguishable. Some examples are \textit{Super Mario 1}, \textit{Super Mario 2}, \textit{FIFA World Cup 2014}, \textit{FIFA World Cup 2015}, \textit{african elephant}, \textit{asian elephant}, etc. These annotations are only available at the video level. Another dataset that we used is the \textbf{Sports-1M dataset} \cite{karpathy2014large}, which consists of roughly 1.2 million YouTube sports videos annotated with 487 classes.
We will evaluate our models both at the video level and the frame level. \textbf{Features:} We extract two sets of sparse features from each frame (sampled at 1 fps) for the videos in our training and test set. One set of features are the prediction outputs of an \textit{Inception}-derived deep neural network \cite{szegedy14going} trained on \emph{YouTube thumbnail images}. This model by itself performs much worse on our training set, because YouTube thumbnails are noisy (they tend to be chosen to be visually attractive rather than to describe the concept in the video) and each thumbnail is only a single frame from the entire YouTube video. The number of unique sparse features firing from this model on our 10-million-video training set is about $110,000$. In our experiments section, we will abbreviate these features as \textbf{TM}, which stands for \emph{thumbnail model}. Another set of features are the predictions of a deep neural network with a similar architecture to the \textbf{TM} model, but largely trained on \emph{Flickr data}. The target labels are again from the metadata of the Flickr photos and are similar in spirit to ImageNet labels~\cite{krizhevsky2012imagenet}. Moreover, they are much less fine-grained than the YouTube labels. The vocabulary size of these labels is about $17,000$. For example, the label \emph{Grand Canyon} will not be present; instead, the label \emph{Canyon} will be present. We will abbreviate these features as \textbf{IM}, which stands for \emph{image models}. For both models we process the images by first resizing them to $256 \times 256$ pixels, then randomly sampling a $220 \times 220$ region and randomly flipping the image horizontally with $50\%$ probability when training. Similarly to the LSTM model, the training was performed on a cluster using Downpour Stochastic Gradient Descent~\cite{dean12} with a learning rate of $10^{-3}$ in conjunction with a momentum of $0.9$ and weight decay of $0.0005$.
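The per-frame preprocessing just described (resize, random $220 \times 220$ crop, random horizontal flip) can be sketched as follows; the function name is ours, and the resizing step is assumed to have already produced a $256 \times 256$ array:

```python
import numpy as np

def augment(image, crop=220, rng=None):
    """Random 220x220 crop plus a 50% horizontal flip, as used for both the
    TM and IM models during training. `image` is a (256, 256, 3) array."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    y = rng.integers(0, h - crop + 1)       # top-left corner of the crop
    x = rng.integers(0, w - crop + 1)
    patch = image[y:y + crop, x:x + crop]
    if rng.random() < 0.5:
        patch = patch[:, ::-1]              # horizontal flip
    return patch

img = np.zeros((256, 256, 3))
out = augment(img, rng=np.random.default_rng(0))
assert out.shape == (220, 220, 3)
```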
\vspace{-0.5em} \subsection{Training and Evaluation} \vspace{-0.5em} \begin{table*} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Features & Dataset & \textbf{Self} & \textbf{MaxCal}& \shortstack{\textbf{Logit} \\ \small{Random Negs.}} & \shortstack{\textbf{Logit} \\ \small{Hard Negs.}} & \shortstack{\textbf{MiCRObE} (1 mix) \\ \small{Hard Negs}.} & \shortstack{\textbf{MiCRObE} (5 mix) \\ \small{Hard Negs}.} \\ \hline \textbf{IM} & \multirow{3}{*}{YT-12M} & 4.0\% & 20.0\% & 27.0\% & 29.2\% & 31.3\% & 32.4\% \\ \cline{1-1}\cline{3-8} \textbf{TM} & ~ & 19.0\% & 28.0\% & 31.0\% & 39.8\% & 40.6\% & 41.0\% \\ \cline{1-1}\cline{3-8} \textbf{IM}$+$\textbf{TM} & ~ & 7.0\% & 33.0\% & 40.6\% & 42.5\% & 43.9\% & 43.8\% \\ \hline \textbf{IM} & \multirow{3}{*}{Sports-1M} & 0.9\% & ~ & 25.6\% & 35.0\% & 39.3\% & 39.8\% \\ \cline{1-1}\cline{3-8} \textbf{TM} & ~ & 1.2\% & ~ & 33.9\% & 45.7\% & 46.8\% & 49.9\% \\ \cline{1-1}\cline{3-8} \textbf{IM}$+$\textbf{TM} & ~ & 1.5\% & 39.0\% & 41.0\% & 47.8\% & 49.8\% & 50.2\% \\ \hline \end{tabular} \end{center} \caption{Frame-level model evaluation against the video-level ground truth.
The values in the table represent hit@1.} \label{tab:afp} \vspace{-1em} \end{table*} \begin{figure}[tbp] \begin{center} \includegraphics[width=0.45\textwidth]{figures/model_ROC.pdf} \end{center} \caption{The ROC for frame-level predictions for two models using human ratings for ground truth.} \label{figure:roc} \vspace{-1em} \end{figure} \begin{table*}[tbp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Features & Benchmark & \shortstack{\textbf{MaxCal} \\ ~} & \shortstack{\textbf{Logit} \\ \small{Hard Negs.}} & \shortstack{\textbf{MiCRObE} (1 mix) \\ \small{Hard Negs}.} & \shortstack{\textbf{MiCRObE} (5 mix) \\ \small{Hard Negs}.} & \shortstack{\textbf{LSTM} \\ ~} \\ \hline \textbf{IM} & \multirow{3}{*}{YT-12M} & 20.0\% & ~ & 36.2\% & 36.6\% & 44.4\% \\ \cline{1-1}\cline{3-7} \textbf{TM} & & 28.0\% & ~ & 47.3\% & 47.3\% & 45.7\% \\ \cline{1-1}\cline{3-7} \textbf{IM}$+$\textbf{TM} & & 29.0\% & 49.3\% & 50.1\% & 49.5\% & 52.3\% \\ \hline \textbf{IM} & \multirow{3}{*}{Sports-1M} & 28.2\% & 45.0\% & 46.5\% & 47.2\% & 52.8\%\\ \cline{1-1}\cline{3-7} \textbf{TM} & & 38.6\% & 54.5\% & 55.4\% & 56.0\% & 58.8\% \\ \cline{1-1}\cline{3-7} \textbf{IM}$+$\textbf{TM} & & 40.3\% & 54.7\% & 56.8\% & 57.0\% & 59.0\% \\ \hline \end{tabular} \end{center} \caption{Hit@1 for the video-level models against the ground truth.} \label{tab:video_precision} \vspace{-1em} \end{table*} We partition the training data using a $90/10$ split. This results in about $10.8$ million videos in the training partition and $1.2$ million videos in the test partition. The ground truth is only available at the video level. We train two kinds of models: \textbf{Frame-level models}: These models are trained to predict the label from a single frame. To provide robustness, a natural idea is to feed MiCRObE features aggregated over more than one frame. The features for each frame are obtained by averaging the features in a $\pm 2$ second window.
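A minimal sketch of this $\pm 2$ second window averaging, assuming features sampled at 1 fps so the window spans up to five frames (truncated at the video boundaries); the function name is ours:

```python
import numpy as np

def window_average(frame_feats, radius=2):
    """Average per-frame features over a +/- `radius` second window (1 fps).
    `frame_feats` is a (num_frames, num_features) array."""
    T = len(frame_feats)
    out = []
    for t in range(T):
        window = frame_feats[max(0, t - radius): t + radius + 1]
        out.append(np.mean(window, axis=0))   # window shrinks at the edges
    return np.array(out)

feats = np.arange(10, dtype=float).reshape(10, 1)   # 10 frames, 1 feature
avg = window_average(feats)
assert avg[5, 0] == 5.0    # interior frame: mean of frames 3..7
assert avg[0, 0] == 1.0    # truncated window at the start: mean of frames 0..2
```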
For training the max-calibration model, we use all the frames from each video in our training set and assume that every frame is associated with the video-level ground-truth labels. For mining the collection of hard negatives and good positives for the second-stage model, we randomly sample 10 frames from each video and mine the top $100,000$ scoring examples for each label from the resulting $108$ million frames (where the scoring is done using the max-calibration model). At inference time, we annotate each frame (sampled at 1 fps) in the video using the trained MiCRObE cascade model. The output of the LSTM model is evaluated at every frame, while passing the state forward. Since we do not have frame-level ground truth at such a large scale, we either (a) convert the frame-level labels to video-level labels using max-aggregation of the frame-level probabilities and evaluate against the video-level ground truth (see Table \ref{tab:afp}), or (b) send a random sample of frames from a random sample of output labels to human raters (Figure~\ref{figure:roc}). Note that the predictions of the underlying base models are entities which have some overlap with the target vocabulary. The precision numbers in the \textbf{Self} column are the accuracy of the base classifiers by themselves. For the combined model \textbf{IM+TM}, we take the maximum score of an entity coming from either of the models (Table~\ref{tab:afp}). In order to prepare the data for human rating, we took a random set of $6,000$ videos which did not appear in the training set. For each video, we computed the output probabilities for all labels. For those labels which had an output probability greater than $0.1$, we took all the frames that passed this threshold, sorted the scores, and split the entire score range into $25$ equally sized buckets. From each bucket, we randomly sampled a frame and a score.
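The bucketed sampling used to build the rating set might look like the following sketch, on made-up scores for a single label (the variable names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy frame scores for one label; keep only those above the 0.1 threshold.
scores = rng.random(5000)
passing = scores[scores > 0.1]

# Split the observed score range into 25 equally sized buckets and sample
# one (frame, score) pair from each non-empty bucket.
lo, hi = passing.min(), passing.max()
edges = np.linspace(lo, hi + 1e-12, 26)      # 26 edges -> 25 buckets
bucket = np.digitize(passing, edges) - 1     # bucket index 0..24 per score

sampled = []
for b in range(25):
    members = passing[bucket == b]
    if members.size:
        sampled.append(rng.choice(members))

assert len(sampled) == 25                    # dense toy data fills every bucket
```

Stratifying by score in this way spreads the rated frames across the whole operating range instead of oversampling the most confident predictions.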
For each model we evaluated, we randomly sampled $250$ labels (with $25$ frames each), and sent this set to human raters. The total number of labels from which we sampled these $250$ was $3,541$ for MiCRObE, and $1,568$ for the LSTM model. The resulting ROC is depicted in Figure~\ref{figure:roc}. We only considered frames for which there was a quorum (at least $2$ raters had to agree). The MiCRObE method is well suited for frame-level classification because its learning process actively uses frame-level information and mines hard examples. As such, it provides better performance than the LSTM method on this task (Figure~\ref{figure:roc}). \textbf{Video-level models}: These models are trained to predict the labels directly from the aggregated features of the video. The sparse features (available at each frame) are aggregated at the video level, and the fusion models are directly trained to predict video-level labels from the (early) aggregated features. For this early feature aggregation, we collect feature-specific statistics such as the mean and top-$k$ (for $k=1,2,3,4,5$) of each feature over the entire video. For example, the label ``soccer'' from the TM model will expand to six different features: \textit{TM:Soccer:Mean} (the average score of this feature over the entire video), \textit{TM:Soccer:1} (the highest score of this feature over the video), \textit{TM:Soccer:2} (the second highest score of this feature) and so on. The LSTM model remains unchanged from the frame-level prediction task. The video-level label is obtained by averaging the frame-level scores. The results are summarized in Table~\ref{tab:video_precision}. On the Sports-1M benchmark, which is video-level, the LSTM method yields 59.0\% hit@1. Karpathy~\etal~\cite{karpathy2014large} report 60.9\%, while Tran~\etal~\cite{tran14corr} report 61.1\% using a single network which was trained specifically for the task, starting from pixels.
Similarly, Ng~\etal~\cite{ng2015beyond} report 72.1\%. However, in order to obtain a single prediction for the video, the video is passed through the network 240 times, which would not be possible in our scenario, since we are concerned with both learning and inference speed. In terms of video classification, MiCRObE was adapted to use feature aggregation, and it provides performance comparable to the LSTM model (a hit@1 within 2.8\% on YT-12M, and within 3\% on Sports-1M). The LSTM model, unlike MiCRObE, learns a representation of the input labels and makes use of the sequential nature of the data. Compared to previous work concentrating on large-scale video classification, our methods do not require training CNNs on the raw video data, which is desirable when working with large numbers of videos. Our best-performing single-pass video-level model is within $2.1\%$ hit@1 of the best published model which does not need multiple model evaluations per frame~\cite{tran14corr} (trained directly from frames).
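The early feature aggregation used by the video-level fusion models can be sketched as below; the function name and toy scores are illustrative, following the TM:Soccer example above:

```python
import numpy as np

def aggregate_feature(scores, k=5):
    """Expand one per-frame feature into video-level statistics: its mean
    plus its top-k values over the whole video (zero-padded for short videos).
    Mirrors the TM:Soccer:Mean, TM:Soccer:1, ..., TM:Soccer:5 expansion."""
    s = np.sort(np.asarray(scores, dtype=float))[::-1]   # descending
    feats = {"Mean": s.mean()}
    for i in range(1, k + 1):
        feats[str(i)] = s[i - 1] if i <= len(s) else 0.0
    return feats

# A toy "TM:Soccer" feature over a 4-frame video.
agg = aggregate_feature([0.2, 0.9, 0.5, 0.7])
# agg["1"] is the highest score (0.9); agg["2"] the second highest (0.7).
```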
\section{Introduction} \label{sec:Introduction} The classification of various continuous phase transitions has been successfully discussed from the viewpoint of the Landau-Ginzburg-Wilson (LGW) paradigm~\cite{GL,GLW}. The essential principles of the paradigm are the clarification of (local) order parameters and the characterization of breaking symmetries. Recently, the possibility of deconfined critical phenomena (DCP)~\cite{SenthilVBSF2004,SenthilBSVF2004,SenthilBSVF2005} has attracted considerable attention as a quantum phase transition (QPT) beyond the LGW paradigm. DCP have been predicted to occur at the QPT point between a magnetically ordered phase, such as the N\'eel phase, and the valence-bond solid (VBS) phase in two-dimensional (2D) systems. Remarkably, this phase transition is continuous, although the symmetry group of one phase is not a subset of that of the other. The well-known models that are expected to exhibit DCP are the generalized Heisenberg models with multibody interactions for SU(N) spins, namely, the SU(N) $JQ_m$ models~\cite{Sandvik2007}. Considerable effort has been expended to numerically determine whether the QPT of this model family is of second order or weakly first order; however, a satisfactory result has not yet been obtained~\cite{Sandvik2007,LouSK2007,MelkoRG2008,Sandvik2010,KaulS2012,KuklovAB2008,ChenK2013,HaradaK2013,PujariKDFA2013}. An interesting aspect of DCP is that the transition may occur independently of the lattice geometry~\cite{SenthilBSVF2004,SenthilBSVF2005}. In a previous study~\cite{HaradaK2013}, we evaluated the critical exponent, $\nu_{\rm QPT}$, at the QPT point between the N\'eel and VBS phases in SU(N) $JQ_m$ models on both square and honeycomb lattices using quantum Monte Carlo (QMC) calculations. From the finite-size scaling (FSS) analysis, we confirmed that $\nu_{\rm QPT}$ is independent of the lattice geometry but depends on the SU(N) symmetry.
This result strongly suggested the presence of DCP in the SU(N) $JQ_m$ models. However, $\nu_{\rm QPT}$ for the SU(3) models exhibits a systematic shift toward the trivial value of $\nu_{\rm QPT}=1/D (D=3)$ as the system size increases. Therefore, the possibility of a first-order transition remains in the case of SU(3). The nature of the QPT point is important in the discussion of finite-temperature properties, because it can strongly affect the topology of the thermal phase diagram and also the criticality, as shown in Fig.~\ref{fig7}. The SU(N) $JQ_m$ models are expected to exhibit a thermal phase transition if the VBS pattern is characterized by spontaneous symmetry breaking of the lattice. Thus, consideration of the critical properties of thermal transitions in the vicinity of the QPT point may yield a different perspective on the possibility of DCP occurring in SU(N) $JQ_m$ models. The universality class of the thermal transition has been discussed for both SU(2) $JQ_2$~\cite{Tsukamoto2009} and $JQ_3$~\cite{JinS2013} models on the square lattice. The VBS pattern on the square lattice is described by a columnar dimer configuration, which is characterized by the spontaneous breaking of $\pi/2$-rotational symmetry around the center of the plaquette. Thus, the $Z_4$ symmetry breaking of the VBS order parameter is expected at the critical temperature. In the 2D case, several models that exhibit $Z_4$ symmetry breaking exist, such as the Ashkin-Teller model~\cite{AKmodel} including the 4-state Potts model~\cite{WuF1982} and the 2D classical XY spin model with the $Z_4$ field (XY+$Z_4$ model). In such models, the critical exponent, $\eta$, always satisfies the condition $\eta=1/4$. However, the observed exponent $\eta \sim 0.59$ of the SU(2) $JQ_2$ model differs from the expected value~\cite{Tsukamoto2009}. 
In the SU(2) $JQ_2$ model, the VBS order is very weak because the QPT point is located in the vicinity of the dimer limit, where the model is expressed using only the multibody interacting $Q_m$ term. To enhance the VBS order, Jin et al. have focused on the SU(2) $JQ_3$ model~\cite{JinS2013}. The QMC results they have obtained~\cite{JinS2013} indicate that the criticality is well explained by the Gaussian conformal-field theory with central charge $c=1$; the thermal exponent, $\nu$, monotonically increases as the system approaches the QPT point, while the following relations between the exponents, $\eta=1/4$, $\gamma/\nu=7/4$, and $\beta/\nu=1/8$, are retained. This is a characteristic aspect of the 2D weak Ising universality class~\cite{SuzukiM1974}, and the same behavior has also been observed in the 2D XY+$Z_4$ model~\cite{Jose1977,Rastelli2004a,Rastelli2004b}. In the case of the classical spin model, $\nu$ monotonically increases as the $Z_4$ symmetry-breaking field, $h_4$, is suppressed and finally diverges at the XY limit, where the Kosterlitz-Thouless (KT) transition takes place. The authors of ref.~\cite{JinS2013} observed an enhancement of the U(1) symmetry of the VBS order parameter in close proximity to the transition temperature and the QPT point, when the system size is smaller than a characteristic length scale. Since it has been noted that the emergence of additional U(1) symmetry is an important signature of DCP~\cite{SenthilBSVF2005,LouSK2007}, this numerical result is consistent with the presence of a deconfined critical point in the SU(2) $JQ_3$ model. However, the observation of U(1) symmetry in the vicinity of the QPT point seems natural, because the $Z_4$ field in the classical model is always marginal at the transition temperature and the system becomes the pure XY model in the $h_4 \rightarrow 0$ limit~\cite{Jose1977}.
Thus, the emergence of U(1) symmetry cannot be regarded as sufficient evidence for the presence of a deconfined critical point in this case. Since the possibility of a first-order transition has been suggested for the SU(3) $JQ_2$ model~\cite{HaradaK2013}, where the same $Z_4$ symmetry is broken, systematic studies of the SU(N) symmetry are necessary. In contrast to the square-lattice case, the nature of the symmetry-breaking field is different in the honeycomb-lattice case. When the columnar VBS pattern is characterized by $\pi/3$ rotational symmetry breaking, the corresponding classical model is expected to be the XY+$Z_3$ model. Since the $Z_3$ field is relevant in two dimensions, the universality class is explained by the 2D three-state Potts model~\cite{Baxter1982}, and the emergence of the U(1) symmetry in the VBS order parameter may then be suppressed in the vicinity of the QPT. Although this is correct in the case of SU(2) spins, the higher SU(N)-symmetric case seems to be controversial. The discussion of DCP is based on the noncompact complex projective (NCCP$^{N-1}$) theory with $Z_k$ symmetry-breaking fields~\cite{SenthilVBSF2004,SenthilBSVF2005}. In this theory, although the $Z_3$ symmetry-breaking field is relevant, it becomes irrelevant as N increases~\cite{SachdevRAJ1990,SenthilBSVF2004}. For the SU(2) case, which corresponds to the NCCP$^1$ theory, recent QMC results have indicated that the $Z_3$ field is \emph{relevant} but almost marginal at the QPT~\cite{PujariKDFA2013}. Therefore, one can expect a first-order transition at the QPT point in the SU(2) case and a change of criticality as N increases. This indicates that the criticality of the thermal transitions and the topology of the phase diagram are determined based on the order of the QPT. If the QPT is continuous, as is expected for larger values of N, and the system approaches the QPT, whether or not the universality classes of the thermal transition are affected is a nontrivial question.
Our previous QMC calculations suggest that the same criticality exists at the QPT regardless of the lattice geometry~\cite{HaradaK2013}. This implies that the phase diagram topologies are identical in both the square- and the honeycomb-lattice cases. If one focuses on the most likely and simplest case, two scenarios for the thermal phase diagram can be expected, depending on the order of the QPT point: (a) the QPT is of second order and the thermal transition is always continuous (Fig.~\ref{fig7}(a)); and (b) the QPT is a weak first-order transition and a multicritical point exists at a finite temperature (Fig.~\ref{fig7}(b)). If scenario (b) occurs, we expect to observe crossover behavior and a change of $\nu$ toward the trivial value, $\nu=1/D$ $(D=2)$. From the above discussion, the importance of calculating the thermal phase diagram for different values of N and various lattice geometries with high accuracy is apparent. Further, such calculations allow us to consider the possibility of the DCP scenario in the SU(N) $JQ_m$ models. Thus, in this paper, we systematically study the thermal phase transitions of the $JQ_2$ model on the square lattice and the $JQ_3$ model on the honeycomb lattice for SU(3) and SU(4) spins. The layout of this paper is as follows. In Sec. II, we study the thermal transition of the SU(N) $JQ_m$ model. We begin by introducing the model details and the order parameters evaluated in the QMC computations. In Sec. III, we present the results of the finite-size scaling analysis for the obtained numerical data. The criticality of the thermal transition is discussed for the square-lattice and the honeycomb-lattice cases. Then, we discuss possible scenarios for the QPT of both models from the perspective of the thermal phase diagram. Finally, we summarize our results in Sec. IV.
\begin{figure}[htb] \begin{center} \includegraphics[width=0.95\linewidth]{fig7.eps} \end{center} \caption{\label{fig7} Schematic phase diagram and renormalization flow. The thick solid (dashed) curves correspond to the second (first) order transition. The horizontal axis, $\lambda$, is the coupling ratio of the Heisenberg term, $J$, and the multibody interaction term $Q_m$. The open square represents a discontinuous transition. Each solid circle denotes a fixed point, such as the 2D Ising, three-state Potts, and multicritical fixed points. The coordination origin corresponds to the low-temperature fixed point. All arrows indicate renormalization flows. (a) DCP scenario and (b) first-order transition scenario.} \end{figure} \section{Models and Method} \label{sec:Model and Method} We consider the SU(N) $JQ_{2}$ model on the square lattice and the SU(N) $JQ_{3}$ model on the honeycomb lattice. Both models are simply expressed by the color-singlet-projection operator, $P_{ij}$, which is defined as $P_{ij}=-\frac{1}{N}\sum_{\alpha=1}^{N}\sum_{\beta=1}^{N} S_i^{\alpha \beta}{\bar S}_j^{\beta \alpha}$, where $S_i^{\alpha \beta}$ is the SU(N) spin generator and ${\bar S}_j^{\beta \alpha}$ is its conjugate. The model Hamiltonian can be expressed as \begin{eqnarray} {\mathcal H}=-J \sum_{( ij )} P_{ij} - Q_2 \sum_{( ij ) ( kl )} P_{ij}P_{kl}, \label{Ham1} \end{eqnarray} for the square-lattice case and \begin{eqnarray} {\mathcal H}=-J \sum_{( ij )} P_{ij} - Q_3 \sum_{( ij )( kl )( mn )} P_{ij}P_{kl}P_{mn}, \label{Ham2} \end{eqnarray} \begin{figure}[htb] \begin{center} \includegraphics[width=8cm]{fig1.eps} \end{center} \caption{\label{fig1} (Color online) (a) Color-singlet projection operator on a bond. The bold ellipsoids denote a color-singlet dimer state and correspond to $P_{ij}$s. (b) Projection operators for $Q_2$ and $Q_3$ terms. (c) Coordination index, $\mu$.} \end{figure} for the honeycomb-lattice case, where $(ij)$ indicates the nearest-neighbor sites. 
The summation for the $Q_m$ terms runs over all pairs without breaking the rotational symmetry of the lattice, as illustrated in Fig. \ref{fig1}. Since the present lattices are bipartite, the fundamental (conjugate) representation is adapted for the SU(N) spins on A(B) sites. For the Hamiltonians (\ref{Ham1}) and (\ref{Ham2}), we performed QMC calculations up to $L=256$ for the square- and $L=132$ for the honeycomb-lattice cases, respectively. (The number of sites, ${\mathcal N}$, corresponds to ${\mathcal N}=L^2$ and ${\mathcal N}=2L^2$, respectively.) The QMC code used here is based on the massively parallelized loop algorithm~\cite{TodoMS2012} provided in the ALPS project code~\cite{ALPS}. In the computations, we measured the VBS amplitude, which is defined as $\Psi_{\boldsymbol r} \equiv \sum_{\mu=1}^{z} \exp[ \frac{2\pi i}{z} \mu ] \hat{P}_{\boldsymbol r,r_{\mu}}$, where $\hat P_{\boldsymbol r,r_\mu}$ is the diagonal component of the projection operator, $z$ is the coordination number of the lattice, and ${\boldsymbol r_\mu}$ represents the neighboring site of ${\boldsymbol r}$ in the $\mu$ direction (see Fig. \ref{fig1} (b)). From $\Psi_{\boldsymbol r}$, we define the VBS order parameter as $\Psi \equiv L^{-2}\sum_{{\boldsymbol r}} \Psi_{\boldsymbol r}$. After $\Psi_{\boldsymbol r}$ was evaluated, we obtained further quantities: the Binder ratio, $B_R \equiv \langle \Psi^4 \rangle/ \langle \Psi^2 \rangle ^ 2$; the VBS correlation function, $C({\boldsymbol r}) \equiv \langle \Psi_{\boldsymbol r}\Psi_{\boldsymbol 0} \rangle$; the correlation ratio, $C_R \equiv \frac{C(L/2,L/2)}{C(L/4,L/4)}$; the correlation length, $\xi \equiv \frac{1}{|\Delta {\boldsymbol Q}|}\sqrt{ \frac{S({{\boldsymbol Q}_c})}{S(\Delta {\boldsymbol Q})} -1}$; and the static structure factor, $S({\boldsymbol Q})=L^{-2}\sum_{{\boldsymbol r},{\boldsymbol r'}} \exp [-i {\boldsymbol Q}({\boldsymbol r}-{\boldsymbol r}')] \langle \Psi_{\boldsymbol r}\Psi_{\boldsymbol r'} \rangle$.
Here, $\Delta {\boldsymbol Q}$ denotes the distance between the order wave-vector, ${{\boldsymbol Q}_c}={\boldsymbol 0}$, and the nearest-neighbor positions, $(0,2 \pi /L_y)$ or $(2 \pi /L_x, 0)$. In this paper, we discuss the thermal transition criticality by changing the coupling constants, $J$ and $Q_m$. It is convenient to introduce a length scale associated with the distance from the QPT point, where the ground state changes from the N\'eel state to the VBS state. The QPT points were previously evaluated in ref.~\cite{HaradaK2013} and are summarized in Table \ref{table1}. The coupling ratio, $\lambda=J/(J+Q_m)$, of the QPT point depends strongly on the lattice geometry, and also on the degrees of freedom of the SU(N) spin. Therefore, we introduce a normalized coupling constant defined as $\Lambda=\lambda/\lambda_{c}$, where $\lambda_{c}$ is the critical value at the QPT point. From this definition, one can easily see that $\Lambda=0$ and $1$ correspond to the dimer limit and the QPT point, respectively. \begin{table} \begin{center} \begin{tabular}{ccc} \hline \hline SU(N) & $JQ_2$ & $JQ_3$\\ \hline 2 & $\lambda_c=0.042$ & $\lambda_c=0.456$\\ 3 & $\lambda_c=0.665$ & $\lambda_c=0.796$\\ 4 & $\lambda_c=0.917$ & $\lambda_c=0.985$\\ \hline \hline \end{tabular} \caption{\label{table1} Critical points of SU(N) $JQ_m$ models. $\lambda_c$ is the critical value of the coupling ratio defined as $\lambda \equiv J/(J+Q_m)$, where $J$ and $Q_m$ are the coupling constants. All values are given in ref.~\cite{HaradaK2013}. } \end{center} \end{table} \section{Numerical Results and Finite-Size Scaling Analysis} \label{ResultsDiscussion} In Figs. \ref{fig2a} and \ref{fig2b}, we show the temperature dependence of $C_R$, $B_R$, $\xi$, and $S({\boldsymbol Q}_c)$ at $\Lambda=0.5$, midway between the QPT point and the dimer limit.
Since clear crossings are always observed for $0 \le \Lambda \lesssim 1$ as the temperature decreases, the thermal transition from the paramagnetic to the VBS phase is expected to be of second order. \begin{figure}[htb] \begin{center} \includegraphics[width=0.6\linewidth]{fig2_a.eps} \end{center} \vspace{-0.5cm} \caption{\label{fig2a} (Color online) Temperature dependence of $C_R$, $B_R$, $\xi/L$, and $S({\boldsymbol Q}_c)L^{-\frac{\gamma}{\nu}}$ in the SU(3) square-lattice model at $\Lambda=0.5$. } \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.6\linewidth]{fig2_b.eps} \end{center} \vspace{-0.5cm} \caption{\label{fig2b} (Color online) Temperature dependence of $C_R$, $B_R$, $\xi/L$, and $S({\boldsymbol Q}_c)L^{-\frac{\gamma}{\nu}}$ for the SU(4) honeycomb-lattice model at $\Lambda=0.5$.} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.7\linewidth]{fig3.eps} \end{center} \vspace{-0.7cm} \caption{\label{fig3} (Color online) (a) Critical temperature and (b) renormalization group eigenvalue, $y_t$, for temperature in the square-lattice case. The open squares (circles) are the SU(3) (SU(4)) results. $y_t$ is estimated by extrapolation to the thermodynamic limit. $\Lambda=0$ corresponds to the dimer limit (where $J=0$), and $\Lambda=1$ is the QPT point.} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=\linewidth]{fig4.eps} \end{center} \vspace{-0.7cm} \caption{\label{fig4} (Color online) $\Lambda$ dependence of effective $\eta$ and $\gamma/\nu$ of SU(N) $JQ_2$ models. All values were evaluated from the assumptions $C(R=L/\sqrt{2})|_{T\sim T_c} \sim L^{-\eta}$ and $S({\boldsymbol Q}_c)|_{T\sim T_c} \sim L^{\frac{\gamma}{\nu}}$, which are approximately satisfied in the vicinity of the critical temperatures.
The vertical colored lines are critical temperatures and the black horizontal lines correspond to the values of the exponents for the 2D Ising universality class ($\eta=1/4$ and $\gamma/\nu=7/4$).} \end{figure} To discuss the universality class, we performed an FSS analysis of $\xi$, $C_R$, $B_R$, and $S({\boldsymbol Q}_c)$, assuming the scaling forms, $\xi/L \sim g_{\xi}[ L^{y_t}(T-T_c) ]$, $C_R \sim g_{C_R}[L^{y_t}(T-T_c)]$, $B_R \sim g_{B_R}[L^{y_t}(T-T_c)]$, and $S({\boldsymbol Q}_c)L^{-\frac{\gamma}{\nu}} \sim g_{S_Q}[L^{y_t}(T-T_c)]$, where $y_t={\nu}^{-1}$ and $g_X[x]$ is a scaling function. We applied the Bayesian scaling analysis~\cite{HaradaKBayes2011} to the FSS analysis of the larger-system-size sets and estimated the values as follows. First, the critical temperature, $T_c$, and $y_t$ were evaluated from $\xi$ and $C_R$ (or $\xi$ and the Binder ratio $B_R$), because their scaling forms contain only two variables, $T_c$ and $y_t$. Both $T_c$ and $y_t$ were optimized simultaneously from the $\xi$ and $C_R$ data set. In detail, we evaluated $T_c$ and $y_t$ for several data sets labeled $L_{{\rm max}}$ that include four different system sizes, for example, $L_{{\rm max}}=128$ includes $L=\{48,64,96,128\}$, $L_{{\rm max}}=96$ includes $L=\{48,60,72,96\}$, and so on. Since apparent system-size dependence is observed for $\Lambda > 0$, we evaluated the extrapolated values of $T_c$ and $y_t$ in the limit $L_{{\rm max}} \rightarrow \infty$ from the large-system sets. (One example of this size dependence is the result at $\Lambda=0.15$ for SU(3) shown in Fig. \ref{fig11}.) After we obtained $T_c$ and $y_t$ for the thermodynamic limit, $\eta$ and $\gamma/\nu$ were independently estimated from the correlation function, $C({\boldsymbol r})$, and $S({\boldsymbol Q}_c)$. \begin{figure}[htb] \begin{center} \includegraphics[width=0.95\linewidth]{fig11.eps} \end{center} \vspace{-0.7cm} \caption{\label{fig11} (Color online) System-size dependence of $y_t$ estimated from $B_R$. 
All $y_t$ values were evaluated from the Bayesian scaling analysis for several data sets labeled $L_{{\rm max}}$ (see text). The extrapolated values are expected to be those in the thermodynamic limit. The dotted line is a guide for the eye. The inset is the same result plotted as a function of $\beta/L_{{\rm max}}$.} \end{figure} We summarize the estimated $y_t(=\nu^{-1})$ and $T_c$ in Fig. \ref{fig3} for the square-lattice case (the SU(N) $JQ_2$ model). In the square-lattice model, $y_t$ ($\nu$) monotonically decreases (increases) as the system approaches the quantum critical point, in both the SU(3) and the SU(4) cases. In contrast to $y_t$, we observe that $\eta$ and $\gamma/\nu$ take constant values for $\Lambda<0.97$. Figures \ref{fig4} (a) and (b) show the $\Lambda$ dependence of the effective $\eta$ estimated from the assumption that $C(R) \sim R^{-\eta}$. From Figs. \ref{fig4} (a) and (b), it is apparent that $\eta$ clearly crosses $\eta=1/4$ at the critical temperatures within the error bars. In the same manner, the effective $\gamma/\nu$ is estimated from the form, $S({\boldsymbol Q}_c) \sim L^{\gamma/\nu}$. Figures \ref{fig4} (c) and (d) present $\gamma/\nu$ evaluated from the data for $L \ge 96$. We can confirm from Fig. \ref{fig4} that $\gamma/\nu$ crosses the value $7/4$ at the critical temperatures. Thus we conclude that $\eta$ and $\gamma/\nu$ satisfy $\eta=1/4$ and $\gamma/\nu=7/4$ at the critical temperatures, within the error bars. The obtained exponents are the same as those of the 2D Ising universality class. $y_t$ ($\nu$) itself varies depending on $\Lambda$, but the other exponents, such as $\eta$ and $\gamma/\nu$, are constant. This behavior is known as the 2D Ising weak universality~\cite{SuzukiM1974} and is consistent with the results reported in Ref.~\cite{JinS2013} for the SU(2) $JQ_3$ model. To approach the QPT point from the finite temperature region, we performed these calculations at very low fixed temperatures by varying $\lambda$.
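As an aside for readers less familiar with such analyses, the idea behind the finite-size-scaling collapse used above, e.g., $\xi/L \sim g_{\xi}[L^{y_t}(T-T_c)]$, can be illustrated with a small synthetic example. The snippet below is a deliberately crude collapse on fabricated data (it is not the Bayesian scaling analysis of Ref.~\cite{HaradaKBayes2011}; the scaling function and all parameter values are invented for illustration): correctly rescaled data from different $L$ fall on a single curve, so a simple nearest-neighbor spread measure is smaller near the true $y_t$ than for a wrong exponent.

```python
import numpy as np

def collapse_spread(data, Tc, yt):
    """Crude collapse quality: sort points by the rescaled variable
    x = L**yt * (T - Tc) and measure the spread between neighbors.
    When curves from different L collapse onto one function, this is small."""
    xs = np.array([L**yt * (T - Tc) for L, T, _ in data])
    ys = np.array([y for _, _, y in data])
    order = np.argsort(xs)
    return np.mean(np.abs(np.diff(ys[order])))

# synthetic xi/L data obeying xi/L = g[L^{y_t}(T - T_c)] with an invented g
Tc_true, yt_true = 1.0, 1.2
g = lambda x: 1.0 / (1.0 + np.exp(x))          # smooth, monotone toy scaling function
data = [(L, T, g(L**yt_true * (T - Tc_true)))
        for L in (24, 48, 96)
        for T in np.linspace(0.9, 1.1, 21)]

good = collapse_spread(data, Tc_true, yt_true)  # correct exponent
bad = collapse_spread(data, Tc_true, 2.0)       # wrong exponent -> worse collapse
print(good < bad)
```

In practice one minimizes such a collapse cost (or, as in the analysis above, applies Bayesian scaling inference) over $T_c$ and $y_t$ simultaneously.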
With limited system size, we observed an apparent increase in $y_t$. However, as we discuss below, this is due to a crossover from mean-field-type behavior to the true asymptotic behavior, and should not be taken as evidence of a first-order transition. (This is a slightly confusing point, since the mean-field value $y_t=2$ happens to be equal to the expected value for a first-order transition in two dimensions.) Figure \ref{fig11} shows the system-size dependence of $y_t$ at $k_BT_c/(J+Q_2)=1/20$, $1/32$, $1/64$, and $1/256$ for the SU(3) case. Each value of $y_t$ is the FSS result of $B_R$ for several data sets labeled $L_{{\rm max}}$ that include three different system sizes; for example, in the square-lattice case, $L_{{\rm max}}=48$ includes $L=\{24,32,48\}$, $L_{{\rm max}}=72$ contains $L=\{48,64,72\}$, and so on. At $k_BT_c/(J+Q_2)=1/20$, $y_t$ systematically decreases as $L_{{\rm max}}$ increases and takes the approximate value $y_t \sim 0.23$ in the thermodynamic limit. However, in the lower temperature region, we observe that $y_t$ exhibits a crossover from the mean-field value; the data for small $L_{{\rm max}}$ indicate $y_t \sim 2 (=\frac{1}{\nu})$, but $y_t$ decreases suddenly when the system size becomes larger than a characteristic length, $L_c$. In the cases of $k_BT_c/(J+Q_2)=1/32$ and $1/64$, we estimated $L_c \sim 72$ and $L_c \sim 192$, respectively. However, when $k_BT_c/(J+Q_2)=1/256$, we obtained data with the mean-field exponents in both the SU(3) and SU(4) cases. Therefore, we can obtain the correct values from the data for $L>L_c$ in the FSS analysis, whereas we estimate the mean-field values from the $L<L_c$ data. This $L_c$ is naturally related to the development of the correlation length along the imaginary-time direction, $\xi_{\tau}$; the thermal criticality can be observed after $\xi_{\tau}$ approximately exceeds the inverse temperature, $\beta$.
(It is expected that $L_c \sim \xi_{\tau} \sim a \beta$, where $a$ is an unknown constant.) In the present case, the correlation along the real-space direction is well developed for $\xi_{\tau} < \beta$. Thus, the system can be described by an effective model with long-range interactions. A similar crossover is observed for the critical exponent $y_t$ $(=1/\nu)$ in 2D Ising models with long-range interactions~\cite{Luijten1997}. In the Ising model, $y_t$ depends on the ratio between the interaction range and the system size. When the interaction range is significantly larger than the system size, mean-field-type behavior is observed. From the extrapolated results for $y_t$, it can be stated that the universality class of the thermal transition for $k_BT_c/(J+Q_2) \ge 1/64$ is explained by that of the 2D classical XY+$Z_4$ model, and is therefore the weak 2D Ising universality class. \begin{figure}[htb] \begin{center} \includegraphics[width=0.7\linewidth]{fig5.eps} \end{center} \vspace{-0.7cm} \caption{\label{fig5} (Color online) (a) Critical temperature and (b) renormalization group eigenvalue for temperature in the honeycomb-lattice case. $y_t$ is also evaluated from extrapolation to the thermodynamic limit. The inset of (b) is the system-size dependence of $y_t$ for the results obtained from the fixed-temperature calculations at $k_BT_c/(J+Q_3)=1/20$ and $1/64$ (see text). The label $t_c$ in the figure denotes $k_BT_c/(J+Q_3)$. The open squares (circles) are the results for the SU(3) (SU(4)) spins. The black dotted line is the value of the 2D three-state Potts case, $y_t=6/5$. } \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=\linewidth]{fig6.eps} \end{center} \vspace{-0.7cm} \caption{\label{fig6} (Color online) $\Lambda$ dependence of estimated $\eta$ and $\gamma/\nu$ for the $JQ_3$ model on the honeycomb lattice. (a) and (b) ((c) and (d)) show the $\eta$ ($\gamma/\nu$) results for the SU(3) and SU(4) cases, respectively.
The vertical colored lines denote critical temperatures and the black horizontal lines are the critical exponents for the 2D three-state Potts model, $\eta=4/15$ and $\gamma/\nu=26/15$.} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.95\linewidth]{fig10.eps} \end{center} \vspace{-0.7cm} \caption{\label{fig10} (Color online) Finite-size scaling analysis for the SU(3) honeycomb-lattice case at $k_BT_c/(J+Q_3)=1/64$. (a) Correlation length and (b) static structure factor. } \end{figure} Next, we focus on the criticality of the SU(N) $JQ_3$ model on the honeycomb lattice. In Fig. \ref{fig5}, we summarize $y_t=\nu^{-1}$ and $T_c$ for the SU(3) and SU(4) $JQ_3$ models. We find that $y_t=6/5$ ($\nu=5/6$) is well satisfied even in the vicinity of the QPT limit of $\Lambda=1$ and that the size dependence of $y_t$ is quite small for $\Lambda \lesssim 0.95$. This value is consistent with that of the 2D three-state Potts universality class. In Fig. \ref{fig6}, we show $\eta$ and $\gamma/\nu$, which were estimated in the same manner as in the square-lattice case. The 2D three-state Potts universality is also confirmed directly; $\eta=4/15$ and $\gamma/\nu=26/15$ are satisfied at the critical temperatures within error bars. The present columnar VBS pattern is characterized by the $\pi/3$-rotational symmetry breaking, reflecting the honeycomb-lattice background. Thus, it is expected that the related classical model with the same universality class is the 2D XY+$Z_3$ model. Since the $Z_3$ field is strongly relevant in two dimensions, the exponents are not affected by the coupling constants $J$ and $Q_3$. The fact that the $Z_3$ field is relevant may help us discuss the possibility of DCP occurring.
If the present honeycomb $JQ_3$ model can be well mapped onto the 2D three-state Potts model and the change in the coupling ratio can be regarded as the variation of certain parameters, for example, the transverse field in the conventional 2D Ising model, the criticality in the QPT limit is explained by the 3D three-state Potts model. In that case, the QPT should exhibit a weak first-order transition~\cite{WuFY3D3Potts,WuFY3D3Potts2,BloteHWJ1979,JankeW1997} and the first-order transition line should extend in the finite-temperature region. The length of the first-order transition line may be finite but is too short to be observed (see Fig.~\ref{fig7} (b)). This means that the value of $\nu$ should change from the 2D three-state Potts value to the trivial value of $\nu=1/D$ $(D=2)$ via the value at the multicritical fixed point. However, such crossover behavior is not observed when the system approaches the QPT point. We also performed the fixed-temperature calculations for the honeycomb-lattice case. When we vary $\lambda$ for fixed $k_BT/(J+Q_3)=1/64$, where the critical point corresponds to $\Lambda \sim 0.99$, the 2D three-state Potts universality is still observed. Figure \ref{fig10} shows the FSS results for the SU(3) case. We obtain data collapse for $L>80$ if we set the critical exponents to those of the 2D three-state Potts universality. In the case of the honeycomb-lattice model, we expect that the crossover behavior from the mean-field theory exists for $L<80$, but it is very weak. Therefore, it is difficult to identify the conventional system-size dependence. This result indicates that the development of $L_c$ is relatively suppressed in comparison with the square-lattice case at the same temperature. The obtained thermal phase diagram for $\Lambda \lesssim 0.99$ supports the possibility of scenario (a) in Fig. 
\ref{fig7}, because it seems unlikely that $\nu$ will approach the trivial value of $1/D$ in both the square-lattice and the honeycomb-lattice cases. If scenario (b) occurs, the multicritical point should exist at quite a low temperature, i.e., $k_BT/(J+Q_m) < O(10^{-2})$. This is still consistent with our previous discussion of the QPT point~\cite{HaradaK2013}; a systematic increase in $\nu_{\rm QPT}$ towards the trivial value is observed for $L>128$ in the SU(3) square-lattice model. If the dynamical exponent for the DCP is unity, $k_BT/(J+Q_m) < O(10^{-2})$ corresponds to the length scale $L > O(10^2)$. This implies that the correlation length is very large and almost diverging. \section{Summary} \label{Summary} In this paper, we have investigated the thermal transitions of $JQ_2$ models on the square lattice and $JQ_3$ models on the honeycomb lattice for SU(3) and SU(4) spins. We have found that the criticality of the SU(N) square-lattice model is well explained by the 2D weak Ising universality class in both the SU(3) and SU(4) cases, which is in agreement with Jin and Sandvik's result~\cite{JinS2013} for the SU(2) $JQ_3$ model. The thermal exponent, $\nu$, monotonically increases as the system approaches the QPT limit, and the decrease in $\nu$ that should occur if $\nu$ eventually reaches its first-order transition value of $1/D$ has not been observed. Thus, a first-order transition appears to be unlikely for $k_BT_c/(J+Q_m) > O(10^{-2})$. In the honeycomb-lattice case, reflecting the fact that the $Z_3$ field is strongly relevant, $\nu$ always exhibits the 2D three-state Potts value. From the obtained results, we have discussed possible scenarios for the thermal phase diagram. If a first-order transition occurs, we may observe critical behavior with strong system-size corrections. However, for $k_BT_c/(J+Q_m)>1/64$, crossover behavior is not clearly observed in our results.
To determine which thermal phase diagram, (a) or (b), occurs in the present models, numerical calculations for extremely large system sizes are required, because a drastic growth of $L_c$ is expected in the vicinity of the QPT. \section*{Acknowledgments} \label{ackno} We thank T. Okubo for fruitful discussions. This work was supported by MEXT Grants-in-Aid for Scientific Research (B) (25287097) and Scientific Research (C) (26400392). We are grateful for the use of the computational resources of the K computer at the RIKEN Advanced Institute for Computational Science through the HPCI System Research project (Project IDs: hp120283 and hp130081). We also thank the ISSP Supercomputer Center at the University of Tokyo and the Research Center for Nano-micro Structure Science and Engineering at the University of Hyogo for the use of their numerical resources.
\section{Introduction} Recent work in large-scale data analysis and Randomized Linear Algebra (RLA) has focused on developing so-called sketching algorithms: given a data set and an objective function of interest, construct a small ``sketch'' of the full data set, e.g., by using random sampling or random projection methods, and use that sketch as a surrogate to perform computations of interest for the full data set (see~\cite{Mah-mat-rev_BOOK} for a review). Most effort in this area has adopted an \emph{algorithmic perspective}, whereby one shows that, when the sketches are constructed appropriately, one can obtain answers that are approximately as good as the exact answer for the input data at hand, in less time than would be required to compute an exact answer for the data at hand. From a \emph{statistical perspective}, however, one is often more interested in how well a procedure performs relative to an hypothesized model than how well it performs on the particular data set at hand. Thus an important question to consider is whether the insights from the algorithmic perspective of sketching carry over to the statistical setting. To address this, in this paper, we develop a unified approach that considers both the \emph{statistical perspective} and \emph{algorithmic perspective} on recently-developed randomized sketching algorithms in RLA, and we provide bounds on two statistical objectives for several types of random projection and random sampling sketching algorithms. \subsection{Overview of the problem} The problem we consider in this paper is the ordinary least-squares (LS or OLS) problem: given as input a matrix $X \in \mathbb{R}^{n \times p}$ of observed features or covariates and a vector $Y \in \mathbb{R}^n$ of observed responses, return as output a vector $\beta_{OLS}$ that solves the following optimization problem: \begin{equation} \label{NoiseLinMod} \beta_{OLS} = \arg \min_{\beta\in\mathbb{R}^p}\|Y - X \beta\|_2^2 . 
\end{equation} \noindent We will assume that $n$ and $p$ are both very large, with $n \gg p$, and for simplicity we will assume $\mbox{rank}(X) = p$, e.g., to ensure a unique full-dimensional solution. The LS solution, $\beta_{OLS} = (X^T X)^{-1} X^T Y$, has a number of well-known desirable statistical properties~\cite{ChatterjeeHadi88}; and it is also well-known that the running time or computational complexity for this problem is $O(n p^2)$~\cite{GVL96}.\footnote{That is, $O(n p^2)$ time suffices to compute the LS solution from Problem~(\ref{NoiseLinMod}) for arbitrary or worst-case input, with, e.g., the Cholesky Decomposition on the normal equations, with a QR decomposition, or with the Singular Value Decomposition~\cite{GVL96}.} For many modern applications, however, $n$ may be on the order of $10^6-10^9$ and $p$ may be on the order of $10^3-10^4$, and thus computing the exact LS solution with traditional $O(n p^2)$ methods can be computationally challenging. This, coupled with the observation that approximate answers often suffice for downstream applications, has led to a large body of work on developing fast approximation algorithms to the LS problem~\cite{Mah-mat-rev_BOOK}. One very popular approach to reducing computation is to perform LS on a carefully-constructed ``sketch'' of the full data set. That is, rather than computing a LS estimator from Problem~(\ref{NoiseLinMod}) from the full data $(X,Y)$, generate ``sketched data'' $(SX, SY)$, where $S \in \mathbb{R}^{r \times n}$, with $r \ll n$, is a ``sketching matrix,'' and then compute a LS estimator from the following sketched problem: \begin{equation} \label{NoiseLinModSketched} \beta_S = \arg \min_{\beta \in \mathbb{R}^p}\|SY - S X \beta\|_2^2. \end{equation} \noindent Once the sketching operation has been performed, the additional computational complexity of $\beta_S$ is $O(r p^2)$, i.e., one can simply call a traditional LS solver on the sketched problem.
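To make the sketch-and-solve template concrete, here is a minimal numerical sketch of Problem~(\ref{NoiseLinModSketched}) with a Gaussian sketching matrix (all dimensions, the noise model, and the random seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, r = 2000, 5, 200                 # n >> p, and p < r << n

X = rng.standard_normal((n, p))
beta_true = rng.standard_normal(p)
Y = X @ beta_true + rng.standard_normal(n)

# exact OLS solution: O(n p^2)
beta_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)

# sketch-and-solve: S has i.i.d. N(0, 1/r) entries; solve the r x p problem
S = rng.standard_normal((r, n)) / np.sqrt(r)
beta_s, *_ = np.linalg.lstsq(S @ X, S @ Y, rcond=None)

# compare residual sums of squares on the FULL data
rss_ols = np.sum((Y - X @ beta_ols) ** 2)
rss_s = np.sum((Y - X @ beta_s) ** 2)
print(rss_s / rss_ols)                 # at least 1; close to 1 when r >> p
```

Since OLS minimizes the residual sum of squares, the printed ratio is at least one; for $r$ well above $p$ it is typically close to one.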
Thus, when using a sketching algorithm, two criteria are important: first, ensure the accuracy of the sketched LS estimator is comparable to, e.g., not much worse than, the performance of the original LS estimator; and second, ensure that computing and applying the sketching matrix $S$ is not too computationally intensive, e.g., that it is much faster than solving the original problem exactly. \subsection{Prior results} Random sampling and random projections provide two approaches to construct sketching matrices $S$ that satisfy both of these criteria and that have received attention recently in the computer science community. In terms of running time guarantees, the running time bottleneck for random projection algorithms for the LS problem is the application of the projection to the input data, i.e., actually performing the matrix-matrix multiplication to implement the projection and compute the sketch. By using fast Hadamard-based random projections, however, Drineas et al.~\cite{DrinMuthuMahSarlos11} developed a random projection algorithm that runs on arbitrary or worst-case input in $o(np^2)$ time. (See~\cite{DrinMuthuMahSarlos11} for a precise statement of the running time.) As for random sampling, Drineas et al.~\cite{DMM06,DMMW12_JMLR} have shown that if the random sampling is performed with respect to nonuniform importance sampling probabilities that depend on the \emph{empirical statistical leverage scores} of the input matrix $X$, i.e., the diagonal entries of the \emph{hat matrix} $H = X(X^T X)^{-1} X^T$, then one obtains a random sampling algorithm that achieves much better results for arbitrary or worst-case input. Leverage scores have a long history in robust statistics and experimental design. In the robust statistics community, samples with high leverage scores are typically flagged as potential outliers (see, e.g.,~\cite{ChatterjeeHadi86,ChatterjeeHadi88, Hampel86, HW78, Huber81}).
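The leverage scores just described can be computed exactly, at $O(np^2)$ cost, from a thin QR factorization, since they are the squared row norms of any orthonormal basis for the column space of $X$. The following minimal sketch also forms a re-scaled leverage-based row sample of the kind analyzed in this line of work (dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 1000, 4
X = rng.standard_normal((n, p))

# exact leverage scores: squared row norms of an orthonormal basis, O(n p^2)
Q, _ = np.linalg.qr(X)               # thin QR: Q is n x p with orthonormal columns
lev = np.sum(Q**2, axis=1)           # h_ii, the diagonal of the hat matrix

print(lev.sum())                     # leverage scores sum to rank(X) = p

# leverage-based importance sampling probabilities
probs = lev / lev.sum()
r = 100
idx = rng.choice(n, size=r, replace=True, p=probs)
# sampled rows re-scaled by 1/sqrt(r * p_i), the usual unbiasing convention
SX = X[idx] / np.sqrt(r * probs[idx])[:, None]
```

Each leverage score lies in $[0,1]$, and their sum equals $p$, so rows with $h_{ii}$ far above $p/n$ are the ones the importance sampling distribution favors.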
In the experimental design community, samples with high leverage have been shown to improve overall efficiency, provided that the underlying statistical model is accurate (see, e.g.,~\cite{Royall70, Zaslavsky08}). This should be contrasted with their use in theoretical computer science. From the algorithmic perspective of worst-case analysis, as adopted by Drineas et al.~\cite{DrinMuthuMahSarlos11} and Drineas et al.~\cite{DMMW12_JMLR}, samples with high leverage tend to contain the most important information for subsampling/sketching. Thus it is beneficial for worst-case analysis to bias the random sample to include samples with large leverage scores, or to rotate with a random projection to a random basis where the leverage scores are approximately uniformized. The running-time bottleneck for this leverage-based random sampling algorithm is the computation of the leverage scores of the input data; and the obvious well-known algorithm for this involves $O(np^2)$ time to perform a QR decomposition to compute an orthogonal basis for $X$~\cite{GVL96}. By using fast Hadamard-based random projections, however, Drineas et al.~\cite{DMMW12_JMLR} showed that one can compute approximate QR decompositions and thus approximate leverage scores in $o(np^2)$ time; and (based on previous work~\cite{DMM06}) this immediately implies a leverage-based random sampling algorithm that runs on arbitrary or worst-case input in $o(np^2)$ time~\cite{DMMW12_JMLR}. Readers interested in the practical performance of these randomized algorithms should consult \textsc{Blendenpik}~\cite{AMT10} or \textsc{LSRN}~\cite{MSM14_SISC}.
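The uniformizing effect of a randomized Hadamard rotation can be seen in a small experiment. The sketch below is illustrative only: it builds an explicit Sylvester-construction Hadamard matrix for clarity (a fast $O(n\log n)$ transform would be used in practice), and the planted high-leverage row and all dimensions are artificial. It plants a single dominant row and checks that the rotation spreads its leverage out before uniform subsampling:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def max_leverage(A):
    Q, _ = np.linalg.qr(A)
    return np.max(np.sum(Q**2, axis=1))

rng = np.random.default_rng(2)
n, p, r = 1024, 4, 128
X = rng.standard_normal((n, p))
X[0] *= 1000.0                       # plant one very high-leverage row

# randomized Hadamard rotation H D: D holds random signs, H is orthonormal
D = rng.choice([-1.0, 1.0], size=n)
HDX = hadamard(n) @ (D[:, None] * X) / np.sqrt(n)

# uniformly subsample r rows (with re-scaling) to form the Hadamard sketch S X
idx = rng.choice(n, size=r, replace=False)
SX = np.sqrt(n / r) * HDX[idx]

print(max_leverage(X))               # near 1: the planted row dominates
print(max_leverage(HDX))             # much smaller: leverage is spread out
```

After the rotation, uniform sampling becomes a reasonable surrogate for leverage-based sampling, which is the mechanism behind the fast Hadamard-based projections cited above.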
In terms of accuracy guarantees, both Drineas et al.~\cite{DrinMuthuMahSarlos11} and Drineas et al.~\cite{DMMW12_JMLR} prove that their respective random projection and leverage-based random sampling LS sketching algorithms each achieve the following worst-case (WC) error guarantee: for any arbitrary $(X, Y)$, \begin{equation} \label{eqn:ErrorWCE} \|Y - X \beta_S \|_2^2 \leq (1+ \kappa)\|Y - X \beta_{OLS} \|_2^2, \end{equation} with high probability for some pre-specified error parameter $\kappa \in (0,1)$. This $1+ \kappa$ relative-error guarantee\footnote{The nonstandard parameter $\kappa$ is used here for the error parameter since $\epsilon$ is used below to refer to the noise or error process.} is extremely strong, and it is applicable to arbitrary or worst-case input. That is, whereas in statistics one typically assumes a model, e.g., a standard linear model on $Y$, \begin{equation} \label{EqnLinModel} Y = X \beta + \epsilon, \end{equation} where $\beta \in \mathbb{R}^p$ is the true parameter and $\epsilon \in \mathbb{R}^n$ is a standardized noise vector, with $\mathbb{E}[\epsilon]=0$ and $\mathbb{E}[\epsilon\epsilon^T]=I_{n \times n}$, in Drineas et al.~\cite{DrinMuthuMahSarlos11} and Drineas et al.~\cite{DMMW12_JMLR} no statistical model is assumed on $X$ and $Y$, and thus the running time and quality-of-approximation bounds apply to any arbitrary $(X,Y)$ input data. \subsection{Our approach and main results} In this paper, we address the following fundamental questions. First, under a standard linear model, e.g., as given in Eqn.~(\ref{EqnLinModel}), what properties of a sketching matrix $S$ are sufficient to ensure low statistical error, e.g., mean-squared error? Second, how do existing random projection algorithms and leverage-based random sampling algorithms perform by this statistical measure?
Third, how does this relate to the properties of a sketching matrix $S$ that are sufficient to ensure low worst-case error, e.g., of the form of Eqn.~(\ref{eqn:ErrorWCE}), as has been established previously~\cite{DrinMuthuMahSarlos11,DMMW12_JMLR,Mah-mat-rev_BOOK}? We address these related questions in a number of steps. In Section~\ref{SecFramework}, we will present a framework for evaluating the algorithmic and statistical properties of randomized sketching methods in a unified manner; and we will show that providing WC error bounds of the form of Eqn.~(\ref{eqn:ErrorWCE}) and providing bounds on two related statistical objectives boil down to controlling different structural properties of how the sketching matrix $S$ interacts with the left singular subspace of the design matrix. In particular, we will consider the oblique projection matrix, $\Pi_S^U = U (SU)^{\dagger} S$, where $(\cdot)^\dagger$ denotes the Moore-Penrose pseudo-inverse of a matrix and $U$ is the left singular matrix of $X$. This framework will allow us to draw a comparison between the WC error and two related statistical efficiency criteria, the statistical prediction efficiency (PE) (which is based on the prediction error $\mathbb{E}[\|X(\widehat{\beta} - \beta)\|_2^2]$ and which is given in Eqn.~(\ref{DefnSPE}) below) and the statistical residual efficiency (RE) (which is based on residual error $\mathbb{E}[\|Y - X \widehat{\beta}\|_2^2]$ and which is given in Eqn.~(\ref{DefnSRE}) below); and it will allow us to provide sufficient conditions that any sketching matrix $S$ must satisfy in order to achieve performance guarantees for these two statistical objectives. In Section~\ref{SecMainResults}, we will present our main theoretical results, which consist of bounds for these two statistical quantities for variants of random sampling and random projection sketching algorithms. 
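The role of the oblique projection $\Pi_S^U$ can also be made concrete numerically: whenever $SU$ has full column rank, the sketched prediction satisfies $X\beta_S = \Pi_S^U Y$, and $\Pi_S^U$ is idempotent and fixes the column space of $X$. A minimal check (dimensions and seed arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, r = 500, 4, 60
X = rng.standard_normal((n, p))
Y = rng.standard_normal(n)
S = rng.standard_normal((r, n)) / np.sqrt(r)

U, _, _ = np.linalg.svd(X, full_matrices=False)   # left singular basis of X
Pi = U @ np.linalg.pinv(S @ U) @ S                # oblique projection Pi_S^U

beta_s = np.linalg.pinv(S @ X) @ (S @ Y)          # sketched LS estimator

# the sketched prediction is exactly the oblique projection of Y
print(np.allclose(X @ beta_s, Pi @ Y))
# Pi is idempotent (a projection) and fixes the column space of X
print(np.allclose(Pi @ Pi, Pi), np.allclose(Pi @ U, U))
```

These identities are what reduce the analysis of the worst-case and statistical criteria to structural properties of how $S$ interacts with the left singular subspace of $X$.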
In particular, we provide upper bounds on the PE and RE (as well as the worst-case WC) for four sketching schemes: (1) an approximate leverage-based random sampling algorithm, as is analyzed by Drineas et al.~\cite{DMMW12_JMLR}; (2) a variant of leverage-based random sampling, where the random samples are \emph{not} re-scaled prior to their inclusion in the sketch, as is considered by Ma et al.~\cite{MMY15_JMLR}; (3) a vanilla random projection algorithm, where $S$ is a random matrix containing i.i.d. Gaussian or Rademacher random variables, as is popular in statistics and scientific computing; and (4) a random projection algorithm, where $S$ is a random Hadamard-based random projection, as analyzed in~\cite{BoutsGitt12}. For sketching schemes (1), (3), and (4), our upper bounds for each of the two measures of statistical efficiency are identical up to constants; and they show that the RE scales as $1+ \frac{p}{r}$, while the PE scales as $\frac{n}{r}$. In particular, this means that it is possible to obtain good bounds for the RE when $p \lesssim r \ll n$ (in a manner similar to the sampling complexity of the WC bounds); but in order to obtain even near-constant bounds for PE, $r$ must be at least of constant order compared to $n$. For sketching scheme (2), we show, on the other hand, that under the (strong) assumption that there are $k$ ``large'' leverage scores and the remaining $n-k$ are ``small,'' the WC scales as $1+ \frac{p}{r}$, the RE scales as $1+ \frac{pk}{rn}$, and the PE scales as $\frac{k}{r}$. That is, sharper bounds are possible for leverage-score sampling without re-scaling in the statistical setting, but much stronger assumptions are needed on the input~data. We also present a lower bound developed in subsequent work by Pilanci and Wainwright~\cite{PilanciWainwright}, which shows that under general conditions on $S$, the upper bound of $\frac{n}{r}$ for PE cannot be improved.
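These scalings can be checked by direct simulation under the linear model of Eqn.~(\ref{EqnLinModel}). The rough Monte Carlo sketch below estimates the two efficiency ratios for a Gaussian sketch, i.e., scheme (3); the dimensions, seed, and trial count are arbitrary, and with $n=1000$, $p=4$, $r=100$ one expects the RE to be near $1+p/r$ while the PE is of order $n/r$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, r, trials = 1000, 4, 100, 200
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)        # "true" parameter of the linear model

pe_num = pe_den = re_num = re_den = 0.0
for _ in range(trials):
    eps = rng.standard_normal(n)     # standardized noise
    Y = X @ beta + eps
    b_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    S = rng.standard_normal((r, n)) / np.sqrt(r)   # Gaussian sketch
    b_s, *_ = np.linalg.lstsq(S @ X, S @ Y, rcond=None)
    pe_num += np.sum((X @ (beta - b_s)) ** 2)      # prediction errors
    pe_den += np.sum((X @ (beta - b_ols)) ** 2)
    re_num += np.sum((Y - X @ b_s) ** 2)           # residual errors
    re_den += np.sum((Y - X @ b_ols) ** 2)

print(pe_num / pe_den)   # C_PE estimate: of order n/r = 10
print(re_num / re_den)   # C_RE estimate: near 1 + p/r = 1.04
```

The contrast in the two printed ratios illustrates the main message: RE is controlled once $r$ modestly exceeds $p$, whereas PE remains of order $n/r$.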
Hence our upper bounds in Section~\ref{SecMainResults} on PE cannot be improved. In Section~\ref{SecDiscussion}, we will provide a brief discussion and conclusion. For space reasons, we do not include in this conference version the proofs of our main results or our empirical results that support our theoretical findings; but they are included in the technical report version of this paper~\cite{RaskuttiMahoney}. \subsection{Additional related work} Very recently, Ma et al.~\cite{MMY15_JMLR} considered statistical aspects of leverage-based sampling algorithms (called \emph{algorithmic leveraging} in~\cite{MMY15_JMLR}). Assuming a standard linear model on $Y$ of the form of Eqn.~(\ref{EqnLinModel}), the authors developed first-order Taylor approximations to the statistical RE of different estimators computed with leverage-based sampling algorithms, and they verified the quality of those approximations with computations on real and synthetic data. Taken as a whole, their results suggest that, if one is interested in the statistical performance of these randomized sketching algorithms, then there are nontrivial trade-offs that are not taken into account by standard WC analysis. Their approach, however, does not immediately apply to random projections or other more general sketching matrices. Further, the realm of applicability of the first-order Taylor approximation was not precisely quantified, and they left open the question of structural characterizations of random sketching matrices that are sufficient to ensure good statistical properties on the sketched data. We address these issues in this paper. Subsequent work by Pilanci and Wainwright~\cite{PilanciWainwright} also considers a statistical perspective on sketching. Amongst other results, they develop a lower bound which confirms that using a single randomized sketching matrix $S$ cannot achieve a better PE than $\frac{n}{r}$. This lower bound complements the upper bounds developed in this paper.
Their main focus is to use this insight to develop an iterative sketching scheme which yields bounds on the PE when an $r \times n$ sketch is applied repeatedly. \section{General framework and structural results} \label{SecFramework} In this section, we develop a framework that allows us to view the algorithmic and statistical perspectives on LS problems from a common perspective. We then use this framework to show that existing worst-case bounds, as well as our novel statistical bounds for the mean-squared errors, can be expressed in terms of different structural conditions on how the sketching matrix $S$ interacts with the data $(X,Y)$. \subsection{A statistical-algorithmic framework} Recall that we are given as input a data set, $(X, Y) \in \mathbb{R}^{n\times p} \times \mathbb{R}^n$, and the objective function of interest is the standard LS objective, as given in Eqn.~(\ref{NoiseLinMod}). Since we are assuming, without loss of generality, that $\mbox{rank}(X)=p$, we have that \begin{equation} \label{eqn:beta_opt_full} \beta_{OLS} = X^{\dagger}Y = (X^T X)^{-1}X^T Y, \end{equation} where $(\cdot)^{\dagger}$ denotes the Moore-Penrose pseudo-inverse of a matrix. To present our framework and objectives, let $S \in \mathbb{R}^{r \times n}$ denote an \emph{arbitrary} sketching matrix. That is, although we will be most interested in sketches constructed from random sampling or random projection operations, for now we let $S$ be \emph{any} $r \times n$ matrix. Then, we are interested in analyzing the performance of objectives characterizing the quality of a ``sketched'' LS objective, as given in Eqn.~(\ref{NoiseLinModSketched}), where again we are interested in solutions of the form \begin{equation} \label{eqn:beta_opt_sketched} \beta_S=(SX)^{\dagger}SY . \end{equation} (We emphasize that this does \emph{not} in general equal $((SX)^T SX)^{-1}(SX)^T SY$, since the inverse will \emph{not} exist if the sketching process does not preserve rank.)
Our goal here is to compare the performance of $\beta_S$ to $\beta_{OLS}$. We will do so by considering three related performance criteria, two of a statistical flavor, and one of a more algorithmic or worst-case flavor. From a statistical perspective, it is common to assume a standard linear model on $Y$, \begin{equation*} Y = X \beta + \epsilon, \end{equation*} where we remind the reader that $\beta \in \mathbb{R}^p$ is the true parameter and $\epsilon \in \mathbb{R}^n$ is a standardized noise vector, with $\mathbb{E}[\epsilon]=0$ and $\mathbb{E}[\epsilon\epsilon^T]=I_{n \times n}$. From this statistical perspective, we will consider the following two~criteria. \begin{itemize} \item The first statistical criterion we consider is the \emph{prediction efficiency} (PE), defined as follows: \begin{equation} \label{DefnSPE} C_{PE}(S) = \frac{\mathbb{E}[\|X (\beta - \beta_S)\|_2^2]}{\mathbb{E}[\|X (\beta - \beta_{OLS})\|_2^2]} , \end{equation} where the expectation $\mathbb{E}[\cdot]$ is taken over the random noise $\epsilon$. \item The second statistical criterion we consider is the \emph{residual efficiency} (RE), defined as follows: \begin{equation} \label{DefnSRE} C_{RE}(S) = \frac{\mathbb{E}[\|Y - X \beta_S\|_2^2]}{\mathbb{E}[\|Y - X \beta_{OLS}\|_2^2]} , \end{equation} where, again, the expectation $\mathbb{E}[\cdot]$ is taken over the random noise $\epsilon$. \end{itemize} Recall that the standard relative statistical efficiency for two estimators $\beta_1$ and $\beta_2$ is defined as $\mbox{eff}(\beta_1,\beta_2)=\frac{\mbox{Var}(\beta_1)}{\mbox{Var}(\beta_2)}$, where $\mbox{Var}(\cdot)$ denotes the variance of the estimator (see, e.g.,~\cite{Lehmann98}). For the PE, we have replaced the variance of each estimator by the mean-squared prediction error. For the RE, we use the term residual since for any estimator $\widehat{\beta}$, $Y - X \widehat{\beta}$ are the residuals for estimating $Y$. From an algorithmic perspective, there is no noise process $\epsilon$. 
Instead, $X$ and $Y$ are arbitrary, and $\beta$ is simply computed from Eqn.~(\ref{eqn:beta_opt_full}). To draw a parallel with the usual statistical generative process, however, and to understand better the relationship between various objectives, consider ``defining'' $Y$ in terms of $X$ by the following ``linear model'': \begin{equation*} Y = X \beta + \epsilon, \end{equation*} where $\beta \in \mathbb{R}^p$ and $\epsilon \in \mathbb{R}^n$. Importantly, $\beta$ and $\epsilon$ here represent different quantities than in the usual statistical setting. Rather than $\epsilon$ representing a noise process and $\beta$ representing a ``true parameter'' that is observed through a noisy $Y$, here in the algorithmic setting, we will take advantage of the rank-nullity theorem in linear algebra to relate $X$ and $Y$.\footnote{The rank-nullity theorem asserts that given any matrix $X \in \mathbb{R}^{n \times p}$ and vector $Y \in \mathbb{R}^n$, there exists a unique decomposition $Y = X \beta + \epsilon$, where $\beta$ is the projection of $Y$ on to the range space of $X^T$ and $\epsilon = Y-X\beta$ lies in the null-space of $X^T$~\cite{Meyer00}.} To define a ``worst-case model'' $Y = X \beta + \epsilon$ for the algorithmic setting, one can view the ``noise'' process $\epsilon$ as consisting of any vector that lies in the null-space of $X^T$. Then, since the choice of $\beta \in \mathbb{R}^p$ is arbitrary, one can construct any arbitrary or worst-case input data $Y$. From this algorithmic setup, we will consider the following~criterion. \begin{itemize} \item The algorithmic criterion we consider is the \emph{worst-case} (WC) error, defined as follows: \begin{equation} \label{DefnWCE} C_{WC}(S) = \sup_{Y} \frac{\|Y - X \beta_S\|_2^2}{\|Y - X \beta_{OLS}\|_2^2}.
\end{equation} \end{itemize} This criterion is worst-case since we take a supremum over $Y$, and it is the performance criterion that is analyzed in Drineas et al.~\cite{DrinMuthuMahSarlos11} and Drineas et al.~\cite{DMMW12_JMLR}, as bounded in Eqn.~(\ref{eqn:ErrorWCE}). Writing $Y$ as $X \beta + \epsilon$, where $X^T \epsilon = 0$, the worst-case error can be re-expressed as: \begin{equation*} C_{WC}(S) = \sup_{Y= X \beta + \epsilon,\; X^T \epsilon = 0} \frac{\|Y - X \beta_S\|_2^2}{\|Y - X \beta_{OLS}\|_2^2}. \end{equation*} Hence, in the worst-case algorithmic setup, we take a supremum over $\epsilon$, where $X^T \epsilon = 0$, whereas in the statistical setup, we take an expectation over $\epsilon$, where $\mathbb{E}[\epsilon] = 0$. Before proceeding, several other comments about this algorithmic-statistical framework and our objectives are worth mentioning. \begin{itemize} \item From the perspective of our two linear models, we have that $\beta_{OLS} = \beta + (X^T X)^{-1} X^T \epsilon$. In the statistical setting, since $\mathbb{E}[\epsilon \epsilon^T] = I_{n \times n}$, it follows that $\beta_{OLS}$ is a random variable with $\mathbb{E}[\beta_{OLS}] = \beta$ and $\mathbb{E}[(\beta - \beta_{OLS})(\beta - \beta_{OLS})^T] = (X^T X)^{-1}$. In the algorithmic setting, on the other hand, since $X^T \epsilon = 0$, it follows that $\beta_{OLS} = \beta$. \item $C_{RE}(S)$ is a statistical analogue of the worst-case algorithmic objective $C_{WC}(S)$, since both are based on the ratio $\frac{\|Y - X \beta_S\|_2^2}{\|Y - X \beta_{OLS}\|_2^2}$. The difference is that a $\sup$ over $Y$ in the algorithmic setting is replaced by an expectation over the noise $\epsilon$ in the statistical setting. A natural question is whether there is an algorithmic analogue of $C_{PE}(S)$.
Such a performance metric would be: \begin{equation} \label{eqn:nonexistent_obj} \sup_{Y} \frac{\|X (\beta - \beta_S)\|_2^2}{\|X (\beta-\beta_{OLS})\|_2^2}, \end{equation} where $X \beta$ is the projection of $Y$ onto the range space of $X$. However, since $\beta_{OLS} = \beta + (X^T X)^{-1} X^T \epsilon$ and $X^T \epsilon = 0$ in the algorithmic setting, we have $\beta_{OLS} = \beta$; the denominator of Eqn.~(\ref{eqn:nonexistent_obj}) therefore equals zero, and the objective in Eqn.~(\ref{eqn:nonexistent_obj}) is not well-defined. The ``difficulty'' of computing or approximating this objective parallels our results below, which show that approximating $C_{PE}(S)$ is much more challenging (in terms of the number of samples needed) than approximating $C_{RE}(S)$. \item In the algorithmic setting, the sketching matrix $S$ and the objective $C_{WC}(S)$ can depend on $X$ and $Y$ in any arbitrary way, but in the following we consider only sketching matrices that are either independent of both $X$ and $Y$ or depend only on $X$ (e.g., via the statistical leverage scores of $X$). In the statistical setting, $S$ is allowed to depend on $X$, but not on $Y$, as any dependence of $S$ on $Y$ might introduce correlation between the sketching matrix and the noise variable $\epsilon$. Removing this restriction is of interest, especially in light of the recent results that show that one can obtain WC bounds of the form Eqn.~(\ref{eqn:ErrorWCE}) by constructing $S$ by randomly sampling according to an importance sampling distribution that depends on the \emph{influence scores}---essentially the leverage scores of the matrix $X$ augmented with $-Y$ as an additional column---of the $(X, Y)$ pair. \item Both $C_{PE}(S)$ and $C_{RE}(S)$ are qualitatively related to quantities analyzed by Ma et al.~\cite{MMY15_JMLR}.
In addition, $C_{WC}(S)$ is qualitatively similar to $\mbox{Cov}(\widehat{\beta} | Y)$ in Ma et al., since in the algorithmic setting $Y$ is treated as fixed; and $C_{RE}(S)$ is qualitatively similar to $\mbox{Cov}(\widehat{\beta})$ in Ma et al., since in the statistical setting $Y$ is treated as random and coming from a linear model. That being said, the metrics and results we present in this paper are not directly comparable to those of Ma et al. since, e.g., they had a slightly different setup than we have here, and since they used a first-order Taylor approximation while we do not. \end{itemize} \subsection{Structural results on sketching matrices} We are now ready to develop structural conditions characterizing how the sketching matrix $S$ interacts with the data $X$; this will allow us to provide upper bounds for the quantities $C_{WC}(S), C_{PE}(S)$, and $C_{RE}(S)$. To do this, recall that given the data matrix $X$, we can express the singular value decomposition of $X$ as $X = U \Sigma V^T$, where $U \in \mathbb{R}^{n \times p}$ has orthonormal columns, i.e., $U^T U = I_{p \times p}$. In addition, we can define the \emph{oblique projection}~matrix \begin{equation} \Pi_S^U := U (SU)^\dagger S . \end{equation} Note that if $\mbox{rank}(SX) = p$, then $\Pi_S^U$ can be expressed as $\Pi_S^U = U (U^T S^T S U)^{-1} U^T S^T S$, since $U^T S^T S U$ is invertible. Importantly, however, depending on the properties of $X$ and how $S$ is constructed, it can easily happen that $\mbox{rank}(SX) < p$, even if $\mbox{rank}(X) = p$. Given this setup, we can now state the following lemma, which characterizes how $C_{WC}(S)$, $C_{PE}(S)$, and $C_{RE}(S)$ depend on different structural properties of $\Pi_S^U$ and $SU$.
\blems \label{LemProj} For the algorithmic setting, \begin{eqnarray*} C_{WC}(S) &=& 1 + \\ & & \hspace{-20mm} \sup_{\delta \in \mathbb{R}^p, U^T \epsilon = 0 } \biggl[ \frac{\| (I_{p \times p} - (SU)^{\dagger}(SU) )\delta\|_2^2}{\|\epsilon\|_2^2} + \frac{\|\Pi_S^U \epsilon \|_2^2}{\|\epsilon\|_2^2}\biggr]. \end{eqnarray*} For the statistical setting, \begin{equation*} C_{PE}(S) = \frac{\| (I _{p \times p}- (SU)^{\dagger} SU) \Sigma V^T \beta\|_2^2}{p} + \frac{\|\Pi_S^U\|_F^2}{p}, \end{equation*} and \begin{equation*} C_{RE}(S) = 1+ \frac{C_{PE}(S) - 1}{n/p - 1 } . \end{equation*} \elems \noindent Several points are worth making about Lemma~\ref{LemProj}. \begin{itemize} \item For all three criteria, the term involving $(SU)^{\dagger} SU$ is a ``bias'' term that is non-zero in the case that $\mbox{rank}(SU) < p$. For $C_{PE}(S)$ and $C_{RE}(S)$, this term corresponds exactly to the statistical bias; and if $\mbox{rank}(SU) = p$, meaning that $S$ is a \emph{rank-preserving} sketching matrix, then the bias term equals $0$, since $(SU)^{\dagger} SU = I_{p \times p}$. In practice, if $r$ is chosen smaller than $p$, or larger than but very close to $p$, it may happen that $\mbox{rank}(SU) < p$, in which case this bias is incurred. \item The final equality $C_{RE}(S) = 1+ \frac{C_{PE}(S) - 1}{n/p - 1 }$ shows that in general it is much more difficult (in terms of the number of samples needed) to obtain bounds on $C_{PE}(S)$ than on $C_{RE}(S)$---since $C_{RE}(S)$ re-scales $C_{PE}(S) - 1$ by roughly $p/n$, which is much less than $1$. This will be reflected in the main results below, where the scaling of $C_{RE}(S)$ will be a factor of $p/n$ smaller than that of $C_{PE}(S)$. In general, it is significantly more difficult to bound $C_{PE}(S)$, since $\mathbb{E}[\|X(\beta - \beta_{OLS})\|_2^2] = p$, whereas $\mathbb{E}[\|Y - X \beta_{OLS}\|_2^2] = n-p$, and so there is much less margin for error in approximating $C_{PE}(S)$.
\item In the algorithmic or worst-case setting, $\sup_{\epsilon \in \mathbb{R}^n \setminus \{ 0\},\; U^T \epsilon = 0 } \frac{\|\Pi_S^U \epsilon \|_2^2}{\|\epsilon\|_2^2}$ is the relevant quantity, whereas in the statistical setting $\|\Pi_S^U\|_F^2$ is the relevant quantity. The Frobenius norm enters in the statistical setting because we are taking an average over homoscedastic noise, and so the $\ell_2$ norm of the eigenvalues of $\Pi_S^U$ needs to be controlled. On the other hand, in the algorithmic or worst-case setting, the worst direction in the null-space of $U^T$ needs to be controlled, and thus the spectral norm enters. \end{itemize} \section{Main theoretical results} \label{SecMainResults} In this section, we provide upper bounds for $C_{WC}(S)$, $C_{PE}(S)$, and $C_{RE}(S)$, where $S$ corresponds to random sampling and random projection matrices. In particular, we provide upper bounds for four sketching matrices: (1) a vanilla leverage-based random sampling algorithm from Drineas et al.~\cite{DMMW12_JMLR}; (2) a variant of leverage-based random sampling, where the random samples are \emph{not} re-scaled prior to their inclusion in the sketch; (3) a vanilla random projection algorithm, where $S$ is a random matrix containing i.i.d. sub-Gaussian random variables; and (4) a random projection algorithm, where $S$ is a random Hadamard-based random projection, as analyzed in~\cite{BoutsGitt12}. \subsection{Random sampling methods} \label{SecSampling} Here, we consider random sampling algorithms. To do so, first define a random sampling matrix $\tilde{S} \in \mathbb{R}^{r \times n}$ as follows: $\tilde{S}_{ij} \in \{0, 1\}$ for all $(i,j)$ and $\sum_{j=1}^n \tilde{S}_{ij} = 1$, where each row is drawn independently from a multinomial distribution with probabilities $(p_i)_{i=1}^n$. The matrix of cross-leverage scores is defined as $L = U U^T \in \mathbb{R}^{n \times n}$, and $\ell_i = L_{ii}$ denotes the leverage score corresponding to the $i^{th}$ sample.
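As a concrete illustration (not part of the original analysis), the leverage scores and the mixed sampling distribution just defined can be computed directly from a thin SVD. The following sketch, with arbitrary dimensions and a uniform choice of $q$ chosen purely for illustration, verifies the basic leverage-score identities and constructs one draw of the re-scaled sketch $S_R = \tilde{S}W$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, r, theta = 200, 5, 50, 0.1
X = rng.standard_normal((n, p))

# Leverage scores: ell_i = ||U_{i,:}||_2^2 = (U U^T)_{ii}, from the thin SVD.
U, _, _ = np.linalg.svd(X, full_matrices=False)
ell = np.sum(U**2, axis=1)
assert np.isclose(ell.sum(), p)          # trace(U U^T) = p
assert np.all((ell >= 0) & (ell <= 1))   # 0 <= ell_i <= 1

# Mixed sampling probabilities: p_i = (1 - theta) * ell_i / p + theta * q_i.
q = np.full(n, 1.0 / n)                  # q_i: arbitrary distribution (uniform here)
probs = (1 - theta) * ell / p + theta * q
assert np.isclose(probs.sum(), 1.0)

# One draw of the re-scaled sketch S_R = S_tilde @ W: each of the r rows
# picks index j with probability p_j and is re-scaled by 1 / sqrt(r * p_j).
idx = rng.choice(n, size=r, p=probs)
S_R = np.zeros((r, n))
S_R[np.arange(r), idx] = 1.0 / np.sqrt(r * probs[idx])

print((S_R @ X).shape)  # the sketched data matrix is r x p
```

The re-scaling makes $\mathbb{E}[S_R^T S_R] = I_{n \times n}$, so the sketched Gram matrix is an unbiased estimate of $X^T X$.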
Note that the leverage scores satisfy $\sum_{i=1}^n{\ell_i} = \mbox{trace}(L) = p$ and $0 \leq \ell_i \leq 1$. The sampling distribution $(p_i)_{i=1}^n$ we consider is of the form $p_i = (1 - \theta) \frac{\ell_i}{p} + \theta q_i$, where $\{q_i\}_{i=1}^n$ is an arbitrary probability distribution (i.e., $0 \leq q_i \leq 1$ and $\sum_{i=1}^n {q_i} = 1$), and $0 \leq \theta < 1$. In other words, $p_i$ is a convex combination of a leverage-based distribution and another arbitrary distribution. Note that for $\theta = 0$, the probabilities are proportional to the leverage scores, whereas for $\theta = 1$, the probabilities follow the distribution defined by $\{q_i\}_{i=1}^n$. We consider two sampling matrices, one where the random sampling matrix is re-scaled, as in Drineas et al.~\cite{DrinMuthuMahSarlos11}, and one in which no re-scaling takes place. In particular, let $S_{NR} = \tilde{S}$ denote the random sampling matrix (where the subscript $NR$ denotes the fact that no re-scaling takes place). The re-scaled sampling matrix is $S_{R} = \tilde{S} W \in \mathbb{R}^{r \times n}$, where $W \in \mathbb{R}^{n \times n}$ is a diagonal re-scaling matrix with $W_{jj} = \sqrt{\frac{1}{r p_j}}$ and $W_{ij} = 0$ for $i \neq j$; that is, each sampled row is re-scaled by the factor $\frac{1}{\sqrt{r p_j}}$. In this case, we have the following result. \btheos \label{ThmOne} For $S = S_{R}$, if $r \geq \frac{C p}{(1-\theta)} \log\big(\frac{C' p}{(1-\theta)} \big)$, then with probability at least $0.7$, it holds that $\mbox{rank}(S_R U) = p$ and that: \begin{eqnarray*} C_{WC}(S_{R}) & \leq & 1+12 \frac{p}{r} \\ C_{PE}(S_{R}) & \leq & 44 \frac{n}{r}\\ C_{RE}(S_{R}) & \leq & 1+ 44 \frac{p}{r} . \end{eqnarray*} \etheos \vspace{-3mm} Several things are worth noting about this result.
First, note that both $C_{WC}(S_{R})-1$ and $C_{RE}(S_{R})-1$ scale as $\frac{p}{r}$; thus, it is possible to obtain high-quality performance guarantees for ordinary least squares as long as $\frac{p}{r} \rightarrow 0$, e.g., if $r$ is only slightly larger than $p$. On the other hand, $C_{PE}(S_{R})$ scales as $\frac{n}{r}$, meaning that $r$ needs to be close to $n$ to provide similar performance guarantees. Next, note that all of the upper bounds apply to any data matrix $X$, without assuming any additional structure on $X$. Also note that the distribution $\{q_i\}_{i=1}^n$ does not enter the results: our bounds hold for any choice of $\{q_i\}_{i=1}^n$ and do not depend on $\theta$, which allows us to consider different distributions. A standard choice is uniform, i.e., $q_i = \frac{1}{n}$ (see, e.g., Ma et al.~\cite{MMY15_JMLR}). The other important example is that of \emph{approximate} leverage-score sampling, developed in~\cite{DMMW12_JMLR}, which reduces computation. Let $(\tilde{\ell}_i)_{i=1}^n$ denote the approximate leverage scores computed by the procedure in~\cite{DMMW12_JMLR}. Based on Theorem 2 in~\cite{DMMW12_JMLR}, $|\ell_i - \tilde{\ell}_i| \leq \theta$, where $0 < \theta < 1$, for $r$ sufficiently large. Now, using $p_i = \frac{\tilde{\ell}_i}{p}$, $p_i$ can be re-expressed as $p_i = (1-\theta) \frac{\ell_i}{p} + \theta q_i$, where $(q_i)_{i=1}^n$ is a distribution (unknown, since we only have a bound on the approximate leverage scores). Hence, the performance bounds achieved by approximate leveraging are equivalent to those achieved by adding $\theta$ multiplied by a uniform or other arbitrary distribution. Next, we consider the leverage-score estimator without re-scaling, $S_{NR}$. In order to develop nontrivial bounds on $C_{WC}(S_{NR})$, $C_{PE}(S_{NR})$, and $C_{RE}(S_{NR})$, we need to make a (strong) assumption on the leverage-score distribution of $X$. To do so, we define the following.
\bdes[k-heavy hitter leverage distribution] A sequence of leverage scores $(\ell_i)_{i=1}^n$ is a \emph{k-heavy hitter} leverage score distribution if there exist constants $c, C > 0$ such that for $1 \leq i \leq k$, $\frac{c p}{k} \leq \ell_i \leq \frac{C p}{k}$, and the remaining $n-k$ leverage scores satisfy $\sum_{i=k+1}^n {\ell_i} \leq \frac{3}{4}$. \edes \noindent The interpretation of a $k$-heavy hitter leverage distribution is one in which only $k$ samples in $X$ contain the majority of the leverage score mass. The parameter $k$ acts as a measure of non-uniformity, in that the smaller the $k$, the more non-uniform are the leverage scores. The $k$-heavy hitter leverage distribution allows us to model highly non-uniform leverage scores and permits us to state the following result. \btheos \label{ThmTwo} For $S = S_{NR}$ with $\theta = 0$, assuming a $k$-heavy hitter leverage distribution, if $r \geq c_1 p \log\big(c_2 p\big)$, then with probability at least $0.6$, it holds that $\mbox{rank}(S_{NR} U) = p$ and that: \begin{eqnarray*} C_{WC}(S_{NR}) & \leq & 1+ \frac{44 C^2}{c^2} \frac{p}{r} \\ C_{PE}(S_{NR}) & \leq & \frac{44 C^4}{c^2} \frac{k}{r}\\ C_{RE}(S_{NR}) & \leq & 1 + \frac{44 C^4}{c^2} \frac{p k}{n r} . \end{eqnarray*} \etheos \vspace{-3mm} Notice that when $k \ll n$, the bounds in Theorem~\ref{ThmTwo} on $C_{PE}(S_{NR})$ and $C_{RE}(S_{NR})$ are significantly sharper than the bounds in Theorem~\ref{ThmOne} on $C_{PE}(S_{R})$ and $C_{RE}(S_{R})$. Hence, not re-scaling has the potential to provide sharper bounds in the statistical setting. However, a much stronger assumption on $X$ is needed for this~result. \vspace{-3mm} \subsection{Random projection methods} Here, we consider two random projection algorithms, one based on a sub-Gaussian projection matrix and the other based on a Hadamard projection matrix. To do so, define $[S_{SGP}]_{ij} = \frac{1}{\sqrt{r}} Z_{ij}$, where $(Z_{ij})_{1\leq i \leq r, 1 \leq j \leq n}$ are i.i.d. sub-Gaussian random variables with mean $\mathbb{E}[Z_{ij}] = 0$, variance $\mathbb{E}[Z_{ij}^2] = \sigma^2$, and sub-Gaussian parameter $1$. In this case, we have the following result. \btheos \label{ThmThree} For any matrix $X$, there exists a constant $c$ such that if $r \geq c \log n$, then with probability greater than $0.7$, it holds that $\mbox{rank}(S_{SGP} U) = p$ and that: \begin{eqnarray*} C_{WC}(S_{SGP}) & \leq & 1 + 11 \frac{p}{r}\\ C_{PE}(S_{SGP}) & \leq & 44(1 + \frac{n}{r})\\ C_{RE}(S_{SGP}) & \leq & 1 + 44 \frac{p}{r} . \end{eqnarray*} \etheos \vspace{-3mm} Notice that the bounds in Theorem~\ref{ThmThree} for $S_{SGP}$ are equivalent to the bounds in Theorem~\ref{ThmOne} for $S_{R}$, except that $r$ is required only to be larger than $O(\log n)$ rather than $O(p \log p)$. Hence, for smaller sketch sizes $r$, random sub-Gaussian projections are more stable than leverage-score sampling based approaches. This reflects the fact that, to a first-order approximation, leverage-score sampling performs as well as a dense random projection. Next, we consider the randomized Hadamard projection matrix. In particular, $S_{Had} = S_{Unif} H D$, where $H \in \mathbb{R}^{n \times n}$ is the standard Hadamard matrix (see, e.g.,~\cite{Hedayat78}), $S_{Unif} \in \mathbb{R}^{r \times n}$ is an $r \times n$ uniform sampling matrix, and $D \in \mathbb{R}^{n \times n}$ is a diagonal matrix with random equiprobable $\pm 1$ entries. \btheos \label{ThmFour} For any matrix $X$, there exists a constant $c$ such that if $r \geq c p \log n ( \log p + \log \log n)$, then with probability greater than $0.8$, it holds that $\mbox{rank}(S_{Had} U) = p$ and that: \begin{eqnarray*} C_{WC}(S_{Had}) & \leq & 1 + 40 \log(np) \frac{p}{r}\\ C_{PE}(S_{Had}) & \leq & 40\log (np) (1 + \frac{n}{r})\\ C_{RE}(S_{Had}) & \leq & 1 + 40\log (np) (1 + \frac{p}{r}).
\end{eqnarray*} \etheos \vspace{-3mm} Notice that the bounds in Theorem~\ref{ThmFour} for $S_{Had}$ are equivalent to the bounds in Theorem~\ref{ThmOne} for $S_{R}$, up to a constant and a $\log(np)$ factor. As discussed in Drineas et al.~\cite{DrinMuthuMahSarlos11}, the Hadamard transformation makes the leverage scores of $X$ approximately uniform (up to a $\log(np)$ factor), which is why the performance is similar to the sub-Gaussian projection (which also tends to make the leverage scores of $X$ approximately uniform). We suspect that the additional $\log(np)$ factor is an artifact of the analysis, since we use an entry-wise concentration bound; using more sophisticated techniques, we believe that the $\log(np)$ factor can be improved. \vspace{-3mm} \subsection{Lower Bounds} \label{SecLower} In concurrent work, Pilanci and Wainwright~\cite{PilanciWainwright} develop, among other results, lower bounds on the numerator of $C_{PE}(S)$ which prove that our upper bounds on $C_{PE}(S)$ cannot be improved. We re-state Theorem 1 (Example 1) in Pilanci and Wainwright~\cite{PilanciWainwright} in a way that makes it most comparable to our results. \btheos[Theorem 1 in~\cite{PilanciWainwright}] For any sketching matrix satisfying $\|\mathbb{E}[S^T(S S^T)^{-1}S]\|_{op} \leq \eta \frac{r}{n}$, any estimator based on $(SX, SY)$ satisfies the following lower bound with probability greater than $1/2$: \begin{eqnarray*} C_{PE}(S) & \geq & \frac{n}{128 \eta r}. \end{eqnarray*} \etheos Gaussian and Hadamard projections, as well as re-weighted approximate leverage-score sampling, all satisfy the condition $\|\mathbb{E}[S^T(S S^T)^{-1}S]\|_{op} \leq \eta \frac{r}{n}$. On the other hand, un-weighted leverage-score sampling does not satisfy this condition, and hence the lower bound does not apply, which is why we are able to prove a tighter upper bound when the matrix $X$ has highly non-uniform leverage scores.
This proves that $C_{PE}(S)$ is a quantity that is more challenging to control than $C_{RE}(S)$ and $C_{WC}(S)$ when only a single sketch is used. Using this insight, Pilanci and Wainwright~\cite{PilanciWainwright} show that by using a particular iterative Hessian sketch, in which sketching matrices are applied repeatedly rather than just once, $C_{PE}(S)$ can be controlled up to constants. \vspace{-3mm} \section{Discussion and conclusion} \label{SecDiscussion} In this paper, we developed a framework for analyzing algorithmic and statistical criteria for general sketching matrices $S \in \mathbb{R}^{r \times n}$ applied to the LS objective. Our framework reveals that the algorithmic and statistical criteria depend on different properties of the oblique projection matrix $\Pi^U_S = U(SU)^{\dagger} S$, where $U$ is the left singular matrix of $X$. In particular, the algorithmic WC criterion depends on the quantity $\sup_{U^T \epsilon = 0} \frac{\|\Pi^U_S \epsilon \|_2}{\|\epsilon\|_2}$, since in that case the data may be arbitrary and worst-case, whereas the two statistical criteria (RE and PE) depend on $\| \Pi^U_S\|_F$, since in that case the data follow a linear model with homogeneous noise variance. Using our framework, we develop upper bounds for three performance criteria applied to four sketching schemes. Our upper bounds reveal that in the regime where $ p < r \ll n$, our sketching schemes achieve optimal performance up to constants in terms of WC and RE. On the other hand, the PE scales as $\frac{n}{r}$, meaning that $r$ needs to be close to $n$ for good performance; and subsequent lower bounds in Pilanci and Wainwright~\cite{PilanciWainwright} show that this upper bound cannot be improved.
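These qualitative conclusions can be checked numerically. The following small Monte Carlo sketch (dimensions, trial count, and the Gaussian choice of sub-Gaussian sketch are arbitrary illustrative choices) estimates $C_{PE}(S)$ and $C_{RE}(S)$ under the statistical linear model, and illustrates that the residual efficiency stays near $1$ while the prediction efficiency inflates on the order of $n/r$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, r, trials = 500, 10, 100, 200

X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)

num_pe = den_pe = num_re = den_re = 0.0
for _ in range(trials):
    eps = rng.standard_normal(n)                  # E[eps] = 0, E[eps eps^T] = I
    Y = X @ beta + eps
    b_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    # Gaussian (sub-Gaussian) sketch, scaled so that E[S^T S] = I.
    S = rng.standard_normal((r, n)) / np.sqrt(r)
    b_s, *_ = np.linalg.lstsq(S @ X, S @ Y, rcond=None)
    num_pe += np.sum((X @ (beta - b_s)) ** 2)
    den_pe += np.sum((X @ (beta - b_ols)) ** 2)
    num_re += np.sum((Y - X @ b_s) ** 2)
    den_re += np.sum((Y - X @ b_ols) ** 2)

C_PE = num_pe / den_pe   # inflates roughly like n/r
C_RE = num_re / den_re   # stays close to 1 when p/r is small
print(C_PE, C_RE)
```

Note that $C_{RE}(S) \geq 1$ holds realization by realization, since $\beta_{OLS}$ minimizes the residual norm; the empirical values can also be compared against the exact relation $C_{RE} = 1 + (C_{PE}-1)/(n/p-1)$ from Lemma~\ref{LemProj}.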
\section{Introduction} Since beginning operations in July 2008, the \emph{Fermi}\xspace Gamma-ray Burst Monitor (GBM) has autonomously detected over 2000 gamma-ray bursts (GRBs), providing real-time alerts, degree-precision sky localizations, and high-quality data for temporal and spectral analysis. The wide field-of-view and high uptime of GBM make it a key instrument for detecting electromagnetic (EM) counterparts to gravitational wave (GW) signals, facilitating broad scientific analyses of coincident events. The GBM detection of GRB~170817A, shown in Figure~\ref{fig:BTTE_sum_zoom}, was not extraordinary. GBM detected the GRB in orbit in real time, a process referred to as ``triggering'', performed on-board classification and localization, and transmitted the results to EM and GW follow-up partners within seconds, as it has done for thousands of other transients. This particular trigger was different in one important aspect: a coincident GW trigger by the Laser Interferometer Gravitational Wave Observatory (LIGO) occurred $\sim$1.7 seconds prior to the GBM trigger~\citep{gcnLVCGW170817_1}, marking the first confident joint EM-GW observation in history. The GW observation yielded a localization incorporating information from the two LIGO detectors, L1 \& H1, and the {\it Virgo} detector, V1, and is therefore termed an HLV map. This initial HLV map was produced by the BAYESTAR algorithm~\citep{bayestar}, with a location centroid at RA = 12h57m, Dec = -17d51m and a 50\% (90\%) credible region spanning 9 (31) square degrees. An estimate for the luminosity distance was also reported as $40 \pm 8$ Mpc~\citep{gcnLVCGW170817_2}. An updated map incorporating Monte Carlo parameter estimation from the LALInference algorithm~\citep{veitch2015parameter} yielded a centroid at RA = 13h09m, Dec = -25d37m and a 50\% (90\%) credible region covering 8.6 (33.6) square degrees~\citep{gcnLVCGW170817_LAL}. 
Another gamma-ray instrument, the Anti-Coincidence Shield for the SPectrometer for Integral (SPI-ACS), also detected GRB~170817A as a weak, $> 3\sigma$ S/N signal coincident in time with the GBM trigger~\citep{gcnACSGRB170817A_1,acs170817apaper}. Utilizing the time difference between the GBM and SPI-ACS signals and the known positions of the parent spacecraft, the Inter-Planetary Network~\citep[IPN;][]{IPN3} calculated an annulus on the sky which was consistent with both the GBM localization and the HLV map~\citep{gcnIPN}. Additionally, $\sim 12$ hours after the GBM and LIGO alerts, the discovery of a possible associated optical transient (OT) was reported~\citep{swope_sss17a,coulterinprep} and confirmed~\citep{decam_sss17a,decaminprep}, consistent with the location and distance reported by LIGO/{\it Virgo}. The position of the OT is RA = 13h09m48.085s, Dec = -23d22m53.343s. We detail the GBM observations of this event in the following manner: description of the GBM instrument and its capabilities, discussion of the trigger and prompt localization of the GRB, and results of the standard analyses that are performed for every triggered GRB so that this GRB can be easily compared to the population which GBM observes. Due to the important nature of the joint detection, we proceed beyond the standard analyses and present more detailed analyses of the duration, pulse shape, spectrum, and searches for other associated gamma-ray emission. We also determine how much dimmer the GRB could have been and still have been detected by GBM. For these analyses we assume the position of the OT. Further use of the distance, redshift, and other information from the GW and EM-followup observations to perform rest-frame calculations is left to a companion analysis~\citep{jointpaper} which, in part, relies on the GBM analysis provided here.
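The IPN timing-annulus technique mentioned above can be sketched as follows. This is a simplified illustration with made-up inputs (not the values used for GRB~170817A): the annulus half-angle about the inter-spacecraft baseline follows from the light-travel-time difference between the two detections.

```python
import math

C_LIGHT_KM_S = 299792.458  # speed of light in km/s

def ipn_annulus_half_angle_deg(delta_t_s, baseline_km):
    """Half-opening angle (measured from the baseline vector) of the IPN
    timing annulus: cos(theta) = c * delta_t / D, where D is the distance
    between the two spacecraft and delta_t the arrival-time difference."""
    cos_theta = C_LIGHT_KM_S * delta_t_s / baseline_km
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("time delay exceeds light-travel time along baseline")
    return math.degrees(math.acos(cos_theta))

# Illustrative numbers only: a 1 s delay over a 2-light-second baseline
# places the source on an annulus with ~60 degree half-angle.
print(ipn_annulus_half_angle_deg(1.0, 2 * C_LIGHT_KM_S))
```

In practice the annulus also carries a finite width set by the timing and ephemeris uncertainties, which this sketch omits.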
\section{GBM Description} GBM is one of two instruments on-board the \emph{Fermi}\xspace Gamma-Ray Space Telescope and is comprised of 14 detectors designed to study the gamma-ray sky in the energy band of $\sim$8 keV--40 MeV~\citep{2009Meegan}. Twelve of the detectors are thallium-doped sodium iodide (NaI) scintillation detectors, which cover an energy range of 8--1000 keV and are pointed at various angles in order to survey the entire sky unocculted by the Earth at any time during the orbit. The relative signal amplitudes in the NaI detectors are also used to localize transients~\citep{2015ApJS..216...32C}. The other two detectors are composed of bismuth germanate (BGO) crystals, cover an energy range of 200 keV--40 MeV, and are placed on opposite sides of the spacecraft. Incident photons interact with the NaI and BGO crystals and create scintillation photons. Those photons are then collected by the attached photomultiplier tubes and converted into electronic signals. A recorded signal, which might be a gamma-ray or charged particle, is termed a count. Several data types are produced on-board GBM by binning the counts into pre-defined timescales (continuous/trigger): CTIME (256 ms/64 ms), CSPEC (4096/1024 ms), and TRIGDAT (variable, only by trigger; 64 ms--8.192 s). During the first several years of the mission, data on individual counts, termed Time-Tagged Events (TTE), were only produced during on-board triggers. An increase in telemetry volume and a flight software update in November 2012 allowed TTE data to be downlinked with continuous coverage for offline analysis. The TTE data type is especially useful as it provides arrival time information for individual photons at 2 $\mu$s precision. Additionally, while the CTIME and TRIGDAT data types only have a coarse energy resolution of 8 channels, the CSPEC and TTE data both have 128-channel energy resolution, facilitating spectral analysis of GRBs and other high-energy astrophysical, solar, and terrestrial phenomena.
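The relationship between TTE and the binned data types can be illustrated schematically. The following is a simplified sketch (it ignores energy channels, detector responses, and other details of the real GBM processing) showing how individual event arrival times are re-binned onto a fixed timescale such as the 64 ms triggered-CTIME binning:

```python
import math

def bin_events(event_times_s, bin_width_s=0.064, t_start=0.0, t_stop=None):
    """Bin photon arrival times (seconds) into a fixed-width light curve,
    e.g. the 64 ms timescale used for triggered CTIME data."""
    if t_stop is None:
        t_stop = max(event_times_s) + bin_width_s
    n_bins = int(math.ceil((t_stop - t_start) / bin_width_s))
    counts = [0] * n_bins
    for t in event_times_s:
        i = int((t - t_start) // bin_width_s)
        if 0 <= i < n_bins:
            counts[i] += 1
    return counts

# Five hypothetical events spread over ~0.1 s fall into the first two
# 64 ms bins (an event at exactly 0.064 s lands in the second bin).
print(bin_events([0.001, 0.050, 0.063, 0.064, 0.100]))  # [3, 2, 0]
```

The same events could be re-binned to the 256 ms or 1.024 s timescales simply by changing `bin_width_s`, which is the flexibility the TTE data type provides.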
\section{GBM Trigger \& Localization} The flight software on-board GBM monitors the detector rates and triggers when a statistically significant rate increase occurs in two or more NaI detectors. Currently, 28 combinations of timescales and energy ranges are tested; the first combination tested by the flight software that exceeds the predefined threshold (generally $4.5\sigma$) is considered the trigger~\citep{GBMBurstCatalog_6Years}. \mod{The full trigger and reporting timeline for GRB~170817A is shown in Table~\ref{table:GBM_timeline}}. GRB~170817A was detected by the GBM flight software on a 256 ms accumulation from 50 to 300 keV ending at 12:41:06.474598 UTC on 2017 Aug 17 (hereafter T0), with a significance of $4.82\sigma$ in the second brightest detector (NaI 2) which, because of the two-detector requirement, sets the threshold. This value of $4.82\sigma$ does not represent the overall significance of the GRB, but only the significance of the excess for a single detector, for a particular time interval and energy range. Three detectors were above the threshold: NaIs 1, 2 and 5 (cf.~Table~\ref{table:detector_angles}). The significance is calculated as a simple signal-to-noise ratio: excess detector counts above the background model counts divided by expected fluctuations (i.e., the square root of the background model counts). The detection by the flight software occurred 2.4 ms after the end of the data interval. A rapid alert process was initiated, resulting in a GCN\footnote{\url{https://gcn.gsfc.nasa.gov}} Notice being transmitted to observers 14 s later, at 12:41:20 UTC\footnote{\url{https://gcn.gsfc.nasa.gov/other/524666471.fermi}}. The flight software assigned the trigger a 97\% chance of being due to a GRB, which was reported at 12:41:31 UTC. 
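The per-detector significance described above (excess counts over the background model, in units of the expected Poisson fluctuation) is simple to express. This minimal sketch, with made-up count values, shows the rate-trigger test that the flight software applies to each timescale/energy-range combination:

```python
import math

def rate_trigger_snr(observed_counts, background_counts):
    """Signal-to-noise ratio used for GBM rate triggers: excess counts above
    the background model divided by the expected fluctuation
    (square root of the background model counts)."""
    return (observed_counts - background_counts) / math.sqrt(background_counts)

def triggers(snr_per_detector, threshold=4.5, min_detectors=2):
    """Simplified on-board trigger logic: the threshold must be exceeded in
    at least two NaI detectors (the real flight software evaluates 28
    timescale/energy-range combinations)."""
    return sum(s >= threshold for s in snr_per_detector) >= min_detectors

# Illustrative: 525 observed counts on a 400-count background give SNR 6.25.
print(rate_trigger_snr(525, 400))   # 6.25
print(triggers([6.25, 4.82, 3.1]))  # True
```

Note that, as the text emphasizes, the second-brightest detector sets the effective threshold because of the two-detector requirement.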
The initial, automated localizations generated by the GBM flight software and the ground locations had $1\sigma$ statistical uncertainties greater than $20$ degrees but broadly aligned with one of the quadrupole lobes from the skymap produced from the LIGO Hanford (H1) antenna pattern. The data stream ended 2 minutes post-trigger due to the entrance of \emph{Fermi}\xspace into the South Atlantic Anomaly (SAA), in which there are high fluxes of charged particles. During passage through the SAA, high voltage to the GBM detectors is disabled to extend the lifetime of the detectors, and therefore the detectors cannot collect data. The dependence of the geographical extent of the SAA on the energy of the trapped particles results in different polygon definitions for the GBM and the LAT. The polygon definition for GBM (see Figure~\ref{fig:SAA}) is slightly smaller than the polygon used for the LAT, enabling the GRB to be detected by GBM while the LAT was turned off and unable to observe it. At 13:26:36 UTC, a human-in-the-loop manual localization of GRB~170817A was reported with the highest probability at RA = 180, Dec = $-40$ with a 50\% probability region covering $\sim$500 square degrees and a 90\% probability region of about 1800 square degrees. This localization in comparison to the HLV localization is shown in Figure~\ref{fig:skymaps}. The large localization region is due to the weak nature of the GRB, the extended tail in the systematic uncertainty for GBM GRB localizations \citep{2015ApJS..216...32C}, and the high backgrounds as \emph{Fermi}\xspace approached the SAA. Owing to the importance of this joint detection, an initial circular was sent out at 13:47:37 UTC describing the localization and noting that the event was consistent with a weak short GRB~\citep{gcnGBMGRB170817A_1}. \mod{The GBM Team operates a targeted search for short GRBs that are below the triggering threshold of GBM~\citep{Blackburn15}.
This search assumes three different spectral templates for GRBs, each of which is folded through each of the detector responses evaluated over a $1^\circ$ grid on the sky. This method enables a search in deconvolved flux for signals similar to the GRB spectral templates, rather than simpler count-based methods that do not consider the spectrum and detector response.} Guided by detection times from other instruments, such as those by LIGO/{\it Virgo}~\citep{2016PhRvD..93l2003A,PhysRevLett.116.241103,2017PhRvL.118v1101A}, this search requires the downlink of the TTE science data, which can have a latency of up to several hours. Improvements to this search were made in preparation for LIGO's second observing run~\citep{Goldstein16}, which include an un-binned Poisson maximum likelihood background estimation and a spectral template that is more representative of spectrally hard, short GRBs. The TTE data were transmitted to the ground, and the targeted search completed an automated run at 3.9 hours post-trigger. The localization from this search is shown in Figure~\ref{fig:skymaps} and is improved relative to the human-in-the-loop localization. This is primarily due to the improved background estimation provided by the targeted search. The localization incorporates a $7.6^\circ$ Gaussian systematic uncertainty, determined by running the targeted search on other short GRBs that triggered GBM on-board and that have accurate locations determined by other instruments. The 50\% and 90\% localization credible regions cover $\sim$350 and $\sim$1100 square degrees, respectively. GBM also operates an offline untargeted search, which agnostically searches all of the GBM TTE data, for almost all times and directions.
This search runs autonomously and has been publishing candidates since January 2016\footnote{\url{http://gammaray.nsstc.nasa.gov/gbm/science/sgrb_search.html}} and via the GCN since 2017 July 17\footnote{\url{https://gcn.gsfc.nasa.gov/admin/fermi_gbm_subthreshold_announce.txt}}. Similar to the GBM flight software, it searches for statistically significant excesses in two or more NaI detectors. Compared to the flight software, it has a better background model and tests more time intervals. The untargeted search, in its standard form, did not detect GRB~170817A. This is because the untargeted search has several tests to ensure quality background fits in order to avoid spurious candidates. These tests cause $\approx 2$\% of the TTE data to be omitted from the search. One of the tests rejects data intervals with rapidly changing background rates, as sometimes occurs near the SAA as \emph{Fermi}\xspace moves into/away from high trapped particle fluxes. This test terminated the search at 12:40:50, 16 s before the GRB. When this standard test is relaxed, the program obtains a good background fit and finds the GRB at high significance; the detection is classified as highly reliable because more than two detectors had significant excesses. In the 320 ms detection interval, NaIs 1, 2, 4, 5 and 11 were found to have significances of $5.63\sigma, 5.67\sigma, 3.41\sigma, 6.34\sigma$ and $3.57\sigma$, respectively. The signal in NaI 11 is due to viewing GRB photons scattered from the Earth's atmosphere and viewing the GRB through the LAT radiator and the back of the detector (which gives a larger response than viewing perpendicular to a source).
\section{Standard GBM Analysis} \label{sec:StandardAnalysis} As part of GBM operations, all triggered GRBs are analyzed following standardized procedures, and results from these analyses are released publicly in the form of catalog publications~\citep{GBMSpectralCatalog_4Years,GBMBurstCatalog_6Years} and a searchable online catalog hosted by the HEASARC\footnote{\url{https://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermigbrst.html}}. Additionally, all triggered data files are publicly available soon after the data are downlinked from the spacecraft and processed automatically in the ground pipeline. In this section, we present the results of the standardized analysis for GRB~170817A so that it may be placed in the context of other GRBs that trigger GBM. The response of the GBM NaI detectors is strongly dependent on the angle between the detector and source location, with additional contributions from scattering off the spacecraft and the Earth's atmosphere. Consequently, a source position must be assumed for the GRB so that detector responses can be generated, mapping incident photon energies to observed count energies. In all following analysis, we assume the position of the optical transient candidate. For standard analysis, we use the NaI detectors that have observing angles to the source position $\leq 60^\circ$, since the response is reduced beyond this angle and the tradeoff between the low response and possible systematics is poorly understood. Although the BGO response does not depend as strongly on viewing angle as do the NaI detectors, the BGO detector with the smallest viewing angle to the source is used. Additionally, detectors are not used for analysis if portions of the spacecraft or LAT block the detector from viewing the source. The detector angles and detectors selected for analysis are shown in Table \ref{table:detector_angles}.
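The detector-selection rule described above reduces to an angle cut; a minimal sketch, using hypothetical detector names and source angles rather than the actual values of Table \ref{table:detector_angles}:

```python
def select_detectors(angles_deg, max_angle=60.0):
    """Keep NaI detectors whose source angle is within max_angle degrees,
    ordered from best (smallest angle) to worst."""
    return [det for det, ang in sorted(angles_deg.items(), key=lambda kv: kv[1])
            if ang <= max_angle]

# hypothetical source--detector angles in degrees (illustrative only)
angles = {"n0": 75.0, "n1": 32.0, "n2": 28.0, "n5": 25.0, "n9": 110.0}
best = select_detectors(angles)   # ['n5', 'n2', 'n1']
```

In practice the pipeline additionally drops detectors whose view of the source is blocked by the spacecraft or LAT; that geometric check is omitted here.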
\subsection{Duration} The duration of GRBs is usually defined by the $\rm T_{90}$\xspace, which is the time between reaching 5\% and 95\% of the cumulative observed fluence for the burst in the canonical observing energy range of 50--300 keV. Because the orientation of \emph{Fermi}\xspace may change with respect to the source position over the duration of a GRB, the GBM $\rm T_{90}$\xspace calculation is performed on a photon spectrum rather than the observed count spectrum. This removes the possibility of bias owing to the changing response of the detector to a changing source angle, an effect that is most important for long GRBs. This is the standard method for all GBM $\rm T_{90}$\xspace calculations, although other techniques exist. A power-law spectrum with an exponential cut-off is fit to the background-subtracted data over a time interval that begins prior to the trigger time of the burst and extends well beyond its observed duration, using the detector response for the best available source position. The fits are performed sequentially over 64 ms (1.024 s) time bins for short (long) GRBs. On either side of the impulsive GRB emission, the presence of stable and long-lived plateaus in the deconvolved time history indicates the times at which 0\% and 100\% of the burst fluence has been recorded, and the 5\% and 95\% fluence levels and their associated times are measured relative to these plateaus to yield the $\rm T_{90}$\xspace duration. In addition to the $\rm T_{90}$\xspace, this analysis produces an estimate of the peak flux and fluence in the standard GBM reporting range of 10--1000 keV. Following the recipe of the standard analysis, we use detectors NaI 1, 2, and 5 to estimate the $\rm T_{90}$\xspace. A polynomial background is fit to 128-channel TTE data, binned to 8 energy channels in each detector. We find the $\rm T_{90}$\xspace to be $2.0 \pm 0.5$ s, starting at T0$-0.192$ s.
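The $\rm T_{90}$\xspace definition above can be sketched on a toy, background-subtracted, deconvolved light curve; the binning and triangular pulse shape here are illustrative only, not the GRB~170817A data:

```python
import numpy as np

def t90(t, photon_flux):
    """T90 from a background-subtracted, deconvolved light curve:
    the interval between 5% and 95% of the cumulative fluence."""
    cum = np.cumsum(photon_flux)
    cum = cum / cum[-1]                      # normalized cumulative fluence
    t05 = t[np.searchsorted(cum, 0.05)]      # first bin reaching 5%
    t95 = t[np.searchsorted(cum, 0.95)]      # first bin reaching 95%
    return t95 - t05, t05

# toy triangular pulse peaking at t = 2 s, 64 ms bins
t = np.arange(0.0, 10.0, 0.064)
flux = np.maximum(0.0, 1.0 - np.abs(t - 2.0) / 2.0)
duration, start = t90(t, flux)
```

The GBM pipeline performs this accumulation on the deconvolved photon flux, which is what makes the result insensitive to the changing detector response.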
We note that there appears to be emission below the 50--300 keV energy range after $\sim0.5$ s, which contributes to the deconvolution of the spectrum during that time, thereby extending the $\rm T_{90}$\xspace beyond what is strictly observed in 50--300 keV (cf.~Figure~\ref{fig:BTTE_Chan_NaI}). For GRB~170817A, the peak photon flux measured on the 64 ms timescale and starting at T0 is $3.7 \pm 0.9 \ \rm ph \ s^{-1} \ cm^{-2}$. The fluence over the $\rm T_{90}$\xspace interval is $(2.8 \pm 0.2)\times 10^{-7} \ \rm erg \ cm^{-2}$. \subsection{Spectrum} A standard spectral analysis is performed for each triggered GRB and the results are included in the GBM spectral catalog. Two lightcurve selections are performed: a selection over the duration of the burst, and a selection at the brightest part of the burst. The first selection is performed by combining the lightcurves of the NaI detectors, identifying regions that have an SNR $\geq 3.5$, and applying those signal selections to each detector individually. This permits a time-integrated spectral fit of regions that are highly likely to be true signal with minimal background contamination. The second lightcurve selection is performed by summing up the same NaI detectors and selecting the single brightest bin---for short (long) GRBs the brightest 64 ms (1024 ms) bin---and the selection is applied to all detectors individually. For both the time-integrated and peak spectra, the data from each detector are jointly fit via the forward-folding technique using RMfit\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/rmfit}}. \mod{Specifically, minimization is sought for the Castor C-statistic~\citep{cstat} using the Levenberg-Marquardt non-linear least-squares minimization algorithm.} Further details on standard spectral fitting analysis procedures and selections are given in~\citet{goldstein12}. The fit results for GRB~170817A are shown in Table~\ref{tab:Spectra}.
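A minimal sketch of the exponentially cut-off power-law (Comptonized) photon model used in these fits, assuming the common parameterization by the $\nu F_\nu$ peak energy with a 100 keV pivot (amplitude and parameter values here are illustrative, not fit results); a numerical check confirms that $E^2 N(E)$ peaks at $E_{\rm peak}$:

```python
import numpy as np

def comptonized(E, A, alpha, E_peak, E_piv=100.0):
    """Exponentially cut-off power law, parameterized by the nuFnu
    peak energy E_peak (keV); E_piv is a fixed pivot energy."""
    return A * (E / E_piv) ** alpha * np.exp(-(alpha + 2.0) * E / E_peak)

E = np.logspace(0.0, 3.0, 4000)                  # 1--1000 keV grid
photons = comptonized(E, A=0.01, alpha=0.14, E_peak=215.0)
nufnu = E ** 2 * photons                         # nuFnu spectrum
E_at_max = E[np.argmax(nufnu)]                   # lands near E_peak
```

Differentiating $E^2 N(E) \propto E^{\alpha+2}\exp(-(\alpha+2)E/E_{\rm peak})$ shows the maximum falls exactly at $E = E_{\rm peak}$, which is why this parameterization is convenient for catalog comparisons.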
The time-integrated selection produces a 256 ms time interval from T0$-0.192$ s to T0$+0.064$ s and is statistically best fit by an exponentially cut-off power law, \mod{which is referred to as a Comptonized spectrum in the GBM spectroscopy catalog (see Eq. 3 in~\citet{GBMSpectralCatalog_4Years}).} This fit results in a weakly-constrained power-law index of $0.14 \pm 0.59$ and a break energy, \mod{characterized as the $\nu F_\nu$ peak energy}, $E_{\rm peak}$\xspace$=215 \pm 54$ keV. The averaged energy flux over this interval in 10--1000 keV is $(5.5 \pm 1.2)\times 10^{-7} \ \rm erg \ s^{-1} \ cm^{-2}$, and the corresponding fluence is $(1.4 \pm 0.3)\times 10^{-7} \ \rm erg \ cm^{-2}$. Note that the fluence over the $\rm T_{90}$\xspace interval is larger because this time interval is considerably shorter than the $\rm T_{90}$\xspace. The 64 ms peak selection from T0$-0.128$ s to T0$-0.064$ s is also statistically best fit by a Comptonized spectrum. Again the parameters are poorly constrained, with the power-law index $=0.85 \pm 1.38$ and $E_{\rm peak}$\xspace$=229 \pm 78$ keV. The resulting peak energy flux from this spectral fit in 10--1000 keV is $(7.3 \pm 2.5)\times 10^{-7} \ \rm erg \ s^{-1} \ cm^{-2}$. \subsection{Comparison to the GBM Catalogs} We compare the standard analysis of GRB~170817A to other GRBs contained in the GBM Burst Catalog~\citep{GBMBurstCatalog_6Years} and Spectral Catalog~\citep{GBMSpectralCatalog_4Years}. A GBM catalog of time-resolved spectroscopy has also been produced~\citep{GBMTimeResolvedSpectralCatalog}, but this GRB is too weak to perform the required time-resolved spectral fits to compare to that catalog. Keeping to the traditional definition of short and long GRBs, our sample comprises 355 short bursts and 1714 long bursts spanning the beginning of the mission to 2017 August 27. For GRB~170817A, we compare the 64 ms peak photon flux and the fluence obtained from the duration analysis.
The distributions are shown in Figure \ref{fig:spectral_comp}, as are the distributions of the cut-off power law parameters for the time-integrated and peak spectra. Because short GRBs are typically defined as those with duration $< 2$ s, they generally have lower fluences than those of long bursts. The fluence for GRB~170817A is consistent with those obtained for short GRBs, falling within the 40th--50th percentile of the short distribution. For the 64 ms peak photon flux, the long and short distributions are similar, with median values of 6.56 and 7.26 $\rm ph \ s^{-1} \ cm^{-2}$, respectively. GRB~170817A, in comparison, lies at the $\sim 10$th percentile of both distributions and is thus weaker than the average GRB on that timescale. \mod{As observed in GBM,} short bursts tend to have higher $E_{\rm peak}$\xspace values than long bursts. For both time selections, the $E_{\rm peak}$\xspace of GRB~170817A falls at the $\sim 15$th percentile of the short GRB distribution, corresponding to the softer tail, and near the median of the long GRB distribution. Long GRBs display a lower median power-law index of $-1.01$, while the short GRBs have a slightly harder index with a median of $-0.58$ for the time-integrated distribution and $-0.27$ for the peak distribution. The power-law index for GRB~170817A, though weakly constrained, lies within the positive tail of the distributions between the 85th--95th percentiles for long and short GRBs. \section{Classification} Historically, GRBs have been classified based on their duration. During the BATSE era, the duration distribution in the 50--300 keV band, plotted as a histogram, showed evidence for bimodality; the shorter population, peaking at $\sim 1$ s duration, was termed `short' while the longer and more dominant population, peaking at $\sim 30$ s duration, was termed `long'.
The overlap of the two original distributions in BATSE at $\sim 2$ s was designated as the classification boundary between the two GRB types~\citep{1993ApJ...413L.101K}, although the accumulation of more GRBs has shown that the overlap of the two distributions has changed and can be affected by the energy band over which the duration is estimated. Figure~\ref{fig:t90_distrib} shows the $\rm T_{90}$\xspace duration distribution of GRBs that triggered GBM through 2014 July 11 \citep{GBMBurstCatalog_6Years}. The $\rm T_{90}$\xspace for GRB~170817A is shown relative to the distribution of the GBM GRBs, and when the distributions are modeled as two log-normals, the probability that the GRB belongs to the short class is $\sim73$\%. Short-duration GRBs were also observed to be spectrally harder than the average long GRB~\citep{FirstHardnessRatioPaper}. One way to represent this distinction is to calculate a hardness ratio: the ratio of the observed counts in the 50--300 keV band to those in the 10--50 keV band, which is useful for estimating the spectral hardness of an event without the need to deconvolve and fit a spectrum. Figure~\ref{fig:hardness_t90} shows the hardness--duration plot revealing the two distinct populations of short--hard GRBs and long--soft GRBs. Similar to the modeling of the $\rm T_{90}$\xspace distribution, the hardness--duration distribution can be modeled as a mixture of two-dimensional log-normal distributions. The location of GRB~170817A is shown on this diagram, and using the mixture model, we estimate the probability that it belongs in the short--hard class as $\sim$72\%. Two types of progenitors have been proposed for these two GRB classes: collapsars as the progenitors for long GRBs~\citep{MacFadyenCollapsar}, and compact binary mergers as the progenitors for short GRBs~\citep{eichler1989nucleosynthesis,fox2005afterglow,d2015short}.
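The hardness ratio described above reduces to a ratio of band-summed counts; a sketch with toy channel energies and counts (illustrative values only):

```python
import numpy as np

def hardness_ratio(chan_energy_kev, counts):
    """Counts in the 50--300 keV band divided by counts in 10--50 keV."""
    e = np.asarray(chan_energy_kev, dtype=float)
    c = np.asarray(counts, dtype=float)
    hard = c[(e >= 50.0) & (e < 300.0)].sum()
    soft = c[(e >= 10.0) & (e < 50.0)].sum()
    return hard / soft

# toy channel energies (keV) and observed counts
hr = hardness_ratio([20, 40, 100, 200], [10.0, 10.0, 5.0, 5.0])   # -> 0.5
```

Because the ratio is taken on raw counts, no detector response deconvolution is required, which is what makes it a cheap hardness proxy for population studies.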
The connection between long GRBs and collapsars is well-established; however, the connection between short GRBs and mergers has been only circumstantial. Owing to the fast transient nature of the prompt emission, the rapid fading of the afterglow emission, and the typical offset from the putative host galaxy, a firm connection between a short GRB and its theoretical progenitor required a coincident GW signal. \section{Detailed Analysis} \label{sec:DetailedAnalysis} In addition to the standard analysis that is performed on each GBM-triggered GRB, we include a more detailed analysis of this GRB: investigating the spectrum in different time intervals, estimating the spectral lag properties of this burst, estimating the minimum variability time, and commenting on possible periodic or extended emission. Where applicable, the following analyses employ an improved background estimation technique for weak signals--the same background method used in the targeted search~\citep{Goldstein16}. This background estimation provides a standardized method that does not rely on user selections of background regions, and models the background in each energy channel independently without assuming an approximating polynomial shape of the background. \subsection{Spectral Analysis}\label{sec:spectra} After visual inspection of the lightcurve (shown in Figures~\ref{fig:BTTE_Chan_NaI} and~\ref{fig:BTTE_sum_10to300}), we first select the main pulse from T0$-0.320$ s to T0$+0.256$ s for spectral analysis. We perform the spectral analysis in RMfit with a background model created from the un-binned Poisson maximum likelihood background estimation. This interval is best fit by a Comptonized function with $E_{\rm peak}$\xspace$=185 \pm 62$ keV, $\alpha=-0.62\pm 0.40$, and the resulting time-averaged flux is $(3.1 \pm 0.7)\times 10^{-7} \ \rm erg \ s^{-1} \ cm^{-2}$. The fit to the count rate spectrum is shown in Figure~\ref{fig:mainpulsefit}.
We compare this model to the best-fit power law over the same interval, resulting in a power-law index of $-1.48$. By performing 20,000 simulations assuming the power law as the true source spectrum, we find that the C-stat improvement of 10.6 as observed for the cut-off power law corresponds to a chance occurrence of $1.1\times 10^{-3}$. \mod{Therefore, we conclude that the Comptonized function is statistically preferred over the simple power law.} As can be seen in Figures~\ref{fig:BTTE_Chan_NaI} and~\ref{fig:BTTE_sum_10to300}, the main pulse of the GRB appears to be followed by a weak and soft emission episode. It is not immediately clear if it belongs to the GRB or if it is due to background variability. To ascertain the connection of the soft emission to the main pulse, we localize this soft excess with the standard GBM localization procedures, using the 10--50 keV data and a soft spectral template devised for the localization of well-measured non-GRB transients with softer spectra, such as magnetars or solar flares. We find that the soft emission localizes to RA $= 181^\circ$, Dec $= -30^\circ$ with the 50\% (90\%) credible region approximately circular with a radius of 15 (28) degrees, in good agreement with both the localization of the main pulse and the HLV sky map. In addition to localizing the softer emission, a Bayesian block method was used to analytically determine whether the longer softer emission could be significantly detected. The algorithm~\citep{Scargle2013} characterizes the variability in the TTE data by determining change points in the rate, thereby defining time intervals (called ``blocks'') of differing rates. This method can be used to test for separate statistically significant signals against the Poisson background. The algorithm has previously been used extensively to evaluate Terrestrial Gamma-ray Flash (TGF) candidates found in off-line searches of the GBM TTE data~\citep{Fitzpatrick2014,Roberts2017}.
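The simulation-based model comparison described above (simulate spectra from the simpler model, refit both nested models, and record how often the simulated C-stat improvement exceeds the observed one) can be sketched with toy flat-versus-tilted Poisson models standing in for the power law and the cut-off power law; all rates and grids here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(-1.0, 1.0, 64)
null_rate = np.full(64, 30.0)            # toy stand-in for the null model
slopes = np.linspace(-5.0, 5.0, 101)     # grid over the one extra parameter

def cstat(n, m):
    """Cash statistic for Poisson counts n and model rates m,
    up to a model-independent constant."""
    m = np.clip(m, 1e-10, None)
    return 2.0 * np.sum(m - n * np.log(m), axis=-1)

def delta_cstat(n):
    """C-stat improvement of a flat+tilt model over the flat-model MLE."""
    c_null = cstat(n, np.full_like(n, n.mean()))
    models = n.mean() + slopes[:, None] * x        # (101, 64) model grid
    c_alt = cstat(n, models).min()
    return c_null - c_alt

sims = rng.poisson(null_rate, size=(1000, 64)).astype(float)
deltas = np.array([delta_cstat(n) for n in sims])
p_chance = np.mean(deltas >= 10.6)   # chance probability of the observed gain
```

The point of the simulation is that the distribution of the C-stat improvement between non-nested or boundary-constrained models need not follow a simple $\chi^2$ law, so the chance probability is read off the empirical distribution directly.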
Initially, the TTE data for NaI detectors 1, 2, and 5 were investigated within $\pm$5 s of the GBM trigger time of GRB~170817A and analyzed using a false positive probability (p$_{0}$) of 0.05, previously determined to be a good value from studies by~\citet{Scargle2013}. We find the Bayesian block duration using all three detectors to be 0.647~s; however, when running the analysis again using just NaI 2 (the detector with the best source-detector geometry), some softer emission after the initial pulse is deemed significant enough by the algorithm to extend the duration to 1.12~s. This soft emission after the main pulse is not deemed significant when the algorithm is used separately over the data from NaI 1 and 5. One possible explanation for this is that the effective areas of NaI 1 and 5 are $\sim 20$--$25$\% lower compared to NaI 2 for soft emission (see~\citet{2009Meegan,Bissaldi2009} for effective area dependence on source--detector angle). We find that the spectrum of the soft emission from T0+0.832 to T0+1.984 is well fit by a blackbody with a temperature $kT=10.3\pm1.5$ keV (see Figure~\ref{fig:softfit} for the spectral fit). The blackbody fit has an improvement in C-stat of 18 compared to a power law fit (same number of degrees of freedom), and we find that it is statistically significant, at the $< 1\times10^{-4}$ level, via simulations. Assuming that the blackbody is the true spectrum of the soft emission, the fluence of the soft emission is $\sim$34\% of the main pulse (10--1000 keV range). The results of these spectral fits are listed in Table~\ref{tab:Spectra}. We also attempted to fit a Comptonized function, which approximates the shape of the blackbody with $E_{\rm peak}$\xspace$=38.4 \pm 4.2$ keV and an unconstrained power-law index of $4.3\pm3.0$.
\mod{The large uncertainty in the power-law index is likely due to the fact that the $E_{\rm peak}$\xspace is near the low-energy end of the NaI observing band, so there are not enough energy channels to constrain the power-law index.} The improvement in C-stat of 2 units for the additional degree-of-freedom does not indicate that the Comptonized function is statistically preferred. \mod{If we assume that this softer emission is indeed thermal, this pulse may be explained as photospheric emission from a cocoon. Postulated cocoon emission is ubiquitous in the case of collapsars~\citep{Peer+06cocoon,Nakar+17cocoon}, and may also be present in the binary neutron star merger scenario. In this picture, significant energy is deposited by the jet responsible for the GRB in the surrounding dense material (e.g. in the debris disk)~\citep{ramirezruiz02}. This results in a cocoon that expands until it achieves a mildly relativistic, coasting Lorentz factor, $\Gamma_c\sim $few, according to the same dynamics as GRB jets \citep{Meszaros+93gasdyn}. Cocoon emission, however, subtends a wider opening angle, making it essentially isotropic. A $kT\approx10$ keV temperature blackbody spectrum is in agreement with expectations from such a scenario \citep{Lazzati+17cocoon}. Furthermore, the $T_{\rm soft}\approx1$ s duration of the soft pulse can be related to the typical angular timescale (assuming it is longer than the diffusion timescale at the start of the cocoon expansion), yielding an emission radius $R_{\rm phot,c}\approx 10^{12} ~{\rm cm}~(\Gamma_{\rm c}/4)^2 (T_{\rm soft}/1 ~{\rm s})$ that is also broadly consistent with expectations from a cocoon scenario.} \subsection{Spectral Lag} Spectral lag, the shift of the low-energy lightcurve for a GRB compared to a higher-energy lightcurve, is a well-known phenomenon exhibited in GRBs~\citep{FenimoreLags}. Long GRBs typically have a soft lag, where the low-energy lightcurve lags behind the high-energy lightcurve.
Short GRBs, due to their shorter timescale and generally lower fluence, have spectral lags that are more difficult to measure. Many short GRBs are consistent with zero lag~\citep{BernardiniLags}, while some are consistent with soft lag, and others are consistent with hard lag (high-energy lightcurve lags low energies)~\citep{YiLags}. \mod{There are a number of proposed explanations for the observed spectral lag. Among the likely explanations are effects from synchrotron cooling~\citep{KazanazLags} and kinematic effects due to observing the GRB jet at a large viewing angle~\citep{SariPiran,SalmonsonLags,ChuckCurvature}, both of which can manifest as observed spectral evolution of the prompt emission in the jet~\citep{BandLags,KocevskiLiangLags}.} Several methods have been devised to estimate the spectral lag. We choose to use the discrete cross-correlation function (DCCF) as defined in~\citet{BandLags} to measure the correlation between the lightcurve in two different energy bands. The DCCF has values that typically range from $-1$ (perfect anti-correlation) to +1 (perfect positive correlation). The general method is to shift one lightcurve relative to the other lightcurve, with each time shift discretized as a multiple of the binning resolution. At each time shift, the DCCF is computed. \mod{The DCCF as a function of the time shift should peak when the correlation between the two lightcurves reaches its maximum. If this maximum is $< 1$, it could be due to several effects, particularly the brightness of the lightcurve relative to the background or intrinsic physics of the source that causes significant differences in the lightcurve at different energies.} In order to utilize multiple detectors, we account for the estimated background in each detector, combining the background uncertainties into the calculation of the DCCF.
Sometimes a second-order polynomial is fit to the DCCF to find the maximum; however, this is inadequate when a lightcurve contains many pulses, when the signal is relatively weak, or if the identification of the signal is not precise. Therefore, to find the maximum of the DCCF, \mod{we estimate the trend of the DCCF using non-parametric regression~\citep{lowess}. Because the regression produces no functional form, we perform quadratic interpolation of the regression between the evaluated data points and determine where the regression is at maximum.} To estimate the uncertainty, we create Monte Carlo deviates of the DCCF and fit using the same method. The median and credible interval can then be quoted for the spectral lag. Owing to the paucity of data above $\sim$300 keV, we constrain our inspection of spectral lag to energies below 300 keV. First, we broadly compare the lightcurve in 8--100 keV to the lightcurve in 150--300 keV. We compute the lag using 64 ms binned data for the lightcurve ranging from T0$-0.32$ s to T0$+0.768$ s and find a slight preference for a soft lag of $+150^{+106}_{-140}$ ms. We also sub-divide the low-energy interval into five energy ranges, and calculate the lag in each of those sub-ranges relative to the 150--300 keV lightcurve. As shown in Figure~\ref{fig:Spectral_Lags}, we do not find any significant evolution of spectral lag as a function of energy. There is a preference for a soft lag of $\sim$100 ms; however, due to large uncertainties, this is still generally consistent with zero. We also show in Figure~\ref{fig:Spectral_Lags} the DCCF as a function of time lag for the best constrained low-energy interval: 60--100 keV relative to 150--300 keV.
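The discrete cross-correlation at the heart of this lag measurement can be sketched as follows, on toy noiseless light curves; the published analysis additionally folds in background estimates and Monte Carlo error propagation, which are omitted here:

```python
import numpy as np

def dccf(soft, hard, dt, max_shift):
    """Discrete cross-correlation of two light curves; a positive lag
    means the soft (low-energy) curve trails the hard (high-energy) one."""
    a = (soft - soft.mean()) / soft.std()
    b = (hard - hard.mean()) / hard.std()
    n = len(a)
    shifts = np.arange(-max_shift, max_shift + 1)
    corr = np.array([np.mean(a[max(0, k):n - max(0, -k)] *
                             b[max(0, -k):n - max(0, k)])
                     for k in shifts])
    return shifts * dt, corr

# toy light curves: the soft band trails the hard band by exactly 2 bins
t = np.arange(100)
hard = np.exp(-0.5 * ((t - 50) / 5.0) ** 2)
soft = np.roll(hard, 2)
lags, corr = dccf(soft, hard, dt=0.064, max_shift=10)
best_lag = lags[np.argmax(corr)]      # 2 bins x 64 ms = 0.128 s soft lag
```

In the real measurement the peak of the DCCF is located by non-parametric regression rather than by the raw argmax, since the noisy DCCF of a weak burst rarely has a clean maximum.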
\subsection{Minimum Variability Timescale}\label{sec:varibility} The minimum timescale on which a GRB exhibits significant flux variations has long been thought to provide an upper limit on the size of the emitting region and yield clues to the nature of the burst progenitor\mod{~\citep{Schmidt78,Fenimore93}}. Here we employ a structure function (SF) estimator, based on non-decimated Haar wavelets, in order to infer the shortest timescale at which a GRB exhibits uncorrelated temporal variability. This technique was first employed in \citet{Kocevski2007} to study the variability of X-ray flares observed in afterglow emission associated with \emph{Swift}\xspace-detected GRBs, and further developed by \citet{Golkhou2014} and \citet{Golkhou2015} for use in \emph{Swift}\xspace BAT and GBM data, respectively. Here we follow the method outlined in~\citet{Golkhou2015} in applying the SF estimator to GBM TTE data. We summed 200 $\mu$s resolution light curve data for NaIs 1, 2, and 5 over an energy range of 10--1000 keV. We subtracted a linear background model estimated from T0 $\pm10$ s, which excludes data from the $\rm T_{90}$\xspace interval. The resulting Haar scaleogram showing the flux variation level (i.e.\ power) as a function of timescale can be seen in Figure~\ref{fig:mvt}. The red points represent 3$\sigma$ excesses over the power associated with Poisson noise at a particular timescale, and the triangles denote $3\sigma$ upper limits. We define the minimum variability timescale as the transition between correlated (e.g. smooth, continuous emission) and uncorrelated (e.g. rapid variations or pulsed emission) variability in the data. \mod{As discussed in \citet{Golkhou2014}, the resulting minimum variability timescale $\Delta t_{\rm min}$ does not necessarily represent the shortest observable timescale in the light curve, which tends to depend heavily on the signal-to-noise ratio of the data.
Rather, it is a characteristic timescale that more closely resembles the rise time of the shortest pulses in the data.} Such correlated variability appears in the scaleogram as a linear rise relative to the Poisson noise floor at the smallest timescales, and the break in this slope represents the shift to uncorrelated variability. The linear rise phase and the subsequent break are demarcated by the dashed blue line. The blue circle marks the extracted value of $\Delta t_{\rm min}$. Using the full 10--1000 keV energy range, we obtain $\Delta t_{\rm min}=0.125 \pm 0.064$ s. \mod{Repeating the analysis over two restricted energy ranges covering 10--50 keV and 10--300 keV, we obtained values of 0.312 $\pm$ 0.065 s and 0.373 $\pm$ 0.069 s, respectively. A decrease in $\Delta t_{\rm min}$ as a function of increasing energy matches the results reported by \citet{Golkhou2015} and is consistent with the observed trend of GRB pulse durations decreasing as a function of energy, with the hardest energy channels having the shortest observed durations~\citep{Fenimore1995, Norris1996, KocevskiLiangLags}. Figure~\ref{fig:golkhou} shows the resulting $\Delta t_{\rm min}$ over the 10--1000 keV energy range compared to the full sample of short and long GBM-detected GRBs analyzed by \citet{Golkhou2015}. It is apparent that GRB~170817A is broadly consistent with the short GRB population.} \subsection{Search for Periodic Activity}\label{sec:period} Some short GRB models invoke a newly born millisecond magnetar as a central engine, e.g., \cite{Bernardini2015}. The GBM TTE data were searched for evidence of periodic activity during or immediately before and after the burst that might indicate the pulse period of the magnetar. For two energy ranges, 8--300 keV and 50--300 keV, three time intervals were searched: T0-10 s to T0+10 s, T0-2 s to T0+2 s, and T0-0.4 s to T0+2.0 s, selected by eye to incorporate all possible emission from the burst.
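The scaleogram logic described in the Minimum Variability Timescale subsection above can be illustrated with a schematic stand-in: the published estimator uses non-decimated Haar wavelets and subtracts the Poisson-noise floor, whereas this sketch only shows how correlated, smooth variability produces power that rises linearly with timescale (the regime below the break at $\Delta t_{\rm min}$):

```python
import numpy as np

def haar_scaleogram(x, scales):
    """Haar-difference flux variation versus timescale (in bins):
    RMS difference between mean fluxes of adjacent windows of each width."""
    power = []
    for k in scales:
        nwin = len(x) // k
        means = x[:nwin * k].reshape(nwin, k).mean(axis=1)
        power.append(np.sqrt(np.mean(np.diff(means) ** 2)))
    return np.array(power)

# a smooth ramp: fully correlated variability, so power grows with scale
x = np.linspace(0.0, 1.0, 1024)
p = haar_scaleogram(x, scales=[1, 2, 4, 8, 16])
```

For uncorrelated (noise-like) data the same statistic flattens with scale, and the break between the two behaviors is what the full analysis reads off as $\Delta t_{\rm min}$.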
The TTE data were binned into 0.25 ms bins and input into PRESTO\footnote{\url{http://www.cv.nrao.edu/\%7Esransom/presto/}} \citep{Ransom2001}, a standard software suite used for searches for millisecond pulsars in \emph{Fermi}\xspace/LAT, X-ray, and radio data. Specifically, an accelerated search \citep{Ransom2002} was used to search for drifting periodic signals in the range 8--1999 Hz. Significant red noise, due to the variability of the burst itself, was found at lower frequencies. No significant periodic signals present in all energy ranges and time intervals were detected above $1.5\sigma$. To search for quasi-periodic signals, each time interval above was divided into subintervals (1 s, 0.5 s, and 0.4 s, respectively). Power spectra were generated for each sub-interval and then were averaged over each full time range. No significant quasiperiodic signals were found in any of the time intervals in either energy range. Red noise was present below about 1--2 Hz, consistent with the noise in the periodic searches. \mod{The power above 1--2 Hz was consistent with white noise.} \subsection{Pulse Shape and Start Time}\label{sec:PulseShape} GRB pulse shapes can be well described by analytic functions~\citep{Norris+96pulse,Norris+05pulse,Bhat+12pulse}. These are especially useful to derive more accurate estimates of pulse properties in case the GRB is dim. We adapt the pulse profile described in \citet{Norris+96pulse}, where the pulse shape is given by $I(t)=A \exp{(-((t_{\rm peak}-t)/\sigma_{\rm rise})^\nu)}$ for $t<t_{\rm peak}$ and $I(t)=A \exp{(-((t-t_{\rm peak})/\sigma_{\rm decay})^\nu)}$ for $t>t_{\rm peak}$. Here $A$ is the amplitude at the pulse peak time $t_{\rm peak}$, and $\sigma_{\rm rise}$ and $\sigma_{\rm decay}$ are the characteristic rise and decay times of the pulse, respectively. \mod{\citet{Lee+00peakedness} studied a large sample of GRB pulses, including short GRBs, by fitting the same function.
While they did not discuss short GRBs in particular, $\nu=2$ is close to the median of the distribution of fitted $\nu$ values. We therefore fix the shape parameter to $\nu=2$, which also aids the convergence of the fit.} By fitting the summed lightcurve of NaI 1, 2, and 5 with 32 ms resolution, we find the shape of the main pulse is described by $t_{\rm peak}=-114 \pm 45$ ms, $\sigma_{\rm rise} =129 \pm 54$ ms, and $\sigma_{\rm decay} = 306 \pm 64$ ms \mod{($\chi^2_r=0.99$ for 276 degrees of freedom)}. We define the start time as the time where the pulse reaches 10\% of its peak, and from these pulse parameter values we find $t_{\rm start}=-310 \pm 48$ ms, relative to T0. This pulse shape is used in Section~\ref{sec:Detectability} for the production of a synthetic GRB to estimate its detectability at weaker-than-observed intensities. \section{Limits on Other Gamma-ray Emission}\label{sec:limits} Aside from the prompt emission that triggered GBM, we investigate other possible associated gamma-ray signals: precursors and extended emission lasting several seconds. \mod{While precursors and extended emission have been observed for some short GRBs, claims of long-term or flaring emission on the timescale of several hours or days are rarer. Due to the proximity of the GRB within the GW observing horizon, we also search for persistent emission from GRB~170817A at hard X-ray energies that might, for example, be associated with afterglow emission from the source (e.g.,~\citet{nustar130427a}).} \subsection{Limits on Precursors\label{sec:precursors}} Evidence for precursor emission has been found for GRBs detected by \emph{Swift}\xspace BAT \citep{0004-637X-723-2-1711}, and their existence has been searched for in the SPectrometer for INTEGRAL AntiCoincidence Shield \citep{2016arXiv161202418M}. There are several theoretical models to explain their existence (e.g.
\citealt{2012PhRvL.108a1102T}, \citealt{MetzgerPrecursors}), and some ideas have been developed for their use in joint GW-EM detections (e.g. \citealt{schnittman2017electromagnetic}). In the GW era, there has been increased interest in precursors, as this emission is less relativistically beamed, or potentially isotropic, and might be observable to larger inclination angles than the prompt short GRB emission. A search for precursor emission associated with GBM-detected GRBs with $\rm T_{90}$\xspace $< 2$ s was performed by \citet{burns2017searching}. This work was intended to inform the time range expected for EM counterparts to GWs, \mod{but little evidence was found for precursor activity in the GBM archive of more than 300 short GRBs, with few exceptions~\citep[e.g.][]{grb090510}.} \mod{The largest time offset claimed for possible precursor emission before a short GRB is $\sim$T0-140 s \citep{0004-637X-723-2-1711}, therefore} we use the targeted search~\citep{Blackburn15,Goldstein16} to examine an interval covering T0-200 s to T0, which encompasses all reported putative short GRB precursor offsets and most expected offsets from theoretical and numerical modeling. We note that the lowest $E_{\rm peak}$\xspace among the spectral templates used in the targeted search is only 70 keV, so this search is not especially sensitive to weak events with peak energies below a few tens of keV. We find no significant emission before T0. Therefore, with no detected precursor signals, we calculate upper limits on precursor emission for GRB~170817A using the procedure described in~\citet{FermiGW151226} and \citet{FermiGW170104}. We set a range of upper limits based on three template spectra which we use in our targeted search, generally referred to as a `soft,' `normal,' and `hard' template.
\mod{Using these templates and assuming a 0.1 s (1 s) duration precursor up to 200 s before the GRB, we find a $3\sigma$ flux upper limit range of $6.8-7.3\times10^{-7}$ ($2.0-2.1\times10^{-7}$) $\rm erg \ s^{-1} \ cm^{-2}$ for the soft template, $1.3-1.5\times10^{-6}$ ($3.9-4.2\times10^{-7}$) $\rm erg \ s^{-1} \ cm^{-2}$ for the normal template, and $3.4-3.7\times10^{-6}$ ($9.8-11\times10^{-7}$) $\rm erg \ s^{-1} \ cm^{-2}$ for the hard template.} \subsection{Limits on Extended Emission} Since the launch of \emph{Swift}\xspace BAT, a class of short GRBs with softer extended emission has been discovered \citep{norris2006short}, with a signature short, hard spike typical of short GRBs followed by a weaker long, soft tail extending from a few seconds to more than a hundred seconds. For a source position within its coded-mask field-of-view, \emph{Swift}\xspace BAT background rates are low and relatively stable compared to the high and variable background flux experienced by the uncollimated GBM detector. Therefore, GBM typically does not find evidence of the extended emission unless a GRB is very bright, in which case the extended emission can contribute tens of seconds to the GBM-estimated $\rm T_{90}$\xspace. In general, however, \emph{Swift}\xspace BAT detects extended emission when GBM does not, although mild evidence for that emission can be found in the GBM data starting with the knowledge from the BAT data that it exists. We find no evidence for extended emission, though we note that the sensitivity to such extended emission was not optimal owing to the proximity of \emph{Fermi}\xspace to the SAA and the resulting higher and more variable background rates than elsewhere in the \emph{Fermi}\xspace orbit. 
Using the same procedure as in Section~\ref{sec:precursors}, \mod{we estimate a $3\sigma$ flux upper limit range averaged over a 10 s timescale of $6.4-6.6\times10^{-8} \ \rm erg \ s^{-1} \ cm^{-2}$ for the presence of any soft extended emission out to 100 s after the GBM trigger.} \subsection{Long-Term Gamma-ray Emission Upper Limits} To estimate the amount of persistent emission during a 48-hour period centered at T0, we use the Earth Occultation technique \citep{2012ApJS..201...33W} to place $3\sigma$ day-averaged flux upper limits over the 90\% credible region of the HLV skymap. We use a coarse binning resolution on the sky to inspect the contribution of persistent emission over the sky map, and we compute a flux upper limit for each bin that has been occulted by the Earth at least 6 times in both the 24-hour periods preceding and following T0. This is done to ensure some minimal statistics to compute a day-averaged flux. Nearby known bright sources, determined from \emph{Swift}\xspace BAT monitoring, are automatically included in the model fit and thus accounted for in any calculations of flux from the GRB source. In addition, position bins that contain known bright, flaring sources are removed during post-filtering of the data. The range and median of the flux upper limits over the sky map are shown in Table~\ref{table:upper_limits}, and are consistent with the observed background \mod{on this timescale. Therefore, we find no evidence for persistent emission from GRB~170817A, which is typical for GBM observations of GRBs.} \section{Detectability of GRB~170817A}\label{sec:Detectability} GBM triggered on this GRB despite an increasing background as \emph{Fermi}\xspace approached the SAA, primarily because the source--detector geometry was near optimal for the triggering detectors. After the GBM flight software triggers, it continues to evaluate the remaining trigger algorithms.
Three other trigger algorithms also exceeded their thresholds: a 256 ms interval ending 128 ms after the trigger time, and 512 ms and 1024 ms intervals that ended 256 ms after the trigger time. The significances were $5.16\sigma$, $6.25\sigma$ and $4.52\sigma$, respectively. All four of the trigger algorithms that exceeded their thresholds for this GRB were for the energy range 50--300 keV and are GBM trigger algorithms that typically detect short GRBs. As the thresholds for all four algorithms are $4.5\sigma$, the most sensitive algorithm was the one based on the 512 ms accumulation. At $6.25\sigma$, this GRB could have been dimmed to $\sim 70$\% of its observed brightness and it would still have triggered GBM. The precise sensitivity to a similar GRB at a different time depends on the direction of the GRB relative to \emph{Fermi}\xspace, the background in the GBM detectors at the time of the GRB, and the phasing of the accumulations used for triggering relative to the GRB pulse profile. Even if the flight software were unable to trigger on a weaker version of this GRB, the offline searches developed for multi-messenger counterpart searches of GBM data have lower detection thresholds. The targeted search found this GRB with the `normal' spectral template with a search SNR of 12.7. If we set the reporting threshold for an event of interest at a false alarm rate of $\sim 10^{-4}$ Hz (SNR $\sim5.4$), then this GRB could be weakened to $\sim43$\% of its observed brightness and still be detected with the targeted search, assuming the same background and detector--source geometry. Similarly, the untargeted search for short GRBs could detect this event with medium confidence if it were weakened to $\sim50$\% of its observed brightness, as estimated from simulations of the efficiency of the search using a synthetic pulse injected into suitable background data~\citep{untargetedsearch}.
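The dimming fractions quoted above follow from simple threshold ratios; a minimal sketch, assuming the detection statistic scales linearly with source flux for a fixed background (the function name is ours, not from the GBM pipeline):

```python
# Sketch of the brightness-scaling estimates above, under the assumption
# that the measured significance scales linearly with source flux over a
# fixed background (reasonable for a background-dominated accumulation).

def min_relative_brightness(observed_snr, threshold_snr):
    """Fraction of the observed brightness at which the event would
    still just exceed the detection threshold."""
    return threshold_snr / observed_snr

# Onboard triggering: best accumulation at 6.25 sigma vs. a 4.5 sigma threshold.
flight = min_relative_brightness(6.25, 4.5)    # 0.72, i.e. ~70%

# Targeted offline search: SNR 12.7 vs. a reporting threshold of ~5.4.
targeted = min_relative_brightness(12.7, 5.4)  # ~0.43, i.e. ~43%

print(f"flight software: {flight:.0%}, targeted search: {targeted:.0%}")
```

The same ratio applied to the untargeted search's recovery threshold is what the injection simulations described below quantify more carefully, since in practice the background and pulse phasing also matter.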
Background data from 30 orbits after T0 were used, when \emph{Fermi}\xspace was located in a similar position in its orbit and was in the same orientation. Simulated GRBs were added to the data, using the pulse shape found in Section~\ref{sec:PulseShape} and a cutoff power-law spectrum from the fit described in Section~\ref{sec:spectra}. The intensity was reduced until the simulated GRB was just found at a quality that would result in a \emph{Fermi}\xspace-GBM Subthreshold GCN Notice with a medium reliability score. \section{Summary} We presented observations by \emph{Fermi}\xspace GBM of the first GRB associated with a binary neutron star merger. Our observations show GRB~170817A to most likely be a short--hard GRB, although it appears softer than the typical short GRB detected by GBM. The progenitors of short--hard GRBs have been hypothesized to be mergers of compact binary systems, at least one member of which is a neutron star, which is directly confirmed for GRB~170817A by the associated GW emission~\citep{capstonepaper,jointpaper}. Comparing the standard analysis results to the GBM GRB catalog, we find that this GRB has a lower peak energy than the average short GRB, but may exhibit a harder power-law index. In terms of the 64 ms peak flux, it is one of the weakest short GRBs that GBM has triggered on, though owing to its $\sim 2$ s duration, it has near-median observed fluence. \mod{A more detailed analysis of this GRB uncovers some interesting results. The basic properties (peak energy, spectral slope, and duration) of the main peak are broadly consistent with the leading prompt emission models (e.g.\ dissipative photosphere \citep{Rees+05photdis} or internal shocks \citep{Rees+94is}). Aside from the main peak of $\sim0.5$ s in duration, there appears to be softer emission lasting for $\sim1.1$ s, which has a localization that is in agreement with both the localization of the main peak and the HLV sky map.
This emission strongly favors a blackbody spectrum over the typical power-law found when fitting background-subtracted noise in GBM. If this weak soft emission is associated with GRB~170817A, there are some interesting implications, although the fact that the $\sim 1$ s long potentially thermal soft emission is after the hard non-thermal emission makes it difficult to interpret as typical GRB photospheric emission. There is evidence for a thermal component in some GRBs, but it occurs at the same time as the dominating non-thermal emission~\citep[e.g.][]{Zhang+09mag, Guiriec+13sgrbbb}, or the dominant emission itself can be modeled as a quasi-thermal spectrum which broadens in time to approximate a non-thermal spectrum, possibly a sign of photospheric dissipation~\citep{ryde090902b}. Significantly weaker and lower temperature thermal emission has been observed \citep{Starling+12bbxrt} a few hundred seconds after trigger with \emph{Swift}\xspace XRT, albeit for long GRBs. As discussed in Section~\ref{sec:spectra}, one potential explanation for the presence of this emission is photospheric emission from a cocoon, which is thought to be visible from larger viewing angles than the typical uniform-density annular GRB jet~\citep{Lazzati+17cocoon}.} Aside from this intriguing soft emission, we find no evidence for precursor emission, several second-long extended emission, or day-long flaring or fading from the source. We have also calculated the spectral lag, which we find is consistent with zero at $\sim 1 \sigma$, primarily owing to the weakness of the GRB in the GBM data. However, we do find a systematic preference for a positive (soft) lag, which may be an indication for hard-to-soft spectral evolution within the GRB main pulse. A calculation of the minimum variability timescale for GRB~170817A shows that it is consistent with the shorter variability timescales observed in short GRBs. 
A search for periodic emission associated with the GRB did not find any significant periodic or quasi-periodic activity preceding or following the GRB. \emph{Fermi}\xspace GBM, with instantaneous coverage of 2/3 of the sky and with high uptime ($\sim85$\%), is a key instrument for providing EM context observations to gravitational-wave observations. Joint GW-EM detections with GBM can allow for the confirmation of progenitor types of GRBs, set constraints on fundamental physics such as the speed of gravity and Lorentz Invariance Violation, and further constrain the rates of multimessenger sources. In a companion paper~\citep{jointpaper}, some of these analyses are performed for GRB~170817A. Future GBM triggers may, with latencies of ten seconds to minutes, provide localizations that can reduce the joint localization region, particularly when only a subset of GW detectors are online at the time of a detection. The offline searches can provide localizations for weaker events at delays of up to a few hours.\\ \noindent \mod{We dedicate this paper to the memory of Neil Gehrels, who was an early and fervent advocate of multi-messenger time-domain astronomy and with whom we wish we could have shared the excitement of this tremendous observation.}\\ \noindent \mod{We thank the referee for an exceptionally prompt and helpful report.} The USRA co-authors gratefully acknowledge NASA funding through contract NNM13AA43C. The UAH co-authors gratefully acknowledge NASA funding from co-operative agreement NNM11AA01A. E.B. and T.D.C. are supported by an appointment to the NASA Postdoctoral Program at the Goddard Space Flight Center, administered by Universities Space Research Association under contract with NASA. D.K., C.A.W.H., C.M.H., and T.L. gratefully acknowledge NASA funding through the \emph{Fermi}\xspace GBM project.
Support for the German contribution to GBM was provided by the Bundesministerium f{\"u}r Bildung und Forschung (BMBF) via the Deutsches Zentrum f{\"u}r Luft und Raumfahrt (DLR) under contract number 50 QV 0301. A.v.K. was supported by the Bundesministerium f{\"u}r Wirtschaft und Technologie (BMWi) through DLR grant 50 OG 1101. N.C. and J.B. acknowledge support from NSF under grant PHY-1505373. S.M.B. acknowledges support from Science Foundation Ireland under grant 12/IP/1288. \bibliographystyle{aasjournal}
\section{Introduction} A \emph{complex structure} on a real vector bundle $F$ over a connected CW complex $X$ is a complex vector bundle $E$ over $X$ such that its underlying real vector bundle $E_\mathbb R$ is isomorphic to $F$. A \emph{stable complex structure} on $F$ is a complex structure on $F \oplus \varepsilon^d$, where $\varepsilon^d$ is the $d$-dimensional trivial real vector bundle over $X$. For $X$ a manifold we say that $X$ has an \emph{almost complex structure} (respectively \emph{stable almost complex structure}) if its tangent bundle admits a complex structure (respectively a stable complex structure). Motivated by the question in \cite{mathoverflow}, we consider in this paper the $m$-fold connected sum of complex projective spaces $m\#\mathbb C\mathbb P^{2n}$. As shown by Hirzebruch \cite[Kommentare, p.\ 777]{Hirzebruch}, a necessary condition for the existence of an almost complex structure on a $4n$-dimensional compact manifold $M$ is the congruence $\chi(M) \equiv (-1)^n\sigma(M) \text{ mod } 4$, where $\chi(M)$ is the Euler characteristic and $\sigma(M)$ the signature of $M$. Thus, for even $m$, the connected sums above cannot carry an almost complex structure. We will show that for odd $m$ they do admit almost complex structures, proving \begin{thm}\label{T:Maintheorem} The $m$-fold connected sum $m\# \mathbb C\mathbb P^{2n}$ admits an almost complex structure if and only if $m$ is odd. \end{thm} In odd complex dimensions, the connected sums $m\# \mathbb C\mathbb P^{2n+1}$ are K\"ahler, since $\mathbb C\mathbb P^{2n+1}$ admits an orientation-reversing diffeomorphism, and therefore $m \# \mathbb C\mathbb P^{2n+1}$ is diffeomorphic to $\mathbb C\mathbb P^{2n+1}\# (m-1)\overline{\mathbb C\mathbb P^{2n+1}}$, which is a blow-up of $\mathbb C\mathbb P^{2n+1}$ in $m-1$ points, hence K\"ahler. Furthermore, Theorem \ref{T:Maintheorem} is known for $n=1$ and $n=2$, see \cite{Audin1991} and \cite{GeigesMueller2000} respectively.
In both cases the authors use general results on the existence of almost complex structures on manifolds of dimension $4$ and $8$ respectively. We will prove Theorem \ref{T:Maintheorem} as follows. In \cite[Theorem 1.1]{Sutherland1965} or in \cite[Theorem 1.7]{Thomas1967} the authors showed \begin{thm}\label{T:SACSandACS} Let $M$ be a closed smooth $2d$-dimensional manifold. Then $TM$ admits an almost complex structure if and only if it admits a stable almost complex structure $E$ such that $c_d(E) = e(M)$, where $c_d$ is the $d$-th Chern class of $E$ and $e(M)$ is the Euler class of $M$. \end{thm} In Section \ref{S:Stable_almost_complex_structures} we will describe the full set of stable almost complex structures in the reduced $K$-theory of $m\#\mathbb C\mathbb P^{2n}$. In Section \ref{S:modd} we give, for odd $m$, an explicit example of a stable almost complex structure to which Theorem \ref{T:SACSandACS} applies. \\ \noindent {\it Acknowledgements} We wish to thank Thomas Friedrich for valuable comments on an earlier version of the paper. We are also grateful to the anonymous referee for his careful reading and helpful comments. \section{Stable almost complex structures on $m\# \mathbb C\mathbb P^{2n}$}\label{S:Stable_almost_complex_structures} For a CW complex $X$ let $K(X)$ and ${KO}(X)$ denote the complex and real $K$-groups respectively. Moreover, we denote by $\widetilde K^{}\left(X\right)$ and $\widetilde{KO}^{}\left(X\right)$ the reduced groups. Let $r \colon K(X) \to KO(X)$ denote the real reduction map, which can be restricted to a map $\widetilde K^{}\left(X\right)\to \widetilde{KO}^{}\left(X\right)$. We denote the restricted map again by $r$. A real vector bundle $F$ over $X$ has a stable almost complex structure if there is an element $y \in \widetilde K(X)$ such that $r(y)=F-\dim F$.
Since $r$ is a group homomorphism, the set of all stable complex vector bundles, such that the underlying real vector bundle is stably isomorphic to $F$, is given by \[ y + \ker r \subset \tilde K(X), \] where $y$ is such that $r(y) =F -\dim F$. Let $c \colon KO(X) \to K(X)$ denote the complexification map and $t \colon K(X) \to K(X)$ the map which is induced by complex conjugation of complex vector bundles. The maps $t$ and $c$ are ring homomorphisms, but $r$ preserves only the group structure. The following identities involving the maps $r$, $c$ and $t$ are well known: \begin{align*} c\circ r &= 1+ t \colon K(X) \to K(X),\\ r\circ c &= 2 \colon KO(X) \to KO(X). \end{align*} We will write $\bar y=t(y)$ for an element $y \in K(X)$. For two oriented manifolds $M$ and $N$ of the same dimension $d$, we denote by $M\# N$ the connected sum of $M$ with $N$ which inherits an orientation from $M$ and $N$. First, let us characterise the stable tangent bundle of $M\# N$ by \begin{lem}\label{L:StableTangentBundleConnectedSum} Let $p_M\colon M\# N \to M$ and $p_N \colon M\#N \to N$ be collapsing maps to each factor of $M\# N$. Then we have \[ p_M^\ast(TM)\oplus p_N^\ast(TN) \cong T(M\#N)\oplus \varepsilon^d. \] \end{lem} \begin{proof} Let $D_M \subset M$ and $D_N \subset N$ be embedded closed disks and $W_M$ and $W_N$ collar neighborhoods of $\partial(M \setminus \mathring D_M)$ and $\partial(N \setminus \mathring D_N)$ respectively, where $\mathring D$ denotes the interior of $D$. Thus $W_M \cong S^{d-1}\times [-2,0]$ and $W_N \cong S^{d-1} \times [0,2]$. The manifold $M\#N$ is obtained by identifying $S^{d-1}\times 0 \subset W_M$ with $S^{d-1}\times 0\subset W_N$ by the identity map. Set $W := W_M \cup W_N \subset M\# N$ and note that $V_1:=p_M^\ast(TM)\oplus p_N^\ast(TN)$ as well as $V_2:=T(M\# N)\oplus \varepsilon^d$ are trivial over $W$.
Moreover, let $U_M\subset M\#N$ be the open set diffeomorphic to $(M\setminus W_M)\cup (S^{d-1}\times [-2,-1[)$ and analogously for $U_N \subset M\# N$. Now, since $V_1|_{U_M} \cong p_M^\ast(TM)\oplus\varepsilon^d$ and $p^\ast_M(TM)|_{U_M} = T(M\# N)|_{U_M}$ we have an isomorphism given by $\Phi_M \colon V_2|_{U_M} \to V_1|_{U_M}$, $(\xi,w)\mapsto ((p_M)_\ast(\xi),w)$. For $\Phi_N \colon V_2|_{U_N} \to V_1|_{U_N}$, we set $\Phi_N(\eta,w) =(w,-(p_N)_\ast(\eta))$. Moreover both vector bundles $V_1$ and $V_2$ are trivial over $W$ and it is possible to choose trivializations of $V_1$ and $V_2$ over $W$ such that $\Phi_M$ is given by $(v,w)\mapsto (v,w)$ over $W_M$ and such that $\Phi_N$ is represented by $(v,w)\mapsto (w,-v)$ over $W_N$. Over $S^{d-1}\times [-1,1]$ we can interpolate these isomorphisms by \[ \begin{pmatrix} v \\ w \end{pmatrix}\mapsto \begin{pmatrix} \cos\left(\frac{\pi}{4}(t+1)\right) & \sin\left(\frac{\pi}{4}(t+1)\right)\\ -\sin\left(\frac{\pi}{4}(t+1)\right) & \cos\left(\frac{\pi}{4}(t+1)\right)\\ \end{pmatrix} \begin{pmatrix} v \\ w \end{pmatrix} \] for $t\in [-1,1]$. Using this interpolation we can glue $\Phi_M$ and $\Phi_N$ to a global isomorphism $V_2\to V_1$. \end{proof} Hence $T(M\#N) -d = TM + TN -2d$ in $\widetilde{KO}(M\# N)$, where $TM$ and $TN$ denote the elements in $\widetilde{KO}(M\# N)$ induced by $p_M^*(TM)$ and $p_N^*(TN)$ respectively. This shows that if $M$ and $N$ admit stable almost complex structures so does $M \# N$ (cf. \cite{MR0258037}). For $M=N=\mathbb C\mathbb P^{2n}$ we consider the natural orientation induced by the complex structure of $\mathbb C\mathbb P^{2n}$. We proceed by recalling some basic facts on complex projective spaces. Let $H$ be the tautological line bundle over $\mathbb C\mathbb P^{d}$ and let $x \in H^2(\mathbb C\mathbb P^{d};\mathbb Z)$ be the generator, such that the total Chern class $c(H)$ is given by $1+x$.
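As a numerical aside (not part of the proof), the endpoint behaviour of the interpolation used to glue $\Phi_M$ and $\Phi_N$ in Lemma \ref{L:StableTangentBundleConnectedSum} can be checked directly: the matrix family is the identity at $t=-1$, the map $(v,w)\mapsto(w,-v)$ at $t=1$, and a rotation (hence invertible) throughout.

```python
import numpy as np

# Sanity check of the gluing interpolation from the proof of the lemma:
# R(t) matches Phi_M at t = -1, Phi_N at t = +1, and is a rotation
# (hence an isomorphism in each fiber) for every t in [-1, 1].
def R(t):
    a = np.pi / 4 * (t + 1)
    return np.array([[np.cos(a), np.sin(a)],
                     [-np.sin(a), np.cos(a)]])

assert np.allclose(R(-1), np.eye(2))         # (v, w) -> (v, w)
assert np.allclose(R(1), [[0, 1], [-1, 0]])  # (v, w) -> (w, -v)
assert all(np.isclose(np.linalg.det(R(t)), 1) for t in np.linspace(-1, 1, 21))
```

The invertibility at every $t$ is what makes the glued map $V_2\to V_1$ a bundle isomorphism rather than merely a bundle map.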
The cohomology ring of $\mathbb C\mathbb P^{d}$ is isomorphic to $ \mathbb Z[x]/\langle x^{d+1} \rangle$. The $K$- and $KO$-theory of $\mathbb C\mathbb P^{d}$ are completely understood. Let $\eta:= H-1 \in \widetilde K^{}\left(\mathbb C\mathbb P^{d}\right)$ and $\eta_R:=r(\eta) \in \widetilde{KO}(\mathbb C\mathbb P^{d})$. Then we have \begin{thm}[cf.\ {\cite[Theorem 3.9]{Sanderson1964}}, {\cite[Lemma 3.5]{MR0202131}}, {\cite[p.\ 170]{MR0440554}} and {\cite[Proposition 4.3]{Thomas1974}}]\label{T:KtheoryOfComplexProjectiveSpace}~ \begin{enumerate}[label=(\alph*)] \item $K(\mathbb C\mathbb P^{d}) = \mathbb Z[\eta]/\langle \eta^{d+1}\rangle$. The following sets of elements are an integral basis of $K(\mathbb C\mathbb P^{d})$ \begin{enumerate}[label=(\roman*)] \item $1,\,\eta,\, \eta(\eta+\bar\eta),\, \ldots,\, \eta(\eta +\bar\eta)^{n-1}, (\eta+\bar\eta),\, \ldots,\, (\eta + \bar\eta)^{n}$, and also, in case $d$ is odd, $\eta^{2n+1} = \eta(\eta + \bar\eta)^n$. \item $1,\,\eta,\, \eta(\eta+\bar\eta),\, \ldots,\, \eta(\eta +\bar\eta)^{n-1}, (\eta-\bar\eta),\, (\eta-\bar\eta)(\eta+\bar\eta),\, \ldots,\, (\eta-\bar\eta)(\eta + \bar\eta)^{n-1}$, and also, in case $d$ is odd, $\eta^{2n+1}$, \end{enumerate} where $n$ is the largest integer $\leq d/2$. \item \begin{enumerate}[label=(\roman*)] \item if $d=2n$ then $KO(\mathbb C\mathbb P^{d}) = \mathbb Z[\eta_R]/\langle \eta_R^{n+1}\rangle$, \item if $d=4n+1$ then $KO(\mathbb C\mathbb P^{d}) = \mathbb Z[\eta_R]/\langle \eta_R^{2n+1},2\eta_R^{2n+2} \rangle$, \item if $d=4n+3$ then $KO(\mathbb C\mathbb P^{d}) = \mathbb Z[\eta_R]/\langle \eta^{2n+2}_R \rangle$. \end{enumerate} \item The complex stable tangent bundle is given by $(2n+1)\bar\eta \in \tilde K(\mathbb C\mathbb P^{2n})$ and the real stable tangent bundle is given by $r((2n+1)\bar\eta) \in \widetilde{KO}^{}\left(\mathbb C\mathbb P^{2n}\right)$.
\item The kernel of the real reduction map $r \colon \widetilde K^{}\left(\mathbb C\mathbb P^{d}\right) \to \widetilde{KO}^{}\left(\mathbb C\mathbb P^{d}\right)$ is freely generated by the elements \begin{enumerate}[label=(\roman*)] \item $\eta-\bar\eta, (\eta-\bar\eta)(\eta+\bar\eta),\ldots, (\eta-\bar\eta)(\eta+\bar\eta)^{\tfrac{d}{2}-1}$, if $d$ is even, \item $\eta-\bar\eta, (\eta-\bar\eta)(\eta+\bar\eta),\ldots,(\eta-\bar\eta) (\eta+\bar\eta)^{2n-1}, 2\eta^d$, if $d = 4n+1$, \item $\eta-\bar\eta, (\eta-\bar\eta)(\eta+\bar\eta),\ldots,(\eta-\bar\eta) (\eta+\bar\eta)^{2n}, \eta^d$, if $d = 4n+3$. \end{enumerate} \end{enumerate} \end{thm} Next we would like to describe the integer cohomology ring of $m\#\mathbb C\mathbb P^{2n}$. For that we introduce the following notation: Let $\Lambda$ denote either $\mathbb Z$ or $\mathbb Q$. We define an ideal $R_d(X_1,\ldots,X_m)$ in $\Lambda[X_1,\ldots,X_m]$, where $X_1,\ldots,X_m$ are indeterminates, as the ideal generated by the following elements: \begin{align*} X_i \cdot X_j &,\quad i\neq j\\ X_i^{d} -X_j^{d} &,\quad i\neq j,\\ X_j^{d+1} &,\quad j=1,\ldots,m. \end{align*} Hence we have \begin{equation} H^*\left( m\#\mathbb C\mathbb P^{d};\Lambda \right)\cong \Lambda[x_1,\ldots,x_m]/R_{d}(x_1,\ldots,x_m) \label{Eq:CohomologyOfConnectedSum} \end{equation} where $x_j = p^*_j(x) \in H^2\left( m\#\mathbb C\mathbb P^{d};\Lambda \right)$, for $x \in H^2(\mathbb C\mathbb P^{d};\Lambda)$ defined as above and $p_j\colon m\#\mathbb C\mathbb P^{d} \to \mathbb C\mathbb P^{d}$ the projection onto the $j$-th factor. Note that $p_j$ induces a monomorphism on cohomology.
The stable tangent bundle of $m\#\mathbb C\mathbb P^{2n}$ in $\widetilde{KO}(m\#\mathbb C\mathbb P^{2n})$ is represented by \[ (2n+1)\sum_{j=1}^m r(\bar\eta_j) \] where $\eta_j := p_j^*(\eta) \in \widetilde K^{}\left(m\#\mathbb C\mathbb P^{2n}\right)$ and $r\colon \widetilde K(m\#\mathbb C\mathbb P^{2n}) \to \widetilde{KO}(m\#\mathbb C\mathbb P^{2n})$ is the real reduction map. Hence the set of stable almost complex structures on $m\#\mathbb C\mathbb P^{2n}$ is given by \begin{equation}\label{Eq:Kernel_of_r} (2n+1)\sum_{j=1}^m \bar\eta_j + \ker r. \end{equation} \noindent For $k \in \mathbb N$ and $j=1,\ldots,m$, set $w_j^k=p_j^*(H)^k - p_j^*(H)^{-k}$, $e_j^{n-1}=\eta_j(\eta_j + \bar\eta_j)^{n-1}$ and $\omega = \eta_1^{2n}$. \begin{prop} The kernel of $r\colon \widetilde K^{}\left(m\#\mathbb C\mathbb P^{2n}\right)\to \widetilde{KO}^{}\left(m\# \mathbb C\mathbb P^{2n}\right)$ is freely generated by \begin{enumerate}[label=(\alph*)] \item $\{w_j^k : k=1,\ldots,n-1,\, j=1,\ldots,m \} \cup \{e^{n-1}_1-e^{n-1}_{j} : j= 2,\ldots,m\} \cup \{2e_1^{n-1} -\omega\}$, for $n$ even, \item $\{ w_j^k : k=1,\ldots,n,\, j=1,\ldots,m \}$, for $n$ odd. \end{enumerate} \end{prop} \begin{proof} Consider the cofiber sequence \begin{equation} \bigvee_{j=1}^m \mathbb C\mathbb P^{2n-1} \stackrel{i}{\longrightarrow} m\# \mathbb C\mathbb P^{2n} \stackrel{\pi}{\longrightarrow} S^{4n}. \label{Eq:Cofibration} \end{equation} Note that the line bundle $i^*p_j^*(H)$ is the tautological line bundle over the $j$-th summand of $\vee_{j=1}^m \mathbb C\mathbb P^{2n-1}$ and the trivial bundle on the other summands, since the first Chern classes are the same. For the reduced groups we have \[ \widetilde K^{}\left(\vee_{j=1}^m \mathbb C\mathbb P^{2n-1}\right) \cong \bigoplus_{j=1}^m \widetilde K^{}\left(\mathbb C\mathbb P^{2n-1}\right) \] and $i^*p_j^*(\eta)$ generates the $j$-th summand of the above sum according to Theorem \ref{T:KtheoryOfComplexProjectiveSpace}.
The long exact sequence in $K$-theory of the cofibration \eqref{Eq:Cofibration} is given by \begin{equation}\label{Eq:LES} \cdots \to \widetilde K^{-1}\left(\vee_{j=1}^m \mathbb C\mathbb P^{2n-1}\right)\to \widetilde K^{}\left(S^{4n}\right) \to \widetilde K^{}\left(m\#\mathbb C\mathbb P^{2n}\right)\to \widetilde K^{}\left(\vee_{j=1}^m \mathbb C\mathbb P^{2n-1}\right)\to \widetilde K^{1}\left(S^{4n}\right)\to \cdots \end{equation} From Theorem $2$ in \cite{Fujii1967} we have $\widetilde K^{-1}\left(\mathbb C\mathbb P^{2n-1}\right)=0$, hence $\widetilde K^{-1}\left(\vee_{j=1}^m \mathbb C\mathbb P^{2n-1}\right)=0$, and from Bott periodicity we deduce $\widetilde K^{1}\left(S^{4n}\right)=\widetilde K^{-1}\left(S^{4n}\right)=0$. So we obtain a short exact sequence \[ 0 \longrightarrow \widetilde K^{}\left(S^{4n}\right) \stackrel{\pi^*}{\longrightarrow} \widetilde K^{}\left(m\#\mathbb C\mathbb P^{2n}\right)\stackrel{i^*}{\longrightarrow} \widetilde K^{}\left(\vee_{j=1}^m \mathbb C\mathbb P^{2n-1}\right)\longrightarrow 0, \] which splits, since the groups involved are finitely generated, torsion-free abelian groups. Let $\omega_\mathbb C$ be the generator of $\widetilde K^{}\left(S^{4n}\right)$; then the set \[ \left\{\pi^*(\omega_\mathbb C)\right\}\cup \left\{\eta_j^k: j=1,\ldots,m,\; k=1,\ldots,2n-1\right\} \] is an integral basis of $\widetilde K^{}\left(m\# \mathbb C\mathbb P^{2n}\right)$. We claim that $\eta_j^{2n}=\pi^*(\omega_\mathbb C)$ for all $j$. Indeed, the elements $\eta_j^{2n}$ lie in the kernel of $i^*$, hence there are $k_j \in \mathbb Z$ such that $\eta_j^{2n}=k_j \cdot \pi^*(\omega_\mathbb C)$. Let $\widetilde{ch} \colon \widetilde K^{}\left(X\right) \to \widetilde{H}^{*}\left(X;\mathbb Q \right)$ denote the Chern character for a finite CW complex $X$, then $\widetilde{ch}$ is a monomorphism for $X=m\#\mathbb C\mathbb P^{d}$ (since $\tilde H^{*}(m\#\mathbb C\mathbb P^{d};\mathbb Z)$ has no torsion, cf.
\cite{MR0121801}) and an isomorphism for $X=S^{d}$ onto $\widetilde H^{*}(S^{d};\mathbb Z)$ embedded in $\widetilde H^{*}(S^{d};\mathbb Q)$. Using the notation of \eqref{Eq:CohomologyOfConnectedSum} we have \[ \widetilde{ch}(\eta_j^{2n}) = \left( e^{x_j}-1 \right)^{2n} = x_j^{2n} \] and using the naturality of $\widetilde{ch}$ \[ \widetilde{ch}\left( \pi^*(\omega_\mathbb C) \right) = \pi^*\left( \widetilde{ch}(\omega_\mathbb C) \right) =\pm x_j^{2n} \] since $\pi^*$ is an isomorphism on cohomology in degree $4n$. We can choose $\omega_\mathbb C$ such that $\widetilde{ch}(\pi^*(\omega_\mathbb C))=x_j^{2n}$. This shows $k_j=1$ for all $j$ and $\widetilde K^{}\left(m\#\mathbb C\mathbb P^{2n}\right)$ is freely generated by \[ \left\{\eta_j^k: j=1,\ldots,m,\; k=1,\ldots,2n-1\right\} \cup \{\eta_1^{2n}= \cdots = \eta_m^{2n}\}. \] Hence $K(m\#\mathbb C\mathbb P^{2n})=\mathbb Z[\eta_1,\ldots,\eta_m]/R_{2n}(\eta_1,\ldots,\eta_m)$. Since $p_j^*(H)\otimes p_j^*(\overline H)$ is the trivial bundle we obtain the identity \[ \overline{\eta}_j = \frac{-\eta_j}{1+\eta_j}=-\eta_j+\eta_j^2 - \cdots +\eta_j^{2n}. \] The ring $\mathbb Z[\eta_1,\ldots,\eta_m]/R_{2n}(\eta_1,\ldots,\eta_m)$ is isomorphic to \[ \left. \left( \bigoplus_{j=1}^m \mathbb Z[\eta_j]/\langle \eta_j^{2n+1} \rangle \right) \right/\langle \eta_j^{2n} -\eta_i^{2n} : j\neq i \rangle \] and from Theorem \ref{T:KtheoryOfComplexProjectiveSpace} the set $\Gamma_j$ which contains the elements \begin{align*} &\eta_j,\, \eta_j(\eta_j+\overline \eta_j),\ldots,\eta_j(\eta_j+\overline\eta_j)^{n-1}\\ &\eta_j-\overline\eta_j, (\eta_j-\overline\eta_j)(\eta_j+\overline\eta_j),\ldots,(\eta_j-\overline\eta_j) (\eta_j+\overline\eta_j)^{n-1} \end{align*} together with $\{1\}$ is an integral basis of $\mathbb Z[\eta_j]/\langle \eta_j^{2n+1} \rangle$. Thus the set $\Gamma_1 \cup \ldots \cup \Gamma_m \subset \widetilde K(m\#\mathbb C\mathbb P^{2n})$ generates the group $\widetilde K(m\#\mathbb C\mathbb P^{2n})$.
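The two identities just used can be spot-checked in a computer-algebra model of the truncated ring $\mathbb Z[\eta]/\langle \eta^{2n+1}\rangle$; the following sketch (for $n=2$, illustrative only and not part of the proof) verifies the expansion of $\bar\eta$ and the Chern-character identity $(e^{x}-1)^{2n}=x^{2n}$ modulo $x^{2n+1}$.

```python
import sympy as sp

# Illustrative check in Z[eta]/(eta^(2n+1)) for n = 2; not part of the proof.
x = sp.symbols('x')
n = 2
N = 2 * n + 1  # truncation degree

def trunc(p):
    return sp.rem(sp.expand(p), x**N, x)  # reduce modulo x^N

# eta_bar = -eta/(1+eta) = -eta + eta^2 - ... + eta^(2n) in the quotient ring
eta_bar = trunc(sum((-x)**k for k in range(1, N)))
assert trunc((1 + x) * (1 + eta_bar)) == 1  # H tensor H-bar is trivial

# ch(eta^(2n)) = (e^x - 1)^(2n) = x^(2n) modulo x^(2n+1)
ch_eta = sp.series(sp.exp(x) - 1, x, 0, N).removeO()
assert trunc(ch_eta**(2 * n)) == x**(2 * n)
```

The truncation at degree $2n+1$ plays the role of the relation $\eta^{2n+1}=0$ in $K(\mathbb C\mathbb P^{2n})$.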
Observe that \begin{equation}\label{Eq:RelationBasisElements} (\eta_j+\bar\eta_j)^{k} = 2\eta_j(\eta_j + \bar\eta_j)^{k-1} - (\eta_j-\bar\eta_j)(\eta_j+\bar\eta_j)^{k-1}, \end{equation} thus \begin{equation}\label{Eq:TopBasisElement} \eta_j^{2n} = (\eta_j+\bar\eta_j)^{n} = 2\eta_j(\eta_j + \bar\eta_j)^{n-1} - (\eta_j-\bar\eta_j)(\eta_j+\bar\eta_j)^{n-1}. \end{equation} We set $\omega:=\eta_j^{2n}$ for any $j =1,\ldots,m$ and \begin{align*} e_j^k &:= \eta_j(\eta_j + \bar\eta_j)^{k},\quad j=1,\ldots,m,\quad k=0,\ldots,n-1\\ f_j^k &:= (\eta_j-\bar\eta_j)(\eta_j + \bar\eta_j)^{k},\quad j=1,\ldots,m,\quad k=0,\ldots,n-1 \end{align*} and by virtue of relation \eqref{Eq:TopBasisElement} the set \[ B:=\{\omega\} \cup \{e_j^k \colon j=1,\ldots,m,\, k=0,\ldots,n-1\} \cup \{f_j^k \colon j=1,\ldots,m,\, k=0,\ldots,n-2\} \] is an integral basis of $\widetilde K^{}\left(m\#\mathbb C\mathbb P^{2n}\right)$. We proceed with the computation of $KO(m\#\mathbb C\mathbb P^{2n})$. We have a long exact sequence for $\widetilde{KO}$-theory as in \eqref{Eq:LES}. From Theorem $2$ in \cite{Fujii1967} we deduce $\widetilde{KO}^{-1}\left(\mathbb C\mathbb P^{2n-1}\right)=0$ and therefore $\widetilde{KO}^{-1}\left(\vee_{j=1}^m \mathbb C\mathbb P^{2n-1}\right)=0$. Moreover $\widetilde{KO}^{1}\left(S^{4n}\right)=\widetilde{KO}^{-7}\left(S^{4n}\right)= \widetilde{KO}^{}\left(S^{4n+7}\right)=0$ by Bott periodicity. Hence we obtain a short exact sequence \begin{equation}\label{EQ:SESKOgroups} 0 \longrightarrow \widetilde{KO}^{}\left(S^{4n}\right) \longrightarrow \widetilde{KO}^{}\left(m\#\mathbb C\mathbb P^{2n}\right)\longrightarrow \widetilde{KO}^{}\left(\vee_{j=1}^m \mathbb C\mathbb P^{2n-1}\right) \longrightarrow 0. \end{equation} Now we have to distinguish between the cases where $n$ is even or odd. We first assume that $n=2l$.
In that case the ring $KO\left(\mathbb C\mathbb P^{2n-1}\right)$ is isomorphic to $\mathbb Z[\eta_R]/\langle \eta_R^{n} \rangle$, see Theorem \ref{T:KtheoryOfComplexProjectiveSpace} (b). Hence all groups in \eqref{EQ:SESKOgroups} are torsion free. Therefore the kernel of $r\colon \widetilde K^{}\left(m\#\mathbb C\mathbb P^{2n}\right) \to \widetilde{KO}^{}\left(m\#\mathbb C\mathbb P^{2n}\right)$ is the same as the kernel of \[ \varphi := c\circ r = 1+t \colon \widetilde K^{}\left(m\#\mathbb C\mathbb P^{2n}\right) \to \widetilde K^{}\left(m\# \mathbb C\mathbb P^{2n}\right) \] since $r\circ c = 2$ and thus $c$ is a monomorphism on the torsion-free part of $\widetilde{KO}^{}\left(m\#\mathbb C\mathbb P^{2n}\right)$. Next we compute a basis of $\ker\varphi$. Using relation \eqref{Eq:RelationBasisElements} we have $\varphi(\omega) =2\omega$, $\varphi(e_j^k) =2e_j^k - f_j^k$ and $\varphi(f_j^k)=0$, thus if \[ y = \lambda \omega + \sum_{j=1}^m \sum_{k=0}^{n-1}\lambda_j^k e^k_j \] then \begin{align*} \varphi(y) &= 2\lambda \omega + \sum_{j=1}^m\sum_{k=0}^{n-1} \lambda_j^k (2e_j^k - f_j^k) = \left(2\lambda + \sum_{j=1}^m \lambda_j^{n-1}\right)\omega + \sum_{j=1}^m\sum_{k=0}^{n-2} \lambda_j^k(2e_j^k - f_j^k) \end{align*} where we used that $f_j^{n-1} = 2e_j^{n-1} - \omega$ by Equation \eqref{Eq:TopBasisElement}. As $\omega$ and $2e_j^k - f_j^k$, $j=1,\ldots,m$, $k=0,\ldots,n-2$, are linearly independent, we conclude that $\varphi(y)=0$ if and only if $\lambda^k_j=0$ for $j=1,\ldots,m$, $k=0,\ldots,n-2$ and \[ \sum_{j=1}^m \lambda^{n-1}_j +2\lambda =0. \] This implies that the set \[ \{f_j^k : j=1,\ldots,m,\quad k=0,\ldots,n-2\} \cup \{e^{n-1}_1-e^{n-1}_{j} : j= 2,\ldots,m\} \cup \{2e_1^{n-1} -\omega\} \] is an integral basis of $\ker\varphi$. Note that from \eqref{Eq:TopBasisElement} we have $2e_1^{n-1}-\omega = (\eta_1-\bar\eta_1)(\eta_1 + \bar\eta_1)^{n-1}$.
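That the elements $f_j^k$ really lie in $\ker\varphi$ can be illustrated in the truncated-ring model for a single summand (again $n=2$; an illustrative sketch, not part of the proof), with $t$ realized as the substitution $\eta\mapsto\bar\eta$:

```python
import sympy as sp

# Illustrative check, for n = 2 and one CP^4 summand, that the elements
# f^k = (eta - eta_bar)(eta + eta_bar)^k are annihilated by phi = 1 + t.
x = sp.symbols('x')
n = 2
N = 2 * n + 1

def trunc(p):
    return sp.rem(sp.expand(p), x**N, x)

eta_bar = trunc(sum((-x)**k for k in range(1, N)))  # -eta/(1+eta) mod eta^N

def t(p):  # conjugation, extended to the truncated ring as a ring map
    return trunc(p.subs(x, eta_bar))

assert t(eta_bar) == x  # conjugation is an involution

for k in range(n):
    f = trunc((x - eta_bar) * (x + eta_bar)**k)
    assert trunc(f + t(f)) == 0
```

Since $t$ swaps $\eta$ and $\bar\eta$, it sends $f^k$ to $-f^k$, which is exactly what the loop confirms degree by degree.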
By an inductive argument we see that \begin{equation}\label{Eq:KernelExpressedInwjk} (\eta_j -\bar\eta_j)(\eta_j+\bar\eta_j)^k = w_j^{k+1} + \text{linear combinations of } w_j^1,\ldots,w^k_j \end{equation} and \[ e_1^{n-1} - e_j^{n-1}= \eta_1^{2n-1} -\eta_j^{2n-1}. \] Thus an integral basis of the kernel, in case $n$ is even, is given by \[ \{w_j^k \colon j=1,\ldots,m,\, k=1,\ldots,n-1\} \cup \{w_1^{n}\}\cup \{\eta_1^{2n-1}-\eta_j^{2n-1}\colon j=2,\ldots,m\}. \] Now let us assume that $n= 2l+1$. Consider the commutative diagram \begin{center} \begin{tikzcd} 0 \arrow{r}\arrow{d} & \widetilde K(S^{4n}) \arrow{r}{\pi^*} \arrow{d}{r_S}& \widetilde K^{}\left(m\#\mathbb C\mathbb P^{2n}\right) \arrow{r}{i^*}\arrow{d}{r_{\#}}& \widetilde K(\vee_{j=1}^m \mathbb C\mathbb P^{2n-1}) \arrow{r}\arrow{d}{r_{\vee}}& \arrow{d} 0 \\ 0 \arrow{r} & \widetilde{KO}(S^{4n}) \arrow{r}{\pi^*} & \widetilde{KO}^{}\left(m\#\mathbb C\mathbb P^{2n}\right) \arrow{r}{i^*}& \widetilde{KO}(\vee_{j=1}^m \mathbb C\mathbb P^{2n-1}) \arrow{r}& 0 \end{tikzcd} \end{center} The map $r_S \colon \widetilde K^{}\left(S^{8l+4}\right) \to \widetilde{KO}^{}\left(S^{8l+4}\right)$ is an isomorphism and therefore $i^*|_{\ker r_\#}\colon \ker r_\# \to \ker r_\vee$ is an isomorphism, hence the rank of $\ker r_\#$ is $mn$. We see that the set \[ \{f_j^k : j=1,\ldots,m,\, k=0,\ldots,n-2\} \cup \{ 2e_j^{n-1} : j=1,\ldots,m \} \cup \{\omega\} \] is an integral basis of $(i^*)^{-1}\left( \ker r_\vee \right)$, which follows from $e_j^{n-1} = \eta_j^{2n-1} -(n-1)\omega$ and from the structure of the kernel of $r_\vee$, see Theorem \ref{T:KtheoryOfComplexProjectiveSpace} (d) (ii). The elements $f_j^k$ for $j=1,\ldots,m$ and $k=0,\ldots,n-2$ lie in the kernel of $r_\#$. Let \[ y = \lambda \omega + \sum_{j=1}^m \lambda_j^{n-1}2 e_j^{n-1} \] for $\lambda,\lambda_j^{n-1} \in\mathbb Z$ and suppose $r_\#(y)=0$. 
From $\varphi(\omega) = 2\omega$ and $\varphi(e_j^{n-1}) = (\eta_j + \bar\eta_j)^n = \eta_j^{2n} = \omega$ it follows that \[ \lambda + \sum_{j=1}^m \lambda_j^{n-1} =0. \] Hence $\ker r_\#$ is freely generated by the elements $f_j^k$ and $2e_j^{n-1} - \omega$. Observe from \eqref{Eq:TopBasisElement} that $2e_j^{n-1} - \omega =(\eta_j-\bar\eta_j)(\eta_j+\bar\eta_j)^{n-1}$. Thus, in the case of $n$ odd, we deduce as in \eqref{Eq:KernelExpressedInwjk} that the kernel of $r_\#$ is freely generated by $w_j^k$ for $j=1,\ldots,m$ and $k=1,\ldots,n$. \end{proof} Hence by Equation \eqref{Eq:Kernel_of_r}, stable almost complex structures of $m\#\mathbb C\mathbb P^{2n}$ for $n$ even are given by elements of the form \begin{equation}\label{Eq:SACS} y = (2n+1)\sum_{j=1}^m \bar\eta_j+\sum_{j=1}^m\sum_{k=1}^{n-1} a^k_j w_j^k + a_1^n w_1^n + \sum_{j=2}^{m} b_j (\eta_1^{2n-1} - \eta_j^{2n-1}) \end{equation} and for $n$ odd \begin{equation}\label{Eq:SACSnOdd} y = (2n+1)\sum_{j=1}^m \bar\eta_j+\sum_{j=1}^m\sum_{k=1}^{n} a^k_j w_j^k \end{equation} for $a_j^k, b_j \in \mathbb Z$. For Theorem \ref{T:SACSandACS} we have to compute the $2n$-th Chern class $c_{2n}(E)$ of a vector bundle $E$ representing an element of the form \eqref{Eq:SACS} or \eqref{Eq:SACSnOdd}. Let $\eta_1^{2n-1}-\eta_j^{2n-1}$ also denote a vector bundle over $m\#\mathbb C\mathbb P^{2n}$ which represents the element $\eta_1^{2n-1}-\eta_j^{2n-1}$ in $\widetilde K^{}\left(m\#\mathbb C\mathbb P^{2n}\right)$. The total Chern class of $\eta_1^{2n-1} - \eta_j^{2n-1}$ can be computed through the Chern character: we have \[ \widetilde{ch}(\eta_1^{2n-1} - \eta_j^{2n-1}) =\widetilde{ch}(\eta_1)^{2n-1}- \widetilde{ch}(\eta_j)^{2n-1} = x_1^{2n-1} - x_j^{2n-1}. \] The elements of degree $k$ in the Chern character are given by $\nu_k(c_1,\ldots,c_k)/k!$ where $\nu_k$ are the Newton polynomials. The coefficient in front of $c_k$ in $\nu_k(c_1,\ldots,c_k)$ is $k$ (see \cite{MR1122592}, p. 
195) and the other terms are products of Chern classes of lower degree, hence the only non-vanishing Chern class is given by \[ c_{2n-1}(\eta_1^{2n-1}-\eta_j^{2n-1}) = (2n-2)!\; (x_1^{2n-1}-x_j^{2n-1}). \] Thus the total Chern class of a vector bundle $E$ representing an element of the form \eqref{Eq:SACS} is given by \begin{align*} c(E) &= (1-(x_1+\ldots + x_m))^{2n+1}\\ &\qquad \cdot \left( \frac{1+nx_1}{1-nx_1} \right)^{a_1^n} \prod_{j=2}^m(1+ (2n-2)! (x_1^{2n-1} - x_j^{2n-1}))^{b_j} \prod_{j=1}^m\prod_{k=1}^{n-1} \left(\frac{1+kx_j}{1-kx_j}\right)^{a_j^k} \end{align*} and for \eqref{Eq:SACSnOdd} \[ c(E) = (1-(x_1+\ldots + x_m))^{2n+1} \prod_{j=1}^m\prod_{k=1}^{n} \left(\frac{1+kx_j}{1-kx_j}\right)^{a_j^k}, \] where the coefficient in front of $x_1^{2n}=\ldots=x_m^{2n}$ is equal to $c_{2n}(E)$. \begin{rem} Note that for $m=1$ (and complex projective spaces of arbitrary dimension) this total Chern class was already computed by Thomas, see \cite[p.\ 130]{Thomas1974}. \end{rem} \section{Almost complex structures on $m\# \mathbb C\mathbb P^{2n}$} \label{S:modd} We now describe an explicit stable almost complex structure on $m\# \mathbb C \mathbb P^{2n}$, where $m=2u+1$, for which the assumptions of Theorem \ref{T:SACSandACS} are satisfied, thereby producing an almost complex structure on $m\# \mathbb C \mathbb P^{2n}$. We choose, in the notation of \eqref{Eq:SACS} and \eqref{Eq:SACSnOdd}, $a_{j}^k = 2$ for $j=1,\ldots,u$ and $k=1$, and all other coefficients $0$. Then the top Chern class is as desired: \begin{prop}\label{prop:modd} Let $m=2u+1$ be an odd number. In the cohomology ring of $m\#\mathbb C\mathbb P^{2n}$, the coefficient $c_{2n}$ of $x_1^{2n} = \cdots = x_m^{2n}$ in the class \[ c = (1-(x_1+\cdots + x_{2u+1}))^{2n+1} \prod_{r=1}^u \left(\frac{1+x_r}{1-x_r}\right)^2 \] is \[ c_{2n} = m(2n-1) + 2 = \chi(m\# \mathbb C\mathbb P^{2n}). 
\] \end{prop} \begin{proof} As $x_i\cdot x_j=0$ for $i\neq j$, we have \begin{align*} (1-(x_1+\cdots + x_{2u+1}))^{2n+1} &= \sum_{j_0=0}^{2n+1} (-1)^{j_0}{2n+1\choose j_0} (x_1^{j_0} + \cdots + x_{2u+1}^{j_0})\\ &=\sum_{r=1}^{2u+1} \sum_{j_0=0}^{2n+1} (-1)^{j_0} {2n+1 \choose j_0} x_r^{j_0}. \end{align*} Thus, \[ c = \prod_{r=1}^u (1-x_r)^{2n-1}(1+x_r)^2 \prod_{s=u+1}^{2u+1} (1-x_s)^{2n+1}. \] The factors $(1-x_s)^{2n+1}$ contribute $2n+1$ to $c_{2n}$, whereas the factors $(1-x_r)^{2n-1}(1+x_r)^2$ contribute $2n-3$. Thus, \[ c_{2n} = u(2n-3) + (u+1)(2n+1) = (2u+1)(2n-1) + 2 = \chi((2u+1)\# \mathbb C\mathbb P^{2n}). \] \end{proof}
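The coefficient extraction in the proof above is elementary enough to check mechanically. The following sketch (plain Python written for this exposition, not part of the original source) verifies that the two factor types contribute $2n+1$ and $2n-3$ to $c_{2n}$, and that their weighted sum equals $\chi(m\#\mathbb C\mathbb P^{2n}) = m(2n-1)+2$:

```python
from math import comb

def coeff_s(n):
    """Coefficient of x^{2n} in (1 - x)^{2n+1}."""
    return (-1) ** (2 * n) * comb(2 * n + 1, 2 * n)

def coeff_r(n):
    """Coefficient of x^{2n} in (1 - x)^{2n-1} (1 + x)^2 = (1 - x)^{2n-1}(1 + 2x + x^2)."""
    c = lambda k: (-1) ** k * comb(2 * n - 1, k) if 0 <= k <= 2 * n - 1 else 0
    return c(2 * n) + 2 * c(2 * n - 1) + c(2 * n - 2)

for n in (1, 2, 3, 4):
    assert coeff_s(n) == 2 * n + 1 and coeff_r(n) == 2 * n - 3
    for u in (0, 1, 2, 3):
        m = 2 * u + 1
        # c_{2n} = u(2n-3) + (u+1)(2n+1) = m(2n-1) + 2 = chi(m # CP^{2n})
        assert u * coeff_r(n) + (u + 1) * coeff_s(n) == m * (2 * n - 1) + 2
```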
\section{Introduction} \label{sec:introduction} \input{introduction} \section{Background and Related Work} \label{sec:background} \input{contribution} \section{Methodology: Power and Runtime Modeling} \label{sec:models} \input{models} \section{Experimental Results} \label{sec:results} \input{results} \section{Conclusion} \label{sec:conclusion} \input{conclusion} \small \subsection{Layer-Level Power and Runtime Modeling} \label{subsec:layer-wise model} The first part of \emph{NeuralPower}\xspace consists of layer-level power and runtime models. We construct these models for each type of layer for both runtime and power. More specifically, we choose to model three types of layers, namely the \textit{convolutional}, the \textit{fully connected}, and the \textit{pooling} layer, since these layers carry the main computation load during CNN execution -- as also motivated by \cite{qi2016paleo}. Nevertheless, unlike prior work, our goal is to make our model flexible for various combinations of software/hardware platforms without knowing the details of these platforms. To this end, we propose a learning-based \textit{polynomial regression model} to learn the coefficients for different layers, and we assess the accuracy of our approach against power and runtime measurements on different commercial GPUs and Deep Learning software tools. There are three major reasons for this choice. \emph{First}, in terms of model accuracy, polynomial models provide more flexibility and low prediction error when modeling both power and runtime. The \emph{second} reason is interpretability: runtime and power have a clear physical correlation with the layer's configuration parameters (\emph{e.g.}, batch size, kernel size, \emph{etc.}). That is, the features of the model can provide an intuitive encapsulation of how the layer parameters affect the runtime and power. The \emph{third} reason is the amount of available sampling data points. 
Polynomial models allow for adjustable model complexity by tuning the degree of the polynomial, ranging from a linear model to polynomials of high degree, whereas a formulation with larger model capacity may be prone to overfitting. To perform model selection, we apply ten-fold cross-validation and we use Lasso to decrease the total number of polynomial terms. \added{The detailed model selection process will be discussed in Section \ref{subsec:results:layer-wise}.} \textbf{Layer-level runtime model}: The runtime $\hat{T}$ of a layer can be expressed as: \begin{align} \label{eq:polynomial_runtime} \hat{T}(\mathbf{x}_T) = & \sum _{j} c_j \cdot \prod_{i = 1}^{D_T} {x}_i^{q_{ij}} + \sum_s c^\prime_s \mathcal{F}_s(\mathbf{x}_T)\\ \text{where } & \mathbf{x}_T \in \mathbb{R}^{D_T}; \ q_{ij} \in \mathbb{N}; \ \forall j, \ \sum_{i = 1}^{D_T} q_{ij} \leq K_T. \nonumber \end{align} The model consists of two components. The first component corresponds to the regular degree-$K_T$ polynomial terms which are a function of the features in the input vector $\mathbf{x}_T \in \mathbb{R}^{D_T}$. $x_i$ is the $i$-th component of $\mathbf{x}_T$. $q_{ij}$ is the exponent for $x_i$ in the $j$-th polynomial term, and $c_j$ is the coefficient to learn. This feature vector of dimension $D_T$ includes layer configuration hyper-parameters, such as the batch size, the input size, and the output size. For different types of layers, the dimension $D_T$ is expected to vary. For convolutional layers, for example, the input vector includes the kernel shape, the stride size, and the padding size, whereas such features are not relevant to the formulation/configuration of a fully-connected layer. The second component corresponds to special polynomial terms $\mathcal{F}$, which encapsulate physical operations related to each layer (\emph{e.g.}, the total number of memory accesses and the total number of floating point operations). The number of the special terms differs from one layer type to another. 
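As a concrete sketch of this construction -- the feature names, synthetic measurements, and scaling step below are our own illustrative assumptions, not the exact \emph{NeuralPower}\xspace implementation -- one can augment hypothetical convolutional-layer features with FLOP- and memory-style special terms and fit a Lasso-sparsified polynomial:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)

# Hypothetical features x_T: batch size, input size, kernel size,
# stride, padding, output size (D_T = 6).
X = rng.integers(1, 8, size=(300, 6)).astype(float)
batch, inp, ker, stride, pad, out = X.T

# Special terms F_s: rough FLOP count and memory-access count.
flops = batch * inp**2 * ker**2 * out / stride**2
mem = batch * (inp**2 + out**2)
X_aug = np.column_stack([X, flops, mem])

# Synthetic "measured" runtime, dominated by the special terms plus noise.
y = 1e-3 * flops + 1e-4 * mem + rng.normal(0.0, 0.01, size=300)

# Polynomial expansion of the features, sparsified by cross-validated Lasso.
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      StandardScaler(),
                      LassoCV(cv=10, max_iter=100_000))
model.fit(X_aug, y)
```

With the special terms included as plain features, the Lasso typically keeps them and discards most higher-order monomials, which mirrors the sparsity that the model-selection step is meant to provide.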
For the convolutional layer, for example, the special polynomial terms include the memory access count for the input, output, and kernel tensors, and the total number of floating point operations for all the convolution computations. Finally, $c^\prime_s$ is the coefficient of the $s$-th special term to learn. Based on this formulation, it is important to note that not all input parameters are positively correlated with the runtime. For example, if the stride size increases, the total runtime will decrease since the total number of convolutional operations will decrease. This observation further motivates the use of a polynomial formulation, since it can capture such trends (unlike a posynomial model, for instance). \deleted{Please also note that we use Equation~\ref{eq:polynomial_runtime} to predict the runtime of a CNN during \emph{testing}. This is because (as already motivated in Section \ref{sec:introduction}) the runtime of CNNs at service time is a design metric of crucial importance, especially when designing for energy-constrained mobile platforms or for more complex CNN configurations. However, we can flexibly use a similar formulation to model the runtime during training (\emph{i.e.}, by simply incorporating a model for back-propagation runtime), whose investigation we leave for future work.} \textbf{Layer-level power model}: To predict the power consumption $\hat{P}$ for each layer type during \emph{testing}, we follow a similar polynomial-based approach: \begin{align} \label{eq:polynomial_power} \hat{P}(\mathbf{x}_P) = & \sum _{j} z_j \cdot \prod_{i = 1}^{D_P} {x}_i^{m_{ij}} + \sum_k z^\prime_k \mathcal{F}_k (\mathbf{x}_P)\\ \text{where } & \mathbf{x}_P \in \mathbb{R}^{D_P}; \ m_{ij} \in \mathbb{N}; \ \forall j, \ \sum_{i = 1}^{D_P} m_{ij} \leq K_P. \nonumber \end{align} where the regular polynomial terms have degree $K_P$ and they are a function of the input vector $\mathbf{x}_P \in \mathbb{R}^{D_P}$. 
$m_{ij}$ is the exponent for $x_i$ of the $j$-th polynomial term, and $z_j$ is the coefficient to learn. In the second sum, $z^\prime_k$ is the coefficient of the $k$-th special term to learn. Power consumption, however, has a non-trivial correlation with the input parameters. More specifically, as a metric, power consumption has inherent limits, \emph{i.e.}, it can only take a range of possible values constrained by the power budget. That is, when the computing load increases, power does not increase in a linear fashion. To capture this trend, we select an extended feature vector $\mathbf{x}_P \in \mathbb{R}^{D_P}$ for our power model, where we include the logarithmic terms of the features used for runtime (\emph{e.g.}, batch size, input size, output size, \emph{etc.}). As expected, the dimension $D_P$ is twice the size of the input feature dimension $D_T$. A logarithmic scale in our feature vector can successfully reflect such a trend, as supported by our experimental results in Section \ref{sec:results}. \subsection{Network-Level Power, Runtime, and Energy Modeling} \label{subsec:whole_network} \added{We discuss the network-level models for \emph{NeuralPower}\xspace.} For the majority of CNN architectures readily available in a Deep Learning models ``zoo'' (such as the one compiled by~\cite{jia2014caffe}), the whole structure consists of several layers connected in series. Consequently, using our predictions for power and runtime as building blocks, we extend our predictive models to capture the runtime, the power, and eventually the energy, of the entire architecture at the \emph{network level}. 
\textbf{Network-level runtime model}: Given a network with $N$ layers connected in series, the predicted total runtime can be written as the sum of the predicted runtime $\hat{T}_n$ of each layer $n$: \begin{equation} \label{eq:runtime} \hat{T}_{total} = \sum_{n = 1}^N \hat{T}_n \end{equation} \textbf{Network-level power model}: Unlike the summation for total runtime, the average power can be obtained using both per-layer runtime and power. More specifically, we can represent the average power $\hat{P}_{avg}$ of a CNN as: \begin{equation} \label{eq:power} \hat{P}_{avg} = \frac{\sum_{n = 1}^N \hat{P}_n \cdot \hat{T}_n}{\sum_{n = 1}^N \hat{T}_n} \end{equation} \textbf{Network-level energy model}: From here, it is easy to derive our model for the total energy consumption $\hat{E}_{total}$ of an entire network configuration: \begin{equation} \label{eq:energy} \hat{E}_{total} = \hat{T}_{total} \cdot \hat{P}_{avg} = \sum_{n = 1}^N \hat{P}_n \cdot \hat{T}_n \end{equation} which is basically the scalar product of the layer-wise power and runtime vectors, or the sum of the energy consumption of all layers in the model. \subsection{Dataset Collection} \label{subsec:setup} \textbf{Experiment setup}: The main modeling and evaluation steps are performed on the platform described in Table \ref{tab:Target architecture}. To exclude the impact of voltage/frequency scaling on the collected power and runtime data, we keep the GPU in a fixed state, with the CUDA libraries ready to use, by enabling persistence mode. We use \texttt{nvidia-smi} to collect the instantaneous power every 1 ms for the entire measuring period. Please note that while this experimental setup constitutes our configuration basis for investigating the proposed modeling methodologies, in Section~\ref{subsec:models on other platforms} we present results of our approach on other GPU platforms, such as the Nvidia GTX 1070, and Deep Learning tools, such as Caffe by~\cite{jia2014caffe}. 
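The network-level aggregation above amounts to a scalar product of the per-layer vectors; a minimal sketch, with hypothetical per-layer predictions:

```python
# Hypothetical per-layer predictions: runtime in ms, power in W.
T_hat = [1.2, 0.8, 0.3, 2.1]            # \hat{T}_n for each layer n
P_hat = [210.0, 195.0, 120.0, 90.0]     # \hat{P}_n for each layer n

T_total = sum(T_hat)                                  # sum of layer runtimes
E_total = sum(p * t for p, t in zip(P_hat, T_hat))    # scalar product, in mJ
P_avg = E_total / T_total                             # runtime-weighted average power
```

The three quantities are consistent by construction: the identity $\hat{E}_{total} = \hat{P}_{avg}\cdot\hat{T}_{total}$ holds exactly.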
\begin{table}[ht] \centering \vspace{-6pt} \caption{Target platform} \label{tab:Target architecture} \small \begin{tabular} {l|l} \toprule \textbf{CPU / Main memory} & Intel Core-i7 5820K / 32GB \\ \hline \textbf{GPU} & Nvidia GeForce GTX Titan X (12GB DDR5) \\ \hline \textbf{GPU max / idle power} & 250W / 15W \\ \hline \textbf{Deep learning platform} & TensorFlow 1.0 on Ubuntu 14 \\ \hline \textbf{Power meter} & NVIDIA System Management Interface\\ \bottomrule \end{tabular} \end{table} \textbf{CNN architectures investigated}: To comprehensively assess the effectiveness of our modeling methodology, we consider several CNN models. Our analysis includes state-of-the-art configurations, such as the AlexNet by~\cite{krizhevsky2014one}, VGG-16 \& VGG-19 by~\cite{simonyan2014very}, R-CNN by~\cite{ren2015faster}, NIN network by~\cite{lin2013network}, CaffeNet by~\cite{jia2014caffe}, GoogleNet by~\cite{szegedy2015going}, and Overfeat by~\cite{sermanet2013overfeat}. We also consider different flavors of smaller networks, such as vanilla CNNs used on the MNIST dataset by~\cite{lecun1998gradient} and CIFAR10-6conv~\cite{courbariaux2015binaryconnect} on CIFAR-10. This way, we can cover a wider spectrum of CNN applications. \textbf{Data collection for power/runtime models}: To train the layer-level predictive models, we collect data points by profiling power and runtime from all layers of all the considered CNN architectures in the training set. We separate the training data points into groups based on their layer types. \added{In this paper, the training data include 858 convolution layer samples, 216 pooling layer samples, and 116 fully connected layer samples. These statistics can change if one needs any form of customization.} For testing, we apply our learned models on the networks in the testing set, and compare our predicted results against the actual results profiled on the same platform, including both layer-level evaluation and network-level evaluation. 
\subsection{Layer-Level Model Evaluation} \label{subsec:results:layer-wise} \subsubsection{Model selection} \added{To begin the model evaluation, we first illustrate how model selection has been employed in \emph{NeuralPower}\xspace. In general, \emph{NeuralPower}\xspace changes the order of the polynomial (\emph{e.g.}, $K_T$ in Equation \ref{eq:polynomial_runtime}) to expand/shrink the size of the feature space. \emph{NeuralPower}\xspace applies Lasso to select the best model for each polynomial order. Finally, \emph{NeuralPower}\xspace selects the final model with the lowest cross-validation Root-Mean-Square-Error (RMSE), as shown in Figure \ref{fig:model_selection}.} \begin{figure}[htbp] \centering \small \includegraphics[width=0.99\linewidth]{./fig/model_selection.pdf} \vspace{-15pt} \caption{Comparison of best-performance model with respect to each polynomial order for the fully-connected layers. In this example, a polynomial order of two is chosen since it achieves the best Root-Mean-Square-Error (RMSE) for both runtime and power modeling. At the same time, it also has the lowest Root-Mean-Square-Percentage-Error (RMSPE).} \label{fig:model_selection} \end{figure} \subsubsection{Runtime Models} Applying the model selection process, we obtain a polynomial model for each layer type in a CNN. The evaluation of our models is shown in Table~\ref{tab:perf_model}, where we report the Root-Mean-Square-Error (RMSE) and the relative Root-Mean-Square-Percentage-Error (RMSPE) of our runtime predictions for each one of the considered layers. Since we used Lasso in our model selection process, we also report the model size (\emph{i.e.}, the number of terms in the polynomial) per layer. More importantly, we compare against the state-of-the-art analytical method proposed by \cite{qi2016paleo}, namely Paleo. 
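The model-selection loop described above can be sketched as follows; the data here are synthetic and the pipeline is an illustrative assumption, not the authors' code. Each candidate polynomial order is fitted with cross-validated Lasso, and the order with the lowest cross-validation RMSE wins:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(1)
X = rng.uniform(1.0, 8.0, size=(150, 3))
# Synthetic target with genuine second-order structure.
y = 0.5 * X[:, 0] * X[:, 1] + 0.1 * X[:, 2]**2 + rng.normal(0.0, 0.05, size=150)

scores = {}
for degree in (1, 2, 3):  # candidate polynomial orders
    pipe = make_pipeline(PolynomialFeatures(degree, include_bias=False),
                         StandardScaler(),
                         LassoCV(cv=10, max_iter=100_000))
    mse = -cross_val_score(pipe, X, y, cv=10,
                           scoring='neg_mean_squared_error').mean()
    scores[degree] = float(np.sqrt(mse))  # cross-validation RMSE per order

best_degree = min(scores, key=scores.get)
```

For this synthetic target, the linear model is clearly rejected, while orders two and three score similarly, with Lasso pruning the unneeded third-order terms.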
To enable a comparison here and for the remainder of the section, we executed the Paleo code\footnote{Paleo is publicly available online -- github repository: \url{https://github.com/TalwalkarLab/paleo}} on the considered CNNs. We can easily observe that our predictions based on the layer-level models clearly outperform the best published model to date, yielding an \emph{improvement in accuracy} of up to $68.5\%$. \begin{table}[ht] \centering \caption{Comparison of runtime models for common CNN layers -- Our proposed runtime model consistently outperforms the state-of-the-art runtime model in both Root-Mean-Square-Error (RMSE) and Root-Mean-Square-Percentage-Error (RMSPE).} \label{tab:perf_model} \small \begin{tabular} {l|ccc|cc} \toprule \multirow{2}{*}{Layer type } & \multicolumn{3}{c}{\textbf{\emph{NeuralPower}\xspace}} & \multicolumn{2}{|c}{Paleo~\cite{qi2016paleo}} \\ \cline{2-4}\cline{5-6} & Model size & RMSPE & RMSE (ms) & RMSPE & RMSE (ms)\\ \hline Convolutional & 60 & 39.97\% & 1.019 & 58.29\% & 4.304\\ Fully-connected & 17 & 41.92\% & 0.7474 & 73.76\% & 0.8265\\ Pooling & 31 & 11.41\% & 0.0686 & 79.91\% &1.763 \\ \bottomrule \end{tabular} \end{table} \textbf{Convolutional layer}: The convolution layer is among the most time- and power-consuming components of a CNN. To model this layer, we use a polynomial model of degree three. We select a features vector consisting of the batch size, the input tensor size, the kernel size, the stride size, the padding size, and the output tensor size. In terms of the special terms defined in Equation~\ref{eq:polynomial_runtime}, we use terms that represent the total number of computation operations and the total memory-access count. \textbf{Fully-connected layer}: We employ a regression model with degree two, and as features of the model we include the batch size, the input tensor size, and the output tensor size. 
It is worth noting that in terms of software implementation, there are two common ways to implement the fully-connected layer, either based on default matrix multiplication, or based on a convolutional-like implementation (\emph{i.e.}, by keeping the kernel size exactly the same as the input image patch size). Upon profiling, we notice that both cases have a tensor-reshaping stage when accepting intermediate results from a previous convolutional layer, so we treat them interchangeably under a single formulation. \textbf{Pooling layer}: The pooling layer usually follows a convolution layer to reduce the complexity of the model. As basic model features we select the input tensor size, the stride size, the kernel size, and the output tensor size. Using Lasso and cross-validation we find that a polynomial of degree three provides the best accuracy. \subsubsection{Power Models} As mentioned in Section \ref{subsec:layer-wise model}, we use the logarithmic terms of the original features (\emph{e.g.}, batch size, kernel size, \emph{etc.}) as additional features for the polynomial model since this significantly improves the model accuracy. This modeling choice is well suited for the nature of power consumption, which does not scale linearly; more precisely, the rate of the increase in power goes down as the model complexity increases, especially when the power values get closer to the power budget limit. For instance, in our setup, the Titan X GPU platform has a maximum power of 250W. We find that a polynomial order of two achieves the best cross-validation error for all three layer types under consideration. To the best of our knowledge, there is no prior work on power prediction at the layer level to compare against. We therefore compare our methods directly with the actual power values collected from TensorFlow, as shown in Table~\ref{tab:power_model}. Once again, we observe that our proposed model formulation achieves an error always less than $9\%$ for all three layers. 
The slight increase in the model size compared to the runtime model is to be expected, given the inclusion of the logarithmic feature terms, alongside special terms that include memory accesses and operations count. We can observe, though, that the model, trained across layer sizes and types, is able to capture the trends of power consumption. \begin{table}[ht] \centering \caption{Power model for common CNN layers} \label{tab:power_model} \small \begin{tabular} {lccc} \toprule \multirow{2}{*}{Layer type } & \multicolumn{3}{c}{\textbf{\emph{NeuralPower}\xspace}} \\ \cline{2-4} & Model size & RMSPE & RMSE (W) \\ \hline Convolutional & 75 & 7.35\% & 10.9172 \\ Fully-connected & 15 & 9.00\% & 10.5868 \\ Pooling & 30 & 6.16\% & 6.8618\\ \bottomrule \end{tabular} \end{table} \subsection{Network-level Modeling Evaluation} With the results from the layer-level models, we can model the runtime, power, and energy for the whole network based on the network-level model (Section~\ref{subsec:whole_network}) in \emph{NeuralPower}\xspace. To enable a comprehensive evaluation, we assess \emph{NeuralPower}\xspace on several state-of-the-art CNNs, and we compare against the actual runtime, power, and energy values of each network. For this purpose, and as discussed in Section~\ref{subsec:setup}, we leave out a set of networks to be used only for testing, namely the VGG-16, NIN, CIFAR10-6conv, AlexNet, and Overfeat networks. \subsubsection{Runtime evaluation} \textbf{Enabling network runtime profiling}: Prior to assessing the predictions on the networks as a whole, we show the effectiveness of \emph{NeuralPower}\xspace as a useful aid for CNN architecture benchmarking and per-layer profiling. Enabling such breakdown analysis is significant for machine learning practitioners, since it allows one to identify the bottlenecks across components of a CNN. 
\deleted{Hence, we show first that \emph{NeuralPower}\xspace can be used for the accurate profiling of different components of a network of interest, outperforming the best published model to date in accuracy.} For runtime, we use the state-of-the-art analytical model Paleo as the baseline. In Figure~\ref{fig:runtime}, we compare runtime prediction models from \emph{NeuralPower}\xspace and the baseline against actual runtime values of each layer in the NIN and VGG-16 networks. From Figure \ref{fig:runtime}, we can clearly see that our model outperforms the Paleo model in accuracy for most layers. For the NIN, our model clearly captures that \textit{conv4} is the dominant (most time-consuming) layer across the whole network. However, Paleo erroneously identifies \textit{conv2} as the dominant layer. For the VGG-16 network, we can clearly see that Paleo predicts the runtime of the first fully-connected layer \textit{fc6} as 3.30 ms, with a percentage prediction error as high as -96.16\%. In contrast, our prediction exhibits an error as low as -2.53\%. Since layer \textit{fc6} is the dominant layer throughout the network, it is critical to make a correct prediction on this layer. From the above, we can conclude that our proposed methodology generally has better accuracy in predicting the runtime for each layer in a complete CNN, especially for the layers with larger runtime values. Therefore, our accurate runtime predictions, when employed for profiling each layer at the network level, can help machine learners and practitioners quickly identify the real bottlenecks with respect to runtime for a given CNN. \begin{figure}[htbp] \centering \small \includegraphics[width=0.99\linewidth]{./fig/runtime.pdf} \vspace{-15pt} \caption{Comparison of runtime prediction for each layer in NIN and VGG-16: Our models provide accurate runtime breakdowns of both networks, while Paleo cannot. 
Our model captures the execution-bottleneck layers (\emph{i.e.}, \emph{conv4} in NIN, and \emph{fc6} in VGG-16) while Paleo mispredicts both.} \label{fig:runtime} \end{figure} \textbf{Network-level runtime evaluation}: Having demonstrated the effectiveness of our methodology at the layer level, we proceed to assess the accuracy of the network-level runtime prediction $\hat{T}_{total}$ (Equation~\ref{eq:runtime}). It is worth observing that in Equation~\ref{eq:runtime} there are two sources of potential error. First, error could result from mispredicting the runtime values $\hat{T}_{n}$ per layer $n$. However, even if these predictions are correct, another source of error could come from the formulation in Equation~\ref{eq:runtime}, where we assume that the sum of the runtime values of all the layers in a CNN provides a good estimate of the total runtime. Hence, to achieve a comprehensive evaluation of our modeling choices in terms of both the regression formulation and the summation in Equation~\ref{eq:runtime}, we need to address both these concerns. To this end, we compare our runtime prediction $\hat{T}_{total}$ against two metrics. \emph{First}, we compare against the actual overall runtime value of a network, denoted by $T_{total}$. \emph{Second}, we consider another metric defined as the sum of the actual runtime values $T_n$ (and not the predictions) of each layer $n$: \begin{equation} \label{eq:runtime_comparison} \mathbb{T}_{total} = \sum_{n = 1}^N T_n \end{equation} Intuitively, a prediction value $\hat{T}_{total}$ close to both the $\mathbb{T}_{total}$ value and the actual runtime $T_{total}$ would not only show that our model has good network-level prediction, but also that our underlying modeling assumptions hold. We summarize the results across five different networks in Table \ref{tab:whole_model_runtime}. 
More specifically, we show the networks' actual total runtime values ($T_{total}$), the runtime $\mathbb{T}_{total}$ values, our predictions $\hat{T}_{total}$, and the predictions from Paleo (the baseline). Based on the table, we can make two key observations. First, we can clearly see that our model always outperforms Paleo, with runtime predictions always within $24\%$ of the actual runtime values. Unlike our model, prior art could underestimate the overall runtime by up to $42\%$. Second, as expected, we see that summing the true runtime values per layer does indeed approximate the total runtime, hence confirming our assumption in Equation~\ref{eq:runtime}. \vspace*{-6pt} \begin{table}[ht] \centering \caption{Performance model comparison for the whole network. We can easily observe that our model always provides more accurate predictions of the total CNN runtime compared to the best published model to date (Paleo). We assess the effectiveness of our model on five different state-of-the-art CNN architectures.} \label{tab:whole_model_runtime} \small \begin{tabular} {c|c|c|c||c} \toprule CNN & \cite{qi2016paleo} & \textbf{\emph{NeuralPower}\xspace} & Sum of per-layer actual & Actual runtime \\ name & Paleo (ms) & $\hat{T}_{total}$ (ms) & runtime $\mathbb{T}_{total}$ (ms) & $T_{total}$ (ms) \\ \hline VGG-16 & 345.83 & 373.82 & 375.20 & 368.42 \\ AlexNet & 33.16 & 43.41 & 42.19 & 39.02 \\ NIN & 45.68 & 62.62 & 55.83 & 50.66 \\ Overfeat & 114.71 & 195.21 & 200.75 & 197.99 \\ CIFAR10-6conv & 28.75 & 51.13 & 53.24 & 50.09 \\ \bottomrule \end{tabular} \end{table} \subsubsection{Power evaluation} \textbf{Enabling power network profiling}: We present a similar evaluation methodology to assess our model for network-level power predictions. We first use our methodology to enable a per-layer benchmarking of the power consumption. Figure \ref{fig:power} shows the comparison of our power predictions and the actual power values for each layer in the NIN and the VGG-16 networks. 
We can see that convolutional layers dominate in terms of power consumption, while pooling layers and fully connected layers contribute relatively less. We can also observe that the convolutional layer exhibits the largest variance with respect to power, with power values ranging from 85.80W up to 246.34W. Another key observation is related to the fully-connected layers of the VGG-16 network. From Figure \ref{fig:runtime}, we know layer \textit{fc6} takes the longest time to run. Nonetheless, we can see in Figure \ref{fig:power} that its power consumption is relatively small. Therefore, the energy consumption related to layer \textit{fc6} will have a smaller contribution to the total energy consumption of the network relative to its runtime. It is therefore evident that using only the runtime as a proxy proportional to the energy consumption of CNNs could mislead machine learners into erroneous assumptions. This observation highlights that power also plays a key role towards representative benchmarking of CNNs, hence further illustrating the significance of accurate power predictions enabled by our approach. 
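For reference, the RMSPE figures reported in this section can be reproduced from the tables, assuming the standard definition of the metric; the network-level power numbers of Table \ref{tab:whole_model_power} indeed yield the quoted $11.66\%$:

```python
import math

def rmspe(pred, actual):
    """Root-Mean-Square-Percentage-Error (assumed standard definition)."""
    return math.sqrt(sum(((p - a) / a) ** 2 for p, a in zip(pred, actual))
                     / len(actual))

# Network-level power predictions vs. actual averages (W), in the order
# VGG-16, AlexNet, NIN, Overfeat, CIFAR10-6conv.
pred   = [206.88, 174.25, 179.98, 172.20, 165.33]
actual = [204.80, 194.62, 226.34, 172.30, 188.34]
print(round(100 * rmspe(pred, actual), 2))  # -> 11.66
```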
\begin{figure}[htbp] \centering \small \includegraphics[width=0.99\linewidth]{./fig/power.pdf} \vspace{-15pt} \caption{Comparison of power prediction for each layer in NIN and VGG-16.} \label{fig:power} \end{figure} \begin{table}[ht] \centering \vspace{-12pt} \caption{Evaluating our power predictions for state-of-the-art CNN architectures.} \label{tab:whole_model_power} \small \begin{tabular} {c|c|c||c} \toprule CNN & \textbf{\emph{NeuralPower}\xspace} & Sum of per-layer actual & Actual power \\ name & $\hat{P}_{total}$ (W) & power $\mathbb{P}_{total}$ (W) & $P_{avg}$ (W) \\ \hline VGG-16 & 206.88 & 195.76 & 204.80 \\ AlexNet & 174.25 & 169.08 & 194.62 \\ NIN & 179.98 & 187.99 & 226.34 \\ Overfeat & 172.20 & 168.40 & 172.30 \\ CIFAR10-6conv & 165.33 & 167.86 & 188.34 \\ \bottomrule \end{tabular} \end{table} \textbf{Network-level power evaluation}: As in the runtime evaluation, we assess both our predictive model's accuracy and the underlying assumptions in our formulation. In terms of average power consumption, we need to confirm that the formulation selected in Equation~\ref{eq:power} is indeed representative. To this end, besides the comparison against the actual average power of the network $P_{avg}$, we compare against the average value $\mathbb{P}_{avg}$, which can be computed by replacing our predictions $\hat{P}_n$ and $\hat{T}_n$ with the actual per-layer power and runtime values: \begin{equation} \label{eq:power_comparison} \mathbb{P}_{avg} = \frac{\sum_{n = 1}^N {P}_n \cdot {T}_n}{\sum_{n = 1}^N {T}_n} \end{equation} We evaluate our power predictions for the same five state-of-the-art CNNs in Table \ref{tab:whole_model_power}. Compared to the actual power values, our predictions have an RMSPE of $11.66\%$. We observe that in two cases, AlexNet and NIN, our predictions have larger errors of $10.47\%$ and $20.48\%$, respectively.
This is to be expected, since our power prediction depends on the runtime predictions as well, and as observed previously in Table~\ref{tab:whole_model_runtime}, the runtime predictions for these two networks exhibit the largest errors. \subsubsection{Energy evaluation} Finally, we use Equation~\ref{eq:energy} to predict the total energy based on our model. To evaluate our modeling assumptions as well, we compute the energy value $\mathbb{E}_{total}$ based on the actual per-layer runtime and power values, defined as: \begin{equation} \label{eq:energy_comparison} \mathbb{E}_{total} = \sum_{n = 1}^N {P}_n \cdot {T}_n \end{equation} We present the results for the same five CNNs in Table~\ref{tab:whole_model_energy}. We observe that our approach enables good predictions, with an average RMSPE of $2.79\%$. \begin{table}[ht] \centering \caption{Evaluating our energy predictions for state-of-the-art CNN architectures.} \label{tab:whole_model_energy} \small \begin{tabular} {c|c|c||c} \toprule CNN & \textbf{\emph{NeuralPower}\xspace} & Sum of per-layer actual & Actual energy \\ name & $\hat{E}_{total}$ (J) & energy $\mathbb{E}_{total}$ (J) & $E_{total}$ (J) \\ \hline VGG-16 &77.312 & 73.446 & 75.452 \\ AlexNet & 7.565 & 7.134 & 7.594 \\ NIN &11.269 & 10.495 & 11.465 \\ Overfeat &33.616 & 33.807 & 34.113 \\ CIFAR10-6conv & 8.938 & 8.453 & 9.433 \\ \bottomrule \end{tabular} \end{table} \subsection{Energy-Precision Ratio} In this subsection, we propose a new metric, namely the \textit{Energy-Precision Ratio}, to be used as a guideline for machine learners towards accurate, yet energy-efficient CNN architectures. We define the metric as: \begin{equation} M = Error^\alpha \cdot EPI \end{equation} where $Error$ is the data classification error of the CNN under consideration, and $EPI$ is the energy consumption per data item classified.
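The $EPI$ term builds directly on the energy model above; Equations~\ref{eq:power_comparison} and~\ref{eq:energy_comparison} reduce to a few lines of code (the per-layer values below are hypothetical placeholders, not measurements):

```python
# Runtime-weighted average power (Eq. power_comparison) and total energy
# (Eq. energy_comparison) from per-layer values.
# T_n in ms and P_n in W are hypothetical placeholders, so energy is in mJ.
layer_T = [12.0, 30.0, 8.0]
layer_P = [210.0, 150.0, 95.0]

E_total_mJ = sum(p * t for p, t in zip(layer_P, layer_T))  # sum of P_n * T_n
P_avg_W = E_total_mJ / sum(layer_T)                        # runtime-weighted mean
print(f"P_avg = {P_avg_W:.1f} W, E_total = {E_total_mJ / 1000.0:.2f} J")
# → P_avg = 155.6 W, E_total = 7.78 J
```

Note that the average power is a runtime-weighted mean, so it always lies between the smallest and largest per-layer power values.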
Different values of the parameter $\alpha$ dictate how much importance is placed on the accuracy of the model, since a larger $\alpha$ places more weight on the CNN classification error. To illustrate how $\alpha$ affects the results, in Table \ref{tab:energy_ratio} we compute the $M$ score values for VGG-16, AlexNet, and NIN, all trained on the ImageNet dataset (as the $Error$ value we use their Top-5 error). We use our predictive model for energy to compute the energy consumption per image classified. Intuitively, a lower $M$ value indicates a better trade-off between the energy efficiency and accuracy of a CNN architecture. For instance, we can see that while VGG-16 has the lowest error, this comes at the price of increased energy consumption compared to both NIN and AlexNet. Hence, for $\alpha=1$ both AlexNet and NIN have a smaller $M$ value. In this case, a machine learner designing an embedded Internet-of-Things (IoT) system could use the \textit{Energy-Precision Ratio} to select the most energy-efficient architecture. On the contrary, with $\alpha=4$, \emph{i.e.}, when accuracy is heavily favored over energy efficiency, the $M$ value of VGG-16 is smaller than that of AlexNet. \begin{table}[ht] \centering \caption{$M$ metric for different CNN architectures and Energy-per-Image (EPI) values.
Network choices could be different for different $\alpha$ values: AlexNet for $\alpha = 1, 2, 3$, VGG-16 for $\alpha = 4$.} \label{tab:energy_ratio} \small \begin{tabular} {ccccccc} \toprule \multirow{2}{*}{CNN name } & \multirow{2}{*}{Top-5 Error } & \multirow{2}{*}{EPI (mJ) } & \multicolumn{4}{c}{M} \\ \cline{4-7} & & & $\alpha$ = 1 & $\alpha$ = 2& $\alpha$ = 3 & $\alpha$ = 4\\ \hline VGG-16 & 7.30\% & 1178.9 & 86.062 & 6.283 & 0.459 & \textbf{0.033}\\ AlexNet & 17.00\% & 59.3 & \textbf{10.086} & \textbf{1.715}& \textbf{0.291} & 0.050 \\ NIN & 20.20\% & 89.6 & 18.093 & 3.655 & 0.738 & 0.149 \\\bottomrule \end{tabular} \end{table} With the recent surge of IoT and mobile learning applications, machine learners need to take the energy efficiency of different candidate CNN architectures into consideration, in order to identify the CNN most suitable for deployment on an energy-constrained platform. For instance, consider the case of a machine learner who has to choose among numerous CNNs from a CNN model ``zoo'', with the best error of each CNN readily available. \emph{Which network provides the best trade-off between accuracy and energy spent per image classified?} Traditionally, the main criterion for choosing among CNN architectures has been the data item classification accuracy, given its intuitive nature as a metric. However, so far there has been no easily interpretable metric for trading off energy efficiency against accuracy. Towards this direction, we can use our proposed model to quickly predict the total energy consumption of all these different architectures, and we can then compute the $M$ score to select the one that properly trades off accuracy against energy spent per image classified. We postulate that the \textit{Energy-Precision Ratio}, alongside our predictive model, could therefore serve as a useful aid for machine learners \emph{towards energy-efficient CNNs}.
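As a sanity check, the rankings in Table~\ref{tab:energy_ratio} can be reproduced from the Top-5 error and EPI columns alone:

```python
# Energy-Precision Ratio M = Error^alpha * EPI (lower is better), using the
# Top-5 error and EPI (mJ) columns of Table tab:energy_ratio.
nets = {"VGG-16": (0.073, 1178.9), "AlexNet": (0.170, 59.3), "NIN": (0.202, 89.6)}

def best(alpha):
    """CNN with the smallest M score for a given accuracy weight alpha."""
    return min(nets, key=lambda n: nets[n][0] ** alpha * nets[n][1])

print(best(1), best(4))  # → AlexNet VGG-16
```

The crossover between AlexNet and VGG-16 happens between $\alpha = 3$ and $\alpha = 4$, matching the bolded entries in the table.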
\subsection{Models on other platforms} \label{subsec:models on other platforms} Our key goal is to provide a modeling framework that can be flexibly used across different hardware and software platforms. To demonstrate this property of our work, we extend our evaluation to a different GPU platform, namely the Nvidia GTX 1070, and another Deep Learning software tool, namely Caffe by \cite{jia2014caffe}. \subsubsection{Extending to other hardware platforms: TensorFlow on Nvidia GTX 1070} We first apply our framework to another GPU platform, and more specifically to the Nvidia GTX 1070 with 6GB memory. We repeat the runtime and power data collection by executing TensorFlow, and we train power and runtime models on this platform. The layer-wise evaluation results are shown in Table \ref{tab:1070_layer}. For these results, we used the same polynomial orders as reported previously for the TensorFlow on Titan X experiments. Moreover, we evaluate the overall network predictions for runtime, power, and energy, and we present the predicted values and the prediction error (denoted as Error) in Table~\ref{tab:whole_model_runtime-tf-1070}. Based on these results, we can see that our methodology achieves consistent performance across different GPU platforms, thus enabling a scalable and portable framework for machine learning practitioners to use across different systems.
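The tables that follow report RMSPE and RMSE; assuming the standard definitions (with percentage errors taken relative to the measured values), these metrics can be sketched as:

```python
import math

def rmse(actual, pred):
    """Root-mean-square error, in the units of the measured quantity."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def rmspe(actual, pred):
    """Root-mean-square percentage error, relative to the measured values."""
    return math.sqrt(sum(((a - p) / a) ** 2 for a, p in zip(actual, pred)) / len(actual))

t_actual = [4.0, 2.0, 8.0]   # hypothetical measured layer runtimes (ms)
t_pred = [4.4, 1.8, 8.8]     # hypothetical model predictions
print(f"RMSE = {rmse(t_actual, t_pred):.4f} ms, RMSPE = {rmspe(t_actual, t_pred):.1%}")
# → RMSE = 0.5292 ms, RMSPE = 10.0%
```

RMSPE is scale-free, which is why it is the more comparable metric across layer types whose runtimes differ by orders of magnitude.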
\begin{table}[ht] \centering \caption{Runtime and power model for all layers using TensorFlow on GTX 1070.} \label{tab:1070_layer} \small \begin{tabular} {l|ccc|ccc} \toprule \multirow{2}{*}{Layer type } & \multicolumn{3}{c}{Runtime} & \multicolumn{3}{|c}{Power} \\ \cline{2-7} & Model size & RMSPE & RMSE (ms) & Model size & RMSPE & RMSE (W)\\ \hline Convolutional & 10 & 57.38\% & 3.5261 & 52 & 10.23\% & 9.4097\\ Fully-connected & 18 & 44.50\% & 0.4929 & 23 & 7.08\% & 5.5417\\ Pooling & 31 & 11.23\% & 0.0581 & 40 & 7.37\% & 5.1666 \\ \bottomrule \end{tabular} \end{table} \begin{table}[ht] \centering \caption{Evaluation of \emph{NeuralPower}\xspace on CNN architectures using TensorFlow on GTX 1070.} \label{tab:whole_model_runtime-tf-1070} \small \begin{tabular} {l|cc|cc|cc} \toprule \multirow{2}{*}{CNN name} & \multicolumn{2}{c}{Runtime} & \multicolumn{2}{|c|}{Power} & \multicolumn{2}{c}{Energy} \\ \cline{2-7} & Value (ms) & Error & Value (W) & Error & Value (J) & Error\\ \hline AlexNet & 44.82 & 17.40\% & 121.21 & -2.92\% & 5.44 & 13.98\% \\ NIN & 61.08 & 7.24\% & 120.92 & -4.13\% & 7.39 & 2.81\% \\ \bottomrule \end{tabular} \end{table} \subsubsection{Extending to other machine learning software environments: Caffe} Finally, we demonstrate the effectiveness of our approach across different Deep Learning software packages by replicating our exploration on Caffe. To collect the power and runtime data for our predictive models, we use Caffe's supported mode of operation for benchmarking, namely \texttt{time}. While this functionality benchmarks the model execution layer by layer, the default Caffe version reports only timing. To this end, we extend Caffe's \texttt{C++} codebase and incorporate calls to Nvidia's \texttt{NVML C++ API} to report power values. We present the per-layer accuracy for runtime and power predictions in Table~\ref{tab:caffe-1070}.
Furthermore, we evaluate our model on the AlexNet and NIN networks in Table~\ref{tab:whole_model_runtime-caffe-gtx}. Please note that the execution of the entire network corresponds to a different routine under the Caffe framework, so a direct comparison is not possible. We instead compare against Equations~\ref{eq:runtime_comparison}-\ref{eq:energy_comparison}, as in the previous subsection. As in the TensorFlow case before (Table~\ref{tab:1070_layer}), we observe that the runtime predictions exhibit a larger error on the GTX 1070 system. This is to be expected, since the GTX 1070 drivers do not allow the user to clock the GPU to a particular frequency state, hence the system dynamically selects its execution state. Indeed, in our collected datapoints we observed higher variance in Caffe's (and, previously, TensorFlow's) runtime values. To properly capture this behavior, we experimented with regressors other than second- and third-degree polynomials. For these Caffe results in particular, we select linear models, since they provided a better trade-off between training error and overfitting.
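The model-order choice can be illustrated with a held-out split. The data below are synthetic (noisy samples around a linear trend, standing in for measured runtimes), and the comparison only shows how one would weigh a low- against a high-degree polynomial on unseen data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "layer size -> runtime" samples around a linear trend (illustrative
# stand-in for noisy measured runtimes; none of these are real measurements).
x = rng.uniform(1.0, 10.0, 60)
t = 0.8 * x + 0.3 + rng.normal(0.0, 0.4, 60)
x_tr, t_tr, x_te, t_te = x[:40], t[:40], x[40:], t[40:]

for degree in (1, 3):
    coef = np.polyfit(x_tr, t_tr, degree)            # least-squares polynomial fit
    err = np.polyval(coef, x_te) - t_te
    print(degree, round(float(np.sqrt(np.mean(err ** 2))), 3))
```

When the underlying trend is simple and the measurements are noisy, the higher-degree fit buys little on held-out data, which is the rationale for preferring the linear model here.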
\begin{table}[ht] \centering \caption{Runtime and power model for all layers using Caffe on GTX 1070.} \label{tab:caffe-1070} \small \begin{tabular} {l|ccc|ccc} \toprule \multirow{2}{*}{Layer type } & \multicolumn{3}{c}{Runtime} & \multicolumn{3}{|c}{Power} \\ \cline{2-7} & Model size & RMSPE & RMSE (ms) & Model size & RMSPE & RMSE (W)\\ \hline Convolutional & 32 & 45.58\% & 2.2301 & 32 & 6.19\% & 11.9082\\ Fully-connected & 18 & 48.41\% & 0.6626 & 18 & 8.63\% & 8.0291 \\ Pooling & 30 & 37.38\% & 0.1711 & 26 & 6.72\% & 11.9124 \\ \bottomrule \end{tabular} \end{table} \begin{table}[ht] \centering \caption{Evaluation of our model on CNN architectures using Caffe on GTX 1070.} \label{tab:whole_model_runtime-caffe-gtx} \small \begin{tabular} {l|cc|cc|cc} \toprule \multirow{2}{*}{CNN name} & \multicolumn{2}{c}{Runtime} & \multicolumn{2}{|c|}{Power} & \multicolumn{2}{c}{Energy} \\ \cline{2-7} & Value (ms) & Error & Value (W) & Error & Value (J) & Error\\ \hline AlexNet & 51.18 & -31.97\% & 107.63 & -5.07\% & 5.51 & 35.42\% \\ NIN & 76.32 & 0.36\% & 109.78 & -8.89\% & 8.38 & 8.56\% \\ \bottomrule \end{tabular} \end{table} \subsection{Discussion} \added{It is important to note that the overhead introduced by \emph{NeuralPower}\xspace is not significant. More specifically, \emph{NeuralPower}\xspace needs to collect datasets to train its models; however, the training overhead is very small, e.g., around 30 minutes for the Titan X GPU. This includes data collection (under 10 minutes) and model training (less than 20 minutes), and the process is done only once per new platform. This overhead can be easily offset if the CNN architecture search space is large. Even if machine learners evaluate only a few CNN architectures, \emph{NeuralPower}\xspace can still provide a detailed breakdown with respect to runtime, power, and energy to identify bottlenecks and possible directions for improvement.}
\section{Extension: Robust and Adaptive Optimization Counterparts}~\label{sec:extension} The base protecting spanning tree set approach for the survivable probability cross-layer network design problem considers random physical link failures. Extending the failure probability to a static uncertainty set, the robust counterpart of the cross-layer survivable probability problem seeks the maximal survivable probability under a physical link failure uncertainty set~\cite{ben2006extending}\cite{bertsimas2010power}; the adaptive counterpart seeks the maximal survivable probability with an adjustable recourse decision once the uncertain parameters are realized~\cite{bertsimas2010power}. The survivable cross-layer network topology is a tractable special case of the cross-layer survivable probability design. A survivable cross-layer network topology provides a 100\% cross-layer survivable probability under the static uncertain physical link failure scenario in both robust optimization (optimizing the worst case) and adaptive optimization (optimizing the adjustable two-stage set). Note that a survivable cross-layer network topology also yields the same objective for robust and adaptive optimization. \end{comment} \section{Simulation Study}\label{sec:result} \textcolor{black}{In this section, we present our simulation design, testing case setup, simulation results, and observations.} The goal is to validate and demonstrate the effectiveness of the proposed base protecting spanning tree set in \textcolor{black}{calibrating} the survivable probability, which supports network slicing, over small and medium-size cross-layer networks.
\subsection{Objectives for Simulations}\label{subsec:simDesign} \textcolor{black}{The testing cases and simulations are designed to verify that (1) given a survivable cross-layer network, our base protecting spanning tree set approach provides a 100\% survivable probability regardless of the probability of failure on physical links; (2) with a unified failure probability, the minimal number of shared physical links in the logical-edge-to-physical-path mappings results in the same survivable probability as that of the base protecting spanning tree set; (3) the maximal protecting spanning tree provides a lower bound estimate for the survivable probability of a cross-layer network, and we also want to know how tight this lower bound is numerically; and (4) the survivable probability can serve as an evaluation metric for both survivable and non-survivable networks with either unified or random probabilities of failure on physical links. Last but not least, we want to observe and report the behaviors of survivable and non-survivable cross-layer networks with either unified or random failure probabilities, which may provide insights/directions for future studies.} \subsection{Simulation Setup}\label{subsec:testCase} Based on the objectives above, we now present the selection of small and medium-size cross-layer networks, the failure probabilities, and the composition of the testing cases. \subsubsection{Small Size Cross-layer Network with NSF as the Physical Network} \begin{figure}[!t] \centering \includegraphics[scale=0.43]{NSF.pdf} \caption{NSF} \label{fig:nsf} \end{figure} \begin{figure}[!t] \includegraphics[scale=0.3]{LNet.pdf} \caption{LN1 and LN2} \label{fig:lns} \end{figure} We first select the NSF network as a small-size physical network and create two logical networks denoted as ``LN1'' and ``LN2''. All networks are illustrated in Figs.~\ref{fig:nsf} and~\ref{fig:lns}. Two cross-layer network mappings are created: LN1-over-NSF and LN2-over-NSF.
We apply the survivable cross-layer routing MIP formulation (SUR-TEST) (see Appendix~\ref{app:MIP}), which verifies that LN1-over-NSF is survivable and LN2-over-NSF is non-survivable. \begin{figure}[!t] \centering \includegraphics[scale=0.4]{CONUSmap.jpg} \caption{CONUS network~\cite{conusNet}} \label{fig:conus} \end{figure} \begin{table}[!h] \centering \begin{tabular}{>{\centering\arraybackslash}p{0.9cm} >{\centering\arraybackslash}p{0.5cm} >{\centering\arraybackslash}p{0.35cm} >{\centering\arraybackslash}p{0.35cm} >{\centering\arraybackslash}p{0.8cm} >{\centering\arraybackslash}p{0.4cm} >{\centering\arraybackslash}p{0.3cm} >{\centering\arraybackslash}p{0.3cm} >{\centering\arraybackslash}p{0.4cm} >{\centering\arraybackslash}p{0.4cm} }\\ \hline\hline \rule{0pt}{8pt}\multirow{2}{*}{PhyNet} &\multirow{2}{*}{LogNet} &\multirow{2}{*}{Suv} &\multirow{2}{*}{nSuv}&\multirow{2}{*}{FPbRg}&\multirow{2}{*}{uFPb}&\multicolumn{2}{c}{rFPb}&\multicolumn{2}{c}{NumFPb}\\ \rule{0pt}{8pt} & & &&&&Mean&Vrn&uFPb&rFPb\\ \hline \rule{0pt}{8pt}NSF &LN1 &1 &0 &[15\%,0\%)&0.1\%&0.5\%&2\%&150&30\\ \rule{0pt}{8pt}NSF &LN2 &0 &1 &[15\%,0\%)&0.1\%&0.5\%&2\%&150&30\\ \rule{0pt}{8pt}CORONET&CLN1 &9/40&31/40&[15\%,0\%)&-&0.5\%&2\%&-&30\\ \rule{0pt}{8pt}CORONET&CLN2 &7/40&33/40&[15\%,0\%)&-&0.5\%&2\%&-&30\\ \hline\hline \end{tabular} \caption{Parameters for testing cases} \label{tbl:suvNonSuvCase} \end{table} \subsubsection{Medium Size Cross-layer Network with CORONET as the Physical Network} To further validate the scalability of our proposed approach, we select the CORONET network~\cite{saleh2006dynamic} as the physical network, which has 75 nodes, 99 links, and an average nodal degree of 2.6. With CORONET as the physical network, we create 80 logical networks; half of them have nodes randomly selected from 20\% of the physical nodes (denoted as CLN1), and the other half from 30\% (denoted as CLN2). The average nodal degree for all logical networks is 4.
With the logical nodes in CLN1 and CLN2, we generate the cross-layer networks as follows. We first generate a random spanning tree to guarantee the connectivity of the logical nodes, and then add edges following the Erd\H{o}s-R\'{e}nyi random graph model~\cite{erdos1960evolution}. Finally, random logical-to-physical node mappings are constructed. Out of all generated cross-layer networks, we report the number of survivable and non-survivable cases in Table~\ref{tbl:suvNonSuvCase}, all validated by the SUR-TEST MIP formulation. \subsubsection{Probability of Failure on Physical Links} The failure probabilities are chosen as follows. The unified failure probability $\rho$ is selected in the range $15.0\%\geq \rho > 0\%$ with a step of 0.1\%. In total, we have 150 unified probabilities $[15\%, 14.9\%, \ldots, 0.2\%, 0.1\%]$. The random failure probabilities are generated from a normal distribution with means from 15.0\% to 0\% in steps of 0.5\% and a variance of 2\%. Note that a randomly generated probability is kept only if it is less than 100\%. In total, we have 30 random failure probabilities. \subsubsection{Testing Cases} The parameters used to construct all simulation cases are presented in Table~\ref{tbl:suvNonSuvCase}, in which ``PhyNet'', ``LogNet'', ``Suv'', ``nSuv'', ``FPbRg'', and ``uFPb'' denote the physical network, the logical network, the numbers of survivable and non-survivable cases, the range of failure probabilities, and the incremental step width of the unified failure probability. Let ``rFPb'', ``Mean'', and ``Vrn'' be the random failure probability, its mean/step width, and its variance; and let ``NumFPb'' with columns ``uFPb'' and ``rFPb'' indicate the total numbers of unified and random failure probabilities for each cross-layer network. The simulation results for all these cases are grouped by the failure probabilities, the survivability of the networks, and the size of the networks (small and medium).
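The logical-topology construction described above can be sketched as follows. The helper below is a simplified illustration (the function name and the exact edge-augmentation rule are our own assumptions, not the paper's implementation): a random spanning tree provides connectivity, and random extra edges are added until the target average nodal degree is met.

```python
import random

def random_logical_net(n_nodes, avg_degree, seed=0):
    """Connected random logical topology: build a random spanning tree first
    (guaranteeing connectivity), then add random extra edges until the
    average nodal degree 2|E|/|V| reaches the target."""
    rng = random.Random(seed)
    nodes = list(range(n_nodes))
    order = nodes[:]
    rng.shuffle(order)
    edges = set()
    for i in range(1, n_nodes):                 # random spanning tree
        u, v = order[i], rng.choice(order[:i])
        edges.add((min(u, v), max(u, v)))
    target_edges = avg_degree * n_nodes // 2    # |E| = d_avg * |V| / 2
    pairs = [(u, v) for u in nodes for v in nodes if u < v]
    rng.shuffle(pairs)
    for e in pairs:                             # Erdos-Renyi-style extra edges
        if len(edges) >= target_edges:
            break
        edges.add(e)
    return edges

edges = random_logical_net(15, 4)               # e.g., 15 logical nodes, degree 4
```

With 15 nodes and an average degree of 4, this yields 30 edges, and the spanning tree guarantees the resulting logical network is connected.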
The performance of the simulations with unified failure probability is reported only for two cross-layer networks, namely LN1-over-NSF and LN2-over-NSF, where the NSF network in both is associated with the 150 failure probabilities mentioned above. Similarly, we also evaluate each of them with randomly generated failure probabilities. Since the unified failure probability is a special case of the random failure probability, we consider only random failure probabilities for the medium-size cross-layer networks generated as CLN1-over-CORONET and CLN2-over-CORONET. Thirty failure probabilities are generated for each of the medium-size networks, and these testing cases are grouped and reported by the mean of the failure probability and the network's survivability. Note that, as part of the validation, LN1-over-NSF and the survivable medium-size networks are expected to reach a 100\% survivable probability regardless of their failure probabilities. \subsection{Simulation Results}\label{subsec:cResults} In this section, we report the simulation results for the testing cases described above. \subsubsection{Small-size Cross-layer Networks}\label{subsubsec:cRsl_small} The survivable probabilities computed with the maximal protecting spanning tree and the base protecting spanning tree set are denoted as ``MaxPrctTree'' and ``BasePrctTreeSet'', respectively. Figures~\ref{fig:uniNSF} and~\ref{fig:ranNSF} illustrate the survivable probability of MaxPrctTree and BasePrctTreeSet for LN1-over-NSF and LN2-over-NSF with unified and random failure probabilities, respectively.
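For the unified-failure-probability case, the closed form of Theorem~\ref{thm:failProbMinLk} is easy to check numerically; with $k_{\text{min}}=3$ shared physical links (the value found for LN1-over-NSF), the survivable probability $(1-\rho)^{k_{\text{min}}}$ over the 150 unified probabilities increases monotonically as $\rho$ decreases:

```python
# Survivable probability (1 - rho)^k_min over the 150 unified failure
# probabilities [15.0%, 14.9%, ..., 0.1%], with k_min = 3 as in LN1-over-NSF.
k_min = 3
rhos = [round(0.001 * i, 3) for i in range(150, 0, -1)]
surv = [(1.0 - rho) ** k_min for rho in rhos]
print(f"{surv[0]:.4f} at rho = 15.0%  ->  {surv[-1]:.4f} at rho = 0.1%")
# → 0.6141 at rho = 15.0%  ->  0.9970 at rho = 0.1%
```

Even at the largest unified failure probability of 15\%, three shared links still leave a survivable probability above 61\%.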
\begin{figure}[!t] \begin{subfigure}[b]{0.25\textwidth} \includegraphics[scale=0.26]{suvUniNSF} \caption{LN1-over-NSF} \label{subfig:suvUniNSF} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \includegraphics[scale=0.26]{nonSuvUniNSF} \caption{LN2-over-NSF} \label{subfig:nonsuvUniNSF} \end{subfigure} \caption{Survivable probability with unified failure probability for small-size cross-layer networks} \label{fig:uniNSF} \end{figure} \begin{figure}[!t] \begin{subfigure}[b]{0.25\textwidth} \includegraphics[scale=0.26]{suvRanNSF} \caption{LN1-over-NSF} \label{subfig:suvRanNSF} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \includegraphics[scale=0.26]{nonSuvRanNSF} \caption{LN2-over-NSF} \label{subfig:nonsuvRanNSF} \end{subfigure} \caption{Survivable probability with random failure probability for small-size cross-layer networks} \label{fig:ranNSF} \end{figure} These results validate our proposed solution approach as follows: (1) all testing cases for the survivable LN1-over-NSF network achieve a 100\% survivable probability through the base protecting spanning tree set, regardless of the values/distribution of the failure probabilities; (2) with the unified failure probability, the minimal number of physical links shared by the trees in the base protecting spanning tree set, denoted as $k_{\text{min}}$, is 3 in the LN1-over-NSF network, and we verify that the survivable probability obtained by the base protecting spanning tree approach, illustrated in Fig.~\ref{fig:uniNSF}, matches $(1-\rho)^{k_{\text{min}}}$; these results provide numerical validation of Theorem~\ref{thm:failProbMinLk}; (3) the curves of the survivable probabilities of MaxPrctTree and BasePrctTreeSet over randomly generated failure probabilities are not smooth, but in general their survivable probabilities still increase monotonically as the mean of the failure probability decreases.
In other words, as expected, the lower the failure probability, the higher the survivable probabilities achieved by MaxPrctTree and BasePrctTreeSet; and (4) the base protecting spanning tree approach works for both survivable and non-survivable cross-layer networks. To demonstrate that MaxPrctTree may be used to estimate a lower bound of the survivable probability, Fig.~\ref{fig:NSFratio} illustrates the ratio of the maximal protecting spanning tree's survivable probability to the survivable probability of a cross-layer network (obtained through a base protecting spanning tree set). \begin{figure}[!t] \begin{subfigure}[b]{0.25\textwidth} \includegraphics[scale=0.26]{ratioUniNSF} \caption{Unified physical link failure probability} \label{subfig:ratioUniNSF} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \includegraphics[scale=0.26]{RatioNonSuvRanNSF} \caption{Random physical link failure probability} \label{subfig:ratioRanNSF} \end{subfigure} \caption{Survivable probability ratio between the maximal protecting spanning tree and a base protecting spanning tree set} \label{fig:NSFratio} \end{figure} These results show that for all testing cases, the survivable probability of BasePrctTreeSet is higher than that of MaxPrctTree. The lower the probability of failure on physical links, the better the lower bound estimate the maximal protecting spanning tree provides. With an average failure probability of up to 15\%, the lower bound estimate is higher than $\frac{1}{2}$ of the survivable probability for all generated cross-layer networks. \subsubsection{Medium-size Cross-layer Networks} Figs.~\ref{fig:ranSuvConus} and~\ref{fig:nonSuvRanConus} illustrate the survivable probability of the survivable and non-survivable cross-layer networks, respectively, where each testing instance has random failure probabilities on physical links.
Figs.~\ref{fig:SuvRanConusRatio} and~\ref{fig:nonSuvRanConusRatio} present the survivable probability ratio of MaxPrctTree to BasePrctTreeSet for all network instances in box plots, grouped by their respective failure probabilities. \begin{figure}[!t] \begin{subfigure}[b]{0.25\textwidth} \includegraphics[scale=0.26]{cln1_suv_ranNet_ranFP} \caption{CLN1-over-CONUS} \label{subfig:SuvUniConus1} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \includegraphics[scale=0.26]{cln2_suv_ranNet_ranFP} \caption{CLN2-over-CONUS} \label{subfig:SuvUniConus2} \end{subfigure} \caption{Survivable probability with random failure probability for medium-size survivable cross-layer networks} \label{fig:ranSuvConus} \end{figure} \begin{figure}[!t] \begin{subfigure}[b]{0.25\textwidth} \includegraphics[scale=0.26]{cln1_nonSuv_ranNet_ranFP} \caption{CLN1-over-CONUS} \label{subfig:nonSuvRanConus1} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \includegraphics[scale=0.26]{cln2_nonSuv_ranNet_ranFP} \caption{CLN2-over-CONUS} \label{subfig:nonSuvRanConus2} \end{subfigure} \caption{Survivable probability with random failure probability for medium-size non-survivable cross-layer networks} \label{fig:nonSuvRanConus} \end{figure} \begin{figure}[!t] \begin{subfigure}[b]{0.25\textwidth} \includegraphics[scale=0.26]{cln1_conus_suv_ratio} \caption{CLN1-over-CONUS} \label{subfig:SuvRanConusRatio1} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth} \includegraphics[scale=0.26]{cln2_conus_suv_ratio} \caption{CLN2-over-CONUS} \label{subfig:SuvRanConusRatio2} \end{subfigure} \caption{Survivable probability ratio of MaxPrctTree to BasePrctTreeSet for survivable medium-size networks} \label{fig:SuvRanConusRatio} \end{figure} \begin{figure}[!t] \begin{subfigure}[b]{0.25\textwidth} \includegraphics[scale=0.26]{cln1_conus_unsuv_ratio} \caption{CLN1-over-CONUS} \label{subfig:nonSuvRanConusRatio1} \end{subfigure}% \begin{subfigure}[b]{0.25\textwidth}
\includegraphics[scale=0.26]{cln2_conus_unsuv_ratio} \caption{CLN2-over-CONUS} \label{subfig:nonSuvRanConusRatio2} \end{subfigure} \caption{Survivable probability ratio of MaxPrctTree to BasePrctTreeSet for non-survivable medium-size networks} \label{fig:nonSuvRanConusRatio} \end{figure} These results further validate our proposed solution approaches: (1) for all survivable cases (verified by the SUR-TEST formulation), our approaches produce a 100\% survivable probability; (2) the survivable probability of BasePrctTreeSet is higher than that of MaxPrctTree for all testing cases; and (3) with larger logical networks (CLN2), more physical links are utilized by the logical link mappings, which brings down the survivable probability of MaxPrctTree significantly compared with the smaller ones. All MIP formulations finish within 15 minutes, so our proposed solution approaches produce results efficiently, at least for the medium-size networks. We also observe some interesting facts that may direct our future studies on network properties. (1) The average survivable probability ratio for both survivable and non-survivable networks increases monotonically as the failure probability decreases. (2) As the failure probability decreases, the gaps of the survivable probability ratios for all tested survivable networks increase (see Fig.~\ref{fig:SuvRanConusRatio}), while the gaps of the survivable probability ratios for all tested non-survivable networks decrease (see Fig.~\ref{fig:nonSuvRanConusRatio}). (3) In general, the computational time for the survivable cases is higher than that for the non-survivable ones. \section{Introduction} 5G communications ``empower socio-economic transformation in countless ways, including those for productivity, sustainability, and well-being''~\cite{ngmn2016whitepaper5G}. The latest optical techniques~\cite{liu2016emerging}\cite{wang2016handover} and architectures~\cite{iovanna2016future}\cite{assimakopoulos2016switched} serve as the global network infrastructure that provides the capacity and guarantees the performance of 5G networks, especially network diversity, availability, and coverage. To satisfy the requirements of different subscriber types, applications, and use cases, network slicing was introduced, which enables the programmability of network instances called \textit{network slices}.
These instances should satisfy the bilateral service level agreement (SLA)~\cite{5gamerica2016network}\cite{ngmn2016netSlicingSecurity}, such as latency, reliability, and value-added services, among \textcolor{black}{virtual} network operators and subscribers, \textcolor{black}{especially mobile operators and subscribers}, in 5G systems. \textcolor{black}{Network slicing allows multiple virtual networks to be created on top of a common underlying physical infrastructure (including physical and/or virtual networks)~\cite{ngmn2016description}.} Since the instantiation of network slices involves \textcolor{black}{a physical network and multiple virtual networks}, a general way to model such networks is through cross-layer network topologies. The reliability of the physical infrastructure directly affects the network capabilities and performance level \textcolor{black}{that} a network slice can provide. Thus, a way to identify and quantify the reliability of a \textcolor{black}{cross-layer network} when disruptions occur to the physical infrastructure, \textcolor{black}{which leads to more reliable network slicing}, would be of interest to virtual network operators. To design a reliable cross-layer network, a key question to be answered is how to quantify its reliability. When considering the reliability of single-layer networks, link failures are described as random events with corresponding failure probabilities, and the \textit{survivable probability} is the probability that a network remains connected after random physical link failure(s)~\cite{yallouz2014tunable}\cite{yallouz2017tunable}. Comparatively, a failure in the physical infrastructure of a cross-layer network may not only disrupt the flows in the physical network, but also affect demand satisfaction in the network slices, as the demands from each slice are routed/realized through the physical infrastructure.
In this paper, we assume that each physical link may carry its own probability of failure (reliability index) and introduce the concept of \textit{cross-layer network survivable probability} to capture the probability that \textcolor{black}{the virtual networks of} a network slice remain connected after any physical link failure. In the rest of the paper, we will use \textit{survivable probability} as an abbreviation for cross-layer network survivable probability. Different from prior research on survivable cross-layer network design, where all physical links have either a 0\% or 100\% probability of failure, the survivable probability concept offers network operators a way to fine-tune a \textcolor{black}{cross-layer network} with the corresponding level of SLA before offering it to the subscriber. This concept can also be applied to several related applications, such as the design of reliable cloud~\cite{yang2015software} and IP-over-WDM~\cite{develder2012optical} networks, \textcolor{black}{where an IP-over-WDM network carries the traffic of each IP link through a lightpath in the WDM network, i.e., a single wavelength routed through optical nodes such as OXCs and OADMs without opto-electro-optical (O-E-O) conversion at intermediate optical nodes, and where a cloud network is constructed on top of a data center network connected by fiber optics.} \section{Literature Review}\label{sec:lr} The design of a \textcolor{black}{reliable} single-layer network relies on two main mechanisms, namely protection and restoration~\cite{cholda2005network}\cite{heegaard2009network}\cite{ramamurthy1999survivable}, which guarantee the network's connectivity after the failure(s) of network component(s).
Two lines of investigation were conducted in the fields of operations research and telecommunication networks.~\cite{grotschel1995design}\cite{dahl1998cutting}\cite{smith2008algorithms}\cite{botton2013benders} explored mixed-integer programming techniques and proposed solution approaches for the survivable network design problem \textcolor{black}{(with 100\% survivable probability)} through polyhedron studies. \textcolor{black}{These studies usually do not treat network failures as random events but instead assume a 0\% or 100\% failure probability.}~\cite{koster2003demand}\cite{shaikh2011anycast}\cite{dzida2008path}\cite{orlowski2012complexity} studied reliable optical network and optical routing design through $p$-cycles, any-cast routing, and path set protection.~\cite{floyd1997reliable}\cite{li2013scaling}\cite{biswas2004opportunistic} discussed reliable wireless network design with scalable reliable multicast protocols and opportunistic routing in multi-hop wireless networks.~\cite{tarique2009survey}\cite{sara2014routing}\cite{liu2016survey}\cite{salayma2017wireless} reviewed works on reliable mobile networks emphasizing multipath or position-based routing in mobile ad hoc, wireless sensor, and vehicular ad hoc networks. The studies of cross-layer networks \textcolor{black}{focusing on their survivable design}, an $\mathcal{NP}$-complete problem~\cite{garey79}\cite{ModNar01}, consider both logical and physical networks, where logical nodes and links are mapped onto physical nodes and paths, respectively (with different routing schemes).~\cite{KurThi05}\cite{TodRam07}\cite{parandehgheibi2014survivable} utilized a sufficient condition, disjoint mappings of logical links, for survivable cross-layer network design, which transforms the cross-layer network design problem into the single-layer setting.
Necessary and sufficient conditions for survivable cross-layer network design were proposed in~\cite{ModNar01}\cite{lee2011cross}\cite{rahman2013svne} via cross-layer cutsets, which require the enumeration of all cross-layer cutsets. To avoid this enumeration,~\cite{zhou2017survivable}\cite{zhou2017novel} proposed alternative necessary and sufficient conditions based on a cross-layer protecting spanning tree set (in short, protecting spanning tree set), which guarantees the connectivity of the logical network through the existence of a protecting spanning tree after any physical link failure. It has been shown theoretically and computationally that a survivable cross-layer routing/network design may not exist for a given network; its existence highly depends on the network topology. Thus, unless a specific network structure that guarantees survivability is embedded in a given network~\cite{KTLin10}, the question of how to quantify and design a good/maximal partially survivable cross-layer routing remains open; this question also motivates our work. Survivable probability, an evaluation metric applicable to all cross-layer network topologies, is our attempt to address this problem in a general sense. In this paper, we develop the \textcolor{black}{survivable probability} of a cross-layer network, which describes the chance of a network slice to maintain its service against failure(s) in the physical infrastructure. Its single-layer counterpart, discussed in~\cite{yallouz2014tunable}\cite{yallouz2017tunable}, introduced the level of survivability, ``a quantitative measure for specifying any desired level of survivability'', through survivable spanning trees. Our design and its single-layer counterpart share the same assumption that each physical link is associated with a probability of failure. Nevertheless, these two problems are fundamentally different due to their network settings.
Another related work in~\cite{lee2014maximizing} evaluated the reliability of a cross-layer network under random physical link failure by calculating failure polynomials. Our proposed approach differs from that in~\cite{lee2014maximizing} in three aspects: (1) we seek an exact solution approach with the objective to quantify the maximal survivable probability rather than an approximation through failure polynomials (which involves enumeration of cross-layer cutsets); (2) \textcolor{black}{by avoiding cross-layer cutset enumeration, our approach scales to larger cross-layer networks;} and (3) our approach can address both random and unified failure probabilities on physical links, compared with only the unified one in~\cite{lee2014maximizing}. Our contributions in this paper are as follows. (1) We define the \textcolor{black}{cross-layer network survivable probability}, an evaluation metric for the reliability of cross-layer networks. (2) We demonstrate the existence of a protecting spanning tree set (the \textcolor{black}{base protecting spanning tree set}) which shares the same \textcolor{black}{survivable probability} as that of a given cross-layer network. We prove the necessary and sufficient conditions to identify a base protecting spanning tree set. (3) Our proposed approach, which requires at most $|E_P|$ (the number of physical links) protecting spanning trees, directly calibrates the survivable probability through a base protecting spanning tree set while avoiding the enumeration of cross-layer cutsets. (4) By constructing a base protecting spanning tree set, the maximal survivable probability of a cross-layer network becomes tractable. Given a unified physical link failure probability, we prove that the design of a cross-layer network with the maximal survivable probability is equivalent to the cross-layer network design with the minimal number of shared physical links utilized by a base protecting spanning tree set.
(5) We prove that the maximal protecting spanning tree, a protecting spanning tree with the maximal \textcolor{black}{survivable probability}, is a Steiner tree in the physical network whose terminal nodes are the physical nodes \textcolor{black}{onto which the logical nodes are mapped}. We also discuss how the Steiner tree packing problem, along with network augmentation, may provide the maximal \textcolor{black}{survivable probability} (100\%) for a cross-layer routing. The rest of this paper is organized as follows. Section~\ref{sec:problem} provides formal definitions and descriptions of the survivable probability and the base protecting spanning tree set. Mathematical formulations for the maximal protecting spanning tree and the maximal survivable probability are presented in Section~\ref{sec:approach}. We discuss the relationship between the \textcolor{black}{protecting spanning tree in a cross-layer network} and the Steiner tree in a single-layer network in Section~\ref{sec:PST-Steiner}, followed by the simulation results in Section~\ref{sec:result} and conclusions in Section~\ref{sec:conclusion}. \section{Definitions and Problem Description}\label{sec:problem} Consider a physical network $G_P=(V_P, E_P)$ and a logical network (i.e., \textcolor{black}{a virtual network in a network slice}) $G_L=(V_L, E_L)$, where each logical node has a one-to-one mapping onto a physical node and each logical edge is mapped onto a physical path. \textcolor{black}{We let $M(\cdot)$ denote the general logical-to-physical mapping function.} The logical-to-physical node mapping is denoted as $\textcolor{black}{M}(s) = i$, $s\in V_L$ and $i\in V_P$; $\textcolor{black}{M}(u)=p_u$, $u\in E_L$ and $p_u \subset E_P$, is the logical-edge-to-physical-path mapping; \textcolor{black}{and $M(\tau)=\cup_{u\in \tau}M(u)$ is the mapping of a logical spanning tree $\tau\subset G_L$} onto $G_P$.
Notations and parameters used in this paper are listed in Table~\ref{tbl:notation}. \begin{table}[t] \begin{tabular}{p{2cm}|p{6cm}} \hline\hline \rule{0pt}{9pt}Notation &Description\\ \hline \rule{0pt}{8pt} $G_P = (V_P,E_P)$ &Physical network, where $V_P$ and $E_P$ represent the node and edge set, respectively, with node indices $i,j$ and link index $e$\\ \rule{0pt}{8pt} $G_L = (V_L,E_L)$ &Logical network, where $V_L$ and $E_L$ denote the node and edge set, respectively, with node indices $s,t$ and link indices $\mu$,$\nu$\\ \rule{0pt}{8pt} $(G_P,G_L)$ & The cross-layer network with known logical-to-physical mapping\\ \rule{0pt}{8pt}$\mathcal{P}_{u}$ & A set of physical paths (routings) for $u\in E_L$, where $p_u$ is an element of $\mathcal{P}_{u}$, i.e., $p_u\in \mathcal{P}_u$\\ \rule{0pt}{8pt} $T, \tau$& A protecting spanning tree set with $\tau$ as a protecting spanning tree, i.e., $\tau\in T$\\ \rule{0pt}{8pt} $M(\cdot)$ & A general logical-to-physical mapping function, with node mapping $M(s) = i$, link mapping $M(\mu)=p_{\mu}$, and protecting spanning tree mapping $M(\tau)=\cup_{\mu\in \tau}p_{\mu}$\\ \rule{0pt}{8pt} $\lambda$ &A tuple which denotes a protecting spanning tree and its mapping, i.e., $\lambda = [\tau, M(\tau)]$\\ \rule{0pt}{8pt} $\Lambda$ & A set of such tuples, denoting a protecting spanning tree set and its mappings, i.e., $\Lambda=\{\lambda\}$\\ \rule{0pt}{8pt} $\Lambda^{F}(M(E_L))$ & The collection of protecting spanning trees and their mappings under link mapping $M(E_L)$\\ \rule{0pt}{8pt} $\Lambda^{B}(G_P,G_L)$ & A base protecting spanning tree set and its mappings of a cross-layer network $G_P$ and $G_L$\\ \rule{0pt}{8pt} $E_{P}(\lambda)$ &All physical links utilized by the routings of $\lambda$'s branches\\ \rule{0pt}{8pt} $E^{M}_{P}(T)$ & Common physical links shared by the routings of all $\lambda\in T$\\ \rule{0pt}{8pt} $\Omega(E_L)$ &A set of logical-to-physical link mappings, where $M(E_L)\in \Omega(E_L)$ is one of its instances\\ \rule{0pt}{8pt} $R(M(E_L))$ &A
set of physical links whose failures disconnect $G_L$ over a given mapping $M(E_L)$\\ \rule{0pt}{8pt} $\Phi(G_P,G_L)$&The survivable probability of a cross-layer network\\ \hline \hline \rule{0pt}{9pt}Parameter&Description\\ \hline \rule{0pt}{8pt} $\rho_e$ &Probability of failure for physical link $e$, $e\in E_P$\\ \rule{0pt}{8pt} $\rho$ &Unified probability of failure for all $e\in E_P$\\ \hline\hline \end{tabular} \vspace{2pt} \caption{Notations and parameters} \label{tbl:notation} \end{table} \subsection{Protecting Spanning Tree Set}\label{subsec:protSpanningTree} For a given logical-to-physical mapping $M(\cdot)$ of a cross-layer network $(G_P, G_L)$, the corresponding co-mapping~\cite{zhou2017survivable}, denoted as $M^{C}(\cdot)$, is defined as follows. The co-mapping of a logical edge $\nu$ is $M^{C}(\nu) = E_P \setminus M(\nu)$ with $\nu\in E_L$; and the co-mapping of \textcolor{black}{a logical spanning tree} $\tau$ is $M^{C}(\tau) = E_P \setminus \bigcup_{\nu\in \tau} M(\nu)$; that is, $M^{C}(\tau) = \bigcap_{\nu\in \tau} M^{C}(\nu)$. Given $M(\cdot)$, $M^C(\cdot)$, and a set of logical spanning trees $T$ of a cross-layer network $(G_P, G_L)$, the protecting spanning tree set~\cite{zhou2017survivable} is defined as follows. If physical link $(i, j)$ is in $M^C(\tau)$, $\tau\in T$, then $\tau$ is called a \textit{protecting spanning tree} which \textit{protects} $(i, j)$. If for every physical link $(i, j)$ there exists a spanning tree in $T$ which protects $(i,j)$, then the routing is a \textit{survivable routing}, and $T$ is called a \textit{protecting spanning tree set} for survivable routing. In this paper, given a protecting spanning tree $\tau$, we let $\lambda=[\tau, M(\tau)]$ denote a protecting spanning tree and its mapping, and $E_{P}(\lambda)=\{e: e\in \cup_{\mu\in \tau}p_{\mu}\}$ be the physical link set utilized by the routings of \textcolor{black}{$\lambda$}.
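The protecting relation and the survivable-routing test follow directly from these definitions; a minimal sketch is shown below, using the link mappings of the example in Fig.~\ref{fig:2tree}. One assumption is made: the physical link set is taken to be the links used by the routings (links of $G_P$ untouched by any routing are protected by every tree).

```python
fs = frozenset

# Logical-link routings M(.) of the Fig. 2 example (links as node pairs)
M = {
    fs({1, 2}): {fs({1, 5}), fs({2, 5})},
    fs({1, 3}): {fs({1, 4}), fs({4, 6}), fs({3, 6})},
    fs({2, 4}): {fs({2, 3}), fs({3, 6}), fs({4, 6})},
    fs({3, 4}): {fs({3, 6}), fs({4, 6})},
}

def protects(tree, link):
    """tau protects physical link e iff e lies in the co-mapping M^C(tau),
    i.e. no branch of tau is routed over e."""
    return link not in set().union(*(M[u] for u in tree))

def is_survivable_routing(trees, physical_links):
    """Survivable routing: every physical link is protected by some tree."""
    return all(any(protects(t, e) for t in trees) for e in physical_links)

E_P = set().union(*M.values())   # physical links used by the routing
trees = [                        # the four spanning trees of this G_L
    {fs({1, 2}), fs({1, 3}), fs({2, 4})},
    {fs({1, 2}), fs({1, 3}), fs({3, 4})},
    {fs({1, 2}), fs({2, 4}), fs({3, 4})},
    {fs({1, 3}), fs({2, 4}), fs({3, 4})},
]
```

For this mapping, links $(3,6)$ and $(4,6)$ are used by every tree, so no tree protects them and the routing is not survivable, which is why the survivable probability studied next is strictly below 100\%.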
Given these definitions, we may now derive the evaluation metric, the survivable probability, in the following section. \subsection{Survivable Probability}\label{subsec:definition} Consider a cross-layer network $(G_P, G_L)$ and its node mapping $M(s)$ for all $s\in V_L$. We assume that each physical link $e\in E_P$ is associated with a probability of failure $\rho_e$, where $0\leq \rho_e\leq 1$. The survivable probability of $(G_P, G_L)$ is defined as follows. \begin{define}\label{def:netSp} Given $(G_P, G_L)$ and the failure probability $\rho_e$, for all $e\in E_P$, the survivable probability of this network is the probability that the logical network remains connected after any physical link failure(s). \end{define} Given a logical link $\mu\in E_L$ and its mapping $M(\mu) = p_{\mu}$, the survivable probability of $\mu$ is $\text{Prob}(\mu)=\prod_{e\in p_{\mu}}(1-\rho_e)$. Similarly, the survivable probability of a logical spanning tree $\tau$ is defined below. \begin{define}\label{def:proTreeSp} Given a cross-layer network $(G_P, G_L)$, a protecting spanning tree and its mapping $\lambda=[\tau, M(\tau)]$, the survivable probability of $\lambda$ is $\text{Prob}(\lambda)=\prod_{e\in E_{P}(\lambda)}(1-\rho_e)$. \end{define} The \textit{maximal protecting spanning tree} \textcolor{black}{is a protecting spanning tree, together with one of its possible mappings, whose survivable probability is greater than or equal to that of any other tree and its mappings.} We now demonstrate how a protecting spanning tree set can be used to improve the survivable probability even with a given logical-to-physical mapping. Let $T=\{\tau\}$ be a protecting spanning tree set, $\Lambda=\{\lambda\}$ be the set of \textcolor{black}{protecting spanning trees and their mappings}, and $E^{M}_{P}(\Lambda)=\cap_{\lambda_i\in \Lambda}E_{P}(\lambda_i)$ be the common physical links utilized by the routings of $\lambda_i\in \Lambda$.
We use Fig.~\ref{fig:2tree} as an example to illustrate the concepts of a protecting spanning tree set and its survivable probability. The figure shows $G_L$ (top), $G_P$ (bottom), and $\rho_e$, $e\in E_P$ (labeled on each physical link). Logical-to-physical link mappings are given as follows: $M(1,2)=\{(1,5),(5,2)\}$, $M(1,3)=\{(1,4),(4,6),(6,3)\}$, $M(2,4)=\{(2,3),(3,6),(6,4)\}$, $M(3,4)=\{(3,6),(6,4)\}$. \begin{figure}[!h] \centering \includegraphics[scale=0.45]{twoTree.jpg} \caption{Survivable probability of a protecting spanning tree set}\label{fig:2tree} \end{figure} \begin{table}[!h] \begin{tabular} { >{\centering\arraybackslash}m{1.1cm} >{\centering\arraybackslash}m{0.3cm} >{\raggedright\arraybackslash}m{2.2cm} >{\raggedright\arraybackslash}m{0.9cm} >{\raggedright\arraybackslash}m{2.5cm} } \hline\hline \rule{0pt}{9pt} &$\tau$ & $M(\tau)$ &$E_P(\lambda)$ &Prob($\lambda$)\\ \hline \rule{0pt}{10pt} Red $\lambda_1\quad$ $[\tau_1,M(\tau_1)]$&(1,2), (1,3), (3,4)&$\{(1,5),(5,2)\}$; $\{(1,4),(4,6),(6,3)\}$; $\{(4,6),(6,3)\}$&\{(1,4),(1,5), (2,5),(3,6), (4,6)\} &$\prod_{e\in E_P(\lambda_1)}(1-\rho_e)$ = (1-0.2) (1-0.1) (1-0.2) (1-0.1) (1-0.1) = 0.46656\\ \rule{0pt}{10pt}Green $\lambda_2\;$ $[\tau_2,M(\tau_2)]$&(1,2), (2,4), (4,3)&$\{(1,5),(5,2)\}$; $\{(2,3),(3,6),(6,4)\}$; $\{(4,6),(6,3)\}$&\{(1,5),(2,3), (2,5),(3,6), (4,6)\} &$\prod_{e\in E_P(\lambda_2)}(1-\rho_e)$ = (1-0.1) (1-0.2) (1-0.1) (1-0.1) (1-0.1)=\textcolor{black}{0.52488}\\ \hline\hline \end{tabular} \vspace{2pt} \caption{Protecting spanning trees and their mappings} \label{tbl:instance} \end{table} We select a set of two protecting spanning trees: (red tree) $\lambda_1=[\tau_1, M(\tau_1)]$ and (green tree) $\lambda_2=[\tau_2, M(\tau_2)]$, whose branches, link mappings, utilized physical link sets, and survivable probabilities are presented in Table~\ref{tbl:instance}.
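The entries of Table~\ref{tbl:instance} can be reproduced numerically. The per-link failure probabilities below are inferred from the products in the table (the figure labels are not reproduced here): $(1,4)$ and $(2,5)$ fail with probability $0.2$, the remaining links with $0.1$.

```python
from math import prod

fs = frozenset
# Per-link failure probabilities consistent with the products in Table II
rho = {fs({1, 4}): 0.2, fs({2, 5}): 0.2, fs({1, 5}): 0.1,
       fs({2, 3}): 0.1, fs({3, 6}): 0.1, fs({4, 6}): 0.1}

def prob(links):
    """Prob(.) = product over the utilized physical links of (1 - rho_e)."""
    return prod(1 - rho[e] for e in links)

E1 = {fs(e) for e in [(1, 4), (1, 5), (2, 5), (3, 6), (4, 6)]}  # E_P(lambda_1)
E2 = {fs(e) for e in [(1, 5), (2, 3), (2, 5), (3, 6), (4, 6)]}  # E_P(lambda_2)
shared = E1 & E2  # E^M_P(Lambda) = {(1,5),(2,5),(3,6),(4,6)}
```

Evaluating `prob` on $E_P(\lambda_1)$, $E_P(\lambda_2)$, and the shared set yields $0.46656$, $0.52488$, and $0.5832$, matching the values in the table and the discussion below.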
When considering a protecting spanning tree set and its mappings $\Lambda=\{\lambda_1, \lambda_2\}$, the common physical links used by the routings of both trees are $E^{M}_{P}(\Lambda)=\cap_{\lambda_i\in \Lambda}E_P(\lambda_i)=\{(1,5),(2,5),(3,6),(4,6)\}$. Therefore, any failure(s) occurring among these links would disconnect both $\lambda_1$ and $\lambda_2$. Hence, the survivable probability of $\Lambda$ is $(1-0.1)(1-0.2)(1-0.1)(1-0.1) = 0.5832$, which is higher than that of either $\lambda_1$ or $\lambda_2$. Derived from the example above, we have the following definition. \begin{define}\label{def:treeSetSp} Given a cross-layer network $(G_P, G_L)$, failure probability $\rho_e, e\in E_P$, and a protecting spanning tree set and its mappings $\Lambda=\{\lambda\}$, the survivable probability of $\Lambda$ is $\text{Prob}(\Lambda)=\prod_{e\in E^{M}_{P}(\Lambda)}(1- \rho_{e})$. \end{define} We also define the \textit{maximal protecting spanning tree set} as a protecting spanning tree set with the maximal survivable probability over all logical link mappings. \subsection{Survivable Probability, Link Mapping, and Base Protecting Spanning Tree Set}\label{subsec:netTreeSetRlt} Consider a cross-layer network $(G_P, G_L)$ and the mappings of all logical links $M(E_L)=\{M(\mu): M(\mu)=p_{\mu}, p_{\mu}\in \mathcal{P}_{\mu}, \mu\in E_L\}$. Let $\Omega(E_L)=\{M(E_L)\}$ be the set of all logical link mappings, \textcolor{black}{i.e.,} $\Omega(E_L)$ contains all possible combinations of logical link mappings for all logical links. In this section, we explore the relation among $(G_P, G_L)$, $M(E_L)$, and the protecting spanning tree set $T$. We demonstrate the existence of a protecting spanning tree set whose survivable probability equals \textcolor{black}{the maximal survivable probability} of $(G_P, G_L)$. We also provide the necessary and sufficient conditions to identify such a $T$ and then evaluate the survivable probability accordingly.
We denote the maximal survivable probability of $(G_P, G_L)$ as $\Phi(G_P, G_L)$. \begin{proposition}\label{prop:cldNetSurProb} Given a cross-layer network $(G_P, G_L)$, all possible logical link mappings $\Omega(E_L)$, and failure probability $\rho_e$, $e\in E_P$, the maximal survivable probability of $(G_P, G_L)$ is $\Phi(G_P, G_L) = \max_{M(E_L)\in \Omega(E_L)}\prod_{e\in R(M(E_L))}(1-\rho_e)$, where $R(M(E_L))$ denotes the set of physical links whose failure(s) disconnect $G_L$ with $M(E_L)$. \end{proposition} \begin{IEEEproof} By Definition~\ref{def:netSp}, $(G_P, G_L)$'s survivable probability is determined by the physical links whose failures disconnect $G_L$. For given logical link mappings $M(E_L)$, $R(M(E_L))$ contains all physical links whose failure(s) disconnect $G_L$. Hence, $G_L$ remains connected if and only if none of the links in $R(M(E_L))$ fail, and $\prod_{e\in R(M(E_L))}(1-\rho_e)$ gives the survivable probability of $(G_P, G_L)$ over a given mapping $M(E_L)$. Since $\Omega(E_L)=\{M(E_L)\}$ contains all possible combinations of the logical-to-physical link mappings, $\Phi(G_P, G_L)=\max_{M(E_L)\in \Omega(E_L)}\prod_{e\in R(M(E_L))}(1-\rho_e)$ provides the maximal survivable probability of $(G_P, G_L)$. \end{IEEEproof} With Proposition~\ref{prop:cldNetSurProb}, $(G_P, G_L)$'s survivable probability is determined by its logical link mapping. We let $M^{*}(E_L)$ denote the logical link mapping which provides the maximal survivable probability for $(G_P, G_L)$, i.e., $M^{*}(E_L)=\arg_{M(E_L)\in \Omega(E_L)}\Phi(G_P, G_L)$. \begin{theorem}\label{thm:existTreeSet} Given a cross-layer network $(G_P, G_L)$ and failure probability $\rho_e$, $e\in E_P$, there exists a protecting spanning tree set and its mapping whose survivable probability is the same as that of $(G_P, G_L)$. \end{theorem} Please refer to Appendix~\ref{app:existTreeSet-proof} for the proof of Theorem~\ref{thm:existTreeSet}.
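For a fixed mapping, $R(M(E_L))$ and the resulting term of Proposition~\ref{prop:cldNetSurProb} can be computed by brute force on small instances: fail each physical link, drop the logical links routed over it, and test the connectivity of $G_L$. The sketch below does this for the Fig.~\ref{fig:2tree} mapping under single-link failures, with the per-link probabilities inferred from Table~\ref{tbl:instance}.

```python
fs = frozenset

def connected(nodes, edges):
    """Depth-first connectivity check; edges are 2-element frozensets."""
    nodes = set(nodes)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for e in edges:
            if v in e:
                w = next(u for u in e if u != v)
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
    return seen == nodes

def critical_links(logical_nodes, link_mapping, rho):
    """R(M(E_L)): physical links whose (single) failure disconnects G_L,
    plus the survivable probability prod_{e in R} (1 - rho_e) of the
    fixed mapping, as in Proposition 1."""
    R = set()
    for e in rho:  # rho's keys are taken as the physical link set
        surviving = [u for u, path in link_mapping.items() if e not in path]
        if not connected(logical_nodes, surviving):
            R.add(e)
    p = 1.0
    for e in R:
        p *= 1 - rho[e]
    return R, p

# Fig. 2 data: logical-link routings and per-link failure probabilities
M = {fs({1, 2}): {fs({1, 5}), fs({2, 5})},
     fs({1, 3}): {fs({1, 4}), fs({4, 6}), fs({3, 6})},
     fs({2, 4}): {fs({2, 3}), fs({3, 6}), fs({4, 6})},
     fs({3, 4}): {fs({3, 6}), fs({4, 6})}}
rho = {fs({1, 4}): 0.2, fs({2, 5}): 0.2, fs({1, 5}): 0.1,
       fs({2, 3}): 0.1, fs({3, 6}): 0.1, fs({4, 6}): 0.1}
R, phi = critical_links({1, 2, 3, 4}, M, rho)
```

For this mapping only $(3,6)$ and $(4,6)$ are critical, giving $(1-0.1)^2 = 0.81$ for the mapping; $\Phi(G_P, G_L)$ would then maximize this quantity over all mappings in $\Omega(E_L)$.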
\begin{comment} \textit{Sketch of Proof:} First, given a logical link mapping $M(E_L)$, and a protecting spanning tree set and its mapping, containing all protecting spanning tree's and its tree mappings, denoted as $\Lambda^{F}(M(E_L))$. We have following two claims hold.\\ Claim 1: After a physical link's failure, the logical network remains connected, if and only if exists a protecting spanning tree in a protecting spanning tree set protects the physical link. Please See details in Lemma 1 in Appendix A. \\ Claim 2: $R(M(E_L))=E^{M}_{P}(\Lambda^{F}(M(E_L)))$. Please see in Lemma 2 in Appendix A.\\ Then, with the set of logical link mapping $\Omega(E_L)$, and $M'(E_L)$ and $\Lambda^{F}(M'(E_L))$ be the logical link mapping and protecting spanning tree set and its mapping provides the maximal survivable probability , i.e., $M'(E_L)=\arg_{M(E_L)\in \Omega(E_L)}\prod_{e\in E^{M}_{P}(\Lambda^{F}(M'(E_L)))}(1-\rho_e)$, We finally prove that survivable probability under $M'(E_L)$ and $M^{*}(E_L)$ are same with above claims. Detailed proof of Theorem~\ref{thm:existTreeSet}, please see in Appendix A.\\ \end{comment} With Theorem~\ref{thm:existTreeSet}, we define a protecting spanning tree set which provides the maximal survivable probability, i.e., $\Phi(G_P, G_L)$, as a \textit{base protecting spanning tree set}. \begin{define}\label{def:baseTreeSet} Given a cross-layer network $(G_P, G_L)$, a \textit{base protecting spanning tree set} for the cross-layer network has the same survivable probability as that of the cross-layer network. \end{define} We let $\Lambda^{B}(G_P,G_L)$ represent a base protecting spanning tree set and its mappings of a given cross-layer network $(G_P, G_L)$. \begin{corollary}\label{cor:ncSufBaseProTreeSet} A protecting spanning tree set is a base protecting spanning tree set if and only if it is the maximal protecting spanning tree set.
\end{corollary} \begin{IEEEproof} The necessary condition follows from the proof of Theorem~\ref{thm:existTreeSet}. The sufficient condition is proved by contradiction: if a protecting spanning tree set had a survivable probability higher than that of a base protecting spanning tree set, the base protecting spanning tree set would not attain the survivable probability of the cross-layer network, a contradiction. \end{IEEEproof} \textcolor{black}{By Lemma~\ref{lm:flLink} in Appendix~\ref{app:existTreeSet-proof}, given a survivable routing, every physical link is protected by at least one protecting spanning tree. A base protecting spanning tree set also protects all physical links; in other words, $E^{M}_{P}(\Lambda^{B}(G_P,G_L))=\emptyset$. Hence, a survivable cross-layer network has a 100\% survivable probability against arbitrary physical link failure.} Therefore, the survivable cross-layer network design problem with guaranteed 100\% survivable probability is a subproblem of the \textcolor{black}{cross-layer network design with the maximal survivable probability}. \subsection{Unified Physical Link Failure Probability}\label{subsec:failProb} A unified probability of failure, where the failure probability of all physical links is the same (i.e., $\rho_e=\rho$ with $e\in E_P$), is a special case of \textcolor{black}{random physical link failure}. We first study the maximal protecting spanning tree and have the following conclusion. \begin{proposition}~\label{prop:mxTreeSpCase} Given $(G_P, G_L)$ and physical link failure probability $\rho_e=\rho$ with $e\in E_P$, the maximal protecting spanning tree is a tree $\tau$ with the minimal number of physical links utilized in $M(\tau)$. \end{proposition} \begin{IEEEproof} Based on Definition~\ref{def:proTreeSp}, the maximal survivable probability \textcolor{black}{of $\tau$ and its mapping is} $\max \prod_{e\in E_{P}(\lambda)}(1-\rho_e)$ $ = \max (1-\rho)^{|E_{P}(\lambda)|}$ (because $\rho_{e} = \rho$).
Thus, $\lambda$ with $\min |E_{P}(\lambda)|$ produces the maximal survivable probability. \end{IEEEproof} \begin{theorem}~\label{thm:failProbMinLk} Given $(G_P, G_L)$ and unified failure probability $\rho$, a base protecting spanning tree set and its mapping with $\min|E^{M}_{P}(\Lambda^{B}(G_P,G_L))|$ \textcolor{black}{provides the maximal} survivable probability of $(G_P, G_L)$ as $(1-\rho)^{\min|E^{M}_{P}(\Lambda^{B}(G_P,G_L))|}$. \end{theorem} \begin{IEEEproof} With Definition~\ref{def:treeSetSp}, the survivable probability of a base protecting spanning tree set and its mapping is $\text{Prob}(\Lambda^{B}(G_P,G_L))=\prod_{e\in E^{M}_{P}(\Lambda^{B}(G_P,G_L))}(1-\rho_e)=(1-\rho)^{|E^{M}_{P}(\Lambda^{B}(G_P,G_L))|}$. According to Corollary~\ref{cor:ncSufBaseProTreeSet} and Proposition~\ref{prop:mxTreeSpCase}, minimizing $|E^{M}_{P}(\Lambda^{B}(G_P,G_L))|$ leads to $\Phi(G_P, G_L)$\textcolor{black}{,} the maximum of $\text{Prob}(\Lambda^{B}(G_P,G_L))$. \end{IEEEproof} Based on Theorem~\ref{thm:failProbMinLk}, finding the survivable probability of $(G_P, G_L)$ with unified failure probability is equivalent to solving a cross-layer network design problem targeting the minimal number of shared physical links in its logical link mappings. The above proof also demonstrates that a base protecting spanning tree set and its mappings can provide a (partially) survivable network design along with a more precise evaluation metric on its reliability. Note that Theorem~\ref{thm:failProbMinLk} only holds when all physical links have the unified failure probability. When considering random link failure probabilities, the minimal set of physical links whose failures disconnect the logical network may not be equivalent to $E^{M}_{P}(M(E_L))$, thus leading to a survivable probability different from that of the cross-layer network.
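On small instances, Proposition~\ref{prop:mxTreeSpCase} can be checked by exhaustive enumeration rather than the mathematical program of the next section: list the logical spanning trees and pick the one whose routing uses the fewest physical links. The sketch below does this for the Fig.~\ref{fig:2tree} mappings; it is a brute-force illustration, not the proposed solution approach.

```python
from itertools import combinations

fs = frozenset

def connected(nodes, edges):
    """Depth-first connectivity check; edges are 2-element frozensets."""
    nodes = set(nodes)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for e in edges:
            if v in e:
                w = next(u for u in e if u != v)
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
    return seen == nodes

def spanning_trees(nodes, edges):
    """Brute-force enumeration: |V|-1 edges that connect all nodes."""
    for cand in combinations(edges, len(nodes) - 1):
        if connected(nodes, cand):
            yield set(cand)

# Fig. 2 link mappings; with unified rho, the maximal protecting spanning
# tree minimizes the number of physical links used (Proposition 2)
M = {fs({1, 2}): {fs({1, 5}), fs({2, 5})},
     fs({1, 3}): {fs({1, 4}), fs({4, 6}), fs({3, 6})},
     fs({2, 4}): {fs({2, 3}), fs({3, 6}), fs({4, 6})},
     fs({3, 4}): {fs({3, 6}), fs({4, 6})}}

used = lambda tree: set().union(*(M[u] for u in tree))
best = min(spanning_trees({1, 2, 3, 4}, list(M)),
           key=lambda t: len(used(t)))
```

Here the tree with branches $(1,3)$, $(2,4)$, $(3,4)$ uses only four physical links, so with a unified $\rho = 0.1$ its survivable probability is $(1-\rho)^4 = 0.6561$.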
Compared with the approach considered in~\cite{lee2014maximizing}, where the reliability of a cross-layer network is approximated through failure polynomials generated by enumerating an exponential number of cross-layer cutsets with unified link failure probability, the base protecting spanning tree set provides an \textcolor{black}{exact calibration of the survivable probability} under both unified and random physical link failure probabilities. \section{Solution Approach}\label{sec:approach} Based on Theorems~\ref{thm:existTreeSet} and~\ref{thm:failProbMinLk}, we present in this section the mathematical programming formulations to compute the survivable probability of a cross-layer network. We first present the formulation for the survivable probability with unified physical link failure probability as a special case in Section~\ref{subsec:speCase}, followed by a generalized formulation addressing random probabilities of failure in Section~\ref{subsec:baseTreeSet}. Variables and parameters used in the formulations are listed in Table~\ref{tbl:vrPr}. \begin{table}[!b] \begin{tabular}{p{2cm}|p{6cm}} \hline\hline \rule{0pt}{9pt} Variable &Description\\ \hline \rule{0pt}{8pt} $x_{ij}$& Binary variable indicating whether $(i,j)$'s failure disconnects the logical network. If yes, $x_{ij}=1$; otherwise, $x_{ij}=0$\\ \rule{0pt}{8pt} $y^{st}_{ij}$& Binary variable indicating whether logical link $(s,t)$ is routed through physical link $(i,j)$ or not. If yes, $y^{st}_{ij}=1$; otherwise, $y^{st}_{ij}=0$\\ \rule{0pt}{8pt} $z_{st}$& Binary variable indicating whether logical link $(s,t)$ is connected and forms a protecting spanning tree. If yes, $z_{st}=1$; otherwise, $z_{st}=0$\\ \rule{0pt}{8pt} $w^{ij}_{st}$& Binary variable indicating whether logical link $(s,t)$ is connected and forms a protecting spanning tree after physical link $(i,j)$ fails.
If yes, $w^{ij}_{st}=1$; otherwise, $w^{ij}_{st}=0$\\ \rule{0pt}{8pt} $g_{ij}$& Binary variable indicating whether physical link $(i,j)$ is shared by trees in a base protecting spanning tree set. If yes, $g_{ij}=1$; otherwise, $g_{ij}=0$\\ \hline\hline \rule{0pt}{9pt} Parameter & Description\\ \hline \rule{0pt}{8pt} $c_{ij}$ & The weight coefficient for physical link $(i,j)$. With unified failure probability, $c_{ij}=1$; with random physical link failure probability, $c_{ij}=\textcolor{black}{-}\ln(1-\rho_{ij})$\\ \hline\hline \end{tabular} \vspace{2pt} \caption{Variables and parameters used in mathematical formulations} \label{tbl:vrPr} \end{table} \subsection{Survivable Probability of Cross-layer Networks with Unified Physical Link Failure Probability}\label{subsec:speCase} We first present a mixed-integer programming formulation to generate the maximal protecting spanning tree, followed by a formulation to generate a base protecting spanning tree set. \subsubsection{Maximal Protecting Spanning Tree}\label{subsubsec:maxTree} Given unified failure probability $\rho$ on physical links, we propose a mixed-integer programming formulation with the objective to minimize the number of physical links utilized in the tree branches' routings (based on Proposition~\ref{prop:mxTreeSpCase}). \begin{align} & \min_{x,y,z}\sum_{(i,j)\in E_P}x_{ij}\nonumber\\ s.t.
&\sum_{(i,j)\in E_P}y^{st}_{ij}-\sum_{(j,i)\in E_P}y^{st}_{ji}= \left\{\begin{matrix} z_{st}, &\,\mbox{if } i=s,\\ -z_{st}, &\,\mbox{if } i=t,\\ 0, &\,\mbox{if } i\neq \{s, t\}, &\end{matrix}\right.\label{fm:lightpath1}\\ &y^{st}_{ij}+y^{st}_{ji} \leq x_{ij}, (s,t)\in E_L, (i,j)\in E_P\label{fm:minLinkSel}\\ &\sum_{(s,t)\in E_L}z_{st} - \sum_{(t,s)\in E_L} z_{ts} = \left\{\begin{matrix} |V_L|-1, \text{if } s=s_0\\ -1, \text{if } s\neq s_0, s\in V_L &\end{matrix}\right.\label{fm:maxTreeTree1}\\ &\sum_{(s,t) \in E_L}z_{st} =|V_L|-1 \label{fm:zBnForced}\\ &x_{ij}, y^{st}_{ij}, z_{st} \in \{0,1\}, (s,t)\in E_L, (i,j)\in E_P\label{fm:maxTreeRegion} \end{align} Constraint (\ref{fm:lightpath1}) maps logical links onto physical paths and \textcolor{black}{selects the logical links} forming a logical spanning tree, in which $z_{st}$ on the right-hand side indicates whether $(s,t)$ is a branch of the logical spanning tree. Constraint (\ref{fm:minLinkSel}) indicates which physical links are utilized by the routings of a selected protecting spanning tree. Constraints (\ref{fm:maxTreeTree1}) and (\ref{fm:zBnForced}) form a protecting spanning tree corresponding to the logical link mapping generated by constraint (\ref{fm:lightpath1}). Constraint~(\ref{fm:maxTreeRegion}) provides the feasible regions for all decision variables. \subsubsection{Base Protecting Spanning Tree Set}\label{subsubsec:treeSet} The above formulation generates a maximal protecting spanning tree. Extending the formulation, we now present a mixed-integer programming formulation to compute the survivable probability with unified physical link failure probability. Based on Corollary~\ref{cor:ncSufBaseProTreeSet} and Theorem~\ref{thm:failProbMinLk}, the proposed formulation has the objective to minimize the total number of physical links shared by $M(T)$ of a protecting spanning tree set $T$. \begin{align} \min_{\textcolor{black}{g,w,y}} &\sum_{(i,j)\in E_P} g_{ij} \nonumber\\ s.t.
&\sum_{(i,j)\in E_P}y^{st}_{ij}-\sum_{(j,i)\in E_P}y^{st}_{ji}= \left\{\begin{matrix} \textcolor{black}{1}, &\,\mbox{if } i=s,\\ \textcolor{black}{-1}, &\,\mbox{if } i=t,\\ 0, &\,\mbox{if } i\notin \{s, t\}, &\end{matrix}\right.\label{fm:lightpath2}\\ &w^{ij}_{st}\leq 1-(y^{st}_{ij}+y^{st}_{ji}), (s,t)\in E_L, (i,j)\in E_P \label{fm:protTreeSet2}\\ \sum_{(s,t)\in E_L}&w^{ij}_{st} - \sum_{(t,s)\in E_L} w^{ij}_{ts} = \left\{\begin{matrix} \textcolor{black}{(1 - g_{ij})}, \qquad\text{if } s=s_0\\ \textcolor{black}{(g_{ij}-1)/(|V_L|-1)},\;\; \\ \qquad\quad\text{if } s\neq s_0, s\in V_L &\end{matrix}\right.\label{fm:maxTreeTreeSet2}\\ y^{st}_{ij}, &g_{ij} \in \{0,1\}, w^{ij}_{st}\geq 0, (s,t)\in E_L, (i,j)\in E_P\label{fm:maxTreeRegion2} \end{align} Similar to constraint (\ref{fm:lightpath1}), constraint (\ref{fm:lightpath2}) generates physical paths for logical links that are branches of a spanning tree in a base protecting spanning tree set. \textcolor{black}{Constraints (\ref{fm:protTreeSet2})--(\ref{fm:maxTreeTreeSet2}) generate a protecting spanning tree after any physical link's failure if the physical link is protected; otherwise, the physical link is identified as unprotected. With the information on unprotected physical links, the generated protecting spanning tree set, each element of which protects at least one physical link, is then identified as a base protecting spanning tree set.} Constraint~(\ref{fm:maxTreeRegion2}) provides the feasible regions for all decision variables. \subsection{Survivable Probability of Cross-Layer Networks with Random Physical Link Failure Probability}\label{subsec:baseTreeSet} In this section, we discuss a more generalized and realistic physical link failure scenario, where the failure probabilities vary across physical links. Based on Corollary~\ref{cor:ncSufBaseProTreeSet}, the objective used to select a base protecting spanning tree set is as follows.
\begin{align} &\max_{\Lambda}\prod_{e\in E^{M}_{P}(\Lambda)}(1-\rho_e)\label{obj:nonlinear} \end{align} Objective (\ref{obj:nonlinear}) is nonlinear. Applying the $\ln$ function converts it into the equivalent linear form. \begin{align} \max_{\Lambda}\sum_{e\in E^{M}_{P}(\Lambda)}\ln(1-\rho_e).\label{fm:objMaxWeight} \end{align} \textcolor{black}{We let $c_{ij}$ be the weights on physical links: $c_{ij}=1$ for unified physical link failure probability, and $c_{ij}=-\ln(1-\rho_{ij})$ for random probabilities of link failure.} The generalized formulation for the survivable probability of a cross-layer network is then \begin{align} \min_{y,g}&\sum_{(i,j)\in E_P}c_{ij}g_{ij}\nonumber\\ s.t. & \text{ Constraints (\ref{fm:lightpath2}) -- (\ref{fm:maxTreeRegion2})} \end{align} Note that \textcolor{black}{with unified failure probability, the formulation for the base protecting spanning tree set minimizes} the total number of shared physical links. But with random failure probability, after linearization, the objective is to \textcolor{black}{maximize the total weight of the shared physical links~(\ref{fm:objMaxWeight}) with non-positive physical link weights}, because the physical link failure probability is in $[0,1]$ and the value $\ln(1-\rho_e)$ \textcolor{black}{is non-positive}. When we let the physical link weight $c_{ij}$ be $-\ln(1-\rho_{ij})$ with $(i,j)\in E_P$ \textcolor{black}{as the non-negative weight, the generalized objective becomes minimizing the total weight (non-negative) of the shared physical links.} \section{Protecting Spanning Tree vs. Steiner Tree}\label{sec:PST-Steiner} In this section, we discuss the relationship between a protecting spanning tree in a cross-layer network and a Steiner tree in the physical network whose terminal nodes are the physical nodes corresponding to the logical nodes and whose Steiner nodes are a subset of the remaining physical nodes.
First, we show that the maximal protecting spanning tree in a cross-layer network is a minimum Steiner tree in its physical network in which the terminal nodes are the subset of physical nodes onto which logical nodes are mapped. This conclusion leads to a factor-$(\ln 4 +\varepsilon)$ approximation algorithm for the maximal protecting spanning tree in a cross-layer network. Motivated by this conclusion, we further study the relationship between survivable cross-layer network design and the edge-disjoint Steiner tree packing problem (with 100\% survivable probability). We demonstrate that the existence of an edge-disjoint Steiner tree packing with logical network augmentation provides necessary and sufficient conditions for survivable cross-layer routing. \subsection{Maximal Protecting Spanning Tree vs. Minimum Steiner Tree} The minimum Steiner tree problem~\cite{garey2002computers} is defined as follows. \begin{problem}\label{pb:mStp} The minimum Steiner tree problem~\cite{hauptmann2013compendium}\\ INSTANCE: Graph $G=(V,E)$, edge cost $c: E \rightarrow R^{+}$, a set of terminal nodes $S\subseteq V$. \\ SOLUTION: A tree $\gamma=(V_{\gamma}, E_{\gamma})$ in $G$ such that $E_{\gamma} \subseteq E$ and $S\subseteq V_{\gamma} \subseteq V$. \\ OBJECTIVE: Minimize cost function $\sum_{e\in E_{\gamma}}c(e)$. \end{problem} The minimum Steiner tree problem is $\mathcal{NP}$-hard~\cite{garey2002computers} and has a factor-$(\ln 4 + \varepsilon)$ approximation algorithm~\cite{byrka2010improved}. Its special case in a planar graph is polynomially solvable in $\mathcal{O}(3^{k}n + 2^k (n\log n +m))$ time, where $n =|V|$, $k=|S|$, and $m=|E|$~\cite{borradaile2009n}. We now demonstrate that the maximal protecting spanning tree problem is equivalent to the minimum Steiner tree problem in the physical network. Let $V^{L}_{P}$ be the set of physical nodes onto which the logical nodes are mapped.
\begin{theorem}\label{thm:maxPTminST} Given a cross-layer network $(G_P, G_L)$ and the maximal protecting spanning tree with its mapping $\lambda^{*}=[\tau^{*}, M(\tau^{*})]$, $M(\tau^{*})$ is the minimum Steiner tree in $G_P$ with $V^{L}_{P}$ as the terminal nodes and $c_e=-\ln(1-\rho_e)$ as the link costs. \end{theorem} The proof of Theorem~\ref{thm:maxPTminST} is given in Appendix~\ref{app:maxPTminST}. \begin{corollary}\label{cor:mPTApp} Given a cross-layer network $(G_P, G_L)$ and failure probability $\rho_e$ with $e\in E_P$, a factor-$(\ln 4 + \varepsilon)$ approximation algorithm exists for the maximal protecting spanning tree problem. If $G_P$ is a planar graph, a polynomial algorithm exists for the maximal protecting spanning tree problem. \end{corollary} Let $G_P=(V_P,E_P)$ be the physical network, $V^{L}_{P}$ be the terminal node set, and $V_P\setminus V^{L}_{P}$ be the superset of the Steiner node set. Each physical link is assigned a \textcolor{black}{non-negative cost $c_e=-\ln(1-\rho_e)$} with $e\in E_P$. Based on Theorem~\ref{thm:maxPTminST}, we can apply the factor-$(\ln 4 + \varepsilon)$ approximation algorithm in~\cite{byrka2010improved}, and the maximal protecting spanning tree can be approximated within a factor of $\ln 4 + \varepsilon$. Furthermore, a polynomial-time algorithm with complexity $\mathcal{O}(3^k n + 2^k (n \log n + m))$, where $n =|V_P|$, $k=|V^{L}_{P}|$, and $m=|E_P|$~\cite{borradaile2009n}, exists for the maximal cross-layer protecting spanning tree problem, which only requires the physical network to be planar. \subsection{Survivable Cross-layer Network Design with Augmentation vs. Steiner Tree Packing}\label{subsubsec:maxSPCN} Motivated by the construction of the maximal protecting spanning tree via a minimum Steiner tree in the physical network, considering \textcolor{black}{multiple protecting spanning trees} leads us to the problem of packing edge-disjoint Steiner trees, described below.
\begin{problem} Packing edge-disjoint Steiner trees~\cite{hauptmann2013compendium}\\ INSTANCE: An undirected multigraph $G=(V,E)$, and a set of terminal nodes $S\subseteq V$.\\ SOLUTION: A set $\Gamma=\{\gamma_1,\cdots, \gamma_m\}$ of Steiner trees $\gamma_i$ for $S$ in $G$ which have pairwise disjoint sets of edges.\\ OBJECTIVE: Maximize $|\Gamma|$.\\ \end{problem} It was proved in~\cite{kaski2004packing} that finding two edge-disjoint Steiner trees is $\mathcal{NP}$-hard. Next, we build the connection between survivable cross-layer network design with logical augmentation and Steiner tree packing. We define the link augmentation as follows. \begin{define} \textit{Logical link augmentation}~\cite{zhou2017novel}\label{def:logAug}\\ Given a cross-layer network $(G_P, G_L)$ and a logical link $\mu=(s,t)\in E_L$, an augmented logical link $\mu'$ is a link parallel to $\mu$ such that $M(\mu)$ and $M(\mu')$ are edge-disjoint. \end{define} \begin{theorem}\label{thm:logTreePhyTree} Given a cross-layer network \textcolor{black}{$(G_P, G_L)$}, let $V^{L}_{P}$ be the set of physical nodes corresponding to the logical nodes. If two edge-disjoint Steiner trees can be packed in $G_P$, where $V^{L}_{P}$ are the terminal nodes and $V_P\setminus V^{L}_{P}$ is the superset of the Steiner nodes, the survivability of the cross-layer routing is guaranteed with logical link augmentation. \end{theorem} \begin{IEEEproof} Given a logical link $\mu=(s,t)\in G_L$, let $\mu'$ be the augmented logical link of $\mu$. With Definition~\ref{def:logAug}, $M(\mu)$ and $M(\mu')$ are edge-disjoint. Two edge-disjoint Steiner trees in $G_P$ with $V^{L}_{P}$ as their terminal nodes guarantee the existence of two edge-disjoint paths $p_1$ and $p_2$ connecting $i$ and $j$ with $M(s)=i$, $M(t)=j$, and $i,j \in V^{L}_{P}$. Hence, after any physical link failure, $s$ and $t$ remain connected.
Thus, the two edge-disjoint Steiner trees actually provide two protecting spanning trees in the logical network, which guarantee the connectivity of the logical network after the failure of any physical link. \end{IEEEproof} With Theorem~\ref{thm:logTreePhyTree}, we have the following sufficient condition for the survivability of a cross-layer network with logical link augmentation. \begin{corollary}\label{col:augSurvNecCondition} Given $G_P=(V_P,E_P)$, if $V^{L}_{P}$ is 13-edge-connected, then two edge-disjoint Steiner trees exist. \end{corollary} The conclusion directly follows from~\cite{west2012packing}, which shows that if the terminal nodes are 6.5$k$-edge-connected, there exist $k$ edge-disjoint Steiner trees. Note here that two special cases require less edge connectivity on terminal nodes, namely $k$-regular graphs~\cite{kriesell2009edge} and planar graphs~\cite{aazami2012approximation}. Furthermore, solution approaches for edge-disjoint Steiner tree packing, which has a factor-$\mathcal{O}(\sqrt{|V_P|}\log |V_P|)$ approximation algorithm~\cite{cheriyan2006hardness}, lead to solution approaches for survivable cross-layer routing design with logical link augmentation. Theorem~\ref{thm:logTreePhyTree} demonstrates that the cross-layer network design problem can be solved as its single-layer network counterpart with logical link augmentation. However, the same claim does not hold if logical augmentation is not allowed. \input{newSimResults-L} \section{Conclusion}\label{sec:conclusion} In this paper, we introduced a new evaluation metric, the survivable probability, to evaluate the probability that the logical network remains connected under physical link failure(s) with either unified or random failure probabilities. We explored exact solution approaches in the form of mathematical programming formulations.
We also discussed the relationship between the survivable probability of a cross-layer network and the protecting spanning tree set, which led to the base protecting spanning tree set approach. We proved the existence of a base protecting spanning tree set in a given cross-layer network and its necessary and sufficient conditions. We demonstrated that cross-layer network survivability may be solved or approximated through single-layer network structures, using techniques such as logical augmentation and criteria such as planarity. Our simulation results showed the effectiveness of the proposed solution approaches. \begin{appendices} \section{Proof of Theorem~\ref{thm:existTreeSet}}\label{app:existTreeSet-proof} Given a cross-layer network $(G_P, G_L)$, a set of all logical-to-physical link mappings $\Omega(E_L)$, and \textcolor{black}{a logical link mapping $M(E_L)\in \Omega(E_L)$, we let $\Lambda^{F}(M(E_L))=[\mathcal{T}^{F}, M(E_L)]$ be a protecting spanning tree set (containing all protecting spanning trees $\mathcal{T}^{F}$) with logical link mapping $M(E_L)$.} \begin{lemma}\label{lm:flLink} Given $(G_P, G_L)$, $\mathcal{T}^F$, and $\Lambda^{F}(M(E_L))$, $G_L$ remains connected after any physical link failure if and only if, for every physical link $e\in E_P$, a protecting spanning tree $\tau\in \mathcal{T}^{F}(M(E_L))$ exists that protects $e$. \end{lemma} \begin{IEEEproof} Proof of the necessary condition: given $M(E_L)\in\Omega(E_L)$, if $G_L$ remains connected after the failure of $e$, then a logical spanning tree $\tau$ exists with branch mapping $M(\tau)\subset M(E_L)$ that avoids $e$. \\ Proof of the sufficient condition: if a protecting spanning tree $\tau\in \mathcal{T}^{F}$ protects $e$, then $e\notin E_{P}(\tau)$. Hence, after $e$'s failure, $\tau$ guarantees the connectivity of $G_L$. \end{IEEEproof} With Lemma~\ref{lm:flLink}, if $G_L$ is disconnected due to the failure of $e$, then no protecting spanning tree exists to protect $e$ for the given $\mathcal{T}^F$ and its mappings.
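Lemma~\ref{lm:flLink} suggests a direct procedural check: a physical link $e$ is unprotected exactly when the logical network disconnects once every logical link routed over $e$ fails, and the survivable probability is the product of $(1-\rho_e)$ over the unprotected links. The sketch below illustrates this check; the toy graph, the link mapping, and the failure probabilities are invented for illustration and are not from the paper.

```python
import math

def is_connected(nodes, edges):
    """Breadth-first connectivity check on an undirected graph."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen == set(nodes)

def unprotected_links(logical_nodes, mapping, physical_links):
    """A physical link e is unprotected iff the logical network
    disconnects when all logical links whose path uses e fail."""
    bad = []
    for e in physical_links:
        surviving = [l for l, path in mapping.items() if e not in path]
        if not is_connected(logical_nodes, surviving):
            bad.append(e)
    return bad

# Toy instance (invented): logical triangle a-b-c; the logical link
# (c, a) shares both of its physical links with the other two links.
rho = {(1, 2): 0.01, (2, 3): 0.02, (1, 4): 0.05, (3, 4): 0.05}
mapping = {
    ("a", "b"): [(1, 2)],
    ("b", "c"): [(2, 3)],
    ("c", "a"): [(1, 2), (2, 3)],
}
bad = unprotected_links({"a", "b", "c"}, mapping, list(rho))
p = math.prod(1 - rho[e] for e in bad)          # survivable probability
p_log = math.exp(-sum(-math.log(1 - rho[e]) for e in bad))
print(sorted(bad), abs(p - p_log) < 1e-12)
```

The last line also confirms numerically that minimizing $\sum_e -\ln(1-\rho_e)$ over the shared (unprotected) links is equivalent to maximizing the survivable probability, which is the linearization used in Section~\ref{subsec:baseTreeSet}.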
\begin{lemma}\label{lm:eqSet} For a logical link mapping $M(E_L)\in \Omega(E_L)$, $R(M(E_L))= E^{M}_{P}(\Lambda^{F}(M(E_L)))$. \end{lemma} \begin{IEEEproof} We first prove that $R(M(E_L))\subseteq E^{M}_{P}(\Lambda^{F}(M(E_L)))$. Given $e\in R(M(E_L))$, with Lemma~\ref{lm:flLink}, no protecting spanning tree exists for $e$. Then we have $e\notin E_P\setminus E_P(\lambda)$ for any $\lambda\in \Lambda^{F}(M(E_L))$. Hence, $e\notin \cup_{\lambda\in \Lambda^{F}(M(E_L))}E_P\setminus E_P(\lambda)$. Let $A^{c}$ be the complement of set $A$. Then $e\in [\cup_{\lambda\in \Lambda^{F}(M(E_L))}E_P\setminus E_P(\lambda)]^{c}$, i.e., $e\in \cap_{\lambda\in \Lambda^{F}(M(E_L))}E_P(\lambda)$. Therefore, $R(M(E_L))\subseteq E^{M}_{P}(\Lambda^{F}(M(E_L)))$.\\ We now prove that $E^{M}_{P}(\Lambda^{F}(M(E_L)))\subseteq R(M(E_L))$. Given a physical link $e\in E^{M}_{P}(\Lambda^{F}(M(E_L)))$, we have $e\in E_P(\lambda)$ for all $\lambda\in \Lambda^{F}(M(E_L))$. With Lemma~\ref{lm:flLink}, no protecting spanning tree protects $e$; hence, $e\in R(M(E_L))$. Therefore, $E^{M}_{P}(\Lambda^{F}(M(E_L)))\subseteq R(M(E_L))$. \end{IEEEproof} \textit{Theorem~\ref{thm:existTreeSet}:} For a cross-layer network $(G_P,G_L)$, there exists a protecting spanning tree set which has the same survivable probability as that of $(G_P,G_L)$. \begin{IEEEproof} We let $M^{*}(E_L)$ be the logical link mapping with the maximal survivable probability, i.e., $M^{*}(E_L)=\arg\max_{M(E_L)\in \Omega(E_L)}\prod_{e\in R(M(E_L))}(1-\rho_e)$. Let $M'(E_L)$ be the logical link mapping for the maximal survivable probability of a cross-layer spanning tree set, i.e., $M'(E_L)=\arg\max_{M(E_L)\in \Omega(E_L)}\prod_{e\in E^{M}_{P}(\Lambda(M(E_L)))}(1-\rho_e)$. We now prove that $M^{*}(E_L)$ and $M'(E_L)$ achieve the same survivable probability. \\ With Lemma~\ref{lm:eqSet}, we have $R(M^{*}(E_L))= E^{M}_{P}(\Lambda^{F}(M^{*}(E_L)))$ and $R(M'(E_L))= E^{M}_{P}(\Lambda^{F}(M'(E_L)))$.
With the definition of $M^{*}(E_L)$ and $M'(E_L)$, we have $\prod_{e\in E^{M}_{P}(\Lambda^{F}(M'(E_L)))}(1-\rho_{e}) =\prod_{e\in R(M'(E_L))}(1-\rho_{e})\leq \prod_{e\in R(M^{*}(E_L))}(1-\rho_{e})$; and $\prod_{e\in R(M^{*}(E_L))}(1-\rho_{e})=\prod_{e\in E^{M}_{P}(\Lambda^{F}(M^{*}(E_L)))}(1-\rho_{e}) \leq \prod_{e\in E^{M}_{P}(\Lambda^{F}(M'(E_L)))}(1-\rho_{e})$. Hence, $\prod_{e\in R(M^{*}(E_L))}(1-\rho_{e})=\prod_{e\in E^{M}_{P}(\Lambda^{F}(M'(E_L)))}(1-\rho_{e})$. The conclusion holds. \end{IEEEproof} \section{Proof of Theorem~\ref{thm:maxPTminST}}\label{app:maxPTminST} \begin{lemma}\label{lm:noCycleInMapping} Given a cross-layer network $(G_P, G_L)$ and the maximal $\mathcal{PST}$ $\tau^{*}$ with its mapping $\lambda^{*}=[\tau^{*}, M(\tau^{*})]$, $M(\tau^{*})=(V^{L}_{P}, E(M(\tau^{*})))$ is a tree in $G_P$. \end{lemma} \begin{IEEEproof} We prove this conclusion by contradiction. Consider a maximal protecting spanning tree and its mapping, $\lambda^{*}$. Since $\tau^{*}$ is a logical spanning tree and $M(\tau^{*})$ is its mapping onto $G_P$, the physical nodes in $V^{L}_{P}$ are connected. With the maximal protecting spanning tree, we have $\tau^{*}=\arg\textcolor{black}{\max}_{\tau}\prod_{e\in E(M(\tau))}(1-\rho_{e})$. \textcolor{black}{Maximizing} $\prod_{e\in E(M(\tau))}(1-\rho_{e})$ is equivalent to minimizing $\sum_{e\in E(M(\tau))} c_e$. As discussed earlier, we consider $c_e=\textcolor{black}{-}\ln(1-\rho_e)$ as the edge cost. If $M(\tau^{*})$ is not a tree in $G_P$, then at least one cycle exists in $M(\tau^{*})$, denoted as $\varsigma=(\textcolor{black}{V^{L}_{P}}, E_\varsigma)\subseteq G_P$. By removing an edge subset of $\varsigma$, a spanning tree could be constructed with $\textcolor{black}{V^{L}_{P}}$ remaining connected; otherwise, $\textcolor{black}{V^{L}_{P}}$ is not fully connected in $M(\tau^{*})$, which contradicts the condition that $M(\tau^{*})$ is connected and has minimal weight (after removing edges in $\varsigma$). Hence, the conclusion holds.
\end{IEEEproof} \begin{IEEEproof} [\textit{Proof of Theorem~\ref{thm:maxPTminST}}] With Lemma~\ref{lm:noCycleInMapping}, $M(\tau^{*})$ connects $V^{L}_{P}$ without cycles. Taking $V^{L}_{P}$ as terminal nodes and $V_P\setminus V^{L}_{P}$ as the superset of Steiner nodes, $M(\tau^{*})$ constructs a spanning tree connecting all $V^{L}_{P}$ via nodes in $V_P \setminus V^{L}_{P}$ and edges in $E_P$. Meanwhile, with edge cost $c_e=\textcolor{black}{-}\ln(1-\rho_e)$, $\sum_{e\in E(M(\tau^*))}c_{e}$ is minimal as $\lambda^{*}=[\tau^{*}, M(\tau^*)]$ is the maximal protecting spanning tree and its mapping. Hence, the conclusion holds. \end{IEEEproof} \section{MIP Formulation for Survivable Cross-layer Network Routing}\label{app:MIP} We utilize the following MIP formulation~\cite{zhou2017novel} (SUR-TEST) to test whether a cross-layer network is survivable or not. \textcolor{black}{The definitions of the variables are in Table~\ref{tbl:vrPr}}. After solving the formulation, if a feasible solution exists, the cross-layer network is survivable; otherwise, the cross-layer network is non-survivable. \begin{align} \min &\sum_{(i,j)\in E_P} y_{ij}\nonumber\\ s.t.&\sum_{(i,j)\in E_P}y^{st}_{ij}-\sum_{(j,i)\in E_P}y^{st}_{ji}= \left\{\begin{matrix} \textcolor{black}{1}, &\,\mbox{if } i=s,\\ \textcolor{black}{-1}, &\,\mbox{if } i=t,\\ 0, &\,\mbox{if } i\notin \{s, t\}, &\end{matrix}\right.\label{fm:lightpath2_Suv}\\ &w^{ij}_{st}\leq 1-(y^{st}_{ij}+y^{st}_{ji}), (s,t)\in E_L, (i,j)\in E_P \label{fm:protTreeSet2_Suv}\\ \sum_{(s,t)\in E_L}&w^{ij}_{st} - \sum_{(t,s)\in E_L} w^{ij}_{ts} = \left\{\begin{matrix} \textcolor{black}{1}, \qquad\text{if } s=s_0\\ \textcolor{black}{-1/(|V_L|-1)},\;\; \\ \qquad\quad\text{if } s\neq s_0, s\in V_L &\end{matrix}\right.\label{fm:maxTreeTreeSet2_Suv}\\ y^{st}_{ij}&\in \{0,1\}, w^{ij}_{st} \in [0,1], (s,t)\in E_L, (i,j)\in E_P \end{align} \end{appendices} \bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:intro} At least two main categories of GRBs have been identified so far \citep{1993ApJ...413L.101K,1996ApJ...471..915K,2005Natur.437..859H,2014ARA&A..52...43B,2016Ap&SS.361..155H}. These two categories are the short-duration, spectrally hard GRBs (SGRBs) and the long-duration, spectrally soft GRBs (LGRBs). They are, based on both their temporal and spectral properties, probably produced by two different types of astrophysical sources \citep{2004IJMPA..19.2385Z}. SGRBs typically last for several tens of milliseconds. LGRBs typically last for dozens of seconds. It is possible that there exists another category in between \citep[the so-called intermediate-duration GRBs,][]{2009Ap&SS.323...83H,2016Ap&SS.361..155H}, or that ultra-long GRBs should be categorized separately from the normally long ones \citep{2014ApJ...781...13L}. In this review, I focus on progenitor models proposed for SGRBs. The simplistic picture is that while LGRBs are produced by massive stars at collapse, SGRBs are produced by the merger of two compact objects. The latter is especially interesting in the context of gravitational wave detections. Therefore, my review pays special attention to hypotheses explaining a double black hole merger accompanied by electromagnetic radiation. Sect.~\ref{sec:NS} reviews the most recent results of those SGRB progenitor theories that involve a neutron star. Sect.~\ref{sec:BH} discusses possible scenarios for forming a SGRB from two merging black holes, which may therefore be accompanied by an observable gravitational wave emission. Sect.~\ref{sec:Discus} concludes the review, speculating about the possible host environment of SGRBs with detectable gravitational wave counterparts. \section{Mergers involving neutron stars}\label{sec:NS} \subsection{NS+NS mergers} SGRBs have typical duration values of milliseconds up to two seconds \citep{2014ARA&A..52...43B}.
Thus, the progenitor model should have a dynamical timescale of milliseconds to seconds, too. Such a progenitor was suggested by \citet{Blinnikov:1984} and \citet{1989Natur.340..126E}, who both proposed that the merger of two neutron stars (NSs) may be responsible. Recently, \citet{2016ApJ...824L...6R} performed magneto-hydrodynamic simulations of NS-NS mergers. Fig.~\ref{fig:1} shows snapshots of their results. When the hypermassive NS forms as the merger product, it sheds a significant amount of baryonic material which creates an accretion disc. This disc is instrumental in the formation of the SGRB, as GRB emission requires the presence of relativistic jets. The lifetime of the accretion disc, which corresponds to the lifetime of the jet and thus to the duration of the GRB, is found to be 0.1~s in their simulation. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Ruiz+2016.png} \caption{Snapshots of the magneto-hydrodynamic simulation of \citet{2016ApJ...824L...6R} showing the merger of two neutron stars. The coloring represents the density of the baryonic material; the white lines are magnetic field lines. During the merger, a hypermassive neutron star forms (top right panel) and soon collapses into a black hole (bottom right panel). Note the accretion disc and the axial jets on the bottom middle panel. The lifetime of the accretion disc, which corresponds to the lifetime of the jet, is found to be 0.1~s. It is worth visiting the website of the project, where one can follow the same system's evolution on video. Link for video: \textsf{http://aasnova.org/2016/06/10/jets-from-merging-neutron-stars/} \textsl{Credit: \citet{2016ApJ...824L...6R}.} } \label{fig:1} \end{figure} \subsection{BH+NS mergers} Another possibility for a SGRB progenitor was suggested by \citet{1992ApJ...395L..83N}, namely the merger of a neutron star and a black hole (BH).
This possibility was also recently simulated by \citet{2015ApJ...806L..14P}, who showed that, using reasonable initial parameters for the magnetic field, it is possible to form an accretion disc around the merger product from the material shed by the NS, and that jets emerge from the system. Thus, the merger of a BH and a NS is also a promising progenitor model for SGRBs. Both a NS+NS merger and a BH+NS merger should, of course, emit gravitational wave radiation. However, such an event is still below the detection limit of our most state-of-the-art gravitational wave detector, aLIGO \citep{2012MNRAS.425.2668C}. \section{The merger of two black holes}\label{sec:BH} As mentioned above, the prerequisite for forming a GRB is the presence of an accretion disc around the central object. In the case of a NS+BH or a NS+NS merger, the accretion disc is created from the baryonic material shed by the NS or NSs. In the case of a BH+BH merger, however, there is normally no material to be shed. Therefore, the possibility of a SGRB coming from a BH+BH merger was rarely discussed until recently. The recent detection of gravitational wave (GW) emissions \citep{2016PhRvL.116x1103A} and the Fermi detection of a (moderately probable) electromagnetic counterpart \citep{2016ApJ...826L...6C} suddenly changed the direction of research, and many people started to theorize about possible channels through which even a BH+BH merger may produce a SGRB.
\subsection{Chemically homogeneously evolving massive binaries, and a dead disc: GW+SGRB} One interesting hypothesis was presented by \citet{2016ApJ...821L..18P}, which is built on the work of \citet{2016A&A...588A..50M}. The latter presented massive binary evolutionary models at low-metallicity, and showed that both stars in these models avoid the supergiant phase due to chemically homogeneous evolution. Avoiding the supergiant phase means they also avoid the common envelope phase -- a phase which is currently undergoing serious investigation \citep{2016Natur.534..512B,2016A&A...596A..58K} but is still weighted with many uncertainties in the case of massive stars. But when both massive stars evolve chemically homogeneously, they do not have a common envelope phase. Rather, they reach a contact phase during which they exchange mass. As a result, the companions have almost equal masses at the end of their main-sequence evolution. After the main-sequence phase, they shrink further and become fast rotating helium stars. Since the mass ratio is close to one, their explosions will happen soon after one another. As for the explosion, a supernova of type~Ib/c accompanied by a LGRB is expected in both cases. The LGRBs may be observed only if the jets are along the line-of-sight. After the explosions, the remnants are two black holes orbiting around each other. They are losing orbital energy via gravitational wave radiation, slowly spiraling in. \citet{2016A&A...588A..50M} showed that their spiral-in may be well within the Hubble time. When they merge, they emit a well-defined signal of gravitational wave radiation, which can be searched for in the aLIGO data. \citet{2016ApJ...821L..18P} suggested that since these stars rotate very fast in the moment of their core-collapse, it is possible that -- in case of a weak supernova explosion -- at least one of them keeps a disc. They hypothesize that accretion of this disc stops soon after the core-collapse. 
The disc then cools down, suppressing the magnetorotational instability and hence the viscosity, and becomes inactive for a long time. This way, the disc may survive the slow spiral-in of the BHs. But when the companion BH reaches the outer rim of the disc, the disc becomes active again. Thus, we have a situation in which a BH with an active accretion disc is merging with another BH. This opens the possibility for a SGRB while the BH+BH merger is producing detectable GW emission at the same time. \citet{2016ApJ...821L..18P} estimated a GRB timescale of 0.005~s. \begin{figure}[htbp] \centering \includegraphics[width=0.7\linewidth]{Perna-model.png} \caption{Set-up of the hypothetical scenario forming a double black hole merger together with an electromagnetic counterpart. Once the two black holes come close enough, the disc is re-heated and accretion starts again. \textsl{Credit: \citet{2016ApJ...821L..18P}.} } \label{fig:2} \end{figure} \subsection{But what kind of stars are these?} The double black hole system was formed from a special type of massive star binary. This binary is on a very close orbit and is, therefore, initially synchronized (i.e. the orbital period is the same as the spin period of the companions). These stars not only orbit around each other fast but spin fast, too \citep{2009A&A...497..243D,2016MNRAS.460.3545D,2016MNRAS.458.2634M}. At low metallicity ($\sim$1/10 of solar and below), fast-rotating massive stars evolve chemically homogeneously \citep{2006A&A...460..199Y,2011A&A...530A.115B,2015A&A...573A..71K,2015A&A...581A..15S}. This means that they do not develop a distinct core-envelope structure as typical massive stars do. Instead, they are homogeneously mixed. Their surface composition reflects the nuclear burning of the core. They do not become supergiants, but stay rather small (a few tens of solar radii) throughout their evolution. At the end of their main-sequence phase, they consist almost entirely of helium.
Many such stellar models were created and analysed by \citet{2015A&A...581A..15S}. They showed that although some of the surface properties (e.g. temperature, composition) are similar to those of Wolf--Rayet (WR) stars, these chemically homogeneously evolving objects have \textsl{weak} stellar winds. Therefore, they are not WR stars in the classical sense: WR stars are observed to have strong, optically thick winds. \citet{2015A&A...581A..15S} called these objects Transparent Wind Ultraviolet Intense (TWUIN) stars. TWUIN stars are the result of chemically homogeneous evolution. \begin{figure}[htbp] \centering \includegraphics[width=0.9\linewidth]{core-env-label.png} \caption{Schematic structure of some massive stars. While supergiants and hypergiants have a distinct core-envelope structure, TWUIN stars are rather compact ($\sim$10-20~R$_{\mathrm{solar}}$), homogeneous, hot objects. } \label{fig:3} \end{figure} This evolutionary channel of massive stars is only possible at low metallicity. The reason is the following. In the case of massive stars with line-driven stellar winds, the mass-loss rate scales with metallicity. Since losing mass also means losing angular momentum, with a stronger mass-loss comes a faster spin-down. Thus, massive stars at high metallicity (e.g. at solar composition) typically do not rotate fast. Fast rotation, however, is needed for chemically homogeneous evolution: the process responsible for keeping the whole star homogeneous is called rotational mixing. The effectiveness of rotational mixing scales with the rotation rate. Therefore, according to our most state-of-the-art stellar models, chemically homogeneous evolution can only happen at low metallicity. If born in a close binary system, chemically homogeneously evolving TWUIN stars are expected to produce two consecutive supernova explosions of type Ib/c (and a LGRB, as explained above).
If at least one of the supernovae is weak, some fraction of the stellar material may remain in a circumstellar orbit and cool. Then the two remnant BHs slowly spiral in, and when the merger happens, the revitalized accretion disc provides the conditions for a SGRB. In this theory, \textsl{TWUIN binaries are the stellar progenitors of an exotic explosion which emits both gravitational waves and electromagnetic radiation.} \subsection{Alternative hypothesis: charged BHs} Another promising hypothesis was presented by \citet{2016ApJ...827L..31Z}. In this theory, there is no baryonic material present in the BH+BH merger system. On the other hand, one of the BHs carries a high amount of charge. The rapidly evolving magnetic moment during the merger may first lead to a fast radio burst (FRB), and then to a SGRB. \citet{2016ApJ...827L..31Z} suggests that, combined with future GW detections with an electromagnetic counterpart, this theory may put constraints on the charges carried by isolated BHs. \section{Discussion}\label{sec:Discus} \subsection{Dwarf galaxies as birthplaces of GW+SGRB events} TWUIN stars are theoretical predictions at low metallicity. However, their presence in irregular, star-forming dwarf galaxies has recently been considered based on the fact that TWUIN star models predict that a large number of ionizing photons are emitted in the He~II continuum \citep{2015A&A...581A..15S}. Some of the dwarf galaxies are observed to have an ionized interstellar medium, but no traces of any other ionizing source \citep[such as X-ray binaries or WR stars,][]{2015ApJ...801L..28K}. Therefore, \citet{2015wrs..conf..189S} speculated that the extent of ionization in dwarf galaxies may be an indication of the existence of chemically homogeneous evolution and, therefore, of TWUIN stars.
Since massive stars are almost always in binaries \citep{2012Sci...337..444S}, some of them in close binaries \citep{2009A&A...497..243D}, TWUIN binaries may be common in low-metallicity dwarf galaxies. \textsl{Dwarf galaxies are therefore expected to host events of GW emission accompanied by an SGRB.} \subsection{Detections and theories} From an observational point of view, the simultaneous detection of a GW and an SGRB is challenging, but possible with current instrumentation \citep{2016ApJ...826L...6C}. Identifying faint electromagnetic counterparts of a GW event is indeed an interesting data-mining problem \citep[][]{2013A&A...557A...8S,2016A&A...593L..10B}. If such a simultaneous detection happens, more and more theories are expected to emerge. These new theories will either build on and expand the theories described above, or contradict them. Hence, these are exciting times not only for observers but for theorists, too. \acknowledgements D. Sz. was supported by the Czech grant 13-10589S \v{C}R.
\section{INTRODUCTION} \label{sec:intro} In the classical star formation scenario, the magnetic field is assumed to be aligned with the initial rotation axis of the core for simplicity \citep[e.g.][]{nakamura1995,tomisaka1996,machida2008,matsumoto2004,machida2012,machida2013ii}. Thus, in such a setting, the outflow propagation direction and the disk rotation axis become parallel to the initial magnetic field. The magnetohydrodynamic (MHD) simulations based on this scenario provide results in which the disk rotation axis is parallel to the outflow direction \citep[e.g.,][]{machida11, Banerjee2006}. However, many dust polarization observations have revealed magnetic field directions that are not always aligned with the disk rotation axes and outflow directions. On protostellar envelope scales, the magnetic field directions are randomly oriented with respect to the outflow directions in Class 0/I sources \citep[][]{Hull2013}. Similar results are also found in T-Tauri stars \citep[e.g.][]{Menard2004}. These observations imply that stars are formed in molecular cloud cores whose initial rotation axes are randomly oriented with respect to the magnetic fields. Recently, three-dimensional MHD simulations were conducted to reveal the evolution of protostellar cores whose initial rotation axes and outflow directions are misaligned with the magnetic field \citep[e.g.][]{matsumoto2004,Joos2012, Li2013}. It is, therefore, important to study the detailed structure and kinematics of the outflow/envelope systems with misaligned magnetic fields, and compare them with the MHD simulations. We choose NGC1333 IRAS 4A as our target because this object is a good example of the misalignment between the magnetic field and the outflow direction. IRAS 4A is a Class 0 binary system located in the star-forming region NGC 1333, at a distance of 293 $\pm$ 22 pc \citep[][]{Zucker2018}. 
Previous high resolution observations toward IRAS 4A suggested that this system consists of two compact continuum sources with a separation of $1.8 \arcsec$ ($\sim 530$ au) \citep[e.g.][]{Looney2000, Reipurth2002}. The names IRAS 4A1 and IRAS 4A2 are assigned to the eastern and western continuum peaks, respectively. The two sources are embedded in a circumbinary envelope traced by the C$^{17}$O emission, which shows a northwest (redshifted) to southeast (blueshifted) velocity gradient \citep[][]{Ching2016}. Interestingly, the ammonia line emission shows an opposite velocity gradient near IRAS 4A2, which is interpreted as a Keplerian disk rotating in the direction opposite to the circumbinary envelope \citep[][]{choi07}. IRAS 4A is associated with a large-scale (a few arcminutes) bipolar outflow, which is traced by several molecular lines such as CO, SiO, etc \citep[e.g.][]{Choi2001,Santangelo2015, yildiz2012}. The interferometric image of SiO 1--0 revealed that the northern redshifted lobe is bent to the east at $\sim$20$\arcsec$ from the continuum peaks, while the two blueshifted southern lobes are ejected toward the southeast and southwest \citep{Choi2001}. The magnetic field in the IRAS 4A envelope shows a classical pinched hourglass shape with an orientation angle of $61 \arcdeg$ \citep[][]{doi2020}, which is misaligned with the rotation axis of the circumbinary envelope ($38 \arcdeg$). On smaller scales, the magnetic field is misaligned with the axes of the outflows from both IRAS 4A1 (northern lobe: $19 \arcdeg$, southern lobe: $-9 \arcdeg$) and IRAS 4A2 ($19 \arcdeg$) \citep[][]{Ching2016}. Furthermore, the orientation of the magnetic field is also misaligned with the angular momentum vector of the IRAS 4A2 disk ($19 \arcdeg$) \citep[][]{Choi2011}. In this paper, we present ALMA (Atacama Large Millimeter/sub-millimeter Array) observations with higher resolution (0\farcs3--0\farcs7) toward the IRAS 4A system. 
The molecular line observations include two transitions of SO ($J_N = 6_5-5_4$ and $J_N = 7_6-6_5$), CO ($J = 2-1$) and CCH ($N = 3-2$, $J = 7/2-5/2$). We have studied the morphology and kinematics of the outflow and envelope around IRAS 4A2 in detail, and compared the observational results with the recent resistive MHD simulations with misaligned magnetic fields and rotation (Hirano \& Machida 2019). \section{OBSERVATIONS} \label{sec:observations} We used three sets of ALMA archival data: ALMA2013.1.01102 at 262 GHz (P.I. N. Sakai), and ALMA2017.1.00053 and ALMA2013.1.00031 at 230 GHz (P.I. J. Tobin). The 262 GHz observations were carried out on June 13, 2015. The total observing time and the on-source time were 57 minutes and 26 minutes, respectively. The number of antennas was 35. The minimum projected baseline length was 21.3 m, which corresponds to the maximum recoverable size of $6\farcs 72$ (1969 au). The spectral windows for the SO ($J_N=7_6-6_5$) and CCH ($N=3-2, J=7/2-5/2, F=4-3$) lines have 960 channels covering 58.594 MHz at a frequency resolution of 61.035 kHz. In making the maps, seven channels were binned for both lines. The resultant velocity resolution is $0.49\ {\rm km~s^{-1}}$ (427.239 kHz). There is no spectral window assigned to the continuum in this data set. Therefore, we separated the continuum from the line data. The 230 GHz observations were carried out on September 27, 2015 (ALMA2013.1.00031) and from December 2017 to September 2018 (ALMA2017.1.00053). For ALMA2013.1.00031, the total observing time and the on-source time were 107 minutes and 3.5 minutes, respectively. The number of antennas was 33. The minimum projected baseline length was 43.3 m, which corresponds to the maximum recoverable size of $3\farcs91$ (1146 au). For ALMA2017.1.00053, the total observing time and the on-source time were 347.4 minutes and 18.14 minutes, respectively. The number of antennas was 45. 
The minimum projected baseline length was 15.1 m, which corresponds to the maximum recoverable size of $11\farcs2$ ($\sim 3000$ au). In ALMA2017.1.00053, the spectral windows for the SO ($J_N=6_5-5_4$) and CO $(J=2-1)$ lines have 960 and 1920 channels covering 58.593 MHz and 234.375 MHz, at frequency resolutions of 61.035 kHz and 244.141 kHz, respectively. In ALMA2013.1.00031, one spectral window covering 232.5--234.5 GHz was assigned to the continuum emission. We used the Common Astronomical Software Applications (CASA) package for image processing. The raw visibility data of ALMA2013.1.00031, ALMA2017.1.00053 and ALMA2013.1.01102 were calibrated with CASA versions 4.5.0, 5.4.0, and 4.3.1, respectively. The calibrated visibilities were Fourier transformed and CLEANed with the task {\it tclean} using natural weighting and a threshold of 1$\sigma$. We also performed self-calibration for the continuum data using tasks in CASA ({\it tclean}, {\it gaincal}, and {\it applycal}). First, the phase was calibrated with a time bin of 3 scans ($\sim 60$ s). Then, using the derived gain table, the amplitude and the phase were calibrated together. The self-calibration improved the RMS noise level of the continuum maps by a factor of $\sim 2$. The obtained calibration tables for the continuum data were also applied to the line data. The noise levels of the line maps were measured in emission-free channels. These and other parameters of our observations are summarized in Table \ref{ch4:tab:obs}. 
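The maximum recoverable sizes quoted above follow from the minimum projected baselines through the commonly used approximation ${\rm MRS}\approx 0.6\,\lambda/B_{\rm min}$ (in radians). The following short check is purely illustrative and not part of the actual reduction:

```python
import math

# Rough check of the maximum recoverable size (MRS) from the minimum
# projected baseline, using the approximation MRS ~ 0.6 * lambda / B_min
# (in radians). The input values are the ones quoted in the text.
C = 299792458.0  # speed of light [m/s]

def mrs_arcsec(freq_ghz, b_min_m):
    """Approximate maximum recoverable size [arcsec]."""
    wavelength = C / (freq_ghz * 1e9)  # [m]
    return 0.6 * wavelength / b_min_m * (180.0 / math.pi) * 3600.0

# 262 GHz data, B_min = 21.3 m -> close to the quoted 6.72 arcsec
print(round(mrs_arcsec(262.0, 21.3), 2))
# 230 GHz data, B_min = 43.3 m -> close to the quoted 3.91 arcsec
print(round(mrs_arcsec(230.0, 43.3), 2))
```

The 0.6 prefactor is only an approximation; the exact recoverable scale also depends on the baseline distribution of the array.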
\begin{deluxetable*}{ccccccc} \tabletypesize{\footnotesize} \tablecaption{Summary of the ALMA observational parameters \label{ch4:tab:obs}} \tablehead{ \colhead{Data set} & \multicolumn{2}{c}{262GHz} & \multicolumn{4}{c}{230GHz}\\ \colhead{Project Code} & \multicolumn{2}{c}{ALMA2013.1.01102.S} & \multicolumn{2}{c}{ALMA2013.1.00031.S} & \multicolumn{2}{c}{ALMA2017.1.00053.S}\\ \colhead{Date} & \multicolumn{2}{c}{13-Jun-2015} & \multicolumn{2}{c}{27-Sep-2015} & \multicolumn{2}{c}{\begin{tabular}{c} 17-Dec-2017, 07-Jan-2018, 11-Jan-2018,\\ 18-Jan-2018, 20-Sep-2018 \end{tabular}}\\ \colhead{Projected baseline length} & \multicolumn{2}{c}{ \begin{tabular}{c} 21.3 - 783.5 m\\(18.2 - 669.7 k$\lambda$) \end{tabular} } & \multicolumn{2}{c}{ \begin{tabular}{c} 43.3 - 2270 m\\(31.6 - 1669.1 k$\lambda$) \end{tabular} } & \multicolumn{2}{c}{ \begin{tabular}{c} 15.1 - 2516.9 m\\(11.6 - 1931 k$\lambda$) \end{tabular} }\\ \colhead{Maximum recoverable scale} & \multicolumn{2}{c}{\begin{tabular}{c} $6\farcs72$\\(2000 au) \end{tabular} } & \multicolumn{2}{c}{ \begin{tabular}{c} $3\farcs91$\\(1000 au) \end{tabular} } & \multicolumn{2}{c}{ \begin{tabular}{c} $11\farcs2$\\(3000 au) \end{tabular} } \\ \colhead{Primary beam} & \multicolumn{2}{c}{$23.6\arcsec$} & \multicolumn{2}{c}{$25.5\arcsec$} & \multicolumn{2}{c}{$25.5\arcsec$}\\ \colhead{Bandpass calibrator} & \multicolumn{2}{c}{J0237$+$2848} & \multicolumn{2}{c}{J0237$+$2848} & \multicolumn{2}{c}{J0237$+$2848}\\ \colhead{Flux calibrator} & \multicolumn{2}{c}{Titan} & \multicolumn{2}{c}{Titan} & \multicolumn{2}{c}{J0237$+$2848}\\ \colhead{Phase calibrator} & \multicolumn{2}{c}{J0319$+$4130} & \multicolumn{2}{c}{J0319$+$4130} & \multicolumn{2}{c}{J0336$+$3218}\\ \colhead{Phase center coordinate (J2000)} & \multicolumn{2}{c}{$03^{\rm h}29^{\rm m}$10\fs 51, $31^{\circ}13\arcmin 31\farcs 3$} & \multicolumn{2}{c}{$03^{\rm h}29^{\rm m}$10\fs 536, $31^{\circ}13\arcmin 30\farcs 93$} & \multicolumn{2}{c}{$03^{\rm h}29^{\rm m}$10\fs 536, 
$31^{\circ}13\arcmin 30\farcs 93$}\\} \startdata & \begin{tabular}{c} Rest \\frequency\\(GHz) \end{tabular} & \begin{tabular}{c} Center \\frequency\\(GHz) \end{tabular} & \begin{tabular}{c} Velocity\\resolution\\(${\rm km~s^{-1}}$) \end{tabular} & \begin{tabular}{c} Total\\bandwidth\\(MHz) \end{tabular} & \begin{tabular}{c} Beam\\Size\\(P.A.) \end{tabular} & \begin{tabular}{c} RMS noise level\\(${\rm mJy~beam^{-1}}$) \end{tabular}\\ \hline 1.3 mm Continuum & - & 232.529000 & - & 2000 & $0\farcs 26\times 0\farcs16\ (25\arcdeg)$ & 0.28 \\ $^{12}$CO ($J=2-1$) & 230.538000 & - & 0.32 & 234.375 & $0\farcs 29\times 0\farcs21\ (-26\arcdeg)$ & 3.4 \\ $^{32}$SO ($J_N=6_5-5_4$) & 219.949433 & - & 0.083 & 58.593 & $0\farcs 29\times 0\farcs21\ (-26\arcdeg)$ & 3.88 \\ SO ($J_N=7_6-6_5$) & 261.843721 & - & 0.49 & 58.594 & $0\farcs 65\times 0\farcs35\ (-28\arcdeg)$ & 2.38 \\ CCH ($N=3-2, J=7/2-5/2$) &&&&&\\ $F=4-3$ & 262.00426 & - & 0.49 & 58.594 & $0\farcs 75\times 0\farcs4\ (-32\arcdeg)$ & 2.11 \\ $F=3-2$ & 262.00648 & - & 0.49 & 58.594 & $0\farcs 75\times 0\farcs4\ (-32\arcdeg)$ & 2.11 \\ \enddata \end{deluxetable*} \newpage \section{RESULTS} \label{sec:results} \subsection{1.3mm Continuum} \label{sec:cont13} Figure \ref{fig:cont13} shows a map of 1.3 mm continuum emission from the IRAS 4A system at an angular resolution of $0\farcs26 \times 0\farcs16$. IRAS 4A is clearly resolved into two local peaks, which correspond to IRAS 4A1 and IRAS 4A2. We applied a 2D two-component Gaussian fitting to derive the peaks. The fitting is conducted using the area bounded by the black contours in Figure \ref{fig:cont13}. The derived peak locations in IRAS 4A1 and IRAS 4A2 are $\alpha ({\rm J2000})=03^{\rm h}29^{\rm m}10\fs 533,\ \delta ({\rm J2000})=31\arcdeg 13\arcmin 30\farcs 99$ and $\alpha ({\rm J2000})=03^{\rm h}29^{\rm m}10\fs 425,\ \delta ({\rm J2000})=31\arcdeg 13\arcmin 32\farcs 14$, respectively. 
The angular separation between the IRAS 4A1 and IRAS 4A2 continuum peaks is $1\farcs80$ ($\sim 530\ {\rm au}$), which is consistent with previous measurements \citep{Sahu2019, Sepulcre2017, su2019}. The continuum emission around IRAS 4A1 shows a compact elliptical structure with an extension to the southwest. Its peak intensity and beam-deconvolved size are $165.2\pm3.7\ {\rm mJy~beam^{-1}}$ and $0\farcs45 \times 0\farcs43$ (P.A.=38$\arcdeg$), respectively. The observed peak intensity corresponds to the brightness temperature of $T_{\rm b}=89$ K, and the Planck temperature of $T_{\rm p}=95$ K. In order to compare the peak intensity with previous research, we convolved our map to the same beam size ($0\farcs3 \times 0\farcs2$) as that of \citet[][]{Sahu2019}. The peak intensity after convolution is $199.1\ {\rm mJy~beam^{-1}}$, which corresponds to the brightness temperature of $T_{\rm b}=75$ K and the Planck temperature of $T_{\rm p}=80.4$ K. These are higher than the peak brightness temperature of $T_{\rm b}=57$ K and the Planck temperature of $T_{\rm p}=65.2$ K at 0.84 mm \citep[][]{Sahu2019}. The higher brightness temperature at 1.3 mm indicates that the 1.3 mm continuum, which has a lower optical depth than that at 0.84 mm, traces the inner region with higher temperature. In addition, the continuum emission from IRAS 4A1 is considered to be optically thick at 1.3 mm. To estimate the mass in the extended component (emission above 3$\sigma$ in Figure \ref{fig:cont13}) and the compact component (emission within the black contour in Figure \ref{fig:cont13}) of IRAS 4A1 through the equation $M=\frac{D^2F_\nu}{\kappa_\nu B_\nu(T_d)}$, we assume a gas-to-dust ratio of 100, a dust opacity of 0.01 ${\rm cm^{2}~g^{-1}}$ for the compact component and 0.008 ${\rm cm^{2}~g^{-1}}$ for the extended component \citep[][]{ossenkopf1994}, and a dust temperature of 90 K for the compact component and 60 K for the extended component \citep[][]{Sahu2019}. 
The masses of the extended component and the compact component are estimated to be 0.86 M$_\odot$ (1.51 Jy) and 0.14 M$_\odot$ (0.46 Jy), respectively. The mass of the extended component is a factor of two higher than that of \citet[][]{Sahu2019} probably due to a different aperture size when estimating the total flux and the slightly higher dust mass opacity used for our calculation. It should be noted that the estimated mass of the compact component is a lower limit due to the optically thick condition. The continuum emission from IRAS 4A2 shows a compact elliptical structure smaller than that from IRAS 4A1 and with an extension along the northwest-southeast direction. Its peak intensity and the beam-deconvolved size are $117.3\pm3.2\ {\rm mJy~beam^{-1}}$ and $0\farcs23 \times 0\farcs21$ (P.A.=50$\arcdeg$), respectively. The observed peak intensity corresponds to the brightness temperature of $T_{\rm b}=63\ {\rm K}$, and the Planck temperature of $T_{\rm p}=69.0\ {\rm K}$. After convolving our image to the beam size of $0\farcs3 \times 0\farcs2$, the peak intensity, the corresponding brightness temperature and the Planck temperature are $114.2\ {\rm mJy~beam^{-1}}$, $T_{\rm b}=43\ {\rm K}$ and $T_{\rm p}=48.4\ {\rm K}$, respectively. These are the same as the peak brightness temperature and Planck temperature at 0.84 mm. This indicates that the beam averaged continuum emission from IRAS 4A2 is also optically thick at both 0.84 mm and 1.3 mm. The line emission observed toward the continuum peak of IRAS 4A2 at these wavebands \citep{Sahu2019, Sepulcre2017, su2019} suggests that the beam filling factor of the optically thick continuum emission is rather small. Assuming the same parameters as those adopted for IRAS 4A1 except that the temperature of the compact component is 65 K for IRAS 4A2, the gas masses of the extended component and the compact component of IRAS 4A2 are estimated to be 0.32 M$_\odot$ (0.57 Jy) and 0.06 M$_\odot$ (0.13 Jy), respectively. 
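The mass estimates above can be reproduced from the quoted numbers. The sketch below assumes standard CGS constants and that the adopted opacities are referenced to the gas mass (i.e. the gas-to-dust ratio of 100 is folded into $\kappa_\nu$); it is an illustrative check, not the actual analysis code:

```python
import math

# Illustrative check of M = D^2 F_nu / (kappa_nu * B_nu(T_d)) in CGS units.
# Assumption: kappa_nu is referenced to the gas mass (gas-to-dust = 100
# already folded in), so the returned value is a gas mass.
H = 6.626e-27      # Planck constant [erg s]
K_B = 1.381e-16    # Boltzmann constant [erg/K]
C = 2.998e10       # speed of light [cm/s]
PC = 3.086e18      # parsec [cm]
JY = 1.0e-23       # jansky [erg s^-1 cm^-2 Hz^-1]
M_SUN = 1.989e33   # solar mass [g]

def planck(nu_hz, temp_k):
    """Planck function B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2.0 * H * nu_hz**3 / C**2 / math.expm1(H * nu_hz / (K_B * temp_k))

def mass_msun(flux_jy, nu_ghz, temp_k, kappa, dist_pc):
    """Mass [M_sun] from flux, frequency, dust temperature, opacity, distance."""
    d = dist_pc * PC
    return d**2 * flux_jy * JY / (kappa * planck(nu_ghz * 1e9, temp_k)) / M_SUN

# Extended component of IRAS 4A1: 1.51 Jy at 232.529 GHz, T_d = 60 K,
# kappa = 0.008 cm^2/g, D = 293 pc -> close to the quoted 0.86 M_sun
print(round(mass_msun(1.51, 232.529, 60.0, 0.008, 293.0), 2))
```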
The area of the compact component of the IRAS 4A2 continuum is smaller than the disk traced by ammonia \citep[][]{Choi2010}, while the ammonia emitting area corresponds to the 12 $\sigma$ contour in our continuum map. Additionally, \citet[][]{Santangelo2015} claimed to detect another continuum source, IRAS 4A3, at $4\arcsec$ north of the IRAS 4A2 continuum peak at the wavelength of 1.3 mm. However, there is no counterpart of IRAS 4A3 in our continuum image at the same wavelength. The upper limit of the continuum flux at the IRAS 4A3 position is $1.2~{\rm mJy~beam^{-1}}$ ($3\sigma$, after primary beam correction), which corresponds to $T_{\rm b}=0.65$ K. This source is not detected by other observations either; \citet[][]{tobin2018ii} attributed it to artifacts in the PdBI observations, and \citet[][]{maury2019} to continuum emission produced by the interaction between the outflow and the envelope. \begin{figure}[ht!] \epsscale{1.2} \plotone{Cont254GHz.pdf} \caption{Map of the 1.3 mm continuum emission of IRAS 4A1 and IRAS 4A2 after primary beam correction. White contour levels are $1,4,9,16,25 \times 3\sigma$, where 1 $\sigma$ corresponds to $0.28\ {\rm mJy~beam^{-1}}$. The color map shows intensity in the unit of ${\rm Jy~beam^{-1}}$. Black plus signs and contours denote the two peak positions and the area used for the two-component 2D Gaussian fitting, respectively. A white filled ellipse at the bottom-left corner denotes the ALMA synthesized beam; $0\farcs26 \times 0\farcs16,\ {\rm P.A.}=25^{\circ}$. The white dashed line separates the extended components of IRAS 4A1 and IRAS 4A2. \label{fig:cont13}} \end{figure} \newpage \subsection{CO ($J = 2 - 1$) outflow} \label{sec:co} Figure \ref{fig:co} shows the CO ($J=2-1$) moment 0 (integrated-intensity) and moment 1 (mean-velocity) maps with different velocity ranges. Figure \ref{fig:co}a clearly shows that each protostar, IRAS 4A1 and IRAS 4A2, drives its own outflow toward the north and the south. 
The two northern lobes are redshifted and bent toward the northeast. The two southern lobes are blueshifted; the one from IRAS 4A1 extends to the southeast, while the one from IRAS 4A2 extends to the southwest. The typical velocity range of the IRAS 4A1 outflow is $\delta V = 0$--$20\ {\rm km~s^{-1}}$ with respect to the systemic velocity of $V_{\rm LSR}=7.35\ {\rm km~s^{-1}}$ (see Section \ref{sec:pv}). The typical velocity range of the IRAS 4A2 outflow is $\delta V = 0$--$15\ {\rm km~s^{-1}}$, which is smaller than that of the IRAS 4A1 outflow. The outflow from IRAS 4A2 shows an S-shaped morphology with two curved lobes elongated along a P.A. = 20$\arcdeg$ direction. The central $\pm 1\arcsec$ region of the S-shaped feature is elongated along the northwest-southeast direction (P.A. = 148\arcdeg). The gap at the center is due to the continuum subtraction effect. \citet[][]{Santangelo2015} reported the S-shaped bending of the IRAS 4A2 outflow at $\pm$4\arcsec from the driving source. This bending is also seen in the blueshifted lobe of our CO map. On the other hand, the S-shaped feature seen in our map in the central 2\arcsec region is not clearly resolved in the images of \citet[][]{Santangelo2015}. At a larger scale of $\pm 20\arcsec$, which is beyond our field of view, the outflow lobes are bent further toward a direction of P.A. = 45$\arcdeg$ \citep[e.g.][]{yildiz2012, choi2005}. Toward the center of IRAS 4A1, CO is observed in absorption against the opaque continuum. The absorption at IRAS 4A1 is also observed in the SO lines presented in Section \ref{sec:so} and in many other lines at 0.85 mm \citep[][]{Sahu2019}. Figure \ref{fig:co}c shows that the high velocity component of the IRAS 4A1 outflow consists of two redshifted components in the northern lobe and two blueshifted components in the southern lobe. The southernmost part of the blueshifted lobe shows an inverted U-shaped feature. 
The emission at the very high velocity (Figure \ref{fig:co}d, from 20 to 50 ${\rm km~s^{-1}}$) is seen only in the northern redshifted lobe of the IRAS 4A1 outflow. The location of this component is slightly east of the emission seen in the low (Figure \ref{fig:co}b, from $-$8 to 22 ${\rm km~s^{-1}}$) and high (Figure \ref{fig:co}c, from $-$14 to $-$8 and from 22 to 27 ${\rm km~s^{-1}}$) velocity ranges, which is consistent with that of the extremely high-velocity peaks (from 41 to 55 ${\rm km~s^{-1}}$) in \citet[][]{Santangelo2015}. There is no blueshifted counterpart of this very high velocity emission in the southern lobe. Although it is not clearly seen in Figure \ref{fig:co}c, the IRAS 4A2 outflow also shows very high velocity components at $\sim \pm 45\ {\rm km~s^{-1}}$ and $\sim \pm 90\ {\rm km~s^{-1}}$. As shown in Figure \ref{fig:COA2HV}a, the spectral line toward the IRAS 4A2 continuum peak consists of five velocity components above the 3 $\sigma$ level. In addition to the central one that is associated with the low velocity component, there are two peaks at $\sim 45\ {\rm km~s^{-1}}$ and $\sim -45\ {\rm km~s^{-1}}$ and two additional peaks at $\sim 95\ {\rm km~s^{-1}}$ and $\sim -60\ {\rm km~s^{-1}}$. These four additional peaks are likely to be the very high velocity components from IRAS 4A2 because there is no spectral line whose frequency corresponds to these emission peaks. The spatial distributions of these extremely high velocity components are shown in Figure \ref{fig:COA2HV}b and Figure \ref{fig:COA2HV}c. The integrated velocity ranges are from $V_{\rm LSR}= -46$ to $-$43 km s$^{-1}$ (blue) and from 42 to 46 km s$^{-1}$ (red) for Figure \ref{fig:COA2HV}b, and from $V_{\rm LSR}= -66$ to $-$63 km s$^{-1}$ (blue) and from 90 to 93 km s$^{-1}$ (red) for Figure \ref{fig:COA2HV}c. The morphology of these compact components is not clearly resolved at the current resolution. \begin{figure*}[ht!] 
\epsscale{0.95} \plotone{co_new.jpg} \caption{Integrated-intensity (moment 0; contours) and mean-velocity (moment 1; color) maps with integrated velocity ranges of (a) $V_{\rm LSR}=-16$ to $64.7\ {\rm km~s^{-1}}$, (b) $V_{\rm LSR}=-8$ to $22\ {\rm km~s^{-1}}$, (c) $V_{\rm LSR}=-14$ to $-8$ and $22$ to $27\ {\rm km~s^{-1}}$, and (d) $V_{\rm LSR}=27$ to $64.7\ {\rm km~s^{-1}}$. The contour levels are $1,2,3,4,\dots \times 3\sigma$. 1 $\sigma$ corresponds to (a) 43.26, (b) 41.9, (c) 21.9, and (d) $28.6\ {\rm mJy~beam^{-1}}~{\rm km~s^{-1}}$. Magenta plus signs denote the continuum peak positions of IRAS 4A1 and IRAS 4A2. Blue-filled ellipses at the bottom-left corners denote the ALMA synthesized beam size: $0\farcs29 \times 0\farcs21,\ {\rm P.A.}=-26^{\circ}$. The green contour in panel (c) denotes the 3 $\sigma$ cutoff in panel (a). We only choose the peaks with counterparts in the SO ($J_N=7_6-6_5$) high velocity moment maps (overlapping with the SO knots; see Figure \ref{fig:so}d). \label{fig:co}} \end{figure*} \begin{figure*}[ht!] \epsscale{1.0} \plotone{COA2HV.jpg} \caption{(a) The spectral line profile of CO $2-1$. A circular area with a radius of $0\farcs25$ centered at the IRAS 4A2 continuum peak is used for this line profile. The red dashed line denotes the $3\sigma$ level of the line profile. Two spectral lines, CO $2-1$ and $^{30}$SiC$_2$, are placed for reference, with their rest frequencies offset by $V_{\rm sys} = 7.35\ {\rm km~s^{-1}}$. The red arrows and green arrows point to the local peaks corresponding to panels (b) and (c), respectively. (b) Integrated-intensity (moment 0; contours) and continuum (color) maps with integrated velocity ranges of $V_{\rm LSR} = -46$ to $-43\ {\rm km~s^{-1}}$ and $V_{\rm LSR} = 42$ to $46\ {\rm km~s^{-1}}$ for the blue and red contours, respectively. (c) Same as panel (b), but with integrated velocity ranges of $V_{\rm LSR} = -66$ to $-63\ {\rm km~s^{-1}}$ and $V_{\rm LSR} = 90$ to $93\ {\rm km~s^{-1}}$ for the blue and red contours, respectively. 
Blue-filled ellipses at the bottom-left corners of panels (b) and (c) denote the ALMA synthesized beam: $0\farcs29 \times 0\farcs21,\ {\rm P.A.}=-26^{\circ}$. \label{fig:COA2HV}} \end{figure*} \clearpage \subsection{SO outflow} \label{sec:so} Figure \ref{fig:so} shows the SO ($J_N=7_6-6_5$) moment 0 and moment 1 maps with different velocity ranges. As seen in the CO emission, the two bipolar outflows driven by the two protostars are clearly traced by the SO ($J_N=7_6-6_5$) emission. Overall features of the SO outflows, such as the outflow directions, typical velocity ranges, and morphology, are similar to those of the CO outflows. As shown in Figure \ref{fig:so}a, the SO emission from the northern lobe of the IRAS 4A1 outflow is enhanced at the western wall, where the redshifted lobes of the two outflows overlap. The S-shaped feature associated with IRAS 4A2 is also seen in the SO emission. These outflow features are also seen in the images of another transition of SO ($J_N = 6_5-5_4$) presented in Figure \ref{fig:app_sol} in Appendix \ref{sec:app_sol}. The spatially extended features of the outflows are better traced by the lower resolution SO ($J_N = 7_6-6_5$) images. As in the case of CO, both SO lines are also observed in absorption toward IRAS 4A1. In the very low velocity range (Figure \ref{fig:so}b, from 4 to 10 ${\rm km~s^{-1}}$), the SO emission from the two outflows shows spatially extended structures; the SO emission in the northern lobe of the IRAS 4A1 outflow extends to the east and fills the lobe. Meanwhile, the S-shaped feature in the IRAS 4A2 outflow is less clear. Instead, the outflow morphology is rather bipolar in this velocity range. Two bright blobs in the blueshifted velocity range, labeled SOA2S1 and SOA2S2 in Figure \ref{fig:cchso}, delineate the eastern and western walls of the southern lobe. 
Although the morphology of the northern lobe is contaminated by the IRAS 4A1 outflow, the redshifted blob (SOA2N) and the emission extending to the north delineate the western wall of the triangle-shaped northern lobe. The eastern wall of this northern lobe is likely to be the emission ridge extending to the northeast, as shown by the dashed line in Figure \ref{fig:cchso}. It should be noted that this very low-velocity component in CO is contaminated by the ambient component, which is spatially extended and resolved out by the interferometer. The morphology of the low velocity components (Figure \ref{fig:so}c) is similar to that of the whole outflow (Figure \ref{fig:so}a), whereas that of the high velocity emission (Figure \ref{fig:so}d) is knotty. The peak positions of the SO knots generally align with the CO high velocity components, with a few exceptions to the west of the components. These exceptions are likely to be associated with the northern lobe of the IRAS 4A2 outflow because they are located west of the CO outflow and in line with the overall IRAS 4A2 outflow extension. There is also an additional component to the south of those knots, which partially coincides with the CO emission in the high velocity range. Unfortunately, the SO emission in the very high velocity ranges was not covered by this observation. \begin{figure*}[ht!] \epsscale{0.83} \plotone{SO.jpg} \caption{Integrated-intensity (moment 0; contours) and mean-velocity (moment 1; color) maps with integrated velocity ranges of (a) $V_{\rm LSR}=-15$ to $31\ {\rm km~s^{-1}}$, (b) $V_{\rm LSR}=4$ to $10\ {\rm km~s^{-1}}$, (c) $V_{\rm LSR}=-8$ to $4$ and $10$ to $22\ {\rm km~s^{-1}}$, and (d) $V_{\rm LSR}=-14$ to $-8$ and $22$ to $27\ {\rm km~s^{-1}}$. The contour levels are $1,4,9,16,\dots \times 3\sigma$ for (a), (b), and (c), while $1,2,3,\dots \times 3\sigma$ for (d). 1 $\sigma$ corresponds to (a) 13.5, (b) 3.9, (c) 17.0, and (d) 6.4 ${\rm mJy~beam^{-1}}~{\rm km~s^{-1}}$. 
Magenta plus signs denote the continuum peak positions of IRAS 4A1 and IRAS 4A2. Blue-filled ellipses at the bottom-left corners denote the ALMA synthesized beam: $0\farcs65 \times 0\farcs35,\ {\rm P.A.}=-28^{\circ}$. The green contour in (d) denotes the 9 $\sigma$ cutoff in the moment 0 map of panel (a). \label{fig:so}} \end{figure*} \newpage \subsection{Column Density and Temperature} \label{sec:anal} To further understand the physical properties of the SO outflows, we derive the excitation temperature and column density of SO using the two SO transitions. The moment 0 map of SO ($J_N=6_5-5_4$) (Figure \ref{fig:app_sol}a) was convolved to the same beam size as that of the SO ($J_N=7_6-6_5$) moment 0 map (Figure \ref{fig:so}). Then, we adopted the optically thin and local thermodynamic equilibrium (LTE) assumptions, and derived the excitation temperature and the column density of SO following the method of \citet[][]{goldsmith1999}. Figure \ref{fig:TNmap}(a) and Figure \ref{fig:TNmap}(b) show the excitation temperature and column density distributions, respectively. On a larger scale, the excitation temperature and column density are higher in the vicinity of IRAS 4A1 and IRAS 4A2, whereas they are lower in the northern and southern outflow lobes. This trend is consistent with that of SO$_2$ derived from the rotational diagram analysis \citep[][]{taquet2020}. On a smaller scale ($4\arcsec$ around IRAS 4A2), the excitation temperature and column density show maxima at the northwest of IRAS 4A2 and at the northern tip of the southern IRAS 4A1 outflow, where the line ratio in Figure \ref{fig:TNmap}(c) shows an enhancement. However, the derived excitation temperature and column density values in these regions are not reliable because of the large uncertainties shown in Figure \ref{fig:TNmap}(d) and Figure \ref{fig:TNmap}(e). \begin{figure*}[ht!] \epsscale{1.0} \plotone{TNmap_thin.jpg} \caption{(a) The excitation temperature distribution. (b) The column density distribution. 
(c) The line ratio of SO ($J_N=7_6-6_5$) over SO ($J_N=6_5-5_4$) emission. The uncertainty estimates for the physical quantities in panels (a), (b), and (c) are shown in panels (d), (e), and (f), respectively. The uncertainties are derived following standard error propagation. \label{fig:TNmap}} \end{figure*} \newpage \subsection{Velocity Structure in the IRAS 4A2 Envelope} \label{sec:pv} As shown in the previous sections, IRAS 4A2 is surrounded by an elongated structure, the northwestern and southeastern edges of which are connected to the bases of the redshifted and blueshifted outflow lobes, respectively. In order to investigate the kinematics of this elongated component, we first determine the systemic velocity of IRAS 4A2 using the line profiles of the two transitions of SO shown in Figure \ref{fig:slp}. The line profiles are derived at the continuum peak of IRAS 4A2 and fitted with a single Gaussian function using the emission above $3\sigma$, where 1$\sigma$ corresponds to $3.6\ {\rm mJy~beam^{-1}}$ and $1.3\ {\rm mJy~beam^{-1}}$ for SO ($J_N=7_6-6_5$) and SO ($J_N=6_5-5_4$), respectively. The line profile shows a dip near $V_{\rm LSR}$ = $7\ {\rm km~s^{-1}}$ due to absorption by the foreground component \citep[][]{su2019}. Hence, we excluded these data from the Gaussian fitting. The peak velocity derived from the SO ($J_N=7_6-6_5$) line profile is $7.30\pm 0.05\ {\rm km~s^{-1}}$, while that from the SO ($J_N=6_5-5_4$) line profile is $7.40\pm 0.04\ {\rm km~s^{-1}}$. Averaging the two derived velocities, the systemic velocity of IRAS 4A2 is determined to be $7.35\ {\rm km~s^{-1}}$. This derived systemic velocity is slightly more redshifted than the previous values of $6.83\ {\rm km~s^{-1}}$ derived from ammonia \citep[][]{Choi2010} and $7.00\ {\rm km~s^{-1}}$ derived from six transitions of H$_2$CO \citep[][]{su2019}. \begin{figure}[ht!] 
\epsscale{0.8} \plotone{spectral.jpg} \caption{(a) Gaussian fitting to the SO ($J_N=7_6-6_5$) line profile derived at the continuum peak of IRAS 4A2. This fitting excludes the data points below 3$\sigma$, where 1$\sigma$ corresponds to $3.6\ {\rm mJy~beam^{-1}}$. (b) Same as panel (a), but fitting to SO ($J_N=6_5-5_4$). This fitting excludes the data points below 3$\sigma$, where 1$\sigma$ corresponds to $1.3\ {\rm mJy~beam^{-1}}$. The yellow data points in the dip around $V_{\rm LSR}=7\ {\rm km~s^{-1}}$, which are affected by absorption by the ambient cloud, are excluded from the Gaussian fitting. \label{fig:slp}} \end{figure} \newpage Figures \ref{fig:pv}a and \ref{fig:pv}b are the integrated-intensity (moment 0) and mean-velocity (moment 1) maps of the SO ($J_N=6_5-5_4$) line emission, centered at the IRAS 4A2 continuum peak position. The higher resolution image of SO ($J_N=6_5-5_4$) reveals an elongated structure around IRAS 4A2 with extensions to the northwest and the southeast. This elongated structure appears to have a double peaked feature with a separation of $\sim 0\farcs 5$ ($\sim 150$ au). This double peaked feature is likely to be caused by the continuum subtraction. We confirmed that the SO moment 0 map shows a single peak before continuum subtraction. This elongated structure shows a significant velocity variation along its major axis; the mean-velocity map shows a red-blue-red-blue feature from the northwest to the southeast. Within the $0\farcs5$ scale, SO shows a velocity gradient from the blueshifted emission in the northwest to the redshifted emission in the southeast, while the emission beyond the $0\farcs5$ scale shows an opposite gradient. The higher transition SO ($J_N=7_6-6_5$) also shows a similar kinematic feature. However, the velocity gradient at the central $0\farcs5$ is not well resolved in this line due to the lower resolution of $0\farcs65$. 
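The fitting procedure described above amounts to a single-Gaussian fit with the absorption-dip channels masked. A minimal sketch on a synthetic profile (the profile, dip width, and noise level below are invented purely for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Single-Gaussian fit with the foreground-absorption dip masked, as in the
# systemic-velocity determination described above. The profile is synthetic.
def gaussian(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

rng = np.random.default_rng(0)
v = np.arange(0.0, 15.0, 0.25)             # velocity axis [km/s]
profile = gaussian(v, 1.0, 7.35, 1.5)      # intrinsic line centred at 7.35
profile[np.abs(v - 7.0) < 0.5] *= 0.3      # mimic the absorption dip near 7 km/s
profile += rng.normal(0.0, 0.01, v.size)   # add noise

mask = np.abs(v - 7.0) >= 0.5              # exclude dip channels from the fit
popt, _ = curve_fit(gaussian, v[mask], profile[mask], p0=[1.0, 7.0, 1.0])
v_sys = popt[1]                            # recovered centroid, close to 7.35
print(round(v_sys, 2))
```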
The CO ($J = 2-1$) moment 0 and moment 1 maps in Figures \ref{fig:pv}e and \ref{fig:pv}f clearly show that the outer parts ($\pm 0.5-1\arcsec$ from the IRAS 4A2 continuum peak) of the elongated structure are connected with the outflow lobes associated with IRAS 4A2, although no apparent boundary exists between the elongated structure and the outflow lobes. In addition, the velocity distribution in the outer region of the elongated structure appears to be smoothly connected to the outflow. This implies that the outflow dominates the velocity distribution in the outer parts. Another kinematic component likely dominates the velocity distribution on smaller scales, because the latter has a velocity gradient opposite to that of the outer parts. This additional kinematic component is also reported by \citet[][]{su2019}. The velocity gradient within the central $0\farcs5$ is likely to be the rotation of the envelope surrounding IRAS 4A2, because the observed velocity gradient from the blueshifted emission in the northwest to the redshifted emission in the southeast is consistent with that of the rotating disk traced by the ammonia emission \citep[][]{Choi2010}. It should be noted that the velocity gradients in the vicinity of IRAS 4A2 observed in NH$_3$ and SO are opposite to that of the circumbinary envelope \citep[][]{Ching2016}. Figure \ref{fig:pv}c (SO $J_N=6_5-5_4$) and Figure \ref{fig:pv}g (CO) are the position-velocity (PV) diagrams along the line with a P.A. of $148.0\arcdeg$, which passes through the continuum peak and the double-peaked feature in the SO ($J_N=6_5-5_4$) moment 0 map. These PV diagrams show two pairs of emission in the diagonal pairs of quadrants. The second and fourth quadrant pair corresponds to the velocity gradient of the inner parts of the elongated structure, while the first and third quadrant pair corresponds to that of the outer parts.
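As a rough consistency check on the rotation interpretation, the observed gradient can be compared with the Keplerian speed expected for the $0.08\ M_\odot$ system of \citet[][]{Choi2010}. A minimal sketch, with radii bracketing the $\sim0\farcs5$ ($\sim150$ au) elongated structure; the chosen radii are illustrative:

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30   # solar mass [kg]
AU = 1.496e11      # astronomical unit [m]

def v_kepler(mass_msun, radius_au):
    """Keplerian orbital speed in km/s at the given radius."""
    return math.sqrt(G * mass_msun * M_SUN / (radius_au * AU)) / 1e3

# 0.08 Msun central mass (Choi et al. 2010); radii of 75 and 150 au.
for r in (75.0, 150.0):
    print(f"v_kep({r:.0f} au) = {v_kepler(0.08, r):.2f} km/s")
```

The resulting speeds are below $1\ {\rm km~s^{-1}}$, i.e. of the same order as the small offsets from the systemic velocity seen in the inner part of the moment 1 map.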
We interpret these two pairs of velocity components as the combination of two different motions: (1) the asymmetric double-peaked pair in the second and fourth quadrants represents the rotation of the elongated structure, and (2) the fainter linear pair in the first and third quadrants represents the outflow motion. Although the velocity gradient of the inner part of the elongated structure is interpreted as rotation, there are several differences from the previous NH$_3$ observations \citep{Choi2010}. First, the position angle of the major axis of the elongated structure, 149.0$^{\circ}$, is different from that of the NH$_3$ disk, 108.9$^{\circ}$. Second, the SO PV diagram suffers from the continuum subtraction near the central protostar. Third, the emission at low velocities suffers from the foreground absorption \citep[][]{su2019} and from the spatial filtering of the interferometer. Similarly, the CO PV diagram shows no emission at low velocities due to the filtering effect of the interferometer. Figures \ref{fig:pv}d and \ref{fig:pv}h compare the PV diagrams of SO and CO (colour) with that of NH$_3$ (contours), which is interpreted as the Keplerian rotation of a circumstellar disk \citep{Choi2010}. The PV cuts of SO and CO are the same as that of NH$_3$, $108.9\arcdeg$, which is the major axis of the ammonia moment 0 map. The PV diagrams of SO and CO generally agree with that of NH$_3$, although the rotation curve is not well traced in SO and CO due to the missing low velocity components. \begin{figure*}[ht!] \epsscale{1.0} \plotone{eight_block.jpg} \caption{(a) Integrated intensity map of SO ($J_N=6_5-5_4$). The integrated velocity range is from 2.4 to $13.5\ {\rm km~s^{-1}}$. Contour levels are $1,2,3,4,\dots \times 3\sigma$, where 1$\sigma$ corresponds to $14.2\ {\rm mJy~beam^{-1}}~{\rm km~s^{-1}}$. The continuum peak is located at the center.
The blue and green lines are the cut lines through the center for the position-velocity diagrams of SO (length: $2 \arcsec$, P.A.: $148\arcdeg$) and ammonia (length: $1 \arcsec$, P.A.: $109\arcdeg$), respectively. (b) Same as panel (a) but for the mean velocity map of SO ($J_N=6_5-5_4$). (c) Position-velocity diagram of SO ($J_N=6_5-5_4$) (colour) derived along the major axis of the SO emission (blue cut in panel (a)) and of NH$_3$ ($N_J=2_2-2_2$, $F_1=3-3$ averaged with $N_J=3_3-3_3$, $F_1=4-4$) (contour) derived along the major axis of the ammonia emission (green cut in panel (a)). The contour levels are $3,4,5,6,7 \dots \times 1\sigma$, where $1\sigma$ is 10.3 K \citep[][]{Choi2010}. (d) Same as (c), but with the SO ($J_N=6_5-5_4$) PV diagram derived along the ammonia cut. (e) Integrated intensity map of CO ($J = 2-1$). The integrated velocity range is from 2.4 to $13.5\ {\rm km~s^{-1}}$. Contour levels are $1,2,3,4,\dots \times 3\sigma$, where 1$\sigma$ corresponds to $14.3\ {\rm mJy~beam^{-1}}~{\rm km~s^{-1}}$. The continuum peak is located at the center. (f) Same as panel (e) but for the mean velocity map of CO ($J = 2-1$). (g) Position-velocity diagram of CO ($J = 2-1$) (colour) and NH$_3$ (contour) derived along the blue cut and the green cut in panel (a), respectively. (h) Same as (g), but with the CO ($J = 2-1$) PV diagram derived along the ammonia cut. \label{fig:pv}} \end{figure*} \newpage \clearpage \subsection{CCH ($N=3-2$) lines} \label{sec:cch} Figure \ref{fig:cch}a shows the integrated CCH components overlaid on the {\it Spitzer} $4.5\ \micron$ map. To avoid the overlap between two hyperfine structures of CCH, we integrated the emission from 0.7 to $5.7\ {\rm km~s^{-1}}$ of the $F =$3--2 line for the blueshifted component, and that from 6.3 to $10.3\ {\rm km~s^{-1}}$ of the $F =$4--4 line for the redshifted component. The CCH emission also shows the redshifted component to the north and the blueshifted component to the south of the two protostars.
However, the spatial distribution of CCH is significantly different from those of CO and SO; the CCH emission clearly shows two bright blobs associated with IRAS 4A2 and two faint blobs to the east of the two bright blobs, associated with IRAS 4A1. The blobs associated with IRAS 4A2 have larger sizes and higher peak intensities than those of the IRAS 4A1 blobs. The typical velocities of the IRAS 4A1 and IRAS 4A2 blobs are $\sim \pm 3\ {\rm km~s^{-1}}$ and $\sim \pm 2\ {\rm km~s^{-1}}$ with respect to the systemic velocities of IRAS 4A1 (6.86 ${\rm km~s^{-1}}$, \citet[][]{su2019}) and IRAS 4A2 (7.35 ${\rm km~s^{-1}}$), respectively. The two IRAS 4A2 blobs exhibit clear bipolarity with an extension of P.A.$=23\arcdeg$. This position angle is different from the extension of the CO and SO outflows, $\sim$0$\arcdeg$, in the central 3$\arcsec$ region of IRAS 4A2. The extension of the CCH outflow is more nearly perpendicular to the major axis of the SO elongated structure around IRAS 4A2 than the CO and SO outflows are (i.e. compare the CCH and CO moment 0 maps in Figure \ref{fig:cch}a). To the north and south of these blobs, the CCH emission shows U-shaped structures with their tips pointing away from the IRAS 4A1 and IRAS 4A2 continuum peaks. These U-shaped components coincide with the bright components in the {\it Spitzer} $4.5\ \micron$ emission. The $4.5\ \micron$ emission represents H$_2$ emission, which usually traces shocked regions \citep[e.g.][]{Santangelo2015}. This indicates that the U-shaped feature in CCH might also trace a shocked region. Since the bright components in the {\it Spitzer} $4.5\ \micron$ emission are assumed to be associated with IRAS 4A1, the U-shaped feature in the CCH emission is likely associated with IRAS 4A1 as well. Figures \ref{fig:cch}b and \ref{fig:cch}c show that each blob except the south-western one is connected with a fainter extension toward the north and the south.
The fainter extension consists of a western edge and an eastern edge, forming a V-shaped structure with its tip pointing toward the IRAS 4A2 continuum peak. The western edge of the V-shaped structure is more redshifted than the eastern edge. The southern blob of IRAS 4A2 also shows the same velocity gradient. This velocity gradient is identical to the west-to-east velocity gradient in the circumbinary envelope and to the velocity gradient in the very low velocity SO component. \begin{figure*}[ht!] \epsscale{1.0} \plotone{SPITZER_CCH.jpg} \caption{ (a) Integrated CCH red- and blueshifted components (colored contours) overlaid on a {\it Spitzer} IRAC map at 4.5 $\micron$ (grey scale). The contour levels are $1,2,3,4,\dots \times 3\sigma$, where $1\sigma$ corresponds to 2.39 (red) and 3.34 (blue) ${\rm mJy~beam^{-1}}~{\rm km~s^{-1}}$. The green contour denotes the 9 $\sigma$ cutoff of the CO moment 0 map in Figure \ref{fig:co}a. The continuum peak positions of IRAS 4A1 and IRAS 4A2 are denoted by black crosses. (b) Moment 0 (contours) and moment 1 (colour) maps for the redshifted component. Contours are the same as in panel (a). The blue-filled ellipse at the bottom-left corner denotes the ALMA synthesized beam: $0\farcs65 \times 0\farcs35,\ {\rm P.A.}=-28^{\circ}$. The black and grey dotted lines denote the V-shaped and U-shaped structures, respectively. (c) Same as panel (b) but for the blueshifted component. \label{fig:cch}} \end{figure*} \newpage While the overall CO and SO emission (Figure \ref{fig:co}a and Figure \ref{fig:so}a) shows an asymmetric S-shape in the IRAS 4A2 outflow, the CCH outflow shows a symmetric feature that extends along the northeast-southwest direction. However, in IRAS 4A2, the SO emission at very low velocities shows a feature similar to the CCH outflow. Figure \ref{fig:cchso} shows the CCH moment 0 map (contours) overlaid on the very low velocity SO moment 0 map (colour) with the same integrated velocity range.
The CCH and the very low velocity SO emission extend in the same northeast-to-southwest direction, while both lines show blob components near the IRAS 4A2 continuum peak. The northern CCH blob of IRAS 4A2 (CCHA2N in Figure \ref{fig:cchso}) overlaps with the northern SO blob (SOA2N) at very low velocities, with the intensity peak of the CCH blob slightly shifted to the west of the SO blob. Meanwhile, the southern CCH blob of IRAS 4A2 (CCHA2S) lies to the north of the two southern SO blobs (SOA2S1 and SOA2S2). The V-shaped component in the northern CCH outflow traces the two edges of the triangular component in the northern SO outflow. The northeastern edge of the V-shape is unclear due to the overlap of the IRAS 4A1 and IRAS 4A2 lobes. Accordingly, the CCH and low velocity SO emission is enhanced in the same region of the northern IRAS 4A2 outflow lobe. Assuming that the very low velocity SO outflow shares the same P.A. as the CCH outflow in IRAS 4A2, the very low velocity SO outflow is elongated along the same direction as the CO and SiO outflows \citep[][]{Ching2016, Santangelo2015}, while it is misaligned with the high velocity SO outflow. Additionally, the two fainter blobs in CCH (CCHA1N and CCHA1S) associated with IRAS 4A1 are also traced by the low velocity SO emission (SOA1N1 and SOA1S). There is one additional faint blob (SOA1N2) in the SO moment 0 map near the continuum peak of IRAS 4A1 that is not seen in the CCH moment 0 map. \begin{figure}[ht!] \epsscale{1.0} \plotone{SOCCH.jpg} \caption{CCH integrated intensity (contours) overlaid on the SO very low velocity component (colour). To avoid the overlap between two hyperfine structures of CCH, the emission from $5.7\ {\rm km~s^{-1}}$ to $6.3\ {\rm km~s^{-1}}$ is removed. The contour levels are $1,2,3,4,\dots \times 3\sigma$, where $1\sigma$ corresponds to 5.0 ${\rm mJy~beam^{-1}}~{\rm km~s^{-1}}$. The continuum peak positions of IRAS 4A1 and IRAS 4A2 are denoted by black crosses.
The white and black labels denote the names of the blobs identified in the CCH and SO emission, respectively. The blue-filled ellipse at the bottom-left corner denotes the ALMA synthesized beam: $0\farcs65 \times 0\farcs35,\ {\rm P.A.}=-28^{\circ}$. \label{fig:cchso}} \end{figure} \newpage \subsection{Main Results About the IRAS 4A2 Outflow-Envelope System} Here we summarize the main results for IRAS 4A2 from the high resolution observations of the CO and SO line emission. \begin{enumerate} \item IRAS 4A2 drives its own bipolar outflow, redshifted toward the north and blueshifted toward the south. \item The IRAS 4A2 outflow shows different morphologies in different velocity ranges: the very low velocity outflow shows a symmetric cone shape, while the low velocity outflow shows an asymmetric S-shape. \item The low velocity asymmetric S-shaped outflow consists of two curved outflow lobes extending along a P.A. of 20$\arcdeg$ and an elongated structure along a P.A. of 148$\arcdeg$ at the center of the S-shape. \item Extremely high velocity components at the base of the IRAS 4A2 outflow are observed in the CO emission. \item The velocity gradient within $\pm 0\farcs5$ of the elongated structure, which is considered to be the envelope of IRAS 4A2, is opposite to that of the circumbinary envelope. \item The PV diagrams of the CO and SO line emission show two pairs of velocity components, which correspond to the rotation of the elongated structure and to the outflow motion. \end{enumerate} In order to explain these observational features, we compare our results with models in Section \ref{sec:mech}. \section{Possible Mechanisms for the S-shaped Outflow Associated with IRAS 4A2} \label{sec:mech} In this section, we focus on the S-shaped outflow associated with IRAS 4A2 and consider three possible mechanisms: rotation of the outflow, outflow precession caused by a close binary, and a misalignment between the B-field and the initial rotation axis of the cloud.
\subsection{Rotation of the Outflow} \label{sec:outrot} A rotating outflow could be the origin of the S-shaped outflow. The rotation of the outflow lobes shifts the radial velocity of one side of each outflow cavity close to the systemic velocity, so that this side can be resolved out by the interferometer, leaving only the opposite side of the cavity. Since this resolving out would occur on different sides of the blueshifted and redshifted outflow cavities, the observed outflow would show an asymmetric S-shaped morphology. In such a case, the outflow is expected to rotate in the same direction as the protostellar disk and to have a specific angular momentum comparable to that of the disk. However, this scenario is unlikely, because the required outflow rotation, with an approaching side in the east and a receding side in the west, is opposite to the rotation of the disk traced by ammonia \citep[][]{Choi2010}. \subsection{Outflow Precession Caused by a Close Binary} \label{sec:prec} Close binaries can produce curved outflows \citep[e.g.][]{bate2000,hirano2010,kwon2015} similar to the S-shaped outflow associated with IRAS 4A2. So far, none of the existing continuum images shows a signature of a binary in IRAS 4A2. The upper limit of the projected binary separation is estimated from the beam size of the highest resolution image of \citet[][]{tobin2016}, i.e. 0\farcs{075} $\times$ 0\farcs{05} (22.4 au $\times$ 14.7 au). We then modeled the outflows launched from precessing binaries with different separations (from 5.9 au ($0\farcs02$) to 22.4 au ($0\farcs075$)). We found that models with separations larger than 12 au ($0\farcs04$) fit the observed S-shaped outflow, while those with smaller separations start to deviate from the observed S-shape. Here, we present a successful model with a separation of 14.7 au and a failed model with a separation of 8.8 au.
In the models, the precession period is 20 times the orbital period of the binary \citep[][]{bate2000}. Assuming Keplerian motion of the binary, the precession period is derived from $T_{p} \sim 20\times T_{o} = 20\times\sqrt{\frac{4\pi^2D^3}{G M_{A2}}}$, where $T_p$ is the precession period, $T_o$ is the orbital period, $D$ is the separation of the close binary, $G$ is the gravitational constant, and $M_{A2}$ is the total mass of the binary system, which is estimated to be $0.08\ M_\odot$ by \citet[][]{Choi2010}. The derived precession period is 4000 years. We build the precession models in a 3D spherical coordinate system, in which the precession axes of the models coincide with the zenith direction. The angle $\theta$ is the polar angle with respect to the zenith direction, and $\phi$ is the azimuthal angle measured counter-clockwise when viewed from the positive zenith direction. To model the outflow trajectory, two outflow velocity patterns are considered in the precession models. One is a constant outflow velocity at every position along the outflow \citep[][]{kwon2015}. The free parameters fitted in this model are the viewing angle, the half opening angle of the precession ($\alpha$), and the outflow axial velocity ($v$): \begin{equation} \label{eq:3} \begin{Bmatrix} r=vt\\ \theta=\alpha\\ \phi = \frac{Nt}{T_{p}}\times 2\pi \end{Bmatrix} \end{equation} where $N$ is $+$1 for counter-clockwise rotation and $-$1 for clockwise rotation of the precessing binary, and $t$ is the traveling time of the outflow. We adopt a model timescale of 6000 years, which is the outflow dynamical timescale estimated by \citet[][]{yildiz2012}. The other outflow velocity pattern is an outflow whose velocity increases as a function of distance \citep[e.g. the ``wind driven model'',][]{lee2000}.
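The precession period adopted above can be checked numerically from Kepler's third law. A minimal sketch in SI units; the factor of 20 between precession and orbital periods follows \citet[][]{bate2000}:

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30   # solar mass [kg]
AU = 1.496e11      # astronomical unit [m]
YR = 3.156e7       # year [s]

def precession_period_yr(sep_au, mass_msun, ratio=20.0):
    """T_p ~ 20 T_o, with T_o the Keplerian orbital period of the binary."""
    t_orb = 2.0 * math.pi * math.sqrt((sep_au * AU) ** 3
                                      / (G * mass_msun * M_SUN))
    return ratio * t_orb / YR

# Binary separation of 14.7 au and total mass 0.08 Msun (Choi et al. 2010).
print(f"T_p ~ {precession_period_yr(14.7, 0.08):.0f} yr")
```

For the 14.7 au separation and $M_{A2}=0.08\ M_\odot$, this gives $T_p\approx4000$ yr, consistent with the value quoted above.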
In this model, the accelerating outflow is described by $\frac{dv_z}{dz}=v_0$, where $v_0$ is estimated from the following boundary conditions: (1) the velocity is $0\ {\rm km~s^{-1}}$ at the outflow launching point ($r=0\arcsec$), and (2) the highest radial velocity of the IRAS 4A2 outflow, $V_{\rm LOS}\sim5.0\ {\rm km~s^{-1}}$, is reached at $\sim7\arcsec$ from the outflow launching point. The highest 3D velocity of the outflow needs to be corrected for the outflow inclination. Since \citet[][]{yildiz2012} and \citet[][]{choi2006} suggested different inclination angles of $45\arcdeg - 60\arcdeg$ and $79.3\arcdeg$ with respect to the line of sight, respectively, we constructed models for both cases, with $52.5\arcdeg$ and $79.3\arcdeg$. The values of $v_0$ obtained from $V_{\rm LOS}/\cos(52.5\arcdeg)=8.2\ {\rm km~s^{-1}}$ and $V_{\rm LOS}/\cos(79.3\arcdeg)=27\ {\rm km~s^{-1}}$ are $v_0=4.0\times10^{-3}\ {\rm km~s^{-1}~au^{-1}}$ and $v_0=13.1\times10^{-3}\ {\rm km~s^{-1}~au^{-1}}$, respectively. Solving $\frac{dv_z}{dz}=v_0$, we obtain the outflow velocity $v_z$ as a function of the traveling time $t$, $v_z=Ce^{v_0t}$, and the traveling distance $r$ as a function of time, $r=\frac{C}{v_0}e^{v_0t}-\frac{C}{v_0}$ (assuming $r=0$ at $t=0$), where $C$ is a constant. For the accelerating outflow model, the free parameters to be fitted are the viewing angle, the half opening angle of the precession, and the constant $C$: \begin{equation} \label{eq:4} \begin{Bmatrix} r=\frac{C}{v_0}e^{v_0t}-\frac{C}{v_0}\\ \theta=\alpha\\ \phi = \frac{Nt}{T_{p}}\times 2\pi \end{Bmatrix} \end{equation} The 3D model is then projected onto the plane of the sky. The free parameters are fitted so that the model trajectory aligns with the intensity peak positions of the CO moment 0 map, and the model line-of-sight velocity matches that of the CO emission at the peak positions.
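The two $v_0$ values above can be reproduced with a short numeric check. A distance of $\sim300$ pc is assumed here (implied by the $0\farcs5\approx150$ au conversion used earlier), so the results agree with the quoted values only to within this assumption:

```python
import math

def v0_from_inclination(v_los_kms, incl_deg, r_arcsec, dist_pc=300.0):
    """Velocity gradient v0 = v_max / r, where v_max is the
    inclination-corrected line-of-sight velocity and r the projected
    distance converted to au (small-angle: 1 arcsec ~ dist_pc au)."""
    v_max = v_los_kms / math.cos(math.radians(incl_deg))
    r_au = r_arcsec * dist_pc
    return v_max / r_au            # km/s per au

# V_LOS ~ 5.0 km/s at ~7 arcsec, for the two assumed inclinations.
for incl in (52.5, 79.3):
    v0 = v0_from_inclination(5.0, incl, 7.0)
    print(f"i = {incl} deg: v0 = {v0:.1e} km/s/au")
```

The small differences from the quoted $4.0\times10^{-3}$ and $13.1\times10^{-3}\ {\rm km~s^{-1}~au^{-1}}$ reflect the assumed distance and the rounding of $V_{\rm LOS}$.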
The intensity peak positions are estimated from Gaussian fits to each row of pixels in the moment 0 map. We take the peak positions of the fitted Gaussians as the outflow intensity peaks, and the $\sigma$ of the Gaussians as the error of the peak positions. The emission associated with the IRAS 4A1 outflow is masked out manually during the fitting. The value and error of the line-of-sight velocity are the mean velocity (moment 1) and the velocity dispersion (moment 2) of the CO emission, respectively. Figure \ref{fig:precession_1} shows the fitting results of the precession models for binary separations of 14.7 au and 8.8 au. The model with a separation of 14.7 au can explain the overall morphology of the S-shaped outflow for both the constant velocity and accelerating outflows. On the other hand, in the models with a narrower separation of 8.8 au, the model trajectories start to deviate from the observed S-shape. The observed velocity pattern is better explained by the accelerating outflow models in both the 14.7 au and 8.8 au cases. It should be noted that no precession model could reproduce the red-blue-red-blue feature in the vicinity ($\sim\pm2\arcsec$) of IRAS 4A2. The precession model can reproduce the overall S-shaped outflow morphology if the binary separation is comparable to the beam size of the highest resolution image, i.e. 14.7 au, and the outflow velocity increases as it travels. However, the precession models cannot explain the small scale kinematics. In order to search for the signature of a binary in IRAS 4A2, higher resolution observations are necessary. Additionally, perturbations due to the orbital motion of the binary are not considered in our precession model, because these perturbations should occur on a scale smaller than 100 au \citep[$\sim 0\farcs3$,][]{raga2009}, which is smaller than the beam size of our observation. \begin{figure}[ht!]
\epsscale{1.0} \plotone{precession_1.jpg} \caption{(a), (c): CO moment 0 maps (integrated intensity, colour) overlaid with the outflow precession models (red, blue and green lines). Panels (a) and (c) show the best fit outflow curves for binary separations of 14.7 and 8.8 au, respectively. The red and blue lines are the best fit accelerating outflow curves for outflow inclinations of 52.5$\arcdeg$ and 79.3$\arcdeg$ with respect to the line of sight, respectively. The green lines are the best fit constant velocity outflow curves. The dotted lines and the white regions are the intensity peaks of the CO emission from the IRAS 4A2 outflow and their errors. (b), (d): Mean velocity (moment 1, grey dots) and velocity dispersion (moment 2, grey region) overlaid with the line-of-sight velocity of the precession models (red, blue and green lines). Panels (b) and (d) correspond to binary separations of 14.7 and 8.8 au, respectively. The red, blue and green lines are the same as those in panels (a) and (c), but show the line-of-sight velocity of the precession models. \label{fig:precession_1}} \end{figure} \newpage \subsection{Magnetic Field Misalignment Model} \label{sec:analysis} Next, we examine the MHD simulations of \citet[][]{Hirano2019, machida2020} with a misaligned B-field and initial cloud rotation axis, whose resulting density distribution can potentially reproduce the S-shaped feature. \subsubsection{The Numerical Simulation} \label{sec:bmam} Previous observations of IRAS 4A2 show that the position angles of the large scale magnetic field \citep[$\sim45\arcdeg$,][]{Matthews2009, attard2009, hull2014} and the moderate scale magnetic field \citep[$61\arcdeg$,][]{Ching2016, hull2014} are not aligned with the rotation axis of the Keplerian disk \citep[$19\arcdeg$, ][]{Choi2010}.
We therefore adopted the resistive MHD simulations of \citet[][]{Hirano2019} and \citet[][]{machida2020}, which follow the evolution of a protostellar core with an angular momentum vector tilted from the magnetic field. This misalignment results in an S-shaped density distribution, because the collapse on large scales is dominated by the magnetic field, while that on small scales is dominated by the rotation. The initial magnetic field of this simulation is set along the $z$ axis, and the initial rotation axis is inclined by 30$\arcdeg$, 45$\arcdeg$, 60$\arcdeg$, and 80$\arcdeg$ with respect to the $z$ axis on the $x$-$z$ plane. The coordinate system of this simulation is a right-handed Cartesian system. We adopted the same initial conditions as the simulations in those papers. The initial temperature and number density are 10 K and $1.5\times10^{4}\ {\rm cm^{-3}}$, respectively. The snapshot of the model we used is taken 500 years after the collapsing center reaches $n_{\rm H} = 10^{18}\ {\rm cm^{-3}}$. Other details and simulation results are given in \citet[][]{Hirano2019}. In order to compare the results of the MHD simulations with the observed SO image, we utilize the RADMC-3D radiative transfer code \citep[][]{radmc3d2012}. We set the SO abundance relative to H$_2$ to $10^{-9}$ \citep[][]{Lee2010}. A continuum image with a bandwidth of 4 GHz centered at the rest frequency of SO ($J_N=6_5-5_4$) (219.949442 GHz) is also created to conduct the continuum subtraction of the line image. Then, using the task vis\_sample\footnote{https://github.com/AstroChem/vis\_sample.git}, we simulate the observation of the unsubtracted line image with the same UV coverage as our observation. We fine-tune the initial rotation axis and the viewing angle to identify the synthetic result that is most similar to the observed image.
The initial rotation axis is inclined by 60$\arcdeg$ with respect to the $z$ axis, and the viewing direction is inclined by 121$\arcdeg$ from the $z$ axis and rotated clockwise by 68$\arcdeg$ from the $x$ axis, with a position angle of 35$\arcdeg$. Last, the continuum is subtracted from the visibilities created by the task vis\_sample using the CASA task {\it uvcontsub}, and the result is CLEANed with the task {\it tclean} using natural weighting. \subsubsection{Simulation Results} \label{sec:simrslt} The results of the MHD simulation are presented in \citet[][]{Hirano2019} and \citet[][]{machida2020}. The simulated density profile shows that the normal of the inner disk and the outflow launched from the inner disk are roughly parallel to the initial rotation axis, while the outer flattened envelope is warped. The outflow launched from the inner disk overlaps with part of the large envelope, and one side of each outflow lobe is enhanced as well. The panels in Figure \ref{fig:smu} present the misalignment model after solving the radiative transfer (first row) and after the synthetic observation (second row). The first, second, and third columns are the moment 0 maps, moment 1 maps, and PV diagrams, respectively. All PV diagrams are derived along the cut through the center of the moment 0 and 1 maps with a P.A. of 110$\arcdeg$, which is the major axis of the remaining emission from the flattened envelope in the moment 0 map after the synthetic observation (see below). The simulation after solving the radiative transfer shows an asymmetric feature in the moment 0 map (Figure \ref{fig:smu}a). This feature consists of a $\sim5 \arcsec$ scale elliptical component and a $\sim15 \arcsec$ scale S-shaped component, representing the flattened envelope and the outflow wall, respectively. The moment 1 map (Figure \ref{fig:smu}b) shows that the northern outflow is redshifted and the southern outflow is blueshifted. The flattened envelope has a velocity gradient from west (blueshifted) to east (redshifted).
The PV diagram shows emission in all four quadrants. The emission in the second and fourth quadrants corresponds to the rotation of the flattened envelope, and that in the first and third quadrants corresponds to the outflow. The center of the PV diagram shows absorption due to the effects of the continuum subtraction. After the synthetic observation, the moment 0 map (Figure \ref{fig:smu}d) shows a $\sim15 \arcsec$ scale asymmetric S-shaped feature. Some of the low velocity emission in the extended flattened envelope is filtered out, and only the emission from the compact region remains. The compact component shows double peaks at the northwest and southeast with a fainter dip at its center. There are also two spikes attached to the northeast and southwest of the compact component. The moment 1 map (Figure \ref{fig:smu}e) shows a red-blue-red-blue feature along the S-shaped feature from the northwest to the southeast across the central source. The PV diagram (Figure \ref{fig:smu}f) also shows two pairs of emission in the diagonal pairs of quadrants. The center of the PV diagram shows absorption, which also appears in the PV diagram before the synthetic observation. The simulation successfully reproduces three main features of the observation. First, the S-shaped feature is reproduced by the intensity enhancement on the western side of the northern lobe and the eastern side of the southern lobe. Second, the mean velocity in the simulated moment 1 map shows the same pattern as that in the observed moment 1 map. Third, the simulated PV diagram also shows the rotation and outflow components. However, the simulation result also differs from the observation: the simulated moment 0 map shows spike-like components, which correspond to the bases of the eastern and western walls of the redshifted and blueshifted cavities, respectively. In addition, the elongation of the flattened envelope is not aligned with the outflow.
The differences above could arise from the following simplifying assumptions. Our simulation assumes the SO molecule to be uniformly distributed, while in reality the SO abundance could be enhanced under certain conditions, such as in outflow shells \citep[][]{aso2018}, jets \citep[][]{lee2010}, and the outer edges of disks \citep{ohashi2014}. Because the temperature dependence of the SO abundance in our simulation is oversimplified, some additional features could appear in the simulation. In addition, our simulation does not include the binary companion or the foreground ambient cloud, since it is too complicated to take these components into account. \begin{figure*}[ht!] \epsscale{1.2} \plotone{sim01.jpg} \caption{Results after solving the radiative transfer (upper row) compared with results after performing the synthetic observation (lower row), where the left, middle, and right columns are the moment 0 maps, moment 1 maps, and position-velocity diagrams, respectively. The integrated velocity ranges are shown above the panels. The contour levels are $1,2,3,4,\dots \times 3\sigma$, where 1$\sigma$ corresponds to $3.9\times 10^{-4}\ $Jy ${\rm km~s^{-1}}$ in panels (a) and (b), and $9.4\times 10^{-3}\ {\rm Jy~beam^{-1}}~{\rm km~s^{-1}}$ in panels (d) and (e). The position-velocity diagrams are derived along the cut centered at the continuum peak with a position angle of $110\arcdeg$ and a length of $3\farcs0$; the colour bars are adjusted to highlight the outflow components. \label{fig:smu}} \end{figure*} \newpage In order to explore the outflow morphology in different velocity ranges, such as the symmetric morphology in the very low velocity range and the asymmetric morphology in the higher velocity range of the SO emission, we compare the simulated moment 0 maps with different integrated velocity ranges.
Figures \ref{fig:smu_comp}a and \ref{fig:smu_comp}b show the moment 0 maps in a lower and a higher velocity range, where the integrated velocity ranges are $V = 0.0$ to $\pm 2.0\ {\rm km~s^{-1}}$ and $\pm 2.0$ to $\pm 10.0\ {\rm km~s^{-1}}$, respectively. The lower velocity component shows an X-shape on a $15\arcsec$ scale, which is more extended and less asymmetric than the higher velocity S-shaped component. This difference between the high and low velocity components is similar to that observed in the SO line, i.e. the outflow is symmetric rather than S-shaped at very low velocities (Figure \ref{fig:so}b), while the S-shape is prominent at low velocities (Figure \ref{fig:so}c). One explanation for this difference is that the high velocity outflow is launched from the outer edge of the Keplerian disk (the inner part of the flattened envelope), while the low velocity outflow is launched from the outer part of the flattened envelope. Since the flattened envelope is warped, the Keplerian disk and the outer part of the flattened envelope are misaligned. As a result, the high velocity outflow is misaligned with the low velocity outflow \citep[][]{Hirano2019}. \begin{figure}[ht!] \epsscale{1.0} \plotone{sim_vel_comp.jpg} \caption{Moment 0 maps with integrated velocity ranges of (a) $0.0$ to $\pm 2.0\ {\rm km~s^{-1}}$ and (b) $\pm 2.0$ to $\pm 10.0\ {\rm km~s^{-1}}$. The contour levels are $1,2,3,4,\dots \times 3\sigma$, where 1$\sigma$ corresponds to (a) $4.5\times 10^{-3}$ and (b) $5.4\times 10^{-3}\ {\rm Jy~beam^{-1}}~{\rm km~s^{-1}}$. \label{fig:smu_comp}} \end{figure} \newpage \section{DISCUSSION} \label{sec:discussion} \subsection{A Schematic View of the Magnetic Field Misalignment Model} \label{sec:bend} The misalignment model can overall reproduce an outflow misaligned with the flattened envelope. Figure \ref{fig:sch} is a schematic view of this model.
The schematic illustrates that while the inner velocity gradient mostly depends on the initial rotation, the outer part is shaped more by the magnetic field (or Lorentz force). This results in two features. First, it is more efficient for material to fall along the magnetic field than across it, so the flattened envelope is warped by the different orientations of the rotation-dominated and B-field-dominated regions. The outer parts of the envelope mainly flatten in the direction perpendicular to the magnetic field, while the central part of the envelope, which also corresponds to the outer edge of the Keplerian disk, contracts along the initial rotation vector. Consequently, the velocity gradient on large scales is misaligned with that on smaller scales, reflecting the misaligned orientations of the large- and small-scale structures. Second, the outflow material launched from different parts of the disk shows different orientations. The high velocity outflow launched from the outer edge of the Keplerian disk is ejected along the disk rotation axis, while the slow extended outflow launched from the outer part of the flattened envelope is ejected along a direction between the B-field and the initial rotation vector, aligned with the B-field-flattened envelope. As a result, the whole flattened envelope shows a warped structure and the two outflow components extend in different directions. The strong high velocity outflow enhances the density of one edge of the extended outflow, which thus appears as an S-shaped outflow misaligned with the flattened envelope rotation. Accordingly, we expect to see two features in this model. One is that the velocity gradient of the Keplerian disk is misaligned with the larger scale velocity gradient. This is seen in the ammonia moment 0 maps, which show that the major axis of the small scale Keplerian disk differs from that of the large scale SO flattened envelope.
The other is that the outflow launched from the outer edge of the Keplerian disk is faster and more closely aligned with the rotation axis of the Keplerian disk than that from the flattened envelope. This is shown in both the observations and the simulation when we compare the moment 0 maps with the high and the low integrated velocities (Figure \ref{fig:so}). According to \citet[][]{Hirano2019}, the high speed outflow in the simulation is collimated and shows knot-like components. While the collimated outflow is not obvious in our SO line observation, the CO observation shows extremely high velocity components at the base of the outflow, which could correspond to the high speed outflow in the simulation. In addition, MHD simulations \citep[e.g.][]{Hirano2019, ciardi2010, matsumoto2004} suggest that a misaligned configuration of the B-field and the cloud angular momentum vector could be the origin of precession, which changes the direction of the jet/outflow as a function of time. This could further explain the larger scale S-shaped pattern from P.A.\ $\sim20\arcdeg$ at $r<20\arcsec$ to P.A.\ $\sim45\arcdeg$ at $r>20\arcsec$ \citep[e.g.][]{yildiz2012,choi2005}. While we explain the S-shaped outflow with a misaligned B-field and cloud angular momentum vector, the origin of the misalignment is still unclear. A possible mechanism for producing such a misalignment is turbulent fragmentation \citep[][]{goodwin2004I, goodwin2004II, goodwin2006}. In this mechanism, the angular momentum of local parts of the IRAS 4A cloud is independent of the large scale B-field. As a result, when the large cloud collapses and fragments into IRAS 4A1 and IRAS 4A2, the angular momentum vector of IRAS 4A2 could be misaligned from the large scale B-field, which then induces subsequent features such as the S-shaped outflow. \begin{figure*}[ht!] \epsscale{1.2} \plotone{sch04.jpg} \caption{ A schematic view of the B-field misalignment model. The red line represents the initial B-field direction \cite[][]{Ching2016}.
The dark blue components show the central rotation-dominated area and a collimated high speed outflow ejected from the center. The faint blue components show the outer B-field-dominated area of the flattened envelope and an extended low speed outflow ejected from the outer part of the flattened envelope. The black shaded regions denote the interaction area of the low and high speed outflows. The interaction between the outflows with different ejection angles results in an edge-enhanced outflow wall (black shaded), which apparently looks like an outflow misaligned with the normal of the flattened envelope. The red and blue arrows in the bottom left circle denote the initial magnetic field and the initial angular momentum vector of the simulation. The initial angular momentum vector is assumed to be perpendicular to the ammonia disk elongation. \label{fig:sch}} \end{figure*} \newpage \subsection{Possible Mechanisms of the Opposite Velocity Gradient} \label{sec:pmovg} The velocity gradient of the flattened envelope traced by the SO emission is consistent with that traced by the ammonia \citep[][]{Choi2010}, but opposite to that of the circumbinary envelope traced by C$^{17}$O \citep[][]{Ching2016}. Here, we discuss two possible mechanisms for such an opposite velocity gradient. One possible mechanism to change the velocity gradients is turbulence in magnetized molecular cloud cores. Recently, several ideal MHD simulations of magnetized turbulent molecular cloud cores have been conducted \cite[e.g.][]{myers2013, joos2013, seifried2013, matsumoto2017}, showing that the large scale envelope rotation and the small scale disk rotation could be misaligned. If the turbulence is strong enough, it could cause the small scale disk angular-momentum vector to be largely misaligned with the large scale rotation and thus produce a nearly opposite velocity gradient between large and small scales \cite[][]{takakuwa2018}.
In addition, the turbulence also contributes to the misalignment of the disk rotation axis and the large scale magnetic field, which in turn tilts the outflow direction away from the disk rotation axis \cite[][]{matsumoto2017}. Although these features are consistent with the observations toward IRAS 4A2, these misalignments only take place on scales smaller than $\sim 100$ au, so they are unlikely to cause the opposite velocity gradient between a $\sim 300$ au flattened envelope and an even larger circumbinary envelope. On the other hand, in non-ideal MHD cases, the Hall term in the induction equation induces a toroidal magnetic field on the cloud core. When the initial magnetic field is antiparallel to the cloud core angular momentum vector, the Hall-induced magnetic field runs opposite to the gas rotation. Meanwhile, the Hall effect also exerts a magnetic torque on the mid-plane of the flattened envelope. If this torque is large enough and opposes the initial rotation, the gas rotation vector could flip between the large scale core and the small scale flattened envelope. Counter-rotation between the large and small scale components is also possible when the magnetic field is parallel to the disk rotation vector. In this case, the Hall-induced magnetic torque produces excess angular momentum on the mid-plane of the disk. However, due to the conservation of angular momentum, a negative angular momentum is transferred to the upper region of the disk and forms a counter-rotating envelope. It has recently been found that Hall-induced counter-rotation can also appear in configurations other than antiparallel or parallel (e.g.\ magnetic field and disk rotation vector misaligned by 135$\arcdeg$), so counter-rotation could occur in the misaligned source IRAS 4A2 \cite[][]{tsukamoto2017}.
Such an opposite velocity gradient, possibly caused by the Hall effect, has been reported in IRAS 04169+2702 and L1527 \cite[][]{takakuwa2018, harsono2014}. In the case of IRAS 04169+2702, the opposite velocity gradient is observed in the $^{13}$CO and C$^{18}$O line emission. By fitting a counter-rotating model and a forward rotation model to the $^{13}$CO and C$^{18}$O position-velocity diagrams, it was found that the counter-rotating model better explains the opposite velocity gradient. \subsection{Increased Abundance of CCH} \label{sec:iac} The spatial distribution of the CCH is significantly different from those of the CO and SO; although the CCH also reveals a bipolar structure, it consists of two blobs to the north and south of IRAS 4A2. Normally, CCH appears on outflow cavity walls because its precursors, ions such as C$_2$H$_2^+$ or C$_2$H$_3^+$, are formed there. These ions are the products of FUV dissociation of carbon-bearing neutral molecules: when FUV light from the protostars illuminates the inner walls of the cavities, neutral molecules are broken down into ions, and CCH formation begins on the cavity walls. A typical source demonstrating CCH-traced cavity walls is NGC1333 IRAS4C. Since CCH can be formed in PDRs irradiated by FUV radiation \cite[e.g.][]{pety2005, zhang2018}, the observed CCH is interpreted as tracing the outflow cavities irradiated by the FUV radiation from the central star. However, in contrast to IRAS 4A, the molecular outflow from IRAS4C is barely seen in the CO emission; only weak and compact blueshifted emission was detected to the east of the central source \cite[][]{stephens2018}, despite the clear cavity structure in the CCH and mid-IR images. Moreover, in addition to the cavity walls, there are also CCH clumps at the bottoms of the cavities associated with IRAS 4A2. Although the morphology of the CCH in IRAS 4A2 is different from that of IRAS4C, the same mechanism of CCH formation in PDRs could also apply to this case.
Since the velocity of the CCH in IRAS 4A2 is much lower than those of the CO and SO, the observed CCH clumps could trace the bases of the cavities. The different morphologies, i.e.\ clumps in IRAS 4A2 and cavities in IRAS4C, can be explained by different evolutionary stages. The outflow from IRAS4C, which has a wider opening angle, is likely more evolved than that of IRAS 4A2 \cite[][]{machida2013, arce2006}. \section{CONCLUSIONS} \label{sec:conclusions} This paper presents reduced images from ALMA archival data of the Class 0 source NGC1333 IRAS 4A in the Perseus molecular cloud. The resolutions of these images are 0.2$\arcsec$ (60 au) for the 1.3 mm continuum and the $^{12}$CO $(J=2-1)$ and SO $(J_N=6_5-5_4)$ lines, and 0.6$\arcsec$ (180 au) for the 1.17 mm continuum and the SO $(J_N=7_6-6_5)$ and CCH ($J_{N,F}=7/2_{3,4}-5/2_{2,3}$) lines. The analysis of these images provides the following findings on the kinematics of IRAS 4A. \begin{enumerate} \item The high resolution CO $(J=2-1)$ and SO $(J_N=6_5-5_4)$ line observations revealed that IRAS 4A1 and IRAS 4A2 are driving independent outflows. The northern outflow lobes overlap each other at roughly $3\arcsec$ north of IRAS 4A1 and IRAS 4A2, while the southern outflows propagate in different directions. \item In the IRAS 4A1 and IRAS 4A2 outflows, the column density and the rotation temperature estimated from the two transitions of the SO emission increase toward the vicinity of the IRAS 4A1 and IRAS 4A2 continuum peaks. \item IRAS 4A2 drives an S-shaped outflow from the edges of a flattened rotating envelope centered at its continuum peak. The outflow shows different degrees of asymmetry in different velocity ranges. Additionally, the CO observations reveal extremely high velocity components at the base of the IRAS 4A2 outflow.
These features may be explained by the magnetic field misalignment model, in which the initial core angular momentum vector is misaligned with the magnetic field, or by precession caused by a close unresolved binary system with a separation larger than $\sim$12 au. \item The flattened envelope around IRAS 4A2 has a velocity gradient opposite to that of the circumbinary envelope. This could possibly be explained by the Hall effect: the Hall-induced magnetic field may flip the angular momentum vector between the flattened envelope and the circumstellar envelope. \item The CCH ($J_{N,F}=7/2_{3,4}-5/2_{2,3}$) emission shows two pairs of blobs attached to the bottoms of shell-like features, and its morphology is significantly different from that of the other molecular lines. We suggest that the CCH emission is enhanced in the outflow cavity walls. \end{enumerate} \acknowledgments This paper makes use of the following ALMA data: ADS/JAO.ALMA2013.1.01102.S, ADS/JAO.ALMA2013.1.00031.S, and ADS/JAO.ALMA2017.1.00053.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. N.H. acknowledges a grant from the Ministry of Science and Technology (MoST) of Taiwan (MoST 108-2112-M-001-017-, MoST 109-2112-M-001-023-). \vspace{5mm} \facilities{ALMA} \software{CASA \citep{mcmu07}, MIRIAD \citep{saul95}, Radmc3D \citep{radmc3d2012}, vis\_sample (For further references see \href{https://github.com/AstroChem/vis_sample.git}{vis sample}), Astropy \citep{astropy:2013, astropy:2018}, Aplpy \citep{robitaille2012}} \newpage
\section{Introduction} We consider a system of $N$ identical weakly interacting bosons confined to a thin waveguide, i.e.\ to a region $\mathcal{T}_\epsi\subset \field{R}^3$ contained in an $\epsi$-neighborhood of a curve $c:\field{R}\to\field{R}^3$. The Hamiltonian of the system is \begin{equation}\label{hamilton1} H_{\mathcal{T}_\epsi}(t)= \sum_{i=1}^N \left(- \Delta_{z_i} + V(t,z_i) \right)+ \sum_{i \leq j} \frac{a}{\mu^3} \,w \left( \frac{ z_i-z_j}{\mu}\right)\,, \end{equation} where $z_j\in \field{R}^3$ is the coordinate of the $j$th particle, $\Delta_{z_j}$ the Laplacian on $\mathcal{T}_\epsi$ with Dirichlet boundary conditions, $V$ a possibly time-dependent external potential and $w$ a positive pair interaction potential. The coupling $a:=\epsi^2/N$ is chosen such that for $N$-particle states supported along a fixed part of the curve the interaction energy per particle remains of order one for all $N\in\field{N}$ and $\epsi>0$. For $\beta>0$ the effective range of the interaction $\mu:= \left(\epsi^2/N\right)^\beta$ goes to zero for $N\to \infty$ and $\epsi\to 0$ and $\mu^{-3} w(\cdot/\mu)$ converges to a point interaction. We consider in the following only $\beta\in (0,1/3)$, the so-called mean-field regime, where $a/\mu^3$ still goes to zero. For recent papers containing concise reviews of the mean-field and NLS limit for Bose gases we refer to \cite{LewNamRou14,NamRouSei15}. For a detailed discussion of Bose condensation in general and also the problem of dimensional reduction we refer to \cite{LieSeiSolYng05}. Let us give a somewhat informal account of our result before we discuss the details.
Assume that the initial state $\psi^{N,\epsi}\in L^2_+(\mathcal{T}_\epsi^N) := \bigotimes^N_{\rm sym} L^2(\mathcal{T}_\epsi)$ has a one-particle density matrix $\gamma_1$, i.e.\ the operator with kernel \begin{equation}\label{redendef} \gamma_1(z,z') := \int \psi^{N,\epsi}(z,z_2,\cdots ,z_N) \,\overline \psi^{N,\epsi}(z',z_2,\cdots ,z_N) {\mathrm{d}} z_2 \cdots {\mathrm{d}} z_N\,, \end{equation} that is asymptotically close to a projection $p=|\varphi\rangle\langle \varphi|$ onto a single particle state $\varphi= \Phi_0\chi\in L^2(\mathcal{T}_\epsi)$, where $\Phi_0$ is the wavefunction along the curve and $\chi$ is the ``ground state'' in the confined direction. Then we show that all $M$-particle density matrices $\gamma_M(t)$ of the solution $\psi^{N,\epsi}(t)$ of the Schr\"odinger equation \begin{align*} \mathrm{i} \tfrac{{\mathrm{d}}}{{\mathrm{d}} t} \psi^{N,\epsi} (t)= H_{\mathcal{T}_\epsi} (t) \,\psi^{N,\epsi} (t) \end{align*} are asymptotically close to $|\varphi(t)\rangle\langle\varphi(t)|^{\otimes M}$, where $ \varphi(t) = \Phi(t)\chi$ with $\Phi(t)$ the solution of the one-dimensional non-linear Schr\"odinger equation \begin{align}\label{equ:grosspqwgintro} \mathrm{i} \partial_t \Phi(t,x) = \left(-\tfrac{\partial^2}{\partial x^2} + V_{\rm geom}(x) + V(t,x,0)+ b |\Phi(t,x)|^2\right) \,\Phi(t,x) \qquad \mbox{with } \;\Phi(0)=\Phi_0\,. \end{align} The strength $b$ of the nonlinearity depends on the details of the asymptotic limit. We distinguish two regimes: In the case of moderate confinement the width $\epsi$ of the waveguide shrinks slower than the range $\mu$ of the interaction and $b= \int_{\Omega_{\rm f}} |\chi(y)|^4 \, {\mathrm{d}}^2 y \cdot \int_{\field{R}^3} w(r)\, {\mathrm{d}}^3 r$, where $\Omega_{\rm f}$ is the cross section of the waveguide and $\chi$ the ground state of the 2$d$-Dirichlet Laplacian on $\Omega_{\rm f}$. 
In the case of strong confinement the width $\epsi$ of the waveguide shrinks faster than the range $\mu$ of the interaction and $b=0$. The geometric potential $V_{\rm geom}(x)$ depends on the geometry of the waveguide and is the sum of two parts. The curvature $\kappa(x)$ of the curve contributes a negative potential $-\kappa(x)^2/4$, while the twisting of the cross-section relative to the curve contributes a positive potential. Note that quasi one-dimensional Bose-Einstein condensates in non-trivial geometric structures have been realised experimentally \cite{GoeVogKet01,HeRy09} and that the transport and manipulation of condensates in waveguides is a highly promising experimental technique, see e.g.\ the review \cite{FZ}. The rigorous derivation of the non-linear Gross-Pitaevskii equation from the underlying linear many-body Schr\"odinger equation has been a very active topic in mathematics during the last decade, however, almost exclusively without the confinement to a waveguide. In that setting the Gross-Pitaevskii equation \eqref{equ:grosspqwg} is still an equation on $\field{R}^3$. The first rigorous and complete derivation in $\field{R}^3$ is due to Erd\"os, Schlein and Yau \cite{ErdSchYau07}. Their proof is based on the BBGKY hierarchy, a system of coupled equations for all $M$-particle density matrices $\gamma_M(t)$, $M=1,\ldots,N$. Independently Adami, Golse and Teta solved the problem in one dimension \cite{AGT}. Shortly after, Pickl developed an alternative approach \cite{Pic08} that turned out to be very flexible concerning time-dependent external potentials \cite{Pic10a}, non-positive interactions \cite{Pic10b}, and singular interactions \cite{KnoPic09}. Yet another approach based on Bogoliubov transformations and coherent states on Fock space was developed for the most difficult case $\beta=1$ in \cite{BenOliSch12}. Recently also corrections to the mean-field dynamics were established in \cite{GrMa13,NaNa15}.
There are also several lecture notes reviewing the different approaches to the NLS limit, e.g.\ \cite{Sch08,Gol13,BenPorSch15,Rou15}. For our purpose the approach of Pickl proved fruitful and our proof follows his general strategy and uses his formalism. However, since the NLS limit in a geometrically nontrivial waveguide also required crucial modifications, our paper is fully self-contained. Also the problem of deriving lower dimensional effective equations for strongly confined Bose gases has been considered before. In \cite{abdmehschweis05} the authors start with the Gross-Pitaevskii equation in dimension $n+d$ confined to an $n$-dimensional plane by a strong harmonic potential and derive an effective NLS in dimension $n$. In \cite{MehRay15} the reduction of the Gross-Pitaevskii equation in dimension two to an $\epsi$-neighbourhood of a curve is considered. In both cases this corresponds to first taking the mean-field limit and then the limit of strong confinement. However, we will see that the two limits do not commute and thus that a direct derivation of the Gross-Pitaevskii equation in lower dimension from the $N$-particle Schr\"odinger evolution in higher dimension is of interest. This was done for a gas confined to a plane in $\field{R}^3$ in \cite{CheHol13}, and for a gas confined to a straight line in \cite{CheHol14} using the BBGKY-approach of \cite{ErdSchYau07}. \section{Main result} In order to explain our result in full detail we need to start with the construction of the wave\-guide~$\mathcal{T}_\epsi$. Consider a smooth curve $c:\field{R} \rightarrow \field{R}^3$ parametrized by arc-length, i.e.\ with $\|c'(x)\|_{\field{R}^3}=1$.
Along the curve we define a frame by picking an orthonormal frame $(\tau(0), e_1(0), e_2(0))$ at $c(0)$ with $\tau(0)=c'(0)$ tangent to the curve and then defining $(\tau(x), e_1(x), e_2(x))$ by parallel transport along the curve, i.e.\ by solving \begin{equation*}{\small \label{eq:DGLBishop} \begin{pmatrix} \tau' \\ e'_1 \\ e'_2 \end{pmatrix} = \begin{pmatrix} 0 & \kappa_1 & \kappa_2 \\ - \kappa_1 & 0 & 0 \\ -\kappa_2 & 0 & 0 \end{pmatrix} \begin{pmatrix} \tau \\ e_1 \\ e_2 \end{pmatrix}} \end{equation*} with the components of the mean curvature vector $\kappa_j:\field{R}\to\field{R}$ ($j=1,2$) given by \[ \kappa_j(x) := \langle \tau'(x), e_j(x)\rangle _{\field{R}^3} = \langle c''(x),e_j(x)\rangle_{\field{R}^3} \, . \] Let the cross-section $\Omega_{\rm f}\subset \field{R}^2$ of the waveguide be open and bounded and let $\theta: \field{R}\to\field{R}$ be a smooth function that defines the twisting of the cross-section relative to the parallel frame. In order to define the thin waveguide it is convenient to introduce the following maps separately. Denote the scaling map by \[ D_\epsi : \field{R}^3 \to \field{R}^3\,,\quad r=(x,y) \mapsto (x,\epsi y)=:r^\epsi\,, \] the twisting map by \[ T_\theta : \field{R}^3 \to \field{R}^3\,,\quad (x,y)\mapsto (x,T_{\theta(x)}y)\,,\quad \mbox{where } \; T_{\theta(x)} = \begin{pmatrix} \cos \theta(x) & -\sin\theta(x)\\\sin\theta(x)&\cos\theta(x)\end{pmatrix}\,, \] and the embedding map by \begin{equation*}\label{def:f} f : \field{R}^3\to \field{R}^3 \,, \quad r=(x, y_1,y_2 )\mapsto f (r) = c(x) + y_1 e_1(x)+ y_2 e_2(x)\,. \end{equation*} The waveguide is now defined by first scaling, then twisting and finally embedding the set $\Omega := \field{R}\times \Omega_{\rm f}\subset \field{R}^3$ into a neighbourhood of $c(\field{R})$. 
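The parallel-transport frame defined by the ODE system above can also be computed numerically. The following sketch (with an arbitrarily chosen constant curvature, not a waveguide considered in the text) integrates the system with a standard ODE solver and checks that the frame stays orthonormal:

```python
import numpy as np
from scipy.integrate import solve_ivp

def parallel_frame(kappa1, kappa2, x_grid, frame0):
    """Integrate the parallel-transport frame ODE
        tau' = k1*e1 + k2*e2,   e1' = -k1*tau,   e2' = -k2*tau
    along the arc-length parameter x. frame0 is a 3x3 array whose rows
    are (tau, e1, e2) at x_grid[0]; returns the frames at all x_grid points."""
    def rhs(x, flat):
        tau, e1, e2 = flat.reshape(3, 3)
        k1, k2 = kappa1(x), kappa2(x)
        return np.concatenate([k1 * e1 + k2 * e2, -k1 * tau, -k2 * tau])
    sol = solve_ivp(rhs, (x_grid[0], x_grid[-1]), frame0.ravel(),
                    t_eval=x_grid, rtol=1e-10, atol=1e-12)
    return sol.y.T.reshape(-1, 3, 3)

# Example: constant curvature kappa1 = 1, kappa2 = 0 (a unit circle);
# after one full period x = 2*pi the frame returns to its initial value.
x = np.linspace(0.0, 2.0 * np.pi, 101)
frames = parallel_frame(lambda s: 1.0, lambda s: 0.0, x, np.eye(3))
gram = frames[-1] @ frames[-1].T     # orthonormality check
```

Orthonormality is preserved because the coefficient matrix of the ODE is antisymmetric, so the flow acts as a rotation of the frame.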
For $\epsi$ small enough, the map \[ f_\epsi: \Omega:= \field{R}\times \Omega_{\rm f}\to \field{R}^3\,, r \mapsto f_\epsi(r) := f\circ T_\theta\circ D_\epsi (r) \] is, by Assumption~{\bf A1}, a diffeomorphism onto its range \[ \mathcal{T}_\epsi := f_\epsi (\Omega ) \subset \field{R}^3\,, \] which defines the region in space accessible to the particles, i.e.\ the waveguide. Now the evolution of an $N$-particle system in a waveguide is given by the Hamiltonian \eqref{hamilton1}, which acts on $L^2(\mathcal{T}_\epsi)^{\otimes N} \cong L^2({\mathcal{T}_\epsi}^N) $ with Dirichlet boundary conditions. However, for the formulation and the derivation of our result it is more convenient to always work on the fixed, $\epsi$-independent product-domain $\Omega = \field{R}\times \Omega_{\rm f}$ instead of the tube $\mathcal{T}_\epsi$. This is achieved by a natural unitary transformation: since $f_\epsi$ is a diffeomorphism for $\epsi$ small enough, the map \[ U_\epsi : L^2({\mathcal{T}_\epsi} )\to L^2 (\Omega )\,,\quad \psi \mapsto (U_\epsi\psi)(r ) :=\sqrt{\det Df_\epsi(r)} \;\psi(f_\epsi(r)) =: \sqrt{\rho_\epsi(r)} \;\psi(f_\epsi(r)) \] is unitary. Using $(U_\epsi)^{\otimes N}$ we can unitarily map the waveguide Hamiltonian $H_{\mathcal{T}_\epsi}(t)$ in \eqref{hamilton1} to \begin{eqnarray}\label{equ:hamqwg} H(t)&:= &(U_\epsi)^{\otimes N} H_{\mathcal{T}_\epsi}(t)(U_\epsi^*)^{\otimes N} + \sum_{i=1}^N \tfrac{1}{\epsi^2} V^\perp(y_i)\\& =& \sum_{i=1}^N \left( -\left( U_\epsi \Delta U_\epsi^* \right)_{z_i} + \tfrac{1}{\epsi^2}V^\perp(y_i)+V(t, f_\epsi(r_i))\right)+ a \sum_{i \leq j}\frac{1}{\mu^3} \,w \left( \frac{ f_\epsi(r_i)-f_\epsi(r_j)}{\mu}\right)\,,\nonumber \end{eqnarray} where we allow for an additional confining potential $V^\perp:\Omega_{\rm f} \to \field{R}$.
We denote the lowest eigenvalue of $-\Delta_y + V^\perp(y)$ on $\Omega_{\rm f}$ with Dirichlet boundary conditions by $E_0$ and the corresponding real-valued and normalised eigenfunction by $\chi$. We will consider simultaneously the mean-field limit $N\to \infty$ and the limit of strong confinement $\epsi\to 0$ for the time-dependent Schr\"odinger equation with Hamiltonian $H(t)$ on the Dirichlet domain $ D(H(t)) \equiv D(H)=H^2(\Omega^N) \cap H^1_0(\Omega^N)$. Recall that the effective coupling $a$ is given by $a=\epsi^2/{N}$ and the effective range of the interaction by $\mu = (\epsi^{2}/N)^{\beta}$. Compared to the standard $N$-particle Schr\"odinger operator we thus have in \eqref{equ:hamqwg} the shrinking domain and the strongly confining potential $V^\perp$, a pair interaction that is no longer exactly a function of the separation $r_i-r_j$ of two particles, and a modified kinetic energy operator. \begin{lem} \label{LapKoord} The Laplacian in the new coordinates has the form \[ U_\epsi \Delta U_\epsi^* = \left(\partial_x + \theta' (x)L \right)^2\;+\; \tfrac{1}{\epsi^2} \Delta_y\; - \;V_{\rm bend}(r) \;-\; \epsi \,S^\epsi\,, \] where \begin{eqnarray*} L &=& y_1\partial_{y_2} - y_2\partial_{y_1} \,,\\[2mm] V_{\rm bend}(r)&=& - \frac{\kappa(x)^2}{4\rho_\epsi(r)^2} - \epsi\,\frac{T_{\theta(x)}y\cdot\kappa''(x)}{2\rho_\epsi(r)^3} - \epsi^2\, \frac{5( T_{\theta(x)}y\cdot\kappa'(x))^2}{4\rho_\epsi(r)^4}\,, \\[2mm] S^\epsi &=& \left(\partial_x +\theta' (x) L \right) s^\epsi(r) \left(\partial_x + \theta' (x) L \right)\,,\\[2mm] \rho_\epsi(r) &=& 1- \epsi\, T_{\theta(x)}y\cdot \kappa(x)\,,\quad\mbox{and} \quad s^\epsi(r) = \frac{\rho_\epsi^2(r)-1}{\epsi\,\rho_\epsi^2(r)} \,. \end{eqnarray*} \end{lem} \begin{proof} This is an elementary computation and the result is, somewhat implicitly, used in many papers on quantum waveguides, see e.g.\ \cite{Kre08} and references therein.
The explicit result using our notation is derived in the introduction of \cite{HaLaTe14} for the case $\theta\equiv 0$ and yields the corresponding expression with $\partial_x$ instead of $\partial_x+\theta' L$. Now the rotation by the angle $\theta(x)$ in the $y$-plane is implemented on $L^2(\field{R}^3)$ by the operator $R(\theta(x))={\mathrm{e}}^{\theta(x) ( y_1\partial_{y_2} - y_2\partial_{y_1} )}$, such that \[ R(\theta(x))^* \,\partial_x\,R(\theta(x))= \partial_x + \theta' L\,. \] \end{proof} Before stating our main result we give a list of assumptions. \begin{itemize} \item[\bf A1] {\em Waveguide}: Let $\Omega_{\rm f}\subset\field{R}^2$ be open and bounded. Let $c:\field{R}\to \field{R}^3$ be injective and six times continuously differentiable with all derivatives bounded, i.e.\ $c \in C^6_{\rm b}(\field{R},\field{R}^3)$, and such that $\|c'(x)\|_{\field{R}^3}\equiv 1$. To avoid overlap of different parts of the waveguide, injectivity is not sufficient and we assume that there are constants $c_1,c_2>0$ such that \[ \|c(x_1) - c(x_2)\|_{\field{R}^3} \geq \min\{ c_1|x_1-x_2|, c_2\}\,. \] Finally let $\theta:\field{R}\to\field{R}$ satisfy $\theta\in C^5_{\rm b}(\field{R})$. \item[\bf A2] {\em Interaction}: Let the interaction potential $w$ be a non-negative, radially symmetric function such that $w(r) = \tilde w(|r|^2)$ for a function $\tilde w\in C^2(\field{R})$ with support in $(-1,1)$.\\[1mm] If the waveguide is straight and untwisted, i.e.\ if $f= T_\theta= {\rm id}$, then we only assume that $w$ is a non-negative function in $L^2(\field{R}^3; {\mathrm{d}}^3 r)\cap L^1(\field{R}^3; (1+|r|)\, {\mathrm{d}}^3 r)$. \item[\bf A3] {\em External potentials}: Let the external single particle potential $V:\field{R}\times\field{R}^3\to \field{R}$ for each fixed $t\in\field{R}$ be bounded and four times continuously differentiable with bounded derivatives, $ V(t,\cdot) \in C^4_{\rm b}(\field{R}^3)$.
Moreover assume that the map $\field{R}\to L^\infty(\Omega)$, $t\mapsto V(t,\cdot)$ is differentiable and that $\dot V(t,\cdot)\in C_{\rm b}^1(\field{R}^3)$. Let the confining potential $V^\perp:\Omega_{\rm f}\to \field{R}$ be relatively bounded with respect to the Dirichlet Laplacian on $\Omega_{\rm f}$ with relative bound smaller than one. \end{itemize} \begin{rem} \begin{enumerate}[(a)] \item Note that for geometrically non-trivial waveguides we will have to Taylor expand the interaction~$w$ up to second order, hence condition {\bf A2}. Otherwise the much weaker condition formulated for straight and untwisted waveguides suffices. Note also that any radially symmetric function can be written uniquely in the form $w(r) = \tilde w(|r|^2)$ and that the regularity we need for the Taylor expansion is most conveniently formulated in terms of $\tilde w$. \item The high regularity requirements for the wave\-guide in {\bf A1} and the external potential in {\bf A3} are only needed to ensure the existence of global solutions of the NLS equation \eqref{equ:grosspqwgintro} that remain bounded in~$H^2(\field{R})$. \end{enumerate} \end{rem} Let $\psi^{N,\epsi}(t)$ be the solution to the time-dependent $N$-particle Schr\"odinger equation \begin{align}\label{equ:schrodinger} \mathrm{i} \tfrac{{\mathrm{d}}}{{\mathrm{d}} t} \psi^{N,\epsi} (t)= H (t) \,\psi^{N,\epsi} (t) \end{align} with the Hamiltonian $H(t)$ defined in \eqref{equ:hamqwg} and $\psi^{N,\epsi} (0)\in D(H(t)) \equiv H^2(\Omega^N) \cap H^1_0(\Omega^N)$. In order to study simultaneously the mean-field limit $N\to \infty$ and the limit of strong confinement $\epsi\to 0$, we consider families of initial data $\psi^{N,\epsi} (0)$ along sequences $(N_n,\epsi_n)\to (\infty,0)$. 
\begin{defn} For $\beta\in \left(0,\frac13\right)$ we call a sequence $(N_n,\epsi_n)$ in $\field{N}\times (0,1]$ {\bf admissible}, if \begin{equation}\label{admis} \lim_{n\to \infty} (N_n,\epsi_n) = (\infty,0) \qquad\mbox{and}\qquad \lim_{n\to \infty} \frac{(\epsi_n)^{\frac43}}{\mu_n} =0 \qquad \mbox{for}\qquad \mu_n := \left( \frac{\epsi_n^2}{N_n}\right)^\beta\,. \end{equation} We say that the sequence $(N_n,\epsi_n)$ is {\bf moderately confining}, if, in addition, \[ \lim_{n\to \infty} \frac{\mu_n}{\epsi_n} = 0 \,, \] i.e.\ if the effective range $\mu$ of the interaction shrinks faster than the width $\epsi$ of the waveguide. We say that the sequence $(N_n,\epsi_n)$ is {\bf strongly confining}, if instead \[ \lim_{n\to \infty} \frac{\epsi_n}{\mu_n} = 0 \,, \] i.e.\ if the width of the waveguide is small even on the scale of the interaction. \end{defn} Note that the admissibility condition in \eqref{admis} requires that the width $\epsi$ of the waveguide cannot shrink too slowly compared to the range of the interaction $\mu$. This is a technical requirement that simplifies the proof considerably. It assures that the energy gap between $E_0$ and the first excited state in the normal direction, which is of order $\frac{1}{\epsi^2}$, grows sufficiently quickly so that transitions into excited states in the normal direction become negligible at leading order. In the following we will be concerned almost exclusively with the case of moderate confinement, where the effective one dimensional equation is nonlinear. The analysis of the strongly confining case turns out to be much simpler. Before we can formulate our precise assumptions on the family of initial states, we need to introduce the one-particle energies. 
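As an aside, the scaling conditions in this definition can be checked mechanically for concrete power-law sequences; the sketch below encodes the three conditions, with illustrative exponents that are not taken from the text:

```python
# Power-law sequences eps_n = n**(-p), N_n = n**(q) with p, q > 0, so that
# eps_n -> 0 and N_n -> infinity; then mu_n = (eps_n**2 / N_n)**beta = n**(-beta*(2p+q)).
def confinement_regime(p, q, beta):
    """Classify the sequence (N_n, eps_n) = (n**q, n**-p) for a given beta.

    Admissibility requires eps_n**(4/3) / mu_n -> 0, i.e. (4/3)*p > beta*(2*p + q);
    moderate confinement means mu_n/eps_n -> 0, strong confinement eps_n/mu_n -> 0.
    """
    mu_exp = beta * (2 * p + q)          # mu_n = n**(-mu_exp)
    if not (4.0 / 3.0) * p > mu_exp:
        return "not admissible"
    if mu_exp > p:                        # mu shrinks faster than eps
        return "moderate"
    if mu_exp < p:                        # eps shrinks faster than mu
        return "strong"
    return "borderline"

regime_m = confinement_regime(p=1.0, q=3.0, beta=0.25)   # "moderate"
regime_s = confinement_regime(p=1.0, q=1.0, beta=0.25)   # "strong"
```

For instance, with $\beta=1/4$ the scaling $\epsi_n=n^{-1}$, $N_n=n^3$ is admissible and moderately confining, while $N_n=n$ gives strong confinement.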
For $\psi\in D(H)$ the ``renormalised energy per particle'' is \begin{align*} E^{\psi} (t):= \tfrac{1}{N}\big\langle \psi ,H(t)\,\psi \big\rangle_{L^2(\Omega^{N})} - \tfrac{E_0}{\epsi^2} \,, \end{align*} and for $\Phi\in H^2(\field{R})$ let the ``effective energy per particle'' be \begin{align}\label{equ:enggross2} E^{\Phi} (t):&= \Big\langle \Phi ,\underbrace{\left(-\tfrac{\partial^2}{\partial x^2} - \tfrac{\kappa(x)^2}{4} + |\theta'(x)|^2 \,\|L\chi\|^2 + V(t,x,0)+ \tfrac{b}{2} |\Phi |^2 \right)}_{\displaystyle =: \mathcal{E}^\Phi (t)} \Phi \Big\rangle_{L^2(\field{R})}\,. \end{align} Recall that $\chi$ is the ground state wave function of $-\Delta_y + V^\perp(y)$ on $\Omega_{\rm f}$ with Dirichlet boundary conditions and $E_0$ the corresponding ground state eigenvalue. As with $L^2_+(\mathcal{T}_\epsi)$, we also denote the symmetric subspace of $L^2(\Omega^N)$ by $ L^2_+(\Omega^N):=\bigotimes^N_{\rm sym} L^2(\Omega)$. \begin{itemize} \item[\bf A4] {\em Initial data}: Let the family of initial data $\psi^{N,\epsi}(0)\in D(H)\cap L^2_+(\Omega^N)$, $\|\psi^{N,\epsi}(0)\|^2=1$, be close to a condensate with single particle wave function $\varphi_0 = \Phi_0\chi$ for some $\Phi_0\in H^2(\field{R})$ in the following sense: for some admissible sequence $(N,\epsi)\to (\infty,0)$ it holds that \begin{align*} \lim_{(N,\epsi)\to (\infty,0)} \Tr_{L^2(\Omega)} \big |\gamma^{N,\epsi}(0)-|\varphi_0 \rangle \langle \varphi_0| \big |=0\,, \end{align*} where $\gamma^{N,\epsi}(0)$ is the one particle density matrix of $\psi^{N,\epsi}(0)$, cf.\ \eqref{redendef}. In addition we assume that also the energy per particle converges, \begin{align*} \lim_{(N,\epsi)\to (\infty,0) }|E^{\psi^{N,\epsi}(0)} (0)-E^{\Phi_0} (0)|=0\,.
\end{align*} \end{itemize} Finally, let $\Phi(t)$ be the corresponding solution of the effective nonlinear Schr\"odinger equation \begin{align}\label{equ:grosspqwg} \mathrm{i} \partial_t \Phi(t) = \underbrace{\left(-\tfrac{\partial^2}{\partial x^2} - \tfrac{\kappa(x)^2}{4} + |\theta'(x)|^2 \,\|L\chi\|^2 + V(t,x,0)+ b |\Phi(t)|^2\right)}_{\displaystyle =: \,h^\Phi(t)} \,\Phi(t) \qquad \mbox{with } \;\Phi(0)=\Phi_0, \end{align} where \begin{equation*}\label{bdef} b:= \left\{ \begin{array}{cl} \int_{\Omega_{\rm f}} |\chi(y)|^4 \, {\mathrm{d}}^2 y \cdot \int_{\field{R}^3} w(r)\, {\mathrm{d}}^3 r & \mbox{ in the case of moderate confinement,}\\[1mm] 0& \mbox{ in the case of strong confinement.} \end{array}\right. \end{equation*} The unique existence and properties of solutions to \eqref{equ:schrodinger} and \eqref{equ:grosspqwg} are well known and briefly discussed in Appendix\,\ref{app:regsol}. \begin{thm}\label{thm:thm1} Let the waveguide satisfy assumption {\rm \bf A1} and let the potentials satisfy assumptions {\rm \bf A2} and {\rm \bf A3}. For $\beta\in \left(0,\frac13\right)$ let $\psi^{N,\epsi}(0)$ be a family of initial data satisfying {\rm \bf A4}. Let $\psi^{N,\epsi}(t)$ be the solution of the $N$-particle Schr\"odinger equation \eqref{equ:schrodinger} with initial datum $\psi^{N,\epsi}(0)$ and $\gamma^{N,\epsi}_M(t)$ its $M$-particle reduced density matrix. Let $\Phi(t)$ be the solution of the effective equation \eqref{equ:grosspqwg} with initial datum $\Phi_0$. Then for any $t\in \field{R}$ and any $M\in\field{N}$ \[ \lim_{(N,\epsi)\to (\infty,0)} \Tr \Big |\gamma^{N,\epsi}_M(t)-|\Phi(t) \chi \rangle \langle \Phi(t) \chi |^{\otimes M} \Big |=0\,, \] and \[ \lim_{(N,\epsi)\to (\infty,0)} \left|E^{\psi^{N,\epsi}(t)}(t)-E^{\Phi(t)}(t)\right|=0 \] where the limits are taken along the sequence from {\rm \bf A4}. 
\end{thm} \begin{rem} \begin{enumerate}[(a)] \item In Assumption {\bf A4} we assume that the initial state is close to a complete Bose-Einstein condensate. To show that the ground state of a Bose gas is actually of this form is in itself an important and difficult problem. For a straight wave guide and the case $\beta=1$ this was shown in \cite{LieSeiYng03}, see also \cite{LieSeiSolYng05} for a detailed review and \cite{SchYng06}. The analysis of ground states in geometrically non-trivial wave guides is, as far as we know, an open problem. For the latest results for $\beta \in (0,1) $, but without strong confinement, we refer to \cite{LewNamRou14}. \item The assumption in {\bf A2} that the interaction $w$ is non-negative seems to be crucial to our proof, although it is used only once in the proof of the energy estimate of Lemma~\ref{lem:energyestimate}. The results of \cite{CheHol14} suggest, however, that our result should also hold for interactions with a certain negative part. \item The negative part $-\kappa(x)^2/4$ of the geometric potential stemming from the curvature $\kappa(x)$ of the curve is often called the bending potential, while the positive part $|\theta'(x)|^2\|L\chi\|^2$ is called the twisting potential. Both appear in exactly the same form also for non-interacting particles in a waveguide, as they originate just from the transformation of the Laplacian in Lemma~\ref{LapKoord}. See also \cite{Kre08} for a review in the one-particle case. \item One could also consider a waveguide with a cross-section that varies along the curve, e.g.\ having constrictions or thickenings. But then $E_0=E_0(x)$ would be a function of $x$ and an effective potential of size $\frac{E_0(x)}{\epsi^2}$ would appear in the effective equation. As a consequence also the kinetic energy in the $x$-direction would be of order $\frac{1}{\epsi^2}$, i.e.\ $\|\Phi\|_{H^1(\field{R})}^2 = {\mathcal{O}}(\frac{1}{\epsi^2})$.
It is conceivable that a result similar to Theorem~\ref{thm:thm1} holds also in this setting of large tangential energies. However, this is a much more difficult problem, since transitions into excited normal modes will be energetically possible. Using adiabatic theory, this problem is treated in the single-particle case in \cite{WT,LT,HaLaTe14}. \item Another interesting modification of the setup is confinement only by potentials, without the Dirichlet boundary. This too would introduce additional technical complications, since in this case the map $f$ is no longer a global diffeomorphism and has to be cut off, cf.\ \cite{WT}. \item Let us briefly comment on the main differences of our result compared to the work of Chen and Holmer \cite{CheHol14}. While our focus is on geometrically non-trivial wave guides, the authors of \cite{CheHol14} consider confinement by a harmonic potential of constant shape to a straight line. However, their main focus is on attractive pair interactions, more precisely pair potentials with $\int_{\field{R}^3} w(r){\mathrm{d}} r \leq 0$, a situation which is excluded in our result. On the other hand, at least in the case of a straight wave guide, our approach needs much less regularity for $w$ and can incorporate external time-dependent potentials. Finally, our proof yields also convergence rates, which, as far as we understand, is not the case for \cite{CheHol14}. As explained below, we refrain from stating these rates because they are quite complicated and most likely far from optimal. \item In \cite{LieSeiYng03} the authors exhibit five different scaling regimes with different effective energy functionals for the ground state energy. Note that a direct comparison with our two regimes is not sensible for two reasons: First, we assume $\beta\in(0,\frac13)$ while in \cite{LieSeiYng03} the Gross-Pitaevskii scaling $\beta=1$ is considered.
As a consequence, in \cite{LieSeiYng03} the scattering length, i.e.\ the range of the interaction $w$, is always small compared to the small diameter $\epsi$ of the wave guide. The situation $\epsi/\mu\to 0$ (which we called strong confinement) does not occur for $\beta =1$. Secondly, \cite{LieSeiYng03} is specifically concerned with the ground state energy, where some terms in the energy functional can become negligible or can take a specific form depending on details of the ground state. \end{enumerate} \end{rem} \subsection*{Acknowledgements} We thank Steffen Gilg, Stefan Haag, Christian Hainzl, Jonas Lampart, S\"oren Petrat, Peter Pickl, Guido Schneider, and Christof Sparber for helpful discussions. The support by the German Science Foundation (DFG) within the GRK 1838 ``Spectral theory and dynamics of quantum systems'' is gratefully acknowledged. \subsection*{Ethical Statement} Funding: This work was funded by the German Science Foundation (DFG) within the GRK 1838. Conflict of Interest: The authors declare that they have no conflict of interest. \section{Structure of the proof and the main argument} In the proof we will not directly control the difference $\Tr \big |\gamma^{N,\epsi}_M(t)-|\varphi(t) \rangle \langle \varphi(t) |^{\otimes M} \big |$, but use a functional $\alpha(\psi^{N,\epsi}(t),\varphi(t))$ introduced by Pickl \cite{Pic08,KnoPic09, Pic11} to measure the ``distance'' between $\psi^{N,\epsi} $ and $\varphi$. For this measure of distance our proof also yields rates of convergence, which could be translated into rates of convergence for $\Tr \big |\gamma^{N,\epsi}_M(t)-|\varphi(t) \rangle \langle \varphi(t) |^{\otimes M} \big |$ as well. However, since these rates are presumably far from optimal, we refrain from stating them explicitly. The functional $\alpha$ is constructed from the following projections in the $N$-particle Hilbert space. \begin{defn}\label{def:pP} Let $p$ be an orthogonal projection in the one-particle space $L^2(\Omega)$.
For $i \in \{1, \dots ,N\} $ define on $L^2(\Omega )^{\otimes N}$ the projection operators \begin{align*} p_i:= \underbrace{ \id \otimes \cdots \otimes \id}_{i-1 \; \mathrm{times}} \otimes \, p \otimes \underbrace{ \id \otimes \cdots \otimes \id}_{N-i \; \mathrm{times}} \qquad\mbox{ and } \qquad q_i:=\id - p_i\,. \end{align*} For $0 \leq k \leq N$ let \begin{align*} P_{k} := \Big( q_1\cdots q_k p_{k+1} \cdots p_N \Big)_{\mrm{sym}}: = \sum_{\substack{J\subset\{1,\ldots, N\}\\ |J|=k}} \,\prod_{j\in J} q_j \prod_{j\notin J} p_j\,. \end{align*} For $k<0 $ and $k>N$ we set $P_{k } =0$. \end{defn} We will use the many-body projections $P_k$ exclusively for $p = |\varphi\rangle\langle\varphi|$, the orthogonal projection onto the subspace spanned by the condensate state $\varphi \in L^2(\Omega) $ with $\norm{\varphi}_{L^2(\Omega)}=1$. However, a number of simple algebraic relations, like \begin{equation}\label{Prel} \sum_{k=0}^N P_{k } = \id\,, \qquad \sum_{i=1}^N q_i P_{k } = k P_{k } \,, \end{equation} hold independently of the special choice for $p$ and will turn out to be very useful in the analysis of the mean field limit. The first identity in \eqref{Prel} follows from the fact that $q_i+p_i=\id$. For the second identity note that together with the first identity we have \[ \sum_{i=1}^N q_i = \sum_{i=1}^N q_i \sum_{k'=0}^N P_{k' } = \sum_{k'=0}^N \sum_{i=1}^N q_i P_{k' }= \sum_{k'=0}^N k' P_{k' }\,, \] where the last equality holds since $q_ip_i=0$ implies that each summand of $P_{k'}$, containing exactly $k'$ factors $q_j$, is reproduced exactly $k'$ times under $\sum_{i=1}^N q_i$. Projecting with $P_k$ yields the second identity, since $P_kP_{k'} = \delta_{k,k'} P_k$.
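For $N=2$, for instance, the three projections read
\[
P_0 = p_1p_2\,,\qquad P_1 = q_1p_2 + p_1q_2\,,\qquad P_2 = q_1q_2\,,
\]
and both identities in \eqref{Prel} can be verified by hand: $P_0+P_1+P_2 = (p_1+q_1)(p_2+q_2) = \id$, and, using $q_ip_i=0$,
\[
(q_1+q_2)\,P_1 = q_1 p_2 + p_1 q_2 = 1\cdot P_1\,,\qquad (q_1+q_2)\,P_2 = 2\, q_1q_2 = 2\cdot P_2\,.
\]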
\begin{defn}\label{hutdef} For any function $f:\field{N}_0\to \field{R}$ define the bounded linear operator \[ \widehat f :L^2(\Omega^N)\to L^2(\Omega^N)\,,\quad \psi\mapsto \widehat f \psi:= \sum_{k=0}^N f(k) P_{k } \psi \] and the functional $\alpha_f: L^2(\Omega^N)\times L^2(\Omega)\to \field{R} $ \[ \alpha_{f}\big(\psi,\varphi\big):= \left\langle \psi ,\widehat f\, \psi \right\rangle_{L^2(\Omega^{N})} = \sum_{k=0}^N f(k) \,\left\langle \psi , P_{k }\, \psi \right\rangle_{L^2(\Omega^{N})} \,. \] \end{defn} The heuristic idea behind this definition is the following. The operator $P_k$ projects onto the subspace of $L^2(\Omega^N)$ of those states, where exactly $k$ out of the $N$ particles are not condensed into $\varphi$. Components of $\psi\in L^2(\Omega^N)$ with $k$ particles outside the condensate are weighted by $f(k)$ in $\alpha_f(\psi, \varphi)$. In order to obtain a useful measure of distance between $\psi$ and the condensate $\varphi^{\otimes N}$, the function $f$ should thus be increasing and $f(0)$ should be (close to) zero. For $n(k) := \sqrt{k/N}$ it is easily seen that the functional $ \alpha_{n^2} $ is a good measure for condensation: Using the shorthand \[ \llangle \cdot,\cdot \rrangle := \langle \cdot , \cdot \rangle_{L^2(\Omega^N)}\,, \] we find for any symmetric $\psi\in L^2(\Omega^N)$ \begin{eqnarray} \alpha_{n^2}(\psi,\varphi) &= &\sum_{k=0}^N \frac{k}{N} \left\llangle \psi ,P_{k } \psi \right\rrangle \;\stackrel{\eqref{Prel}}{=}\; \sum_{k=0}^N \sum_{i=1}^N \frac{1}{N} \left\llangle \psi ,q_iP_{k } \psi \right\rrangle\nonumber\\& \stackrel{\rm symmetry}{=}& \sum_{k=0}^N \left\llangle \psi ,q_1P_{k } \psi \right\rrangle \;\stackrel{\eqref{Prel}}{=}\; \llangle \psi , q_1 \psi \rrangle =\|q_1\psi\|^2\,.\label{n2comp} \end{eqnarray} And in general we have the following equivalences. 
\begin{lem}\label{lem:equi} Let $\psi^N \in L^2_+(\Omega^N)$ be a sequence of normalised $N$-particle wave functions and for fixed $M\in\field{N}$ let $\gamma^N_M$ be the sequence of corresponding $M$-particle density matrices. Let $\varphi \in L^2(\Omega)$ be normalised. Then the following statements are equivalent: \begin{enumerate}[(i)] \item $\lim_{N\rightarrow \infty} \alpha_{n^a}(\psi^N,\varphi)=0 $ for some $ a>0 $ \item $\lim_{N\rightarrow \infty} \alpha_{n^a}(\psi^N,\varphi)=0 $ for any $ a>0 $ \item $ \lim_{N\rightarrow \infty} \left\| \gamma^N_M - | \varphi \rangle \langle\varphi |^{\otimes M}\right\| = 0$ for all $M\in \field{N}$ \item $ \lim_{N\rightarrow \infty} {\rm Tr} \left| \gamma^N_M - | \varphi \rangle \langle\varphi |^{\otimes M}\right|=0$ for all $M\in \field{N}$ \item $ \lim_{N\rightarrow \infty} {\rm Tr} \left| \gamma^N_1 - | \varphi \rangle \langle\varphi | \right|=0$ \end{enumerate} \end{lem} The proof of this lemma collects different statements somewhat scattered in the literature, cf.\ \cite{Pic11, KnoPic09}. Since the claim is at the basis of our result and since the proof is short and simple, we give it at the end of Subsection~\ref{sec:beta} for the convenience of the reader. In the proof of our main theorem we will work with the functional $\alpha_{m}$, where \begin{equation}\label{mdef} m (k):= \begin{cases} n(k) & \mathrm{for}\; k \geq N^{1-2\xi}\\ \frac{1}{2}(N^{-1+\xi}k+N^{-\xi})& \mathrm{else} \end{cases} \end{equation} for some $ 0<\xi<\frac12$ to be specified below. Since $n(k) \leq m (k) \leq \max(n(k),N^{-\xi})$ holds for all $ k \in \field{N}_0$, convergence of $\alpha_{m }$ to zero is equivalent to convergence of $ \alpha_{n}$ to zero and thus to all cases in Lemma~\ref{lem:equi}. We will use the shorthand \begin{align*} \alpha_{m }(t):= \alpha_{m } \big(\psi^{N,\epsi}(t), \varphi(t)\big) \end{align*} when we evaluate the functional $\alpha_{m }$ on the solutions to the time-dependent equations.
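The elementary bounds $n(k)\leq m(k)\leq \max(n(k),N^{-\xi})$ invoked above can be checked directly: for $k\geq N^{1-2\xi}$ one has $m(k)=n(k)$, while for $k< N^{1-2\xi}$ the inequality of arithmetic and geometric means gives
\[
m(k) = \tfrac{1}{2}\left(N^{-1+\xi}k + N^{-\xi}\right) \;\geq\; \sqrt{N^{-1+\xi}k\cdot N^{-\xi}} \;=\; \sqrt{\tfrac{k}{N}} \;=\; n(k)\,,
\]
and both summands are bounded by $N^{-\xi}$ in this regime, so that also $m(k)\leq N^{-\xi}$.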
Finally, the quantity that we can actually control in the proof is \begin{equation}\label{alphadef} \alpha_\xi (t):= \alpha_{m }(t)+ \left|E^{\psi^{N,\epsi}(t)}(t)-E^{\varphi(t)}(t)\right|\,. \end{equation} We will now state two key propositions and then give the proof of Theorem~\ref{thm:thm1}. The simple strategy is to show bounds for the time-derivative of $\alpha_\xi$ and then use Gr\"onwall's inequality. With the expression from Lemma~\ref{LapKoord} for the Laplacian in the adapted coordinates we find that \begin{eqnarray*} H(t) &=& \sum_{i=1}^N \left( -\left( U_\epsi \Delta U_{\epsi}^* \right)_{r_i} +\frac{1}{\epsi^2}V^\perp(y_i)+ V(t, f_\epsi(r_i))\right)+ a \sum_{i < j}\frac{1}{\mu^3} \,w \left( \frac{ f_\epsi(r_i)-f_\epsi(r_j)}{\mu}\right)\\ &=& \sum_{i=1}^N \left( -\frac{\partial^2}{\partial x_i^2} -(\theta'(x_i) L_i) ^2 - \frac{1}{\epsi^2} \Delta_{y_i} +\frac{1}{\epsi^2}V^\perp(y_i) + V(t, r_i^\epsi) - \frac{\kappa(x_i)^2}{4} + R^{(1)}_i\right) \\ &&\qquad + \; \frac{1}{N-1} \sum_{i<j} w^{\epsi, \beta,N}_{ij} \end{eqnarray*} with \[ R^{(1)}_i := - \partial_{x_i} \theta'(x_i) L_i - \theta'(x_i) L_i \partial_{x_i} \;+ \;\left(V_{\rm bend}(r_i)+\frac{\kappa(x_i)^2}{4} \right) \;-\; \epsi \,S^\epsi _i \] and \[ w^{\epsi, \beta,N}_{ij}(r_1,\ldots,r_N) := (N-1)\frac{a}{\mu^3} \,w \left( \frac{ f_\epsi(r_i)-f_\epsi(r_j)}{\mu}\right) \,. \] \begin{prop}\label{lem:beta.g} Let the assumptions of Theorem~\ref{thm:thm1} hold and let $\alpha_\xi(t)$ be given by \eqref{alphadef}. 
Then the time-derivative of $\alpha_\xi(t)$ is bounded by \begin{align*} \left|\frac{{\mathrm{d}}}{{\mathrm{d}} t} \,\alpha_\xi(t)\right| \leq 2|\mathrm{I}(t)| + |\mathrm{II}(t)| + 2|\mathrm{III}(t)|+ |\mathrm{IV}(t)| \end{align*} with \begin{eqnarray*} \mathrm{I}(t)&:=&N\left\llangle \psi^{N,\epsi}(t), p_1 p_2 \left[ w^{\epsi, \beta,N}_{12}- b|\Phi(t,x_2)|^2 ,\widehat m \right] p_1q_2 \psi^{N,\epsi}(t) \right\rrangle\\ \mathrm{II}(t)&:=& N\left\llangle \psi^{N,\epsi}(t), p_1 p_2 \left[ w^{\epsi, \beta,N}_{12}, \widehat m\right] q_1q_2 \psi^{N,\epsi}(t) \right\rrangle\\ \mathrm{III}(t)&:= &N\left\llangle \psi^{N,\epsi}(t), p_1 q_2 \left[ w^{\epsi, \beta,N}_{12}- b|\Phi(t,x_1)|^2 , \widehat m\right] q_1q_2 \psi^{N,\epsi}(t) \right\rrangle\\ \mathrm{IV}(t)&:=& \left|\left\llangle \psi^{N,\epsi}(t), \dot V(t,x_1,\epsi y_1) \psi^{N,\epsi}(t) \right\rrangle - \langle \Phi, \dot V(t,x,0) \Phi \rangle_{L^2(\field{R})}\right| \\ &&+\; 2N \left\llangle \psi^{N,\epsi}(t), p_1 \left[ V(t,x_1,\epsi y_1)-V(t,x_1,0) , \widehat m\right] q_1 \psi^{N,\epsi}(t) \right\rrangle\\ &&+\; 2N \left\llangle \psi^{N,\epsi}(t), p_1 \left[ (\theta'(x_1)L_1)^2 + |\theta'(x_1)|^2 \,\|L\chi\|^2 ,\widehat m \right] q_1 \psi^{N,\epsi}(t) \right\rrangle \\ &&+\; 2N \left\llangle \psi^{N,\epsi}(t), p_1 \left[ R^{(1)}_1 ,\widehat m \right] q_1 \psi^{N,\epsi}(t) \right\rrangle \end{eqnarray*} \end{prop} The three terms I--III contain the two-body interaction and are delicate to bound because of the factor $N$ in front. Very roughly speaking, Term~I is small because, sandwiched between the projections $p_1$ onto the state $\varphi$ in the first variable, the full interaction and the mean-field interaction cancel each other at leading order. In Term~II and Term~III the full interaction $w^{\epsi, \beta,N}_{12}$ acting on the range of $q_1q_2$ becomes singular as $(N,\epsi)\to(\infty,0)$, but both can still be bounded in terms of $\alpha_\xi$, albeit with considerable effort.
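The cancellation in Term~I can be made plausible by a formal computation: since $p_1 \,w^{\epsi, \beta,N}_{12}\, p_1 = p_1 \int_{\Omega} |\varphi(r_1)|^2\, w^{\epsi, \beta,N}_{12}(r_1,r_2)\,{\mathrm{d}} r_1$ and $\varphi = \Phi\chi$, formally replacing $f_\epsi(r)$ by $r^\epsi$ and substituting $u=(x_1-x_2)/\mu$, $v=\epsi(y_1-y_2)/\mu$ yields
\[
p_1\, w^{\epsi, \beta,N}_{12}\, p_1 \;\approx\; p_1\, \frac{(N-1)\,a}{\epsi^2} \int_{\field{R}^3} \big|\Phi(x_2+\mu u)\big|^2\, \big|\chi(y_2+\tfrac{\mu}{\epsi} v)\big|^2\, w(u,v)\,{\mathrm{d}} u\, {\mathrm{d}}^2 v\,.
\]
As $(N-1)a \approx \epsi^2$ in our scaling (cf.\ the identity $\frac{a}{\mu^3}=\epsi^{2-6\beta}N^{3\beta-1}$ used in the proof of Theorem~\ref{thm:thm1}), for moderate confinement, i.e.\ $\mu/\epsi\to 0$, the right-hand side converges to $p_1\,|\Phi(x_2)|^2\,|\chi(y_2)|^2 \int w$, which, tested against $\chi$ in the normal variable $y_2$, is exactly the subtracted mean-field potential $b\,|\Phi(x_2)|^2$.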
The one-particle contributions in Term~IV are rather easy to handle, as all potentials appearing remain bounded also on the range of $q_1$. However, the first line of IV is only small if $\psi$ is close to the condensate. In the following estimates we use the function $g(t)>0$ given in terms of the a priori bound on the energy per particle, \[ |E^{\psi^{N,\epsi}(t)} (t) | \;\leq \;|E^{\psi^{N,\epsi}(0)} (0) | + \int_0^t \| \dot V(s,\cdot)\|_{L^\infty(\Omega)}{\mathrm{d}} s \;=:\; g^2(t)-1\,. \] If the external potential is time-independent, then $g^2(t)\equiv1+|E^{\psi^{N,\epsi}(0)} (0) |$. We defer the proof of Proposition~\ref{lem:beta.g} and also of the following one to Section~4. \begin{prop}\label{lem:3termeg} For moderate confinement we have the bounds \[ \begin{array}{ll} |\mathrm{I}(t)|\lesssim g(t)\,\left\| \Phi(t)\right\|^3_{H^2(\field{R})} \left( \frac{\mu}{\epsi} + N^\xi \epsi + \frac{ \epsi^2}{\mu^{\frac32}}\right) \,,& |\mathrm{II}(t)|\lesssim \norm{\Phi(t)}_{H^2(\field{R})}^2 {\alpha_\xi(t)} + N^\xi \frac{a}{\mu^3} \,, \\ |\mathrm{III}(t)| \lesssim g(t)\,\left\| \Phi(t)\right\|^\frac{3}{2}_{H^2(\field{R})} \left( \alpha_\xi(t) + \frac{\mu}{\epsi} +\frac{a}{\mu^3}+ \frac{ \epsi^2}{\mu^{\frac32}} \right) \,,& |\mathrm{IV}(t)| \lesssim \alpha_\xi(t)+\epsi\,\|\Phi(t)\|_{H^2(\field{R})} + g(t) N^\xi \epsi\,. \end{array} \] For strong confinement we have the bounds \[ \begin{array}{ll} |\mathrm{I}(t)|\lesssim \mu \left\| \Phi(t)\right\|^2_{H^2(\field{R})} \,,& |\mathrm{II}(t)|\lesssim (\alpha_\xi(t)+\mu)\, \left\| \Phi(t)\right\|_{H^2(\field{R})} \,, \\[2mm] |\mathrm{III}(t)| \lesssim (\alpha_\xi(t)+\mu)\, \left\| \Phi(t)\right\|_{H^2(\field{R})} \,,& |\mathrm{IV}(t)| \lesssim \alpha_\xi(t)+\epsi\,\|\Phi(t)\|_{H^2(\field{R})} + g(t) N^\xi \epsi\,.
\end{array} \] \end{prop} Here and in the remainder of the paper we use the notation $A \lesssim B$ to indicate that there exists a constant $C\in\field{R}$ independent of all ``variable quantities'' $\epsi,N,t,\xi, \psi^{N,\epsi}(0)$, and $ \Phi_0$ such that $A\leq CB$. Note that $C$ can depend on ``fixed quantities'' like the shape of the waveguide determined by $c,\theta, \Omega_{\rm f}$, and also on the potentials $V,w$, $V^\perp$ and on $\beta$. \begin{proof}[Proof of Theorem\,\ref{thm:thm1}] Combining Propositions~\ref{lem:beta.g} and\,\ref{lem:3termeg} we obtain for the case of moderate confinement that \[ \left|\frac{{\mathrm{d}}}{{\mathrm{d}} t} \,\alpha_\xi(t)\right| \leq C\, g(t) \,\left\| \Phi(t)\right\|^3_{H^2(\field{R})}\left( \alpha_\xi(t) + \frac{\mu}{\epsi} + \frac{ \epsi^2}{\mu^{\frac32}} +N^\xi \epsi + N^\xi \frac{a}{\mu^3}\right) \] for a constant $C<\infty$ independent of $t,\epsi,N,\xi$ and $\psi^{N,\epsi}(0)$. Thus Gr\"onwall's lemma proves Theorem~\ref{thm:thm1} once we show that for some $\xi>0$ all terms in the bracket besides $\alpha_\xi(t)$ vanish in the limit $(N,\epsi )\to ( \infty,0)$ along any admissible and moderately confining sequence $( N,\epsi)$. This is true for ${\mu}/{\epsi}$ and ${ \epsi^2}/{\mu^{\frac32}} = \sqrt{{ \epsi^4}/{\mu^{3}}}$ by assumption. Since \[ \frac{\epsi^4}{\mu^3} = \epsi^{4-6\beta} N^{3\beta} \rightarrow 0 \quad\mbox{implies}\quad \epsi N^{ \frac{3\beta}{4-6\beta}}\rightarrow 0\,, \] we have that \[ N^\xi \epsi = \left(N^\xi N^{-\frac{3\beta}{4-6\beta}} \right) \left(\epsi N^{ \frac{3\beta}{4-6\beta}}\right)\to 0 \quad \mbox{for} \quad 0<\xi \leq \frac{3\beta}{4-6\beta} \] and \[ N^\xi \frac{a}{\mu^3} =N^\xi \epsi^{2-6\beta} N^{3\beta-1}= \left( N^\xi N^{-\frac{3\beta(2-6\beta)}{4-6\beta}}\right) \left(\epsi N^{ \frac{3\beta}{4-6\beta}}\right)^{2-6\beta} N^{3\beta-1} \to 0 \quad \mbox{for} \quad 0<\xi \leq \frac{2-6\beta}{2-3\beta}\,.
\] Thus in the case of moderate confinement \[ \lim_{( N,\epsi)\to ( \infty,0)} \alpha_\xi(t) = 0 \] follows by Gr\"onwall's lemma for $0<\xi\leq \min\left\{ \frac{3\beta}{4-6\beta}, \frac{2-6\beta}{2-3\beta} \right\}$ and thus with Lemma~\ref{lem:equi} also Theorem~\ref{thm:thm1}. Analogously the statement for strong confinement follows for $0<\xi\leq \frac{3\beta}{4-6\beta} $. \end{proof} \section{Proofs of the Propositions} \subsection{Preliminaries}\label{sec:beta} In this section we prove several lemmata that will be used repeatedly in the proofs of the propositions. The first ones are concerned with properties of the operators $\widehat f$ that are at the basis of the condensation measures $\alpha_f$ (see Definition~\ref{hutdef}). One should keep in mind that they are defined with respect to some orthogonal projection $p $ in the one-particle space $L^2(\Omega)$. While the first lemma is purely algebraic and holds for general $p$, later on $p = |\varphi\rangle\langle\varphi|$ will always be the projection onto the one-dimensional subspace spanned by the condensate vector $\varphi\in L^2(\Omega)$. \begin{defn} For $j \in \field{Z} $ we define the shift operator on a function $f:\{0,\cdots, N \} \to \field{R}$ by \begin{align*} (\tau_j f)(k) = f(k+j), \end{align*} where we set $(\tau_j f)(k)=0 $ for $k+j \notin \{0, \dots , N \} $. \end{defn} \begin{lem}\label{lem:weights} Let $f,g: \{0,\cdots, N \} \rightarrow \field{R} $, $j\in \{1,\dots, N\}$, and $k\in \{0,\dots, N\}$. \begin{enumerate}[(a)] \item \label{a} It holds that \begin{align*} \widehat{f}\,\widehat{g}=\widehat{fg}=\widehat{g}\,\widehat{f}\,, \qquad \widehat f \,p_j = p_j \widehat f\,, \qquad \widehat f \,q_j = q_j \widehat f \,, \quad \mbox{and }\quad \widehat f P_{k }= P_{k } \widehat f\,.
\end{align*} \item \label{c} Let $\phi,\psi \in L^2_+(\Omega^N) $ be symmetric and $n(k) = \sqrt{k/N}$, then \begin{align*} \left\llangle \phi, \widehat f \,q_j \psi \right\rrangle &=\left\llangle \phi, \widehat f\, \widehat n^2 \psi \right\rrangle \,. \end{align*} If, in addition, $f$ is non-negative, then for $i\in \{1,\dots N\}$, $i \neq j$, it holds that \begin{align*} \left\llangle \psi, \widehat f q_i q_j \psi \right\rrangle &\leq \tfrac{N}{N-1} \left\llangle \psi, \widehat f \,\widehat n^4 \psi \right\rrangle . \end{align*} \item \label{lem:weightsc} Let $T:L^2(\Omega)^{\otimes N}\to L^2(\Omega)^{\otimes N}$ be a bounded operator that acts only on the factors $i$ and $j$ in the tensor product, e.g.\ the two-body potential $w_{ij}$. Then for $Q_0 := p_i p_j$, $Q_1 \in \{ p_i q_j, q_i p_j\}$, and $ Q_2:= q_i q_j $ we have \begin{align*} \widehat f \,Q_\nu T Q_\mu &= Q_\nu T Q_\mu \,\widehat{\tau_{\nu-\mu} f}\,, \\ Q_\nu T Q_\mu\, \widehat f &= \widehat { \tau_{\mu-\nu} f} \,Q_\nu T Q_\mu \,. \end{align*} \end{enumerate} \end{lem} \begin{proof} \begin{enumerate}[(a)] \item All commutation relations follow immediately from the definitions. E.g.\ \begin{align*} \widehat f \, \widehat g= \sum_{k,l} f(k)g(l) P_{k } P_{l } = \sum_{k } f(k)g(k)P_k = \widehat{fg}= \widehat g\, \widehat f. \end{align*} \item For the equality we find using the symmetry of $\psi$ and $\phi$ and \eqref{Prel} that \[ \llangle \phi, \widehat f \,q_j \psi \rrangle =\frac{1}{N}\sum_{i=1}^N \llangle \phi, \widehat f \,q_i \psi \rrangle =\sum_{k=0}^N\sum_{i=1}^N \frac{f(k)}{N} \llangle \phi, q_i P_k\psi \rrangle =\sum_{k=0}^N f(k) \frac{k}{N}\llangle \phi, P_k\psi \rrangle = \llangle \phi, \widehat f\, \widehat n^2 \psi \rrangle\,. \] For the proof of the inequality let without loss of generality $i=1,j=2$.
Then \begin{eqnarray*} \llangle \psi, \widehat f q_1 q_2 \psi \rrangle &=& \tfrac{1}{N(N-1)} \sum_{i \neq j}\llangle \psi, \widehat f q_i q_j \psi \rrangle \stackrel{f\geq0}{ \leq} \tfrac{1}{N(N-1)} \sum_{i , j}\llangle \psi, \widehat f q_i q_j \psi \rrangle \\& =& \tfrac{1}{N(N-1)} \sum_{k=0}^N\sum_{i,j=1}^N f(k) \llangle \psi, q_i q_j P_k\psi \rrangle\stackrel{\eqref{Prel}}{=} \tfrac{N^2}{N(N-1)} \sum_{k=0}^N f(k) \frac{k^2}{N^2} \llangle \psi, P_k\psi \rrangle\\&=&\tfrac{N}{(N-1)} \llangle \psi, \widehat f \widehat n^4 \psi \rrangle . \end{eqnarray*} \item Let without loss of generality $i=1$ and $j=2$, and let $P^{12}_k := \id\otimes \id\otimes P_{k,N-2}$, where $P_{k,N-2}$ is the operator $P_k$ defined on $L^2(\Omega^{N-2})$. Then \begin{eqnarray*} \widehat f \,Q_\nu T Q_\mu &=& \sum_{k=0}^N f(k) P_k \,Q_\nu TQ_\mu = \sum_{k=\nu}^{N-2+\nu}f(k) P_k \,Q_\nu TQ_\mu\\&=& \sum_{k=\nu}^{N-2+\nu}f(k) P_{k-\nu}^{12} \,Q_\nu TQ_\mu = \sum_{k=\nu}^{N-2+\nu}f(k) \,Q_\nu TQ_\mu\,P_{k-\nu}^{12} \\&=& \sum_{k=\nu}^{N-2+\nu}f(k) \,Q_\nu TQ_\mu\,P_{k-\nu+\mu} = \sum_{k'=\mu}^{N-2+\mu}f(k'+(\nu-\mu)) \,Q_\nu TQ_\mu\,P_{k'} \\&=& Q_\nu T Q_\mu\, \widehat{\tau_{\nu-\mu} f} \, \end{eqnarray*} and the converse direction follows in the same way. \end{enumerate} \end{proof} From now on $P_k$ and the derived operations\,\, $\widehat{}$\,\, and $\alpha$ refer to the projection $p = |\varphi\rangle\langle\varphi|$ onto the one-dimensional subspace spanned by the one-particle wave function $\varphi\in L^2(\Omega)$. We make this explicit only within the following lemma. \begin{lem}\label{hat.} Let $\varphi(t)= \Phi(t) \chi $, where $\chi $ is an eigenfunction of $- \Delta_y + V^\perp$ on $\Omega_\mathrm{f}$ and $\Phi(t)$ a solution to \eqref{equ:grosspqwg} with $\Phi_0\in H^2(\field{R})$.
Then for all $f:\{0,\ldots,N\}\to \field{R}$ \begin{enumerate}[(a)] \item $ P_k^{\varphi(\cdot)} \in C^1(\field{R}, \mathcal{L}(L^2(\Omega^N) ) )$ for all $k\in \{0, \dots ,N\}$ and thus also $\widehat f^{\varphi(t)} \in C^1(\field{R}, \mathcal{L}(L^2(\Omega^N) ) )$, \item $ \left[ -\Delta_{y_i} + V^\perp(y_i),\widehat f^{\varphi(t)}\,\right]=0$ for all $ i\in \{1, \dots ,N\}\,, $ \item Let $H^\Phi(t):= \sum_{i=1}^N h_i^\Phi(t) $ where $h_i^\Phi(t) $ denotes the one-particle operator $h^\Phi(t)$ (cf.\ \eqref{equ:grosspqwg}) acting on the $i$th factor in $L^2(\Omega^N)$. Then \[ \mathrm{i}\frac{{\mathrm{d}}}{{\mathrm{d}} t} \widehat f^{\varphi(t)} =\left[H^\Phi(t), \widehat f^{\varphi(t)}\,\right]\,. \] \end{enumerate} \end{lem} \begin{proof} \begin{enumerate}[(a)] \item This follows immediately from $\varphi \in C^1(\field{R}, L^2(\Omega))$. \item Since $\chi$ is an eigenfunction of $-\Delta_y + V^\perp$, the range of $p=|\varphi\rangle\langle\varphi|$ with $\varphi=\Phi\chi$ lies in the corresponding eigenspace, so $p_i$, and hence every $P_k$ and $\widehat f^{\varphi(t)}$, commutes with $-\Delta_{y_i} + V^\perp(y_i)$. \item Because of \eqref{equ:grosspqwg} the projection $|\Phi(t)\rangle\langle\Phi(t)|$ satisfies the differential equation $\mathrm{i} \partial_t |\Phi(t)\rangle\langle\Phi(t)|= \left[h^\Phi(t) ,|\Phi(t)\rangle\langle\Phi(t)| \right]$ and thus $\mathrm{i} \partial_t p_i(t)= \left[h^\Phi_i(t) ,p_i(t) \right]$ and $\mathrm{i} \partial_t q_i(t)= \left[h^\Phi_i(t),q_i(t) \right]$. The product rule then implies for any $J\subset \{1,\ldots,N\}$ that \[ \mathrm{i} \, \partial_t \prod_{j\in J} q_j(t) \prod_{j\notin J} p_j(t) = \sum_{i=1}^N \Big[ h_i^\Phi(t), \prod_{j\in J} q_j(t) \prod_{j\notin J} p_j(t) \Big] = \Big[ H^\Phi(t), \prod_{j\in J} q_j(t) \prod_{j\notin J} p_j(t)\Big] \,. \] As $\widehat f^\varphi$ is a linear combination of operators of the above form, the claim follows. \end{enumerate} \end{proof} For the next lemma recall the definition \eqref{mdef} of the function $m(k)$ defining our weight $\alpha_{m}$.
Because of Lemma~\ref{lem:weights} (c) and the form of the terms I--IV in Proposition~\ref{lem:beta.g}, the difference $m_\ell(k)$ defined below will appear many times in our estimates. \begin{lem}\label{lem:qs&N} Let $\psi\in L^2_+(\Omega^N)$ be symmetric, $\ell \in \field{N}$ and \begin{equation}\label{melldef} m_\ell(k) := N (m(k) - \tau_{-\ell}m(k) )= N ( m(k)- m(k-\ell))\,, \end{equation} where the function $m(k)$ was defined in \eqref{mdef}. \begin{enumerate}[(a)] \item It holds that \begin{align*} 0\leq m_\ell(k) \leq \begin{cases} \ell\sqrt{\frac{ N}{k}} \qquad &k\geq N^{1-2\xi}+\ell\\ \frac{\ell}{2} N^{\xi} \qquad &k < N^{1-2\xi}+\ell \end{cases}\,, \end{align*} and \[ \norm { \widehat{m_\ell} q_1 \psi} \leq \ell\,\|\psi\| \quad\mbox{and}\quad \norm { N\big(\widehat{n}-\widehat{\tau_{-\ell} n}\big) q_1 \psi} \leq \ell\,\|\psi\|\,. \] \item Let $q^\chi := {\bf 1}_{L^2(\field{R})}\otimes( {\bf1}_{L^2(\Omega_{\rm f})}- |\chi\rangle\langle\chi|)$ be the projection onto the orthogonal complement of the ground state in the confined direction. Then \[ \left\llangle \psi, q^\chi_1 \psi \right\rrangle \lesssim \epsi^2\left(1+ |E^\psi (t) |\right) \] and \[ \left\| \widehat{m_1}\, q^\chi_1 \psi \right\|\lesssim N^\xi \,\epsi \, \left(1+ |E^\psi (t) |\right)^\frac12 \,. \] \end{enumerate} \end{lem} \begin{proof} First recall that $n$ and $m$ are monotonically increasing functions, cf.~\eqref{mdef}. Moreover \begin{align*} (n(k)-n(k-\ell))^2= \Big (\tfrac{\sqrt{k}-\sqrt{ k-\ell }}{\sqrt{N}}\Big )^2= \tfrac{\ell^2}{(\sqrt{k}+\sqrt{ k-\ell })^2N} \leq \tfrac{\ell^2}{ k N} \end{align*} and thus also $m_\ell(k) \leq \ell\sqrt{\frac{N}{k}}$ for $k\geq N^{1-2\xi}+\ell$ follows. The bound $m_\ell(k)\leq \frac{\ell}{2} N^{\xi}$ is obvious for $k < N^{1-2\xi}$ and holds also for $ N^{1-2\xi}\leq k<N^{1-2\xi}+\ell$, since $\sqrt{\frac{k}{N}} \leq \frac{1}{2}(N^{-1+\xi}k+N^{-\xi})$ for such $k$.
For any $f:\{0,\ldots,N\}\to \field{R}$ we find with Lemma~\ref{lem:weights} (b) that \begin{align*} \norm { \left(\widehat{f}-\widehat{\tau_{-\ell} f}\right) q_1 \psi}^2= \left\llangle \psi, \left(\widehat{f}-\widehat{\tau_{-\ell} f}\right)^2 q_1 \psi\right\rrangle= \sum_{k=1}^N \Big(f(k)-f(k-\ell)\Big)^2\frac{k}{N}\left\llangle \psi, P_{k } \psi \right\rrangle\, . \end{align*} Hence part (a) of the lemma follows with the above estimates on $m_\ell$ and the identity \eqref{Prel}. From \begin{eqnarray*}\lefteqn{ E^\psi (t) = \tfrac{1}{N}\left\llangle \psi, \left( H(t)- N\tfrac{E_0}{\epsi^2}\right) \psi\right\rrangle} \\ & =& \left\llangle \psi, \frac{1}{N}\left( \sum_{i=1}^N \left( \left(\partial_{x_i} +\theta'(x_i) L_i \right) \rho_\epsi^{-2}(r_i) \left(\partial_{x_i} +\theta'(x_i) L_i \right) - V_{\rm bend}(r_i)+V(r_i)\right.\right.\right.\\ &&\hspace{2cm} \;+\left.\tfrac{1}{\epsi^2}(-\Delta_{y_i}+V^\perp(y_i)-E_0)\right)\left. + \frac{a}{\mu^3}\sum_{j < i } w \left( \frac{ f_\epsi(r_i)-f_\epsi(r_j)}{\mu}\right) \Bigg)\psi \right\rrangle \\ &\weq{{\rm \bf A2}}{ \geq} & \left\llangle \psi,\left(- V_{\rm bend}(r_1) + \tfrac{1}{\epsi^2}\left(-\Delta_{y_1}+V^\perp(y_1 )-E_0\right)\right) \psi \right\rrangle\\ & =& -\left\llangle \psi, V_{\rm bend}(r_1)\psi \right\rrangle + \left\llangle \psi, \tfrac{1}{\epsi^2}\left(-\Delta_{y_1}+V^\perp(y_1)-E_0\right)q_1^\chi \psi \right\rrangle\\ & \geq &-\left\| V_{\rm bend} \right\|_{L^\infty(\Omega)}+ \tfrac{C}{\epsi^2} \left\llangle \psi, q_1^\chi \psi \right\rrangle \end{eqnarray*} we infer that $\llangle \psi, q^\chi_1 \psi \rrangle \lesssim \epsi^2\left(1+ |E^\psi (t) |\right)$. For the proof of the remaining estimate we use that $q_1^\chi$ commutes with $q_1$ and thus also with~$P_k$.
Hence \begin{eqnarray*} \norm{\widehat{m_1}\, q_1^\chi \psi }^2 &=& \sum_{k=1}^N m_1^2(k) \llangle \psi, P_k \, q_1^\chi\psi \rrangle \\ &\leq& \tfrac{1}{4}\sum_{k=1}^{ \lfloor N^{1-2\xi} \rfloor} N^{ 2\xi}\, \left\llangle \psi, P_{k} q_1^\chi\psi \right\rrangle + \tfrac{1}{4} \sum_{k= \lceil N^{1-2\xi} \rceil}^N \frac{ N }{k } \left\llangle \psi, P_k q_1^\chi\psi \right\rrangle\\ &\leq& \tfrac12 N^{2\xi}\sum_{k=1}^{ N} \llangle \psi, P_k\,q^\chi_1 \psi \rrangle = \tfrac12 N^{2\xi} \llangle \psi, q_1^\chi\psi\rrangle \lesssim N^{2\xi} \,\epsi^2\, \left(1+ |E^\psi (t) |\right)\,. \end{eqnarray*} \end{proof} The meaning of Lemma~\ref{lem:qs&N} (b) is the following. Due to symmetry of the wave function, the ``probability'' that one specific particle in a many-body state gets excited in the confined direction can be controlled by the renormalised energy per particle $E^\psi(t)$ for any $t\in\field{R}$. Next we Taylor expand the scaled two-body interaction $w^{\epsi, \beta,N}_{12}$. \begin{lem}\label{R2Lemma} Assuming {\bf A2} for the two-body potential $w$ and {\bf A1} for the geometry of the waveguide, it holds that \begin{eqnarray*} \frac{\epsi^2}{\mu^3} \; w \left( \frac{ f_\epsi(r_1)-f_\epsi(r_2)}{\mu}\right) &=& \frac{\epsi^2}{\mu^3}\; w \left( \frac{ r^\epsi_1-r^\epsi_2}{\mu}\right) \,+\, R(r_1 , r_2 ) \, \frac{\epsi^2}{\mu^3}\; \tilde w'\left( \tfrac{\left\| r^\epsi_2-r^\epsi_1 \right\|^2}{\mu^2} \right) \, +\, \frac{\epsi^2}{\mu^3} \,{\mathcal{O}}\left( R(r_1 , r_2 )^2 \right) \\[1mm] &=: & w^0_{12}(r_1,r_2) + T_1(r_1 , r_2 ) + T_2(r_1 , r_2 ) \end{eqnarray*} with \[ \overline R := \sup_{r_1,r_2\in\Omega} | R(r_1 , r_2 )| \lesssim \epsi +\mu \,. \] \end{lem} \begin{proof} The proof is in essence a Taylor expansion, but we need to be careful with the different scalings. First recall the maps $f$, $T_\theta$ and $D_\epsi$ defined in the introduction.
$D_\epsi$ is linear and for the differentials of $f$ and $T_\theta$ one easily computes \begin{eqnarray*} DT_\theta(x,y) &=& \begin{pmatrix} 1 & 0 && 0 \\ T'_{\theta(x)}y&& T_{\theta(x)} \end{pmatrix}\\[2mm] Df (r) h &=& (c'(x), e_1(x), e_2(x)) h \;+\; (y^1 e'_1(x)+y^2 e'_2(x)) \,h_x\\ &=:& A(x) h - y\cdot\kappa(x)c'(x) h_x\,. \end{eqnarray*} For $ f_\theta := f\circ T_\theta$ we thus obtain \begin{eqnarray*} D f_\theta (r)\, h &=& D(f\circ T_\theta) (r) h = Df\circ T_\theta(r) \;DT_\theta(r) h \\ &=&\left( (A\circ T_\theta)(x) + (b\circ T_\theta)(r) (1,0,0)^T\right) \begin{pmatrix} 1 & 0 && 0 \\ T'_{\theta(x)}y&& T_{\theta(x)}\end{pmatrix}h\\ &=& A(x) T_\theta(x) h + A(x) \begin{pmatrix} 0 & 0 & 0 \\ T'_{\theta(x)}y&0&0\end{pmatrix}h - (T_{\theta(x)}y\cdot \kappa(x)) c'(x) h_x\\ &=& A(x) T_\theta(x) h + \left( (e_1(x),e_2(x)) T'_{\theta(x)}y - (T_{\theta(x)}y\cdot \kappa(x)) c'(x) \right) h_x\\ &=:& A_\theta(x) h + b_\theta(r) h_x\,. \end{eqnarray*} Note that $A_\theta(x)$ is an orthogonal matrix for all $x\in\field{R}$ and $\|b_\theta(x,y)\|_{\field{R}^3} \lesssim \|y\|_{\field{R}^2}$. Hence for $\|y\|_{\field{R}^2}$ small enough, $D f_\theta (r)$ is invertible and \begin{equation}\label{festi} \big| \left\| f_\theta(r_2) - f_\theta(r_1) \right\|_{\field{R}^3} - \left\| r_2-r_1 \right\|_{\field{R}^3}\big| \lesssim \|y\|_{\field{R}^2} \left\| f_\theta(r_2) - f_\theta(r_1) \right\|_{\field{R}^3} \,.
\end{equation} Since $f\in C^\infty(\field{R}^3)$, Taylor expansion gives \begin{eqnarray*} f_\theta(r_2) - f_\theta(r_1) &=& A_\theta\left(\tfrac{x_1+x_2}{2}\right)\,(r_2-r_1) +\; b_\theta\left(\tfrac{r_1+r_2}{2}\right) \,(x_2-x_1) + {\mathcal{O}}(|r_2-r_1|^3) \end{eqnarray*} and thus \begin{eqnarray*} \|f_\theta(r_2) - f_\theta(r_1) \|^2 &=& \left\langle A_\theta\left(\tfrac{x_1+x_2}{2}\right)\,(r_2-r_1 ), A_\theta\left(\tfrac{x_1+x_2}{2}\right)\,(r_2-r_1 )\right\rangle\\ && +\; 2 \left\langle A_\theta \left(\tfrac{x_1+x_2}{2}\right)\,(r_2-r_1 ), b_\theta\left(\tfrac{r_1+r_2}{2}\right) \right\rangle\,(x_2-x_1) \\ && +\; \left\langle b_\theta\left(\tfrac{r_1+r_2}{2}\right) ,b_\theta\left(\tfrac{r_1+r_2}{2}\right) \right\rangle\,(x_2-x_1) ^2+ {\mathcal{O}}(|r_2-r_1|^3)\\ &=& \left\| r_2-r_1 \right\|^2 \;+\; \widetilde R(r_1,r_2) \end{eqnarray*} with \[ |\widetilde R(r_1,r_2)|= {\mathcal{O}}(\|r_2-r_1\|\,|x_2-x_1|\|y\|_{\field{R}^2}+ |x_2-x_1|^2\|y\|_{\field{R}^2}^2 +\|r_2-r_1\|^3 )\,. \] Now recall that \[ f_\epsi := f\circ T_\theta\circ D_\epsi\quad\mbox{i.e.}\quad f_\epsi (r) = f_\theta(r^\epsi)\,. \] Since $w\left( \tfrac{1}{\mu} \left( f_\epsi(r_2) - f_\epsi(r_1) \right)\right) \not=0$ only for $\|f_\epsi(r_2) - f_\epsi(r_1)\|<\mu$ and thus according to \eqref{festi} also $\|r^\epsi_2-r^\epsi_1\|<\mu(1+\epsi)$, we have that \[ |\widetilde R(r_1^\epsi, r_2^\epsi)| = {\mathcal{O}}(\mu^2 \epsi +\mu^2 \epsi^2+ \mu^3 )\,. 
\] Taylor expansion of $w(r) = \tilde w(|r|^2)$ finally gives the desired result with $R:=\widetilde R/\mu^2$, \begin{eqnarray*}\lefteqn{\hspace{-0.5cm} w\left( \tfrac{1}{\mu} \left( f_\epsi(r_2) - f_\epsi(r_1) \right)\right) \;=\; \tilde w\left( \tfrac{1}{\mu^2} \left\| f_\epsi(r_2) - f_\epsi(r_1) \right\|^2\right)}\\ &=& \tilde w\left( \tfrac{1}{\mu^2}\left\| r^\epsi_2-r^\epsi_1 \right\|^2\right) + \frac{\widetilde R(r_1^\epsi, r_2^\epsi)}{\mu^2} \, \tilde w'\left( \tfrac{1}{\mu^2} \left\| r^\epsi_2-r^\epsi_1 \right\|^2 \right) + {\mathcal{O}}\left(\frac{\widetilde R(r_1^\epsi, r_2^\epsi)^2}{\mu^4}\right)\,. \end{eqnarray*} \end{proof} The next lemma collects elementary facts that will allow us to estimate one- and two-body potentials in different situations. \begin{lem}\label{lem:young} Let $g:\field{R}^3\times \field{R}^3\to\field{R}$ be a measurable function such that $|g(r_1,r_2)|\leq v(r_2-r_1)$ almost everywhere for some measurable function $v:\field{R}^3\to \field{R}$, and let $\varphi\in L^2(\Omega)\cap L^\infty(\Omega)$ with $\|\varphi\|_2=1$, and $p= |\varphi \rangle \langle \varphi |$. \begin{enumerate}[(a)] \item For $v\in L^2(\field{R}^3)$ we have \\[1mm] $ \norm{v(r)p}_{\mathcal{L}(L^2(\Omega))} \leq \norm{v}_{L^2(\Omega)}\norm{\varphi}_{L^\infty(\Omega)} $ \quad and\quad $ \norm{g(r_1,r_2)p_1}_{\mathcal{L}(L^2(\Omega^2))} \leq \norm{v}_{L^2(\field{R}^3)} \norm{\varphi}_{L^\infty(\Omega)} $ \item For $v\in L^1(\field{R}^3)$ we have $ \norm{ p_1 g(r_1,r_2)p_1}_{\mathcal{L}(L^2(\Omega^2))} \leq \norm{v}_{L^1(\field{R}^3)}\norm{\varphi}^2_{L^\infty(\Omega)} $. \item For $\Phi\in H^2(\field{R})$ \[ \|\Phi\|_{L^\infty}^2 \leq \|\Phi\|_{H^1}^2 \leq \|\Phi\|_{H^2}^2\qquad\mbox{and}\qquad \|\nabla |\Phi|^2 \|_{L^2} \leq 2 \|\Phi\|_{L^\infty} \|\Phi\|_{H^1} \,. 
\] \end{enumerate} \end{lem} \begin{proof} All three estimates in (a) and (b) are elementary: \begin{eqnarray*} \norm{v(r )p }^2_{\mathcal{L}(L^2(\Omega))} &=&\sup_{\norm{\psi}=1} \left\langle \psi, p |v(r )|^2 p \psi \right\rangle_{L^2(\Omega)} = \left\langle \varphi(r ) , |v(r )|^2 \varphi(r ) \right\rangle_{L^2(\Omega)} \sup_{\norm{\psi}=1} \| p \psi \|^2_{L^2(\Omega)}\\ &\leq & \left\langle v(r ) , |\varphi(r )|^2 v(r ) \right\rangle_{L^2(\Omega)} \leq \|v\|_{L^2(\Omega)}^2\, \|\varphi\|_{L^\infty(\Omega)}^2\,, \end{eqnarray*} \begin{eqnarray*} \norm{g(r^\epsi_1,r^\epsi_2)p_1}^2_{\mathcal{L}(L^2(\Omega^2))}&=&\sup_{\norm{\psi}=1} \left \langle p_1 \psi, |g(r^\epsi_1,r^\epsi_2)|^2 p_1 \psi \right \rangle_{L^2(\Omega^2)} \\ &\leq& \sup_{\norm{\psi}=1} \|p_1\psi\|_{L^2(\Omega^2)}\; \sup_{r_2\in\Omega} \int_{\Omega } |\varphi(r_1)|^2 |v(r^\epsi_2-r^\epsi_1)|^2 \,{\mathrm{d}} r_1 \; \\ &\leq & \norm{\varphi}_{L^\infty(\Omega)}^2 \;\sup_{r_2\in\Omega} \int_{\Omega } |v(r^\epsi_2-r^\epsi_1)|^2\, {\mathrm{d}} r_1 \\ &\leq & \norm{\varphi}_{L^\infty(\Omega)}^2 \tfrac{1}{\epsi^2} \norm{v}^2_{L^2(\field{R}^3)}\,, \end{eqnarray*} \begin{eqnarray*} \norm{p_1 g(r^\epsi_1,r^\epsi_2)p_1}_{\mathcal{L}(L^2(\Omega^2))} &\leq & \sup_{r_2\in\Omega} \int_{\Omega } |\varphi(r_1)|^2 |g(r^\epsi_1,r^\epsi_2)| \, {\mathrm{d}} r_1 \;\leq \;\sup_{r_2\in\Omega} \int_{\Omega } |\varphi(r_1)|^2 |v(r^\epsi_2-r^\epsi_1)| \, {\mathrm{d}} r_1 \;\\ &\leq & \norm{\varphi}^2_{L^\infty(\Omega)} \tfrac{1}{\epsi^2} \norm{v}_{L^1(\field{R}^3)}.
\end{eqnarray*} For (c) note that for $\Phi\in H^2(\field{R}) \subset C^1(\field{R})$ we have with Cauchy-Schwarz \[ \overline{\Phi(x)}\Phi(x) = \int_{-\infty}^x \overline{\Phi'(s)}\Phi(s) +\overline{\Phi(s)}\Phi'(s)\,{\mathrm{d}} s \leq 2 \|\Phi'\|_{L^2(\field{R})} \|\Phi\|_{L^2(\field{R})}\leq \|\Phi'\|_{L^2 }^2 + \|\Phi\|_{L^2 }^2 = \|\Phi\|_{H^1}^2\,, \] and \[ \|\nabla |\Phi|^2 \|_{L^2}^2 \leq \int 4 |\Phi'(x)|^2 |\Phi(x)|^2 \,{\mathrm{d}} x\leq 4 \|\Phi\|_{L^\infty}^2\|\Phi'\|_{L^2 }^2 \leq 4 \|\Phi\|_{L^\infty}^2 \|\Phi\|_{H^1}^2 \,. \] \end{proof} In the following corollary we collect bounds on the two-body interaction that will be used repeatedly. \begin{cor}\label{wcor} For the scaled two-body interaction $w^{\epsi, \beta,N}_{12}$ we have that \begin{eqnarray*} \left\| w^{\epsi, \beta,N}_{12} p_1\right\|_{\mathcal{L}(L^2(\Omega^2))} &\lesssim& \|\Phi\|_{H^2(\field{R})}\cdot \left\{\begin{array}{ll} \tfrac{\epsi}{\mu^{3/2}} & \mbox{for moderate confinement}\\ \sqrt{\mu}& \mbox{for strong confinement} \end{array}\right. \\ \left\| \sqrt{w^{\epsi, \beta,N}_{12}} p_1\right\|_{\mathcal{L}(L^2(\Omega^2))} &\lesssim& \|\Phi\|_{H^2(\field{R})}\cdot \left\{\begin{array}{ll} 1 & \mbox{for moderate confinement}\\ \sqrt{\mu} & \mbox{for strong confinement} \end{array}\right. \\ \|T_1 p_1\|_{\mathcal{L}(L^2(\Omega^2))}+ \|T_2p_1\|_{\mathcal{L}(L^2(\Omega^2))}&\lesssim& \frac{(\epsi+\mu)\epsi}{\mu^{3/2}}\; \|\Phi\|_{H^2(\field{R})} \,, \end{eqnarray*} where $T_1$ and $T_2$ are defined in Lemma~\ref{R2Lemma}. \end{cor} \begin{proof} According to Lemma~\ref{lem:young} (a)--(c) we have \[ \norm{ w^0_{12} p_1}^2_{\mathcal{L}(L^2(\Omega^2))}\leq \|\varphi\|^2_{L^\infty(\Omega)} \,\|w^0_{12} \|_{L^2(\Omega)}^2 \;\lesssim\; \|\Phi\|^2_{L^\infty(\field{R})}\cdot \left\{\begin{array}{ll} \tfrac{\epsi^2}{\mu^3} & \mbox{for moderate confinement}\\ \frac{\epsi^4}{\mu^3} & \mbox{for strong confinement} \end{array}\right.
\] and \[ \norm{p_1 w^0_{12} p_1}_{\mathcal{L}(L^2(\Omega^2))}\leq \|\varphi\|^2_{L^\infty(\Omega)} \,\|w^0_{12}\|_{L^1(\Omega)} \;\lesssim\; \|\Phi\|^2_{L^\infty(\field{R})} \cdot \left\{\begin{array}{ll} 1 & \mbox{for moderate confinement}\\ \epsi^2 & \mbox{for strong confinement} \end{array}\right. \,. \] Using the bound \[ \left| T_1(r_1^\epsi, r_2^\epsi)\right| \leq \frac{\epsi^2}{\mu^3} \, \overline R \, \tilde w'\left( \tfrac{\left\| r^\epsi_2-r^\epsi_1 \right\|^2}{\mu^2} \right) =: v(r_2^\epsi- r_1^\epsi) \] we obtain analogously \[ \norm{ T_1 p_1}^2_{\mathcal{L}(L^2(\Omega^2))}\leq \|\varphi\|^2_{L^\infty(\Omega)} \,\|v\|_{L^2(\Omega)}^2 \;\lesssim\; \|\Phi\|^2_{L^\infty(\field{R})} \tfrac{\epsi^2}{\mu^3}(\epsi+\mu)^2 \] and \[ \norm{p_1 T_1 p_1}_{\mathcal{L}(L^2(\Omega^2))}\leq \|\varphi\|^2_{L^\infty(\Omega)} \,\|v\|_{L^1(\Omega)} \;\lesssim\; \|\Phi\|^2_{L^\infty(\field{R})} (\epsi+\mu) \] for moderate confinement and an additional factor $\epsi^2$ for strong confinement. With $\|T_2\|\lesssim \frac{\epsi^2(\epsi+\mu)^2}{\mu^3}$, $\|\Phi\|_{L^\infty(\field{R})}\leq \|\Phi\|_{H^2(\field{R})}$ and $\|\Phi\|_{H^2(\field{R})}\geq 1$ we arrive at \[ \left\| w^{\epsi, \beta,N}_{12} p_1\right\|_{\mathcal{L}(L^2(\Omega^2))} \lesssim \frac{\epsi}{\mu^{\frac32}}\|\Phi\|_{H^2(\field{R})} \quad\mbox{and}\quad \left\| p_1 w^{\epsi, \beta,N}_{12} p_1\right\|_{\mathcal{L}(L^2(\Omega^2))} \lesssim \|\Phi\|_{H^2(\field{R})}^2 \] for moderate confinement, and \[ \left\| w^{\epsi, \beta,N}_{12} p_1\right\|_{\mathcal{L}(L^2(\Omega^2))} \lesssim \sqrt{\mu}\,\|\Phi\|_{H^2(\field{R})} \quad\mbox{and}\quad \left\| p_1 w^{\epsi, \beta,N}_{12} p_1\right\|_{\mathcal{L}(L^2(\Omega^2))} \lesssim \mu\, \|\Phi\|_{H^2(\field{R})}^2 \] for strong confinement.
Finally \[ \left\| \sqrt{w^{\epsi, \beta,N}_{12}} p_1\right\|_{\mathcal{L}(L^2(\Omega^2))}^2= \sup_{\|\psi\|=1} \left\langle \psi, p_1 w^{\epsi, \beta,N}_{12} p_1 \psi\right\rangle \leq \left\| p_1 w^{\epsi, \beta,N}_{12} p_1\right\|_{\mathcal{L}(L^2(\Omega^2))} \,. \] \end{proof} We also need the following lemma, which shows how to bound the ``kinetic energy'' of $q_1\psi$ in terms of $\alpha_\xi$. \begin{lem}\label{lem:energyestimate} Let the assumptions of Theorem~\ref{thm:thm1} hold. Then in the moderately confining case \begin{align*} \left\| \left(\tfrac{\partial}{\partial x_1} + \theta'(x_1) L_1 \right) q_1\psi^{N,\epsi}(t) \right\|^2 \;\lesssim \;\|\Phi(t)\|_{H^2(\field{R})}^3 \left(\alpha_\xi(t) +\frac{\mu}{\epsi} + \frac{a}{ \mu^3} \right)\,. \end{align*} \end{lem} This energy estimate is actually quite subtle and we postpone its proof to Subsection~\ref{energylemmaproof}. Note that it is the only place in our argument where the positivity of the interaction is crucial. We end this subsection with the proof of Lemma~\ref{lem:equi}. \begin{proof}[Proof of Lemma~\ref{lem:equi}] $(i) \Leftrightarrow (ii)$: Let $\lim_{N\to\infty} \alpha_{n^a}(\psi^N,\varphi)=0$ for some $a>0$. Then $ \alpha_{n^b}(\psi^N,\varphi)\leq \alpha_{n^a}(\psi^N,\varphi)$ for all $b>a$ since $n^b\leq n^a$. If $\frac{a}{2}\leq b<a$, then \[ \alpha_{n^b}(\psi^N,\varphi) = \left\llangle \psi^N, \widehat n^b\,\psi^N\right\rrangle = \left\llangle \widehat n^{ b-\frac{a}{2}} \psi^N, \widehat n^\frac{a}{2}\,\psi^N\right\rrangle \leq \|\widehat n^{ b-\frac{a}{2}} \psi^N\|\,\|\widehat n^\frac{a}{2}\,\psi^N\|\leq \sqrt{\alpha_{n^a}(\psi^N,\varphi)}\,. \] $(i) \Rightarrow (iii)$: For $a=2$ we have $\lim_{N\to \infty} \|q_1\psi^N\|=0$ according to \eqref{n2comp}. Let $P_k^M$ be the projector from Definition~\ref{def:pP} acting on $L^2(\Omega^M)$ with $p=|\varphi\rangle\langle\varphi|$.
Then in \[ \gamma^N_M = \sum_{k=0}^M \sum_{k'=0}^M P_k^M \gamma^N_M P_{k'}^M \] all terms but the one with $k=k'=0$ go to zero in norm for $N\to\infty$. Hence, \[ \lim_{N\to \infty} \gamma^N_M = \lim_{N\to \infty} p^{\otimes M} \,\gamma^N_M \,p^{\otimes M} = p^{\otimes M}\,, \] where the last equality follows from Tr$\,\gamma^N_M\equiv 1$ and the fact that $p^{\otimes M}$ has rank one. $(iii) \Rightarrow (iv)$: We learned the following argument from \cite{RodSch07}. Since $p^{\otimes M}$ has rank one, the operator $\gamma^N_M - p^{\otimes M}$ can have at most one negative eigenvalue $\lambda_-<0$. Since Tr$\,(\gamma^N_M - p^{\otimes M})=0$, $|\lambda_-|$ equals the sum of all positive eigenvalues. Hence \[ {\rm Tr}\left|\gamma^N_M - p^{\otimes M}\right| = 2\left|\lambda_-\right| = 2 \left\|\gamma^N_M - p^{\otimes M}\right\|\,. \] $(iv) \Rightarrow (v)$ is obvious and $(v) \Rightarrow (i)$ follows for $a=2$ from \begin{eqnarray*} \alpha_{n^2}(\psi^N,\varphi)& \stackrel{ \eqref{n2comp}}{=}& \left\llangle \psi, (1-p_1)\psi\right\rrangle = {\rm Tr}\left(p- p\gamma^N_1p \right) = {\rm Tr}\left|p - p\gamma^N_1p \right| = {\rm Tr}\left|p \left(p- \gamma^N_1\right) \right|\\& \leq& \|p\| {\rm Tr}\left|p - \gamma^N_1 \right|\,. \end{eqnarray*} \end{proof} \subsection{Proof of Proposition~\ref{lem:beta.g}} Recalling the definition \eqref{alphadef} we need to estimate \[ \left| \tfrac{{\mathrm{d}}}{{\mathrm{d}} t} \alpha_\xi(t)\right|\leq \left|\tfrac{{\mathrm{d}}}{{\mathrm{d}} t} \alpha_{m}(\psi^{N,\epsi}(t) ,\varphi(t)) \right|+\left| \tfrac{{\mathrm{d}}}{{\mathrm{d}} t} |E^{\psi^{N,\epsi}(t)}(t)-E^{\Phi(t)}(t)| \right|\,. \] For better readability we abbreviate $\psi = \psi^{N,\epsi}(t)$ and $\Phi=\Phi(t)$ in the remainder of this proof. 
The derivative of the second term yields \begin{align*} \left| \tfrac{{\mathrm{d}}}{{\mathrm{d}} t} |E^{\psi }(t)-E^{\Phi}(t)| \right|= |\llangle \psi, \dot V(t,x_1,\epsi y_1) \psi \rrangle - \langle \Phi, \dot V(t,x_1,0) \Phi \rangle_{L^2(\field{R})}| \,. \end{align*} By Lemma~\ref{hat.}, the map $t\mapsto \alpha_{m}(\psi,\varphi)$ is in $C^1(\field{R},\field{R})$ and we find \begin{eqnarray} \tfrac{{\mathrm{d}}}{{\mathrm{d}} t} \alpha_{m}&= &\tfrac{{\mathrm{d}}}{{\mathrm{d}} t} \left\llangle \psi, \widehat m \psi \right\rrangle \stackrel{\ref{hat.}}{ =} \mathrm{i} \left\llangle \psi, \left[H^\epsi_N-H^\Phi, \widehat m\right] \psi \right\rrangle\nonumber\\ &\stackrel{\mathclap{\ref{hat.}}}{ =} & \mathrm{i} \left\llangle \psi, \Big[ \tfrac{1}{N-1}\sum_{i< j} w^{\epsi, \beta,N}_{ij}-\sum_{i=1}^N b|\Phi(x_i)|^2 , \widehat m\Big] \psi \right\rrangle+ \mathrm{i} \left\llangle \psi, \Big[\sum_{i=1}^N V(t,x_i,\epsi y_i)- \sum_{i=1}^N V(t,x_i,0) , \widehat m\Big] \psi \right\rrangle \nonumber \\ && +\;\mathrm{i} \left\llangle \psi, \Big[\sum_{i=1}^N - (\theta'(x_i)L_i)^2 - |\theta'(x_i)|^2 \,\|L\chi\|^2 , \widehat m\Big] \psi \right\rrangle\nonumber + \mathrm{i} \left\llangle \psi, \Big[\sum_{i=1}^N R^{(1)}_i , \widehat m\Big] \psi \right\rrangle\nonumber \\ & = & \tfrac{ \mathrm{i} N}{2} \left\llangle \psi, \left[ w^{\epsi, \beta,N}_{12}- b|\Phi(x_1)|^2- b|\Phi(x_2)|^2 , \widehat m\right] \psi \right\rrangle\;+\;\mathrm{i} N \left\llangle \psi, \left[ V(t,x_1,\epsi y_1)-V(t,x_1,0) , \widehat m\right] \psi \right\rrangle \nonumber\\ &&-\; \mathrm{i} N \left\llangle \psi, \left[ (\theta'(x_1)L_1)^2 + |\theta'(x_1)|^2 \,\|L\chi\|^2 , \widehat m\right] \psi \right\rrangle +\mathrm{i} N \left\llangle \psi, \left[ R^{(1)}_1 , \widehat m\right] \psi \right\rrangle\nonumber \\ &= &\tfrac{\mathrm{i}}{2} N \left\llangle \psi,(p_1+q_1)(p_2+q_2) \left[ w^{\epsi, \beta,N}_{12}- b|\Phi(x_1)|^2- b|\Phi(x_2)|^2 , \widehat m\right](p_1+q_1)(p_2+q_2) \psi \right\rrangle\label{firstsummand}\\ && +
\;\mathrm{i} N \left\llangle \psi, (p_1+q_1) \left[ V(t,x_1,\epsi y_1)-V(t,x_1,0) , \widehat m\right] (p_1+q_1) \psi \right\rrangle\label{secondsummand}\\ &&-\; \mathrm{i} N \left\llangle \psi, (p_1+q_1) \left[ (\theta'(x_1)L_1)^2 + |\theta'(x_1)|^2 \,\|L\chi\|^2 , \widehat m\right] (p_1+q_1) \psi \right\rrangle\label{thirdsummand}\\&& +\;\mathrm{i} N \left\llangle \psi, (p_1+q_1) \left[ R^{(1)}_1 , \widehat m\right] (p_1+q_1) \psi \right\rrangle\,.\label{lastsummand} \end{eqnarray} According to Lemma~\ref{lem:weights} (\ref{lem:weightsc}) all terms with the same number of $p$'s and $q$'s on each side of the commutator vanish. Therefore we find that \eqref{secondsummand}--\eqref{lastsummand} are bounded by \begin{eqnarray*} |\eqref{secondsummand}+\eqref{thirdsummand}+\eqref{lastsummand}|&\leq & 2 N \left|\left\llangle \psi, p_1 \left[ V(t,x_1,\epsi y_1)-V(t,x_1,0) , \widehat m\right] q_1 \psi \right\rrangle\right|\\ && + \; 2N \left| \left\llangle \psi, p_1 \left[ (\theta'(x_1)L_1)^2 + |\theta'(x_1)|^2 \,\|L\chi\|^2 , \widehat m\right] q_1 \psi \right\rrangle \right|\\ && + \; 2N \left| \left\llangle \psi, p_1 \left[ R^{(1)}_1 , \widehat m\right] q_1 \psi \right\rrangle \right| \,.
\end{eqnarray*} The crucial step (cf.\ \cite{Pic08}) is to split \eqref{firstsummand} according to \begin{eqnarray*} \lefteqn{ \tfrac{\mathrm{i}}{2} N \left\llangle \psi,(p_1+q_1)(p_2+q_2) \left[ w^{\epsi, \beta,N}_{12}- b|\Phi(x_1)|^2- b|\Phi(x_2)|^2 , \widehat m\right](p_1+q_1)(p_2+q_2) \psi \right\rrangle } \\ &=&\tfrac{\mathrm{i}}{2} N \left\llangle \psi, p_1 p_2 \left[ w^{\epsi, \beta,N}_{12}- b|\Phi(x_1)|^2- b|\Phi(x_2)|^2 , \widehat m\right] p_1p_2 \psi \right\rrangle\\ &&+\;\tfrac{\mathrm{i}}{2} N \left\llangle \psi, p_1 p_2 \left[ w^{\epsi, \beta,N}_{12}- b|\Phi(x_1)|^2- b|\Phi(x_2)|^2 , \widehat m\right] (p_1q_2+q_1p_2 +q_1q_2) \psi \right\rrangle\\ &&+\;\tfrac{\mathrm{i}}{2} N \left\llangle \psi, (p_1 q_2+ q_1 p_2) \left[ w^{\epsi, \beta,N}_{12}- b|\Phi(x_1)|^2- b|\Phi(x_2)|^2 , \widehat m\right](p_1p_2+q_1q_2) \psi \right\rrangle\\ && +\;\tfrac{\mathrm{i}}{2}N \left\llangle \psi, q_1 q_2 \left[ w^{\epsi, \beta,N}_{12}- b|\Phi(x_1)|^2- b|\Phi(x_2)|^2 , \widehat m\right](p_1p_2+ q_1p_2 + p_1q_2) \psi \right\rrangle\\ &\weq{\mathrm{sym.}}{=}& \mathrm{i} N \left\llangle \psi, p_1 p_2 \left[ w^{\epsi, \beta,N}_{12}- b|\Phi(x_1)|^2- b|\Phi(x_2)|^2 , \widehat m\right] p_1q_2 \psi \right\rrangle + c.c.\\ &&+\;\tfrac{\mathrm{i}}{2} N \left\llangle \psi, p_1 p_2 \left[ w^{\epsi, \beta,N}_{12} - b|\Phi(x_1)|^2- b|\Phi(x_2)|^2 , \widehat m\right] q_1q_2 \psi \right\rrangle +c.c.\\ && +\;\mathrm{i} N \left\llangle \psi, p_1 q_2 \left[ w^{\epsi, \beta,N}_{12}- b|\Phi(x_1)|^2- b|\Phi(x_2)|^2 , \widehat m\right] q_1q_2 \psi \right\rrangle + c.c.\\ &=& \mathrm{i} N \left\llangle \psi, p_1 p_2 \left[ w^{\epsi, \beta,N}_{12}- b|\Phi(x_2)|^2 , \widehat m\right] p_1q_2 \psi \right\rrangle + c.c.\\ && +\;\tfrac{\mathrm{i}}{2} N \left\llangle \psi, p_1 p_2 \left[ w^{\epsi, \beta,N}_{12} , \widehat m\right] q_1q_2 \psi \right\rrangle +c.c.\\ && +\;\mathrm{i} N \left\llangle \psi, p_1 q_2 \left[ w^{\epsi, \beta,N}_{12}- b|\Phi(x_1)|^2 , \widehat m\right] q_1q_2 \psi
\right\rrangle + c.c.\\ &=& - 2 \Im \mathrm{I} - \Im \mathrm{II} - 2 \Im \mathrm{III}\,. \end{eqnarray*} The term with $p_1p_2$ on both sides of the commutator vanishes again because of Lemma~\ref{lem:weights} (\ref{lem:weightsc}). For the second-to-last equality we used that for $i\not=j$ the projection $p_j$ commutes with $\widehat m$ and with $|\Phi(x_i)|^2$ and that $p_jq_j=0$. \subsection{Proof of Proposition~\ref{lem:3termeg}} \begin{proof}[Proof of the bound for {\rm I}] In the case of moderate confinement, the term I is small due to the cancellation of the mean field and the full interaction. Since $b|\Phi|^2 $ is the mean field for a condensate in the state $\Phi\chi$, i.e.\ a condensate that is in the ground state with respect to the confined directions, this cancellation works only for the part of $\psi$ that is in this confined ground state. We thus need to split $\psi$ accordingly and introduce the following projections on $L^2(\Omega)$, \[ p^\chi :=1 \otimes |\chi \rangle \langle \chi | \,,\quad q^\chi :=1-p^\chi \,, \quad p^\Phi := |\Phi \rangle \langle \Phi | \otimes 1 \,,\quad q^\Phi :=1-p^\Phi \,. \] As in Definition~\ref{def:pP} we also introduce the projections $p^\chi_j$, $q^\chi_j$, $p^\Phi_j$, and $q_j^\Phi$ on $L^2(\Omega^N)$. With these projections we can rewrite \begin{equation}\label{equ:qinqx} q_j= 1- p_j = 1 - p^\Phi_j p^\chi_j = (1-p_j^\chi)+ (1-p_j^\Phi)p_j^\chi=q_j^\chi+q_j^\Phi p_j^\chi\,, \end{equation} where we recall that $p_j := p_j^\varphi$.
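Note that the two summands in \eqref{equ:qinqx} are orthogonal: since $p_j^\chi$ acts only on the confined variables and $p_j^\Phi$ only on the longitudinal variable, the two projections commute and
\[
q_j^\chi \, q_j^\Phi p_j^\chi \;=\; q_j^\Phi\, q_j^\chi\, p_j^\chi \;=\; q_j^\Phi\, (1-p_j^\chi)\, p_j^\chi \;=\; 0\,,
\qquad\mbox{hence}\qquad
\|q_j\psi\|^2 = \|q_j^\chi \psi\|^2 + \|q_j^\Phi p_j^\chi \psi\|^2\,.
\]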
Now with Lemma~\ref{lem:weights}, \eqref{equ:qinqx} and \eqref{melldef} we find \begin{eqnarray} |I|&=&N \left|\left\llangle \psi, p_1 p_2 \left[ w^{\epsi,\beta,N}_{12}- b|\Phi|^2(x_2) , \widehat {m}\right] p_1q_2 \psi \right\rrangle\right|\nonumber\\ & \weq{\ref{lem:weights}}{=} &N\left|\left\llangle \psi, p_1 p_2 \left( w^{\epsi,\beta,N}_{12}- b|\Phi|^2(x_2)\right) \left({\widehat { {m}}}- { \widehat {\tau_{-1} m}} \right) p_1 q_2 \psi \right\rrangle \right|\nonumber\\ &\weq{\eqref{melldef},\eqref{equ:qinqx}}{=} &\;\left|\left\llangle \psi, p_1 p_2 \left( w^{\epsi,\beta,N}_{12}- b|\Phi|^2(x_2)\right) \,\widehat{m_1}\, p_1 \left( p_2^\chi q_2^\Phi+ q_2^\chi \right) \psi \right\rrangle \right|\nonumber\\ &\leq & \left|\left\llangle \psi, p_1 p_2 \left( w^{\epsi,\beta,N}_{12}- b|\Phi|^2(x_2)\right) p_1 p_2^\chi \,\widehat{m_1}\, q_2^\Phi \psi \right\rrangle\right| + \left|\left\llangle \psi, p_1 p_2 \,w^{\epsi,\beta,N}_{12} \,\widehat{m_1}\, p_1 q_2^\chi \psi \right\rrangle\right|\nonumber \\ &\stackrel{\ref{R2Lemma}}{\leq}& \left|\left\llangle \psi, p_1 p_2 \left( w^0_{12}- b|\Phi|^2(x_2)\right) p_1 p_2^\chi \,\widehat{m_1}\, q_2^\Phi \psi \right\rrangle\right| \label{I1term} \\ &&+\; \left|\left\llangle \psi, p_1 p_2 \left(T_1+T_2 \right) p_1 p_2^\chi \,\widehat{m_1}\, q_2^\Phi \psi \right\rrangle\right|\label{I3term}\\ && +\; \left|\left\llangle \psi, p_1 p_2 \,w^{\epsi,\beta,N}_{12} \,\widehat{m_1}\, p_1 q_2^\chi \psi \right\rrangle\right| \label{I2term}\,. \end{eqnarray} In the first term \eqref{I1term} the interaction $w^0_{12}$ acts between states that are fixed in the $r_1$ and the $y_2$ variable, so only an $x_2$-dependence remains that approximately cancels the mean field $b|\Phi(x_2)|^2$.
More precisely, between $p_1p_2$ and $p_1 p_2^\chi$ the leading part $w^0_{12}$ of the interaction can be replaced by the effective potential \begin{eqnarray}\lefteqn{ \left\langle \varphi\otimes\chi, \frac{\epsi^2}{\mu^3} w \left(\frac{r^\epsi_1-r^\epsi_2}{\mu}\right) \varphi\otimes \chi\right\rangle_{L^2(\Omega\times \Omega_{\rm f})} (x_2)}\nonumber\\ &=& \frac{\epsi ^2}{\mu^3} \int |\Phi(x_1)|^2\, |\chi(y_1)|^2 \,w\left(\mu^{-1}( (x_1-x_2), \epsi(y_1-y_2))\right) \,|\chi(y_2)|^2\,{\mathrm{d}} x_1{\mathrm{d}} y_1 {\mathrm{d}} y_2\nonumber\\ &=& \int |\chi(y_2)|^2\, \left( \frac{\epsi ^2}{\mu^3} \int |\Phi(x_2-x)|^2\, |\chi(y_2-y)|^2 \,w\left(\mu^{-1}( x , \epsi \, y )\right) \,{\mathrm{d}} x\, {\mathrm{d}} y \right) {\mathrm{d}} y_2\label{wexpansion}\,. \end{eqnarray} To see that this is close to $b|\Phi(x_2)|^2$, first note that for $f\in C^\infty_0(\Omega)$ we have with $z := (x,y)$ that \begin{eqnarray*}\lefteqn{ \frac{\epsi ^2}{\mu^3} \int f(z_2-z) \,w\left(\mu^{-1}( x , \epsi \, y )\right) \,{\mathrm{d}} x\, {\mathrm{d}}^2 y}\\ & = & f(z_2)\,\|w\|_{L^1(\field{R}^3)} \;-\; \frac{\epsi ^2}{\mu^3} \int \int_0^1 \nabla f(z_2 - s z)\cdot z \,w\left(\mu^{-1}( x , \epsi \, y )\right) \,{\mathrm{d}} s\,{\mathrm{d}} x\, {\mathrm{d}}^2 y\nonumber\\ &=:& f(z_2)\,\|w\|_{L^1(\field{R}^3)} \;+\; R(z_2)\,, \end{eqnarray*} where the $L^2$-norm of the remainder is bounded by \begin{eqnarray} \|R\|^2_{L^2(\Omega)} &\leq& \|\nabla f \|^2_{L^2(\Omega)}\,\left(\frac{\epsi^2}{\mu^3} \int |z | \,w\left(\frac{( x , \epsi \, y )}{\mu}\right) \,{\mathrm{d}} x\, {\mathrm{d}}^2 y\right)^2\nonumber\\ &=& \|\nabla f \|^2_{L^2(\Omega)}\,\left(\frac{\epsi^2}{\mu^2} \int \frac{|z |}{\mu} \,w\left(\frac{( x , \epsi \, y )}{\mu}\right) \,{\mathrm{d}} x\, {\mathrm{d}}^2 y\right)^2\nonumber\\ &=& \|\nabla f \|^2_{L^2(\Omega)}\,\left(\mu\epsi^2 \int {|z |} \,w\left(( x , \epsi \, y )\right) \,{\mathrm{d}} x\, {\mathrm{d}}^2 y\right)^2\nonumber\\ &\leq& \|\nabla f \|^2_{L^2(\Omega)}\,\left(\mu\epsi^2 \int \frac{|z |}{\epsi} \,w\left(( x , y )\right) \,{\mathrm{d}} x\, \frac{{\mathrm{d}}^2 y}{\epsi^2}\right)^2\;\leq\; \frac{\mu^2}{\epsi^2} \,\|\nabla f \|^2_{L^2(\Omega)}\, \||z|w(z)\|^2_{L^1(\field{R}^3)}\,.\nonumber \end{eqnarray} Hence \begin{equation} \left\| \frac{\epsi ^2}{\mu^3} \int f(\cdot-z) \,w\left(\mu^{-1}( x , \epsi \, y )\right) \,{\mathrm{d}} x\, {\mathrm{d}}^2 y - f\|w\|_{L^1(\field{R}^3)} \right\|_{L^2(\Omega)}\;\lesssim\; \frac{\mu }{\epsi } \,\|\nabla f \| _{L^2(\Omega)}\label{faltest} \end{equation} and this bound extends to $f\in H^1(\Omega)$ by density, in particular, to $f= |\Phi|^2|\chi|^2$. Inserting this bound with \eqref{wexpansion} into \eqref{I1term} yields, together with Lemma~\ref{lem:young} (a) and (c) and Lemma~\ref{lem:qs&N}, the bound \[ \eqref{I1term} \lesssim \frac{\mu}{\epsi}\norm{\nabla |\Phi|^2}_{L^2(\field{R})} \norm{\Phi}_{L^\infty(\field{R})} \lesssim \frac{\mu}{\epsi}\,\norm{\Phi}_{H^2(\field{R})}^3 \,. \] For the term \eqref{I3term} we have with Corollary~\ref{wcor} and Lemma~\ref{lem:qs&N} that \begin{eqnarray*} \eqref{I3term} &\leq& \left( \|T_1p_1\| + \|T_2p_1\|\right) \|\widehat{m_1}\, q_2 \psi\| \lesssim \frac{(\epsi+\mu)\epsi}{\mu^{3/2}} \|\Phi\|_{H^2(\field{R})} \,. \end{eqnarray*} The term \eqref{I2term} is small due to energy conservation and the energy gap of order $\epsi^{-2}$ between the ground state and the first excited state in the confined direction.
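Quantitatively, the gap enters through the a priori bound established in Lemma~\ref{lem:qs&N},
\[
\left\llangle \psi, q_2^\chi\, \psi \right\rrangle \;\lesssim\; \epsi^2\left(1+ |E^\psi (t) |\right)\,,
\]
so every factor of $q^\chi\psi$ contributes an order of $\epsi$.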
With the help of Lemma~\ref{lem:qs&N} we get \begin{eqnarray*}\lefteqn{\hspace{-1cm} \left|\left\llangle \psi, p_1 p_2 w^{\epsi,\beta,N}_{12} \,\widehat{m_1}\,p_1 q_2^\chi \psi \right\rrangle \right| \;\leq\; \left\| p_1 w^{\epsi,\beta,N}_{12} p_1 \right\| \left\llangle \psi, \,\widehat{m_1}^2\, q_2^\chi \psi \right\rrangle^\frac{1}{2}} \\ &\weq{\ref{wcor}}{\lesssim}& \|\Phi\|_{H^2(\field{R})}^2 \left\llangle \psi, \,\widehat{m_1}^2 \,q_2^\chi \psi \right\rrangle^\frac{1}{2} \;\weq{\ref{lem:qs&N}}{ \lesssim}\; \|\Phi\|_{H^2(\field{R})}^2 N^{\xi}\,\epsi\, g(t) \,. \end{eqnarray*} In the strongly confining case we have $b=0$ and instead estimate I by \[ |{\rm I}|\leq \left|\left\llangle \psi, p_1 p_2 w^{\epsi,\beta,N}_{12} \widehat m_1 p_1 q_2 \psi \right\rrangle \right| \leq \|p_1 w^{\epsi,\beta,N}_{12} p_1\|\,\|\widehat m_1 q_2 \psi \|\lesssim \mu\,\|\Phi\|^2_{H^2(\field{R})}\,. \] \end{proof} \begin{proof}[Proof of the bound for {\rm II}] We start with the case of moderate confinement. Using again Lemma~\ref{lem:weights} (c) we find that \begin{eqnarray} |{\rm II}|&=&\left| \left\llangle \psi, p_1 p_2 \,w^{\epsi,\beta,N}_{12} \widehat{m_2} \, q_1 q_2 \psi \right\rrangle \right| \;=\; \left|\left\llangle \psi, p_1 p_2 (\widehat {\tau_2 m_2} )^\frac{1}{2} w^{\epsi,\beta,N}_{12} \widehat{m_2} ^\frac{1}{2} q_1 q_2 \psi \right\rrangle \right|\nonumber\\ &= &\frac{1}{N} \Big|\sum_{j=2}^N \left\llangle \psi, (\widehat{ \tau_2 m_2} )^\frac{1}{2} p_1 p_j w^{\epsi,\beta,N}_{1j} q_1 q_j \widehat{m_2}^\frac{1}{2} \psi \right\rrangle \Big|\nonumber\\ & \lesssim & \frac{1}{N} \Big\| \sum_{j=2}^N q_j w^{\epsi,\beta,N}_{1j} ( \widehat { \tau_2 m_2} )^\frac{1}{2} p_1 p_j \psi \Big\| \,\left\| \widehat{m_2}^\frac{1}{2} q_1 \psi \right\|\, . 
\label{equ:II.1} \end{eqnarray} The second factor of \eqref{equ:II.1} is easily estimated by \[ \left\| \widehat{m_2}^\frac{1}{2} q_1 \psi\right\|^2=\left\llangle \psi, \widehat{m_2} \, q_1 \psi \right\rrangle \weq{\ref{lem:weights}}{ =} \left\llangle \psi, \widehat{m_2} \, \widehat{n}^2 \psi \right\rrangle = \left\llangle \widehat n^\frac12 \psi, \widehat{ m_2n}\, \widehat n^\frac12 \psi \right\rrangle \leq \| m_2n\|_\infty \left\| \widehat n^\frac12\,\psi\right\|^2 \lesssim \alpha_\xi\,. \] The first factor of \eqref{equ:II.1} we split into a ``diagonal'' and an ``off-diagonal'' term and find \begin{eqnarray}\lefteqn{\hspace{-2cm} \norm{\sum_{j=2}^N q_j w^{\epsi,\beta,N}_{1j} ( \widehat { \tau_2 m_2} )^\frac{1}{2} p_1 p_j \psi }^2 = \sum_{j,l=2}^N \left\llangle \psi, p_1 p_l \left( \widehat { \tau_2 m_2} \right)^\frac{1}{2} w^{\epsi,\beta,N}_{1l} q_l q_j w^{\epsi,\beta,N}_{1j} \left( \widehat { \tau_2 m_2} \right)^\frac{1}{2} p_1 p_j \psi \right\rrangle}\nonumber\\ &\leq &\sum_{2 \leq j < l \leq N} \left\llangle \psi, q_j p_1 p_l \left( \widehat { \tau_2 m_2} \right)^\frac{1}{2} w^{\epsi,\beta,N}_{1l} w^{\epsi,\beta,N}_{1j} q_l \left( \widehat { \tau_2 m_2} \right)^\frac{1}{2} p_1 p_j \psi \right\rrangle \nonumber\\ &&+\;(N-1) \norm{w^{\epsi,\beta,N}_{12} p_1 p_2 \left( \widehat { \tau_2 m_2} \right)^\frac{1}{2}\,\psi }^2\, .\label{equ:II.3gp} \end{eqnarray} The first summand of \eqref{equ:II.3gp} is bounded by \begin{eqnarray}\lefteqn{\hspace{-.5cm} (N-1)(N-2)\llangle \psi, q_2 p_1 p_3 ( \widehat { \tau_2 m_2} )^\frac{1}{2} w^{\epsi,\beta,N}_{13} w^{\epsi,\beta,N}_{12} q_3 ( \widehat { \tau_2 m_2} )^\frac{1}{2} p_1 p_2 \psi \rrangle }\nonumber\\ & \leq& N^2 \norm { \sqrt{ w^{\epsi,\beta,N}_{13}} \sqrt{ w^{\epsi,\beta,N}_{12}} q_3 ( \widehat { \tau_2 m_2} )^\frac{1}{2} p_1 p_2 \psi}^2\nonumber\\ &\leq& N^2 \norm{ \sqrt{ w^{\epsi,\beta,N}_{12}} p_2 \sqrt{ w^{\epsi,\beta,N}_{13}} p_1 ( \widehat { \tau_2 m_2} )^\frac{1}{2} q_3 \psi } ^2 \leq N^2 \norm{ \sqrt{ 
w^{\epsi,\beta,N}_{12}} p_2 }^4 \norm{ ( \widehat { \tau_2 m_2} )^\frac{1}{2} \,q_3 \psi }^2\nonumber\\ &\weq{\ref{wcor},\ref{lem:weights}}{\lesssim} &N^2 \|\Phi\|_{H^2(\field{R})}^4 \norm{ ( \widehat { \tau_2 m_2} )^\frac{1}{2}\, \widehat n \,\psi }^2 = N^2 \|\Phi\|_{H^2(\field{R})}^4 \left\llangle \widehat n^\frac12 \psi, \widehat{ \tau_2m_2n}\, \widehat n^\frac12 \psi \right\rrangle \nonumber\\ &\lesssim &N^2 \|\Phi\|_{H^2(\field{R})}^4 \,\alpha_\xi \,, \label{equ:II.4gp} \end{eqnarray} where we used that $ \tau_2m_2n$ is bounded. The second summand of \eqref{equ:II.3gp} is bounded by \begin{eqnarray} \lefteqn{\hspace{-4cm} N \left\llangle ( \widehat { \tau_2 m_2} )^\frac{1}{2} \, \psi, p_1 p_2 (w^{\epsi,\beta,N}_{12})^2 p_1 p_2 ( \widehat { \tau_2 m_2} )^\frac{1}{2}\,\psi \right\rrangle \leq N \left\|p_1 (w^{\epsi,\beta,N}_{12})^2 p_1\right\| \left\| ( \widehat { \tau_2 m_2} )^\frac{1}{2}\right\|^2} \nonumber\\& \weq{\ref{wcor}}{ \lesssim } & N \frac{\epsi^2}{\mu^{3}}\|\Phi\|_{H^2(\field{R})}^2 N^\xi \,, \label{equ:II.5gp} \end{eqnarray} since $\sup_{1\leq k \leq N } m_2(k) \; \leq \; N^{\xi}$. Inserting the bounds \eqref{equ:II.4gp} and \eqref{equ:II.5gp} into \eqref{equ:II.3gp}, we obtain in continuation of \eqref{equ:II.1} the desired bound, \begin{eqnarray*} |{\rm II}| & \lesssim& \Big ( \norm{\Phi}_{H^2(\field{R})}^2 \sqrt{\alpha_\xi} + N^{-\frac{1}{2}} \frac{\epsi}{\mu^{3/2}} \norm{\Phi}_{H^2(\field{R})} N^\frac{\xi}{2}\Big ) \sqrt{\alpha_\xi}\\ &=& \norm{\Phi}_{H^2(\field{R})}^2 {\alpha_\xi} + N^\frac{\xi}{2} \sqrt\frac{a}{\mu^3} \norm{\Phi}_{H^2(\field{R})} \sqrt{\alpha_\xi} \;\leq\;\tfrac32 \norm{\Phi}_{H^2(\field{R})}^2 {\alpha_\xi} + N^\xi \frac{a}{\mu^3}\,. 
\end{eqnarray*} In the strongly confining case we can easily estimate \begin{eqnarray*} |{\rm II}|&\leq &\left|\left\llangle \psi, p_1 p_2 w^{\epsi,\beta,N}_{12} \widehat m_2 q_1 q_2 \psi \right\rrangle \right| \leq \|w^{\epsi,\beta,N}_{12} p_1\|\,\|\widehat m_2 q_1 q_2 \psi \|\lesssim \sqrt{\mu}\,\|\Phi\|_{H^2(\field{R})}\sqrt{\alpha_\xi}\\&\leq& \|\Phi\|_{H^2(\field{R})}(\alpha_\xi+\mu)\,. \end{eqnarray*} \end{proof} \begin{proof}[Proof of the bound for {\rm III}] The same manipulations as before yield \begin{eqnarray} |\mathrm{III}|&=& \left|N\left\llangle \psi, p_1 q_2 \left[ w^{\epsi, \beta,N}_{12}- b|\Phi|^2(x_1) , \widehat m\right] q_1q_2 \psi \right\rrangle\right| \nonumber\\& \weq{\ref{lem:weights}}{=} & \left|\left\llangle \psi, p_1 q_2 \left( w^{\epsi, \beta,N}_{12}- b|\Phi|^2(x_1) \right) \widehat {m_1} \,q_1q_2 \psi \right\rrangle\right| \nonumber\\& \leq& \left|\left\llangle \psi, p_1 q_2 \, w^{\epsi, \beta,N}_{12}\, \widehat {m_1} \,q_1q_2 \psi \right\rrangle\right| + \left|\left\llangle \psi, p_1 q_2 \, b|\Phi|^2(x_1) \, \widehat {m_1} \,q_1q_2 \psi \right\rrangle\right|\,. \label{equ:III.1gp} \end{eqnarray} The second summand of \eqref{equ:III.1gp} is easily bounded by \[ \left|\left\llangle \psi, p_1 q_2 \, b|\Phi|^2(x_1) \, \widehat {m_1} \,q_1q_2 \psi \right\rrangle\right| \lesssim \|q_2\psi\| \,\|\widehat {m_1}q_1q_2\psi\|\lesssim \alpha_\xi\,. \] For the first term of \eqref{equ:III.1gp} we use $q=q^\chi + p^\chi q^\Phi$ to obtain four terms \begin{eqnarray} |\llangle \psi, p_1 q_2 w^{\epsi,\beta,N}_{12} \, \widehat {m_1} \, q_1 q_2 \psi \rrangle | &\leq&|\llangle \psi, p_1 q_2^\chi w^{\epsi,\beta,N}_{12} \, \widehat {m_1} \, q_1 q_2 \psi \rrangle |\;+\;|\llangle \psi, p_1 p_2^\chi q_2^\Phi w^{\epsi,\beta,N}_{12} \, \widehat {m_1} \, q_1 q_2^\chi \psi \rrangle |
\nonumber\\ &&+\;|\llangle \psi, p_1 p_2^\chi q_2^\Phi w^{\epsi,\beta,N}_{12} \, \widehat {m_1} \, q_1^\chi q_2 \psi \rrangle |\nonumber\\ && +\;|\llangle \psi, p_1 p_2^\chi q_2^\Phi w^{\epsi,\beta,N}_{12} \, \widehat {m_1} \, p_1^\chi q_1^\Phi p_2^\chi q_2^\Phi \psi \rrangle |\,. \label{equ:III.2gp} \end{eqnarray} All terms but the last are easy to handle. The first term of \eqref{equ:III.2gp} can be estimated by \begin{eqnarray}\lefteqn{\hspace{-1cm} |\llangle \psi, p_1 q_2^\chi w^{\epsi,\beta,N}_{12} \, \widehat {m_1} \, q_1 q_2 \psi \rrangle | \leq \norm{ q_2^\chi \psi} \norm{w^{\epsi,\beta,N}_{12}p_1} \norm{ \, \widehat {m_1} \,q_1 q_2 \psi}}\nonumber\\ &\lesssim& \epsi g(t) \,\tfrac{\epsi}{\mu^{3/2}} \norm{\Phi}_{H^2(\field{R})} \,\sqrt{\alpha_\xi}\leq g(t) \norm{\Phi}_{H^2(\field{R})} \left( \alpha_\xi + \tfrac{\epsi^4}{\mu^3}\right)\,, \label{equ:III.8gp} \end{eqnarray} where we used Lemmas~\ref{lem:weights} and \ref{lem:qs&N} (b) and Corollary~\ref{wcor} in the second step. For the second (and completely analogous the third) term in \eqref{equ:III.2gp} we find in the same way \begin{eqnarray}\lefteqn{\hspace{-1cm} |\llangle \psi, p_1 p_2^\chi q_2^\Phi w^{\epsi,\beta,N}_{12} \, \widehat {m_1} \, q_1 q_2^\chi \psi \rrangle |=|\llangle \psi, p_1 p_2^\chi q_2^\Phi \, \widehat {\tau_1 m_1}^\frac12 \, w^{\epsi,\beta,N}_{12} \, \widehat {m_1}^\frac12 \,q_1 q_2^\chi \psi \rrangle |}\nonumber\\ &\leq& \norm{ \widehat{\tau_1 m_1}^\frac12q_2 \psi} \left\|w^{\epsi,\beta,N}_{12}p_1\right\| \norm{ \, \widehat {m_1}^\frac12 \,q_1 q_2^\chi \psi}\nonumber\\ &\lesssim&\sqrt{\alpha_\xi} \,\tfrac{\epsi}{\mu^{3/2}} \,\norm{\Phi}_{H^2(\field{R})}\, \epsi g(t) \leq g(t)\norm{\Phi}_{H^2(\field{R})} \left( \alpha_\xi + \tfrac{\epsi^4}{\mu^3}\right)\,, \label{equ:III.7gp} \end{eqnarray} where we used \begin{eqnarray*} \norm{ \, \widehat {m_1}^\frac12 \,q_1 q_2^\chi \psi}^2&=& \left\llangle q_2^\chi\psi, \widehat {m_1}q_1\psi\right\rrangle \;= \;\frac{1}{N-1}\sum_{j=2}^N 
\left\llangle q_j^\chi\psi, \widehat {m_1}q_1\psi\right\rrangle\\ &=& \frac{1}{N-1} \left\llangle \sum_{j=1}^Nq_j^\chi\psi, \widehat {m_1}q_1\psi\right\rrangle - \frac{1}{N-1} \left\llangle q_1^\chi\psi, \widehat {m_1}q_1\psi\right\rrangle\\ &=&\frac{1}{N-1} \left\llangle \sum_{j=1}^Nq_j^\chi\psi, \widehat {m_1}\widehat{n}^2\psi\right\rrangle - \left\llangle q_1^\chi\psi, \frac{\widehat {m_1}}{N-1}q_1^\chi\psi\right\rrangle\\ &\leq& \frac{N}{N-1} \|q_1^\chi\psi\|^2 +\|q_1^\chi\psi\|^2\;\lesssim\; \epsi^2 g(t)^2 \,. \end{eqnarray*} In the last term of \eqref{equ:III.2gp} we again split the interaction according to Lemma~\ref{R2Lemma} \begin{eqnarray*} |\llangle \psi, p_1 p_2^\chi q_2^\Phi w^{\epsi,\beta,N}_{12} \, \widehat {m_1} \, p_1^\chi q_1^\Phi p_2^\chi q_2^\Phi \psi \rrangle | &=& |\llangle \psi, p_1 p_2^\chi q_2^\Phi w^{0}_{12} \, \widehat {m_1} \, p_1^\chi q_1^\Phi p_2^\chi q_2^\Phi \psi \rrangle |\\ &&+\;|\llangle \psi, p_1 p_2^\chi q_2^\Phi (T_1+T_2) \, \widehat {m_1} \, p_1^\chi q_1^\Phi p_2^\chi q_2^\Phi \psi \rrangle | \end{eqnarray*} and bound the second term with the help of Corollary~\ref{wcor} and Lemmas~\ref{lem:weights} and \ref{lem:qs&N} (b), \[ |\llangle \psi, p_1 p_2^\chi q_2^\Phi (T_1+T_2) \, \widehat {m_1} \, p_1^\chi q_1^\Phi p_2^\chi q_2^\Phi \psi \rrangle |\lesssim \tfrac{(\epsi+\mu)\epsi}{\mu^{3/2}} \|\Phi\|_{H^2(\field{R})}\sqrt{\alpha_\xi} \leq \|\Phi\|_{H^2(\field{R})}\left(\alpha_\xi + \tfrac{(\epsi+\mu)^2\epsi^2}{\mu^{3}} \right)\,. \] For the leading term containing $w^0_{12}$ we have to use a different approach. Here we know that the potential only acts on the function $\chi$ in the confined directions. 
Thus, we can replace \[ p_1 p_2^\chi \, w^0_{12} \, p_1^\chi p_2^\chi = p_1 p_2^\chi \, \overline w^0_{12} \,p_1^\chi p_2^\chi \] with \[ \overline w^0(x_1-x_2):= \frac{1}{\mu } \int_{\Omega_{\rm f}^2} \frac{\epsi^2}{\mu^2} \, w\Big (\mu^{-1} \big(x_1-x_2,\epsi(y_1-y_2)\big) \Big) |\chi(y_1)|^2 |\chi(y_2)|^2 {\mathrm{d}} y_1 {\mathrm{d}} y_2 \,. \] By inspection of the above formula one checks that $\| \overline w^0\|_{L^1(\field{R})} \lesssim 1$ and thus its anti-derivative \begin{align*} \overline W^0(x):= \int_{-\infty}^{x} \overline w^0(x') {\mathrm{d}} x' \leq \left\| \overline w^0\right\|_{L^1(\field{R})} \end{align*} remains bounded. Integration by parts therefore yields \begin{eqnarray*}\lefteqn{ \left\llangle \psi, p_1 p_2^\chi q_2^\Phi w^0_{12} \, \widehat {m_1} \, p_1^\chi q_1^\Phi p_2^\chi q_2^\Phi \psi \right\rrangle = \left\llangle \psi, p_1 p_2^\chi q_2^\Phi \left(\tfrac{\partial}{\partial x_1} \overline W^0_{12}\right) \, \widehat {m_1} \, p_1^\chi q_1^\Phi p_2^\chi q_2^\Phi \psi \right\rrangle}\\ &=& -\,\left\llangle \psi, \left(\tfrac{\partial}{\partial x_1} p_1 \right) p_2^\chi q_2^\Phi \overline W^0_{12} \, \widehat {m_1} \, p_1^\chi q_1^\Phi p_2^\chi q_2^\Phi \psi \right\rrangle - \left\llangle \psi, p_1 p_2^\chi q_2^\Phi \, \overline W^0_{12} \,\tfrac{\partial}{\partial x_1}\, \widehat {m_1} \, p_1^\chi q_1^\Phi p_2^\chi q_2^\Phi \psi \right\rrangle\,, \end{eqnarray*} where the first term is easily bounded by \begin{eqnarray*} \left|\left\llangle \psi, \left(\tfrac{\partial}{\partial x_1} p_1 \right) p_2^\chi q_2^\Phi \overline W^0_{12} \, \widehat {m_1} \, p_1^\chi q_1^\Phi p_2^\chi q_2^\Phi \psi \right\rrangle \right| &\leq & \left\|\tfrac{\partial}{\partial x_1} p_1\right\| \left\| q_2\psi\right\| \left\|\overline W^0_{12}\right\|_\infty \left\|\widehat {m_1} q_1q_2\psi\right\| \\ &\lesssim& \|\Phi\|_{H^1(\field{R})} \alpha_\xi\,.
\end{eqnarray*} The second term is \begin{eqnarray*} \lefteqn{\hspace{-1cm} \left| \left\llangle \psi, p_1 p_2^\chi q_2^\Phi \, \overline W^0_{12} \,\tfrac{\partial}{\partial x_1}\, \widehat {m_1} \, p_1^\chi q_1^\Phi p_2^\chi q_2^\Phi \psi \right\rrangle\right| }\\&=& \left| \left\llangle \psi, p_1 p_2^\chi q_2^\Phi \,\overline W^0_{12} (p_1+q_1) q_2 \,\tfrac{\partial}{\partial x_1}\, q_1q_2 \widehat {m_1} \, p_1^\chi q_1^\Phi p_2^\chi q_2^\Phi \psi \right\rrangle\right| \\ &=&\left| \left\llangle \psi, p_1 p_2^\chi q_2^\Phi \,\overline W^0_{12} (p_1 \widehat {\tau_{1} m_1} + q_1\widehat {m_1} ) q_2 \,\tfrac{\partial}{\partial x_1}\, p_1^\chi q_1^\Phi p_2^\chi q_2^\Phi \psi \right\rrangle\right| \\ &\leq& \left\| q_2\psi\right\| \left\|\overline W^0_{12}\right\|_\infty \left\|\left(\widehat {\tau_{1} m_1} +\widehat {m_1}\right)q_2 \, \tfrac{\partial}{\partial x_1}\,q_1\psi\right\|\\ & \lesssim& \sqrt{\alpha_\xi} \left( \|\Phi\|_{H^2(\field{R})}^\frac{3}{2} \sqrt{ \alpha_\xi +\frac{\mu}{\epsi} +\frac{a}{\mu^3}} +\epsi g(t)\right)\\& \lesssim & \|\Phi\|_{H^2(\field{R})}^\frac{3}{2} \left( \alpha_\xi +\frac{\mu}{\epsi}+\frac{a}{\mu^3} \right) + g(t) (\alpha_\xi +\epsi^2) \,, \end{eqnarray*} where we used Lemma~\ref{lem:energyestimate} and for $\ell=0,1$ \begin{eqnarray*} \left\|\widehat {\tau_{\ell} m_1} q_2 \,q_1\,\tfrac{\partial}{\partial x_1}\,p_1^\chi q_1^\Phi\psi\right\|^2 &=& \left\llangle q_1\,\tfrac{\partial}{\partial x_1}\,p_1^\chi q_1^\Phi\psi, q_2 \widehat {\tau_{\ell} m_1}^2 q_1\,\tfrac{\partial}{\partial x_1}\,p_1^\chi q_1^\Phi\psi\right\rrangle\\&\leq& \left\llangle q_1\,\tfrac{\partial}{\partial x_1}\,p_1^\chi q_1^\Phi\psi, \sum_{j=1}^N q_j\frac{\widehat {\tau_{\ell} m_1}^2}{N-1} q_1\,\tfrac{\partial}{\partial x_1}\,p_1^\chi q_1^\Phi\psi\right\rrangle \\ &=& \left\llangle q_1\,\tfrac{\partial}{\partial x_1}\,p_1^\chi q_1^\Phi\psi, \sum_{k=0}^N \frac{ {\tau_{\ell} m_1(k) }^2}{N-1} \sum_{j=1}^N q_j P_k q_1\,\tfrac{\partial}{\partial x_1}\,p_1^\chi
q_1^\Phi\psi\right\rrangle \\ &=& \left\llangle q_1\,\tfrac{\partial}{\partial x_1}\,p_1^\chi q_1^\Phi\psi, \sum_{k=0}^N \frac{ {\tau_{\ell} m_1(k) }^2}{N-1} k P_k q_1\,\tfrac{\partial}{\partial x_1}\,p_1^\chi q_1^\Phi\psi\right\rrangle \\&\leq& \left\| q_1\,\tfrac{\partial}{\partial x_1}\,p_1^\chi q_1^\Phi\psi\right\|^2 \end{eqnarray*} and \begin{eqnarray*} \left\| q_1\,\tfrac{\partial}{\partial x_1}\,p_1^\chi q_1^\Phi\psi\right\| &\leq& \left\|\left(q_1\,p_1^\chi \tfrac{\partial}{\partial x_1} \,p_1^\chi q_1^\Phi + q_1\,p_1^\chi \theta'(x_1) L_1 q_1\right)\psi\right\| + \left\|\,p_1^\chi \theta'(x_1) L_1 q_1\psi\right\| \\&=& \left\|q_1\,p_1^\chi \left(\tfrac{\partial}{\partial x_1} + \theta'(x_1) L_1 \right) q_1\psi\right\| + \left\|\,p_1^\chi \theta'(x_1) L_1 q_1^\chi\psi\right\| \\ &\leq & \left\| \left(\tfrac{\partial}{\partial x_1} + \theta'(x_1) L_1 \right) q_1\psi\right\| + \left\|\,p_1^\chi \theta'(x_1) L_1\right\| \| q_1^\chi\psi\|\\ &\lesssim & \left( \|\Phi\|_{H^2(\field{R})}^3 \left(\alpha_\xi +\frac{\mu}{\epsi} + \frac{a}{ \mu^3} \right)\right)^\frac12\;+\; \epsi g(t)\,. \end{eqnarray*} In the strongly confining case we find again \begin{eqnarray*} |{\rm III}|&\leq& \left|\left\llangle \psi, p_1 q_2 w^{\epsi,\beta,N}_{12} \widehat m_1 q_1 q_2 \psi \right\rrangle \right| \leq \|w^{\epsi,\beta,N}_{12} p_1\|\,\|\widehat m_1 q_1 q_2 \psi \|\lesssim \sqrt{\mu}\,\|\Phi\|_{H^2(\field{R})}\sqrt{\alpha_\xi} \\ &\leq &\|\Phi\|_{H^2(\field{R})}(\alpha_\xi+\mu)\,. \end{eqnarray*} \end{proof} \begin{proof}[Proof of the bound for {\rm IV}] For the first two summands in IV we expand the potential around $y_1=0$. The assumption A3 guarantees that in both cases the error is a bounded operator. Therefore, we can write \[ \dot V(t,x_1,\epsi y_1)= \dot V(t,x_1,0)+ \epsi Q \qquad V(t,x_1,\epsi y_1)= V(t,x_1,0)+ \epsi \tilde Q \] with $\norm{Q} , \|\tilde Q \| \leq C$. 
Thus we find \begin{eqnarray} \left|\left\llangle \psi, p_1 N \left[ V(x_1,\epsi y_1)-V(x_1,0) , \widehat m\right] q_1 \psi \right\rrangle\right| &=& \left|\left\llangle \psi, p_1 \epsi\, \tilde Q\, \widehat{m}_1 q_1 \psi \right\rrangle\right| \lesssim \epsi \norm{ \widehat{m}_1 q_1 \psi } \weq{\ref{lem:qs&N} }{\leq} \epsi\,. \label{easybound} \end{eqnarray} For the term containing $\dot V$ we first note that for $f \in L^\infty(\field{R})$ \begin{align}\label{equ:einteilchenop} \left|\left\llangle \psi, f(x_1) \psi \right\rrangle -\left \langle \Phi, f(x) \Phi \right\rangle\right| \lesssim \norm{f}_{L^\infty(\field{R})} \alpha_\xi. \end{align} Thus we can estimate \begin{eqnarray*} \left|\left\llangle \psi, \dot V(x_1,\epsi y_1) \psi \right\rrangle - \left\langle \Phi, \dot V(x_1,0) \Phi \right\rangle\right| &\lesssim& \left|\left\llangle \psi, \dot V(x_1,0)\psi \right\rrangle - \left\langle \Phi, \dot V(x_1,0) \Phi \right\rangle\right| +\epsi\\ & \weq{ \eqref{equ:einteilchenop}}{ \lesssim} & \norm{\dot V(\cdot,0)}_{L^\infty(\field{R})} \alpha_\xi +\epsi\,. \end{eqnarray*} Equation \eqref{equ:einteilchenop} holds since \begin{eqnarray*} |\llangle \psi, f(x_1) \psi \rrangle - \langle \Phi, f(x) \Phi \rangle|&\leq& |\llangle \psi, p_1 f(x_1) p_1 \psi \rrangle - \langle \Phi, f(x) \Phi \rangle| + |\llangle \psi, q_1 f(x_1) p_1 \psi \rrangle| \\ && +\; |\llangle \psi, p_1 f(x_1) q_1 \psi \rrangle| + |\llangle \psi, q_1 f(x_1) q_1 \psi \rrangle | \\ &\leq& \alpha_\xi\,\langle \Phi, f(x) \Phi \rangle+ 2 |\llangle \psi, \widehat{\tau_1 n}^{1/2} p_1 f(x_1) \widehat n^{-1/2} q_1 \psi \rrangle | + \norm{f}_{L^\infty(\field{R})} \alpha_\xi\\ &\weq{\ref{lem:weights}}{ \lesssim}& \norm{f}_{L^\infty(\field{R})} \alpha_\xi. 
\end{eqnarray*} For the ``twisting'' term we find \begin{eqnarray*} \big\llangle \psi, p_1 \left( (\theta'(x_1)L_1)^2 + |\theta'(x_1)|^2\|L\chi\|^2 \right) \widehat{m}_1 \,q_1 \psi \big\rrangle &=& \big\llangle \psi, p_1 \left( (\theta'(x_1)L_1)^2 + |\theta'(x_1)|^2\|L\chi\|^2 \right) q_1^\Phi p_1^\chi \widehat{m}_1 \, \psi \big\rrangle \\&& + \; \big\llangle \psi, p_1 \left( (\theta'(x_1)L_1)^2 + |\theta'(x_1)|^2\|L\chi\|^2 \right) q_1^\chi \widehat{m}_1 \, \psi \big\rrangle\,. \end{eqnarray*} With \[ \left\langle \chi, (\theta'(x) L)^2 \chi\right\rangle_{L^2(\Omega_{\rm f})} = - |\theta'(x)|^2 \left\langle L \chi, L \chi\right\rangle \] we see that the first term vanishes identically. For the second term we find with Lemma~\ref{lem:qs&N}~(b) that \begin{eqnarray*}\lefteqn{\hspace{-1cm} \left|\big\llangle \psi, p_1 \left( (\theta'(x_1)L_1)^2 + |\theta'(x_1)|^2\|L\chi\|^2 \right) q_1^\chi \widehat{m}_1 \psi \big\rrangle\right|}\\ &\leq& \left\| \left( (\theta'(x_1)L_1)^2 + |\theta'(x_1)|^2\|L\chi\|^2 \right)p_1 \psi\right\|\, \left\|\widehat{m}_1\,q_1^\chi \, \psi\right\|\lesssim g(t) N^\xi \epsi\,. \end{eqnarray*} The remaining one-body terms are \[ R^{(1)} = - \partial_x \theta' (x) L - \theta' (x) L \partial_x \;+ \;\left(V_{\rm bend}(r)+\frac{\kappa(x)^2}{4} \right) \;-\; \epsi \,S^\epsi\,. \] With $\langle\chi,L\chi\rangle = 0$ it holds that \[ \big\llangle \psi, p_1 (\partial_{x_1} \theta' (x_1) L_1 + \theta' (x_1) L_1 \partial_{x_1}) q_1^\Phi p_1^\chi \widehat{m}_1 \psi \big\rrangle = 0 \] and for the remaining term \[ \left| \big\llangle \psi, p_1 (\partial_{x_1} \theta' (x_1) L_1 + \theta' (x_1) L_1 \partial_{x_1}) q_1^\chi \widehat{m}_1 \psi \big\rrangle\right| \lesssim g(t) N^\xi \epsi \] as before. With \[ V_{\rm bend} (r) + \frac{\kappa(x)^2}{4\rho_\epsi(r)^2} = - \epsi\,\frac{T_{\theta(x)}y\cdot\kappa''(x)}{2\rho_\epsi(r)^3} - \epsi^2\, \frac{5( T_{\theta(x)}y\cdot\kappa'(x))^2}{4\rho_\epsi(r)^4} = {\mathcal{O}}(\epsi) \] we can proceed as in \eqref{easybound} for this part.
For the $S^\epsi$ term first note that \[ s^\epsi(r):= \epsi^{-1} (\rho_\epsi^{-2}(r) - 1) = \frac{ 2 T_{\theta(x)}y\cdot \kappa(x) - \epsi (T_{\theta(x)}y\cdot \kappa(x))^2}{(1- \epsi\, T_{\theta(x)}y\cdot \kappa(x))^2} \] is uniformly bounded on $\Omega$ with all its derivatives. Hence \begin{eqnarray*} \epsi \big\llangle \psi, p_1 S^\epsi \widehat{m}_1 q_1 \psi \big\rrangle&=& \epsi \big\llangle \psi, p_1 \left(\partial_{x_1} +\theta'(x_1) L_1 \right) s^\epsi(r_1) \left(\partial_{x_1} +\theta'(x_1) L_1 \right) \widehat{m}_1 q_1 \psi \big\rrangle\\ &\leq & \epsi \| \left(\partial_{x_1} +\theta'(x_1) L_1 \right) s^\epsi(r_1) \left(\partial_{x_1} +\theta'(x_1) L_1 \right)p_1\psi\| \,\|\widehat{m}_1 q_1 \psi\|\lesssim \epsi\,\|\Phi\|_{H^2(\field{R})} \,, \end{eqnarray*} concluding the bound for $\rm IV$. \end{proof} \subsection{Proof of Lemma~\ref{lem:energyestimate}}\label{energylemmaproof} The strategy is to control the expression in terms of the energy per particle. To this end we observe that \begin{eqnarray*}\lefteqn{ \left\| \left(\tfrac{\partial}{\partial x_1} + \theta'(x_1) L_1 \right) q_1\psi\right\|^2= - \left\llangle q_1 \psi, \left(\tfrac{\partial}{\partial x_1} + \theta'(x_1) L_1 \right)^2q_1\psi\right\rrangle} \\ &\leq & \left\llangle q_1 \psi, \left(\left(\tfrac{\partial}{\partial x_1} + \theta'(x_1) L_1 \right)^2 - \tfrac{1}{\epsi^2}\Delta_{y_1} + \tfrac{1}{\epsi^2} V^\perp(y_1)-\tfrac{E_0}{\epsi^2} \right)q_1\psi\right\rrangle \\ &\leq & 2 \left\llangle q_1\psi, \left( - \left(\tfrac{\partial}{\partial x_1} + \theta'(x_1) L_1 \right) (1 + \epsi s^\epsi(r_1)) \left(\tfrac{\partial}{\partial x_1} + \theta'(x_1) L_1 \right)- \tfrac{1}{\epsi^2}\Delta_{y_1}+ \tfrac{1}{\epsi^2} V^\perp(y_1)-\tfrac{E_0}{\epsi^2} \right)q_1\psi\right\rrangle\\ &=:& 2 \left\llangle q_1 \psi, \, \tilde h_1\;q_1\psi\right\rrangle\,. 
\end{eqnarray*} Hence we have \begin{eqnarray*} \left\| \left(\tfrac{\partial}{\partial x_1} + \theta'(x_1) L_1 \right) q_1\psi\right\|^2&\leq& 2\norm{\sqrt{ \tilde h_1} q_1 \psi }^2 \leq \norm{\sqrt {\tilde h_1}(1-p_1 p_2 )\psi}^2 + \norm{\sqrt {\tilde h_1}p_1q_2\psi}^2 \nonumber\\&\leq&\left\llangle \psi , (1-p_1p_2 ) \tilde h_1 (1-p_1p_2) \psi \right\rrangle+ \langle \varphi,\tilde h_1\varphi\rangle {\alpha_\xi}\,. \end{eqnarray*} Note that \begin{eqnarray*} \langle \varphi,\tilde h_1\varphi\rangle&=&- \left\langle \varphi,\, \left(\tfrac{\partial}{\partial x} + \theta'(x) L \right) (1 + \epsi s^\epsi(r)) \left(\tfrac{\partial}{\partial x} + \theta'(x) L \right)\varphi\right\rangle\\ &\leq &-2 \left\langle \varphi,\, \left(\tfrac{\partial}{\partial x} + \theta'(x) L \right)^2 \varphi\right\rangle\;=\; 2 \left( \left\| \tfrac{\partial}{\partial x} \Phi\right\|^2 + \left\| \,|\theta'(x)| \,\|L\chi\| \,\Phi\right\|^2\right)\\ &\lesssim &\|\Phi\|^2_{H^1(\field{R})}\,. \end{eqnarray*} Then, after expanding and rearranging the energy difference \begin{eqnarray*} E^\psi -E^\Phi &=& \tfrac{1}{N}\big\llangle \psi ,H(t)\,\psi \big\rrangle - \tfrac{E_0}{\epsi^2} - \Big\langle \Phi , \mathcal{E}^\Phi (t)\Phi \Big\rangle_{L^2(\field{R})}\\ &=& \left\llangle \psi ,\left(\tilde h_1+ \tfrac12 w^{\epsi,\beta,N}_{12}+V(x_1,\epsi y_1) + V_{\rm bend}(r_1)\right)\psi \right\rrangle\\ &&-\; \Big\langle \Phi , \left(-\tfrac{\partial^2}{\partial x^2} - \tfrac{\kappa(x)^2}{4} + |\theta'(x)|^2 \,\|L\chi\|^2 + V(x,0)+ \tfrac{b}{2} |\Phi |^2 \right) \Phi \Big\rangle_{L^2(\field{R})} \end{eqnarray*} we arrive at \begin{eqnarray}\label{hpg}\lefteqn{ \left\llangle \psi , (1-p_1p_2 ) \tilde h_1 (1-p_1p_2) \psi \right\rrangle = E^\psi- E^\Phi \notag} \\ &&-\;\left( \left\llangle \psi , p_1p_2 \tilde h_1 p_1p_2 \psi \right\rrangle- \left\langle \varphi, \left(- \tfrac{\partial^2}{\partial x^2} - \tfrac{1}{\epsi^2} (\Delta_y+E_0) + |\theta'(x)|^2 \|L\chi\|^2\right)\varphi \right\rangle\right) \label{grad2} \\
&&-\;\left\llangle \psi , (1-p_1p_2 )\tilde h_1 p_1p_2 \psi \right\rrangle-\left\llangle \psi , p_1p_2 \tilde h_1 (1-p_1p_2) \psi \right\rrangle\label{grad3} \\ &&-\;\tfrac12\left( \llangle \psi, p_1 p_2 w^{\epsi,\beta,N}_{12} p_1 p_2 \psi \rrangle - \langle \Phi, b|\Phi|^2 \Phi \rangle\right)\label{grad4} \\ &&-\; \tfrac12 \left( \left\llangle \psi,(1- p_1 p_2)w^{\epsi,\beta,N}_{12}p_1 p_2 \psi \right\rrangle+ \left\llangle \psi, p_1 p_2 w^{\epsi,\beta,N}_{12}(1- p_1 p_2) \psi \right\rrangle \right) \label{grad5} \\ && -\; \tfrac12 \left\llangle \psi,(1- p_1 p_2) w^{\epsi,\beta,N}_{12}(1- p_1 p_2) \psi \right\rrangle \label{grad6} \\ && -\;\left( \left\llangle \psi , V(x_1,\epsi y_1) \psi \right\rrangle - \left\langle \Phi, V(x,0) \Phi \right\rangle\right) +\;\left( \left\llangle \psi , \tfrac{\kappa^2(x_1)}{4} \psi \right\rrangle - \left\langle \Phi, \tfrac{\kappa^2(x)}{4} \Phi \right\rangle\right) \label{grad7}\\ &&-\; \left\llangle \psi ,\left(V_{\rm bend}(r_1)+\tfrac{\kappa(x_1)^2}{4} \right) \psi \right\rrangle\label{grad8}\,. \end{eqnarray} We will estimate each line separately. 
For \eqref{grad2} we find \begin{eqnarray*} \eqref{grad2}&\leq& \left|\left\llangle \psi , p_1p_2 \tilde h_1 p_1p_2 \psi \right\rrangle - \left\langle \varphi, \tilde h_1 \varphi \right\rangle \right| +\epsi \langle \varphi,\tilde h_1\varphi\rangle \\&=& \left|\left\langle \varphi, \tilde h_1 \varphi \right\rangle\left\llangle \psi , p_1p_2 \psi \right\rrangle- \left\langle \varphi, \tilde h_1 \varphi \right\rangle \right| +\epsi \langle \varphi,\tilde h_1\varphi\rangle \\ &= &\left|\left\langle \varphi, \tilde h_1 \varphi \right\rangle \left\llangle \psi , (1-p_1p_2 )\psi \right\rrangle\right|+\epsi \langle \varphi,\tilde h_1\varphi\rangle \\ &=& \langle \varphi,\tilde h_1\varphi\rangle\left( \left|\left\llangle \psi , (p_1q_2 +q_1p_2+ q_1q_2) \psi \right\rrangle\right|+\epsi\right)\; \weq{\ref{lem:weights}}{ \lesssim }\; \|\Phi\|^2_{H^1(\field{R})}( \alpha_\xi+\epsi) \,, \end{eqnarray*} and \eqref{grad3} is bounded in absolute value by \begin{eqnarray*} |\eqref{grad3}| &\leq& 2 \left|\left\llangle \psi, (1-p_1p_2 ) \tilde h_1 p_1p_2 \psi \right\rrangle\right| =2 \left|\left\llangle \psi, q_1 \tilde h_1 p_1p_2 \psi \right\rrangle\right| = 2\left|\left\llangle \psi, q_1 \widehat n^{-\frac12} \tilde h_1 \widehat{\tau_1n}^\frac12 p_1p_2 \psi \right\rrangle\right| \\&\leq & 2 \left\| \widehat n^{-\frac12}q_1\psi\right\| \left\| \tilde h_1 p_1\right\| \left\| \widehat{\tau_1n}^\frac 12 \psi\right\| \lesssim \sqrt{\alpha_\xi} \norm{ \Phi}_{H^2(\field{R})}\sqrt{ \alpha_\xi + \tfrac{1}{\sqrt N}}\lesssim \norm{ \Phi}_{H^2(\field{R})} \left(\alpha_\xi + \tfrac{1}{\sqrt N}\right)\,. \end{eqnarray*} For \eqref{grad4} we first note that \[ \left|\left\langle \Phi, b|\Phi|^2 \Phi \right\rangle- \left\llangle \psi, p_1 p_2 b|\Phi|^2(x_1) p_1 p_2 \psi \right\rrangle \right| = \left|\left\langle \Phi, b|\Phi|^2 \Phi \right\rangle\right| \left|\left\llangle \psi, (1-p_1p_2) \psi \right\rrangle\right| \lesssim \norm{\Phi}_{L^\infty(\field{R})}^2 \alpha_\xi\,.
\] Hence, \begin{eqnarray*} | \eqref{grad4}|&\leq & \left|\left\llangle \psi, p_1 p_2 \left(b|\Phi|^2- w^{\epsi,\beta,N}_{12}\right) p_1 p_2 \psi \right\rrangle\right| + \norm{\Phi}_{L^\infty(\field{R})}^2 \alpha_\xi\\ &\leq& \left|\left\llangle \psi, p_1 p_2 \left(b|\Phi|^2- w^0_{12}\right) p_1 p_2 \psi \right\rrangle\right| +\|(T_1+T_2)p_1\| + \norm{\Phi}_{L^\infty(\field{R})}^2 \alpha_\xi\\ &\stackrel{\eqref{faltest}}{\lesssim} &\frac{\mu}{\epsi}\,\|\nabla |\Phi|^2\|_{L^2(\field{R})}\norm{\Phi}_{L^\infty(\field{R})}+ \frac{\epsi(\epsi+\mu)}{\mu^{3/2}}\|\Phi\|_{H^2(\field{R})} +\norm{\Phi}_{L^\infty(\field{R})}^2 \alpha_\xi\,. \end{eqnarray*} For \eqref{grad5} we have that \begin{eqnarray} |\eqref{grad5}|& \leq & 2 \left|\left\llangle \psi, p_1 p_2 w^{\epsi,\beta,N}_{12}(1- p_1 p_2) \psi \right\rrangle\right| = \left|\left\llangle \psi, p_1 p_2 w^{\epsi,\beta,N}_{12}(q_1p_2+ p_1q_2+ q_1q_2) \psi \right\rrangle\right| \nonumber\\ &\leq& 2\left| \left\llangle \psi, p_1 p_2 w^{\epsi,\beta,N}_{12} q_1p_2 \psi \right\rrangle\right|+\left| \left\llangle \psi, p_1 p_2 w^{\epsi,\beta,N}_{12} q_1q_2 \psi \right\rrangle \right|\,.\label{grad5s1} \end{eqnarray} The first summand in \eqref{grad5s1} is bounded by \begin{eqnarray*} \left| \left\llangle \psi, p_1 p_2 w^{\epsi,\beta,N}_{12} q_1p_2 \psi \right\rrangle\right|&=& \left| \left\llangle \psi, p_1 p_2 \,\widehat {\tau_1 n}^\frac{1}{2}\,w^{\epsi,\beta,N }_{12}\, \widehat n^{-\frac{1}{2}} \, q_1 p_2 \psi \right\rrangle\right|\\&\leq &\norm{p_2 w^{\epsi,\beta,N }_{12} p_2} \norm{\widehat {\tau_1 n}^\frac{1}{2} \psi} \norm{\widehat n^{-\frac{1}{2}} q_1 \psi}\lesssim \norm{\Phi}_{H^2(\field{R})}^2 \left(\alpha_\xi+\tfrac{1}{\sqrt N}\right) \,. 
\end{eqnarray*} For the second summand in \eqref{grad5s1} we first use symmetry to write \begin{eqnarray*} \left| \left\llangle \psi, p_1 p_2 w^{\epsi,\beta,N}_{12} q_1q_2 \psi \right\rrangle \right|&=& \frac{1}{N-1} \left|\sum_{j=2}^N \left\llangle \psi, p_1 p_j w^{\epsi,\beta,N}_{1j} q_1 q_j \psi \right\rrangle \right|\\&\leq& \frac{\norm{ q_1 \psi}}{N-1} \norm{\sum_{j=2}^N q_j w^{\epsi,\beta,N}_{1j} p_1 p_j \psi }\leq\frac{\sqrt{\alpha_\xi }}{N-1} \norm{\sum_{j=2}^N q_j w^{\epsi,\beta,N}_{1j} p_1 p_j \psi }\,. \label{equ:II.1g} \end{eqnarray*} Now the second factor can be split into a ``diagonal'' and an ``off-diagonal'' term, \begin{eqnarray*}\lefteqn{\hspace{-1cm} \norm{\sum_{j=2}^N q_j w^{\epsi,\beta,N}_{1j} p_1 p_j \psi }^2 = \sum_{j,l=2}^N \left\llangle \psi, p_1 p_l w^{\epsi,\beta,N}_{1l} q_l q_j w^{\epsi,\beta,N}_{1j} p_1 p_j \psi \right\rrangle }\\ & \leq & \hspace{-10pt}\sum_{2 \leq j < l \leq N} \left\llangle \psi, q_j p_1 p_l w^{\epsi,\beta,N}_{1l} w^{\epsi,\beta,N}_{1j} q_l p_1 p_j \psi \right\rrangle+ (N-1) \norm{w^{\epsi,\beta,N}_{12} p_1 p_2 \psi }^2 \,. \label{equ:II.3g} \end{eqnarray*} The ``off-diagonal'' term is bounded by \begin{eqnarray*}\lefteqn{\hspace{-3cm} (N-1)(N-2)\left\llangle \psi,q_2 p_1 p_3 w^{\epsi,\beta,N}_{13} w^{\epsi,\beta,N}_{12} q_3 p_1 p_2 \psi \right\rrangle \leq N^2 \norm{ \sqrt{ w^{\epsi,\beta,N}_{12}} p_2 \sqrt{ w^{\epsi,\beta,N}_{13}} p_1 q_3 \psi } ^2} \\ &\leq &N^2 \norm{ \sqrt{ w^{\epsi,\beta,N}_{12}} p_2 }^4 \norm{ q_3 \psi }^2\;\weq{\ref{wcor}}{ \lesssim} \; N^2 \norm{\Phi}_{H^2(\field{R})}^4 \alpha_\xi \,. \label{equ:II.4g}
\end{eqnarray*} The ``diagonal'' term is bounded by \begin{eqnarray*} N \left\llangle \psi, p_1 p_2 \left(w^{\epsi,\beta,N}_{12}\right)^2 p_1 p_2 \psi \right\rrangle &\leq& N \norm{p_1 (w^{\epsi,\beta,N}_{12})^2 p_1 } \weq{\ref{wcor}}{ \leq} \tfrac{N\epsi^2}{\mu^3} \norm{\Phi}^2_{H^2(\field{R})} \label{equ:II.5g} \end{eqnarray*} and we conclude that the second summand of \eqref{grad5s1} is bounded by \begin{eqnarray*} \left| \left\llangle \psi, p_1 p_2 w^{\epsi,\beta,N}_{12} q_1q_2 \psi \right\rrangle \right| &\lesssim &\frac{\sqrt{ \alpha_\xi }}{N} \sqrt{ N^2 \norm{\Phi}^4_{H^2(\field{R})} \alpha_\xi + \tfrac{N\epsi^2}{\mu^3}\norm{\Phi}^2_{H^2(\field{R})} }\\& \leq& \norm{\Phi}^2_{H^2(\field{R})} \alpha_\xi + \tfrac{\sqrt{\alpha_\xi}}{\sqrt{N}}\sqrt{\tfrac{\epsi^2}{\mu^3}}\norm{\Phi}_{H^2(\field{R})} \lesssim \norm{\Phi}^2_{H^2(\field{R})} \alpha_\xi+ \tfrac{\epsi^2}{N\mu^3}\,. \end{eqnarray*} In summary we thus have that \[ |\eqref{grad5}|\;\lesssim \;\norm{\Phi}^2_{H^2(\field{R})} \left( \alpha_\xi+\tfrac{1}{\sqrt{N}}\right) + \frac{\epsi^2}{N\mu^3}\,. \] Since the interaction is non-negative, we have $\eqref{grad6}\leq 0$. With the same arguments as used in the proof of Proposition~\ref{lem:3termeg} part IV we find \[ |\eqref{grad7}|\; \lesssim \; \alpha_\xi +\epsi\,, \] and obviously $|\eqref{grad8}|\lesssim \epsi$. In summary we thus showed \begin{eqnarray*} \left\| \left(\tfrac{\partial}{\partial x_1} + \theta'(x_1) L_1 \right) q_1\psi\right\|^2 &\lesssim& \|\Phi\|_{H^2(\field{R})}^3 \left(\alpha_\xi + \frac{1}{\sqrt{N}} + \epsi +\frac{\mu}{\epsi}+\sqrt{\mu}+ \frac{a}{ \mu^3} \right) \end{eqnarray*} and with \[ \epsi\lesssim \frac{\mu}{\epsi}\,,\quad \frac{1}{\sqrt{N}}\lesssim \frac{\mu}{\epsi} \,,\quad \sqrt{\mu}\lesssim \frac{\mu}{\epsi}\,, \] which holds for moderate confinement, the statement of the lemma follows. 
\begin{appendix} \section{Well-posedness of the dynamical equations}\label{app:regsol} The Hamiltonian $H_{\mathcal{T}_\epsi}(t)$ given in \eqref{hamilton1} is self-adjoint on $H^2(\mathcal{T}_\epsi^N)\cap H^1_0(\mathcal{T}_\epsi^N)$ for every $t\in\field{R}$, since the potentials $V$ and $w$ are bounded by assumptions {\bf A2} and {\bf A3}. Hence $(U_\epsi)^{\otimes N} H_{\mathcal{T}_\epsi}(t)(U_\epsi^*)^{\otimes N} + \sum_{i=1}^N \tfrac{1}{\epsi^2} V^\perp(y_i)$ is self-adjoint on $U_\epsi H^2(\mathcal{T}_\epsi^N)\cap U_\epsi H^1_0(\mathcal{T}_\epsi^N)= H^2(\Omega^N)\cap H^1_0(\Omega^N)$, as $\sum_{i=1}^N \tfrac{1}{\epsi^2} V^\perp(y_i)$ is relatively bounded with respect to $(U_\epsi)^{\otimes N} H_{\mathcal{T}_\epsi}(t)(U_\epsi^*)^{\otimes N}$ with relative bound smaller than one. Finally, $t\mapsto V(t)\in \mathcal{L}(L^2)$ is continuous, so $H(t)$ generates a strongly continuous evolution family $U(t,0)$ such that for $\psi_0\in H^2(\Omega^N)\cap H^1_0(\Omega^N)$ the map $t\mapsto U(t,0)\psi_0$ satisfies the time-dependent Schr\"odinger equation. Although the questions of well-posedness, global existence and conservation laws for the NLS equation in our setting are well understood, we could not find a reference for global existence of $H^2$-solutions to \eqref{equ:grosspqwg} with time-dependent potential. We thus briefly comment on this point. The standard contraction argument (see e.g.\ Proposition~3.8 in \cite{Tao06}) gives unique local existence of $H^s$-solutions $\Phi(t)$ for all $\frac12<s\leq 4$, since under the hypotheses {\bf A1} and {\bf A3} on the external potential and the waveguide all potentials appearing in \eqref{equ:grosspqwg} are $C^4_{\rm b}$. Moreover, $\|\Phi(t)\|_{L^2} = \|\Phi(0)\|_{L^2}$ and the solution map $\Phi(0)\mapsto \Phi(t)$ is continuous in $H^s$. See \cite{Sp14} for the details of this argument in the case of time-dependent potentials.
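To sketch the structure of that argument, the contraction runs on the Duhamel (mild) formulation of \eqref{equ:grosspqwg}. Schematically, writing $V_{\rm eff}(t,x)$ as a shorthand (introduced only for this sketch) for the collected bounded potential terms, and with coefficients as in \eqref{equ:grosspqwg}, one finds a fixed point of
\begin{align*}
\Phi(t) \;=\; {\mathrm{e}}^{{\mathrm{i}} t \partial_x^2}\, \Phi(0) \;-\; {\mathrm{i}} \int_0^t {\mathrm{e}}^{{\mathrm{i}} (t-s) \partial_x^2} \Big( V_{\rm eff}(s,\cdot)\, \Phi(s) + b\, |\Phi(s)|^2\, \Phi(s) \Big)\, {\mathrm{d}} s
\end{align*}
in $C([0,T], H^s(\field{R}))$ for $T$ small, using that $H^s(\field{R})$ is an algebra for $s>\frac12$ and that $V_{\rm eff}$ is bounded with bounded derivatives.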
In order to show also global existence, assume without loss of generality (we can always add a real constant to the potential) that \[ \inf_{t,x\in\field{R}} \left( -\tfrac{\kappa^2(x)}{4} + |\theta'(x)|^2 \,\|L\chi\|^2 + V(t,x,0) \right) \geq 0 \] and recall the definition $E^{\Phi(t)}(t) := \left\langle\Phi(t), \mathcal{E}^{\Phi(t)}(t)\Phi(t)\right\rangle$ in \eqref{equ:enggross2}. Then for $\Phi(t)\in H^2$ the map $t\mapsto E^{\Phi(t)}(t)$ is differentiable and we have \begin{eqnarray*} \| \Phi(t)\|^2_{H^1} &\leq& E^{\Phi (t)}(t) +1 \;=\; E^{\Phi (0)}(0) +1+ \int_0^t \tfrac{{\mathrm{d}}}{{\mathrm{d}} s} E^{\Phi(s) }(s) \,{\mathrm{d}} s \\&=&E^{\Phi(0) }(0) +1+\int_0^t \left\langle \Phi(s), \dot V(s,\cdot,0) \Phi(s)\right\rangle\,{\mathrm{d}} s\\ &\leq & C\| \Phi(0)\|^2_{H^1} + \|\Phi(0)\|^2 \int_0^t \|\dot V(s,\cdot,0)\|_{L^\infty}\,{\mathrm{d}} s\,, \end{eqnarray*} which, by continuity of the solution map, extends to $\Phi(t)\in H^1$. Hence $\|\Phi(t)\|_{L^\infty} \leq \|\Phi(t)\|_{H^1} $ cannot blow up in finite time, which implies global existence of $H^1$-solutions. 
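The last step rests on the elementary one-dimensional Sobolev inequality, which follows from the fundamental theorem of calculus and the Cauchy--Schwarz inequality: for $f\in H^1(\field{R})$,
\begin{align*}
|f(x)|^2 \;=\; \int_{-\infty}^{x} \tfrac{{\mathrm{d}}}{{\mathrm{d}} x'}\, |f(x')|^2 \,{\mathrm{d}} x' \;\leq\; 2\, \|f\|_{L^2(\field{R})}\, \|f'\|_{L^2(\field{R})} \;\leq\; \|f\|^2_{H^1(\field{R})}\,,
\end{align*}
so the $L^\infty$-norm of $\Phi(t)$ is indeed controlled by its (globally bounded) $H^1$-norm.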
To control also the $H^2$-norm, first note that with \begin{eqnarray*} \| \mathcal{E}^{\Phi(t)}(t)\Phi(t)\|^2 &=& \Big\langle \Phi (t), \left(-\tfrac{\partial^2}{\partial x^2} - \tfrac{\kappa ^2}{4} + |\theta' |^2 \,\|L\chi\|^2 + V(t,\cdot,0)+ \tfrac{b}{2} |\Phi (t)|^2 \right)^2 \Phi (t) \Big\rangle\\ &\geq& \left\|\tfrac{\partial^2}{\partial x^2} \Phi(t)\right\|^2 + 2\Re \Big\langle \Phi(t), \tfrac{\partial^2}{\partial x^2} \Big(\underbrace{-\tfrac{\kappa^2 }{4} + |\theta' |^2 \,\|L\chi\|^2 + V(t,\cdot,0)}_{=:f(t,x)} + \tfrac{b}{2}|\Phi(t)|^2\Big) \Phi(t)\Big\rangle \end{eqnarray*} and \begin{eqnarray*} \big| \big\langle \Phi(t), \tfrac{\partial^2}{\partial x^2} \big(f+ \tfrac{b}{2}|\Phi(t)|^2\big) \Phi(t)\big\rangle \big| &\leq &\big| \big\langle \Phi'(t), \big(f+ b|\Phi(t)|^2\big) \Phi'(t)\big\rangle \big| + \big| \big\langle \Phi'(t), \tfrac{b}{2} \overline{\Phi'(t)} \Phi(t)^2\big\rangle \big| + \big| \big\langle \Phi'(t), f' \Phi(t)\big\rangle \big| \\ &\leq & \|\Phi(t)\|^2_{H^1} \left( C +b \|\Phi(t)\|^2_{L^\infty}\right) + \tfrac{b}{2} \|\Phi(t)\|^2_{H^1} \|\Phi(t)\|^2_{L^\infty} + C \|\Phi(t)\|_{H^1} \\ &\leq& C_1\|\Phi(t)\|^4_{H^1} \end{eqnarray*} for some constant $C_1\in \field{R}$ we have \begin{eqnarray*} \left\|\tfrac{\partial^2}{\partial x^2} \Phi(t)\right\|^2 &\leq& \| \mathcal{E}^{\Phi(t)}(t)\Phi(t)\|^2 + 2 C_1\|\Phi(t)\|^4_{H^1} \,. 
\end{eqnarray*} Moreover, for $\Phi(t)\in H^4(\field{R})$ we have \begin{eqnarray*} \big\| \mathcal{E}^{\Phi(t)}(t)\Phi(t)\big\|^2+1 &=& \big\| \mathcal{E}^{\Phi(0)}(0)\Phi(0)\big\|^2+1 + \int_0^t \tfrac{{\mathrm{d}}}{{\mathrm{d}} s} \left\langle \mathcal{E}^{\Phi(s)}(s)\Phi(s), \mathcal{E}^{\Phi(s)}(s)\Phi(s)\right\rangle \,{\mathrm{d}} s \\&=& \big\| \mathcal{E}^{\Phi(0)}(0)\Phi(0)\big\|^2 +1+ 2 \int_0^t \Re\left\langle \dot V (s,x,0) \Phi(s), \mathcal{E}^{\Phi(s)}(s)\Phi(s)\right\rangle {\mathrm{d}} s\\ &\leq& C_2\|\Phi(0)\|_{H^2}^2 + C_3 \int_0^t \big\| \mathcal{E}^{\Phi(s)}(s)\Phi(s)\big\| \,{\mathrm{d}} s\\ &\leq& C_2\|\Phi(0)\|_{H^2}^2 + C_3 \int_0^t \left( \big\| \mathcal{E}^{\Phi(s)}(s)\Phi(s)\big\|^2+1\right) \,{\mathrm{d}} s\,. \end{eqnarray*} An application of the Gr\"onwall inequality yields a bound of $ \left\| \mathcal{E}^{\Phi(t)}(t)\Phi(t)\right\|^2$ in terms of $ \|\Phi(0)\|_{H^2}^2$, which, again by continuity of the solution map, extends to $\Phi(t)\in H^2(\field{R})$. Hence the $H^2$-norm of $\Phi(t)$ remains bounded on bounded intervals in time. \end{appendix} \bibliographystyle{alphanum} \newcommand{\etalchar}[1]{$^{#1}$}